Structure-Dependent Spectroscopic Properties of Yb3+-Doped Phosphosilicate Glasses Modified by SiO2
Yb3+-doped phosphate glasses containing different amounts of SiO2 were successfully synthesized by the conventional melt-quenching method. The influence mechanism of SiO2 on the structural and spectroscopic properties was investigated systematically using the micro-Raman technique. It is worth noting that the glass with 26.7 mol % SiO2 possessed the longest fluorescence lifetime (1.51 ms), the highest gain coefficient (1.10 ms·pm2), the maximum Stark splitting manifold of the 2F7/2 level (781 cm−1), and the largest scalar crystal-field NJ and Yb3+ asymmetry degree. Micro-Raman spectra revealed that introducing SiO2 promoted the formation of P=O linkages, but broke them when the SiO2 content was greater than 26.7 mol %. Combined with previous 29Si MAS NMR experimental results, these findings further demonstrated that the formation of [SiO6] may significantly affect the formation of P=O linkages and thus influence the spectroscopic properties of the glass. These results indicate that phosphosilicate glasses may have potential applications as a Yb3+-doped gain medium for solid-state lasers and optical fiber amplifiers.
Introduction
Yb3+-doped laser materials operating at wavelengths around 1 µm have been intensively investigated for a wide variety of applications, such as high-power and short-pulse lasers, material processing, and optical telecommunications [1-4]. Yb3+ ions are regarded as the main dopant for these applications because of their simple energy-level scheme, which prevents excited-state absorption and multi-phonon non-radiative decay, and obviates the possibility of concentration quenching through cross-relaxation [5]. Since the first glass laser was demonstrated in 1961 by Snitzer [6], Yb3+-doped glasses have been well established as gain media for solid-state lasers and optical fiber amplifiers for optical telecommunications. Recently, for high-power glass-based laser systems, phosphate glasses have been used as a matrix for Yb3+ ions because of their high rare-earth solubility, high gain coefficient, and superior spectroscopic properties [7-9]. However, the predominant disadvantages of phosphate glasses are their poor chemical durability and their thermo-mechanical limitations. Therefore, optimizing the glass compositions to significantly improve the thermo-mechanical properties is required.
Silicate glasses exhibit excellent chemical durability, thermo-mechanical properties, and optical properties. Recent studies have shown that the mechanical properties of phosphate glasses can be efficiently improved by doping with SiO2 [10-12]. Chen Wei et al. [11] suggested that the introduction of SiO2 into phosphate glasses can strengthen the thermo-mechanical properties of the glass without severely degrading the spectroscopic properties. Zhang Liyan et al. [12] reported that the spectroscopic properties of 60P2O5-7.5Al2O3-15K2O-17.5BaO glass can be improved by the addition of SiO2. Moreover, the Stark splitting of Yb3+-doped phosphate glasses is enlarged by the introduction of SiO2, which allows the glass to successfully achieve laser output. The glass structure and the local coordination of rare-earth ions can be effectively modulated by doping SiO2 into phosphate glasses, which critically influences the spectroscopic properties of the glass. Zeng Huidan et al. [13] reported that both the luminous intensity and the luminescence decay time of the glass appeared to correlate positively with the amount of bridging oxygen in the glass matrix, as determined using X-ray photoelectron spectroscopy (XPS). Hu Lili et al. [14] reported the mechanism for the decrease in Yb3+ absorption and emission intensity caused by P5+ doping. They found that Yb3+ coordinated to the P-O site in glasses with a molar ratio of P5+/Al3+ ≤ 1, and to the P=O site in glasses with a molar ratio of P5+/Al3+ > 1.
In this study, Yb3+-doped phosphate glasses in the system BaO-P2O5 were modified by the addition of SiO2. The scalar crystal-field NJ and the Yb3+ asymmetry degrees were calculated from the Stark splitting levels, which were derived from Lorentz fitting of the absorption and emission spectra. Furthermore, the influence mechanism of SiO2 on the structural and spectroscopic properties was investigated systematically using the micro-Raman technique and previous 29Si MAS NMR experimental results. The results may have certain implications for the realization of a new generation of high-power solid-state lasers for optical telecommunications applications.
Experimental
Yb3+-doped silicophosphate glasses with compositions (in mol %) 20BaO-(80-x)P2O5-xSiO2-1Yb2O3 (x = 9, 16, 26.7, 32, and 40) were prepared by the conventional melt-quenching technique. High-purity BaCO3, NH4H2PO4, and SiO2 from Sinopharm Chemical Reagent Company (Ningbo, China), and 99.99% Yb2O3 from Macklin, were used as starting materials for the preparation of the glasses. About 20 g of raw materials was thoroughly crushed in an agate mortar, and the homogeneous mixture was transferred into a corundum crucible, which was preheated at 350 °C for 30 min before the batch was fully melted at 1350-1400 °C for 45 min under continuous stirring. The molten glass was air quenched by casting it onto a preheated brass mold to form bulk glasses, which were annealed at 430-480 °C for 5 h to reduce thermal stresses and strains. The furnace was then switched off and the glass was allowed to cool down to room temperature at a cooling rate of about 3 K·min−1. A 10 mm × 10 mm × 2 mm slab was cut from the specimens and both sides were optically polished for the spectroscopic measurements.
The UV-VIS-NIR absorption spectra of the BaO-P2O5-SiO2 glasses were measured using a Varian CARY 500 spectrophotometer (Varian Inc., Palo Alto, CA, USA) in the scanning range of 800-1100 nm. Under 915 nm laser-diode pumping, the emission spectra and lifetimes were measured using a high-resolution FLSP920 spectrofluorometer cooled with liquid helium (Edinburgh Instruments Ltd., Livingston, UK). A scanning step of 1 nm was used to measure both the absorption and emission spectra. Structural information on the glass samples was obtained using a micro-Raman spectrometer (INVIA, Renishaw, Gloucestershire, UK) with an Ar+-ion laser (514.5 nm) as the excitation source. Baseline correction was performed using the WiRE software from Renishaw. All measurements were performed at room temperature.
Results and Discussion
The absorption and emission spectra of the 20BaO-(80-x)P2O5-xSiO2-1Yb2O3 (x = 9, 16, 26.7, 32, and 40 mol %) glasses are plotted in Figure 1. As shown in this figure, the absorption band of the 2F7/2 → 2F5/2 transition is at 975 nm, corresponding to the transition between the lowest Stark levels of the 2F7/2 and 2F5/2 manifolds. The absorption intensity of the glass samples decreased with increasing SiO2 content. Under excitation with 915 nm LDs (laser diodes), NIR emission peaks at around 975 and 1005 nm were observed, and the SiO2 addition resulted in an increase in the emission intensity at around 975 nm. One broad emission band with its peak centered at 1005 nm was obtained upon 915 nm excitation; its intensity decreased with increasing SiO2 concentration up to 26.7 mol % and then increased, as shown in Figure 1b. The variation trend of the luminescence intensity differed from that of the absorption intensity, which means other factors must exist that affect the luminescence intensity.
The lifetime of luminescent ions is a critical parameter for broadband optical amplifiers. The compositional dependences of the emission lifetimes are shown in Figure 2. Apparently, the lifetime increases monotonically with the SiO2 content up to 26.7 mol %, and then decreases slightly with a further increase in the SiO2 content. Besides the lifetime, the absorption and stimulated emission cross-sections are also important factors for solid-state lasers and broadband optical amplifiers. The absorption and emission cross-sections were calculated by the reciprocity method [15,16]; the absolute values of the cross-sections and accurate spectral information are given in Table 1. As shown in Table 1, the absorption and emission cross-sections of the 20BaO-(80-x)P2O5-xSiO2-1Yb2O3 (x = 9, 16, 26.7, 32, and 40 mol %) glass samples decreased with increasing SiO2 concentration. The magnitude of the absorption (emission) cross-section at 975 nm for all the studied Yb3+-doped glasses was found to be in the range of 0.62-1.09 × 10−20 cm2 (0.83-1.46 × 10−20 cm2), which is much higher than that of the commercial Kigre QX/Yb laser glass, 0.50 × 10−20 cm2 (1.06 × 10−20 cm2) [17]. The product σem × τexp of the stimulated emission cross-section and the lifetime is a significant parameter for characterizing laser materials, since the laser threshold is inversely proportional to σem × τexp. The σem × τexp values of the Yb3+-doped phosphosilicate glasses are shown in Table 1. All the σem × τexp values in this work were about 1 × 10−23 cm2·s, which indicates that these glasses could be potential materials for high-power solid-state lasers and broadband optical amplifiers.
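As a sketch of the reciprocity (McCumber) relation invoked above, the emission cross-section can be estimated from the measured absorption cross-section as follows. The partition-function ratio Zl/Zu and the choice of zero-line energy below are illustrative assumptions, not values reported in this paper.

```python
import numpy as np

# Physical constants
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def emission_cross_section(wavelength_m, sigma_abs, T=300.0,
                           Zl_over_Zu=1.0, E_zl=None):
    """Reciprocity (McCumber) estimate of the emission cross-section:
        sigma_em = sigma_abs * (Zl/Zu) * exp((E_zl - h*c/lambda) / (kB*T))
    E_zl is the zero-line energy (lowest-to-lowest Stark level transition);
    for Yb3+ here it corresponds to the 975 nm line."""
    if E_zl is None:
        E_zl = h * c / 975e-9  # zero line taken at 975 nm (assumption)
    E_photon = h * c / wavelength_m
    return sigma_abs * Zl_over_Zu * np.exp((E_zl - E_photon) / (kB * T))

# Example: absorption cross-section of 1.0e-20 cm^2 at 1005 nm
sigma_em = emission_cross_section(1005e-9, 1.0e-20)
print(f"sigma_em ~ {sigma_em:.2e} cm^2")
```

As expected from the reciprocity relation, the estimated emission cross-section exceeds the absorption cross-section on the long-wavelength side of the zero line.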
Recently, many studies have been published on NIR luminescence in Yb3+-doped glasses; however, the origin of this phenomenon has not been identified. The relation between the glass structure and the spectroscopic properties of Yb3+-doped glass is revealed here through the evaluation of the scalar crystal-field parameter NJ and the Yb3+ asymmetry degree. According to References [18-20], the scalar crystal-field NJ and the Yb3+ asymmetry degree can be calculated from the Stark splitting levels, which can be derived from Lorentz fitting of the absorption and emission spectra. As shown in Figure 3, the maximum Stark splitting manifold of the 2F7/2 level (781 cm−1) and the largest scalar crystal-field NJ and Yb3+ asymmetry degree are observed when the SiO2 concentration is 26.7 mol %.
Figure 3. Lorentz peak analysis for the absorption (a) and emission (b) spectra of 20BaO-53.3P2O5-26.7SiO2-1Yb2O3 glass (the black lines are the original spectra, while the red lines are the fitted curves composed of the corresponding multi-peak fits); (c) Stark level energies of the 2F7/2 and 2F5/2 manifolds in 20BaO-(80-x)P2O5-xSiO2-1Yb2O3 (x = 9, 16, 26.7, 32, and 40 mol %) glasses obtained from the Lorentz fitting of the absorption and emission spectra; (d) scalar crystal-field parameter NJ and Yb3+ asymmetry degree in the same glasses.
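A minimal sketch of the multi-peak Lorentz fitting used to extract Stark level energies is given below; the number of components, peak positions, and initial guesses are assumptions for illustration, not the paper's actual fit parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, A):
    """Single Lorentzian line centered at x0 with HWHM gamma and area A."""
    return A * gamma / (np.pi * ((x - x0) ** 2 + gamma ** 2))

def multi_lorentzian(x, *params):
    """Sum of Lorentzians; params = (x0, gamma, A) repeated per peak."""
    y = np.zeros_like(x)
    for i in range(0, len(params), 3):
        y += lorentzian(x, *params[i:i + 3])
    return y

# wavenumber axis (cm^-1); a measured absorption profile would go here
x = np.linspace(9800, 11200, 500)
y_meas = multi_lorentzian(x, 10256, 60, 100.0, 10600, 90, 60.0) \
         + 0.005 * np.random.randn(x.size)  # synthetic stand-in data

# initial guesses: two Stark components (illustrative only)
p0 = [10250, 50, 80.0, 10650, 80, 50.0]
popt, _ = curve_fit(multi_lorentzian, x, y_meas, p0=p0)
centers = np.sort(popt[0::3])
print("Fitted Stark component centers (cm^-1):", centers)
# Stark splitting = difference between the highest and lowest centers
```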
As is known, introducing SiO2 into phosphate glass can effectively modulate the structure and thus lead to a change in the Yb3+ local field. Therefore, to further elucidate the role of SiO2 in phosphate glass, detailed structural information on the glass was obtained using the micro-Raman technique. In Figure 4, micro-Raman spectra are shown as a function of increasing SiO2 content in the range of 200-1600 cm−1. The broad bands of the Si(n) units (Si(n) represents a [SiO4] tetrahedral unit, where n is the number of bridging oxygens per tetrahedron) with n = 4, 3, 2, 1, and 0 are centered at around 1200, 1100, 950, 900, and 850 cm−1, respectively [21]. The spectra of the low-SiO2 glass show four major features centered near 700, 1155, 1277, and 1330 cm−1. With increasing SiO2 content, several new peaks appear at 500, 900, and 970 cm−1. The bands near 900, 970, and 1155 cm−1 are assigned to Si(1), Si(2), and Si(4), respectively. As shown in Figure 4, the band near 1155 cm−1, attributed to the stretching vibration mode of Si(4), becomes wider and moves towards lower wavenumbers. This may be due to the formation of [SiO6], which broadens the peak near 1155 cm−1 [22-24]. The band near 1330 cm−1 is derived from the P=O stretching vibration [25,26]. As the SiO2 content increases, the intensity of this Raman peak increases up to 26.7 mol % SiO2 and then decreases. This structural change indicates that the introduction of SiO2 can promote the formation of P=O linkages, but it can also break the P=O linkages when the SiO2 content is greater than 26.7 mol %. P=O linkages cause a remarkable adjustment of the distorted structure and thus result in a dramatic change in the Yb3+ local structure. As shown in Figure 3d, the variation trend of the asymmetry degree and NJ is similar to that of the P=O linkages. According to previous work [27], 29Si MAS NMR spectra of 20BaO-(80-x)P2O5-xSiO2 (x = 9, 16, 26.7, 32, and 40 mol %) glass samples indicated that [SiO6] existed in these phosphosilicate glasses, and the peaks of [SiO6] significantly decreased when the SiO2 content was greater than 26.7 mol %. Based on the previous 29Si MAS NMR and micro-Raman experimental results, these findings further demonstrate that the presence of [SiO6] may significantly affect the formation of P=O linkages and thus influence the spectroscopic properties of phosphate glasses.
Conclusions
The influence mechanism of SiO2 on the structural and spectroscopic properties of phosphate glasses prepared by the conventional melt-quenching method was systematically investigated using the micro-Raman technique and previous 29Si MAS NMR analysis. A significant change occurs in the variation trends of the fluorescence lifetimes, the scalar crystal-field NJ, and the Yb3+ asymmetry degree when the SiO2 content is greater than 26.7 mol %. It is worth noting that the glass with 26.7 mol % SiO2 possesses the longest fluorescence lifetime (1.51 ms), the highest gain coefficient (1.10 ms·pm2), the maximum Stark splitting manifold of the 2F7/2 level (781 cm−1), and the greatest NJ and Yb3+ asymmetry degree. Micro-Raman spectra indicate that the formation of P=O linkages in the glass is responsible for this abnormal variation. With increasing SiO2 concentration, the intensity of the P=O linkages increases, and then slightly decreases when the SiO2 content is greater than 26.7 mol %. This variation trend is consistent with that of NJ and the Yb3+ asymmetry degree. Additionally, based on the previous 29Si MAS NMR experimental results, the [SiO6] units existing in these phosphosilicate glasses may significantly affect the formation of P=O linkages and thus influence the spectroscopic properties of the glasses. These phosphosilicate glasses therefore have the potential to be developed as a Yb3+-doped gain medium for high-power solid-state lasers and broadband optical amplifiers.
Vertical Micro Reactor Stack for Integrated Chemical Reaction System∗
∗ This paper was presented at the International Symposium on Molecule-Based Information Transmission and Reception - Application of Membrane Protein Biofunction - (MB-ITR2005), Okazaki, Japan, 3-7 March 2005.
We proposed and fabricated a vertical micro reactor stack with vertical fluid flow operation, applicable to environmental analysis, post-genome analysis, gene diagnosis, and screening of useful materials for medicine manufacture. This reactor is characterized by its simple structure and by new aspects of vertical fluid transportation enabled by the use of a fluid filter with micro through-bores. The LIGA process using synchrotron radiation was applied to the fabrication of the fluid filters. The CFD simulation results suggested that the fluid can be held by the fluid filter and easily transported by pneumatic operation. It was also confirmed that the fluid flow velocity through the filter was controlled by varying the loaded pressure around several kPa. Furthermore, it was expected that the fluid is stirred and mixed when passing through the fluid filter. It was demonstrated that the proposed chemical reactor had good performance in vertical fluid flow operation and chemical reaction. [DOI: 10.1380/ejssnt.2005.190]
I. INTRODUCTION
The advantages of micro reactors for chemical synthesis and for the discovery and development of substances are generally recognized today. The significant properties of micro reactors, such as low energy consumption, high speed, high yields, and short thermal response times for chemical reactions, will lead to ecological and economical process engineering. Such properties result from the large surface-area-to-volume ratios, rapid thermal diffusion, and high gradients of pressure and reagent concentration. The possibility to integrate a large number of micro reactors within a small space opens several major fields of application, such as fine chemicals, combinatorial synthesis, and high-throughput screening using reactor units operated in parallel and in series with different functions. However, the standard structure of conventional micro reactors spreads over a two-dimensional substrate; from the point of view of micro integration and cost reduction, such a structure imposes significant restrictions. In order to solve these problems, 3D structured reactors operated by vertical fluid flow control have long been anticipated, but there are few examples of such reactors. The fabrication of a 3D structured reactor presents several difficulties, since suitable microfabrication and packaging techniques for the vertical direction have not been well developed so far. Several attempts to fabricate stacked structures of 2D channel networks using vertical through-holes and nano-porous membranes have been made in order to achieve 3D channel networks [1-4]. Ikuta et al. also proposed a 3D stacked device, the so-called "Biochemical IC family", using a laser modeling technique [4]. These devices are expected to utilize multiple fluid flows and to miniaturize a total micro chemical system for micro combinatorial synthesis. We proposed and fabricated a vertical micro reactor stack with vertical fluid flow operation, applicable to environmental analysis, post-genome analysis, gene diagnosis, and screening of useful materials for medicine manufacture. The reactor can operate vertically in stacked structures according to the demanded chemical functions, such as reaction, isolation, and purification. The variety of three-dimensional microfabrication processes offered by the LIGA (Lithographie, Galvanoformung, Abformung) process using synchrotron radiation allows new types of vertical micro reactors and their components. This reactor is characterized by its simple structure with no movable parts and by new aspects of vertical fluid transportation enabled by the use of a fluid filter with micro through-bores. We designed the chemical reactor based on the results of CFD (computational fluid dynamics) simulation and fabricated it using the LIGA process. Furthermore, we investigated the vertical fluid transport properties and the mixing property of the fluid filter. It is demonstrated that the proposed vertical micro reactor stack has good performance in vertical fluid flow operation.
II. PROPOSAL OF VERTICAL MICRO REACTOR STACK WITH VERTICAL FLUID FLOW OPERATION
In order to achieve vertical fluid flow operation, we proposed a new method in which the fluid is transported through filters with many micro through-bores. Figure 1 shows the schematic diagram of the stack of the proposed chemical reactor. As the figure shows, the stack consists of two unit reservoirs and a liquid filter which separates them. The fluid in the upper reservoir is held by its own surface tension on the fluid filter with through-bores. To transport the liquid from the upper unit reservoir to the downward unit reservoir, the inside pressure of the upper reservoir is increased, which pushes the liquid through the filter into the downward reactor. It is expected that the complex flow behavior during passage through the fluid filter will stir up the fluid drastically.
III. DESIGN OF THE FILTER FOR VERTICAL FLUID FLOW OPERATION
On the basis of the above vertical fluid transportation scheme, we designed the chemical reactor to attain vertical unit chemical operations. The design is based on results from the CFD (computational fluid dynamics) simulation package FLUENT. In order to derive useful fluid flow properties, the effects of the structural parameters of the fluid filter on fluid transportation were investigated. The material of the fluid filter was assumed to be PMMA or PTFE in the simulation. Figure 2 shows one of the results of the CFD simulation. The diameter and the thickness of the through-bores are 40 µm and 200 µm, respectively. The material is assumed to be PTFE with a contact angle of 110°. The pneumatic pressure for the vertical fluid (water) operation is set at 3 kPa. As shown in Fig. 2(a), the water is well sustained at the surface of the fluid filter. However, it is also found from the CFD results that if the radius of the through-bores becomes larger or the contact angle of the water becomes smaller, the water can no longer be sustained and a continuous water flow through the filter into the downward reactor occurs. As shown in Fig. 2(b), a pneumatic pressure of 3 kPa is enough to transfer the water into the downward reservoir. The passing time is estimated to be very short, about 3.8 × 10−4 s. During the water flow through the filter, an air bubble remains at the surface of the filter basal plane, as shown in Fig. 2(c). The affinity of the filter surface with water is also found to be essential for sustaining the water.
We estimated the dependence of the threshold pressure, at which the pneumatic transportation of the fluid from the upward reservoir to the downward reservoir starts, on the bore diameter by using the CFD simulation. We set the distance between the centers of the bores to 70 µm and the contact angle of the water to 70° or 110°. Figure 3 shows the result of the simulation. The threshold shows a hyperbolic dependence on the bore diameter, increasing significantly for bore diameters below 30 µm. It is notable that the threshold pressure for the pneumatic operation is fairly low compared with atmospheric pressure, suggesting that the pressure loss of vertical fluid transportation in a reactor stack with several layers is restricted to less than a quarter of an atmosphere. For ease of control of the pneumatic pressure driving the vertical liquid transportation, a smaller bore diameter is desirable, since the margin of the pressure set for the vertical flow transportation becomes broader. However, because of the limitations of our microfabrication capability, the smallest bore diameter was restricted to 20 µm in the 400 µm thick film structure.
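The hyperbolic trend in Figure 3 is what a simple Young-Laplace capillary estimate predicts; the sketch below computes that estimate. The surface tension value and the textbook prefactor for a cylindrical bore are our assumptions, not numbers reported by the authors.

```python
import numpy as np

GAMMA_WATER = 0.072  # surface tension of water at ~25 C, N/m (assumed)

def threshold_pressure_kpa(bore_diameter_m, contact_angle_deg):
    """Young-Laplace estimate of the pressure needed to push water
    through a cylindrical bore of a non-wetting filter:
        dP = -4 * gamma * cos(theta) / d   (positive for theta > 90 deg)
    For a wetting surface (theta < 90 deg) the capillary pressure is
    negative, i.e. water would imbibe without any applied pressure."""
    theta = np.deg2rad(contact_angle_deg)
    dp = -4.0 * GAMMA_WATER * np.cos(theta) / bore_diameter_m
    return dp / 1e3  # Pa -> kPa

for d_um in (20, 30, 40, 50):
    print(f"d = {d_um} um (PTFE, 110 deg): "
          f"{threshold_pressure_kpa(d_um * 1e-6, 110):.1f} kPa")
# d = 40 um gives ~2.5 kPa, the same order as the 3 kPa used in the CFD,
# and the 1/d dependence reproduces the hyperbolic shape of Figure 3.
```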
IV. FABRICATION PROCEDURE OF THE REACTOR USING SYNCHROTRON RADIATION
After the above design based on computational fluid dynamics simulation, chemical reactor stacks consisting of two or three unit vessels were fabricated using X-ray lithography and high-precision mechanical fabrication. The unit vessel was a cylinder whose diameter and height were 3 mm and 11 mm, respectively. The reactor has a volume of several tens of microliters, suitable for biochemical analysis using test tubes. The micro liquid filters with cylindrical through-bores were fabricated by deep X-ray lithography, irradiating 200 µm or 400 µm thick PMMA sheets with synchrotron radiation. The diameter of the bores ranged from 40 to 50 µm. The deep X-ray exposures were performed using the LIGA beamline (BL2) at NewSUBARU [5-7], established in Hyogo, Japan. X-ray masks with 10 µm Au absorbers were manufactured in order to obtain high-aspect-ratio micro bores. Figure 4(a) shows SEM images of the liquid filter with 400 µm thickness. Figure 4(b) shows the outward appearance of the reactor, which consists of three reservoirs.
V. EVALUATION OF THE FLUID TRANSPORTATION PROPERTY
The vertical fluid flow through the micro filter is a key function of this reactor. We evaluated the fluid flow by loading a positive air pressure inside the upper unit reservoir and measuring the flow velocity as a function of the loading pressure. We adopted pure water as the fluid in the reactor and demonstrated the vertical fluid flow from the upper reservoir unit to the lower reservoir unit. First of all, the vertical operation was confirmed by observing both the sustaining of the fluid and the subsequent transportation to the downward reservoir driven by the pneumatic pressure.
Next, we investigated the fluid transportation from the upper reservoir to the downward reservoir by measuring the passing time of the fluid through the filter. The dependence of the water flow rate on the pneumatic flow rate was investigated. Figure 5 shows this dependence for various loaded pressures from 5 to 10 kPa, using the filter with 40 µm bore diameter and 200 µm thickness. The figure shows that the dependence of the fluid flow behavior through the filter on the loaded pneumatic pressure is not significant for the 40 µm bore diameter. No threshold is found in the pneumatic flow rate at which the vertical fluid flow through the fluid filter starts. The results also suggest that the flow velocity can be easily controlled by varying the loaded pressure.
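For an order-of-magnitude check of such flow rates, a Hagen-Poiseuille estimate of the laminar flow through the array of bores can be used; the number of bores below is a hypothetical value, since the paper does not state it.

```python
import numpy as np

MU_WATER = 1.0e-3  # dynamic viscosity of water, Pa*s

def filter_flow_rate_ul_s(pressure_pa, bore_diameter_m, thickness_m, n_bores):
    """Hagen-Poiseuille laminar flow through n parallel cylindrical bores:
        Q = n * pi * d^4 * dP / (128 * mu * L)
    Valid once the capillary threshold has been exceeded and the bores
    are fully wetted; entrance effects are ignored."""
    q = n_bores * np.pi * bore_diameter_m ** 4 * pressure_pa \
        / (128.0 * MU_WATER * thickness_m)
    return q * 1e9  # m^3/s -> uL/s

# 40 um bores, 200 um thick filter, 5-10 kPa, assuming ~1000 bores
for p_kpa in (5, 7.5, 10):
    q = filter_flow_rate_ul_s(p_kpa * 1e3, 40e-6, 200e-6, 1000)
    print(f"{p_kpa} kPa -> ~{q:.0f} uL/s")
# Q scales linearly with the loaded pressure, which mirrors the easy
# pressure control of the flow velocity reported above.
```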
VI. MIXING PROPERTY OF THE FLUID FILTER
We conducted an enzyme reaction in this reactor, observing the reaction optically in the reservoir in real time. The enzyme reaction produces 2-hydroxymuconate semialdehyde, which has a characteristic absorption at a wavelength of 375 nm. The absorbance depends on the amount of 2-hydroxymuconate semialdehyde, so the progress of the reaction can be detected by observing the absorbance in real time.
First, the enzyme (catechol 2,3-dioxygenase) and substrate (catechol) solutions were blended: they were injected into the upper reservoir and mixed, and the enzyme reaction starts through this operation. Then, the mixture was transported downward by the pneumatic pressure. The absorbance of 2-hydroxymuconate semialdehyde was measured to detect the reaction. Figure 6 shows the schematic diagram of the applied enzyme reaction and the observed time evolution of the absorbance at a wavelength of 375 nm.
The figure shows a shortening of the reaction time after passage through the fluid filter. It is speculated that the increase in reaction efficiency results from the mixing of the reagents during their passage through the fluid filter.
VII. CONCLUSION
We proposed and fabricated a vertical micro reactor stack with vertical fluid flow operation, applicable to environmental analysis, post-genome analysis, gene diagnosis, and screening of useful materials for medicine manufacture. This reactor is characterized by its simple structure and by new aspects of vertical fluid transportation enabled by the use of a fluid filter with micro through-bores. The LIGA process using synchrotron radiation was used for the fabrication of the fluid filters for vertical fluid operation. The experiments showed that water could be sustained on the surface of the proposed fluid filter. We investigated the fluid transportation from the upper reservoir to the downward reservoir by measuring the passing time of the fluid through the filter. A pneumatic pressure of 5 kPa is enough to transfer the water into the downward reservoir. The affinity of the filter surface to water is found to be essential for sustaining the water. The results also suggest that the flow velocity can be easily controlled by varying the loaded pressure. It is demonstrated that the proposed vertical micro reactor stack has good performance in vertical fluid flow operation. The fluid is thought to be mixed by passing through the filter, and a shortening of the reaction time and an improvement of the reaction efficiency are expected.
"Engineering",
"Chemistry"
] |
AGA-LSTM: An Optimized LSTM Neural Network Model Based on Adaptive Genetic Algorithm
As the number of hidden layers increases, the weight update of the LSTM neural network model depends heavily on the gradient descent algorithm, and the convergence speed is slow; the weight adjustment can become trapped in local extrema, which affects the prediction performance of the model. Based on this, this paper proposes an optimized LSTM neural network model based on an adaptive genetic algorithm (AGA-LSTM). In this model, the mean squared error is designed as the fitness function, and the adaptive genetic algorithm (AGA) is used to globally optimize the weights between the neuron nodes of the LSTM model to improve its generalization ability. The experimental results show that, on UCI data sets, the prediction accuracy of the AGA-LSTM model is greatly improved compared to the standard LSTM model, which verifies the rationality of the model.
... which could only obtain a locally optimal solution. Based on this, this paper proposes an optimized LSTM neural network model based on an adaptive genetic algorithm (AGA-LSTM). The AGA-LSTM model uses the mean squared error of the output of the LSTM neural network as the fitness function, and an adaptive genetic algorithm (AGA) is used to construct an optimization space in which the weights between the nodes of the LSTM model are globally optimized to improve the prediction performance. Finally, experimental results on classic UCI data sets verify the rationality and effectiveness of the proposed AGA-LSTM neural network model.
2.1 Adaptive genetic algorithm
The genetic algorithm is a swarm intelligence algorithm proposed by Holland. It builds an optimization space by simulating the genetic mechanisms of the biological world and is widely used in solving various combinatorial optimization problems. Each chromosome in the genetic algorithm corresponds to an individual in the population and represents a candidate solution to an optimization problem. However, the crossover probability and mutation probability of the genetic algorithm are fixed, so the convergence speed is slow, premature convergence occurs easily, and the global optimal solution cannot be obtained. The adaptive genetic algorithm (AGA) is an improved genetic algorithm proposed by Srinivas et al. [13] to overcome the randomness and blindness of traditional genetic algorithms in selection, crossover, and mutation, and it more readily obtains a globally near-optimal solution. AGA dynamically adjusts the crossover probability and mutation probability as the fitness values and the number of iterations change, so as to retain the excellent individuals in the population and avoid premature convergence.
The process of the adaptive genetic algorithm includes initializing the population, calculating the fitness of each individual, selection and replication, adaptive crossover, adaptive mutation, and judging whether the iteration stops. The specific steps are as follows (see the sketch after this list):
1) Initialize the population space and set the relevant parameters of the algorithm.
2) Calculate the fitness function value of each individual (chromosome) in the population.
3) Select individuals for the crossover pool. Individuals (chromosomes) with higher fitness values are selected and copied to the offspring to form new individuals; individuals with low fitness values are eliminated.
4) Adaptive crossover operation. A pair of chromosomes exchange information and recombine; that is, individuals in the population are randomly paired, and chromosome segments between the paired individuals are exchanged with the adaptive crossover probability.
5) Adaptive mutation operation. Each individual in the parent population changes some genes of its chromosome to other alleles with the adaptive mutation probability.
6) Determine whether the iteration stops; terminate if the conditions are met, otherwise return to step 2).
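A minimal Python sketch of this loop, assuming real-valued chromosomes and a generic fitness function to be maximized; the population size, iteration count, and the constants K1-K4 are illustrative choices, and the adaptive probability rules implemented here are the standard ones discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

K1, K2, K3, K4 = 0.9, 0.5, 0.9, 0.5  # illustrative constants in (0, 1)

def adaptive_pc(f1, f2, fit):
    """Crossover probability: small for above-average pairs, large otherwise."""
    f_prime, f_max, f_avg = max(f1, f2), fit.max(), fit.mean()
    if f_prime >= f_avg and f_max > f_avg:
        return K1 * (f_max - f_prime) / (f_max - f_avg)
    return K3

def adaptive_pm(f, fit):
    """Mutation probability, analogous to the crossover rule."""
    f_max, f_avg = fit.max(), fit.mean()
    if f >= f_avg and f_max > f_avg:
        return K2 * (f_max - f) / (f_max - f_avg)
    return K4

def aga(fitness, dim, pop_size=50, max_iter=200):
    """Skeleton of the adaptive genetic algorithm described above."""
    pop = rng.uniform(-1, 1, size=(pop_size, dim))       # 1) initialize
    for _ in range(max_iter):
        fit = np.array([fitness(c) for c in pop])        # 2) fitness
        probs = fit - fit.min() + 1e-12                  # 3) roulette selection
        idx = rng.choice(pop_size, size=pop_size, p=probs / probs.sum())
        pop, fit = pop[idx], fit[idx]
        for i in range(0, pop_size - 1, 2):              # 4) adaptive crossover
            if rng.random() < adaptive_pc(fit[i], fit[i + 1], fit):
                a = rng.random()
                x, y = pop[i].copy(), pop[i + 1].copy()
                pop[i], pop[i + 1] = a * x + (1 - a) * y, a * y + (1 - a) * x
        for i in range(pop_size):                        # 5) adaptive mutation
            mask = rng.random(dim) < adaptive_pm(fit[i], fit)
            pop[i, mask] += rng.normal(0.0, 0.1, size=mask.sum())
    fit = np.array([fitness(c) for c in pop])            # 6) stop and report
    return pop[np.argmax(fit)]

# toy usage: maximize -(c - 0.3)^2 summed over genes
best = aga(lambda c: -np.sum((c - 0.3) ** 2), dim=4)
print(best)
```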
The crossover probability and mutation probability in the adaptive genetic algorithm are dynamically adjusted across iterations to maintain the diversity of the population. Since the fitness function is the main basis for evaluating the merits of individuals, the crossover probability and mutation probability change with the fitness values.
Formulas (1) and (2) give the adaptive crossover probability and mutation probability in the AGA. Here, f_avg is the average fitness value of all individuals in the population, f_max is the largest fitness value among the individuals, f' is the larger fitness value of the two individuals to be crossed, and f is the fitness value of the individual to be mutated; k1, k2, k3, and k4 are parameters between 0 and 1, used to adjust the crossover probability and mutation probability.
The adaptive genetic algorithm mainly reduces the crossover probability and mutation probability in the early stage of the iteration by setting the values of k1, k2, k3, and k4, to ensure the survival of better individuals; in the late stage of the iteration, when all individuals in the population tend to stabilize or approach a local optimum, the crossover probability and mutation probability are increased to escape the local optimum and find an approximately optimal solution globally.
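Formulas (1) and (2) themselves are not reproduced in this extract. The standard Srinivas-Patnaik adaptive rules, which match the description of f_avg, f_max, f', f, and k1-k4 given here (and which the sketch above implements), read:

```latex
p_c =
\begin{cases}
  k_1 \dfrac{f_{\max} - f'}{f_{\max} - f_{\mathrm{avg}}}, & f' \ge f_{\mathrm{avg}},\\[4pt]
  k_3, & f' < f_{\mathrm{avg}},
\end{cases}
\qquad\text{(1)}
\qquad
p_m =
\begin{cases}
  k_2 \dfrac{f_{\max} - f}{f_{\max} - f_{\mathrm{avg}}}, & f \ge f_{\mathrm{avg}},\\[4pt]
  k_4, & f < f_{\mathrm{avg}}.
\end{cases}
\qquad\text{(2)}
```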
2.2 The LSTM neural network
The LSTM neural network is mainly composed of three gated units: the forget gate, the input gate, and the output gate. These gated units learn and memorize sequence data so as to maintain long-distance dependencies in time series information and achieve high-precision prediction. The standard LSTM neuron structure is shown in Figure 1.
Figure 1. The structure of the LSTM neuron (with input gate, output gate, and forget gate).
As shown in Figure 1, the LSTM neuron contains an input gate, an output gate, and a forget gate. The input gate mainly processes the input data; the forget gate determines the current neuron's retention of historical information; the output gate produces the output of the neuron. Suppose the input sequence is (x_1, x_2, ..., x_t); then at time t, each parameter of the LSTM neuron is computed by the gate equations, where W denotes the weights between the input and the cell unit, h_t represents the output of the hidden layer at time t, and S is the sigmoid function. The deep LSTM neural network is built from LSTM units to form a network model with multiple hidden layers; it continuously removes redundant information in the data set through the forget gates to maintain long-distance dependencies, and therefore has stronger predictive performance and better generalization ability.
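The gate equations themselves are garbled in this extract; for reference, the standard LSTM update consistent with the description above (with S the sigmoid function, W the weight matrices, and ⊙ element-wise multiplication) is:

```latex
\begin{aligned}
f_t &= S\!\left(W_f [h_{t-1}, x_t] + b_f\right) &&\text{(forget gate)}\\
i_t &= S\!\left(W_i [h_{t-1}, x_t] + b_i\right) &&\text{(input gate)}\\
\tilde{c}_t &= \tanh\!\left(W_c [h_{t-1}, x_t] + b_c\right) &&\text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(cell state)}\\
o_t &= S\!\left(W_o [h_{t-1}, x_t] + b_o\right) &&\text{(output gate)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(hidden-layer output)}
\end{aligned}
```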
3 The AGA-LSTM neural network model
In this paper, the adaptive genetic algorithm is used to optimize the weights between the nodes of the LSTM neural network, which makes the weights between neurons more reasonable and improves the generalization ability and prediction performance of the model. This section describes the AGA-LSTM model in detail.
3.1 Chromosome coding
Figure 2 shows the structure of an LSTM model with 3 hidden layers, and Figure 3 shows the coding of a chromosome. Here, w_IH^1 represents the weight between the first neuron of the input layer and the first neuron of the first hidden layer, and so on. The chromosome coding used in this paper includes the weights between all neuron nodes, all of which are real numbers.
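A minimal sketch of this encoding, flattening all LSTM weight matrices into one real-valued chromosome and back; the layer sizes are hypothetical and the helper names are ours.

```python
import numpy as np

def encode_chromosome(weight_mats):
    """Concatenate all weight matrices into a flat real-valued chromosome."""
    return np.concatenate([w.ravel() for w in weight_mats])

def decode_chromosome(chrom, shapes):
    """Split a chromosome back into weight matrices of the given shapes."""
    mats, pos = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        mats.append(chrom[pos:pos + size].reshape(shape))
        pos += size
    return mats

# hypothetical layer sizes: input -> 3 hidden layers -> output
shapes = [(8, 32), (32, 32), (32, 32), (32, 1)]
weights = [np.random.randn(*s) for s in shapes]
chrom = encode_chromosome(weights)
print("chromosome length:", chrom.size)          # one gene per weight
restored = decode_chromosome(chrom, shapes)
assert all(np.allclose(a, b) for a, b in zip(weights, restored))
```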
3.2 Fitness value function
The mean squared error, usually denoted MSE, is evaluated on the test set for the solution encoded by each chromosome during the iteration process, and serves as the fitness value function of the AGA-LSTM model. The calculation is shown in formula (8).
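Formula (8) is not reproduced in this extract; the mean squared error over n test samples with targets y_i and predictions ŷ_i is presumably the standard:

```latex
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \qquad\text{(8)}
```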
3.3 The flow of the AGA-LSTM model
The AGA-LSTM model maps the weight values between the nodes of the LSTM neural network onto chromosomes through the adaptive genetic algorithm: each weight value is mapped to one dimension of the chromosome, making the chromosome a candidate weight solution for the LSTM neural network. The adaptive genetic algorithm then establishes the optimization space and iterates continuously to make the weights between the neurons more reasonable, improving the prediction performance and accuracy of the LSTM model. The flow is as follows:
(1) Initialize the population, encoding the weights of the LSTM network as chromosomes as described in Section 3.1.
(2) Train the LSTM neural network to get the default optimal weights.
(3) Use formula (8) to calculate the fitness value of each chromosome in the population, and use the adaptive genetic algorithm to construct the optimization space.
(4) Select chromosomes for the crossover pool. The chromosomes with fitness values ranking in the top 0.1·N are copied unchanged to the offspring to form new individuals, and chromosomes with lower fitness values are eliminated.
(5) According to formula (1), perform the adaptive crossover operation.
(6) According to formula (2), perform the adaptive mutation operation.
(7) Add 1 to the iteration counter n and determine whether n is greater than the maximum number of iterations M. If so, terminate the iteration; otherwise return to step (3).
(8) Output the globally optimal chromosome, which corresponds to the optimal weight distribution of the LSTM network model.
It should be noted that when using the adaptive genetic algorithm to optimize the weights of the LSTM model, the selection operation adopts the elite selection method: the basic idea is to use roulette selection, while the chromosomes ranked in the top 0.1·N (N is the population size) are copied unchanged to the next generation with their original coding. In the crossover operation, two paired chromosomes are linearly combined to generate two new chromosomes, with the crossover probability given by formula (1). In the adaptive mutation operation, non-uniform mutation is used, with the mutation probability given by formula (2).
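A minimal sketch of the elite selection described here, combining roulette sampling with copying the top 10% unchanged; the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def elite_roulette_select(pop, fit, elite_frac=0.1):
    """Keep the top elite_frac of chromosomes unchanged, and fill the rest
    of the next generation by fitness-proportional (roulette) sampling."""
    n = len(pop)
    n_elite = max(1, int(elite_frac * n))
    elite_idx = np.argsort(fit)[-n_elite:]            # best chromosomes
    probs = fit - fit.min() + 1e-12                   # shift to positive
    probs /= probs.sum()
    drawn = rng.choice(n, size=n - n_elite, p=probs)  # roulette wheel
    return np.concatenate([pop[elite_idx], pop[drawn]])

# usage: pop is an (N, dim) array of chromosomes, fit a length-N fitness vector
pop = rng.normal(size=(50, 10))
fit = -np.square(pop).sum(axis=1)   # toy fitness: closer to 0 is better
next_gen = elite_roulette_select(pop, fit)
print(next_gen.shape)                # (50, 10)
```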
4 Experimental results and analysis
In this paper, models based on the deep GRU neural network, the deep LSTM neural network, and the deep AGA-LSTM neural network are established, and the performance of the AGA-LSTM model is verified by comparing the prediction accuracy of each model. The experimental environment is as follows: deep learning framework TensorFlow 1.10; language Python 3.
4.1 Experimental data sets
This paper selects the Air Quality (AQ), EEG, Dow Jones Index (DJI), and Ozone Level Detection (OLD) data sets from the UCI database. The AQ data set records air quality data of a region in Italy from 2014 to 2015 and is used to predict the air quality of the region; the DJI data set records stock price data of the Dow Jones Index and can be used to predict the stock price trend over a future period; the OLD data set records ground-level ozone concentrations and can be used to predict the ozone concentration of the day. All are time series prediction problems.
4.2 Establishing prediction models
The modeling steps of a neural network model include data preprocessing, training on the data set, and model verification. Based on these steps, this paper establishes prediction models based on the GRU, LSTM, and AGA-LSTM neural networks on three UCI data sets (AQ, DJI, and OLD) to verify the effectiveness of the AGA-LSTM model. For each preprocessed data set, 70% is taken as the training set, 20% as the validation set, and 10% as the test set. The structures of the three network models are the same: the input layer and the output layer are each set to 1 layer, and the hidden layers are set to 6 layers. The sum of squared errors (SSE) is used as the index of the prediction performance of the models on the test set, as shown in formula (9). After the training of the neural network models is completed, the test sets of the above three data sets are input to each model in turn, and the comparison results are obtained. Table 1 shows the SSE values of the GRU, LSTM, and AGA-LSTM models on the AQ, DJI, and OLD data sets, and Figure 5 is a histogram drawn from Table 1. As shown in Table 1 and Figure 5, the SSE values of the LSTM model are the largest on all three data sets, followed by the GRU model, while the SSE values of the AGA-LSTM model are the lowest on each data set, indicating that the AGA-LSTM model has the highest prediction accuracy and the best generalization ability and prediction performance. Figure 6 shows the trend of the average SSE value of the three models over the data sets. The average SSE value of the GRU model is 12.3% lower than that of the LSTM model, while the average SSE value of the AGA-LSTM model is 11.9% lower than that of the GRU model and 22.8% lower than that of the LSTM model. Clearly, the AGA-LSTM model has the smallest error and the best prediction accuracy, which proves its effectiveness.
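A minimal sketch of the baseline LSTM architecture described above (1 input layer, 6 hidden LSTM layers, 1 output layer), written against the Keras API; the hidden width, window length, and training settings are our assumptions, as the paper does not report them.

```python
import numpy as np
import tensorflow as tf

def build_lstm(window=20, n_features=1, hidden_units=32, n_hidden=6):
    """Stacked LSTM regressor: 6 hidden LSTM layers, 1-unit dense output."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=(window, n_features)))
    for i in range(n_hidden):
        # all but the last LSTM layer must return full sequences
        model.add(tf.keras.layers.LSTM(hidden_units,
                                       return_sequences=(i < n_hidden - 1)))
    model.add(tf.keras.layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_lstm()
x = np.random.randn(128, 20, 1).astype("float32")   # toy sliding windows
y = np.random.randn(128, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
sse = float(np.sum((model.predict(x, verbose=0) - y) ** 2))  # cf. formula (9)
print("SSE on toy data:", sse)
```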
5 Conclusion
In this paper, an LSTM neural network model optimized by an adaptive genetic algorithm is proposed to solve the problems of slow convergence and local extrema in the weight adjustment of the LSTM neural network. The experimental results show that, compared with the traditional LSTM and GRU models, the AGA-LSTM model has the lowest average SSE value and the smallest prediction errors on the three UCI data sets. Therefore, the prediction performance and generalization ability of the model are effectively improved.
"Computer Science"
] |
A Bayesian approach for estimation of weight matrices in spatial autoregressive models
We develop a Bayesian approach to estimate weight matrices in spatial autoregressive (or spatial lag) models. Data sets in the regional economics literature are typically characterized by a limited number of time periods T relative to the number of spatial units N. When the spatial weight matrix is subject to estimation, severe problems of over-parametrization are likely. To make estimation feasible, our approach focuses on spatial weight matrices which are binary prior to row-standardization. We discuss the use of hierarchical priors which impose sparsity in the spatial weight matrix. Monte Carlo simulations show that these priors perform very well where the number of unknown parameters is large relative to the number of observations. The virtues of our approach are demonstrated using global data from the early phase of the COVID-19 pandemic.
Introduction
Spatial econometrics deals with the study of cross-sectional dependence and interactions among (spatial) observations. A particularly popular spatial econometric model is the spatial autoregressive (or spatial lag) specification, where spatial interdependence between observations is governed by a so-called spatial weight matrix. The spatial weight matrix is typically assumed non-negative, row-standardized and exogenously given, with spatial weights based on some concept of neighbourhood. Geographic neighbourhood is often preferred due to exogeneity assumptions. However, when relying on geographic information, several competing approaches exist for constructing the weight matrix (for a thorough discussion, see LeSage and Pace 2009).
Recent contributions in this direction include Kelejian and Piras (2014), Qu and Lee (2015), Han and Lee (2016), and Hsieh and Lee (2016). Since direct estimation of a spatial weight matrix requires estimating at least N(N − 1) parameters (ignoring the other model parameters), only a few approaches target direct estimation of spatial weight matrices. Recently, Ahrens and Bhattacharjee (2015) and Lam and Souza (2020) tackle this problem through LASSO-based approaches (Tibshirani 1996), which involve (a priori) expert knowledge about the interactions between spatial units, while allowing the final estimates of the spatial weights to deviate slightly from it. However, for regional economic panels, where the time dimension T is often limited relative to the number of spatial observations N, estimation results in a deleterious proliferation of the number of parameters.
In this paper we describe a novel and flexible Bayesian approach for the estimation of spatial weight matrices. Our definition of spatial weight matrices fulfils the typical assumptions employed in the vast majority of the spatial econometric literature. The resulting spatial weight matrices are assumed non-negative, and specific requirements for the identification of the parameters can be easily implemented in a Markov chain Monte Carlo (MCMC) sampling strategy. Although our primary focus is on row-standardized spatial weight matrices, weights without row-standardization are also implementable. (Ahrens and Bhattacharjee (2015) consider the case of sparsity in the spatial weights by employing shrinkage towards the zero matrix.) To make our estimation approach applicable to spatial panels where the number of time periods T is limited compared to the number of spatial units N, we focus on spatial weight matrices which are binary prior to potential row-standardization.
In this paper we primarily focus on scenarios where no a priori information on the spatial structure is available. However, we also discuss how a priori spatial information can be incorporated in a very simple and transparent way. For cases where the number of unknown parameters is large relative to the number of observations, we discuss hierarchical prior setups which impose sparsity in the weight matrix. In a Monte Carlo study, we show that these sparsity priors perform particularly well when the number of spatial observations N is large relative to the number of time periods T. We show that our approach can be implemented in an efficient Gibbs sampling algorithm, which implies that the estimation strategy can be easily extended to other spatial econometric specifications. Among several others, such extensions include shrinkage estimation to avoid overparameterization (Piribauer and Cuaresma 2016), more flexible specifications of the innovation process (LeSage 1997), controlling for unobserved spatial heterogeneity (Cornwall and Parent 2017; Piribauer 2016), or allowing for non-linearity in the slope parameters (Basile 2008; Krisztin 2017). It is moreover worth noting that the proposed approach can be easily adapted to matrix exponential spatial specifications (LeSage and Pace 2007), spatial error specifications (see LeSage and Pace 2009), or local spillover models (Vega and Elhorst 2015).
The rest of the paper is organized as follows: the next section outlines the panel version of the considered spatial lag model. Section 3 discusses the Bayesian estimation approach of the spatial weights along with several potential prior setups. Section 4 presents the Bayesian MCMC estimation algorithm and also discusses how to efficiently deal with the computational difficulties when updating the spatial weights in the MCMC sampler. Section 5 assesses the accuracy of the sampling procedure via a Monte Carlo simulation study. Section 6 illustrates our approach using data on global infection rates of the very first phase of the recent COVID-19 pandemic. The final section concludes.
Econometric framework
We consider a panel version of a global spillover spatial autoregressive model (SAR) of the form:

y_t = ρ W y_t + X_t β_0 + μ + δ_t ι_N + ε_t,

where y_t denotes an N × 1 vector of observations on the dependent variable measured at period t = 1, ..., T. μ and δ_t represent parameters associated with fixed effects for the spatial units and time periods, respectively, with ι_N denoting an N × 1 vector of ones. X_t is an N × k_0 full-rank matrix of explanatory variables, with corresponding k_0 × 1 vector of slope parameters β_0. ε_t is a standard N × 1 disturbance term, ε_t ∼ N(0, σ² I_N). The N × N matrix W denotes a spatial weight matrix and ρ is a (scalar) spatial dependence parameter. W is non-negative with w_ij > 0 if observation j is considered a neighbour of observation i, and w_ij = 0 otherwise. A vital assumption is also that w_ii = 0, in order to avoid the case that an observation is assumed to be a neighbour of itself. A frequently made assumption among practitioners is that W is row-stochastic, with rows summing to unity. In this paper, we mainly present results relating to row-stochastic weight matrices. However, as the decision on row-standardizing depends on the empirical application, it is worth noting that the proposed approach may be easily adapted to problems without row-standardization of W.
The reduced form of the SAR model is given by:

$$y_t = (I_N - \rho W)^{-1}\left(\alpha + \tau_t \iota_N + X_t \beta_0 + \varepsilon_t\right),$$

where $(I_N - \rho W)^{-1}$ is a so-called spatial multiplier matrix. To ensure that $(I_N - \rho W)$ is invertible, appropriate stability conditions need to be imposed. For row-stochastic spatial weight matrices, a sufficient stability condition often employed for the spatial autoregressive parameter is $\rho \in (-1, 1)$ (see, for example, LeSage and Pace 2009).
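To make the setup concrete, the following minimal sketch simulates one period of the model via its reduced form. This is an illustration under hypothetical parameter values, not the authors' code; fixed effects are omitted for brevity.

```python
# Simulating one period of the SAR panel via its reduced form;
# rho, W, and the covariates are hypothetical placeholder values.
import numpy as np

rng = np.random.default_rng(42)

N = 20                                   # number of spatial units
rho = 0.5                                # spatial dependence parameter
W = rng.random((N, N))
np.fill_diagonal(W, 0.0)                 # enforce w_ii = 0
W = W / W.sum(axis=1, keepdims=True)     # row-standardize (row-stochastic W)

X_t = rng.normal(size=(N, 2))            # N x k0 explanatory variables
beta0 = np.array([-1.0, 1.0])
eps_t = rng.normal(scale=np.sqrt(0.5), size=N)

# y_t = (I_N - rho W)^{-1} (X_t beta0 + eps_t); solving the linear system
# avoids forming the spatial multiplier matrix explicitly.
y_t = np.linalg.solve(np.eye(N) - rho * W, X_t @ beta0 + eps_t)
```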
In most cases, the elements of $W$ are treated as known. In the spatial econometric literature, there are various ways of constructing such a spatial weight matrix. In this study we focus on the estimation of weight matrices which are binary prior to row-standardization.
We also consider specifications with a spatial lag of the temporally lagged dependent variable. Sampling strategies for these cases are presented in the appendix.
Thorough discussions on the implications of row-standardization are provided by Plümper and Neumayer (2010) and Liu et al. (2014). We assume that the typical element $w_{ij}$ of our spatial weight matrix can be obtained from an unknown $N \times N$ spatial adjacency matrix $\Omega$ (with typical element $\omega_{ij}$). We therefore define $W = f(\Omega)$, where $f(\cdot)$ denotes the row-standardization function:

$$w_{ij} = \frac{\omega_{ij}}{\sum_{j=1}^{N} \omega_{ij}}. \quad (3)$$

The elements of the adjacency matrix are assumed to be unknown binary indicators, which are subject to estimation. It is worth noting that the assumption of a binary $\Omega$ covers a wide range of specifications commonly used in the literature, such as contiguity, distance band, or nearest neighbours (see, for example, LeSage and Pace 2009).
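A possible implementation of the row-standardization function $f(\cdot)$, assuming a binary adjacency matrix as input (a sketch, not the authors' code):

```python
import numpy as np

def row_standardize(omega: np.ndarray) -> np.ndarray:
    """Map a binary adjacency matrix Omega to W = f(Omega).

    Rows without any neighbours are left at zero, cf. the discussion
    of Eq. (3) in the text."""
    omega = omega.astype(float)
    row_sums = omega.sum(axis=1, keepdims=True)
    # Divide each row by its sum; rows with zero neighbours stay zero.
    return np.divide(omega, row_sums,
                     out=np.zeros_like(omega), where=row_sums > 0)
```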
To alleviate further notation, we collect the respective dummy variables associated with the fixed effects along with the explanatory variables in an $NT \times k$ matrix $Z$ with corresponding $k \times 1$ parameter vector $\beta$. Moreover, define $y = (y_1', \dots, y_T')'$ and let $\mathcal{D} = \{y, Z\}$ denote the data. The Gaussian likelihood $p(\mathcal{D}|\cdot)$ is then given by:

$$p(\mathcal{D} \,|\, \beta, \rho, \sigma^2, \Omega) = (2\pi\sigma^2)^{-NT/2}\, |I_N - \rho W|^{T} \exp\!\left(-\frac{1}{2\sigma^2}\,\varepsilon'\varepsilon\right), \qquad \varepsilon = y - \rho\,(I_T \otimes W)\,y - Z\beta.$$

When the elements of the spatial weight matrix are subject to estimation, the number of unknown parameters is likely much larger than the number of observations. Since spatial economic panels often feature limited $T$ relative to $N$, the proposed estimation approach has to address the issue of over-parametrization. We discuss different ways to tackle this problem. First and foremost, one may reduce the dimensionality of the problem by imposing a priori information on the spatial weights or by assuming symmetry of the spatial neighbourhood structure. Alternatively, we consider hierarchical prior setups which impose sparsity in the weight matrix.
When estimating spatial weights in addition to the spatial and slope parameters, identification issues are more complicated as compared to models assuming exogenous spatial weights. We therefore follow De Paula et al. (2019), who provide a thorough discussion on parameter identification for rather general spatial autoregressive model specifications. Note that Eq. (3) implies that some observations may have zero neighbours; however, priors on the number of neighbours can be easily elicited to rule out such situations. Moreover, a researcher might easily abstain from row-standardization by neglecting the transformation in Eq. (3), in which case the function $f(\cdot)$ is simply dropped.

As mentioned before, we consider spatial weight matrices which are non-negative with $w_{ii} = 0$ for all $i$. Further standard assumptions include $\sum_j w_{ij} = \|w_{i\cdot}\| \le 1$ for all $i$, $|\rho| < 1$, and $\|\beta_0\| < c$ for some positive $c \in \mathbb{R}$, as well as $\beta_0 \neq 0$. As an additional identifying assumption, it is important that the main diagonal elements of $W^2$ are not proportional to a vector of ones. De Paula et al. (2019) show that a strongly connected spatial network is needed for global identification. Since strong a priori information on the spatial weight matrix is often not available (or desired), we assume $\rho \in (0, 1)$ and only consider positive spatial autocorrelation, which is a typical assumption in empirical applications.
Bayesian estimation of W
In this paper we use a Bayesian estimation approach to obtain estimates and inference on the unknown quantities $\beta$, $\rho$, and $\sigma^2$, as well as the elements of $\Omega$. After eliciting suitable priors for the unknown parameters, we employ a computationally efficient MCMC algorithm.
Let $p(\omega_{ij} = 1)$ denote the prior belief in including the $ij$-th element of the spatial weight matrix.
Conversely, for a proper prior specification the prior probability of exclusion is then simply given by $p(\omega_{ij} = 0) = 1 - p(\omega_{ij} = 1)$. With $\Omega_{-ij}$ denoting the elements of the neighbourhood matrix without $\omega_{ij}$, the posterior probabilities of $\omega_{ij} = 1$ and $\omega_{ij} = 0$ conditional on all other parameters are given by:

$$p(\omega_{ij} = 1 \,|\, \Omega_{-ij}, \beta, \sigma^2, \rho, \mathcal{D}) \propto \mathcal{L}_1\, p(\omega_{ij} = 1), \qquad p(\omega_{ij} = 0 \,|\, \Omega_{-ij}, \beta, \sigma^2, \rho, \mathcal{D}) \propto \mathcal{L}_0\, p(\omega_{ij} = 0), \quad (5)$$

where $\mathcal{L}_1$ and $\mathcal{L}_0$ are the likelihoods obtained by updating the spatial weight matrix via setting $\omega_{ij} = 1$ and $\omega_{ij} = 0$, respectively. (The most obvious case where the identifying assumption on the main diagonal of $W^2$ would be violated is a fully connected $W$ with $w_{ij} = 1/N$ for all $i \neq j$. These assumptions can be checked during estimation by using standard rejection sampling techniques in the MCMC sampling steps (see, for example, LeSage and Pace 2009, or Koop 2003); rejection sampling discards draws of parameter combinations which do not fulfil these assumptions.) Using the law of total probability, it is straightforward to show that the resulting conditional posterior for $\omega_{ij}$ is Bernoulli:

$$\omega_{ij} \,|\, \Omega_{-ij}, \beta, \sigma^2, \rho, \mathcal{D} \;\sim\; \text{Bernoulli}\!\left(\frac{\bar{p}(1)}{\bar{p}(1) + \bar{p}(0)}\right), \quad (6)$$

with $\bar{p}(1) = p(\omega_{ij} = 1 \,|\, \Omega_{-ij}, \beta, \sigma^2, \rho, \mathcal{D})$ and $\bar{p}(0) = p(\omega_{ij} = 0 \,|\, \Omega_{-ij}, \beta, \sigma^2, \rho, \mathcal{D})$ as given in Eq. (5). Since the conditional posterior follows a convenient and well-known form, efficient Gibbs sampling can be employed.
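The following stylized sketch illustrates this Gibbs step for a single element $\omega_{ij}$ under simplifying assumptions. It evaluates the determinant directly for clarity (the efficient rank-one updates of Section 4 are sketched further below); all names are illustrative, not the authors' code.

```python
import numpy as np

def log_cond(omega, i, j, value, Y, Z, beta, rho, sigma2, prior_incl):
    """Log of the unnormalized conditional posterior p(omega_ij = value | .).

    Y is the N x T matrix holding y_t in its columns; Z is the stacked
    NT x k covariate matrix with coefficient vector beta."""
    omega = omega.copy()
    omega[i, j] = value
    # Row-standardize the binary adjacency matrix (zero rows stay zero).
    W = omega / np.maximum(omega.sum(axis=1, keepdims=True), 1.0)
    N, T = Y.shape
    A = np.eye(N) - rho * W
    _, logdet = np.linalg.slogdet(A)
    # Residuals per period: A y_t minus the fitted values Z beta.
    resid = A @ Y - (Z @ beta).reshape(N, T, order="F")
    loglik = T * logdet - 0.5 * np.sum(resid ** 2) / sigma2
    logprior = np.log(prior_incl if value == 1 else 1.0 - prior_incl)
    return loglik + logprior

def draw_omega_ij(omega, i, j, rng, **kwargs):
    """One Bernoulli Gibbs draw for omega_ij, cf. Eq. (6)."""
    l1 = log_cond(omega, i, j, 1, **kwargs)
    l0 = log_cond(omega, i, j, 0, **kwargs)
    p1 = 1.0 / (1.0 + np.exp(l0 - l1))   # numerically stable inclusion prob.
    omega[i, j] = int(rng.random() < p1)
    return omega
```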
A Bayesian estimation framework requires elicitation of a prior on $\Omega$. Obvious candidates are independent Bernoulli priors on the unknown indicators $\omega_{ij}$:

$$p(\omega_{ij} = 1) = \pi_{ij}, \quad (7)$$

where $\pi_{ij}$ denotes the prior inclusion probability of $\omega_{ij}$. Conversely, the prior probability of exclusion then simply takes the form $p(\omega_{ij} = 0) = 1 - \pi_{ij}$.
A natural prior choice would involve setting $\pi_{ij} = \pi = 1/2$ for $i \neq j$, and zero otherwise, which implies that each off-diagonal element in $\Omega$ has an equal prior chance of being included.
However, in many cases a researcher has a priori information on the underlying structure of the spatial weight matrix. The stylized examples in Figure 1 demonstrate how to incorporate such information in a flexible and straightforward way. Case (A) shows a prior specification without any prior uncertainty on the spatial links. This setup implies an exogenous $W$ and no estimation of the weights is involved. Case (B) depicts the opposite case where no prior spatial information is available. Specifically, this case considers full estimation of all $N^2 - N$ potential links with respective prior inclusion probability $\pi_{ij} = 1/2$ for $i \neq j$. To reduce the dimensionality of the parameter space, an interesting alternative might be the assumption of a symmetric $\Omega$, which halves the number of free elements in the spatial weight matrix. This assumption can be imposed by simply updating $\omega_{ij} = \omega_{ji}$ simultaneously.

Subplots (C) and (D) in Figure 1 depict prior setups where a priori spatial information is available to the researcher, but associated with uncertainty. Case (C) illustrates a prior where the general spatial domain is assumed to be a priori known, but uncertainty over specific linkages exists. In empirical practice, spatial weight matrices based on geographic information are often viewed as preferable to (socio-)economic data due to exogeneity assumptions. The illustrated prior specification follows this idea while still allowing for uncertainty and flexibility in the spatial neighbourhood structure. Recent contributions to the spatial econometric literature propose selecting (Piribauer and Cuaresma 2016) or combining (Debarsy and LeSage 2018) multiple exogenous spatial weight matrices. Case (D) follows a similar idea by depicting a mixture of a distance-band and a contiguity matrix (i.e. neighbourhood if regions share a common border). The intersecting elements of the two spatial structures (resulting in a contiguity matrix) are assumed to be included by setting $\omega_{ij} = 1$.

Notes to Figure 1: Alternative prior setups for a linear city of $N = 15$ spatial observations. Case (A) shows a prior specification without any prior uncertainty on the spatial links, implying an exogenous $W$. Case (B) involves no spatial prior information; each element has a prior inclusion probability $\pi_{ij} = 1/2$ for all $i \neq j$. Case (C) shows uncertainty over the linkages in $\Omega$ only within a certain spatial domain. Case (D) is a stylized prior specification considering uncertainty among two (or more) weight matrices, with $\pi_{ij} = 1$ in regions where the two matrices overlap.
Hierarchical prior setups and sparsity
The prior structure in Eq. (7) involves fixed inclusion probabilities $\pi_{ij}$, which implies that the number of neighbours of observation $i$ follows a binomial distribution with a prior expected number of neighbours of $(N-1)\pi$. However, such a prior structure has the potentially undesirable effect of promoting a relatively large number of neighbours. For example, when $\pi = 1/2$, the prior expected number of neighbours is $(N-1)/2$, since combinations of $\omega_{ij}$ resulting in such a neighbourhood size are dominant in number.
To put more prior weight on parsimonious neighbourhood structures and therefore promote sparsity in the adjacency matrix, one may explicitly account for the number of linkages $m_i$ in each row of the adjacency matrix $\Omega = [\omega_1, \dots, \omega_N]'$. We consider a flexible prior structure on the number of neighbours that corresponds to a beta-binomial distribution $BB(N-1, a, b)$ with two prior hyperparameters $a, b > 0$. The beta-binomial distribution is the result of treating the prior inclusion probability as random (rather than fixed) by placing a hierarchical beta prior on it. For $m_i$, the resulting prior can be written as follows:

$$p(m_i) = \binom{N-1}{m_i} \frac{\Gamma(a + m_i)\,\Gamma(b + N - 1 - m_i)}{\Gamma(a + b + N - 1)} \cdot \frac{\Gamma(a + b)}{\Gamma(a)\,\Gamma(b)}, \quad (8)$$

where $\Gamma(\cdot)$ denotes the Gamma function, and $a$ and $b$ are prior hyperparameters.
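As a quick illustration of this prior (a sketch, not the authors' code), the beta-binomial log-mass can be evaluated directly; the check below confirms that $a = b = 1$ yields a discrete uniform distribution over $m_i = 0, \dots, N-1$.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_bb_prior(m: int, N: int, a: float = 1.0, b: float = 1.0) -> float:
    """Log of the beta-binomial prior p(m) over the number of neighbours."""
    # log C(N-1, m) via Gamma functions
    log_choose = gammaln(N) - gammaln(m + 1) - gammaln(N - m)
    # log [ B(a + m, b + N - 1 - m) / B(a, b) ]
    return log_choose + betaln(a + m, b + N - 1 - m) - betaln(a, b)

N = 15
probs = np.exp([log_bb_prior(m, N) for m in range(N)])
print(np.allclose(probs, 1.0 / N))   # True: discrete uniform for a = b = 1
```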
In the case of $a = b = 1$, the prior takes the form of a discrete uniform distribution over the number of neighbours. By fixing $a = 1$, we follow Ley and Steel (2009) and anchor the prior on the prior expected number of neighbours $\bar{m}$, which pins down the remaining hyperparameter as $b = (N - 1 - \bar{m})/\bar{m}$.
Bayesian MCMC estimation of the model
This section presents the Bayesian MCMC estimation algorithm for the proposed modelling framework. Estimation is carried out using an efficient Gibbs sampling scheme. The only exception is the sampling step for the (scalar) spatial autoregressive parameter $\rho$, where we propose using a standard griddy-Gibbs step. The sampling scheme involves the following steps: I. Set starting values for the parameters (e.g. by sampling from the prior distributions). II. Sequentially update the parameters by subsequently sampling from the conditional posterior distributions presented in this section.
Step II is repeated $G$ times after discarding the first $G_0$ draws as burn-ins.
The conditional posterior distributions for $\beta$ and $\sigma^2$ are of well-known form (see, for example, LeSage and Pace 2009). For $\beta$, the conditional posterior is Gaussian:

$$\beta \,|\, \rho, \sigma^2, \Omega, \mathcal{D} \sim \mathcal{N}(\bar{\mu}, \bar{V}), \qquad \bar{V} = \left(\sigma^{-2} Z'Z + V_0^{-1}\right)^{-1}, \qquad \bar{\mu} = \bar{V}\left(\sigma^{-2} Z'S(\rho)y + V_0^{-1}\mu_0\right),$$

where $S(\rho) = I_{NT} - \rho(I_T \otimes W)$, and $\mu_0$ and $V_0$ denote the prior mean and variance of $\beta$. The conditional posterior of $\sigma^2$ is inverted Gamma:

$$\sigma^2 \,|\, \beta, \rho, \Omega, \mathcal{D} \sim \mathcal{IG}\!\left(c_0 + \tfrac{NT}{2},\; d_0 + \tfrac{1}{2}\varepsilon'\varepsilon\right), \qquad \varepsilon = S(\rho)y - Z\beta,$$

with prior shape and rate parameters $c_0$ and $d_0$.
Sampling ρ
For the spatial parameter $\rho$, we use a standard Beta prior distribution (see LeSage and Pace, 2009, p. 142). The conditional posterior is given by:

$$p(\rho \,|\, \beta, \sigma^2, \Omega, \mathcal{D}) \propto |I_N - \rho W|^{T} \exp\!\left(-\frac{1}{2\sigma^2}\,\varepsilon'\varepsilon\right) p(\rho).$$

Note that the conditional posterior for $\rho$ does not follow a well-known form and thus requires alternative sampling techniques. We follow LeSage and Pace (2009) and use a griddy-Gibbs step (Ritter and Tanner 1992) to sample $\rho$. (A random-walk Metropolis-Hastings step for $\rho$ might be employed as an alternative.)
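A minimal griddy-Gibbs sketch for $\rho$ (an illustration under assumed inputs, not the authors' implementation): the conditional posterior is evaluated on a grid over $(0, 1)$, normalized, and a draw is produced via the inverse-CDF method.

```python
import numpy as np

def draw_rho_griddy(W, Y, Zb, sigma2, rng, a=1.01, b=1.01, n_grid=200):
    """Griddy-Gibbs draw of rho; Y is the N x T outcome matrix, Zb the
    N x T matrix of fitted values Z beta arranged per period, and (a, b)
    the shape parameters of the Beta prior on rho."""
    N, T = Y.shape
    grid = np.linspace(0.001, 0.999, n_grid)
    logpost = np.empty(n_grid)
    for g, rho in enumerate(grid):
        A = np.eye(N) - rho * W
        _, logdet = np.linalg.slogdet(A)
        resid = A @ Y - Zb
        loglik = T * logdet - 0.5 * np.sum(resid ** 2) / sigma2
        logprior = (a - 1.0) * np.log(rho) + (b - 1.0) * np.log(1.0 - rho)
        logpost[g] = loglik + logprior
    weights = np.exp(logpost - logpost.max())   # stabilized posterior grid
    cdf = np.cumsum(weights) / weights.sum()
    return grid[np.searchsorted(cdf, rng.random())]   # inverse-CDF draw
```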
Sampling the elements of the adjacency matrix
As discussed in the previous section, we propose two alternative prior specifications for the unknown indicators of the spatial weight matrix: first, an independent Bernoulli prior structure with fixed inclusion probabilities, Eq. (7); second, a hierarchical prior structure which treats the inclusion probabilities as random, Eq. (8). After eliciting the prior, the binary indicators can be sequentially sampled in random order from a Bernoulli distribution with the conditional posterior given in Eq. (6).
Fast computation of the determinant terms
For the Bayesian MCMC algorithm, it is worth noting that repeated sampling from Eq. (6) is required. However, this requires evaluating the conditional probabilities $p(\omega_{ij} = 1|\cdot)$ and $p(\omega_{ij} = 0|\cdot)$. The main computational difficulty lies in the calculation of the determinants $|I_N - \rho W_0|$ and $|I_N - \rho W_1|$, which has to be carried out per Gibbs sampling step for each of the $N^2 - N$ unknown elements of the spatial adjacency matrix. The computational costs associated with direct calculation of these determinants rise steeply with $N$, in fact by a factor of $\mathcal{O}(N^3)$. This makes direct evaluation of the determinant prohibitively expensive, especially for large values of $N$. (Since the support of $\rho$ is limited, the griddy-Gibbs approach, sometimes called the inversion approach, relies on univariate numerical integration of the conditional posterior for $\rho$ and uses the cumulative density function to produce draws of $\rho$. A Metropolis-Hastings step may be used as a standard alternative, but such steps typically produce less efficient draws with poorer mixing properties; see also LeSage and Pace 2009.) To avoid direct evaluation, we provide computationally efficient updates for the determinant, allowing for the estimation of models with larger sample sizes.
It is worth noting that it is not necessary to directly calculate the determinants of the $N \times N$ matrices $I_N - \rho W_1$ and $I_N - \rho W_0$. Here, $W_1$ and $W_0$ denote the spatial weight matrices obtained by setting $\omega_{ij} = 1$ and $\omega_{ij} = 0$, respectively.
Direct evaluation of $|I_N - \rho W|$ can be largely avoided, since updating $\omega_{ij}$ changes only the $i$-th row of $W$, provided we do not restrict $\Omega$ to be symmetric (we will address this case shortly). To illustrate, let $\Omega^{(c)}$ denote the current, to be updated, spatial adjacency matrix, $W^{(c)}$ the associated spatial weight matrix, and $A^{(c)} = I_N - \rho W^{(c)}$. Using the so-called matrix determinant lemma, we can efficiently calculate:

$$|A| = |A^{(c)} + e_i v'| = \left(1 + v'(A^{(c)})^{-1} e_i\right)|A^{(c)}|, \quad (12)$$

where $e_i$ is an $N \times 1$ vector of zeros, except for its $i$-th entry, which is unity, and the $N \times 1$ vector $v$ contains the differences between the $i$-th row of $-\rho W$ and the $i$-th row of $-\rho W^{(c)}$. Eq. (12) provides a simple update of the determinant, but requires the inverse of $A^{(c)}$. Direct evaluation of $(A^{(c)})^{-1}$ is, similar to direct evaluation of the determinant, prohibitively expensive for moderate to large $N$, since it has to be carried out for each unknown element of $\Omega$. However, we can rely on the so-called Sherman-Morrison formula to avoid direct evaluation of the matrix inverse:

$$A^{-1} = \left(A^{(c)} + e_i v'\right)^{-1} = (A^{(c)})^{-1} - \frac{(A^{(c)})^{-1} e_i v' (A^{(c)})^{-1}}{1 + v'(A^{(c)})^{-1} e_i}. \quad (13)$$

Combining the formulas in Eqs. (12) and (13) thus provides a numerically cheap and viable way to update the elements of the spatial adjacency matrix.
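The following sketch implements the two rank-one formulas and verifies them against direct evaluation (illustrative code, not the authors'); in the symmetric case discussed below, the same function is simply applied twice in sequence.

```python
import numpy as np

def rank_one_update(detA, Ainv, i, v):
    """Determinant lemma and Sherman-Morrison for A_new = A + e_i v'.

    Both updates cost O(N^2) instead of the O(N^3) of direct evaluation."""
    # det(A + e_i v') = (1 + v' A^{-1} e_i) det(A)
    c = 1.0 + v @ Ainv[:, i]
    det_new = c * detA
    # (A + e_i v')^{-1} = A^{-1} - (A^{-1} e_i)(v' A^{-1}) / c
    Ainv_new = Ainv - np.outer(Ainv[:, i], v @ Ainv) / c
    return det_new, Ainv_new

# Quick check against direct evaluation:
rng = np.random.default_rng(1)
N, i = 6, 2
A = np.eye(N) + 0.1 * rng.random((N, N))
v = rng.random(N)
det_new, Ainv_new = rank_one_update(np.linalg.det(A), np.linalg.inv(A), i, v)
B = A.copy()
B[i, :] += v                          # B = A + e_i v'
assert np.isclose(det_new, np.linalg.det(B))
assert np.allclose(Ainv_new, np.linalg.inv(B))
```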
Note the implication that an update of $\rho$ necessitates a direct evaluation of the determinant $|I_N - \rho W|$ and the matrix inverse $(I_N - \rho W)^{-1}$, as in this case no convenient update equations exist. An update of $\rho$, however, has to be performed only once per Gibbs step, as opposed to the $N^2 - N$ updates necessary for $\Omega$, thus justifying the relatively higher computational costs. The binary nature of $\omega_{ij}$ can moreover be exploited for additional computational gains: either $W_0$ or $W_1$ always exactly equals $W^{(c)}$, and thus its determinant and inverse are already known. This necessitates calculating the determinant and inverse only for $\omega_{ij} = 1$ or only for $\omega_{ij} = 0$, but not both.
If a symmetric spatial adjacency matrix is assumed, the update process remains generally the same; however, the determinant and matrix inverse updates have to be performed iteratively.
In this case, both $\omega_{ij}$ and $\omega_{ji}$ (for $i \neq j$) are set to either 1 or 0. Thus, both the $i$-th and the $j$-th row of $W$ differ from $W^{(c)}$. Following the notation in the non-symmetric case, let us denote the differences between these rows as $v_i$ and $v_j$. To obtain an update of $|A|$ and $A^{-1}$, we first evaluate Eqs. (12) and (13) based on $e_i$, $v_i$, $|A^{(c)}|$, and $(A^{(c)})^{-1}$. Using the resulting determinant and matrix inverse, as well as $e_j$ and $v_j$, we again evaluate Eqs. (12) and (13), which yields $|A|$ and $A^{-1}$.
Simulation study
To assess the accuracy of our proposed approach, we evaluate its performance in a Monte Carlo study. Our benchmark data generating process comprises two randomly generated explanatory variables, as well as spatial unit and time fixed effects:

$$\tilde{y}_t = \tilde{\rho}\tilde{W}\tilde{y}_t + \tilde{\alpha} + \tilde{\tau}_t \iota_N + \tilde{X}_t \tilde{\beta}_0 + \tilde{\varepsilon}_t.$$
To maintain succinct notation, we denote the simulated values in the Monte Carlo study with a tilde. The matrix of explanatory variables $\tilde{X}_t$ is defined as $\tilde{X}_t = [\tilde{x}_1, \tilde{x}_2]$, where both $\tilde{x}_1$ and $\tilde{x}_2$ are normally distributed with zero mean and unit variance ($k_0 = 2$). The corresponding vector of coefficients is defined as $\tilde{\beta}_0 = [-1, 1]'$. The vector of residuals $\tilde{\varepsilon}_t$ is generated from a normal distribution with zero mean and $\tilde{\sigma}^2 = 0.5$. The fixed effects parameters $\tilde{\alpha}$ and $\tilde{\tau}_t$ are randomly generated from a standard normal distribution.
The row-stochastic spatial weight matrix $\tilde{W}$ is based on an adjacency matrix $\tilde{\Omega}$, which is generated from an $N/20$ nearest-neighbour specification, additionally assuming symmetry of the weight matrix prior to row-standardization. More specifically, $\tilde{\Omega} = (\tilde{\Omega}_0 + \tilde{\Omega}_0')/2$, where $\tilde{\Omega}_0$ is an $N/20$ nearest-neighbour adjacency matrix. The nearest-neighbour specification is based on a randomly generated spatial location pattern, sampled from a normal distribution with zero mean and unit variance. In the Monte Carlo study we vary $T \in \{10, 40\}$ and $N \in \{20, 100\}$.
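A sketch of this adjacency-generation step (illustrative only): since the elementwise average of a binary matrix and its transpose need not be binary, the symmetrization below uses the elementwise maximum, which keeps $\omega_{ij} \in \{0, 1\}$.

```python
import numpy as np

def knn_adjacency(coords: np.ndarray, k: int) -> np.ndarray:
    """Binary k-nearest-neighbour adjacency from spatial coordinates."""
    N = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # enforce omega_ii = 0
    omega = np.zeros((N, N))
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k closest units
    np.put_along_axis(omega, nearest, 1.0, axis=1)
    return omega

rng = np.random.default_rng(7)
N = 100
coords = rng.normal(size=(N, 1))             # random spatial location pattern
omega0 = knn_adjacency(coords, k=N // 20)
omega = np.maximum(omega0, omega0.T)         # symmetrize, entries stay binary
W = omega / np.maximum(omega.sum(axis=1, keepdims=True), 1.0)  # row-standardize
```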
For the Monte Carlo simulation study, we compare the following prior setups:

1. Fixed ($\pi = 1/2$) prior: this prior corresponds to the fixed Bernoulli prior specification in Eq. (7), where we set $\pi_{ij} = 1/2$.

2. Sparsity prior: this prior corresponds to the hierarchical beta-binomial specification in Eq. (8), which places more prior mass on parsimonious neighbourhood structures.

For all prior specifications under scrutiny, we consider two alternative estimation setups by assuming that the adjacency matrix is either symmetric or non-symmetric. We moreover report the predictive performance of two alternative specifications using exogenous weight matrices. In these cases the employed weights are based on the true (symmetric) adjacency matrix, fixing the accuracy at the 99% and 95% level, respectively. We simulate such cases by randomly switching 1% and 5% of the elements in the true binary adjacency matrix $\tilde{\Omega}$, respectively. The resulting exogenous adjacency matrices thus have exactly 99% and 95% overlap in the binary elements with the true adjacency matrix, while maintaining the same level of sparsity.
The prior setup for the remaining parameters is as follows. We assume a Gaussian prior for $\beta$ with zero mean and a variance of 100. We use an inverse gamma prior for $\sigma^2$ with shape and rate parameters of 0.01. The prior for the spatial autoregressive parameter $\rho$ is a symmetric Beta specification with both shape parameters equal to 1.01. The chosen priors can thus be considered highly non-informative.
In Table 1 we use several criteria to evaluate the performance of the alternative specifications.
For the spatial autoregressive and the slope parameters we report the well-known root mean squared error (RMSE). (A direct comparison of the results between symmetric and non-symmetric specifications does not appear reasonable, since the adjacency matrix in the data generating process is assumed symmetric.) For assessing the ability to estimate the spatial adjacency matrix, we use the measure of accuracy. The accuracy measure is defined as the number of correctly identified unknown elements, divided by the total number of elements to be estimated. This measure is calculated separately for each posterior draw. The reported value is an average over all posterior draws and Monte Carlo iterations. In addition, the last two columns in Table 1 show the results for the benchmark SAR models using exogenous randomly perturbed adjacency matrices with accuracy fixed at the 99% and the 95% level, respectively.
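A minimal sketch of the accuracy measure for a single posterior draw (an illustration, not the authors' code):

```python
import numpy as np

def accuracy(omega_draw: np.ndarray, omega_true: np.ndarray) -> float:
    """Share of correctly identified off-diagonal adjacency elements."""
    mask = ~np.eye(omega_true.shape[0], dtype=bool)   # exclude the diagonal
    return float(np.mean(omega_draw[mask] == omega_true[mask]))
```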
Intuitively, the precision of the estimation improves as the number of observations increases in proportion to the number of unknown parameters. The results in Table 1 largely confirm this intuition. The performance indicators for both $\rho$ and $\beta_0$ also clearly improve for high levels of spatial autocorrelation ($\tilde{\rho} = 0.8$). In scenarios where the number of unknown parameters is smaller than the number of observations, our approach even manages to outperform both rather hard benchmarks using exogenous spatial weight matrices close to the true DGP. This relative outperformance appears particularly pronounced when the strength of spatial dependence is large. In these settings, symmetric specifications (which resemble the true DGP) even manage to produce accuracy in the adjacency matrix close to unity.
Particularly interesting results appear in the most challenging Monte Carlo scenarios, where the number of unknown parameters is particularly large relative to the number of observations ($N = 100$ and $T = 10$). In these scenarios, the number of parameters to be estimated exceeds the number of observations by a factor of more than ten. In these cases, prior specifications without shrinkage appear to fail at estimating the underlying spatial structure, producing rather poor accuracy measures. However, when employing sparsity priors, the table reveals that our approach still manages to produce relatively accurate predictive results. In the presence of pronounced spatial autocorrelation, the sparsity specifications even manage to closely track the predictive performance of the rather tough exogenous benchmarks.
Note that the symmetric specifications (where we impose $\omega_{ij} = \omega_{ji}$) typically outperform their non-symmetric counterparts due to their resemblance to the true DGP. However, for settings where the number of unknown parameters is smaller than the number of observations, both setups track each other closely. Among the alternative prior specifications under scrutiny, the table shows rather similar results (no clear best specification emerges) in scenarios where $N$ is small relative to $T$. (The number of unknown parameters amounts to $N^2 + T + k_0 + 2$ and $N(N-1)/2 + N + T + k_0 + 2$ for non-symmetric and symmetric spatial weight matrices, respectively.) However, for particularly over-parametrized settings (high $N$ and low $T$) the proposed sparsity priors clearly outperform the fixed setups. Specifically, even in the scenario with $N = 100$ and $T = 10$, the sparsity priors still perform comparatively well.
Empirical illustration
To illustrate our proposed approach using real data, we estimate spatial panel specifications based on country-specific daily infection rates in the very early phase of the coronavirus pandemic. We use the COVID-19 data set provided by the Johns Hopkins University (Dong et al. 2020). Countries without any (official) infections in the starting period have been excluded from the sample. We moreover exclude India as a clear outlier due to its particularly small (official) infection rates throughout the observation period.
With a biweekly time lag, the dependent variable thus captures data from the 2nd of April to the 20th of April ($T = 19$). For better comparison, we have fixed the time period captured by the dependent variable for all alternative specifications. It is moreover worth noting that a notably earlier starting date would result in relatively few (cross-sectional) observations. However, our results are rather robust when considering a longer time horizon.
We follow recent contributions such as Han et al. (2021), among others, and use panel versions of a spatial growth specification for the country-specific COVID-19 infections:

$$\Delta y_t = \rho W \Delta y_t + \alpha + \tau_t \iota_N + X_{t-14}\beta + \varepsilon_t, \quad (14)$$

where $\Delta y_t = y_t - y_{t-14}$, and $y_t$ is an $N \times 1$ vector comprising the (logged) daily number of official cases per 100,000 inhabitants per country for time period $t = 1, \dots, T$. $\alpha$ and $\tau_t$ represent fixed effects for the countries and the time periods, respectively. $W$ denotes the spatial weight matrix with spatial autoregressive parameter $\rho$ as defined before. We again primarily focus on row-stochastic weight matrices. Results based on spatial weight matrices without row-standardization are presented in the appendix.
We also consider alternative model specifications using contemporaneous as well as temporal lags of the spatial lag ($W\Delta y_{t-h}$ with $h \in \{0, 14\}$). A plethora of recent studies exploit contemporaneous spatial information ($h = 0$) for modelling the spread of COVID-19 infections (among others, see Han et al. 2021, Jaya and Folmer 2021, Kosfeld et al. 2021, Guliyev 2020, or Krisztin et al. 2020). Using contemporaneous spatial information appears reasonable when the primary interest lies in quantifying spatial co-movements of infection rates. However, for many questions of interest, a temporal spatial lag $W\Delta y_{t-h}$ ($h > 0$) might be an interesting alternative, since it reflects the notion that the spatial process of virus transmission takes some time to manifest (Elhorst 2021, Mitze and Kosfeld 2021). Since our proposed estimation approach can be easily applied to these alternative specifications, we provide estimates for both.
In addition to the initial infections variable $y_{t-14}$, the matrix $X_{t-14}$ contains three explanatory variables on a daily basis. Several studies emphasize the importance of climatic conditions for the spread of the COVID-19 virus; for a survey on the effects of climate on the spread of the COVID-19 pandemic, see Briz-Redón and Serrano-Aroca (2020). We therefore use daily data on the country-specific maximum measured temperature (Temperature) and precipitation levels (Precipitation) as additional covariates. Both variables stem from a daily database of country-specific data, which was compiled via the Dark Sky API. As a third variable, we also include the well-known stringency index (Stringency) put forward by Hale et al. (2020), which summarizes country-specific governmental policy measures to contain the spread of the virus. In this application, we use the biweekly average of the reported stringency index. Since all these influences arguably require some time to be reflected in the official infection figures, we use a biweekly lag of 14 days (in accordance with the lag structure in the alternative variants).

The spatial growth regression in (14) may alternatively be specified in levels rather than in log-differences by setting $\Delta y_t = y_t$; results using this alternative specification are very similar and are presented in the appendix. It is moreover worth noting that in the special case of $h > 0$, computational efficiency is tremendously increased, as no log-determinant calculations are required in the MCMC algorithm. The sampling strategy for these cases is presented in the appendix.

The left part of Table 2 presents results for specifications using a contemporaneous spatial lag $W\Delta y_t$, while the right part summarizes results for the case $W\Delta y_{t-14}$. For each specification, the first rows contain the posterior means and standard deviations for the slope parameters, followed by estimates of $\rho$ and $\sigma^2$. Posterior quantities which appear significantly different from zero using a 90% posterior credible interval are depicted in bold. The table moreover presents the average posterior expected number of neighbours, which is given by the average row sum of the matrix of posterior inclusion probabilities based on $p(\omega_{ij} = 1|\mathcal{D})$.
This measure can be viewed as a measure of sparsity in the estimated matrix of linkages. All specifications moreover contain fixed effects for both countries and time periods. Table 2 shows rather similar $\rho$ and $\sigma^2$ posterior quantities for the flat and the sparsity prior.
However, there appear some marked differences between the specifications $W\Delta y_t$ and $W\Delta y_{t-14}$. In all cases, spatial dependence appears strong and precisely estimated, but it appears particularly high in the temporal lag specification $W\Delta y_{t-14}$. However, the table similarly reveals higher estimates for the nuisance parameter $\sigma^2$ for the temporal spatial lag models. The table shows rather precise and negative coefficients for the initial infections variable, indicating conditional convergence patterns. For most model variants the table moreover suggests a significant negative impact of the stringency index on infection growth. The majority of the slope parameter estimates associated with the variables Temperature and Precipitation appear more muted and insignificant. Overall, the table clearly demonstrates that a hierarchical prior setup can enforce sparsity in the resulting adjacency matrix: both sparsity specifications result in an average number of neighbours smaller than the models with fixed prior specifications. (The weather data are available at https://www.kaggle.com/datasets/vishalvjoseph/weather-dataset-for-covid19-predictions. As robustness checks, we have also tried a shorter lag length of one week; the estimated spatial structures appeared very similar to the biweekly benchmarks. All these additional robustness checks, along with the R codes, are available from the authors upon request. For the benchmark specifications, the number of unknown parameters and observations amounts to 753 and 513, respectively.)

For the visualization of the posterior inclusion probabilities, we distinguish between negligible evidence for inclusion (< 0.50; white colour), moderate evidence (0.50-0.75; grey colour), and strong evidence (> 0.75; black colour).
The two upper plots in Figure 2 depict posterior inclusion probabilities $p(\omega_{ij} = 1|\mathcal{D})$ for the specifications involving a contemporaneous spatial lag $W\Delta y_t$, while the lower part shows the temporal spatial lag specifications $W\Delta y_{t-14}$. In both cases, the left subplots present results based on independent prior inclusion probabilities of $\pi_{ij} = 1/2$. The right plots are based on sparsity priors with a prior expected number of neighbours of 7. The columns in the subplots indicate the marginal posterior importance of the countries as predictors of coronavirus infections in linked countries. Conversely, rows depict the countries to be predicted. The results using sparsity priors generally produce similar patterns as the fixed prior specifications and clearly demonstrate their ability to reduce the dimension of the connectivity structure. For the contemporaneous spatial lag specification (upper plots), the figure suggests a slightly more pronounced regional dependency structure as compared to the temporal spatial lags. The figure moreover reveals marked spill-out effects from Asian countries, as well as from Iran and Italy.
Results based on a biweekly temporal spatial lag $W\Delta y_{t-14}$ show even more pronounced spill-out effects from Asian countries (most notably China, the Republic of Korea, and Singapore). For European countries, results similarly suggest Italy as a further important source country of spatial virus transmission. The estimated spatial linkages are thus in close agreement with the actual origins of the overall virus transmission in the very early period of the global outbreak of the pandemic.
To showcase convergence of the posterior MCMC chains, Figure 3 depicts trace plots for $\rho$, $\sigma^2$, and the slope parameters. Overall, the trace plots show rather good mixing and convergence properties. (The regional dependency structure appears particularly pronounced when a level specification of the infection dynamics is imposed. Sensitivity checks based on this alternative specification are presented in Figure A1 in the appendix. When comparing the results, it is important to note that for all specifications under scrutiny, we have fixed the time period in the dependent variable ($t$ ranges from the 2nd of February to the 20th of February; i.e. $T = 19$). The biweekly temporal spatial lag specification thus inherently comprises spatial information prior to the period captured in $\Delta y_t$.) Convergence of the chains has moreover been checked using the diagnostics proposed by Geweke (1992) as implemented in the R package coda (Plummer et al. 2006). Results moreover appear rather robust with respect to alternative modelling frameworks; estimation results of these alternative specifications are presented in the appendix. Notes to Figure 3: Posterior draws based on 5,000 MCMC draws, where the first 2,500 were discarded as burn-ins.
Estimates using a smaller time lag of seven days also appear very similar. Results, along with the R codes used, are available from the authors upon request.
Concluding remarks
In this paper we propose a Bayesian approach for estimation of weight matrices in spatial econometric models. A particular advantage of our approach is the simple integration into a standard Bayesian MCMC algorithm. The proposed framework can therefore be adapted and extended in a simple and computationally efficient way to cover a large number of alternative spatial specifications prevalent in recent literature. Our approach may thus be easily extended to cover inter alia non-Gaussian models such as spatial probit (LeSage et al., 2011) or logit specifications (Krisztin and Piribauer, 2021), local spillover models (Vega and Elhorst, 2015), or spatial error models (LeSage and Pace, 2009).
Our approach does not necessarily rely on specific prior information for the spatial linkages.
Spatial information, however, can be easily implemented in a flexible and transparent way. We moreover motivate the use of hierarchical priors which impose sparsity in the resulting spatial weight matrix. These sparsity priors are particularly useful in applications where the number of unknown parameters exceeds that of the observations. The virtues of our approach come at the price that we focus on spatial neighbourhood structures which are binary (prior to row-standardization). However, this assumption is implicit in many spatial applications in the regional economic literature, where spatial weight matrices are constructed based on concepts of contiguity, distance band, or nearest neighbours.
Based on Monte Carlo simulations, we show that our approach appears particularly promising when the number of spatial observations $N$ is large relative to the time dimension $T$, which is a rather common characteristic of data sets in the regional science literature. We moreover demonstrate the usefulness of our approach using real data on the outbreak of the COVID-19 pandemic.
Estimation strategies for alternative spatial lag specifications
In the empirical application, the paper also considers model variants with a spatial lag on the temporal lag of the dependent variable. The considered specification can be written as:

$$y_t = \rho W y_{t-1} + \alpha + \tau_t \iota_N + X_t \beta + \varepsilon_t, \quad (15)$$

where $y_{t-1}$ now denotes the temporal lag of the dependent variable and the other quantities are defined as before. From a Bayesian perspective, it is worth noting that an additional temporal lag of the dependent variable $y_{t-1}$ can be treated like any other explanatory variable and could thus be made part of the matrix of covariates $Z$.
From a computational perspective, the specification in Eq. (15) is much easier to deal with as compared to SAR models involving a contemporaneous spatial lag in the dependent variable (i.e. $Wy_t$). This is due to the fact that the likelihood function does not involve a determinant term.
To maintain succinct notation, we again collect the fixed effects along with the explanatory variables in an $NT \times k$ matrix $Z$ and stack the quantities as before, $y = (y_1', \dots, y_T')'$, with $y$ and $y_{-1}$ denoting the stacked $NT \times 1$ vectors of the dependent variable and its lag, respectively.
Defining $\varepsilon = y - \rho\,(I_T \otimes W)\,y_{-1} - Z\beta$, the likelihood reduces to a much simpler form and is given by:

$$p(\mathcal{D} \,|\, \beta, \rho, \sigma^2, \Omega) = (2\pi\sigma^2)^{-NT/2} \exp\!\left(-\frac{1}{2\sigma^2}\,\varepsilon'\varepsilon\right).$$

By using the same prior specifications as in the SAR case, the posterior probabilities of including or excluding $\omega_{ij}$ conditional on the other parameters are then given by:

$$p(\omega_{ij} = 1 \,|\, \cdot) \propto \exp\!\left(-\frac{\varepsilon_1'\varepsilon_1}{2\sigma^2}\right) p(\omega_{ij} = 1), \qquad p(\omega_{ij} = 0 \,|\, \cdot) \propto \exp\!\left(-\frac{\varepsilon_0'\varepsilon_0}{2\sigma^2}\right) p(\omega_{ij} = 0),$$

where $\varepsilon_1$ and $\varepsilon_0$ denote the updated vectors of residuals when $\omega_{ij} = 1$ and $\omega_{ij} = 0$, respectively.
The conditional Bernoulli posterior for $\omega_{ij}$ then takes the same form as in Eq. (6). The remaining conditional posterior distributions required for the MCMC sampler are as in the SAR case, except that no determinant term appears. Unlike for the other parameters, the conditional posterior for $\rho$ again takes no well-known form and can be sampled using a griddy-Gibbs or a tuned Metropolis-Hastings step. When using a normal prior distribution for $\rho$, it is worth noting that the spatial lag $Wy_{-1}$ can simply be captured in the matrix of explanatory variables, such that the parameter $\rho$ is incorporated in the vector $\beta$. However, in order to pay particular attention to model stability as well as prior consistency with the benchmark SAR specification in the main body of the paper, we similarly employ a beta prior for $\rho$, which results in the non-standard form of the conditional posterior for $\rho$.
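A sketch of the resulting determinant-free update for a single $\omega_{ij}$ (illustrative; `resid_fn` is a hypothetical helper returning the stacked residual vector for a candidate adjacency matrix):

```python
import numpy as np

def draw_omega_ij_no_det(omega, i, j, resid_fn, sigma2, prior_incl, rng):
    """Bernoulli Gibbs draw of omega_ij in the temporal-lag specification.

    resid_fn(omega) is assumed to return the NT x 1 residual vector
    epsilon = y - rho (I_T kron W) y_{-1} - Z beta for a candidate omega."""
    logp = {}
    for value in (0, 1):
        omega[i, j] = value
        e = resid_fn(omega)
        logp[value] = (-0.5 * (e @ e) / sigma2
                       + np.log(prior_incl if value == 1 else 1.0 - prior_incl))
    p1 = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))   # no determinant required
    omega[i, j] = int(rng.random() < p1)
    return omega
```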
When considering specifications with a spatial lag in the explanatory variables (typically referred to as SLX models), the MCMC sampling scheme is rather similar; the absence of a determinant term likewise considerably reduces the computational burden as compared to SAR frameworks.
Empirical results for additional model specifications and Monte Carlo diagnostics
This section provides results based on alternative model specifications. We provide estimates and inference for three different specifications. First, we consider a specification where the dependent variable is based on the log levels of infection rates rather than (biweekly) log-differences (all else being equal). These results (labelled level specification) are presented in Table A1 and Figure A1; overall, they appear very similar to the benchmark specifications. Second, we also consider specifications without row-standardization of the spatial weight matrix. A summary of the estimation results along with the posterior results for the spatial weight matrix is provided in Table A2 and Figure A2, respectively. Third, to show the merits of our approach in highly over-parametrized environments, we also present a robustness check with only $T = 10$. Results are presented in Table A3 and Figure A3. We have moreover tried various other robustness checks, including versions using a shorter time lag of only seven days or even shorter time periods, which produce similar results. Notes to Table A1: Posterior quantities based on 5,000 MCMC draws, where the first 2,500 were discarded as burn-ins. Values in bold denote significance under a 90% credible interval. Level specifications refer to specifications using (all else being equal) log levels of infection rates rather than log-differences as the dependent variable.
Note that the interpretation of the initial infections variable in the level specifications differs from the benchmark case using log-differences as the dependent variable. Specifically, in the former, parameters smaller than unity (as compared to negative parameters in the benchmark specifications) point towards convergence.
From an econometric point of view, estimation is the same as for the row-stochastic counterparts, without conducting the standardization in the MCMC sampler. However, in this case several caveats arise. Most notably, row-standardization of $W$ has the great advantage that the parameter space for the spatial autoregressive parameter is clearly defined, such that the inverse $(I_N - \rho W)^{-1}$ exists. To ensure stationarity of the MCMC sampler in the case of no row-standardization, we have therefore implemented a rejection step which discards draws resulting in singular solutions.
Specifically, in these specifications we reduce the end date of the dependent variable accordingly. Notes to Figure A2: Posterior inclusion probabilities of spatial links based on 5,000 MCMC draws. Inclusion probabilities of 0.50-0.75 (moderate evidence for inclusion) are coloured grey; strong evidence for inclusion (> 0.75) is indicated by black colour. Level specifications refer to specifications using (all else being equal) log levels of infection rates rather than log-differences as the dependent variable. Notes to Figure A3: Trace plots and posterior densities based on 1,000 MCMC draws, where the first 500 were discarded as burn-ins. Dashed lines denote prior distributions.
"Economics",
"Mathematics"
] |
Kahoot! as a Gamification Tool in Vocational Education: More Positive Attitude, Motivation and Less Anxiety in EFL
This study aims to reveal the effect of the gamified response system Kahoot! on attitude towards the EFL course, motivation, and exam anxiety. For this purpose, an embedded mixed design, in which quantitative and qualitative methods are used together, was preferred. The study group consists of 88 ninth-grade vocational high school students. Before the experimental process, attitude, motivation, and exam anxiety scales were applied to the experimental and control groups as pre-tests. At the end of the experiment, the same scales were administered to both groups as post-tests. In addition, the views of randomly selected students from the experimental group on the Kahoot! application were examined. The results revealed that Kahoot! significantly increased attitude towards the EFL course with a 0.22 effect size. Kahoot! also increased EFL learning motivation and decreased exam anxiety, but these changes were not significant. Finally, it was revealed that the students found Kahoot! fun.
Introduction
The rapid change in instructional technologies enables the emergence of new teaching techniques in language teaching. In particular, the development of Web-based systems and the reduction of internet access barriers to a minimum level have facilitated the implementation of these techniques. Since the 1980s, much research has been conducted on using technology in foreign language teaching (Thorne, Black, & Sykes, 2009). Figueroa Flores (2015) argues that foreign language teachers need new strategies to increase students' motivation and interest in foreign language teaching. Many researchers like Flores now support the use of technology in education to make learning meaningful. Chun, Kern, and Smith (2016) state that technology increases creativity and intellectual capacity. Prensky (2001), who holds a progressive view on the meaningful use of technology in education, states that children speak the digital language of the internet, computers, and video games, notes that they spend most of their time with games, and argues that their learning should be supported accordingly. Kessler (2018) also suggests that technology should be included in education for effective learning, because it has become difficult to motivate students with traditional methods (Premarathne, 2017).
Digital game-based learning (DGBL), one of the increasingly popular teaching methods in recent years, is beneficial in education in many respects. DGBL motivates students in the teaching process and is thought to increase engagement and interest. DGBL is an instructional approach in which computer use is supported for educational purposes (Mavridis & Tsiatsos, 2017). Many software designers and commercial companies hope to make academic learning more fun by developing games for learning (Kafai, 2006), because game-based learning includes the discipline of the game and aims to involve the student in the learning process and increase learning performance (Graham, 2015). According to the report of the world-famous New Media Consortium (NMC), which reports on the development of technology every year, educational games develop critical thinking, problem-solving, and group work, as well as the ability to resolve complex social and environmental contrasts (Johnson, Adams Becker, Estrada, & Freeman, 2014). Gee (2003) suggests that motivation can be provided through games and that its continuity can be learned.
Game-based learning performs three tasks: encouraging learning, developing knowledge, and increasing skills (McFarlane, Sparrowhawk, & Heald, 2002). The digital games used in education show their effects in all areas of education. Indeed, Kebritchi, Hirumi, and Bai (2010) argued that digital games increase mathematics achievement; Zin, Jaafar, and Seng Yue (2009) suggest that history courses, which students may find boring, can become more fun with digital games. Papastergiou (2009) suggested that digital games positively affect physics and health education in terms of engagement. Many researchers claim that digital-based games, which are especially promising in foreign language education, have an important effect. Thomas (2012) argues that language learning through digital-based games provides success-oriented learning. Therefore, the use of digital games in the classroom can support cognitive development. It has been suggested that a carefully chosen game allows people to measure their status in terms of cognitive and behavioural development and motivation in the process of learning a foreign language (Cornillie, Thorne, & Desmet, 2012). Hubbard (2009) argues that, in modern foreign language teaching methods, digital games provide new information and skills to students and reinforce this information even without teachers. It has also been suggested that the extreme competition Kahoot! can sometimes cause may lead to negative consequences (Licorish, Owen, Daniel, & George, 2018). Bolat, Şimşek, and Ülker (2017), in their study with university students, concluded that formative assessment activities carried out with Kahoot! contributed to affective and cognitive skills. Orhan-Göksün and Gürsoy (2019) found Kahoot! to be effective in terms of academic success and engagement.
When the literature is examined, it is seen that there are many studies examining the effect of Kahoot!. In a recent review, studies examining Kahoot! were investigated, and it was concluded that Kahoot! mostly increased motivation and engagement (Wang & Tahir, 2020). Another result of that review was the effectiveness of Kahoot! in learning a foreign language. Kahoot! also offers an effective strategy in terms of providing a fun environment in English education (Putri, 2019; Yürük, 2020), improves Arabic language grammar knowledge and motivation (Eltahir, Alsalhi, Al-Qatawneh, AlQudah, & Jaradat, 2021), increases engagement (Budiati, 2017), and especially helps students in concept teaching (Medina & Hurtado, 2017; Plump & LaRosa, 2017). It benefits educators as well as students (Zengin, Bars, & Şimşek, 2017). Kahoot!, used as a measurement tool by teachers, can positively affect students' exam anxiety by gamifying the evaluation process in education and moving it to a digital platform. According to one study, including gamification in the evaluation process allowed students to reduce their exam anxiety (Isbister, Karlesky, Frye, & Rao, 2012). In this study, the effect of Kahoot! on motivation, attitude, and exam anxiety in English as a foreign language (EFL) teaching was investigated in a sample of vocational high school students. In this context, the research questions address the effect of Kahoot! on students' EFL learning motivation, exam anxiety, and attitudes towards the EFL course.
Research Design
This study aims to examine the effect of Kahoot!, a gamified online response system, on students' exam anxiety, motivation, and attitudes. For this purpose, an embedded mixed design, in which quantitative and qualitative methods were used together, was employed. In the embedded mixed method, one of the quantitative or qualitative approaches is dominant, and the research is correspondingly largely quantitative or qualitative (Yıldırım & Şimşek, 2013).
This research method can be used in cases where the quantitative data collected to answer the research questions are to be enriched with qualitative data (Creswell, 2013). In this study, quantitative data obtained with a quasi-experimental design were interpreted by supporting them with student views. The research process is presented in Figure 1. As shown in Figure 1, the Motivation Scale in English Language Learning, the Attitude Scale Towards the English Course, and the Exam Anxiety Scale were applied as pre-tests. Then, a 10-week application period was carried out. After the experiment, all three scales were applied as post-tests. Following the experimental process, the views of four randomly selected students from the experimental group regarding the activities performed with the Kahoot! application were collected.
Study Group
The study group of the research consists of ninth-grade (the first grade of high school) students in four different classes studying at a vocational high school in Mardin in 2019 (the second semester). We used the convenience sampling method when specifying the study group. The classes are similar in terms of academic characteristics, and their English course teacher is the same person (the first author). Two of the classes were randomly allocated as the experimental group (N = 44) and two as the control group (N = 44). All of the participants were male because the school was a boys' high school.
Experimental Process
Both the experimental and control groups took English courses from the same teacher for five hours a week for ten weeks. The experimental process was carried out by the first author of the study, a teacher in the relevant high school. To summarize the concepts learned in the course and reinforce the learning during the week, the students in the experimental group completed gamified online quizzes using the Kahoot! application in the fifth hour.
The quizzes prepared by the teacher were projected on the smartboard, and students accessed each quiz with its quiz password. Each quiz consisted of 20 questions in total.
Throughout the experimental process, the Ministry of National Education ninth-grade second-semester English course curriculum was followed. Within the scope of that curriculum, the topics "6. Bridging culture", "7. World heritage", "8. Emergency and health problems", "9. Invitations and celebrations", and "10. Television and social media" were covered. Two quizzes were applied for each subject, and a total of 10 quiz activities were carried out over ten weeks. Before these activities were implemented, it was announced that the students finishing in the top five would gain five points towards their oral grades, in order to ensure active engagement. In Figure 2, a one-week course process carried out in the experimental group is presented.
Data Collection Tools
To collect the quantitative data of the research, the Attitude Scale Towards the English Course (Takkaç-Tulgar, 2018), the Motivation Scale in English Language Learning (Mehdiyev et al., 2017), and the Revised Exam Anxiety Scale, developed by Benson and El-Zahhar (1994) and adapted to Turkish by Akın, Demirci, and Arslan (2012), were used. The Motivation Scale in English Language Learning consists of self-confidence, attitude, and personal use dimensions and 16 items. The scale was developed as a five-point Likert type. Answers to the items were scored as (1) "Never agree", (2) "Partially agree", (3) "Moderately agree", (4) "Mostly agree", and (5) "Totally agree". There are four reverse items in the self-confidence dimension and two in the attitude dimension; therefore, a total of six items were reverse coded. The validity and reliability analysis of the scale was carried out, and the internal consistency reliability was found to be α = .83. The Revised Exam Anxiety Scale consists of four dimensions (tension, physical symptoms, anxiety, and thoughts unrelated to the exams) and 20 items. The scale is scored at four levels: (1) "Never", (2) "Sometimes", (3) "Mostly", and (4) "Always". Scale items were formulated as negative expressions; therefore, higher scale scores were interpreted as higher exam anxiety, and lower scores as lower exam anxiety. The validity and reliability analysis of the scale was carried out, and the internal consistency reliability was found to be α = .88. The Attitude Scale Towards the English Course consists of four dimensions (interest, importance, contribution, and information-entertainment) and 26 items. The scale is a five-point Likert type, and the scores range from "1: I never agree" to "5: I totally agree". All items are positively expressed on the scale. The validity and reliability analysis of the scale was carried out, and the internal consistency reliability was found to be α = .96.
Data Analysis
Two-factor analysis of variance is a method that analyzes pre-tests and post-tests simultaneously with group variables. In this study, two-factor analysis of variance for mixed measurements (mixed ANOVA) was used, since the scores of the experimental and control groups were interpreted according to the pre-test and post-test scores (Can, 2014). The assumptions of the two-way analysis of variance were tested, and the data were found to meet these assumptions. The normal distribution of the data was examined with kurtosis-skewness values, the Kolmogorov-Smirnov test, and histogram graphics. Kolmogorov-Smirnov test results are presented in Table 1.
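For readers who wish to reproduce this kind of analysis, the snippet below sketches a two-factor mixed ANOVA on synthetic data using the Python package pingouin. This is only an illustration with made-up numbers and hypothetical column names, not the authors' analysis, data, or software.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 88                                   # 44 experimental + 44 control students
df = pd.DataFrame({
    "student": np.repeat(np.arange(n), 2),
    "group": np.repeat(["experimental"] * 44 + ["control"] * 44, 2),
    "time": ["pretest", "posttest"] * n,
    "score": rng.normal(3.5, 0.5, size=2 * n),   # synthetic scale scores
})

# Mixed ANOVA: 'time' is the within-subject factor, 'group' the between factor
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="student", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])      # np2 = partial eta squared
```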
When Table 1 is examined, it is seen that the pre-test and post-test data of the experimental and control groups showed normal distribution. The content analysis method was used in the analysis of the qualitative data of the research. Content analysis is an inductive analysis method based on coding, revealing the facts underlying the data (Patton, 1990). In content analysis, the data are coded and organized within the framework of certain themes and concepts and interpreted in a way that the reader can understand (Yıldırım & Şimşek, 2013). Draw.io software was used for the visualization of qualitative findings.
The qualitative data were analyzed by the researchers and interpreted with direct citations.
The intercoder reliability method was used to ensure the reliability of the content analysis. In addition, to increase the validity and reliability of the research, the study was spread over ten weeks in total, so that the novelty effect was prevented.
The Effect of Kahoot! on Students' EFL Learning Motivation
Two-way analysis of variance was used to analyze the pre-tests and post-tests of the experimental and control groups in order to examine the effects of using classroom assessment activities with Kahoot! in the ninth-grade English course on students' EFL learning motivation. The descriptive statistics obtained from the analysis are presented in Table 2, and the significance results are presented in Table 3. In addition, the change in the pre-test and post-test scores of the groups is given in Figure 3. According to these results, similar changes occurred in both groups. To show whether there is a significant difference between the two groups, the two-way analysis of variance results are given in Table 3. As seen in Table 3, there is no significant difference between the experimental and control groups (F = 1.51, p = .22) at the significance level of p = .05. The fact that the partial effect size (η² = 0.02) of the group variable is very close to zero confirms this. In Figure 3, the graphic of the change in both groups' pre-test and post-test motivation scores is presented. As seen in Figure 3, the experimental group's motivation increased more than the control group's; however, this change in motivation score means was not statistically significant. Based on this result, it can be claimed that the activities carried out with Kahoot! have no significant effect on students' EFL learning motivation.
The Effect of Kahoot! on Students' Exam Anxiety
Two-way analysis of variance was used to analyze the pre-tests and post-tests of the experimental and control groups together in order to examine the effects of using classroom assessment activities with Kahoot! in the ninth-grade EFL course on students' exam anxiety.
Descriptive statistics and significance results obtained from the analyses are given in Table 4 and Table 5. In addition, the distribution of pre-test and post-test scores is presented in Figure 4. The two-way analysis of variance results are given in Table 5 to show whether there is a significant difference between the two groups. As seen in Table 5, there is no significant difference between the experimental and control groups (F = 0.68, p = 0.41) at the significance level of p = .05. The fact that the partial effect size (η² = 0.01) of the group variable is also very close to zero confirms this. In support of this result, the exam anxiety change in both groups' pre-tests and post-tests is shown in Figure 4.
The Effect of Kahoot! on Students' Attitudes Towards EFL Course
Two-way analysis of variance was used to analyze the pre-test and post-tests of the experimental and control groups to examine the effects of using classroom assessment activities with Kahoot! in the ninth grade EFL course on students' attitudes towards the EFL course. Descriptive statistics and significance results obtained from analyzes are given in Table 6 and Table 7. In addition, the distribution of pre-test and post-test scores is presented in Figure 5. increased, but the control group decreased slightly. To show whether this situation has a significant effect between the two groups, the results of two-way analysis of variance are given in Table 7. As seen in Table 7, a significant difference was found between the experimental and control groups (F = 9.61, p = 0.003) at the significance level of p = .05. The partial effect size of the group variable was found to be η 2 = 0.10. When the impact of Kahoot! on attitude was calculated with the pre-test and post-test measures, the effect size was 0.22. This finding shows that the group variable causes 22% of the variance in students' attitudes. In support of this finding, Figure 5 shows the distribution of the pre-test and post-test measurements of both groups' attitudes toward the EFL course. As seen in Figure 5, while the attitude of the experimental group students towards the course increased, the control group decreased slightly. Based on this finding, it can be
Students' Views about Kahoot! Application
The views of the students about the Kahoot! application used in the 10-week EFL learning process were consulted, and the codes obtained are presented in Figure 6.
Accordingly, the following codes were obtained: fun, providing effective learning, offering equal opportunities, remembering in the exam, usefulness, providing permanent learning, and creating a competitive environment. When the codes in Figure 6 are examined, it can be said that students are satisfied with the use of Kahoot! in EFL courses and that their views are positive. In these views, the students emphasized the Kahoot! app's features of permanence, effective learning, and remembering at the time of the exam. In addition, motivational elements such as being fun and creating a competitive environment in the classroom were emphasized. It can be interpreted that this situation can support students' motivation towards EFL courses. Kahoot!'s being useful and offering equal opportunities among students may also explain why the students favoured using this application more in EFL courses. The direct expressions of the students regarding these codes are presented in the following section.
The participant students expressed their views related to the code of Kahoot!'s effective learning as follows: F. Y.: "It is an application with which we can raise our level of English both by having fun and learning.", Y. Ü.: "I have positive, fun, and instructive thoughts about the Kahoot! application", and O. E.: "It is a very educational and guiding application". Related to the code of permanence, Ş. A. expressed his view as: "Because sometimes it was permanent in our minds when a word was repeated more than once. But if it was shown only once, we could only remember these words instantly. We started to keep the words better in our minds". Related to the code of giving an advantage in the exam, Ş. A. stated: "We could find these words in the exam questions. So, we were a bit more advantageous". Related to the code of fun, the students emphasized this aspect of Kahoot! as follows; F. Y.: "It is an application with which we can raise our level of English by having fun and learning", O. E.: "I don't think anyone will get bored. There is nothing as fun as learning through play". Related to learning opportunities, Y. stated: "We have observed that even the unsuccessful students can learn something. Because it can teach even those who don't know well. There were a lot of students in our class who didn't know English, but even they learned". With these statements, the students emphasized that the application enabled learning opportunities for relatively unsuccessful students.
Discussion and Conclusion
In this study, we aimed to examine the effect of the online response system Kahoot!, as applied in a ninth-grade EFL course at a vocational high school, on students' motivation, attitudes, and exam anxiety towards the course, as well as students' views on the activities they experienced. The research data were obtained from a study group of 88 students at a vocational high school in Mardin. The results of the research are limited to the answers given by the students to the scales and interview questions and to the activities carried out within the scope of five topics of the 2019 second-semester ninth-grade textbook (6. Bridging cultures; 7. World heritage; 8. Emergency and health problems; 9. Invitations and celebrations; 10. Television and social media). The Kahoot! application was used to summarize and reinforce the week's learning in the last class hour of each week.
The results of the research should be evaluated within these limitations. In line with the third research question of the study, the effect of Kahoot! on attitudes towards EFL courses was examined. The analysis revealed that the assessment activities carried out with Kahoot! significantly increased the attitudes of the experimental group students towards the EFL course compared to the control group. Studies investigating the effect of Kahoot! on students have revealed that Kahoot! has a positive effect on teachers' and students' attitudes and increases engagement (Wang & Tahir, 2020). Yürük (2020) found that Kahoot! positively affects students' EFL pronunciation skills. As a matter of fact, students with positive attitudes are expected to participate in the lesson. According to Md. Yunus and Nur Rashidah Khairunnisa (2011), a positive attitude and higher motivation may occur in learning English when supporting environmental factors are provided to students. In our study, the fact that the teacher gave additional points to the top five students in the 10-week experimental process may have increased the students' attitudes. Gamification-based teaching practices positively affect students' attitudes (Yildirim, 2017). In addition, the proven effectiveness of Kahoot! in English courses (Wang & Tahir, 2020) supports these results.
The quantitative results of the study showed that Kahoot! increased both the attitude towards the EFL course and EFL learning motivation, and reduced exam anxiety.
However, there was a significant difference only in terms of attitude towards the EFL course.
Students' views also support these results. Indeed, students especially pointed out the fun aspects of Kahoot!. These findings parallel results in the literature: Kahoot! offers an effective strategy for providing a fun environment in English teaching (Putri, 2019; Yürük, 2020) and increases engagement (Budiati, 2017). In our study, it was also understood that students thought Kahoot! contributed to permanent learning, provided an advantage in exams, and created a competitive environment. Views that Kahoot! has a user-friendly interface were also put forward. It can be claimed that Kahoot! is motivating because it provides a competitive environment (Wang, 2015).
However, it should not be forgotten that Kahoot!'s intense competition may sometimes lead to negative consequences (Licorish et al., 2018). Therefore, it may be beneficial for teachers to consider this while using Kahoot!, and to use it only in a formative manner so that competition does not cause anxiety. All these results indicate that using Kahoot! in EFL courses in vocational high schools will increase interest in the course. In addition, these results are expected to be reflected in academic achievement.
Eltahir et al. (2021) used Kahoot! as a game-based online assessment tool and determined that Kahoot! improved students' Arabic grammar knowledge and motivation.
This matters because students' high motivation and positive attitudes may affect academic achievement (Good & Brophy, 2000). Accordingly, Bury (2017), who researched online assessment tools, revealed that these tools improve students' grammar knowledge.
As a result, this experimental study attempted to reveal the effect of Kahoot! on EFL learning in vocational high schools. Within the framework of the EFL course in which the study was conducted, it was shown that enriching the courses with Kahoot! activities significantly improves attitudes towards the EFL course. Building on this result, Kahoot! may positively affect not only attitudes towards EFL courses but also attitudes towards the school. Thus, it can be claimed that the attitudes of vocational high school students, who in general hold a negative attitude towards the school (Atalay-Mazlum & Balcı, 2018), can be changed positively with gamified digital-based activities. Finally, it is a limitation that the study participants were only males, which may have influenced our results. Therefore, similar new research can be conducted with a mixed sample to provide more conclusive evidence.
"Education",
"Computer Science"
] |
Characterizing the WASP-4 system with TESS and radial velocity data: Constraints on the cause of the hot Jupiter's changing orbit and evidence of an outer planet
Orbital dynamics provide valuable insights into the evolution and diversity of exoplanetary systems. Currently, only one hot Jupiter, WASP-12b, is confirmed to have a decaying orbit. Another, WASP-4b, exhibits hints of a changing orbital period that could be caused by orbital decay, apsidal precession, or the acceleration of the system towards the Earth. We have analyzed all data sectors from NASA's Transiting Exoplanet Survey Satellite together with all radial velocity (RV) and transit data in the literature to characterize WASP-4b's orbit. Our analysis shows that the full RV data set is consistent with no acceleration towards the Earth. Instead, we find evidence of a possible additional planet in the WASP-4 system, with an orbital period of ~7000 days and an $M_c\sin(i)$ of $5.47^{+0.44}_{-0.43}\,M_{Jup}$. Additionally, we find that the transit timing variations of all of the WASP-4b transits cannot be explained by the second planet but can be explained by either a decaying orbit or apsidal precession, with a slight preference for orbital decay. Assuming the decay model is correct, we find an updated period of 1.338231587$\pm$0.000000022 days, a decay rate of $-7.33\pm0.71$ msec/year, and an orbital decay timescale of $15.77\pm1.57$ Myr. If the observed decay results from tidal dissipation, we derive a modified tidal quality factor of $Q'_{*} = (5.1\pm0.9)\times10^4$, which is an order of magnitude lower than values derived for other hot Jupiter systems. However, more observations are needed to determine conclusively the cause of WASP-4b's changing orbit and confirm the existence of an outer companion.
INTRODUCTION
One of the most powerful tools used to study exoplanets is observing them while they transit across the disks of their stars. The transit method can be used to search for temporal variations in the planetary orbital parameters and allows for a direct measurement of the planetary radius and thus probes its atmosphere (e.g., Charbonneau et al. 2000, 2007). Specifically, light curves of transiting planets can be used to search for transit timing variations (TTVs; Agol et al. 2005; Agol & Fabrycky 2018), transit duration variations (Agol & Fabrycky 2018), and impact parameter variations (Herman et al. 2018; Szabó et al. 2020). The presence of TTVs may indicate additional bodies in the system, a decaying planetary orbit, precession of the orbit, or a variety of other effects (e.g., Applegate 1992; Dobrovolskis & Borucki 1996; Miralda-Escudé 2002; Schneider 2003; Agol et al. 2005; Mazeh et al. 2013; Agol & Fabrycky 2018).
It is theorized that orbital decay might occur for short-period massive planets orbiting stars with surface convective zones due to the exchange of energy with their host stars through tidal interactions (e.g., Rasio et al. 1996; Lin et al. 1996; Chambers 2009; Lai 2012; Penev et al. 2014; Barker 2020). Measurements of orbital decay would expand our understanding of the hot Jupiter population and its evolution (e.g., Jackson et al. 2008; Hamer & Schlaufman 2019). Even though such decay may occur over millions of years, it is possible to search for small changes in the orbital period of hot Jupiter systems since many of these planets have been monitored for decades.
Currently, WASP-12b is one of the few hot Jupiters confirmed to have a varying period. It is an ultra-hot planet around a G0 star with an orbital period of 1.09 days (Hebb et al. 2009). WASP-12b is believed to have an escaping atmosphere (e.g., Lai et al. 2010; Bisikalo et al. 2013; Turner et al. 2016a), as suggested by Hubble Space Telescope near-ultraviolet observations (Fossati et al. 2010; Haswell et al. 2012; Nichols et al. 2015). Maciejewski et al. (2016) were the first to detect its decreasing orbital period, and subsequent studies have confirmed the period change (Patra et al. 2017; Maciejewski et al. 2018; Bailey & Goodman 2019; Baluev et al. 2019) and established orbital decay as its cause (Yee et al. 2020). The decaying orbit of WASP-12b was also confirmed using transit and occultation observations with NASA's Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015) (Turner et al. 2021; see also Owens et al. 2021). The decay rate of WASP-12b was found to be 32.53±1.62 msec yr⁻¹, corresponding to an orbital decay timescale of τ = P/|Ṗ| = 2.90 ± 0.14 Myr (Turner et al. 2021), shorter than the estimated mass-loss timescale of 300 Myr (Lai et al. 2010; Jackson et al. 2017). Assuming the observed decay results from tidal dissipation, Turner et al. (2021) derived a modified tidal quality factor, a dimensionless quantity that describes the efficiency of tidal dissipation, of Q'_* = (1.39±0.15)×10⁵, which falls at the lower end of values derived for binary star systems (Meibom et al. 2015) and hot Jupiters (Jackson et al. 2008; Husnoo et al. 2012; Barker 2020).
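As a quick sanity check on the timescale arithmetic above, the decay timescale follows directly from τ = P/|Ṗ|; the short Python snippet below reproduces the WASP-12b value from the numbers quoted in the text (an illustration, not code from the cited studies).

```python
# Reproduce tau = P / |Pdot| for WASP-12b using only values quoted above:
# P = 1.09 days and Pdot = 32.53 ms/yr.
SECONDS_PER_DAY = 86_400.0

P = 1.09 * SECONDS_PER_DAY   # orbital period in seconds
Pdot = 32.53e-3              # period change in seconds per year

tau_myr = (P / Pdot) / 1e6   # timescale in Myr
print(f"tau = {tau_myr:.2f} Myr")  # ~2.90 Myr, matching Turner et al. (2021)
```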
Besides WASP-12b, WASP-4b is a well-studied system with hints of a changing period. WASP-4b is a typical hot Jupiter with a short orbital period of 1.34 days that orbits a G7V star. The planet's atmosphere (e.g., Cáceres et al. 2011; Ranjan et al. 2014; Bixel et al. 2019) and orbital parameters (e.g., Southworth 2009; Winn et al. 2009a; Hoyer et al. 2013; Baluev et al. 2020) have been studied extensively since its discovery in 2008. Bouma et al. (2019) first reported an orbital period variation of WASP-4b using TESS and ground-based observations. Southworth et al. (2019) confirmed that the period was decaying with additional ground-based observations and found a decay rate of 9.2 msec yr⁻¹. They found that orbital decay and apsidal precession could explain the TTVs after ruling out instrumental issues, stellar activity, the Applegate mechanism, and the light-time effect. Bouma et al. (2020) obtained additional radial-velocity (RV) data on WASP-4b using HIRES on Keck and found that the observed orbital period variation could be explained by the system accelerating toward the Sun at a rate of −0.0422 m s⁻¹ day⁻¹. Recently, Baluev et al. (2020) analyzed a comprehensive set of 129 transits and additional RV data from 2007–2014 (presented in Baluev et al. 2019; mostly in years not covered by the RV data presented in Bouma et al. 2020). They also confirmed a period change in WASP-4b's orbit but did not confirm the RV acceleration found by Bouma et al. (2020). However, Baluev et al. (2020) did not include the new HIRES RV data from Bouma et al. (2020) in their analysis. Therefore, the cause of the period variation in WASP-4b is still an open question.
Motivated by the possible changing period of WASP-4b, we analyze all three sectors (Figure 1) of TESS data and combine the results with all transit, occultation, and RV measurements from the literature to verify its changing period and derive updated planetary properties. TESS is well suited for our study because it provides high-precision time-series data, ideal for searching for TTVs (e.g., Hadden et al. 2019; Pearson 2019; von Essen et al. 2020; Ridden-Harper et al. 2020; Turner et al. 2021). Altogether, the transit data span 13 years from 2007–2020 and the RV data span 12 years from 2007–2019. Using the combined data set, we hope to shed light on the cause of WASP-4b's period variation.
OBSERVATIONS AND DATA REDUCTION
TESS observed WASP-4b in Sector 2 (2018-Aug-22 to 2018-Sep-20), Sector 28 (2020-Jul-30 to 2020-Aug-26), and Sector 29 (beginning 2020-Aug-26). The TESS observations were processed by the Science Processing Operations Center (SPOC) pipeline (Jenkins et al. 2016). The SPOC pipeline produces light curves ideal for characterizing transiting planets since they are corrected for systematics. SPOC produces Presearch Data Conditioning (PDC) and Data Validation Timeseries (DVT) light curves. The PDC light curves are corrected for instrumental systematics (pointing- or focus-related), discontinuities resulting from radiation events in the CCD detectors, outliers, and flux contamination. The DVT light curves are created by applying a running median filter to the PDC light curves to remove any long-period systematics. We use the DVT light curves (Figure 1) for our analysis because they have less scatter in their out-of-transit (OoT) baseline. As shown in Ridden-Harper et al. (2020) for the XO-6b TESS data, the DVT and PDC light curves produce similar results for the timing of the transits. For the Sector 2 data, the light curves produced by the SPOC pipeline have a known issue that overestimates their uncertainties. Therefore, we estimated the uncertainties using the scatter in the OoT baseline, as we did in our previous study of WASP-12b (Turner et al. 2021) and as recommended (Barclay, T., private communication). The Sector 28 and 29 data are unaffected by this problem, so we used the uncertainties provided by the SPOC pipeline.
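The OoT-scatter approach to the Sector 2 uncertainties can be illustrated with a few lines of Python. This is a minimal sketch under stated assumptions (hypothetical arrays and a crude box-shaped transit), not the pipeline actually used.

```python
# Estimate a per-point flux uncertainty from the out-of-transit (OoT) scatter,
# as described above for the Sector 2 light curves.
import numpy as np

def oot_uncertainty(time, flux, t_mid, duration):
    """Standard deviation of the flux outside the transit window."""
    in_transit = np.abs(time - t_mid) < 0.5 * duration
    return np.std(flux[~in_transit], ddof=1)

# usage with simulated data
rng = np.random.default_rng(0)
t = np.linspace(-0.1, 0.1, 500)               # days from mid-transit
f = 1.0 + rng.normal(0.0, 6e-4, t.size)       # flat baseline plus noise
f[np.abs(t) < 0.02] -= 0.024                  # crude box transit, depth ~2.4%
print(f"sigma = {oot_uncertainty(t, f, t_mid=0.0, duration=0.04):.1e}")
```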
Transit Modeling
All the TESS transits of WASP-4b were modeled with the EXOplanet MOdeling Package (EXOMOP; Pearson et al. 2014; Turner et al. 2016b, 2017) to find a best fit. EXOMOP creates a model transit using the analytic equations of Mandel & Agol (2002). Each TESS transit (Figure 1) was modeled with EXOMOP independently. We used 20^6 links and 20 chains for the DE-MCMC model and used the Gelman-Rubin statistic (Gelman & Rubin 1992) to ensure chain convergence (Ford 2006). The mid-transit time (T_c), scaled semi-major axis (a/R_*), planet-to-star radius ratio (R_p/R_*), and inclination (i) are set as free parameters for every transit. The linear and quadratic limb-darkening coefficients and the period (P_orb) are fixed during the analysis. The linear and quadratic limb-darkening coefficients are taken from Claret (2017) and are set to 0.382 and 0.210, respectively.
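The Gelman-Rubin convergence check mentioned above can be written compactly; the sketch below is an illustrative implementation of the statistic for a single parameter, not EXOMOP's internal code.

```python
# Gelman-Rubin potential scale reduction factor (R-hat) for one parameter;
# `chains` has shape (n_chains, n_links). Values near 1 imply convergence.
import numpy as np

def gelman_rubin(chains):
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(1)
chains = rng.normal(0.0, 1.0, size=(20, 100_000))  # 20 well-mixed chains
print(f"R-hat = {gelman_rubin(chains):.4f}")        # ~1.00 when converged
```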
The parameters derived for every TESS transit event can be found in Tables A1-A3 in Appendix A. The modeled light curves for each individual transit can be found in Figures A1-A7 in Appendix A. All parameters for each transit event are consistent within 2σ of every other transit. The Sector 2 data for WASP-4b was analyzed in Bouma et al. (2019) and our timing analysis for each individual transit is consistent within 1σ of their findings (see Figure B8 in Appendix B).
The light curve of WASP-4b was phase-folded at each derived mid-transit time and modeled with EXOMOP to find the final fitted parameters. The phase-folded light curve and model fit can be found in Figure 2. We use the light curve model results combined with literature values to calculate the planetary mass (M_p; Winn 2010), radius (R_p), density (ρ_p), equilibrium temperature (T_eq), surface gravity (log g_p; Southworth et al. 2007), and related orbital parameters (Table 1). (Table 1 note: values are taken from the best-fit 2-planet RV model (Model #3); the period of WASP-4b was taken from the orbital decay model in Table 6; all the physical parameters for WASP-4c were taken from the best-fit 2-planet RV model (Model #3).)
Occultation
We created an occultation light curve by phase-folding all the data about the secondary eclipse, using the first TESS transit as the reference transit time. As shown in Figure 3, we do not see an occultation of WASP-4b in the TESS data. We show only the PDC light curve, but this is also the case for the DVT light curve. We find a 3-σ upper limit on the occultation depth (δ_occ) of 1.34×10⁻⁵. The geometric albedo (A_g,occ) of WASP-4b can be calculated assuming no thermal contribution:

$$A_{g,occ} = \delta_{occ}\left(\frac{a}{R_p}\right)^{2}. \tag{1}$$

We find a 3-σ upper limit on A_g,occ of 0.017 using Equation (1). Our upper limit on A_g,occ is consistent with the overall trend that hot Jupiters are very dark (e.g., Kipping & Bakos 2011; Močnik et al. 2018; Kane et al. 2020). (Footnote: The Safronov number is a qualitative measure of the potential of a planet to capture or gravitationally scatter other objects in nearby orbits.)
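Equation (1) can be checked numerically. The snippet below recovers the quoted 3-σ albedo limit from the depth limit; the geometric parameters (a/R_* and R_p/R_*) are representative literature values for WASP-4b and are not quoted in this excerpt.

```python
# Numerical check of Equation (1): A_g = delta_occ * (a / R_p)**2.
delta_occ = 1.34e-5        # 3-sigma upper limit on the occultation depth (text)
a_over_Rstar = 5.45        # assumed, approximate literature value
Rp_over_Rstar = 0.154      # assumed, approximate literature value

a_over_Rp = a_over_Rstar / Rp_over_Rstar
A_g = delta_occ * a_over_Rp**2
print(f"A_g < {A_g:.3f}")  # ~0.017, matching the quoted upper limit
```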
Modeling Radial Velocities
We combined all available radial velocity (RV) data of WASP-4 in the literature in our analysis. The complete RV data set is given in Table 2; this table is available in its entirety in machine-readable form. The set includes CORALIE and HARPS data from Baluev et al. (2019), HARPS data from Pont et al. (2011), and HIRES data from Bouma et al. (2020). We did not include the RV data from Triaud et al. (2010), as these data were taken for Rossiter-McLaughlin measurements and were reduced with a nonstandard pipeline. The number of data points (N_data) for all the RV models was 74.
Priors for the orbital period and time of inferior conjunction were set to the values derived by Bouma et al. (2019). We set the eccentricity of the orbit to zero, as indicated by previous upper-limit studies (Beerer et al. 2011; Knutson et al. 2014; Bonomo et al. 2017). As done in previous studies, we first modeled the data with only one planet. The free parameters in the model were the orbital velocity semi-amplitude (K_b), the instrument zero-points, a white-noise instrument jitter for each instrument (σ, added in quadrature to its uncertainties), and the linear acceleration (γ̇). We ran models with and without γ̇ to check whether an acceleration is needed to fit the data (Models #1 and #2). The results of the 1-planet models of WASP-4b can be found in Table 4 and Figure 4. We use the Bayesian Information Criterion (BIC) to assess the preferred model. The BIC is defined as

$$\mathrm{BIC} = \chi^{2} + k\,\ln N_{pts},$$

where k is the number of free parameters in the model fit and N_pts is the number of data points. The power of the BIC is the penalty for a higher number of fitted model parameters, making it a robust way to compare different best-fit models; the preferred model is the one that produces the lowest BIC value. We find BIC values of 675.08 and 682.76 for the models without (Model #1) and with (Model #2) a fitted γ̇, respectively. For two generic models i and j, we can relate the difference in the BIC values between models, Δ(BIC_{j,i}) = BIC_j − BIC_i, to the Bayes factor B_{i,j}, the ratio of the likelihoods of models i and j, assuming a Gaussian distribution for the posteriors (e.g., Faulkenberry 2018):

$$B_{i,j} = e^{\Delta(\mathrm{BIC}_{j,i})/2}.$$

Therefore, Model #1, without a fitted γ̇, is the preferred model, with Δ(BIC_{1,2}) = BIC_1 − BIC_2 = −7.68 and a Bayes factor, B_{1,2}, of 46.5. When fitting for γ̇, we find a linear acceleration term that is positive but consistent with zero within the uncertainties (γ̇ = 0.0001^{+0.0034}_{−0.0036} m s⁻¹ day⁻¹). Our results are in conflict with the findings of Bouma et al. (2020), who find an acceleration along our line of sight at a rate of γ̇ = −0.0422^{+0.0028}_{−0.0027} m s⁻¹ day⁻¹. We can reproduce the results of Bouma et al. (2020) by modeling only their data (see Figure C13 and Table C5 in Appendix C). Based on this test, we conclude that the acceleration found by Bouma et al. (2020) was caused by modeling only part of the full RV data set. Therefore, we conclude that the changing orbital period detected using the transit data is not caused by the WASP-4 system accelerating towards Earth.
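The BIC-based model selection above is easy to reproduce. The helper below applies the Bayes-factor relation to the two 1-planet models using the BIC values quoted in the text; it is an illustration of the formula, not the RadVel output itself.

```python
# Bayes factor from the BIC difference: B_{i,j} = exp(-(BIC_i - BIC_j) / 2),
# favoring model i over model j when B > 1.
import math

def bayes_factor(bic_i, bic_j):
    return math.exp(-(bic_i - bic_j) / 2.0)

bic_model1 = 675.08  # 1-planet model without linear acceleration
bic_model2 = 682.76  # 1-planet model with linear acceleration
print(f"B_1,2 = {bayes_factor(bic_model1, bic_model2):.1f}")  # ~46.5
```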
Next, we fit the RV data with a 2-planet model because the residuals of the one-planet fit showed some sinusoidal structure (panel b in Figures 4I-II). This sinusoidal trend is not caused by stellar activity, as the S-index time series from the HIRES data shows no signs of secular or sinusoidal trends (Bouma et al. 2020). We performed several different models in which we fit for K_b, σ, γ̇, the orbital velocity semi-amplitude of the second body (K_c), and the eccentricity of the second planet (e_c). The results of Models #3 and #4 are summarized in Table 3. We find that Model #3, with K_b and K_c set as free parameters and e_b, e_c, and γ̇ fixed to zero, gives the best fit, with a BIC of 628.34. The derived orbital parameters of this model can be found in Table 4, and the two-planet RV fit can be found in Figures 4III-IV. The two-planet model (Model #3) is highly preferred over the one-planet model (Model #1), with Δ(BIC_{3,1}) = −46.74 and a Bayes factor, B_{3,1}, of 1.41×10¹⁰. For the second body, we find a period (P_c) of 7001.0^{+6.0}_{−6.6} days, a semi-major axis of 6.82^{+0.23}_{−0.25} AU, and an M_c sin(i) of 5.47^{+0.44}_{−0.43} M_Jup (Tables 1 and 4). The companion is expected to be much fainter than the planet host star because Wilson et al. (2008) found no evidence for changing spectral line bisectors in their spectroscopic observations. Becker et al. (2017) found that distant exterior companions to hot Jupiters around cool stars (T_star < 6200 K) are typically coplanar within 20–30 degrees. Therefore, we find that the mass of WASP-4c is between 5.47 and 6.50 M_Jup, assuming that its inclination is within 30 degrees of WASP-4b's inclination. However, more RV measurements are needed to verify the existence of this second planet around WASP-4.
Transit Timing Variations
For the timing analysis, we combined the TESS transit data with all the prior transit and occultation times. All the transit and occultation times used in this analysis can be found in Table 5, where we give the original reference in which the data were reported and the reference for the timing if different from the original source; this table is available in its entirety in machine-readable form online. We combine transit data as tabulated by Bouma et al. (2020) and additional transits reported by amateur observers from Baluev et al. (2020). (Figure 4 caption: the parameter values in Table 4 are the medians of the posterior distributions; the thin blue line is the best-fit model; the RV jitter terms listed in Table 4 are added in quadrature to the measurement uncertainties for all RVs. b panels: residuals to the best-fit model, with error bars reflecting both the measurement error and the fitted jitter; the larger error bars for the 1-planet fits reflect the larger jitter needed to fit the data with only one planet. c panels: RVs phase-folded to the ephemeris of WASP-4b, with the phase-folded model for WASP-4b shown as the blue line and the Keplerian orbital models for all other planets (if modeled) subtracted. d panels: RVs phase-folded to the ephemeris of the second planet, WASP-4c, with the Keplerian model for WASP-4b subtracted; red circles are the velocities binned in 0.08 units of orbital phase.) Similar to what was done in Yee et al. (2020), Patra et al. (2017), and Turner et al. (2021), we fit the timing data to three different models. The first model is the standard constant-period formalization:

$$T_{tra}(E) = T_0 + P_{orb}\,E,$$
where T_0 is the reference transit time, P_orb is the orbital period, E is the transit epoch, and T_tra(E) is the calculated transit time at epoch E.
The second model assumes that the orbital period is changing uniformly over time:

$$T_{tra}(E) = T_0 + P_{orb}\,E + \frac{1}{2}\frac{dP_{orb}}{dE}\,E^{2},$$

where dP_orb/dE is the decay rate. The third model assumes the planet is precessing uniformly (Giménez & Bastero 1995):

$$T_{tra}(E) = T_0 + P_{s}\,E - \frac{e\,P_{a}}{\pi}\cos\omega(E), \qquad \omega(E) = \omega_{0} + \frac{d\omega}{dE}\,E, \qquad P_{s} = P_{a}\left(1 - \frac{1}{2\pi}\frac{d\omega}{dE}\right),$$

where e is the (nonzero) eccentricity, ω is the argument of pericenter, P_s is the sidereal period, and P_a is the anomalistic period. For all three models, we found the best-fitting model parameters using a DE-MCMC analysis. We used 20 chains and 20^6 links in the model and again ensured chain convergence using the Gelman-Rubin statistic.
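For concreteness, the three timing models can be written as short Python functions. These follow the equations above (the precession form as in Giménez & Bastero 1995); the epoch range and zero-point in the usage example are arbitrary, and only the period and decay rate come from the text.

```python
import numpy as np

def t_constant(E, T0, P):
    """Constant-period model: transit time at epoch E."""
    return T0 + P * E

def t_decay(E, T0, P, dPdE):
    """Uniformly changing period (orbital decay when dPdE < 0)."""
    return T0 + P * E + 0.5 * dPdE * E**2

def t_precession(E, T0, Ps, e, Pa, w0, dwdE):
    """Uniform apsidal precession of the transit times."""
    w = w0 + dwdE * E                         # argument of pericenter at epoch E
    return T0 + Ps * E - (e * Pa / np.pi) * np.cos(w)

# Decay model with the best-fit values quoted in this paper:
P = 1.338231587                               # days
dPdE = -7.33e-3 * (P / 365.25) / 86400.0      # -7.33 ms/yr -> days per epoch
E = np.arange(0, 4000)
tt = t_decay(E, T0=0.0, P=P, dPdE=dPdE)       # T0 = 0 is a placeholder
```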
The results of the timing model fits can be found in Table 6. Figure 5 shows the transit and occultation timing data fit with the orbital decay and apsidal precession models; in this figure, the best-fit constant-period model has been subtracted from the timing data. (Figure 5 caption: WASP-4b transit (panels a, b, and c) and occultation (panel d) timing variations after subtracting a constant-period model from the data; the filled black triangles are data points from the literature and the orange squares are from the TESS data in this paper; all the transit and occultation times can be found in Table 5; the orbital decay and apsidal precession models are shown as the blue and red lines, respectively.) The orbital decay model fits the transit and occultation data slightly better than the precession model (Table 6, Figure 5).
Our finding that the constant-period model does not fit the data well is consistent with previous studies (Bouma et al. 2019; Southworth et al. 2019; Baluev et al. 2020). The orbital decay and apsidal precession models fit the data with minimum chi-squared (χ²_min) values of 276.35 and 270.42, respectively. We find that the orbital decay model is the preferred model, with a Δ(BIC) = −5.93 and a Bayes factor of 9.3. Therefore, based on our analysis of the observed timing residuals, the orbital decay model is only slightly preferred over apsidal precession.
Because the RV measurements show evidence of a possible second planet, we modeled the two-planet system to see if it could reproduce the TTVs. For this analysis, we used the publicly available TTV analysis package OCFit (Gajdoš & Parimucha 2019; https://github.com/pavolgaj/OCFit). Specifically, we used the AgolExPlanet function, which is an implementation of Equation 25 in Agol et al. (2005). The priors in OCFit were set to the values given for Model #4 in Table 4 for both planets, where the outer planet has an eccentricity of 0.094. We were not able to fit the TTVs well with an outer planet consistent with the RV constraints. We also produced several forward models with OCFit showing that the expected TTV signal from the outer body is less than 2 seconds (depending on the real mass of the body) over the full observational period (Figure 6). We did not use the preferred two-planet model (Model #3) because it did not produce a detectable signal. The two objects are assumed to be co-planar; relaxing this condition would only decrease the TTV signal.
We can also analytically calculate the expected TTV signal on WASP-4b (δt_b) using Equation (12), adapted from Agol et al. (2005), which assumes non-resonant planets with large period ratios on circular orbits and depends on M_c and P_c, the mass and orbital period of the outer companion, M_star, the mass of WASP-4, and P_b, the period of WASP-4b. The assumptions of Equation (12) are all satisfied within the constraints of the best-fit RV model parameters found by RadVel (Table 4). For an M_c between 5.47 M_Jup and 2 M_sun, we find a δt_b using Equation (12) of between 1.9×10⁻¹⁰ s and 7.4×10⁻⁸ s. Hence, the expected TTV signal is many orders of magnitude below the observed TTVs regardless of the mass of the outer companion. This result is expected, as resonant perturbations between close planets are the main cause of large (≳ minutes) TTVs (e.g., Agol et al. 2005; Steffen et al. 2012; Nesvorný et al. 2013; Dawson et al. 2019). Therefore, we conclude that the observed TTVs are caused by orbital decay or apsidal precession and not by gravitational perturbations from the outer body.
DISCUSSION
From our analysis, we derived updated planetary parameters and find that the orbital decay model is slightly preferred to explain the observed TTVs. We conclusively rule out linear acceleration as the cause of the period change. We also show that the TTVs cannot be caused by the second body orbiting WASP-4, regardless of its mass. However, apsidal precession is not ruled out. Further transit and occultation measurements of WASP-4b are needed to disentangle the cause of the variation. The orbital decay and apsidal precession models exhibit very different timing variations in the mid-2020s (Figure 7); therefore, we predict that it will be possible to conclusively determine the cause of WASP-4b's changing orbit by then. (Figure 7 caption: WASP-4b transit (panel a) and occultation (panel b) timing variation predictions using the best-fit models to the timing data presented in this paper; the lines are 100 random draws from the posteriors of the orbital decay (blue) and apsidal precession (red) models; both panels show the average error of the previous observations.)
Assuming the TTVs can be explained entirely by orbital decay, we derive an updated period of 1.338231587±0.000000022 days and a decay rate of −7.33±0.71 msec yr⁻¹. Our results indicate an orbital decay timescale of τ = P/|Ṗ| = 15.77 ± 1.57 Myr, slightly longer than the value derived by Bouma et al. (2019) of 9.2 Myr. Assuming that the planet mass is constant, the rate of change of WASP-4b's orbital period, Ṗ, can be related to the host star's modified tidal quality factor through the constant-phase-lag model of Goldreich & Soter (1966), defined as

$$\dot{P} = -\frac{27\pi}{2\,Q'_{*}}\left(\frac{M_p}{M_{*}}\right)\left(\frac{R_{*}}{a}\right)^{5},$$

where M_p is the mass of the planet, M_* is the mass of the host star, R_* is the radius of the host star, and a is the semi-major axis of the planet. Using our derived value of Ṗ and the planetary parameters from Table 1, we find a modified tidal quality factor of Q'_* = (5.1±0.9)×10⁴. This value is higher than those found in previous studies of WASP-4b; the cause of this small discrepancy (all Q'_* values agree within 2σ) is that we include more transit data in our analysis than previous studies did. It is currently not clear how to account theoretically for all the observed low values of Q'_*. Our value is an order of magnitude lower than the observed values for binary star systems (10⁵–10⁷; Meibom et al. 2015) and hot Jupiters (10⁵–10⁶·⁵; e.g., Jackson et al. 2008; Husnoo et al. 2012; Barker 2020), and it can be compared with the results of Hamer & Schlaufman (2019), who found that hot Jupiter host stars tend to be young, requiring Q'_* ≲ 10⁷. Some possible causes of a large tidal dissipation rate might be that WASP-4 is turning off the main sequence (as suspected for WASP-12; Weinberg et al. 2017) or that an exterior planet could be trapping WASP-4b's spin vector in a high-obliquity state (also predicted for WASP-12b; Millholland & Laughlin 2018). The latter theory is intriguing, as we now have evidence for an additional body in the system (Figure 4). More investigation is needed to understand the low value of Q'_*.
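The constant-phase-lag relation above can be inverted for Q'_*; the check below reproduces the quoted value from the decay rate. The stellar and planetary parameters used (M_p ≈ 1.22 M_Jup, M_* ≈ 0.89 M_sun, a/R_* ≈ 5.45) are representative literature values rather than numbers quoted in this excerpt.

```python
# Q'_* = -(27*pi/2) * (Mp/M*) * (R*/a)**5 / Pdot, with Pdot dimensionless (s/s).
import math

Pdot = -7.33e-3 / (365.25 * 86400.0)   # -7.33 ms/yr expressed in s/s (text)
Mp_over_Mstar = 1.22 * 9.54e-4 / 0.89  # assumed masses (MJup -> Msun ratio)
Rstar_over_a = 1.0 / 5.45              # assumed a/R* ~ 5.45

Q = -(27.0 * math.pi / 2.0) * Mp_over_Mstar * Rstar_over_a**5 / Pdot
print(f"Q'_* ~ {Q:.1e}")               # ~5e4, matching the quoted value
```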
If confirmed, WASP-4b would be the second exoplanet, after WASP-12b (Yee et al. 2020; Turner et al. 2021), to show evidence of tidal orbital decay. Future observations of WASP-4b are needed to verify this possibility. Other planets, such as WASP-103b, KELT-16b, and WASP-18b, are also predicted to exhibit large rates of tidal decay (Patra et al. 2020). Additionally, the planets HATS-2b and WASP-64b are ideal candidates for orbital decay searches because they reside in systems similar to WASP-4 (Southworth et al. 2019). To understand the formation and evolution of the hot Jupiter population, timing observations of additional systems are needed.
The discovery of WASP-4c makes the WASP-4 system unique, as it would be the most widely separated companion of a transiting hot-Jupiter known to date. However, our discovery of WASP-4c is not surprising because Knutson et al. (2014) estimated that 27% ± 6% of hot Jupiters have a planetary companion at a distance of 1-10 AU and a mass of 1-13 M Jup . Similar planets may be common in other systems but could only be detected with long time-baseline RV data sets. Therefore, we expect that the unique status of the WASP-4 system is a result of observational bias rather than an intrinsic rarity of such systems.
Future observations of the WASP-4 system can help us put the system in context with the overall hot Jupiter population and shed light on the possible formation scenarios of the system. Specifically, stronger constraints on the obliquity, mutual inclinations, and full orbital parameters of WASP-4c will help us understand planetary migration in this system.
CONCLUSIONS
We analyzed all available sectors of TESS data of WASP-4b to investigate its possible changing orbit. Our TESS transit timing investigations confirm that the planet's orbit is changing (Figure 5). We conclude that acceleration of the WASP-4 system towards Earth is not the cause of the period variation after analyzing all available RV data (Figure 4; Table 4). From the RV analysis, we also find evidence of a possible second planet orbiting WASP-4, with a period (P_c) of 7001.0^{+6.0}_{−6.6} days, a semi-major axis of 6.82^{+0.23}_{−0.25} AU, and an M_c sin(i) of 5.47^{+0.44}_{−0.43} M_Jup (Figure 4, Table 4). WASP-4c is the most widely separated companion of a transiting hot Jupiter discovered to date. This outer planet is not the cause of the observed TTVs (Figure 6). Our timing analysis slightly favors the orbital decay scenario over apsidal precession as the cause of the TTVs (Table 6, Figure 5); however, apsidal precession cannot be ruled out. We find an updated period of 1.338231587±0.000000022 days, a decay rate of −7.33±0.71 msec yr⁻¹, and an orbital decay timescale of 15.77±1.57 Myr, assuming the system is undergoing orbital decay. The planetary physical parameters are also updated with greater precision than in previous studies. More transit, occultation, and RV data are needed over the next few years to determine conclusively the cause of WASP-4b's changing orbit and help place the system in context with the overall hot Jupiter population.

ACKNOWLEDGMENTS

Support for this work was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51495.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
We are grateful to Benjamin (BJ) Fulton for his help with RadVel. We thank Dong Lai and Maryame El Moutamid for useful discussions. We were inspired to pursue this project after attending a TESS hackathon hosted by the Carl Sagan Institute.
We also thank the anonymous referee for the useful comments.
This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). (Appendix A figure captions: Figure A2, individual TESS transit events (8–16) from Sector 2; Figure A3, events (17–18) from Sector 2; Figures A4–A5, events (1–8) and (9–15) from Sector 28; Figures A6–A7, events (1–8) and (9–14) from Sector 29; other plotting details are the same as in Figure A1.)
B. DIFFERENCE IN MID-TRANSIT TIMES FOR THE SECTOR 2 TESS DATA
The comparison of the derived mid-transit times between our analysis and Bouma et al. (2020) for the Sector 2 TESS data is shown in Figure B8; our results are consistent within 1σ. (Figure B8 caption: difference in mid-transit times between our analysis and Bouma et al. (2020) for the Sector 2 TESS data.)
C. RADIAL-VELOCITY MODELS
We performed an RV analysis on all the RV data in the literature with RadVel. Table 3 compares all the one-planet and two-planet RV models carried out in our analysis. The priors used in RadVel can be found in Table C4. The best fit to the data is the two-planet model (Model #3 in Table 3), with a BIC of 628.34. In this model, the eccentricities of both bodies (e_b and e_c) are fixed to zero and there is no linear acceleration. The planetary parameters derived from all models can be found in Table 4. The posterior distributions for all free parameters of Models #1–#4 can be found in Figures C10–C12. The best-fit Keplerian orbital models compared to the RV data can be found in Figure 4.
We also performed an RV analysis using only the data presented in Bouma et al. (2020), with the priors listed in Table C4. The results of that analysis can be found in Table C5 and Figure C13. We find a γ̇ of −0.0400^{+0.0037} m s⁻¹ day⁻¹, reproducing the acceleration reported by Bouma et al. (2020). (Appendix C figure captions: Figure C10, posterior distributions for all free parameters of the one-planet RV analysis without fitting for the linear acceleration; Figure C11, posterior distributions for the best-fit two-planet RV analysis (Model #3); Figure C12, posterior distributions for the two-planet RV analysis fitting for e_c (Model #4); the results of these models are given in Table 4 and Figure 4. Figure C13, best-fit one-planet model to the Bouma et al. (2020) data: the parameter values in Table C5 are the medians of the posterior distributions; the thin blue line is the best-fit model; the RV jitter terms in Table C5 are added in quadrature to the measurement uncertainties; panel b shows residuals to the best-fit model; panel c shows RVs phase-folded to the ephemeris of WASP-4b, with the phase-folded model shown as the blue line and red circles the velocities binned in 0.08 units of orbital phase.)
"Physics"
] |
Overview of myelin, major myelin lipids, and myelin-associated proteins
Myelin is a modified cell membrane that forms a multilayer sheath around the axon. It retains the main characteristics of biological membranes, such as the lipid bilayer, but differs from them in several important respects. In this review, we focus on aspects of myelin composition that are peculiar to this structure and differentiate it from more conventional cell membranes, with special attention to its constituent lipid components and several of the most common and important myelin proteins: myelin basic protein, proteolipid protein, and myelin protein zero. We also discuss the manifold functions of myelin, which include reliable electrical insulation of axons to ensure rapid propagation of nerve impulses, provision of trophic support along the axon, and organization of the unmyelinated nodes of Ranvier, as well as the relationship between myelin biology and neurologic diseases such as multiple sclerosis. We conclude with a brief history of discovery in the field and outline questions for future research.
Introduction
The myelin sheath is a modified cell membrane that wraps multiple times around the nerve axon (Figure 1). Tight, layer-by-layer packing allows for reliable electrical insulation of axons, thereby ensuring rapid propagation of nerve impulses (waves of membrane depolarization driven by electric potential) along the axon, and reduces axonal energy consumption. The compact multilayered myelin sheath allows an increase in the velocity of propagation from less than 1 m/s to 50–100 m/s without an increase in the diameter of axons. The myelin sheath is an exclusive innovation of vertebrate organisms and may explain the larger size of vertebrates relative to nearly all other animals (Zalc, 2006).
Optimum insulation depends on the types and ratios of myelin constituent lipids and proteins and myelin water fraction. If the myelin sheath is damaged, axonal insulation is disrupted, and nerve impulses along the axon slow down or fail to conduct, resulting in neurologic dysfunction. Myelin-related pathology underlies several neurogenetic diseases, such as leukodystrophies and inherited demyelinating neuropathies, and acquired neurologic diseases, such as multiple sclerosis (MS) and subacute combined degeneration (Harayama & Riezman, 2018). Myelin degradation also contributes to age-related cognitive decline (Bonetto et al., 2021). It is, therefore, important to understand at the molecular level the processes that underlie the formation of the myelin sheath (myelination) and the replacement of damaged areas of the sheath (remyelination).
In this review, we will discuss the general properties of myelin, focusing on the features of its composition, formation, structure, and function that differentiate it from the more conventional cell membranes. We will also address differences in myelin formation and properties in the central nervous system (CNS) and the peripheral nervous system (PNS).
Myelin sheath and the g-ratio
The myelin sheath is typically made of up to 100 layers tightly wound on top of each other around the axon (Figure 1A) (Simons and Nave, 2015). Two characteristic periodic morphological features of the myelin sheath are the alternating major dense lines and intraperiod lines. The major dense lines are ~2–3 nm wide and are formed by the closely apposed intracellular (cytoplasmic) surfaces of the two lipid bilayers, as shown in Figure 1B. The intraperiod lines are wider (~4 nm) and are formed by the tightly apposed extracellular surfaces of the myelin sheaths.
The number of myelin layers determines the thickness of the sheath, which depends on the axon diameter: the larger the axon, the thicker the myelin sheath. The relative thickness of a myelin sheath is conventionally measured as the ratio between the inner diameter and the outer diameter of the myelin sheath (the so-called g-ratio), as shown in Figure 1A. Thus, the thinner the myelin sheath, the closer the g-value is to 1. The optimal g-ratio reflects the need to maximize conduction speed and minimize conduction delays, as well as other properties of the system as a whole, such as the need to conserve volume, especially within the intracranial space. The optimal g-ratio was estimated to be ~0.77 for the CNS and ~0.6 for the PNS (Chomiak and Hu, 2009). Deviations from the optimal g-ratio may result in abnormal neural development and neurologic disease (York et al., 2021).
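Since the g-ratio is just a ratio of diameters, it is trivial to compute; the sketch below, a minimal illustration with assumed example values, converts between g and the implied sheath thickness.

```python
def g_ratio(inner_diameter_um, outer_diameter_um):
    """g = inner (axon) diameter / outer (fiber) diameter; thin myelin -> g near 1."""
    return inner_diameter_um / outer_diameter_um

def sheath_thickness_um(inner_diameter_um, g):
    """Radial myelin thickness implied by a given g-ratio."""
    outer = inner_diameter_um / g
    return (outer - inner_diameter_um) / 2.0

# For a hypothetical 1.0-um axon: optimal CNS myelin (g ~ 0.77) is ~0.15 um
# thick per side, while optimal PNS myelin (g ~ 0.6) is ~0.33 um thick.
print(sheath_thickness_um(1.0, 0.77), sheath_thickness_um(1.0, 0.60))
```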
Quantitative determination of the g-ratio of myelin is done using electron microscopy; recent developments have made this less time-consuming (Kaiser et al., 2021). It is also possible to estimate the g-ratio in the brain in vivo using advanced magnetic resonance imaging (MRI) techniques (Stikov et al., 2015; West et al., 2016). In healthy subjects, the g-ratio varies by brain region, with higher myelin content in the highly interconnected 'hub regions' than in the peripheral connections (Mancini et al., 2018). In patients with MS, an acquired demyelinating disorder, the g-ratio-weighted nodal strength in motor, visual, and limbic regions correlates with disease severity (Kamagata et al., 2019). However, wide application of g-ratio estimation in clinical practice is hindered by the large variability of g-values obtained using various MRI techniques (Ellerbrock & Mohammadi, 2018). A comparison of five different methods of g-ratio estimation in healthy subjects and multiple sclerosis patients showed high variability of g-values, mostly in MS lesions, and two MRI methods did not correctly predict the degree of demyelination in MS lesions (Berg et al., 2022).
Glial cells and myelinogenesis in the central and the peripheral nervous system
The nervous system is traditionally divided into CNS and PNS. The CNS is comprised of the brain, spinal cord, olfactory and optic nerves, and is myelinated by oligodendrocytes. The PNS is comprised of nerves outside of the CNS-the remaining ten pairs of cranial nerves, spinal nerve roots, and peripheral nerves, and is myelinated by a different type of glial cell-the Schwann cell. The border between central and peripheral myelin-the so-called Obersteiner-Redlich zone-lies along cranial nerves and spinal nerve roots, within a few mm of nerve root entry into the brainstem or the spinal cord. The part of the axon proximal to the Obersteiner-Redlich zone (nearer the cell body) is myelinated with central myelin made by oligodendrocytes, and the part of the axon distal to this zone (farther from the cell body) is myelinated with peripheral myelin made by Schwann cells.
A single oligodendrocyte myelinates between 40 and 60 different axons but only one segment per axon (Simons and Nave, 2015). Thus, each axon in the CNS is myelinated by multiple oligodendrocytes, and each oligodendrocyte myelinates multiple axons. Oligodendrocytes myelinate different axons to variable extents depending on axon diameter to maintain optimal g-ratio. Thus, the same oligodendrocyte will myelinate the larger axons more extensively, yielding a thicker myelin sheath compared to the smaller axons (Waxman and Sims, 1984). An oligodendrocyte typically needs only about 5 h to generate all its myelin, which includes the synthesis of all the necessary proteins and lipids (Czopka et al., 2013).
Within the PNS, a Schwann cell myelinates only a single axon, unlike oligodendrocytes in the CNS, which myelinate multiple axons. Peripheral axons often span considerable lengths, and many Schwann cells are required to myelinate the length of a single axon. The diameter of axons in the PNS ranges from ~0.1 μm to ~20 μm, while in the CNS the axons tend to be smaller, ranging from <0.1 μm to >10 μm in diameter (Stassart et al., 2018).
Another important distinction between oligodendrocytes and Schwann cells is that Schwann cells myelinate only axons that are greater than 1 μm in diameter, selected through a process called 'radial sorting'. Wider-diameter peripheral axons conduct impulses at a higher speed than narrower axons, and myelination of the wider axons allows a further increase in the speed and distance of the conducted signal (Feltri et al., 2016). Another feature of the myelin sheath found only in peripheral nerves is the Schmidt-Lanterman incisures (SLI): cytoplasmic channels that pass through myelin and connect to the cytoplasm at the edge of the myelin sheath. SLI are formed where there is no tight interaction of adjacent myelin membranes, i.e., outside the compact myelin sheath. SLI have a circular truncated-cone shape and have been described as 'beads in a stretched state' (Terada et al., 2019).
Although CNS and PNS myelin are formed by different glial cell types, they share similar morphological structures, with some quantitative differences in their lipid composition and more substantial qualitative differences in protein composition. The differences between PNS and CNS myelin may explain why some diseases, such as acute inflammatory demyelinating polyneuropathy, affect only peripheral myelin while others, such as multiple sclerosis-only central myelin. Understanding the differences between the two types of myelin may yield clues into the pathogenesis of these disorders and the processes that underlie myelin degeneration in the nervous system (Quarles, 2005).
Other glial cells-astrocytes and microglia-contribute indirectly to myelinogenesis (Bilimoria and Stevens, 2015;Traiffort et al., 2020). Astrocytes promote the development of myelinating oligodendrocytes and accelerate myelin growth. Microglia remove damaged neurons and promote recovery by eliminating degenerated myelin that accumulates with aging and disease (Prineas et al., 2001;Bsibsi et al., 2014). In early development, myelin with ultrastructural abnormalities is phagocytosed by microglia (Djannatian et al., 2021). Microglia also play a neuroprotective and regenerative role by supporting myelination of axons during development and across the lifespan (Lenz and Nelson, 2018;Santos and Fields, 2021). Interestingly, Schwann cells also participate in myelin clearance after nerve injury (Brosius et al., 2017).
Diverse functions of myelin
In addition to creating tightly packed multilayered insulating segments called 'internodes' around the axon, myelin also plays a role in the assembly of the unmyelinated nodes of Ranvier (NR) between the internodes. The NR are located roughly equidistant from each other along the axon and are the only points of contact between a myelinated axon and the extracellular environment. The main function of the NR is to regenerate the nerve impulse, ensuring that the signal spreads along the entire length of the axon, which may be over a meter long in humans. Since the impulse appears to 'leap' from one NR to another, this process is known as "saltatory conduction", from the Latin 'saltus' (a leap). The mechanism underlying saltatory conduction relies on clusters of voltage-gated Na+ and K+ channels within the NR, which open and close depending on changes in the membrane potential of the NR.
Formation of the ion channel clusters in the NR, reviewed in Rasband and Peles (2021), involves multiple players: the cytoskeletal scaffold proteins actin, ankyrin G, and beta IV spectrin (Leterrier et al., 2015), the adhesion molecule neurofascin (Alpizar et al., 2019), and others. Myelin proteins are also essential in NR formation, as they attach the myelin sheath to the axon on both sides of the node and thereby 'fix' the size of the NR. An increase in NR length may alter conduction speed by ~20%, similar to the effect produced by altering the number of myelin wraps or the internode length (Arancibia-Cárcamo et al., 2017). Because myelin is necessary for NR assembly and 'size fixing', problems with myelination also compromise NR function and thereby further impair saltatory conduction and exacerbate neurologic dysfunction (Arancibia-Carcamo and Attwell, 2014).
By insulating the axon along its length, the myelin sheath also inhibits the axon's access to nutrients from the extracellular compartment. An area of intense interest is whether the myelin sheath may also serve to provide trophic support to the underlying axon. It has been postulated that oligodendrocytes can switch their own intermediary metabolism so that the end product of glycolysis is lactate, which is then taken up by the axon and used by axonal mitochondria to generate ATP (Nave et al., 2010). The process of lactate delivery from oligodendrocytes to the axon requires the formation of narrow cytosolic channels, such as the Schmidt-Lanterman incisures discussed above, that connect the glial cell body with the axon during myelination (Spiegel and Peles, 2002). Such channels may exist in noncompact myelin, which differs from compact myelin in its molecular structure. The oligodendrocyte-specific protein 2′,3′-cyclic nucleotide 3′-phosphodiesterase (CNP) is essential for preserving the cytoplasmic space between the inner leaflets of non-compact myelin (Snaidero et al., 2017), as will be discussed below. It is also possible that oligodendrocytes provide energy supplies to axons via exosomes (Frühbeis et al., 2020). Failure of the energy-trophic function of oligodendrocytes may contribute to axonal neurodegeneration (Nave et al., 2010; Tepavčević, 2021).
An overview of myelin composition
The myelin sheath, like all cell membranes, is composed of three main components (water, lipids, and protein molecules), but the ratio of these components in myelin differs from that of a more typical cell membrane. Dry myelin is characterized by a high proportion of lipids (70%–85%) and a low proportion of proteins (15%–30%), while the typical cell membrane has an approximately equal ratio of proteins to lipids (50%/50%) (Poitelon et al., 2020). The high proportion of lipids in myelin makes it less permeable to ions and a better electrical insulator. It also affects the membrane's physical properties, such as rigidity and membrane deformation (Harayama and Riezman, 2018). Myelin is highly susceptible to changes in its composition, and even small changes in the ratio of its constituent elements can result in the breakdown of the myelin structure (Chrast et al., 2011).
Myelin water
Quantitative electron microscopy (electron probe X-ray microanalysis) shows that CNS myelin in situ is 33%–55% water, the lowest water content of any morphological compartment (LoPachin et al., 1991). Near the polar phospholipid headgroups, water molecules are electrostatically oriented and form bonds with the hydrophilic groups of lipids and myelin proteins. Myelin restricts water diffusion transverse to the axon and thereby contributes to diffusion anisotropy; an increase in anisotropy therefore reflects an increase in myelination (Almeida and Lyons, 2017). A change in myelin concentration has a profound impact on signal strength in magnetic resonance imaging (MRI), and loss of signal on certain sequences may be a biomarker for myelin degeneration (Abel et al., 2020; Edwards et al., 2022). Advanced MRI techniques can differentiate water protons interacting with lipid bilayers (lipid-associated) from intra- and extracellular water protons (Watanabe et al., 2019).
Myelin lipids
Lipids differ from other major biological macromolecules in that they do not form polymers via covalent bonding of monomers but self-assemble due to the hydrophobic effect into macromolecular aggregates, such as lipid bilayer, the basic structure of all cell membranes. Lipids are the main constituents of membranes, but myelin differs from the typical cell membrane in the overall higher proportions of lipids, as well as in the ratio of three major classes of lipid components. In myelin sheath, the proportion of major lipid components is 40% cholesterol, 40% phospholipids, and 20% glycolipids, while in most biological membranes, the ratio is closer to 25%:65%:10%, respectively (Poitelon et al., 2020). Thus, the relative contribution of cholesterol and glycolipids is greater in the formation of a unique multilayer compact myelin structure than in conventional membranes. Slight changes in lipid composition in myelin can alter the intermembrane adhesive properties and lead to the destruction of the myelin structures (Chrast et al., 2011) and serious neurologic illness (Lamari et al., 2013).
Lipids are not directly genetically encoded; they are synthesized by genetically encoded enzymes. Thus, myelinogenesis is a strictly regulated process involving the coordinated expression of genes coding for the enzymes involved in myelin lipid synthesis and for myelin proteins (Campagnoni and Macklin, 1988; Dowhan, 2009). The importance of strictly regulated lipid composition is underscored by the large number of lipid-related genetic diseases. Supplementary Figure 1 of Harayama and Riezman (2018) lists 135 genetic defects in lipid metabolism that cause or contribute to human disease.
The process of spontaneous self-organization of lipid molecules into the lipid bilayer in water is largely due to their hydrophobic properties. When lipids are dispersed in water, their hydrophobic tails promote water molecules to form quasi-regular 'clathrate cages' around these hydrophobic parts. Depending on the phospholipid head group, six or more water molecules surround a lipid molecule (Chattopadhyay et al., 2021). When lipid molecules come together, water molecules lose their clathrate cage structure and form more disordered water clusters, thereby increasing the total entropy of the system and making the self-organization of the monolayer of lipid molecules a thermodynamically favorable process (Gao et al., 2022). The free energy is further decreased when two lipid monolayers pack tail-to-tail to form a more favorable arrangement with minimal contact with water-a phospholipid bilayer-the basic structure of biomembranes.
Cholesterol
Cholesterol is amphipathic: it has a polar head consisting of a single hydroxyl group, a rigid core of four fused hydrocarbon rings, and a hydrophobic hydrocarbon tail that readily inserts into the hydrophobic interior of cell membranes. The four fused rings form an almost flat, rigid structure, and their contact with other lipids and proteins within the membrane leads to a higher packing density. Thus, cholesterol helps to reduce the penetration of water, gases (e.g., oxygen), and small neutral molecules (e.g., glucose) through the membrane (Shinoda, 2016; Olżyńska et al., 2020). The importance of cholesterol for myelin structure and function can be inferred from its relatively high proportion in myelin (40%) compared to typical cell membranes (25%). A study of electron paramagnetic resonance signals found that cholesterol content strongly influences the membrane's structural organization and permeability (Subczynski et al., 2017). High cholesterol content (30%-50%) ensures high hydrophobicity of the membrane and increases membrane packing; cholesterol is also a key determinant of membrane fluidity. The critical significance of cholesterol in the myelin membrane is further highlighted by a study of mice that lacked the ability to synthesize cholesterol and showed markedly reduced myelination (Saher et al., 2005). Conversely, myelin repair (remyelination) is more efficient when the rate of cholesterol synthesis is increased (Berghoff et al., 2021).
Phospholipids
Two major classes of membrane phospholipids, sphingomyelins and phosphatidylcholines, constitute more than 50% of membrane phospholipids. The long hydrophobic tails of these phospholipids, ranging from 14 to 24 carbon atoms, increase tail-to-tail interactions, promote tight packing, decrease membrane fluidity and provide a less permeable barrier to ions, allowing for better insulation of axons (Chrast et al., 2011; Montani, 2021).
Glycolipids
Two of the most abundant glycolipids in the myelin membrane are galactocerebroside (GalC) and galactosulfatide (sGalC). Glycolipids' long alkyl chains are closely aligned and can form up to eight intermolecular hydrogen bonds. Glycolipids also interact with phospholipids and cholesterol to promote dense packing of the myelin membrane bilayer (Stoffel and Bosio, 1997). Phospholipids and glycolipids are asymmetrically arranged across the membrane, with phospholipids predominating on the inner leaflet of the lipid bilayer and glycolipids on the outer leaflet (Stoffel and Bosio, 1997). The network of hydrogen bonds among lipids is conducive to the formation of lipid microdomains (rafts), a kind of liquid-crystalline structure. These densely packed regions decrease the overall motion of the membrane and make it more rigid and more resistant to the fluid/solid phase transition, raising the phase transition temperature of the myelin membrane above physiological body temperature. A deficiency of glycolipid molecules impairs the packing of the lipid bilayer, increases membrane permeability, and disrupts conduction along myelinated axons. This important contribution of glycolipids to myelin explains the twofold increase in their proportion in myelin compared to a typical biomembrane.
Myelin proteins
Myelin in the CNS and the PNS contains a relatively small quantity of proteins, but they constitute a highly diverse group (Jahn et al., 2020). A search for human myelin proteins in UniProtKB yields 223 results (https://www.uniprot.org/, accessed 12/14/2022). These proteins have very diverse sequences, functions, and structures, yet share some common characteristics: they are typically small (usually no more than 30 kDa), have long half-lives (Toyama et al., 2013), and are multifunctional. Another feature common to many myelin proteins is that they are either intrinsically disordered proteins (IDPs) or have intrinsically disordered regions (IDRs) (Dyson and Wright, 2005; Raasakka and Kursula, 2020). The absence of a fixed, ordered three-dimensional structure in part or all of a myelin protein is due to a relatively small proportion of hydrophobic amino acids and a higher proportion of disorder-promoting amino acids (R, K, E, P, and S), which prevent the formation of an ordered structural domain with a stable hydrophobic core (Romero et al., 2001; He et al., 2009). The high conformational flexibility of IDRs allows myelin proteins to adopt variable structures depending on their neighboring contacts. Upon binding with other molecules within myelin, IDRs often undergo a disorder-to-order transition known as coupled folding and binding (Wright and Dyson, 2009). IDRs within myelin proteins play an important role in forming multilayer myelin membranes; for example, the disordered region of myelin protein zero (P0) participates in developing the mature myelin membrane (Raasakka and Kursula, 2020). In the following sections, we will discuss three structurally important and common myelin proteins: proteolipid protein (PLP), myelin basic protein (MBP), and myelin protein zero. These three proteins are representative of the diversity of myelin-associated proteins and illustrate some key features of this protein group.
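A count of this kind can in principle be reproduced programmatically. The sketch below is a hedged illustration only: it assumes the UniProt REST search endpoint at rest.uniprot.org and its x-total-results response header, and the query string is illustrative, not necessarily the one behind the figure quoted above.

```python
# Minimal sketch: count UniProtKB entries matching a search.
# Assumptions: the rest.uniprot.org search endpoint and its
# "x-total-results" header; the query string is hypothetical.
import requests

resp = requests.get(
    "https://rest.uniprot.org/uniprotkb/search",
    params={"query": "myelin AND organism_id:9606", "format": "json", "size": 1},
    timeout=30,
)
resp.raise_for_status()
print("matching entries:", resp.headers.get("x-total-results"))
```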
Proteolipid protein (PLP)
PLP is the most abundant myelin protein in the CNS, where it constitutes 38% of the total myelin protein mass; in contrast, the amount of PLP in the PNS is minimal (Jahn et al., 2020). The PLP1 gene encodes human PLP and is expressed mainly in oligodendrocytes, but also in astrocytes and even in some neuronal progenitor cells (Harlow et al., 2014). A high level of PLP in myelin is required to preserve myelin integrity. The key role of PLP in the formation of a compact multilayer membrane structure is to bring myelin membranes closer to each other; a 50% reduction in PLP content causes altered myelin ultrastructure and axonal pathology (Lüders et al., 2019). Mutations in the PLP1 gene may result in hypomyelination and a spectrum of neurogenetic disorders, including Pelizaeus-Merzbacher disease and spastic paraplegia 2 (Inoue, 2019; Wolf et al., 2019).
PLP is a highly conserved hydrophobic protein. It comprises four transmembrane segments spanning residues 10-36, 64-88, 152-177, and 234-260, of which 79 amino acids (76%) have hydrophobic side chains. Both the N- and C-termini of PLP are on the cytoplasmic side. PLP exists as two isoforms (UniProt P60201). The larger isoform weighs 30 kDa and is 277 amino acids long; the shorter isoform, PLP/DM20, is 26 kDa and is identical in sequence to the longer version except for a deletion of 35 amino acids in the intracellular loop (Spörkel et al., 2002). A recent publication shows that both full-length human PLP and its shorter DM20 isoform have a dimeric, α-helical conformation and discusses structural differences between the isoforms in terms of their impact on protein function and interaction with lipids (Ruskamo et al., 2022).
Experimental 3D structural information for full-length PLP or DM20 has not been reported, but there are X-ray data for a small fragment of the PLP chain (UniProt P60201-1: residues 45-53) in the loop between the first and second transmembrane helices (PDB structure 2XPG). This peptide (KLIETYFSK), which covers only 3% of the PLP molecule, forms a complex with the HLA class I histocompatibility molecule HLA-A*0301 (McMahon et al., 2011) and may therefore play a role in autoimmunity. It is interesting to note in this context that patients with multiple sclerosis, a chronic demyelinating disorder of the CNS, exhibit elevated T-cell and antibody responses to PLP (Greer et al., 2020). The three-dimensional structure of PLP was recently predicted using the highly accurate AlphaFold method (Jumper et al., 2021). AlphaFold predicts that the largest part of the PLP chain forms helical structures (https://alphafold.ebi.ac.uk/entry/P60201); in the predicted model, most residues (with the exception of residues 110-140) have relatively small expected position errors.
Myelin basic protein (MBP)
MBP is the second most abundant myelin protein in the CNS, constituting about 30% of the dry protein mass of CNS myelin. MBP is less abundant in the PNS, where it accounts for only 5%-18% of the total myelin protein (Garbay et al., 2000). MBP has a number of different functions: it interacts with other proteins and participates in the transmission of extracellular signals to the cytoskeleton and tight junctions (Boggs et al., 2006). MBP has been called the 'executive' molecule of the myelin membrane in view of its critical role in compact myelin sheath formation (Moscarello et al., 1997).
In mammals, the MBP gene comprises seven exons, and differential splicing of the primary mRNA leads to different isoforms of the protein. Not all of them are involved in axon myelination: for example, isoform 1 (UniProt P02686-1, 304 amino acids, 33.1 kDa) participates in early brain development before the onset of myelination (Vassall et al., 2015). The so-called 'classic' myelin isoforms are mostly part of the myelin membrane; their molecular masses range from 14 to 21.5 kDa. The 18.5-kDa isoform (UniProt P02686-5; 171 amino acids) is the most abundant MBP isoform in mature human CNS myelin, while the 17.2-kDa isoform (UniProt P02686-6; 160 amino acids) is the major MBP isoform in the PNS.
In addition to isoform variability, MBP isoforms undergo a large number of post-translational modifications, including phosphorylation, citrullination of arginyl residues, acetylation of lysine, and other reactions (Zhang, 2012). Such post-translational modification gives rise to eight charge isomers (C1-C8) of the 18.5-kDa isoform. The mostly unmodified C1 isomer has the highest positive charge (net charge of +19 at pH 7). In contrast, the mostly modified isomer C8 has the smallest net positive charge of all the isomers (net charge of +13 at pH 7) because of deimination (citrullination) of six arginine residues into the uncharged non-canonical amino acid citrulline at positions 26, 32, 123, 131, 160, and 170 (UniProt P02686-5) (Wood and Moscarello, 1989; Tranquill et al., 2000). The irreversible citrullination reaction reduces the positive surface charge of MBP, thereby weakening its interactions with negatively charged lipids, which decreases myelin stability (Martinsen and Kursula, 2022). Citrullination may also have clinical implications. In one fulminant case of multiple sclerosis, known as the 'Marburg variant,' deimination of 18 of 19 arginyl residues to citrulline within an acutely demyelinating plaque led to a dramatic decrease in MBP positive charge. Such a loss of positive charge is incompatible with MBP's function in compacting myelin and may have triggered fatal autoimmune demyelination in this patient (Wood et al., 1996). In this context, it is notable that T cells of multiple sclerosis patients appear to respond preferentially to citrullinated MBP, suggesting that citrullination of MBP may be involved in the induction or perpetuation of multiple sclerosis (Tranquill et al., 2000). Different charge isomers may have different functions at various stages of myelin development: the most positively charged variants C1, C2, and C3 are part of the stable myelin sheath, while the C8 variant might be important during the sheath's development (Moscarello et al., 1994). The C1 isomer of the 18.5-kDa isoform is characterized by low hydrophobic content: about 25% of its residues are hydrophobic (Harauz and Boggs, 2013). This is consistent with the localization of this MBP isoform to the cytoplasmic face of the myelin membranes (Figure 1B). The role of MBP is to bring together the two apposing, negatively charged cytoplasmic leaflets of the myelin membrane that form the major dense line. Force-distance measurements show that maximum adhesion force and minimum cytoplasmic spacing occur when each negative lipid in the membrane can be bound to a positively charged lysine or arginine group on MBP (Min et al., 2009). An excess of MBP causes the formation of a weak gel between myelin surfaces, while an excess of negative charge causes electrostatic swelling of the water gap (Smith, 1992). Thus, an excess or deficiency of MBP causes the myelin bilayers to repel each other and may lead to the destruction of myelin (demyelination).
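The charge bookkeeping above is internally consistent: each citrullination converts a +1 arginine into neutral citrulline, so the six modifications account exactly for the C1-to-C8 difference, +19 − 6 × (+1) = +13.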
The lipid composition of the myelin leaflet has a major impact on its interactions with MBP. A cholesterol content of 44% in myelin yields the most thermodynamically favorable MBP interaction and is optimal for membrane compaction and thermodynamic stability (Träger et al., 2020). In addition to its structural importance-via interactions with lipids and myelin membrane-associated proteins-MBP also interacts with a large group of proteins related to protein expression and may play a regulatory role in myelinogenesis (Smirnova et al., 2021).
Myelin protein zero
Myelin protein zero molecule (P0) is expressed in higher vertebrates only in the PNS (Yoshida and Colman, 1996), where it makes up more than 50% of all myelin protein. P0 synthesis is regulated by Schwann cell/axon interactions, the so-called 'axonal signal'. Axons can up-and downregulate the expression of Schwann cell genes via a cyclic adenosine monophosphate (cAMP)-dependent pathway (Lemke and Chao, 1988).
The human P0 molecule (UniProt P25189, MYP0_HUMAN) is 248 amino acids long and consists of an N-terminal region (29 residues) and three domains. The structures of the rat and human P0 extracellular domains have been determined at high resolution by X-ray crystallography (rat: PDB ID 1NEU, Shapiro et al., 1996; human: PDB ID 3OAI, Liu et al., 2012). The structure of the extracellular domain (125 residues) is similar to typical variable domains of immunoglobulins: a sandwich-like structure of two beta sheets with a set of Ig-conserved residues, including a pair of Cys residues in the B- and F-strands that form a disulfide bond between the two sheets, and a Trp residue in the C-strand, which is involved in many intradomain contacts (Figure 2A). An important consequence of the homophilic adhesion properties of the extracellular domains of the P0 molecule is their ability to form dimers and tetramers: two extracellular P0 domains form antiparallel dimers, and two neighboring dimers create a tetramer between lipid membranes (Figure 2B). The dimer and tetramer formation between extracellular domains is strengthened through the participation of the two other P0 domains (Shapiro et al., 1996; Plotkowski et al., 2007).
The 27-residue-long transmembrane domain of P0 forms a single helix. The role of this domain in the formation of P0 dimers and tetramers was analyzed in detail by Plotkowski et al. (2007). An important feature of the transmembrane domain is a conserved glycine zipper motif, GxxxGxxxG (in human P0, 159GAVIGGVLG167), which is found across many membrane protein sequences. The zipper motif is the primary packing interface of the transmembrane helix, and the interaction between helices within the membrane determines the correct orientation of the Ig domains for dimer formation in the extracellular space. The third domain of P0, the 67-residue-long C-terminal cytoplasmic domain, plays a role in tetramer formation. This domain exists in a disordered state, typical of many membrane proteins that interact with lipids, and has a high content of positively charged R, K, and H residues. The sequence of the human domain is shown below.
RYCWLRRQAALQRRLSAMEKGKLHKPGKDASKRG RQTPVLYAMLDHSRSTKAVSEKKAKGLGESRKDKK
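The residue counts quoted in the following paragraph can be checked directly against this printed sequence; here is a minimal Python sketch (following the text, R, K and H are counted as positive, D and E as negative, ignoring histidine's partial protonation at pH 7):

```python
# Count charged residues in the human P0 cytoplasmic domain,
# using the sequence exactly as printed above (whitespace removed).
seq = ("RYCWLRRQAALQRRLSAMEKGKLHKPGKDASKRG"
       "RQTPVLYAMLDHSRSTKAVSEKKAKGLGESRKDKK")

positive = sum(seq.count(aa) for aa in "RKH")  # Arg, Lys, His
negative = sum(seq.count(aa) for aa in "DE")   # Asp, Glu
print(positive, negative)  # -> 23 6, matching the counts in the text
```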
Thus, the third domain contains 23 positively charged residues, approximately evenly distributed throughout the sequence, and only six negatively charged residues. Electrostatic interactions of this mostly positively charged cytoplasmic domain with the negatively charged cytoplasmic phospholipid headgroups are largely responsible for the formation of a stable, helically ordered protein structure (Raasakka and Kursula, 2020). As a result of these interactions, important structural transformations occur within myelin that bring two neighboring P0 molecules together and 'tighten' the two adjacent membranes. These contacts have a function similar to the MBP contacts within the cytoplasmic part of the myelin membranes considered above. Four neighboring P0 extracellular domains assemble into a tetramer with a fourfold symmetry axis (Figure 2B). Because this tetrameric association is so stable, it may be considered the main structural unit of the native myelin membrane in the PNS (Thompson et al., 2002).
Myelin: History of discovery and questions for future research
Van Leeuwenhoek was the first to detect myelinated fibers, in 1717, and Rudolf Virchow described myelin's chemical nature in 1854 and gave it its name. More than another century passed until it was conclusively established that CNS myelin is formed by oligodendrocytes (Bunge et al., 1962). In 1878, Ranvier established that the myelin coverage of axons is not continuous but is periodically interrupted by non-myelinated sections, which we now call the nodes of Ranvier (NR). Only very recently was the molecular mechanism of NR assembly described in detail (Rasband and Peles, 2021), yet many unresolved questions remain. For example, it is not known how the distance between NR is regulated during myelination: the internodal distance changes as the axon grows, but how this information is conveyed to oligodendrocytes is unknown. The rich and fascinating history of myelin research from the Renaissance to the present was the subject of a recent review (Boullerne, 2016).
Traditionally, the study of myelin has focused on understanding its properties as an axonal insulator. The current trend in the field is to enlarge the focus to encompass the entire complex involving myelin, oligodendrocyte, axon, and other cells involved in myelination. From this perspective, the study of myelin is not so much an investigation into its complex chemical nature as into the interrelationship and interdependence between the living cellular elements that contribute to myelination (Bonetto et al., 2021). This perspective allows one to appreciate the system's plasticity: how functional and structural changes occur in response to changes in the living organism. An example of how this shift in focus yields new insights into myelin biology is the newly described concept of 'adaptive myelination' (Bechler et al., 2018; Bloom et al., 2022). It is clear that the investigation will not end at this stage of cellular plasticity but will proceed to the next level of organizational complexity: neuroplasticity at the level of the organ, the brain and nervous tissue.
Author contributions
The authors contributed equally to this work and share senior authorship; they are listed in alphabetical order.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"Biology",
"Chemistry"
] |
Governance in financial institutions: key elements and preventing the failures
Purpose – The need for robust governance standards in financial institutions requires no overemphasis. However, instances of governance failures have been a recurring global phenomenon. This paper examines the key elements of governance in financial institutions, evaluates reasons for failures and suggests ways to strengthen governance and prevent such failures.
Design/methodology/approach – The author follows a descriptive design and a behavioural approach to understand the governance issues in financial institutions.
Findings – The author identifies key elements of governance and the potential reasons for failures, and highlights that the structure of boards, thrust on the adoption of best practices and regulatory guidelines are necessary but not sufficient to ensure failsafe governance standards. The author emphasises the need for recognition of behavioural factors and a focus on continuous monitoring and red-flagging of the conduct of key stakeholders by the third and fourth lines of defence. An effective whistle-blower policy, a clear focus on organisational culture and the subjugation of individuals to the systems can improve the robustness of the governance standards in financial institutions.
Originality/value – To the best of the author's knowledge and belief, the observations and suggestions made in the paper are original. The paper contributes by offering a nuanced perspective for strengthening governance in financial institutions.
Introduction
Governance remains the cornerstone of any organisation, and financial institutions are no exception. In an increasingly integrated financial system, risks quickly spill over across the different verticals of financial institutions, are amplified through the financial system and have a contagion effect on the real sector. It is therefore obvious that financial institutions must have robust governance standards and failsafe systems and controls. Any weakness in the governance edifice exposes a financial institution to operational risk, which quickly translates into credit, market, liquidity or reputation risk, or a combination of these. It is, therefore, natural that financial institutions and their regulators have put in place systems, controls and processes to ensure robust governance standards. These processes have evolved over time and are continuously subjected to internal, external and supervisory scrutiny. However, instances of governance failures and bad corporate behaviour have recurred, not only in India but across the globe. In the Indian context, the financial system has witnessed instances of system or governance failures in banks, NBFCs and market intermediaries of different dimensions and magnitudes. Generally, such cases come to light through whistle-blowers, usually after a considerable time lag and consequent financial and reputational damage. Common threads across such instances are managerial misconduct, the concentration of power, dubious incentive structures, the lack of market discipline and inadequacies of external oversight.
The emergence of cases of governance failures from time to time indicates that certain maladies, notwithstanding the internal control systems, governance processes, audit mechanisms and regulatory structures, could not be nipped in the bud. This puts additional pressure on the supervisory mechanisms, as one of the common challenges with control, audit and oversight functions is that any single failure, at least in public perception, tends to obliterate all the previous instances of effectiveness. Moreover, most cases of governance failure occur not because of insufficient regulations but due to a tendency to get around them rather than follow their spirit and intention. Hence, the solution lies in strengthening the governance framework in financial institutions by nudging good corporate behaviour, but there may not be any straitjacket approach that ensures good governance. From a regulatory standpoint, there is a need to strive for failsafe systems and processes to the best extent possible, and a need to know the precise reasons for governance failure in financial institutions. This involves the assessment of several behavioural issues besides the regulatory aspects. To examine these elements in totality, it is important to look at the following aspects and key questions, which could put the efficacy of governance in jeopardy (Table 1).
This paper examines the above critical questions and is structured as follows. Section 2 provides an overview of the corporate governance edifice with special reference to financial institutions and the Basel principles. Section 3 explains the four lines of defence model of governance. Section 4 touches on corporate governance standards and issues in the Indian context. Section 5 looks at the most probable and time-tested reasons for governance failures. Finally, Section 6 deals with steps for strengthening governance, followed by conclusions in Section 7.
Corporate governance in financial institutions
Corporate governance refers to the set of structures, processes and relationships between a company's management, its board, its shareholders and other stakeholders, through which objectives are defined, processes are set for achieving those objectives, and performance is monitored. The primary objective of corporate governance is to safeguard stakeholders' interests in a sustained manner by ensuring that work is undertaken in a legitimate, responsible and ethical manner. In the case of banks and deposit-taking financial institutions, shareholders' interest must not precede the depositors' interest. The Basel Committee on Banking Supervision (BCBS) came up with a set of 13 corporate governance principles for banks in 2015. A summary of the principles is presented in Table 2. These principles provide a comprehensive guide for developing suitable corporate governance systems commensurate with the size, complexity, systemic importance, substitutability and interconnectedness of banks and financial institutions.
Four lines of defence model
As evident from the BCBS principles of corporate governance, there is a substantial emphasis on how the corporate governance procedures of financial institutions could be used to improve risk management and internal controls (Figure 1).
The four lines of defence model enhances coordination between external parties and internal auditors, thereby minimising asymmetric information amongst the parties involved. The model places the risk owners and managers operating at the frontline as the first line of defence, with defined management controls and internal control measures, such as the delegation of authority, sanction limits, expenditure rules and maker-checker systems, to ensure a defined and judicious risk-return trade-off. Effective risk management, a robust internal control system and corporate culture are integral parts of the governance mechanism, in which a specific role is assigned to different functionaries. All the control units, such as compliance, risk and finance, serve as the second line, with responsibility for oversight of the first line, besides reporting to the board and/or its audit committee. Internal audit provides independent assurance through its auditing function as the third line of defence. Finally, the external auditors and supervisors are supposed to regularly interact with the controllers and internal auditors to scrutinise, guide as and when necessary, and promptly suggest improvements and remedial measures.
The BCBS principles and the above model provide a comprehensive guide for strengthening corporate governance in financial institutions and accordingly, financial system regulators have put in place appropriate regulatory requirements.
Governance in financial institutions: the Indian context
Financial sector regulators in India (RBI, SEBI, IRDAI and PFRDA) have put in place a regulatory architecture aimed at strengthening governance in the regulated entities. The focus of these regulations remains on the constitution and conduct of the board and senior management: the chair and meetings of the board; the composition of certain committees of the board (notably audit, nomination and remuneration, and risk management); the age, tenure, qualification and remuneration of directors; the appointment of whole-time directors/managing director and chief executive; independent directors and their role; as well as the role and responsibilities of key management personnel (KMP). Regulations also prescribe a code of conduct and code of ethics, fit and proper norms, disclosure of compensation for directors and KMP, and reporting structures. Regulatory provisions stipulate that directors should not interfere in day-to-day functioning, should abstain from influencing the employees and should not be directly involved in the appointment and promotion of employees. However, directors are not expected to turn a blind eye if they observe non-compliance with regulations or irregularities in day-to-day functioning or in the working of KMP.
Figure 2 presents the various elements of the governance system which are intended to achieve good corporate behaviour in financial institutions. A synergistic combination of all the components is required to achieve the governance objectives of integrity, truthfulness, honesty, objectivity, fairness and transparency in the working of financial institutions. A sound system of governance promotes due diligence and oversight, the absence of conflicts of interest, ethical, legal and prudential conduct, and the achievement of the public interest and the common good of all stakeholders. It is quite natural that regulations focus on strengthening all the elements of the governance system, inter alia, by mandating independent/public interest directors, direct reporting by compliance, risk and audit functions to the board committees, and robust disclosure requirements.
Almost all the episodes of governance failure, however, present similar stories: all the elements of the governance system are present, direct reporting structures exist, independent directors sit on the boards, and audits and external evaluations take place, but, unfortunately, the desired results are not achieved because the persons responsible for curbing malfeasance or making timely reports either become party to the unscrupulous acts or look the other way and fail to find and/or report the shrouded misconduct. The next section examines the reasons for governance and control failures in financial institutions in a broader context.
Reasons for governance and control failures
As discussed, boards of financial institutions, strengthened with independent directors and well-structured committees and supported by compliance, risk and audit functions, have the primary responsibility of ensuring sound systems and controls. However, the instances of governance failures, not only in India but all over the world, necessitate a closer look at the reasons for governance and control failures (Douglas et al., 2018). A review of several episodes of misconduct and financial imbroglio in financial institutions highlights the following points as the likely reasons for governance conundrums.
(1) Misaligned incentives at the frontline
The first line of defence, formed by field executives and front-line functionaries, is responsible for screening out unwarranted risk and blocking transactions with ethical, legal or proprietary issues, while also carrying the responsibility for generating sufficient, or at times targeted, revenue for the enterprise; this is a great source of misalignment. Many times, due to misaligned incentives, the pressure to achieve targets overwhelms the need for judicious risk-taking and can even push front-line staff into mis-selling, unscrupulous conduct, or the concealment of unfavourable deals and positions. Tayan (2019) observes that the tensions between corporate culture, financial incentives and employee conduct were amply illustrated in the Wells Fargo cross-selling scandal. Ironically, Wells Fargo was listed as one of the great places to work for many years while its sales team adopted aggressive and toxic tactics to achieve its targets. So even though its corporate philosophy stated something entirely different, people indulged in what they were paid for, as the incentives were completely misaligned. Boards of financial institutions must, therefore, be conscious of this aspect while making business decisions and setting goals.
(2) Lack of independence and expertise at the second and third lines
The second and third lines are supposed to filter out risk and misconduct at the front line through their oversight, monitoring and reporting responsibilities. While the compliance, risk and internal audit functions are expected not to have any dual hatting or business targets, and to have a direct reporting line to the board, in reality it is difficult for these functionaries to completely dissociate themselves from business processes and functional heads. Hence, many times, compliance and risk functions toe the line taken by business verticals and chief executives instead of forming an independent opinion. Behaviourally, it is not easy under all circumstances to be part of the enterprise and yet develop an independent view and perspective. At times, the functional teams may have superior knowledge and expertise about their domain than that possessed by the risk, compliance and audit teams; it is natural for the management to place its most talented executives in roles responsible for business delivery and revenue generation. Moreover, many business decisions, for want of a different perspective, may look reasonable in real time while proving catastrophic in hindsight. A second-line functionary handling risk management could be called too conservative or a spoilsport for an adverse opinion, which may or may not be proved right in hindsight. The board, therefore, must take the initiative to ensure expertise and independence in the second line and boost their confidence by demonstrating that the red flags raised by these executives are welcome and helpful. The right tone from the top helps. Further, nudges such as separating the office locations of functional teams and second/third-line executives, insisting on formal modes of communication, cutting the chances of too much familiarity and, at times, bringing outside experts into these roles might help improve the effectiveness of the second and third lines of defence.
(3) Personality cult is the worst enemy of governance
Given the well-defined structure of boards, one wonders why, despite having all the necessary structures in place, certain instances of managerial misconduct and governance malfeasance are neither curbed nor reported as expected. This brings us to the critical issue of individuals' positions and behaviour. In institutions where individuals become too powerful, whether due to their long tenure, knowledge and skills, or charisma, their writ becomes too large to be subservient to systems, controls and procedures. Such persons, by virtue of their long-standing position, develop strong connections and networks to manage things in their right or wrong ways. As already discussed, if any head of a business vertical or the chief executive becomes very strong and well-connected, it turns out to be practically very difficult for compliance, risk and audit professionals to resist or report any unscrupulous decisions taken by such a person. Hence, financial institutions must not allow the development of personality cults, whether at the level of senior management or at the level of the board, whereby individuals become stronger than systems. In the case of KMP, the tenure and the zone of influence should be carefully calibrated and managed. Larcker and Tayan (2016) observe that, at times, the CEO could be the root cause of the governance problem because of reckless decisions, behaviour, and capture of the board through his or her long-standing position in senior management. A failure in the board's oversight role due to such capture could result in massive cultural and procedural collapse.
(4) Rot at the top is the most difficult thing to tackle
While the boards and senior management of financial institutions are assigned the responsibility of putting in place the best governance standards and leading by example, at times they may themselves be involved in misconduct for personal gain. This would be a case of the fence eating the grass, and would perhaps be very difficult to act against through the first, second or third lines of defence. In such cases, the onus falls on the fourth line, by way of external scrutiny and consequent supervisory action. For the fourth line to be effective, coordination, market intelligence and information sharing between external auditors and supervisors are necessary. While both have the similar objective of ensuring strong financial institutions, their mandates and scope are somewhat different. Aligning such differences, by respecting each other's roles within a well-structured mechanism, is important for effectiveness. Such alignment and coordination also ensure that any budding malfeasance does not linger undetected for long and is detected and curbed swiftly.
(5) Gatekeepers' inability to see through the corporate veil, and weak market discipline
External oversight offered by the gatekeepers, namely concurrent and statutory auditors, rating agencies, credit analysts, etc., provides a valuable fourth line of defence for financial institutions. However, at times the gatekeepers may lack the incentive to dig deep enough to see through the corporate veil. Secondly, the quality of, and access to, information available to the gatekeepers may not be truly accurate and transparent, especially in situations of rot at the top or managerial misconduct (Core et al., 2006). An easy way out to save one's skin, therefore, could be to release an evasive, qualified audit report instead of coming clean with the numbers and offering unambiguous observations. Gatekeepers' failure to report unscrupulous transactions and malpractices also leads to poor market discipline, because stakeholders lack credible information about the financial institutions concerned. Since supervisors are traditionally conservative and selective in sharing information, market discipline hinges to a great extent on the disclosures and reporting made by the gatekeepers; since these are usually less than optimal, market discipline remains on a weak footing. Kaawaase et al. (2021) observe that corporate governance and internal audit have a strong bearing on financial reporting quality.
Considering the above points, the next section provides insights for strengthening governance in financial institutions.
Strengthening governance in financial institutions
Based on the above discussion, it is useful first to answer the questions raised in Section 1 of this paper, followed by a discussion of the steps most needed to strengthen governance in financial institutions.
Coming to the questions raised in Section 1, it is quite apparent that the misalignment of incentives and the misplaced priorities of senior management, leading to gaps in the four lines of defence, are the primary reasons for control failures. The objectives and incentives of the first line are unlikely to have a control orientation, given its role in the enterprise over the medium to long term, especially if the financial and hierarchical incentives are adversely aligned. Hence, the first line may generally be focussed on short-term gains and the achievement of targets, and may be content with a tick-the-boxes approach. Similarly, the second line may not be truly independent and may, at times, lack the expertise and conviction to take things head-on. The same is true of internal audit as the third line; hence, the controls, audit and risk management functions embedded in the second and third lines could have a tendency to look the other way or fail to see through shrouded managerial misconduct. Finally, some gatekeepers in the fourth line of defence may lack sufficient incentives, the will, and the ability to go the extra mile to nip malfeasance in the bud. Market discipline remains weak due to the limited flow of credible information and awareness in the public space. Developing the right balance of incentives and disincentives, strengthening market discipline, and encouraging whistle-blower mechanisms can contribute a lot to strengthening governance in financial institutions. Even the independent directors on boards may only receive the information that is presented to them, and it is not easy for them to know if something wrong is happening somewhere deep outside the walls of the boardroom. Hence, independent directors may need to keep asking questions and try to keep a tab on market chatter and the grapevine, even though it may not be easy to filter out the real issues.
Considering the above, the following steps may help strengthen governance in financial institutions.
(1) A robust whistle-blower policy is a must
A robust and trustworthy whistle-blower policy is an important tool for ensuring effective governance systems by enabling wider oversight and timely corrective action against any breeding or potential malfeasance within the organisation. Every financial institution, including the regulatory bodies, should have a well-structured whistle-blower policy, properly operationalised and widely circulated, so that all stakeholders, including employees and the general public, are encouraged to communicate their concerns about illegal, unethical and unscrupulous practices and misconduct. Confidentiality, ease of access, and protection of the identity and interests of whistle-blowers remain the most important elements for the effective operation of a whistle-blower policy. It needs no overemphasis that this requires trust both within and outside the organisation, which cannot be developed overnight. This can be successful only when people are confident that their inputs will be taken in right earnest and that there will be no direct or indirect retribution. Boards of financial institutions must put in place the necessary mechanisms to thoroughly process the inputs received and to provide full protection to whistle-blowers.
(2) Never allow individuals to overpower the systems and the organisation
Governance systems are most damaged when individuals, due to their position as founders, major shareholders, family members or associates of directors or KMPs, and/or due to long tenure, superior knowledge or stellar contributions to establishing the financial institution, are seen as indispensable and perceived as towering personalities whom no one dares to challenge or present with a contrarian view. In such a situation, systems and processes take a back seat and the risk of governance failure increases manifold. The challenge, however, is the assessment of this incipient risk in real time, as everything looks great from the surface; it is only after a fiasco happens that things start looking bad in hindsight. The solution lies in ensuring the supremacy of systems, bringing in transparency, and not allowing individuals, irrespective of their knowledge, skills, experience, seniority or contribution, to continue for too long and become like demigods for the institution. Further, overbearing senior management or directors could create perverse incentives in the organisation by curbing independent opinions and divergent views. In such a situation, malfeasance is easy to develop and difficult to figure out and address.
(3) The fourth line of defence must keep looking for the red flags of governance deficiencies for timely action.
Further, external auditors and supervisors, as part of the fourth line of defence, must keep looking for red flags of governance weaknesses on an ongoing basis and initiate corrective action as and when required. Some of the typical red flags seen in financial institutions are, inter alia: long tenure of KMP and directors; the presence of close relatives or associates in executive and board positions; the heavy influence of one or two persons; too little or too high remuneration; a lack of proper recordkeeping; complex systems and ambiguous procedures; very little or too much delegation of powers; almost no discussion or dissent in board meetings; weak internal audit, human resource and risk management departments; the non-designation of some executives as KMPs despite their being in key positions; a top-down approach in most cases; and an overbearing hierarchy and high-handedness of senior management. The list could be unending and requires sound judgement and experience on the part of supervisors and external auditors. The challenge, however, is to find such red flags in real time rather than in hindsight. While doing this, the fourth line could worry that it might be accused of being too hawkish, and there always remains a risk of being proved wrong in hindsight due to the interplay of several internal and external environmental factors.
(4) Never ignore the behavioural aspects
The quality of governance in financial institutions is such a complex and fluid phenomenon, with so many behavioural elements, that a simple check-the-boxes approach cannot be successful. In most cases, improvements in corporate behaviour and governance quality are a matter of conviction and the right incentives, requiring thoughtful consideration by all the stakeholders. Besides, it is also important to understand the root cause of institutional failures before attributing everything to the boards. Only in cases where the failures resulted from strategic errors, inappropriate risk-taking, weak oversight or the involvement of the board or senior management in frauds should the board be held responsible; in cases of failure due to market or external factors, the board may not necessarily be at fault. Moreover, regarding its oversight role, it may not be realistic to expect the board to detect all instances of malfeasance, but it is fair to expect the board of a financial institution to make efforts to have eyes and ears, in the form of institutional mechanisms, to curb any potential vested interests, wrong incentives and structural weaknesses. Finally, the quality and efficacy of governance require the presence of several elements, but the mere presence of everything may not necessarily ensure good governance, owing to the interplay of behavioural factors. Focussed attention on the conduct of the KMPs and other stakeholders, and timely action, are necessary to ensure robustness in the governance architecture of financial institutions.
Conclusion
Robust governance standards are a prerequisite for financial institutions. Accordingly, a comprehensive set of governance processes, control systems, audit mechanisms, supervisory oversight and regulatory structures is in place in financial institutions. However, instances of governance failures have been a recurring phenomenon globally. Almost all the episodes of governance failure present similar stories: all the preferred elements of the governance system are present, direct reporting structures exist, independent directors sit on the boards, and audits and external evaluations take place, but, unfortunately, the desired results are not achieved because the KMP, directors and other gatekeepers responsible for curbing malfeasance fail to identify the problems and shrouded misconduct proactively, or look the other way. For an emerging economy like India, which aspires to make a quantum jump in economic development, a robust financial system is a sine qua non. Hence, there is a need for failsafe systems and processes to the best extent possible, and a need to know the precise reasons why and when governance fails in financial institutions. Once the reasons are precisely known, effective solutions, in terms of what should and should not be done, can be identified and implemented.
This paper looks at several key questions relating to governance in financial institutions and identifies the principal reasons for governance failures: misaligned incentives at the frontline; a lack of independence and expertise at the second and third lines of defence; the development of personality cults as the worst enemy of governance; rot at the top as the most difficult thing to tackle in real time rather than in hindsight; stakeholders' inability to see through the corporate veil; overbearing management; and weak market discipline. Close attention should be paid to the several behavioural and situation-specific factors involved in instances of governance failure.
This paper indicates that the implementation of an effective whistle-blower policy is a must, and that one should never allow individuals to overpower the systems and due processes, howsoever lucrative and promising it might seem in real time. Finally, the fourth line of defence must keep looking for the red flags of governance deficiencies, which are mostly manifested in managerial and organisational conduct initially and only much later get reflected in the financials. Timely feedback and corrective action are of the essence; else, it would be a case of too little, too late. With thoughtful consideration and pre-emptive steps, governance in financial institutions can be strengthened to prevent instances of failure to a great extent, and to repair the damage quickly in case of rare occurrences.
Figure 1. Four lines of defence model. Figure 2. Governance elements and objectives.
Table 1. Key questions on the efficacy of governance (excerpt): Is market discipline too weak to penalise and force course correction in cases of bad corporate behaviour, and are more stringent regulations the way forward? Why do instances of governance failure come to light only after a considerable time lag, leading to a situation of too little, too late? Source(s): Author
"Business",
"Economics"
] |
Fourier-transform VUV spectroscopy of $^{14,15}$N and $^{12,13}$C
Accurate Fourier-transform spectroscopic absorption measurements of vacuum ultraviolet transitions in atomic nitrogen and carbon were performed at the Soleil synchrotron. For $^{14}$N, transitions from the $2s^22p^3\,^4$S$_{3/2}$ ground state and from the $2s^22p^3\,^2$P and $^2$D metastable states were determined in the $95-124$ nm range at an accuracy of $0.025\,\mathrm{cm}^{-1}$. Combining these results with data from previous precision laser experiments in the vacuum ultraviolet range reveals an overall and consistent offset of $-0.04\,\mathrm{cm}^{-1}$ from values reported in the NIST database. The splittings of the $2s^22p^3\,^4$S$_{3/2}$ -- $2s2p^4\,^4$P$_{J}$ transitions are well resolved for $^{14}$N and $^{15}$N, and isotope shifts are determined. While excitation of a $2p$ valence electron yields very small isotope shifts, excitation of a $2s$ core electron results in large isotope shifts, in agreement with theoretical predictions. For carbon, six transitions between the ground $2s^22p^2\,^3$P$_{J}$ and excited $2s^22p3s\,^3$P$_{J}$ states at $165$ nm are measured for both the $^{12}$C and $^{13}$C isotopes.
Introduction
The determination of level energies in first row atoms critically relies on accurate spectroscopic measurements in the vacuum ultraviolet (VUV) region below the atmospheric absorption cutoff. The present study applies a unique Fourier-transform spectroscopic instrument in combination with synchrotron radiation to access this wavelength range at high resolution and accuracy for improving the atomic level structures of N and C atoms, including isotopic effects.
The currently available level energies and line classifications for the N atom, compiled in the comprehensive NIST database [1], mostly originate from the work of Eriksson and coworkers from the late 1950s in combination with the work by Kaufman and Ward [2]. Eriksson measured N I (neutral nitrogen) transitions between 113.4 and 174.5 nm at about 0.1 cm$^{-1}$ accuracy and constructed the atomic level structure, also including information on transitions between excited states in the visible and IR regions [3,4]. Kaufman and Ward measured the $2p^3\,^2$D$_J$ -- $3s\,^2$P$_J$ and $2p^3\,^2$P$_J$ -- $3s\,^2$P$_J$ transitions to extend the knowledge of the level structure of the ground configuration at better than 0.04 cm$^{-1}$ accuracy [2], also including the forbidden transition $^4$S$_{3/2}$ -- $^2$P$_J$ measured by Eriksson. Further analyses were performed by Eriksson [5,6], and a compilation was made by Moore [7], now used as a primary reference in the NIST database. Eriksson published an extensive analysis with newly determined energy levels at an uncertainty of 0.003 cm$^{-1}$ [8]. More recently, Salumbides et al. [9] measured 12 transitions from the ground state at around 96 nm using VUV precision laser spectroscopy with 0.005 cm$^{-1}$ uncertainty, thus providing an accurate connection between the ground and excited states.
The energy level structure and the spectral data for the neutral carbon atom were recently reviewed by Haris and Kramida [10]. Their report includes an accurate summary of the VUV transitions in C I (carbon atoms) [10]. Among the body of reported studies, the VUV measurements by Kaufman and Ward [11] present the highest accuracy, 0.025-0.047 cm$^{-1}$, in the range of 145-193 nm. The C I level energy optimization also includes accurate unpublished UV Fourier-transform data, at about 0.004 cm$^{-1}$ accuracy, from Griesmann and Kling reported in [10]. Inclusion of these lines allowed for the determination of some key excited level energies, accurate to 0.0013 cm$^{-1}$.
Atomic isotope shifts (ISs) have been studied for a variety of transitions in $^{12,13}$C. Yamamoto et al. [12] and Klein et al. [13] performed high-precision IS measurements of the far-infrared lines of the $^3$P ground term. For transitions between electronic states, anomalous negative ISs have been measured, e.g., for the $2p^2\,^1$S$_0$ -- $3s\,^1$P$_1$ transition [14,15], while the transition between the ground $^3$P$_2$ state and the core-excited state $2s2p^3\,^5$S$_2$ showed a positive IS, with the heavier isotope blue-shifted [16].
Ground-state excitation to the autoionizing $2s2p^3\,^3$S$_1$ state also yielded a positive IS [17], while excitation to the $2s2p^3\,^3$D$_J$ and $3s\,^3$P$_J$ states, as well as the $^1$D$_2$ -- $3s\,^1$P$_1$ transition, exhibit an IS of the opposite sign [18]. Berengut et al. [19] performed theoretical studies of the isotope shifts of C I, explaining the significant differences in IS for various transitions.
The IS of the nitrogen atom has been investigated as well, but mainly in excitation between excited states. Holmes studied the $^{14,15}$N IS of lines around 800 nm by classical means, finding an IS of $-0.4$ to $-0.6$ cm$^{-1}$ for the $3s\,^4$P -- $3p\,^4$L quartet transitions and about 0.07 cm$^{-1}$ for the $3s\,^2$P -- $3p\,^2$P doublet transitions [15,20]. Later, a number of Doppler-free laser saturation studies were performed on the $3s\,^4$P -- $3p\,^4$P, $^4$D transitions [21,22]. A strong J-dependence of the specific mass shift (SMS) effect was found to originate from the lower $3s\,^4$P state [23]. The only measurement of IS in VUV transitions from the $2p^3\,^4$S$_{3/2}$ ground state is that of Salumbides et al. [9], probing the $4s\,^4$P, $3d\,^2$F, $3d\,^4$P, $3d\,^4$D, and $3d\,^2$D states, where no significant J-dependent SMS was observed.
In the present study, lines of N I in the range of 95-124 nm and of C I at 165 nm are investigated by Fourier-transform synchrotron absorption spectroscopy, with accurate determination of isotope shifts. Some 27 lines of N I and six lines of the $2p^2\,^3$P$_J$ -- $3s\,^3$P$_J$ multiplet of C I are measured at an absolute accuracy of 0.025 cm$^{-1}$.
Experimental
The vacuum ultraviolet (VUV) Fourier-transform (FT) spectroscopic instrument at the DESIRS (Dichroïsme Et Spectroscopie par Interaction avec le Rayonnement Synchrotron) beamline of the Soleil synchrotron facility has been described in detail previously [24,25]. The main originality of the instrument lies in the use of wave-front-division interferometry to generate an interferogram, avoiding transmissive optical materials for beam overlap. As is common in non-dispersive FT spectroscopy, the absorbing gas cell is located between the source and the VUV interferometer, allowing for a geometry where the absorption may occur at a far distance from the FT analyzing instrument.
The energy calibration of the FT spectrometer is intrinsically related to the VUV optical path difference, which is measured via interferometric detection of the fringes of a HeNe laser probing the back surface of the moving reflector [24]. The FT spectroscopy energy scale is strictly linear and, in principle, requires only a single reference if precise absolute calibration is needed. Such calibration of the spectra is performed by comparison with an absorption line of Kr I present in most spectra, for which an accurate value exists in the literature, 85,846.7055(2) cm$^{-1}$ [26]. This value for a natural sample of krypton is in agreement with isotope-specific calibrations by high-resolution laser measurements [27]. This leads to an estimated uncertainty of 0.025 cm$^{-1}$ for the N I and C I resonances. The widths of the observed lines range from 0.28 to 0.40 cm$^{-1}$ (FWHM).
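Since the FT energy scale is strictly linear, absolute calibration reduces to a single multiplicative correction tied to the Kr I reference line. A minimal sketch, in which the raw position of the Kr I line is a hypothetical stand-in for a real reading:

```python
# One-point linear recalibration of a Fourier-transform wavenumber axis.
# KR_TRUE is the literature Kr I reference; KR_RAW is a hypothetical
# position of that line read off the uncalibrated spectrum.
KR_TRUE = 85846.7055   # cm^-1, Kr I reference value [26]
KR_RAW = 85846.7320    # cm^-1, placeholder uncalibrated position

SCALE = KR_TRUE / KR_RAW   # a single factor suffices on a linear axis

def calibrate(nu_raw_cm1: float) -> float:
    """Map a raw FT wavenumber onto the absolute scale."""
    return nu_raw_cm1 * SCALE

print(f"{calibrate(103667.000):.4f} cm^-1")
```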
In a variety of previous experiments, the FT instrument was applied to perform spectroscopy of quasi-static gas molecules flowing from a capillary-shaped windowless gas cell, such as for nitrogen molecules [28]. Recently, a study was performed probing Rydberg states of O 2 molecules in excitation from metastable states down to 120 nm, produced in a DC-discharge cell equipped with UV-transmissive windows [29]. In the present study, spectroscopy is performed on atoms that were produced via two entirely different methods.
The measurements on atomic nitrogen were carried out by releasing N 2 gas into a windowless gas filter, located upstream on the DESIRS beam line close to the undulator. This filter is usually filled with noble gas for the purpose of suppressing the high-energy harmonics produced in the undulator [30]. Neither the gas density nor the absorbing column length is known, but the gas inlet can be controlled to produce the desired signal strength, with a strong limitation set by the maximum pressure allowed before the safety shutters on the beam line close. The gas filter can be monitored through a viewing window, where a radiation-emitting plasma of blue-purple color can be observed at the location where the synchrotron pencil beam traverses (see Figure 1). At this location, the synchrotron beam, including its harmonics produced in the undulator, causes photo-dissociation and photo-ionization in a collisional environment, and hence a plasma, where N atoms are produced in the ground state as well as in the metastable states. The absorption spectra of this nitrogen plasma are measured by the FT instrument some 17 m further downstream. During the measurements, gas samples of 14 N 2 , 14 N 15 N, and 15 N 2 were used to measure and disentangle the isotopic lines of N I. For the spectroscopy of atomic carbon, a DC-discharge cell is used, located further downstream, just in front (by 0.5 m) of the FT instrument, inside the conventional gas sample chamber of the FT spectroscopy branch [25]. The DC discharge is similar to the one used by Western et al. [29], although the cell is windowless in this case in order to reach the VUV spectral range. A flow of CO 2 gas is released at the inlet port and pumped at the rear end. A plasma is generated between the cathodes at a voltage difference of 1000 V with a stabilized discharge current of 20 mA. The discharge is further stabilized by optimizing the pressure and by mixing in He carrier gas, within the limits allowed by the differential pumping system of the chamber [25]. Spectra are recorded for 12 C and 13 C by using 12 CO 2 and enriched 13 CO 2 gas.
Nitrogen I
Two different sets of lines are measured for N I, lines in excitation from the 4 S 3/2 ground state and lines excited from 2 D J metastable states produced in the plasma. Results will be discussed separately.
3.1.1. Initial State: Ground 2s 2 2p 3 4 S 3/2

Absorption spectra from the 2s 2 2p 3 4 S 3/2 ground state to the 3s 4 P J levels and the 2s2p 4 4 P J core-excited levels were recorded, the latter shown in Figure 2. Spectra were recorded from samples of 14 N 2 , 14 N 15 N, and 15 N 2 , thus allowing the isotopic structure to be unraveled. Table 1 lists the transition frequencies as deduced from the spectra. For the core-excited states, a clear isotopic splitting was observed, the results of which are included in the table. The same lines of 14 N I were well studied by Kaufman and Ward [2] with uncertainties of 0.06 to 0.1 cm −1 . When comparing the present dataset with that of [2], an average systematic offset of −0.05 cm −1 was found, corresponding to 1.5 σ of the combined uncertainties.
The agreement between the present FT data and the previous VUV laser data in the range above 104,000 cm −1 [9], except for the line exciting the 4 P 1/2 level (off by 2 σ), is considered a verification of the calibration accuracy of the present experiment.

Figure 2. Overview spectra of N I core-changing transitions 4 S 3/2 -2s2p 4 4 P J excited from the ground state, using different isotopic parent gases. The inset presents the 4 S 3/2 -2s2p 4 4 P 5/2 line exhibiting a well-resolved isotopic shift. Note that all spectra were measured in absorption.

Table 1. Measured transition frequencies for N I and isotopic shifts for lines excited from the 2s 2 2p 3 4 S 3/2 ground state. Units in cm −1 with uncertainties indicated in parentheses. In the fourth column, the derived transition frequencies of [2] are listed for comparison. For the transitions above 100,000 cm −1 , a comparison is made with results of the precision VUV laser study [9]. The isotopic shift ( 15 N - 14 N), ∆ 15−14 , is given in the last column. In cases where no significant isotope shift is observed, the value in parentheses represents an upper limit.

The 4 S 3/2 -3s 4 P J transition frequencies show no significant difference between measurements using pure 14 N 2 and 15 N 2 , while the spectra from measurements with 14 N 15 N show insignificant changes in linewidth. The uncertainty of the FWHM is about 0.021 cm −1 , dominated by the statistics and the deconvolution of the sinc-shaped apparatus function observed in the FT spectrum. From this, it is estimated that the 15 N - 14 N isotope shift (IS) of the 4 S 3/2 -3s 4 P J transitions is less than 0.04 cm −1 . The transitions exciting 3d states also do not display an isotope shift. This is in agreement with the previous, more accurate, VUV laser experiment, where an IS of 0.01 cm −1 was determined [9].
In contrast, the 4 S 3/2 -2s2p 4 4 P J core-changing transitions display a distinctive IS, as clearly shown in Figure 2.

3.1.2. Initial State: Excited 2s 2 2p 3 2 D J

The metastable 2s 2 2p 3 2 D J states lie some 2.5 eV above the ground state 4 S 3/2 : 19,224.464 cm −1 for 2 D 5/2 and 19,233.177 cm −1 for 2 D 3/2 [1]. All transitions excited from these levels are substantially weaker than those excited from the 4 S 3/2 ground state, indicating that the metastable states are less populated. A spectrum of these lines is shown in Figure 3, and Table 2 lists all the observed transitions and their frequencies. Despite the small splitting of about 0.485 cm −1 between 3s 2 D 5/2 and 3s 2 D 3/2 , the transitions observed at 80,430.79 cm −1 and 80,439.02 cm −1 are unambiguously assigned as 2 D 3/2 -3s 2 D 3/2 and 2 D 5/2 -3s 2 D 5/2 , based on intensity. Note that the ∆J = 0 transitions 2 D J -3s 2 D J exhibit Einstein A coefficients ten times larger than those with ∆J = ±1. Table 2 includes the transition frequencies of the observed lines from [2] for comparison. Here again a systematic offset is found, now of opposite sign, at +0.078 cm −1 , again corresponding to 1.5 σ of the combined uncertainties. The same systematic offset of −0.07 cm −1 is also found when comparing the results of the VUV laser measurements [9] with those of Kaufman and Ward [2].
From the level energies reported in the NIST database, the frequencies of these two transitions can be computed, and these predicted values are found to be in good agreement with the present direct measurements (see Table 2). The fine-structure splitting 2p 3 2 P 3/2 - 2 P 1/2 is determined to be 0.483 (35) cm −1 , which is in fair agreement with the paramagnetic resonance result of 0.4326 (3) cm −1 [31].
The transitions from metastable states show no significant IS in the spectra obtained with 14 N 2 and 15 N 2 gases. The linewidth obtained with 14 N 15 N gas does not show any additional broadening, from which it is concluded that the IS of these lines is smaller than 0.04 cm −1 .

Table 2, note b: Calculated frequency from the NIST database.

Carbon I

Figure 4 shows the recording of all six 2p 2 3 P J -3s 3 P J transitions for both carbon isotopes, obtained from discharges in 12 CO 2 /He and 13 CO 2 /He gas mixtures. The spectrum of 12 C has several overlapping lines of the A 1 Π-X 1 Σ + (2,0) band of 12 CO [32]. The A-X (2,0) band of 13 C 16 O is blue-shifted by 100 cm −1 , outside the measurement interval displayed. The spectra of 12 C and 13 C were taken in separate measurements, in the absence of the Kr I reference line in the scan range.
The absolute calibration is verified by interpolating from the 12 CO lines with transition frequencies predicted from Doppler-free two-photon spectroscopy results at 0.002 cm −1 accuracy [32,33]. The average difference of the twelve CO lines observed is 0.001 (6) cm −1 , smaller than the current measurement uncertainty. This procedure leads to the same accuracy as for the N I lines, 0.025 cm −1 .

Figure 4. Overview spectra of the C I 3 P J -3s 3 P J multiplet recorded in absorption. The inset shows the 3 P 1 -3s 3 P 0 transition. The 12 C spectrum is partly overlapped with lines from the A-X (2,0) band of 12 C 16 O.

Table 3 lists the transition frequencies for the six C I lines, including the transition frequencies of 12 C and the 13 C - 12 C IS. The latter were presented in [10] at better accuracy, but those results did not stem from direct measurement; rather, they were obtained from combination differences. The directly measured, but less accurate, results of Haridass et al. [18] are included as well. All the measured frequencies agree with each other within the stated uncertainties. Hence, the predicted line positions of [10] are confirmed by experiment.
The ground and 3s 3 P J level energies of 12 C and 13 C are fitted with the six intercombination lines using the LOPT program [34]. Table 4 lists the fitted values from this work and compares them with [10]. The level energies and uncertainties presented are relative to the ground 3 P 0 state. Note that the inclusion of level energies and splittings in the 2p 2 3 P ground state only serves the purpose of comparing with the much more accurate values of [12,13] to verify the accuracy of the present VUV data. The predicted transition frequencies agree well with the high-precision measurements of transitions within the ground term [12,13], exhibiting less than a 0.012 cm −1 difference. This finding further supports the calibration accuracy of the present study. Haris and Kramida [10] noted a small systematic shift of −0.00006 nm, or 0.022 cm −1 , of the unpublished FTS (Fourier-transform spectroscopic) data of Griesmann and Kling in the 52,000 to 78,000 cm −1 range with respect to the 12 C values of Kaufman and Ward [11]. Comparing these values with our measurements, there appears a 0.019 cm −1 average offset, consistent with the claim of [10].

Table 3. Measured frequencies and isotopic shifts (∆ 13−12 ) of C I 3 P J -3s 3 P J transitions, with uncertainties indicated in parentheses. Note that the 12 C transition frequencies from [10] are computed values. The uncertainty of the isotopic shifts from [18] is estimated by taking the measurement uncertainties of 12 C and 13 C in quadrature. All values are given in cm −1 . In [18], it was stated that the corresponding 13 C transition is strongly blended.

Table 4. Least-squares fitted C I level energies for the 3 P J -3s 3 P J multiplet and comparison with [10]. The predicted transitions between the ground 3 P J levels are compared with the high-precision measurements in [12,13].
Level Energies of 14 N I
Three transitions from the metastable 2s 2 2p 3 2 D J states, namely 2 D 3/2 -3d 2 D 3/2 , 2 D 5/2 -3d 2 D 5/2 , and 2 D 3/2 -3d 2 F 5/2 , share their excited states with the high-precision VUV laser measurements of [9]. In combination with those laser measurements, the level energies of all states can be fitted using the LOPT program [34]. Table 5 lists the level energies of 25 states and makes a comparison with the values from the NIST database. A global and consistent negative offset of −0.04 cm −1 is observed. For the 2p 3 2 D J metastable states, larger deviations of −0.07 and −0.09 cm −1 are found.
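The level optimisation can be illustrated as a weighted linear least-squares problem; the sketch below is a generic illustration of this approach, not the actual implementation of the LOPT program [34], and all names are our own:

```python
import numpy as np

# Each measured transition constrains the difference between an upper and
# a lower level energy. Fixing the ground state at zero, the level
# energies follow from a weighted least-squares solution of the
# overdetermined linear system.

def fit_levels(transitions, n_levels):
    """transitions: iterable of (lower_index, upper_index, frequency, sigma)."""
    A, b = [], []
    for lo, up, nu, sigma in transitions:
        row = np.zeros(n_levels)
        row[up], row[lo] = 1.0, -1.0
        A.append(row / sigma)          # weight each equation by 1/sigma
        b.append(nu / sigma)
    A, b = np.array(A), np.array(b)
    energies, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)  # level 0 fixed at 0
    return np.concatenate([[0.0], energies])
```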
The determination of the 2 D J level energies presented in the NIST database is likely based on the VUV measurements by Kaufman and Ward [2], in which they are determined by combining the differences of the transitions 2 D J -3s 2 P J and 2 P J -3s 2 P J with the forbidden transition 4 S 3/2 - 2 P J measured by Eriksson. The relative uncertainty of the VUV measurements [2] is tested by comparing the fine-structure splittings of 2 D J and 2 P J with results from laser magnetic resonance [35] and paramagnetic resonance [31], respectively. The fitted 2 P J splitting is 0.391 (18) cm −1 , while the paramagnetic resonance measurement gives 0.4326 (27) cm −1 , reflecting a 2.3 σ difference.
These considerations provide additional evidence for the consistency of the present FT data and the previous VUV laser precision data. At the same time, they provide evidence for an inconsistency, i.e., a global shift, in the NIST tabulated data for the N I levels.

Table 5. Least-squares fitted N I level energies using the transitions from [9] and the current study. The fitted values are compared with the values of the NIST database. All values are in cm −1 .
Isotope Shifts
The finite mass M of the nucleus results in a small nuclear motion in the center-of-mass reference frame, where the nuclear momentum P and the electron momenta p i balance: P = − ∑ i p i . The mass shift can be calculated from the expectation value of the nuclear kinetic energy operator:

$$\frac{\langle \mathbf{P}^2 \rangle}{2M} = \frac{1}{2M} \sum_i \langle \mathbf{p}_i^2 \rangle + \frac{1}{M} \sum_{i<j} \langle \mathbf{p}_i \cdot \mathbf{p}_j \rangle.$$

The first term on the right-hand side represents the Bohr shift or normal mass shift (NMS), while the second term refers to the specific mass shift (SMS). The NMS is proportional to the atomic Rydberg constant and straightforwardly results in a blue shift for a heavier isotope. The SMS is related to electron correlation, so its magnitude and sign are highly dependent on the specific levels involved. Since the NMS and SMS terms both depend quadratically on the p i , they are often of the same order of magnitude, and in some cases are found to cancel [36]. We adopt the convention for the SMS such that

$$\Delta\nu^{B-A}_{\mathrm{SMS}} = K_{\mathrm{SMS}} \left( \frac{1}{M_A} - \frac{1}{M_B} \right),$$

where the isotopic masses follow M B > M A and

$$\Delta\nu^{B-A} = \nu_B - \nu_A,$$

which ensures that a positive SMS shifts in the same direction as the NMS. In the following discussion, we neglect the effects of nuclear field shifts and hyperfine structure [21,22], as these are smaller than the spectral resolution of the present study.
A large IS is associated with the promotion of a 2s electron from the 2s 2 2p 3 ground state to the 2s2p 4 configuration in the excited state. This is consistent with the calculations of Clark [37,38], which showed that the dominant contributions to the k integrals (and the SMS) increase with the number of 2p electrons. As a consequence, when the number of 2p electrons in the upper state is larger than that in the lower state, the SMS is positive and hence enhances the total IS, as for the 2s2p 4 excited-state configuration. On the other hand, when the number of 2p electrons in the upper state is smaller than that in the lower state, the SMS is negative and largely cancels the total IS, as for transitions to the 2s 2 2p 2 nl states. The same trend is found for transitions accessed by laser measurements in the infrared [21] and VUV ranges [9].
Isotope Shift in C I
For carbon, the 13 C - 12 C IS of the 2p 2 3 P J -2p3s 3 P J transitions is about −0.10 cm −1 on average. In comparison with [10], there is an average difference of 0.009 cm −1 , hence smaller than the combined uncertainty. Note that the ISs presented in [10] are taken from the theoretical values in [19], with an estimated uncertainty of 0.004 cm −1 . The IS determined in the present study is consistent with, but more accurate than, the measurements of Haridass and Huber [18]. With an NMS of about 0.21 cm −1 for the C I transitions, the SMS is derived to be −0.31 (4) cm −1 . The negative SMS can be understood from the same arguments as given above, stemming from the smaller number of 2p electrons in the upper state compared to the lower state for the 3 P J -3s 3 P J transitions. On the other hand, measurements involving core-changing transitions from the 2s 2 2p 2 ground state to the 2s2p 3 excited configuration in C I show a positive SMS, resulting in a large IS [17,18]. These findings are consistent with the results expected from ab initio calculations that employ different flavors of (post-)Hartree-Fock methods, obtaining varying levels of accuracy [19,37].
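As a consistency check of the quoted values, the NMS can be estimated from the transition frequency of the C I lines (ν ≈ 60,500 cm −1 near 165 nm) using integer mass numbers as an approximation, and the SMS then follows as the difference between the measured IS and the NMS:

```latex
\begin{align*}
  \Delta\nu_{\mathrm{NMS}} &= \nu\,\frac{m_e}{u}\!\left(\frac{1}{A_{12}}-\frac{1}{A_{13}}\right)
    \approx 60{,}500\,\mathrm{cm^{-1}} \times \frac{1}{1822.9}\left(\frac{1}{12}-\frac{1}{13}\right)
    \approx 0.21\,\mathrm{cm^{-1}},\\
  \Delta\nu_{\mathrm{SMS}} &= \Delta\nu_{\mathrm{IS}} - \Delta\nu_{\mathrm{NMS}}
    \approx -0.10 - 0.21 = -0.31\,\mathrm{cm^{-1}}.
\end{align*}
```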
Conclusions
Accurate measurements of transition energies in nitrogen and carbon atoms were obtained at an absolute accuracy of 0.025 cm −1 using VUV Fourier-transform spectroscopy with a synchrotron radiation source. For 14 N and 15 N, transitions originating from the ground 2s 2 2p 3 4 S 3/2 state, as well as from the metastable 2s 2 2p 3 2 D and 2 P states, are observed. For 12 C and 13 C, transition energies for the 2s 2 2p 2 3 P J -2s 2 2p3s 3 P J lines were measured. The comprehensive dataset for N I is included in a reevaluation of the level energies of the excited states, in combination with data from a previous laser-based precision study [9]. This results in an averaged shift of −0.04 cm −1 with respect to the level energies reported in the NIST database [1].
The determination of isotope shifts for carbon and nitrogen in this study will be useful in assessing the effectiveness of various strategies in ab initio calculations for many-electron atoms, in particular towards the treatment of electron correlation. Such tests on isotopic shifts will be complementary to benchmarking with absolute level energies, where the most accurate theoretical description of multiple electrons remains a difficult challenge.
"Physics"
] |
Shape–Texture Debiased Training for Robust Template Matching
Finding a template in a search image is an important task underlying many computer vision applications. This is typically solved by calculating a similarity map using features extracted from the separate images. Recent approaches perform template matching in a deep feature space, produced by a convolutional neural network (CNN), which is found to provide more tolerance to changes in appearance. Inspired by these findings, in this article we investigate whether enhancing the CNN’s encoding of shape information can produce more distinguishable features that improve the performance of template matching. By comparing features from the same CNN trained using different shape–texture training methods, we determined a feature space which improves the performance of most template matching algorithms. When combining the proposed method with the Divisive Input Modulation (DIM) template matching algorithm, its performance is greatly improved, and the resulting method produces state-of-the-art results on a standard benchmark. To confirm these results, we create a new benchmark and show that the proposed method outperforms existing techniques on this new dataset.
Introduction
Template matching is a technique for finding a rectangular region of an image that contains a certain object or image feature. It is widely used in many computer vision applications, including object tracking [1,2], object detection [3,4], and 3D reconstruction [5,6]. A similarity map is generally used to quantify how well a template matches each location in an image; it is typically generated by sliding the template across the search image, and the matching position is then determined by finding the location of the maximum value of the similarity map. Traditional template matching generates the similarity map based on pixel intensity values and is not robust to hard matching scenarios such as significant non-rigid deformations of the object, changes in the illumination and size of the target, and occlusion [7]. To address this issue, more distinctive hand-crafted features such as the scale-invariant feature transform (SIFT) [8] and histogram of oriented gradients (HOG) [9] can be used instead of the intensity values for robust template matching [10][11][12][13]. However, these features must be extracted by certain manually predefined algorithms based on expert knowledge, and therefore have limited description capabilities [14].
With the help of deep features learned by convolutional neural networks (CNNs), vision tasks such as image classification [15,16], object recognition [17,18], and object tracking [19,20] have recently achieved great success. In order to succeed in such tasks, CNNs need to be trained with big data and to automatically build internal representations that are less affected by changes in the appearance of objects in different images. Therefore, CNNs have a description capability far exceeding that of hand-crafted features; recent methods have successfully performed template matching in a feature space produced by the convolutional layers of a CNN, achieving impressive performance [7,[21][22][23][24][25][26]].
The higher layers of CNNs are believed to learn representations of shapes from low-level features [27]. However, recent studies [28,29] have demonstrated that ImageNet-trained CNNs are biased toward making categorisation decisions based on texture rather than shape. The same works showed that CNNs can be trained to increase their sensitivity to shape, resulting in improved accuracy and robustness of object classification and detection. Assuming that shape information is useful for template matching, these results suggest that the performance of template matching methods applied to CNN-generated feature spaces could potentially be improved by training the CNN to be more sensitive to shape.
In this article, we verified this assumption by comparing features from five CNN models that had the same network structure while differing in shape sensitivity. Our results show that training a CNN to learn about texture while biasing it to be more sensitive to shape information can improve template matching performance. Furthermore, by comparing template-matching performance when using feature spaces created from all possible combinations of one, two, and three convolutional layers of the CNN, we found that the best results were produced by combining features from both early and late layers. Early layers of a CNN encode lower-level information such as texture, while later layers encode more abstract information such as object identity. Hence, both sets of results (the need to train the CNN to be more sensitive to shape and the need to combine information for early and late layers) suggest that a combination of texture and shape information is beneficial for template matching.
Our main contributions are summarised as follows:
• We created a new benchmark; compared to the existing standard benchmark, it is more challenging, provides a far larger number of image pairs, and is better able to discriminate between the performance of different template matching methods.
• By training a CNN to be more sensitive to shape information and combining features from both early and late layers, we created a feature space in which the performance of most template matching algorithms is improved.
• Using this feature space together with an existing template matching method, DIM [30], we obtained state-of-the-art results on both the standard and new datasets.
This paper is an extension of work originally presented at ICIVS2021 [31]. The conference paper reported the template matching results of the DIM algorithm using features extracted from four VGG19 models with different shape sensitivities in order to determine the best deep feature space for template matching, and then compared the performance of many template matching algorithms in that feature space. The current work adds a review of the latest literature in Section 2, details of the DIM algorithm in Section 3.2, new results using features from a new VGG19 model (Model_E) trained by the latest shape-texture debiased training method [29] along with related discussion in Section 4, visualisation of the results of different template matching algorithms in Section 4.4, and a concluding discussion in Section 5.
Template Matching
Traditional template matching methods calculate the similarity map using a range of metrics, such as the normalised cross-correlation (NCC), sum of squared differences (SSD), and zero-mean normalised cross-correlation (ZNCC), which are applied to the pixel intensity or colour values. However, because these methods rely on comparing the values in the template with those at corresponding locations in the image patch, they are sensitive to changes in lighting conditions, non-rigid deformations of the target object, and partial occlusions, which can result in a low similarity score when one or more of these situations occurs. To overcome the limitations of classic template matching methods, many approaches [7,21,22,[24][25][26],32] have been developed. These methods can be classified into two main categories.
One category attempts to increase tolerance to changes in appearance by changing the computation that is performed to compare the template to the image. For example, Best-Buddies Similarity (BBS) counts the proportion of sub-regions in the template and the image patch that are Nearest-Neighbour (NN) matches [7]. Deformable Diversity Similarity (DDIS) explicitly considers possible template deformation using the diversity of NN feature matches between a template and a potential matching region in the search image [24]. Annulus Projection Transformation and Neighbour Similarity (APT-MNS) [26] builds the global spatial structure of the target object using a novel annulus projection transformation (APT) vector to filter out incorrectly matched NN candidates, then estimates the best-matched candidates using the MNS measurement. Weighted Smallest Deformation Similarity (WSDS) [25] calculates the smallest deformation between each point in the template and its NN matches to explicitly penalise deformation. In addition, weights are defined for points in the template based on their likelihood of belonging to the background, calculated through NN matching with the points around the target window. This reduces the negative effect of background pixels contained in the template box. The Divisive Input Modulation (DIM) algorithm [30] extracts additional templates from the background and lets the templates compete with each other to match the image. Specifically, this competition is implemented as a form of probabilistic inference known as explaining away [33,34], which causes each image element to only provide support for the template that is the most likely match. Previous work has demonstrated that DIM, when applied to colour feature space, is more accurate at identifying features in an image than both traditional and recent state-of-the-art matching methods [30].
The second category of approaches changes the feature space in which the comparison between the template and the image is performed. The aim is for this new feature space to allow better discrimination in template matching while increasing tolerance to changes in appearance. Co-occurrence-based Template Matching (CoTM) transforms the points in the image and template to a new feature space defined by their co-occurrence statistics to quantify the dissimilarity between a template and an image [22]. Quality-Aware Template Matching (QATM) is a method that uses a pretrained CNN model as a feature extractor. It learns a similarity score that reflects the (soft) repeatability of a pattern using an algorithmic CNN layer [21]. Occlusion-Aware Template Matching (OATM) [32] searches for neighbours between two sets of vectors using a hashing scheme based on consensus set maximisation, and is hence able to efficiently handle high levels of deformation and occlusion.
Deep Features
Many template matching algorithms from the first category above can be applied both to deep features and directly to colour images. The deep features used by BBS, CoTM, and QATM are extracted from two specific layers of a pre-trained VGG19 CNN [35], conv1_2 and conv3_4. Following the suggestion in [20] for object tracking, DDIS takes features from a deeper layer, fusing features from layers conv1_2, conv3_4, and conv4_4. In [23], the authors proposed a scale-adaptive strategy to select a particular individual layer of a VGG19 to use as the feature space according to the size of template. In each case, using deep features was found to significantly improve template matching performance compared to using colour features.
A recent study has shown that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes [28]. The same study demonstrated that the standard architecture (ResNet-50) [36] that learns a texture-based representation on ImageNet is able to learn a shape-based representation when trained on 'Stylised-ImageNet', a version of ImageNet that replaces the texture in the original image with the style of a randomly selected painting through AdaIN style transfer [37]. This new shape-sensitive model was found to be more accurate and robust in both object classification and detection tasks. However, the stylised dataset needs to be generated before the training process using a pre-defined set of texture source images. Due to computation and resource limitations, each image in the stylised dataset is transferred with only one random artistic image, which results in a lack of diversity for each sample. Furthermore, the training process is complicated, involving first training the network on both the standard and stylised training datasets and then fine-tuning on the standard dataset. In contrast, [29] proposes a shape-texture debiased training method which provides the corresponding supervision from shape and texture simultaneously. This method is also based on AdaIN style transfer, with the difference in implementation being that it replaces the original texture information with uninformative texture patterns from another randomly selected image in the training mini-batch, rather than with the style of randomly selected artistic paintings. This increases the diversity for each image; hence, this method achieves higher accuracy and robustness than [28] for image classification with the ResNet-50 architecture. Inspired by these findings, in this paper we investigate whether enhancing the shape sensitivity of a CNN can produce more distinguishable features that improve the performance of template matching.
Training CNN with Stylised Data
Previous work on template matching in deep feature space (see Section 2) has employed a VGG19 CNN. To enable a fair comparison with those previous results, we used the VGG19 architecture as well. However, we used five VGG19 models that differed in the way they were trained, so as to encode different degrees of shape selectivity, as summarised in Table 1. Model_A to Model_D were trained using the same approach as in [28], with a stylised dataset generated before the training process; the relative shape sensitivity of these models was controlled by the choice of training dataset. Model_A was trained using the standard ImageNet dataset [35]; we used the pretrained VGG19 model from the PyTorch torchvision library. This model has the least shape bias. Model_B was trained on the Stylised-ImageNet dataset, and thus has the most shape bias. Model_C was trained on a dataset containing the images from both ImageNet and Stylised-ImageNet. Model_D was initialised with the weights of Model_C, followed by fine-tuning on ImageNet for 60 epochs using a learning rate of 0.001 multiplied by 0.1 after 30 epochs. Therefore, Model_C and Model_D have intermediate levels of shape bias, with Model_D being less selective to shape than Model_C. The stylised data samples used for Model_E were generated during training, and the training process provided supervision from shape and texture simultaneously [29]. Hence, Model_E has an intermediate level of shape bias, although where it ranks relative to Model_C and Model_D is impossible to quantify. The learning rate was 0.01, multiplied by 0.1 after every 30 epochs for Model_B and Model_E, and after every 15 epochs for Model_C. The number of epochs was 90 for Model_B and Model_E and 45 for Model_C; as the dataset used to train Model_C was twice as large as that used to train Model_B, the number of weight updates was the same for both models. The other training hyperparameters, used for each model, were a batch size of 256, momentum of 0.9, and weight decay of 1 × 10 −4 . The optimiser was SGD.
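The fine-tuning stage used for Model_D can be sketched in PyTorch as follows, using the hyperparameters stated above; the Model_C checkpoint path and the miniature stand-in data loader are illustrative placeholders only:

```python
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

# Sketch of the Model_D fine-tuning schedule: SGD, lr 0.001 decayed by 0.1
# after 30 of 60 epochs, momentum 0.9, weight decay 1e-4 (batch size 256
# in the real experiments; a tiny random dataset stands in here).

model = torchvision.models.vgg19(weights=None)
# model.load_state_dict(torch.load("model_c.pth"))  # hypothetical Model_C weights

train_loader = DataLoader(  # stand-in for the ImageNet training loader
    TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 1000, (8,))),
    batch_size=4)

optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(60):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decays the learning rate after epoch 30
```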
DIM Template Matching Algorithm
The DIM algorithm has previously been found to produce the best performance for template matching in colour feature space [30]. Hence, it was selected as the underlying algorithm to determine the best CNN feature space to use for template matching. A detailed description of the DIM algorithm can be found in [30]; for the convenience of the reader, a brief introduction is provided below.
In contrast to other template matching methods that only use the appearance of the target, DIM considers potential distractors, that is, regions that are similar to the matching target. These distractors are represented as additional templates that are cropped from the same image as the given template. All of the templates, representing both the target and the distractors, compete with each other to be matched with the search image. This inference process is performed by explaining away [33,38,39]: possible causes (i.e., templates) compete to explain the sensory evidence (i.e., the search image), and if one cause explains part of the evidence (i.e., a part of the image), then support from this evidence for alternative explanations (i.e., other templates) is reduced, or explained away. An example is shown in Figure 1. DIM minimises the Kullback-Leibler (KL) divergence between the input and a reconstruction of the input created by the additive combination of the templates. This requires the input to be non-negative [30]. Therefore, a pre-processing step is required that separates the positive and rectified negative values of the features into two parts, which are then concatenated along the channel dimension:

$$Ipre = \left[ \mathrm{ReLU}(\phi(I)),\; \mathrm{ReLU}(-\phi(I)) \right], \qquad (1)$$

where I is a colour or grayscale image, φ(I) are features extracted from the image, and ReLU is the function that outputs an element of the input directly if it is positive, and otherwise outputs zero.
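A minimal sketch of this pre-processing step (Equation (1)), assuming the features are arranged as a (channels, height, width) tensor, is:

```python
import torch

# Concatenate the positive and rectified-negative parts of the features
# along the channel dimension, doubling the channel count and making the
# input to DIM non-negative.

def make_nonnegative(features):
    return torch.cat([torch.relu(features), torch.relu(-features)], dim=0)

x = torch.randn(64, 32, 32)
assert make_nonnegative(x).shape == (128, 32, 32)
assert (make_nonnegative(x) >= 0).all()
```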
To apply DIM directly to the image feature space, feature extraction was performed as follows:

$$\phi_c(I) = \gamma \left( I_c - f * I_c \right), \qquad c = 1, \ldots, k, \qquad (2)$$

where c is an index over the image channels, k has a value of one for a grayscale image and three for a colour image, γ is a gain factor that was set here to a value of 2, and f is a Gaussian filter with a standard deviation equal to half of the smaller of the template width and height [30]. This operation results in each channel of φ(I) being represented by the deviations between the pixel intensity values and the local mean intensity. In this paper, the five VGG19 models were used as feature extractors; hence, φ(I) represents deep features of I extracted by the CNNs. Both the template and the search image were pre-processed as described in the previous paragraph. For the template image, additional templates were extracted around locations where the correlation between the target template and the image was strongest, excluding locations where the additional templates would overlap with each other or with the bounding box defining the target [30]. Five additional templates were used in the experiments described in this paper. DIM requires the templates to have dimensions that are odd numbers, as otherwise the reconstruction of the input does not align with the actual input; see Equation (3) for details. Therefore, if one side of the target template is even, it is padded with zeros by one row on the right or one column on the bottom, and the new size of the target template is used to generate the additional templates.
DIM was implemented using the following equations:

$$R_i = \sum_{j=1}^{p} w_{ji} * S_j, \qquad (3)$$

$$E_i = Ipre_i \oslash \left[ R_i \right]_{\epsilon_2}, \qquad (4)$$

$$S_j \leftarrow \left[ S_j \right]_{\epsilon_1} \otimes \sum_{i=1}^{k} v_{ji} \star E_i, \qquad (5)$$

where i is an index over the number of input channels (the maximum index k is twice the channel number of the extracted features); j is an index over the p templates being compared to the image; R i is a two-dimensional array representing a reconstruction of Ipre i (I pre-processed using Equation (1)); E i is a two-dimensional array representing the discrepancy (or residual error) between Ipre i and R i ; S j is a two-dimensional array that represents the similarity between template j and the image features at each pixel; w ji is a two-dimensional array representing channel i of template j, with the values in each template w j normalised to sum to one; v ji is another two-dimensional array representing template values (the values of v j were made equal to the corresponding values of w j , except that they were normalised to have a maximum value of one); [·] ε = max(·, ε); ε 1 and ε 2 are parameters whose values were set to ε 2 / max(∑ p j=1 v ji ) and 1 × 10 −2 , respectively; ⊘ and ⊗ indicate element-wise division and multiplication, respectively; and ⋆ and * represent the cross-correlation and convolution operations, respectively. All elements of S were initially set to zero, and Equations (3)-(5) were iteratively updated, terminating after ten iterations in all of the experiments reported in this paper. For a search image I, in order to avoid a poor estimate of φ(I) and edge effects during template matching, when DIM was applied directly to the image feature space, I was first padded on all sides with intensity values that were mirror reflections of the image pixel values near the borders of I. The width of the padding was equal to the width of the template on the left and right borders, and equal to the height of the template on the top and bottom borders. The final similarity maps S were cropped to be the same size as the original image once the template matching method had been applied [30]. When applying DIM to deep feature space, φ(I) was padded using the same method according to the width and height of the template in deep space, and S was cropped to be the same size as φ(I) after application of DIM.
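The update rules of Equations (3)-(5) can be sketched as follows for two-dimensional inputs; this is a simplified illustration (single scale, 'same'-size correlations, our own function names) rather than the authors' implementation:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

# I_pre: (k, H, W) non-negative input; w, v: (p, k, h, w) template arrays,
# with each w_j normalised to sum to one and each v_j to a maximum of one.

def dim_match(I_pre, w, v, iters=10, eps2=1e-2):
    p, k = w.shape[:2]
    eps1 = eps2 / np.max(v.sum(axis=0))
    S = np.zeros((p,) + I_pre.shape[1:])
    for _ in range(iters):
        # Eq. (3): reconstruct each input channel from all similarity maps
        R = np.stack([sum(convolve2d(S[j], w[j, i], mode='same')
                          for j in range(p)) for i in range(k)])
        # Eq. (4): element-wise residual between input and reconstruction
        E = I_pre / np.maximum(R, eps2)
        # Eq. (5): update similarity maps via cross-correlation with errors
        S = np.stack([np.maximum(S[j], eps1) *
                      sum(correlate2d(E[i], v[j, i], mode='same')
                          for i in range(k)) for j in range(p)])
    return S
```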
The best matching location can be represented by the single element with the largest value of the similarity map S j for template j, as in other template matching methods. However, the best matching location is often represented by a small population of neighbouring elements with high values [30]. Therefore, post-processing was performed to sum the similarity values within neighbourhoods:

$$S'_j = S_j * K_e, \qquad (6)$$

where K e is a binary-valued kernel containing ones within an elliptically shaped region, with a width and height equal to α times the width w and height h of the template; α was set to 0.025.
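A sketch of this post-processing step (Equation (6)) is given below; the one-pixel lower bound on the kernel semi-axes is our own assumption to keep the kernel non-empty for small templates:

```python
import numpy as np
from scipy.signal import convolve2d

# Sum similarity values under a binary elliptical kernel whose width and
# height are alpha times the template width w and height h.

def postprocess(S_j, w, h, alpha=0.025):
    a = max(alpha * w / 2.0, 1.0)  # semi-axis along x, in pixels
    b = max(alpha * h / 2.0, 1.0)  # semi-axis along y, in pixels
    ys, xs = np.mgrid[-int(b):int(b) + 1, -int(a):int(a) + 1]
    K_e = ((xs / a) ** 2 + (ys / b) ** 2 <= 1.0).astype(float)
    return convolve2d(S_j, K_e, mode='same')
```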
Dataset Preparation
The BBS dataset [7] has been widely used for the quantitative evaluation of template matching algorithms [7,[21][22][23][24]. This dataset contains 105 template-image pairs sampled from 35 videos (three pairs per video) from a tracking dataset [40]. Each template-image pair is taken from frames of the video that are 20 frames apart. To evaluate the performance of a template matching algorithm, the intersection-over-union (IoU) is calculated between the predicted bounding box and the ground truth box for the second image in the pair. The overall accuracy is then determined by calculating the area under the curve (AUC) of a success curve produced by varying the threshold of IoU that counts as success.
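A sketch of this evaluation protocol, with bounding boxes given as (x, y, w, h) tuples and our own helper names, is:

```python
import numpy as np

def iou(boxA, boxB):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = boxA
    bx, by, bw, bh = boxB
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def auc(ious, thresholds=np.linspace(0.0, 1.0, 101)):
    """Area under the success curve over IoU thresholds."""
    success = [(np.asarray(ious) > t).mean() for t in thresholds]
    return np.trapz(success, thresholds)
```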
Although the BBS dataset is widely used, it is not particularly good at discriminating the performance of different template matching methods. To illustrate this issue, we applied one baseline method (ZNCC) and three state-of-the-art methods (BBS, DDIS, and DIM) to the BBS dataset in colour space. The results show that there are 52 template-image pairs for which all methods generate very similar results; these can be sub-divided into seven template-image pairs for which all methods fail to match (IoU less than 0.1 for all four methods), 13 template-image pairs for which all methods succeed (IoU greater than 0.8 for all four methods), and 32 template-image pairs for which all methods produce similar intermediate IoU values within 0.1 of each other. This means that only 53 template-image pairs in the BBS dataset help to discriminate the performance of these four template matching methods. These results are summarised in Figure 2. We therefore created a new dataset, the King's Template Matching (KTM) dataset, following a similar procedure to that used to generate the BBS dataset. The new dataset contains 200 template-image pairs sampled from 40 new videos (five pairs per video) selected from a different tracking dataset [41]. In contrast to the BBS dataset, the template and the image were chosen manually in order to avoid pairs that contain significant occlusions and non-rigid deformations of the target (which no method is likely to match successfully), and the image pairs were separated by 30 (rather than 20) frames in order to reduce the number of pairs for which matching would be easy for all methods. These changes make the new data more challenging and provide a far larger number of image pairs that can discriminate the performance of different methods, as shown in Figure 2. Both the new dataset and the BBS dataset were used in the following experiments.
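The pair-selection analysis above can be expressed as a small filter over per-method IoU scores; the thresholds follow the text, and the function is our own illustration:

```python
import numpy as np

def non_discriminative(iou_matrix, lo=0.1, hi=0.8, band=0.1):
    """iou_matrix: (n_pairs, n_methods) array. Returns a boolean mask of
    pairs on which all methods fail, all succeed, or score within 0.1
    of each other, and which therefore do not discriminate methods."""
    all_fail = (iou_matrix < lo).all(axis=1)
    all_pass = (iou_matrix > hi).all(axis=1)
    similar = (iou_matrix.max(axis=1) - iou_matrix.min(axis=1)) < band
    return all_fail | all_pass | similar
```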
Template Matching Using Features from Individual Convolutional Layers
To reveal how the shape bias affects template matching, we calculated the AUC using DIM with features from every single convolutional layer of the five models. As the features from the later convolutional layers are down-sampled using max-pooling (by factors of 1/2, 1/4, 1/8, and 1/16 relative to the original image), the bounding box of the template was multiplied by the same scaling factor, and the resulting similarity map was resized back to the original image size in order to make the prediction. The AUC scores across the BBS and KTM datasets are summarised in Figure 3, and the mean and standard deviation of these AUC scores are summarised in Figure 4. For all five models, there is a tendency for the AUC to be higher when template matching is performed using lower layers of the CNN compared to later layers. This suggests that template matching relies more on low-level visual attributes such as texture than on higher-level ones such as shape. Among the four models trained with stylised samples, the AUC score for most CNN layers is greater for Model_D than Model_E, greater for Model_E than Model_C, and greater for Model_C than Model_B. This tendency, which can be clearly seen in Figure 4, suggests that template matching relies more on texture features than shape features. Comparing Model_A and Model_D, it is hard to say which is better. However, the AUC score calculated on the BBS dataset using features from conv4_4 of Model_D is noticeably better than that for Model_A. This suggests that increasing the shape bias of later layers of the CNN could potentially lead to better template matching. However, this result is not reflected in the results for the KTM dataset. One possible explanation is that, in general, the templates in the KTM dataset are smaller than those in the BBS dataset (with the template size defined in terms of area, i.e., as the product of its width and height, the mean template size for the KTM dataset is 1603 pixels 2 , whereas it is 3442 pixels 2 for the BBS dataset). Smaller templates tend to be less discriminative. The sub-sampling that occurs in later layers of the CNN results in templates that are even smaller and less discriminative. This may account for the worse performance of the later layers of each CNN when tested on the KTM dataset rather than the BBS dataset. It also represents a confounding factor in attributing the better performance of the early layers to a reliance on texture information.
In order to illustrate the differences in the features learned by Model_A and Model_D, the first three principal components of conv4_4 were converted to RGB values. As shown in Figure 5, the features from Model_D contain more information about edges (shape) than those from Model_A. However, it is hard to distinguish the small object in the fourth row, as it is represented by a very small region of the feature space.
Template Matching Using Features from Multiple Convolutional Layers
We compared Model_A, Model_D, and Model_E by applying the DIM template matching algorithm to features extracted from multiple convolutional layers of each CNN. In order to combine feature maps with different sizes, bilinear interpolation was used to make them the same size. If the template was small (height times width less than 4000), the feature maps from the later layer(s) were scaled to be the same size as those in the earlier layer(s). If the template was large, the feature maps from the earlier layer(s) were reduced in size to be the same size as those in the later layer(s). To maintain a balance between low- and high-level features, the dimension of the feature maps from the later layer(s) was reduced by PCA to the same number as in the earlier layer; a sketch of this fusion procedure is given at the end of this subsection. Table 2 shows the AUC scores produced by DIM using features from two convolutional layers of Model_A, Model_D, and Model_E. All possible combinations of two layers were tested; the table shows only selected results with the best performance. It can be seen from Table 2 that, of the 24 layer combinations for which results are shown, 21 are better for Model_D than for Model_A on both the BBS and KTM datasets, and 14 (BBS) and 13 (KTM) are better for Model_E than for Model_A. Hence, both networks with more shape bias perform better than the network with the least shape bias. These results thus support the conclusion that more discriminative features can be obtained by increasing the shape bias of the VGG19 model, which increases the performance of template matching.
The results for Model_D are better than those for Model_E for 17 of the 24 layer combinations for the BBS dataset and for 18 of the 24 layer combinations for the KTM dataset. Furthermore, the best result for each dataset (indicated in bold) is generated using the features from Model_D. Hence, among the three models, Model_D produced best performance. To determine whether fusing features from more layers would further improve template matching performance, DIM was applied to all combinations of three layers from Model_D, resulting in a total of 560 different combinations using three layers. As it is impossible to show all these results in this paper, the highest ten AUC scores are shown in Table 3. For both datasets, using three layers produced an improvement in the best AUC score (around 0.01) compared to using two layers.
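The layer-fusion procedure described above can be sketched as follows; the 4000-pixel² threshold follows the text, while the PCA helper and function names are our own illustration:

```python
import torch
import torch.nn.functional as F

def pca_reduce(feat, n_components):
    """Reduce the channel dimension of a (c, h, w) feature map by PCA."""
    c, h, w = feat.shape
    X = feat.reshape(c, -1).T          # (pixels, channels)
    X = X - X.mean(dim=0)
    _, _, V = torch.pca_lowrank(X, q=n_components)
    return (X @ V).T.reshape(n_components, h, w)

def fuse(early, late, template_area):
    # Small templates: upsample the later layer; large: downsample the earlier.
    size = early.shape[1:] if template_area < 4000 else late.shape[1:]
    early = F.interpolate(early[None], size=size, mode="bilinear")[0]
    late = F.interpolate(late[None], size=size, mode="bilinear")[0]
    late = pca_reduce(late, early.shape[0])  # balance channel counts
    return torch.cat([early, late], dim=0)
```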
Comparison with Other Methods
This section compares our results with those produced by other template matching methods in both colour and deep feature space. When evaluated on the BBS dataset, the deep features used by each template matching algorithm were the features from layers conv1_2, conv4_1, and conv4_4 of Model_D. When evaluated on the KTM dataset, the deep features used as the input to each algorithm were those from layers conv1_1, conv3_4, and conv4_2 of Model_D. BBS, CoTM, and QATM have been tested on BBS data by their authors using different deep features, and thus we compared our results to these earlier published results as well.
The comparison results are summarised in Table 4, and examples of the results for particular images are shown in Figure 6. All methods except QATM and BBS produce improved results using the proposed deep features compared to using colour features. This is true for both datasets. Of the methods that have previously been applied to deep features, the performance of two (NCC and QATM) is improved, while that of two others (BBS and CoTM) is made worse when using our proposed method to define the deep feature space. Potential further improvements to the performance of these methods could be achieved by optimising the feature extraction method for the individual template matching algorithm, as has been done here for DIM. However, it should be noted that simple metrics for comparing image patches, such as NCC and ZNCC, produce close to state-of-the-art performance when applied to our proposed deep feature space, outperforming much more complex methods of template matching such as BBS, CoTM, and QATM when these methods are applied to any of the tested feature spaces, including those proposed by the authors of these algorithms. ZNCC and NCC produce very similar scores on both datasets. ZNCC is similar to NCC, with the only difference being the subtraction of the local mean value from the feature vectors being compared. This operation makes ZNCC more robust to changes in lighting conditions when applied directly to colour images, and for this reason the AUC score of ZNCC on both datasets is higher than that of NCC in colour space. The features extracted by the CNN appear to be insensitive to lighting changes; therefore, the results of NCC and ZNCC are remarkably similar when applied to these features.

Table 4, note 1: We were unable to reproduce this result using the code provided by the authors of CoTM. Our different result is shown in the table.

Table 4, note 2: The authors of QATM report an AUC score of 0.69 when this method is applied to the BBS dataset [21]. However, examining their source code, we note that this result is produced by setting the size of the predicted bounding box equal to the width and height of the ground-truth bounding box. Other methods are evaluated by setting the size of the predicted bounding box equal to the size of the template (i.e., without using knowledge of the ground truth that the algorithm is attempting to predict). We re-tested QATM using the standard evaluation protocol, and our result for the original version of QATM is 0.62. As QATM is designed to work specifically with a CNN, it was not applied directly to colour images.
One known weakness of BBS is that it may fail when the template is very small compared to the target image [7]. This may explain the particularly poor results of this method when applied to the KTM dataset.
DIM achieves the best results on both datasets when applied to deep features. DIM performs particularly well on the BBS dataset, producing an AUC of 0.73, which, as far as we are aware, makes it the only method to have scored more than 0.7 on this dataset. The DIM algorithm produces state-of-the-art performance on the KTM dataset when applied to deep features. When applied to colour features, the results are good, although not as good as DDIS on the KTM dataset. This is because small templates in the KTM dataset may contain insufficient detail for the DIM algorithm to successfully distinguish the target object. Using deep features enhances the discriminatory ability of small templates enough that the performance of DIM increases significantly. These results demonstrate that the proposed approach is effective at extracting distinguishable features, which lead to robust and accurate template matching.
Discussion
The experiments described above demonstrate that template matching relies more on low-level visual attributes such as texture than higher-level attributes such as shape. However, it is clear that slightly increasing the shape bias of a CNN by changing the method of training the network and then combining the outputs of a range of convolutional layers produces a feature space in which template matching can be achieved with greater accuracy. This is because the combination of low-level features that can accurately locate the target with high-level features that are more tolerant to appearance changes enables more robust recognition and localisation of the target object.
Conclusions
Our results demonstrate that slightly increasing the shape bias of a CNN by changing the method used to train the network can produce more distinguishable features, allowing template matching to be achieved with greater accuracy. By running a large number of experiments using shape-biased VGG19 architectures, we determined the best combination of convolutional features on which to perform template matching with the DIM algorithm. This same feature space was shown to improve the performance of most other template matching algorithms as well. When applied to our new feature space, the DIM algorithm was able to produce state-of-the-art results on two benchmark datasets.
"Computer Science"
] |
Subband Adaptive Array for DS-CDMA Mobile Radio
We propose a novel scheme of subband adaptive array (SBAA) for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we despread the array outputs at each output tap directly with the desired user's code, saving the synthesis filter. Although its configuration is far different from that of 2D RAKEs, the proposed scheme exhibits relatively equivalent performance to 2D RAKEs while having a lower computation load due to utilising adaptive signal processing in subbands. Simulation programs are carried out to explore the performance of the scheme and compare its performance with that of the standard 2D RAKE receiver.
INTRODUCTION
Digital mobile communications are affected by multipath fading and interference, causing reduced channel capacity and impaired signal quality. One approach to overcoming the problem is to use the spread spectrum technique, or specifically code division multiple access (CDMA). The use of orthogonal codes with large processing gain can help to reduce the cochannel interference (CCI) and prevent users from interfering with each other, that is, reduce the multiple access interference (MAI) [1]. Another approach to cancelling interference and increasing channel capacity is to employ an array antenna at the base station. The use of an array antenna with an appropriate adaptive algorithm adds another dimension, namely the spatial dimension, to channel estimation, resulting in spatio-temporal signal processing, which has been realised as an efficient scheme for the improvement of capacity and interference suppression [2].
The combination of an array antenna and CDMA to maximise performance benefits was first presented by Compton in [3] and studied further in [4,5,6,7,8,9]. It was clearly shown that this combination helps to greatly reduce interference and improve channel capacity. When the RAKE receiver is integrated with an adaptive array to become a two-dimensional (2D) RAKE, multipaths are better combined, since information from both the spatial and temporal domains can be exploited to estimate the propagation channel. Several schemes of 2D RAKE have been proposed and
studied in [5,6,7]. A typical configuration of 2D RAKE receivers often contains a beamforming structure followed by a conventional one-dimensional (1D) RAKE, as presented in [6,7]. By using these 2D RAKEs, the system performance has been shown to be greatly improved compared with that of a 1D RAKE receiver alone or of CDMA with an adaptive array antenna without RAKE combination. However, the beamforming structures used in these 2D RAKEs require a large computation load, which results in increased processing delay. A solution to reduce the computational load is to use subband signal processing for the array antenna, i.e., a subband adaptive antenna (SBAA), which has recently been introduced in [10,11,12]. SBAA utilises an analysis filter bank to decompose the received signal into subbands and performs adaptive signal processing in each subband. The output signals at the output taps are then reconstructed using a synthesis filter bank. By doing so, the computational load of SBAA decreases significantly compared with broadband beamformers such as the tapped delay line adaptive array (TDLAA) [13]. Moreover, compared with conventional adaptive arrays performing only spatial processing (narrow-band beamformers), the use of SBAA helps to increase the correlation between multipaths [11] and allows the implementation of parallel processing. Therefore, SBAA can be considered a prospective candidate for spatial-temporal processing.
In this paper, we propose a novel scheme of adaptive array for direct-sequence CDMA (DS-CDMA) using subband signal processing, to make use of the advantages of both CDMA and SBAA. The subband structure of the scheme is similar to that introduced in [10]. However, the method used to generate the reference signal and to combine the array outputs is different. In our approach, we use the spreading code and pilot signal as the reference signal to estimate the propagation channel. To generate the reference signal, the user's spreading code is first transformed to the frequency domain using the fast Fourier transform (FFT), and then this transformed code is used to despread the pilot signal. Moreover, instead of combining the array outputs at the output taps using a synthesis filter and then despreading them, we despread the array outputs directly with the desired user's code, and thus the synthesis filter is saved. Although its configuration is far different from that of the 2D RAKE receivers, the proposed scheme exhibits relatively equivalent performance while having a lower computation load due to utilising adaptive signal processing in subbands. For this reason, we call the scheme an implicit 2D RAKE receiver.
We organise the rest of the paper as follows. In Section 2, we present the description of the proposed scheme of SBAA for CDMA, focusing on its capability to resolve multipath fading and suppress interference. In Section 3, we compare the performance of the implicit 2D RAKE with that of the standard 2D RAKE using simulation results obtained from computer programs. Finally, we conclude the paper in Section 4.
Configuration description
In this section, we provide the description of the proposed SBAA scheme for DS-CDMA. The configuration of the scheme is shown in Figure 1.
Consider an asynchronous direct-sequence spread BPSK system, where after demodulation to remove the carrier frequency, the received signal of the ith user is given by

s_i(t) = α_i b_i(t) c_i(t), (1)

where α_i is the complex amplitude of the received signal, b_i(t) is the ith user's symbol, given for BPSK modulation as

b_i(t) = b_i,k ∈ {−1, +1}, kT_b ≤ t < (k + 1)T_b, (2)

and c_i(t) is the spreading code assigned to the ith user, with

c_i(t) = c_i,m ∈ {−1, +1}, mT_c ≤ t < (m + 1)T_c. (3)

In (2) and (3), T_b and T_c are the bit and chip intervals, respectively. In practical systems, T_b is often selected to be much larger than T_c to have high processing gain, that is, P_G = T_b/T_c ≫ 1. Assume that the system is affected by multipath fading, where the received signal from the ith user contains P_i multipaths with different amplitudes α_i,p, delays τ_i,p, and arrival angles θ_i,p. Taking into consideration the effect of all I users and local noise, the received signal at the array can be written as

x(t) = Σ_{i=0}^{I−1} Σ_{p=1}^{P_i} α_i,p b_i(t − τ_i,p) c_i(t − τ_i,p) a(θ_i,p) + n(t), (4)

where a(θ_i,p) is the array response vector corresponding to the pth path of the ith user's signal, and n(t) is the noise vector containing independent and identically distributed (i.i.d.) noise in each element. For a linear uniformly spaced array of M elements, a(θ_i,p) is given by

a(θ_i,p) = [1, e^{−j2π(d/λ) sin θ_i,p}, . . ., e^{−j2π(M−1)(d/λ) sin θ_i,p}]^T, (5)

where λ is the signal wavelength, d is the distance between array elements, and [•]^T denotes the vector transpose operation. Now if we define

u_i,p(t) = α_i,p b_i(t − τ_i,p) c_i(t − τ_i,p) a(θ_i,p) (6)

as the signal vector received from the pth path of the ith user, then (4) can be rewritten as

x(t) = Σ_{i=0}^{I−1} Σ_{p=1}^{P_i} u_i,p(t) + n(t). (7)

Next, the received signal x(t) is decimated with a decimation rate D which is smaller than or equal to the number of subbands K before being converted into frequency-domain subband samples. For D = K, we have a critical sampling SBAA, and for D < K, we have an oversampling SBAA. In our approach, we use critical sampling to reduce the complexity in generating the reference signal for the training process. As a result, the decimation rate D in Figure 1 is equal to the number of subbands K. Since critical sampling is assumed, the analysis filter works as a serial-to-parallel (S/P) converter and converts serial signal samples into parallel subband samples. These time-domain subband samples are then transformed into frequency-domain subband samples using the FFT. Denoting bold symbols with an overhead tilde as vectors containing samples in the frequency domain, the subband signal vector x̃^(n) at the nth subband collects the nth FFT bin of each length-K block of array samples. In order to perform the adaptive signal processing in subbands, it is necessary that the reference signal also be converted into frequency-domain subbands like the received signal. In our proposed configuration of SBAA for DS-CDMA, the reference signal is generated from the desired user's spreading code and the pilot signal. First, the user's spreading code is transformed into the frequency domain using the FFT, and then this frequency-domain spreading code is used to spread the pilot signal. The result of this process is the frequency-domain reference samples for each subband. Suppose that the 0th user is taken as the user of interest (desired user), while the remaining (I − 1) users are undesired users. Assume that the pilot signal of the 0th user is d_0(t); then the frequency-domain reference samples at the nth subband are given by

r̃^(n) = c̃_0(n) d_0, n = 0, . . ., K − 1, (10)

where c̃_0(n) is the nth FFT coefficient of the desired user's spreading code. It should be pointed out that the spreading code length is equal to the number of subbands K, and thus the array configuration is dependent on the initial selection of the spreading code length.
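To make the block processing concrete, the following R sketch (not from the paper) builds the array response of a half-wavelength linear array and converts one length-K block of chip-rate array samples into frequency-domain subband samples; the sizes, the random spreading code, and the 20-degree arrival angle are illustrative assumptions.

K <- 32                                    # number of subbands = decimation rate D
M <- 4                                     # antenna elements
d_over_lambda <- 0.5                       # half-wavelength element spacing
a_vec <- function(theta)                   # array response vector, cf. equation (5)
  exp(-1i * 2 * pi * d_over_lambda * (0:(M - 1)) * sin(theta))

c0 <- sample(c(-1, 1), K, replace = TRUE)  # hypothetical length-K spreading code
x  <- outer(a_vec(20 * pi / 180), c0)      # M x K block of noiseless array samples

# S/P conversion plus FFT: column n of x_sub holds the M antenna samples
# of subband n for this block
x_sub <- t(apply(x, 1, fft))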
In the training process, the complex weights in subbands are updated by the error signal, defined as the difference between the combined subband signal f̃^(n) and the reference signal in subbands r̃^(n), as follows:

ẽ^(n) = r̃^(n) − f̃^(n). (11)

Using the mean square error E[|ẽ^(n)|^2] as a criterion to optimise the complex weights results in the optimal weight vectors in subbands given by the well-known Wiener-Hopf equation

w̃_opt^(n) = (R̃^(n))^{−1} p̃^(n), (12)

where R̃^(n) = E[x̃^(n) (x̃^(n))^H] are the covariance matrices and p̃^(n) = E[x̃^(n) (r̃^(n))^*] are the reference correlation vectors in subbands. Here E[•], (•)^*, and (•)^H denote the expectation, the complex conjugate, and the Hermitian operation, respectively.
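A minimal R sketch of the per-subband Wiener solution estimated from training data (sample matrix inversion); here x_snap is assumed to be an M x N matrix of N ≥ M training snapshots for one subband, and r_ref the corresponding length-N frequency-domain reference samples, both hypothetical placeholders.

smi_weights <- function(x_snap, r_ref) {
  N   <- ncol(x_snap)
  Rxx <- x_snap %*% Conj(t(x_snap)) / N   # sample covariance matrix, M x M
  p   <- x_snap %*% Conj(r_ref) / N       # reference correlation vector
  solve(Rxx, p)                           # w_opt = R^{-1} p, cf. equation (12)
}

# combined subband output for a snapshot x1: f = w^H x1
combine <- function(w, x1) sum(Conj(w) * x1)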
The subband signals, after being weighted by the optimal weights, are combined within each subband, and the inverse FFT (IFFT) is then performed on the subband combined signals f̃^(n) to give the array outputs y_k(t) in the time domain. To convert these array outputs to a serial signal, a synthesis filter, or a parallel-to-serial (P/S) converter in the case of the critical sampling SBAA, is often needed [12,13,14,15,16]. Since the signal-to-interference-plus-noise ratio (SINR) performance of SBAA does not depend on the synthesis filter [12], in our approach, instead of converting y_k(t) into a serial signal y(t) and then despreading this serial signal, we despread y_k(t) directly by the desired user's spreading code c_0(t) to save the synthesis filter bank. The role of this despreading part is the same as that of the correlator in direct-sequence spread BPSK receivers.
By using our proposed configuration of SBAA for DS-CDMA, several advantages can be achieved, including a 2D RAKE's function, although its configuration is far different from that of the conventional 2D RAKE receiver. For this reason, hereafter we will call the proposed SBAA for DS-CDMA an implicit 2D RAKE receiver.
Implicit 2D RAKE versus 2D RAKE
In this section, we compare the performance of the proposed implicit 2D RAKE with that of the standard 2D RAKE. A standard RAKE receiver often employs a TDL with complex weights to coherently/incoherently combine delayed paths to maximise the output SINR [8]. This standard RAKE is also referred to as the 1D RAKE since only the temporal structure of the received signal is exploited to estimate the channel response [6]. Due to the increasing research results on spatiotemporal processing, a new configuration of RAKE, called the spatio-temporal RAKE receiver, has been introduced in [5,6,7]. The spatio-temporal RAKE, also known as the 2D RAKE receiver, is an extension of the 1D RAKE in which a conventional time-domain RAKE receiver is combined with an adaptive array antenna to exploit both spatial and temporal structures of the received signal for maximum power combination of delayed paths. Due to the additional spatial dimension, both multipath fading and MAI are better mitigated, leading to increased channel capacity and improved output SINR [6]. When constructing 2D RAKE receivers for CDMA, there exist different methods to integrate the 1D RAKE with an adaptive array antenna, resulting in different variations of the 2D RAKE such as those in [5,6]. In this paper, for the purpose of comparing our proposed implicit 2D RAKE with 2D RAKEs, we will consider only the standard 2D RAKE given in Figure 2. This standard 2D RAKE is similar to the one introduced in [7]. The main principle of the standard 2D RAKE is that the received signal at the mth antenna, s_m(t), is first put through a TDL of length K. The output signals from the TDL are then multiplied with an optimum weight vector w_m = [w_m,1 w_m,2 . . . w_m,K]^T and combined together. After that, the combined signals from each antenna are combined with each other and despread to give the output signal y(t). Note that the received signals s_m(t) are processed on a chip-by-chip basis by the standard 2D RAKE rather than in the block-by-block mode as in the implicit 2D RAKE. Moreover, the process used to update weights in the standard 2D RAKE is done in the time domain, in contrast to the frequency domain in the implicit 2D RAKE.
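For reference, the space-time combining just described reduces to a single conjugate-weighted sum, sketched below in R; s_taps is a hypothetical M x K matrix whose row m holds the K TDL outputs of antenna m, and W the matching complex weights.

# y = sum_m w_m^H s_m: conjugate-weight each tap and sum over antennas and taps
combine_2drake <- function(s_taps, W) sum(Conj(W) * s_taps)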
Now we consider the implicit 2D RAKE presented in Section 2.1. The implicit 2D RAKE receiver is different from the 2D RAKE receivers presented so far [5,6,7] in that it performs adaptive signal processing (beamforming) in the subband frequency domain rather than in the full-band time domain. By using subband frequency-domain processing, the implicit 2D RAKE has the following performance characteristics compared with the standard 2D RAKE.
(i) Relatively equivalent performance. Since SBAA using the FFT is a theoretically equivalent form of TDLAA, the performances of both adaptive arrays are relatively equal. In [13], Compton has shown that the output SINR of TDLAA is identical to that of SBAA using the FFT provided that the number of taps in the TDLs is the same as the number of samples used by the FFT. Consequently, the performance of the implicit 2D RAKE receiver is also the same as that of the standard 2D RAKE if the number of subbands K of the implicit 2D RAKE is the same as the number of employed taps in the standard 2D RAKE. This is true for a single-path environment, since the output SINRs of both 2D RAKEs are given as a function of the number of antennas M, the processing gain P_G, and the input signal-to-noise ratio SNR_in as follows:

SINR_out [dB] = 10 log10(M) + 10 log10(P_G) + 10 log10(SNR_in). (13)

However, in a multipath fading environment, the conclusion of [13] is no longer valid due to its lack of consideration of the correlation between multipaths. Although subband signal processing has been shown to have the capability to enhance the multipath correlation [11,12], the performance of SBAA is in effect still not as good as that of TDLAA [12]. Assume that there are two multipaths with equal powers incident at the array: the direct path with angle of arrival (AOA) of 0° and the delayed path with AOA = 30°. For a linear half-wavelength spaced array antenna, the two paths are orthogonal and thus totally uncorrelated. In this case, if the delay of the delayed path is smaller than the number of employed taps, the output SINR of the standard 2D RAKE is given by

SINR_out [dB] = 10 log10(M) + 10 log10(P_G) + 10 log10(SNR_in) + 10 log10(2), (14)

whereas the output SINR of the implicit RAKE decreases from the value of (14) to the value of (13) depending on the delay of the delayed path. When the delay is very large, there may be a difference of up to 3 dB in the output SINRs of the two 2D RAKE receivers. The performance degradation of the implicit 2D RAKE can be attributed to the block mode, that is, decimation, in processing the received signals. As we will explain in the next part, by decimating the received signals, the implicit 2D RAKE achieves significantly reduced computational complexity while sacrificing some multipath correlation. However, as clearly shown in [11], if the number of subbands, or equivalently the length of the spreading code, is chosen large enough, the implicit 2D RAKE can obtain almost full multipath correlation, leading to smaller degradation. Moreover, it is noted that practical DS-CDMA systems often suffer multipaths with delays of about several chips. Therefore, if the channel suffers from small delay and the length of the spreading code is chosen large enough, the output SINR of the implicit 2D RAKE will be relatively equal to that of the standard 2D RAKE receiver. This conclusion will be supported by simulation results in Section 3.
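As a quick numeric illustration of the two limits in R (the values of M, P_G, and SNR_in are arbitrary choices, not taken from the paper's simulations):

M <- 4; PG <- 32; SNRin_dB <- 0
lower <- 10 * log10(M) + 10 * log10(PG) + SNRin_dB  # equation (13), uncorrelated paths
upper <- lower + 10 * log10(2)                      # equation (14), fully combined paths
c(lower = lower, upper = upper)                     # about 21.1 dB and 24.1 dB, ~3 dB apart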
(ii) Reduced computational load. This can be seen by comparing the processing methods of the implicit and standard 2D RAKE receivers. While the standard 2D RAKE processes the received signal on a chip-by-chip basis, this is done in block-by-block mode by the implicit 2D RAKE. As a result, the implicit 2D RAKE requires fewer mathematical operations than the standard 2D RAKE does. For a K-tap and M-element array antenna, the standard 2D RAKE employing the sample matrix inversion (SMI) algorithm requires (KM)^3 multiplications for each weight update. The implicit 2D RAKE with K subbands, on the other hand, needs only KM^3 multiplications [13]. Taking into account the 2K log2(K) multiplications due to both FFT and IFFT processing, the computational load required by the implicit RAKE is K(M^3 + 2 log2(K)). Since DS-CDMA systems are often implemented with large processing gain P_G, K is large, and thus (KM)^3 ≫ K(M^3 + 2 log2(K)). Consequently, the use of the implicit 2D RAKE helps to save a considerably large amount of computational load.
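The gap is easy to quantify; the R lines below evaluate both operation counts for one illustrative size (K and M are arbitrary choices, not the paper's simulation settings).

K <- 64; M <- 4
standard <- (K * M)^3                  # SMI over the full KM-dimensional space
implicit <- K * (M^3 + 2 * log2(K))    # per-subband SMI plus FFT/IFFT overhead
c(standard = standard, implicit = implicit, ratio = standard / implicit)  # ratio ~3400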
(iii) Parallel structure. Parallel structure is an advantage of the implicit 2D RAKE over the standard 2D RAKE. It allows implementation of parallel processing, that is, distribution of tasks over different digital signal processors (DSPs), which is very convenient for constructing high-complexity systems such as an adaptive array with a large number of antenna elements.
SIMULATION RESULTS
In this section, we carry out the performance analysis of the implicit 2D RAKE using simulation results from computer programs. We will focus our analysis mainly on two capabilities of the implicit 2D RAKE: (i) multipath combining capability and (ii) interference suppression capability. While interference suppression is an inherent capability of an adaptive array antenna, the multipath combining capability is gained thanks to the use of subband signal processing. We note here again that a conventional adaptive array performs only spatial processing (narrow-band beamforming), and thus does not have the capability to combine multipath components. SBAA, on the other hand, was shown in [11] to be able to increase the multipath correlation, and thus has the capability to combine multipath components. We also compare the performance of the implicit 2D RAKE with that of the standard 2D RAKE to support our discussion in Section 2. The simulation model is given in Table 1.
For simplicity, when performing the simulation, we assume perfect synchronisation of the pilot signal, and we use the recalculation method to obtain the output SINR. The 1000 data symbols are first used as training symbols to obtain the optimal weights by the SMI algorithm. These symbols are then used again as data symbols to calculate the output SINR.
Multipath combining capability
The multipath combining capability of the implicit 2D RAKE is illustrated in Figures 3, 4, and 5. In Figure 3, we assume that there are two multipaths incident at the array: the direct path with θ_0,0 = 0° and delay τ_0,0 = 0 chips, and the delayed path with θ_0,1 = 30° and delay τ_0,1 varying from 0 to K chips. The output SINR of the implicit 2D RAKE decreases gradually between the two theoretical limits as the delay of the delayed path increases. The upper limit is the SINR value when the two paths are completely correlated, calculated using (14), while the lower limit is the SINR value calculated using (13), corresponding to the case in which the two paths are totally uncorrelated. It is also noted that the performance of the standard 2D RAKE is better than that of the implicit 2D RAKE in that the output SINR of the standard 2D RAKE is kept almost constant at the upper theoretical limit. The reason why the implicit 2D RAKE cannot achieve the same output SINR as the standard 2D RAKE is explained as follows. Since the standard 2D RAKE utilises TDLs to combine multipaths, if the delays of the multipaths are within the length of the TDLs, the correlation between multipath components is fully maintained and thus its output SINR is maximised. On the other hand, the correlation between multipaths in each subband of the implicit 2D RAKE decreases as the delay of the delayed paths increases [11], causing the output SINR to deteriorate as shown in Figure 3. Thus it is clear that if the delay is smaller than K, the standard 2D RAKE combines multipaths better than the implicit 2D RAKE does. However, practical DS-CDMA systems often suffer multipaths with delays of about several chips, and in such cases the implicit 2D RAKE can achieve performance relatively equivalent to that of the standard 2D RAKE, particularly at low input SNR.
Figure 4 shows the output SINRs as the AOA of the delayed ray, θ_0,1, varies. It can be seen that if the delayed ray arrives at the array from an AOA significantly different from the direct ray, then better output SINR can be achieved by both 2D RAKEs. The reason for this is that when the difference in the AOAs of the two paths is large enough, the 2D RAKEs can produce a supplementary lobe with a certain gain pointing towards the AOA of the delayed ray. By doing so, the power of the delayed ray is optimally combined to maximise the output SINR. Whereas when the difference in the AOAs is small, the 2D RAKEs cannot create the additional lobe, causing the two paths to share the same main lobe; thus the power of the delayed path cannot be optimally combined, leading to poorer output SINR. It is also noticed that when the delay of the delayed path is small, namely, when τ_0,1 = 1 chip, the performances of the implicit 2D RAKE and the standard 2D RAKE are almost the same. However, as the delay of the delayed path increases, the performance of the implicit 2D RAKE becomes worse than that of the standard 2D RAKE. For τ_0,1 = 5 chips, the standard 2D RAKE achieves approximately 1.7 dB better output SINR than the implicit 2D RAKE does.
Figure 5 compares the performances of the two 2D RAKEs for different numbers of antenna elements and input SNRs. In this case, we assume that the received signal contains three multipaths: the direct path with θ_0,0 = 0°/τ_0,0 = 0 chips, the first delayed path with θ_0,1 = 15°/τ_0,1 = 1 chip, and the second delayed path with θ_0,2 = −20°/τ_0,2 = 2 chips. We define the input SNR as the power ratio of each path to the noise, and compare the performances of the two 2D RAKEs for three values of input SNR: −10 dB, 0 dB, and 10 dB. It is seen from Figure 5 that the performances of the two 2D RAKEs are relatively equivalent, particularly for low input SNRs. The reason why the implicit 2D RAKE cannot obtain the same output SINR as the standard 2D RAKE does at high input SNRs can be explained as follows. Since the signal power at the array output includes both the power of the desired signal and an amount of desired signal power correlated in the multipaths, the difference between the output SINRs of the two 2D RAKE schemes depends mainly on the capability to extract the correlated signal power from the multipaths. At low input SNR, noise power is dominant, and the output SINRs of the two schemes are thus similar. However, at higher input SNR, the signal and the correlated signal power become dominant. Since the standard 2D RAKE has been shown to combine multipaths better, the correlated power it can extract from the multipaths is larger than that the implicit 2D RAKE can extract. Consequently, the SINR performance of the standard 2D RAKE is better than that of the implicit 2D RAKE at high input SNR.
Interference suppression capability
We now compare the MAI cancellation capabilities of the implicit 2D RAKE and the standard 2D RAKE. The propagation model is set up with one desired user and three other undesired users, with interference-to-noise ratio INR = 0 dB, as MAI sources. For each user's signal, we assume there are one direct ray and two delayed rays with AOAs and delays as given in Figure 6. In the figure, the notation a°/d means that the path is incident at the array from arrival angle a° with a delay of d chips. When there are no multipaths in any user's signal, that is, each user's signal contains only the direct path (with zero delay), the propagation environment is called "interference only"; whereas if there are multipaths, it is defined as the "interference plus multipath" environment.
The interference suppression capability of the two 2D RAKE schemes is shown in Figure 7, where the solid and dotted lines denote the output SINRs of the interference plus multipath and the interference only environments, respectively. It is noticed that in the interference only environment, both 2D RAKEs have the same interference suppression capability. However, when multipaths exist, the performance of the implicit 2D RAKE deteriorates by about 1.5 dB compared with that of the standard 2D RAKE. Therefore, it is concluded that although the implicit 2D RAKE achieves the same interference suppression capability as the standard 2D RAKE, it suffers more seriously from the multipaths of the interferers than the standard 2D RAKE does.
The normalised power patterns of the two 2D RAKEs corresponding to Case 4 of Figure 6 are compared in Figure 8. It is observed that the two 2D RAKEs produce the same power patterns in the interference only environment. However, when there are multipaths, the power pattern of the implicit 2D RAKE becomes worse in that its nulls are not correctly pointed toward the direct paths of the interferers, causing the poorer performance. Note that the multipaths are not perfectly combined in the implicit 2D RAKE, particularly with the 32 subbands used in the simulation. Therefore, the nulls are slightly inclined from the directions of the interferers. For a larger number of subbands, or equivalently a longer spreading code, it is expected that the power patterns of the two 2D RAKEs will be the same.
CONCLUSION
We have presented a novel configuration of subband adaptive array for DS-CDMA mobile radio, called the implicit 2D RAKE. It has been clearly shown that the implicit 2D RAKE can obtain performance relatively equivalent to that of the standard 2D RAKE while saving a large amount of computational load. The proposed configuration therefore can be well applied to DS-CDMA systems to maximise the performance benefits.
It should be noted that the performance of the implicit 2D RAKE can be improved to match that of the conventional 2D RAKE by combining it with a cyclic prefix data transmission scheme [16]. For CDMA, we have introduced the so-called cyclic prefix spreading code [15], which can maximise the diversity gain of the implicit 2D RAKE in a multipath fading environment. This proposed scheme will be the topic of a different paper.
| 8,064 | 2004-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
RBF: An R package to compute a robust backfitting estimator for additive models
Although highly flexible, non-parametric regression models typically require large sample sizes to be estimated reliably, particularly when they include many explanatory variables. Additive models provide an alternative that is more flexible than linear models, not affected by the curse of dimensionality, and also allows the exploration of individual covariate effects. Standard algorithms to fit these models can be highly susceptible to the presence of a few atypical or outlying observations in the data. The RBF (Salibian-Barrera & Martínez, 2020) package for R implements the robust estimator for additive models of Boente et al. (2017), which can resist the damaging effect of outliers in the training set.
Statement of Need
The purpose of RBF is to provide a user-friendly implementation of the robust kernel-based estimation procedure for additive models proposed in Boente et al. (2017), which is resistant to the presence of potentially atypical or outlying observations in the training set.
Implementation Goals
RBF implements a user interface similar to that of the R package gam (Hastie, 2019), which computes the standard non-robust kernel-based fit for additive models using the backfitting algorithm. The RBF package also includes several modeling tools, including functions to produce diagnostic plots, obtain fitted values, and compute predictions.
Background
Additive models offer a non-parametric generalization of linear models (Hastie & Tibshirani, 1990). They are flexible, interpretable, and avoid the curse of dimensionality, whereby, as the number of explanatory variables increases, neighbourhoods rapidly become sparse and many fewer training observations are available to estimate the regression function at any one point.
If Y denotes the response variable, and X = (X_1, . . ., X_d)^⊤ a vector of explanatory variables, then an additive regression model postulates that

Y = μ + g_1(X_1) + · · · + g_d(X_d) + σ ϵ, (1)

where the error ϵ is independent of X and its distribution is centered at zero, σ > 0 is an unknown scale parameter, the location parameter μ ∈ R, and g_j : R → R are smooth functions. Note that if for all 1 ≤ j ≤ d we have g_j(X_j) = β_j X_j for some β_j ∈ R, then Equation 1 reduces to a standard linear regression model.
The backfitting algorithm (Friedman & Stuetzle, 1981) can be used to fit the model in Equation 1 with kernel regression estimators for the smooth components g_j. It is based on the following observation: under Equation 1 the additive components satisfy

g_j(x) = E[ Y − μ − Σ_{k≠j} g_k(X_k) | X_j = x ].

Thus, each g_j is iteratively computed by smoothing the partial residuals as functions of X_j.
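The following R sketch (not the RBF or gam implementation) shows this classical backfitting iteration with a simple Nadaraya-Watson kernel smoother; the choice of smoother, the fixed number of iterations, and the given bandwidths h are simplifying assumptions.

nw_smooth <- function(x, y, h) {          # local constant kernel smoother
  sapply(x, function(x0) {
    w <- dnorm((x - x0) / h)
    sum(w * y) / sum(w)
  })
}

backfit <- function(X, y, h, iters = 20) {
  d  <- ncol(X)
  mu <- mean(y)
  g  <- matrix(0, nrow(X), d)
  for (it in 1:iters) {
    for (j in 1:d) {
      r <- y - mu - rowSums(g[, -j, drop = FALSE])  # partial residuals
      g[, j] <- nw_smooth(X[, j], r, h[j])
      g[, j] <- g[, j] - mean(g[, j])               # centre each component
    }
  }
  list(mu = mu, g = g)
}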
It is well known that these estimators can be seriously affected by a relatively small proportion of atypical observations in the training set. Boente et al. (2017) proposed a robust version of backfitting, which is implemented in the RBF package. Intuitively, the idea is to use the backfitting algorithm with robust smoothers, such as kernel-based M-estimators (Boente & Fraiman, 1989). These robust estimators solve

min E[ ρ( (Y − μ − Σ_{j=1}^d g_j(X_j)) / σ ) ],

where the minimization is computed over μ ∈ R and functions g_j with E[g_j(X_j)] = 0 and E[g_j^2(X_j)] < ∞. The loss function ρ : R → R is even, non-decreasing and non-negative, and σ is the residual scale parameter. In practice, we replace σ by a preliminary robust estimate (for example, the median absolute deviation (MAD) of the residuals from a local median fit) and the expected value by the average over the training set. Note that different choices of the loss function ρ yield fits with varying robustness properties. Typical choices for ρ are Tukey's bisquare family and Huber's loss (Maronna et al., 2018); when ρ(t) = t^2, this approach reduces to standard backfitting.
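To show where robustness enters, the sketch below replaces the local average with a kernel-weighted Huber M-step solved by iteratively reweighted means; the MAD-based scale, the fixed iteration count, and the tuning constant k = 1.345 are standard but simplified choices, not the internals of the RBF package.

huber_smooth <- function(x, y, h, k = 1.345, iters = 10) {
  sigma <- mad(y)                          # preliminary robust residual scale
  sapply(x, function(x0) {
    kw  <- dnorm((x - x0) / h)             # kernel weights
    mu0 <- median(y)                       # robust starting value
    for (it in 1:iters) {
      r  <- (y - mu0) / sigma
      rw <- ifelse(abs(r) <= k, 1, k / abs(r))  # Huber weights psi(r)/r
      mu0 <- sum(kw * rw * y) / sum(kw * rw)
    }
    mu0
  })
}

Replacing nw_smooth with huber_smooth inside the backfitting loop above gives a toy version of the robust backfitting idea.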
Simulation experiments reported in Boente et al. (2017) show that the robust backfitting algorithm provides more reliable estimators than the classical approach when the training set includes outliers in different proportions and settings.Those experiments also confirm that the robust backfitting estimators are very similar to the standard ones when the data do not contain atypical observations.
In the next section we illustrate the use of the robust backfitting algorithm as implemented in the RBF package by applying it to a real data set.We also compare the results with those obtained with the standard backfitting approach.
Illustration
The airquality data set contains 153 daily air quality measurements in the New York region between May and September, 1973 (Chambers et al., 1983). The interest is in modeling the mean ozone ("Ozone") concentration as a function of three potential explanatory variables: solar radiance in the frequency band 4000-7700 Angstroms ("Solar.R"), wind speed ("Wind"), and temperature ("Temp"). We focus on the 111 complete entries in the data set.
Since the plot in Figure 1 suggests that the relationship between ozone and the other variables is not linear, we propose using an additive regression model of the form

Ozone = μ + g_1(Solar.R) + g_2(Wind) + g_3(Temp) + ε. (2)

To fit the model above we use robust local linear kernel M-estimators with Tukey's bisquare loss function. These choices are set using the arguments degree = 1 and type = 'Tukey' in the call to the function backf.rob. The model is specified with the standard formula notation in R. The argument windows is a vector with the bandwidths to be used with each kernel smoother. To estimate optimal values we used a robust leave-one-out cross-validation approach (Boente et al., 2017), which resulted in the following bandwidths:

R> bandw <- c(136.7285, 10.67314, 4.764985)

The code below computes the corresponding robust backfitting estimator for Equation 2:

R> data(airquality)
R> library(RBF)
R> ccs <- complete.cases(airquality)
R> fit.full <- backf.rob(Ozone ~ Solar.R + Wind + Temp,
+                        windows = bandw, degree = 1, type = 'Tukey',
+                        subset = ccs, data = airquality)

A different kernel M-estimator can be used in the robust backfitting algorithm by setting type = 'Huber' in the call above. Unlike Tukey's re-descending score function, Huber's function is monotone, and numerical experiments show that the resulting estimator typically has larger bias. However, the corresponding objective function is convex and thus standard algorithms can be used to find the global minimum. Our algorithm takes advantage of this to construct a robust initial value to compute the more robust fit based on Tukey's loss function.
For more details we refer the reader to Boente et al. (2017).
The argument degree is an integer indicating the desired degree of the local polynomial used in the kernel M-estimator. Its default value is 0 (which corresponds to a local constant fit). Other arguments for backf.rob include convergence controls (epsilon: the maximum allowed relative difference between consecutive estimates, and max.it: the maximum number of iterations) and tuning parameters for the chosen loss function (k.h for Huber's loss, and k.t for Tukey's). The default values for the latter two are those used to construct robust estimators for linear regression that are 95% efficient compared with the least squares ones.
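For instance, a call exercising these controls could look as follows; the numeric values here are illustrative choices (4.685 is the usual 95%-efficiency constant for Tukey's bisquare, and the epsilon and max.it values are arbitrary), not defaults quoted from the package.

R> fit.tuned <- backf.rob(Ozone ~ Solar.R + Wind + Temp, windows = bandw,
+                         degree = 1, type = 'Tukey', k.t = 4.685,
+                         epsilon = 1e-6, max.it = 100,
+                         subset = ccs, data = airquality)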
To compare the robust and classical additive model estimators we use the R package gam.
Optimal bandwidths were estimated using leave-one-out cross-validation as before. The two fits differ mainly in the estimated effects of wind speed and temperature. The classical estimate of g_3(Temp) is consistently lower than the robust counterpart for Temp ≥ 85. For wind speed, the non-robust estimate of g_2(Wind) suggests a higher effect on Ozone concentrations for low wind speeds than the one given by the robust estimate, and the opposite difference for higher speeds.
Residuals from a robust fit can generally be used to detect the presence of atypical observations in the training data. Figure 3 displays a boxplot of these residuals. We note four possible outlying points (indicated with red circles). To investigate whether the differences between the robust and non-robust estimators are due to the outliers, we recomputed the classical fit after removing them. Figure 4 shows the estimated curves obtained with the classical estimator using the "clean" data together with the robust ones (computed on the whole data set). Outliers are highlighted in red. Note that both fits are now very close. An intuitive interpretation is that the robust fit has automatically downweighted potential outliers and produced estimates very similar to the classical ones applied to the "clean" observations. Detailed scripts reproducing the data analysis above are available, and another example is included in the package vignette.
Contributions to this project can be submitted via pull requests on the GitHub repository.
Similarly, GitHub issues are the preferred venue to report suggestions and problems with the current version of the software, and seek support.
Figure 1: Scatter plot of the airquality data. The response variable is Ozone.
Figure 2 contains partial residual plots and both sets of estimated functions: blue solid lines indicate the robust fit and magenta dashed lines the classical one.
Figure 2: Partial residuals and fits for the airquality data. Robust and classical fits are shown with solid blue and dashed magenta lines, respectively.
Figure 3: Boxplot of the residuals obtained using the robust fit. Potential outliers are highlighted with solid red circles.
Figure 4: Plots of estimated curves and partial residuals. The solid blue lines indicate the robust fit computed on the whole data set, while the classical estimators computed on the "clean" data are shown with dashed magenta lines. Larger red circles indicate potential outliers. | 2,142 | 2021-04-07T00:00:00.000 | [
"Mathematics"
] |
xGENIA: A comprehensive OWL ontology based on the GENIA corpus.
The GENIA ontology is a taxonomy that was developed as a result of manual annotation of a subset of MEDLINE, the GENIA corpus. Both the ontology and corpus have been used as a benchmark to test and develop biological information extraction tools. Recent work shows, however, that there is a demand for a more comprehensive ontology that would go along with the corpus. We propose a complete OWL ontology built on top of the GENIA ontology utilizing the GENIA corpus. The proposed ontology includes elements such as the original taxonomy of categories, biological entities as individuals, relations between individuals using verbs and verb nominalizations as object properties, and links to the UMLS Metathesaurus concepts.
AVAILABILITY
http://www.ece.ualberta.ca/~rrak/ontology/xGENIA/
Background:
The GENIA corpus consists of a set of 2000 annotated abstracts from the MEDLINE database concerning "transcription factors in human blood cells". The corpus along with the corresponding taxonomy (ontology) was developed to provide a reference material for bio-textmining [1]. Since its development, the GENIA corpus and ontology have been intensively used by researchers for biological entity recognition [2], ontology creation and population [3], and query processing [4]. However, recent work on biological name recognition and query processing [4] demonstrates a demand for a more comprehensive and complete ontology that would go along with the GENIA corpus. Other researchers [5] also suggested utilizing an ontology in the information extraction process, which is not feasible with a basic taxonomy only.
We propose xGENIA, an ontology that is based on the GENIA corpus and ontology created by [1]. This ontology, developed in OWL [6], can be used as a golden standard and a knowledge base for biological information extraction.
Methodology:
The biological entities in the GENIA corpus were preprocessed before we added them to the xGENIA ontology as individuals. The first step of biological entity extraction involves decomposition of nested tags and of terms involving ellipsis in coordinated clauses. The decomposed entities are further preprocessed with a set of manually developed rules, a common approach used in biological entity extraction [3,4,5]. We created our own set of rules, putting special emphasis on the unification of entities carrying identical concepts yet being slightly different in form. Processing entities with the rules involves removing unnecessary white spaces, dividing words and word sequences into separate instances, and removing acronyms embedded in the sequence of words representing their full form.
In order to extract relations we used a set of verbs and verb nominalizations from [4]. To preserve generality we replaced inflectional variants of verbs and verb nominalizations with their canonical form (e.g., activate, activates, activating, and activated were replaced with activate). That way we reduced the original list of verbs and verb nominalizations from 246 to 142.
We manually assigned the rdfs:subPropertyOf element between verbs and verb nominalizations with prepositions and their canonical forms, and between verb nominalizations and their root verbs, as well as owl:inverseOf between two verbs of inverse meaning. The relations between the entities were found by searching for two entities appearing in the neighborhood on opposite sides of the verb in the same sentence. The sequence of words that includes the verb and is located between the subject and object entities must not be interrupted by a comma or a semicolon.
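As a toy illustration of this co-occurrence rule (not the authors' code), the R snippet below matches a subject entity and an object entity on opposite sides of a verb while disallowing commas and semicolons in between; the sentence and entity strings are hypothetical.

sent <- "NF-kappa B activates the IL-2 gene in T cells"
# subject entity, a comma/semicolon-free span containing the verb, then object entity
hit <- regexec("(NF-kappa B)[^,;]*\\b(activates)\\b[^,;]*(IL-2 gene)", sent)
regmatches(sent, hit)[[1]]   # full match plus subject, verb, and object groups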
To properly identify UMLS® Metathesaurus CUIs, the extracted biological entities were normalized using norm, a tool provided by NLM [7], which is used to create indices on the Metathesaurus database. The normalized entities were then compared against the Metathesaurus MRXNS_ENG file, one of the Metathesaurus' indices, and, if found, CUIs were fetched and added to the ontology as the hasCUI datatype property.
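A minimal R sketch of this lookup step, using two made-up index rows and placeholder CUIs (norm lowercases and alphabetically reorders words, so "NF-kappa B" normalizes to something like "b kappa nf"); the real MRXNS_ENG file ships with the UMLS distribution.

mrxns <- data.frame(nstr = c("b kappa nf", "gene il 2"),    # normalized strings
                    cui  = c("C0000001", "C0000002"),       # placeholder CUIs
                    stringsAsFactors = FALSE)
mrxns$cui[mrxns$nstr == "b kappa nf"]   # fetch the CUI for the normalized entity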
Overview of the xGENIA ontology: OWL integrates a taxonomy and instances (called individuals in OWL) of the taxonomy. xGENIA utilizes a variety of OWL as well as RDF and RDFS (the languages OWL is based on) elements. They include classes (GENIA's original taxonomy), individuals (biological entities), object properties (relations between the entities), datatype properties (unique identifiers), and others. The core of xGENIA consists of the original taxonomy of 47 categories as described in [1]. This taxonomy of categories is represented by classes (owl:Class) in our OWL ontology.
Biological entities:
The biological entities annotated in the GENIA corpus constitute individuals of categories they are annotated to. In order to keep the xGENIA ontology coherent the annotated biological entities have been preprocessed (see Methodology) to form unique entities carrying the same concepts regardless of lexical and syntactic differences in the way they were written by the authors. To satisfy constraints on the names of individuals imposed by OWL, each biological entity (individual) is assigned a unique identifier being a concatenation of the name of the class it is assigned to and a consecutive number. (Although OWL allows for assigning more than one class to an individual, this is not the case in the GENIA corpus.) The real name of an entity is represented by the rdfs:label property. Examples of individuals are shown in Figure 1(a).
Relations between biological entities:
xGENIA is also equipped with relations between individuals (represented by hexagons in Figure 1(a)). Each such binary relation binds two individuals through a verb or verb nominalization. These relations come from the corpus and have been extracted using the method described in the Methodology section. Each predicate (verb or verb nominalization) is represented in the ontology as owl:ObjectProperty and has its own domain (rdfs:domain) and range (rdfs:range), i.e., the set of classes being a subject and object, respectively, occurring with the given predicate. To further enrich the ontology, the properties were put in a hierarchy using the rdfs:subPropertyOf element to indicate that one property is a variant of another. Additionally, the owl:inverseOf element denotes that two properties have the same meaning but a different underlying direction (see Figure 1(b)).
Lexical decomposition of biological entities:
Using nested tags in the GENIA corpus, where the inner tags are the lexical stems of the outer tags, xGENIA incorporates a special object property, stemsFrom, which indicates the direction from the outer tags to the inner tags. Following this property for a given entity leads to the lexical root(s) of that entity, which in combination with the unique identifiers (see the next section) of atomic entities (entities that do not instantiate the stemsFrom property) allow for identification of the original entity (see Figure 1(a)).
Concept Unique Identifiers:
To allow for easier identification, some of the annotated entities were assigned to concept unique identifiers (CUI) as found in UMLS® Metathesaurus.
Conclusion:
The xGENIA ontology together with the GENIA corpus provides a sophisticated benchmark for researchers who design and test applications in the field of biological information extraction. The overall statistics of xGENIA are presented in Table 1. The xGENIA ontology is an open project and we will continue improving it in the future. Each new release will be labelled with a unique version number and will be accompanied by a description of changes and additions.
"Computer Science",
"Philosophy"
] |
Assessment of Crash Performance of an Automotive Component Made through Additive Manufacturing
The objective of this study was to apply an innovative technique to manufacture a plastic automotive component to reduce its weight and costs, and guarantee its design was safe. A frontal impact sled test was simulated, and the damages to the occupant's legs were assessed, with specific reference to the dashboard's glove box. The replacement of the current glove box with a new component fabricated using additive manufacturing was analyzed to evaluate its passive safety performance in the event of an automobile accident. The materials analyzed were polyamide and polypropylene, both reinforced with 5% basalt. The stiffness of the system was previously characterized by reproducing a subsystem test. Subsequently, the same rating test performed by the Euro NCAP (New Car Assessment Program) was reproduced numerically, and the main biomechanical parameters required by the Euro NCAP were estimated for both the current and the additive production of the component.
Introduction
The use of lightweight designs and materials has become ubiquitous in the modern world. The need for improved fuel economy has emphasized the use of lightweight materials and high-strength designs in automotive-component manufacturing.
Automobile designers maintain high standards for performance, reliability, cost-effectiveness, competitiveness, and safety. Until a few decades ago, safety was considered important, but not decisive, in the commercial success of a vehicle. Following a dramatic increase in the volume of vehicle traffic, accidents have increased; in 2018, they became the leading cause of death for people between the ages of 5 and 29. As a result, there has been a growing awareness of safety, and it has become a central theme in vehicle design. In recent years, efficient numerical techniques have been developed to accurately simulate automobile accidents and limit the number of physical tests [1]. This has made it possible to hypothesize and analyze a high number of different configurations at the design stage and compare their different structural behaviors [2]. By correlating numerical and experimental data after conducting simulations, it is possible to highlight emerging criticalities and find the optimal solution.
Before a vehicle can be commercialized, it must pass crash homologation tests using standards dictated by the European Commission. In recent years, autonomous organizations have been set up to carry out these tests of vehicle safety; in Europe, a fundamental testing organization is the Euro NCAP (New Car Assessment Program). These entities attribute a variable number of stars to a vehicle to characterize its safety. The results of the tests carried out by the Euro NCAP are public. There are various additive-manufacturing technologies; in our study, the component was fabricated using the fused deposition modeling (FDM) technique, the most popular extrusion-based additive manufacturing technology, which forms the components by building them up layer by layer. Figure 1 shows the internal structure of the two boxes in various sections. The internal structure of the additive box is impossible to realize through a traditional molding process due to its high geometric complexity.
Another advantage to additive manufacturing could be, in some applications, the reduction of costs. This methodology can be applied to spare parts for premium vehicles, which are generally in low demand. The current production strategy involves increasing the production of spare parts for those vehicles to avoid creating another mold, which increases costs. This strategy's high inventory costs create inefficiencies. For the additive box, no stock is kept in warehouses, and the component is printed only if it is requested.
FEM (Finite Element Method) Modeling
The numerical activity was carried out with the LS-DYNA solver [15], using Hypermesh [16] and ANSA [17] as the pre-processors and Hyperview [18] as the post-processor. The system that was subjected to a knee impact was the front interior of the vehicle, specifically the dashboard glove box. The vehicle used was the Fiat 500 currently in production. To reduce calculation times, only the dashboard body was modeled, rather than the entire vehicle. This operation did not lead to significant errors because the eliminated components (belonging to the rear part of the vehicle) were not directly influenced by the knee impact test.
The FEM modeling involved three phases:
1. The CAD (Computer Aided Design) model is imported into a pre-processor. The current box and the additive box are imported, and both geometries are meshed using mainly shell linear elements with an average size of 5 mm. This setting was used as a compromise between the calculation time and the convergence analysis (the mesh is refined until a satisfactory convergence of the results is reached).
2. Properties and materials were defined.
3. Both meshes were exported to create the dashboard body (Figure 2).
PA + 5%B and PP + 5%B (Table 1) were characterized according to the analytical methodology described below.
The number of shell elements (2D) was nearly 3.40 × 10^5. The model was constrained in all directions at nodes located in the center of the wheels; these nodes were connected by rigid elements to the adjacent structure. The smallest elements were approximately 2 mm in size. The largest elements, about 10 mm in size, were in components far from the impact zone; an accurate mesh was not necessary there, because those components do not deform excessively. Both geometric and material nonlinearities (to simulate yielding of the components) were properly enforced.

To characterize the materials, the tensile test carried out experimentally was reproduced numerically using a test piece with dimensions prescribed by ISO 527-2 [19]. The adopted plastic constitutive law, which assumed isotropic hardening, was obtained using an analytical procedure based on the engineering stress–strain (σ–ε) curves obtained experimentally. These input curves were then calibrated by correlating the calculation model with the physical test. When the process was complete, the σ–ε curves showed an adequate level of correlation (Figure 3). The descending part of the curve is generally not relevant to the analyses considered here.
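As an illustration of the analytical procedure described above, the sketch below converts an engineering stress–strain curve into the true-stress/plastic-strain pairs that an isotropic-hardening material card typically expects; the function name and the sample data points are hypothetical, and the conversion is valid only up to the onset of necking.

```python
import numpy as np

def plastic_hardening_curve(eng_strain, eng_stress_mpa, youngs_modulus_mpa):
    """Return (plastic_strain, true_stress) pairs from engineering data."""
    eng_strain = np.asarray(eng_strain, dtype=float)
    eng_stress = np.asarray(eng_stress_mpa, dtype=float)

    # Standard conversions, valid only up to necking (uniform elongation).
    true_stress = eng_stress * (1.0 + eng_strain)
    true_strain = np.log(1.0 + eng_strain)

    # Subtract the elastic part to obtain the plastic strain.
    plastic_strain = true_strain - true_stress / youngs_modulus_mpa

    # Keep only the physically meaningful (non-negative) portion.
    valid = plastic_strain >= 0.0
    return plastic_strain[valid], true_stress[valid]

# Hypothetical data points for a PP-like polymer (illustrative only).
eps = [0.0, 0.01, 0.03, 0.06, 0.10]
sig = [0.0, 15.0, 28.0, 33.0, 35.0]   # MPa
ep, st = plastic_hardening_curve(eps, sig, youngs_modulus_mpa=1500.0)
```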
Methods
The rating test to numerically reproduce an automobile accident required a long calculation time (in this case, approximately 16 h, using 56 parallel processors). Consequently, it is preferable to introduce simplifications to the model to analyze the mechanical behavior of the structure, characterize the stiffness of its components, and proceed with optimization if necessary. A subsystem test was then carried out, with calculation times reduced by 90% compared to the rating test.
Subsystem Test
The subsystem test introduced considerable simplifications. Instead of a dummy, an impactor representing the knee was used (Figure 4). The analysis was quasistatic. The impactor had an internal diameter of 52.1 mm, an external diameter of 57 mm, and a width of 46.7 mm. It reproduced the dummy's knee geometry, but was simulated as rigid to evaluate the contact force. The evaluated parameter was the force on the knee/impactor: high loads result in damage to the knee, including fractures under extreme load conditions. It is possible to modify the geometry of the component to vary its stiffness; for example, the shape and thickness of the box's internal ribs can be modified to achieve a load reduction.
The calculation was repeated for six different positions: Y200, Y230, and Y260 for the left femur; and Y400, Y430, and Y460 for the right femur (Figure 5). These positions covered the probable areas of contact between the knee and the glove box in the event of a frontal impact.
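A minimal sketch of how the six resulting force histories could be screened against a reference limit; the file naming scheme and the numerical limit are placeholders, not values from the paper.

```python
import numpy as np

POSITIONS = ["Y200", "Y230", "Y260", "Y400", "Y430", "Y460"]
REFERENCE_FORCE_KN = 3.8   # placeholder limit, not from the paper

for pos in POSITIONS:
    # Each file is assumed to hold two columns: time [ms], contact force [kN].
    t, f = np.loadtxt(f"impactor_force_{pos}.txt", unpack=True)
    peak = f.max()
    status = "OK" if peak < REFERENCE_FORCE_KN else "EXCEEDS REFERENCE"
    print(f"{pos}: peak force {peak:.2f} kN -> {status}")
```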
From Figure 6, we observed that:
• In Y200, the behavior of the box in PA + 5%B was worse than in the current production case. This was not a concern, because it was still below the reference value.
• In Y400, PA + 5%B behaved similarly to the current case. It began to diverge from the current case toward the end of the calculation, but remained below the reference value. The current case (green curve) showed a sinusoidal trend in the first section, due to the failure of an internal rib; at the moment the rib broke, a decrease in the slope of the curve was observed. The additive box had interwoven ribs and behaved as a homogeneous object, so its curves passed through the mean value of the sinusoid.
• In all other positions, the behavior of the additive glove box was always better than the current case. Specifically, PP + 5%B had a force value that was always lower than PA + 5%B.
As a result, an optimization of the component was not necessary, since it presented better values than the current case and was well below the reference value.
Rating Test
The subsystem test allowed the preliminary analysis of the performance of the box during a knee impact, but this was not sufficient to represent what occurs in an automobile accident. To obtain more realistic biomechanical parameters, a complete and more complex model must be used.
The objective was to fall within the range established by the Euro NCAP; therefore, the behavior of the additive box was considered acceptable, since it had the same biomechanical score as the box currently in production.
The specific test considered was the frontal impact against a deformable barrier, officially named by the Euro NCAP as the Mobile Progressive Deformable Barrier (MPDB) [20]. The test simulated a collision between two cars of the same weight traveling at a speed of 50 km/h. The protocol prescribed the use in each front seat of a Hybrid III 50th Percentile Male Dummy (whose size and weight are representative of the median adult American male) [21,22], and two child dummies placed in the restraint systems in the rear seats. Our goal was to evaluate the biomechanical parameters of the lower limbs. Consequently, only the dashboard body and a passenger-side dummy were used instead of a full-scale model. The sled test [23], a biomechanical test of a subsystem that reproduces the same stresses on the dummies as a full-scale impact [24], was then carried out.
The impact phenomenon was reproduced by subjecting the constrained elements of the sled (floor, seat, and dashboard) to a rigid translational motion opposite to the vehicle's direction of travel. In this way, the dummy, due to inertial effects, was subjected to forward acceleration relative to the elements of the passenger compartment, and was consequently subjected to a loading condition similar to that of a full-scale impact. The LS-DYNA card *BOUNDARY_PRESCRIBED_MOTION_RIGID [15] was used to impose the motion. The law of motion used was derived from MPDB [20] experimental tests carried out on the same vehicle. The total number of elements was nearly 1.30 × 10^6.
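The sketch below illustrates this preprocessing step in generic form: an assumed, idealized deceleration pulse (not the measured MPDB data) is integrated into the velocity–time pairs that a curve definition referenced by the prescribed rigid motion would consume.

```python
import numpy as np

t = np.linspace(0.0, 0.120, 121)            # s, a 120 ms pulse
a = -182.0 * np.sin(np.pi * t / 0.120)      # m/s^2, idealized half-sine crash pulse

v0 = 50.0 / 3.6                             # 50 km/h initial speed, in m/s
# Trapezoidal integration of the acceleration; the amplitude is chosen so
# the sled comes approximately to rest at the end of the pulse.
dv = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
v = v0 + dv

# Write (time, velocity) pairs in a plain two-column format for the solver.
np.savetxt("sled_motion_curve.txt", np.column_stack((t, v)))
```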
The calculation times were significantly longer (nearly 6 h using 56 parallel processors) due to the complex geometry (Figure 7).
The Euro NCAP standard required, in this case, the evaluation of four parameters: compression of the femur and of the tibia, and sliding of the knee and of the tibia (obtained using the combination of moments and forces measured by the tibia load cells: Mx, My, and Fz, respectively). In our specific case, femur compression and knee sliding were evaluated.
In Figure 8, four distinct areas are noted with regard to the left femur:
• t < 50 ms: The impact has not yet occurred, and there is a purely traction load. The peak at 40 ms was due to the foot's interaction with the floor; because the floor was simulated as rigid, the loads coming from the feet were overestimated.
• 50 ms < t < 60 ms: The impact has occurred, and the load increases until it changes sign and becomes compressive.
• t = 60 ms: The femur sinks completely into the box and reaches the maximum compression load.
• t > 60 ms: The load decreases due to the removal of the dashboard.
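As an illustration, a minimal sketch of how the impact time and the peak compression could be extracted from such a load-cell trace; the sign convention (tension positive, compression negative) and all names are assumptions.

```python
import numpy as np

def femur_metrics(time_ms, axial_force_kn):
    """Locate the tension-to-compression transition and the compression peak."""
    t = np.asarray(time_ms, dtype=float)
    f = np.asarray(axial_force_kn, dtype=float)
    i_peak = np.argmin(f)                               # most negative = max compression
    crossings = np.where(np.diff(np.sign(f)) < 0)[0]    # tension -> compression
    t_impact = float(t[crossings[0]]) if crossings.size else None
    return {"t_impact_ms": t_impact,
            "t_peak_ms": float(t[i_peak]),
            "peak_compression_kN": float(-f[i_peak])}
```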
These four areas were less noticeable for the right femur (Figure 9), because the impact between the right femur and the dashboard occurred 5 ms later than it did for the left femur, due to the geometric characteristics of the dashboard. We observed that between 55 and 65 ms, the load tended to grow, but never switched to compression. This is because the acceleration began to decrease between 55 and 60 ms, meaning the stress on the dummy also decreased. At about 70 ms, the set speed curve assumed a horizontal slope, thus exhausting the inertial thrust action on the dummy. The right femur did not reach the maximum sinking during this time interval; consequently, the load did not switch to compression, but remained constant. Regarding the knee sliding (Figures 10 and 11), there was a peak at around 60 ms in conjunction with the impact. After the impact, only the sliding of the left knee showed a worsening in performance compared to the current case. This is explained in Figure 12, in which the sections of the box are shown at the moment of the impact with the left leg.
In the current case, the front cover was closer to the back in the lower part of the box due to the deformation of the internal ribs, while in the upper part, the knee sunk into the cover, but not excessively, due to the presence of the ribs. In both additive boxes, we observed the opposite phenomenon. Due to the greater rigidity achieved through a higher concentration of ribs and their inclination, the overall thickness remained unchanged in the lower part, but in the upper part, the knee sunk in completely due to the absence of reinforcing ribs.
This led to an increase in the sliding of the knee in the additive case. In the upper part, the femur was free to sink into the cover and move in the −x direction; in the lower part, due to its high stiffness, there was no sinking, and the load was higher than in the current case, so the tibia was moved in the +x direction. This relative displacement is almost absent in the current case, given that the femur and tibia move together in the −x direction.
Therefore, regardless of the material, the more complex geometrical configuration of the additive case increased the sliding.
Conclusions
The adoption of a box constructed using additive manufacturing, whether in PA + 5%B or PP + 5%B, does not lead to a worsening of the biomechanical parameters compared to the current box. The structural behavior remained almost unchanged. This result may seem trivial; however, while we did not observe an improvement in structural behavior, we did observe a reduction in weight. For future applications, we can imagine using 3D printing to construct these components for premium vehicles.
Among the possible advantages for the automotive industry, the following are noteworthy: warehouse cost reduction for limited-series vehicles, no tooling costs, and fast prototyping. The .stl file containing the 3D-printing specifications could be sold directly to customers who own a suitable 3D printer, so that they could create the component themselves, or large distribution centers equipped with suitable 3D printers could create the component only upon request from customers.
One of the main goals of automobile manufacturers is to reduce the weights of their vehicles. Since 75% of fuel consumption is directly related to the weight of a vehicle, reducing weight also means reducing emissions. The mass of the PP + 5%B box is about 23% less than the current box. A car has many other plastic components that are theoretically printable in 3D. A significant reduction in weight could therefore be achieved using additive manufacturing, especially for premium cars, and cost increases resulting from the adoption of this new technology could be offset by the advantages mentioned. | 8,030.4 | 2020-12-19T00:00:00.000 | [
"Materials Science"
] |
Silicon Photonics Transmitter with SOA and Semiconductor Mode-Locked Laser
We experimentally investigate an optical link relying on silicon photonics transmitter and receiver components as well as a single section semiconductor mode-locked laser as a light source and a semiconductor optical amplifier for signal amplification. A transmitter based on a silicon photonics resonant ring modulator, an external single section mode-locked laser and an external semiconductor optical amplifier operated together with a standard receiver reliably supports 14 Gbps on-off keying signaling with a signal quality factor better than 7 for 8 consecutive comb lines, as well as 25 Gbps signaling with a signal quality factor better than 7 for one isolated comb line, both without forward error correction. Resonant ring modulators and Germanium waveguide photodetectors are further hybridly integrated with chip scale driver and receiver electronics, and their co-operability tested. These experiments will serve as the basis for assessing the feasibility of a silicon photonics wavelength division multiplexed link relying on a single section mode-locked laser as a multi-carrier light source.
These supplementary materials contain two sections: A first section on the optical and Electro-Optical (E/O) devices used in the system experiments (mode-locked laser, resonant ring modulators, flip-chip photodetector subassemblies and integrated germanium photodetectors), as well as a second section with additional data supporting a discussion in regards to the discrepancies between Q-factor and Bit Error Ratio (BER) measurements during Transmitter (Tx) characterization.
1) Semiconductor Single Section Mode-Locked Laser
As already mentioned in the main paper, choosing a passively mode-locked, single section Mode Locked Laser (MLL) as a WDM light source has the advantage of providing a compact and power-efficient solution with a fixed carrier grid, but also presents additional challenges in regards to Relative Intensity Noise (RIN) and operational stability. It is a well-known fact that the individual lines of a Fabry-Perot laser cannot serve as independent optical carriers due to mode partition noise resulting in excessive RIN: While the total power emitted by a Fabry-Perot is relatively stable (low RIN as measured over the entire spectrum), the optical power contained in individual lines undergoes strong fluctuations as the total power is dynamically reallocated between them. Mode partition noise is largely suppressed in MLLs by means of the mode locking. Recently, we have been able to measure RIN as low as -120 dBc/Hz decaying to its shot noise limit above 4 GHz on isolated comb lines of a single section MLL provided by III-V Lab [49]. A further characteristic of mode locking in single section MLLs is a flattening of the laser spectrum [SM1], which facilitates providing carriers for a substantial number of WDM channels with a well-defined optical power. Unfortunately, while the spectrum flattens overall over the entire center region of the comb as it changes from a bell-shaped to a flat-top distribution when entering an injection current / temperature range in which the laser mode-locks, the power variations between adjacent channels also become more pronounced and can exceed 1 dB. An example where this effect is relatively pronounced is shown in Fig. SM1. It is analogous to what is also observed in combs generated by parametric processes in microresonators [SM2].
With the current state of the art of semiconductor MLLs, in order to obtain optimum locking performance, the injection current and temperature set points of the laser have to be predetermined (Fig. SM2) prior to performing system level experiments (and prior to operating a transceiver module). An outstanding challenge for future MLL designs is to reliably and repeatedly obtain mode locking in predetermined current and temperature ranges, a breakthrough that would also greatly facilitate taking such a technology to production. Integration of Distributed Bragg Reflectors (DBR) on chip [SM3], [SM4] provides additional degrees of design freedom, such as tailoring the dispersion and width of the reflection frequency band [SM5], and may be conducive to reaching this objective, as it will greatly reduce the sensitivity of the laser characteristics to cleaving accuracy.
The Radio Frequency (RF) linewidth of the laser results from the beat note between adjacent comb lines and is a measure of the degree of correlation of their phase noise. A narrow RF linewidth is the primary indicator of mode locking and correlates with reduced single line RIN. Figure SM2 shows the correlation between the RF linewidth and RIN for the laser reported in [49] (here and in the rest of the paper the integrated RIN is reported as 20·log10(σ/P_avg), where σ is the standard deviation (std. dev.) of the optical power and P_avg is the average optical power). The first graph (a) compares the RF beat note spectrum (color plot) with the integrated RIN of the entire comb (blue curve) for different injection currents. When the RF beat note is well defined and the corresponding linewidth small (visible as a clear yellow line in the color plot), the comb RIN is also low. On the other hand, regions with a smeared out beat note (poor or no mode locking), transitions between mode locking regimes (suddenly changing Free Spectral Range (FSR) for small current or temperature changes), or coexistence of two RF lines [SM6] result in a high RIN. The next graph (b) shows both the comb RIN (RIN taken over the entire optical spectrum), as well as the RIN of an isolated comb line in a zoomed-in injection current region. A correlation between all three characteristics (RF linewidth, comb RIN and single line RIN) is clearly visible.

Fig. SM1. Comparison between the optical spectrum of a single section MLL in a temperature and current range in which it is not mode locking, 300 mA and 30 °C (a), and a current and temperature range in which it is locking, 250 mA and 30 °C (b). The black markers in the insets represent the eight carriers with the highest line power. The laser diode gain material and laser stripe geometry are identical to the MLL reported in [49], but the chip length was adjusted to obtain a 100 GHz FSR. The 51 pm Resolution Bandwidth (RBW) of the measurements was sufficiently wide for the peak power levels to correspond to the total power of the comb lines.
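A minimal sketch of this integrated-RIN definition, assuming a sampled trace of the detected optical power of a single line:

```python
import numpy as np

def integrated_rin_db(power_trace_mw):
    """Integrated RIN, 20*log10(sigma/P_avg), from a sampled power trace."""
    p = np.asarray(power_trace_mw, dtype=float)
    return 20.0 * np.log10(p.std() / p.mean())
```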
The characteristics of the 100 GHz FSR laser chosen for the system experiments reported here [SM7] are shown in Fig. SM3. It combines both relatively high power per line (on the order of 0 dBm after coupling to a lensed fiber) and moderate RIN (if somewhat higher than for the laser reported in [49]). The MLL is a Quantum Dash (Q-Dash) single section MLL developed and fabricated at III-V Lab. It is based on a Buried Ridge Stripe (BRS) Fabry-Perot cavity with a ridge width of 1.25 µm, whose gain material consists of six layers of InAs Q-Dashes in an InGaAsP barrier grown on an InP wafer. The rear facet of the laser is provided with a highly reflective thin film coating, while its front facet is as cleaved. After characterizing the laser at different temperatures and injection currents, a good operating point was identified at 25 °C and 238 mA, which was then used for subsequent system measurements. At this operating point the laser has a FSR of 102.6 GHz and a center wavelength of 1542 nm. Fifteen consecutive lines have power levels between -1.3 and +1.7 dBm and the total power of the comb is 11 dBm, both measured after coupling to a lensed fiber followed by an isolator. A picture of (a) the laser coupling setup, (b) the laser spectrum and (c) the RIN spectrum for an isolated central line can be seen in Fig. SM3. A more systematic study of the RIN per line is reported in section II.A (Fig. 2) in the main text of the paper.

Fig. SM2. (a) Spectrum of the RF beat note (color plot) and integrated RIN of the entire comb (blue curve) for different injection currents; the aggregate comb RIN was integrated from 8 to 300 MHz. (b) RIN of the entire comb (blue curve), RIN of an isolated comb line (black curve) and spectrum of the RF beat note (color plot, same frequency range as in (a)) in a zoomed-in injection current range. The aggregate comb RIN was integrated from 8 to 300 MHz as in (a), and the single line RIN was integrated from 5 MHz to 20 GHz and represents the total RIN of the line, since the RIN spectrum rolls off and reaches shot noise levels above 4 GHz. A clear correlation between mode locking (as evidenced by a single narrowband RF line), reduced comb RIN and reduced single line RIN is apparent, confirming mode locking to be a prerequisite for the usability of isolated comb lines as optical carriers. This data was taken at a laser operation temperature of 30 °C.

Fig. SM3. MLL mounted on a ceramic submount and coupled to a lensed fiber (a), its recorded optical spectrum (b) and the RIN spectrum of one of the central lines (c). In (b) the black line marks the 15 lines within 3 dB of the peak line power. The 8 lines for which signal Q-factors were measured in the system experiments reported below are shown in red. In (c) the RIN power spectral density of the line with center wavelength at 1546 nm is shown. The RBW of the optical spectrum measurement in (b) was sufficiently wide for the peak power levels to correspond to the total power of the comb lines.
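For concreteness, the carrier grid implied by this operating point can be tabulated from the measured center wavelength and FSR; a minimal sketch (indexing the 15 lines symmetrically around the center line is an assumption):

```python
C = 299_792_458.0                      # speed of light, m/s

f_center = C / 1542e-9                 # Hz, from the 1542 nm center wavelength
fsr = 102.6e9                          # Hz, measured FSR

# Wavelengths (nm) of the 15 lines, n = -7 ... +7 around the center line.
lines_nm = [C / (f_center + n * fsr) * 1e9 for n in range(-7, 8)]
```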
2) Resonant Ring Modulators
The Resonant Ring Modulators (RRMs) used for the system experiments have a radius of 10 µm, resulting in a FSR of 10 nm. In addition to the main bus waveguide, a drop waveguide with a low coupling coefficient serves as a tap that can be used to monitor the operating point of the modulator. Modulation is achieved based on the plasma dispersion effect via a phase shifter implemented as a reverse-biased pin diode (with a series resistance of 53 Ω and a capacitance of 39 and 29 fF at respectively 0 and 2 V reverse bias). More detailed information on the modulator design, which was specifically targeted for this system architecture, can be found in [49] (in which it is referred to as the "third category of devices").
Two Tx chips are used for the system experiments, which turned out to have somewhat different characteristics due to fabrication variability: Modulators on the first chip were found to have a loaded resonator Quality Factor (Q-factor) of Q_load = 5050 (RRM1), while RRMs on the second chip have a reduced Q_load of 4300 due to a slightly higher waveguide coupling coefficient as well as higher waveguide losses (RRM2). We define their Modulation Penalty (MP) as −10·log10((P1 − P0)/Pin), with P1 and P0 the power of the 1- and 0-bit states inside the bus waveguide right after the RRM and Pin the power inside the bus waveguide right before the RRM. It corresponds to the reduction of the Optical Modulation Amplitude (OMA) due to the finite voltage (2 Vpp) available to drive the RRM. At the laser frequency to RRM resonance frequency detuning (the "optical carrier detuning") optimized to result in the highest OMA, and for a 2 Vpp drive voltage, the MP is 6.4 dB and 7 dB, respectively, for the first and second chip. On the other hand, the combined Grating Coupler (GC) and on-chip bus waveguide losses, respectively 10.3 dB and 9.7 dB for the two chips, are 0.6 dB better for the second chip, resulting in similar output OMAs. This information is provided here since it is relevant for the comparison of individual test results reported in the main parts of the paper: the first chip was used for the Tx characterization with the MLL and instrument-grade bench-top driver electronics (section II.A), while the second chip was wire bonded to chip-scale Tx electronics and characterized with a bench-top tunable laser (section II.B). It should also be noted that the GCs were not fully optimized in this chip iteration; we have since improved them from ~5 dB IL per GC to better than 3.5 dB in a later chip iteration fabricated in the same full process flow. These improved Insertion Losses (ILs) are further reduced to 3 dB after permanent attachment of a fiber array with index-matched epoxy, i.e., 4 dB of link budget improvement would be attainable with this improvement alone. This improved GC design was also used for the Receiver (Rx) chips described in the next subsection and in section III of the main text.
In the case of an unamplified system in which noise is typically dominated by additive Rx noise, the optimum operating point of a modulator corresponds to the highest achievable OMA (assuming the resulting modulator cutoff frequency to also be sufficient, since the latter also depends on the optical carrier detuning in the case of an RRM). On the other hand, in an amplified system signal extinction also matters and the best operating point is between the points with the highest OMA and the highest extinction. Indeed, the signal-ASE beat noise resulting from ASE generated by an optical amplifier also depends on the signal level, so that full extinction is desirable to reduce the zero-level noise, resulting in an additional RRM performance metric in addition to MP and bandwidth. Moreover, higher extinction also reduces the effect of 0-level RIN and allows increasing the channel count while maintaining optical power levels below or at the onset of Semiconductor Optical Amplifier (SOA) saturation. Higher extinction (defined as 10·log10(P1/P0)) is achieved by reducing the optical carrier detuning at the cost of an increased MP and a reduced E/O bandwidth [28] (reported as the -3 dBe cutoff frequency, i.e., the frequency at which the E/O S21 has dropped by 3 dBe). This is shown in Figs. SM4 and SM5, in which extinction, MP and bandwidth are plotted versus detuning. While the MP and extinction of RRM1 (Fig. SM4(a)) were measured under DC conditions, this was not possible for RRM2 (Fig. SM4(b)) since it is wire bonded to an AC-coupled modulator driver cutting off the signal at low frequencies (below 100 kHz). Thus, its characteristics were obtained by analyzing eye diagrams resulting from modulation with a 4 Gbps Pseudorandom Bit Sequence (PRBS), which is quasi-DC given the high bandwidth of the Tx subsystem. At higher data rates the modulator cutoff frequency also has to be taken into account to determine the optimum optical carrier detuning. The RRM bandwidth is lowest at resonance (approximately f0/(2Q), where f0 is the carrier frequency and Q is the loaded Q-factor) and increases with detuning due to the emergence of peaking in the E/O S21 [28], as can be seen in Fig. SM5. It should also be noted here that since the 0-level ASE noise is particularly sensitive to extinction, getting as close as possible to critically coupled RRMs is much more important here than in an unamplified optical link. Between the operating point at which RRM2 has the lowest MP (7 dB at the larger detuning of 18 GHz) and a representative operating point at 7.5 GHz detuning, the estimated RRM cutoff frequency decreases from 32 GHz to ~20 GHz and the measured MP increases from 7 dB to 8.4 dB, but the extinction also increases from 4.8 dB to 8.9 dB. This latter operating point is representative of the system level experiments described in section II.B, as it is close to the detuning resulting in optimum optical signal quality at the output of the Tx+SOA, as quantified by the signal Q-factor.
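A minimal sketch of the two metrics as defined above, with all powers taken as linear quantities (e.g., mW) measured in the bus waveguide:

```python
import math

def modulation_penalty_db(p1, p0, p_in):
    """MP = -10*log10((P1 - P0)/Pin): OMA reduction relative to the input power."""
    return -10.0 * math.log10((p1 - p0) / p_in)

def extinction_db(p1, p0):
    """Extinction ratio, 10*log10(P1/P0)."""
    return 10.0 * math.log10(p1 / p0)
```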
It should also be noted that in all the system experiments reported in this paper we work with positive optical carrier detunings, i.e., the frequency of the laser is above the resonance frequency of the RRM. The reason is twofold: First, this results in a slightly higher OMA, as the contributions to the OMA of dynamic waveguide losses occurring inside the RRM as the free carrier density is modulated and of the refractive index change stack up with the same sign. The other reason is that bistability and self-pulsation occur at negative optical carrier detuning [SM8], thus limiting the maximum optical input power to the RRM in the absence of a control system or a fast offset compensation in the Rx. Even then, dynamic suppression of these instabilities might result in a challenging control problem.
3) Flip-Chip Photodiodes
For the realization of the Flip-Chip Photodetector (FC-PD) based Rx, we have opted for the PDCA04-20-SC InGaAs/InP front side illuminated and front side contacted 1x4 photodiode array from Albis Optoelectronics. This component is flip-chipped onto the Silicon Photonics (SiP) Rx chips, and the 20 µm diameter optical aperture of the photodiodes is illuminated by means of a GC. It features a typical responsivity of 0.8 A/W in the C-band, a high bandwidth of typically 20 GHz (limited by its RC time constant, assuming electrical probing in a 50 Ω environment) sufficient to support 28 Gbps serial data rates, and a capacitance smaller than 100 fF, which is low for a vertical-incidence photodiode.
The main challenge associated with the integration of this component is the alignment of the light sensitive areas of the photodiodes with the beams emitted by the GCs and the minimization of the resulting optical losses. Since the beams exit the GCs at a finite angle from normal incidence (16° in air), the optimum FC-PD to GC alignment also depends on the height of the bump bonds used for the flip-chip attachment. To accommodate different flip-chip attachment processes, we fabricated a number of test structures (Fig. SM6) with the position of the GC array offset by different amounts relative to the bump bonding pads. Furthermore, we simulated the RF properties of the assembled Rx and optimized the electrical routing of the photodiode signals to the edge of the SiP chip by tailoring the transmission lines so as to minimize electrical losses, impedance discontinuities and cross-talk (to better than -40 dBe). After a few iterations, we have developed an attachment process yielding reproducible results based on 20 µm high SnAu bump bonds. A photograph of an assembled SiP chip can be seen in Fig. SM6. We have measured a reliable external (compound) responsivity (normalized relative to the power in the optical fiber) in the range of 0.17 to 0.21 A/W in the large majority of samples. This relatively low responsivity is expected, as we are using two GCs (~6 dB compound losses in this chip design) to route the light to the FC-PD. After deembedding GC losses, we obtain a FC-PD responsivity of about 0.84 A/W, as expected. No saturation effects were observed up to the highest measured optical power of 1 mW (as launched into the fiber prior to fiber-to-chip coupling), as expected based on the low Rx input powers seen in this architecture. In addition, the final bandwidth of the subassembly is typically in the range of 15 to 18 GHz when measured in a 50 Ω RF environment at a 2 V bias (see Fig. SM7), which is sufficient to achieve the targeted serial data rate of 25 Gbps per channel. Consequently, the attachment process and the on-chip transmission lines only introduce a modest degradation of the coupling efficiency and bandwidth.
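The deembedding step amounts to scaling the fiber-referred responsivity by the grating-coupler losses in the optical path; a minimal sketch using the numbers quoted above:

```python
r_external = 0.21          # A/W, fiber-referred (upper end of measured range)
gc_loss_db = 6.0           # ~6 dB compound losses of the two GCs in the path

r_photodiode = r_external * 10 ** (gc_loss_db / 10.0)
print(f"deembedded FC-PD responsivity: {r_photodiode:.2f} A/W")   # ~0.84 A/W
```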
4) Germanium Waveguide Photodiodes
In parallel, we evaluated SiP Rx chips with monolithically integrated Germanium (Ge) Waveguide Photodetectors (WPDs), with monolithic integration simplifying the assembly as well as reducing optical losses and electrical parasitics [42]. Our design is based on a reverse-biased vertical P(Si)-I(Ge)-N(Ge) junction stack (Fig. SM8). The key requirements for this component are a high bandwidth, a high responsivity and a low dark current. However, these features impose some design constraints that made their mutual optimization challenging in the chosen process line at the Singapore Institute of Microelectronics (IME), A*STAR. The main challenge associated with the design of the Ge WPD was to obtain a sufficiently high cutoff frequency for 25 Gbps serial data rates, due to layout constraints associated with the metal contacts dropped onto the selectively grown Ge pads. These contact pads are required to be at least 2.4 µm to a side, constraining the sizing of the Ge pads and consequently increasing the series resistance associated with electrical transport through the underlying p-doped Silicon (Si). This leads to an increased RC time constant, which turned out to be the limiting factor for the photodiodes' cutoff frequency. Reduction of this series resistance requires higher doping of the Si, which in turn results in deterioration of the material quality of the Ge overgrowth and increases dark currents. Seeking a good tradeoff between these constraints, we moderately increased the p-doping of the underlying Si from the process standard to a peak implanted dopant concentration of 3 × 10^19 cm^-3. The Ge slab is selectively grown on the unetched Si slab and starts with an abrupt optical junction forming a discontinuity in the vertical cross-section of the waveguiding structure. The 800 nm thick Si + Ge slab is also multimode in the vertical direction, supporting a ground mode primarily confined inside the Ge as well as a first order mode with two vertical lobes. Consequently, a beating pattern arises after the abrupt interface, wherein the light periodically moves up and down between the Si and the Ge [42]. This not only decreases the effective overlap of the light with the Ge and increases the absorption length in the device, but also allows the light to leak out of the Ge slab region when it happens to be primarily located in the Si region at its edges. This might lead to some reduction in responsivity. Side trenches etched into the Si at the edges of the Ge stripe would lead to additional optical confinement. However, they would also further increase the series resistance to the Si contacts, so that we opted not to implement them. Furthermore, the Si-Ge heterointerface is well known to form a barrier for holes transported from the Ge to the Si region, reducing the electrical collection efficiency and increasing carrier recombination at the boundary between the two semiconductors.
Considering these constraints, we have designed our vertical junction Ge WPD as follows: The input waveguide is first tapered to a width of 2 µm before merging into a slab of p-doped Si (dose: 5 × 10^14 cm^-2 at 20 keV). A stripe of low-temperature Ge with a width of 4.5 µm and a length of 12.4 µm is selectively grown in a window opened in the dielectric films over the slab region. The top part of the Ge stripe is then n-doped using a shallow Phosphorous implantation process completing the vertical PIN junction (dose: 4 × 10^15 cm^-2 at 10 keV). Highly p-doped wells are defined in the Si for p-side contacting on both sides of the Ge stripe with a spacing of 200 nm from the Ge edge (dose: 4 × 10^15 cm^-2 at 20 keV). Finally, the n-doped Ge is contacted by dropping contact plugs directly onto the stripe.
Multiple photodiodes have been measured showing a good performance, with a deembedded on-chip WPD responsivity of 0.67 A/W at 1550 nm (extracted from the 0.31 A/W external responsivity shown in Fig. SM9(a)), typical for vertical heterojunction Ge photodiodes [SM9] (even though improved responsivities have also been achieved [SM10], [SM11]). Here too, no saturation effects were observed up to 1 mW fiber power, the maximum measured optical power in the responsivity curve. The bandwidth is RC-limited and in excess of 30 GHz at 1 V reverse bias when measured in a 50 Ω RF environment (see Fig. SM9(b)). At 2 V, the WPD capacitance is measured as 14.2 fF based on breakout structures with larger surfaces (and thus a reliably measurable capacitance). However, a typical dark current of 0.7 µA has been measured at 2 V reverse bias. From these results, it is clear that the bandwidth is much higher than needed for 25 Gbps, so there is some margin to sacrifice bandwidth and improve the other performance metrics for future 25 Gbps chip iterations (in particular, to reduce the dark current again by reducing the doping of the underlying Si, a critical factor for the material quality of the overgrown Ge). Concerned by the high dark current of the Ge WPDs, we characterized their noise spectrum in order to ensure that flicker noise is not an issue. We measured their noise under different illumination conditions as well as with and without voltage bias. For this purpose, we contacted the photodiode chips with RF probes and biased them to 2 V reverse bias with a bias-T. No flicker noise was discernible above the noise floor of the measurement down to the 40 kHz low frequency cutoff of the bias-T. From the noise floor of the measurement, we conclude that the flicker noise Power Spectral Density (PSD) has to be below 4 × 10^-22 [mW]^2/f_s (where f_s is the RF signal frequency in Hz) and that the std. dev. of the total flicker noise current integrated above 40 kHz (which is below the lower cutoff frequency of the channel, 100 kHz) has to be below 0.1 µA. Given the input-referred noise of the Transimpedance Amplifier (TIA) (see section III of the main text), the WPD flicker noise thus does not play a significant role in the Rx sensitivity, notwithstanding the high dark current of the devices and the reduced Ge material quality resulting from heteroepitaxy [SM12]. The sensitivity floor of the noise measurement was, however, not sufficiently low to rule out that other forms of broadband excess noise associated with the material quality, such as increased generation-recombination noise, play a role.
II. DISCUSSION OF DISCREPANCIES BETWEEN Q-FACTOR AND BER MEASUREMENTS
For the measurements reported in section II of the main text, the Bit Error Ratio (BER) was recorded for an optimized sampling time and threshold (matching the assumptions used for the Q-factor extraction). However, as we can see in Fig. 6(b), there are significant discrepancies at both 14 and 25 Gbps between the measured BER and the BER predicted based on the measured Qsig (corresponding to respectively +1.2 dBQ and -2 dBQ, wherein Q[dBQ] is defined as 20·log10(Qsig)). While the BER at 25 Gbps is worse than expected, it is, surprisingly, better at 14 Gbps. We did a series of characterization measurements in order to better understand the possible sources of discrepancies: For one, we verified whether the simplification consisting in a Gaussian noise model when predicting BER from a recorded Q-factor could be an important factor. While it is a well-known fact that the assumption of Gaussian noise statistics is not accurate for ASE noise and leads in particular to a wrong prediction of the optimum decision threshold [SM13], the discrepancy in regards to the predicted BER should be very slight and not sufficient to explain what we observe here [SM13], [SM14]. In order to verify this, we recorded the BER for different decision thresholds for a laser power achieving a BER in the range of 10^-12 at the optimum threshold (blue curve in Fig. SM10(a)). The recorded data was then fitted assuming non-Gaussian chi-square noise statistics and the extinction recorded from the experiment (red curve). The black curve in Fig. SM10(a) shows the BER as a function of decision threshold assuming Gaussian noise statistics with the same 0- and 1-level ASE noise std. devs. As can be seen, the model based on non-Gaussian statistics is in close agreement with the recorded data, while the optimum threshold predicted by the Gaussian noise model is significantly off. Nevertheless, the resulting BER is nearly the same for both models. The excess noise of the oscilloscope was also independently measured and found to be negligible in these experiments. The 1.2 dBQ discrepancy at 14 Gbps may be a penalty associated with recording the eye diagrams with a 20 GHz real-time oscilloscope with a square transfer function: This slightly increases ISI and worsens ASE-signal beat noise [51]. On the other hand, the unexpected -2 dBQ degradation of the BER vs. signal Q-factor seen at 25 Gbps does not seem to be a characteristic of the investigated link, but rather related to the test environment, as it was also seen with the same magnitude in a reference experiment. The latter consisted in cascading a noisy light source (low power laser amplified by an EDFA) with a commercial Mach-Zehnder Modulator (MZM) directly driven by the Programmable Pattern Generator (PPG) and directly fed into the U2T/Finisar Rx without post-modulation optical amplification. This experiment was compared to the investigated link (RRM + Driver Chip + SOA). The laser power was independently set in both experiments so as to obtain a low speed BER of 10^-9. In both cases the BER worsened from 10^-9 to ~10^-6 as the data rate was increased from 14 to 25 Gbps.
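Under the Gaussian noise model used for these comparisons, the mapping between the signal Q-factor, the BER at the optimum threshold, and the dBQ scale is the standard one; a minimal sketch:

```python
import math

def ber_from_q(q_sig):
    """BER at the optimum threshold for Gaussian noise statistics."""
    return 0.5 * math.erfc(q_sig / math.sqrt(2.0))

def dbq(q_sig):
    """Q-factor expressed on the dBQ scale, 20*log10(Qsig)."""
    return 20.0 * math.log10(q_sig)

print(ber_from_q(7.0))   # ~1.3e-12
```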
A possible explanation for the degradation of the BER at 25 Gbps is slowly varying jitter not seen in the oscilloscope traces, which are comparatively short relative to the gating periods of the BER measurements (which were done without Clock Data Recovery (CDR) at the Error Detector (ED)). In order to evaluate how much jitter would be needed to explain the difference between the measured Q-factor (Qsig) and the lower Qsig expected based on the BER measurements, we extracted Q-factors from the eye diagrams that were effectively averaged over sampling time ranges according to an assumed peak-to-peak jitter: For each of the sampling times falling within this time range the corresponding BER was first calculated, averaged over the entire range, and then reverse transformed into an averaged effective Qsig. The red dashed line in Fig. 6(b) shows the result when we assume a peak-to-peak time window of 17.2 ps. This amount of jitter is not unlikely, since the phase margin of the Bit Error Rate Tester (BERT) is specified as 28 ps at 25 Gbps, corresponding to a jitter of 12 ps (the jitter of the PPG alone already accounts for up to 8 ps according to specifications). In the case of the RRM + Driver Chip system experiment, this is further compounded by any jitter generated in the driver chip as well as in the commercial photoreceiver (the latter also applying to the reference measurement). Some additional jitter may also arise from the time delay introduced by the RRM. Since it depends on the optical carrier detuning, it may vary if the optical carrier detuning drifts over time (due to the inaccessibility of the thermal tuner pins, an active control loop of the RRM resonance was not implemented). An exact expression of the time delay can be obtained by taking the derivative of the phase of the RRM's E/O S21 [28] with respect to the angular RF modulation frequency and is below 7 ps (the latter corresponding to the time delay at low signal frequencies and zero optical carrier detuning, calculated as 2Q/ω0, where Q is the Q-factor of the resonator and ω0 = 2πf0 is the angular carrier frequency). While the RRM cannot significantly drift from its operating point without severely compromising MP and other RRM performance metrics, a slow drift of the time delay by one or a few ps is possible, further shifting the BERT sampling time from its optimum in the absence of a Rx CDR. Both experiments, the system experiment with RRM and driver chip and the reference experiment with the commercial MZM, have in common that the BER is quite sensitive to the sampling time even close to its optimum, and is thus sensitive to timing jitter. An example of a horizontal bathtub curve is shown for the RRM + Driver + SOA system experiment in Fig. SM10(b).
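A minimal sketch of the averaging procedure described above, assuming a measured Q-factor versus sampling time and a hypothetical peak-to-peak jitter window; scipy provides the inverse complementary error function needed for the back-transformation.

```python
import numpy as np
from scipy.special import erfc, erfcinv

def effective_q(sample_times_ps, q_vs_time, jitter_pp_ps, t_opt_ps):
    """Average the BER over a jitter window and convert back to an effective Q."""
    t = np.asarray(sample_times_ps, dtype=float)
    q = np.asarray(q_vs_time, dtype=float)
    window = np.abs(t - t_opt_ps) <= jitter_pp_ps / 2.0
    ber = 0.5 * erfc(q[window] / np.sqrt(2.0))          # BER at each sampling time
    return float(np.sqrt(2.0) * erfcinv(2.0 * ber.mean()))
```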
III. TRANSMISSION EXPERIMENTS AT LONGER DISTANCES
Even though short-distance links are the main target of this work, we performed transmission experiments at different drive voltages and distances to investigate chirp-induced Inter-Symbol Interference (ISI). RRMs are known to modulate both the phase and the amplitude of the carrier; they therefore induce chirp when amplitude modulating. As the drive voltage is increased, the maximum phase excursion also increases and the chirp worsens.
The experiments were performed with RRMs similar to those used for the system experiments reported in the main part of the paper (same design and same fabrication run, different chip). Light was provided by the Keysight tunable laser, and the RRMs were driven by the PPG of the BERT after amplification with a high-speed amplifier from Finisar/U2T (Part Number TGA4943-MOD, > 30 GHz cutoff frequency). The wavelength of the tunable laser was chosen to yield the highest OMA. The gain of the electrical amplifier was chosen to result in the targeted drive-voltage swing. With a 2 Vpp signal, no additional eye closure was observed for the investigated transmission distances, up to 10 km. For 6 Vpp, a slight increase in jitter and a vertical eye-closure penalty of 0.5 dB were observed. Figure SM11(a) shows the vertical eye opening at 6 Vpp drive voltage as a function of fiber length, expressed as a fraction of the DC 1- and 0-levels. The normalization to the DC signal levels removes the effect of fiber losses (not significant here, as they also partially result from connector losses at the interfaces between several patch cords) but maintains the effect of dispersion-induced inter-symbol interference. Figures SM11(b) and (c) show examples of eye diagrams (as recorded by the linear U2T/Finisar receiver) after 0 and 10 km of fiber, respectively.
Another mechanism enhancing dispersion-induced penalties might be the conversion of RIN into phase noise via the linewidth-enhancement factor of the saturating SOA [SM15] when used in a multi-channel configuration (in the single-channel experiments performed in this paper, the SOA is not saturating). While a conclusive quantification of this effect at increased fiber lengths will require further work, we expect it to be strongly suppressed by the anti-correlation of the RIN between multiple comb lines: when measuring the RIN of a large number of comb lines taken together, the resulting RIN drops by several orders of magnitude compared to isolated comb lines [SM16]. Cross-gain modulation (XGM) resulting from the data modulation in a partially saturating SOA, in conjunction with the linewidth-enhancement factor, might however result in increased chirp and higher dispersion penalties.

Figure SM11. (b) and (c) show transmitter eye diagrams, respectively after 0 and 10 km fiber lengths, as generated by an RRM driven with a 6 Vpp signal. Eye diagrams are averaged over repeating PRBS cycles so as to remove noise but maintain the effect of inter-symbol interference. The commercial photoreceiver used to record the eye diagrams was inverting, so the optical levels have been flipped.
"Physics"
] |
Surface characterization of p-type point contact germanium detectors
P-type point contact (PPC) germanium detectors are used in rare event and low-background searches, including searches for neutrinoless double beta (0νββ) decay, low-energy nuclear recoils, and coherent elastic neutrino-nucleus scattering. The detectors feature an excellent energy resolution, low detection thresholds down to the sub-keV range, and enhanced background rejection capabilities. However, due to their large passivated surface, which separates the signal readout contact from the bias-voltage electrode, PPC detectors are susceptible to surface effects such as charge build-up. A profound understanding of their response to surface events is therefore essential. In this work, the response of a PPC detector to alpha and beta particles hitting the passivated surface was investigated in a multi-purpose scanning test stand. It is shown that the passivated surface can accumulate charges, resulting in a radial-dependent degradation of the observed event energy. In addition, it is demonstrated that the pulse shapes of surface alpha events show characteristic features which can be used to discriminate against these events.
Introduction
The observation of neutrinoless double beta (0νββ) decay would have major implications for our understanding of the origin of matter in our universe. The decay violates lepton number conservation by two units, and the search for it is the most practical way to ascertain whether neutrinos are Majorana particles, i.e. their own antiparticles (ν = ν̄). Moreover, together with cosmological observations and direct neutrino mass measurements, it could provide information on the absolute neutrino mass scale and ordering [1,2].
One of the most promising technologies to search for 0νββ decay are high-purity germanium (HPGe) detectors. Germanium detectors are intrinsically pure, can be enriched to above 92% in the double beta decaying isotope 76 Ge, and provide an excellent energy resolution of about 0.1% FWHM (full width at half maximum) in the region of interest around Q ββ = 2039 keV.
PPC germanium detectors
P-type point contact (PPC) germanium detectors are semiconductor detectors with a cylindrical shape, see Fig. 1. While the n+ contact extends over the lateral and bottom detector surface, the p+ readout contact has the form of a small circular well located at the center of the top surface. The point contact is significantly smaller than the readout electrode of traditional semi-coaxial detectors. Therefore, PPC detectors have a lower capacitance, typically in the range of 1-2 pF at full depletion, resulting in lower electronic noise and thus in a better energy resolution [3,4]. Moreover, PPC detectors can be operated at lower energy thresholds (< 1 keV), which makes them suitable for rare event searches at small energies [5]. Another advantage of this type of detector is the enhanced capability of applying background rejection methods based on so-called pulse shapes. This is due to the specific geometry and arrangement of the electrodes, leading to a strong electric field close to the readout contact and a relatively low field elsewhere. As a result, the signal shape of events that deposit their energy at a single location in the detector (single-site events like 0νββ decay events) is almost independent of the location of the event. This can be used to discriminate these events from events where energy is deposited at multiple sites (multi-site events like Compton-scattered photons), which are a major source of background [6,7].
Alpha backgrounds
Events induced by alpha particles are a background for 76 Ge-based 0νββ decay searches. They are predominantly caused by the decay of radon isotopes and their progeny, particularly 222 Rn. Radon is a radioactive noble gas which is created naturally as part of the decay chains of uranium and thorium. During the production and processing of a germanium detector, it is exposed to air and undergoes various mechanical and chemical treatments. A slight radon contamination of the detector (surface) is unavoidable. Furthermore, radon contamination and outgassing of parts close to the detectors in the experimental environment can also lead to undesired alpha backgrounds.
In the decay chain of 222 Rn, the long-lived isotope 210 Po is of major concern. During its decay to the stable isotope 206 Pb, an alpha particle is emitted; the decay energy amounts to 5407.5 keV. If the energy of the alpha particle is degraded, it can lead to a background in the region of interest. The degradation can be caused by the alpha particle losing energy in the material from which it is emitted, by losing energy in layers on or close to the detector surface, or by charge trapping or charge loss due to dead regions in the detector. In germanium, the penetration depth of an alpha particle with an energy of E α ≈ 5 MeV is of the order of 20 µm.
Beta backgrounds
Beta backgrounds are particularly relevant for 0νββ decay searches in which the detectors are submerged in liquid argon (LAr), such as in the Large Enriched Germanium Experiment for Neutrinoless ββ Decay (LEGEND) [8][9][10]. The long-lived isotope 42 Ar (T 1/2 = 32.9 yr) is naturally abundant in LAr when sourced from the atmosphere. It is produced by cosmogenic activation and decays via single beta decay to the short-lived daughter 42 K (T 1/2 = 12.36 h). The decay energy of Q β = 599 keV is too low to create a background event in the region of interest at the Q ββ -value of 76 Ge. However, the subsequent beta decay of the short-lived daughter 42 K, with a decay energy of Q β = 3525 keV, can produce background events in the region of interest. Within the LAr volume, the path length of beta particles from 42 K decays is less than 1.6 cm [11]. Hence, they are only detected if the decay happens in close proximity to the detector surface. Independent of where the beta particles hit the surface, they can lead to background events. However, when impinging on the thick n+ lithium layer, surface beta events have a characteristic signal shape which can be used to discriminate against them [12].
The main difference between alpha and beta particles is their penetration depth into the germanium detector. In contrast to alpha particles, electrons from beta decay penetrate deeper, typically up to several millimeters, depending on their energy. Therefore, not all beta particles show the characteristics of events close to the surface, so-called surface effects.
Surface effects and signal development
Compared to most other HPGe detector geometries, PPC detectors have a large passivated surface, usually of the order of 30 − 40 cm 2 . This surface extends over the horizontal top surface (z = 0 mm) excluding the p + contact, see Fig. 1. Typically, the passivated surface is made from sputtered silicon oxide or amorphous germanium (aGe). This layer has a high resistivity and is left floating, i.e. it is at an undefined electric potential. While the n + contact is insensitive to surface alpha events (alpha particles cannot penetrate the few mm-thick lithium-drifted layer), beta particles entering through this surface lead to characteristically slow pulses and can be discriminated against. In contrast, the passivated surface and the point contact are highly sensitive to alpha and beta surface events.
Since the passivation layer is left floating, it is susceptible to charge build-up. A non-zero charge on the passivated surface, which can for example be induced by nearby materials at non-zero potentials, changes the electric field in the vicinity of this surface and thus affects the signal formation. Without any charge build-up, the electric field lines close to the passivated surface are almost parallel to that surface. However, in the presence of surface charges, the field has a strong perpendicular component, modifying the hole and electron drift paths.
The signal formation of a germanium detector is described by the Shockley-Ramo theorem [13,14]. Any interaction inside the detector creates a cloud of pairs of charge carriers, i.e. holes and electrons. These charge carriers immediately induce positive and negative mirror charges in the electrodes. The holes drift towards the p+ contact, whereas the electrons drift to the n+ electrode. For a deposited charge q, the time-dependent signal S(t) in the p+ contact is given by

S(t) = q [WP(r_h(t)) − WP(r_e(t))],    (1)

where WP(r(t)) denotes the weighting potential at the respective hole (electron) position r_h(t) (r_e(t)). The weighting potential of an electrode is determined by the detector geometry and describes how strongly the charge at a given detector position couples to this electrode. For the following discussion, the p+ contact is the electrode of interest. By definition, the weighting potential on this contact is one, while it is zero on the n+ contact.
At time t = 0, since r h (0) = r e (0), the signal is S(0) = 0. As the holes approach the p + electrode, WP( r h (t)) increases, see Fig. 2a, until the holes are collected at time t col h , and WP( r h (t)) = 1 for t ≥ t col h . As the electrons approach the n + electrode, |WP( r e (t))| decreases until the electrons are collected at time t col e and WP( r e (t)) = 0. As soon as both kinds of charge carriers are collected, only the collected holes determine the signal and S(t) = q. The weighting potential close to the passivated detector surface strongly depends on the radius r. As shown in Fig. 2b, the weighting potential WP(r)| z=0 drops quickly with increasing r. The term 1 − WP(r)| z=0 shows the opposite behavior, i.e. it increases with radius.
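The following sketch evaluates Eq. (1) for assumed weighting-potential values sampled along the drift paths; the toy paths are illustrative and stand in for the output of a field calculation.

```python
import numpy as np

def ramo_signal(q, wp_holes, wp_electrons):
    """Signal induced on the p+ contact by a deposited charge q, given the
    weighting potential WP evaluated along the hole and electron drift
    paths, cf. Eq. (1). After collection, WP stays at 1 for the holes
    (p+ contact) and at 0 for the electrons (n+ contact)."""
    return q * (np.asarray(wp_holes) - np.asarray(wp_electrons))

# Toy drift paths: both carriers start at the interaction point (WP = 0.2).
t = np.linspace(0.0, 1.0, 200)                 # normalized drift time
wp_h = np.minimum(1.0, 0.2 + 1.6 * t)          # holes reach WP = 1 at t = 0.5
wp_e = 0.2 * np.exp(-8.0 * t)                  # electrons approach WP = 0
s = ramo_signal(1.0, wp_h, wp_e)               # S(0) = 0 and S(t) -> q
```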
Negative surface charges
If the passivated surface carries a negative charge, σ < 0, holes which are not created in close proximity to the p+ contact do not drift directly towards the p+ contact, but are diverted to the surface, see Fig. 3a. At the passivated surface, they drift very slowly parallel to this surface in the direction of the p+ contact. Some holes might even get trapped and stop moving. As a result, the holes are almost stationary and are not collected, at least not within the time in which the signal is recorded. This time is tailored to normal bulk events and is too short to cover the possible collection of delayed holes. Therefore, for times t ≥ t col e, i.e. after electron collection, the signal S(t) becomes

S(t) = q WP(r_h(0)).    (2)

The larger the radius r, the smaller the final signal amplitude. Due to the presence of negative surface charges, the electrons are repelled from the surface. Simulated electron drift paths are shown in Fig. 4a. The paths, which are normally almost parallel to the surface, are modified, i.e. the electrons penetrate deeper into the bulk.
Positive surface charges
For a positive charge on the passivated detector surface, σ > 0, the electrons created during a particle interaction are attracted to the surface, whereas the holes are repelled, see Fig. 3b. Thus, for times t ≥ t col h, i.e. after hole collection, the signal becomes

S(t) = q [1 − WP(r_e(0))].    (3)

The larger the radius r, the larger the final signal amplitude.
Impact of surface charges on alpha and beta events
For both negative and positive charges on the passivated detector surface, the signal amplitude is reduced. After the application of the standard calibration, the reduced signal translates into the observed energy E obs. In both cases, E obs is smaller than the true event energy E true. The radial dependence of E obs makes it possible to distinguish experimentally between the two cases.
Due to the small penetration depth of alpha particles in germanium, it is expected that essentially all charge carriers are affected by the surface effects described above. Thus, in the case of negative surface charges, it is expected that E obs approximately follows the radial dependence of the weighting potential. The expected signal development for a homogeneously distributed surface charge (σ = −0.3 · 10¹⁰ e/cm²) at varying radii is shown in Fig. 5. Point-like normalized charges created at z₀ = 16 µm were used in the simulation. Holes created very close to the p+ contact are collected quickly. However, the negative charge induced by the electrons still close to the p+ contact reduces the signal amplitude. The electrons drift away from the p+ contact and this effect is shown as a positive contribution. At r = 2 mm, the holes are fully collected, and when the electrons have reached the n+ contact, the signal is S(t) = 1.
At higher radii, the holes are not collected and their signal contribution is constant from t = 0 on. Only the negative contribution to the signal from the electrons becomes smaller as the electrons drift. Again, this is shown as a positive contribution, which is almost identical to S(t); the signal reaches its final value of WP(r_h(0)) at time t col e, cf. Eq. (2). Consequently, the observed energy E obs follows the radially declining weighting potential.
Delayed charge recovery
Delayed charge recovery (DCR) describes the phenomenon of an extra slow charge collection component for surface alpha events [5,15,16]. In the last section, the drift velocity of one kind of charge carrier was assumed to be too low to observe charge collection. DCR reflects that at least some part of the affected charge carriers are collected within the time of signal recording. Compared to events in the detector bulk (e.g. gamma events), the presence of a DCR component for surface alpha events modifies the tail of the signal pulse. As shown in Fig. 6, the tail of the pole-zero-corrected waveform still increases after the charge collection in the detector bulk can be assumed to be completed. In contrast, for a gamma event with the same energy in the detector bulk, the tail stays at a constant value. The distinct DCR feature in the waveform makes the DCR effect an effective tool to identify and reject surface alpha events on the passivated surface of PPC detectors.

Figure 6. Typical examples of a bulk gamma event (blue curve) and of a surface alpha event (red curve) with the same energy. The waveforms were recorded using the PPC detector under study. The baseline- and pole-zero-corrected alpha signal features a slowly rising tail (see inset). This component can be explained as due to DCR. Due to its proximity to the signal readout electrode, the rise time of the surface alpha event is shorter (steeply rising leading edge) than the rise time of the gamma bulk event.
There are two mechanisms that can potentially explain the delayed collection of charges for surface alpha events:
1. A certain fraction of the charges created during the alpha interaction is trapped in an O(µm)-thick region at or near the passivated surface. In this case, the DCR effect corresponds to a slow release of these charges into the detector bulk (with a certain release time τ r) and their subsequent drift to the electrodes.

2. Charges created on or close to the passivated surface have a significantly reduced drift velocity compared to the drift velocity in the detector bulk [17]. In this case, the DCR effect corresponds to a slow drift of charges along the passivated surface.
Typically, the charge drift along the passivated surface takes much longer than the time in which the waveforms are digitized. Until recently, it was assumed that the main component leading to DCR is the trapping and subsequent slow release of charges at the passivated surface. In previous measurements, a charge release time on the order of several microseconds was observed. In addition, the fraction of charge released into the detector bulk was on the order of a few percent [15,16]. However, pulse shape simulations including the effects of diffusion and self-repulsion have demonstrated that surface drifts can also have an impact, cf. Ch. 5.3.1.
The DCR effect can be exploited to define a tail-based pulse shape discrimination parameter, the DCR rate parameter. It is computed by estimating the slope δ of the pole-zero-corrected waveform tail based on a two-point slope estimate [5,16]:

δ = (y₂ − y₁) / (t₂ − t₁).

Here, y₁, y₂ denote average signal values, and t₁, t₂ average time values. The time values correspond to the average values in the two evaluation intervals, where t 97% is the time at which the waveform has reached 97% of its maximum amplitude, and t last is the time corresponding to the last sample of the waveform trace. These time windows have been chosen to allow comparisons with the measurement results presented in [5,15,16]. However, it should be noted that this definition introduces a slight dependence on the trigger time in the waveform trace: the second window is not defined relative to the onset of the charge collection, but rather comprises a fixed window (last microsecond of the waveform trace).
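A possible implementation of this two-point slope estimate is sketched below; since the exact evaluation windows are only partially specified above, the window placement (a 1 µs window starting at t 97% and the last microsecond of the trace) is an assumption.

```python
import numpy as np

def dcr_rate(waveform, dt_ns, win_ns=1000.0):
    """Two-point slope estimate of the pole-zero-corrected waveform tail.
    Window placement is an assumption: a 1 us window starting at t_97%
    (97% of the maximum amplitude) and the last microsecond of the trace."""
    t = np.arange(len(waveform)) * dt_ns
    n_win = int(win_ns / dt_ns)
    i97 = int(np.argmax(waveform >= 0.97 * waveform.max()))
    y1, t1 = waveform[i97:i97 + n_win].mean(), t[i97:i97 + n_win].mean()
    y2, t2 = waveform[-n_win:].mean(), t[-n_win:].mean()
    return (y2 - y1) / (t2 - t1)   # slope delta in ADC counts per ns
```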
Experimental setup
The PPC detector surface characterization measurements presented in this work were carried out in the GALATEA (GermAnium LAser TEst Apparatus) facility, a fully automated multi-purpose scanning test stand that was built to investigate bulk and surface effects of HPGe detectors [18,19]. Due to its versatility, it allows for almost complete scans of the detector surface with alpha, beta, and gamma radiation.
The facility is a large customized vacuum cryostat housing the scanning stages, the germanium detector, the radioactive source(s), and the signal readout electronics. The detector under investigation can be mounted in an aluminum or a copper holding structure which is also used for its cooling. The detector is shielded against infrared (IR) radiation by a cylindrical copper hat. The IR shield has two slits (one on the side of the hat, one on top), along which the collimators with the radioactive sources are guided during the measurements. A system consisting of three independent stages allows an almost complete scan of the detector surface. One stage can rotate the IR shield by up to 360° with respect to the detector, facilitating azimuthal scans. The two additional linear stages are used to move the top collimator across the top surface for top scans, and the side collimator vertically for side scans.
The detector used for this work, see Fig. 1, is a PPC germanium detector with natural isotopic composition and properties that closely resemble those of the detectors previously operated in the MAJORANA DEMONSTRATOR [5,20]. To allow for a scan of the passivated surface, the detector was installed with the point contact facing up in a customized detector mount, see Fig. 7. The n+ electrode was connected to the high-voltage module via a spring-loaded pin located at the detector bottom. Likewise, the connection to the p+ contact was established with a pogo pin attached to a narrow PTFE bar mounted on top of the detector. In addition, the PTFE holding structure was used to guide the signal cable.
Data were acquired with a Struck 14-bit SIS3316 flash ADC (FADC), digitizing the analog signals from the charge-sensitive amplifier with a sampling frequency of 250 MHz. For every signal, 5000 samples were recorded, corresponding to a total waveform trace length of 20 µs.

Figure 7. Simplified sectional view of the PPC detector installed in the GALATEA test facility. The detector is mounted in a copper structure that is cooled via liquid nitrogen. It is surrounded by a copper IR shield. For the scan measurements presented in this manuscript, the detector top surface was irradiated by a radioactive source installed in the top collimator above the IR shield. For reasons of visual clarity, the side collimator of GALATEA, not relevant for this work, is not shown.
Characterization of surface alpha interactions

4.2.1. Source configuration
For the PPC detector surface characterization with alpha particles, a 241 Am source with an activity of A 0 = 40 kBq and an expected FWHM of ∼19 keV at the 5.5 MeV alpha peak was mounted in the top collimator of GALATEA. Suitable cylindrical PTFE and copper segments were used to fill the collimator frame. Based on the collimator geometry and the source strength, an alpha rate of ∼0.7 counts/s was expected at the detector top surface. In all measurements, the 241 Am beam spot had an incidence of 90° on the detector surface. In close vicinity to the point contact, the beam spot was shadowed by the PTFE bar. Hence, it was not possible to take data in this region. An uncollimated 228 Th source (A 0 = 100 kBq) was additionally mounted on top of the IR shield for energy calibration purposes and to confirm the pulse shape discrimination capabilities.
Several radial scans at different azimuthal positions, as well as background and stability measurements were conducted. For the radial scans, a measurement time of 2 hr at each scan point provided sufficiently high statistics. The detector was operated at a bias voltage of V B = 1050 V.
Results
The radial response of the PPC detector to surface alpha events is of special interest. The energy spectra of a measurement with the 241 Am source at r = 5 mm, and of a measurement with only the 228 Th source present are shown in Fig. 8. The contribution from the 228 Th source dominates the energy spectrum up to ∼ 2.6 MeV. At higher energies, the measurement with the 241 Am source deviates from the 228 Th-only measurement. The higher count rate is attributed to alpha events. To isolate these events from other events, radius-independent multivariate cuts were developed [21]. More specifically, cuts on various pulse shape parameters were used to exclude regions in the parameter space which did not contain alpha events.
Dependence of the observed energy on the radius
To quantify the dependence of the observed energy E obs α on the radius r for surface alpha events, the energies of the events in the alpha-enriched regions were histogrammed and corrected for background events, see Fig. 9a. At small r, the alpha events form a broad distribution with relatively high E obs α. As r increases, the distribution becomes narrower and shifts to lower values of E obs α. A quantitative description of this degradation was obtained by extracting the mean alpha energies E obs α from the distributions. The ranges of E obs α were constrained manually to reject remaining background events. The mean alpha energies E obs α as a function of r for two radial 241 Am scans at different azimuthal positions are shown in Fig. 9b. At the outermost radii, E obs α is strongly reduced, i.e. almost no charges are collected. In contrast, at small r, the mean energy is E obs α > 2500 keV. The results of the two measurements are in good agreement. The observed reduction of the mean alpha energy is consistent with the presence of negative surface charges on the passivation layer, cf. Ch. 3.3. These charges trap the holes created during the interaction, reducing the signal amplitude. The broadness of the peak at low r is partly influenced by the size of the beam spot. However, the total width also includes a stochastic component. Between the two scans, the detector was unbiased and the cryostat was re-evacuated. This did not affect the observations significantly. Even though E obs α strongly depends on r, alpha events were not lost. This could have occurred if the dead layer increased locally. However, the total number of alpha events as a function of r remained almost constant at a value of about 5000 events per scan position, see Fig. 10. The deviation at small radii is most likely due to a partial shadowing of the 241 Am beam spot by the PTFE bar. The decreasing alpha rate at the outer radii can be explained by the fact that the alphas hit the lithiated layer of the taper, which they cannot penetrate. The plot also shows the alpha counts as predicted by GEANT4 simulations. They agree very well with the measurement. The plateau as well as the drop of the event rate at the center and at the edge of the detector are well described by the simulations. However, a small offset of 3 mm was observed. This was found to be due to a corresponding slight offset between the center of the detector and the central position of the collimator. The r values for the data were corrected for this offset.
Radial DCR dependence
The DCR effect is, in general, an effective way to identify surface alpha events. To investigate its radial dependence, the DCR rate parameter was computed for every event as described in Ch. 3.4. The DCR rates were then histogrammed, corrected for background events, and the mean DCR rates DCR r were extracted from the distributions. The dependence of DCR r on r for two radial 241 Am scans at different azimuthal positions is shown in Fig. 11a. Similar to the mean alpha energy E obs α, DCR r decreases considerably with increasing r. At r > 15 mm, the mean DCR rate of surface alpha events is close to zero. This means that in this region alpha events are no longer distinguishable from bulk events. Another way of quantifying the DCR effect is to convert the mean DCR rate DCR r into an average DCR fraction DCR f, defined as E ex α / E obs α, where E ex α is the amount of additionally observed energy due to DCR. This extra observed energy is calculated by converting the mean DCR rate from ADC/ns to keV/ns units and integrating over the length of the waveform tail, ∆t ≈ 14 µs. The radial dependence of DCR f is shown in Fig. 11b. The value of DCR f drops with increasing r from about 2% to about 0.5% at r ≈ 15 mm, where the value of the mean DCR rate approaches zero. At higher r, DCR f seems to increase again. However, it should be noted that these fractions are numerically problematic, as E obs α also approaches zero, see Fig. 9b.
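A one-line sketch of this rate-to-fraction conversion; the calibration constant adc_to_kev is an assumed input, and the 14 µs tail length is taken from the text.

```python
def dcr_fraction(dcr_rate_adc_per_ns, adc_to_kev, e_obs_kev, tail_ns=14_000.0):
    """Average DCR fraction <E_ex>/<E_obs>: convert the mean tail slope
    from ADC/ns to keV/ns and integrate over the waveform tail length."""
    e_ex_kev = dcr_rate_adc_per_ns * adc_to_kev * tail_ns
    return e_ex_kev / e_obs_kev
```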
Characterization of surface beta interactions

4.3.1. Source configuration
For the surface characterization measurements with beta particles, a 90 Sr source with an activity of A 0 = 5.0 MBq was mounted in the top collimator. Suitable cylindrical tungsten segments were used to fill the collimator frame. Based on the collimator geometry and the source strength, an electron rate of ∼300 counts/s was expected at the detector surface. In all measurements, the 90 Sr beam spot had an incidence of 90° on the detector surface.
As for the surface characterization measurements with alpha particles, several radial scans at different azimuthal positions, as well as background and stability measurements were conducted. In contrast to the alpha measurement configuration, no radial offset between the center of the detector and the central position of the collimator was observed for the beta measurements: The detector holding structure and the collimator were readjusted between the measurement campaigns. Typically, a measurement time of 0.5 hr per scan point was chosen.
The measurements were conducted with the detector operated either at the bias voltage of V B = 1050 V or at V B = 2000 V. The focus of the analysis is on the data obtained at the higher bias voltage. Less pronounced results were obtained for the data taken with the lower bias voltage. The observed small dependence on the bias voltage is not yet fully understood.
Results
Dependence of the observed energy on the radius

First, the energy spectra recorded in the presence of the 90 Sr source were corrected for background events, see Fig. 12. The plot shows that the distribution of E obs β strongly depends on the radius r. In particular, the following two effects can be observed:

1) The total number of events decreases with increasing radius r.
2) The energy continuum E obs β shifts to lower energies with increasing r. This is especially pronounced around the endpoint of the distribution.

The first effect was quantified by calculating the total count rate integrated over the entire energy range (0-3 MeV) as a function of r, see Fig. 13a. The count rate first increases and then decreases with r. The reduced rate at small r is due to the partial shadowing of the beam spot by the PTFE bar. While the event rate for alpha events was almost constant, the reduced event rate at higher radii shows that some of the beta electrons are completely lost. This is most likely an experimental artefact: as the activity of the 90 Sr source was higher than that of the 241 Am source, the trigger threshold of the data acquisition system had to be increased from ∼16 keV to ∼50 keV to prevent too high a rate of pile-up events. Therefore, events which were severely affected by surface effects, and thus had too small an E obs β, were not recorded.
The dependence of the spectral endpoint on r was investigated by fitting the energy spectra with a seventh-order polynomial, see Fig. 12. The endpoints E obs 0 were approximated by determining the energies at which the fit functions drop to a fixed value of 10⁻² counts/(2 keV · s). This value was chosen to avoid statistical fluctuations at smaller count rates. As the estimate of the endpoints is rough, and since this method is affected by binning effects, normalized endpoints E 0 = E obs 0 / max(E obs 0) are shown in Fig. 13b. The value of E 0 decreases significantly with r. This is in good qualitative agreement with the behavior of E obs α as described in Ch. 4.2. In particular, the results are again consistent with the presence of negative charges on the passivated detector surface.
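The endpoint estimate described above could be implemented along the following lines; the input arrays and the fixed count-rate level are assumed inputs, and the evaluation grid resolution is arbitrary.

```python
import numpy as np

def spectral_endpoint(energy_kev, rate, level=1e-2, order=7):
    """Approximate the spectral endpoint as the highest energy at which a
    seventh-order polynomial fit to the spectrum still exceeds a fixed
    count-rate level (10^-2 counts/(2 keV s) in the text)."""
    coeffs = np.polyfit(energy_kev, rate, order)
    e_fine = np.linspace(energy_kev.min(), energy_kev.max(), 10_000)
    fit = np.polyval(coeffs, e_fine)
    above = np.nonzero(fit >= level)[0]
    return e_fine[above[-1]] if above.size else np.nan
```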
Radial dependence of other pulse shape parameters
The radial dependence of other pulse shape parameters and their correlations were also investigated. Two event populations were identified by studying the correlation between the drift time and the energy E obs β. The drift time is defined as the time period in which 90% of the total signal height is reached. The correlations for selected radii are shown in Fig. 14. One population, indicated by vertical ellipses, is located at small energies and its drift times decrease with increasing r. The second population, indicated by horizontal ellipses, is located at higher energies and its drift times increase with r. With the help of pulse shape simulations, it can be shown that the first population corresponds to events with a small penetration depth, which are sensitive to surface effects. For the observed negative charges on the passivated detector surface, the signal development is driven by the collection of electrons, while the holes are almost stationary and provide an almost constant contribution. Since at higher r the electrons are closer to the n+ contact, their drift time decreases. As the weighting potential at higher r is small, E obs β is small for these events. The second population corresponds to events with higher penetration depths, which are mostly insensitive to surface effects. These interactions are subject to the usual charge collection behavior, i.e. the holes are at least partially collected and E obs β is closer to the true energy of the incident electrons. Since at higher r the holes have a longer drift path to the p+ readout contact, the drift time increases with r. These two separate event populations show that, unlike for alphas, not all electrons are significantly affected by surface charges. Only events where the electrons do not penetrate deeply are strongly affected.
Pulse shape simulations
To better understand the measurement results discussed above, dedicated surface event simulations were performed. To this end, the package Siggen consisting of the two programs mjd_fieldgen and mjd_siggen was used [22].
Electric field and weighting potential
The stand-alone program mjd_fieldgen was used to calculate the electric potential, the electric field, and the weighting potential inside the detector. The computation is based on a numerical relaxation algorithm on an adaptive grid. For PPC detectors, due to their cylindrical symmetry, the computation can be performed on a two-dimensional grid (coordinates: r and z). At the passivated detector surface, a reflective symmetry is used as a boundary condition for the relaxation algorithm. This is in accordance with the requirement that for zero surface charge at the passivation layer, the field lines close to the surface are parallel to that surface, such that no charges pass the surface [4].
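For illustration, a strongly simplified relaxation step of this kind might look as follows; this is a plain Cartesian Jacobi sketch that omits the 1/r metric term of cylindrical coordinates and the adaptive grid of mjd_fieldgen, and all names and values are ours.

```python
import numpy as np

def relax_potential(phi, fixed_mask, source_term, n_iter=5000):
    """Jacobi relaxation for the potential on a 2D grid (plain Cartesian
    stencil; the cylindrical 1/r term is omitted for brevity). Edge
    padding imposes a zero normal derivative at the outer boundaries,
    mimicking the reflective condition at the passivated surface;
    contact voltages are held fixed via fixed_mask."""
    for _ in range(n_iter):
        p = np.pad(phi, 1, mode="edge")
        new = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                      + p[1:-1, :-2] + p[1:-1, 2:]) + source_term
        phi = np.where(fixed_mask, phi, new)
    return phi

# Toy setup: a 100 x 100 grid with a small fixed electrode at the top center.
phi = np.zeros((100, 100))
mask = np.zeros_like(phi, dtype=bool)
mask[0, 45:55] = True          # "contact" held at a fixed potential
phi[mask] = 1000.0             # illustrative voltage
phi = relax_potential(phi, mask, source_term=0.0, n_iter=2000)
```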
Signal formation
The signals corresponding to energy depositions at specific locations in the detector can be simulated with mjd_siggen. The program combines the field maps generated with mjd_fieldgen with a charge drift model containing information on the electron and hole mobilities to compute the charge drift path [23]. Furthermore, the corresponding signal is calculated according to the Shockley-Ramo theorem, cf. Eq. (1).
Both programs require a number of user inputs that are read in from a common configuration file. These inputs include the detector geometry and configuration (bias voltage, temperature), the impurity profile, and other settings (initial grid size, charge cloud size, electronics response, etc.). Most importantly for this work, a (homogeneously distributed) surface charge can be added to the passivated detector surface. The surface charge is expressed in units of e/cm 2 and is added as an impurity at every grid point on the surface.
Influence of surface effects on pulse shape parameters
Pulse shape parameter maps were calculated to study the impact of surface effects on important quantities such as the energy E obs, drift time, etc. To this end, point charges with starting positions arranged in a finely meshed grid in the (r, z) plane were simulated using Siggen. The effects of diffusion and self-repulsion were not included in the simulations, cf. Ch. 5.3.1. The parameter maps for negative and positive surface charges (σ = ±0.3 · 10¹⁰ e/cm²) for the quantities energy fraction E obs /E true, A/E, and drift time (0-90%) are shown in Fig. 15. The A/E parameter describes the ratio of the maximum amplitude of the current pulse (A) and the amplitude (energy) of the charge pulse (E obs). It is commonly used to discriminate background events from signal events. More information on this pulse shape parameter and its determination can be found in [6,24].
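A minimal sketch of how A/E and the 0-90% drift time can be extracted from a simulated charge signal, assuming the signal starts at zero and is noise-free:

```python
import numpy as np

def pulse_shape_parameters(charge_signal, dt_ns):
    """A/E and 0-90% drift time from a charge signal: A is the maximum of
    the current pulse (differentiated charge signal), E the final charge
    amplitude; the drift time is the time to reach 90% of E."""
    current = np.gradient(charge_signal, dt_ns)
    e_obs = charge_signal.max()
    a_over_e = current.max() / e_obs
    t90_ns = np.argmax(charge_signal >= 0.9 * e_obs) * dt_ns
    return a_over_e, t90_ns
```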
The energy-fraction parameter maps for negative and positive surface charges show that in most of the active detection volume, the true event energy is obtained, i.e. E obs ≈ E true. However, for σ < 0, there is a strong reduction of E obs in a region close to the passivated surface (z ≲ 1 mm). In this region, holes created during an interaction are attracted to the surface and become quasi-stationary. The signal development is driven by the drift of electrons to the n+ contact, cf. Ch. 3.3. At z ≲ 1 mm, the energy fraction E obs /E true decreases with increasing r. Surface alpha events with typical penetration depths of tens of micrometers are fully contained in this region of reduced E obs. In contrast, surface beta events, which on average have higher penetration depths of up to a few millimeters, are only partially affected. In the case of positive surface charges, σ > 0, there is only a small region in the vicinity of the point contact where events have a strongly reduced E obs and thus small E obs /E true values. Here, electrons created during an interaction are attracted to the passivated surface and become quasi-stationary. The signal development is driven by the drift of holes to the p+ contact. At z ≲ 1 mm, the reduction of E obs becomes less severe with increasing r. Compared to the case of a negative surface charge build-up, the reduction of E obs is much less pronounced. For increasing absolute surface charge |σ| (at fixed bias voltage) or decreasing bias voltage (at fixed surface charge), the regions of reduced E obs extend towards higher depths z.
The A/E map for a negative surface charge shows that for z ≲ 1 mm, the A/E values first slightly decrease and then strongly increase with increasing r. This can be explained by the fact that at larger r, the electrons drift in a slowly changing weighting field for a short drift time, which results in fast signals and thus high A/E values. For positive surface charges, high A/E values are encountered in the region close to the point contact. Here, the holes drift in a rapidly changing weighting field for a short drift time, which also results in fast signals.
The drift time of events in the region of reduced E obs for σ < 0 decreases with increasing r. This is due to the closer proximity of the electrons to the n+ electrode at higher r, so they are collected faster. The drift time map also shows that at higher penetration depths (z ≳ 1 mm), the drift time increases with increasing r. Here, the holes are collected and their drift times determine what is measured. At larger r, they have a longer drift path to the p+ contact and therefore a longer drift time. Likewise, for positive surface charges, where the signal formation in the region of reduced E obs is driven by the collection of holes, the drift time increases with radius.
Full Monte Carlo simulations
An extensive simulation campaign was carried out to better understand the results obtained in the surface characterization measurements with alpha and beta particles. First, realistic energy deposition distributions in the PPC detector were simulated using the toolkit GEANT4. Second, the corresponding signals were simulated using Siggen. Third, various pulse shape parameters were computed and analyzed in post-processing routines. This three-step procedure is illustrated schematically in Fig. 16.
1) GEANT4 simulations
In the first step, the interaction positions and energy depositions of surface alpha and beta events in a PPC germanium detector were simulated using GEANT4. To this end, a simplified geometry of the GALATEA scanning facility was implemented. To acquire sufficiently high statistics, several million events were simulated. For every simulated event, the parameters timestamp, event ID, energy E true , and position (x, y, z) were stored for every charge deposition (hit) in the detector.
2) Pulse shape simulations
The outputs of the GEANT4 simulations were used as an input for pulse shape simulations with Siggen. For a given event, the signals corresponding to the individual hits were simulated and finally summed up to form the signal (weighted with the individual E true ). The evolution of the charge cloud size due to diffusion and self-repulsion was neglected in all simulations. This will be discussed in more detail in Ch. 5.3.1. Moreover, the implementation of a sophisticated electronic response model (e.g. modeling the electronic noise) was omitted. The simulated waveform of every event was stored for a time period of 1500 ns (starting at t = t 0 = 0 ns) for time steps of ∆t = 1 ns. This trace length was chosen to minimize the computing time while ensuring the simulation of the full signal for events close to the passivated detector surface.
3) Post-processing
In post-processing, several pulse shape parameters were extracted from the simulated waveforms. These include the observed energy E obs, the maximum current amplitude to estimate A/E, the signal drift and rise time, and the DCR rate. The DCR effect for surface alpha events was modeled by convolving the current signal with an exponential, followed by a re-integration to obtain the convolved charge signal Ŝ(t):

Ŝ(t) = S(t) − C ∫₀ᵗ (dS/dt′) e^(−(t−t′)/τ) dt′.    (7)

Here, C denotes a factor containing the fraction of charges released into the detector bulk, S(t) the original (non-convolved) signal, and τ an exponential time constant describing the time scale of the charge release. The equation accounts for the fact that the delayed charges are released starting from when the alpha particle penetrates the surface. The DCR rate defined by this equation is proportional to E obs. It should be noted here that the DCR model in Eq. (7) was tuned to match the measured effect as presented in this work. Alpha or beta surface events with other topologies, e.g. different incidence and/or energy, may not be described well.
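A discrete sketch of this convolve-and-reintegrate prescription, with illustrative values for C and τ:

```python
import numpy as np

def apply_dcr(signal, c_frac=0.02, tau_samples=500.0):
    """Withhold a fraction c_frac of each charge increment and release it
    exponentially with time constant tau_samples, producing the slowly
    rising DCR tail (a sketch of the prescription in the text)."""
    increments = np.diff(signal, prepend=signal[0])       # current signal
    t = np.arange(len(signal))
    kernel = np.exp(-t / tau_samples)                     # release kernel
    withheld = np.convolve(increments, kernel)[: len(signal)]
    return signal - c_frac * withheld
```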
Surface alpha events
In this section, the results of the 241 Am surface alpha event simulations are discussed and compared to the measurements. The energies E true α and E obs α, and the DCR rates for all simulated events are shown for negative surface charges (σ = −0.1, −0.3 · 10¹⁰ e/cm²) in Fig. 17. The simulation predicts that with increasing radius r, the alpha population moves towards lower E obs α and DCR values. In addition, the distributions become narrower. This is in good qualitative agreement with the measurements, cf. Fig. 9a. However, it should be noted here that there are differences between the spectral shapes of the measured and simulated 241 Am spectra. This is most likely due to the fact that the simulation framework does not fully cover all relevant effects, e.g. diffusion and self-repulsion in the charge cloud evolution were neglected. Moreover, the simplified simulation settings (e.g. simplified geometry of the experimental setup, homogeneous distribution of the surface charges, discrete simulation grid, no sophisticated electronics response, etc.) could also have an impact.
To quantify the radial dependencies predicted by the pulse shape simulations, the mean energies E obs α and the mean DCR rates DCR r were extracted from the distributions. The decrease of E obs α and DCR r with r is predicted to be stronger for a higher absolute amount of surface charge, particularly at small r, see Fig. 18. A direct comparison of the predicted and measured DCR values is not meaningful, since they depend on the trace length, which is different for measurement and simulation. Therefore, the simulated DCR r were scaled with a constant factor chosen such that the absolute values roughly match. The predicted radial dependencies describe the measurement results qualitatively well. The predicted dependency of E obs α on r also describes the measurements quantitatively for a moderate surface charge of σ = −0.3 · 10¹⁰ e/cm². In contrast, the predicted DCR r slightly deviate from the measured rates, particularly for r ≲ 10 mm. This might be due to the simplicity of the applied DCR model, cf. Eq. (7).
Surface beta events
The analysis of the simulated surface beta events was done in analogy to the analysis of the measurements. The simulated energy spectra in the presence of negative surface charges (σ = −0.3, −0.7 · 10¹⁰ e/cm²) are shown in Fig. 19. The predicted energy E obs β degrades with increasing r. This is in qualitative agreement with the measurements, cf. Ch. 4.3.2. For higher absolute surface charges, the reduction is stronger.

The dependence of the integral count rate of E obs β on r is shown in Fig. 20a. The energy threshold used in the simulation had to be adjusted such that the count rates of the simulation roughly match the measured rates. This might have been necessary because of the simplified drift model, which did not take diffusion and self-repulsion into account. Thus, only qualitative statements can be made. The simulation describes the increase of the count rate at small r due to the partial shadowing of the beam spot by the PTFE bar. Moreover, the simulated integral count rate decreases with r for r > 5 mm. However, it does not drop as steeply as the measured rate.

The reduction of E obs β was also quantified in terms of the shift of the endpoints of the 90 Sr spectra, see Fig. 20b. The spectra were fit with a seventh-order polynomial and the endpoint was approximated as for the measured spectra, cf. Ch. 4.3.2. While the simulation describes the measurement reasonably well at small r, it cannot describe the measurement at large r. A better agreement between simulation and measurement is achieved for the higher value of the assumed negative surface charge. This is in contrast to the reasonable description of the alpha events when assuming a lower surface charge.

Two event populations were observed in the radial 90 Sr measurements, cf. Ch. 4.3.2. The first population consists of events with low energies E obs β, for which the drift time decreases with increasing r. In contrast, the second population consists of events with higher energies E obs β, whose drift time increases with increasing r, see Fig. 14. This is validated by the 90 Sr simulations, as shown in Fig. 21, and can be explained as follows: events with low E obs β have a small penetration depth. As discussed in Ch. 5.1, they are affected by surface charges and their energy E obs β is strongly reduced. In contrast, events with a high E obs β generally have a higher penetration depth. These events are less sensitive to surface effects, i.e. most of the holes are collected at the p+ contact.
Discussion and outlook
The simulations carried out in the scope of this work were based on drift models for single holes and electrons. The charges were treated as independent, and the final signals were calculated as superpositions of the waveforms expected for isolated point charges. In addition, the environment of the detector was not taken into account. The resulting simulations were able to describe the data qualitatively well if certain surface charges were assumed. However, some quantitative differences emerged between predictions and data, and the value of the surface charge required to match the data was, as was to be expected, not always the same. Two effects not taken into account in the simulations could influence the results significantly; they are discussed in the following sections.
Impact of diffusion and self-repulsion
Thermal diffusion and Coulomb self-repulsion are two effects between the charge carriers which are expected to have a significant impact. Since these processes lead to an increase of the size of the charge cloud during its evolution, they could directly influence the fraction of charge carriers affected by surface effects.
For interactions close to the (passivated) detector surface, the transverse diffusion component and the self-repulsion are of particular importance. The transverse size of the initially deposited charge cloud increases very quickly. The effect becomes stronger for larger and denser energy depositions. In contrast, the longitudinal diffusion is less important because the longitudinal diffusivity is low in a high electric field. Thus, assuming that the field close to the surface is high enough, the charge cloud is expected to become a disk. If the expanding disk drifts parallel to the surface, it can eventually intersect with the layer affected by surface effects, even if the original charge cloud did not. This results in a "smear" of charges which are trapped close to the surface. The lower part of the charge cloud continues drifting: some of it can move away and will be collected, some might be trapped. This results in a modified charge collection behavior compared to the case of independently drifting point charges.
First attempts have been made to include these effects in pulse shape simulations. However, this is very challenging: the three-dimensional charge density distributions for both electrons and holes need to be evolved simultaneously; at each time step, a self-consistent electric field has to be recalculated; and a fine computational grid (O(20 µm)) as well as short simulation time steps (O(0.2 ns)) are required. From the computational point of view, these requirements are demanding. Work is ongoing to approximate the effects in two-dimensional calculations, as well as to speed up three-dimensional calculations to the point that these effects can be included.
Impact of the environment
The assumed charges on the passivated surface are not the only mechanism to change the potential and thus the field close to this surface. The environment of a detector also has an effect on the electric potential. This was investigated for the PPC detector under study using the newly developed software package SolidStateDetectors.jl [25]. The package can calculate the electric potential in and around the detector taking the detector environment into account. The potential was calculated for selected configurations: (a) bare detector, i.e. the detector surroundings were neglected, reflecting boundary conditions at the surface at z = 0 mm were assumed; (b) detector mounted inside the grounded infrared shield and the detector holding structure of GALATEA; (c) detector mounted in GALATEA plus an additional grounded plate above the passivated detector surface at a distance of 2 mm; (d) detector submerged in LAr.
The potential Φ bare of the bare detector is shown in Fig. 22 (a), whereas the changes Φ env − Φ bare caused by the different configurations are depicted in panels (b), (c), and (d). The environment modifies the electric potential considerably, particularly in the region around the point contact. The grounded plate close to the passivated surface has the largest effect. It amounts to a few percent. The influence of a submersion in liquid argon is almost as strong. Such effects are on the same order of magnitude as the effects calculated for assumed moderate charges on the passivated surface. Consequently, future simulation studies to investigate detector surface effects should take the influence of the surrounding materials on the electric field into account. There is also experimental evidence that the detector under study behaved differently in a different environment [15,16].
Summary and conclusions
The response of a p-type point contact (PPC) germanium detector to alpha and beta particles was studied in detail to better understand background events as they occur in experiments searching for neutrinoless double beta decay. The results of the measurements in the vacuum test facility GALATEA demonstrate that the structure of events on the passivated detector surface can be explained by effects like surface charges. For both alpha and beta surface events, a radius-dependent reduction of the energy was observed, which can be explained by assuming the presence of a negative surface charge. It was also observed that surface alpha events exhibit a delayed charge recovery (DCR) effect, which can be exploited to effectively reject such events. In dedicated characterization measurements with beta particles, two event populations could be identified. One population could be associated with events with small penetration depths, which are affected by surface effects, whereas the waveforms of the other population, with higher penetration depths, were found to have no special features. No pronounced DCR effect was observed for surface beta events. Thus, the identification of beta events on the passivated surface is possible, if at all, only for a fraction of the events.
An extensive simulation campaign was carried out to better understand the results of the surface characterization measurements. Pulse shape parameter maps provided insights into the impact of surface charges on quantities such as the event energy and drift time. In addition, the maps revealed that the influence of a positive surface charge is much less pronounced than that of a negative surface charge. Monte Carlo event simulations in combination with pulse shape simulations were capable of reproducing the measurements qualitatively well. In particular, the simulations confirmed that the measurements can be explained by the presence of a negative surface charge.
The presented measurements and corresponding simulations led to a significantly better understanding of PPC detector surface effects. This serves as a basis to better identify surface events as backgrounds to rare event searches with germanium detectors.
"Physics"
] |
Optimal Flow Sensing for Schooling Swimmers
Fish schooling implies that swimmers are aware of their companions. In flow-mediated environments, in addition to visual cues, pressure and shear sensors on the fish body are critical for providing quantitative information that helps quantify the proximity of other fish. Here we examine the distribution of sensors on the surface of an artificial swimmer so that it can optimally identify a leading group of swimmers. We employ Bayesian experimental design coupled with numerical simulations of the two-dimensional Navier-Stokes equations for multiple self-propelled swimmers. The follower tracks the school using information from its own surface pressure and shear stress. We demonstrate that the optimal sensor distribution of the follower is qualitatively similar to the distribution of neuromasts on fish. Our results show that it is possible to accurately identify the center of mass and the number of the leading swimmers using surface information alone.
Introduction
Fish navigate in their habitats by processing visual and hydrodynamic cues from their aqueous environment. Such cues may serve to provide awareness of their neighbors as fish adapt their swimming gaits in groups. Early studies have shown that vision is a critical factor for fish schooling [1]. However, more recent studies have shown that even blinded fish can keep station in a school [2]. Such capabilities are of particular importance in flow environments where vision may be limited [3]. The flow environment is replete with mechanical disturbances (pressure, shear) that can convey information about the sources that generated them. Fish swimming in groups have been found to process such hydrodynamic cues and balance them with social interactions [4,5]. In order to detect mechanical disturbances in terms of surface pressure and shear stresses, fish have developed a specialized organ, the lateral line system. The mechanoreceptors in the lateral line, which allow the sensing of disturbances in the water, are called neuromasts. A number of studies and experiments have shown that the functioning of the lateral line is crucial for several tasks [6,7]. Experiments with trout in the vicinity of objects have shown its importance for Kármán gaiting and bow-wake swimming as well as for energy-efficient station keeping [8,9]. Using the information contained in the flow, the cylinder diameter, the flow velocity, and the position relative to the generated Kármán vortex street were quantified [10,11]. Using blind cave fish, several studies have shown the importance of the lateral line for detecting the location and shape of surrounding objects and for avoiding obstacles [12][13][14][15]. In another study, the feeding behavior of blinded mottled sculpin was tested, and it was found that they use their lateral line system to detect prey [16]. It was also found that blind fish manage to keep their position in schools and lose this ability with a disabled lateral line organ [17]. The importance of the lateral line was also shown for enhanced communication [18], the selection of habitats [19], and rheotaxis [20].
In this work, we mimic the mechanosensory receptors, more specifically the sub-surface 'canal' neuromasts and the superficial neuromasts [21,22]. The neuromasts on the fish skin are used to detect shear stresses, whereas the ones residing in the lateral line canals are used to detect pressure gradients [23][24][25][26][27]. Due to the filtering nature of the canals, the subsurface neuromasts are better at detecting small hydrodynamic stimuli against background noise [28].
In order to better use and understand the capabilities of artificial sensors, several studies have addressed the information content in the flow and the optimal harvesting of this information. The position of a vibrating source was shown to be linearly encoded in the pressure gradients measured by the subsurface neuromasts [46]. Furthermore, it was shown that the variance of the pressure gradient is correlated with the presence of lateral line canals [47]. In [48], fish robots equipped with distributed pressure sensors for flow sensing were combined with Bayesian filtering in order to estimate the flow speed, the angle of attack, and the foil camber. Other studies have focused on dipole sources in order to develop methods that extract information and optimize the parameters of the sensing devices [49,50]. In recent studies, artificial neural networks were employed to classify the environment using flow-only information [51][52][53][54]. In order to find effective sensor positions, weight analysis algorithms were employed [55].
Following earlier work on the detection of flow disturbances generated by single obstacles [56], we examine the optimality of the spatial distribution of sensors on a self-propelled swimmer that infers the size and the relative position of the leading school. We combine numerical simulations of the two-dimensional Navier–Stokes equations with Bayesian optimal sensor placement to examine the extraction of flow information from pressure gradients and shear stresses and the optimal positioning of the associated sensors. The present work demonstrates the capability of sensing a rather complex system using shear and pressure information. Such information is available both to biological organisms and to artificial swimmers. We remark that the present work does not aim to reproduce biological systems but rather to reveal algorithms that may be applicable to robotic systems. At the same time, we find that the identified optimal sensor locations for the two-dimensional artificial swimmers have similarities to biological systems, indicating common governing physical mechanisms for the hydrodynamics of natural and artificial swimmers.
The paper is organised as follows: In Section 2.1 we describe the numerical simulations and in Section 2.2 the process of Bayesian optimal experimental design. We present our results in Section 3 and conclude in Section 4.
Flow Simulations
The swimmers are modeled as slender deforming bodies of length L, characterized by their half-width w(s) along the midline [57,58]; a sketch of the parametrization is presented in Figure 1. Following [59], we use w_h = s_b = 0.04L, s_t = 0.95L, and w_t = 0.01L. The swimmers propel themselves by performing sinusoidal undulations of their midline. This motion is described by a time-dependent parameterization of the curvature,

$$k(s, t) = A(s)\,\sin\!\left[2\pi\left(\frac{t}{T_p} - \frac{s}{L}\right)\right].$$

Here, T_p = 1 is the tail-beat period and A is the undulation amplitude, which increases linearly from A(0) = 0.82/L to A(L) = 5.7/L to replicate the anguilliform swimming motion described by [60]. Given the curvature along s and a center of mass, the coordinates r(s, t) of the swimmer's midline can be computed by integrating the Frenet–Serret formulas [59]. In turn, the half-width w(s) and the coordinates r(s, t) characterize the swimmer's surface. The flow environment is described by numerical simulations of the two-dimensional incompressible Navier–Stokes equations (NSE) in velocity–pressure (u–p) formulation. The NSE are discretized with second-order finite differences and integrated in time with explicit Euler time stepping. The fluid–structure interaction is approximated with Brinkman penalization [58,61,62] by extending the fluid velocity u inside the swimmers' bodies and by including in the NSE a penalization term that enforces the no-slip and no-through boundary conditions:

$$\frac{\partial u}{\partial t} + (u \cdot \nabla)u = -\nabla p + \nu \Delta u + \lambda \sum_{i=1}^{N_s} \chi_i \left(u_{s,i}^k - u^k\right).$$

Here, ν is the kinematic viscosity, λ = 1/δt is the penalization coefficient, N_s is the number of swimmers, u_{s,i}^k is the velocity field imposed by swimmer i (composed of translational, rotational, and undulatory motions), and χ_i is its characteristic function, which takes the value 1 inside the body of swimmer i and 0 outside. The characteristic function χ_i is computed, given the distance of each grid point from the surface of swimmer i, by a second-order accurate finite difference approximation of a Heaviside function [63]. The pressure field is computed by pressure projection [58,64],

$$\nabla^2 p = \nabla \cdot \tilde{u}^k + \lambda \sum_{i=1}^{N_s} \nabla \cdot \left[\chi_i \left(u_{s,i}^k - u^k\right)\right], \qquad \tilde{u}^k = u^k - (u^k \cdot \nabla)u^k + \nu \Delta u^k.$$

The terms inside the summation in Equation (4) are due to the non-divergence-free deformation of the swimmers.
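To make the kinematics concrete, below is a minimal Python sketch of the midline reconstruction, assuming the standard traveling-wave curvature law quoted above; in two dimensions, the Frenet–Serret integration reduces to dθ/ds = k(s, t) followed by quadrature of (cos θ, sin θ). The discretization choices are illustrative, not the paper's.

```python
import numpy as np

def midline(t, L=1.0, Tp=1.0, n=200):
    """Reconstruct a 2D anguilliform midline from its curvature.

    Assumes k(s, t) = A(s) sin(2*pi*(t/Tp - s/L)) with the linearly
    increasing amplitude quoted in the text.
    """
    s = np.linspace(0.0, L, n)
    A = (0.82 + (5.7 - 0.82) * s / L) / L            # A(0)=0.82/L, A(L)=5.7/L
    k = A * np.sin(2.0 * np.pi * (t / Tp - s / L))   # curvature along the midline
    ds = np.diff(s)
    # Trapezoidal integration of the Frenet-Serret relations in 2D.
    theta = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * ds)))
    x = np.concatenate(([0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * ds)))
    y = np.concatenate(([0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * ds)))
    return np.stack([x, y], axis=1)

print(midline(t=0.25)[-1])  # tail position at t = 0.25
```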
Schooling Formation
The tail-beating motion that propels a single swimmer forward generates a sequence of vortices in its wake. The momentum contained in the flow field induces forces which swimmers in schooling formation must overcome to maintain their positions in the group [65]. In this study, we maintain the schooling formation for multiple swimmers by employing closed-loop parametric controllers. The tail-beating frequency T_{p,i} of each swimmer i is increased or decreased if it lags behind or surpasses, respectively, a desired position ∆x_i in the direction of the school's motion (Equation (5)). The mean school trajectory is adjusted by imposing an additional uniform curvature k_{C,i} along each swimmer's midline in order to minimize its lateral deviation ∆y_i and its angular deflection ∆θ_i (Equation (6)). Here, ⟨·⟩ denotes an exponential moving average with weight δt/T_p, which approximates the integral term found in PI controllers (Equation (7)). The formulation in Equation (6) indicates that if the lateral displacement and the angular deviation are both positive (or both negative), the swimmer will gradually revert to its position in the formation. Conversely, if ∆y_i and ∆θ_i have different signs, the displacement has to be corrected by adding (or subtracting) curvature along the swimmer's midline.
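A minimal sketch of such a closed-loop controller follows; the gains, the clipping bound, and the exact combination of the errors are hypothetical placeholders, since Equations (5)–(7) in the paper define the precise form.

```python
import numpy as np

class FormationController:
    """PI-like station-keeping controller sketched from the description above.

    The gains alpha and beta and the clipping bound are assumptions.
    """
    def __init__(self, Tp=1.0, dt=1e-3, alpha=0.1, beta=0.5):
        self.Tp, self.dt, self.alpha, self.beta = Tp, dt, alpha, beta
        self.avg = 0.0  # exponential moving average of the combined error

    def tail_beat_period(self, dx):
        # Shorter period (faster beating) when lagging behind the desired
        # station (dx > 0), longer when ahead (dx < 0).
        return self.Tp * (1.0 - float(np.clip(self.alpha * dx, -0.5, 0.5)))

    def curvature_correction(self, dy, dtheta):
        # Same-sign lateral and angular errors steer the swimmer back;
        # opposite signs produce a correction of the opposite sense.
        w = self.dt / self.Tp
        self.avg = (1.0 - w) * self.avg + w * (dy + dtheta)  # integral term
        return -self.beta * self.avg                          # extra curvature k_C

ctrl = FormationController()
print(ctrl.tail_beat_period(0.05), ctrl.curvature_correction(0.01, 0.02))
```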
Flow Sensors
We distinguish two types of sensors on the swimmer body: the superficial neuromasts detect flow stresses, and the subcanal neuromasts detect pressure gradients [31,66,67]. From the numerical solution of the 2D Navier–Stokes equations we obtain the flow velocity u = (u, v) and the pressure p at every point of the computational grid. The surface values of these quantities are obtained through bi-linear interpolation from the nearest grid points. We perform offline analysis by recording the interpolated pressure p and flow velocity u in the vicinity of the body. We remark that we have neglected points near the end of the body to reduce the influence of the large flow gradients generated by the motion and sharp geometry of the tail. The shear stresses are computed on the body surface using the local tangential velocity in the two nearest grid points. Moreover, we compute pressure gradients along the surface by first smoothing the pressure along the surface using splines implemented in SciPy [68,69] and then differentiating the smoothed signal.
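The smoothing-and-differentiation step can be sketched with SciPy's smoothing splines; the smoothing factor and the synthetic pressure trace below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

s_mid = np.linspace(0.0, 1.0, 200)                 # mid-line coordinate
rng = np.random.default_rng(0)
# Stand-in for the interpolated surface pressure with measurement noise.
p = np.sin(4 * np.pi * s_mid) + 0.02 * rng.standard_normal(s_mid.size)

spline = UnivariateSpline(s_mid, p, k=3, s=0.05)   # cubic smoothing spline
dp_ds = spline.derivative()(s_mid)                 # tangential pressure gradient
print(dp_ds[:3])
```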
Optimal Sensor Placement Based on Information Gain
In the present work, a swimmer is equipped with sensors that are used to identify the size and location of a nearby school. The optimal sensor locations are identified using Bayesian experimental design [70] so that the information obtained from the collected measurements is maximized. We define the information gain as the distance between the prior belief about the quantities of interest and the posterior belief after obtaining the measurements. Here, we choose the Kullback–Leibler divergence between the prior and the posterior distribution as the measure of this distance.
Bayesian Estimation of Swimmers
In the present experimental setup, we consider a group of swimmers followed by a single swimmer. The follower needs to identify (i) the relative location r of the center of mass and (ii) the population n_f of the leading group. We denote these unknown quantities by ϑ = r or ϑ = n_f and allow the follower to update its prior belief p(ϑ) about the leading group of swimmers by collecting measurements at its sensors. These sensors are distributed symmetrically on both sides of the swimmer and are represented by a single point on its mid-line. We denote the k-th measurement location on the upper and lower parts by x_1(s_k) and x_2(s_k), respectively. The corresponding measurements are denoted by y_k^1 and y_k^2, respectively (see Figure 2 for a sketch of the setup).
Figure 2.
Simulation setup used for determining the optimal sensor distribution on a fish-like body. The follower is initially located inside the rectangular area. The number of swimmers in the leading group is varied between one and eight. The sensor-placement algorithm attempts to find the arrangement of sensors s that allows the follower to determine with the lowest uncertainty the relative position r and the number of swimmers n_f in the leading group. For each sensor s_k the swimmer collects measurements y_k^1 and y_k^2 at locations x_1(s_k) and x_2(s_k) on the skin, respectively.
We denote by F(ϑ; s) ∈ R^{2n} the output of the flow simulation and include an error term ε to account for inaccuracies such as numerical errors and imperfections in the sensors. The measurements on the swimmer body can be expressed as

$$y = F(\vartheta; s) + \varepsilon. \tag{8}$$

We model the error term by a multivariate Gaussian distribution ε ∼ N(0, Σ(s)) with zero mean and covariance matrix Σ(s) ∈ R^{2n×2n}. In this case the likelihood of a measurement is given by

$$p(y|\vartheta, s) = \mathcal{N}\big(y \,\big|\, F(\vartheta; s),\, \Sigma(s)\big). \tag{9}$$

The covariance matrix depends on the sensor positions s, and we assume that the prediction errors are correlated for measurements on the same side of the swimmer and uncorrelated if they originate from opposite sides. Finally, we assume that the correlation decays exponentially with the distance between the measurement locations. The functional form of the resulting covariance matrix is

$$\Sigma_{ij}(s) = \begin{cases} \sigma^2 \exp\!\left(-\dfrac{|s_i - s_j|}{\ell}\right), & \text{sensors on the same side}, \\[2mm] 0, & \text{sensors on opposite sides}, \end{cases} \tag{10}$$

where ℓ > 0 is the correlation length and σ is the correlation strength. For all the cases described in this work, the correlation length is set to one tenth of the swimmer length, ℓ = 0.1L. The correlation strength σ is set to two times the average magnitude of the signals coming from the simulations, computed over samples ϑ^{(i)} from the distribution p(ϑ). We remark that the covariance matrix must be symmetric and positive definite. To ensure positive definiteness, we have to take special care of the case where a sensor location is picked twice. Notice that when s_i = s_j for i ≠ j, a non-diagonal entry equals the diagonal entry and positive definiteness is violated. We handle this case by setting the argument of the exponential in Equation (10) to 10^{−7} when s_i = s_j. This form of the correlation error reduces the utility when sensors are placed too close together and prevents excessive clustering of the sensors [71,72]. We wish to identify the locations s yielding the largest information gain about the unknown parameter ϑ of the disturbance. A measure of information gain is defined through the Kullback–Leibler (KL) divergence between the prior belief about the parameter values and the posterior belief, i.e., after measuring the environment. The prior and posterior beliefs are represented through the density functions p(ϑ) and p(ϑ|y, s), respectively. We denote by T the support of p(ϑ). The two densities are connected through Bayes' theorem,

$$p(\vartheta|y, s) = \frac{p(y|\vartheta, s)\, p(\vartheta)}{p(y|s)}, \tag{12}$$

where p(y|ϑ, s) is the likelihood function defined in Equation (9) and p(y|s) is the normalization constant. We assume that the prior belief about the parameters ϑ does not depend on the sensor locations, p(ϑ|s) ≡ p(ϑ).
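A sketch of the covariance construction in Equation (10), including the coincident-sensor fix described above, might look as follows; the example sensor locations and σ value are arbitrary.

```python
import numpy as np

def covariance(s, sigma, ell=0.1, eps=1e-7):
    """Build the 2n x 2n measurement-error covariance of Equation (10).

    Sensors on the same body side are exponentially correlated with length
    ell; opposite sides are uncorrelated (block-diagonal structure).
    Coincident sensors (s_i == s_j for i != j) get the exponential argument
    nudged to eps, keeping the matrix positive definite.
    """
    s = np.asarray(s, dtype=float)
    d = np.abs(s[:, None] - s[None, :]) / ell
    off_diag = ~np.eye(s.size, dtype=bool)
    d[(d == 0.0) & off_diag] = eps          # coincident-sensor fix
    block = sigma**2 * np.exp(-d)           # one side of the body
    n = s.size
    cov = np.zeros((2 * n, 2 * n))
    cov[:n, :n] = block
    cov[n:, n:] = block                     # other side, uncorrelated block
    return cov

# Positive definite even with a duplicated sensor location:
print(np.linalg.eigvalsh(covariance([0.1, 0.1, 0.5], sigma=1.0)).min() > 0)
```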
The utility function is defined as [73]

$$u(y, s) = \int_T p(\vartheta|y, s)\, \ln\frac{p(\vartheta|y, s)}{p(\vartheta)}\, \mathrm{d}\vartheta. \tag{13}$$

The expected utility is defined as the average over all possible measurements,

$$U(s) = \int_Y u(y, s)\, p(y|s)\, \mathrm{d}y, \tag{14}$$

where Y is the domain of all possible measurements. Using Equation (12), the expected utility can be expressed as

$$U(s) = \int_T \int_Y \ln\frac{p(y|\vartheta, s)}{p(y|s)}\, p(y|\vartheta, s)\, \mathrm{d}y\; p(\vartheta)\, \mathrm{d}\vartheta. \tag{15}$$
Estimated Expected Utility for Continuous Random Variables: School Location
When ϑ = r, the parameter is a continuous random variable with ϑ ∈ Ω ⊂ R². The estimator for the expected utility in this case can be obtained by approximating the two integrals by Monte Carlo integration using N_ϑ samples from p(ϑ) and N_y samples from p(y|ϑ, s) [70]. The resulting estimator is

$$\hat{U}(s) = \frac{1}{N_\vartheta}\sum_{i=1}^{N_\vartheta}\frac{1}{N_y}\sum_{j=1}^{N_y} \ln\frac{p\big(y^{(i,j)}\,\big|\,\vartheta^{(i)}, s\big)}{\frac{1}{N_\vartheta}\sum_{k=1}^{N_\vartheta} p\big(y^{(i,j)}\,\big|\,\vartheta^{(k)}, s\big)}, \tag{16}$$

where ϑ^{(i)} ∼ p_ϑ(·) for i = 1, …, N_ϑ and y^{(i,j)} ∼ p_y(·|ϑ^{(i)}, s) for j = 1, …, N_y. We remark that the computational complexity of this procedure is mainly determined by the number of Navier–Stokes simulations N_ϑ. There is no additional computational burden to compute the N_y samples following the measurement error model in Equation (8).
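As an illustration, a compact, toy-scale implementation of the nested Monte Carlo estimator in Equation (16) is sketched below; the function F here is a cheap stand-in for the expensive Navier–Stokes surface signal, and the sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def expected_utility(F, thetas, cov, n_y=10, seed=0):
    """Nested Monte Carlo estimator of the expected information gain.

    For each prior sample theta^(i), draw n_y noisy measurements and
    average log p(y|theta_i) - log p(y), with the evidence p(y) estimated
    by the prior mixture over all theta^(k).
    """
    rng = np.random.default_rng(seed)
    sigs = np.array([F(th) for th in thetas])   # (N_theta, 2n) signals
    N = len(thetas)
    total = 0.0
    for i in range(N):
        ys = rng.multivariate_normal(sigs[i], cov, size=n_y)
        for y in ys:
            logp = np.array([multivariate_normal.logpdf(y, mean=m, cov=cov)
                             for m in sigs])
            total += logp[i] - (logsumexp(logp) - np.log(N))
    return total / (N * n_y)

# Toy check: two clearly separated signals give a utility near ln 2 = 0.69.
print(expected_utility(lambda th: np.full(4, th), thetas=[0.0, 5.0],
                       cov=0.1 * np.eye(4), n_y=50))
```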
Estimated Expected Utility for Discrete Random Variables: School Size
When ϑ is a discrete random variable with finite support, taking values in the set {ϑ_1, …, ϑ_{N_ϑ}}, the expected utility in Equation (15) becomes

$$U(s) = \sum_{i=1}^{N_\vartheta} p(\vartheta_i) \int_Y \ln\frac{p(y|\vartheta_i, s)}{p(y|s)}\, p(y|\vartheta_i, s)\, \mathrm{d}y. \tag{17}$$

Here, ϑ = n_f represents the number of swimmers in the leading group. An estimator of this utility can be obtained by Monte Carlo integration using N_y samples from the likelihood distribution p(y|ϑ_i, s):

$$\hat{U}(s) = \sum_{i=1}^{N_\vartheta} p(\vartheta_i)\, \frac{1}{N_y}\sum_{j=1}^{N_y} \ln\frac{p\big(y^{(i,j)}\,\big|\,\vartheta_i, s\big)}{p\big(y^{(i,j)}\,\big|\,s\big)}, \tag{18}$$

where y^{(i,j)} ∼ p_y(·|ϑ_i, s) for j = 1, …, N_y. Let ϕ be the random variable representing one of the group configurations. Each group configuration is associated with a unique label ϕ_{i,ℓ} for ℓ = 1, …, n_i, where n_i is the total number of configurations containing i swimmers.

Notice that the likelihood function for fixed ϑ_i is a mixture of Gaussian distributions with equal weights and that p(y|ϕ = ϕ_{i,ℓ}, s) = N(y|F(ϕ_{i,ℓ}; s), Σ(s)). In order to draw a sample from the likelihood, we first draw an integer ℓ* with equal probability from 1 to n_i and then draw y ∼ p_y(·|ϕ_{i,ℓ*}, s).

The final form of the estimator is given by

$$\hat{U}(s) = \frac{1}{N_\vartheta}\sum_{i=1}^{N_\vartheta}\frac{1}{N_y}\sum_{j=1}^{N_y} \ln\frac{\frac{1}{n_i}\sum_{\ell=1}^{n_i} p\big(y^{(i,j)}\,\big|\,\varphi_{i,\ell}, s\big)}{\frac{1}{N_\vartheta}\sum_{m=1}^{N_\vartheta}\frac{1}{n_m}\sum_{\ell=1}^{n_m} p\big(y^{(i,j)}\,\big|\,\varphi_{m,\ell}, s\big)}. \tag{20}$$
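For the discrete, mixture-likelihood case, the sampling procedure and the estimator can be sketched as follows; the toy signals are placeholders for the simulated F(ϕ_{i,ℓ}; s), and a uniform prior over group sizes is assumed.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def utility_discrete(signals, cov, n_y=20, seed=1):
    """Estimator for the school-size objective with mixture likelihoods.

    signals[i] lists the noiseless sensor signals of the n_i configurations
    in group i; p(y|theta_i) is the equal-weight Gaussian mixture over them.
    """
    rng = np.random.default_rng(seed)

    def log_lik(y, group):  # log p(y | theta_i, s), a Gaussian mixture
        comps = [multivariate_normal.logpdf(y, mean=m, cov=cov) for m in group]
        return logsumexp(comps) - np.log(len(group))

    N = len(signals)
    total = 0.0
    for group in signals:
        for _ in range(n_y):
            m = group[rng.integers(len(group))]   # draw a configuration l*
            y = rng.multivariate_normal(m, cov)   # y ~ N(F(phi_{i,l*}), Sigma)
            num = log_lik(y, group)
            den = logsumexp([log_lik(y, g) for g in signals]) - np.log(N)
            total += num - den
    return total / (N * n_y)

# Two well-separated groups should be told apart: utility close to ln 2.
sig = [[np.zeros(2)], [np.full(2, 4.0), np.full(2, 5.0)]]
print(utility_discrete(sig, cov=0.1 * np.eye(2)))
```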
Optimization of the Expected Utility Function
In order to determine the optimal sensor arrangement, we maximize the utility estimator Û(s) described in Equation (16). It has been observed that the expected utility for many sensors often exhibits many local optima [71,74]. Heuristic approaches, such as the sequential sensor placement algorithm described by [75], have been demonstrated to be effective alternatives. Here, following [75], we perform the optimization iteratively, placing one sensor after the other. We start by placing one sensor s_1 by a grid search in the interval [0, L], where L is the length of the swimmer. In the next step we compute the location of the second sensor by setting s = (s_1, s) and repeating the grid search for the new optimal location s_2. This procedure is then continued by defining

$$s_k = \arg\max_{s \in [0, L]} \hat{U}\big((s_1, \ldots, s_{k-1}, s)\big). \tag{21}$$

We note that the scalar variable s denotes the mid-line coordinate of a single sensor-pair, whereas the vector s holds the mid-line coordinates of all sensor-pairs. Besides the mentioned advantages, sequential placement allows us to quantify the importance of each placed sensor and provides further insight into the resulting distribution of sensors.
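A sketch of the greedy loop in Equation (21) follows; the toy utility below merely rewards well-spread sensors and stands in for the expensive estimator Û.

```python
import numpy as np

def sequential_placement(utility, n_sensors, grid):
    """Greedy sequential placement: fix the sensors chosen so far and
    grid-search the next location, as in Equation (21)."""
    placed = []
    for _ in range(n_sensors):
        scores = [utility(placed + [s]) for s in grid]
        placed.append(float(grid[int(np.argmax(scores))]))
    return placed

def toy_utility(sensors):
    # Rewards configurations whose gaps along [0, 1] are spread out.
    pts = np.sort(np.concatenate(([0.0], np.asarray(sensors, dtype=float), [1.0])))
    return float(np.sum(np.minimum(np.diff(pts), 0.25)))

print(sequential_placement(toy_utility, n_sensors=3, grid=np.linspace(0.0, 1.0, 51)))
```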
Results
We examine the optimal arrangement of pressure gradient and shear stress sensors on the surface of a swimmer trailing a school of self-propelled swimmers. We consider two sensing objectives: (a) the size of the leading school and (b) the relative position of the school. The simulations correspond to a Reynolds number $Re = L^2/\nu = 2000$. In all experiments, we use 4096 points to discretize the horizontal direction x ∈ [0, 1], and all artificial swimmers have a length of L = 0.1.
For the "size of the leading school" experiment, where the aim is to determine the size of the group, we chose the school-sizes to be ϑ i = 1, . . . , 8. First we consider one configuration per group-size. In this case inferring the configuration is equivalent to inferring the number of swimmer in the group. To increase the difficulty we consider n i different initial configurations. In each configuration we assign a number ϕ i, for i = 1, . . . , 8 and = 1, . . . , n i . In total, we consider N tot = ∑ i n i = 61 distinct configurations each having the same prior probability 1/N tot . In Appendix A we present the initial condition for all configurations. The center of mass of the school is located at x = 0.3 and in the y-axis in the middle of the vertical extent of the domain. We use a controller to fix the distance between x and y coordinates of two swimmers to ∆x = ∆y = 0.15, see Section 2.1.1.
For the "relative position" experiment, where the aim is to determine the relative location of the follower to the center of mass of the leading group, we consider three independent experiments with one, four and seven leading swimmers. Snapshots of the pressure field for these simulations are presented in Figure 3. The prior probability for the position of the group is uniform in the domain [0.6, 0.8] × [0.1, 0.4]. The support of the prior probability is discretized with 21 × 31 gridpoints. Since the experiments are independent, the total expected utility function for the three cases is the sum of the expected utility of each experiment [56]. Figure 3. Snapshots of the pressure field in the environment of the follower swimmer generated by one (a), four (b) and seven (c) schooling swimmers. The snapshots are taken at the moment the measurement was performed for one particular location of the follower in the prior region. High pressure is shown in red and low pressure in blue.
For both experiments we record the pressure gradient and shear stress on the surface of the swimmer using the methods discussed in Section 2.1.2. The motion of the swimmer introduces disturbances on its own surface. In order to distinguish the self-induced from the environment-induced disturbances, we freeze the movement of the following swimmer and set its curvature to zero. The freezing time is selected by evolving the simulation until the wakes of the leading group are sufficiently mixed and have passed the following swimmer. We found that this is the case for T = 22. The transition from swimming to coasting motion takes place during the time interval [T, T + 1]. Finally, we record the pressure gradient and the shear stress at time T + 2. The resulting sensor signal associated with the midline coordinates s for a given configuration ϑ is denoted F(ϑ; s); see Equation (8).
Utility Function for the First Sensor
In this section we discuss the optimal location of a single pressure gradient sensor using the estimators in Equations (16) and (20). Recall that we estimate the expected KL divergence between the prior and the posterior distribution for different sensor locations s. The KL divergence can be understood as a measure of distance between two probability distributions; thus, higher values of the divergence correspond to preferable sensor locations, i.e., higher information gain. The resulting utilities are plotted in Figure 4. For all experiments we find that the tip of the head (s = 0) exhibits the largest utility, independent of the number of swimmers in the leading group.
At the tip of the head, the two symmetrically placed sensors have the smallest distance. In Equation (10) we have assumed that the two swimmer halves are symmetric and uncorrelated. Due to the small distance of the sensors at the head, spatial correlation between the sensors across the swimmer halves would decrease the utility of this location. In order to test whether the utility for sensors at the head is influenced by this symmetry assumption, we perform experiments where we place a single sensor on one side of the swimmer. Again, in this case the location at the head is found to have the highest expected utility.
Figure 4. Utility curves for the first sensor using pressure measurements. (a) The utility estimator for the "size of the leading school" experiment; (b) the utility estimator for the "relative position" experiment. We show the resulting curves for one, three and seven swimmers in the leading group, together with the total expected utility. We observe that, although the shape of the curves does not change drastically, the total utility increases with the size of the leading group.
There is evidence that the head experiences the largest variance of the pressure gradients F(ϑ; s). The same observation can be made for the density of the sub-canal neuromasts, which is also highest at the front of the fish [47]. To check for this correlation in our study, we examine the variance of the values obtained from our numerical solution of the Navier–Stokes equations. We confirm that our simulations are consistent with this experimental observation: independent of the number of swimmers, the variance of the sensor signal var_ϑ(F(ϑ; s)) is largest at s = 0.
Sequential Sensor Placement
In this section we discuss the results of the sequential sensor placement described in Section 2.2.4. For the "size of the leading school" experiment we present the results in Figure 5. In Figure 5a the utility curve for the first five sensors is shown. We observe that the utility curve becomes flatter as the number of sensors increases. Furthermore, we observe that the location of the previously placed sensor is a minimum of the utility for the next sensor. Figure 5b shows the utility estimator at the optimal sensor location for up to 20 sensors, and it is evident that the value of the expected utility reaches a plateau. In Figure 5c the optimal locations of the sensors on the skin of the swimmer are presented. The numbers correspond to the iteration of the sequential procedure at which each sensor was placed. Note that the sensors are placed symmetrically.
The optimal sensor placement results for the "relative position" experiment can be found in Figure 6. Similar to the other experiment, the utility curves become flatter after every placed sensor, and the location of the previous sensor is a minimum of the utility for the next sensor (see Figure 6a). We plot the maximum of the utility for up to 20 sensors (see Figure 6b) and observe convergence to a constant value. In Figure 6c the optimal locations of the first 20 sensors are presented.
For both experiments, it is evident that the utility of the optimal sensor location approaches a constant value. This can be explained by recalling that the expected utility in Equation (15) is a measure of the averaged distance between the prior and the posterior distribution. Increasing the number of sensors leads to an increase in the number of measurements. By the Bayesian central limit theorem, increasing the number of measurements leads to convergence of the posterior to a Dirac distribution. As soon as the posterior has converged, the expected distance from the prior, and thus the expected utility, remains constant. Figure 5. Optimal sensor placement for the pressure gradient sensors in the "size of the leading school" experiment. (a) The utility estimator for the first five sensors; (b) the value of the utility estimator at the optimal sensor location for the first 20 sensors; (c) the distribution of the sensors on the swimmer surface, where the number associated with each sensor indicates that this location is the i-th sensor location chosen according to Equation (21). Figure 6. Optimal sensor placement for the pressure gradient sensors in the "relative position" experiment. (a) The utility estimator for the first five sensors; (b) the value of the utility estimator at the optimal sensor location for the first 20 sensors; (c) the distribution of the sensors on the swimmer surface, where the number associated with each sensor indicates that this location is the i-th sensor location chosen according to Equation (21).
The sensor distributions found for the two objectives are similar, with clusters at the head and a uniform distribution along the body. In order to underpin the biological relevance of the observed sensor distribution, we compare our results to [47]. Given that the canals display significant 3D branching in the head, a direct comparison is difficult. However, the cluster of sensors found at the head agrees qualitatively with the high canal density reported in [47].
Inference of the Environment
In this section we demonstrate the importance of the optimal sensor locations and examine the convergence of the posterior distribution. We compute the posterior distribution via Bayes' theorem, given in Equation (12). We set y = F(ϑ, s) and compute the posterior for different values of ϑ in the prior region. We consider measurements collected at (a) the optimal and (b) the worst sensor locations.
The posterior probability for the "size of the leading school" experiment is shown in Figure 7. We observe that the worst sensor location yields an almost uniform posterior distribution, reflecting that measurements at this sensor carry no information. On the other hand, the posterior distribution for the optimal sensor is more informative. We observe that for small groups the follower is able to identify the size with more confidence than for larger groups. We compare the posterior for an experiment with only one configuration per group size to an experiment with multiple configurations. For multiple configurations the posterior is less informative, indicating that the second case is a more difficult problem. Finally, notice that the posterior for one configuration is symmetric, whereas adding multiple configurations breaks this symmetry. This fact is discussed in Appendix B.
The posterior density for the "relative position" experiment with one leading swimmer is presented in Figure 8. The posteriors for the configurations with three and seven swimmers are similar. We compute the posterior for measurements at the best and worst locations for one and three sensors. For the three-sensor case, the worst location was selected in all three stages of the sequential placement. The results for the normalized densities are shown in Figure 8. We observe that one sensor at the optimal location already gives a very peaked posterior, and three optimal sensors infer the location with low uncertainty. This is not the case for the worst sensors, where adding more sensors does not immediately lead to a reduction of uncertainty.
Shear Stress Sensors
In this section, we discuss the results for the optimal positioning of shear stress sensors. We follow the same procedure as in Sections 3.1 and 3.2. Here, we omit the presentation of all the results and focus on the similarities and differences to the pressure gradient sensors.
The optimal location of a single sensor for the "size of the leading school" experiment is s* = 3.01 × 10⁻⁴. For the "relative position" experiment we find the optimal location s* = 3.84 × 10⁻⁴. In contrast to the optimal location of a single pressure gradient sensor, the optimal shear sensor is not exactly at the tip of the head and differs between the two experiments. Examining the variance of the shear signal shows qualitatively the same behaviour as the utility; however, the locations of the variance maxima do not coincide with the maxima found for the expected utility of the shear sensors.
We perform sequential placement of 15 sensors. The resulting distribution of sensors is shown in Figure 9. In Section 3.2 we argued, using the Bayesian central limit theorem, that the expected utility must reach a plateau when placing many sensors. For shear stress sensors we observe that the convergence is slower than for the pressure gradient sensors. We conclude that the information gain per placed shear stress sensor is lower than for the pressure gradient sensors.
The posterior density obtained for both experiments is less informative when using the same number of sensors. This also indicates that shear is a less informative quantity, yielding a slower convergence of the posterior. This is in agreement with the observation that the subcanal neuromasts associated with pressure gradient sensing are more robust to noise [28]. For multiple fish in schools the resulting flow field is strongly disturbed, further suggesting the use of pressure gradient sensors. Figure 9. Optimal sensor locations for the shear stress measurements for the "size of the leading school" experiment (a) and the "relative position" experiment (b).
Discussion
We present a study of the optimal sensor locations on a self-propelled swimmer for detecting the size and location of a leading group of swimmers. This optimization combines Bayesian experimental design with large scale simulations of the two dimensional Navier-Stokes equations. Mimicking the function of sensory organs in real fish, we used the shear stress and pressure gradient on the surface of the swimmers to determine the sensor feedback generated by a disturbance in the flow field.
The optimization was performed for different configurations of swimmers, ranging from a simple leader-follower configuration with two swimmers to a group of up to eight swimmers leading a single follower. We considered two types of information: the number of swimmers in the leading group and the relative location of the leading group. We find that, although the general shape of the utility function varies between the two objectives, the preferred location of the first sensor, on the head of the swimmer, is consistent. Furthermore, we find that the objective is only weakly influenced by the number of members in the leading group.
We perform a sequential sensor placement and find that the utility converges to a constant value; thus we conclude that a few sensors suffice to infer the quantities of the surrounding flow. Indeed, we find that the optimal sensor locations correspond to a posterior distribution that is strongly peaked around the true value of the quantity of interest. In summary, we find that, for the group sizes under examination, changing the number of swimmers in the leading group does not influence the follower's ability to infer the mean school location. Furthermore, we show that by choosing the measurement locations in a systematic way, we are able to infer the number of swimmers in the leading group and the location of our agent with high accuracy.
We envision that the presented methodology can provide guidance in developing autonomous systems of schooling artificial swimmers. While biological organisms have flow fields distinct from those examined in the present two-dimensional simulations, we believe that the algorithms presented herein can be extended to 3D flows. Moreover, while we draw a distinction between fish and the studied artificial swimmers, we note the capability of identifying neighboring swimmers using shear and pressure information on the body of the swimmers, indicating that such information is sufficient for flow sensing.
Funding:
We would like to acknowledge the computational time at Swiss National Supercomputing Center (CSCS) under the project s929. We gratefully acknowledge support from the European Research Council (ERC) Advanced Investigator Award (No. 341117).
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Configurations
The configurations used for the "size of the leading school" experiment. For the configurations with three rows, the vertical extent y ∈ [0, 0.5] was discretized using 2048 gridpoints; for those with four rows, it was extended to y ∈ [0, 0.75] and discretized using 3072 gridpoints.
Appendix B. The Posterior Is Not Symmetric
The estimated posterior in Figure 7b is not symmetric with respect to the ϑ_true = ϑ diagonal. This observation indicates that the posterior is not symmetric with respect to an exchange of ϑ and ϑ_true, the parameter we try to infer and the one used in the simulation. Here, we show that this observation holds in general. In order to lighten the notation, we suppress the dependence of the distributions on the sensor location s.
In Section 2.2.3 we showed that the distribution of ϑ_i conditioned on measurements y, under the assumption of a uniform prior, is proportional to

$$p(\vartheta_i|y) \propto p(y|\vartheta_i)\, p(\vartheta_i) = \frac{1}{8}\, \frac{1}{n_i} \sum_{\ell=1}^{n_i} \mathcal{N}\big(y \,\big|\, F(\varphi_{i,\ell}),\, \Sigma\big). \tag{A1}$$

We want to show that, for any i ≠ j,

$$p(\vartheta_i|\vartheta_j) = p\big(\vartheta_i \,\big|\, y = F(\varphi_{j,k})\big) \neq p\big(\vartheta_j \,\big|\, y = F(\varphi_{i,\ell})\big) = p(\vartheta_j|\vartheta_i) \tag{A2}$$

for any configurations ϕ_{j,k} and ϕ_{i,ℓ} corresponding to schools of size ϑ_j and ϑ_i, respectively. From Equation (A1) it is easy to see that (A2) holds because, in general,

$$\frac{1}{n_i}\sum_{\ell'=1}^{n_i} \mathcal{N}\big(F(\varphi_{j,k}) \,\big|\, F(\varphi_{i,\ell'}),\, \Sigma\big) \neq \frac{1}{n_j}\sum_{k'=1}^{n_j} \mathcal{N}\big(F(\varphi_{i,\ell}) \,\big|\, F(\varphi_{j,k'}),\, \Sigma\big). \tag{A3}$$

Finally, we note that in the case where there is only one configuration per group size, i.e., n_i = 1 for all i, the inequality in (A3) does not hold and the posterior is symmetric. | 8,402.8 | 2020-03-01T00:00:00.000 | [
"Biology",
"Environmental Science",
"Physics"
] |
Multilevel Design for the Interior of 3D Fabrications
This article presents a multilevel design for infill patterns. The method partitions an input model into subareas, and each subarea is filled with a different scale of infill pattern. The number of subareas can be decided by users. For each subarea, a different value of the scaling parameter determines the number of columns and rows of pattern elements, which is useful for changing the weight and strength of a certain area according to user demands. Subareas can be symmetric or asymmetric to each other depending on the geometry of the 3D model and the application requirements. Within each subarea, symmetric patterns are generated. The proposed method is also applicable to combining different patterns. The aim of our work is to create lightweight 3D fabrications with lighter interior structures, to minimize printing materials, and to strengthen thin parts of objects. Our approach allows for the composition of sparse and dense distributions of patterns in the interior of 3D fabrications in an efficient way, so users can fabricate their own 3D designs.
Introduction
Recent additive manufacturing technology makes it possible to fabricate objects of any geometrical complexity from scanned real objects or designed digital models. Additive manufacturing is widely integrated into different fields through various fabrication methods, such as fused deposition modeling (FDM) [1], which prints objects layer by layer, stereolithography (SLA) [2], and selective laser sintering (SLS) [3] for manufacturers.
Modeling tools [4][5][6] allow users to design objects with the desired shape and complexity. To improve the durability and mechanical properties of 3D fabrications, there is an efficient and practical approach that fills the interior of 3D fabrications with geometrical patterns using various slicing tools [7][8][9].
In addition, numerous studies focus on the interiors of 3D fabrications to achieve specific functions related to the quality of printed objects. An alternative approach to interior structures is topology optimization, which deforms the original shape of the provided design. Commonly, topology optimization algorithms greatly improve the structural soundness of 3D fabrications and minimize material consumption. However, they are not feasible for topology-sensitive designs such as mechanical designs, where no geometric interference is acceptable, and most industrial samples demand high topological accuracy. In fact, 3D fabrication techniques are mostly used for creating 3D models with certain functions and purposes, where geometry modifications must not occur. Moreover, most topology optimization methods are complicated, with complex and time-consuming pipelines.
Among the studies dedicated to infill patterns, the most related work is adaptive multilevel interior structures [10]. Multilevel design is the best choice for the interior of 3D fabrications as it improves the physical properties of 3D objects and saves printing material.
The existing slicing tools control the pattern size through a volume percentage, but it is difficult to estimate the final pattern size by only setting the required volume percentage. Compared with slicing tools, the proposed method enables the user to specify the number of columns and rows of pattern elements with a specific scaling parameter that generates symmetrically positioned patterns for each subarea.
Therefore, our method enables users to create tailored 3D fabrications with certain qualities in an easy way. In fact, the adaptive multilevel design improves the physical properties of 3D fabrications and reduces material consumption better than uniformly structured patterns; however, it is feasible mostly for simple geometries. In addition, its computational cost is high, and its integration can be complicated for geometrically complex patterns. On the other hand, the proposed method is applicable to many patterns with different geometrical complexities. In addition, it can be combined with different patterns and can be integrated into 2D and 3D models with ease. Our approach can also balance conflicting requirements, such as strengthening 3D fabrications while reducing their material consumption. Moreover, with a scaling parameter, users can create lighter interiors for 3D fabrications by manipulating the sizes of elements according to their requirements. In our method, we use border conditions to prevent overlapping problems for each created subarea of a selected object area. Furthermore, we provide detailed descriptions of the designed subdivision schemes for each presented pattern. We developed three different patterns for our comparison test and integrated a scaling parameter into each scheme to generate outputs. We also applied the scaling parameter to uniformly structured patterns in order to compare uniformly structured patterns with the multilevel patterns developed by our method.
The main contributions of our study are the following:
• We develop a multilevel design approach with a scaling parameter, where users can provide the number of columns and rows of pattern elements to create 3D fabrications with tailored qualities.
• We develop three patterns and provide the designed subdivision schemes.
• We show the practical application of our method in 2D and 3D models.
• We focus on saving printing materials by creating lightweight 3D fabrications.
The rest of the paper is organized as follows: Section 2 includes related work where we review previous studies on the interior patterns of 3D fabrications and subdivision methods. Section 3 describes the construction of our method. Sections 4 and 5 describe the details of the subdivision schemes for the developed patterns. Section 6 discusses the experiment results, and the conclusion is provided in Section 7.
Fabrication
To control the physical properties of 3D models, various research teams have presented different interior structuring methods. In a first attempt at improving the physical properties of objects as well as reducing material usage, the study [11] proposed a skin-frame method which was efficient in saving material; however, it produced a structure that could not withstand high stresses, according to the comparison test in the study [12]. The researchers in Reference [12] proposed a method based on the Voronoi diagram that computes a specific carving level for each cell depending on the model shape. It greatly strengthens the structurally weak parts of 3D fabrications, but determining the carving level for each cell can be time-consuming.
Porous structures have been widely used for the interior design of 3D fabrications due to their valued properties of being lightweight, stress-sustainable, and cost-effective. Several studies are dedicated to porous structures. One study [13] presented bone-like porous structures, and another [14] proposed anisotropic porous structures based on anisotropic centroidal Voronoi tessellations. In the study [15], researchers developed a density-aware internal porous supporting structure to improve the structural soundness of 3D fabrications. The above studies offer more options for the interior design of 3D fabrications in order to improve mechanical performance and minimize material consumption.
Another approach [16] used a medial axis tree to support the interior of objects, similar to a skeleton; this method combined several components that help to improve the physical properties of 3D fabrications.
In subsequent research, density manipulation of microstructures was performed [17]; the researchers manipulated microstructures to control the elasticity of 3D fabrications. This method assembled small-scale microstructures to produce the effect of soft materials. In Reference [18], researchers developed a method to fabricate 3D objects by filling them with microstructures, as in the previous study, to control their elasticity. Another study [19] proposed rhombic cells that automatically satisfy manufacturing requirements regarding the overhang angle and wall thickness. Unlike previous studies, the research in [20] proposed a method of hollowing the interior of 3D fabrications with ellipses to save material and improve their physical properties. Studies such as [21,22] used topology optimization to handle material distributions according to specific requirements.
In the survey study [23], researchers reviewed biomimetic designs in additive manufacturing. Most biomimetic designs are microstructurally complex topology structures with composite holes or irregular surface morphology that require special fabrication. To fabricate such biomimetic microstructures accurately, it is mostly necessary to print with SLS 3D printers (powder-based printers mostly used by manufacturers), since accurate fabrication with FDM 3D printers is impossible. The exceptions among biomimetic structures that are FDM-printable are a very limited number of structures, including hexagonal-shaped ones.
In fact, the fabrication technology will vary depending on the biomimetic design, its scale, and the material. For infill patterns, printability is extremely important. As mentioned earlier, not all biomimetic designs are suitable for printing, particularly with FDM 3D printers; since most people use customized home FDM 3D printers, we developed infill patterns that are printable by FDM 3D printers. Topology optimization is an efficient approach for improving the physical properties of 3D fabrications. One work related to topology optimization is the study [24], where researchers proposed a topology optimization method for generating new profile samples. The method modifies the original profile, represented by a curve composed of cubic Bézier segments whose control points are transformed in order to generate new design samples. It can be considered a good option for generating creative designs where topological shape sensitivity of samples is not required. In this study, we do not consider applications of topology optimization, since our goal is the fabrication of 3D objects without topology modifications. In fact, most people use 3D printing technology to fabricate 3D objects with certain functionality, such as mechanical designs, industrial samples, or 3D models requiring geometrical accuracy, rather than 3D fabrications with topological freedom.
Our observations reveal that the above-mentioned studies target only a single pattern type, while our method is applicable to patterns with different geometries. In our method, we divide the boundary of a 3D model into subregions that can be symmetric or asymmetric to each other; for each subregion, a feasible value of the scaling parameter determines the number of steps at which patterns are created. Our method can be considered a goal-oriented fabrication approach for creating lightweight 3D fabrications and strengthening only the required parts by applying densely distributed patterns, while the remaining part of the input model can contain sparsely distributed patterns.
Subdivision
The existing subdivision schemes were mostly proposed for smoothing and modeling purposes; the methods for constructing subdivision schemes are distinct from each other despite some similarities. In this section, we review reference schemes from different studies. The study [25] summarized an overview of subdivision surfaces, including scheme construction, property analysis, parametric evaluation, and subdivision surface fitting. Another study [26] proposed non-uniform subdivision for B-splines of arbitrary degree; the approach is similar to the Lane–Riesenfeld algorithm that composes the doubled control points. In the study [27], the subdivision scheme was designed for generalized B-splines, which unify classic B-splines with algebraic-trigonometric and algebraic-hyperbolic B-splines.
The primal subdivision scheme [28] was introduced by Catmull for the generalization of bi-cubic uniform B-spline surfaces to arbitrary topology. Loop's subdivision scheme [29] was introduced to handle triangular control meshes to create sculptured smooth surfaces. Further, Zorin [30] proposed a framework for primal/dual quadrilateral subdivision schemes and provided conditions for the schemes to be C1 at irregular surface points.
As can be observed, most of the presented subdivision schemes were developed for smoothing and modeling applications. In contrast, we design subdivision schemes for new infill patterns and show their practical application in additive manufacturing.
Multilevel Design Construction
This section describes the construction of our method. In our study, the base area of the bounding box of an input model is considered as a target area, and the midpoint algorithm is used for creating subareas.
In our approach, boundary conditions are provided to prevent overlapping between subareas. The number of subareas is determined by the required demands. For each subarea, a different value of S_p is given, which creates elements of different sizes and thereby forms a multilevel design of a single pattern. The element size depends on the value of S_p and on the subarea. To create elements of smaller size, the value of S_p must be increased. The element size can be defined as

$$S_1 = \frac{A_2}{S_t},$$

where S_t is the number of steps at which pattern elements are created and A = A_1 × A_2 is the subarea; S_t can vary between A_1 and A_2 depending on requirements, resulting in unequal numbers of rows and columns. In this study, S_p determines S_t for columns, which can be written in extended form as S_t = {h_1, h_2, …, h_n : n ∈ Z}, where ∀h_n = [x, y, z]^T, and S_t divides the sides of A with the Euclidean distance

$$d(h_i, h_{i+1}) = \lVert h_{i+1} - h_i \rVert_2 .$$

In our method, we consider an additional option to iterate after providing S_p; if iterations are performed, the number of element columns increases at each refinement level according to the arithmetic sequence

$$a_n = a_1 + (n - 1)\, D_n,$$

where n is the refinement level, (n − 1) is the term position, and D_n is the difference. In Figure 1, the base area of the present model was partitioned into three subareas, A, B, and C. Each subarea was created with a different value of S_p. The construction of our method is illustrated in Figure 1.
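A small sketch of this sizing logic follows; the mapping from S_p to the step count S_t is an assumption here (identity plus arithmetic refinement with difference D), since the exact mapping is defined by the scheme.

```python
import numpy as np

def element_grid(A1, A2, Sp, iterations=0, D=1):
    """Sizing sketch following the quantities named above (assumed mapping)."""
    St = Sp + iterations * D                 # columns after refinement levels
    S1 = A2 / St                             # element size along A2
    # Division points h_n = [x, y, z]^T along the A1 side of the subarea.
    h = [np.array([x, 0.0, 0.0]) for x in np.linspace(0.0, A1, St + 1)]
    return S1, h

S1, pts = element_grid(A1=40.0, A2=20.0, Sp=8)
print(S1, np.linalg.norm(pts[1] - pts[0]))   # element size and Euclidean step
```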
Topological and Geometrical subdivision rules: The topological rule for the scheme is described by the process that begins with the generation of new elements; precisely, for each element face For each subarea we specified border conditions and S p values as follows: where A, B, C are the subareas; S p is the scaling parameter; SA is the selected area.
A feasible value of S p is defined depending on the specific requirements. For strengthening an object we use a high value of S p that is applied to fragile regions of the object, while the remaining parts are subjected to a low value of S p , resulting in a multilevel design. Our method performs for patterns with different geometries. In addition, we can combine with other structures, as shown in Figure 2. All of our output interiors are generated by applying S p without iterations. For each subarea we specified border conditions and p S values as follows: Figure 2. All of our output interiors are generated by applying p S without iterations.
Star Grid (SG) Pattern
A symmetric grid mesh Grid^{k−1} with a given value of S_p is used, where Grid^{k−1} = Grid(V, E, F) with V the set of vertices, E the set of edges, and F the set of faces. Grid^{k−1} consists of the points {G_i^{k−1} : i ∈ Z}, i.e., V = {G_i^{k−1} : i ∈ Z}. Any element of Grid(V, E, F) can be written in extended form as a linear combination of the control points. According to the topological rules of the SG pattern, a new mesh SG^k(V_SG, E_SG, F_SG) is created, with newly generated faces F_SG and a new set of vertices V_SG = {T_i^k, M_j^{k+1} : i ∈ Z and j ∈ Z} among V = {G_i^{k−1} : i ∈ Z}, where V ∈ Grid^{k−1}. In general, the entire process can be represented by the following formula:

$$SG^k = S_{SG}\big(TR_{SG},\, GR_{SG}\big)\big(Grid^{k-1}\big), \tag{4}$$

where S_SG is the subdivision, TR_SG are the topological rules, and GR_SG are the geometric rules. Formula (4) represents the general case for ∀SG. Topological and geometrical subdivision rules: The topological rule of the scheme is described by the process that begins with the generation of new elements; precisely, for each element face ∀F_old^{k−1} ∈ Grid^{k−1}, the centroid M_j^{k+1} is computed as the average of the face's vertices, and the new point set T_i^k is obtained from the control points through the subdivision matrix given below. The entire procedure of topological subdivision is illustrated in Figure 3; as can be seen from the figure, each element of the SG pattern is symmetric.
Each element of the newly generated SG^k is constructed according to the subdivision scheme expressed by a subdivision matrix,

$$T_i^k = S_m\, G_i^{k-1},$$

where T_i^k is the set of new points and S_m is the subdivision matrix.
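Since the subdivision matrix itself did not survive extraction, the following is only a generic illustration of the centroid/midpoint computation such quad-face rules build on; the connectivity of the actual SG elements follows the paper's Figure 3.

```python
import numpy as np

def face_points(face):
    """For a quad face with vertices G_i, return the centroid M (average of
    the vertices) and the edge midpoints T_i; rules of this kind connect
    such points into star-shaped elements."""
    G = np.asarray(face, dtype=float)        # (4, 2) quad vertices
    M = G.mean(axis=0)                       # face centroid
    T = 0.5 * (G + np.roll(G, -1, axis=0))   # edge midpoints
    return M, T

M, T = face_points([[0, 0], [1, 0], [1, 1], [0, 1]])
print(M, T)
```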
As described earlier, the construction of our method involves creating subareas from the selected area to generate multiple patterns. For each subarea, different values of S_p are applied depending on the specific application. The practical application of our method is illustrated in Figure 4.
During pattern generation, vertices are inserted according to the topological and geometrical rules of the subdivision scheme. In the generated SG patterns, with new vertices V_SG and faces F_SG, the number of edges E_SG increases during iterations. SG pattern elements are generated through a nested subdivision process,

$$SG^1 \rightarrow SG^2 \rightarrow SG^3 \rightarrow \cdots \rightarrow SG^{n+1},$$

where the coarsest level is SG^1 and denser levels are SG^2, SG^3, …, SG^{n+1} with n ≥ 3. In fact, SG^{n+1} can be a competitive option when it is necessary to strengthen 3D fabrications. The presented subdivision scheme is designed to produce SG patterns; moreover, in the scheme, S_p is used to control the size of elements. The size of the pattern elements affects factors such as material consumption, printing time, cost, and weight, in addition to the stress-sustainability of 3D fabrications. Furthermore, the element size depends on the value of S_p, on whether we iterate from the refinement level, and on the applied area. In smaller areas, the sizes of elements will be smaller even with a high value of S_p. We obtained outputs with S_p for the SG patterns, as shown in Figure 5.
Hexagonal Patterns
The study [31] revealed that hexagonal shapes can provide high strength; moreover, these patterns make efficient use of space and building materials by creating more space with less material consumption. Therefore, we consider hexagonal pattern types to be among the efficient structures for the interior of 3D fabrications, meeting major user demands such as reduced consumption of printing materials and the strengthening of required parts of 3D fabrications. To create the hexagonal pattern types, we design a subdivision scheme that generates natural-looking hexagonal structures.
All hexagonal elements are identical owing to their symmetry.
Scheme for Hexagonal Patterns
From the provided symmetric The newly formed faces HM F and edges HM E . There are two types of hexagonal patterns with slight differences in topology, as illustrated in Figure 6. They were created with the presented subdivision scheme but with some differences in topological rules. Topological and Geometrical subdivision rules: We developed the construction process of ( , , The general process can be written as follows: The topological difference defined for hexagonal trapezoidal patterns as the connection of P 2 k with P 5 k results in equilateral trapezoids forming a hexagon, for further equations P k = H k . Topological and Geometrical subdivision rules: We developed the construction process of HM k (V HM , E HM , F HM ), k > 1 with a new set of points H i k : i = Z , the element vertices is defined by the provided subdivision matrix (8a) with V = G i k−1 : i ∈ Z ∈ Grid. The general process can be written as follows: where S hx is the subdivision; TR hx is the topological rules; GR hx is the geometric rules. The equation for hexagonal subdivision can be written in the following form: where S m k is the subdivision matrix; H i k is a new set of points.
Outputs generated according to the presented subdivision scheme for two hexagonal patterns are shown in Figure 7.
In this part, we discuss the creation of multilevel designs with S_p. For multilevel designs, the selected area of an object is divided into subareas; the number of subareas is determined by the application requirements. As an example, we created a multilevel design of a duck model; the selected area of the duck was divided into subareas. We used S_p with different values to test our approach and generate the output presented in Figure 8. The coarsest hexagonal pattern types can be achieved with S_p = 1. In Figure 8, the difference between S_p = 8 and S_p = 6 is visible from the element size of the patterns. In fact, the element size of the pattern changes depending on the provided value of S_p. We developed hexagonal pattern types to create a multilevel design with our approach. The subdivision scheme was designed to produce the presented hexagonal pattern types with slightly different geometries. Such geometries minimize the amount of printing material used and create lightweight 3D fabrications, while also improving their structural soundness.
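In code, a multilevel design reduces to a map from subareas to scaling values. The fragment below is a hypothetical illustration of that bookkeeping, reusing Point and subdivide from the earlier SG sketch; the names Subarea and buildMultilevelInterior are ours:

```cpp
#include <map>
#include <string>
#include <vector>

// (Uses Point and subdivide from the earlier SG sketch.)
struct Subarea {
    std::string name;                        // e.g. "body", "legs"
    std::vector<std::vector<Point>> faces;   // faces of the selected area
};

// Apply a per-subarea scaling parameter: thin parts that must be
// strengthened get a high sp, bulky parts keep a low sp to save material.
std::vector<std::vector<Point>>
buildMultilevelInterior(const std::vector<Subarea>& subareas,
                        const std::map<std::string, int>& spBySubarea) {
    std::vector<std::vector<Point>> elements;
    for (const Subarea& s : subareas) {
        const auto it = spBySubarea.find(s.name);
        const int sp = (it != spBySubarea.end()) ? it->second : 1;  // coarsest
        for (const auto& face : s.faces)
            subdivide(face, sp, elements);
    }
    return elements;
}
```

This mirrors the duck example: thin parts receive a high S_p for strength, while bulky parts keep a low S_p to save material and weight.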
Experimental Results
We compared our method with uniformly structured patterns for 3D fabrications. We printed 2D and 3D models and conducted a comparison experiment by measuring their weight; moreover, we evaluated the mechanical behavior of the 3D fabrications by testing them with an Instron-5690 electromechanical testing machine (Instron, USA) to determine their exact external force sustainability. Compression was performed at a speed of 10 mm/min. We tested our method with 2D and 3D models from different object categories (geometric figures and animals). All the models were fabricated on a MakerBot Replicator 2 FDM 3D printer (MakerBot, USA) with a build volume of 285 × 153 × 155 mm; we used acrylonitrile butadiene styrene as the printing material. Our platform was developed in C++ with Visual Studio 2015 and rendered with the OpenGL API.
We tested our method on different models and compared it against the uniform structuring method for each presented pattern. We applied S_p in both methods to evaluate the efficiency in terms of cost-effectiveness as well as external force sustainability.
Multilevel Design vs. Uniform Design
In this part, we describe the results of the comparison experiments conducted between multilevel designs and uniformly structured patterns. We printed objects with different values of the scaling parameter to show the efficiency of our approach. The first experiment was conducted to reveal the lightest interior structure. We measured the weights of each presented 3D fabrication and compared them; the results are presented in Table 1.
Table 1. Weights of models with interiors having a multilevel design (thickness for all models is 0.8 mm); columns: Star Grid, Hexagonal, Hexagonal Trapezoid.
According to the experimental results, the lightest structure among the multilevel design patterns is the hexagonal pattern. Moreover, it can be considered a cost-effective structure, and it requires less printing time compared to patterns with more edges, such as the SG and hexagonal trapezoidal patterns. In fact, patterns with complex geometries consume more printing material, but they are beneficial for strengthening purposes. With our proposed method, material consumption can be minimized for patterns with complex topologies. The next part of the experiment involved determining how S_p impacted the weights of 3D fabrications. By dividing the base area of the models into several subareas, we applied a feasible value of S_p for each subarea depending on the application specifications. The models presented in Table 2 were divided into three parts with different values of S_p; a high value of S_p was used only for the thinner parts of the models in order to strengthen them and to achieve a compromise between physical property requirements, such as strengthening, and the creation of lightweight 3D fabrications. The differences are clearly observable between the two duck models with hexagonal trapezoidal interiors; one of them weighs 33 g, and the other weighs 23 g. This experiment demonstrated that the weights of the 3D fabrications can be controlled by manipulating the scaling parameter. Through the weight-measuring experiment, we observed how S_p impacted the physical properties of the 3D fabrications.
Table 3. Weight of models with uniform structures; columns: Star Grid, Hexagonal Pattern, Hexagonal Trapezoid.
Additionally, we performed comparison tests between the uniformly structured patterns to experimentally evaluate the effectiveness of our developed patterns in terms of saving material. For the experiment, we printed 3D fabrications with the same scaling parameters, S_p = 2 for the kittens and S_p = 4 for the cubes, as shown in Table 3. As expected, the lightest pattern was the hexagonal pattern, owing to its geometry, which aids in the efficient use of material.
Stress-Sustainability Comparison
As a supplementary part, we conducted a second experiment to evaluate the stress-sustainability of the created patterns. We performed compression tests only for the cube models, as the geometry of the models influences the compression test results.
Therefore, we selected cube models with the three patterns to determine the stress-sustainability of each pattern. As can be seen from Table 4, we experimented with both uniformly structured patterns and multilevel patterns to show the efficiency of our proposed method and the designed patterns. The compression test results, with maximum loads of 48,400 N, 29,500 N, and 20,300 N recorded in Table 4, showed that the SG pattern was more stress-sustainable than the hexagonal and hexagonal trapezoidal patterns. Although hexagonal structures are known for their high strength and durability among natural structures, they revealed less stress-sustainability than the SG patterns. The SG pattern has a stronger structure than the other patterns, which makes it more stress-sustainable.
The experimental results showed that the patterns developed using our method effectively resist external forces; moreover, our approach proved to be cost-effective and produces lightweight 3D fabrications.
Strengthening Thin Parts
As a supplementary part of our study, we considered strengthening thin parts of objects, as different engineering applications require improved strength of thin parts in samples or industrial models. In fact, thin parts of objects are less stress-sustainable; therefore, we determined the thin parts of models via visual observation and strengthened them by applying a feasible value of S_p, as shown in Figure 9.
Figure 9. Strengthening the thin parts of the models.
Conclusions
The main novelty of this study was to demonstrate the successful application of our method for creating a lightweight interior for 3D fabrications and strengthening the inner parts of 3D fabrications. In this study, we introduced the pattern element scaling parameter S_p, which impacts the physical properties of 3D fabrications, such as their weight, material consumption, and printing time. By manipulating S_p, it is possible to control the physical properties of 3D fabrications, including their weight. Our method can be considered a goal-oriented fabrication approach that can satisfy the desired application demands for 3D fabrications. The experimental tests provide evidence of the efficiency of our method, the results of which can be seen in Table 1. Moreover, we include subdivision schemes for each proposed pattern. We have shown practical applications of our method on 2D and 3D models, and for the conducted tests we experimented with different models. A key advantage of our method is the controllability of 3D fabrication properties through the introduced parameter S_p.
Structural and Functional Annotation of Eukaryotic Genomes with GenSAS
The Genome Sequence Annotation Server (GenSAS, https://www.gensas.org) is a secure, web-based genome annotation platform for structural and functional annotation, as well as manual curation. Requiring no installation by users, GenSAS integrates popular command line-based annotation tools under a single, easy-to-use, online interface. GenSAS integrates JBrowse and Apollo, so users can view annotation data and manually curate gene models. Users are guided step by step through the annotation process by embedded instructions and a more in-depth GenSAS User's Guide. In addition to a genome assembly file, users can also upload organism-specific transcript, protein, and RNA-seq read evidence for use in the annotation process. The latest versions of the NCBI RefSeq transcript and protein databases and the SwissProt and TrEMBL protein databases are provided for all users. GenSAS projects can be shared with other GenSAS users, enabling collaborative annotation. Once annotation is complete, GenSAS generates the final files of the annotated gene models in common file formats for use with other annotation tools, submission to a repository, and use in publications.
Introduction
While advances in sequencing and computational technologies, coupled with more affordable costs, are enabling researchers to routinely sequence genomes of interest, predicting genes and assigning biological relevance to the putative proteins that those genes encode remain challenging tasks for non-computational scientists. Eukaryotic genome annotation involves three major steps: identification and masking of repetitive DNA sequences, structural annotation, and functional annotation (see the excellent review of the process by Yandell and Ence [1]). Compared to prokaryotic organisms, eukaryotic genome sequences contain repetitive sequences that complicate the annotation process. Repeat identification and masking simply change the bases in repetitive regions to an "N" or "X" nucleotide, allowing downstream tools to ignore the repeat. During the structural annotation portion of the process, DNA landmarks such as protein-coding genes, tRNAs, and rRNAs, which can be determined based on the DNA sequence, are identified. After structural annotation, functional annotation is performed in silico to infer biological function for the proteins of the gene models. After the initial annotation of the genome, some genome sequencing projects (e.g., TAIR, https://www.arabidopsis.org/) also perform manual curation of the gene models to improve the quality of the annotation.
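As a concrete illustration of the masking step, the fragment below (our own sketch, not code from any annotation tool) hard-masks a set of repeat intervals by overwriting the affected bases with 'N':

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hard-mask repeat regions: every base inside a [start, end) interval
// (0-based) is replaced with 'N' so downstream tools ignore it.
void maskRepeats(std::string& sequence,
                 const std::vector<std::pair<std::size_t, std::size_t>>& repeats) {
    for (const auto& r : repeats) {
        const std::size_t end = std::min(r.second, sequence.size());
        for (std::size_t i = r.first; i < end; ++i)
            sequence[i] = 'N';
    }
}
```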
There are many different programs that annotate specific features, such as gene models, repeats, tRNAs, and rRNAs (examples in Table 1). Some tools are accessible online, while others require a local installation. There are also annotation pipelines that combine multiple annotation tools. MAKER2 [2] is an example of a gene prediction pipeline. MAKER2 runs three different gene prediction programs (SNAP, GeneMark-ES, AUGUSTUS) within the pipeline and also aligns user-provided transcript, RNA-seq, and protein evidence to the genome. MAKER2 then uses the results from the gene prediction tools and the alignments of the provided evidence to generate a consensus gene model set. Another annotation pipeline is the NCBI Eukaryotic Genome Annotation Pipeline (https://www.ncbi.nlm.nih.gov/genome/annotation_euk/process/). The NCBI pipeline first identifies repetitive DNA sequences for masking; then aligns transcripts, RNA-seq reads, and proteins to the genome sequence; and uses the aligned evidence to predict gene models. The NCBI pipeline also identifies miRNAs, tRNAs, rRNAs, snoRNAs, and snRNAs, in addition to predicting gene models.
The vision of GenSAS (https://www.gensas.org) is to provide a web-based, modular tool that requires no software installation and management and is easy for scientists of all skill levels to use via a graphical user interface. The major annotation steps of the GenSAS workflow include repeat identification and masking, evidence alignment to the genome, structural annotation (genes, rRNA, tRNA), functional annotation of gene models, optional manual editing of the gene models, and creation of final annotation files. The design of GenSAS allows for the addition of new annotation tools; thus the list of available tools in GenSAS can change. This chapter will discuss tools in GenSAS v6.0 (Table 1). Even if new tools are added to GenSAS, the GenSAS interface, and how users interact with GenSAS, remains the same. For the most current list of GenSAS tools, please see https://www.gensas.org/tools. The goal of this chapter is to describe how to use GenSAS and provide pointers on how to get the best annotation possible from using this annotation platform.
GenSAS User Account
GenSAS is available at https://www.gensas.org. Users must register for an account using the "Create new account" link on the right side of the home page. User accounts keep data private, allow for sharing of projects with other GenSAS users, and enable users to log out of GenSAS while jobs are running. In order to ensure that GenSAS is available to as many users as possible, there are some limitations to user accounts and projects (Table 2).
Is Your Genome Ready for Annotation?
The quality of a genome annotation depends on several factors, but the most important factor is the quality of the genome assembly. The saying "garbage in, garbage out" applies to annotation. If the input genome is split into hundreds of thousands of contigs or scaffolds smaller than the average gene size, the gene prediction programs will not be very effective. The method used to assemble the genome should have produced a file reporting the number of contigs, the minimum and maximum lengths, and other metrics like the N50, a weighted average length of contigs where more weight is given to longer contigs (a minimal computation is sketched after Table 2). If a majority of the assembled contigs are not over the average gene length for your organism, the genome will not annotate well. If you do not have a metrics report from the assembly program, PRINSEQ [3] is an easy tool to gather that data. It is available to use on the web (http://edwards.sdsu.edu/cgi-bin/prinseq/prinseq.cgi?home=1). GenSAS does run PRINSEQ as part of the sequence upload process, but it is highly recommended that you check your genome before loading it to GenSAS. If your genome is in good shape in regard to sequence number and length, another tool to determine the completeness of the assembly is BUSCO [4,5]. BUSCO determines the percentage of the core orthologous genes that are present in the assembly of unannotated genomes. BUSCO can either be run using a virtual machine (https://busco.ezlab.org/) or downloaded and installed locally (https://gitlab.com/ezlab/busco). If your genome assembly is missing a significant number of conserved, core genes according to BUSCO, then it might be best to wait to annotate the genome until a better-quality assembly is available. GenSAS does allow users to run BUSCO on uploaded genome assemblies.

Table 2. GenSAS user account and project limitations

Limitation | Details
Projects expire after 60 days unless the expiration is reset by the user | Users receive email reminders that a project will expire. If the expiration is not reset before 60 days, the project will be deleted.
User accounts remain active as long as users have an active GenSAS project | When all projects have expired, the user is not part of a shared project, and the user has not logged in for 6 months, the GenSAS account is deleted.
User accounts are limited to 250 GB of storage space on the GenSAS server | This size limitation includes all user-uploaded files as well as results generated by GenSAS, for all projects combined. If the user reaches 250 GB, new jobs will not run until old projects/data are deleted to free up space.
Assembly files must be high quality | All genome assemblies uploaded to GenSAS are evaluated, and only assemblies with under 25,000 total sequences and with over 50% of those sequences larger than 2500 bp will be accepted for use in a GenSAS project.
Only seven jobs per user can run at one time | While GenSAS does submit jobs to a computational cluster, the cluster resources are not endless. Users can only have seven jobs running at one time. However, more than seven jobs can be submitted, and as running jobs complete, the waiting jobs in the queue will run.
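For readers who want to compute the N50 themselves, a minimal sketch follows (our own illustration, not part of GenSAS or PRINSEQ): sort the contig lengths in decreasing order and report the length at which the running total first covers half of the assembly.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// N50: the contig length L such that contigs of length >= L together cover
// at least half of the total assembly size.
std::uint64_t n50(std::vector<std::uint64_t> lengths) {
    std::sort(lengths.begin(), lengths.end(), std::greater<>());
    const std::uint64_t total =
        std::accumulate(lengths.begin(), lengths.end(), std::uint64_t{0});
    std::uint64_t running = 0;
    for (std::uint64_t len : lengths) {
        running += len;
        if (2 * running >= total) return len;   // half the assembly reached
    }
    return 0;   // empty input
}
```

For example, for contig lengths of 80, 70, 50, 40, and 30 kb (270 kb total), the running sum first reaches half of the total at the second contig, so the N50 is 70 kb.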
User-Provided Files
It is best to gather all the needed files before starting a GenSAS project. In addition to the genome assembly in the FASTA format, GenSAS also accepts many different types of evidence files. Table 3 lists the types of evidence files that can be provided for GenSAS projects. While having a high-quality assembly is important for a good annotation, having a good collection of evidence that is specific to and originates from the organism whose genome is being annotated is equally important. Species-specific evidence files, used with some of the annotation tools, are especially helpful for non-model organisms. Users only need to provide species-specific data, as GenSAS provides up-to-date common databases such as repeat sequence collections from Repbase [6] for use with RepeatMasker (Table 1) and transcript and peptide sequence collections from NCBI RefSeq [7], SwissProt [8], and TrEMBL [8] for use with alignment programs. GenSAS accepts data that have already been aligned to the genome assembly in the form of GFF3 files, or sets of unaligned sequences in the FASTA or FASTQ format for use with alignment tools within GenSAS. Ideally, the evidence files should be from the same organism that the genome sequence originated from, but this is not always the case. If you do not have evidence files from your organism, try to find some data in public repositories, such as GenBank (https://www.ncbi.nlm.nih.gov/).

The GFF3 file format (http://gmod.org/wiki/GFF3) is a standard nine-column format that defines annotated features (e.g., gene, mRNA, exon, intron, etc.), their name, their type, and their location in the genome sequence, and is a common output of annotation tools. Examples of GFF3 files to use as evidence in GenSAS are outputs from other previously run annotation tools, previous versions of the genome annotation, and aligned repeats, transcripts, and proteins. The sequence names from the assembly file must match the sequence names in the first column of the GFF3 file. The GFF3 importers in GenSAS use the feature types in column three of the GFF3 file. Table 4 lists the feature types recognized by each GenSAS GFF3 importer. If the GFF3 file has sequence names that do not match the names in the assembly file, or has feature types that GenSAS does not recognize, then no features will be imported into GenSAS.

FASTA files of repeat, transcript, EST, or protein sequences that are specific to the genome being annotated can also be uploaded to GenSAS and are used with various alignment tools within GenSAS. FASTA files have the sequence name on one line that begins with a ">" and the nucleotide or protein sequence on the following line(s). In addition to FASTA files, two other file types can be used as evidence in GenSAS. Gene models from NCBI (https://www.ncbi.nlm.nih.gov/) can be uploaded and used as evidence to train the AUGUSTUS (Table 1) gene prediction program. The gene models must be in the GenBank (.gb) file format and need to include at least 100 sequences that are similar enough at the protein level to align to the genome being annotated. GenSAS also accepts Illumina RNA-seq reads, either as paired or non-paired reads in FASTQ format. It is highly recommended that the RNA-seq reads are filtered by quality prior to upload to GenSAS. RNA-seq reads are aligned to the genome using TopHat2 (Table 1), and the resulting alignment can be used to train AUGUSTUS.
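Because features are silently skipped when sequence names do not match, it is worth checking a GFF3 file against the assembly before uploading. The sketch below is a hypothetical pre-flight check, not part of GenSAS (the names fastaNames and checkGff3 are ours): it collects the sequence names from the FASTA headers and reports every GFF3 record whose seqid (column 1) is absent from the assembly.

```cpp
#include <fstream>
#include <iostream>
#include <set>
#include <sstream>
#include <string>

// Collect sequence names from FASTA headers (text after '>', up to whitespace).
std::set<std::string> fastaNames(const std::string& path) {
    std::set<std::string> names;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); )
        if (!line.empty() && line[0] == '>') {
            std::istringstream ss(line.substr(1));
            std::string name;
            ss >> name;
            names.insert(name);
        }
    return names;
}

// Report GFF3 records whose seqid (column 1) is absent from the assembly.
void checkGff3(const std::string& gffPath, const std::set<std::string>& names) {
    std::ifstream in(gffPath);
    long bad = 0, lineNo = 0;
    for (std::string line; std::getline(in, line); ) {
        ++lineNo;
        if (line.empty() || line[0] == '#') continue;   // skip comment lines
        const std::string seqid = line.substr(0, line.find('\t'));
        if (names.count(seqid) == 0) {
            ++bad;
            std::cerr << "line " << lineNo << ": unknown seqid " << seqid << "\n";
        }
    }
    std::cerr << bad << " record(s) would be skipped\n";
}
```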
The GenSAS Interface
To access GenSAS, users log in on the homepage (https://www. gensas.org/) and then click on the "Use GenSAS" tab in the header menu. This opens the GenSAS interface which has three main sections. The header area (Fig. 1A) has a flowchart of the annotation process. The flowchart arrows can be clicked to navigate to each stage of the annotation process for the project. When the arrow is gray, it means that the step is not yet available. If the arrow is blue, the step is available for use. A green arrow indicates the current step of the project that is being viewed. The header also displays the name of the open GenSAS project in the upper left corner and links to the user's account details, the GenSAS homepage, and a logout link in the upper right corner. The center section of the GenSAS interface is the tab area (Fig. 1B) and is the main area of the interface. Different tabs open as the workflow progresses. See the last paragraph of this section for more information about the tabs. On the right side of the interface is an accordion menu (Fig. 1C). The accordion menu allows users to access the job queue, open JBrowse/Apollo, share projects with other GenSAS users, and access a "Help" section.
Users can see the status of submitted jobs and view job details in the Job Queue (Fig. 2). In the Job Queue, the "Status" of each job updates periodically or any time the user clicks "Update status." Clicking "View full report" opens a tab to see a more complete status report of each job in the overall job queue. As annotation jobs finish, the annotations are viewable in JBrowse, but users can also click the job name in the Job Queue and open the job results (Fig. 3). In the job results tab, there is a Job Summary section (Fig. 3A) which lists the job name, the settings used, and the day and time of submission and completion. There is also an "Output Files" section which contains links to error and run log files, the raw output from the tool, and the GFF3 file that was loaded into JBrowse. Most tools also have some summary tables (Fig. 3C) that just provide a quick overview of the results. For most tools, multiple jobs of the same tool, with different parameters, can be submitted simultaneously, but the job names need to be unique. This allows users to experiment with different settings or evidence files in the same tool.
GenSAS uses an integrated instance of JBrowse to view data and the JBrowse plug-in, Apollo, for manual curation of annotations. To open JBrowse/Apollo, click on the "Browser" section of the accordion menu on the right of the GenSAS interface, and then click the "Open Apollo" button. The "Apollo" tab will open and has two sections. On the left is the JBrowse display (Fig. 4A), and on the right is the Apollo interface (Fig. 4B). The "Tracks" tab (Fig. 4C) is used to control which tracks, or the results from each tool, are visible. During the "Annotate" step of GenSAS, manual gene model editing can be performed using the "User-created Annotations" track (Fig. 4D). More details on how to use these tools within GenSAS are in the GenSAS User's Guide (https://www.gensas.org/apolloJbrowse) as well as on the JBrowse and Apollo websites (Table 1).
GenSAS allows users to share their projects with other GenSAS users once the first job in the Job Queue completes. To share a project, click on the "Sharing" section of the accordion menu on the right, and then click "Share this project." A "Project Sharing" tab opens, and under the "Share this project" section, the name of the other user is entered. The owner of the project can grant the other user read-only or full access to the project. With full access to the project, the other user can run annotation jobs and edit gene models with Apollo.
Most of the tabs in GenSAS have a similar layout. In general, if there are different job types or tools to select, there are clickable options on the left side of the tab (Fig. 5A) (see Note 1). The content in the center of the tab (Fig. 5B) will change depending on the option selected, and this is where job names are edited, tool settings are adjusted, or files are selected for upload. All the tabs have an expandable instructions section (Fig. 5C) that provides a brief overview of the annotation step. There are more detailed instructions in the GenSAS User's Guide (https://www.gensas.org/users_guide). There is also a "Proceed to next step" button under the "Instructions" section which moves the annotation process to the next stage.

Fig. 4 The "Apollo" tab in GenSAS has the JBrowse interface on the left (A) and the Apollo interface on the right (B). Which tracks are visible is controlled through the "Tracks" tab (C) on the Apollo interface. During the "Annotate" step of GenSAS, users can drag gene models to the "User-created Annotations" track (D) and edit them
Genome Sequence Upload and Project Creation
The first steps of a GenSAS project involve loading files and creating a project (Fig. 6). Before starting a GenSAS project, the genome assembly file needs to be loaded by clicking on the "Sequences" arrow in the flowchart (Fig. 1A). On the Sequences tab, there are three options on the left side: Available Sequences, Upload Sequences, and Subset Sequences. "Available Sequences" displays a table of sequences that the user has already loaded into GenSAS. Sequences from shared projects are not visible on this table to users who are not the owner of the sequence. Under "Upload Sequences," there is an interface to select a sequence file to upload and fields to select the sequence type (e.g., contig, scaffold, or pseudomolecule) and to enter the assembly version number (see Note 2).

To help ensure that GenSAS users get quality results, the uploaded genome assembly metrics are determined using PRINSEQ (Table 1). GenSAS will only use assemblies that are 25,000 sequences or less and have more than 50% of the sequences over 2500 bases in length. Sequence files that do not meet these requirements are flagged, and a file of just the sequences above 2500 bases is made available for use in the project, if desired. A quick local pre-check of these thresholds is sketched below. To use the file of sequences above 2500 bp, click on the "violated" label in the "Status" column of the Available Sequences table for that sequence set. A new tab opens, and the option to use the filtered file is available to click.

The "Subset Sequences" option allows users to select sequences from a previously loaded file by sequence name, or filter sequences by minimum size, and create a subset of sequences for use in a project. "Subset Sequences" is a good option for testing or optimizing the GenSAS workflow on a handful of contigs from a genome assembly before annotating the entire genome. Users can run BUSCO on uploaded genome assemblies by clicking on the "processed" label in the "Status" column for that sequence set in the Available Sequences table. A new tab opens that has the stats from the PRINSEQ analysis, and the option to run BUSCO is at the bottom. Users just need to select the appropriate BUSCO dataset and click "Run BUSCO" to start the job. When the job completes, the results will also be available on the tab where the BUSCO job was created.

Fig. 5 An example of a GenSAS tab layout. Different options/tools are available to click on the left (A), and the content on the right (B) will change with each option. At the top of the tab is an "Instructions" section and a "Proceed to next step" button (C)
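The assembly requirements stated above (at most 25,000 sequences, with more than 50% of the sequences over 2500 bases) can be checked locally before upload; a minimal sketch in Python, with an illustrative file name:

```python
def passes_gensas_checks(fasta_path, max_seqs=25_000, min_len=2_500):
    """Return True if the assembly meets the stated GenSAS thresholds."""
    lengths, current = [], 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    if not lengths or len(lengths) > max_seqs:
        return False
    # More than half of the sequences must exceed min_len bases.
    return sum(l > min_len for l in lengths) / len(lengths) > 0.5

print(passes_gensas_checks("assembly.fasta"))
```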
Once the genome assembly is uploaded (see Note 3), a new project can be created by clicking on the "Project" arrow in the flowchart. On the "Project" tab, there are two options on the left side: "Load an Existing Project" and "Begin a New Project." Existing projects include those previously created by the user and projects shared by other GenSAS users. When the "Begin a New Project" option is selected, a web form with required and optional information fields appears. The required information includes the project name, project type, genus and species, and selection of a sequence group. After the project is created, the "Project" tab displays summary details about the open project and allows users to reset the 60-day project expiration or delete the project. All GenSAS projects expire and are deleted after 60 days (Table 2), unless the user resets the expiration of the project from the Project tab. Users can reset the expiration on a project as often as needed to keep the project active. There is also a "Close this Project" button on the Project tab. If a current project is open, the project first must be closed in order to switch projects or create a new project.

Fig. 6 An overview of the first steps of the GenSAS annotation process, which include uploading files and project creation: Sequences (upload genome assemblies in FASTA format; assembly files are checked using PRINSEQ; optionally run BUSCO to assess assembly completeness), Project (project created and uploaded sequence selected; user enters organism and project information), GFF3 (optionally upload GFF3 files of previous annotations or results from other annotation tools), and Evidence (optionally upload FASTA files of species-specific repeats, transcripts, ESTs, and proteins, as well as species-specific GenBank gene models and Illumina RNA-seq reads)
Uploading Supporting Evidence to GenSAS
After project creation, the "GFF3" arrow of the flowchart becomes available. This is an optional step in the pipeline where supporting GFF3 files (discussed in Subheading 2.3) are uploaded to GenSAS. There are six options on the left of the "GFF3" tab: GFF3 Files, Repeats, Transcript Alignments, Protein Alignments, Gene Predictions, and Other Features. The "GFF3 Files" section is a list of all files previously uploaded by the user, which can be selected for repeated use. The remaining five options are loaders for specific data types. All imported GFF3 files are visible in JBrowse as new tracks, and certain data types are also available for use in other steps of GenSAS. To load a GFF3 file, select the data type from the options on the left, then either upload a file or select a previously uploaded GFF3 file, enter a name for the job, and click "Import GFF3 File." A job will then appear in the Job Queue. After all GFF3 import jobs are started and in the job queue, or to skip this step, click the "Proceed to next step" button, and the "Evidence" step in the flowchart will be available to use. Users can return to the GFF3 step later if they would like to load more files, and the import jobs do not have to be completed before moving to the next step.

FASTA files of species-specific repeats, transcripts, ESTs, and proteins, as well as the GenBank files and RNA-seq reads discussed in Subheading 2.3, can optionally be loaded into GenSAS under the "Evidence" step. To load the FASTA files and GenBank genes file, first select the appropriate data type on the left side of the Evidence tab: Upload Repeat Libraries, Upload Transcripts & ESTs, Upload Proteins, or Upload Gene Structures. Then select the file(s) and click "Upload Files." To upload RNA-seq reads, select the "Upload Illumina RNA-seq" option. For RNA-seq files, the option is to load a paired set of read files or a single non-paired reads file (see Note 2). Once all the evidence files are loaded, click the "Proceed to next step" button.
Repeat Identification and Masking
The steps of structural annotation include repeat masking, aligning evidence to the genome, predicting gene structures, identifying tRNAs and rRNAs, and creating a consensus gene model set (Fig. 7). Under the "Repeats" step (optional), two repeat-finding tools are available: RepeatMasker and RepeatModeler (Table 1). RepeatMasker relies on libraries of previously identified repeats to find repeats in the genome sequence, and GenSAS provides the repeat collections from Repbase [6] for use with it. RepeatModeler is a de novo repeat finder and does not rely on previous evidence, which makes it especially good for non-model organisms where no repeat information is available. After the repeat jobs have completed running, the "Masking" step becomes available. Under the masking step, one or more masking jobs can be selected to create the masked consensus. If a GFF3 file of aligned repeats was imported, it will also be an option to use in the masked consensus. When the masked consensus is generated, GenSAS produces two versions. One version is hard-masked, with the repeat region nucleotides converted to an "X." The other version is soft-masked, with the repeat region nucleotides converted to a lower-case letter. The hard-masked sequence is used with the transcript and protein alignment tools in the "Align" step. For the compatible gene prediction tools under the "Structural" step, the user has the option of using the soft-masked sequence as the input or using the default hard-masked sequence. Once the masked consensus job is complete, the "Align" step will be available. If no repeat masking is desired, the unmasked sequence can be used in subsequent steps by not setting up a masked consensus job and just proceeding to the next step.

Fig. 7 An overview of the structural annotation steps of GenSAS
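To make the two masking conventions concrete, here is a toy sketch (Python; the sequence and coordinates are invented) showing the difference between hard- and soft-masking a repeat interval:

```python
def mask(seq, start, end, hard=True):
    """Mask seq[start:end]: an 'X' run for hard-masking, lower case for soft."""
    region = "X" * (end - start) if hard else seq[start:end].lower()
    return seq[:start] + region + seq[end:]

seq = "ATGCGTATATATATGGCA"           # invented example with a repeat at 6..14
print(mask(seq, 6, 14, hard=True))   # ATGCGTXXXXXXXXGGCA
print(mask(seq, 6, 14, hard=False))  # ATGCGTatatatatGGCA
```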
Alignment of Transcript and RNA-seq Evidence
The optional "Align" step has tools for aligning evidence to the genome sequence. Alignments of transcript and protein evidence can be useful when generating a genes consensus later in the annotation process and can be helpful during manual curation of the annotation. User-provided transcript evidence can be aligned using BLASTþ, BLAT, or PASA (Table 1). For transcript alignments, GenSAS also has the NCBI RefSeq [7] transcript sets available for use. User-provided protein evidence can be aligned using BLASTþ and Diamond (Table 1). For the BLASTþ tool, users can also adjust the settings to change the specificity of the alignment. TopHat2 and HISAT2 (Table 1) are used to align the userprovided RNA-seq reads. Once all alignment jobs have been set up, the "Proceed to next step" button can be clicked to move on to the "Structural" step. Alignment jobs do not have to be completed to move on to the next step; however some tools under the "Structural" step do use results from the alignment tools. The alignment jobs need to complete before the results are available for use in the downstream annotation steps.
Gene Prediction and Other Structural Features
The "Structural" step has tools for gene prediction and for identifying other genetic elements. On the "Structural" tab, there are two options on the left side: "Gene Prediction" and "Other Features." Clicking on these options changes the visible list of available tools. Under "Gene Prediction," there are several tools to choose from. Some can be trained, while others rely on pre-set organism profiles. AUGUSTUS (Table 1) can either be used with the provided pre-trained datasets from model organisms or can be trained using user-provided evidence. To train AUGUSTUS, open the "Options for training AUGUSTUS" section under the AUGUSTUS setting page. There is the option to select four different file types, and in some cases, AUGUSTUS requires specific combinations of these options to work properly (Table 5). If the proper file combination is not selected, an error message will be displayed when the job is submitted. The BRAKER2 (Table 1) tool can be trained with aligned RNA-seq evidence. For non-model organisms without any supporting evidence, a good tool might be GeneMark-ES (Table 1) which performs self-training. The three other gene prediction tools Genscan, GlimmerM, and SNAP (Table 1) have pre-installed profiles for model organisms.
GenSAS will also parse user-uploaded results from FGENESH [9] for use in the annotation process, but FGENESH cannot be run on GenSAS due to license restrictions. Under the "Other Features" section of the "Structural" tab, there are four tools: getorf, RNAmmer, SSR Finder, and tRNAscan-SE (Table 1). RNAmmer identifies rRNAs, tRNAscan-SE finds tRNAs, SSR Finder identifies simple sequence repeats, and getorf finds open reading frames. For all the structural annotation tools, once a job is submitted, the job name appears in the Job Queue. As with other GenSAS jobs, each tool can be run multiple times, with different settings, provided each job has a unique name. Once all of these jobs complete, it is very important that the user critically examines the results. Some of these tools may perform better on certain genomes than others, and if the results are poor, then omitting those results from downstream steps is highly recommended.

This is especially important for the last part of the structural annotation process (Fig. 7), the "Consensus" step. During the consensus step, EVidenceModeler (EVM, Table 1) can be used to create a merged consensus gene set. EVM allows the user to assign weights to each data track that is used to generate the consensus. Higher weights (e.g., 10) indicate that the data are more experimentally based and should be trusted more. Lower weights (e.g., 1) indicate that the data are more theoretical or from mathematical predictions and might not be as accurate. All available tracks are presented in a table on the "Consensus" tab (Fig. 8). GenSAS pre-populates the weights (Fig. 8A) and gives transcript alignments a weight of 10, protein alignments a weight of 5, and gene prediction tool results a weight of 1. The dataset weights can be edited by the user; to omit a track, simply clear its weight box and leave it blank. EVM can be run multiple times with different weight settings if each job is assigned a unique name (Fig. 8B); a sketch of the resulting weights table is shown after Table 5 below.
Table 5 Data type combinations needed to train AUGUSTUS

Training option | Required data type to select in GenSAS (user-provided file type)
Genes and transcripts | "Gene Structures" (GenBank file) and "cDNA sequences" (FASTA file)
Proteins only | "Protein Sequences" (FASTA file)
Proteins and transcripts | "Protein Sequences" (FASTA file) and "cDNA sequences" (FASTA file)
RNA-seq reads | "BAM File" (select results of a TopHat2 job)
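As a concrete illustration of the consensus weighting described above, the sketch below (Python) writes a weights table using the GenSAS defaults of 10 for transcript alignments, 5 for protein alignments, and 1 for gene predictions. The three tab-separated columns (evidence class, source, weight) follow the weights-file layout documented for EVidenceModeler; the source names here are placeholders for the tracks present in a given project:

```python
# GenSAS-style default weights (transcripts 10, proteins 5, predictions 1).
DEFAULT_WEIGHTS = [
    ("TRANSCRIPT", "pasa_alignments", 10),      # placeholder source names
    ("PROTEIN", "diamond_alignments", 5),
    ("ABINITIO_PREDICTION", "augustus", 1),
    ("ABINITIO_PREDICTION", "snap", 1),
]

with open("evm_weights.txt", "w") as out:
    for evidence_class, source, weight in DEFAULT_WEIGHTS:
        out.write(f"{evidence_class}\t{source}\t{weight}\n")
```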
Functional Annotation
The functional annotation portion of GenSAS (Fig. 9) begins by selecting the Official Gene Set (OGS). The OGS is the gene model set on which the functional annotation tools will be run, on which manual curation can be performed, and from which the final annotation files will be generated. The OGS is selected by the user from a list of available gene sets on GenSAS. This list includes gene predictions uploaded by the user at the "GFF3" step, results from the tools under the "Gene Predictions" section of the "Structural" step, and any jobs created at the "Consensus" step. It is up to the user to evaluate the results and to select the gene set that makes the most sense for the organism being annotated.

Once an OGS is selected, the "Refine" step becomes available for use. Under the refine step, there is an option to run the OGS through PASA with transcript evidence to help further refine the gene structure junctions and start and stop positions. This step is optional, and it has been observed that it only works well with transcript evidence from the same organism as the genome being annotated.

The "Functional" step is where functional annotation tools are run on the predicted proteins of the OGS. Jobs can be created by clicking on the six tool names on the left: BLAST+, Diamond, InterProScan, Pfam, SignalP, and TargetP (Table 1). For protein alignments with BLAST+ and Diamond, the available databases include SwissProt [8], TrEMBL [8], NCBI RefSeq proteins [7], and any user-provided protein files. Please note that when using a large protein database (e.g., TrEMBL) in conjunction with a large genome, the BLAST job will take quite a while to complete. The remaining four tools identify functional domains within the predicted proteins. Functional annotation jobs also appear in the Job Queue once submitted, but the results do not appear in JBrowse/Apollo as individual tracks like the structural annotation tools, since the tools are only run on the predicted proteins and not on the entire genome sequence. To view the functional annotation results for the OGS genes, either open the results tab for each tool by clicking on the job name in the Job Queue and click on the mRNA name on the summary table, or right-click on the gene model in JBrowse and select "View putative annotation." Either of these methods will open the functional results tab for that mRNA (Fig. 10). As with the other GenSAS tabs, the results from each functional annotation tool can be selected on the left side, and when clicked, the content in the tab will change to display those results.

Fig. 9 Overview of the functional annotation, manual curation, and final steps of a GenSAS project: OGS (select the Official Gene Set for functional annotation), Refine (optionally use PASA and species-specific transcript data to refine gene models), Annotate (optional manual curation of gene models using Apollo), and Publish (final annotation files and reports are produced in GFF3, FASTA, and text formats; option to create a GFF3 file with all annotations; optionally run BUSCO to assess annotation completeness)
Manual Curation
After the "Functional" step, the "Annotate" step is available. At this step, the "User-created annotations" track, that is part of Apollo, is available to edit in JBrowse. Apollo allows users to manually curate the OGS prior to producing the final annotation files. With Apollo, users can edit intron-exon junctions, start and stop locations, and UTR lengths and add functional annotation notes. While manual curation is an optional step, it is highly recommended. Apollo was designed to allow for collaborative manual annotation efforts between many users and keeps track of which users have made edits. The sharing function of GenSAS allows for users to share their GenSAS project with other GenSAS users allowing those users to also do manual curation in Apollo. For more detailed directions on the manual curation functions of Apollo, please see http://genomearchitect.github.io/usersguide/. There is also a brief example of how to perform manual curation in the GenSAS User's Guide (https://www.gensas.org/ annotate).
Final Annotation Files
When the annotation process is complete, the final files are produced under the "Publish" step. During the Publish step, GenSAS will merge any manually curated genes from Apollo into the OGS and run the functional annotation tools on the manually edited gene models. GenSAS will then rename all the gene models with a consistent naming scheme and add the assembly and annotation versions to the file names. GenSAS automatically selects the minimum files needed, such as all the FASTA and GFF3 files associated with the OGS and masked consensus. Users can also select specific tools and have GenSAS prepare the output files from those results as well. Functional annotation results are output as tab-delimited files. Please note that if any changes to the project are made in the previous steps of GenSAS after the Publish step has been run, the Publish step needs to be run again to produce the newest version of the annotation files. Users also have the option to run BUSCO on the predicted proteins to assess the completeness of the annotation. GenSAS also produces summary reports related to the final annotation features and the tools that were used to produce the annotation. A summary table of genome annotation metrics (e.g., number of genes, CDS, mRNA, tRNA, rRNA, etc.) is produced with custom scripts, and a summary of the types of repeats present in the repeat consensus is generated using a script called "One code to find them all" (Table 1). GenSAS also produces a summary report of which tools were used to create the OGS and functional annotations, along with the tool versions and settings.
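A quick way to reproduce the kind of summary table described above is to tally the feature types in the published GFF3, as in this sketch (Python; the file name is a placeholder):

```python
from collections import Counter

def annotation_metrics(gff3_path):
    """Count feature types (GFF3 column 3), e.g., gene, mRNA, tRNA, rRNA."""
    counts = Counter()
    with open(gff3_path) as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue
            cols = line.split("\t")
            if len(cols) >= 3:
                counts[cols[2]] += 1
    return counts

for feature, n in sorted(annotation_metrics("annotation.gff3").items()):
    print(f"{feature}\t{n}")
```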
Future Development
The GenSAS development team is constantly looking for ways to make GenSAS better for the user, and feedback from the users drives the improvement of GenSAS. Tools that will be added include the alignment tool GMAP [10], to provide more options for transcript alignments, and the gene prediction tool MAKER2 [2].
Notes
1. We recommend using Chrome, Firefox, and Edge Internet browsers with GenSAS. Some users of the Safari Internet browser have had issues with the GenSAS interface displaying properly. The GenSAS interface may not display properly if the "zoom" function of the browser program is being used but will appear normal at 100% magnification.
2. Files over 2 GB in size might not load through the web interface. If you encounter problems loading large files to GenSAS, please contact us (https://www.gensas.org/contact), and we will provide a secure FTP location for file transfer. The FTP can also be used to transfer large files out of GenSAS if needed.
3. Once assembly files are uploaded to GenSAS, the file needs to be processed before it can be used in a GenSAS project. For larger genomes, this takes a bit of time, and it may take several minutes before the genome assembly is available to select for project creation. | 8,475.6 | 2019-01-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Cubic Rank Transmuted Modified Burr III Pareto Distribution: Development, Properties, Characterizations and Applications
In this paper, a flexible lifetime distribution called the cubic rank transmuted modified Burr III-Pareto (CRTMBIII-P) is developed on the basis of the cubic rank transmutation map. The density function of the CRTMBIII-P distribution can be arc-shaped, exponential, left-skewed, right-skewed, or symmetrical. Descriptive measures such as moments, incomplete moments, inequality measures, the residual life function, and reliability measures are theoretically established. The CRTMBIII-P distribution is characterized via a ratio of truncated moments. Parameters of the CRTMBIII-P distribution are estimated using the maximum likelihood method. A simulation study of the performance of the maximum likelihood estimates (MLEs) of the parameters of the CRTMBIII-P distribution is carried out. The potential of the CRTMBIII-P distribution is demonstrated via its application to real data sets: the tensile strength of carbon fibers and the strengths of glass fibers. The goodness of fit of this distribution is studied through different methods.
Introduction
In recent decades, many continuous univariate distributions have been developed, but various data sets from reliability, insurance, finance, climatology, biomedical sciences, and other areas do not follow these distributions. Therefore, modified, extended, and generalized distributions and their applications to problems in these areas are a clear need of the day.
The modified, extended, and generalized distributions are obtained by the introduction of some transformation or the addition of one or more parameters to well-known baseline distributions. These newly developed distributions provide a better fit to the data than the sub- and competing models. Shaw and Buckley (2009) proposed the quadratic rank transmutation map to solve financial problems.
Quadratic Ranking Transmutation Map
Theorem 1.1: Let $Z_1$ and $Z_2$ be independent and identically distributed (i.i.d.) random variables with the common cumulative distribution function $G(z)$. Then, the quadratic rank transmutation map is

\[ F(z) = (1 + \lambda)G(z) - \lambda G^2(z), \qquad |\lambda| \le 1. \qquad (2) \]

The distribution in equation (2) is known as the quadratic rank transmutation map or transmuted distribution.
Cubic Ranking Transmutation Map
Theorem 2.1: Let $Z_1$, $Z_2$, and $Z_3$ be i.i.d. random variables with the common cumulative distribution function $G(z)$. Then, the cubic rank transmutation map is

\[ F(z) = \lambda_1 G(z) + (\lambda_2 - \lambda_1) G^2(z) + (1 - \lambda_2) G^3(z). \qquad (3) \]
Proof
Consider the ordering of $Z_1$, $Z_2$, and $Z_3$: the cdf $F(z)$ is obtained as a probability mixture of the cdfs of $\min\{Z_1, Z_2, Z_3\}$, $Z_{2:3}$, and $\max\{Z_1, Z_2, Z_3\}$. Arnold et al. (1992) give the distributions of these order statistics, namely $\Pr(\min\{Z_1, Z_2, Z_3\} \le z) = 1 - (1 - G(z))^3$ and $\Pr(\max\{Z_1, Z_2, Z_3\} \le z) = G^3(z)$, from which equation (3) follows. If we take $\lambda_1 = 1$, the distribution in equation (4) is known as the cubic rank transmutation map or transmuted distribution of order 2.
Definition 2.1
The cumulative distribution function (cdf) and probability density function (pdf) for the cubic rank transmuted distribution are given by equations (4) and (5), respectively. Afify et al. (2017) proposed the beta transmuted-H family of distributions. Al-Kadim and Mohammed (2017) presented the cubic transmuted Weibull distribution in terms of basic mathematical properties. Nofal et al. (2017) studied a generalized transmuted-G family of distributions. Alizadeh et al. (2017) developed a generalized transmuted family of distributions. Bakouch et al. (2017) introduced a new family of transmuted distributions. Granzotto et al. (2017) proposed a cubic rank transmutation map and studied the properties of cubic rank transmuted distributions. Here, the CRTMBIII-P distribution is introduced with the help of (6) and (7). The cdf and pdf of the CRTMBIII-P distribution are given by equations (8) and (9), respectively, where the remaining symbols denote the shape, scale, and transmutation parameters.
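For reference, a minimal sketch of the generic cubic rank transmuted cdf and pdf in the form proposed by Granzotto et al. (2017), for an arbitrary baseline cdf $G$ and pdf $g$; the garbled displays above presumably specialize this to the modified Burr III-Pareto baseline:

```latex
\[
  F(x) = \lambda_1 G(x) + (\lambda_2 - \lambda_1)\, G^2(x) + (1 - \lambda_2)\, G^3(x),
\]
\[
  f(x) = g(x)\left[\lambda_1 + 2(\lambda_2 - \lambda_1)\, G(x) + 3(1 - \lambda_2)\, G^2(x)\right],
\]
\[
  \lambda_1 \in [0, 1], \qquad \lambda_2 \in [-1, 1].
\]
```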
Structural Properties of CRTMBIII-P Distribution
The survival, hazard, cumulative hazard, and reverse hazard functions and the Mills ratio of a random variable $X$ with the CRTMBIII-P distribution follow from $F(x)$ and $f(x)$ in the standard way. The elasticity $e(x) = \frac{d \ln F(x)}{d \ln x}$ of the CRTMBIII-P distribution shows the behavior of the accumulation of probability in the domain of the random variable.
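The displays lost above are presumably the standard definitions; for completeness, in terms of the cdf $F$ and pdf $f$:

```latex
\[
  S(x) = 1 - F(x), \qquad
  h(x) = \frac{f(x)}{1 - F(x)}, \qquad
  H(x) = -\ln\bigl[1 - F(x)\bigr],
\]
\[
  r(x) = \frac{f(x)}{F(x)}, \qquad
  m(x) = \frac{1 - F(x)}{f(x)}, \qquad
  e(x) = \frac{d \ln F(x)}{d \ln x} = \frac{x\, f(x)}{F(x)}.
\]
```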
Shapes of the CRTMBIII-P Density
The following graphs show that the shapes of the CRTMBIII-P density are arc, exponential, positively skewed, negatively skewed, and symmetrical (Fig. 1). The plots of the hrf (Fig. 2) are also given.
Sub-Models
The CRTMBIII-P distribution has the following sub-models.
Descriptive Measures Based Quantiles
The quantile function of the CRTMBIII-P distribution is the solution of $F(x_q) = q$ for $0 < q < 1$. The median of the CRTMBIII-P distribution is the solution for $q = 0.5$. The random number generator of the CRTMBIII-P distribution is the solution of $F(x) = z$, where the random variable $Z$ has a uniform distribution on $(0, 1)$. Some measures based on quartiles for the location, dispersion, skewness, and kurtosis of the CRTMBIII-P distribution are, respectively: the median $M = Q(0.5)$; the quartile deviation; Bowley's skewness; and Moors' kurtosis measure based on octiles. The quantile-based measures exist even for distributions that have no moments, and they are less sensitive to outliers.
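A sketch of the standard quantile-based measures referred to above, with $Q(\cdot)$ the quantile function and $E_i = Q(i/8)$ the octiles:

```latex
\[
  \mathrm{QD} = \frac{Q(3/4) - Q(1/4)}{2}, \qquad
  S_B = \frac{Q(3/4) + Q(1/4) - 2\, Q(1/2)}{Q(3/4) - Q(1/4)},
\]
\[
  K_M = \frac{(E_7 - E_5) + (E_3 - E_1)}{E_6 - E_2}.
\]
```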
Moments
Moments, incomplete moments, inequality measures, residual and reverse residual life function and some other properties are theoretically derived in this section.
Moments About the Origin
The $r$th ordinary moment of the CRTMBIII-P distribution is obtained by direct integration of $x^r f(x)$.

The mean and variance of the CRTMBIII-P distribution follow from the first two ordinary moments. The factorial moments for the CRTMBIII-P distribution are given in terms of the Stirling numbers of the first kind.
Incomplete Moments
Incomplete moments are used to study the mean inactivity life, the mean residual life function, and other inequality measures. The lower incomplete moments of a random variable $X$ with the CRTMBIII-P distribution are expressed in terms of the incomplete beta function.
The upper incomplete moments for the random variable $X$ with the CRTMBIII-P distribution follow similarly. The mean deviation about the mean is $\delta_1 = 2\mu F(\mu) - 2 m_1(\mu)$, and the mean deviation about the median is $\delta_2 = \mu - 2 m_1(M)$, where $m_1(t) = \int_0^t x f(x)\,dx$ is the first lower incomplete moment. The Bonferroni and Lorenz curves for a specified probability $p$ are computed from the first incomplete moment.
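The standard forms of these curves, with $q = Q(p)$ and $\mu$ the mean, a sketch of what the missing display presumably contained:

```latex
\[
  B(p) = \frac{1}{p\,\mu} \int_{0}^{q} x\, f(x)\, dx, \qquad
  L(p) = \frac{1}{\mu} \int_{0}^{q} x\, f(x)\, dx, \qquad q = Q(p).
\]
```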
Residual Life Functions
The residual life, say $m_n(z)$, of $X$ with the CRTMBIII-P distribution has the $n$th moment $m_n(z) = E[(X - z)^n \mid X > z]$. The average remaining lifetime of a component at time $z$, say $m_1(z)$, or life expectancy, known as the mean residual life (MRL) function, is obtained by setting $n = 1$.

The reverse residual life, say $M_n(z)$, of $X$ with the CRTMBIII-P distribution has the $n$th moment $M_n(z) = E[(z - X)^n \mid X \le z]$. The waiting time $z$ for the failure of a component, given that this failure has happened in the interval $[0, z]$, is called the mean waiting time (MWT) or mean inactivity time; for $X$ having the CRTMBIII-P distribution, it is obtained by setting $n = 1$.
Reliability Measures
In this section, reliability measures are studied.
Stress-Strength Reliability for CRTMBIII-P Distribution
Let $X_1$ denote the strength and $X_2$ the stress of a component, where $X_1$ and $X_2$ follow the CRTMBIII-P distribution. The reliability of the component for the CRTMBIII-P distribution is computed as $R = \Pr(X_2 < X_1)$. When $R = 0.5$, it means that $X_1$ and $X_2$ are i.i.d. and there is an equal chance that $X_1$ is bigger than $X_2$.
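The standard stress-strength integral underlying this computation, a sketch with subscripts 1 and 2 indexing the strength and stress distributions:

```latex
\[
  R = \Pr(X_2 < X_1) = \int_{0}^{\infty} F_2(x)\, f_1(x)\, dx.
\]
```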
Characterizations
In order to develop a stochastic function for a certain problem, it is necessary to know whether the selected function fulfills the requirements of the specific underlying probability distribution. To this end, it is required to study characterizations of the specific probability distribution. Certain characterizations of the CRTMBIII-P distribution are presented in this section.
Characterization Through Ratio of Truncated Moments
The CRTMBIII-P distribution is characterized using Theorem 1 (Glänzel, 1987) on the basis of a simple relationship between two truncated moments of functions of $X$. Theorem 1 is given in Appendix A.
The pdf of $X$ is (11) if and only if $q(x)$ (in Theorem 1) has the stated form.

After simplification, the required ratio of truncated moments is obtained.

Therefore, according to Theorem 1, $X$ has pdf (11).

Corollary 6.1.1. Let $X : \Omega \to (0, \infty)$ be a continuous random variable. The pdf of $X$ is (11) if and only if the functions $q(x)$ and $h(x)$ (in Theorem 1) satisfy the associated differential equation.

Remark 6.1.1. The general solution of the above differential equation contains an arbitrary constant $D$.
Characterization via Doubly Truncated Moment
Here, the CRTMBIII-P distribution is characterized via a doubly truncated moment.

Let $X : \Omega \to (0, \infty)$ be a continuous random variable. Then, $X$ has pdf (11) if and only if the doubly truncated moment condition (30) holds. For a random variable $X$ with pdf (11), the condition is verified by direct computation.

Conversely, if (30) holds, then differentiating with respect to $y$ recovers the pdf of the CRTMBIII-P distribution.
Maximum Likelihood Estimation
In this section, parameter estimates are derived using the maximum likelihood method. The log-likelihood function for the CRTMBIII-P distribution with the given vector of parameters is maximized. In order to estimate the parameters of the CRTMBIII-P distribution, the corresponding nonlinear likelihood equations must be solved simultaneously.
Simulation Study
In this section, we perform a simulation study to illustrate the performance of the MLEs. We consider the CRTMBIII-P distribution with the parameter values 2.95, 0.7, 2.20, and 3.95 for the baseline parameters, $\lambda_1 = 0.4$, $\lambda_2 = 0.1$, and 1 for the remaining parameter. We generate 1000 samples of sizes 20, 50, and 200. The simulation results are reported in Table 3, which gives the average parameter estimates with the standard deviations of the estimates within parentheses. From this table, we observe that the MLE estimates approach the true values as the sample size increases, whereas the standard deviations of the estimates decrease, as expected.
Applications
In this section, the CRTMBIII-P distribution is compared with the TMBIII-P, MBIII-P, BIII-P, IL-P, and LL-P distributions. Different goodness-of-fit measures, such as the Cramér-von Mises (W), Anderson-Darling (A), and Kolmogorov-Smirnov (K-S) statistics with p-values, and likelihood ratio statistics, are computed using an R package for the tensile strength of carbon fibers and the strengths of glass fibers.

A better fit corresponds to smaller W, A, and K-S statistics (with larger p-values) and smaller AIC, CAIC, BIC, and HQIC values. The maximum likelihood estimates (MLEs) of the unknown parameters and the values of the goodness-of-fit measures are computed for the CRTMBIII-P distribution and its sub-models. The MLEs, their standard errors (in parentheses), and goodness-of-fit statistics such as W, A, and K-S (p-values) are given in Tables 4 and 6. Tables 5 and 7 display the goodness-of-fit values.
Concluding Remarks
We have developed a more flexible distribution on the basis of the cubic transmuted mapping that is suitable for applications in survival analysis, reliability, and actuarial science. The important properties of the proposed CRTMBIII-P distribution, such as the survival function, hazard function, reverse hazard function, cumulative hazard function, Mills ratio, elasticity, quantile function, moments about the origin, incomplete moments, inequality measures, and stress-strength reliability measures, are presented. The proposed distribution is characterized via a ratio of truncated moments and a doubly truncated moment. Maximum likelihood estimates are computed. A simulation study of the performance of the MLEs of the parameters of the new distribution is carried out. Applications of the proposed model to the tensile strength of carbon fibers and the strengths of glass fibers are presented to show its significance and flexibility. The goodness of fit shows that the new distribution is a better fit, and we have demonstrated that the proposed distribution is empirically better for the tensile strength of carbon fibers and strengths of glass fibers data.
Appendix A
Theorem 1 (Glänzel, 1987): Let $(\Omega, \mathcal{F}, P)$ be a given probability space, and let $[d_1, d_2]$ be an interval with $d_1 < d_2$. Let $X : \Omega \to [d_1, d_2]$ be a continuous random variable, and suppose $F$ is twice continuously differentiable and strictly monotone on $[d_1, d_2]$.

Finally, assume that the equation relating $q(t)$, $h(t)$, and $s(t)$ has no real solution in the interior of $[d_1, d_2]$, and that $K$ is a constant, chosen such that the resulting $F$ is a distribution function.
Figure 1. Plots of the pdf of the CRTMBIII-P distribution

Figure 3. Fitted pdf, cdf, survival, and PP plots of the CRTMBIII-P distribution for carbon fibers

Figure 4. Fitted pdf, cdf, survival, and PP plots of the CRTMBIII-P distribution

Table 2. Median, mean, standard deviation, skewness, and kurtosis of the CRTMBIII-P distribution

Table 4. MLEs and their standard errors (in parentheses) and goodness-of-fit statistics for data set I

Table 5. Goodness-of-fit statistics for data set I

Table 6. MLEs and their standard errors (in parentheses) and goodness-of-fit statistics for data set II

Table 7. Goodness-of-fit statistics for data set II
"Mathematics"
] |
Structural, dielectric, and thermal properties of Zn and Cr doped Mg- Co spinel nanoferrites
Nanoferrites play a pivotal role in modern electronic and microwave devices. Spinel ferrites have exceptional structural, morphological, and dielectric properties. The composition Zn0.5−xMg0.25+xCo0.25Cr1−xFe1+xO4 (ZMCCF), where x varies from 0 to 0.5 in steps of 0.25, was synthesized via the auto-combustion (sol-gel) route. Structural, thermal, and dielectric characterizations were performed to observe the responses to the variation of x in the designed nanoferrites. The designed nanoferrites experienced a promising change in structural, thermal, and dielectric responses with the variation of x. In accordance with Koop's theory, the dielectric constant decreases with increasing frequency, which is the favorable trend for spinel ferrites. The different cationic distributions in the spinel structure endorse this behavior. The maximum value of the tangent loss at low frequencies reflects the suitability of these materials for medium-frequency devices. Therefore, the planned spinel nanoferrites may benefit advanced electronic and microwave devices.
Introduction
The study of materials at the atomic and molecular scale, and their manipulation, is known as nanoscience. Materials show strikingly different responses at the nanoscale, with enhanced physical and chemical properties. This characteristic is not a new law of nature; physics explained considerably earlier how the reduction in size changes properties [1]. Nanoscience therefore concerns structures with at least one dimension in the range of 1-100 nanometers. Nanotechnology deals with these types of materials, which lend themselves to novel applications because of their nanometer size [2]. This achievement increased the efficiency of power consumption, which in turn reduces the cost of products. In the last few decades, nanotechnology has gained pivotal attention in biology, physics, chemistry, geology, and many other fields owing to the augmented features of reduced size, especially below 100 nm. Within nanoscience, nanotechnology deals with the study of nanoparticles, nanorods, nanotubes, nanofibers, nanospheres, etc., with varying sizes, but with at least one dimension less than 100 nm. Nanomaterials attract attention for their versatile physical and chemical properties arising from size and shape variation, leading to multipurpose applications and tunable optical, magnetic, or electrical properties [3][4][5]. Nanoparticles have characteristics distinguished from bulk materials purely by the reduction in size, namely chemical reactivity, energy absorption, and biological mobility in physics, chemistry, and biology, which leads to their combination in materials science [6].
In recent years, the development of nanotechnology has focused on synthesizing nanocrystalline materials at the nano and sub-nano scale. In a myriad of fields of technology, nanocrystalline ceramic materials have efficient electric, dielectric, and magnetic properties, which are very beneficial for numerous varieties of electronic devices [7,8]. This class of nanomaterials is based on transition metal and lanthanide elements. They are known as metal oxide ceramic ferrites and are hard, brittle, non-conducting, iron-containing, gray or black, and polycrystalline [9]. By virtue of their proficient applications, ferrites have attracted a plethora of interest in this field. They are the chemical combination of iron oxide with one or more other metals, like magnesium, aluminum, barium, manganese, copper, nickel, cobalt, or iron. Conventional electrical, electronic, and magnetic devices are based on ferrites. Ferrite nanoparticles are used in much equipment to suppress and dissipate high-frequency noise caused by electromagnetic devices [10,11]. Ferrites have immensely tunable magnetic, electrical, optical, and structural properties and applications in different electronic technologies [12], biomedical fields [13], energy storage [14], environmental protection [15], etc. Spinel ferrite nanomaterials have become more attractive due to their promising physicochemical properties, including their good electro-optical properties, ease of functionalization, and superparamagnetic properties. A ferrite is usually described by the formula M(FexOy), where M represents any metal that forms divalent bonds, such as any of the elements mentioned earlier. Ferrites are soft or hard and are also classified into spinel, garnet, and hexagonal ferrites.
Spinel nanoferrites have a ceramic material structure consisting of iron oxide and other metallic elements [2]. Spinel nanoferrites have a significant variety of usages, including switching and high-frequency devices [11], removal of organic industrial contaminants [16], hyperthermia treatments [17], high-performance energy-storage equipment [10], drug delivery for cancer treatment [18], and waste-water management [19]. Spinel structure ferrites have also been used in lithium-ion batteries [20], in antimicrobial applications [21,22], and in microwave applications [23,24]. The crystal structure of spinel soft ferrites has the formula AB2O4 and is an FCC lattice structure with 64 A sites and 32 B sites for one unit cell that contains eight formula units [2]. It combines a trivalent cation (Fe3+) with another divalent metallic cation, either a transition or post-transition metal (A = Mn, Mg, Co, Ni, Zn). Their chemical compositions and synthesis techniques strongly influence the physical properties that determine their applications [25].
For the synthesis of spinel structure materials, different synthesized techniques were used, including the solgel auto combustion technique [26,27], solvothermal process [28], and coprecipitation route [29][30][31]. The auto combustion (sol-gel) process is commonly used because of its easy synthesis and economical fabrication [25]. The sol-gel methods have also been assessed as an important way to prepare nanoferrites with high purity, homogeneity, and large porosity. Properties are changed/modified owing to the distribution of cations at A and B sites in the spinel soft ferrite nanostructure.
Omelyanchik et al [32] reported related work on doped spinel nanoferrites. To the best of our knowledge, however, no research has been reported on synthesizing Mg- and Cr-doped Zn nanoferrites with the dopant concentrations selected for the present research work. The effect of doping on the physical properties of the nanoferrites is investigated by X-ray diffraction (XRD), scanning electron microscopy (SEM), thermogravimetric analysis (TGA), and impedance analysis. The designed nanoferrites may benefit medium-frequency device applications.
Experimental part

2.1. Materials
All analytical grade nitrate salts of the respective metals were used for the synthesis of the ferrites. Hydrated salts of magnesium nitrate, zinc nitrate, chromium nitrate, cobalt nitrate, iron (III) nitrate, and citric acid were purchased from Sigma Aldrich.
Methodology
The sol-gel auto-combustion method was used to prepare the ZMCCF1, ZMCCF2, and ZMCCF3 specimens. A solution was prepared with stoichiometric amounts of the corresponding metal nitrates, which act as oxidizing agents, and citric acid, which acts as the fuel (reducing agent) for the combustion reaction. The beaker was placed on a magnetic stirrer to form a homogeneous solution. Ammonia solution was then added dropwise under continuous stirring to bring the solution to pH 7. Evaporation turned the solution into a solid-liquid phase called a gel, and auto-combustion of the gel then converted it into a fine powder. The powder was calcined in a furnace at 600 °C and, after grinding, the prepared ferrite powder was used for the different characterizations. The step-by-step procedure for the preparation of the ZMCCF1, ZMCCF2, and ZMCCF3 ferrites is depicted in figure 1.
Characterizations
For the thermal behavior of the ZMCCF1, ZMCCF2, and ZMCCF3 nanoferrites, a Perkin Elmer Diamond thermogravimetric analyzer (Japan) was used for thermogravimetric analysis and differential thermal analysis. The thermograms were recorded from room temperature to 570 °C in an air atmosphere. The structural variations were observed using a STOE X-ray diffractometer with Cu Kα radiation (λ = 1.5406 Å) at room temperature, over a scan range of 20-60° at a scan rate of 2° min−1. The morphological analysis was performed using a scanning electron microscope (JEOL, Japan). An RF impedance/material analyzer (Agilent E4991A) operating in the frequency range of 1 MHz-1 GHz was used to study the dielectric properties of the prepared samples and to measure the complex permittivity (ε′ and ε″) and the dielectric tangent loss (tan δ).
Results and discussion
3.1. Thermal investigation

Thermogravimetric and differential thermal analyses were used to evaluate the phase development of the synthesized ZMCCF1, ZMCCF2, and ZMCCF3 nanoferrite powders. Figures 2(a)-(b) show the TGA and DTA spectra of the as-prepared powders. Each prepared sample demonstrates a single-step decomposition, in which moisture was eliminated from the sample, resulting in weight loss. The temperature versus percentage weight loss of the as-prepared specimens over the temperature range 0 °C-570 °C is given in figure 2(a). The most significant percent weight loss occurred between 230 °C and 280 °C. The percent weight loss between 230 °C and 280 °C for x = 0.0 and x = 0.50 was nearly the same, while the x = 0.25 nanoferrites exhibited higher stability than the other two at 580 °C. The % weight loss was maximum for sample ZMCCF2. This indicates that when the concentrations of Fe3+ and Mg2+ were enhanced, the thermal stability was enhanced at first and then declined as the value of x increased, reaching a maximum for the ZMCCF3 sample.
Building on the % weight loss information provided by TGA, DTA analysis helps identify the phase transitions of the nanoferrites by observing the endothermic and exothermic reactions while heating the specimen at a rate of 10 °C per minute. The thermogram of temperature versus heat flow is given in figure 2(b). A negative heat flow value means the reaction was endothermic and the prepared nanoferrites absorbed heat. The heat flow is maximum at a temperature of 500 °C. It is clear from figure 2(b) that as the value of x increased, the heat absorbed by the ferrites decreased. The maximum heat absorption was observed for ZMCCF1.
Structural analysis
The XRD spectra of ZMCCF1, ZMCCF2, and ZMCCF3 were recorded and indexed. In equation (6), 'β' indicates the full width at half maximum (FWHM), and the values of 'β' are given in table 1. The crystallite size was reduced by increasing the concentration of metal ions from x = 0.0 to 0.5, and the minimum crystallite size was 8.93 nm for the ZMCCF3 sample. The decrease in crystallite size enhances the dislocation density, microstrain, and inter-planar spacing, and some lattice distortion is created in the structure of the host material. The graphical representation of crystallite size for the designed nanoferrites is depicted in the corresponding figure. The micrographs also showed that agglomeration increased with the insertion of divalent and trivalent metal ions into the lattice.
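Equation (6) is presumably the Debye-Scherrer relation; for reference, with $K \approx 0.9$ the shape factor, $\lambda$ the X-ray wavelength, $\beta$ the FWHM in radians, and $\theta$ the Bragg angle:

```latex
\[
  D = \frac{K\,\lambda}{\beta \cos\theta}, \qquad \lambda = 1.5406\ \text{\AA}.
\]
```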
Dielectric analysis
The real and imaginary parts of the permittivity of the ZMCCF1, ZMCCF2, and ZMCCF3 spinel ferrites are depicted in figures 7(a) and (b). Both plots show that the dielectric constant and dielectric loss were reduced with frequency. When the frequency was raised from low to high, the dielectric loss and constant dropped rapidly, and both permittivity components became independent of the frequency. Koop's model was used to describe this behavior. According to this model, the spinel ferrites comprise well-conducting grains separated by very resistive boundaries [39]. The grain boundaries were more effective than the grains at low frequencies; hence the dielectric constant reached its highest value at the lowest frequency. Because the grains are more effective than the grain boundaries at higher frequencies, the dielectric constant was small there [40]. Under the electric field, the dispersed electrons accumulated at the grain boundaries, these electrons merged, and a space charge polarization was created. As a result, the dielectric constant and loss were high at low frequencies and decreased as the frequency increased. The ratio of the dielectric loss to the dielectric constant, shown in figure 7(c), is called the dielectric tangent loss. At low frequencies, greater energy is required for electron hopping between ferrous and ferric ions, and due to the extremely resistive grain boundaries the dielectric tangent loss was high. At higher frequencies, the highly conducting grains dominate and electron hopping requires less energy, so the tangent loss drops dramatically as the frequency rises, while additional increases in frequency have little effect on the tangent loss.
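For reference, the tangent loss plotted in figure 7(c) is the ratio of the imaginary to the real part of the complex permittivity:

```latex
\[
  \varepsilon^{*} = \varepsilon' - j\,\varepsilon'', \qquad
  \tan\delta = \frac{\varepsilon''}{\varepsilon'}.
\]
```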
Conclusions
The designed ZMCCF1, ZMCCF2, and ZMCCF3 nanoferrites were successfully synthesized by the sol-gel process, and different characterizations were performed, including TGA, DTA, XRD, SEM, and LCR analysis. The TGA plots confirmed that the samples undergo a single-step decomposition, mainly due to the removal of moisture from the synthesized material. The minimum % weight loss was observed for ZMCCF3 and the maximum for ZMCCF1. The DTA curves also confirmed the single decomposition process, with a phase transition occurring near 580 °C. Moreover, from the XRD analysis it was observed that the lattice constant, interplanar spacing, and unit cell volume change with composition, each having a maximum value for ZMCCF2 and a minimum value for ZMCCF3, while the crystallite size is reduced by the insertion of divalent and trivalent ions at their respective lattice sites. SEM images confirmed the uniform distribution of nanoparticles and increasing agglomeration on adding the dopant ions. The LCR plots showed that the dielectric constant decreases with increasing applied frequency. The minimum tangent loss was observed for sample ZMCCF3. The effect of varying the concentration x in the designed nanoferrites shows that the dielectric properties can be enhanced, making these materials promising candidates with remarkable physical properties.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files). | 2,983.4 | 2023-04-04T00:00:00.000 | [
"Materials Science"
] |
On the Multiplexing Gain of Discrete-Time MIMO Phase Noise Channels
The capacity of a point-to-point discrete-time multiple-input multiple-output (MIMO) channel with phase uncertainty (MIMO phase noise channel) is still open. As a matter of fact, even the pre-log (multiplexing gain) of the capacity in the high signal-to-noise ratio (SNR) regime is unknown in general. We make some progress in this direction for two classes of such channels. With phase noise on the individual paths of the channel (model A), we show that the multiplexing gain is $\frac{1}{2}$, which implies that the capacity does not scale with the channel dimension at high SNR. With phase noise at both the input and output of the channel (model B), the multiplexing gain is upper-bounded by $\frac{1}{2}\min\{n_{\text{t}}, (n_{\text{r}}-2)^{+} + 1\}$, and lower-bounded by $\frac{1}{2}\min\{n_{\text{t}}, \lfloor \frac{n_{\text{r}}+1}{2} \rfloor\}$, where $n_{\text{t}}$ and $n_{\text{r}}$ are the number of transmit and receive antennas, respectively. The multiplexing gain is enhanced to $\frac{1}{2}\min\{n_{\text{t}}, n_{\text{r}}\}$ without receive phase noise, and to $\frac{1}{2}\min\{2n_{\text{t}}-1, n_{\text{r}}\}$ without transmit phase noise. In all the cases of model B, the multiplexing gain scales linearly with $\min\{n_{\text{t}}, n_{\text{r}}\}$. Our main results rely on the derivation of non-trivial upper and lower bounds on the capacity of such channels.
I. INTRODUCTION
The capacity of a point-to-point multiple-input multiple-output (MIMO) Gaussian channel is well known in the coherent case, i.e., when the channel state information is available at the receiver [1], [2]. The capacity of noncoherent MIMO channels, however, is still open in general. Nevertheless, asymptotic results for such channels, e.g., at high signal-to-noise ratio (SNR), have been obtained in some important cases.
In the seminal paper [3], Lapidoth and Moser proposed a powerful technique, called the duality approach, that can be applied to a large class of fading channels, and derived the exact high-SNR capacity up to an $o(1)$ term. In particular, when the differential entropy of the channel matrix is finite, i.e., $h(\mathbf{H}) > -\infty$, it was shown in [3] that the pre-log (a.k.a. multiplexing gain) of the capacity is 0 and the high-SNR capacity is $\log\log \text{SNR} + \chi(\mathbf{H}) + o(1)$, where $\chi(\mathbf{H})$ is the so-called fading number of the channel. In addition, capacity upper and lower bounds for the MIMO Rayleigh and Ricean channels were obtained and shown to be tight in both the low and high SNR regimes. In [4], Zheng and Tse showed that for the non-coherent block-fading MIMO channel with coherence time $T$, the pre-log is $M^*(1 - M^*/T)$, where $M^* \triangleq \min\{n_{\text{t}}, n_{\text{r}}, \lfloor T/2 \rfloor\}$, with $n_{\text{t}}$ and $n_{\text{r}}$ being the number of transmit and receive antennas, respectively. In this work, we are interested in MIMO phase noise channels in which the phases of the channel coefficients are not perfectly known.
Applying the duality approach and the "escape-to-infinity" property of the channel input, Lapidoth characterized the high-SNR capacity of the discrete-time phase noise channel in the single-antenna case [5]. It was shown in [6] that the capacity-achieving input distribution is in fact discrete. Recently, capacity upper and lower bounds for single-antenna channels with Wiener phase noise have been extensively studied in the context of optical fiber and microwave communications (see [7], [8], [9] and the references therein). In these works, the upper bounds are derived via duality, and the lower bounds are computed numerically using the auxiliary channel technique proposed in [10]. In particular, in [9], Durisi et al. investigated the MIMO phase noise channel with a common phase noise, a scenario motivated by microwave links with centralized oscillators. The SIMO and MISO channels with common and separate phase noises are considered in [11]. The $2 \times 2$ MIMO phase noise channel with independent transmit and receive phase noises at each antenna was studied in [12], where the authors showed that the multiplexing gain is $\frac{1}{2}$ for a specific class of input distributions. For general MIMO channels with separate phase noises, estimation and detection algorithms have been proposed in [13], [14]. However, for such channels even the multiplexing gain is unknown, to the best of our knowledge.
In this work, we make some progress in this direction. We consider two classes of discrete-time stationary and ergodic MIMO phase noise channels: model A, with individual phase noises on the entries of the channel matrix, and model B, with individual phase noises at the input and the output of the channel instead. The phase noise processes in both models are assumed to have finite differential entropy rate. For model A, we obtain the exact multiplexing gain $\frac{1}{2}$ for any channel dimension, which implies that the capacity does not scale with the channel dimension at high SNR. For model B with both transmit and receive phase noises, we show that the multiplexing gain is upper-bounded by $\frac{1}{2}\min\{n_t, (n_r-2)^+ + 1\}$ and lower-bounded by $\frac{1}{2}\min\{n_t, \lfloor \frac{n_r+1}{2} \rfloor\}$. The upper and lower bounds coincide for $n_r \le 3$ or $n_r \ge 2n_t - 1$. Further, when receive phase noise is absent, the multiplexing gain is improved and we obtain the exact value $\frac{1}{2}\min\{n_t, n_r\}$. If the transmit phase noise is absent instead, the multiplexing gain becomes $\frac{1}{2}\min\{2n_t - 1, n_r\}$.
The main technical contribution of this paper is two-fold. First, we derive a non-trivial upper bound on the capacity of the MIMO phase noise channel with separate phase noises. The novelty of the upper bound lies in the finding of suitable auxiliary distributions with which we apply the duality upper bound [15], [16], [3]. It is worth mentioning that the class of single-variate Gamma output distributions, the essential ingredient that led to tight capacity upper bounds on previously studied channels, is not suitable for MIMO phase noise channels in general. In this paper, we introduce a class of multi-variate Gamma distributions that, combined with the duality upper bound, allows us to obtain a complete pre-log characterization for model A and a partial one for model B. The second contribution is the derivation of the capacity lower bounds for model B, based on a remarkable property of the differential entropy of the output vector in this channel. Namely, we prove that, at high SNR, the pre-log of the said entropy can go beyond the rank of the channel matrix, $\min\{n_t, n_r\}$, and the entropy scales as $n_r \log \text{SNR}$ as long as $n_r \le 2n_t - 1$. The upper and lower bounds suggest that, with $n_r \ge 2n_t - 1$ receive antennas, $n_t$ transmitted real symbols can be recovered at high SNR. This result has an interesting interpretation based on dimension counting. Let us consider the example of independent and memoryless transmit and receive phase noises uniformly distributed in $[0, 2\pi)$. In this case, the phases of the input and the output do not contain any useful information; only the amplitudes matter. Note that the $n_r$ output amplitudes are (nonlinear) equations in $2n_t - 1$ unknowns, namely, the $n_t$ input amplitudes and the $n_t - 1$ relative input phases, assuming the additive noises are negligible at high SNR. It is now not too hard to believe that with $n_r = 2n_t - 1$ equations, the receiver can successfully decode the $n_t$ input amplitudes by solving the equations. This is, however, not possible with $n_r < 2n_t - 1$, in which case there are too many unknowns as compared to the number of equations. Nonetheless, we can reduce the number of active transmit antennas to $n_t' < n_t$ such that $2n_t' - 1 \le n_r$, which means that the achievable multiplexing gain is $\frac{n_t'}{2} = \frac{1}{2} \lfloor \frac{n_r+1}{2} \rfloor$. A formal proof in Section VI validates such an argument.
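A sketch consistent with the dimension-counting argument above: ignoring the additive noise at high SNR, the receive amplitudes in model B satisfy

```latex
\[
  |Y_i| = \Bigl|\, \sum_{k=1}^{n_{\mathrm{t}}} h_{ik}\, e^{j\Phi_k}\, |x_k| \,\Bigr|,
  \qquad i = 1, \ldots, n_{\mathrm{r}},
  \qquad \Phi_k \triangleq \Theta_{\mathrm{T},k} + \arg(x_k),
\]
```

so that only the $n_t$ amplitudes $|x_1|, \ldots, |x_{n_t}|$ and the $n_t - 1$ relative phases $\Phi_2 - \Phi_1, \ldots, \Phi_{n_t} - \Phi_1$ enter (a common phase leaves the moduli unchanged), giving $n_r$ equations in $2n_t - 1$ real unknowns.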
The remainder of the paper is organized as follows. The system model and main results are presented in Section II. Some preliminaries useful for the proof of the main results are provided in Section III. The upper bounds are derived in Section IV and Section V. We prove the lower bound for model B in Section VI. Concluding remarks are given in Section VII. Most of the proofs are presented in the main body of the paper, with some details deferred to the Appendix.
Notation
Throughout the paper, we use the following notational conventions. For random quantities, we use upper case letters, e.g., X, for scalars, upper case bold non-italic letters, e.g., V, for vectors, and upper case bold sans-serif letters, e.g., M, for matrices. Deterministic quantities are denoted in a rather conventional way with italic letters, e.g., a scalar x; x_{n+1}^{n+k} denotes a k-tuple or column vector (x_{n+1}, . . . , x_{n+k}); for brevity, x^k sometimes replaces x_1^k. For convenience, wherever confusion is improbable, elementary scalar functions applied to a vector, e.g., |x| or cos(θ), stand for a point-wise map on each element of the vector, and return a vector with the same dimension as the argument. We use (θ)_{2π} to denote (θ mod 2π), and (x)⁺ = max{x, 0}. Γ(x) is the Gamma function. We also use c_0 to represent a bounded constant whose value is irrelevant but may change at each occurrence. Similarly, c_H is a constant that may depend on H, whose value is irrelevant and bounded for almost all H.
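For readers following along in code, the point-wise conventions above translate directly; a tiny sketch of ours (not the paper's):

```python
# Point-wise notational conventions in code form: (theta)_{2pi}, (x)^+, and
# elementary functions applied element-wise to vectors.
import numpy as np

wrap_2pi = lambda theta: np.mod(theta, 2 * np.pi)   # (theta)_{2pi}
pos_part = lambda x: np.maximum(x, 0.0)             # (x)^+ = max{x, 0}

theta = np.array([-1.0, 7.0])
print(wrap_2pi(theta))       # element-wise mod 2*pi
print(pos_part(theta))       # element-wise positive part
print(np.cos(theta))         # cos(theta) acts element-wise, as in the text
```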
A. Channel model
In this paper, we are interested in a class of discrete-time MIMO phase noise channels with n_t transmit antennas and n_r receive antennas, defined by Y_t = (H • e^{jΘ_t}) x_t + Z_t, where the deterministic channel matrix H belongs to a set H ⊂ C^{n_r×n_t} of generic matrices¹; x_t ∈ C^{n_t×1} is the input vector at time t, with the average power constraint (1/N) Σ_{t=1}^N ‖x_t‖² ≤ P; the additive noise process {Z_t} is assumed to be spatially and temporally white with Z_t ∼ CN(0, I_{n_r}); Θ_t is the matrix of phase noises on the individual entries of H at time t; the phase noise process {Θ_t} is stationary and ergodic, and is independent of the additive noise process {Z_t}. Both {Z_t} and {Θ_t} are unknown to the transmitter and the receiver. Since the additive noise power is normalized, the transmit power P is identified with the SNR throughout the paper. The end-to-end channel is captured by the random channel matrix H̃ ≜ [h_ik e^{jΘ_ik}]_{i,k}. In this paper, we consider two types of discrete-time phase noise processes² according to their spatial structures, as shown in Fig. 1:
• Model A refers to channels with phase uncertainty on the individual paths (path phase noise), such that the sequence {Θ_t} has finite differential entropy rate. It corresponds to the case where the phase information of the channel cannot be obtained accurately, e.g., in optical fiber communications. This model covers the channel with spatially independent phase noises as a special case.
• Model B refers to channels with phase noises at the input and/or output, i.e., Θ_ik = Θ_{R,i} + Θ_{T,k}. The vector Θ_T ≜ (Θ_{T,k})_{k=1}^{n_t} contains the n_t phase noises at the transmit antennas, and Θ_R ≜ (Θ_{R,i})_{i=1}^{n_r} is the vector of the n_r phase noises at the receive antennas. This model captures the phase corruption at both the transmit and receive RF chains, e.g., caused by imperfect oscillators. We consider three cases of model B: B1) with both transmit and receive phase noises; B2) with only transmit phase noise; B3) with only receive phase noise.
Note that model B1 covers the case where both the transmitter and the receiver use separate (and imperfect) oscillators for different antennas, whereas models B2 and B3 contain the case with centralized oscillators at one side and separate oscillators at the other side.
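The following minimal sketch (Python; the dimensions, SNR, and memoryless uniform phases are hypothetical simplifications, since the models above allow arbitrary stationary and ergodic temporal memory) generates one channel use under model A and under model B1.

```python
# One channel use under model A (per-path phases) and model B1 (per-antenna
# transmit and receive phases). All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
nt, nr, P = 2, 3, 100.0

H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
x = np.sqrt(P / nt) * (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
z = (rng.standard_normal(nr) + 1j * rng.standard_normal(nr)) / np.sqrt(2)   # CN(0, I)

# Model A: one phase noise per entry of H
theta_A = rng.uniform(0.0, 2 * np.pi, (nr, nt))
y_A = (H * np.exp(1j * theta_A)) @ x + z

# Model B1: Theta_ik = Theta_{R,i} + Theta_{T,k}
theta_T = rng.uniform(0.0, 2 * np.pi, nt)
theta_R = rng.uniform(0.0, 2 * np.pi, nr)
y_B1 = np.exp(1j * theta_R) * (H @ (np.exp(1j * theta_T) * x)) + z
```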
The capacity of such a stationary and ergodic channel is C(P) = lim_{N→∞} (1/N) sup I(X^N; Y^N) [3], [18], where the supremum is taken over all input distributions satisfying the average power constraint. Our work focuses on the multiplexing gain r of such a channel, defined as the pre-log of the capacity C(P) as P → ∞, i.e., r ≜ lim_{P→∞} C(P)/log P.
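Operationally, the pre-log can be read off as the slope of C(P) against log P at high SNR; a toy sketch with a stand-in capacity curve (hypothetical, with pre-log 1/2 built in):

```python
# Estimate the pre-log as the high-SNR slope of C(P) versus log(P).
import numpy as np

C = lambda P: 0.5 * np.log(P) + 1.3          # stand-in capacity curve (nats)
P1, P2 = 1e4, 1e6
r_hat = (C(P2) - C(P1)) / (np.log(P2) - np.log(P1))
print(r_hat)                                  # -> 0.5, the assumed pre-log
```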
B. Main results
The main results of this work are summarized as follows, and are illustrated in Fig. 2. First, the case with common phase noise follows rather straightforwardly from [9].

Proposition 1. With common phase noise, i.e., Θ_t = Θ_t 1_{n_r×n_t} and h({Θ_t}) > −∞, the multiplexing gain is min{n_t, n_r} − 1/2.

Proof: The proof is provided in Appendix A.
Then our new results concern channels with separate phase noises either on the individual paths (model A) or at the input/output (model B) of the channel.

Theorem 1. The multiplexing gain of model A is 1/2.

The above result shows that extra transmit and receive antennas do not improve the multiplexing gain of a channel with phase uncertainty on each path of the channel. The achievability of the single-antenna case was shown in [5]. Our main contribution lies in the converse, as will be shown in Section IV.
Theorem 2. The multiplexing gain of model B is
• upper-bounded by (1/2) min{n_t, (n_r − 2)⁺ + 1} and lower-bounded by (1/2) min{n_t, (n_r + 1)/2} with both transmit and receive phase noises; the upper bound is achievable when n_r ≤ 3 or n_r ≥ 2n_t − 1;
• min{n_r/2, n_t/2}, with only transmit phase noise;
• min{n_r/2, n_t − 1/2}, with only receive phase noise.
Interestingly, the multiplexing gain of model B depends on the number of transmit and receive antennas differently, which is rarely the case for previously studied point-to-point MIMO channels.
Remark II.1. As shown in Fig. 2, transmit phase noise is more detrimental than receive phase noise, and strictly so when n_r > n_t > 1. Intuitively, with transmit phase noise, each transmitted symbol is accompanied by a different phase noise symbol, which means that no more than half of the total spatial degrees of freedom are available for the useful signal. On the other hand, with receive phase noise, although half of the received signal dimensions are occupied by phase noises, it is enough to increase the number of receive antennas to recover almost all transmitted symbols.
Remark II.2. Obviously, the multiplexing gain of model B1 is upper-bounded by that of models B2 and B3. Such a "trivial" upper bound is given by min{n_t/2, n_r/2, n_t − 1/2} = min{n_t/2, n_r/2}. When n_r ≤ n_t, the optimal multiplexing gain is n_r/2 with phase noises at either side of the channel, whereas no more than ((n_r − 2)⁺ + 1)/2 is achievable with phase noises at both sides. These are the cases for which model B1 is strictly "worse" than both models B2 and B3. When n_r ≥ 2n_t − 1, with transmit phase noise, the optimal multiplexing gain is n_t/2 regardless of the presence of receive phase noise.
Remark II.3. Theorem 2 shows that, when n_t = n_r = 2 and 3, the exact multiplexing gain of model B1 is ((n_r − 2)⁺ + 1)/2, which gives 1/2 and 1, respectively. In contrast, the trivial upper bound provides 1 and 3/2, respectively. These are the two cases of model B1 for which we obtain an exact multiplexing gain that is strictly lower than that of models B2 and B3.
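To make the comparison in the remarks concrete, a small helper (our sketch; it evaluates the stated expressions literally and ignores the integrality of antenna counts) tabulates the pre-log expressions of Theorems 1 and 2:

```python
# Multiplexing-gain expressions of Theorems 1 and 2 for given (n_t, n_r),
# evaluated literally (antenna-count integrality is ignored).
def pre_logs(nt: int, nr: int) -> dict:
    pos = lambda v: max(v, 0)
    return {
        "model A (exact)":  0.5,
        "model B1 (upper)": 0.5 * min(nt, pos(nr - 2) + 1),
        "model B1 (lower)": 0.5 * min(nt, (nr + 1) / 2),
        "model B2 (exact)": min(nr / 2, nt / 2),
        "model B3 (exact)": min(nr / 2, nt - 0.5),
    }

# n_t = n_r = 3: the B1 bounds coincide at 1, strictly below B2/B3's 1.5,
# matching Remark II.3.
print(pre_logs(3, 3))
```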
The remainder of the paper is dedicated to the proof of the main results. We start with some mathematical preliminaries.
Proof: The case with k = 2n is known and has been proved in [3]. In the following, we provide a simple proof for the general case of k, although we are only interested in the case k = 1 later in the paper. Let us define f_k(λ). Then it readily follows from the definition of f_k(λ) that, to prove that f_k(λ) is increasing with λ, it is enough to show that the derivative of f_k(λ) with respect to λ is positive; indeed, this follows from the Jacobian matrix of the transformation.
Lemma 3. If each element of the n-vector X is circularly symmetric with independent phases, and the probability density function (pdf) of X exists with respect to the Lebesgue measure, then (4) and (5) hold.

Proof: Applying Lemma 2 twice, we readily obtain (4). To prove (5), we introduce Φ, uniformly distributed in [0, 2π)^n and independent of X and Θ. Hence, (5) holds with the constant c_0 corresponding to max{|h(Θ) − n|, n log π}.

Proof: To prove (6), we introduce an auxiliary distribution whose Kullback-Leibler divergence from the output law is non-negative, which yields (6). We proceed to prove (7), where we partition [0, 2π)^n in such a way that cos(Θ) is a bijective function of Θ in each partition indexed by Ω; the first equality is from Lemma 2; the last inequality is from the boundedness of h(Θ), the fact that Ω only takes a finite number of values, and the application of (6).
Proof: This is a straightforward adaptation of the result in [3, Lemma 6.7-f] for the complex case. The real case can be proved by following the same steps. To be self-contained, we provide an alternative proof as follows. Define V_x ≜ |Vᵀx|², and one can verify from the assumptions that the required conditions hold. We introduce an auxiliary pdf q(V_x) based on the Gamma distribution defined in (2) with some α ∈ (0, 1).
IV. CAPACITY UPPER BOUND FOR MODEL A
The capacity C(P ) in (1) of a stationary and ergodic channel is upper-bounded by the capacity of the corresponding memoryless channel up to a constant term. Following the footsteps of [3], [5], we have is the capacity of a memoryless phase noise channel with the same temporal marginal distribution as the original channel, and the supremum is over all input distributions such that E X X X 2 ≤ P ; using the fact that where r Θ is the differential entropy rate of the phase noise process, we can set c 0 = log(2π) − r Θ . Since we are mainly interested in the multiplexing gain, the constant c 0 does not matter, and it is thus without loss of optimality to consider the memoryless case in this section. The main ingredients of the proof are the genie-aided bound and the duality upper bound. In the following, we detail the five steps that lead to Theorem 1.
A. Step 1: Genie-aided bound
Let us define the auxiliary random variable U as the index of the strongest input entry, i.e., U ≜ arg max_k |X_k|.³ Thus, we use X_U to denote the element of X with the largest magnitude. It is obvious that U ↔ X ↔ Y form a Markov chain, and that U does not contain more than log n_t bits. Assuming that a genie provides U to the receiver, we obtain the following upper bound.

³ When there is more than one such element, we pick an arbitrary one.
B. Step 2: Canonical form

Definition 2 (Canonical channel). We define the canonical form u, u = 1, . . . , n_t, of the channel H as in (15). Note that the elements in the u-th column of G^(u) have normalized magnitudes. Now, with the information U from the genie, the receiver can convert the original channel into one of the canonical forms, namely, the form U.
where a ≜ min_{k,u} |h_{k,u}^{-1}|; (16) is due to the fact that reducing the additive noise increases the mutual information; we define X̃ ≜ a^{-1}X and, accordingly, the scaled output W. In the following, we focus on upper-bounding the mutual information I(X̃; W | U). Note that the last equality comes from the fact that U is a function of X and thus a function of X̃, since X̃ is simply a scaled version of X. Therefore, it is enough to lower-bound h(W | X̃) and upper-bound h(W | U) separately.
C. Step 3: Lower bound on h(W | X̃)

Lemma 9. For model A, the lower bound (19) holds, where X̃_U and X̃_V have the largest and second largest magnitudes in X̃, respectively.
Proof: See Appendix B. It is worth mentioning that the above bound depends not only on the strongest but also on the second strongest input of the channel.
D. Step 4: Upper bound on h(W | U)
Upper-bounding h(W | U) by a non-trivial but tractable function of the input distribution is hard in general. A viable way is through an auxiliary distribution, also called the duality approach. The duality upper bound was first proposed in [15] and [16] for discrete channels and then derived for arbitrary channels in [3]. Namely, for any⁴ pdf q(w), we have h(W | U = u) ≤ E[−log q(W) | U = u], due to the non-negativity of the Kullback-Leibler divergence D(p_{W|U=u} ‖ q). Hence, the key is to choose a proper auxiliary pdf q(w) in order to obtain a tight upper bound on the capacity of our channel. The commonly used auxiliary distributions for MIMO channels are mostly related to the class of isotropic distributions [3], [5], [9]. Unfortunately, the isotropic distributions are not suitable in our case. To see this, let us assume that an isotropic output W were indeed close to optimal. On the one hand, the pdf of an isotropic output W would only depend on the norm ‖W‖, which would be dominated by the largest input entry X̃_U at high SNR. Therefore, the value of E[−log q(W)] would be insensitive to the number of active input entries. On the other hand, the lower bound on the conditional entropy h(W | X̃) is increasing with both of the largest input entries X̃_U and X̃_V, according to (19). Therefore, with an isotropic distribution q(w), the capacity upper bound E[−log q(W)] − h(W | X̃) would become larger when the second strongest input went to zero, i.e., when only one transmit antenna was active. But this is in contradiction with the isotropic assumption, since if only one transmit antenna were active, the output entries would be highly correlated and the output distribution would be far from isotropic. In light of the above discussion, we are led to think that a good choice of q(w) should reflect not only the strongest input entry, but also the weaker ones. We adopt the pdf (21) built from the multivariate Gamma distribution in Definition 1, where ŵ_1, . . . , ŵ_{n_r} is the ordered version of the w_i's with increasing magnitudes. Essentially, we let each W_i be circularly symmetric and let the ordered version of (|W_1|², . . . , |W_{n_r}|²) follow the multivariate Gamma distribution defined in Definition 1. Applying (3) in Lemma 3 and order statistics (whence the term n_r!) [20], we obtain the pdf of W as written in (21). Remarkably, the differences between |W_i|² and |W_j|², i ≠ j, are introduced into the upper bound, which is crucial for bringing in the impact of the individual input entries X̃_i other than the strongest entry, as will be shown in the following.
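Since Definition 1 is stated earlier in the paper and not reproduced here, the following sampling sketch rests on an assumption of ours: that the multivariate Gamma law makes the ordered squared magnitudes ŝ_1 ≤ · · · ≤ ŝ_{n_r} cumulative sums of independent Gamma(α_i, 1/μ) increments, so that the pairwise differences |W_i|² − |W_j|² enter the density explicitly, as described above.

```python
# Sampling sketch for the auxiliary output law q(w): circularly symmetric
# entries whose ordered squared magnitudes follow a multivariate Gamma
# construction. ASSUMPTION (not verified against Definition 1): the ordered
# squared magnitudes are cumulative sums of independent Gamma increments.
import numpy as np

def sample_aux_output(nr, alpha, mu, rng):
    inc = rng.gamma(shape=alpha, scale=1.0 / mu)   # independent Gamma(alpha_i, 1/mu)
    s_ordered = np.cumsum(inc)                     # s_1 <= ... <= s_nr
    s = rng.permutation(s_ordered)                 # random assignment to antennas
    phases = rng.uniform(0.0, 2 * np.pi, nr)       # circular symmetry of each entry
    return np.sqrt(s) * np.exp(1j * phases)

rng = np.random.default_rng(2)
w = sample_aux_output(nr=4, alpha=np.full(4, 0.5), mu=1.0, rng=rng)
print(np.sort(np.abs(w) ** 2))                     # the ordered squared magnitudes
```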
⁴ Formally, we should state that the probability measure Q corresponding to the density q(w) is such that P(· | U = u) is absolutely continuous with respect to Q. Throughout the paper, for brevity, we implicitly make this assumption to avoid such formalities.

Lemma 10. By choosing 0 < α_i < 1, i = 1, . . . , n_r, and μ = min{P^{-1}, 1}, we have the bound (22) for model A, where X̃_U and X̃_V are the strongest and second strongest elements in X̃, respectively.
Proof: The calculation is straightforward from the pdf (21); details are provided in Appendix C.
E. Step 5: Upper bound for model A
Combining (18), (19), (20), and (22) from the previous steps, we obtain (23), and the inequality (24) follows by removing the negative term in (23); to obtain the last inequality, we apply (13) in Lemma 8 with p = 2 and the power constraint E|X̃_U|² ≤ a^{-2} E‖X‖² ≤ a^{-2} P. Finally, we conclude from (14) and (17) that, for model A, the multiplexing gain is upper-bounded accordingly; taking the infimum over α, we have r_A ≤ 1/2.
V. CAPACITY UPPER BOUND FOR MODEL B
In this section, we derive upper bounds for the three cases of model B, where the phase noises are on the transmitter and receiver sides of the channel. As in the previous section, it is enough to consider the memoryless case for our purpose.
A. Case B1: Transmit and receive phase noises Note that the multiplexing gain of this case is upper-bounded by that of cases B2 and B3, since we can enhance the channel by providing the information on the transmit or receive phase noises to both the transmitter and the receiver. In other words, the upper bound min{n_r/2, n_t/2, n_t − 1/2} = min{n_r/2, n_t/2} is still valid for this case. In the following, we show that we can tighten the upper bound n_r/2 to ((n_r − 2)⁺ + 1)/2 with the duality upper bound using the multi-variate Gamma distribution. The proof is in the same vein as the proof for model A. Specifically, the first four steps are exactly the same as for model A, except for Step 3, in which the conditional entropy has a different lower bound, as shown below.
Lemma 11. For model B1, the lower bound (25) holds, where X̃_U and X̃_V have the largest and second largest magnitudes in X̃, respectively.
B. Case B2: Transmit phase noise
In this case, the received signal is Y = H(e^{jΘ_T} • X) + Z. The channel is characterized by the random matrix H̃ = H diag{e^{jΘ_T}}. We shall show that the upper bound is min{n_t/2, n_r/2}. First, with more receive antennas than transmit antennas, i.e., when n_r ≥ n_t, we can invert the channel without losing information; the resulting mutual information (27) is maximized when X is circularly symmetric with n_t independent phases. To see this, we introduce a vector of independent and identically distributed (i.i.d.) phases Φ uniformly distributed in [0, 2π)^{n_t} and show that, for any X, the mutual information does not decrease under such a rotation, where we use the fact that Z̃ is circularly symmetric and has the same distribution as e^{−jΦ} • Z̃. Therefore, to derive an upper bound, it is without loss of optimality to assume that X is circularly symmetric with n_t independent phases. With this assumption, we have

I(X; e^{jΘ_T} • X + Z̃) = I(|X|; e^{jΘ_T} • X + Z̃) + I(∠X; e^{jΘ_T} • X + Z̃ | |X|)
≤ I(|X|; e^{j(Θ_T + ∠X)} • |X| + Z̃) + I(∠X; e^{jΘ_T} • X | |X|)
≤ I(|X|; e^{j(Θ_T + ∠X)} • |X| + Z̃ | Θ_T + ∠X),

where the second inequality is obtained by providing Θ_T + ∠X to the output and using the independence between Θ_T + ∠X and |X|; the last inequality is from the capacity upper bound for a real-valued Gaussian channel and the fact that (Θ_T + ∠X)_{2π} is uniformly distributed in [0, 2π); we define Z̃' ≜ e^{−j(Θ_T + ∠X)} • Z̃. From (28), we get the upper bound n_t/2 on the pre-log. In the following, we assume n_r ≤ n_t, and follow closely the proof for model A in Section IV-A. We first apply a genie-aided bound, by providing the set of indices of the n_r strongest inputs to the receiver. This information, also denoted by U, does not take more than log(n_t choose n_r) bits. Then we also associate with each U a canonical form G^(U), where a ≜ (σ_max(H))^{-1}; we define X̃ ≜ a^{-1}X and, accordingly, the scaled output W. The next step is to derive a lower bound on h(W | X̃), where we assume that U = {1, . . . , n_r} for notational convenience, and the last inequality is from Lemma 7. Finally, we derive an upper bound on h(W | U) via duality using the auxiliary distribution (30) on the output W, where g_α is the normalization factor, which only depends on α and n_r. Essentially, we let each W_i be independent and circularly symmetric with the squared magnitude following a single-variate Gamma distribution with parameters (μ, α_i), as defined in (2) from Definition 1.
Proof: The claim is straightforward from (30): applying Jensen's inequality to the expectation over Z_i and plugging the result back into (32), we readily obtain (31).
Finally, putting together (29) and (31), we obtain the capacity upper bound, where, to obtain the last inequality, we apply (13) in Lemma 8 with p = 2 and the power constraint E|X̃_i|² ≤ a^{-2} E‖X‖² ≤ a^{-2} P. Therefore, the multiplexing gain is upper-bounded accordingly; taking the infimum over α, we get n_r/2.
C. Case B3: Receive phase noise
First, it is not hard to show the upper bound n_t − 1/2. It is enough to provide the n_r − 1 relative angles, {Θ_{R,k} − Θ_{R,1}}_{k=2,...,n_r}, to the receiver. The channel is then equivalent to the case with common phase noise Θ_{R,1}, and we can apply Proposition 1. Next, we show the upper bound n_r/2. To that end, we write the chain of (in)equalities in which we define Z' ≜ e^{−jΘ_R} • Z, which is independent of Θ_R since Z is circularly symmetric; the last equality follows since Y = e^{jΘ_R} • (HX + Z') and thus Θ_R is independent of (|Y|, HX + Z', HX). It remains to show that I(HX; |Y|) ≤ (n_r/2) log⁺ P + c_H. To prove this, it is enough to apply h(|Y|) ≤ (n_r/2) log⁺ P + c_H and to use the fact that h(|Y| | HX) = Σ_{k=1}^{n_r} h(|Y_k| | HX) is lower-bounded by some constant according to (9) in Lemma 7.
VI. CAPACITY LOWER BOUND FOR MODEL B
In this section, we derive a lower bound on the capacity of model B. For simplicity, we consider the class of memoryless Gaussian input distributions. Although the optimal input distribution has been proved to be discrete in [6], the use of a simple Gaussian input provides tight lower bounds on the pre-log, which is enough for our purpose here. In the following, we only consider the memoryless phase noise channel, which can be shown to have a lower capacity than the general stationary and ergodic channel when a memoryless input is used. Thus, we focus on the single-letter mutual information I(X; Y) in the rest of the section. As in the previous section, we investigate the three cases separately.
A. Case B1: Transmit and receive phase noises
In this case, we use all the inputs with equal power, i.e., X ∼ CN(0, (P/n_t) I_{n_t}). For convenience, let us rewrite the received signal in terms of X_0 ∼ CN(0, I_{n_t}), the normalized version of X. The mutual information of interest can be written as in (33). The following lemma, which provides a lower bound on h(Y) in (33), is crucial for the achievability proof.
Proof: See Appendix D. Next, we derive upper bounds on the two negative terms in (33) as follows. The conditional differential entropy can be upper-bounded as shown, where the second inequality is due to Lemma 7 and the third inequality follows from Lemma 8 and the power constraint. Note that the above lower bound holds when we substitute n_t by any n_t' ≤ n_t, i.e., by activating only n_t' transmit antennas. It is clear that when n_r − n_t + 1 ≥ n_t, i.e., n_r ≥ 2n_t − 1, we should let n_t' = n_t. Otherwise, we should decrease n_t' to balance between n_r − n_t' + 1 and n_t', which gives n_t' = (n_r + 1)/2. This completes the proof of the lower bound for model B1.
B. Case B2: Transmit phase noise
In this case, we use n_t' ≜ min{n_t, n_r} input antennas and deactivate the remaining ones. The active inputs, denoted by X', are i.i.d. Gaussian, i.e., X' ∼ CN(0, (P/n_t') I_{n_t'}). We rewrite the output vector as Y = H'(e^{jΘ'_T} • X') + Z, where H' ∈ C^{n_r×n_t'} is the submatrix of H corresponding to the active inputs, and Θ'_T is similarly defined. It follows that I(X'; Y) can be lower-bounded by the difference between h(Y) and a conditional entropy term; the latter is further upper-bounded by Σ_{k=1}^{n_t'} E log⁺|X'_k| + c_H ≤ (n_t'/2) log⁺ P + c_H according to (8) in Lemma 7 and (13) in Lemma 8. This shows the lower bound (1/2) min{n_t, n_r} on the multiplexing gain.
C. Case B3: Receive phase noise
As in Case B1, we let X ∼ CN(0, (P/n_t) I_{n_t}). First, h(Y) is lower-bounded in Lemma 13. Next, the bound (39) readily follows, since we are in the same situation as in Case B1 when Θ_T is known. Finally, combining (34) and (39), we obtain a lower bound on the mutual information which provides the desired multiplexing gain.
VII. CONCLUSIONS AND DISCUSSIONS
In this work, we investigated the discrete-time stationary and ergodic n_r × n_t MIMO phase noise channel. We characterized the exact multiplexing gain when phase noises are on the individual paths and when phase noises are at either side of the channel. With both transmit and receive phase noises, upper and lower bounds have been derived. In particular, the upper bound results in this paper have been obtained via duality, using a newly introduced multi-variate Gamma distribution.
For model B1, the upper and lower bounds derived in this paper do not match for n_r ∈ [4 : 2n_t − 2]. We conjecture that the upper bound (1/2) min{n_t, n_r − 1} is indeed loose. Let us recall that the upper bound is obtained by lower-bounding h(W | X̃) with (25), and by upper-bounding E[−log q(W)] with q(w) being the multi-variate Gamma distribution. We believe that both bounds are loose for model B1 in general. First, we can show the upper bound (40). To see this, we can first write h(W | X̃) = h(W | X̃, Θ_T) + I(Θ_T; W | X̃), then upper-bound the first term by n_r E log⁺|X̃_U| + c_H by following closely the steps in (35)–(36), and upper-bound the second term by Σ_{k≠U} E log⁺|X̃_k| + c_H by following closely the steps in (37)–(38). As compared to the lower bound (25), the upper bound (40) differs only in the terms involving X̃_k, k ∉ {U, V}. In the following, we argue that even if the lower bound on h(W | X̃) were the RHS of (40) (which is the largest that one could get as a lower bound, since it is also an upper bound), we still would not be able to tighten the multiplexing gain upper bound (1/2) min{n_t, n_r − 1} with the same choice of auxiliary distribution q(w). In other words, for the given q(w), (25) is tight enough with respect to the upper bound on E[−log q(W)] − h(W | X̃). To prove this, it is enough to observe that E[−log q(W)] does not involve any terms of X̃ other than X̃_U and X̃_V in such a way as to change the high-SNR behavior, whereas h(W | X̃) is increasing with the strength of each X̃_k. Therefore, the maximization of E[−log q(W)] − h(W | X̃) over X̃ will always put all X̃_k, k ∉ {U, V}, to zero, even if h(W | X̃) hits the highest value (40). To sum up, if the current upper bound (1/2) min{n_t, n_r − 1} is indeed loose, as we conjecture, one would have to first find a new auxiliary distribution q(w) in order to get a tighter upper bound. In particular, the new auxiliary distribution should be such that E[−log q(W)] depends on X̃_k, k ∉ {U, V}, at high SNR in a non-trivial way. With such a distribution, the second challenge is to find a lower bound on h(W | X̃) that also depends on X̃_k, k ∉ {U, V}, in a non-trivial way. In fact, we conjecture that (40) holds with equality.
For models B2 and B3, the results have the following alternative chain rule interpretation. With transmit phase noise (model B2), the mutual information can be written as I(X; Y) = I(X, Θ_T; Y) − I(Θ_T; Y | X), where the first term scales as min{n_t, n_r} log P, as if the phase noise were part of the transmitted signal, whereas the second term scales as (1/2) min{n_t, n_r} log P, as if Θ_T were the input with a fixed distribution and X were the "fading" known at the receiver side. With receive phase noise (model B3), the mutual information can be written differently as I(X; Y) = I(X; Y, Θ_R) − I(X; Θ_R | Y). Here the first term corresponds to the rate when the phase noise is known, while the second term can be considered as the rate of a "reverse" channel with input Θ_R, output X, and known fading Y. In both cases, the original problem of characterizing I(X; Y) boils down to subproblems involving channels without phase noise (i.e., I(X, Θ_T; Y) and I(X; Y, Θ_R)) and communications with fixed phase signaling (i.e., I(Θ_T; Y | X) and I(X; Θ_R | Y)). There are a few interesting future directions. First, it is possible to extend the results to multi-user channels and study the impact of phase noise on such systems. Second, the lower bound for model B1 suggests the following dimension counting argument: one can recover n_t real information symbols with 2n_t − 1 real observations, since the remaining n_t − 1 dimensions are occupied by the n_t − 1 relative phase noises. How to design decoding algorithms that "solve" the 2n_t − 1 nonlinear equations efficiently is a question of both theoretical and practical importance. Finally, a more refined analysis should lead to tighter upper and lower bounds on the capacity, beyond the pre-log characterization.
A. Proof of Proposition 1
With common phase noise, we can perform unitary precoding without losing information, and the channel is equivalent to a parallel channel with common phase noise: Y'_t = e^{jΘ_t} Σ x'_t + Z'_t = e^{jΘ_t} σ • x'_t + Z'_t, where Σ is a diagonal matrix with the min{n_t, n_r} non-zero singular values of the matrix H and σ is the vector of these values. From [9], we know that the multiplexing gain of an M × M channel is upper-bounded by M − 1/2. This upper bound applies here with M = min{n_t, n_r}. The lower bound is achieved by using the Gaussian memoryless input X_t ∼ CN(0, (P/n_t) I_{n_t}). Applying a unitary transformation on e^{jΘ} σ • X + Z, we obtain

N h(e^{jΘ} σ • X + Z | X) = N h(e^{jΘ} ‖σ • X‖ + Z'_1 | X) + N Σ_{k=2}^M h(Z'_k) ≤ N E log⁺ ‖σ • X‖ + N c_H ≤ (N/2) log⁺ P + N c_H,

where Z' is the rotated version of Z and remains spatially white; the first inequality is from Lemma 7 and the second one is from Lemma 8. Finally, we have (1/N) I(X^N; Y^N) ≥ (min{n_t, n_r} − 1/2) log⁺ P + c_H, which completes the proof.
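The unitary precoding step can be checked numerically; a sketch of ours (hypothetical H) verifying that SVD-based precoding and receive-side rotation reduce the channel to the stated parallel form:

```python
# SVD reduction to a parallel channel under common phase noise (sketch,
# hypothetical parameters): precode by V, rotate the output by U^H.
import numpy as np

rng = np.random.default_rng(3)
nt, nr = 2, 3
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
U, s, Vh = np.linalg.svd(H, full_matrices=False)   # H = U diag(s) V^H

x = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)
theta = rng.uniform(0.0, 2 * np.pi)
z = (rng.standard_normal(nr) + 1j * rng.standard_normal(nr)) / np.sqrt(2)

y = np.exp(1j * theta) * (H @ (Vh.conj().T @ x)) + z   # transmit V x
y_par = U.conj().T @ y                                 # receiver-side rotation
# parallel form: y_par = e^{j theta} * (sigma • x) + z', z' = U^H z still white
print(np.allclose(y_par, np.exp(1j * theta) * (s * x) + U.conj().T @ z))
```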
B. Proof of Lemmas 9 and 11
In the following, we shall derive the lower bounds (19) and (25) on the conditional differential entropy h(W | X̃) for model A and model B1, respectively.
First, we shall show that (41) holds for both models. To that end, we analyze h(W_i | X̃ = x) with |x_1| > |x_2| > · · · > |x_{n_t}| ≥ 0 without loss of generality, i.e., we assume that U = 1 and V = 2. A lower bound on h(W_i | X̃ = x) can be obtained by considering the following cases separately.
• When |x_1| ≥ 1 and |x_2| ≤ 1: here g_{i1} e^{jΘ_{i,1}} is from the matrix G^(1) defined in (15), since U = 1 by assumption; the second inequality is from Lemma 7 and the third inequality is from Lemma 8.
• When |x_1| ≥ 1 and |x_2| ≥ 1: the first inequality follows since conditioning reduces entropy; we partition [0, 2π)² in such a way that e^{jΘ_{i,1}} g_{i1} x_1 + e^{jΘ_{i,2}} g_{i2} x_2 is a bijective function of (Θ_{i,1}, Θ_{i,2}) in each partition indexed by Ω, which takes a finite number of values; then we apply the change of variables from Lemma 2 and obtain (42) with φ ≜ ∠(g_{i1} x_1) − ∠(g_{i2} x_2); finally, we use the fact that |g_{i1} g_{i2}| is bounded for almost every H and apply Lemma 5 to get the last inequality. Note that log|x_k| = log⁺|x_k| for k = 1, 2 by assumption.
Combining the three cases above and taking the expectation over X̃, we get (41).
1) Proof of the lower bound (19) for model A: Inequality (43) follows from the fact that W_i depends on W^{i−1} only through the input X̃ and the phase noises {Θ_{l,1}, . . . , Θ_{l,n_t}}_{l<i}; the last inequality is from a modified version of (41), obtained by introducing {Θ_{l,1}, . . . , Θ_{l,n_t}}_{l<i} in the conditioning.
2) Proof of the lower bound (25) for model B1: For model B1, we write the decomposition (44), where, according to (41), the first term is lower-bounded by (45). In the following, we derive a lower bound on the second term. Let B_i ≜ Σ_{k=1}^{n_t} g_{ik} X̃_k e^{jΘ_{T,k}}, where g_{ik} is the channel coefficient without phase noise from the canonical form U defined in (15). The first inequality follows since conditioning reduces entropy; (46) is from Lemma 7. The conditional expectation can be lower-bounded as follows, with Φ̃_{ik} ≜ ∠(g_{ik} X̃_k); (48) is obtained by applying Lemma 6, where the equality is an application of the change of variables from Lemma 2; the last inequality is from Lemma 5. From (47) and (49), we get (50), where the last inequality is from the application of (12) in Lemma 8 with p = 2. Plugging (45) and (50) into (44), the lower bound (25) is obtained.
C. Proof of Lemma 10

• The squared magnitude of each output: here G_iᵀ is the i-th row of the canonical matrix G^(U) defined in (15); (52) is due to the Cauchy–Schwarz inequality; and λ_H is defined accordingly.
• The difference of the squared magnitudes: note that the above upper bounds do not depend on i and k.
Then, with the above bounds, we take the expectation of the terms in (51) and obtain a bound whose last inequality is from Lemma 8. Similarly, basic calculations lead to (55). Taking the expectation over X̃ in (51), and plugging (53), (54), and (55) into it, we readily obtain (22).
D. Proof of Lemma 13
To prove Lemma 13, we deal with the cases n_r = 2n_t − 1 and n_r ≠ 2n_t − 1 separately. Let us define Ŷ and Ỹ accordingly. For notational convenience, we define n ≜ n_r and m ≜ n_t in the following proof.
1) Case n = 2m − 1: First, we show that (34) holds, where S ∈ R^{n−1} with S_i ≜ |Ŷ_i|² for i = 1, . . . , n − 1; the second inequality is from the chain rule and the fact that adding the condition on the phase of Ŷ_n reduces entropy; the last equality is due to Ŷ_n ∼ CN(0, m^{-1}‖h_n‖²). Next, we need to show that h(S | Ŷ_n) > −∞. Intuitively, given Ŷ_n, S can be expressed as n − 1 = 2(m − 1) real functions of the 2(m − 1) real random variables (Re{Ŷ^{m−1}}, Im{Ŷ^{m−1}}). Since h(Re{Ŷ^{m−1}}, Im{Ŷ^{m−1}}) = h(Ŷ^{m−1}) is finite for almost every H, as long as the mapping is not degenerate, h(S | Ŷ_n) should be finite too. This argument is proved formally in the following.
Since, for any generic H ∈ C^{n×m}, any m rows of the matrix are linearly independent, the remaining n − m rows can be written as linear combinations of these rows. Here, (56) is due to the fact that the complex gradient of a real-valued function is a unitary transformation of the real gradient (see, e.g., [21, App. A6]); to obtain the last equality, we apply the identity det([C D; E F]) = det(C) det(F − E C^{-1} D). Since Ŷ_1, . . . , Ŷ_{m−1}, Ŷ_n are jointly circularly symmetric Gaussian with finite and non-degenerate covariance for any generic H, there exists a Ŷ'_n, circularly symmetric with non-zero bounded variance and independent of Ŷ^{m−1}, such that the required bound holds; the last inequality follows from E log|Ŷ_{1,R} − P_{1,1} Ŷ_{1,I}| ≥ E log|Ŷ_{1,R}| ≥ c_H, due to the independence between Ŷ_{1,R} and Ŷ_{1,I} and the application of Lemma 1.
Finally, recalling that T_I ≜ Im{diag{b} B*}, we have log|det(T_I)| > −∞ for any generic H; it then follows from (58) that E log|det(N_I)| is lower-bounded. By now, we can conclude by applying (12) in Lemma 8. This completes the proof for the case n = 2m − 1.
2) Case n ≠ 2m − 1: Note that if (34) holds for n = 2m − 1, then it also holds for n < 2m − 1 and n > 2m − 1. To see this, in the case with n < 2m − 1, we can add 2m − 1 − n receive antennas to obtain (Y, Y'), with Y' being the extra outputs. Since (34) holds for h(Y, Y') by assumption, we have h(Y) ≥ (2m − 1) log⁺ P − (2m − 1 − n) log⁺ P + c_H = n log⁺ P + c_H, where the second inequality is from (34) and the fact that h(Y') ≤ (2m − 1 − n) log⁺ P + c_H. When n > 2m − 1, we proceed similarly, where the second inequality is from Lemma 7; the equality (59) follows from the fact that h_kᵀ(e^{jΘ_T} • X_0) ∼ CN(0, ‖h_k‖²).
ACKNOWLEDGEMENT
S. Yang would like to thank G. Durisi for helpful discussions and comments during the early stage of this work.
“Antecedents of the service quality for housing loan customers of Indian banks”
The purpose of this paper is to explore the influence of the cost of borrowing, processing time and documentation on the service quality of banking institutions in India that sanction housing loans. A research framework was designed to consider the independent variables influencing service quality by unearthing research gaps in the extant literature on housing loans. All research gaps were transformed into a questionnaire, to which 535 useful responses were received. A five-point Likert scale was used, and a structural equation model was formulated using ADANCO 2.0.1; all hypotheses were tested with ADANCO. The findings clearly indicate the relevance of service quality in the banking sector in India. There is a significant relationship between the three independent variables (cost of borrowing, processing time and documentation) and service quality. The outcome of banking service quality is measured through initial personal contact, online banking services, the humanitarian approach, provision of information for services, promise of service delivery and field verification, with all these measures having a very strong impact. This study is restricted to India only, but could be extended to other developing countries in South Asia in the future. This study could also be extended to cover other types of banking loans offered by banking institutions in India. The paper concludes that it is time for banking institutions to take action to sanction housing loans with a view to introducing the instant sanctioning of bank loans that come with real-time access, without resorting to bureaucratic policies and procedures for housing loan customers.
INTRODUCTION
Customers consider bank service quality to be the most important element when selecting mortgage providers and establishing long-term relationships with them. The other three elements are product attributes, access, and communication (Lymperopoulos et al., 2006). Panagariya and More (2014) state that post-reform accelerated growth has worked as a strong stimulus to reduce poverty in India for all economic, social, and religious groups. Ghosh et al. (2015) identify the impact of financial sector liberalization on the availability of credit in both public and private banks. Bhanumurthy and Singh (2013) highlight the trends in and determinants of economic growth in India. The trends revealed by these three articles indicate that financial institutions sanctioning home loans are liberalizing the various rules and procedures applicable to the granting of house loans to customers.
Service quality plays an important role in determining the choice and selection of a mortgage provider. Cronin and Taylor (1992) investigate the conceptualization and measurement of service quality and the relationships between service quality, consumer satisfaction, and purchase intentions. Service quality is an antecedent of customer satisfaction, which, in turn, has a significant effect on purchase intentions. Parasuraman et al. (1985) examine the quality of intangible goods as measured by marketing experts, since the quality of services is largely undefined and under-researched. Kaura and Datta (2012) stress the importance of service delivery as a sole variable to improve customer relationships; this study is further supplemented by Kaura (2013), who states that service quality in banks depends on employee behavior, tangibility, information technology and the dimensions of service convenience, such as decision, access, transaction, benefit and post-benefit convenience. The scope of the study is geographically confined to India. Customers residing in all metropolitan cities in India are potential respondents to the questionnaire, which is administered on a sample basis. All the customers selected through the sampling process have taken out home equity loans, existing home loans, home improvement loans, home purchase loans or land purchase loans under the housing loans scheme.
LITERATURE REVIEW
The literature review is aimed at determining the fundamental elements of the research problem and the progress made on the influencing factors (i.e., the independent variables), namely, the cost of borrowing, processing time and required documentation. These independent variables have an impact on a dependent variable: service quality.
The cost of borrowing includes application fees, processing fees and interest on housing loans, penalties for default, foreclosure charges and any other incidental costs incurred during the duration of a housing loan. Meador (1982) notes that the cost of borrowing differs from region to region due to local regulations, the different kinds of mortgage available, loan-to-value ratios and foreclosure conditions. This research study focuses on the cost of borrowing as an important determinant of customer satisfaction. The application fee is the fee payable when submitting a home loan application. Application fees are linked to the supply of and demand for housing loans. On the one hand, if the demand for housing loans exceeds the supply, the application fee is high, and vice versa. Even big banks hike their application fees when the demand for housing loans soars and, correspondingly, the rate of application rejections is high (Daniels, 2009). On the other hand, however, there is a trend of abandoning application fees as a way to attract more loans into the market. There are many cases in foreign countries where application fees have been scrapped in order to provide an opportunity for new borrowers to access loans (Martin, 2006). Estimating the valuation charges on home loan transactions is a delicate area that would benefit from greater clarity. Different lending institutions impose different valuation charges for their own advantage. Unethical practices exist in many cases, depending upon the culture and characteristics of the lending institutions involved (Thomas, 2016). The size of a home loan valuation fee depends on the valuation procedures, which in turn determine the equilibrium at which a fair fee can be charged (Schwartz & Torous, 1992). Ward (2009) observes that the valuation fee depends on an observed value. A setup cost, as part of a valuation fee, increases in utility; it measures the discount on the maximum price levied when processing a home loan. Cooley (2005) states that processing fees are charged through different platforms. In addition to its own platform, an independent agency provides portals from which information about the various processing fees can be obtained. Lenders use processing fees or points to adjust both their yield and the borrower's annual loan amount (Eaton, 2005). There are two types of interest rates currently in vogue. Equated monthly instalments (EMIs) remain constant under a fixed rate of interest. Alternatively, EMIs fluctuate in response to market conditions under floating interest rates. Many lending institutions give a borrower the option to choose between a fixed or floating interest rate, either at the commencement of the loan or during the life of the loan. In practice, a fixed rate of interest is the rate preferred by housing loan borrowers (Uberti et al., 2014). Dhillon et al. (1987) observed that all fixed-rate home loans are non-adjustable, with long-term maturities (i.e., 25 to 30 years). Foreclosure occurs when an extenuating circumstance happens. This includes the death of a spouse, illness, job transfer, injury resulting in disability, bankruptcy, and/or job loss. Foreclosure is also resorted to whenever an alternative source of housing loans that attracts a lower rate of interest becomes available (Rodgers & McFarlin, 2017). Carr (2007) indicates that a foreclosure fee is required if the outstanding principal balance of the loan is adjusted or repaid; this is necessary to avoid a future foreclosure crisis.
Modification occurs when there is an alteration, addition or amendment of the terms and conditions of existing housing loans. This is a permanent change with regard to payment terms, principal, interest rate and collateral security. An increase or decrease in the mortgage principal balance is more likely to result in a modification to a loan. All these processes are carried out subject to the stipulation of a modification fee (Schmeiser & Gross, 2016). Late fees cause the subsequent monthly payment to be inadequate, generating a pyramiding of late fees (McNulty et al., 2019), and the lending institution is allowed to levy late fees on the unpaid amount (Chiang et al., 2016). Danis and Pennington-Cross (2005) observe that late fees accumulate over time, meaning that repaying the loan costs more over the long term. Lee and Hogarth (2000) note that home loan borrowers are asked to specify the terms and conditions of the home loan, particularly the various types of fees (including the application fee, interest rate, monthly payment, insurance fee, and late payment fees). Bureaucratic processes exist in the home loan market, and there are many incidences of mortgage fraud in various countries. Borrowers submit false income details in the hope of attaining a larger housing loan, in turn encouraging corrupt practices among realtors, appraisers, and lending institutions. This has led lending institutions to sanction unqualified borrowers towards the higher limit of home loans available to them, through the administration of large commission fees (Der Hovanesian & Beucke, 2005). Murray (2018) notes that corrupt practices lead to abuses of government subsidies and home improvement schemes. Pfeiffer (2017) states that brokers are normally paid a commission for providing a buying and selling service. The price to be paid for the services rendered by the broker is called the brokerage. Normally, brokerage transactions are carried out through face-to-face interviews during which the terms and conditions of the brokerage contract are finalized (Conklin, 2017).
A method and apparatus for the closed-loop, automatic processing of a loan, including the completion of the application, underwriting, and transfer of funds, involves the use of a programmed computer to interface with an applicant, obtain the information needed to process the loan, determine whether to approve the loan, effect electronic fund transfers to the applicant's deposit account and arrange for automatic withdrawals to repay the loan. Toscano (2002) defines the processing time as the time taken to process loan documents and instruments for the purpose of solicitation, verification, grant, extension, renewal, and sale of loans, whether secured or unsecured. In addition to the combination of law and technology, the processing time preserves the persistence, provenance, integrity, legality, utility, evidentiary admissibility, form, and content of loan documents, aiding a speedier approval of home loans. The starting point of the loan process is the initial processing of a home loan by a lending institution. This initial process considers the borrower's income, debts, ability to repay the loan periodically, willingness to repay, credit report and rental payment history. Since it is very difficult to standardize initial processing procedures, this is handled on a case-by-case basis. It is very important for lending institutions to examine the weaknesses of the borrower and to determine how to fill any gaps through the strengths demonstrated by the borrower. This is necessary and in the best interest of lending institutions. During the home loan application process, a lot of information needs to be collected, namely: the borrower's mailing address; information on first- and second-lien home loans (if any); any other borrowings; the specific property address for which the loan is to be sanctioned; the duration of the loan; the original loan amount; debt-to-income ratios; the credit report and credit score; the occupancy status of the mortgaged property; the borrower's bankruptcy history (if any); the outstanding loan balance; information on whether the loan has been modified or extended; any defaults or foreclosures of previous loans; and any other relevant information related to the borrower's needs. Tealdi et al. (2012) identify a method for automatically fulfilling the approval requirements, namely: maintaining a multiplicity of registered service providers; receiving a home loan application that has one or more conditions to be fulfilled for the home loan application to be approved; evaluating one or more conditions to determine one or more actions that should be taken to fulfil those conditions; and automatic approval when the prescribed conditions are fulfilled. This step in the loan process is very important, as it makes the applicant fully aware of their commitments in the loan agreement. As soon as the home loan approval formalities are over, the loan agreement is transmitted to and from the applicant. In suitable cases, the delivery of the agreement to the home loan borrower is done by the borrower's agent. In a few instances, the applicant completes the requisite functions through computer access from a kiosk (Norris, 1999). There are three stages to the withdrawal of the loan amount, namely: the submission of the application form and documents, followed by the sanctioning and then by the disbursement, which is usually communicated through a home loan disbursement letter. Once a home loan sanction letter has been approved, the disbursement process will start.
The disbursement of the loan amount takes place in one or more instalments, following any technical and/or legal property verification (e.g., the progress of construction or the readiness of the house). Documentation for a home loan includes the title to the property, tax returns, payslips or other proof of income, bank statements and other assets, credit history, photo ID, renting history and any other documents that are required by lenders from time to time. Mian and Sufi (2017) state that a good loan document is a representation of the true documentation, leading to correct conclusions on the nature of home loan credit supply expansion towards marginal borrowers. Davis (2013) highlights certain problems surrounding documentation in home loan applications, noting a recent rise in challenges to the enforcement of home loans owing to widespread documentation problems relating to the transfer of assets. He observes that reforms are possible but will require crucial improvements in the recording process for home loan documents. McQuiston (2018) places importance on the simplification, standardization, and rationalization of the loan application process. Advocating for an easy application process in terms of the documentation and forms required, he states that housing loan lenders have an opportunity to gain a greater share of loans if they provide simplified and user-friendly application forms. While McQuiston supports the introduction of a simplified form, Bhutta et al. (2015) advise that lenders do away with conservative and traditional checks conducted through complex application forms. An exemplary application process is possible only if it is simple and straightforward (Leyer et al., 2015). Lending institutions offer online platforms for their housing loan services in order to facilitate faster processing, starting from the enquiry for a loan application to closing the loan's account after the final loan instalments are paid (Leng et al., 2018). Furthermore, lending institutions provide technical assistance for the valuation of property, the approval of building plans and the monitoring of different stages of the construction of the home, not only for the safety of the loan but also for tendering benevolent advice on construction (Hartman, 2017). Ferguson (1999) stated that technical assistance starts with locating the infrastructure together with other essential resources, such as people, machines, materials, methods, and money. The collection of documents involves getting information from various segments of the loan, which are broadly classified as the initial application stage, process and implementation stage, foreclosure stage and final closure. Educating borrowers on purchasing homes is effective in reducing default or foreclosure. These qualifications are added criteria for considering home loans. Educational qualifications help promote sustainable homeownership by influencing borrowers' information-seeking behavior and strategies for resolving defaults. After paying the final instalment of a home loan, the borrower receives a message stating that the mortgaged property is no longer held by the lending institution. As proof of ownership, the mortgagee releases all of the documents, proving that the loan has been paid in full. Anidiobu et al. (2018) stated that a loan document is a signed document that specifies a change in ownership of a property.
The delivery of the loan documents to the borrower is an indication that the loan process is coming to an end, freeing the borrower from the home loan agreement. This follows the full and final settlement of all instalments and other charges with the financial institution. Craft (2015) advocates that the power of attorney be limited to authorizing the mortgage of the property, the deposit of instalments, the foreclosure of the loans and the final closure. Covenants, caveats, and preventive clauses are introduced to safeguard the interests of the homeowners (borrowers). The power of attorney should be automatically terminated as soon as the loan account is closed and all documents are received.
Based on the research gaps identified by the literature survey, the following sequential research questions and problems can be constructed: a) Is the cost of borrowing compatible and commensurate with the extent of the various services offered by lending institutions? b) Is the processing time (from initial enquiry to the approval of the home loan) in accordance with the promise made in the marketing materials, meeting the advertised quality of service? c) Does the practice of collecting all the required documentation (salary slips, income tax returns, bank statements and past financial history) affect service quality in relation to the swiftness of the loan? d) Can service quality be measured by swifter actions in terms of initial personal contact, online banking services, the humanitarian approach, the provision of information for services, the promise of service delivery and field verification?
The research objectives are set with the aim of solving the problems identified by the research gaps in the extant literature: to ensure that the cost of borrowing is compatible and commensurate with the services rendered by lending institutions; to ensure that the time between enquiry and approval of the home loan is in accordance with the promise made in the lending institution's marketing materials; to minimize the number of documents required, facilitating the speedier approval of the loan and improving service quality; and to measure the outcome of service quality in terms of initial personal contact, online banking services, the humanitarian approach, the provision of information for services, the promise of service delivery and field verification.
RESEARCH METHODS
Since this study involves problem solving, its scope includes a literature review to find areas of research that have yet to be addressed, known as research gaps. The identification of research gaps in the main topic helps frame the research questions. Furthermore, research objectives are set as solutions to each research question. Once the research objectives are identified, meaningful hypotheses can be formulated. Both primary and secondary data are used. While the primary data is collected through the administration of the questionnaire instrument, the secondary data is collected from the past literature, namely ProQuest, EBSCO, Google Scholar, etc. In this study, home loan borrowers are chosen as respondents to reflect a high level of knowledge of, and feedback on, the service quality of lending institutions that grant home loans. Out of the total population of home loan customers, the questionnaire was sent to about 1,500 respondents. The responses received amounted to 592 (39%). Out of the 592 responses received, only 535 were deemed usable. Hence, the data analysis is carried out based on the results obtained from the 535 usable responses.
RESULTS AND DISCUSSION
ADANCO (Advanced Analysis of Composites) is a modern software program that differs from traditional statistical packages: it is software for variance-based structural equation modeling. The software aims to achieve reliable statistical results. In this section, the focus is on the justification for using ADANCO 2.0.1 to analyze and interpret the data. The use of the Likert measurement scale and other statistical findings are also presented in figures and tables. The structural equation model with a path analysis is presented in Figure 1.
HYPOTHESIS DEVELOPMENT AND TESTING
Hypotheses are tested by measuring total effects in ADANCO 2.0.1. The total effect of one variable on another is the sum of the direct effect and all of the indirect effects. The value of a direct effect is interpreted as the increase in the dependent variable if the independent variable were increased by one standard deviation. Under hypothesis testing, the results from the data analysis are discussed to answer the research questions and related hypotheses. The results of testing all of these hypotheses are summarized below, each followed by an interpretation (Table 1).
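As an illustration of the direct/indirect decomposition, here is a sketch of the total-effect computation for a recursive path model (the processing-time paths use the β values reported below; the remaining coefficients are placeholders of ours, since they are not shown in this excerpt):

```python
# Total effects = direct + all indirect effects; for a recursive path model
# with direct-effect matrix B (B[i, j] = effect of j on i), the total-effect
# matrix is B + B^2 + B^3 + ... = B @ inv(I - B).
import numpy as np

labels = ["cost", "proc_time", "documentation", "service_quality"]
B = np.zeros((4, 4))
B[3, 0] = 0.25     # cost -> service quality (PLACEHOLDER value)
B[3, 1] = 0.4200   # H3: processing time -> service quality (reported)
B[2, 1] = 0.7142   # H4: processing time -> documentation (reported)
B[3, 2] = 0.30     # H5: documentation -> service quality (PLACEHOLDER)

T = B @ np.linalg.inv(np.eye(4) - B)
print("total effect of processing time on service quality:",
      round(T[3, 1], 4))   # 0.4200 + 0.7142 * 0.30 = 0.6343
```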
Hypotheses
H1: There is a significant relationship between the cost of borrowing and the service quality of banks.
The first hypothesis (H1) shows that the impact of the cost of borrowing on service quality is highly significant (t-value = 2.7625; CI > 99%). Thus, H1 is accepted.

H3: There is a significant relationship between processing time and the service quality of banks.
The third hypothesis (H3) shows that the impact of processing time on service quality is highly significant (t-value = 7.5826; CI > 99%). Thus, H3 (β = 0.4200; p < 0.00) is accepted. This indicates that parameters such as the initial processing of the loan application (0.862), collection of information (0.887), approval of the loan (0.881), delivery of the loan agreement (0.895), and timely withdrawal of the loan (0.734) contribute to the successful integration of service quality. In confirming this finding, Bloomquist et al. (2006) observed that processing time involves the application of a set of rules for combining loan product features. Borrowers get the loan approved with a customized combination of loan features based on their requirements and these rules. These processes are carried out through online banking.
H4: There is a significant relationship between the processing time and documentation of banks.
The fourth hypothesis (H4) shows that the impact of processing time on documentation is highly significant (t-value = 17.9078; CI > 99%). Thus, H4 (β = 0.7142; p < 0.00) is accepted. This indicates that parameters such as the initial processing of loan applications (0.862), collection of information (0.887), approval of the loans (0.881), delivery of loan agreements (0.895), and timely withdrawal of the loans (0.734) contribute to the successful integration of documentation. Deroy et al. (2013) stated that the verification process would usually continue after fulfilling conditions such as the appraisal of the documents, the examination of title deeds and other statutory requirements. This leads to variations in processing time due to the submission of different types of documents. With the development of digital applications, automatic processing of loans is emerging (Norris, 1999).
H5: There is a significant relationship between the documentation and the service quality of banks.
The fifth hypothesis (H5) shows that the impact of documentation on service quality is highly significant (t-value = 5.0755; CI > 99%). Thus, H5 is accepted.
CONCLUSION
Based on the structural equation model, there is no doubt that the cost of borrowing, processing time and documentation are decisive factors in determining the service quality of banking institutions offering housing loans. The measures and outcomes of service quality are assessed by initial personal contact, online banking services, humanitarian approaches, provision of information about services, the promise of service delivery, and field verification. The path coefficients of these six outcomes range from 0.830 to 0.854, a clear indication that these outcomes have a very strong effect on the service quality of banking institutions offering housing loans. Wright (1934) opines that a path coefficient of 0.8 and above has the strongest impact on service quality outcomes. This has a wider practical implication for all banking institutions: service quality is a very important determinant in sanctioning home loans to customers. This study was conducted only in India; it may be extended to other countries such as Sri Lanka, Bangladesh, Indonesia, and Malaysia. This study also centers on housing loans at banks. Future research may be extended to other types of loans that are sanctioned by banking institutions.
This study analyzed the service quality of Indian banking institutions and identified it as a prime factor in granting housing loans. From the structural equation model, the cost of borrowing is the dominant factor, exhibiting the highest t-value (38.29) in influencing processing time, which in turn is strongly related to service quality. The processing time recorded the second-highest t-value (17.91) in influencing documentation, which in turn has a significant relationship with service quality. Similarly, the outcomes of this study are measured by housing loans without margin money, customer satisfaction with service, instant sanction of housing loans, and prompt delivery of services. This is a substantial contribution to the research on banking institutions in India. This paper recommends that the time has come for banking institutions to introduce improved services for the instant sanctioning of loans with real-time access, without subjecting housing loan customers to bureaucratic policies and procedures.
"Business",
"Economics"
] |
Dye Tracking Following Posterior Semicircular Canal or Round Window Membrane Injections Suggests a Role for the Cochlea Aqueduct in Modulating Distribution
The inner ear houses the sensory epithelia responsible for vestibular and auditory function. The sensory epithelia are driven by pressure and vibration of the fluid-filled structures in which they are embedded, so understanding the homeostatic mechanisms regulating fluid dynamics within these structures is critical to understanding function at the systems level. Additionally, there is a growing need for drug delivery to the inner ear for preventive and restorative treatments of the pathologies associated with hearing and balance dysfunction. We compare drug delivery to the neonatal and adult inner ear by injection into the posterior semicircular canal (PSCC) or through the round window membrane (RWM). PSCC injections produced higher levels of dye delivery within the cochlea than did RWM injections. Neonatal PSCC injections produced a gradient in dye distribution; however, adult distributions were relatively uniform. RWM injections resulted in an early base-to-apex gradient that became more uniform over time post injection. RWM injections led to higher levels of dye distribution in the brain, likely demonstrating that injected compounds can traverse the cochlear aqueduct. We hypothesize that the relative position of the cochlear aqueduct between the injection site and the cochlea is instrumental in dictating dye distribution within the cochlea. Dye distribution is further compounded by the ability of some chemicals to cross inner ear membranes and access the blood supply, as demonstrated by the rapid distribution of gentamicin-conjugated Texas red (GTTR) throughout the body. These data allow for a direct evaluation of injection mode and age to compare strengths and weaknesses of the two approaches.
INTRODUCTION
Inner ear end organs contain the sensory epithelium for balance and hearing. These fluid filled compartments are exquisitely sensitive to motion, particularly the fluid motion within these end organs (Hudspeth, 1989). Understanding fluid flow and compartmentalization within these systems is important at multiple levels. The ionic environment surrounding the sensory hair cells is critical to function (Wangemann and Schacht, 1996). Many pathologies associated with hearing loss and balance disorders stem from disruption of these fluid compartments or the pressure associated with these compartments [e.g., Meniere's disease (Hornibrook and Bird, 2016), Superior Semicircular Canal (SCC) Dehiscence (Minor et al., 1998), and Benign Paroxysmal Positional Vertigo (BPPV) (Vazquez-Benitez et al., 2016)]. Finally, therapies to prevent damage and to repair or replace damaged tissue will rely on our ability to deliver compounds uniformly and selectively to the inner ear compartments and end organs that require treatment. We cannot adequately design and assess therapeutics if we do not understand how the delivery system impacts drug distribution both within the ear but also to the brain and beyond.
RWM injection is a common mode of compound delivery that provides direct delivery into the perilymphatic space (Liu et al., 2005, 2007; Iizuka et al., 2008; Akil et al., 2012; Askew et al., 2015; Plontke et al., 2016; Dai et al., 2017; Landegger et al., 2017; Pan et al., 2017; Yoshimura et al., 2018). Plontke et al. (2016) showed that direct intracochlear injection through the RWM of guinea pigs resulted in a basal-apical gradient in the distribution of the injected markers (i.e., fluorescein or fluorescein isothiocyanate-labeled dextran). RWM injections in guinea pigs led to compound detection in the subarachnoid space, presumably traveling through the nearby cochlear aqueduct (Kaupp and Giebel, 1980). Recently, rapid detection of a biological dye in the spinal cord and brain of mice after RWM injection was reported (Akil et al., 2019).
PSCC is another site for inner ear injections that offers the advantage of easy access with limited middle ear damage (Kawamoto et al., 2001; Suzuki et al., 2017; Isgrig and Chien, 2018). Data from lateral SCC (LSCC) injection of markers and dexamethasone suggest a more uniform distribution within the cochlea of guinea pigs (Salt et al., 2012a).
Here we directly compare RWM and PSCC injections at two ages in mice using similar volumes and flow rates. These experiments normalize for experimental variability as well as species and allow for direct assessment of route of entry and age. We demonstrate more uniform dye delivery using PSCC injections, with less dye appearing in the brain than with RWM injections; the greater amount of dye in the brain after RWM injection implicates the cochlear aqueduct as a potential shunt to cochlear drug flow. Finally, GTTR injected into the PSCC resulted in dye appearing throughout the body, suggesting it can rapidly enter the blood supply, likely by crossing the membranous labyrinth. Together these data begin to identify key parameters regulating distribution of compounds within inner ear compartments.
MATERIALS AND METHODS
Injections of trypan blue (4.6 mM, Life Technologies Corporation, United States), methylene blue (100 mM, ACROS Organics, United States), and GTTR [270 µM (Myrdal et al., 2005)] were made directly into the inner ear of mice. For ex vivo imaging, the dissected tissue was kept in Hanks' balanced salt solution (HBSS, Life Technologies Corporation, United States) buffer. C57BL/6 mice of both sexes at postnatal days P1-P3 and P21-P23 were used for these studies. All animal procedures were approved by the Animal Care and Use Committee at Stanford University.
Injection Protocol
One microliter of compound was injected at a flow rate of 300 nl/min in five to six animals per group, and dye presence at the cochlear base, middle, and apex was monitored at different time points. The cochlear perilymphatic space of an adult mouse was previously estimated as 0.62 µl (Thorne et al., 1999). In this study, we chose a 1 µl injection volume for all experiments to provide enough dye for a complete replacement of the cochlear perilymph. No change in injection volume was used to compensate for any changes in inner ear volume with age. We tested several dyes including methylene blue, AM1-43 (100 µM, Biotium, United States), trypan blue, and GTTR. We chose trypan blue as the main dye for these studies because it interacted least with the tissue: it is not taken up by cells (like AM1-43, methylene blue, or GTTR), nor does it appear to stick to the tissue (like methylene blue). A rate of 300 nl/min was selected as the fastest rate that did not disrupt the tissue, balancing the need to inject quickly (to limit movement issues with pipette placement) against the need to limit any damage caused by fluid pressure inside the cochlea during the injection. The flow rate was maintained regardless of age and injection site.
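As a quick sanity check on these parameters, the short calculation below (using only values stated in the text) gives the infusion duration and the fold-replacement of adult cochlear perilymph; it is illustrative arithmetic, not part of the experimental protocol.

```python
# Injection parameters from the text: 1 µl at 300 nl/min; adult cochlear
# perilymph volume ~0.62 µl (Thorne et al., 1999).
volume_nl = 1000.0         # injected volume (nl)
rate_nl_per_min = 300.0    # flow rate (nl/min)
perilymph_nl = 620.0       # adult mouse cochlear perilymph (nl)

duration_min = volume_nl / rate_nl_per_min   # ~3.33 min of infusion
replacement = volume_nl / perilymph_nl       # ~1.6x the perilymph volume
print(f"duration: {duration_min:.2f} min, replacement: {replacement:.2f}x")
```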
In all experiments, a 10 µl gas-tight syringe (Hamilton, United States) was mounted onto a microinjection pump [UMP3 UltraMicroPump, WPI, United States (Salt et al., 2012a)] and coupled to a glass micropipette tip. The rate and duration of injection were controlled by a microprocessor-based controller (Micro 4, WPI, United States) affixed to the microinjection pump to ensure the appropriate injection volume. The syringe and the micropipette were first filled manually with phosphate buffer saline (PBS) up to 4 µl of the syringe volume. After mounting the syringe onto the pump, using the Micro 4 controller, the pump was programmed to load a 1 µl air gap at the micropipette tip via suction, before loading the dye. Four microliters of the dye was then suctioned into the micropipette, leaving an air gap between PBS and the dye. This approach limited compression, allowing for a highly reproducible volume injection. The glass micropipettes were generated with a micropipette puller (Sutter Instrument Co., Model P-97, United States) and then scored and broken to ∼25 µm inner diameter and ∼45 µm outer diameter tips. Similar pipettes were used for both neonatal and adult injections. For precise localization of the micropipette tip during injections, the pump was mounted on a motorized manipulator (Exfo Burleigh, PCS-6000, United States).
Data Collection
An M320 F12 ENT microscope with a fully integrated camera (Leica, movie resolution: 720 × 480 pixels, image resolution: 2048 × 1536 pixels, Germany) was used for the surgical procedures and live imaging of the dye distribution within the cochlea. The integrated camera stored images and videos on a secure digital (SD) memory card. Example videos are presented in the Supplementary Material S1-S4. The injection process, starting at time 0, was video recorded in MP4 format and converted to 8-bit JPEGs at 30 s intervals. Free Video to JPG Converter (V.5.0.101) was used for converting the MP4 files. Following completion of the infusion, JPEG images (8-bit) were captured at 3-min intervals (from 4 to 34 min). An SZX10 microscope (Olympus, Japan) equipped with CCD color and monochrome cameras (QIclick-F-CLR-12 and QIclick-F-M-12, 1392 × 1040 pixels, 6.45 µm² pixel size, Teledyne QImaging, Canada) was used for dissections and imaging of the dissected cochleae and brains at the ∼5 min and 1-h time points, as well as for the fluorescent images using GTTR. Images in TIFF (12-bit) and JPEG (8-bit) formats were captured.
Data Analysis
All images were analyzed similarly. Intensity was measured in 50 µm (in vivo images) and 100 µm (ex vivo images) diameter regions in the apex, middle, and base. For all intensity measurements, Fiji software (an open source image processing package) was used. All statistics were performed with Origin software using two-sample or paired-sample t-tests. For in vivo images, data from apex, middle, and base from each cochlea were pooled for group comparisons if no regional differences were observed.
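A minimal sketch of this comparison step is shown below; the intensity values are hypothetical, and scipy's `ttest_ind` stands in for the Origin two-sample t-test described above.

```python
import numpy as np
from scipy import stats

# Hypothetical 8-bit ROI intensities (lower values = darker = more dye),
# pooled across apex, middle, and base for two groups, as described above.
pscc_group = np.array([88, 92, 95, 90, 101, 86])
rwm_group = np.array([104, 98, 110, 102, 96, 107])

t_stat, p_value = stats.ttest_ind(pscc_group, rwm_group)  # two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```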
Computational Modeling
Finite element method (FEM) simulation software (COMSOL Multiphysics 5.3a) was used for computing the velocity magnitude along the perilymphatic compartment during injections via the PSCC and RWM. The 2D Laminar Flow Module for incompressible flows was applied to solve the Navier-Stokes equations coupled with the continuity equation for conservation of mass in the perilymphatic compartment. The perilymphatic compartment was modeled as a 2D pipe with three connected sections representing scala tympani (ST), scala vestibuli (SV), and SCCs. As shown in Figure 9C, based on the measurements reported previously for an adult mouse (Thorne et al., 1999), 4.55 and 3.98 mm were chosen for the lengths of ST and SV. For simplicity, the width of each section was kept constant along its length (i.e., SV: 200 µm, ST: 140 µm). A 4 × 0.09 mm² section was added to represent the SCCs. The cochlear aqueduct was assumed to have a 100 µm wide opening in the ST, at a 650 µm distance from the RW. It was assumed that the injection was directly into the perilymphatic space, and no back flow was considered at the injection sites. The injection sites at the PSCC and RWM were both 25 µm wide, located at the middle of the SCC length and the middle of the ST width, respectively. During the injection, the only inlet for the system was the injection site. In our experiments, where the dye was injected at 300 nl/min through a micropipette with a 25 µm diameter tip, the flow velocity at the micropipette tip was calculated to be 0.01 m/s, which was chosen as the inlet velocity in our model. We first assumed that the perilymphatic space inside the inner ear is an isolated compartment with only one outlet (i.e., the cochlear aqueduct) and one inlet (i.e., the injection site) (Figure 9D). No material exchange through the walls was possible with this assumption. In our second model (Figure 9E), we defined a distributed leaky wall at the top of all compartments at which the fluid velocity during the injection was larger than zero (e.g., 10^-5 m/s). This leak is meant to represent both biological and experimental leak associated with the procedure.
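The inlet-velocity boundary condition quoted above can be reproduced directly from the stated injection parameters; the short calculation below is a check of that value, not part of the COMSOL model itself.

```python
import math

# v = Q / A: 300 nl/min through a 25 µm inner-diameter micropipette tip.
Q = 300e-12 / 60.0            # flow rate in m^3/s (1 nl = 1e-12 m^3)
d_tip = 25e-6                 # tip inner diameter (m)
area = math.pi * (d_tip / 2.0) ** 2   # tip cross-sectional area (m^2)

v_inlet = Q / area
print(f"inlet velocity = {v_inlet:.4f} m/s")  # ~0.01 m/s, as used in the model
```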
Neonatal Surgeries
Neonatal mice were anesthetized with hypothermia induced by placing the pups on a nitrile glove sitting on crushed ice for 3-5 min. Hypothermia was approved by the Administrative Panel on Laboratory Animal Care (APLAC) and Stanford veterinary staff for use in neonatal animals. The animals were transferred to an ice water cooled aluminum block to perform the surgery. A 3-5 mm incision was made behind the ear, in the postauricular region, with microscissors; muscle tissue was removed with forceps to expose the PSCC and bulla. Kimwipes (Kimtech, United States) were used for absorbing blood at the incision site. For the experiments where cochlear dye distribution was continuously recorded during and after the injection, a larger incision was made in the postauricular region (8-10 mm). The auricle, tympanic membrane, and bulla were removed while the stapes bone was left intact. This approach allowed for better viewing of the cochlea (shown in Figure 2). For PSCC injections, the micropipette was inserted into the PSCC using a micromanipulator. The bony labyrinth in neonatal mice was soft enough to allow micropipette tip penetration. The micropipette tip was visually assessed before and after each injection, and broken tips were not used for data collection. From this experiment it was not evident if the tip ruptured the membranous labyrinth, so it is possible that both endolymph and perilymph compartments were injected. As discussed by other groups, it is possible that the membranous labyrinth was ruptured during PSCC injections, potentially causing direct access to the endolymphatic space (Kawamoto et al., 2001; Isgrig and Chien, 2018; Yoshimura et al., 2018). At the end of the infusion and immediately after pulling out the injection pipette, a small droplet of 101 cyanoacrylate adhesive (Permabond, United States) was applied to the injection site, which was then covered with a muscle patch. For the RWM injections, the initial incision was made using microscissors at a position about 2 mm lower than the one in PSCC injections in order to expose the bulla. A small opening (1-2 mm) was made in the bulla to expose the RW, and the glass micropipette was inserted into the RWM. At the end of the infusion, the injection pipette was removed and the RW was covered with a plug of muscle, and a droplet of cyanoacrylate was applied on top to attach the muscle plug to the RW niche. For all neonatal surgeries, after sealing the injection site, the surgical incision was closed and sealed with surgical glue (Suturevet Vetclose, Henry Schein Animal Health, United States). The entire procedure was accomplished within 15-20 min. For the experiments in which the cochleae were examined at 1-h post injection, pups were kept under hypothermia for the duration of the surgery and then transferred to a heating pad (37 °C). Pups revived within 3-5 min. Animals were euthanized at the 1-h time point for further evaluation.
Adult Surgeries
Adult mice weighing between 8.5 and 10.5 g were anesthetized with an intraperitoneal injection of a mixture of ketamine (100 mg/kg) and xylazine (10 mg/kg). While the animals were anesthetized, the surgery was performed on a heating pad at 37 °C. The fur behind the left ear was removed using hair removal cream and the area was sterilized with 10% povidone iodine followed by 70% ethanol. A postauricular skin incision of ∼10 mm was made. To expose the cochlea for live imaging of the dye distribution during injection, a posterior transcanal incision was made with microscissors. Twelve and six o'clock medial cuts were then made in order to transect the remainder of the ear canal and to excise the bulla. The canal cartilage, dorsolateral surface of the auditory bulla, tympanic membrane, and ossicular chain were then removed, except for the stapes bone, allowing visualization of all cochlear turns (from the top view, as shown in Figure 2), the oval window (OW), and round window (RW). All hemostasis was achieved with electrocautery (Medline, United States). After this exposure, for the PSCC injection, the sternocleidomastoid muscle was cut proximally, while the muscles covering the temporal bone were separated and retracted dorsally to expose the bony wall of the PSCC. A chemical canalostomy was achieved by applying 36.2% phosphoric acid etching gel (Young, United States) with the same caliber tip of glass micropipette as used for injection; 20 s were allowed for bone resorption to take effect, enough time to leave the endosteum covered by a thin layer of bone (Alyono et al., 2015). A moist roll of cotton was used to remove the remaining gel. The etching treatment softened the bone enough to insert the perfusion micropipette. Using the micromanipulator, the tip of the glass micropipette was advanced (∼2-3 mm) into the canal, angled toward the ampulla. This assured no backflow of the injected material and a sturdy tip insertion. The injection of trypan blue was performed as above. After removing the micropipette, the hole was plugged with small pieces of muscle and covered with 101 cyanoacrylate. For the RWM injection, after the cochlea exposure was achieved, the injection was performed with a glass micropipette into the RW niche, passing through the RWM. All injections were performed and recorded under the surgical microscope described earlier, and the delivery site was closely monitored for leakage during all procedures. No dye leakage was observed in the neonatal and adult mice during PSCC injection. In some RWM injections (∼10% of neonates and ∼50% of adults) leakage was observed from the injection site. The higher leakage rate in adults may be due to increased intracochlear pressure compared to neonatal animals, resulting in more backflow. Those animals were not included in the study. After removing the micropipette, the RW was sealed with a plug of muscle. For the experiments where the bulla was removed and the cochlea was monitored during the injection and for up to 30 min post injection, the imaging region was occasionally covered by blood or other fluids. When these fluids could not be removed properly, the animals were excluded from the data analysis. At the end of each experiment, animals were euthanized under anesthesia. Mouse temporal bones and each side of the brain were harvested and placed in HBSS, after which the tissue was imaged.
RESULTS
The fluid-filled inner ear consists of discretely positioned sensory epithelia of the vestibular and auditory systems housed in a contiguous membranous compartment (Sterkers et al., 1982; Hudspeth, 1989). These end organs are located within the temporal bone and are protected by a bony capsule (Felten et al., 2016). The bony capsule is filled with perilymph, a solution very similar to cerebrospinal fluid, and surrounds a membranous labyrinth containing endolymph, a high potassium and low calcium solution created by supporting cells within each end organ (Sterkers et al., 1988; Wangemann and Schacht, 1996). The endolymphatic compartment is located within the perilymphatic space as shown in Figure 1 (Sterkers et al., 1988; Forge and Wright, 2002; Salt et al., 2012a; Felten et al., 2016). The perilymph and endolymph solutions are shared by the vestibular and cochlear end organs, yet the fluid composition is often different between end organs (Sterkers et al., 1988; Wangemann and Schacht, 1996). The mammalian cochlea has three chambers: ST and SV contain perilymph, and scala media (SM) contains endolymph (Figure 1). The SV contacts the OW membrane (OWM) and the ST contacts the RWM (Fernández, 1952; Salt et al., 2012a,b; Felten et al., 2016). The SV is contiguous with the ST, converging at the apex at the helicotrema (Fernández, 1952; Salt et al., 1991; Felten et al., 2016; Wright and Roland, 2018). At the basal end, the SV is contiguous with the vestibule while the ST ends at the RWM (Figure 1). The cochlear aqueduct is a small channel extending from the ST at the cochlear basal turn, adjacent to the RWM, that projects into the cranial cavity (Schuknecht and Seifi, 1963; Gopen et al., 1997). The function and patency of the cochlear aqueduct remain unclear, but it is suggested to regulate pressure (Carlborg, 1981; Carlborg et al., 1982). The endolymphatic sac extends from the vestibule between the cochlear base and the vestibular end organs and similarly may be a reservoir for excess endolymph production (Kimura and Schuknecht, 1965; Couloigner et al., 1999). Although the inner ear contains two independent fluid compartments, flow through these compartments is complex due to the multiple routes and different resistive properties offered by the shape of the various end organ components. The potential for ion transport between compartments further complicates our understanding of compound delivery (Salt et al., 1991; Salt et al., 2012b).

FIGURE 1 | Schematic of the mouse inner ear with coiled (A) and uncoiled (B) configurations of the cochlea. The fluidic compartments for endolymph and perilymph are depicted in gray and light purple, respectively. The cochlear compartment of scala tympani is indicated with the hatch markings and scala vestibuli with the dots. All other compartments are as labeled. The approximate location of the injection pipette is illustrated for PSCC and RWM.
In the present work, flow within the inner ear was visualized by injecting trypan blue at two independent sites indicated in Figure 1A. The impact of age on the dye distribution within the cochlea after a PSCC or RWM injection was studied by applying similar injection protocols to neonatal and adult mice and comparing the results.
Figure 2 presents in vivo images used for analysis of the dye distribution within the cochlea for each experimental group. The images show dye progression within the cochlea at different time points during the injection. From the imaging angle shown in these pictures, the distribution of the dye upon PSCC injection was visible in the cochleae of both neonatal and adult mice. The dye first appeared at the OW region (see Figure 2, PSCC neonate, time 30 s as an example), continued to the cochlear base, and then migrated toward the apex. Although we did not have the resolution to differentiate migration in the ST and SV, the later appearance of the dye at the RW region (after reaching the apex) suggests that the dye had first entered the SV and then continued into the ST compartment, as was reported in the literature (Salt et al., 2012a). The dye distribution pathway in the cochlea upon RWM injection was not as clearly visible at this imaging angle (see alternate ex vivo images in Figure 5). Despite this difficulty, we could track dye in each configuration but could not compare absolute intensity values between ages for this experiment. In contrast to PSCC injections, visual inspection of RWM injections ex vivo shows dye accumulation at the cochlear base, with less dye found in the middle turn and virtually no dye seen at the apex.
FIGURE 2 | (caption continued) In all experiments, the microinjection pump was turned on at 0 s, injecting 1 µl of the dye at 300 nl/min. The regions of interest where the intensity was measured at apex, middle, and base for each experiment are shown in yellow in the last picture of each row (180 s). Scale bar is 500 µm. The X-axis provides time snapshots and the Y-axis the specific group being injected.

Data from example animals presented in Figure 3 were selected as those with parameters closest to the mean values (presented in Figure 4). Figures 3A,D,G,J show that the intensity value decreased significantly for the first 3 min, during dye injection through the PSCC or RWM (Figure 3M). Lower intensity (I) values indicate darker regions, meaning the dye is reaching that area. The intensity changes relative to time 0 (ΔI) are plotted (as −ΔI, to more intuitively reflect an increase in dye concentration) for apex, middle, and base in Figures 3B,E,H,K. I_max was defined as the intensity change (ΔI) at 3 min after starting the injection.
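The ΔI bookkeeping can be written out explicitly; the intensity trace below is hypothetical and only illustrates the definitions used here.

```python
import numpy as np

# Hypothetical intensity trace sampled every 30 s (minutes on the x-axis);
# the ROI darkens (intensity falls) as dye arrives.
t_min = np.arange(0.0, 10.5, 0.5)
intensity = 120.0 - 28.0 * (1.0 - np.exp(-t_min / 1.5))

delta_i = intensity - intensity[0]            # ΔI relative to time 0 (negative)
neg_delta_i = -delta_i                        # plotted as −ΔI, so dye increase points up
i_max = delta_i[np.searchsorted(t_min, 3.0)]  # ΔI at 3 min, defined as I_max
print(f"I_max = {i_max:.1f} intensity units")
```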
We interpret intensity changes as equivalent to changes in dye levels. In PSCC injected mice (n = 5 for each group of adults and neonates), dye increased at each location for both ages during the entirety of the injection (Figures 3A,B,D,E). In RWM injected neonatal animals (n = 6, Figures 3G,H), there was an increase in dye in basal regions with a modest change for mid and very little change for apical regions. Adult RWM (n = 5) showed a similar pattern to neonatal, though there was a measurable change in the apical region (Figures 3J,K). These data suggest more dye entry for adult compared to neonatal animals for both injection sites and also that dye reaches the apex more readily for PSCC injections than for RWM injections.
A slight increase in the intensity value was observed at the end of the infusion (indicated by the arrow in Figure 3A), correlating with the time when the glass micropipette was removed from the injection site. This increase was not observed in RWM injected animals, likely because the limited amount of dye present at the apex and middle ROIs did not allow detection of the dye reduction. The lack of change in the basal region is not due to a lack of dye and perhaps suggests a difference due to injection site.
In PSCC injected animals, the dye level reduced slowly after stopping the infusion (Figures 3A,B,D,E,N). In RWM injected neonatal mice (n = 6), after stopping the infusion, apex dye levels were unchanged, there was an increase in dye levels for mid regions and a more robust change at the basal turns, likely indicating a continued progression of dye from the base toward the apex (Figures 3G,H,N). In RWM injected adults (n = 5), dye levels increased in all turns within 30 min post injection again suggesting continued dye progression after the perfusion (Figures 3J,K,N).
A summary of maximal changes monitored at the end of the injection (from the imaging angle shown in Figure 2) shows a base to apex gradient for PSCC injected neonates and RWM injected animals ( Figure 3M). PSCC data lose this gradient in adult, while RWM injections are not different between ages. The relative changes associated with PSCC injection were greater than RWM injections ( Figure 3M); however, the orientation of cochlea during in vivo imaging provides a better view for monitoring the dye progression in PSCC injection compared to the RWM so we performed additional experiments to obtain more equivalent views (see Figures 5, 6 for more direct comparisons at different orientations).
Dye distribution was evaluated 30 min after stopping the infusion relative to the time immediately after removing the pipette (ΔI_30min). The majority of PSCC mice showed a reduction in dye accumulation, as indicated by positive values at 30 min post injection (Figure 3N). RWM injected mice showed a continued increase in dye accumulation, as indicated by negative values meaning more dye present (Figure 3N). These data support the idea that PSCC had uniform distribution early, with later time points reflecting diffusion out of the cochlea. In contrast, RWM injections showed less dye within the cochlea; the existing dye distributed more uniformly over the following 30 min.
To evaluate the kinetic differences between modes of injection, independent of absolute concentrations, we normalized data to the time point immediately before termination of the injection (Figures 3C,F,I,L). The presented examples demonstrate time delays and rate differences in dye progression between modes of delivery and age. The dramatic differences in dye level are not included in these plots, which are rather simply an indication of the timing differences, summaries of which are presented in Figure 4. Figure 4B summarizes changes in onset time between groups. A delay in onset time was observed between regions for neonatal animals, being most delayed in the apex, regardless of delivery mode. Adult animals did not show intracochlear differences in onset time. PSCC injections were more delayed than RWM injections in all groups, as might be predicted from the longer distance between the injection site and cochlea in PSCC compared to RWM delivery. The age difference may simply represent the change in size of the inner ear reducing resistance to flow in adult mice.
The intensity change over time was not linear for any age or delivery mode (Figures 3C,F,I,L). To compare rates of change, we simply used the steepest slope, as described in Figure 4A, for each experimental group. Given that these measurements are taken during perfusion, a common slope is predicted that basically relates injection rate to cochlear properties. Reductions in this rate suggest that flow is bifurcating or that different resistances are encountered. That is, if the dye splits into flow in multiple directions, the rate will be reduced for either pathway. Results of this analysis are shown in Figure 4C. The steepest slope had larger values in the PSCC injected animals (both neonates and adults) compared to the RWM injected ones. The PSCC injected adult mice had steeper slopes compared to the neonatal ones, but no significant difference in the slope was observed between the neonatal and adult mice injected through the RWM. The slope difference between ages is likely a result of a reduced resistance to flow in the adult animal. The steeper slope with PSCC injections suggests higher levels of dye are entering the cochlea from this site of injection per time unit. All biological paths from the PSCC injection lead through the cochlea, while the RWM injection can bifurcate to the cochlear aqueduct prior to distributing through the entire cochlea. This simple difference likely accounts for the difference in dye distributions for the two injection sites.
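A sketch of the steepest-slope metric is given below; the −ΔI samples are hypothetical, and np.gradient stands in for however the derivative was actually computed in the analysis.

```python
import numpy as np

# Hypothetical −ΔI values sampled every 30 s during the 3 min injection.
t_min = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
neg_delta_i = np.array([0.0, 2.0, 9.0, 18.0, 24.0, 27.0, 28.5])

slopes = np.gradient(neg_delta_i, t_min)   # numerical rate of change
steepest = slopes.max()                    # the "steepest slope" metric
print(f"steepest slope = {steepest:.1f} intensity units/min")
```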
A problem with the in vivo imaging is that the orientation of the cochlea makes it more difficult to assess distribution with RWM injections. To further assess diffusion post injection and to better visualize dye distributions throughout the cochlea, a separate set of experiments was performed where the brains and both cochleae were obtained at 5- and 60-min post injection for each experimental group. This approach allows us to evaluate dye distribution ex vivo within the cochlea from angles that were not accessible in the in vivo images. A representative cochlea from each group of experiments (n = 6 and n = 5 in each group of neonatal and adult cochleae, respectively) is shown in Figure 5. No dye was detected in any contralateral cochleae. It is clear from the images that the PSCC injections achieved more uniformly distributed dye than did the RWM injections. To better assess the dye pathway with RWM injections and to investigate uniformity of distribution, images were obtained from three perspectives at 5 min post injection (Figures 5K-P). These data demonstrate that PSCC has high dye levels throughout the cochlea and even the SCCs. In contrast, RWM injections show dye in the basal areas with less distribution to apical regions. In addition, these data suggest that dye intensity differences may in part be due to the dye going elsewhere. This conclusion is supported also by the reduced steepest slopes described in Figure 4.

[Figure caption fragment: asterisks indicate significant differences between values at 60 min compared to 5 min at each cochlear region of PSCC injected neonates (*p = 0.03, **p = 0.006, ***p = 0.002); black asterisks in panels (E,F) indicate significant differences between neonatal and adult mice (*p = 0.006 at 5 min, **p = 0.003 at 60 min, ***p = 8.3E-4 at 5 min, ****p = 0.007 at 60 min); two-sample t-tests were used for calculating p-values between groups.]
The intensity values from regions of 100 µm diameter at the apex, middle, and base were measured for each cochlea, as indicated by yellow dots in Figures 5F,G,I,J. Immediate dissection and inspection of the cochlea following the end of PSCC injection (about 5 min post injection) showed all cochlear regions were significantly darker than non-injected ones at both ages (Figures 6A,C). In contrast, RWM injected animals showed a steep gradient where only the base in adults, and the base and middle in neonates, were significantly darker than the non-injected cochleae at the 5-min time point (Figures 6A,C).
As shown in Figures 6A,C, the dye was highest at the basal turn of PSCC or RWM injected cochlea, ∼5 min after injection, and values did not differ from each other at either age (p = 0.3). In contrast, apical measurements in neonates show dye levels higher for PSCC injected cochleae than RWM injected cochleae (p = 1.2E−6) while neonatal RWM injected cochleae were not different from control (p = 0.7). In adults, apical values in PSCC remained different from RWM injected values (p = 0.02) but apical values in RWM are again not different from control (p = 0.98).
One hour after injection, dye levels in the PSCC injected neonatal cochleae were significantly reduced in all regions compared to 5 min after injection (Figures 6B,E, see asterisks). Adult animals showed no difference in the dye levels for PSCC injections in any region compared to the 5 min time point (Figures 6C-E). No significant difference in cochlear dye levels was observed in RWM injected neonatal (Figures 6A,B,F) or adult (Figures 6C,D,F) mice at 1 h compared to 5 min after injection, suggesting dilution was not happening with this mode of injection at either age.
The percentage of intensity change at each ROI with respect to the control values is summarized at early (Figure 6E) and late (Figure 6F) time points for PSCC and RWM injected animals, respectively. For simplicity, the control data from neonatal and adult mice are combined. The dye distribution in neonatal animals presented a gradient from base to apex after PSCC injection at the 5-min time point (Figure 6E) but not at the 60-min time point. This gradient did not exist in the PSCC injected adults at either time point. A steep gradient in dye distribution at both 5 and 60 min was observed after RWM injection in both neonatal and adult mice (Figure 6F).
One possibility is that part of the dye injected through the RWM travels through the cochlea aqueduct which is located very close to the injection site. In contrast, the PSCC injection site is at the opposite end of the cochlear perilymphatic space, so that during the injection perilymph will be pushed through the cochlea aqueduct while dye enters the cochlea. We inspected the brains of injected animals as a proxy for dye traveling through the cochlea aqueduct. Brain images from each group of injected animals are shown in Figure 7. No dye was detected in the brain of the neonatal mice injected through PSCC (at 5 or 60 min, n = 6 per group). However, trypan blue was detected in 5/6 neonatal mice brains injected through RWM at both 5 and 60 min supporting the idea of dye travel through the cochlea aqueduct. In PSCC injected adult mice, no dye was detected in the brain 5 min after injection, but small traces of trypan blue were observed in four out of seven adult mice 1 h after injection, suggesting travel through the cochlea aqueduct post injection. The dye was also detected in all adult mice brains injected through RWM at both time points (n = 5 for each). Thus, these data are consistent with the hypothesis that RWM injections lose dye through the cochlea aqueduct more readily than PSCC injections.
Different Substances Have Different Distributions After PSCC Injection
It has previously been suggested that the chemical composition of the injected compound can affect distribution within the ear (Nomura, 1961; Salt and Plontke, 2018). In order to investigate potential variations in compound progression in the cochlea following the same mode of delivery and using the same injection parameters, two other compounds were tested. We injected GTTR and methylene blue via the PSCC in neonatal animals as described above. The results are shown in Figure 8. Methylene blue presented very similarly to trypan blue. One hour after injecting methylene blue into the PSCC of P1 mice, no dye was visible through the skin, and after dissection, no trace of dye was detected in the brain or contralateral cochlea (Figure 8F). However, GTTR distribution was starkly different. One hour after injecting GTTR into the left PSCC of mice at P1, it was detectable through the skin of the animals (Figure 8A). Three hours after the injection, the drug was still visible in different parts of the body through the skin (Figure 8B). Fluorescent pictures of a mouse's paws and tail, 1 h after PSCC injection of GTTR, are shown in Figure 8C, in comparison with a non-injected pup. GTTR was also injected into P5 mice through the PSCC. One hour after the injection, the drug was visible through the whole cochlea and in most of the inner and outer hair cells (Figure 8D). Figure 8E shows that GTTR was also detected in the brain of the P5 injected pups 1 day after PSCC injection. For this level of distribution, the GTTR must be able to access the blood supply by crossing membranes within the inner ear. These experiments highlighted the fact that, in addition to the injection parameters (e.g., volume, flow rate), the mode of delivery (e.g., PSCC, RWM), and the resistance to flow within the cochlea, all of which can influence distribution patterns within the inner ear, different molecules can propagate differently within the cochlea and the whole body under the same conditions.
DISCUSSION
Over the past decade, RWM and PSCC injections have become more commonly used for delivering compounds into the inner ear, including viral vectors for genetic manipulations. Our data set compares PSCC and RWM delivery in neonatal and adult mice using comparable technologies. Identifying and characterizing parameters that modulate drug distribution in the cochlea could help us better understand fluid regulation within the inner ear and aid in developing more effective delivery methods for research and therapeutic purposes. Our data provide information on the route of dye flow upon RWM and PSCC injections (Figure 9), the resistance to dye flow at different ages, and the spread of the dye inside the injected cochlea, to the contralateral cochlea, and to the brain within the first hour after injection.
Our data suggest that PSCC is a better route for drug delivery to the cochlea compared to RWM delivery for several reasons. First, at least in mice, the surgical approach is simpler and the likelihood of damage to middle ear is reduced compared to RWM injection. Second, higher levels of dye reached the cochlea. Third, the dye was more uniformly distributed throughout the cochlea with PSCC injections. And fourth, the dye was more restricted to the inner ear where the injection took place and under these conditions did not reach the brain within the first hour after injection. Our hypothesis for the differences is that the cochlea aqueduct position relative to the injection site is regulating how much dye reaches the cochlea. Figures 9A,B summarize the hypothesis generated from the data collected in this study. In PSCC injections (Figure 9A), the dye first traveled through the SCC and then entered the basal turn of the cochlea via the SV consistent with previous reports (Salt et al., 2012a). Therefore, the compounds first flow through the SV toward the helicotrema, and then continue through the ST toward the cochlea aqueduct/RWM region. In this direction, the cochlear aqueduct can act as a release valve allowing perilymph to flow and be replaced by the injected volume. Our data support a previous report from guinea pig data suggesting PSCC injection provides a more uniform cochlea distribution of drugs (Salt et al., 2012a). In RWM injection (Figure 9B), the dye directly entered the cochlea through ST. At the end of infusion, a sharp gradient in the dye distribution from base to apex was observed at both ages. This is also in agreement with the results reported by Plontke et al. (2016) who showed a basal-apical gradient in the concentration of fluorescein after RWM injection to guinea pigs, and other groups who tested gene expression in mice cochlea by injecting viral vectors through the RWM, observing lower gene expression in the apex compared to the base (Askew et al., 2015;Chien et al., 2015;Yoshimura et al., 2018). We also measured a lower level of dye intensity changes 1-h after injection within the cochleae of RWM injected neonatal animals compared to the PSCC injected ones.
RWM injection has flow direction from the vestibule end of the ST, where the cochlea aqueduct may act to shunt flow (dye) away from the cochlea, toward the brain, thus reducing dye entry into the cochlea. The presence of dye in the brain of animals injected through RWM supports this hypothesis. Transduction in the contralateral ear and brain after RWM injection of viral vectors has been reported in the literature (Landegger et al., 2017), while no gene expression in the contralateral ear was reported after PSCC injection of viral vectors into the cochlea (Kawamoto et al., 2001) also consistent with the above hypothesis.
Monitoring of dye progression within the cochlea provides a direct investigation of fluid dynamics, but it is not without limitations. One limitation is that the bony capsule in adult mice is thicker than the cartilaginous otic capsule in neonates, making absolute comparisons of intensity between ages untenable. Evaluating dye distributions ex vivo did provide the needed resolution to probe adult cochlear dye distributions more directly. The dye intensity was large enough to be detected through the bone, so relative changes and timing in dye distribution were compared within each age group. A second major limitation is that the imaging plane provides different volumes for apex, middle, and base, so that data can be misleading regarding absolute differences between regions. This problem is compounded by the injection sites being on opposite ends of the perilymphatic space, so that imaging orientation can bias dye tracking depending on injection site. The ex vivo experiments allowed viewing of the cochlea from multiple orientations and so allowed us to interpret dye changes more convincingly. A third limitation in these comparisons is the difference in anesthesia. Hypothermia as an anesthetic in neonatal mice could affect dye distribution indirectly for neonatal animals. It is possible that the difference in apparent resistance is in part due to the temperature difference, in addition to the size difference between adult and neonatal animals. However, the expected change in viscosity is about threefold, which, when investigated in the model described below, had little effect on the results. Our measurements occur either during perfusion or for at most 1 h post perfusion, which does not allow enough time for diffusion to be effective. It is possible that changes in volume or wall mechanics alter the resistance to flow in a temperature dependent way that we cannot account for, so some caution should be taken in comparing across modes of anesthesia. In general, though, the major findings are based on within-group comparisons. The sampling rate, owing to the small changes in intensity, was relatively slow, so it is possible that subtle differences in dye kinetics were missed. Overall, the fundamental conclusions presented are consistent with previous studies and well supported by the data presented here.
Using a FEM simulation, we generated a simplified model to demonstrate the feasibility of our hypothesis that the nearness of the injection site to the cochlear aqueduct dictates the pattern of dye distribution within the cochlea. Figure 9C presents the model geometries, which represent an unrolled perilymphatic space including the ST, SV, and SCCs. The color gradient in Figure 9C is also represented in Figure 9B as a tool to understand how the cochlea was unrolled. Also included in the model structure are the cochlear aqueduct and the two injection sites. The simulation was run on two models with the same geometry but different boundary conditions. The first model assumed that the perilymphatic space is an isolated compartment with no permeability or leak. The only flow inlet was the injection site, and the only outlet was the cochlear aqueduct (Figure 9D). The second model assumed that in addition to the cochlear aqueduct acting as an outlet, a distributed leakage occurred within the perilymphatic space during the injection (Figure 9E). The figure presents velocity as an indicator of flow, which is expected to be a correlate of the dye distributions measured during the injection. The direction of the flow was from the injection site to the cochlear aqueduct; no intrinsic cochlear flow was included. Unrolling the cochlea shows the clear difference in location of the injection sites, with the RWM injection reaching the cochlear aqueduct before the cochlea and the PSCC injection reaching the cochlea before the cochlear aqueduct. This fundamental difference is postulated to explain the difference in dye distribution. In the PSCC injection (Figure 9D, top), a relatively uniform high velocity is distributed along the SV and ST during the injection. In contrast, in the RWM injection (Figure 9D, bottom), because of the high resistance within the cochlea in this closed system, the velocity distribution is limited to the area between the RW and the cochlear aqueduct. This example is meant to show the importance of the outlet position relative to the cochlea and injection site. In the second model, where a perilymphatic leak was included, the velocity pattern was more like the dye patterns observed experimentally. The basal to apex gradient with RWM injection, as well as the lower velocity values in the cochlea due to the bifurcation with the cochlear aqueduct, were observed. The PSCC injection resulted in higher average cochlear velocities and more uniformity throughout the cochlea than did RWM injections. This model is used simply to illustrate the validity of our hypothesis, and a great deal more detailed information is needed to make it more physiologically realistic. Our hypothesis supports a significant role for the cochlear aqueduct in dictating flow through the cochlea. As the patency of the cochlear aqueduct is potentially variable, particularly between species, it will be important to assess the cochlear aqueduct in humans if therapeutic approaches are being considered. Gopen et al. (1997) examined temporal bones in humans and showed that 93% of the cochlear aqueducts in 101 samples were not completely obstructed (i.e., in 34% the central lumen was patent throughout the length of the aqueduct and in 59% the lumen was filled with loose connective tissue). The similarity of data between guinea pig and mouse supports a similar role in fluid regulation for the cochlear aqueduct.
The functional role of the cochlear aqueduct is unclear but is postulated to serve as a pressure regulator for the inner ear (Carlborg et al., 1982) and its role in modulating drug delivery must be considered.
Despite providing a uniform distribution of the compounds with PSCC injections, gene expression within the cochlea induced by delivering viral vectors through the PSCC has resulted in higher rates of transduction in the apex compared to the base (Isgrig et al., 2019). This discrepancy between uniformity of the viral vector's distribution and level of gene expression along the cochlea prompted further investigations to search for factors other than the viral vector distribution that can be responsible for gene expression non-uniformity. We found no difference in elimination times along the cochlea for dyes when compared between injection sites within 1-h after injection. Since we do not know the time frame of elimination, it remains plausible that the transduction gradient is a result of exposure time to the virus. It is also possible that like GTTR, the viral vectors' distribution is influenced by permeability properties and not driven by accessibility via the injected flow. Another possibility though is that there are tonotopic differences in how the virus interacts with sensory and supporting cells. Our data simply show that dye distribution patterns do not directly mimic expression patterns from viral transduction.
Our experiments suggest a higher resistance to fluid flow within the cochlea of younger animals. A longer onset time to steepest rate of change in dye concentration and smaller rate of intensity change in neonatal animals compared to the adults during injection are observations supporting this conclusion. The higher resistance is likely due to the fluid pathways with smaller dimensions in younger animals. It might also be a function of cochlea aqueduct patency with reduced patency modulating total resistance to flow. Thus, animal age can also influence the rate of delivery.
In RWM injections, where the injection site is near the opening of the cochlea aqueduct into the ST, more dye was observed in the brain of animals compared to PSCC injections. Our results are in agreement with the observations of Akil et al. (2019) who injected a biological dye into the RWM of neonatal mice, and within a few minutes detected the dye in the brain and spinal cord of the mice (Akil et al., 2019). Kaupp and Giebel (1980) also reported the appearance of lyophilized rhodamine in the subarachnoidal space after direct application of the substance into the tympanic perilymph near the RW (Kaupp and Giebel, 1980). Gene expression was also observed in the brain after RWM delivery of viral vectors into the cochlea (Landegger et al., 2017). These data support the idea that the cochlear aqueduct can provide a pressure sensitive pathway to the brain. Compounds shunting to the brain, which happens more in older animals and with RWM injections, reduces the dose of injected compounds received by the cochlea. This shunting can also explain the difference in steepest slopes obtained between injection sites. Together these data support the idea that the cochlear aqueduct serves as a pressure release valve and must be taken into account when trying to deliver compounds to the cochlea. Recent data (Yoshimura et al., 2018) demonstrated uniform delivery to the cochlea with RWM injection if a hole was first created in the PSCC. This hole likely provides an additional outlet for the flow that directs more fluid toward the apex, OW, and SCCs successively. Thus, this is very much in accord with a release valve role for the cochlea aqueduct.
Contrary to the trypan blue and methylene blue experiments, injection of GTTR into the PSCC of neonatal mice resulted in distribution of the compound throughout the body within 1 h. This supports the idea of Nomura that cochlear capillaries have different permeabilities based on chemical structure (Nomura, 1961). It also suggests that the chemical structure of compounds may dictate their permeation across the inner ear membranes in order to access the blood supply. Careful consideration of the chemical properties of a given compound will be needed when local inner ear drug delivery is the goal.
CONCLUSION
Data presented here are in good agreement with work from others suggesting that the mode of drug delivery (i.e., RWM or PSCC injection) will alter the distribution of compounds within the cochlea. PSCC injection provides a more uniform distribution throughout the cochlea regardless of age. PSCC injections also result in less dye accessing brain. Flow resistance decreases with age but remains an impediment to distribution.
Data are consistent with the cochlear aqueduct serving as a pressure release valve that can help to uniformly distribute drugs when injected through the PSCC but can inhibit drug delivery to the cochlea by diverting flow (drug) when injected via the RWM. And finally, the chemical nature of the delivered compound will affect the drug spread based on its ability to cross membranes and potentially enter the blood stream.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by the Stanford APLAC.
ACKNOWLEDGMENTS
We thank C. Gralapp for assistance in preparation of the coiled and uncoiled inner ear diagrams. Our thanks for core support from the Stanford Initiative to Cure Hearing Loss through generous gifts from the Bill and Susan Oberndorf Foundation.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncel.2019.00471/full#supplementary-material

VIDEO S1 | Video example of trypan blue progression in neonatal mouse inner ear during PSCC injection.

VIDEO S2 | Video example of trypan blue progression in neonatal mouse inner ear during RWM injection.

VIDEO S3 | Video example of trypan blue progression in adult mouse inner ear during PSCC injection.

VIDEO S4 | Video example of trypan blue progression in adult mouse inner ear during RWM injection.
"Medicine",
"Engineering"
] |
A Postcapitalistic People? Examining the Millennial Generation’s Economic Philosophies and Practices
This article investigates the economic orientations of the members of the Millennial generation, so as to assess possible shifts towards their adoption of degrowth philosophy and practice. The text provides a general literature review oriented towards indicating the link between the Millennial generation's economic standpoints and possible directions of evolution of the economic system in the Western world. An orientation towards the market and its economic system has become one of the distinctive features embedded in the portrait of the Millennials, who not only create the dominant social force of the Western world but also represent the first generation in which the majority question well-established market philosophies. The article considers the potential contribution of the Millennial generation to the further development of alternatives to traditional notions of growth. Until now, the evolution of the economic framework has been pushed forward mainly by policymakers and government representatives. System designers have shaped the desired outcomes via international agreements, internal policies, and the empowerment of different economic actors, driven by a belief in the long-term benefits of the capitalism–democracy nexus. However, this moment in history, in which such principles are being seriously questioned, creates a space for bottom-up processes and the reconfiguration of economic realities with a potentially transformative effect on the whole framework.
Introduction
The environmental sustainability and social implications of economic growth have been subject to much debate for many decades. Economic development principles are under constant negotiation as expectations with regard to the material conditions embedded in the concept of the "good life" grow, while the requirements of the sustainability model propose limitations to market expansion. In this article, the concept of "degrowth" is adopted as an interpretative frame for the description of a variety of social transformations, introduced by representatives of the Millennial generation, whose behaviors are treated as indicators of future workplace, consumer, and economic trends.
Due to multiple factors of disruption within the current economic model, amplified by the global recession of 2007 and by the COVID-19 pandemic, the legitimacy of this model is being questioned [1][2][3]. The COVID-19 global pandemic has further exposed the pitfalls of a hyperconsumption model of economic growth, revealing the scale and extent of its negative consequences. All these factors added to the reality of the environmental destruction linked to consumerism and industrialization, which has further undermined the traditional model of growth [4,5]. The interest in a "green" or "natural" capitalism is on the rise [6]. The variants of a new model of growth vary from different versions of sustainable growth [7] through the steady-state economy [8], ecological citizenship [9], and the green economy [10] to degrowth [11,12], but all of them are united by the idea of linking social with economic goals and meeting the needs of both the present and future generations of humankind. As an examination of the intellectual foundations of all of the above-listed trends, as well as important differences between them, goes beyond the scope of this article, the variety of novel approaches to the Western economic system will be referred to here with the use of the common label of postcapitalism. The article also does not attempt to compare the various economic systems, but rather to conceptualize current shifts in the present moment of the evolution of the capitalist system and use the economic orientations of the Millennials as an interpretative frame.
The text provides an integrated literature review oriented towards indicating the link between the Millennial generation's economic standpoints and possible directions of evolution of the economic system in the Western world. Within this framework, the focus has shifted from economic growth to human growth, which stands in line with the highly individualistic philosophies of the dominant generation and its attachment to personal growth and self-development. The phenomenon of "Millennials rising", marking the emergence of the new generation into adulthood, has been linked to the wave of global socioeconomic transformations [13,14]. The orientations of the Millennials are perceived as one of the causes of ongoing shifts, as well as a demonstration of the generational potential to redefine the existing geographies of power and established forms of social organization [13,15,16]. Millennials have been widely recognized as becoming the world's most important generational cohort for growth in consumer spending, the sourcing of employees, and overall economic prospects. They have been referred to as Echo Boomers and Generation Y, but most call them Millennials because they came of age in the 21st century. It is estimated that by 2025 they will make up around 75% of the workforce in the United States [17]. Millennials have been shaped by the forces of globalization and marketization. While traditionally the Western growth system is associated with progress, the greater accumulation of wealth with success, and economic clout with political power [14], Millennials see progress in more technological and human-oriented frames and appear to be rather postmaterialistic [18]. They reformulate social and economic ideas to resonate with the values and attributes of the good life and protection of the intangible heritage: the natural and cultural ecologies overlooked by capitalistic economic discourse. Early analyses suggested that these trends are not simply a by-product of the Great Recession of 2008, but may indicate a major shift in Millennials' views on the economic sphere. Millennials' preferences and orientations toward degrowth indicate the direction and potential of scenarios alternative to the notions of the capitalist system. Their market orientations form an interesting lens through which their perspective on the implementation of degrowth principles may be observed.
For the purposes of this research, the Pew Research Center's definition [19] of the Millennials as those born between 1980 and 2000 is used. The focus is on Western social conditions, generational changes, and economy, which often means tracking the realities of these issues in the United States. In the United States, Millennials, representing more than one-quarter of the nation's population, even outnumber the giant Baby Boomer generation. They accounted for 24% of the adult population in the 28-member European Union in 2013. The largest absolute number of Millennials in a country surveyed was in Germany, with 14.68 million. The smallest number was in Greece, with 2.02 million [20].
American Millennials and socioeconomic conditions have been used here as a primary point of reference for two reasons. Firstly, the Millennial generation, because of its size as a proportion of the population, has been thoroughly researched in the United States; secondly, American growth principles and competitiveness have been the model for open-market developments all around the world. The American economy stands at the center of Western-style consumer capitalism and produces the most influential market-related ideologies [21].
Methods
The article presents an integrated literature review directed at addressing the following research questions:

- What are the economic orientations of the members of the Millennial generation, and how are new directions of the evolution of the capitalist system (degrowth, green capitalism, natural capitalism) reflected in the cohort's value systems and market practices?
- What is the economic position of Millennials in the Western economies?
- How are the current stage of the evolution of the economic system in the Western countries and perceptions of it embedded in the Millennials' generational profile interrelated?
Divergent strands of scholarship from several disciplines, including sociology, psychology, economics, and political science, have been used in this article to weave a set of disparate ideas into an argument about interrelationships between the dominant Western generation and the economic system.
In the first step of the review, sources dealing with the sociological picture of the Millennial generation have been compared and integrated. Data about the specific social circumstances that characterized the formative experiences of the members of this generation were combined with findings about the evolutionary pathways of the economic system in the Western world. The general economic literature and reports exploring market trends have been used to assess the limitations that members of this generation may face while pursuing their lives and professional goals. These findings were compared with the data about Millennials' subjective perceptions of the economic system and declared behaviors related to career choices, workplace, and consumer culture. This integrated approach has been effective in tracing the economic orientations of the Millennial generation in the context of the likely future shifts of capitalism, given the increased interest in sustainability, degrowth, and green philosophies.
The Generational Perspective in Tracing Change in the Economic System
The concept of generation forms the theoretical frame for the attempt to assess Millennials' orientations toward degrowth, and it is considered here both as a product of subjective, collective memory and as empirical, identifiable history. Social-scientific analyses of the Millennial cohort are based on the premise that generational experience may be responsible for shaping shared social/cultural conventions and worldviews. Members of a generation face similar problems and gain similar experiences within a certain period of time, which also results in the formulation of similar beliefs. A cohesive generational profile based on the common features they represent does not eliminate individual differences and divisions within the group, but may, however, indicate common traces of meaning-constructing and reality understanding. On this basis, the Millennial "logic of appropriateness" [22] with regard to prosperity, sustainability, and growth can be traced and compared to that of their parents' and grandparents' generations. With their coming of age, longstanding trends in the labor market and housing and consumer behaviors are being reversed. So far, this generationally driven shift has had the most impact in education and consumer markets, which are particularly susceptible to the influence of younger participants. However, new market orientations have emerged as a consequence of the specific social circumstances that characterized the formative experiences of the members of this generation. At least three meaningful relations can be traced in this context:
The Connection between Formative Generational Events and the Diffusion of Unified Social Norms
Growing up in times of global recession has been one of the formative experiences of this generation, contributing to a unique worldview. Socioeconomic instability has been connected to the exposure to events undermining the moral order of society, such as unethical leadership causing the destruction of a number of important corporations (Enron, Arthur Andersen, Tyco), terrorist attacks (9/11, Beslan, Madrid, London, Berlin, the Paris attacks, the Charlie Hebdo attack), and a series of mass shootings in public spaces (the Columbine massacre and other school shootings, the Anders Breivik massacre in Norway).
The social realities of their formative years were characterized by continuous change, which resulted in not only a YOLO (you only live once!) philosophy embraced by many members of the Millennial generation but also excellent adaptive capacities [23,24]. Uncertainty and liquidity have become the general traits indicating the nature of professional development. Many Millennials are opting to work in positions that are not necessarily the best paid or career-oriented but which bring satisfaction and support work-life balance [25]. They place value on the insertion of their personal ideals, values, and identity in organizations, as well as seeking authenticity and meaningfulness when establishing relationships [26]. Flexibility and the ability to adapt are especially valued, as organizations increasingly cope with an environment that is uncertain, complex, and often ambiguous. The concept of a career based on predictability and security is in retreat in the modern workplace, but Millennials are increasingly looking for relevance: in a volatile world they want to find a way to make a difference in their lives [27]. Global awareness influences their worldviews and makes them look beyond individual transactions of mutual exchange to intentional "we" thinking and ethical purpose. Millennials have high expectations that companies will address important social and environmental issues. A survey of U.S. Millennials found that 88% of respondents wanted to work for an organization with social responsibility values that matched theirs, and 92% of respondents said they would leave an employer due to ethical orientation differences [28]. Millennials are the only generation to grow up in a globally interdependent world and to be environmentally conscious from birth [29]. They recognize the limits of natural resources related to land, nutrients, biodiversity, water availability, and energy. Preferences for environment-friendly solutions create a frame for general, global well-being, providing also the basis for the more ecology-oriented styles of market relations they represent. Empirical research confirms that Millennials are ready to pay more for environmentally friendly services, products, or brands [30]. Their orientations towards work culture and the workplace are also shaped by environmental sensitivity [31].
The Connection between the Intrinsic Conditions and Values of Millennials and the Construction of Social Identities
There are two generational profiles in which the characteristic features of Millennials have been summed up. The first, popularized under the "Generation Me" label, presents them as impulsive and self-oriented. Their generational features create an increasingly materialistic culture that values social position, image, and fame [32]. Such a characteristic has been popularized on the basis of analyses of narcissism and empathy levels among college students over recent decades [33]. The Millennial generation has been bred in the midst of the self-esteem movement, which resulted in confidence and entitlement behaviors being demonstrated by its members [33,34]. It is worth noting that the Generation Me narrative, which is very present in the popular consciousness, has been constructed on the basis of value-laden descriptions that can be easily transmitted into language focusing on human (individual) needs and the confidence needed to propose new forms of business and social relations. At the organizational level, this profile is reflected in the expectation of the Millennial generation's members to work under a new management culture, enabling them to seek individual purpose within companies' organizational contexts, contribute to innovation in the workplace, and reconcile work and leisure in novel ways [35,36]. Several studies demonstrate that they place importance on the individualistic aspects of a job and work-life balance. They are seeking rapid advancement and the development of new skills, while also ensuring a meaningful and satisfying life outside of the workplace. This individual approach to work is critical to them and, unlike previous generations, Millennials are unwilling to sacrifice personal pursuits for any type of professional success [33,37].
The portrait of "the most narcissistic generation in history" [33] stands in contrast with the characteristics of the Millennials popularized under the "Generation We" label, in which they are presented as ready to put the greater good ahead of individual rewards [38][39][40]. Research demonstrates that representatives of this generation are more community-oriented, caring, activist, and civically involved than previous generations.
The intention of bringing change to the world seems to be a strong factor in the generational profile. They believe that human ingenuity and creativity are the force providing solutions to global problems, and they also expect companies to care about social issues and are ready to build their relations with commercial partners and employees dedicated to the idea of the greater good.
The Connection between Immersion in Technology and Collective Patterns of Action
According to Gary Gumpert and Robert Cathcart's concept, the worldviews and relationships of every generation are influenced by the media ecology of their youthful years [41,42]. In the case of Millennials, the cyber-media technological complex stands as a distinctive generational feature, as Millennials are immersed in an info-reality that shapes their opinions and directs their actions. They navigate all aspects of their lives through digital means of communication and use them as channels for change in terms of individuality, self-identity, and self-expression [43]. The result of extended technology and media use for the economic orientations of Millennials is important in at least two contexts: they are instantly, globally connected and, being adapted to the patterns of technology, are open to transformations of their environment [42]. Furthermore, common and constant access to diverse news coverage and commentary through smartphones, websites, mobile apps, and SaaS apps is seen as an asset in an open society, as proven in several studies [44][45][46][47]. However, the impact of the technological proficiency of this generation on the ability to lead and transform socioeconomic realities remains uncertain.
Why Does the Capitalistic Model Not Work for Western Millennials?
In their attempts to reach life and professional goals, Millennials have to face a number of market trends, which substantially transform their point of departure: the rise of outsourcing, the increasingly complex education-work transition, and the rise of knowledge-based jobs. The rules of the economic game have already changed, and in the future, market actors will have to adapt their practices to the realities shaped by cognitive computing, the Internet of Things, AI, etc. The nature of work has been altered; the Chinese economic model threatens the central position of the Western world; and democratic principles are increasingly challenged by populism, nationalism, and narrowly defined interests.
A stagnant economy and volatile social environment have shaped the way Millennials perceive the world and the manner in which their generational consciousness is formed. Recognized patterns of growth or national economic prosperity have lost their potential to help solve economic, environmental, and social challenges on the individual, national, or global levels. Structural shifts in the Western economies, combined with the Great Recession, undercut the generation's ability to build long-term wealth. "Between 65 and 70 per cent of households in 25 advanced economies, the equivalent of 540 million to 580 million people, were in segments of the income distribution whose real market incomes-their wages and income from capital-were flat or had fallen in 2014 compared with 2005", says the McKinsey Global Institute [48]. Americans are increasingly struggling in their attempts to catch up to the standard of living available to their parents' generation. Only 50% of people born in 1984 will be able to reach a level of material wealth similar to their parents, while 91.5% of those born in 1940 could do so [48]. Millennials are lagging behind the American Dream; 75% of them indicated that financial issues are a significant source of stress [49]. American Millennials are less likely to earn more than their parents than any previous generation in American history [50,51]. The major shifts are seen in the areas of housing, education, and work. The burst of the estimated USD 10.6 trillion housing bubble in 2007, a contributing factor to the Great Recession, created serious repercussions for one-third of American households [52]. The long-term disruptions associated with the economic crisis have definitely changed the economic prosperity scenarios of the Millennial generation. They turned out to be the most vulnerable, because of the number of macroeconomic factors defining their relative position: the highest rate of unemployment, a tighter credit market, and higher student debt burdens [53,54]. Home-ownership rates among 25-34-year-olds fell by a quarter in France; by nearly half in Denmark, Germany, Spain, the United Kingdom, and the United States; and by almost two-thirds in Italy. In the UK, the Nationwide Building Society has estimated that the cost of a first home rose from 2.7 years of salary in 1983 to 5.2 years of salary in 2015 [55].
Increasing inequality and the material fragility of daily life have played a role in forging the generational connections and identities of Millennials. The main shifts can be observed within the areas of education, work, and family. Millennials were raised in an educational ethos in which a college degree was seen as a tool for managing their lives, because of the strong social recognition of the historical link between educational levels and earnings [56]. However, in the period of their coming of age, education has become a luxury service: prices are rising more rapidly than the prices of other goods and services, and combined government and private student debt levels in the United States quadrupled (in nominal terms) from USD 250 billion in 2003 to USD 1.1 trillion in 2013 [57].
On the level of individual experiences and market trends, the rules of the game also changed for this generation. Although their social and economic environment has been built according to the traditional paradigm of growth, their worldview has rather been shaped by a permanent sense of instability. Sociocultural codes of hard work, personal development, and an educational ethos, the strategies that have long been perceived as providing certain kinds of results, have been broken. As the economic paradigm changes evolutionarily, there is no immediate interpretative frame available that could indicate new strategies for new times. As a consequence, Millennials reject well-established tactics and seek to create the realities in which new economic conceptualizations can be made. Uncertain economic contexts compel them to seek new forms of satisfaction and paths to fulfillment in life, different from the traditional symbols of material status. The ideal of a meaningful life is now being built around actively searching, sharing, and capturing memories earned through experiences. Of the U.S. Millennials, 78% would choose to spend money on a desirable experience or event over buying something desirable [58,59]. More than half of Millennials in the United Kingdom (55%) and United States (56%) said that they were spending more on travel than they did a year ago [60]. The growing (prepandemic) "experience economy" trend is associated with their social media presence, which propels them to show up, share, and engage in the cultural phenomena arising from it, like FOMO (fear of missing out).
Millennials, being an influential group of consumers, have also been transforming the ways in which brands are created and marketed. Brands are expected to be actively involved in a dialogue with the user and reflect their values, style, and general life philosophy: 50% of U.S. Millennials ages 18 to 24 and 38% of those ages 25 to 34 agreed that brands "say something about who I am, my values, and where I fit in", and 48% of young Millennials reported that they "try to use brands of companies that are active in supporting social causes" [61]. Their approach to market institutions is holistic; they do not separate them from other (political or nonprofit) organizations, recognizing their role and responsibilities in shaping social realities. Millennials accelerate the drive towards greater diversity and equality in social life, as they expect their values to be reflected in the political and economic spheres [36]. Their inferior economic position is seen as caused by the irresponsible policies of the previous generation, so this is one of the factors influencing the long-term increase in the conscience of the marketplace, which has consequences for future generations. Millennials' life philosophies and beliefs define the good life in the categories of diversity, fulfillment through experience, and balance between the human population and the natural conditions of the planet [33]. The wide range of these transformations can be indicated as a natural source of generational reorientation: the status quo created by the neoliberal economic vision does not work for Millennials and does not provide them with tools with which to attain their life goals. The identity of the young generations in Western countries is built on the self-perception of sacrifice, marginalization, and victimhood. They have become the first generation so widely and disproportionally hit by unemployment, underemployment, poverty, and exclusion, which confirmed the dysfunctionality of the economic system they inherited. The trend has been confirmed in the COVID-19 economic crisis; as a recent International Labour Organization (ILO) report makes clear, young people are the major victims of the social and economic consequences of the pandemic [62]. Such a narrative can be framed by Peter Berger's concept of "pyramids of sacrifice" [63]. With reference to the Aztec cult of the Great Pyramids of Cholula (Mexico) and its legitimization of the sacrifice of the lives of thousands of Indians, the author portrayed the human costs embedded in the practices of modernization and development, debunking the myth of growth and its narratives. This concept offers a frame for understanding the generational transition in the social cycle of wealth and privilege. In the traditional schemes of modernization, investments (sacrifices) made by preceding generations or developing societies often served overwhelmingly to benefit later generations and/or rich, Western nations. These pyramids of sacrifice are made possible by the social construction of meaning, the requirement of the agents of change to provide "cognitive respect" to conceptualizations of reality among the sacrificed groups. The order of transition in the case of the Millennial generation is reversed: the younger generation bears the costs of the systemic strains of capitalism, finding itself in the most unfavorable position in the wealth distribution hierarchy. In consequence, Millennials not only become disillusioned about intergenerational solidarity but also are no longer receptive to the growth ideologies
produced by the capitalist myth makers. Millennials' personal economic experiences question the explanatory potential of the market narratives, and their environmental sensibility tells them that the world is far from being saved. In effect, the cognitive justification of Millennials' generational orientation is based on sources different from the traditional myths of capitalistic promise.
Will They Really Change Capitalism? The Collective Action Dilemma
A key theme in the social orientations of Millennials is a belief in the need for a transition to a new economy. At the center of this debate stand relationships between economic growth and the environment, work-life balance, and social cohesion. The basic premise of social cognitive theory is that personal agency and social structure operate interdependently to affect human activities [64,65], so the simultaneous investigation of variables in the Millennials' experience and contextual factors creates a basis for evaluating the opportunities and risks in the transition process. Given the unfavorable social position the Millennials inherited in the trajectory of the evolution of the market, they naturally search for alternatives formulated in the concepts of "postgrowth" or, as we have seen, degrowth [66][67][68]. In the collective action model proposed by van Stekelenburg, Klandermans, and van Dijk [69], ideology plays an important role, being one of the key variables along with instrumentality, anger, and identity. This perspective underlines the motivation of the actors involved, suggesting that people mobilize themselves when they believe that their fundamental values are being threatened.
From the perspective of many Millennials, the paradigm of "growth" is neither natural nor desirable [70]. They are, rather, seeking meaning, a wider context, and coherent lives, which situates them beyond purely economic functions as producers and consumers. Furthermore, social, intellectual, and political authorities proved to be ineffective in taming the severe economic and social consequences of the neoliberal economic system. In effect, a sense of responsibility amongst market and social actors diminished gradually, deepening the state of uncertainty, apathy, and withdrawal from the "common" sphere to the point where only 20% of U.S. Millennials feel they can trust the federal government [71]. This long-term process weakened the disciplinary technologies used by authorities to impose normative orientations on people [72]. The rule of the experts, the agents of knowledge [73] who formatted the interpretative structures of the market and the state, ascribing meaning and value to the objects of knowledge, was weakened, shifting power towards individuals empowered by information technologies.
The worldviews of this generation have already shifted, but the question remains of whether they will be able to propose new, effective ways to address the complexity of the most urgent global problems as they take up leadership roles in organizations and societies. As long as the internal logic of the system remains untouched by the pressures of a reimagined growth model, the legitimacy of the leading economic actors will be increasingly eroded, diminishing their capacity to exert power and enforce social discipline. One of the obstacles when it comes to the challenge for Millennials of creating these pressures is the fact that the route to having a large-scale impact leads through organized politics, which is, generally speaking, beyond this generation's sphere of interests. Young people in the Western world, although unprecedentedly connected, can be described as a culturally "atomized" generation: "they have less civic engagement and lower political participation than any previous group" [74]. A 2014 Pew Research Center study similarly concluded that U.S. Millennials are "relatively unattached to organized politics" [75]. Thus, their identities, worldviews, and attitudes towards growth are transformative, and if persistent will have an impact on global economic relations, but Millennials are not ideologically vocal or institutionally involved. Therefore, the driving force of economic transition, in the scenario they are most likely to engage in, is situated within the area of individual market choices and consumer behaviors. These taken together can either sustain the capitalist growth system or introduce pressures that can transform social norms as reflected in institutional design.
Such a transformation model represents a classic social dilemma based on two components: the nature of the decision-making process leads individuals to favor selfish choices over cooperative ones; and when selfish choices are favored over cooperative ones, all the participants receive lower payoffs. The primary problem of the alternative growth collective action paradigm is that all members of a society will be better off if they choose to act against the principles of traditional growth, but it is nevertheless better for each of them not to do so individually [76]. The economic crisis began the process of framing the prospect of postcapitalism on the normative level, generating protest and mobilization based on anger and, to a certain degree, common identification [77]. Nevertheless, it has failed to provide a basis for instrumentality, that is, the subjective belief that the desired changes are attainable via collective action [78].
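As a minimal formalization of this two-component structure (the notation is introduced here for illustration and is not taken from the cited sources), let each of n individuals choose to cooperate (C) with degrowth norms or defect (D), and let pi_i denote individual i's payoff:

  % Defection is individually dominant, whatever the others choose:
  \pi_i(D, s_{-i}) > \pi_i(C, s_{-i}) \qquad \forall i, \; \forall s_{-i}
  % Yet universal cooperation pays more than universal defection:
  \pi_i(C, \ldots, C) > \pi_i(D, \ldots, D) \qquad \forall i

The first inequality makes defection individually rational regardless of what the others do; the second makes universal defection collectively inferior, which is exactly the dilemma described above.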
Environmental activists and other groups representing new social movements, such as animal rights protection advocates or alterglobalists, all appear to be too fragmented to mobilize a mass-scale reaction. Their influence in fostering institutional and political reorientation remains limited. Thus, the praxis of the Western societies, trapped in the vicious circle of consumerism and traditional mechanisms of institutional growth-based reasoning, has remained unchanged. Collective efficacy theory provides a useful framework with which to investigate how people view their ability to solve systemic problems and the effectiveness of their actions in that pursuit, and can be used to examine the postcapitalism movements. It captures the link between social cohesion and expectations for action and is defined as "a group's shared belief in its conjoint capabilities to organize and execute the courses of action required to produce given levels of attainments" [65]. A recognition of the need for the system's transformation arose, but the scenarios of the alternative framework have been born only in the Millennial generation's beliefs. Furthermore, they involve fragmented, sometimes contradictory forms (rejection of consumerism combined with an expectation of high wages), which are reflected in individual and group behaviors and so can be seen as a platform for new social-norm creation. Thus, any kind of coherent political program based on alternative growth principles is not being consciously implemented, but generational collective self-esteem is increasingly connected to alternative visions of the economic system.
Discussion
This article outlines how Millennials' orientations towards wealth, models of growth, and individual success are shaped by the economic environment and how they may influence future trends in the evolution of this environment. The findings of the major empirical studies covering the preferences of this generation have been integrated so as to build a conceptual picture of its members and address the research questions of whether new directions of evolution of the capitalist system (degrowth, green capitalism) are reflected in Millennials' value systems and practices. This article offers several contributions to the field: The Millennial generation's perspective provides an interesting context in which to draw attention to the various ways of approaching current dynamics in conceptions of growth, in both theory and practice. The rejection of capitalist-growth norms can be treated as a reaction to the transformation of the socioeconomic arena that undermined the ability of social actors to achieve their individual and group purposes related to well-being, happiness, or sustainable development. However, the extent to which the degrowth concept frames Millennials' market orientations remains open to discussion. As an intellectual proposition, degrowth bears a more radical connotation than postgrowth or "prosperity without growth", despite the fact that these labels coexist and are articulated to express a common preoccupation with the environmental and social consequences of unrestricted growth.
As has been highlighted above, Millennial beliefs and behaviors within this area should be analyzed as a generational identity-based attitude and not a goal-oriented movement. However, at least some strongly identifiable features of the Millennials' economic conscience are in agreement with degrowth proposals already formulated in general discourse: their demand for a work-life balance that recognizes needs such as work-sharing, as proposed by Latouche [79]; the fact that their market practices represent a turn towards peer-to-peer economy practices producing "social use value" rather than monetary "exchange value"; and their openness to social and technological innovation. Generally, the change in the market orientation of the Millennials is visible, but it is taking place not within a new structural conceptualization of the economic model, but rather in the individual sphere of influence. There is no evidence for the rise of a powerful ideology that could lay the foundations for the further evolution of the market system, but some relatively persistent trends and fashions do illustrate the rise of new social norms. For the majority of Western societies, the market behaviors indicated by degrowth philosophy are still situated within the area of social dilemmas. The position of the Millennial generation, however, may have evolved beyond this frame; they have faced the economic consequences of the processes whereby individual rationality, derived from growth-oriented behaviors, produced a state of collective irrationality, as the ecological, economic, and social costs of the growth model of capitalism proved to be devastating. In the traditional scenario, created throughout the last two centuries, that stood behind the logic of the market and social institutions, patterns of "individually reasonable behaviour lead to a situation in which everyone is worse off than they might have been otherwise" [80]. The bankruptcy of the old model, as revealed by the global recession and unfavorable long-term trends, motivated members of the Millennial generation to direct their cultural perspectives towards a rethinking of the market. If this trend is consolidated as we go forward, new scenarios will be built in which the growth principle will be placed outside the definition of rationality.
The interpretation offered here touches on issues of power, conflict and resistance, and collective action, shedding light on the potential and prospects of the Western economic model. Millennials, alongside all other citizens, need to be meaningfully involved in the process of figuring out how new patterns of the consumer-environment-profit balance can be created and how they should evolve in the future. The technological tools that they have at their disposal and the new ways of thinking about social realities, agency, and spheres of influence have already resulted in the emergence of new market practices that can provide the basis for new market concepts. Until now, the evolution of the economic framework has been pushed forward mainly by policymakers and government representatives. System designers have been shaping the desired outcomes via international agreements, internal policies, and the empowerment of different economic actors, driven by a belief in the long-term benefits of the capitalism-democracy nexus. However, this moment in history, in which such principles are being seriously questioned, creates a space for bottom-up processes and the reconfiguration of economic realities with a potentially transformative effect on the whole framework. The Millennial generation has developed some promising change-oriented attitudes, but potentially destructive factors for the attempt to reformulate the system can still be detected. The major inconsistency here is connected to the fact that, despite a fundamental change in attitudes, Millennials live in the reality designed by the traditional model of growth. The socioeconomic environment clearly shapes their decision-making in many areas, such as in the delaying of key life decisions (buying a house, starting a family) [81]. The burden of student loans or risks connected with an increasingly internationalized job market may at least partially weaken the attitude-action link, leading Millennials to make safe life choices with respect to career path or institutional involvement. They are already bearing the costs of the inadequacy of the prevailing economic model, which may become even more severe in the period of the system's transition. Therefore, their willingness to subscribe to the postulate of an intentional downscaling of economic activity and material affluence should be the subject of further research.
- It presents a coherent picture of the economic orientations of the Millennials, derived from previously fragmented research areas (generational studies, economics, sustainability), that may inform policy and practice.
- It provides evidence on collective tendencies on the basis of which the Millennial generation's approach towards the economy and their position within market processes have been assessed. This enables the validation of an argument about the Millennials' reluctance to accept the established rules of the market game.
- It documents the relationship between two specific variables, Millennials' structural positions and economic circumstances, revealing that the rejection of capitalist-growth norms can be treated as a reaction to the transformation of the socioeconomic arena that undermined the ability of social actors to achieve their individual and group purposes related to well-being, happiness, or sustainable development.
- It contributes to the general conceptualization of the current moment in the history of the evolution of the capitalist system and indicates factors that may further erode the traditional foundations of growth in the Western world.
- It indicates further research directions, especially with regard to the institutional and cultural factors shaping both market choices and perceptions of the possibilities and limitations of the current version of Western capitalism.
"Economics",
"Philosophy"
] |
An approach to develop collaborative virtual labs in Modelica
Virtual labs are valuable educational resources in control education, and are widely used in the process industry as tools for operator training and decision aid. In these application domains, virtual labs typically rely on the interactive simulation of large-scale hybrid-DAE models with components of different engineering domains, whose description can be greatly simplified by the use of the Modelica language. Existing free and commercial Modelica libraries of different domains can be used to describe these models. The Interactive Modelica library facilitates developing virtual labs based on Modelica models, using only Modelica. A new major release of the Interactive Modelica library is presented in this paper, whose most relevant feature is to facilitate the implementation of collaborative virtual labs written using only the Modelica language. This library can be used with the environment OpenModelica, facilitating the implementation of cooperative virtual labs using only open software. This type of virtual lab, which allows several students to interact cooperatively with the same model simulation run, is an effective tool in the context of collaborative learning methods. The efficient communication between the graphical user interfaces and the simulation model is a key issue. We developed a new communication protocol and a synchronization algorithm, and redesigned the Modelica classes of the library to make the communication completely transparent to virtual lab developers. The implementation of a collaborative virtual lab for process control education, based on a simplified version of the Tennessee Eastman process, is discussed. The Interactive Modelica library is freely distributed under Modelica License 2 and can be downloaded from http://www.euclides.dia.uned.es/Interactive.
Process System Engineering (PSE) is part of the curriculum in engineering studies such as aerospace, mechanical, chemical, industrial and electrical. PSE covers a wide range of topics, such as [2]: system modeling and simulation, optimization, dynamics and control, and process and plant design. To master these topics, it is important not only to have a good theoretical background but also engineering ability, i.e., insight and intuition, usually obtained by means of many hours of laboratory work, which can be reduced by using virtual labs. Virtual labs have become widely used in distance universities, where students don't have so many in-person practical lessons. There are many examples of virtual labs for PSE education in the literature [3]-[5], but there is a lack of frameworks that facilitate the easy implementation of collaborative virtual labs for PSE education based on complex multi-domain models.
Virtual labs are essentially composed of three parts: the simulation of a mathematical model; the interactive student-to-model interface, called the virtual lab view; and a narrative that typically describes the learning outcomes and activities. Interactivity and visualization are interrelated in virtual labs: students are allowed to change model variables by manipulating the view graphic components and can observe the model behavior by means of animated visualizations. Visualization is an important aid to illustrate the complex problems that arise in PSE [6]. Collaborative virtual labs have several instances of the virtual lab view, which are typically executed on different computers, and allow several students to interact cooperatively with the same model simulation run.
In this article, a new major release of the Interactive Modelica library is presented. Its most relevant feature is supporting the implementation of collaborative virtual labs, with multiple instances of the same view that can be executed on different computers, facilitating the interaction of several students with the same simulated model. The communication layer of the library has been completely changed to allow efficient synchronization between the simulation model and several views: every view has to reflect the same model behavior at the same time, and interactive changes on the model state are allowed by manipulating any of the views. Additional visualization components are also provided. The code of this new release, named Interactive 3.0, has been developed to be fully compatible with OpenModelica and Dymola, and it has been tested with Dymola 2021 and OpenModelica 1.16 (64-bit).
Interactive 3.0, which can be freely downloaded from [7], is distributed as a Modelica library named Interactive, along with two dynamic-link libraries (DLL) named TCPFunctions and InteractiveLib. Interactive 3.0 is geared to Windows systems because some of its files (the VTK, TCPFunctions, Qt, and InteractiveLib libraries) are specific to 64-bit Windows operating systems. TCPFunctions uses the Windows socket library, so its code should be changed to port it to Linux. VTK, Qt, and InteractiveLib can easily be recompiled to a Linux version. There exist Linux versions of the most popular Modelica environments, such as Dymola and OpenModelica, that can be used to simulate any Modelica model.
The main contribution of this paper is to provide a free framework for developing collaborative virtual labs using only Modelica. To this end, a new synchronization algorithm and a new communication layer have been developed and included in the Interactive Modelica library. The communication architecture has a fundamental role in these virtual labs, and will be explained in Section V. Additionally, the Interactive Modelica library has been modified to be compatible with the open-source Modelica environment OpenModelica, providing a free solution for collaborative virtual lab implementation.
The structure of the paper is as follows. Firstly, the related work is discussed in Section II, and the design principles and implementation of the Interactive 3.0 Modelica library are discussed in Sections III to V. The software architecture of the Interactive 3.0 Modelica library, focusing on the classes that include the communication code and the TCPFunctions DLL, is discussed in Section III, and the InteractiveLib DLL is described in Section IV. The most relevant aspects of the communication framework are discussed in Section V. Finally, the Interactive 3.0 Modelica library use is illustrated in Section VI through the development of a collaborative virtual lab based on the Tennessee Eastman simplified model [8], [9], a well-known process in chemical engineering. This virtual lab is used to get insight into the behavior of this chemical process plant, and to apply different multi-loop control and optimization strategies.
II. RELATED WORK
The object-oriented modeling language Modelica [10] greatly facilitates the description of hybrid dynamic models, non-causal models described by systems of differential-algebraic equations (DAE) and events. Modelica provides language constructs to describe time and state events, to reinitialize state variables, to update discrete-time variables, to declare object-oriented constructs, connectors to specify the interaction between models, etc. Besides, there has been an international effort to provide Modelica libraries in different domains (hydraulic, thermal, chemical, mechanical, etc.), some of them free, well documented, and ready to be used. As this type of mathematical model (i.e., hybrid-DAE systems) is widely used in process modeling, Modelica is well suited for implementing the type of models found in PSE and the process industry.
The Modelica modeling environments (e.g., Dymola [11] and OpenModelica [12], [13]) perform the required manipulations on the model (e.g., removing redundant equations, analyzing the computational causality, sorting the equations, DAE index reduction, symbolic manipulation of the linear systems of simultaneous equations, tearing of nonlinear systems of simultaneous equations), and generate the executable code, adding numeric solvers. These environments usually have graphical model editors that allow composing the model by simply dragging and dropping the components of the Modelica model libraries.
Different research lines have been followed to facilitate interactive simulation and visualization of Modelica models. One approach is to provide Modelica modeling environments with capabilities for interactive simulation. The OpenModelica Connection Editor (OMEdit) provides an interface to the interactive simulation module (OMI) in order to support interactive changes in the model parameters during the simulation run [14]. A web service communication layer for OpenModelica [12], [13] was implemented, and employed in [15], [16] to create interactive online simulations that allow users to change model parameters during the simulation run.
Other approaches are based on cosimulation. Virtual labs for control education were developed in [17] combining Dymola and Ejs [18], and Dymola and Sysquake [19]. Cosimulation and model exchange based on the Functional Mock-up Interface [20] are exploited in [21]-[23] to develop interactive simulations of Modelica models.
The Modelica_DeviceDrivers library [24] allows setting the value of model input variables using external devices (e.g., keyboard, joystick, etc.). The MultiBody Modelica library includes animated objects to visualize the simulation results. Two Modelica libraries for visualization are Modelica3D [25] and Visualisation [26].
VirtualLabBuilder [17] and Interactive [27] are two free Modelica libraries that facilitate composing the virtual lab view; establishing the relationship between model variables and the visual properties of the view; and linking the HTML pages that constitute the virtual lab narrative. The virtual lab view is described by instantiating and connecting the graphic elements provided in VirtualLabBuilder or Interactive, forming a hierarchical tree that reflects the virtual lab view layout. VirtualLabBuilder and Interactive graphic elements (e.g., containers, animated 2D geometric shapes, basic elements and interactive controls) are Java and C++ code generators, respectively. During the initialization stage of the virtual lab simulation, the virtual lab view application is automatically generated, and the bidirectional model-view communication is established. This is accomplished by the Modelica classes describing the graphic elements, which contain in their initialization sections calls to functions aimed at writing this code. The virtual lab view generated by VirtualLabBuilder is programmed in Java and doesn't contain 3D geometric shapes, whereas the view generated by Interactive is programmed in C++ using the Qt, VTK, and Qwt libraries, has better graphic quality and includes 3D geometric components. For a Modelica model to be employed in a virtual lab implemented using VirtualLabBuilder or Interactive, it needs to be adapted according to the methodology proposed in [28]. All model quantities that will be allowed to change interactively (the so-called interactive quantities) have to be selected as state variables. In particular, model parameters are transformed into interactive quantities by describing them as state variables with zero time-derivative. As different interactive actions may require different selections of the state variables, this approach may require executing in parallel several simulation instances, with different selections of the state variables. Modelica lets model developers select the model state variables, and supports the reinitialization of state variables at events.
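As an illustration of this adaptation, the following self-contained sketch (written for this article, not taken from [28]; in a real virtual lab the new value would arrive from the view rather than from the time event used here) turns a gain into an interactive quantity:

  model InteractiveQuantityDemo
    "Sketch: a gain k described as a state with zero time-derivative"
    Real k(start = 2.0, fixed = true) "Interactive quantity (former parameter)";
    Real x(start = 0.0, fixed = true) "Some state that depends on k";
  equation
    der(k) = 0;        // k behaves as a constant between interactive changes
    der(x) = -k*x + 1; // plant dynamics using the interactive quantity
    when time >= 5 then
      reinit(k, 4.0);  // an interactive change, applied at an event
    end when;
  end InteractiveQuantityDemo;

Simulating this sketch (e.g., in OpenModelica), x settles near 1/k = 0.5 and then moves towards 0.25 after the change at t = 5.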
Previous versions of the VirtualLabBuilder and Interactive Modelica libraries facilitate the development of single-user virtual labs whose model and view run locally, on the same computer. VirtualLabBuilder is only compatible with Dymola, whereas the latest version of Interactive can be used in combination with other Modelica modeling environments. Interactive 3.0 has been tested with Dymola and OpenModelica in Windows.
III. THE INTERACTIVE MODELICA LIBRARY
The Interactive 3.0 Modelica library is structured into four packages (see Fig. 1). The VLabModels, ViewElements and Examples packages contain the Modelica classes that the virtual lab developer employs. The src package contains partial classes and Modelica functions not intended to be directly used by virtual lab developers.
The src.CServer package encapsulates the C functions included in the TCPFunctions DLL. The TCPFunctions DLL includes functions written in C to create a server, to attend requests from clients, and to send and receive TCP messages. There are calls to these functions from the following three partial classes of the Interactive Modelica library: PartialView, Drawable and SendElement.
To perform these communication tasks, the TCPFunctions DLL includes the following C functions (see the wrapper sketch below):

- startNClientCserver: starts the server and waits until a determined number of views have been connected. The number of views is a parameter of the VirtualLab class. This function returns a vector with the socket number of each connected view, which is necessary to send/receive data to/from these views.
- sendOutput: sends a string as a TCP message to a view. The string contains the value of the model variables that are visualized by the views.
- getVarValues: receives a TCP message containing a string with the following information: the number of changes performed on the view, a reference to the changed model variables, and their new values.
- sendChalk: sends a 1/0 value depending on whether or not the changes performed on the view have been applied to the model.

The VLabModels package includes the PartialView and VirtualLab classes. The virtual lab is described as a Modelica class that includes an object of the VirtualLab class, which has two objects: Model and View. The classes of these two objects, initialized to a null class, must be redeclared to the classes describing the physical model and the view, making use of the Modelica facility to redeclare the class of an object [29]. The class describing the view must inherit from the PartialView abstract class. The procedure to build the virtual lab will be illustrated in Section VI by means of a case study.
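The exact signatures of these functions are not given here; the following is a hedged sketch of how src.CServer might declare them as Modelica external functions (argument lists and the port parameter are assumptions):

  package CServer "Sketch of Modelica wrappers for the TCPFunctions DLL"
    function startNClientCserver "Start the server; block until nViews views connect"
      input Integer port "TCP port the server listens on (assumed)";
      input Integer nViews "Number of views, a parameter of the VirtualLab class";
      output Integer sockets[nViews] "Socket number of each connected view";
      external "C" startNClientCserver(port, nViews, sockets)
        annotation (Library = "TCPFunctions");
    end startNClientCserver;

    function sendOutput "Send the visualized variable values to one view"
      input Integer socket;
      input String msg;
      external "C" sendOutput(socket, msg)
        annotation (Library = "TCPFunctions");
    end sendOutput;

    function getVarValues "Receive the interactive changes performed on one view"
      input Integer socket;
      output String msg "Number of changes, variable references and new values";
      external "C" msg = getVarValues(socket)
        annotation (Library = "TCPFunctions");
    end getVarValues;
  end CServer;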
The PartialView class has been redesigned to include the following code concerning the communication:
- Declaration of parameters and global variables related to the server-clients communication, such as the array of socket descriptors.
- An initial algorithm section, executed in the Modelica environment before the simulation starts, which includes a call to the function startNClientCserver.
- A when clause, whose code is executed at regular steps. The code of this clause includes a call to the getVarValues function, and a sentence changing the value of a global boolean variable named refreshView.

The change of the value of the refreshView variable triggers an event in the SendElement and Drawable partial classes that causes the execution of their communication code. We call the objects that inherit from these two partial classes interactive objects, and we denote the number of interactive objects existing in the view description by nI.

The Container package includes Modelica classes describing windows, panels, the plot container (PlottingPanel class) and the animation container (Canvas class). These classes don't include any communication code.
The Drawables package includes Modelica classes describing components that are hosted inside a PlottingPanel model or a Canvas model. The developer can select whether the variables of the objects of these classes (such as radius, position, etc.) send their values to the view at each communication interval, i.e., whether or not they are interactive. The graphic components included in this package inherit from the Drawable class, which has been modified to include the code to send the interactive variable values to every view. The Drawable class is a partial class that has two global variables: an array with the socket descriptors corresponding to the views and the refreshView variable. This class includes a when clause that is executed only when there is a change of value in the refreshView variable. This when clause includes calls to the sendOutput function to send the interactive data associated with the graphic component to every view. When it is detected that every TCP connection is down, the simulation is terminated.
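The following fragment sketches this event mechanism; it is an illustration of the behavior described above, not the library's actual code, and the names of the global variables are assumptions:

  partial model DrawableSketch
    outer Boolean refreshView "global flag toggled by PartialView each step";
    outer Integer sockets[:] "socket descriptors of the connected views";
    Real radius = 1.0 "example interactive variable of the component";
  algorithm
    when change(refreshView) then
      // Send the interactive data of this component to every view.
      for i in 1:size(sockets, 1) loop
        sendOutput(sockets[i], String(radius));
      end for;
    end when;
  end DrawableSketch;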
The InteractiveControls package includes Modelica classes describing components, such as numeric boxes, that are hosted inside a window or panel. These components inherit from the ControlElement class, which includes code to change the variable value associated with the component. This variable is linked to a model variable, allowing the values of the model variables to be changed in a way that is transparent to the user. Some of these components additionally inherit from the SendElement class, which has the same communication code as the Drawable class.
The BasicElements package contains four classes: Label, CheckBox, PauseButton, and Browser. Objects of these classes can be placed inside a window or panel. PauseButton creates a button for pausing and resuming the simulation, and Browser creates a container that displays documentation in HTML format.
IV. THE INTERACTIVELIB DLL
The InteractiveLib DLL contains the C++ classes of the view graphic elements and the code to communicate with a server. The C++ source code generated from the Modelica view description includes instantiations of InteractiveLib DLL classes. The InteractiveLib DLL has been programmed in C++ using the Qt 5.12, Qwt 6.1, and VTK 7.1 libraries.
Qt [30] is an object-oriented cross-platform framework for developing applications, with C++ as its native language, available under the terms of the GNU Lesser General Public License. It was originally conceived to facilitate the development of graphical user interfaces (GUIs) using its Widgets module, but nowadays it provides modules for networking, databases, OpenGL, etc., and bindings for different programming languages. A main feature of Qt is its mechanism for communication between objects, the signals and slots mechanism. We have employed Qt's capabilities for networking and OpenGL, and its signals and slots mechanism, in the InteractiveLib implementation. The Qt library has been used to develop the C++ code of the view corresponding to the communication between the view and the model, the containers, and the interactive elements (i.e., sliders, checkboxes, etc.).
Qwt [31] is a set of widgets for technical applications written in C++ and freely distributed as a set of files that must be compiled and installed on the target system. Some of its plots and trails are included in the InteractiveLib DLL.
VTK [32], [33] is an open-source toolkit, licensed under the BSD license, for creating leading-edge visualization and graphics applications that manipulate and display scientific data. Its core functionality is written in C++, and it runs on Linux, Windows, and Mac. VTK provides a rendering abstraction layer over the underlying graphics library (OpenGL for the most part), and tools for 3D rendering and modeling, image processing, a suite of widgets for 3D interaction, volume rendering, and extensive 2D plotting capability. It supports a wide variety of visualization algorithms and advanced modeling techniques, and it takes advantage of both threaded and distributed-memory parallel processing for speed and scalability, respectively. VTK includes a special class, named QVTKWidget, to display a VTK window inside a Qt window. VTK is used in the InteractiveLib DLL, in combination with Qt (through the QVTKWidget class), to create the 3D animation elements, such as spheres, half-pipes and scalar bars, and for rendering and visualization.
The view code has two threads: one that handles the graphical user interface and one exclusively dedicated to communicating with the simulation model.

The communication thread connects to the server, sends the new model variable values that have been modified due to the user's actions on the interactive controls (e.g., sliders), gets a message from the server informing whether or not the new values have been applied to the model, and obtains the model variable values needed to refresh the view (see Fig. 3).
The graphic components included in the Container, Drawables, InteractiveControls and BasicElements packages of the Interactive 3.0 Modelica library have an analogous class in the InteractiveLib DLL. For instance, the MainWindow class of Interactive has a corresponding MainWindow class in InteractiveLib, implemented using Qt. This MainWindow class is in every view description and includes the code to render the animation and the graphs, and to close the view application. The classes hosted inside a PlottingPanel and a Canvas model have corresponding classes in the InteractiveLib DLL, whose source code has been developed using Qwt and VTK, respectively. The class corresponding to the Interactive 3.0 Canvas class inherits from QVTKWidget.
V. COMMUNICATION FRAMEWORK
The communication framework is based on the TCP protocol and a multiple-client server architecture. There is one model simulation and multiple views, which are connected to the model simulation through a centralized (star) network, as shown in Fig. 2. The communication engine is embedded in each view and in the model, which are always kept synchronized in the same model state. The model simulation stops at regular time steps to exchange TCP messages with each view, a process explained below.
A. SERVER: MODEL SIMULATION
The virtual lab model is a Modelica class that has three parameters to set up the model-views communication: the port number where the views will be connected, the number of views to be connected to the simulation model (nV), and the time between two successive model-view communications (the communication interval).
The Modelica class describing the virtual lab model includes an object describing the Modelica model and an object describing the view, and equations connecting the model and the view variables. The procedure to build the virtual lab will be illustrated in Section VI by means of a case study.
Once the virtual lab model is executed, the model simulation starts a TCP server that attends new TCP requests from views on a fixed port, storing a connection handler for each view connection in a global array declared in the PartialView class. The server waits until the predefined number of views are connected to it, and then the simulation begins.
The simulation is stopped at regular steps, defined by the communication interval, using the sample built-in Modelica function. At these time instants, there is a synchronized and bidirectional flow of messages between the model server and each view client. At these events, the following actions take place sequentially (see Fig. 3).
1) The server waits for each view to send the number of changes performed by the user and the new values of the model variables (nV messages). The server then decides whether or not to perform the changes requested from a given view.
2) The server sends a message to each view informing whether the requested changes have been made, and waits until every client acknowledges the message reception (nV messages).
3) The value of the boolean variable refreshView is changed.
4) This change triggers an event that causes every interactive object (i.e., objects whose superclass is Drawable or SendElement) to send a message to each view client. Thus, nI messages are sent to the nV clients (nI · nV messages).

When the server detects that every client has been disconnected, the simulation is terminated. If several views simultaneously report a number of changes greater than zero, the model has to select one of these views and execute only the changes performed by manipulating that view. The selection of this view can be implemented in different ways. We have designed a simple selection procedure to reduce the time needed to perform the selection and the number of transmitted messages.
The selection procedure is as follows. During the connection of a view to the server, a priority is assigned to the view, depending on the time instant at which the view asked the server to be a client. The first view is assigned the highest priority, and the last view the lowest priority. When two or more views send a number of changes different from zero, the view with the highest priority among them is selected. The model implements only the changes of this selected view. As the selection procedure is known beforehand, the instructor can use this information to prioritize one view with respect to the rest.
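A sketch of this rule (names are illustrative): since views are indexed in connection order and the earliest-connected view has the highest priority, the selection reduces to finding the lowest index that reports changes:

  function selectView
    input Integer nChanges[:] "number of changes reported by each view";
    output Integer selected "index of the selected view; 0 if none";
  algorithm
    selected := 0;
    for i in 1:size(nChanges, 1) loop
      if nChanges[i] > 0 then
        selected := i; // lowest index = earliest connection = highest priority
        break;
      end if;
    end for;
  end selectView;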
The total number of messages in each communication instant depends linearly on the number of views and the number of interactive objects in each view. Thus, the time involved in the communication increases linearly with the number of views and the complexity of the view.
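Counting the exchanges enumerated above (steps 1, 2 and 4, with the acknowledgments of step 2 not counted separately), the total number of messages per communication instant is

  nV + nV + nI · nV = nV · (2 + nI),

which makes explicit the linear dependence on both the number of views nV and the number of interactive objects nI.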
B. CLIENTS: VIEW
The virtual lab view is a C++ application that contains objects of the classes included in the InteractiveLib DLL, described in Section IV. This application has an object called CommThread, a thread that handles the connection to the model simulation through the TCP channel. This object includes an array that contains pointers to each view object whose properties are set to new values sent from the model simulation (i.e., to each interactive object). Thus, the size of this array is nI.
The CommThread object connects to the server, and then starts a loop that repeats the following steps until the view is closed.
1) It emits a signal to refresh every view window.
2) It sends the number of changes performed by the user and the new model variable values that have been modified due to the user's actions on the interactive controls (e.g., sliders).
3) It gets a message from the server informing whether or not the new values have been applied to the model.
4) It gets nI messages from the server, one from each interactive object, with the new model variable values needed to refresh the view (see Fig. 3). Each message includes a number that identifies the interactive object, which is required to obtain the pointer to the corresponding object and update its values accordingly.
VI. TENNESSEE EASTMAN SIMPLIFIED PROCESS VIRTUAL LAB
The Tennessee Eastman Process model [34] describes a real chemical process that contains a separator/reactor/recycle arrangement involving two simultaneous gas-liquid exothermic reactions. This non-linear dynamic model has been employed as a benchmark for manufacturing process control, statistical process monitoring, sensor fault detection, and identification of data-driven network models. The Tennessee Eastman Simplified Process (TES) model [8] is a simplification of the Tennessee Eastman Process model. It considers only one process unit, consisting of a combination of a reactor and a separator. The process unit of the TES model has two input flows (named Feed 1 and Feed 2) and two output flows (named Purge and Stream 4). Feed 1 contains the non-condensable gases A and C, and trace amounts of an inert gas B. Feed 2 contains only component A. The irreversible reaction A + C → D occurs in the vapor phase under isothermal operating conditions. The product D is a non-volatile liquid. The process unit contains a vapor phase, composed of the ideal gases A, B, and C, and a liquid phase composed of pure D. Purge is a gas mixture composed of the ideal gases A, B, and C. Stream 4 contains only the liquid D.
A. EDUCATIONAL GOALS
A collaborative virtual lab is designed to teach students the dynamic behavior of the TES model, how to operate, control and optimize this process unit, and the effect of disturbances. The TES model is an example of a multi-input multi-output, nonlinear, open-loop unstable system with fast and slow dynamics. The control challenge is to maintain a specified product rate by manipulating the Feed 1, Feed 2, and Purge flows.
The multi-loop control strategy proposed in [8] is implemented. A diagram of the controlled plant is shown in Fig. 4. It consists of four PI controllers, PI_1 to PI_4, whose pairs of controlled-manipulated variables are, respectively: the production rate (F4) and the valve position for Feed 1 (u1); the reactor pressure (P) and the valve position for Purge (u3); the concentration of component A in Purge (YA3) and the valve position for Feed 2 (u2); and the reactor maximum pressure (PMAX) and the correction to the production rate setpoint (F4SP). The operating pressure must be kept below the shutdown limit of 3000 kPa. Students are asked to solve the tasks working in a group, using the collaborative virtual lab.
B. VIRTUAL LAB MODEL
A Modelica model of the TES process [8] was developed in [9]. The methodology proposed in [28] has been applied: the interactive quantities have been selected as state variables using the Modelica facilities to set the state variables, and the interactive parameters have been redefined as state variables with zero time-derivative. The structure of the TESimplified Modelica library is shown in Fig. 5a. The TES process unit is described in the Reactor model. The PI controllers have limited output, anti-windup compensation and setpoint weighting [35]. The diagram of the controlled plant, described in the ReactorPID model, is shown in Fig. 5b. The parameters of the PI controllers are interactive quantities, and the controller parameters given in [8] are taken as their initial values.
Other interactive quantities of the virtual lab are the setpoints of the four PI controllers, the composition of Feed 1, and the parameters of the reaction rate equation. The reaction rate (R_D) is assumed to depend only on the partial pressures of A (P_A) and C (P_C) as follows:

  R_D = k_0 · P_A^α · P_C^β

The values of k_0, α and β given in [8] are taken as initial values for these interactive quantities: k_0 = 0.00117, α = 0.5, β = 0.4, with R_D expressed in kmol/h, and P_A and P_C in kPa.
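A compact Modelica rendering of these interactive reaction-rate quantities, following the zero-derivative pattern of [28] (variable names are illustrative, not those of the TESimplified library):

  model ReactionRateSketch
    // The coefficients are interactive quantities: states with zero
    // derivative, initialized to the values given in [8].
    Real k0(start = 0.00117, fixed = true);
    Real alpha(start = 0.5, fixed = true);
    Real beta(start = 0.4, fixed = true);
    input Real P_A "partial pressure of A [kPa]";
    input Real P_C "partial pressure of C [kPa]";
    output Real R_D "reaction rate [kmol/h]";
  equation
    der(k0) = 0;
    der(alpha) = 0;
    der(beta) = 0;
    R_D = k0 * P_A ^ alpha * P_C ^ beta;
  end ReactionRateSketch;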
C. VIRTUAL LAB VIEW
The virtual lab view is composed graphically by instantiating and connecting elements of the Interactive 3.0 Modelica library. The diagram of the Modelica model that describes the virtual lab view is shown in Fig. 6. The virtual lab view automatically generated from this description is shown in Fig. 7.
The Modelica class that describes the view must, on the one hand, be a subclass of the PartialView class, which is included in the Interactive library and contains the code of the model-view bidirectional communication. On the other hand, it has to contain the components employed to define the view, connected forming a tree structure. The PartialView class contains an object named root. This object must be connected to the rest of the view components following the library connection rules. Thus, this component is the root of the tree structure describing the view (see Fig. 6). As shown in Fig. 6, six components are directly connected with root: the mainWindow component of the MainFrame class, which generates the window shown in Fig. 7, and five components of the Dialog class.
Two containers are placed inside mainWindow: a component of the Canvas class, placed in the center of mainWindow, that contains the 3D animated diagram of the TES model; and a component of the Panel class that hosts interactive controls to pause and resume the simulation, and check-boxes to show and hide dialog windows.
The 3D animated diagram of the TES model is composed of drawable elements. The reactor is represented by components of the Cylinder class. The valves are represented by components of the File3DsImporter class, which imports 3D Studio files into the view. The controllers are represented by components of the Text and Line classes.
The interactive controls that allow pausing/resuming the simulation and showing/hiding windows are described by components of the PauseButton and CheckBox classes. The virtual lab view contains six dialog windows that allow the user to tune the PI controllers; to change the composition of Feed 1, the reaction rate parameters and the setpoints of the PI controllers; to display the time evolution of the Stream 4 flow rate, the reactor pressure, the concentration of component A in the purge, and the valve positions; and to show the HTML pages that constitute the virtual lab narrative. A hypothetical sketch of such a view description is shown below.
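In the following fragment, the class names follow the text, but the package paths and the parent/child connectors are assumptions about the library's API rather than its actual definitions:

  model TESViewSketch
    extends PartialView; // provides root and the communication code
    MainFrame mainWindow "main window of Fig. 7";
    Canvas canvas "3D animated diagram of the TES model";
    Panel panel "hosts the pause button and the check-boxes";
    Cylinder reactor "drawable representing the reactor";
    PauseButton pauseButton;
  equation
    // Assuming root is the top connector of the view tree:
    connect(root, mainWindow.parent);
    connect(mainWindow.child, canvas.parent);
    connect(mainWindow.child, panel.parent);
    connect(canvas.child, reactor.parent);
    connect(panel.child, pauseButton.parent);
  end TESViewSketch;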
D. VIRTUAL LAB SET UP
The Modelica model that describes the complete virtual lab must instantiate the VirtualLab class of the Interactive 3.0 Modelica library, the Modelica class describing the virtual lab model, and the Modelica class describing the virtual lab view. In addition, this model describing the complete virtual lab has to contain the equations equating the view variables to the corresponding model variables. Finally, values have to be assigned to the following parameters of the VirtualLab class: the length of the model-to-view communication interval, the names of the Modelica classes that describe the model and the view, the number of views, and the IP address of the computer where the simulation model is running (see Fig. 8).
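Putting the pieces together, the complete virtual lab might look as follows; the parameter names and values are illustrative assumptions, not the exact API of the Interactive 3.0 library:

  model TESVirtualLab
    VirtualLab vlab(
      redeclare ReactorPID Model,   // class of the virtual lab model (VI-B)
      redeclare TESView View,       // class of the virtual lab view (VI-C)
      communicationInterval = 0.1,  // model-to-view interval [s]
      nViews = 4,                   // number of views to wait for
      serverIP = "192.168.1.10");   // computer running the model simulation
  equation
    // Equations equating view variables to the corresponding model variables:
    vlab.View.reactorPressure = vlab.Model.P;
  end TESVirtualLab;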
E. TRANSLATION TO EXECUTABLE CODE AND LAUNCH
The Modelica description of the complete virtual lab is translated into executable code using a Modelica modeling environment. The Interactive 3.0 Modelica library has been tested using Dymola 2021 and OpenModelica 1.16. Then, the generated executable code is launched in the server computer, starting the simulation run. At the initialization stage of the simulation, the C++ code of the view application is automatically generated in the server computer, and the simulation run waits for the clients (i.e., the virtual lab view applications) to connect.
This C++ code of the view application, which has been automatically generated in the server computer, needs to be copied to the students' computers and compiled. The InteractiveLib, Qt, VTK, and Qwt DLLs also need to be copied to the students' computers. Next, the compiled copies of the view application are launched in the students' computers. When the view application is launched in a student's computer, the application connects to the server, and the model-view communication is automatically established.
Once the specified number of views are connected (this number is a parameter of the VirtualLab class), the simulation initialization is completed in the server and the interactive simulation proceeds. The code that needs to be copied to the students' computers doesn't change between successive runs of the virtual lab, so the view installation only needs to be done once. Neither Dymola nor OpenModelica needs to be installed on the students' computers.
F. VIRTUAL LAB USE
The virtual lab narrative is presented to the students through HTML pages linked to the virtual lab view. This narrative describes the virtual lab pedagogical goals, the TES process and its multi-loop PI control system, and how to use the virtual lab. Students are asked to read the narrative and to experiment in groups with the collaborative virtual lab to complete the proposed activities. One of these collaborative activities consists in dividing the students into four groups and assigning to each group the tuning of one of the four PI controllers. Next, students are asked to describe and explain the observed influence of the other three controllers' tuning on their own. Some other collaborative activities proposed to the students are listed below. Some of them are the scenarios for process control discussed in [8].
1) Set to zero the value of the PI proportional constant, which is equivalent to cancelling the control action. Observe the unstable dynamics of the process: the reactor pressure increases until the shutdown limit is met.
2) Observe the controlled system and describe its evolution. How long does it take to stabilize the system?
3) Change the control parameters. Analyze the limits for controlling the plant.
4) Without touching other controls, comment on what happens after modifying the product flow setpoint.
5) Change the proportionality constant k_0 of the reaction rate equation (i.e., R_D = k_0 · P_A^α · P_C^β) from 0.0017 to 0.001, while the exponent β drifts from 0.4 to 0.35.
VII. CONCLUSION
A new major release of the Interactive Modelica library has been presented. Its most relevant feature is that it facilitates the implementation of collaborative virtual labs based on Modelica models, using only the Modelica language. Collaborative virtual labs are composed of several synchronized views that interact with a common model simulation run.
The main challenge has been to design an efficient communication between the model and the views that is transparent to virtual lab developers. To this end, we analyzed the minimum information that needs to be transmitted, developed a synchronization algorithm, and redesigned the Modelica classes containing this code, exploiting the advantages of Modelica object orientation and creating partial classes so that they can easily be extended to create new library components.
The use of the Interactive 3.0 Modelica library has been illustrated by discussing the implementation of a collaborative virtual lab for control education, based on the Tennessee Eastman Simplified Process. The Interactive library facilitates the definition of the virtual lab view and of the model-view connection. The obtained virtual lab has good graphic quality and performance, and it is a collaborative learning tool that allows students to work in groups to achieve their learning goals.
The Interactive 3.0 Modelica library is freely distributed under the Modelica License 2. The C++ code automatically generated by the components of the Interactive Modelica library uses the Qt, Qwt, and VTK libraries. The library is compatible with OpenModelica and Dymola, having been tested with Dymola 2021 and OpenModelica 1.16 (64 bits). As OpenModelica is an open-source tool, this compatibility allows the whole virtual lab development process to be carried out with freely available software.
"Computer Science"
] |
Integrating scFv into xMAP Assays for the Detection of Marine Toxins
Marine toxins, such as saxitoxin and domoic acid, are associated with algae blooms and can bioaccumulate in shellfish, which presents both health and economic concerns. The ability to detect the presence of toxin is paramount for the administration of the correct supportive care in case of intoxication; environmental monitoring to detect the presence of toxin is also important for the prevention of intoxication. Immunoassays are one tool that has been successfully applied to the detection of marine toxins. Herein, we had the variable regions of two saxitoxin-binding monoclonal antibodies sequenced and used the information to produce recombinant constructs that consist of linked heavy and light variable domains that make up the binding domains of the antibodies (scFv). Recombinantly produced binding elements such as scFv provide an alternative to traditional antibodies and serve to "preserve" monoclonal antibodies, as they can be easily recreated from their sequence data. In this paper, we combined the anti-saxitoxin scFv developed here with a previously developed anti-domoic acid scFv and demonstrated their utility in a microsphere-based competitive immunoassay format. In addition to detection in buffer, we demonstrated equivalent sensitivity in oyster and scallop matrices. The potential for multiplexed detection using scFvs in this immunoassay format is also demonstrated.
Introduction
Food poisoning from naturally occurring marine toxins is a worldwide public health issue, and it also poses economic concerns for the food industry. Marine toxins, such as saxitoxin (STX) and domoic acid (DA), are associated with algae blooms and can bioaccumulate in shellfish and herbivorous fishes, causing food poisoning [1,2]. The frequency of STX- and DA-producing algae blooms is on the rise [3], possibly due to climate change, leading to an increasing potential for adverse environmental, economic, and health implications.
STXs, produced by dinoflagellates, are the most well-studied cause of paralytic shellfish poisoning (PSP), which is also the most common and lethal form of marine toxin poisoning [2,4]. PSP results in paralysis of muscles throughout the body and at higher concentrations can cause death. STX is a heat-stable toxin whose primary target is the voltage-gated sodium channels in the nerve and muscle cells of the body, affecting the gastrointestinal and neurological systems [4,5]. Because of its potency, STX has been identified as a potential biothreat agent and is regulated as a select agent by the U.S. Centers for Disease Control and Prevention (CDC).
DA is a representative of the toxins that cause amnesic shellfish poisoning (ASP). Besides typical food poisoning symptoms, short-term memory loss, confusion, and disorientation are observed in ASP. Although higher levels of DA are needed for intoxication, it is a heat-stable toxin that can cause kidney damage at levels several orders of magnitude lower than those that cause neurological symptoms [5,6]. It acts by stimulating the glutamate receptors.
There is no known antidote for either STX or DA poisoning; therefore, the ability to detect the presence of toxin is vital for the early administration of the correct supportive care [7]. Detection of these and related toxins is also important for environmental monitoring, to limit the use of shellfish from the affected region. Until the last decade, the mouse bioassay (MBA) was the only approved method for the detection of marine toxins. The MBA provides toxicity information about the food sample but not toxin identification. Moral and ethical issues with this method (animal usage) have led to the development of other methods. Liquid chromatography methods, such as HPLC (with fluorometric detection for STX), LC-UV for DA, LC-MS and UPLC-MS/MS [2,4,8,9], are able to detect and identify variants of the toxins. HPLC-FD and LC-UV have been validated for use in the EU. Surface Enhanced Raman Scattering (SERS) has also been demonstrated [9,10]. While these methods offer the required specificity, they need extensive sample preparation, expensive equipment, highly trained personnel, and the availability of analytical standards. In addition, interference from complex matrices has been observed.
Alternatively, biologically based assays have been developed that can monitor toxicity similarly to the MBA and thereby provide rapid screening of samples. Cell-based sensors, which use only cells instead of animals, have been developed for PSP toxins [11][12][13]. In 2012, Van Dolah et al. described a receptor-based assay that was able to detect PSP toxins near the regulatory limits [4,14]. Even though cells and receptors have been employed to identify marine toxins and their toxicity, most bio-based methods employ antibodies as a rapid screening tool. Several different immunoassays using either polyclonal (pAbs) or monoclonal antibodies (mAbs) against STX and DA have been successfully applied to the detection of marine toxins [15][16][17][18][19]. Antibodies have been used in many different detection formats, such as enzyme-linked immunosorbent assays (ELISAs) [20,21], surface plasmon resonance (SPR) [22][23][24][25], fluorescence/chemiluminescence-based microarray assays [26,27], flow cytometry [28,29], and lab-on-a-chip using fluorescence and magnetic particles [30]. Recently, two groups have been developing lateral flow immunoassays (LFIs) for the detection of PSP toxins and DA as rapid, low-tech screening assays [31][32][33]. Campbell et al. review biological toxin binders in detail in their 2011 paper [34].
In the last few years, concerns have grown over the reliability and reproducibility of standard antibodies. A. Bradbury wrote an article in Nature describing the issues with using traditional monoclonal and polyclonal antibodies [35]. These issues include the availability of antibody (for polyclonals, batch-to-batch variability), the use of animals and, for monoclonals, the death of hybridoma cell lines. A viable alternative is recombinant antibodies, including single-chain variable fragments (scFv) and single-domain antibodies (sdAb) [36]. Recombinant constructs (scFv) that consist of linked heavy and light variable domains, which make up the binding domains of conventional antibodies, have been produced for the sensitive detection of DA, as well as of a number of other biotoxins [37,38]. Recombinantly produced binding elements serve to "preserve" mAbs, as they can be easily recreated from their sequence information, thereby reducing variability and eliminating issues with the loss of cell lines. Additional advantages of scFv over mAbs are the ability to produce the reagent recombinantly in E. coli and the potential to produce fusion constructs with enhanced utility that can be tailored to particular sensor systems [38][39][40][41]. While not universal, improvements in stability, affinity, and diversity have been observed in scFvs; for example, improvement in both stability and affinity was demonstrated by McConnell et al. [42]. Herein, we demonstrate recombinantly produced antibody recognition domains, scFvs, in a microsphere-based competitive immunoassay for the detection of STX and DA. This work utilized the previously described anti-DA binding fragment [37] in conjunction with an anti-STX binding domain that was synthesized from the sequence of an anti-STX mAb [15]. In addition to detection in buffer, we show the utility of the assay in shellfish matrices.
Sequencing and Evaluation of Anti-STX mAbs for scFv Production
The hybridoma supernatants and cell lines for the sequencing of anti-STX mAbs 5F7 and 1E8 were developed at Ludwig-Maximilians-Universität Munich (LMU) [15]. We contracted with GenScript (Piscataway, NJ, USA) to have the variable regions of the mAbs sequenced, as well as for the production of each mAb for evaluation. Sequencing showed that the sequences of 5F7 and 1E8 were unique (Figure 1). The mAbs were evaluated by surface plasmon resonance (SPR) for their ability to bind to a STX-IgG antigen (Figure 2). STX was coupled to an irrelevant human IgG (HuIgG); the binding affinities of mAbs 5F7 and 1E8 were observed to be ~2.6 and 2.5 nM, respectively.

The mAbs were also shown to function in xMAP assays on the MAGPIX instrument. First, each mAb was biotinylated, and its dose-dependent direct binding to STX-coated microspheres was evaluated to determine an appropriate concentration to use for a competitive assay (not shown). Next, the two mAbs were shown to function in a competitive format for the detection of STX (Figure 3). The results were very similar to those observed previously in a competitive ELISA [15], with 1E8 in this format appearing to have a higher affinity for STX and providing a better limit of detection.
The use of IgG for the conjugation of STX was due to the need for a glycosylated molecule onto which the STX could be attached. Conjugate preparation followed a procedure that couples through the carbohydrate of the antibody to amines on the antigen, paralleling the chemistry originally utilized for the immunogen used to prepare the mAbs [15]. STX was conjugated to IgG through a periodate chemistry that specifically activates the carbohydrate residues (described in Section 4.6); once activated, the aldehyde formed is highly reactive toward nucleophiles, especially primary amines. The highly negative pI of the glucose oxidase (GO) used in the mAb production makes it less suitable for our application, as it does not immobilize well onto the microspheres. The long linker length between the protein and the immobilized STX provided by the carbohydrate side chains may contribute to the superiority of this chemistry for the formation of both the immunogen and the toxin analog.
Figure 3. MAGPIX xMAP STX competitive immunoassay using mAbs. Each mAb was biotinylated and tested at 1 µg/mL in a competitive assay using STX-HuIgG-coated MagPlex beads, as described in the experimental section. Additional control bead sets are not shown. The graph is compiled from separate STX dose response assays for each of the mAbs.
Production of scFv Targeting STX and DA
The protein sequences of the variable heavy and light domains from anti-STX mAbs 5F7 and 1E8 were synthesized, with the variable heavy domain as a NcoI-NotI fragment and the light chain with flanking BamHI and XhoI sites, joined by a 26-amino-acid-long linker (containing the NotI and BamHI sites). The DNA for expressing the scFv was inserted, using NcoI-XhoI, into the pET22b(+) expression vector for periplasmic protein production.

Protein yields of ~1-2 mg/L were obtained for the 5F7 scFv; however, although we tried several times, we were unable to purify protein from the 1E8 scFv. We therefore chose to focus on developing an anti-STX xMAP assay utilizing the 5F7 scFv. In the future, strategies such as expression with chaperones, fusion with a constant domain, or CDR grafting could be implemented for the production of the 1E8 scFv [42][43][44].

For the detection of DA, we were kindly provided with the protein sequence of the anti-DA scFv DA24cB7 (Figure 1) by Dr. Marian Kane [37]. We had the gene synthesized with flanking NcoI and XhoI restriction sites for cloning and a 15-amino-acid linker between the variable heavy and light chains. The scFv was cloned into pET22b(+) for expression and typically yielded ~5-10 mg/L.
Evaluation of scFv Targeting STX and DA
Like monoclonal and polyclonal antibodies, scFvs have been incorporated into ELISAs, SPR, and LFIs [38]. Although the ELISA format affords amplification through the action of an enzyme, it is not easily multiplexed to examine multiple toxins in one well. The Luminex xMAP methodology consists of assays performed on color-coded beads that can be highly multiplexed. Both sandwich and competitive format immunoassays have been demonstrated using the Luminex system [17,45,46]. In this article, an xMAP assay monitored using a MAGPIX system for the individual detection of STX and DA utilizing scFv recognition elements is demonstrated (Figures 4 and 5, and Figure S1).

To demonstrate the ability of the biotinylated scFv to detect either STX or DA, competitive immunoassays with toxin-coated beads were performed in buffer and in commonly contaminated shellfish matrices. For sample preparation of the seafood materials, a simple extraction protocol was utilized; however, more complex extraction methodologies that are likely more efficient have been described [27,47]. As shown in Figure 4, the scFvs were able to detect STX or DA in all three matrices in a dose-dependent manner. Bead sets in which STX was coupled to human IgG (HuIgG) or rabbit IgG (RbIgG) showed similar dose response curves, while the curves for the DA microspheres, which were prepared identically, gave identical responses (Figure S1). The IC50, IC10, IC90, Min, and Max were determined (Table 1) by fitting the dose response curves using SigmaPlot 12.0 (Systat Software, San Jose, CA, USA) with a four-parameter logistic equation (Figure 4). Averages of the raw data are shown in Figure 5.
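For reference, a standard four-parameter logistic model of the kind used for these fits can be written as below; the exact parameterization employed in SigmaPlot is not stated in the text, so this is the common textbook form:

  y(x) = Min + (Max − Min) / (1 + (x/IC50)^h)

where y is the measured signal, x is the toxin concentration, h is the Hill slope, and Max and Min are the upper and lower asymptotes; IC10 and IC90 are the concentrations producing 10% and 90% inhibition, respectively.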
Cross-reactivity studies with closely related compounds were not performed for the anti-STX scFv tested here, but they have previously been examined for its parental mAb [15]. The IC10, which is the lowest concentration that these assays can be presumed to detect, was in the low ng/mL range for both STX and DA. These limits are sufficient for most monitoring needs, but are not as sensitive as those reported for a number of other techniques. The higher IC10 could be due to a number of reasons. One is that the sensitivity achieved is inversely related to the concentration of antibody used in the competition assay; using a higher concentration allows for a more robust assay, as a low concentration of antibody can lead to greater variance in the signal generated. It is likely that, with additional optimization of the assay reagents, improvements in sensitivity could be realized. Nonetheless, for our purposes of demonstrating these scFv immunoreagents in an xMAP assay format, these sensitivities were respectable. As this work only examined standard curves for the detection of STX and DA in the various oyster and scallop extracts in comparison to a standard curve generated in buffer, extraction efficiencies were not calculated. However, it is possible to evaluate the impact of the matrix effect on the ability to discriminate the toxins. By comparing the magnitude of the signal response in each matrix to that in buffer, and the corresponding maximum signal divided by the minimum signal (signal-to-noise, S/N), one can estimate how the matrices affect the assays (see Table 2). The results of this analysis on the raw data show that, for the STX assay, the response is slightly reduced in the oyster matrix and degraded by ~50% in the scallop matrix, but the ability to discriminate the signal, based on the S/N ratio, shows little change. For the DA assay, the signal strength is dramatically reduced in the matrices, especially in the oyster assay, where it was reduced to ~1/4 of the buffer signal levels; however, when comparing the S/N for the DA assays, the ratio actually improved in the extracts, possibly due to a reduction in nonspecific binding. Thus, it appears that the ability to discriminate STX and DA in the seafood extracts is not severely degraded, suggesting this minimal extraction protocol may warrant further evaluation to determine its efficiency compared to others [27,47].
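Concretely, the two quantities compared in this analysis (the first is defined in the footnote to Table 2) are

  relative response = ((Max_matrix − Min_matrix)/(Max_buffer − Min_buffer)) × 100 and S/N = Max/Min,

with Max and Min the maximum and minimum signals of the corresponding dose response curve.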
Combined STX and DA Assays
Preliminary efforts to combine the two assays were also undertaken. A combined assay for STX and DA was performed by mixing beads specific for STX and DA, as well as both corresponding biotinylated scFvs. Figure S2 shows the results for the STX- or DA-spiked samples. While there is an increase in the inhibition for the STX microspheres in the DA-spiked samples, there is a clear difference in signal at the higher concentrations of free DA. The corresponding assay with STX had larger cross-reactivity issues. The cross-reactivity could be due to non-specific binding of the biotinylated scFvs to the microspheres in the absence of antigen. Further work would be required to develop or optimize an assay with less cross-reactivity, including blocking the beads after toxin immobilization or adjusting the assay and wash buffers to minimize this unwanted behavior.
Conclusions
The ability to overcome the disadvantages of pAbs and mAbs (e.g., availability and animal usage) by using recombinant constructs such as scFvs is the way of the future. In this study, we demonstrated the use of scFvs for the detection of two marine toxins: STX and DA. We employed these scFvs in competitive assays using the xMAP fluid array technology, obtaining limits of detection that approached those obtained with other antibodies, although further optimization is warranted. We demonstrated detection in both buffer and spiked food matrices with minimal sample preparation. Using the sequence information provided herein, these recognition molecules (scFvs) are now available to other researchers for incorporation into their immunoassay platforms as an alternative to traditional antibodies, and they could also be further engineered to include biotin-binding domains or signal transduction domains (e.g., alkaline phosphatase) to further enhance their utility.
scFv Construction and Protein Production
The genes coding for the variable heavy and light domains from anti-STX mAbs 5F7 and 1E8 were synthesized with codons optimized for expression in E. coli (GenScript, Piscataway, NJ, USA). The variable heavy domain was PCR-amplified from the plasmid provided by GenScript with primers that introduced flanking NcoI and NotI sites; the light chain was similarly amplified to introduce flanking BamHI and XhoI sites. The PCR products were digested and gel-purified prior to ligation into a pET22b derivative containing the linker sequence AAAGSGSGGGSSGGGSSGGGSGASGS between the NotI (coded by AAA) and the BamHI (coded by the C-terminal GS) sites. Similarly, the sequence for the variable heavy and light chains of the anti-DA scFv DA24cB7, joined by a 15-amino-acid linker, was synthesized with flanking NcoI and XhoI sites (GenScript) and cloned into the pET22b expression vector.
The anti-STX scFvs and the anti-DA scFv were grown and produced essentially as described previously [48]. Cultures were grown at 25 or 30 °C in terrific broth (TB). Fifty-mL overnight cultures were used to inoculate 500 mL of TB and were grown for 3 h before induction by the addition of 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). After induction, cultures were grown for 3 h before pelleting and were then subjected to an osmotic shock protocol [44,45]. The scFv were purified from the shockate by immobilized metal affinity chromatography followed by size exclusion chromatography using a Superdex 75 10/300 GL column (GE Healthcare, Pittsburgh, PA, USA) and a BioLogic DuoFlow chromatography system (Bio-Rad, Hercules, CA, USA). The yield of the scFv was determined by UV spectroscopy, measuring the absorbance at 280 nm using a NanoDrop 2000 (Thermo Fisher, Waltham, MA, USA).
Food Matrices Preparation
A simple sample preparation protocol was used to extract the toxins, as compared to the more stringent buffer conditions described in Campbell et al. and Szkola et al. [27,47]. Bay scallops (live, frozen) and live oysters were purchased from a local grocery store. The oysters were placed at −20 °C overnight. The frozen bay scallops (200 g) were blended in a small Cuisinart food processor until smooth, with no additional liquid. The puree was placed in 15 mL centrifuge tubes (VWR, Radnor, PA, USA) in 5 g aliquots and frozen until testing. For the frozen oysters, 120.5 g were blended with no additional liquid, aliquoted in 5 g portions, and frozen until testing.
Just prior to testing, the 5 g samples were thawed and 10 mL of PBSTB were added. The thoroughly blended samples were spiked with either STX or DA to give 1000 ng/mL or 200 ng/mL, respectively. The samples were mixed and incubated for 2 h at room temperature, then spun to remove large particulates. The supernatant was used for analysis. Extraction efficiency was not determined.
Preparation and Biotinylation of mAbs and scFv
The antibodies were purified from cell supernatants by MEP HyperCel hydrophobic charge induction chromatography (Pall, East Hills, NY, USA), as described previously [49]. Antibodies and scFvs were biotinylated using NHS-LC-LC-biotin (Thermo Fisher, Waltham, MA, USA) dissolved in dimethyl sulfoxide (20 g/L). The antibodies were reacted with a 10:1 molar excess of the NHS-LC-LC-biotin. To enhance the rate of reaction, the pH was increased by the addition of a half-volume of 100 mM sodium borate + 100 mM sodium chloride (pH 9.1). After incubation for 1 h at room temperature, the biotinylated antibodies were separated from free biotin by gel filtration on a Bio-Gel P10 column (Bio-Rad, Hercules, CA, USA) or by using Zeba Spin 7K desalting columns (Thermo Fisher, Waltham, MA, USA).
Surface Plasmon Resonance Evaluation of Anti-STX mAbs
Surface plasmon resonance (SPR) affinity and kinetics measurements were performed using a ProteOn XPR36 (Bio-Rad). Lanes of a general layer compact (GLC) chip were individually coated with STX covalently linked to an irrelevant HuIgG in 10 mM acetate buffer at pH 5.0. The protocol for the covalent crosslinking of STX was described previously [15] and in Section 4.6. The STX-HuIgG was attached to the chip following the standard 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC)/N-hydroxysulfosuccinimide (Sulfo-NHS) coupling chemistry available from the manufacturer. The binding kinetics of each antibody were tested at 25 °C by flowing six concentrations, varying from 100 to 0 nM, at 100 µL/min for 90 s over the antigen-coated chip and then monitoring dissociation for 600 s. For comparison purposes, these data were analyzed using a global Langmuir fit.
Preparation of Toxin-Coated MagPlex Microspheres
MagPlex microspheres were coated with DA by first reacting the surface carboxyls with ethylenediamine (EDA) using the standard two-step protocol: 30 µL of the microspheres were washed with 0.1 M sodium phosphate (pH 6.0) three times and then activated using EDC and Sulfo-NHS (5 mg/mL each). After 20 min, the microspheres were washed once with 0.1 M sodium phosphate (pH 6.0) and once with PBS; the microspheres were then resuspended in EDA (1 mg/mL) in PBS and incubated overnight. The next day, the microspheres were washed three times with 0.1 M 2-(N-morpholino)ethanesulfonic acid buffer (MES) at pH 4.5. The microspheres were then coated with DA by adding EDC and DA at a 2:1 EDC:DA ratio. The microspheres were incubated for 1 h, then washed three times with PBS and stored in the dark at 4 °C.
The use of a different chemistry to bind STX to the microspheres was indicated by the immunogen used to prepare the mAbs [15]. MagPlex microspheres (50 µL) coated with STX were prepared by first coating two sets of microspheres with either a purified rabbit or a human polyclonal IgG toward irrelevant targets, using the standard protocol described above, except that the microspheres were washed into PBS and then resuspended in 0.4 mL of PBS on the following day. The carbohydrates on the IgG were activated by the addition of 22 µL of sodium periodate (46 mM). The microspheres were incubated in the dark for 1 h at room temperature, then washed three times with PBS. The PBS was removed and the microspheres were resuspended in 40 µL of 0.1 M sodium bicarbonate (pH 8.5), to which 5 µL of STX (100 µg/mL) was added. The microspheres were incubated for 1 h, and then 1 µL of sodium cyanoborohydride (5 M in 1 M NaOH) was added. The reaction was allowed to proceed for 30 min on ice. The microspheres were then washed three times with PBS and stored in 0.1 M sodium phosphate (pH 6.0).
Assays
Competitive immunoassay dose response curves were generated first in buffer and then with the spiked food samples. Briefly, a sample containing 1000 ng/mL STX or 200 ng/mL DA was added to a well of a 96-well microtiter plate and serially diluted (1:4 for STX and 1:3 for DA) such that 90 µL remained in each well. Next, 10 µL of the biotinylated scFvs (bt-anti-STX, 5F7, at 2.5 µg/mL final for STX and bt-anti-DA at 10 µg/mL final) were added, followed by 10 µL of toxin-coated beads. The foil-covered plate was placed on a FINEPCR micromixer MX4t (Gyeonggi-Do, Korea) for 30 min at room temperature. Using a 96-well flat magnetic plate (BioTek, Winooski, VT, USA), the supernatant was removed and the beads were washed with PBSTB. For signal generation, 50 µL of 2.5 µg/mL SA-PE were added to each well and the plate was incubated on the shaker for 15 min, followed by a wash with PBSTB. Based on previous work, which showed a ~5-fold increase in signal [50], a second round of SA-PE was performed as follows. Fifty µL of biotinylated anti-SA (1 µg/mL) were added, incubated for 15 min, and washed off with PBSTB. Lastly, another 50 µL of 2.5 µg/mL SA-PE were added and the mixture was incubated for 15 min. The beads were washed twice with PBSTB. PBSTB (100 µL) was added to each well and the plate was analyzed with a Luminex MAGPIX. For the assay using the intact mAbs, only a single 30 min incubation with SA-PE (2.5 µg/mL) was performed. Percent inhibition was calculated using the following equation:

  % inhibition = 100 − [(signal/blank signal) × 100]

Supplementary Materials: The following are available online at www.mdpi.com/2072-6651/8/11/346/s1, Figure S1: Percent inhibition dose response curves for saxitoxin and domoic acid. Figure S2: Dose response curves for mixed saxitoxin and domoic acid assay in buffer.
Figure 2. Surface plasmon resonance evaluation of anti-STX mAbs. The binding affinities of anti-STX mAbs 5F7 and 1E8 were each evaluated on a surface with immobilized STX-HuIgG. Each mAb was tested simultaneously at six concentrations, with an association time of 90 s and a dissociation time of 600 s. See the Experimental Section for additional details.
Figure 3 .
Figure3.MAGPIX xMAP STX competitive immunoassay using mAbs.Each mAb was biotinylated and tested at 1 µg/mL in a competitive assay using STX-HuIgG coated MagPlex beads as described in the experimental section.Additional control bead sets are not shown.The graph is compiled from separate STX dose response assays for each of the mAbs.
have been demonstrated using the Luminex system [17,45,46]. In this article, an xMAP assay monitored using a MAGPIX system for the individual detection of STX and DA utilizing scFv recognition elements was demonstrated (Figures 4 and 5, and Figure S1).
Figure 4. Dose response curves for STX and DA. The left side shows the % inhibition of the dose responses for STX in buffer (top), oysters (middle), and bay scallops (bottom). The right side shows the % inhibition of the dose response curves for DA in buffer (top), oysters (middle), and bay scallops (bottom). Data shown are from four to six replicates plus their SEMs.
Figure 5. Average fluorescence dose response curves for STX and DA. The left side shows the STX dose responses in buffer (blue circles), oysters (orange squares), and bay scallops (grey triangles); the right side shows DA. Each point represents the average of both bead sets for three experiments plus their SEMs.
Table 1. Dose response parameters for STX and DA in spiked buffer and food matrices.
Table 2. Matrix effects on signal-to-background.
Avg and SD of two bead sets from two separate experiments. * ((Max_matrix − Min_matrix)/(Max_buffer − Min_buffer)) × 100. | 7,726.6 | 2016-11-01T00:00:00.000 | [ "Biology", "Chemistry", "Environmental Science" ] |
Displaced vertex searches for sterile neutrinos at future lepton colliders
We investigate the sensitivity of future lepton colliders to displaced vertices from the decays of long-lived heavy (almost sterile) neutrinos with electroweak scale masses and detectable time of flight. As future lepton colliders we consider the FCC-ee, the CEPC, and the ILC, searching at the Z-pole and at the center-of-mass energies of 240, 350 and 500 GeV. For a realistic discussion of the detector response to the displaced vertex signal and the Standard Model background we consider the ILC's Silicon Detector (SiD) as benchmark for the future lepton collider detectors. We find that displaced vertices constitute a powerful search channel for sterile neutrinos, sensitive to squared active-sterile mixing angles as small as $10^{-11}$.
Introduction
Neutrino oscillation experiments have provided convincing evidence that at least two of the light neutrinos are massive. The absolute mass scale of the light neutrinos is bounded from above at about 0.2 eV by neutrinoless double beta decay experiments and cosmological constraints; see, for instance, refs. [1,2] for recent reviews.
An efficient and elegant extension of the Standard Model (SM), that aims at generating the light neutrinos' masses, is given by adding sterile ("right-handed") neutrinos to its field content (see e.g. ref. [3] and references therein). The sterile neutrinos can have a so-called Majorana mass as well as Yukawa couplings to the three active neutrinos and to the Higgs doublet. When the electroweak symmetry is broken, the sterile and active neutrinos mix, which yields light and heavy mass eigenstates that are each subject to a number of experimental constraints.
The naïve one-family type I seesaw relation, given by $m_\nu \approx y^2 v_{EW}^2 / M$, imposes either tiny neutrino Yukawa couplings $y$ (for Majorana masses $M$ at the electroweak scale) or Majorana masses around the Grand Unification scale (for Yukawa couplings of order one), such that an observation of this kind of heavy neutrino at colliders is not very promising. This relation does not hold for seesaw scenarios with two or more sterile neutrinos and a protective symmetry, e.g. a "lepton-number-like" symmetry. In those scenarios, no constraints on the neutrino Yukawa couplings and the Majorana masses arise from the light neutrinos' mass scale (see e.g. refs. [4]). In this case the Yukawa couplings (or, alternatively, the active-sterile mixing angles) are theoretically unsuppressed, which in principle allows for effects to be searched for at particle colliders. A very interesting effect arises from heavy neutrinos with masses below the W boson mass and with very small mixings. Such heavy neutrinos have suppressed couplings to the W and Z as well as to the Higgs boson h, which leads to a lifetime long enough for a potentially visible displacement from the interaction point. Via virtual W, Z and h they decay into the kinematically available SM particles. This effect of a secondary vertex from the decays of the heavy neutrino, displaced from the primary vertex, yields an exotic signature and constitutes a powerful search channel.

2 The symmetry protected seesaw scenario

Sterile (or right-handed) neutrinos can have Majorana masses around the electroweak (EW) scale and unsuppressed active-sterile mixings when they are subject to a "lepton-number-like" symmetry. The relevant features of seesaw models with this kind of protective symmetry, cf. refs. [4] for models with similar structures, may be represented by the benchmark model that was introduced in [10], which we refer to as the Symmetry Protected Seesaw Scenario (SPSS). The SPSS considers a pair of sterile neutrinos $N_R^I$ ($I = 1, 2$) and a suitable "lepton-number-like" symmetry, where $N_R^1$ ($N_R^2$) has the same (opposite) charge as the left-handed $SU(2)_L$ doublets $L^\alpha$ ($\alpha = e, \mu, \tau$). Light neutrino masses and other lepton-number-violating effects can be introduced by a small deviation from the exact symmetry limit. The Lagrangian density of the SPSS, in the symmetric limit, is given by

$\mathcal{L} = \mathcal{L}_{SM} - \overline{N_R^1} M (N_R^2)^c - y_{\nu\alpha} \overline{N_R^1} \tilde{\phi}^\dagger L^\alpha + \mathrm{H.c.}, \quad (1)$

where $\mathcal{L}_{SM}$ contains the usual SM field content, and with $L^\alpha$ and $\phi$ being the lepton and Higgs doublets, respectively. The $y_{\nu\alpha}$ are the complex-valued neutrino Yukawa couplings, and the Majorana mass $M$ can be chosen real without loss of generality. We note that the SPSS allows for additional sterile neutrinos, provided their mixings with the other neutrinos are negligible or their masses are very large, such that their effects decouple. This is a minimal framework that can explain the two observed mass squared differences of the light neutrinos, and it features four independent parameters relevant for collider experiments, namely the three $y_{\nu\alpha}$ and $M$. From eq. (1) we can derive the mass matrix $\mathcal{M}$ of the neutral fermions, which can be diagonalised with the unitary leptonic mixing matrix $U$ (a parametrization to $\mathcal{O}(\theta^2)$ can be found for instance in ref. [10]); in the basis of the three active neutrinos and the two sterile states it has the block form

$\mathcal{M} = \begin{pmatrix} 0_{3\times3} & m & 0 \\ m^T & 0 & M \\ 0 & M & 0 \end{pmatrix}, \qquad m_\alpha = \frac{y_{\nu\alpha}\, v_{EW}}{\sqrt{2}}, \quad (2)$

with $v_{EW}$ the Higgs vacuum expectation value. The mass eigenstates are the three light neutrinos $\nu_i$ ($i = 1, 2, 3$), which are massless in the symmetric limit, and two heavy neutrinos $N_j$ ($j = 1, 2$) with degenerate mass eigenvalues.
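To make the naive suppression quantitative, here is a minimal numerical sketch of the relation $m_\nu \approx y^2 v_{EW}^2/M$ quoted above, assuming the convention in which the Dirac mass is $m_D = y\,v$ with $v \approx 174$ GeV; the numerical inputs are illustrative.

```python
# Naive one-family type I seesaw: m_nu ~ y^2 v^2 / M (convention m_D = y * v, v = 174 GeV).
V = 174.0        # GeV
M = 100.0        # GeV, assumed electroweak-scale Majorana mass
M_NU = 0.1e-9    # GeV, i.e. a 0.1 eV light neutrino mass

y = (M_NU * M) ** 0.5 / V      # Yukawa coupling implied by the naive relation
theta2 = (y * V / M) ** 2      # naive squared active-sterile mixing, equals m_nu / M

print(f"y ~ {y:.1e}, |theta|^2 ~ {theta2:.1e}")   # y ~ 5.7e-07, |theta|^2 ~ 1.0e-12
```

The naive mixing $|\theta|^2 = m_\nu/M \sim 10^{-12}$ sits below the best sensitivities reported later in this paper, which illustrates why the symmetry-protected scenario, where $|\theta|^2$ is not tied to $m_\nu/M$, is the interesting target for collider searches.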
Figure 2: Production cross section for heavy neutrinos at the FCC-ee, CEPC and ILC at different center-of-mass energies, divided by the square of the active-sterile neutrino mixing angle. Initial state radiation is included in both plots, and for the ILC an (L,R) polarisation of (80%, 30%) and beamstrahlung are also included.

The mixing of the active and sterile neutrinos can be quantified by the mixing angles and their magnitude,

$\theta_\alpha = \frac{y_{\nu\alpha}\, v_{EW}}{\sqrt{2}\, M}, \qquad |\theta|^2 = \sum_\alpha |\theta_\alpha|^2. \quad (3)$

Due to the mixing between the active and sterile neutrinos, the light and heavy neutrino mass eigenstates interact with the weak gauge bosons. The present constraints from past and ongoing experiments and the sensitivities of future lepton colliders to the heavy neutrinos in the SPSS have been presented and discussed in [10][11][12]. Further observable features of models with right-handed neutrinos have been investigated with respect to collider phenomenology in [13].
Vertex displacement of heavy neutrinos
In this section we introduce the preliminaries for the search for long-lived heavy neutrinos via displaced vertices. To this end, we present the production mechanism for heavy neutrinos and the corresponding cross sections at lepton colliders for various center-of-mass energies. After its production, a long-lived heavy neutrino can travel a finite distance before it decays; this is a stochastic process and follows an exponential probability distribution. We quantify the number of heavy neutrinos that can be expected with a specific displacement.
The mechanisms of heavy neutrino production at e⁺e⁻ colliders are mediated by the weak gauge bosons, as depicted by the Feynman diagrams in fig. 1. We define the heavy neutrino production cross section, to leading order in the small active-sterile mixing, by

$\sigma_{\nu N}(\sqrt{s}) = \sum_{i=1}^{3} \sum_{j=1}^{2} \sigma(e^+ e^- \to \nu_i N_j), \quad (4)$

where we sum over all the light neutrinos ($i = 1, 2, 3$) and the two heavy neutrinos ($j = 1, 2$), and the cross section is a function of the center-of-mass energy $\sqrt{s}$. We implemented the SPSS via FeynRules [14] in the Monte Carlo event generator WHIZARD 2.2.7 [15,16] and evaluated the heavy neutrino production cross section, including initial state radiation for all colliders and lepton beam polarisation for the ILC, for the center-of-mass energies 90, 250, 350 and 500 GeV, as shown in fig. 2. At the Z pole, the production of heavy neutrinos via the s-channel Z boson is dominant, which is sensitive to all neutrino Yukawa couplings |y_να| (α = e, µ, τ). For larger center-of-mass energies, however, the dominant contribution to the production cross section of the heavy neutrinos comes from the t-channel exchange of a W boson, which is sensitive only to |y_νe|.

The heavy neutrinos decay into SM particles, and their lifetime τ is given by the inverse of the decay width Γ_N. For heavy neutrinos lighter than the W boson, the decays occur via off-shell gauge and Higgs bosons and are therefore suppressed. Furthermore, for small active-sterile mixing angles, the |θ|² dependence of Γ_N can render the heavy neutrinos long-lived compared to SM particles. We evaluate the proper lifetimes from the decay widths for heavy neutrino masses larger than 1 GeV with WHIZARD. In fig. 3 we show the resulting proper lifetime as a function of the heavy neutrino mass, together with the analytical formula from ref. [17],

$\tau_e = 4.15 \times 10^{-12}\,\mathrm{s}\,\left(\frac{10^{-6}}{|\theta_e|^2}\right)\left(\frac{10\,\mathrm{GeV}}{M}\right)^5, \quad (5)$

where only the neutrino coupling to the electron flavour is considered. The heavy neutrino lifetime in eq. (5) is valid for M < m_W, and the analytical formula is in good agreement with the obtained numerical result. As one can see in the figure, the lifetime is reduced as additional decay channels open up, especially when the heavy neutrino mass exceeds m_W, m_Z or m_h, where the decay channels via on-shell gauge or Higgs bosons become efficient.

Depending on its lifetime, the heavy neutrino can travel a finite distance after its production at a particle collider before it decays. The heavy neutrino lifetime in the laboratory frame is related to the proper lifetime by

$\tau_{\rm lab} = \gamma\, \tau, \quad (6)$

with the Lorentz factor $\gamma = \sqrt{M^2 + |\vec{p}_N|^2}/M$, where $\vec{p}_N$ is the three-momentum of the heavy neutrino. Because the production of the heavy neutrino is a 2→2 process with one massive particle in the final state, the magnitude of the three-momentum can be expressed as

$|\vec{p}_N| = \frac{s - M^2}{2\sqrt{s}}, \quad (7)$

with $\sqrt{s}$ being the center-of-mass energy. The decay of a long-lived heavy neutrino is a stochastic process and follows an exponential probability distribution. Its probability to decay with a displacement x from the primary vertex, with x₁ ≤ x ≤ x₂ (where x₁ and x₂ are an inner and an outer boundary), is given by

$P(x_1, x_2) = e^{-t_1/\tau_{\rm lab}} - e^{-t_2/\tau_{\rm lab}}, \quad (8)$

with $t_i = x_i/|\vec{v}|$ and the velocity $|\vec{v}| = |\vec{p}_N|/E_N$ (in natural units). Combining production and decay of heavy neutrinos at lepton colliders yields the expected number of heavy neutrinos that are produced at the interaction point and decay with a displacement of at least x₁ and at most x₂:

$N(x_1, x_2, \sqrt{s}, L) = \sigma_{\nu N}(\sqrt{s})\, L \left( e^{-t_1/\tau_{\rm lab}} - e^{-t_2/\tau_{\rm lab}} \right), \quad (9)$

with the integrated luminosity L and the heavy neutrino production cross section from eq. (4) and fig. 2; a minimal numerical sketch of this chain of formulas follows below. Contextualising eq. (9) with the considered future lepton collider experiments is the subject of the next section.
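The following sketch chains eqs. (5)-(9) together; the cross section is an external input, and the placeholder value below is hypothetical, standing in for a value read off fig. 2 for the chosen collider and mixing.

```python
import math

C_CM_S = 2.998e10   # speed of light in cm/s

def tau_e(theta2: float, M: float) -> float:
    """Proper lifetime of eq. (5) in seconds (electron coupling only)."""
    return 4.15e-12 * (1e-6 / theta2) * (10.0 / M) ** 5

def expected_events(sigma_fb, lumi_ifb, M, sqrt_s, tau, x1_cm, x2_cm):
    """Expected heavy neutrino decays with displacement x1 <= x <= x2, eq. (9)."""
    s = sqrt_s ** 2
    p = (s - M ** 2) / (2.0 * sqrt_s)        # eq. (7): |p_N| in GeV
    e_n = math.sqrt(M ** 2 + p ** 2)         # heavy neutrino energy in GeV
    gamma, beta = e_n / M, p / e_n           # Lorentz factor and |v|/c
    tau_lab = gamma * tau                    # eq. (6): lab-frame lifetime in s
    t1, t2 = x1_cm / (beta * C_CM_S), x2_cm / (beta * C_CM_S)
    prob = math.exp(-t1 / tau_lab) - math.exp(-t2 / tau_lab)   # eq. (8)
    return sigma_fb * lumi_ifb * prob        # sigma in fb, L in fb^-1

# Hypothetical example: |theta|^2 = 1e-9, M = 30 GeV at the Z pole,
# decays between 10 um (1e-3 cm) and 249 cm:
theta2 = 1e-9
sigma = 1.0e10 * theta2   # fb; placeholder for sigma/|theta|^2 from fig. 2
print(expected_events(sigma, 110_000, 30.0, 91.19, tau_e(theta2, 30.0), 1e-3, 249.0))
```

The same helpers are reused in the sensitivity scan sketched in the section on resulting sensitivities.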
Displaced vertices at future lepton colliders
The search for sterile neutrinos via displaced vertices is considered at the planned future lepton colliders, the FCC-ee, the CEPC, and the ILC, each with its own physics program, as shown in fig. 4. We note that we chose the operation scenario G-20 for the ILC because it considers the most integrated luminosity at 500 GeV, anticipating a more promising sensitivity. We furthermore add the Giga-Z operation, for which we reckon with 100 fb⁻¹ (resulting in ∼ 10⁹ Z bosons at the Z pole).

Figure 4: Proposed modi operandi, defined by target integrated luminosities for each center-of-mass energy, for the considered future lepton colliders. For the FCC-ee [18] we use the product of the target instantaneous luminosities from [19] (for two interaction points) and the envisaged run-times, and the Higgs run with a center-of-mass energy of 240 GeV. For the CEPC we use the exemplary integrated luminosities from the preCDR [20]. For the ILC [21] we consider the G-20 operation scenario from ref. [22], and we further include the Giga-Z operation.
Apart from the explicit modus operandi of a future lepton collider, the search for heavy neutrinos via displaced vertices also depends on the detector layout and its performance parameters. In the following we assess the detectability of the signal and possible SM backgrounds. For definiteness we consider the ILC's Silicon Detector (SiD) [23,24] as benchmark, which is chosen as an example and can be expected to yield a performance that is comparable to other planned detectors, e.g. the ILD [24,25].

Table 1: SiD barrel structure, radii in cm. Taken from ref. [24].
The SiD's integrated tracking system is developed for the particle flow algorithm; it consists of a powerful silicon pixel vertex detector, silicon tracking, silicon-tungsten electromagnetic calorimetry (ECAL) and highly segmented hadronic calorimetry (HCAL). Furthermore, the detector layout incorporates a high-field solenoid and an iron flux return that is instrumented as a muon identification system. The SiD geometry, separated into the barrel and the endcap, allows for a high level of hermeticity with uniform coverage and a transverse impact parameter resolution of ∼ 2 µm over the full solid angle. In the following, we assume a spherical symmetry for the SiD, which is sufficient for our analysis, and use the radii of the individual detector components from the barrel part, which are summarised in tab. 1.
Signal and background
We call the SiD response to the heavy neutrino decay products the signal, and its response to SM processes the background.
For heavy neutrino masses M < m_W, the possible final states are the fermion pairs accessible via the off-shell W, Z and h bosons; their branching ratios have only a small dependency on the mass M, which will be neglected in the following. The heavy neutrinos are produced together with a light neutrino, such that their decay products are always associated with missing momentum. The experimental signature that arises from the decay of a long-lived heavy neutrino is given by exactly one secondary vertex, from which all visible particles in the detector originate.
The striking feature of only one visible secondary vertex makes the experimental signature of heavy neutrino decays very distinct from possible SM processes. The following discussion of the backgrounds is based on the simulation of O(10⁷) SM events with WHIZARD [15,16] that were reconstructed with DELPHES [26] using the DSiD detector card [27]. We consider SM processes with the following final states: ff, ffγ, ffγγ, ffνν and ℓνqq, with f being a charged lepton or a quark, γ a photon, ℓ a charged lepton and ν a light neutrino. Of the considered final states, especially ff and events with neutrinos may give rise to a viable background in the following ways:

• Loss of particles in the beam pipe: Final state particles with a sufficiently small transverse momentum can remain inside the beam pipe and thus escape detection. If one such particle recoils, e.g. against an ISR photon, it may get "kicked" into the detection volume, featuring typically a very small angle to the beam axis. This type of event could be vetoed via the hard photon or, similarly, via the angle between the beam axis and the visible particle. Furthermore, in this type of background the overall charge of the event may be measured as non-zero, which could provide the most powerful veto.
• Mis-reconstructed events: It is possible that a reconstruction algorithm does not identify a normally visible particle. These events can be vetoed via energy deposits that are located in the detector region opposite the observed particle. Moreover, the overall charge can also be used as a veto.

Figure 5: Schematic illustration of the signal, which is given by the decay of a heavy neutrino at a distance cτ from the interaction point. The SM background is given by two light neutrinos (νν) and two long-lived mesons m and m*, which decay sufficiently close to each other that only one secondary vertex can be resolved. The detector resolution δx depends on the detector component.
• Merging of secondary vertices: When particles with finite lifetimes are produced in pairs and decay sufficiently close to each other that their individual secondary vertices cannot be resolved from the tracking information, this can constitute a background. It implies that the particles have to be emitted within a very narrow solid angle, which necessitates the production of additional invisible particles (i.e. light neutrinos) to balance the overall momentum. This removes the contribution from the ff events.
In the following, we assume that the above-mentioned vetoes remove all the possible backgrounds from processes with one lost or mis-reconstructed particle. Although a loss of signal efficiency is to be expected from such vetoes, we do not consider this in the following. For a quantitative statement on the veto efficiency, a detailed analysis of this type of event after a full detector simulation is needed, which is beyond the scope of the present analysis. This promotes the merging of secondary vertices to the primary source of background for the displaced vertex searches. We show a schematic illustration of the displaced signal and backgrounds in fig. 5. The probability that the decays of both SM particles occur within an unresolvable distance can be assessed with eq. (8) and by taking into account the narrow solid angle. When isotropic emission of the fermionic final states is assumed, see fig. 6, the fraction of two mother particles that are emitted into a narrow solid angle can be estimated by Ω/4π, with

$\Omega = 2\pi \int_0^{\alpha} \sin\theta \, d\theta = 2\pi\,(1 - \cos\alpha), \qquad \alpha = \arcsin\!\big(\delta x/(2x)\big),$

where δx is the spatial resolution of the detector and x the distance from the IP (i.e. the displacement).
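As a quick cross-check of how strongly this geometric requirement suppresses the merged-vertex background, here is a minimal sketch of the Ω/4π estimate; the input numbers are illustrative.

```python
import math

def merging_suppression(delta_x: float, x: float) -> float:
    """Fraction of isotropically emitted mother-particle pairs whose decay
    vertices cannot be separated (resolution delta_x) at displacement x."""
    alpha = math.asin(min(1.0, delta_x / (2.0 * x)))
    omega = 2.0 * math.pi * (1.0 - math.cos(alpha))  # Omega = 2*pi*(1 - cos(alpha))
    return omega / (4.0 * math.pi)

# e.g. a 6 um two-vertex resolution at a displacement of 1 mm:
print(merging_suppression(6e-4, 0.1))   # ~ 2e-6
```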
To assess the expected signal efficiencies, we simulated 10⁶ events of semileptonic decays for M = 10 and 40 GeV, respectively. The fast reconstruction with DSiD yields a signal efficiency of ∼ 80% for finding at least one jet for M = 10 GeV. For M = 40 GeV we find efficiencies of ∼ 99%, ∼ 60% and ∼ 20% for finding one lepton, one jet, and two jets, respectively. Since one visible object is sufficient for the identification of a displaced vertex, we assume in the following that the signal can be observed with 100% efficiency.
Detector response: the search for heavy neutrinos via displaced vertices
The vertex displacement x is defined by the distance between the primary vertex where a mother particle (e.g. the heavy neutrino or a SM particle with finite lifetime) was produced, and the secondary vertex where the mother particle decays into a number of daughter particles. Since the primary vertex is experimentally unknown, we consider the center of the interaction point instead and use its extension as uncertainty.
Depending on the displacement of the long-lived heavy neutrino, its decay can take place in any of the SiD's detector components. Therefore, every component can be considered as an independent probe for displaced vertices with a well-defined boundary given by its extension, see tab. 1. In the following, we discuss the search for long-lived heavy neutrinos by investigating the individual response of the SiD components to their decays, and possible backgrounds.
Inner region:
We define the inner region as the volume that is enclosed by the vertex detector. The vertex displacement x can generally be inferred from the tracks of the decay products in the vertex detector and the tracker. The precision of x is limited by the resolution of the tracker and the spatial extension of the interaction point (IP). At the ILC the IP has a vertical extension of ∼ 10 nm, and we assume a vertical extension of ∼ 250 nm for the circular FCC-ee and CEPC for all the modi operandi. The impact parameter resolution of the SiD in the transverse plane is ∼ 2 µm. We note that, due to our assumption of a spherical detector geometry, this parameter is also valid in the longitudinal direction. We therefore consider the resolution for displaced vertices, x_res, i.e. the minimum vertex displacement that is separable at 3σ from the IP, to be given by 6 µm for the ILC and 7 µm for the FCC-ee/CEPC. We remark that this resolution is strictly valid for vertical displacements only. We note that for an accurate assessment of the resolution, the entire geometry of the detector and of the IP would have to be taken into account, which, however, is beyond the scope of this paper.
Conventional search, x < x_res: For heavy neutrino decays with x smaller than x_res, the vertex displacement cannot be used to distinguish between signal and background. This necessitates a conventional search, where the kinematic distributions are used to distinguish the heavy neutrino signal from the SM background. For instance, such a search for heavy neutral leptons produced in Z decays was conducted by DELPHI at LEP I [9]. The dominant SM background is given by four-fermion semileptonic and hadronic final states with missing momentum, namely ℓ±νqq and ffνν, for f being a charged SM fermion and q = u, d, s, c, b. We assume that a conventional search for heavy neutrinos is possible as long as their decays have a displacement smaller than the outer radius of the tracker.
Search for displaced vertices, x ≥ x_res: The SM particles with lifetimes that can lead to a displaced secondary vertex occurring dominantly in the inner region are the π⁰ meson (cτ ∼ 20 nm), the τ lepton (cτ ∼ 0.1 mm), and the D and B mesons (with cτ ∼ 0.1 to 0.5 mm). Any of these particles can fake the heavy neutrino signal, given that they result in only one secondary vertex, implying that they are accompanied by two light neutrinos. In the inner region, two individual vertices cannot be separated when they are closer than 6 µm, which is set by the tracking resolution of the decay products at 3σ and is not to be confused with x_res.
We estimate the production cross section of the neutral pions to be smaller than σ(e⁺e⁻ → qqνν) ≈ 100 fb (for √s = m_Z and without polarisation), which sets an upper limit of 10⁷ events (at the FCC-ee for 110 ab⁻¹), among which some may fake the heavy neutrino signal. Most pions decay into two photons, which cannot be mistaken for a signal event. The fraction of events that lead to a "signal-like" final state, for instance ννe⁺e⁻γ, is only O(10⁻⁷). Furthermore, demanding that the two secondary vertices cannot be distinguished from each other at 3σ, i.e. that their respective vertices are at most 6 µm apart (see fig. 5), reduces the number of events by another factor of 10⁻³. The number of "signal-like" background events from τ leptons is slightly larger than that from D and B mesons. Considering the simultaneous decay of two τ leptons with cross section σ(e⁺e⁻ → τ⁺τ⁻νν) ≈ 2 fb at √s = m_Z (no polarisation), we find that less than one event can be expected with a displacement of x ≥ 10 µm for 110 ab⁻¹ at the FCC-ee. We find that the backgrounds at higher center-of-mass energies are also effectively suppressed below one event for x > 10 µm (by the requirement that the mesons or the τ leptons are emitted into a narrow solid angle). Another background for the searches at the Z pole may be the process e⁺e⁻ → τ⁺τ⁻γ, due to its large cross section of ∼ 1.6 nb. When the γ is hard and emitted in the direction of the beam pipe, it can escape detection. The condition of close-by decays of the two tau leptons within the inner region yields a suppression factor of ∼ 10⁻⁶, which leaves e.g. ∼ 10⁵ potential background events at the FCC-ee. We remark, however, that the invariant mass of the decay products may allow discriminating against this and similar backgrounds when the mass M of the heavy neutrino is larger than the combined rest masses ∼ m_m + m_m* (e.g. ∼ 2m_τ) of the two decaying particles. This may allow resolving vertex displacements closer to the IP than 10 µm for M > 2m_τ, while maintaining an almost background-free environment.
Vertex detector and tracker
The vertex detector is designed to detect the displaced vertices of heavy flavours for their efficient identification. The highly efficient charged particle tracking makes it possible to recognise and measure prompt tracks in conjunction with the ECAL. The vertex displacement can be inferred from the impact parameters of the reconstructed tracks. We note at this point that the impact parameter resolution of the SiD degrades when the heavy neutrino decays take place deep inside the tracker. In this case not all the silicon layers would respond to the tracks from the secondary vertex, which would reduce the resulting impact parameter resolution. Because we expect all the future lepton colliders to have at least one detector with continuous tracking, which has a larger number of layers and thus might experience less degradation of the resolution, we ignore this effect in the following.
For heavy neutrino decays inside the vertex detector/tracker, all kinematic information on the decay products is available, in particular the vertex displacement. Moreover, since the heavy neutrinos are neutral, the displacement becomes directly visible as an appearing secondary vertex from which the decay products emerge.
SM particles whose vertex displacements result in decays taking place dominantly within the vertex detector are the K_S meson (cτ ∼ 2.68 cm) and the Λ baryon (cτ ∼ 7.89 cm). We estimate that the requirement that two SM particles with finite lifetimes be emitted into a narrow solid angle and decay close to each other reduces the background by a factor < 10⁻¹⁶ for both the K_S meson and the Λ baryon, such that those contributions are completely negligible.
ECAL and HCAL
The calorimeter system has imaging capabilities that allow for efficient track-following, with a pixel size of ∼ 4 µm for the ECAL and ∼ 1 cm for the HCAL, which allows a correct association of energy clusters with tracks.
The heavy neutrino signal, for vertex displacements that result in decays in the calorimetric system, consists of one or more clusters of energy deposits that should be connected and consistent with one secondary vertex. Defining features of this signal are the absence of tracks and a significant momentum imbalance. Due to the lack of tracking information, the decay products may, however, be identified as electrically neutral particles.
The ECAL could record the leptonic part of the signal as a photon. The SM background for this signal contains at least one photon and missing energy, for instance a pair of light neutrinos and a hard photon, ννγ. The production cross section for this background process is O(1) fb at √s = m_Z for photon energies ∼ 10 GeV, and up to O(100) fb at √s = 500 GeV for the ILC. It may be possible to separate the signal efficiently from this background, but, to be conservative, we shall not consider the leptonic decays of the heavy neutrinos that take place inside the ECAL. The hadronic and semileptonic decays of the heavy neutrinos that take place in the HCAL could be recorded as neutral hadrons. The SM particle that could fake this signal is the K_L meson (cτ ∼ 15.34 m). We estimate the corresponding production cross section to be smaller than that of qqνν, which is O(1) ab for √s = m_Z and ∼ 600 fb for √s = 500 GeV.
Viable backgrounds can come from τ⁺τ⁻γ and τ⁺τ⁻νν events in which the tau leptons decay into two K_L. If the tau leptons are collimated such that the two K_L enter the HCAL separated by at most 1 cm, so that the readout cannot indicate that the energy deposits are disconnected, this process may yield a fake signal event. The conservation of charge, however, leaves a charged lepton or a charged meson among the decay products of each tau lepton, which can be used as a veto against such an event. Furthermore, the τ⁺τ⁻γ events are very close to the beam axis, which allows suppressing them against the signal distribution if necessary. We therefore assume that no background events remain.
Muon identification system
The muon-detecting photomultipliers, intertwined with the steel layers of the solenoid flux return, identify muons from the interaction point with high efficiency and reject most of the remaining hadrons that spill over from the HCAL. The muon selection combines the information from the tracker, the calorimetric system, and the muon detectors to reconstruct the muon candidates.
A highly relativistic heavy neutrino will reach the outer radius of the flux return yoke in about 20 ns. If it decays inside the flux return yoke, the resulting decay products interact with the scintillator bars. The ensuing photons are detected by the photomultipliers and, due to the absence of information from the calorimeters and the tracker, should not be identified as muon candidates. Instead, they leave a number of hits in the photomultipliers for which there are no SM background processes from leptonic collisions.
The background in this case is given by cosmic ray muons, which can be rejected efficiently by correlating the corresponding hits with the beam collision time. Further backgrounds are given by muons that are created in the interaction of the electron or positron beam with the beam-delivery system and subsequently traverse the detector parallel to the beam line, also referred to as "fliers". We assume that all the visible heavy neutrino decay channels leave a measurable imprint in at least one layer of the photomultipliers.
Combined response of the SiD
In the following we discuss the sensitivities of displaced vertex searches for long-lived heavy neutrinos from the individual components of the SiD, and their combination.
Every individual detector component is (in principle) sensitive to the signal from long-lived heavy neutrinos, and a signal significance S can be established via

$S = \frac{N_S}{\sqrt{N_S + N_B}}, \quad (11)$

where N_S is the number of signal events and N_B the number of SM background events inside the component's volume. The number of signal events, N_S = N(x₁, x₂, √s, L), given by eq. (9), inside a detector component (with x₁ and x₂ being its inner and outer radii) is controlled by the production cross section σ_νN and the lifetime τ_lab of the heavy neutrinos, both of which depend on the squared active-sterile mixing |θ|² and the heavy neutrino mass M. Therefore, a sensitivity of 2σ for the heavy neutrino search via displaced vertices at a given mass M can be defined by the value of |θ|² that results in a significance larger than 2. This relation maps the sensitivity of each detector component to the signal onto the heavy neutrino parameter space.

Figure 7: Schematic illustration of the sensitivity of the different detector components to heavy neutrino decays as a function of the active-sterile mixing parameter and the heavy neutrino mass. The parameter r_D is the outer radius of the muon system. We note that the sensitivities of the individual components overlap, such that it is not possible to assign one responsive detector component to one specific set of heavy neutrino parameters.

We show a schematic illustration of the discussed mapping of the SiD components into the heavy neutrino parameter space in fig. 7. Therein each component is assigned a distinct color, and the order corresponds to the layers of the components inside the SiD. Sets of parameters to the right of the inner region lead to a vertex displacement that is indistinguishable with the considered vertex resolution, which necessitates a conventional search. For sets of parameters to the left of the muon system, the decays take place dominantly outside the detector and are thus invisible. The horizontal lines in the figure denote the number of heavy neutrino decays that are to be expected, which scale with |θ|² and are proportional to the cross sections (here taken to be flat) shown in fig. 2. The vertical dashed line denotes the W boson mass and indicates the limit of this search channel. In general, the heavy neutrino mass has to be smaller than the center-of-mass energy of the incident electron-positron beams. However, for M ∼ m_W new decay channels for the heavy neutrinos into on-shell W and Z bosons open up, which renders their lifetimes generally too short to allow for a measurable vertex displacement.
For the combined response of the SiD components, see the discussion in sections 4.2.1 to 4.2.4, we find: the search for long-lived heavy neutrinos via displaced vertices is sensitive to displacements as small as x_res, but only for displacements larger than 10 µm is the search essentially free of irreducible background. As discussed, the detector components are (almost) background-free, with the exception of the ECAL. Furthermore, it is unclear whether heavy neutrino decays that occur close to the outer radius of the muon identification system are registered. Since the mapping of the physical extensions of the ECAL and the muon identification system into the heavy neutrino parameter space shows a considerable overlap with the tracker and the HCAL, we consider vertex displacements between 10 µm and 249 cm (i.e. within the outer radius of the HCAL) as conservative bounds for signal events to be free of background and in principle detectable by the SiD. Notice that for M ≤ m_W the heavy neutrino has a relativistic velocity β ≳ 0.1, such that it decays in the calorimetric system within ∼ 30 ns after the interaction, which may be important when a trigger is used at circular colliders.
We remark that it is very important to include the muon identification system as a probe for displaced vertices from heavy neutrino decays at future lepton colliders, as it may provide independent and complementary information.
Resulting sensitivities
In this section we present the sensitivities of the future lepton colliders to heavy neutrino searches via displaced vertices. The SiD serves as a benchmark detector for all the experiments with the modi operandi of the FCC-ee, CEPC, and ILC from fig. 4.
According to the discussion in section 4.2, we take the heavy neutrino decays with a vertex displacement between 10 µm and 249 cm to be free of background and detectable by the SiD. The absence of SM background implies that the detection of a single event corresponds to the detection of a heavy neutrino signal via displaced vertices with a significance of 1σ, cf. eq. (11). In the following, we demand at least four signal events in order to establish a signal at 2σ.
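The requirement of at least four expected events in the background-free window can be turned into a sensitivity contour with a simple scan; the sketch below reuses the hypothetical expected_events() and tau_e() helpers from the earlier sketch, with sigma_per_theta2 a stand-in for the cross sections of fig. 2.

```python
import numpy as np
# assumes expected_events() and tau_e() from the earlier sketch are in scope

def theta2_window(M, sqrt_s, lumi_ifb, sigma_per_theta2, n_min=4.0):
    """Smallest and largest |theta|^2 giving >= n_min expected decays
    between x1 = 10 um and x2 = 249 cm (the background-free window)."""
    grid = np.logspace(-13, -3, 400)
    ok = [th2 for th2 in grid
          if expected_events(sigma_per_theta2 * th2, lumi_ifb, M, sqrt_s,
                             tau_e(th2, M), 1e-3, 249.0) >= n_min]
    return (min(ok), max(ok)) if ok else None

# e.g. tracing a Z-pole contour over a few masses with a placeholder cross section:
for M in (10.0, 30.0, 60.0):
    print(M, theta2_window(M, 91.19, 110_000, 1.0e10))
```

The upper edge of the window corresponds to mixings so large that the heavy neutrino decays before reaching the 10 µm boundary; the lower edge corresponds to too few heavy neutrinos being produced or decaying inside the detector.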
We show the resulting sensitivities of the FCC-ee, the CEPC and the ILC to the searches for heavy neutrinos via displaced vertices in fig. 8. Parameter sets of masses and active-sterile mixings inside the colored areas lead to at least four events inside the SiD. The overall shape of the colored areas can be understood from the schematic illustration in fig. 7. We checked that including the muon identification system does not significantly affect the resulting sensitivities for any of the considered future lepton colliders and their modi operandi. For comparison, we show estimates for the future sensitivity of the conventional searches at 95% confidence level with the black dashed line [10]. This estimate was obtained by rescaling the 95% C.L. exclusion limit from DELPHI with the Z pole luminosities of the respective future lepton collider.
The left-hand plot in fig. 8 shows that the Z pole run of the FCC-ee yields the highest sensitivity, due to the large envisaged integrated luminosity. This run is sensitive to smaller active-sterile neutrino mixings compared to the estimates for the conventional searches. The physics runs at higher center-of-mass energies show weaker sensitivities compared to the Z pole run, but they still improve on the projected sensitivity of the LHC, which reaches |θ|² ∼ 10⁻⁷ for heavy neutrino masses ∼ 20 GeV, cf. refs. [7].
At the CEPC, the considered modi operandi result in comparable sensitivities for the Higgs run at 250 GeV and the Z pole run, with the former being sensitive to larger heavy neutrino masses. It is interesting to note that, despite the heavy neutrino production cross sections being more than one order of magnitude smaller than at the Z pole run (see fig. 2), the Higgs run also constitutes a feasible search channel for sterile neutrinos via displaced vertex searches, owing to the considered integrated luminosities. We remark that the sensitivities shown here for √s = m_Z are strictly valid only for θ_µ, θ_τ = 0 and |θ|² = |θ_e|².
The sensitivities for the ILC show that the high-energy run at 500 GeV has a much higher sensitivity compared to the Z pole searches, which is, in analogy to the CEPC, due to the considered integrated luminosities. Comparing the Higgs run of the CEPC with the 500 GeV run at the ILC, which both consider the same integrated luminosity, we find that the ILC outperforms the CEPC, due to the larger heavy-neutrino-production cross section with beam polarisation. On the other hand, a significant enhancement of the sensitivities of the CEPC and the ILC at the Z pole run could be achieved when the run times are prolonged.
Summary and Conclusions
In this work, we have investigated the sensitivity to sterile neutrinos with electroweak-scale Majorana masses via the search for displaced vertices at future lepton colliders. The SiD is used as benchmark detector for all the lepton collider experiments, for which we found heavy neutrino signals with vertex displacements between 10 µm and 249 cm to be essentially free of irreducible background, cf. section 4.2.

Figure 8: Resulting sensitivities for the modi operandi of fig. 4. The sensitivities for E_cm = m_Z are understood for |θ|² = |θ_e|² (and θ_µ, θ_τ = 0). The black dashed lines denote the conventional Z pole searches (cf. [10]).
We deepened and extended previous work on displaced vertex searches for sterile neutrinos at future lepton colliders in various ways: We considered an explicit low scale seesaw benchmark model, the SPSS, and calculated the heavy-neutrino-production cross section with WHIZARD, including initial state radiation and initial state polarisation (where applicable). As future lepton colliders, we considered the FCC-ee, the CEPC and the ILC, and included the different center-of-mass energies planned for the respective physics programs, i.e. the Z pole run, the Higgs run at 240 or 250 GeV, the top threshold scan at 350 GeV and, for the ILC, also 500 GeV. For a realistic assessment of the sensitivity, we used the ILC's SiD as benchmark detector and put emphasis on its response to the displaced heavy neutrino signal and the conceivable SM backgrounds. We find that the SiD is sensitive to the signal in an essentially background-free environment (after suitable cuts) for vertex displacements ranging from 10 µm to the outer radius of the HCAL. We expect that removing the backgrounds (cf. section 4.2) with suitable cuts will somewhat reduce the signal efficiency. For instance, the DELPHI experiment at LEP quotes a signal efficiency of ∼ 25%, which would, roughly speaking, shift the maximal sensitivity up by a factor of two. However, the efficiency may be higher at a future detector, closer to the 100% signal efficiency assumed here. We note that, for assessing a more realistic number for the signal efficiency, and also for a better understanding of the response and complementarity of the ECAL, the HCAL and the muon identification system to the heavy neutrino decays within the respective components, a full simulation of the detector acceptance would be desirable.
The resulting sensitivities of sterile neutrino searches via displaced vertices at future lepton colliders are summarized in figure 8 for a confidence level of 2σ. We find that the FCC-ee Z pole run with 110 ab⁻¹ yields the best sensitivity, down to squared active-sterile mixings as small as |θ|² ∼ 10⁻¹¹. Comparing this estimated sensitivity to the one for a conventional search for sterile neutrinos at the Z pole, the displaced vertex search is sensitive to significantly smaller active-sterile mixing angles. It turns out that center-of-mass energies above the Z boson mass can already improve on the present exclusion limits of the LHC and its projected sensitivities of |θ|² ∼ 10⁻⁷ for 300 fb⁻¹. For the CEPC, the Z pole run and the higher-energy run (at 250 GeV) result in comparable sensitivities, while for the ILC the high-energy run (at 500 GeV in the G-20 physics program) results in its best sensitivity.
In summary, our analysis demonstrates that all the modi operandi of all the considered future lepton colliders can improve the present bounds and the projected LHC reach. The highest sensitivities to sterile neutrinos are reached in the mass range between ∼ 10 and 80 GeV. This is complementary to experiments like SHiP [29], which has peak sensitivities at lower masses, around 1 GeV. We thus conclude that the search for displaced vertices at future lepton colliders constitutes a powerful search channel for heavy neutrinos with masses below the W boson mass. | 9,619.4 | 2016-04-08T00:00:00.000 | [ "Physics" ] |
OpenSHS: Open Smart Home Simulator
This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the fields of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset: a small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large, representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design, in which the researcher designs the initial virtual environment by building the home, importing smart devices and creating contexts; second, simulation, in which the participant simulates his/her context-specific events; and third, aggregation, in which the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS).
Introduction
With the recent rise of the Internet of Things, analysing data captured from smart homes is gaining more research interest. Moreover, developing intelligent machine learning techniques that are able to provide services to smart home inhabitants is becoming a popular research area.
Intelligent services, such as the classification and recognition of activities of daily living (ADL) and anomaly detection in elderly daily behaviour, require good datasets that enable testing and validation of the results [1][2][3][4]. The medical field has also recognised the importance of analysing ADLs and the effectiveness of these techniques at detecting patients' medical conditions [5]. These research projects require either real or synthetic datasets that are representative of the scenarios captured from a smart home. However, the cost to build real smart homes and collect datasets for such scenarios is high and sometimes infeasible for many projects [4,[6][7][8][9]. Moreover, several issues face the researchers before actually building the smart home, such as finding the optimal placement of the sensors [10], lack of flexibility [9,11], finding appropriate participants [4,7] and privacy and ethical issues [12].
Even though real smart home datasets exist [13][14][15], they sometimes do not meet the needs of the conducted research project, such as the need to add more sensors or to control the types of the deployed sensors.

The approaches for smart home simulation tools can be divided into model-based and interactive approaches. The model-based approaches use statistical models to generate datasets, while the interactive approaches rely on real-time capturing of fine-grained activities using an avatar controlled by a human/simulated participant. Each approach has its advantages and disadvantages.
From the foregoing, it is apparent that a virtual simulation tool should offer far greater flexibility and lower cost than experiments in an actual, physical smart home [6]. Recent advances in computer graphics, such as virtual reality (VR) technologies, can provide immersive and semi-realistic experiences that come close to the real experience. The simulation tool should also be open and readily available to both the researchers and the test subjects.
Although some research efforts on smart home simulation tools are available in the literature, they suffer from limitations. The majority of these tools are not available in the public domain as open-source projects or are limited to a particular platform. Furthermore, most of the publicly-available simulation tools lack the flexibility to add and customise new sensors or devices.
When generating datasets, the model-based approaches are capable of generating bigger datasets, but the granularity of the captured interactions is not as fine as with the interactive approaches. The interactive approaches, in turn, usually take longer to produce datasets, as they capture the interactions in real time.
In this paper, we present the architecture and implementation of OpenSHS, a novel smart home simulation tool. OpenSHS is a new hybrid, open-source, cross-platform 3D smart home simulator for dataset generation. Its significant contribution is that OpenSHS offers an opportunity for researchers in the fields of the Internet of Things (IoT) and machine learning to produce and share their smart home datasets, as well as to test, compare and evaluate their models objectively. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. OpenSHS includes an extensible library of smart devices that facilitates the simulation of current and future smart home environments. We have designed a replication algorithm for extending and expanding a dataset: a small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events (a sketch of the idea is given below). The replication provides a solution for generating large, representative smart home datasets. Moreover, OpenSHS offers a feature for shortening and extending the duration of the generated activities.
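The replication algorithm itself is not spelled out at this point, but the core idea, tiling a recorded sample along the time axis so that the order of events is preserved, can be sketched as follows; this is a minimal illustration under our own assumptions, not the actual OpenSHS implementation.

```python
from datetime import timedelta

def replicate(events, copies, period=timedelta(days=7)):
    """Tile a recorded sample 'copies' times along the time axis.

    events: chronologically sorted (timestamp, sensor, value) tuples,
            e.g. one recorded week of simulated activity.
    Shifting whole periods keeps the logical order of events intact.
    """
    out = []
    for k in range(copies):
        shift = k * period
        out.extend((ts + shift, sensor, value) for ts, sensor, value in events)
    return out
```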
The rest of this paper is structured as follows: The following section reviews existing real smart home test beds and simulation tools; it concludes by analysing existing smart home simulation tools and comparing them with our proposed tool, OpenSHS. Section 3 presents the architecture of OpenSHS and its implementation. Section 4 presents two usability studies for using OpenSHS by researchers and participants. Section 5 lists the limitations of OpenSHS and the planned future work for this project, and the paper then concludes.
Related Work
The literature is rich with efforts that focus on generating datasets for smart home applications. These efforts can be classified into two main categories, datasets generated either from real smart homes test beds or using smart home simulation tools.
Real Smart Home Test Beds
One of the recent projects for building real smart homes for research purposes was the work carried out by the Centre for Advanced Studies in Adaptive Systems (CASAS) [16], where they created a toolkit called 'smart home in a box', which is easily installed in a home to make it able to provide smart services. The components of the toolkit are small and can fit in a single box. The toolkit has been installed in 32 homes to capture the participants' interactions. The datasets are publicly available online [17].
The TigerPlace [18] project is an effort to tackle the challenges of the growing ageing population, using passive sensor networks implemented in 17 apartments within an elder-care establishment. The sensors include motion sensors, proximity sensors, pressure sensors and other types. The data collection took more than two years for some of the test beds.
SmartLab [19] is a smart laboratory devised to conduct experiments in smart living environments to assess the development of independent living technologies. The laboratory has many types of sensors, such as pressure, passive infrared (PIR) and contact sensors. The participants' interactions with SmartLab are captured in an XML-based schema called homeML [20].
The Ubiquitous Home [21] is a smart home that was built to study context-aware services by providing cameras, microphones, pressure sensors, accelerometers and other sensor technologies. The home consists of several rooms equipped with different sensors. To provide contextual services to each resident, the Ubiquitous Home recognises the resident by providing radio-frequency identification (RFID) tags and by utilising the installed cameras.
PlaceLab [22] is a 1000 sq. ft. smart apartment that has several rooms. The apartment has many sensors distributed throughout each room, such as electrical current sensors, humidity sensors, light sensors, water flow sensors, etc. Volunteering participants can live in PlaceLab to generate a dataset of their interactions and behaviour. The project produced several datasets for different scenarios [23].
HomeLab [24] is a smart home equipped with 34 cameras distributed around several rooms. The project has an observation room that allows the researcher to observe and monitor the conducted experiments. HomeLab aims to provide datasets to study human behaviour in smart environments and investigate technology acceptance and usability.
The GatorTech smart home [25] is a programmable and customisable smart home that focuses on studying the ability of pervasive computing systems to evolve and adapt to future advances in sensor technology.
Smart Home Simulation Tools
Smart home simulation tools can be categorised into two main approaches, according to Synnott et al. [6]: model-based and interactive approaches.
Model-Based Approach
This approach uses pre-defined models of activities to generate synthetic data. These models specify the order of events, the probability of their occurrence and the duration of each activity. This approach facilitates the generation of large datasets in a short period. However, the downside of this approach is that it cannot capture intricate interactions or unexpected accidents that are common in real homes. An example of such an approach is the work done by Mendez-Vazquez et al. [7].
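As an illustration of what such a pre-defined activity model can look like, here is a minimal sketch in which each activity has an occurrence probability and a sampled duration; the activities and numbers are invented for illustration and are not taken from any of the cited tools.

```python
import random

# (activity, probability of occurrence, (min, max) duration in minutes)
MODEL = [
    ("wake_up",    0.99, (1, 5)),
    ("breakfast",  0.90, (10, 30)),
    ("leave_home", 0.80, (1, 3)),
]

def generate_day(model=MODEL, start_minute=7 * 60, seed=None):
    """Sample one synthetic day: a list of (start, activity, duration)."""
    rng = random.Random(seed)
    day, t = [], start_minute
    for activity, prob, (lo, hi) in model:
        if rng.random() < prob:           # does the activity occur today?
            duration = rng.randint(lo, hi)
            day.append((t, activity, duration))
            t += duration                 # the model fixes the order of events
    return day
```

Because whole days can be sampled almost instantly, such a model generates arbitrarily large datasets quickly, which is exactly the strength (and, given its coarseness, the weakness) noted above.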
PerSim 3D [26] is a tool to simulate and model user activities in smart spaces. The aim of this tool is to generate realistic datasets for complex scenarios of the inhabitant's activities. The tool provides a graphical user interface (GUI) for visualising the activities in 3D. The researcher can define contexts and set ranges of acceptable values for the sensors in the smart home. However, the tool is not available freely in the public domain.
SIMACT [27] is a 3D smart home simulator designed for activity recognition. SIMACT has many pre-recorded scenarios that were captured from clinical experiments, which can be used to generate datasets for the recognition of ADLs. SIMACT is a 3D open-source and cross-platform project developed with Java and uses the Java Monkey Engine (JME) [28] as its 3D engine.
DiaSim [29] is a simulator developed using Java for pervasive computing systems that can deal with heterogeneous smart home devices. It has a scenario editor that allows the researcher to build the virtual environment to simulate a certain scenario.
The Context-Aware Simulation System (CASS) [30] is another tool that aims at generating context information and testing context-awareness applications in a virtual smart home. CASS allows the researcher to set rules for different contexts. A rule can be, for example, turn the air conditioner on if a room reaches a specific temperature. The tool can detect conflicts between the rules of the pre-defined contextual scenarios and determine the best positioning of the sensors. CASS provides a 2D visualisation GUI for the virtual smart home.
The Context-Awareness Simulation Toolkit (CAST) [31] is a simulation tool designed to test context-awareness applications and provides visualisations of different contexts. The tool generates context information from the users in a virtual smart home. CAST was developed with the proprietary technology Adobe Flash and is not available in the public domain.
Interactive Approach
Contrary to the previous approach, the interactive approach can capture more interesting interactions and finer details. This approach relies on having an avatar that can be controlled by a researcher, a human participant or a simulated participant. The avatar moves and interacts with the virtual environment, which contains virtual sensors and/or actuators. The interactions can be passive or active. One example of a passive interaction is a virtual pressure sensor installed on the floor: when the avatar walks on it, the sensor detects this and emits a signal (a minimal sketch is given below). Active interactions involve actions such as opening a door or turning the light on or off. The disadvantage of this approach, however, is that generating sufficient datasets is time-consuming, as all interactions must be captured in real time.
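The passive pressure-sensor example can be made concrete with a small sketch; the class layout and geometry are illustrative assumptions, not code from any of the reviewed tools.

```python
from dataclasses import dataclass

@dataclass
class PressureSensor:
    """A virtual binary floor sensor covering the rectangle (x0, y0)-(x1, y1)."""
    name: str
    x0: float; y0: float; x1: float; y1: float
    state: bool = False

    def update(self, avatar_x, avatar_y, timestamp, log):
        pressed = self.x0 <= avatar_x <= self.x1 and self.y0 <= avatar_y <= self.y1
        if pressed != self.state:                 # emit an event only on state change
            self.state = pressed
            log.append((timestamp, self.name, int(pressed)))

# Calling update() once per simulation tick appends (time, "hall_mat", 1/0)
# events to the log as the avatar steps on and off the mat.
sensor = PressureSensor("hall_mat", 0.0, 0.0, 0.6, 0.6)
```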
Park et al. [32] presented a virtual space simulator that can generate inhabitants' data for classifications problems. In order to model inhabitant activities in 3D, the simulator was built using Unity3D [33].
The Intelligent Environment Simulation (IE Sim) [34] is a tool used to generate simulated datasets that capture normal and abnormal ADLs of inhabitants. It allows the researcher to design smart homes by providing a 2D graphical top-view of the floor plan. The researcher can add different types of sensors such as temperature sensors, pressure sensors, etc. Then, using an avatar, the simulation can be conducted to capture ADLs. The format of the generated dataset is homeML [20]. To the knowledge of the authors, IE Sim is not available in the public domain.
Ariani et al. [35] developed a smart home simulation tool that uses ambient sensors to capture the interactions of the inhabitants. The tool has a map editor that allows the researcher to design a floor plan for a smart home by drawing shapes on a 2D canvas. The researcher can then add ambient sensors to the virtual home. The tool can simulate binary motion detectors and binary pressure sensors. To simulate the activities and interactions in the smart home, the authors used the A* pathfinding algorithm [36] to simulate the movement of the inhabitants. During the simulation, all interactions are sampled at 5 Hz and stored in an XML file.
UbiREAL [37] is a Java-based simulation tool that allows the development of ubiquitous applications in a 3D virtual smart space. It allows the researcher to simulate the operations and communications of the smart devices at the network level.
V-PlaceSims [38] is a simulation tool that allows a smart home designer to design a smart home from a floor plan and then lets multiple users interact with this environment through a web interface. The focus of this tool is on improving the design and management of the smart home.
In addition to the simulation tools outlined above, there are other commercial simulation tools targeting industry, such as [39][40][41].
Generally, the model-based approach allows the researcher to generate large datasets in a short simulation time, but sacrifices the granularity of capturing realistic interactions. The interactive approach, on the other hand, captures these realistic interactions but sacrifices simulation speed; the generated datasets are therefore usually smaller than those produced by the model-based approach.
Analysis
Synnott et al. [6] identified several challenges facing smart home simulation research. One key challenge is that many of the available simulation tools [9,11,30,37,38,42,43,44] focus on testing applications that provide context awareness and visualisation rather than on generating representative datasets; few of the available tools focus on generating datasets [1,12,45,46]. Another key challenge is the flexibility and scalability to add new or customised types of smart devices, change their generated output(s), change their positions within the smart home, etc. Support for multiple inhabitants is also a limitation of the currently available tools, as this feature is known to be difficult to implement [6].
The review of available smart home simulation tools reveals that the majority of the reported work lacks openness and availability of the software implementation, which limits their benefit to the wider research community. Moreover, 10 of the 23 reviewed tools do not support multiple operating systems, which can be an issue when working with research teams and/or test subjects. Table 1 shows the analysis and comparison of our proposed tool, OpenSHS, with the existing simulation tools. SIMACT [27] and UbiWise [44] were the only open-source and cross-platform simulation tools available; however, the data generation approach used in SIMACT is based on a pre-defined script that the researcher plays back within the 3D simulation view.
Apart from the work of [47], this analysis shows that none of the reviewed simulation tools follows a hybrid approach, i.e., one that combines the ability of model-based tools to generate large datasets in a reasonable time with the fine-grained interactions exhibited by interactive tools.
Our review shows that few simulation tools focus on generating datasets, while the majority of the reviewed tools focus on visualisation and context-awareness applications.
Supporting the simulation of multiple inhabitants is a tricky task, especially for tools that focus on generating datasets. Most of these tools have an avatar controlled by a single participant at a given time; having multiple participants conduct a simulation at the same time is one of the identified challenges. When comparing OpenSHS against the simulation tools reviewed in Table 1, unlike the majority of such tools, our tool is based on Blender and Python, which are open-source and cross-platform solutions; this offers the following benefits:
• Improving the quality of state-of-the-art datasets by allowing the scientific community to converge openly on standard datasets for different domains;
• Easier collaboration between research teams from around the globe;
• Faster development and lower entry barriers;
• Easier objective evaluation and assessment.
Our tool allows simulations to be conducted in 3D from a first-person perspective. The only open-source tools we could identify in the literature were SIMACT [27] and UbiWise [44]; however, neither of these tools focuses on generating datasets. SIMACT does not allow the participant to create specialised simulations; instead, it relies on pre-recorded data captured from clinical trials.
IE Sim [34] was extended to use a probabilistic model (a Poisson distribution) to augment the data recorded interactively with IE Sim; the extended version of IE Sim therefore uses a hybrid approach. However, IE Sim is a 2D simulator, which takes part of the realism out of the simulation. This might be a problem when 3D motion data are important to the researcher, for example in anomaly detection algorithms, as identified by [47].
The fast-forwarding feature makes the simulation less cumbersome, especially when the simulation has long periods of inactivity, as in elder-care research. This feature is relevant to interactive and hybrid approaches. OpenSHS's fast-forwarding mechanism streamlines the simulation and allows the participant to skip forward in time while conducting a simulation.
Although OpenSHS currently supports the real-time simulation of a single smart home inhabitant, multiple-inhabitant simulations are partially supported. The current implementation of this feature does not allow real-time simulation of multiple inhabitants; instead, the first inhabitant records his/her activities, and then the second inhabitant can start another simulation, during which the first inhabitant's actions are played back in the virtual environment.
The approach that OpenSHS uses to generate datasets can be thought of as a middle ground between the model-based and interactive approaches. The replication mechanism that OpenSHS adopts allows for quick dataset generation, similar to the model-based approaches; moreover, the replications have richer details, as the activities are captured in real time, similar to the interactive approaches. Overall, the advantages of OpenSHS can be summarised as follows:
1. Accessibility: The underlying technologies used to develop OpenSHS allow it to work on multiple platforms, ensuring better accessibility for researchers and participants alike.
2. Flexibility: OpenSHS gives researchers the flexibility to simulate different scenarios according to their needs by adding and/or removing sensors and smart devices. OpenSHS can be easily modified and customised in terms of positioning and changing the behaviour of the smart devices in the virtual smart home to meet the needs of a research project.
3. Interactivity: The interactions between the participant and the smart home are captured in real time, which facilitates the generation of richer datasets.
4. Scalability: Our simulation tool is scalable and easily extensible to add new types of smart devices and sensors. OpenSHS has a library of smart devices that we will keep developing and updating as new types of smart devices become available.
5. Reproducibility: By being an open-source project, OpenSHS facilitates reproducibility and allows research teams to produce datasets to validate other research activities.
OpenSHS Architecture and Implementation
This paper proposes OpenSHS [51], a new hybrid, open-source and cross-platform 3D smart home simulation tool for dataset generation, which is downloadable from http://www.openshs.org under the GPLv2 license [52]. OpenSHS tries to provide a solution to the issues and challenges identified by Synnott et al. [6]. It follows a hybrid approach to dataset generation, combining the advantages of both the model-based and interactive approaches. This section presents the architecture of OpenSHS and the technical details of its implementation, which is based on Blender [53] and Python. In this section, we refer to two entities: the researcher and the participant. The researcher is responsible for most of the work with OpenSHS; the participant is any person volunteering to simulate their activities.
Working with OpenSHS can be divided into three main phases: design phase, simulation phase and aggregation phase. The following subsections will describe each phase.
Design Phase
In this phase, as shown in Figure 2, the researcher builds the virtual environment, imports the smart devices, assigns activity labels and designs the contexts.
Designing Floor Plan
The researcher designs the 3D floor plan by using Blender, which allows the researcher to easily model the house architecture and control different aspects, such as the dimensions and the square footage. In this step, the number of rooms and the overall architecture of the home are defined according to the requirements of the experiment.
Importing Smart Devices
After the design of the floor plan, smart devices can be imported into the smart home from the smart devices library offered by OpenSHS; the current version includes a set of active and passive devices/sensors. The smart devices library is designed to be a repository of different types of smart devices and sensors. The list is extensible, as it is programmed in Python, and the researcher can also build a customised sensor/device.
Assigning Activity Labels
OpenSHS enables the researcher to define an unlimited number of activity labels; the researcher decides how many labels are needed according to the experiment's requirements. Figure 4 shows a prototype where the researcher identified five labels, namely 'sleep', 'eat', 'personal', 'work' and 'other'. This list of activity labels represents a sample of activities, which researchers can tailor to their needs.
Designing Contexts
After designing the smart home model, the researcher designs the contexts to be simulated. The contexts are specific time frames that the researcher is interested in simulating, e.g., morning, afternoon or evening contexts. For instance, if the researcher aims to simulate the activities that a participant performs when he/she comes back from work during a weekday, the researcher designs a context for that period. Finally, the researcher specifies the initial states of the devices for each context.
Figure 3 shows the overall architecture of the simulation phase. The researcher starts the tool from the OpenSHS interface module, which allows the researcher to specify which context to simulate. Each context has a default starting date and time, which the researcher can adjust, as well as a default state for the sensors and a default 3D position of the avatar. The participant then starts simulating his/her ADLs in that context. During the simulation, the sensors' outputs and the states of the different devices are captured and stored in a temporary dataset; OpenSHS adopts a sampling rate of one second by default, which the researcher can re-configure as required. Once the participant finishes a simulation, control is returned to the main module to start the simulation of another context. A sketch of such a capture loop is given below.
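The following is an illustrative sketch of the per-second capture loop; the one-second default sampling rate is from the text, while the function name, CSV layout and sensor representation are our own assumptions, not the OpenSHS source.

```python
# Capture the state of every sensor once per sampling interval.
import csv
import time
from datetime import datetime, timedelta

def capture_context(sensors, duration_s, sample_rate_s=1.0, out_path="context.csv"):
    """sensors: list of (name, get_state) pairs, where get_state() returns
    the current boolean state of a virtual sensor."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + [name for name, _ in sensors])
        ts = datetime.now()
        for _ in range(int(duration_s / sample_rate_s)):
            writer.writerow([ts.isoformat()] + [int(get()) for _, get in sensors])
            ts += timedelta(seconds=sample_rate_s)
            time.sleep(sample_rate_s)   # states are captured in real time

# Example: one door sensor that stays closed during this short context.
capture_context([("front_door", lambda: False)], duration_s=5)
```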
Simulation Phase
The simulation phase aims to capture the granularity of the participants' realistic interactions. However, capturing these fine-grained activities over extended periods of time burdens the participant(s) and sometimes becomes infeasible. OpenSHS mitigates this issue by adopting a fast-forwarding mechanism.
Fast-Forwarding
OpenSHS allows the participant to control the time span of a certain activity through fast-forwarding. For example, if the participant wants to watch TV for a period of time but does not want to perform the whole activity in real time (since there are no changes in the readings of the home's sensors), the participant can initiate that activity and spawn a dialogue to specify how long it lasts. This keeps the simulation process quick and streamlined: the tool simply copies and repeats the existing state of all sensors and devices over the specified time period. Figure 4 shows the activity fast-forwarding dialogue during a simulation.
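A minimal sketch of this copy-and-repeat behaviour, assuming an in-memory row format of (timestamp, sensor_states) pairs; the real OpenSHS internals may differ.

```python
# Fast-forwarding: repeat the current state of all sensors over a span.
from datetime import datetime, timedelta

def fast_forward(rows, minutes, sample_rate_s=1):
    """Append copies of the last captured row to cover `minutes` of
    simulated time without performing the activity in real time."""
    last_ts, last_states = rows[-1]
    steps = int(minutes * 60 / sample_rate_s)
    for i in range(1, steps + 1):
        rows.append((last_ts + timedelta(seconds=i * sample_rate_s),
                     list(last_states)))
    return rows

rows = [(datetime(2017, 5, 1, 19, 0), [1, 0, 1])]   # e.g., watching TV
fast_forward(rows, minutes=30)
print(len(rows))   # 1 original row + 1800 one-second copies
```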
Activities Labelling
The researcher is responsible for familiarising the participant with the available activity labels. During a simulation, before transitioning from one activity to another, the participant spawns the activity dialogue shown in Figure 4 to choose the new activity from the available list. To ensure a clean transition between activities, OpenSHS does not commit the new label at the exact moment it is chosen; instead, the new label is committed when a sensor changes its state. For example, in Figure 6, the transition from the first activity ('sleep') to the second ('personal') is committed to the dataset when the sensor bedroomLight changes its state, even though the participant changed the label a couple of seconds earlier. The sketch below illustrates this deferred commit.
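The following sketch reproduces the deferred-commit rule under our own assumptions about data layout (the function and variable names are illustrative):

```python
# Hold back a newly chosen label until a sensor changes state.
def commit_labels(samples, chosen_labels):
    """samples: list of sensor-state tuples, one per time step.
    chosen_labels: label the participant had selected at each step.
    Returns the labels actually written to the dataset."""
    committed = [chosen_labels[0]]
    for i in range(1, len(samples)):
        if samples[i] != samples[i - 1]:          # a sensor changed state
            committed.append(chosen_labels[i])    # commit the new label
        else:
            committed.append(committed[-1])       # keep the previous label
    return committed

# The participant switches from 'sleep' to 'personal' at step 2, but the
# committed label only changes at step 3, when a sensor flips.
states = [(0, 0), (0, 0), (0, 0), (1, 0), (1, 0)]
labels = ["sleep", "sleep", "personal", "personal", "personal"]
print(commit_labels(states, labels))
# ['sleep', 'sleep', 'sleep', 'personal', 'personal']
```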
Aggregation Phase
After the participants perform the simulations, the researcher can aggregate the participants' generated sample activities, i.e., events, to produce the final dataset. The results of the simulation phase form a pool of sample activities for each context. The aggregation phase aims to provide a solution for generating large datasets in a short simulation time, as shown in Figure 5. Hence, this work develops an algorithm that replicates the output of the simulation phase by drawing appropriate samples for each designated context.
This feature couples the model-based approach's speed with the interactivity adopted by the simulation phase, which allows OpenSHS to combine the benefits of both approaches into a hybrid approach.
Events Replication
It was evident from the beginning of this project's development that it is not feasible for a participant to sit down and simulate his/her ADLs for a whole day. Moreover, we wanted to capture the interactions between the inhabitant and the smart home in real time, while keeping the process as streamlined and untedious as possible. These requirements brought up the concept of real-time context simulations: instead of having the user simulate his/her ADLs for extended periods of time, the user simulates only a particular context in real time. For example, assume we are interested in an 'early morning' context and want to capture the activities the inhabitant performs in this time frame, such as what is usually done on weekdays compared to weekends in that same context. The user only performs sample simulations of different events in real time; the greater the number of samples simulated, the richer the generated dataset will be.
To gain more insight into how OpenSHS works, we have built a virtual smart home environment consisting of a bedroom, a living room, a bathroom, a kitchen and an office. Each room is equipped with several sensors totalling twenty-nine sensors of different types. The sensors are binary, and they are either on or off at any given time step.
The result of performing a context simulation is illustrated in Figure 6. The sample consists of three activity labels, namely 'sleep', 'personal' and 'other'. Each activity label corresponds to a set of sensors' readings; the readings in the figure are from binary sensors, and the small circles correspond to an 'ON' state of the sensor. It is not realistic to aggregate the final dataset by trivially duplicating the context samples. There is a need for an algorithm that can replicate the recorded samples to generate a larger dataset. We have designed a replication algorithm for extending and expanding the recorded samples; a small number of simulated events can be extended without affecting their logical order.
The replication algorithm is best illustrated by an example. Table 2 shows a set of five samples with their activity labels for a certain context: the first sample has five activities, the second sample has three activities, and so on. When the researcher aggregates the final dataset, the samples of every context are grouped by the number of activities in each sample. For this example, Sample 1 forms one group, Samples 2 and 3 form a second group, and Samples 4 and 5 form a third group. A random group is then chosen, and from that group a sample is drawn for each activity. Taking the second group, which contains Samples 2 and 3, the number of activities is three: for the first activity, we pick the 'sleep' activity from either Sample 2 or Sample 3, and the same procedure is applied to the second and third activities. The output resembles what is shown in Table 3 (ten replicated copies based on the samples from Table 2).
The context samples shown in Table 2 will produce 25 unique replicated copies. In general, the number of unique replicated copies for a single context can be calculated by Equation (1). Let G denote the number of groups of unique activity count, S_g the number of samples in group g, and A_g the number of activities within a sample of group g. The total number of unique replicated copies R is then

R = \sum_{g=1}^{G} S_g^{A_g}    (1)

(As a consistency check, with one five-activity sample in the first group, two three-activity samples in the second and two four-activity samples in the third, R = 1^5 + 2^3 + 2^4 = 25, matching the total stated above.)
OpenSHS can also modify the original duration of a performed activity by shortening and/or expanding it. To preserve the structure of an activity, we look for its longest steady, unchanged sequence of readings; the algorithm then randomly chooses a new duration for this sequence, which can vary between 5% of the original sequence length and its full length. The researcher can use this feature by passing the variable-activities option to the aggregation parameters, as shown next.
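A compact sketch of this replication scheme follows (our own illustrative Python, not the OpenSHS source; the per-group activity counts in the example are those inferred above):

```python
# Group samples by activity count, pick a random group, and fill each
# activity slot from a randomly drawn sample of that group.
import random
from collections import defaultdict

def replicate(samples):
    """samples: list of activity-segment lists, e.g. [['sleep','eat'], ...].
    Returns one replicated copy."""
    groups = defaultdict(list)
    for s in samples:
        groups[len(s)].append(s)              # group by number of activities
    group = random.choice(list(groups.values()))
    return [random.choice(group)[i] for i in range(len(group[0]))]

def unique_copies(samples):
    """Total unique replicated copies: R = sum over groups of S_g ** A_g."""
    sizes = defaultdict(int)
    for s in samples:
        sizes[len(s)] += 1
    return sum(s_g ** a_g for a_g, s_g in sizes.items())

# With the activity counts inferred for Table 2 (5, 3, 3, 4, 4):
print(unique_copies([[0] * 5, [0] * 3, [0] * 3, [0] * 4, [0] * 4]))  # -> 25
```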
The researcher can configure a number of parameters to control the generated output:
• days: the number of days to be generated;
• start-date: the starting date for the dataset;
• time-margin: the variability of the starting time for the replicated events; for example, for a sample recorded at 7:30 a.m. with a time margin of 10 min, the replicated sample could start any time from 7:25 a.m. up to 7:35 a.m.;
• variable-activities: makes the duration of each activity variable.
A sketch of the latter two behaviours follows.
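These are hypothetical sketches of the time-margin and variable-activities behaviours described above; the function names and internals are ours, not OpenSHS's, and the margin is interpreted as a symmetric window to match the 7:25-7:35 example.

```python
# Illustrative time-margin jitter and variable activity duration.
import random
from datetime import datetime, timedelta

def apply_time_margin(start, margin_min):
    """Shift a replicated event's start within the margin: a 7:30 a.m.
    sample with a 10-min margin starts between 7:25 and 7:35."""
    return start + timedelta(minutes=random.uniform(-margin_min / 2,
                                                    margin_min / 2))

def vary_activity_duration(rows):
    """Resize the longest steady run of identical readings to between
    5% and 100% of its original length, preserving activity structure."""
    if not rows:
        return rows
    best_start, best_len, i = 0, 1, 0
    while i < len(rows):                      # locate the longest steady run
        j = i
        while j + 1 < len(rows) and rows[j + 1] == rows[i]:
            j += 1
        if j - i + 1 > best_len:
            best_start, best_len = i, j - i + 1
        i = j + 1
    new_len = max(1, int(best_len * random.uniform(0.05, 1.0)))
    return (rows[:best_start] + [rows[best_start]] * new_len
            + rows[best_start + best_len:])

print(apply_time_margin(datetime(2017, 5, 1, 7, 30), 10))
```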
Dataset Generation
After running the aggregation algorithm, the researcher can combine all of the scenarios generated by different participants into one final comma-separated values (CSV) dataset; Table 4 shows a sample generated dataset. The time-margin parameter adds variability to the timing of the recorded activities, which is useful for applications that rely heavily on the time dimension of activities, for example in anomaly detection research.
Implementation
The OpenSHS implementation relies on Blender and its game engine, which is programmable in Python.
Blender
Blender was chosen to build the majority of the simulation tool and to act as an infrastructure for OpenSHS. The reasons for this choice can be summarised as follows:
• Cross-platform: Blender is available for the three major operating systems, namely GNU/Linux, Microsoft Windows and Apple macOS. Blender uses OpenGL [54] for its game engine, which is also a cross-platform 3D technology available for the major operating systems.
• The Blender game engine: Blender's game engine allowed us to add interactivity to the simulations, and its physics engine facilitates the simulation of different types of real sensors and devices. For example, Blender has a 'near' sensor, which is activated only when the 3D avatar controlled by the user is physically near other objects in the scene; such a sensor can be used to simulate a proximity sensor easily, as sketched below.
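A hypothetical Blender game engine controller script (Blender <= 2.79, where the game engine still exists) illustrating how a 'near' logic-brick sensor could back a virtual proximity sensor; the sensor name "Near" and the game property "state" are our assumptions, not taken from the OpenSHS source.

```python
# Runs inside Blender's game engine, attached as a Python controller.
import bge

def update():
    cont = bge.logic.getCurrentController()
    device = cont.owner                  # the virtual proximity-sensor object
    near = cont.sensors["Near"]          # positive while the avatar is close
    new_state = near.positive
    if device.get("state") != new_state:
        device["state"] = new_state      # state change picked up by the logger
        print(device.name, "->", "ON" if new_state else "OFF")

update()
```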
Python
The interaction with the simulation tool is done by controlling a 3D avatar that navigates the smart home space from a first-person perspective, similar to most first-person games; Figure 7 shows the 3D avatar navigating the living room. Since Blender's game engine uses Python as its scripting language, we developed all of the logic and interactions between the avatar and the virtual environment with it. Moreover, all of the OpenSHS modules are programmed in Python.
OpenSHS Usability
Measuring the usability of a software tool is a challenging task, since it involves subjective qualities and depends on the context of use. John Brooke [55] defines usability as "The general quality of the appropriateness to a purpose of any particular artefact". He developed the widely used System Usability Scale (SUS), a questionnaire consisting of ten questions that measure various aspects of the usability of a system; the SUS score ranges from 0 to 100.
To assess the usability of OpenSHS, we conducted a usability study using SUS. Our sample consisted of graduate students and researchers interested in smart home research. We carried out multiple sessions; in each session, we started by introducing OpenSHS and presenting its functionalities, then answered any questions the participants had. Afterwards, we allowed the participants to use OpenSHS and explore its features. Finally, the participants were asked a few background questions, such as how frequently they use their computer on a daily basis and whether they play first-person 3D video games, and were then asked to fill out the SUS questionnaire.
We carried out two usability studies: one from the perspective of the researchers and the other from the perspective of the participants using OpenSHS. The researchers' group was asked to evaluate OpenSHS usability throughout the three phases (design, simulation, aggregation). The participants group was only requested to evaluate the simulation phase.
For the researchers' group, we collected data from 14 researchers: 85.7% were male and 14.3% female. The average age of the researchers was 36 (min = 31, max = 43). All of the researchers reported that they use their computers on a daily basis, and 93% of them play 3D first-person games. The aspects that the SUS questionnaire investigates can be summarised as:
1. Frequent use (FU): I think that I would like to use this system frequently.
2. System complexity (SC): I found the system unnecessarily complex.
3. Ease of use (EU): I thought the system was easy to use.
4. Need for support (NS): I think that I would need the support of a technical person to be able to use this system.
5. System's functions integration (FI): I found the various functions in this system were well integrated.
6. System inconsistencies (SI): I thought there was too much inconsistency in this system.
7. Learning curve (LC): I would imagine that most people would learn to use this system very quickly.
8. How cumbersome the system is (CU): I found the system very cumbersome to use.
9. Confidence in the system (CO): I felt very confident using the system.
10. Need for training before use (NT): I needed to learn a lot of things before I could get going with this system.
Figure 8 shows the results of our SUS questionnaire for the researchers' group. The odd-numbered statements contribute positively to the overall score if the respondent agrees with them (Figure 8a); the even-numbered statements contribute negatively if the respondent agrees with them (Figure 8b). Calculating the score of our sample revealed that the average SUS score of OpenSHS is 71.25 out of 100 (min = 40, max = 85). For the participants' group, 31 participants were asked to answer the SUS questionnaire: 77.5% were male, 22.5% female, and the average age of the participants was 27 (min = 21, max = 36). Ninety-seven percent play first-person games, and all of the participants reported that they use their computers on a daily basis. Figure 9 shows the participants' group results; the SUS score for this group is 72.66 out of 100 (min = 50, max = 87). The usability results for both groups are promising, but at the same time they indicate that there is room for improvement. Both groups agree that the learning curve (LC) component of the questionnaire needs improvement, and the results also show a need for support from a technical person to use the system.
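For readers unfamiliar with how the ten statement-level responses become a 0-100 score, the standard SUS scoring rule (Brooke's method, not anything specific to OpenSHS) is easy to state in code:

```python
# Standard SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response), and the sum is scaled by 2.5.
def sus_score(responses):
    """responses: ten answers on a 1-5 Likert scale, item 1 first."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A neutral respondent (all 3s) scores 50; all "best" answers score 100.
print(sus_score([3] * 10))                        # 50.0
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```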
Future Work
For future work, we plan to include full multiple-inhabitant support in real time. Moreover, the smart devices library currently has only a few specialised sensors and will be updated to include new types of sensors and devices. Another feature that could improve the design phase is the addition of a floor plan editor. Given that OpenSHS is an open-source project released under a free and permissive license, the project could see rapid development that would facilitate support for the aforementioned features.
The more realistic the simulation is, the less the need to build actual smart homes to carry out research. Following the growing advances in computer graphics, virtual reality (VR) is becoming more accessible and affordable. BlenderVR [56] is an open-source framework that extends Blender and allows it to produce immersive and realistic simulations. Since OpenSHS is based on Blender, one of our future goals is to investigate incorporating BlenderVR into our tool to provide more true-to-life experiences for smart home simulation and visualisation. In terms of accessibility, we aim to make OpenSHS as accessible as possible. Nowadays, web technologies and web browsers can be a good platform to facilitate the wider distribution of OpenSHS; technologies such as WebGL [57] can be used to run OpenSHS in different web browsers, and Blender can export to these technologies.
Currently, the labelling of activities is performed by the participant during the simulation phase. OpenSHS does not perform automatic recognition of these activities. As part of our future work, we plan to investigate the possibility of adding automatic recognition of the participants' activities.
Conclusions
Many smart home research projects require representative datasets for their respective applications and research interests, and to evaluate and validate their results. Many simulation tools available in the literature focus on context awareness, and few tools have set dataset generation as their aim. Moreover, there is a lack of open-source simulation tools in the public domain. We developed OpenSHS, an open-source, 3D and cross-platform simulation tool for smart home dataset generation. OpenSHS has many features that allow researchers to easily design different scenarios and produce highly intricate and representative datasets. Our tool offers a library of smart sensors and devices that can be expanded to include future emerging technologies.
OpenSHS allows the researchers to generate seeds of events rapidly. We have presented a replication algorithm that can extend the simulated events to generate multiple unique large datasets. Moreover, conducting a simulation with a participant can be done in a reasonable time, and we provided tools that streamline the process, such as fast-forwarding.
Our tool divides the dataset generation process into three distinct phases: design, simulation and aggregation. In the design phase, the researcher creates the initial virtual environment by building the home, importing smart devices and creating contexts. In the simulation phase, the participant uses the virtual home to generate context-specific events. In the final phase, the researcher applies the replication algorithm to generate the aggregated dataset.
We conducted a usability study using the System Usability Scale (SUS) to assess how usable OpenSHS is. The results of this study were promising, yet they left room for more improvements.
One of the identified issues in smart home simulation tools is support for multiple inhabitants, which is a challenging task both for the simulation tool and for the participants; currently, OpenSHS offers partial support for multiple inhabitants. To increase the realism of the simulations, we plan to integrate VR technologies into OpenSHS in the future. Accessibility for both the researchers and the participants is an important feature; hence, we plan to port the implementation of OpenSHS to run in a web browser.
"Computer Science"
] |
A novel RNA aptamer identifies plasma membrane ATP synthase beta subunit as an early marker and therapeutic target in aggressive cancer
Purpose Primary breast and prostate cancers can be cured, but metastatic disease cannot. Identifying cell factors that predict metastatic potential could guide both prognosis and treatment. Methods We used Cell-SELEX to screen an RNA aptamer library for differential binding to prostate cancer cell lines with high vs. low metastatic potential. Mass spectrometry, immunoblot, and immunohistochemistry were used to identify and validate aptamer targets. Aptamer properties were tested in vitro, in xenograft models, and in clinical biopsies. Gene expression datasets were queried for target associations in cancer. Results We identified a novel aptamer (Apt63) that binds to the beta subunit of F1Fo ATP synthase (ATP5B), present on the plasma membrane of certain normal and cancer cells. Apt63 bound to plasma membranes of multiple aggressive breast and prostate cell lines, but not to normal breast and prostate epithelial cells, and weakly or not at all to non-metastasizing cancer cells; binding led to rapid cell death. A single intravenous injection of Apt63 induced rapid, tumor cell-selective binding and cytotoxicity in MDA-MB-231 xenograft tumors, associated with endonuclease G nuclear translocation and DNA fragmentation. Apt63 was not toxic to non-transformed epithelial cells in vitro or adjacent normal tissue in vivo. In breast cancer tissue arrays, plasma membrane staining with Apt63 correlated with tumor stage (p < 0.0001, n = 416) and was independent of other cancer markers. Across multiple datasets, ATP5B expression was significantly increased relative to normal tissue, and negatively correlated with metastasis-free (p = 0.0063, 0.00039, respectively) and overall (p = 0.050, 0.0198) survival. Conclusion Ecto-ATP5B binding by Apt63 may disrupt an essential survival mechanism in a subset of tumors with high metastatic potential, and defines a novel category of cancers with potential vulnerability to ATP5B-targeted therapy. Apt63 is a unique tool for elucidating the function of surface ATP synthase, and potentially for predicting and treating metastatic breast and prostate cancer. Electronic supplementary material The online version of this article (10.1007/s10549-019-05174-3) contains supplementary material, which is available to authorized users.
Introduction
Localized prostate and breast cancers are highly curable, but once metastasized to remote organs, these cancers are inevitably lethal. Consequently, an important goal of treatment is to identify and exploit specific vulnerabilities of the metastatic cell. Another key aim is to predict which tumors are at high risk of metastasis, allowing potentially toxic therapy to be tailored to those most likely to benefit. These goals have been aided by an improved understanding of cancer cell genetic drift during tumor progression, which allows certain cells to acquire independence from supportive factors in the tissue of origin, to migrate into the vasculature, and to survive and grow at foreign sites such as liver and bone. In addition to genetic drivers of the metastatic phenotype [1][2][3][4][5], epigenetic and protein-level changes have been found to establish tumor cell aggressiveness, including episomal transfer of microRNAs to and from adjacent normal cells, reprogramming of the tumor or stroma by factors released from tumor-infiltrating lymphocytes, and alterations in metabolism brought about by tumor hypoxia [6][7][8]. Energy production from carbon sources is frequently deranged in cancer, and may be associated with changes in the epigenetic state of the cell that promote cell-autonomous increases in tumor aggression (reviewed in [8]). A thorough search for metastasis-promoting changes in the cancer cell thus necessarily extends to exploration of protein content, function, and location.
In this study, we used an unsupervised method: differential Cell-SELEX (Systematic Evolution of Ligands by EXponential enrichment), to search for proteins distinguishing metastatic from non-metastatic subclones of a single parental prostate cancer cell line, LNCaP. Owing to their unique, sequence-specific tertiary structure, single-stranded nucleic acids (either DNA or RNA) can bind to individual proteins with high specificity and affinity, comparable to those of antibodies. These oligonucleotides, known as aptamers, can be modified for stability in biological fluids, labeled with fluorescent tags, fused to other molecules, and delivered in vivo without inciting an immune response [9]. Cell-SELEX uses live cells to select aptamers that recognize cellular proteins in their native and functional state. Differential Cell-SELEX applies the same method to identify aptamers that discriminate between two cell types. The ability to screen large numbers (> 2^40) of sequences increases the likelihood of identifying rare or unique surface marker differences.
Here, we report the identification of a novel RNA aptamer (Apt63) that recognizes a plasma membrane feature that is commonly expressed by multiple aggressive prostate and breast cancer cell lines and tumors, but that exhibits low expression or is absent in non-transformed cells and normal tissues. We demonstrate that the aptamer target is the beta subunit of F1Fo ATP synthase (ATP5B). This protein is a catalytic component of the final enzyme in cellular ATP production by oxidative phosphorylation, and is located on the inner mitochondrial membrane. ATP5B and other components of the F1Fo ATP synthase complex have previously been identified on the plasma membrane of certain cell and tumor types, where the complex is referred to as "ecto-ATP synthase"; several studies have shown that it is catalytically active in extracellular ATP production [10,11]. Various roles have been established for this activity in a few normal cell types, and particularly in angiogenesis, but its significance and function in cancer remain uncertain. Ecto-ATP synthase acts as a ligand for angiostatin and transduces some of its anti-proliferative and anti-angiogenic effects [12]. Binding to ecto-ATP synthase by angiostatin, membrane-impermeable small molecules and monoclonal antibodies against the ATP5 beta subunit has been shown to promote cell death in a wide range of susceptible cell types, including HeLa, Leishmania, and plant cells ([13] and citations therein; [14][15][16]). Several studies have linked expression of surface ATP synthase to more-aggressive and later-stage cancer [17,18], suggesting that the activity of this complex on the cell surface may support the survival of these aggressive cells during the transition to metastasis. In this study, we show that Apt63 distinguishes aggressive breast and prostate cancer cell lines from less-aggressive congenic lines, and from non-transformed cells, both human and murine. In vivo, Apt63 binds selectively to ecto-ATP5B-expressing tumors and not to normal adjacent tissue. Functionally, binding of Apt63 to the plasma membrane exerts selective tumor cell killing by inducing translocation of endonuclease G from mitochondria to nucleus, DNA fragmentation, and apoptosis. We show that Apt63 plasma membrane binding in clinical tissue biopsies is strongly correlated with advanced tumor stage, and as a corollary, that ATP5B expression in primary tumors is predictive of poor metastasis-free and overall survival. We propose that Apt63 may be useful in early recognition and treatment of a novel subset of highly aggressive primary breast and prostate cancers, defined by surface expression of ATP5B.
Cell lines and cell culture
Human prostate cancer cell lines used in the Cell-SELEX screen were obtained from Dr. Curtis Pettaway [19]. Human prostate cancer cells (PC-3, PC3-ML, RWPE-1) were generously provided by Dr. Kerry Burnstein (University of Miami), and human breast cancer cell lines (MDA-MB-231, MDA-MB-436, MCF7, MCF10) were obtained from ATCC (Manassas, VA). Murine breast cancer cell lines (4T1, 67NR, E0771, E0771.LMB) were the gift of Dr. Barry Hudson (University of Miami). Dissociated primary tumor lines DT28 and DT22 were the generous gift of Dr. D. El-Ashry (University of Minnesota) [20]. All cell lines were maintained using the suppliers' protocols in 37 °C, 5% CO2 tissue culture incubators, and were routinely tested for mycoplasma using the MycoAlert Mycoplasma Detection Kit (Lonza, Walkersville, MD, USA) and an established PCR protocol [21].
Differential Cell-SELEX
The pool of RNA aptamers used for Cell-SELEX was obtained from a cDNA library with the general template: TCT CGG ATC CTC AGC GAG TCG TCT G-(N40)-CCG CAT CGT CCT CCC TA (where N40 represents 40 random nucleotides). The cDNA library was amplified by PCR and transcribed in vitro using a DuraScribe T7 RNA synthesis kit (Lucigen, USA) with nuclease-stable 2ʹ-F-dCTP and 2ʹ-F-dUTP as previously described [22]. The aptamer library was purified using an RNeasy kit (Qiagen). Both parental LNCaP and LNCaP-Pro5 (Pro5) cells were used for negative selection, and LNCaP-LN3 (LN3) for positive selection. Each Cell-SELEX cycle consists of two rounds of selection, negative and positive. Parental LNCaP cells were used in the first three selection cycles and Pro5 cells in cycles 4-11. At the beginning of each cycle, 1 µg of the aptamer library was added to 450 µL of PBS containing 0.5 mM MgCl2 and 1 mM CaCl2 (binding buffer), and the RNA was refolded by heating for 5 min at 67 °C and cooling at room temperature (RT) for 10 min. All cells used in Cell-SELEX were grown in T75 culture flasks with filtered caps (ThermoFisher Scientific). At 75% confluency, cells were briefly washed with PBS and dissociated from the flask by incubation with 3 ml of Trypsin-EDTA (0.25%) without phenol red (ThermoFisher Scientific) at RT for 3 min; 10 ml of growth medium was added to halt the Trypsin-EDTA reaction. Detached cells were collected into 15 ml conical tubes (Falcon) and centrifuged for 5 min at 600×g in a 4 °C tabletop centrifuge (Eppendorf). Cell pellets were resuspended in 5 ml of binding buffer and counted. 2 × 10^5 Pro5 cells were transferred to fresh tubes and centrifuged in a tabletop centrifuge for 5 min at RT; the cell pellet was then resuspended with the aptamer library and incubated for 10 min at RT on a circular rotator to continuously agitate the cells. Cells were again centrifuged for 5 min at RT, and the supernatant, containing aptamers not bound to the negative selector, was collected and filtered through 0.2 µm Pall Acrodisc® Sterile Syringe Filters with Supor® Membrane (Pall Laboratory). In the positive selection step, 0.5 × 10^5 LN3 cells were first incubated for 10 min at RT with 0.1 mg/ml of yeast tRNA (Sigma) to reduce nonspecific RNA binding; the LN3 cells were then washed with binding buffer, centrifuged, and the cell pellet resuspended with the filtered supernatant from the negative selection. LN3 cells were incubated for 10 min on a rotator at RT, followed by isolation of total RNA (including bound aptamers) using Trizol reagent (Invitrogen). RNA aptamers were reverse-transcribed from total RNA using an aptamer-specific forward primer and a SuperScript® III Reverse Transcriptase reaction (Invitrogen), and amplified from first-strand cDNA by standard PCR (95 °C 5′; 3 × (94 °C 30″, 52 °C 20″, 72 °C 25″); 15 × (94 °C 30″, 54 °C 20″, 72 °C 25″); 72 °C 5′). RNA sequences were transcribed from the resulting cDNA pool using a DuraScribe T7 kit as detailed above and entered into the next Cell-SELEX cycle. RNA aptamer pools were sampled at cycles 1, 4, and 11. After cycle 11, aptamer pools were sequenced, aligned, and analyzed to select candidates for further study as previously described [22,23].
Fluorescence microscopy for aptamer imaging
Cells were seeded into 35 mm glass-bottom dishes (MatTek Corporation, Ashland, MA) at a density of 0.3 × 10^6 cells per dish and allowed to grow for 48 h to 60-75% confluence. Cy3-labeled aptamers were added to culture media at a final concentration of 1 nM and incubated with live cells for 30 min at 37 °C in 5% CO2. Following incubation, cells were washed 3 × for 5 min each with PBS and fixed for 10 min with 4% paraformaldehyde at RT. After fixation, cells were washed with PBS and counterstained with DAPI (Sigma) at 1 µg final concentration for 5 min. To identify membrane co-localization of Cy3-Apt63 and ATP5B antibody, cells were grown on coverslips in 35 mm tissue culture dishes and stained sequentially with Cy3-Apt63 and AlexaFluor®647 anti-ATP5B antibodies (ab223436, ABCAM), without a permeabilization step. To co-localize Apt63 and ATP5B antibody within mitochondria, cells were treated with 0.05% Triton X-100 for 5 min, washed three times with PBS, then incubated with both the AlexaFluor®647 anti-ATP5B antibody and Cy3-Apt63. Finally, cells were counterstained with DAPI. For some experiments, live cells were first stained with Cy3-Apt63, followed by fixation, treatment with 0.05% Triton X-100, and DAPI counterstaining as described above. Coverslips were mounted with ProLong®Gold antifade reagent (Life Technologies). Fluorescent images were obtained on a confocal microscope (Leica SP5) using a 20× dry objective (Leica PL APO CS).
Aptamer target purification and identification
Apt63 and a scrambled sequence (AptScr) were 3′ end-biotinylated using a Pierce™ RNA 3′ End Desthiobiotinylation Kit (ThermoFisher Scientific, USA) following the manufacturer's protocol. LNCaP-LN3 and LNCaP-Pro5 cells were each seeded into 100 mm Petri dishes for 48 h at a density of 0.5 × 10^6 cells per dish and allowed to grow to 60-75% confluency. On the day of the experiment, cells were incubated at RT for 1 h with the desthiobiotin-RNA-Apt63 or -AptScr complexes, allowing aptamer to bind to target. Following binding, cells were washed 3 × for 5 min each in PBS at RT to remove excess unbound RNA-desthiobiotin complexes, and cross-linked by incubation with 1% paraformaldehyde for 2 min. Next, cells were thoroughly washed 3 × for 5 min each in PBS at RT. To separate membranes from intracellular components, cells were incubated in a mild hypotonic lysis buffer containing 1 M Tris-HCl, 5 M NaCl, 50 mM MgCl2, 0.1 M DTT, and protease inhibitor cocktail for 2 min on ice. Immediately following incubation, cells were gently homogenized in a Dounce homogenizer, ten times on ice, mixed with magnetic beads and left overnight at 4 °C to allow capture of the desthiobiotin-aptamer-target hybrid complexes. On the next day, the beads were thoroughly washed with reagents provided in the kit, and target-aptamer complexes were eluted with 30 µL of 8 M urea for 10 min at 60 °C. The recovered eluates and total cell lysate were separated on 4-20% gradient SDS-PAGE gels (Bio-Rad, USA). Gels were stained using Pierce™ Silver Stain kits. Protein band distributions were compared between the LNCaP-LN3 and LNCaP-Pro5 cell lines, and the band most enriched in the LN3 aptamer-target eluate was excised, sequenced by microcapillary LC/MS/MS, and analyzed with SEQUEST software at the Taplin Mass Spectrometry Facility (Harvard Medical School, Boston, MA). The predicted protein target was verified by 4-20% gradient SDS-PAGE electrophoresis and western blot in whole cell lysates and aptamer eluates using ATP5B antibodies (ab170947, ABCAM), with ATP5B recombinant protein as a positive control (ab92235, ABCAM).
In vitro aptamer binding affinity and cytotoxicity assays
We used two independent methods to evaluate Apt63 cytotoxicity in a series of cell lines in vitro: (1) direct visualization of Apt63 cytotoxicity using the IncuCyte® S3 Live-Cell Analysis System, and (2) SYTOX™ Green uptake. The binding affinity of Apt63 for its membrane target was measured using the CellTiter-Glo® system (Promega). Detailed procedures are described in Online Resource 2.
Mouse xenograft models and aptamer cytotoxicity in vivo
All animal experiments were approved by and performed in accordance with the guidelines of the University of Miami Institutional Animal Care and Use Committee. For live visualization of Apt63 tumor uptake and retention in vivo, a prostate xenograft tumor model was used: NOD.CB17-Prkdcscid/J male mice (10 weeks old, n = 14, The Jackson Laboratory) were injected orthotopically into the right anterior lobe of the prostate with 2 × 10^6 LN3 and 2 × 10^6 Pro5 cells. For Apt63 cytotoxicity in vivo, we used a previously described breast tumor xenograft mouse model [24]: NOD.CB17-Prkdcscid/J female mice (6-8 weeks old, n = 39, The Jackson Laboratory) were injected with 10^6 MDA-MB-231 cells into the mammary fat pad. In both xenograft models, when tumors were palpable, 1 nmol of Alexa Fluor™ 647-labeled Apt63, AptScr, or unlabeled oligonucleotides suspended in 200 µL of PBS was administered as a single tail vein injection. Mice were imaged and euthanized at specified time points. Tumors and selected tissues were dissected and processed for further analysis. Detailed procedures are described in Online Resource 2.
Tumor tissue and FFPE human biopsy arrays fluorescent staining and analysis
Xenograft tumors were removed from euthanized mice and immediately frozen or fixed with 10% buffered formalin (VWR, USA), paraffin embedded, and processed. Prostate and breast core tissue microarrays (TMA) were purchased from US Biomax, Inc (Rockville, MD). The detailed staining protocols are provided in Online Resource 2. Cy3-Apt63-stained human breast biopsy microarrays were imaged by fluorescence microscopy on a Virtual Slide Microscope (VS120) for overview images and on a confocal microscope (Leica SP5) for high-resolution images, and scored for visual presence or absence (greater or less than 10% of cells, respectively) of Apt63 membrane-specific labeling. A list of the TMAs used in this study, with patient information, tumor grade and stage, and the assigned Apt63 score for each biopsy, is provided in Online Resource 3. Statistical analysis was performed using the Pearson correlation coefficient of aptamer membrane-specific staining vs. histopathological grade and stage.
ATP5B expression datasets and analysis
ATP5B gene expression was analyzed in prostate and breast cancer samples downloaded from Gene Expression Omnibus (GEO) and from the Genomic Data Commons Portal. Specifically, RNA-seq data in FPKM (Fragments Per Kilobase Million) and clinical information of the TCGA Prostate Adenocarcinoma dataset (TCGA-PRAD [25]) were downloaded from the Genomic Data Commons Portal using functions of the TCGAbiolinks R package and used as is. Expression levels of prostate tumors (n = 264) and normal prostate tissue samples (n = 160) from Penney et al. [26] were downloaded from GEO GSE62872 as a Series Matrix File and used as is. Raw data of 545 formalin-fixed paraffin-embedded (FFPE) tissue samples from primary prostate cancer were downloaded from GEO GSE46691 [27]. Probe-level signals were converted to expression values from CEL files using the robust multi-array average procedure RMA [28] and an Entrez gene-centered custom CDF for the Affymetrix Human Exon 1.0 ST Array (http://brainarray.mbni.med.umich.edu/Brainarray/Database/CustomCDF/CDF_download.asp; version 22). Gene expression profiles of 25 matched normal and tumor breast tissues were downloaded from GEO GSE109169 [29] as a Series Matrix File and used as is. Full expression median-centered data, consisting of 522 primary tumors, 3 metastatic tumors, and 22 tumor-adjacent normal samples, and clinical information of the TCGA Breast Invasive Carcinoma dataset (TCGA-BRCA [30]) were downloaded from https://tcga-data.nci.nih.gov/docs/publications/brca_2012/ and used as is. Finally, we used a breast cancer compendium created from a collection of 4640 samples from 27 major datasets containing microarray data on breast cancer samples annotated with clinical information; the compendium consists of a meta-dataset of gene expression data for 3661 unique samples from 25 independent cohorts [31,32].
All data analyses were performed in R (version 3.5.1) using Bioconductor libraries (BioC 3.7) and R statistical packages. To identify two groups of tumors with either high or low ATP5B expression, we used the classifier described in [33], based on the standardized expression (score) of a gene or a signature. Tumors were classified as ATP5B 'Low' if the ATP5B score was negative and as ATP5B 'High' if the ATP5B score was positive. To evaluate the prognostic value of the ATP5B score, we used the Kaplan-Meier method to estimate the probability of metastasis-free survival. To confirm these findings, the Kaplan-Meier curves were compared using the log-rank (Mantel-Cox) test. P-values were calculated according to the standard normal asymptotic distribution, using a cutoff of 0.05 for significance. Survival analysis was performed in GraphPad Prism.
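To make the High/Low split concrete, here is an illustrative Python version of the sign-of-the-standardized-score rule described above (the original analysis used R/Bioconductor; this sketch is ours):

```python
# Classify tumors by the sign of the standardized ATP5B expression score.
import numpy as np

def classify_atp5b(expression):
    """expression: 1-D array of ATP5B expression values across tumors."""
    score = (expression - expression.mean()) / expression.std()
    # Negative scores -> 'Low', positive -> 'High'. A score of exactly
    # zero is grouped with 'Low' here; the source does not specify ties.
    return np.where(score > 0, "High", "Low"), score

expr = np.array([2.1, 5.7, 3.3, 8.2, 1.9])   # hypothetical FPKM values
labels, scores = classify_atp5b(expr)
print(list(zip(labels, np.round(scores, 2))))
```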
Identification of an aptamer recognizing aggressive cancer
As an initial approach to discovering features of cancers with high metastatic risk, we performed differential Cell-SELEX comparing two subclones of a single prostate cancer line (LNCaP) with divergent metastatic potential. The parental LNCaP and Pro5 variant lines, which are poorly metastatic, were used for library subtraction, while the aggressive LN3 line was used for positive screening. Eleven cycles of negative and positive selection were performed (Fig. 1a). Ongoing enrichment of high-affinity LN3-specific aptamers was monitored by SYBR® Green fluorescence as an indicator of annealing. Figure 1b shows R0t curve analyses at each cycle, demonstrating a progressive increase in binding affinity and a decrease in aptamer pool complexity as selection progressed. Aptamer pools were sampled at cycles 0, 1, 4, and 11 and sequenced. Sequences showing a frequency higher than 1/10^6 in the last analyzed cycle (i.e., cycle 11) were selected for further investigation. After this filtering step, 691 unique sequences were clustered into families using Clustal Omega software [34] (Fig. 1c), and representative aptamers from each of 5 selected families were selected for further testing.
The selected RNA aptamers, together with scrambled controls, were labeled with Cy3 and incubated with live LN3 and Pro5 cells. Aptamers #63 and #41 showed strong binding to LN3, while only background fluorescence was seen with the other aptamers and control sequences ( Fig. 1d and Online Resource 4). Pro5 cells were not bound by either aptamer or by scrambled controls (Fig. 1d and Online Resource 4). Because of the greater intensity of Apt63 fluorescence, this sequence was chosen for further testing.
We next asked whether Apt63 would preferentially interact with other highly metastatic cell lines derived from other tissues, including human prostate, human breast, and murine breast cancers. Non-tumorigenic and poorly metastatic cell lines were used for comparison (Fig. 2). As with LN3, the aggressive prostate cancer cell lines PC-3 and PC3-ML were strongly labeled by Apt63, while the non-tumorigenic prostate epithelial cell line RWPE-1 was not (Fig. 2a). The readily metastasizing MDA-MB-231 and MDA-MB-436 breast cancer cell lines were also strongly labeled by Apt63, but the non-tumorigenic breast epithelial cell line MCF10A was not labeled, and the poorly metastasizing MCF-7 line was weakly bound by Apt63. The primary dissociated breast tumor line DT28, which metastasizes efficiently, was strongly labeled by Apt63, but the non-metastasizing DT22 line was not [35]. Apt63 also efficiently discriminated between murine breast cancer cell lines with different metastatic potentials (Online Resource 5), indicating inter-species conservation of the binding target. Live cells stained with Apt63 showed a punctate pattern that appeared to be concentrated at the plasma membrane, with some variation in labeling intensity (Fig. 2a, b). These findings suggest that the target recognized by Apt63 is located on the cell surface and is a common feature among cell lines with high metastatic potential. Representation of the RNA molecules library featuring a central 40 random nucleotides (multicolor), flanked by forward (FP) and reverse (RP) primer sequences. The library was screened for differential binding to surface feature(s) unique to LNCaP-LN3 prostate cancer cells. Negative selection was performed using poorly metastatic LNCaP-Pro5 cells (smooth black) and positive selection was performed on the highly metastasis-prone LN3 subclone (spiky red). Sequential negative and positive selection cycles enrich the aptamer pool for LN-binding sequences (top right). RNA aptamer pools were sampled after cycles 1, 4, and 11 and sequenced. Sequences enriched at cycle
Apt63 binding is selectively cytotoxic to cancer cells in culture and in vivo
We speculated that the target of Apt63 on LN3 cells might be functionally important in promoting their metastatic phenotype, possibly as a survival factor, and that binding by the aptamer might impair this function. Accordingly, we tested for direct Apt63 cytotoxicity in vitro using two methods. First, real-time cytotoxicity was monitored in Apt63-exposed LN3 cells by SYTOX® Green uptake and fluorescence, using an IncuCyte® S3 Live-Cell Analysis System. The SYTOX® Green nucleic acid dye is excluded by healthy cells with normal membrane permeability but diffuses passively through damaged membranes. After addition of Apt63 or a scrambled aptamer (AptScr), together with SYTOX® Green, fluorescent images were recorded every 5 min for the duration of the experiment (Fig. 3a; see also the time-lapse video in Online Resource 6). No difference in cell death was seen among the various conditions at 20 min (Fig. 3a). By 100 min, most LN3 cells exposed to Apt63 were brightly fluorescent, indicating cytotoxicity, but there was no change in the ongoing basal death rates of Pro5 cells or AptScr-treated LN3 cells (Fig. 3a). These findings suggest that Apt63 engages an LN3-enriched epitope, leading to membrane compromise and cell death within 2 h. Next, we estimated the dose dependence of Apt63 cytotoxicity in LN3 cells using ATP content as an indicator of cell viability. Cells were exposed to a range of concentrations of Apt63, and ATP-dependent luminescence was determined using a CellTiter-Glo® system as described in "Materials and methods." A relatively sharp decrease in cell viability was observed, with an approximate IC50 = 1.030 nM (R² = 0.9497, Fig. 3b). For comparison, the IC50 for angiostatin in this assay was 1.66 µM (R² = 0.884; Fig. 3b). A sketch of how such an IC50 can be estimated from viability readings is given below.
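The following sketch fits a four-parameter logistic curve to viability readings to estimate an IC50. The curve model and the data points are our illustrative assumptions, not the paper's actual fitting procedure or data.

```python
# Estimate an IC50 by fitting a four-parameter logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability falls from `top` to `bottom`
    as concentration rises past the IC50."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 100.0])    # nM, hypothetical
viab = np.array([0.99, 0.95, 0.80, 0.52, 0.25, 0.08, 0.02])  # normalized

params, _ = curve_fit(four_pl, conc, viab, p0=[0.0, 1.0, 1.0, 1.0])
print(f"estimated IC50 = {params[2]:.3f} nM")
```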
To determine whether Apt63 binding and cytotoxicity were correlated in other cell lines, we grew the human PC-3, PC3-ML, RWPE-1, MDA-MB-231, MDA-MB-436, MCF-7 and MCF10A lines and the mouse 4T1 and 67NR cell lines and assessed Apt63 cytotoxicity (compare with the binding results in Fig. 2a). No cell death was seen with AptScr (blue bars). The non-tumorigenic epithelial cell lines RWPE-1 and MCF10A, which do not bind Apt63, were completely resistant, while cell lines with weak staining had correspondingly reduced susceptibility (e.g., MCF-7) (Fig. 3c). The residual toxicity of Apt63 in these weakly staining cells could reflect the presence of small subpopulations of vulnerable cells. Overall, however, these findings suggest that Apt63 cytotoxicity is sequence-specific and dependent on the presence of a specific epitope found on multiple cancer cell types.
We then asked whether Apt63 could exert sequence-specific binding and toxicity toward cells grown as xenograft tumors in vivo. In initial experiments, we tested whether LN3 and Pro5 tumor cells would differentially take up Apt63 after intravenous injection. Mice bearing LN3 and Pro5 xenograft tumors received a single tail vein injection of 1 nmol of Alexa Fluor™ 647-labeled Apt63 or AptScr in 200 µL of PBS. Between 10 min and 3.5 h post injection, Apt63 uptake could be detected in LN3 xenografts in vivo (Fig. 3d) and in frozen sections of the same tumors (Fig. 3e), whereas no Apt63 uptake was observed in Pro5 xenografts (Fig. 3d), suggesting that Apt63 selectively binds to and accumulates in tumors expressing its plasma membrane target. We next injected Alexa Fluor™-labeled Apt63 or AptScr into mice bearing MDA-MB-231 xenograft tumors in the mammary fat pad. Mice were monitored and euthanized at 6, 24, and 48 h after a single tail vein injection, and frozen sections of tumors were imaged as described above. The fluorescent label was retained by Apt63-exposed MDA-MB-231 xenografts for up to 48 h post injection, while only background AptScr signal was detectable at any time point (Fig. 3f).
In the same images, we noted an increase in nuclear fragmentation in MDA-MB-231 xenografts by 24 h after injection of Apt63, relative to AptScr (Fig. 3f). We further explored this by electrophoresis of tumor DNA (Fig. 3g), which revealed a distinct nucleosomal DNA cleavage pattern in Apt63-treated tumors. The cleavage pattern resembled that produced by endonuclease G (endoG), a nuclease that is released from the mitochondrial intermembrane space during oxidative stress and translocates to the nucleus to initiate a caspase-independent apoptotic pathway [36,37]. Consistent with this, Apt63-treated MDA-MB-231 tumors showed considerable nuclear endoG staining by 24 h, while AptScr-treated tumors did not (Fig. 3h, arrowheads). These effects were not accompanied by any visible cytotoxicity toward adjacent non-tumor tissues, or any obvious adverse effects on the overall condition of the mice during the 48 h after injection. We interpret these results to show that Apt63 binds preferentially to breast and prostate tumor cells that express its plasma membrane target, and that Apt63 induces cell death upon binding, through a mechanism that involves release and nuclear translocation of endoG.
The target of Apt63 is the beta subunit of F1Fo ATP synthase (ATP5B)
To obtain an enriched fraction of the aptamer target on LN3 plasma membranes, we used a protocol combining a short detergent treatment with mild hypotonic lysis to segregate aptamer-bound membrane proteins from other cell components (see "Materials and methods"). Electrophoresis of this fraction yielded a single predominant protein band (Fig. 4a, lane 2). A similar protein band was identified in PC-3 membrane fractions (not shown). These bands were isolated and sent for protein sequencing by mass spectrometry (n = 2 samples from LN3, n = 1 from PC3). The top hit in all 3 samples was ATP5B, with 28.36% of the ATP5B protein sequence detected (Table 1).

Fig. 3 Apt63 binding is selectively cytotoxic to cancer cells in culture and in vivo. a Rapid in vitro cytotoxicity induced by Apt63. Cell death was monitored in real time by SYTOX® Green fluorescence as described in "Materials and methods." Representative photographs after 10 min (left column) and 2 h post-treatment (right column). Original magnification: × 20. b Concentration dependence of Apt63 cytotoxicity. Apt63 (unlabeled) or angiostatin was added to the cells at the indicated concentrations and luciferin luminescence was measured at 2 h in an EnVision™ plate reader. Readings were normalized to untreated cells and plotted using GraphPad Prism 8 software. Apt63 IC50 = 1.030 nM; IC50 for angiostatin = 1.66 µM (> 10³ × higher). c Cell selectivity of Apt63 cytotoxicity.

Fig. 4 (partial caption). c Apt63 co-localizes on the plasma membrane with anti-ATP5B antibody. ATP5B antibody (green) and Apt63 (red) were bound to live LN3 cells, followed by fixation and imaging by confocal microscopy. d The Apt63 target is extractable by detergent treatment of fixed cells. (left) Cy3-Apt63 (red) and plasma membrane marker WGA (green) were incubated with live cells, followed by fixation and imaging as in (c). (right) Similarly treated cells, except that fixed slides were subjected to a short 0.05% Triton X-100 treatment. Note the loss of Cy3-Apt63 signal from the plasma membrane. e Apt63 stains mitochondria in permeabilized cells. Cells were fixed, then permeabilized with 0.05% Triton X-100 and incubated with Cy3-Apt63 or Cy3-AptScr and ATP5B antibody. In these permeabilized LN3 cells, Apt63 and ATP5B antibody staining are co-localized within mitochondria. (Color figure online)
To confirm the identity of the aptamer target, we performed a western blot analysis of aptamer-associated cell membrane proteins and total cell lysates using an ATP5B antibody (Fig. 4b). A single protein band was present in Apt63-bound membrane fractions of LN3 (Fig. 4b, lanes 1, 2) but not Pro5 cells (Fig. 4b, lane 3). The same band was readily detected in whole cell lysates of both cell lines (Fig. 4c, lanes 4, 5), and co-migrated with recombinant ATP5B protein (Fig. 4c, lane 6). ATP5B antibody and Apt63 co-localized on the surface of intact LN3 cells (Fig. 4d), and within mitochondria in permeabilized LN3 cells (Fig. 4e), consistent with ectopically expressed ATP5B on the plasma membrane being a common target of the antibody and Apt63. This surface target could be extracted by detergent treatment (Fig. 4d, right), further supporting the plasma membrane localization of ATP5B.
Membrane ATP5B as a correlate of tumor metastasis in clinical populations
It is not clear how ATP5B gene expression and ecto-ATP5B levels are related in any given cell type; ATP5B protein is subject to substantial post-translational and functional regulation, including plasma membrane redistribution [16, 38-41]. However, several components of the ATP synthase complex have been reported to be upregulated in cancer [40, 42, 43]. We therefore asked whether ATP5B expression was associated with cancer phenotypes in clinical populations by comparing ATP5B transcript levels in tumor vs. normal tissue in multiple prostate and breast cancer datasets. ATP5B expression was significantly higher in primary tumors than in normal tissues in both prostate (Fig. 5a-c) and invasive ductal breast cancer (Fig. 5d, e). In tandem with this, mean ATP5B copy number was significantly increased in ER-positive tumors and in a subset of ER-negative tumors (Fig. 5f), although the average copy number for ER-negative tumors overall was reduced. For both types of cancer, above-median ATP5B expression was associated with significantly decreased metastasis-free (Fig. 6a, b) and overall (Fig. 6c, d) survival. These findings are consistent with a role for ATP5B, along with other members of the complex, in supporting metastatic progression.
To characterize ATP5B protein content in human breast and prostate cancer samples, we used Apt63 to label prostate and breast cancer tissue microarrays (TMAs) representing a range of tumor grades and stages. As in the xenograft studies shown above, we confirmed that Apt63 staining co-localized with staining by a monoclonal ATP5B antibody within tumor tissue, while normal adjacent stroma was only minimally bound by either reagent (Fig. 7a). Both cytosolic and membrane staining patterns could be identified by high-resolution confocal microscopy. We observed considerable sample-to-sample heterogeneity of staining patterns across different categories of breast cancer; in some tumors, Apt63 predominantly labeled cytosolic components, including mitochondria, while in others, a clear plasma membrane pattern was identified (Fig. 7b). Staining of normal tumor-adjacent tissue was consistently weak (Fig. 7b, brackets). Using a semi-quantitative score for the presence or absence of membrane-bound ATP5B (see Online Resource 2), we saw no consistent association between the presence of ecto-ATP5B and tumor grade, PAM50 subtype, or hormone receptor status. However, plasma membrane staining of breast cancer cells by Apt63 was strongly and positively associated with tumor stage (r = 0.997, p = 3.12E−03), appearing in 42/46 lymph node metastases and 0/12 normal breast tissue samples, with intermediate frequencies in DCIS and invasive carcinomas (a summary is presented in Table 2 and in Online Resource 3). These results provide further support for a functional relationship between plasma membrane ATP5B, as indicated by Apt63 binding, and breast cancer metastasis.
Discussion
Here we show that ectopic plasma membrane ATP5B, a subunit of F1Fo-ATP synthase, denotes a high metastasis-risk phenotype in breast and prostate cancer, and a vulnerability of cancer cells in vivo. F1Fo ATP synthase is a highly conserved enzyme complex residing on the inner mitochondrial membrane, where it conducts the final step in oxidative ATP production. Its 30 protein components are organized into two domains, the Fo proton-translocating domain and the F1 catalytic domain [44, 45]. Three pairs of ATP5A and ATP5B subunits form the catalytic core of the F1 domain, generating ATP molecules as H+ transits the Fo pore. Defects in ATP synthase contribute to diseases including microbial infection, immune deficiency, neuropathies, obesity, diabetes, and cancer [35, 46, 47]. A plasma membrane-located ATP synthase (ecto-ATP synthase) was initially discovered as a cancer neoantigen more than 20 years ago [48]. Fully functional ATP synthase complexes have been identified on the plasma membrane of certain normal and many tumor cells, and may either hydrolyze or synthesize ATP [10, 11]. Ecto-ATP synthase has been proposed to act as a receptor for apo-A1 and thereby regulate HDL uptake by hepatocytes [49, 50], and to promote endothelial progenitor cell proliferation and angiogenesis [51]. Angiostatin has been shown to bind to ecto-ATP synthase and disrupt its ATP-synthetic activity, contributing to its anti-angiogenic effects [10, 52]. However, angiostatin is able to exert these functions through other receptors on the cell surface, including c-met [53], proteoglycan NG2 [54], and annexin II [55]. The importance of these functions of ecto-ATP synthase in normal cells remains to be fully elucidated [38, 56].

ATP5B emerged in our unbiased screen as a plasma membrane feature that distinguishes the aggressive LNCaP-LN3 cell line from isogenic LNCaP and LNCaP-Pro5 cells, which metastasize infrequently [19]. Collectively, our data suggest that acquiring this feature may have enabled the metastatic phenotype of the LN3 subclone. Despite substantial effort, no other specific drivers of the aggressive LN3 phenotype have been identified. LN3 cells grow well in the absence of androgen, but do not have androgen receptor amplification [57]; LN3 also exhibits higher resistance to apoptosis, associated with upregulation of anti-apoptotic BCL-2 and down-regulation of BAK and BAX [57]. LN3 cells express higher levels of macrophage-inhibitory cytokine-1 (MIC1/GDF15) [58], the chaperone gp96 [59], and VEGFA [60], and show greater tumor vascularity [59, 60] than non-metastatic LNCaP lines. No genetic differences have been shown to explain these properties, although LN3 displays unique deletions in 16q23-qter and 21q of unknown functional significance [61], and lacks a missense mutation in PlexinB1 that is found in parental LNCaP cells and appears to be silent [62]. Previous proteomic analyses found no features distinguishing LN3 from the less-aggressive isogenic lines [63]. The same study found that the endoplasmic reticulum protein ERp5 is overexpressed and displayed on the plasma membrane of both LN3 and Pro5 cells, demonstrating that cycling of intracellular proteins to the plasma membrane is not a rare event during tumorigenesis [63].
Considerable evidence links ecto-ATP synthase to aggressive cancer cell growth. Plasma membrane-associated ATP5 subunits, including ATP5B, have been correlated with more aggressive, larger, and more advanced tumors in multiple cancers, including breast, lung, and prostate [17, 42, 64]. In our breast cancer TMA analysis of Apt63 binding, which included biopsies representing 416 subjects, surface ATP5B appears to define a unique subset of highly aggressive breast and prostate cancers, present on 45% of DCIS and 55% of invasive ductal carcinomas, and on almost all (91.3%) lymph node metastases. Apt63 staining did not appear to align with tumor size or hormone receptor status, suggesting that ecto-ATP5B denotes an independent tumor phenotype. Surface ATP5B also appears to be important as a tumor-specific survival factor: cancer cells expressing ecto-ATP5B were rapidly killed by Apt63 binding, undergoing nuclear translocation of endonuclease G and DNA fragmentation, while adjacent normal tissues were spared. This selective toxicity could mean that certain breast and prostate tumors depend on the presence of functional ecto-ATP synthase, and it points to a vulnerability not shared by non-transformed cells.
The biological importance of ecto-ATP synthase has been explored using a range of physiological and synthetic ligands, including angiostatin, plasminogen, monoclonal antibodies, peptides, and small molecules binding to the F1 module [12, 14, 16, 64-67]. The effects of these agents are both cell type- and ligand-specific, but most reduce extracellular ATP production and cell proliferation, and some initiate programmed death. In HUVECs, which express high levels of ecto-ATP synthase, angiostatin inhibited cell proliferation and ATP production but was not cytotoxic [10]; in A549 lung cancer cells, both angiostatin and a polyclonal anti-ATP5B antibody blocked ATP synthesis, induced intracellular acidification, and triggered cell death [68]. A monoclonal ATP5B antibody (McAb178-5G10) inhibited surface ATP generation and proliferation of HUVECs and MDA-MB-231 cells, but was not toxic by itself [69]. The same antibody induced apoptosis in A549 cells, accompanied by falls in extracellular ATP, intracellular pH, and ERK and AKT phosphorylation [14]. Another monoclonal antibody against ATP5B (mAb6F2C4) inhibited extracellular ATP synthesis, proliferation, and anchorage-independent colony formation of the hepatoma cell line SMMC-7721 [65]; this antibody was also able to reduce hepatoma xenograft growth in vivo. The kringle 1-5 domain of plasminogen, an ecto-ATP synthase ligand, triggered caspase-dependent apoptosis in endothelial cells [52]. On the other hand, binding of apolipoprotein A1 to ecto-ATP synthase promoted the survival and differentiation of endothelial progenitor cells [51]. Differences in binding sites, effects on enzyme conformation, and protein interactions of ATP synthase ligands could explain these divergent effects. Additional microenvironmental factors, including acidic extracellular pH, may permit tumor-selective killing [13, 65]. Further studies will be required to elucidate the specific mechanisms of Apt63-induced programmed cell death in breast and prostate cancer, including effects on extracellular pH, reactive oxygen species, and purinergic nucleotides.
The quantitative relationship between ATP5B gene expression and surface ATP synthase is undetermined and likely complex: ATP synthase subunits are encoded by both nuclear and mitochondrial genomes, and are coordinately regulated through incompletely defined translational and post-translational means [38, 39, 41, 70-72]. Nonetheless, the associations we have identified between ATP5B gene expression and both metastasis-free and overall survival in breast and prostate cancer are remarkable. It is possible that proteomic analysis would demonstrate still stronger links. Comparing the proteomes of MCF-7 breast cancer cells and a highly invasive subclone, Pan et al. [42] found that another ATP synthase subunit, ATP5A, was overexpressed in the aggressive subclone. ATP5A was identified on the surface of these cells, as well as on MDA-MB-231 and MDA-MB-453 breast cancer cell lines, but not on parental MCF-7 cells or on non-tumorigenic MCF-10F breast epithelial cells [42]. In parallel, increased immunoreactive ATP5A was seen in 94% of breast cancers, as well as in 21.2% of normal tissues. This analysis did not discriminate between membrane and cytosolic staining, but the findings are consistent with a relationship between ATP5 protein levels and appearance on the plasma membrane. Other investigators have shown that ATP5B and other subunits of ATP synthase travel on lipid rafts that may shuttle between mitochondrial and plasma membranes [16, 73]; co-localization with caveolin-1 may be required to maintain a functional surface complex in vascular endothelium [73]. It will be important to examine the extent to which ATP5B expression correlates with surface ATP5B in future clinical studies of breast cancer prognosis.

A challenge in determining whether and how cancer cells utilize ecto-ATP synthase lies in the essential role of this enzyme in normal cell metabolism, and in the unclear pathway by which the complex arrives at the cell surface. Our aptamer represents a new tool that will assist in elucidating these questions. Its rapid and selective cytotoxicity to cells expressing ecto-ATP5B may help to resolve structural and mechanistic questions about the importance of this complex to cancer cell survival and metastasis. Ultimately, the ability of Apt63 to target this important but poorly understood tumor antigen in primary breast and prostate tumors may help both to predict and mitigate the risk of future metastasis.

Table 2 Tissue biopsies by pathology groups and scoring for Apt63 membrane staining. High-resolution confocal images of Cy3-Apt63-stained TMAs were examined for membrane-pattern staining as illustrated in Fig. 7. Apt63-specific membrane staining correlates with cancer stage (r = 0.997, p = 3.12E−03; *carcinoma). The group of benign tumors comprises adenosis, hyperplasia, and fibroadenosis. Online Resource 3 contains the complete list of 416 core biopsies with stage, grade, pathology diagnosis, and Apt63 score.
"Medicine",
"Biology"
] |
An engagement-aware predictive model to evaluate problem-solving performance from the Survey of Adult Skills (PIAAC 2012) process data
The benefits of incorporating process information in a large-scale assessment, in the form of complex micro-level evidence from the examinees (i.e., process log data), are well documented in research across large-scale assessments and learning analytics. This study introduces a deep-learning-based approach to predictive modeling of the examinee's performance in sequential, interactive problem-solving tasks from a large-scale assessment of adults' educational competencies. The proposed method disambiguates problem-solving behaviors using network analysis and uses them to predict the examinee's performance across a series of problem-solving tasks. The unique contribution of this framework lies in the introduction of an "effort-aware" system: the system considers information regarding the examinee's task-engagement level to accurately predict their task performance. The study demonstrates the potential of introducing a high-performing deep learning model to learning analytics and examinee performance modeling in a large-scale problem-solving task environment, using data collected from the OECD Programme for the International Assessment of Adult Competencies (PIAAC 2012) test in multiple countries, including the United States, South Korea, and the United Kingdom. Our findings indicate a close relationship between the examinee's engagement level and their problem-solving skills, as well as the importance of modeling them together to obtain a better measure of students' problem-solving performance.
Large-scale digital assessment in an interactive online environment is designed to evaluate examinees' thinking and problem-solving skills (Van Laar et al., 2017). An increasing number of large-scale assessments, such as the Programme for the International Assessment of Adult Competencies (PIAAC), the Programme for International Student Assessment (PISA), and the Trends in International Mathematics and Science Study (TIMSS), have recently introduced more innovative test solutions with novel item formats to assess problem-solving or collaborative problem-solving performance (e.g., Barber et al., 2015; Mullis et al., 2021). For example, PIAAC is an international assessment that was the first fully computer-based large-scale assessment in education and the first to provide public anonymized log file information widely (https://www.oecd.org/skills/piaac/). PIAAC's problem-solving assessment in a technology-rich environment (PS-TRE hereafter) is designed to assess the adult examinee's ability to use "digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks" (Rouet et al., 2009, p. 9; OECD, 2012, p. 47). In this test, examinees are provided with varying types of problem-solving tasks that embed authentic real-life scenarios.
These non-traditional, interactive, digital problem-solving items encourage examinees to demonstrate their authentic skill sets through their responses and the traces of activities associated with solving the task (Jiang et al., 2021). The traces, stored as metadata of examinees' interactions, are the process log data or click-stream information. The process log data provides insights into the examinee's behavior that are not easily disambiguated with the response data alone, especially in many non-traditional and interactive large-scale assessments. The process log information uncovers more individualized and diagnostic evidence about the examinees' latent abilities (Goldhammer et al., 2014; He & von Davier, 2015; Scherer et al., 2015; Wang et al., 2021), which enhances the reliability and validity evidence of the assessments (Kroehne & Goldhammer, 2018; Ramalingam & Adams, 2018) and identifies examinees who are depicting anomalous behaviors (Lundgren & Eklöf, 2020). For instance, Jiang et al. (2021) demonstrated how process data gathered specifically from students' drag-and-drop actions in a large-scale digitally-based assessment environment could infer examinees' varying levels of cognitive and metacognitive processes, such as their problem-solving strategies.
Incorporating the process information in a large-scale assessment to achieve such goals requires several methodological and empirical considerations. First, the complex micro-level evidence from the examinees (i.e., process log data) needs to be analyzed to extract explainable and interpretable patterns that inform the examinee's latent abilities (e.g., problem-solving strategies; Polyak et al., 2017; von Davier, 2017). Second, the examinees' demonstration of knowledge and skills needs to be modeled at the level of task sequences to provide more generalizable implications compared to item-level results (Ai et al., 2019; Jiang et al., 2020; Liu et al., 2019a, 2019b; Wang et al., 2017). Third, careful consideration is required to evaluate the effect of students' sentiments or affect that may influence their performance, such as task-disengagement behaviors (Wise, 2020).
With the recent wide introduction of machine learning and deep learning approaches in large-scale assessments and learning analytics, increasing attempts are being made to analyze the process data from large-scale assessments more effectively and efficiently. Hence, in this study, we propose a novel analytic framework in which the examinee's complex and long traces of process log data are used to understand problem-solving skills and performance. The present study is rooted in the fields of learning analytics and psychometrics. We combined multiple advanced computational methods, including social network analysis and deep neural network models. Our framework also models the examinee's task-engagement status for a more accurate representation of the
performance and skill demonstration in the series of interactive tasks. One research question is addressed to guide the study: Does modeling the engagement levels with problem-solving skills improve the prediction performance of the LSTM model for items solved on a large-scale assessment?
To describe how our research question was addressed using the PIAAC PS-TRE assessment, the subsequent sections focus on three primary topics. First, we present the construct measured by the PS-TRE test and its three core dimensions, thereby providing contextual information on the types of tasks our research aims to investigate and evaluate. Second, we offer an overview of the literature, concentrating on methodologies introduced to understand the PS-TRE construct, with a specific focus on recent studies that have utilized process data to model the tasks associated with this construct. Lastly, we provide an overview of how test engagement is currently modeled in various large-scale assessment settings, underscoring the importance of capturing test engagement in the PS-TRE.
Problem-solving tasks in PIAAC PS-TRE
The PIAAC problem-solving assessment in a technology-rich environment (PS-TRE) is designed to assess the adult examinee's ability to use "digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks" (OECD, 2012). Problem-solving here refers to situations that cannot be resolved through routine activities and that instead demand a complex hierarchy of cognitive skills and processes. A technology-rich environment indicates that technologies (e.g., spreadsheets, Internet search, websites, email, social media, or their combination) are required to solve the assessment task (Vanek, 2017).
The three core dimensions of PS-TRE include task/problem statements, technologies, and cognitive dimensions. These dimensions are closely connected because examinees rely on their choice of technologies to solve the problems, which requires the cognitive skills to successfully use the selected technology to solve the problem or accomplish the task. The examinees are provided with varying types of problem-solving tasks that embed authentic real-life scenarios based on the intertwined dimensions of PS-TRE. A problem-solving task can be provided by connecting any domains in each core conceptual dimension, as described in Appendix 1.
The interactions between these three core components create complex problem-solving tasks. Examinees are required to use a sequence of actions to correctly address these tasks, resulting in a substantial collection of process logs and clickstream information. The following section explores the modeling of this extensive data, aiming to extract meaningful insights into the problem-solving strategies used by examinees during the assessment.
Modeling problem-solving strategies in PS-TRE with process data
Increasingly, studies have introduced various computational and artificial intelligence-powered methods to effectively understand examinees' responses as well as the complex interaction process log information gathered in PIAAC's PS-TRE. These studies have often adopted clustering analysis (He et al., 2019a), pattern mining analysis (Liao et al., 2019), graph modeling approaches (Ulitzsch et al., 2021), and clickstream analysis with supervised learning models (Ulitzsch et al., 2022). For instance, He et al. (2019a, b) adopted the K-means algorithm (Lloyd, 1982) to cluster behavioral patterns from one representative PS-TRE item based on features extracted from process data (i.e., unigrams from an n-grams model, total response time, and the length of action sequences) to explore the relationship between behavioral patterns and proficiency estimates, covaried by employment-based background. They found that more actions and longer response times tended to accompany higher PS-TRE scores among examinees who answered incorrectly, indicating that process data tends to be more informative when items are not answered correctly.
To further investigate the impact of employment-based background, Liao et al. (2019) mapped employment-based variables to action sequences in process data using regression analysis and a chi-square feature selection method. They found that groups with different levels of employment-based background variables tended to exhibit distinctive action sequences when solving problems. However, it should be noted that these earlier approaches (e.g., He et al., 2019a, b; Liao et al., 2019) analyzed only item-level timing data rather than the time consumed between actions (i.e., action-level time), so the more detailed underlying cognitive processes reflected in timestamped action sequences might be neglected. Ulitzsch et al. (2021) proposed a two-step approach to analyze the complete information contained in time-stamped action sequences for a deeper investigation of the behavioral processes underlying task completion. The researchers integrated tools from clickstream analyses and graph-modeled data clustering with psychometrics so that action sequences and action-level times could be combined in one analysis framework. In another study, enriching generic features extracted from sequence data by clickstream analysis, Ulitzsch et al. (2022) extracted features from time-stamped early action sequences (i.e., early-window clickstream data) and used an extreme gradient boosting (XGBoost) classifier (Chen & Guestrin, 2016). Within this procedure, early-window datasets were created to train the model by removing all later time-stamped actions (i.e., those occurring after a given number of actions or a given amount of time from the sequences), thereby allowing the features taken from clickstreams to focus on the occurrence, frequency, and sequentiality of actions, supplemented by features based on the amount of time consumed to carry out certain actions. Based on this clickstream analysis with a supervised learning model, Ulitzsch et al. (2022) investigated the early predictability of success or failure on problem-solving tasks before examinees complete the tasks, deepening the understanding of the trajectories of behavioral patterns in PS-TRE.
These studies demonstrated excellent potential to advance our understanding with interpretable results on different facets of students' knowledge and abilities. However, to our knowledge, no study has modeled problem-solving at the level of task sequences while also considering the examinees' engagement status in the analysis of problem-solving knowledge modeling. Therefore, in the subsequent section, we introduce how test-taking engagement has been defined in previous literature, along with the methodologies explored to investigate this construct. We then highlight the benefits and advantages of employing test-taking engagement as a simultaneous measure to effectively evaluate students' performance.
Engagement in knowledge modeling with problem-solving performance in large-scale assessment
Test-taking engagement describes whether the test taker remains engaged throughout a test, which is an underlying assumption of using all psychometric models in practice (Wise, 2015, 2017). The term test-taking engagement also refers to test-taking effort. Test disengagement has been defined as providing or omitting responses to items with no adequate effort (Kuang & Sahin, 2023), indicated by rapid-guessing behavior (Schnipke, 1996; Wise & Kong, 2005) and item-skipping behavior. A lack of test-taking engagement is a major threat to the validity of test score interpretation even under good test design (Wise & DeMars, 2006), especially in low-stakes assessments such as PIAAC (Goldhammer et al., 2016).
Modeling test-taking engagement in problem-solving tasks resolves a potential validity threat (e.g., construct-irrelevant variance) that can confound the examinees' performance results (Braun et al., 2011; Goldhammer et al., 2016; Keslair, 2018; Wise, 2020). Information gathered from the examinee's response data in the task is commonly used to model their task-engagement level. Various item response theory (IRT)-based models incorporate students' engagement to predict their latent traits (Deribo et al., 2021; Liu et al., 2019a, b; Wise & DeMars, 2006). For instance, Wise and DeMars (2006) introduced the effort-moderated IRT (EM-IRT) model, in which disengaged responses are treated as missing data and the engaged responses are fit to a unidimensional IRT model. Response time was used to identify students' engagement in the EM-IRT model. More recently, studies have explored the use of data gathered from interactions, such as process log data, to evaluate examinees' test-taking effort and motivation (Lundgren & Eklöf, 2020, 2021).
The combination of response time and response behaviors has been used as an "enhanced" method to detect examinees' disengagement (Sahin & Colvin, 2020). Within this approach, response behaviors (e.g., keypresses, clicks, and use of interactive tools) are collected from the process data (Kuang & Sahin, 2023). Sahin and Colvin (2020) set a threshold for the maximum number of response behaviors that suggests no or minimal engagement; however, they did not use statistical models to analyze the response behaviors from process data. Only a small number of studies have demonstrated the capacity to model the examinee's engagement and problem-solving performance from process data across a sequence of tasks (as well as at the individual task level). Since test engagement can be treated as a latent trait underlying response behaviors, and deep learning approaches have the advantage of modeling process data across a sequence of tasks to capture examinees' response behaviors, it is worth investigating how to apply deep learning approaches (such as Long Short-Term Memory networks) to detect test engagement.
Long short-term memory networks
Our study implements a variant of the recurrent neural network (RNN) model to effectively and accurately track students' problem-solving performance across a sequence of PS-TRE tasks. Unlike traditional feed-forward neural networks, RNN models introduce a simple loop structure in the hidden layer to account for a sequence or history of inputs. In our study, we use one special variation of the RNN, the Long Short-Term Memory (LSTM) network. The LSTM model consists of units called memory blocks. Each memory block contains multiple gates (input, forget, and output) that control the flow of information.
Figure 1 provides an overview of an example LSTM memory cell structure. In our study, we use the memory cell to input, modify, extract, and communicate the deterministic information about the examinee's problem-solving strategies and performance on a sequence of tasks, where $t$ indexes the task the examinee is interacting with. Specifically, the input data is determined by the batch size $n$, the number of features $d$, and the number of hidden units $h$: $x^{(t)} \in \mathbb{R}^{n \times d}$, together with the hidden state of the previous task $h^{(t-1)} \in \mathbb{R}^{n \times h}$, giving the final input $X_T = [h^{(t-1)}, x^{(t)}]$. This input is provided to the forget gate $f^{(t)} \in \mathbb{R}^{n \times h}$, the input gate $i^{(t)} \in \mathbb{R}^{n \times h}$, and the output gate $o^{(t)} \in \mathbb{R}^{n \times h}$. The forget gate governs the degree to which information from previous tasks is omitted from the cell state, the input gate governs how much new information about the examinee's problem-solving skills is inferred from the current task, and the output gate produces the output that is communicated to the next cell state for task $t + 1$.

The interim values after entering the gates are computed as

$$i^{(t)} = \sigma\left(x^{(t)} w_{xi} + h^{(t-1)} w_{hi} + b_i\right), \quad f^{(t)} = \sigma\left(x^{(t)} w_{xf} + h^{(t-1)} w_{hf} + b_f\right), \quad o^{(t)} = \sigma\left(x^{(t)} w_{xo} + h^{(t-1)} w_{ho} + b_o\right),$$

where $w_{xi}, w_{xf}, w_{xo} \in \mathbb{R}^{d \times h}$ and $w_{hi}, w_{hf}, w_{ho} \in \mathbb{R}^{h \times h}$ represent the weights of each gate, $b_i, b_f, b_o \in \mathbb{R}^{1 \times h}$ represent the bias of each gate, and $\sigma$ denotes the sigmoid function. The input node $\tilde{c}^{(t)} \in \mathbb{R}^{n \times h}$ is computed similarly to the other gates, with the activation function $\tanh(x) = (e^{x} - e^{-x})/(e^{x} + e^{-x})$ replacing the sigmoid used in the other three gates.

The memory cell outputs the internal state and the hidden state $h^{(t)} \in [-1, 1]$. The hidden state at task $t$ combines the input, forget, and output gates in deciding the impact of the current memory on the next memory cell. A hidden state close to 0 minimizes the current impact on the next cell, while a value close to 1 passes the internal state value to the next cell with no restriction. The memory cell updates the internal state $c^{(t)}$ at task $t$ by gathering information from the forget gate, the input gate, and the previous cell state as follows:

$$c^{(t)} = f^{(t)} \odot c^{(t-1)} + i^{(t)} \odot \tilde{c}^{(t)}, \qquad h^{(t)} = o^{(t)} \odot \tanh\left(c^{(t)}\right). \tag{1}$$
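As a concrete illustration of the update equations above, the sketch below implements a single memory-cell step in NumPy. It is a minimal, self-contained example rather than the study's implementation; the toy dimensions and randomly initialized weights are purely illustrative.

```python
# Minimal NumPy sketch of one LSTM memory-cell update, following the gate
# equations above. Shapes mirror the text: batch n, features d, hidden h.
# Weight matrices are randomly initialized here purely for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_x, W_h, b):
    """One step: x_t is (n, d); h_prev and c_prev are (n, h)."""
    z = x_t @ W_x + h_prev @ W_h + b          # (n, 4h): pre-activations for all gates
    h = h_prev.shape[1]
    i = sigmoid(z[:, :h])                     # input gate
    f = sigmoid(z[:, h:2*h])                  # forget gate
    o = sigmoid(z[:, 2*h:3*h])                # output gate
    c_tilde = np.tanh(z[:, 3*h:])             # input node (candidate state)
    c_t = f * c_prev + i * c_tilde            # internal state update, Eq. (1)
    h_t = o * np.tanh(c_t)                    # hidden state passed to task t + 1
    return h_t, c_t

n, d, h = 4, 5, 8                             # toy sizes
rng = np.random.default_rng(0)
x_t = rng.normal(size=(n, d))
h_prev, c_prev = np.zeros((n, h)), np.zeros((n, h))
W_x = rng.normal(scale=0.1, size=(d, 4*h))
W_h = rng.normal(scale=0.1, size=(h, 4*h))
b = np.zeros(4*h)
h_t, c_t = lstm_step(x_t, h_prev, c_prev, W_x, W_h, b)
```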
Attention mechanism
The simple LSTM model can be limited in detecting which element provides important information for determining examinees' problem-solving performance while accounting for their engagement level. Hence, we introduce an attention layer to explicitly model this information. Let $H \in \mathbb{R}^{d \times t}$ represent the hidden layers derived from the memory cell of each problem-solving task $t$ in an LSTM model with $d$ hidden units. The attention layer we use in the current framework is a global attention layer. The global attention layer represents the latent information extracted from the sequences of output from the encoder (i.e., the input data encoded using the LSTM) in order to help the decoder (i.e., the LSTM generating the output) utilize global evidence related to examinees' problem-solving skills to output correct predictions. The dot-product attention computes the element-wise multiplication between the hidden states of the encoder and decoder of task $t$, $h_t$ and $s_t$, with the attention weight $W = \{w_1, w_2, \ldots, w_n\}$, where the attention $\alpha$ is captured as

$$\alpha = \mathrm{softmax}\left(h_t^{\top} W_a s_t\right). \tag{2}$$

Then, the final weighted representation of the hidden state is derived by combining the dot-product attention $\alpha$ and the hidden layer $H$ as $r = H\alpha^{\top}$. Using this information, we can represent the students' problem-solving performance through a combination of projection parameters $W_p$ and $W_x$ as

$$h^{*} = \mathrm{sigmoid}\left(W_p r + W_x h_n\right), \tag{3}$$

where the parameters $W_p$ and $W_x$ are learned during training (Rocktaschel et al., 2015). In our study, we use these projection parameters to visualize whether the attention layer accurately captures the examinee's problem-solving performance and engagement across a sequence of problem-solving tasks. The final univariate/multivariate outcome(s) (performance and engagement) are computed from $h^{*}$ as $y = \mathrm{softmax}(W_s h^{*} + b_s)$, where $W_s$ represents the output layer weights and $b_s$ the output layer bias. In this way, the model outputs whether the student was engaged (= 0) or not engaged (= 1), as well as the score category the student acquired on the task, as its final outcome (see Fig. 2).
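The attention computation itself reduces to a handful of matrix operations. The following NumPy sketch mirrors Eq. (2) and the weighted representation $r = H\alpha^{\top}$ for a single examinee; the shapes and random weights are illustrative assumptions, meant only to make the notation concrete.

```python
# Minimal sketch of the global dot-product attention described above:
# scores over the encoder hidden states are softmax-normalized and used to
# form the weighted representation r. H holds one hidden vector per task;
# all weights and sizes below are illustrative.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, t = 8, 5                                   # hidden size, number of tasks
rng = np.random.default_rng(1)
H = rng.normal(size=(d, t))                   # encoder hidden states, one column per task
s_t = rng.normal(size=(d,))                   # decoder hidden state for the current task
W_a = rng.normal(scale=0.1, size=(d, d))      # attention weight matrix

alpha = softmax(H.T @ W_a @ s_t)              # Eq. (2): one attention weight per task
r = H @ alpha                                 # weighted representation of the hidden states
```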
Using Long Short-Term Memory (LSTM) models to evaluate students' engagement and performance from process log data in the PS-TRE is a particularly effective approach due to several key advantages of LSTMs. These neural networks are uniquely suited to handling sequential data, a core aspect of process log data, where the order and timing of actions are critical indicators of student engagement and performance. This allows us to evaluate students' performance and engagement effectively across multiple items and tasks, moving beyond analyzing the examinee's performance at an individual item level (e.g., Shin et al., 2022; Tang et al., 2016). LSTMs excel in capturing not just immediate dependencies but also long-term patterns in sequences, which is crucial in the PS-TRE context, where early actions can influence later ones and patterns of engagement may change over time. This makes it possible to capture and store information from the examinee's process data at the very first task or item they engage with, and to use that information to infer and predict their performance on the very last item they interact with.
The ability of LSTMs to learn complex patterns in sequential data is another significant advantage. They can handle variable-length sequences, a common characteristic of PS-TRE log data, ensuring consistent model performance across different data lengths (Hernández-Blanco et al., 2019). This aspect is vital, considering that each examinee's interaction with the assessment varies in length and complexity. One of the standout features of LSTMs is their capacity for automatic feature extraction from raw sequential data. This is particularly beneficial for PS-TRE, where manually identifying relevant features from log data can be challenging. LSTMs can not only understand the context of each action within the broader sequence of events but also use this understanding to predict future behavior. This predictive ability is useful not only for analyzing past and present actions but also for potential real-time applications, such as adaptive testing or personalized learning interventions. Furthermore, LSTMs are robust to noise and irregularities in data, which are common in log files due to varied user behaviors and system inconsistencies (e.g., Fei & Yeung, 2015). Their capability to generalize from training data to unseen test data is vital for deploying models in different assessment environments.
Hence, the LSTM's proficiency in processing sequential data, its capability to detect and learn relevant features, and its robustness against data irregularities make it an appropriate choice for modeling the dynamics of student engagement and performance in PS-TRE. By leveraging the rich, time-ordered data in process logs, LSTMs provide deep insights crucial for educational assessments and learning analytics.
Data
We used the data collected from the first round of the OECD PIAAC Main Study, which was conducted from August 2011 to November 2012, involved 24 countries/economies, and was the first computer-based large-scale assessment to provide public anonymized log file data. Our investigation focused on the cognitive domain of PS-TRE. A total of 14 tasks were dichotomously or polytomously scored (five 3-point, one 2-point, and eight dichotomously scored items) (OECD, 2016). We analyzed the data collected from the United States (4131 units), South Korea (7024 units), and the United Kingdom (7250 units). The log file of the PS-TRE tasks contained various information, including the environment from which each event was issued (within the stimulus or outside of the stimulus), the event type, timestamps, and a detailed description of the event. In this study, we experimented with the items included in one booklet (PS1) to demonstrate the prediction capacity of our proposed analytic framework (see Table 1).
Binary task engagement level
The T-disengagement method (Goldhammer et al., 2016) was used to label test takers' engagement by response time as part of the training set. The term "T-disengagement" (OECD, 2019) describes a situation in which an examinee spends less time on a PIAAC task than a task-specific threshold. The approach to computing this task-specific threshold is based on the relationship between the probability of giving a correct answer and the time spent on the task (Goldhammer et al., 2016). The underlying idea is that disengaged examinees tend to be less accurate than engaged examinees (Wise, 2017). To determine the time threshold t, the procedure first computes the probability of getting a task correct at time t, using the observations with a time on task between t and t + 10 s. The probability of correctness is then modeled as a linear function of time, provided the number of observations is sufficient (e.g., > 200). Finally, the task-specific time threshold is set to the smallest t for which the estimated probability of correctness is higher than 10%. The T-disengagement value was used in our study to create an engagement indicator: if an examinee spends less time on a task than the task-specific threshold, they are labeled as disengaged; otherwise, they are considered engaged. Using the threshold calculated for each item in PS-TRE, we generated a binary outcome variable representing each examinee's engagement status.
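A simplified version of this threshold search can be written directly from the description above. The sketch below assumes NumPy arrays of time-on-task (seconds) and scored correctness (0/1) for one task; it uses the empirical proportion correct in each [t, t + 10 s) window rather than the linear-probability model of the original procedure, while the window width, minimum-observation rule, and 10% cutoff follow the values quoted in the text.

```python
# Sketch of the T-disengagement threshold and the resulting binary labels.
import numpy as np

def t_disengagement_threshold(times, correct, window=10.0, min_obs=200, p_cut=0.10):
    """Smallest t at which P(correct | time in [t, t + window)) exceeds p_cut."""
    times, correct = np.asarray(times), np.asarray(correct)
    for t in np.arange(0.0, times.max(), 1.0):        # candidate thresholds on a 1-s grid
        in_window = (times >= t) & (times < t + window)
        if in_window.sum() < min_obs:                  # require enough observations
            continue
        if correct[in_window].mean() > p_cut:          # estimated P(correct) > 10%
            return t
    return None

def engagement_labels(times, threshold):
    """Disengaged (1) if time on task is below the task-specific threshold."""
    return (np.asarray(times) < threshold).astype(int)
```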
Methods
Figure 3 provides a conceptual representation of our analytic model. Our analytic framework is based on a specific neural network model, the Long Short-Term Memory network (LSTM; Hochreiter & Schmidhuber, 1997). The LSTM model takes a sequence of actions captured while the examinees navigated through each item. The first layer of the model converts the input sequences of actions from the process log data into a directed graph, where a node represents an activity within an item and the edges represent the connectivity between two actions. The edges are weighted by the total amount of time between the two actions. The overall task-navigating process of each examinee is then summarized using network statistics, which capture the interactions present in the network. Our analysis adopted five key network statistics: centralization, density, flow hierarchy, shortest path, and the total number of nodes, each contributing to a comprehensive understanding of the interactions within the network (a sketch of this feature extraction follows below). This approach aligns with recent trends in educational data mining, where network analysis is increasingly applied to understand learning processes (Salles et al., 2020; Zhu et al., 2016).
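The sketch below illustrates this first layer under stated assumptions: a hypothetical timestamped action sequence is converted into a time-weighted directed graph with networkx, and the five statistics are computed from it. The centralization formula shown is Freeman's degree centralization, one common choice; the paper does not specify which variant was used.

```python
# Sketch of the first-layer feature extraction: an examinee's timestamped
# action sequence becomes a directed graph (edges weighted by elapsed time),
# summarized by the five network statistics named above.
import networkx as nx

actions = [("start", 0.0), ("open_email", 2.1), ("click_link", 5.4),
           ("open_email", 9.0), ("submit", 12.3)]      # hypothetical trace

G = nx.DiGraph()
for (a, t0), (b, t1) in zip(actions, actions[1:]):
    w = t1 - t0                                        # time between consecutive actions
    if G.has_edge(a, b):
        G[a][b]["weight"] += w                         # accumulate repeated transitions
    else:
        G.add_edge(a, b, weight=w)

n = G.number_of_nodes()
density = nx.density(G)
flow_hierarchy = nx.flow_hierarchy(G)                  # fraction of edges not in cycles
# Average shortest path on the undirected version, to avoid unreachable
# node pairs in a sparse directed trace.
avg_shortest = nx.average_shortest_path_length(G.to_undirected())
# Freeman's degree centralization: dispersion of degree around the maximum.
degs = [deg for _, deg in G.degree()]
centralization = (sum(max(degs) - d for d in degs) / ((n - 1) * (n - 2))
                  if n > 2 else 0.0)

features = [centralization, density, flow_hierarchy, avg_shortest, n]
```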
Converting the process log data into a directed graph in the first layer of the LSTM pipeline is a strategic decision that offers numerous benefits, particularly in the context of assessing complex sequential data like that found in PS-TRE. This conversion allows for a structured representation of the data, where each node in the graph represents an individual action or activity, and directed edges signify the sequence of and transitions between these actions. Importantly, by weighting these edges with the time elapsed between actions, the graph effectively captures the temporal dynamics integral to understanding examinee engagement and problem-solving processes.
This graph-based approach significantly enhances the analysis of sequential interactions among different actions (Zhu et al., 2016). It provides a more nuanced perspective on how examinees approach and navigate through tasks, revealing patterns and strategies in their problem-solving process. By employing network analysis techniques, such as evaluating centralization, density, flow hierarchy, shortest path, and the total number of nodes, the model can delve deeper into the complexity and efficiency of examinees' approaches. Additionally, the directed graph structure is highly conducive to advanced machine learning techniques, such as those used in LSTM models, facilitating more accurate predictions and classifications based on the patterns identified in the graph (e.g., Zeng et al., 2021; Zhang & Guo, 2020). Beyond the analytical advantages, this representation also aids in the interpretability and visualization of the data, making the problem-solving process more accessible for educators and researchers to understand and visualize. Moreover, this method's flexibility and scalability make it adaptable to various assessment scenarios, capable of accommodating different types of actions and interactions (Hanga et al., 2020). Overall, this first layer's transformation of log data into a directed graph lays a robust foundation for subsequent, in-depth analysis, capitalizing on the strengths of network analysis and machine learning to provide insightful interpretations of examinee behavior.
The encoder and decoder then summarize the network statistics and map them onto the prediction outcomes. The encoder summarizes the input and represents it as an interim representation called the internal state vectors. The decoder, in turn, generates sequences of output using the internal state vectors from the encoder as input. In our study, we present two variations of the model that differ in the type and number of outputs associated with the input. The first model (Attention-LSTM) concerns only the association between students' process activities (log information) and their performance outcome (i.e., categorical scores) on each task. The second model (Effort-Aware LSTM) additionally models the associations between students' process activities and their task-engagement level, to reduce any effects stemming from the low-stakes characteristics of the current dataset. In summary, the second model is designed to produce output regarding students' performance scores simultaneously with their task-engagement level for each task.
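A minimal sketch of this second, effort-aware variant is shown below in PyTorch. The layer sizes, the single-layer attention scoring, and the two linear output heads are illustrative assumptions rather than the tuned architecture reported here; the point is the shared encoder with simultaneous score and engagement outputs.

```python
# Minimal PyTorch sketch of the second model variant: a shared LSTM encoder
# over per-task network-statistic features, a simple attention layer, and two
# heads predicting the score category and the binary engagement label per task.
import torch
import torch.nn as nn

class EffortAwareLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64, n_score_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)              # one attention score per task
        self.score_head = nn.Linear(hidden, n_score_classes)
        self.engage_head = nn.Linear(hidden, 2)       # engaged vs. disengaged

    def forward(self, x):                             # x: (batch, n_tasks, n_features)
        H, _ = self.lstm(x)                           # (batch, n_tasks, hidden)
        alpha = torch.softmax(self.attn(H), dim=1)    # attention over the task sequence
        ctx = alpha * H                               # attention-weighted hidden states
        return self.score_head(ctx), self.engage_head(ctx), alpha

model = EffortAwareLSTM()
x = torch.randn(8, 5, 5)                              # 8 examinees, 5 tasks, 5 features
score_logits, engage_logits, alpha = model(x)
# Training would combine a cross-entropy loss on the score categories with a
# cross-entropy loss on the T-disengagement labels.
```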
In order to increase the interpretability of the model decisions (i.e., whether the model is correctly capturing the information related to students' latent ability level), we included an attention mechanism. The global attention layer represents the latent information extracted from the sequences of output from the encoder in order to help the decoder utilize global evidence related to examinees' problem-solving skills (Model 1) or problem-solving skills together with engagement level (Model 2).
Evaluation
A two-step evaluation process was used. In the first step, we compared the two variations of the LSTM model based on their ability to predict overall and item (or task)-specific performance scores. To ensure a comprehensive assessment, we employed three evaluation metrics: accuracy, F1-score, and the area under the Receiver Operating Characteristic (ROC) curve. These metrics were chosen for their ability to provide a balanced view of the model's predictive performance, considering aspects like the balance between sensitivity and specificity (ROC curve) and the harmonic mean of precision and recall (F1-score). The final evaluation metrics were derived from the average results obtained through threefold cross-validation. This cross-validation approach adds rigor to our evaluation, ensuring that the performance metrics are robust and not overly fitted to a specific partition of the data.
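The sketch below shows one way to assemble this protocol with scikit-learn, assuming a hypothetical `fit_predict_proba` wrapper that trains the model on one fold and returns class probabilities for the held-out examinees; `X` and `y` stand in for the task-level features and (multi-category) score labels.

```python
# Sketch of the evaluation protocol: threefold cross-validation with
# accuracy, weighted F1, and ROC AUC averaged over folds.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(X, y, fit_predict_proba, n_splits=3, seed=0):
    """X, y: NumPy arrays; fit_predict_proba: hypothetical train/predict wrapper."""
    accs, f1s, aucs = [], [], []
    kf = KFold(n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        proba = fit_predict_proba(X[train_idx], y[train_idx], X[test_idx])
        pred = proba.argmax(axis=1)
        accs.append(accuracy_score(y[test_idx], pred))
        f1s.append(f1_score(y[test_idx], pred, average="weighted"))
        aucs.append(roc_auc_score(y[test_idx], proba, multi_class="ovr"))
    return np.mean(accs), np.mean(f1s), np.mean(aucs)
```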
The second step involved conducting a Principal Component Analysis (PCA) on the final attention layer of our engagement-aware model (e.g., Chen et al., 2018). Applying PCA to the last attention layer of an LSTM model, which handles complex data related to student engagement and performance, offers significant benefits. First, PCA reduces the dimensionality of the high-dimensional outputs generated by the attention layer. This reduction is crucial, as it retains essential patterns and variances while uncovering underlying latent associations. The ability of PCA to reveal latent relationships within the attention layer's output is particularly valuable: it exposes underlying structures that might not be immediately evident, providing deeper insights into how the model processes and combines various aspects of the input data (e.g., Qiao & Li, 2020; Zhang et al., 2020). Moreover, PCA helps validate the focus of the attention mechanism, ensuring that it aligns with features pertinent to the task. This validation is essential for confirming that the model adheres to theoretical and empirical expectations and that the predictive model focuses on, and depends on, an adequate source of information for its decision-making process (Terrin et al., 2003).
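Concretely, this step can be sketched as follows with scikit-learn's PCA, where `attention_out` stands in for the interim attention-layer output extracted from the trained model; the dimensions here are placeholders.

```python
# Sketch of the second evaluation step: PCA on the final attention-layer
# output to inspect whether its leading components align with performance
# and engagement. attention_out is a placeholder (examinees x units) matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
attention_out = rng.normal(size=(500, 64))        # hypothetical interim output

pca = PCA(n_components=2)
scores = pca.fit_transform(attention_out)         # component scores per examinee
print("variance explained:", pca.explained_variance_ratio_)
# Component scores can then be plotted against performance (dot size) and
# engagement status (color), and correlated with both via Pearson's r.
```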
Results
Tables 2 and 3 provide the overall performance results of the two variations of the models proposed in this study. The results showed that the Engagement-Aware Attention-LSTM model achieved improved performance in predicting student performance scores on all three evaluation metrics across all three countries. Our first model (Attention-LSTM) produced F1-scores close to 0.82, ROC values of 0.70-0.75, and accuracies of 0.75-0.78 across all three countries. The second model (Engagement-Aware Attention-LSTM) produced F1-scores of 0.88-0.90, ROC values of 0.82-0.84, and accuracies of 0.80-0.88. The prediction of the examinee's engagement level produced F1-scores of 0.92-0.94, ROC values of 0.86-0.88, and accuracies of 0.84-0.87. In summary, an improvement in problem-solving performance prediction was observed for the second model. For individual tasks (see Table 3), similar patterns were identified across the three countries, with the engagement-aware model acquiring slightly improved performance results compared to the other model. The model results also demonstrated that the engagement-aware model could predict the engagement and disengagement levels of the participants across all five tasks with high accuracy. Specifically, the improvement in prediction accuracy was highest for Task 5, where the F1-score improved by +0.21 to +0.28 and accuracy improved by +0.20 to +0.23.
Attention-layer visualization: engagement and performance latent variables
Appendix 2 provides visualizations of the attention layer from the engagement-aware model for each problem-solving task with the U.S. participant data set. The principal component analysis results visualize the potential underlying components that our attention mechanism captured to make correct decisions regarding students' performance results. The results showed that the interim output of the attention layer could be systematically explained by two components: the first aligned with the problem-solving performance skill level, with a relatively small share of variance explained by the second component, the engagement level. The two components accounted for 75.5% and 14.4% of the variance in the Task 1 attention scores, 74.1% and 13.7% in Task 2, 56% and 30.7% in Task 3, 75.9% and 13.4% in Task 4, and 80.3% and 9.1% in Task 5.
More specifically, the size of the dots in Appendix 3 represents the students' performance scores: larger dots represent students who scored higher on the task. The red and blue dots represent students' engagement and disengagement status, respectively (Goldhammer et al., 2016). The figures for Tasks 1, 4, and 5 showed clear alignments between the principal component scores and the problem-solving performance and engagement levels. For instance, the visualization of the principal component scores for Tasks 1, 4, and 5 shows a visible alignment between dot size and the continuum of principal component score 1, as well as a clear alignment between the dot colors and principal component score 1, where a higher component score indicated an increased engagement level. However, the alignment between the component scores and the performance and engagement levels was less clear for Tasks 2 and 3, where the color separation of the dots (engagement vs. disengagement) was less distinctive across the component scores.
The Pearson correlation coefficients between the principal component scores and the examinee's performance and engagement levels revealed similar findings (Table 4). The primary component scores in Task 4 and Task 5 showed moderate to high positive correlations with the students' engagement level (0.45-0.53) and performance level (0.28-0.67). The primary component in Task 1, interestingly, showed a moderate negative correlation with the engagement score (-0.57) and a positive correlation with performance (0.564). We also observed that when the PCA scores aligned well with the engagement and performance levels, a comparably higher contribution to the prediction performance was observed. We discuss these findings and their implications further in the next section.
Conclusions and discussion
The purpose of our study was to describe and demonstrate an analytic framework in which complex and long traces of process log data are used to understand problem-solving skills and performance, based on the examinee's log data from the problem-solving tasks in PIAAC 2012. Our engagement-aware LSTM model outperformed the other model in accurately classifying students based on their problem-solving performance. The current empirical findings sit well in the existing literature by highlighting the importance of behavioral patterns or action sequences that are valuable to capture when modeling the examinee's problem-solving skills in PIAAC (He et al., 2019a, b). Some of the widely discussed benefits of incorporating behavioral patterns into problem-solving performance modeling involve improved measurement accuracy (He et al., 2019a, b; Sireci & Zenisky, 2015), evidence for capturing other latent or cognitive dimensions, such as engagement (He & von Davier, 2016; Zhu et al., 2016), and improved detection of abnormal behaviors (Hellas et al., 2017). Consistent with the previous literature, incorporating sequence-level process log features could successfully be associated with performance (0.82-0.83 F1-score on average) while simultaneously modeling students' engagement levels (0.92-0.97 F1-score on average) in our findings. In our study, the low engagement captured across the problem-solving tasks can be interpreted as one source of the anomalies commonly reported in the previous literature concerning formative or low-stakes assessments (Pastor et al., 2019; Pools & Monseur, 2021).
In addition, the findings from the current study align with previous research indicating a close relationship between the examinee's engagement level and their problem-solving skills, as well as the importance of modeling them together to obtain a better measure of students' problem-solving performance. Previously, the connections between problem-solving performance and engagement were studied in relation to the complexity of testing or assessment environments such as interactive games (Eseryel et al., 2014). For instance, Lein et al. (2016) indicated that engagement is a unique significant predictor associated with students' mathematical problem-solving performance when controlling for students' prior knowledge. Similarly, ongoing efforts are being made in measurement research, where variations of IRT models are introduced to accurately estimate students' abilities (Nagy & Ulitzsch, 2022; Wise & DeMars, 2006).
Accordingly, the largest improvements in measuring students' problem-solving performance occurred in Tasks 1, 4, and 5, where the correlation coefficients between the performance and engagement scores were the highest (ρ = 0.480, ρ = 0.412, ρ = 0.373). Conversely, in the tasks that showed a low to negligible correlation between engagement and performance (Tasks 2 and 3), the improvement in performance also remained relatively low.
Implications
The results provide practical and methodological implications for test developers and psychometric researchers. Using our approach, students' problem-solving abilities can be modeled and predicted in real time to provide more direct and prompt feedback on student performance. Also, the visualization and validation of the interim layer of complex machine learning models provide important evidence and insights to psychometric researchers, allowing them to compare the performance of deep learning models with traditional psychometric approaches, such as IRT. Last, our engagement-aware model may allow test developers to adopt the system in low-stakes assessment settings, where the accurate evaluation of students' ability, knowledge, and skills is challenging due to the lack of student motivation or engagement. Wise and Kong (2005) previously outlined large-scale assessment scenarios where the simultaneous measurement of engagement and students' ability level (e.g., problem-solving performance) may be recommended. First, the use of a low-stakes environment to pilot and validate large-scale high-stakes exams may entail assessment situations where engagement detection is necessary. Large-scale assessments, such as PIAAC and PISA, commonly adopt such approaches to investigate the psychometric properties of items before they are officially introduced in the test booklets. Second, large-scale assessments are increasingly used to make inferences about teacher, school, and district evaluation, which may be deemed by students to have low to negligible consequences for each participating individual. Not explicitly modeling students' engagement level during participation may have significant consequences for the validity of the test scores.
In essence, the deep learning methods proposed in this study combine the benefits of a data-informed, machine-learning-based approach with educational and psychometric considerations, which could increase the capacity to derive prompt and accurate decisions about examinees' performance from educational assessments in an increasingly digitized environment.
Limitations and future research
While our study was carefully constructed and implemented to avoid potential bias, we acknowledge that it is not free from limitations, which can be addressed in future research. First, the use of Principal Component Analysis (PCA) to improve the validity and interpretability of our model provided important benefits. However, it is important to recognize the limitations of PCA, notably its linear nature, which might not capture all non-linear relationships in the data. Also, the interpretation of the principal components, being linear combinations of the original features, might not always be straightforward. Despite these limitations, the application of PCA on the last attention layer remains a valuable tool, offering a balanced approach to understanding and interpreting complex models in the context of educational assessments. Hence, we encourage future studies to focus on validating the PCA results to evaluate whether such patterns and relationships can be replicated when analyzing similar types of process data in large-scale assessment settings.
Fig. 1 A Conceptual Representation of an LSTM Memory Cell
Fig. 2 A Conceptual Representation of the Attention Layer in LSTM
Fig. 3 A Conceptual Representation of the Effort-Aware Attention-LSTM Model
Table 1
Demographic Information of the three Countries/Datasets of the Current Study
Table 2
Experiment Results 1 - Overall Average Prediction Performance. Notes: US: United States; SK: South Korea; GB: United Kingdom. a DV: Dependent Variable; the performance and engagement level are simultaneously predicted in the second model
Table 3
Experiment Results 2 - Task (Item)-level Average Prediction Performance. Notes: a DV: Dependent Variable; the performance and engagement level are simultaneously predicted in the second model
Table 4
The Pearson Correlation Coefficients between the Principal Components and Engagement and Performance
"Education",
"Computer Science"
] |
Prototyping of a Situation Awareness System in the Maritime Surveillance
This paper discusses the design of a Situation Awareness (SA) system to support vessel crews and control room operators in improving the decision-making process. The architecture of the system is ontology based. Vessel crews and control room operators may face a loss of SA; their cognitive abilities may be limited, making it difficult to reach a decision under high stress, short time availability, and a continuously evolving situation with incomplete information. In this work, we describe the application of the Semantic Web Rule Language to represent the corresponding knowledge in the maritime surveillance domain. The result of this research will demonstrate that an ontology-based system can be used to remodel information into a meaningful and valuable form to predict future SA states and improve the decision-making process.
Introduction
Decision making is a crucial part of performing maritime surveillance. Decisions are made under various conditions, ranging from threatening to normal situations. Vessel crews and control room operators need to fuse information from diverse sources to observe and analyze the situation before making a decision.
Experience, training, intelligence and a healthy physical condition influence the process of interpreting information. Even when the incoming information is the same, different people may interpret it differently (1).
This research proposes an SA system to assist vessel crews and control room operators in analyzing incoming information, providing an understanding of the situation to support good decision making in threatening situations. For the purpose of this research, the Semantic Web Rule Language (SWRL) will be employed to represent the knowledge and model threatening scenarios to support vessel crews and control room operators in the decision-making process.
This paper is organized as follows. Section two discusses SA in the decision-making process. In section three, we describe the proposed architecture of the system for the analysis of threatening vessel behavior. Section four discusses the R-Scene prototype and case study, whereas section five presents the ontology-based SA. Finally, the conclusion is presented in section six.
SA in the decision making process
As cited in (2), Endsley defined SA as "The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future". This research proposes a method to extend the concept of SA using ontology and inference to make projections of the future. Figure 1 illustrates four layers describing different levels of abstraction. The bottom layer is the "World". It is a symbol for physical or conceptual things, or both, that may be the object of a situation. On the right side of the World layer is a human head, indicating that SA actually takes place in the human brain. The human observes information from the world and receives input from the computer, as displayed in Figure 1.
The second layer is the "Perception" layer. The dots in this layer represent the objects from the World layer that are obtained from sensors and represented in computer memory. The arrow from the World layer to the radar image represents the sensory process, providing data to the computer. This layer corresponds to the output of the Perception process in Endsley's model. The next layer is "Comprehension". In this layer, lines represent the relations between the points. The process of filtering out irrelevant information in order to obtain useful data by integrating relevant information occurs in this layer. This layer corresponds to the Comprehension stage in Endsley's model of SA.
The top layer is "Projection". This layer has a direct relationship with Endsley's model, in which projection is defined as the capability to predict the future state of a situation based on the events of comprehension.
The Proposed Architecture of the R-Scene System
In the analysis of threatening vessel behavior, we propose a system that consists of four main processes, as demonstrated in Figure 3. The four processes are:
1. The process of interpreting raw data into readable form to obtain patterns from previous scenarios. The obtained patterns will be used to predict the future state.
2. The process of merging sensor data and the vessel database. Data from the sensors and the vessel database will be fed back to the ontology processing engine to update the context ontology. This step corresponds to SA level 1.
3. The tracking process. In this process, the system will filter all related data to model the situation. This step corresponds to SA level 2.
4. The process of predicting possible future states of the current situation. In this stage, the inference engine will process all new information using the predefined SWRL rules and characterize the behavior of the vessel. This step corresponds to SA level 3.
R-Scene Prototype and Case Study
Our prototype system, the R-Scene Prototype, is illustrated in Figure 4. The speeds of the targets are 12 knots for the first target, 40 knots for the second target and 15 knots for the third target. Knowing that the maximum speed allowed in that area is 40 knots, the alert class gives a warning message and points out the second target as a suspicious target.
Ontology-Based SA
Vandecasteele & Napoli (4) mentioned that to understand the phenomenon, it is necessary to reflect the information in a simple way. In order to be able to discriminate between normal and threatening situation, the use of context is inevitable. For the purpose of our research, we will utilize the ontology to reach our objectives. This section will briefly describe the terms ontology, knowledge base and rule base.
Ontology
This paper discusses the ontology approach from the artificial intelligence (AI) perspective. The ontology developed in this research is used as a tool to automatically integrate the knowledge and information parts. A previous study (5) emphasized the importance of bearing in mind that an ontology merely serves as a specification of a conceptualization. For AI systems, an ontology is a representational vocabulary for researchers who need to share information in a domain; what exists is that which can be represented (5,6).
The knowledge base
To implement the ontology, we first need to translate the expert knowledge in such a way that it can be applied in our system. We plan to conduct interviews with experts to obtain knowledge about threatening situations, how to identify them and how to solve the corresponding problems. The Protégé 4.2.0 software is employed to process the information gathered. Figure 5 presents an example of an ontology for maritime surveillance. The maritime ontology consists of four main classes (a schematic sketch of these classes is given after the list):
Vessel: this class consists of all basic information about a vessel, including the vessel id, vessel name, type of vessel (fishing boat, cargo, etc.), maximum speed, deadweight, operator, flag and build year. Vessel class data will be derived from the vessel database.
Situation: this class represents the possible interpretations of a situation in maritime surveillance. The results from the interviews will be used to generate the Situation class.
Alert: this class defines various alerts related to maritime surveillance. The interview results will be utilized to generate the Alert class.
Context: the Context class consists of elements that enrich alerts. It provides information related to vessel characteristics, departure point, velocity and position. The Context class data will be derived from radar data.
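As a rough illustration of how these four classes might be held in code before being loaded into Protégé, the following Python sketch models them as plain data classes. The attribute names follow the descriptions above; everything else (types, units) is an assumption for illustration only, not the actual OWL model.

```python
# Illustrative sketch only: a plain-Python mock-up of the four main
# ontology classes described above, not the actual Protege/OWL model.
from dataclasses import dataclass

@dataclass
class Vessel:
    vessel_id: str
    name: str
    vessel_type: str      # e.g., "fishing boat", "cargo"
    max_speed: float      # knots
    deadweight: float
    operator: str
    flag: str
    build_year: int

@dataclass
class Context:
    vessel_id: str
    departure_point: str
    velocity: float       # knots, from radar data
    position: tuple       # (latitude, longitude)

@dataclass
class Situation:
    description: str      # interpretation derived from expert interviews

@dataclass
class Alert:
    alert_type: str       # e.g., "Alert_Speed_HighSpeed"
    vessel_id: str
    message: str
```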
However, to make the ontology more useful in the analysis process, a rule base must be added for the identification and characterization of threatening situations.
Rule-based system
To represent and integrate rules into our ontology, we will utilize the Semantic Web Rule Language (SWRL), a combination of OWL-DL and RuleML (8), which will enrich the semantics of the ontology. The rules were constructed by combining the experts' knowledge and useful information from the literature (9). Suppose the aim of a rule is to detect a ship moving at a speed that is excessive for its type. The request, translated into SWRL, is shown below and reads as follows: "If a vessel (?vId) of an identified type (?vType) has a speed (?vSpeed) greater than (greaterThan) the maximum speed allowed for that type of vessel (?vMaxSpeed), then trigger an alert (Alert_Speed_HighSpeed)". In this example, we compare the speed of the vessel with the maximum speed allowed for that type of vessel. If the vessel's speed is greater than the maximum speed, the system issues the high-speed alert.
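The SWRL rule itself did not survive in the text; a plausible reconstruction consistent with the variable names quoted above is sketched below. The property names (hasType, hasSpeed, hasMaxSpeed) are hypothetical stand-ins, since the paper's actual ontology properties are not shown.

```
Vessel(?vId) ^ hasType(?vId, ?vType) ^ hasSpeed(?vId, ?vSpeed) ^
hasMaxSpeed(?vType, ?vMaxSpeed) ^ swrlb:greaterThan(?vSpeed, ?vMaxSpeed)
    -> Alert_Speed_HighSpeed(?vId)
```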
Conclusion
This paper describes the development of a prototype for the detection of threatening vessel behaviour. The proposed prototype system, R-Scene, will integrate maritime surveillance and maritime SA. The prototype will provide maritime SA with a simple and easy user interface for maritime security and surveillance applications. The prototype will integrate a vessel information database and radar data. The tracking process enables track analysis and target queries by location, date and/or vessel-specific information. In the prediction process, the analysis will present the history and prediction of the vessel's movement. The example scenario shows that a warning message appears when the speed of a target exceeds the specified maximum speed. For future work, other scenarios will be tested to enrich the maritime ontology. Scenarios of maritime navigation in a prohibited area and prediction of collisions are planned for the next stages of our work.
"Computer Science"
] |
Application of Fiber Bragg Gratings as a Sensor of Pulsed Mechanical Action
The pulsed elongation of fiber Bragg gratings is considered with a view to measuring the displacement or deformation rate of objects. Optimal measurement modes were determined, numerical simulation of the output signal was performed during pulsed elongation or compression of the fiber grating, and the main patterns were analyzed. The results of applying Bragg gratings to the experimental determination of the deformation rate of materials under pulsed magnetic action are presented. Experimentally obtained and theoretical dependencies are compared. The dependencies of the grating parameters (the reflection coefficient and the half-width of the reflection spectrum) on successive shortening of the grating are given.
It is known that the resonance wavelength λ_B of the reflected radiation increases with the stretching of the FBG, which is due to a change in the period of the FBG, the refractive index and the diameter of the fiber core. For FBGs created in a standard quartz optical fiber (9/125 µm), a linear relationship holds between the change in the wavelength Δλ_B of the maximum spectral density of reflected radiation and the relative elongation ε of the FBG [1,7,28]:

Δλ_B = k_BE ε,    (1)

where k_BE is the proportionality coefficient, approximately equal to 1.2·10^3 nm (12 nm per %, or 1.2·10^−3 nm per µε) when the FBG is stretched at λ_B ≈ 1550 nm and room temperature. It should be noted that k_BE depends on the temperature of the FBG [29,30] and depends linearly on λ_B [28]. The ability to measure the relative elongation through the change in Δλ_B makes it possible to use FBGs as sensors of elongation and mechanical stress. However, all measurements of this type can be considered measurements under stationary stretching or compression of the FBG: during the measurements, the parameters of the FBG do not change, and the FBG itself is elongated (or compressed) uniformly along its entire length; this includes vibration and ultrasonic wave sensors [11,12,15]. With pulsed stretching of an optical fiber in which the FBG is located at some distance from the impact site, the stretching of the FBG occurs with a delay, and for a short pulse of stretching or compression the structure of the FBG cannot be considered spatially homogeneous. The operation of sensors based on FBGs under pulsed mechanical action has not been studied in sufficient detail.
Presumably, the only application for which pulsed mechanical action on FBGs has been studied in detail is systems for shock wave and detonation diagnostics [17,22,24,31-34]. The basic concept [17] is that a change in the radiation reflection parameters (spectral and total power), which can be determined in real time, occurs during the physical destruction of the FBG as the detonation wave propagates. The use of a chirped FBG gives a strong dependence of the spectral density of the reflected radiation on the length of the undestroyed part of the FBG, which simplifies the calculation of the required wave parameters. The use of high-speed interrogation for sensing in detonation and shock wave experiments is considered in [22,32]. In general, the determination of shock wave parameters can be performed without destroying the FBG by measuring changes in the spectrum of the reflected radiation in real time [34-36]. These articles do not consider an elongation wave that passes a certain distance along the optical fiber before acting on the FBG, nor do they give simple physical and mathematical approximations and estimates for describing and numerically modeling the propagation of stretching or compression waves. This work is devoted to the study of such properties of FBGs and their application to the research of the pulsed mechanical action created by a pulsed magnetic field on electrical and structural materials.
Fundamental Principles
With a slow change in the FBG parameters, the change in wavelength is usually recorded using a source with a variable wavelength λ and a spectrally insensitive photodetector, or using a source with a wide radiation spectrum and a spectrometer measuring the spectral density of the reflected radiation (an interrogator). To register rapid changes in the FBG parameters (occurring in microseconds and fractions of microseconds), common interrogators cannot provide good accuracy in determining the dependence of the resonant wavelength on time, and it becomes necessary to use the power of the radiation reflected by the FBG at a fixed source wavelength as the measured value from which the stretching of the optical fiber is determined. The dynamic range of such measurements is significantly smaller than when using an interrogator, but the frequency band is significantly larger. The description of the fundamental principles presented below is also applicable to individual channels of multichannel systems and to elements of the spectrometer arrays of high-speed interrogators. The measurement of pulsed dependencies is most relevant for cases of mechanical action on the FBG, since the change in temperature of an FBG is a relatively slow process.
Let us consider the simplest model given below to study the basic patterns. Let the spectral power density of the laser radiation (p_LD) and of the radiation reflected by the FBG (p_FBG) be described by Gaussian functions:

p_LD(λ) = A_LD exp[−(λ − λ_LD)² / (2σ_LD²)],    (2)

p_FBG(λ) = A_FBG exp[−(λ − λ_FBG)² / (2σ_FBG²)],    (3)

where A_FBG and A_LD are the normalization factors, λ_FBG and λ_LD are the central wavelengths of the FBG reflection spectrum and of the laser radiation, and σ_FBG and σ_LD are the half-widths of the FBG reflection spectrum and of the laser radiation. In this Gaussian approximation, the normalization is set by the laser radiation power P_0 in the optical fiber, the maximum reflection coefficient k_r,FBG of the FBG, and the radiation power P_in at the input of the FBG. Assuming that the attenuation in the used segment of the fiber is negligible and that the spectral sensitivity of the photodetector is constant within the range of the change of λ, integration over wavelength yields an analytical solution for the power P_p of the laser radiation reflected by the FBG, which will be recorded by the photodetector:

P_p = P_in k_r,FBG [σ_FBG / √(σ_FBG² + σ_LD²)] exp[−(λ_FBG − λ_LD)² / (2(σ_FBG² + σ_LD²))].    (4)

It is clear that the maximum power of the laser radiation reflected by the grating is achieved when the central wavelengths λ_0,FBG and λ_0,LD are equal. When λ_FBG increases or decreases away from λ_LD, the value of P_p decreases. If, in accordance with (1),

λ_FBG = λ_0,FBG + k_BE ε,    (5)

where λ_0,FBG is the central wavelength of the spectral density of the radiation reflected by the FBG at zero elongation, then based on (4) it is possible to obtain the analytical dependence P_p(ε). According to Formula (4), the dependence P_p(ε) is also Gaussian. Under lengthening of the FBG (ε > 0), the character of the change in P_p(ε) depends on the position of the initial working point λ_0,FBG relative to λ_LD. If λ_0,FBG < λ_LD, then with extension of the FBG the power P_p(ε) increases until the equality λ_FBG = λ_LD is reached, and if λ_0,FBG > λ_LD, it decreases. So, for example, with sinusoidal elongation, in the first case the output signal (power P_p) will be in phase with the elongation, and in the second case it will be in antiphase. Let us introduce the notation

σ_S² = σ_FBG² + σ_LD².    (6)

Then, for small elongations such that k_BE ε ≪ σ_S, the dependence of P_p on ε can be taken as a linear function, with a proportionality coefficient that depends on the position of the starting point (ε = 0). The minimum (theoretically zero) sensitivity to a change of ε occurs at the extremum point (λ_FBG = λ_LD). The theoretical sensitivity to a change of ε, defined as dP_p/dε, has the form

dP_p/dε = −P_p(ε) k_BE (λ_FBG − λ_LD) / σ_S².    (7)

The maximum sensitivity is reached at the point Δλ_ms = |λ_0,FBG − λ_LD|, which can be obtained by equating to zero the second derivative of expression (4), taking into account (5), and considering the limit ε → 0:

Δλ_ms = σ_S.    (8)

The value of Δλ_ms is estimated to be 0.01% (on the order of 10² µε) for typical values of σ_FBG and σ_LD (for lasers with a DFB structure).
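A minimal numerical sketch of this model is given below (an illustration written for this text, not code from the paper). It evaluates P_p(ε) from Equations (4)-(6) for two working points, showing the in-phase and antiphase behavior described above; all parameter values are representative but arbitrary.

```python
# Minimal sketch of the Gaussian-overlap model, Equations (4)-(6).
# All numbers are illustrative, not taken from the experiments.
import numpy as np

k_BE = 1.2e-3          # nm per microstrain, from Equation (1)
sigma_LD = 0.0158      # nm, laser half-width
sigma_FBG = 0.047      # nm, FBG reflection half-width
sigma_S = np.hypot(sigma_FBG, sigma_LD)   # Equation (6)
lam_LD = 1550.0        # nm, laser central wavelength

def P_p(eps_ue, lam0_FBG, P_in=1.0, k_r=1.0):
    """Reflected power vs elongation (microstrain), Equations (4)-(5)."""
    lam_FBG = lam0_FBG + k_BE * eps_ue            # Equation (5)
    return (P_in * k_r * sigma_FBG / sigma_S *
            np.exp(-(lam_FBG - lam_LD) ** 2 / (2 * sigma_S ** 2)))

eps = np.linspace(0.0, 100.0, 5)                  # microstrain
print("working point below lam_LD (in-phase): ",
      np.round(P_p(eps, lam_LD - 0.05), 4))
print("working point above lam_LD (antiphase):",
      np.round(P_p(eps, lam_LD + 0.05), 4))
```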
If the lengthening action is applied to the input end of the optical fiber, and the FBG is located at a distance L_b from the input end, then the change in the reflective properties of the FBG occurs with the delay

τ_p = L_b / ν_S,    (9)

where ν_S is the propagation velocity of the stretching wave in the fiber. The duration Δτ_FBG of the passage of the wave front through an FBG of length L_FBG is L_FBG/ν_S. If

Δτ_FBG ≪ Δτ_i,    (10)

where Δτ_i is the characteristic time of change of the mechanical stress, then the action can be considered quasi-stationary. In this case, the delay in the change of the reflective properties of the grating can be calculated by Formula (9), and the inhomogeneity of the FBG under this action can be neglected.
If the values of Δτ_FBG and Δτ_i are approximately the same, and even more so if

Δτ_FBG ≫ Δτ_i,    (11)

then under such an action the FBG will no longer be uniform along its length, i.e., for a short mechanical pulse the FBG should be considered chirped. The change in the reflected radiation power under such a short action will be smaller than under a quasi-stationary one.
It should be noted that the propagation velocity ν_S of the acoustic wave in an optical fiber is significantly lower than in a continuous medium. It can also be assumed that the acoustic wave propagates mainly through the protective coating of the fiber, and that there is significant dispersion of the acoustic pulse during its propagation along the fiber. As will be shown below, the experimentally measured value of ν_S is in the range from 2.5 to 3.8 km/s, depending on the tension of the optical fiber, the temperature of the fiber, the presence of bends, etc. The estimated value of Δτ_FBG is 3 µs at L_FBG = 10^−2 m.
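The following short sketch (an author's illustration under the stated assumptions) checks these timing estimates with Equations (9)-(11): the delay for a grating 0.5 m from the impact point and the wave-front transit time through a 10 mm grating, at a mid-range wave velocity.

```python
# Timing estimates from Equations (9)-(11); illustrative values only.
v_S = 3.3e3        # m/s, mid-range stretching-wave velocity in the fiber
L_b = 0.5          # m, distance from the impact point to the FBG
L_FBG = 10e-3      # m, grating length

tau_p = L_b / v_S          # Equation (9): delay before the FBG responds
dtau_FBG = L_FBG / v_S     # wave-front transit time through the grating

print(f"delay tau_p      = {tau_p * 1e6:.0f} us")    # ~152 us
print(f"transit dtau_FBG = {dtau_FBG * 1e6:.1f} us") # ~3 us
```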
Measurement Methodology
The basis for the numerical modeling and processing of the received signals is the experimentally measured emission and reflection spectra of the semiconductor lasers and fiber Bragg gratings used. In the studies described below, DFB-type semiconductor lasers were used. The fiber Bragg gratings were inscribed using an optical scheme based on a Talbot interferometer [37] and the Optosystems CL7500 KrF excimer laser system (manufactured in Troitsk), operating according to the master oscillator-power amplifier scheme [38,39]. Such a system produces radiation with high spatial (>5 mm) and temporal coherence, which allows the use of a tunable Talbot interferometer to create a high-contrast interference pattern in the FBG inscription area. The SMF-28 standard optical fiber, subjected to preliminary low-temperature hydrogen loading, was used for the inscription of the FBGs [40,41]. The inscription technology described above makes it possible to manufacture high-performance Bragg reflectors of various types with adjustment of the Bragg resonance wavelength [42].
The lengths of the manufactured fiber Bragg gratings were 10 and 15 mm. The parameters of the FBGs were measured after the release of hydrogen from the optical fiber. The spectra were measured using an Anritsu MS9740B spectrometer. The obtained emission spectra of the semiconductor lasers were approximated by the Gaussian function. The temperature of the semiconductor laser body was changed using a Peltier element and measured by a thermocouple. The following values of the parameters of the laser used in the work were obtained: the central wavelength of the radiation (λ_0,LD) at the operating temperature is 1050.6 nm, and the half-width of the spectrum (σ_LD) is 0.0158 nm.
The spectral dependencies of the reflected radiation of the FBGs were approximated by three functions (Equations (12)-(14)). The parameters of the approximations of the spectral dependence of the FBGs (the half-width and the standard deviation of the approximation from the measured dependence, for the two FBGs used in the work) are shown in Table 1. From the data obtained, it follows that all the approximating functions give a relatively small standard deviation (S_a) of the determined parameters and can be used in further work. However, each of them has its own specific application: in particular, function 2 (Equation (13)) allows one to take into account the presence of lateral maxima of the FBG reflection spectrum, if such exist, and function 1 (Equation (12)) allows one to obtain analytical expressions when calculating the reflected radiation power. The use of more complex approximations, such as those considered in the review [7], is not required for this problem, since it is the integral value that is of interest and not the spectral position of the peak.
Numerical Simulation of Pulsed Mechanical Action
The simplest simulation of the pulsed stretching of the FBG was carried out in the approximation of uniform stretching. The time delay Δτ_p between the moment the pulsed action begins at the input end of the optical fiber and the moment the stretching of the grating begins was calculated from the propagation velocity of the deformation wave, and the stretching of the FBG was assumed to be spatially homogeneous along the entire length of the FBG from the moment the stretching wave reaches the FBG.
Let the relative elongation at the input end of the fiber, created by the loading system under the action of the magnetic field on the sample under study, be determined by the expression E(τ) (Equation (15)), where τ is the time, τ_a is the scale factor, and τ_d is the decay decrement. Expression (15) describes an action that creates only a wave of fiber stretching and does not create a compression wave. Formally, the delay τ_p can be taken into account in the modeling using the time offset t = τ + τ_p. Taking (1) into account and using, for example, approximation (13), by setting the values σ_2,LD, σ_2,FBG, λ_0,LD, λ_0,FBG and α, where α is the normalization factor for expression (15) specifying the elongation, and performing the integration (17) numerically, one can obtain the desired dependence V(t) of the output signal. For the Gaussian approximation of the spectra, taking into account Formulas (4) and (5), the expression for the output signal can be presented in analytical form:

V(t) = V_max exp{−[λ_0,FBG + k_BE α E(t) − λ_LD]² / (2σ_S²)}.    (20)

Expression (20) also allows one to analyze all the basic patterns, similarly to the numerical calculation described above. As an example, Figure 1 shows the calculated output waveforms at σ_LD = 0.15 nm, σ_FBG = 0.047 nm, λ_0,LD = 1551.0 nm, λ_0,FBG = 1550.9 nm, τ_a = 1 a.u., τ_d = 5 a.u. for conditionally small (α = 10), medium (α = 200) and large (α = 10000) elongations. The specified parameters of the laser radiation spectrum and the FBG reflection were selected in order to obtain the most illustrative dependencies of the output signal.
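Below is a small Python sketch of this simulation (an author's illustration, not the paper's code). It assumes a simple stretching-only pulse E(τ) = (τ/τ_a)·exp(−τ/τ_d) as a stand-in for Expression (15), whose exact form is not reproduced here, and evaluates the output signal via the analytical form (20) for small, medium and large α.

```python
# Sketch of the pulse-response simulation using the analytical form (20).
# E(tau) below is an assumed stretching-only pulse standing in for
# Expression (15); the other parameters follow the text above.
import numpy as np

k_BE = 1.2e-3                        # nm per microstrain
sigma_S = np.hypot(0.047, 0.15)      # nm, Equation (6)
lam0_LD, lam0_FBG = 1551.0, 1550.9   # nm
tau_a, tau_d = 1.0, 5.0              # a.u., scale factor and decay decrement

def E(tau):
    """Assumed stretching-only pulse: rises, then decays, never negative."""
    return (tau / tau_a) * np.exp(-tau / tau_d)

def V(tau, alpha, V_max=1.0):
    """Output signal, Equation (20)."""
    lam_FBG = lam0_FBG + k_BE * alpha * E(tau)
    return V_max * np.exp(-(lam_FBG - lam0_LD) ** 2 / (2 * sigma_S ** 2))

tau = np.linspace(0.0, 30.0, 7)
for alpha in (10, 200, 10000):       # small, medium, large elongation
    print(f"alpha={alpha:>5}:", np.round(V(tau, alpha), 3))
```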
The simulation results confirm the main patterns noted above. For relatively small impacts, i.e., for a small value of the relative elongation (α k_BE E(t) ≪ σ_LD), the waveform at the output of the system repeats the dependence of the elongation on time, provided the operating point (i.e., λ_0,FBG) does not coincide with the wavelength of the maximum spectral density of the laser (λ_0,LD). If λ_0,FBG < λ_0,LD, then lengthening of the FBG leads to an increase in the signal at the output of the system. If λ_0,FBG > λ_0,LD, then lengthening of the FBG leads to a decrease in the signal, i.e., the signal will be inverted relative to the time dependence of the elongation.
For a conditionally average elongation, at the front of the elongation pulse (15) the signal initially increases until the condition λ_FBG = λ_0,LD is reached and then decreases, possibly to zero, which corresponds to the condition |λ_FBG − λ_LD| ≫ σ_S. When the elongation decreases (on the decay of the elongation pulse), an output signal appears when λ_FBG returns to the sensitivity range |λ_FBG − λ_LD| ∼ σ_S.
For a conditionally large pulsed elongation, the output signal differs from zero only at small values of the relative elongation at the front and at the decay of the pulse, i.e., when dependence (15) has a value close to zero. The zero output signal corresponds to the condition |λ_FBG − λ_LD| ≫ σ_S, similarly to the previous case. It should be noted that the amplitude V_max of the pulsed signal for conditionally weak actions depends on the magnitude of the maximum elongation E_max (in Figure 1, curve 1, it is 0.02 units), while for medium and strong actions the value of V_max is constant and does not depend on E_max (in this case, 0.8 units).
Measurement of the Rate of Tension Rise of an Optical Fiber with Bragg Grating
When using an FBG as a pulse elongation sensor, for many practical applications an important characteristic is the rate of rise of the elongation ξ_el:

ξ_el = dε/dt,    (21)

where t is the time. The value ξ_el has dimension s^−1; however, it is more convenient to express it in µε/s. If at the initial moment of time λ_LD − λ_FBG > (2…3)σ_S and the change in the relative elongation satisfies k_BE ε < 3σ_S, then with uniform elongation of the fiber the output signal will be a pulse of close-to-Gaussian shape. The pulse duration at the 1/e level (τ_1/e) corresponds to a change in the wavelength of the FBG by the doubled half-width 2σ_S. Then, according to (1) and (21),

ξ_el ≈ 2σ_S / (k_BE τ_1/e).    (22)

If the value of dε/dt cannot be considered constant during τ_1/e, then a more accurate estimate of ξ_el can be obtained from half of the pulse duration:

ξ_el ≈ σ_S / (k_BE τ_h),    (23)

where τ_h = τ_max − τ_1/e is the interval between the points corresponding to the maximum of the signal and the 1/e signal level. In some cases, the speed of movement of an object under pulsed mechanical action is of interest; it can be determined from the speed ν_f of movement of the end of the light guide attached to the object under study. At low speeds ν_f, its determination is not difficult:

ν_f = L_OF Δε / τ_m,    (24)

where L_OF is the length of the stretched (or elongated) optical fiber, and Δε is the change in elongation during the measurement time τ_m. The condition of applicability of (24) is a uniform distribution of the elongation over the entire length of the optical fiber, i.e., the possibility of using the quasi-stationary solution.
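The estimate (22) is easy to evaluate numerically; the sketch below does so (an author's illustration) with values of the order of those reported in the experimental section, without claiming to reproduce the paper's quoted figures.

```python
# Estimate of the elongation rise rate from Equation (22).
# Values are illustrative, of the order of those in the experiments.
k_BE = 1.2e-3       # nm per microstrain, Equation (1)
sigma_S = 0.068     # nm, combined half-width, Equation (6)
tau_1e = 0.4e-6     # s, pulse duration at the 1/e level

xi_el = 2 * sigma_S / (k_BE * tau_1e)   # microstrain per second
print(f"xi_el = {xi_el:.3e} ue/s = {xi_el * 1e-6:.0f} ue/us")
```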
If the stretching or compression wave is localized on a limited length of the fiber, then condition (24) is not fulfilled. For such a mechanical action, the grating should be considered spatially inhomogeneous (chirped). A rigorous solution or modeling of the light reflection of a chirped FBG, or of an FBG exposed to a shock wave [16,17], presents a serious difficulty. For the simplest estimation of the magnitude of the radiation reflected from an inhomogeneously elongated (or compressed) FBG, it is possible to neglect the phases of the waves reflected from individual grating strokes and to sum the reflected radiation from different regions of the FBG, within which the parameters of the FBG can be considered constant and the resonant wavelengths are different. The problem with this approach is, as a rule, the lack of information about the reflection coefficient and spectral characteristics of the various areas of the FBG. An experimental study of these properties of the FBG is given below.
Let us introduce λ_s,e as the difference of the resonant wavelengths at the beginning (λ_0,sFBG) and the end (λ_0,eFBG) of a selected area of the FBG with length L_s,e. To be able to perform the above assessment, it is necessary that the total reflected radiation power can be assumed to be additive with respect to the radiation power reflected from each of the selected areas, and that inequality (25) is fulfilled. Then the estimated value of the minimum rate of rise (or decline) of the relative elongation ξ_el, in inverse seconds (1/s), for the applicability of such a model is determined by expression (26). If the half-width σ_sFBG of the radiation spectrum reflected from a section of the FBG does not depend on the length L_s,e of this section, then this also simplifies the task of modeling the signals under pulsed action on the grating.
Experimental Data
The electrical scheme used in this work for testing materials under uniaxial direct tension is similar to the one previously used to study the mechanical properties of the TiNi alloy [43-45]. The experimental setup (Figure 2) consisted of pulsed current generator 1 (PCG), curved flat conductors 2, samples of the studied material S_1 and S_2, a fiber-optic system including semiconductor laser 5 with a power supply unit, photodetector module 10, circulator 7 with optical fibers 6 and fiber Bragg grating 8, as well as electronic unit 11 performing conversion (ADC), signal processing, and display of the measurement results on a personal computer. The fiber containing the Bragg grating was located inside the hole of sample S_1 and conductors 2. The end of the fiber with the FBG was attached to the inner surface of test sample S_2 at point 4. The distance from fiber fixation point 4 to the FBG was 0.5 m. The specified value of the initial stretching of the fiber was provided by fixing device 9. The total length of fiber 8 was approximately 1 m. The surfaces of conductors 2 were separated by electrical insulation of thickness h. The waveform of the current was recorded using oscilloscope 12 and Rogowski coil 13. The parameters of the main PCG elements were as follows: the storage capacitance was 14.5 µF, the operating voltage was up to 30 kV, the intrinsic inductance of the PCG was 100 nH, and the wave resistance was 0.08 Ω. The shock load was formed by the action of the pulsed magnetic field of the flat parallel copper conductors into which the pulsed current generator discharges. The parameters of the current pulses were: duration 1-5 µs; maximum current in the pulse 10-100 kA.
The pressure pulse formed in the flat conductors was calculated from the recorded current waveforms. This pressure was transferred to the specially shaped sample in such a way that its deformable part was subjected to uniaxial direct stretching [44,45]. An important advantage of the described method is the possibility of regulating the strength and duration of the action on the material under study [46-48]; a disadvantage is the presence of strong electromagnetic interference, which makes it difficult to use common electronic sensors to measure the parameters of such an action [49,50].
Examples of the recorded waveforms are shown in Figures 3 and 4. The pulses shown in Figure 3 correspond to the induced interference caused by the current pulse (dependence 1), from the front of which time is counted on the waveforms, as well as to the pulses corresponding to the front and the decay of the stretching wave passing through the FBG (dependences 2 and 3). This waveform was obtained with the optical fiber initially under some tension, such that the operating point was near zero on the front of the Gaussian dependence U(ε), and an increase in ε led to an increase in the output signal U. For this waveform, the delay between the front of the current pulse and pulse 2 (Figure 3) was 139 µs, the delay between the maxima of pulses 2 and 3 was 35 µs, and the duration of pulse 2 from the 1/e level at the pulse front to the pulse maximum was approximately 0.4 µs. That is, the propagation velocity of the stretching wave along the optical fiber with its protective shell was approximately 3.6 km/s, and the estimated rate of rise of the stretching, assuming σ_S = 0.068 nm, was 200 µε/s. The waveform shown in Figure 4 was obtained in the absence of initial tension of the optical fiber, with the fiber itself partially bent. Since the pulse has two maxima, it can be concluded that pulse front 2 and decay 3 correspond to the front and decay of the stretching wave passing through the FBG, and point 4 corresponds to the transition from fiber stretching to compression, i.e., the rate of change of ε at this point is 0. For this waveform, the delay between the front of the current pulse and pulse 2 (Figure 4) was 153 µs, the delay between the maxima of pulses 2 and 3 was 21 µs, and the pulse duration from the 1/e level at the front of pulse 2 to the pulse maximum was approximately 5 µs, which suggests a significant increase in the dispersion of the pulse. Since the time required to reach a fiber stretching value equal to the tension in the previously considered case is not known exactly, estimating the wave propagation velocity from L_b and τ_SF can give a significant error. The estimated rate of rise of the stretching at the pulse front is 16 µε/s for this case and should be regarded as an equivalent value (close to the average value) due to the non-fulfillment of the condition ν = const.
An experimental study of the spectral characteristics of the reflection from a limited area of the FBG was also carried out. For this purpose, a 15 mm long FBG was made; after the hydrogen had been released from the FBG, the grating was successively shortened by approximately 1 mm at a time by the chipping method, and the reflection spectrum was measured at each length of the FBG. The obtained data (the value of the maximum spectral radiation density p(L_FBG) and the half-width σ_sFBG of the spectrum as functions of the length of the FBG) are shown in Figures 5 and 6. It follows from these data that the dependence p(L_FBG) is nonlinear, but a linear approximation of p(L_FBG) can be used as the simplest approximation in the range from 3 mm to 13 mm. That is, the condition of additivity of the reflected radiation power with respect to the length of the FBG is approximately fulfilled in the specified range of FBG lengths. In addition, as the simplest approximation, the half-width of the reflection spectrum of the FBG can be assumed constant when the length of the FBG is more than 7 mm. The resonant wavelength of the reflection of the FBG changed by no more than 0.05 nm as the length of the FBG was changed from 2.5 mm to 15 mm, which is presumably caused by a change in the distribution of internal mechanical stresses and in the bending radius of the FBG during its shortening. It can be assumed that all gratings manufactured using the technology described above have approximately the same parameters.
Conclusions
Although the use of an FBG as a sensor of pulsed mechanical action is, strictly speaking, not non-invasive, such devices can be used to determine the parameters of pulsed mechanical action. An important advantage of using FBGs for these purposes is their insensitivity to pulsed electromagnetic fields, which makes it possible to use them to determine the parameters of mechanical deformation of surfaces or objects under pulsed electromagnetic action or in high-intensity electromagnetic fields. To determine the parameters for relatively small elongations or contractions, it is advisable to choose the operating point of the system in accordance with expressions (7) and (8), and for large ones, on the initial section of the dependence V(ε).
It follows from the results obtained that the recorded rate of rise of the elongation, determined from the parameters of the output signal pulse, can be commensurate with the speed of sound propagation in the optical fiber, and the parameters of the laser and the FBG used must be matched to the parameters of the measured process. In this paper, the general patterns of operation of a system based on a single FBG under pulsed tension or compression are considered. For more accurate measurements, more advanced systems should be developed and used, in particular systems using multiple FBGs.
"Physics",
"Engineering"
] |
Discovering causal interactions using Bayesian network scoring and information gain
Background The problem of learning causal influences from data has recently attracted much attention. Standard statistical methods can have difficulty learning discrete causes that interact to affect a target, because the assumptions in these methods often do not model discrete causal relationships well. An important task, then, is to learn such interactions from data. Motivated by the problem of learning epistatic interactions from datasets developed in genome-wide association studies (GWAS), researchers conceived new methods for learning discrete interactions. However, many of these methods do not differentiate a model representing a true interaction from a model representing non-interacting causes with strong individual effects. The recent algorithm MBS-IGain addresses this difficulty by using Bayesian network learning and information gain to discover interactions from high-dimensional datasets. However, MBS-IGain requires marginal effects to detect interactions containing more than two causes. If the dataset is not high-dimensional, we can avoid this shortcoming by doing an exhaustive search. Results We develop Exhaustive-IGain, which is like MBS-IGain but performs an exhaustive search. We compare the performance of Exhaustive-IGain to MBS-IGain using low-dimensional simulated datasets based on interactions with marginal effects and ones based on interactions without marginal effects. Their performance is similar on the datasets based on marginal effects. However, Exhaustive-IGain compellingly outperforms MBS-IGain on the datasets based on 3- and 4-cause interactions without marginal effects. We apply Exhaustive-IGain to investigate how clinical variables interact to affect breast cancer survival, and obtain results that agree with the judgements of a breast cancer oncologist. Conclusions We conclude that the combined use of information gain and Bayesian network scoring enables us to discover higher-order interactions with no marginal effects if we perform an exhaustive search. We further conclude that Exhaustive-IGain can be effective when applied to real data. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1084-8) contains supplementary material, which is available to authorized users.
Background
The problem of learning causal influences from passive data has attracted a good deal of attention in the past 30 years, and techniques have been developed and tested. The constraint-based technique for learning Bayesian networks is a well-known method [1], and has been implemented in the Tetrad package (http://www.phil.cmu.edu/tetrad/). This method orients edges that are compelled to be causal influences. Another method for learning Bayesian networks is the greedy equivalent search (GES) [2], which does not in itself distinguish which edges are compelled to be causal; however, post-processing of its resultant network can compel edges. Both these (and other) strategies assume the composition property, which states that if a variable Z and a set of variables S are not independent conditional on T, then there exists a variable X in S such that X and Z are not independent conditional on T [2]. When T is the empty set, this property simply states that if Z and S are not independent, then there is an X in S such that Z and X are not independent. So, at least one variable in S must be correlated with Z. However, if two or more variables interact in some way to affect Z, there could be little marginal effect for each variable, and the observed data could easily fail to satisfy the composition property. Furthermore, if interacting variables have strong marginal effects, the causal learning algorithms do not distinguish them as interactions, but only as individual causes.
So, the standard methods for learning causal influences do not learn that causes are interacting to cause a target, and do not even discover causes that interact with little or no marginal effect. An important task, then, is to learn such interactions from data. A method that does this could serve as a preliminary step before applying a causal learning algorithm. This paper concerns the development of a new method that does this in the case of discrete variables. We first provide some examples of situations where discrete variables interact.
Interaction examples
An example that has recently received a lot of attention is gene-gene interaction, called epistasis. Biologically, epistasis describes a situation where a variant at one locus prevents the variant at a second locus from manifesting its effect [3]. Epistasis between n loci is called pure epistasis if none of the loci is individually predictive of the phenotype, and is called strict epistasis if no proper multi-locus subset of the loci is predictive of the phenotype [4]. Epistasis has been defined statistically as a deviation from additivity in a model summarizing the relationship between multi-locus genotypes and phenotype [5]. It is believed that much of the genetic risk for disease is due to epistatic interactions [6-9]. A single nucleotide polymorphism (SNP) is a substitution of one base for another. Genome-wide association studies (GWAS) investigate many SNPs, often numbering in the millions, along with a phenotype such as disease status. By investigating single-locus associations, researchers have identified over 150 risk loci associated with 60 common diseases and traits [10-13]. However, these single-locus investigations would miss epistatic interactions with little marginal effect.
Another important example is the interaction of clinical or genomic variables with treatments to affect patient outcomes. For example, Herceptin is a treatment for breast cancer patients which is effective for HER2+ patients. So, Herceptin and HER2 status interact to affect survival. This is a well-known relationship. However, we now have large scale breast cancer and other datasets [14] from which we can learn treatment-variable interactions that are not yet known. This knowledge will enable us to better provide precision medicine.
As another example, we are now obtaining abundant hospital data concerning workflow. These data can be analysed to determine good personnel combinations and sequencing [15].
Statistical interactions
In statistics, the standard definition of an interaction is a relationship where the simultaneous influence of two or more variables on a target variable is not additive. However, when we leave the domain of regression and deal with the type of non-linear discrete interactions discussed above, this definition is limited. For example, researchers have developed the Noisy-Or model to combine the effects of binary causes that independently cause a binary target [16]. We would not call this relationship an interaction; yet the rule for combining the individual effects is not additive. When variables combine to affect a target with no marginal effect (e.g., pure, strict epistasis), we can definitely say there is an interaction. Figure 1 shows Bayesian networks illustrating these two disparate situations. We discuss Bayesian networks further in the Methods section. Briefly, however, a Bayesian network consists of nodes that represent random variables, edges between the nodes, and the conditional probability distribution of every variable given each combination of values of its parents. Figure 1a shows a causal relationship with no marginal effects. That is, knowledge of X alone provides no information about Z; by the symmetry of the problem, the same result holds for Y. Figure 1b shows a causal relationship developed with the Noisy-Or model. That model assumes each cause has a causal strength that independently affects the target; see [16] for the details of the assumptions. In this case, the causal strength of X is p_x = 0.9 and the causal strength of Y is p_y = 0.9. From these causal strengths, the Noisy-Or model computes the conditional probabilities of Z as follows:

P(Z = 1 | X = 1, Y = 1) = 1 − (1 − p_x)(1 − p_y) = 0.99,
P(Z = 1 | X = 1, Y = 0) = p_x = 0.9,
P(Z = 1 | X = 0, Y = 1) = p_y = 0.9,
P(Z = 1 | X = 0, Y = 0) = 0.

The examples just shown are two extreme cases, providing us with clear examples of an interaction and a non-interaction. However, in general, there does not appear to be a dichotomous way to classify a discrete causal relationship as an interaction or a non-interaction. So, we propose a fuzzy set membership definition of a discrete interaction in the Methods section.
Previous research on learning discrete interactions
The problem of learning genetic epistasis from GWAS datasets has recently inspired ample research on learning discrete interactions from high-dimensional datasets. Researchers applied standard statistical techniques, including logistic regression [17,18] and regularized logistic regression [19,20]. However, many felt that regression may not work well at learning interacting loci because the assumptions in these models are too restrictive. So researchers applied machine learning strategies, including modeling full interactions [21], using information gain [22], a technique called SNP Harvester [23], using ReliefF [24], applying random forests [25], a strategy called predictive rule inference [26], a method called Bayesian epistasis association mapping (BEAM) [27], the use of maximum entropy [28], Bayesian network learning [29-31], and Bayesian network learning combined with information gain [32]. A well-known new technique called Multifactor Dimensionality Reduction (MDR) [33] was also developed. MDR combines two or more variables into a single variable (hence leading to dimensionality reduction); this changes the representation space of the data and facilitates the detection of nonlinear interactions among the variables. MDR has been applied to detect epistatically interacting loci in hypertension [34], sporadic breast cancer [35], and type II diabetes [36]. Jiang et al. [37] evaluated the performance of 22 Bayesian network scoring criteria and MDR when learning two interacting SNPs with no marginal effects. Using 28,000 simulated datasets and a real Alzheimer's GWAS dataset, they found that several of the Bayesian network scoring criteria performed substantially better than the other scores and MDR. The BN score that performed best was the Bayesian Dirichlet equivalent uniform score, which is based on the probability of the data given the model.
Henceforth, we refer to a candidate cause as a predictor. The multiple beam search algorithm (MBS) was developed in [29] to discover causal interactions. MBS starts by narrowing down the number of predictors using a Bayesian network scoring criterion (discussed in the Methods section) to identify a best set of candidate predictors. Next, it starts a beam from each of these predictors. It performs greedy forward search on this beam by adding the predictor that increases the score the most, stopping when no predictor addition increases the score. Next, MBS performs greedy backward search on each beam by deleting the predictor whose deletion increases the score the most, stopping when no predictor deletion increases the score. The set of predictors discovered in this manner is a candidate causal interaction. However, if two predictors each have a strong individual effect, they will have a high score together and will therefore be identified as an interaction, even if they do not interact. MBS-IGain [32] resolves this difficulty. MBS-IGain also uses MBS to develop beams and uses Bayesian network scoring to end the forward search; however, it uses information gain to choose the next predictor rather than adding the predictor that increases the score the most. In a comparison using 100 simulated 1000-predictor datasets with 15 interacting predictors involved in 5 interactions, MBS-IGain substantially outperformed nine epistasis learning methods, including MBS [29], LEAP [31], logistic regression [18], MDR [33] combined with a heuristic search, full interaction modeling [21], information gain alone [22], SNP Harvester [23], BEAM [27], and a technique that uses maximum entropy [28].

Fig. 1 On the left is a Bayesian network representing a causal interaction with no marginal effects, and on the right is a Bayesian network representing a causal interaction described by the Noisy-Or model

Methods

MBS-IGain requires some marginal effect to detect interactions containing more than two predictors. If the dataset is not high-dimensional, we can alleviate this difficulty by instead doing an exhaustive search while using the model selection criteria in MBS-IGain. However, the exhaustive search is not straightforward, because we must not only score each candidate model M, but also check the submodels of M to see how much information is provided if we do not combine them into M. We develop Exhaustive-IGain, which does this.
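For orientation, the following schematic Python sketch outlines the MBS-style forward-backward beam search described above (an illustration written for this text, not the authors' implementation). The callables score(model) and gain(model, p) are placeholders for a Bayesian network scoring criterion and the information-gain selection step defined in the next subsections.

```python
# Schematic sketch of the MBS-style beam search with an information-gain
# selection step. score() and gain() are placeholder callables, not the
# authors' implementation.
def beam_search(predictors, start, score, gain):
    """Grow a candidate interaction from `start`, then prune it."""
    model = {start}
    # Greedy forward search: add the predictor with the largest gain,
    # as long as the Bayesian network score keeps improving.
    while True:
        candidates = [p for p in predictors if p not in model]
        if not candidates:
            break
        best = max(candidates, key=lambda p: gain(model, p))
        if score(model | {best}) <= score(model):
            break
        model.add(best)
    # Greedy backward search: delete predictors while the score improves.
    while len(model) > 1:
        best = max(model, key=lambda p: score(model - {p}))
        if score(model - {best}) <= score(model):
            break
        model.remove(best)
    return model
```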
We compare the performance of Exhaustive-IGain to MBS-IGain using 10 simulated 40-predictor datasets based on 5 interactions with marginal effects, 16 simulated 40-predictor datasets based on 2 predictors interacting with no marginal effects, 16 simulated 40-predictor datasets based on 3 predictors interacting with no marginal effects, and 16 simulated 40-predictor datasets based on 4 predictors interacting with no marginal effects. We also use Exhaustive-IGain to learn interactions from a real clinical breast cancer dataset.
Since Exhaustive-IGain uses Bayesian networks and information gain, we first review these.
Bayesian networks
Bayesian networks [16,38-40] are an important architecture for reasoning under uncertainty in machine learning. They have been applied to many domains, including biomedical informatics [41-46]. A Bayesian network (BN) represents a joint probability distribution by a directed acyclic graph (DAG) G = (V,E), where the nodes in V are random variables and the edges in E represent relationships among the variables, and by the conditional probability distribution of every node X ∈ V given every combination of values of the node's parents. The edges in the DAG often represent causal relationships [16]. A BN modeling causal relationships among variables related to respiratory diseases appears in Fig. 2.
Using a BN, we can determine probabilities of interest with a BN inference algorithm [16]. For example, using the BN in Fig. 2, if a patient has a smoking history (H = yes), a positive chest X-ray (X = pos), and a positive CAT scan (CT = pos), we can determine the probability of the patient having lung cancer (L = yes). That is, we can compute P(L = yes | H = yes, X = pos, CT = pos). Inference in BNs is NP-hard [47], so approximation algorithms are often employed [16].
Learning a BN from data concerns learning both the parameters and the structure (called a DAG model). In the score-based structure-learning approach, a score is assigned to a DAG based on how well the DAG model G fits the data. The Bayesian score [48] is the probability of the data given G. This score, which uses a Dirichlet distribution to represent prior belief concerning each conditional probability distribution in the BN, is as follows:

P(Data | G) = ∏_{i=1}^{n} ∏_{j=1}^{q_i} [Γ(a_ij) / Γ(a_ij + s_ij)] ∏_{k=1}^{r_i} [Γ(a_ijk + s_ijk) / Γ(a_ijk)], where a_ij = Σ_k a_ijk and s_ij = Σ_k s_ijk,

where n is the number of variables in the model, r_i is the number of states of X_i, q_i is the number of different values that the parents of X_i can jointly assume, a_ijk is a hyperparameter, and s_ijk is the number of times X_i assumed its kth value when the parents of X_i assumed their jth value. When a_ijk = α/(r_i q_i), where α represents a prior equivalent sample size, we call the Bayesian score the Bayesian Dirichlet equivalent uniform (BDeu) score [49].
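To make the scoring concrete, the following minimal sketch (not tied to any published implementation; the function name and data layout are our own choices) computes the log of the BDeu score of one node given a candidate parent set from complete discrete data:

```python
# Minimal sketch: log BDeu score of `child` given `parents`, computed
# from complete discrete data with the counts s_ijk defined above.
import numpy as np
from scipy.special import gammaln

def bdeu_score(data, child, parents, r, alpha=54.0):
    """data   : 2D int array, rows = cases, columns = variables (values 0..r[v]-1)
    child  : column index of X_i
    parents: list of column indices for the parents of X_i
    r      : list with the number of states of each variable
    alpha  : prior equivalent sample size
    """
    r_i = r[child]
    q_i = int(np.prod([r[p] for p in parents])) if parents else 1
    a_ijk = alpha / (r_i * q_i)              # uniform BDeu hyperparameters

    # s[j, k] = count of child value k when the parents are in joint state j
    s = np.zeros((q_i, r_i))
    for row in data:
        j = 0
        for p in parents:                    # mixed-radix index of parent state
            j = j * r[p] + row[p]
        s[j, row[child]] += 1

    s_ij = s.sum(axis=1)                     # s_ij = sum over k of s_ijk
    score = np.sum(gammaln(r_i * a_ijk) - gammaln(r_i * a_ijk + s_ij))
    score += np.sum(gammaln(a_ijk + s) - gammaln(a_ijk))
    return score
```

In the algorithms below, score(Z;M) corresponds to calling such a function with the target Z as the child and the predictors in M as the parents.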
It has been shown that the problem of learning a BN DAG model from data is NP-hard [50]. Consequently, heuristic search algorithms have been developed [16].
Information gain, interaction strength, and interaction power
Information theory [51] concerns the quantification and communication of information. Given a discrete random variable Z with m alternatives, the entropy H(Z) is defined as follows:

H(Z) = −Σ_{i=1}^{m} P(z_i) log₂ P(z_i).

If we repeat n trials of the experiment having outcome Z, then it is possible to show that the entropy H(Z) is the limit as n → ∞ of the expected value of the number of bits needed to report the outcome of every trial. Entropy provides a measure of our uncertainty in the value of Z in the sense that, as entropy increases, it takes more bits on average to resolve our uncertainty. Entropy achieves its maximum value when P(z_i) = 1/m for all z_i, and its minimum value (0) when P(z_j) = 1 for some z_j.
The expected value of the entropy of Z given X is called the conditional entropy of Z given X, denoted H(Z | X). Mathematically, we have

H(Z | X) = Σ_{j=1}^{k} P(x_j) H(Z | x_j),

where X has k alternatives. Knowledge of the value of X can reduce our uncertainty in Z. The information gain of Z relative to X is defined to be the expected reduction in the entropy of Z given X:

IG(Z;X) = H(Z) − H(Z|X).

Let IG(Z;X,Y) denote the information gain of Z relative to the joint probability distribution of X and Y. The interaction strength (IS) of X and Y relative to Z is then defined as follows:

IS(Z;X,Y) = IG(Z;X,Y) − IG(Z;X) − IG(Z;Y).

Let IG(Z;A) denote the information gain of Z relative to the joint distribution of all variables in set A. The IS of variable X and set of variables A is then defined as follows:

IS(Z;X,A) = IG(Z;X,A) − IG(Z;X) − IG(Z;A).

Since A is a set, A ∪ {X} should technically be used in the IG expression; however, we represent this union by X, A. Interaction strength provides a measure of the increase in information gain obtained when X and A are known together relative to knowing each of them separately.
When IG(Z;M) ≠ 0, we define the interaction power (IP) of model M for effect Z as follows:

IP(Z;M) = IS(Z;M) / IG(Z;M), where IS(Z;M) = IG(Z;M) − Σ_{X ∈ M} IG(Z;X).

Since information gain (IG) is nonnegative, it is straightforward that IP(Z;M) ≤ 1. If M is causing Z with no marginal effects (e.g., pure, strict epistasis), the IP is 1; we would consider this a very strong interaction. When the IP is small, the increase in IG obtained by considering the variables together is small compared to considering them separately; we would consider this a weak interaction or no interaction at all.
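These quantities translate directly into code. The following minimal sketch (helper names are our own; empirical plug-in probabilities are an assumption of the illustration) computes entropy, information gain, interaction strength, and interaction power for discrete variables given as Python sequences:

```python
# Minimal sketch: entropy, IG, IS, and IP for discrete variables,
# estimated with empirical (plug-in) probabilities.
import numpy as np
from collections import Counter

def entropy(z):
    n = len(z)
    return -sum((c / n) * np.log2(c / n) for c in Counter(z).values())

def info_gain(z, *xs):
    """IG(Z; X1,...,Xm) = H(Z) - H(Z | X1,...,Xm)."""
    joint = list(zip(*xs))
    n = len(z)
    h_cond = 0.0
    for x_val, cnt in Counter(joint).items():
        zs = [zi for zi, xi in zip(z, joint) if xi == x_val]
        h_cond += (cnt / n) * entropy(zs)
    return entropy(z) - h_cond

def interaction_strength(z, xs):
    """IS(Z; M) = IG(Z; M) - sum of the individual gains IG(Z; X)."""
    return info_gain(z, *xs) - sum(info_gain(z, x) for x in xs)

def interaction_power(z, xs):
    ig = info_gain(z, *xs)
    return interaction_strength(z, xs) / ig if ig > 0 else 0.0
```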
Jiang et al. [32] show that if the variables in M are independent causes of Z, then IS(Z;M) ≥ 0, and therefore IP(Z;M) ≥ 0. So, in situations we often investigate, the IP is between 0 and 1, and therefore satisfies the notion of a fuzzy set [52], where the greater the value of the IP, the greater membership the model has in the fuzzy set of interactions.
The IS and IP can be used to discover interactions. In this next section we develop algorithms for learning interactions that use the IS and the IP.
Interaction strength algorithms
We present the multiple beam search information gain (MBS-IGain) and exhaustive search information gain (Exhaustive-IGain) algorithms, which use information gain and Bayesian network scoring to learn interactions. MBS-IGain, which was previously developed in [32], does a heuristic search, while Exhaustive-IGain does an exhaustive search. Figure 3 shows Algorithm MBS-IGain. The score(Z;M) in Algorithm MBS-IGain is the BDeu score of the DAG model that has the predictors in M being parents of the target Z; the notation score(Z;Y) indicates that Y is the only parent of Z. MBS-IGain symbiotically uses the IS and IG functions and a Bayesian network scoring criterion. Initially, the most promising predictors are chosen using the scoring criterion. A beam is then started from each of these predictors. On each beam, the predictor that has the highest IS with the set of predictors chosen so far is greedily chosen. The search ends either when the IS is small relative to the IG of the model (based on a threshold T), indicating that the IP would be small, or when adding the predictor decreases the score of the model. This latter criterion is included because we not only want to discover predictors that seem to be interacting, but we also want to discover probable models. The check for a sufficiently large IS is performed because a set of SNPs could score very high as parents of Z even when there is no interaction. For example, if X and Y each have strong causal strengths for Z but affect Z independently, the model with them as parents of Z would score high; the Noisy-OR model [16] is such a model. In this situation the model X → Z ← Y would have a high score without there being an interaction. Finally, a parameter R, which puts a limit on the size of the model M learned, could be included in MBS-IGain.
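As an illustration only (the authoritative pseudocode is in Fig. 3), the core beam loop of MBS-IGain might be sketched as follows, reusing the info_gain helper above and a score(z, names) function (e.g., a wrapper around the BDeu sketch); all names are our own:

```python
# Rough sketch of one MBS-IGain beam: greedy forward search guided by IS,
# terminated by the IS/IG threshold T or a non-increasing Bayesian score.
def mbs_igain_beam(z, predictors, start, score, T=0.2, R=5):
    """z: target values; predictors: dict name -> data column;
    score(z, names) returns the Bayesian score of the model."""
    model = [start]
    candidates = set(predictors) - {start}
    while candidates and len(model) < R:
        cols = [predictors[p] for p in model]
        ig_model = info_gain(z, *cols)

        def is_with_model(p):  # IS(Z; p, model) per the definition above
            return (info_gain(z, *cols, predictors[p])
                    - ig_model - info_gain(z, predictors[p]))

        best = max(candidates, key=is_with_model)
        new_cols = cols + [predictors[best]]
        # Exit when the IS is small relative to the IG of the grown model
        # (so the IP would be small) or when the score does not increase.
        if is_with_model(best) < T * info_gain(z, *new_cols):
            break
        if score(z, model + [best]) <= score(z, model):
            break
        model.append(best)
        candidates.remove(best)
    return model
```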
MBS-IGain will miss a 3-predictor or 4-predictor pure epistatic interaction. When there are not many predictors, we can ameliorate this problem by doing an exhaustive search. Algorithm Exhaustive-IGain, which appears in Fig. 4, does this. The parameter R is the maximum size of the interactions we are considering. For each set M of size between 2 and R, the algorithm checks every subset A of M to see if the ratio of IS(Z; M − A, A) to IG(Z;M) exceeds a threshold T; in this way it makes certain that the IP exceeds T. It also checks that M yields a higher score than both A and M − A. If M passes these tests for every subset, then M is considered an interaction.
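The following is a rough sketch of this exhaustive search (Fig. 4 gives the authoritative pseudocode); it reuses the earlier helpers, and the names and data layout are assumptions of the illustration:

```python
# Rough sketch of Exhaustive-IGain: every split (A, M - A) of a candidate
# set M must show enough interaction strength relative to IG(Z;M), and M
# must score higher than both of its parts.
from itertools import combinations

def exhaustive_igain(z, predictors, score, T=0.2, R=4):
    found = []
    names = list(predictors)
    for size in range(2, R + 1):
        for M in combinations(names, size):
            ig_M = info_gain(z, *[predictors[p] for p in M])
            if ig_M <= 0:
                continue
            ok = True
            for k in range(1, size // 2 + 1):
                for A in combinations(M, k):
                    B = tuple(p for p in M if p not in A)
                    is_val = (ig_M
                              - info_gain(z, *[predictors[p] for p in A])
                              - info_gain(z, *[predictors[p] for p in B]))
                    if (is_val / ig_M <= T
                            or score(z, M) <= score(z, A)
                            or score(z, M) <= score(z, B)):
                        ok = False
                        break
                if not ok:
                    break
            if ok:
                found.append(M)
    return found
```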
Reporting the noteworthiness of an interaction
Once we discover an interaction, we need to report its noteworthiness. First, we report its IP to indicate its strength as an interaction. However, if the model is unlikely, it is not very noteworthy even if the IP is large. So, we also need to report the significance of the model. Standard p-values are not very informative because there is more than one null hypothesis. Instead, we report the Bayesian network posterior probability (BNPP) of the model. Consider the 2-predictor model M in which X and Y are both parents of Z. Its BNPP is

P(M | Data) = P(Data|M)P(M) / [P(Data|M)P(M) + Σ_k P(Data|M_k)P(M_k) + P(Data|M_0)P(M_0)],

where k sums over the two 1-predictor models and M_0 denotes the model with no parents of Z (the three competing models shown in Fig. 5). The BNPP extends to larger models, but the number of competing hypotheses grows exponentially with the size of the model. However, in general, we usually don't learn an interaction with more than 5 predictors. Jiang et al. [30] discuss and provide prior probabilities in the case of interactions learned from GWAS datasets.
Evaluation methodology
We evaluated Exhaustive-IGain by comparing it to MBS-IGain using simulated datasets, and by applying it to a real breast cancer dataset. We discuss each of these next.
Simulated datasets
One hundred simulated datasets based on interacting trinary variables causing a binary target were developed by Chen et al. [53]. They labeled the predictors SNPs and the target a disease, so we will proceed with this terminology. Each dataset had 1000 total SNPs and consisted of 1000 cases and 1000 controls. The datasets were generated based on two 2-SNP interactions, two 3-SNP interactions, and one 5-SNP interaction, making a total of 15 causative SNPs. The effects of the interactions were combined using the Noisy-OR model [16]. The 5 interactions used to generate the datasets were as follows:
1. S1, S2, S3, S4, S5
2. S6, S7, S8
3. S9, S10, S11
4. S12, S13
5. S14, S15

Fig. 5. The model in which X and Y are both parents of Z is on the left, and its three competing models are on the right.

Each of these 5 interactions exhibits some marginal effect. As mentioned in the Introduction section, MBS-IGain [32] previously outperformed 9 other methods at interaction discovery using these 100 datasets. We developed 10 datasets based on these same interactions, but with only 40 total SNPs. Each dataset has 1000 cases and 1000 controls.
Urbanowicz et al. [54] created GAMETES, which is a software package for generating pure, strict epistatic models with random architectures. We used GAMETES to develop 2-SNP, 3-SNP, and 4-SNP models of pure epistatic interaction. That is, there are no marginal effects. The software allows the user to specify the heritability and the minor allele frequency (MAF). We used values of heritability ranging between 0.01 and 0.2, and values of MAF ranging between 0.1 and 0.4. Using these values, we generated 16 datasets based on pure, strict 2-SNP interactions, 16 datasets based on pure, strict 3-SNP interactions, and 16 datasets based on pure, strict 4-SNP interactions. The 2-SNP and 3-SNP based datasets contained 1000 cases and 1000 controls, and the 4-SNP based datasets contained 5000 cases and 5000 controls. All the simulated datasets are available in Additional file 1.
We used both MBS-IGain and Exhaustive-IGain to analyze both sets of datasets. We ran both algorithms with all combinations of the following values of the threshold T in the algorithms, T = 0.1, 0.2, and the parameter α in the BDeu score, α = 9, 54, 128.
We compared the results using the following two performance criteria.

Criterion 1: This criterion determines how well the method discovers the predictors in the interactions, but does not concern itself with whether the method discovers the actual interactions. First, the learned interactions are ordered by their scores. Then each predictor is ordered according to the first interaction in which it appears. Finally, the power according to criterion 1 is computed as follows:

Power_1(K) = Σ_{i=1}^{H} N_K(i) / (H × M),

where N_K(i) is the number of true interacting predictors appearing in the first K predictors learned for the ith dataset, M is the total number of interacting predictors in all interactions, and H is the number of datasets.

Criterion 2: This criterion measures how well a method discovers each of the interactions. The criterion uses the Jaccard index, which is as follows:

Jaccard(A,B) = |A ∩ B| / |A ∪ B|.

The Jaccard index equals 1 if the two sets are identical and equals 0 if their intersection is empty. The criterion provides a separate measure for each true interaction. The learned interactions are first ordered by their scores for each dataset i. Denote the jth learned interaction in the ith dataset by M_j(i), and denote the true interaction we are investigating by C. For each i and j we compute Jaccard(M_j(i),C). We then set

J_K(i) = max_{1 ≤ j ≤ K} Jaccard(M_j(i), C).

The power according to criterion 2 for interaction C is then computed as follows:

Power_2(K,C) = (1/H) Σ_{i=1}^{H} J_K(i),

where H is the number of datasets and M is the total number of interacting predictors in interaction C.
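A hedged sketch of how these two criteria could be computed follows; it assumes the reconstructed formulas above and represents each learned interaction as a set of predictor names:

```python
# Minimal sketch of the two power criteria; `learned` holds, per dataset,
# a score-ordered list of learned interactions (sets of predictor names),
# and `true_sets` holds the true interactions.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def power1(learned, true_sets, K):
    all_true = set().union(*true_sets)
    total = 0
    for interactions in learned:              # one entry per dataset
        ordered = []
        for m in interactions:                # order predictors by first appearance
            ordered += [p for p in m if p not in ordered]
        total += len(set(ordered[:K]) & all_true)
    return total / (len(learned) * len(all_true))

def power2(learned, C, K):
    return sum(max((jaccard(m, C) for m in interactions[:K]), default=0.0)
               for interactions in learned) / len(learned)
```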
Real dataset
The METABRIC dataset [15] has clinical data and outcomes for 1981 primary breast cancer tumors. Table 1 shows the clinical variables and their values used in our analysis. The data in three of these variables were transformed from their original METABRIC values using domain knowledge and the equal distribution discretization strategy. The transformations follow:
age_at_diagnosis: This variable was discretized to the five ranges shown using the equal distribution discretization technique and breast cancer expert knowledge.
size: This variable was discretized to the three standard ranges shown.
lymph_nodes_positive: This variable was grouped into the six ranges shown.

Table 1. The clinical variables in the METABRIC dataset (Variable / Description / Values; e.g., age_at_diagnosis: age at diagnosis of the disease, values 0-39, 39-54, 54-69, 69-84, 84-100; menopausal_status: inferred menopausal …).
The outcome variable is whether the patient died from breast cancer. If the person was known to die from breast cancer, the days after initial consultation that the patient died is recorded. If the person was not known to die from breast cancer, the days after initial consultation that the patient was last seen alive or died from another cause is recorded. If a patient was known to die from breast cancer within x years after initial consultation or is known to be alive x years after initial consultation, we say their breast cancer survival status is known x years after initial consultation. These data provide us with 1698 patients whose breast cancer survival status is known 5 years after initial consultation, 1228 patients whose breast cancer survival status is known 10 years after initial consultation, and 782 patients whose breast cancer survival status is known 15 years after initial consultation.
We used Exhaustive-IGain to learn interactions that affect 5-year, 10-year, and 15-year breast cancer survival.
Results

Simulated datasets based on marginal effects
The results were similar for all combinations of the parameters, but best when T = 0.2 and α = 54. Figure 6 shows Power_1(K) for K ≤ 25 for the Exhaustive-IGain and MBS-IGain algorithms. Figure 7 shows Power_2(K,C) for K ≤ 12 for each interaction C for the two methods, and Fig. 7f shows the average of Power_2(K,C) over all 5 interactions. It is initially surprising that MBS-IGain does slightly better than Exhaustive-IGain according to Power Criterion 1 and, on average, according to Power Criterion 2. These results can be attributed to the superior performance of MBS-IGain for interaction {S1,S2,S3,S4,S5} (Fig. 7a) and interaction {S9,S10,S11} (Fig. 7c). An explanation for this superior performance is as follows. MBS-IGain, for example, could have S9 and S10 already chosen on a beam and be considering S11 next. The model {S9,S10,S11} is only checked for interaction strength relative to the models {S9,S10} and {S11}. So, if the information gain of {S9,S10,S11} satisfies a threshold relative to the sum of the information gains of {S9,S10} and {S11} (and it increases the score), the model will be chosen. On the other hand, for Exhaustive-IGain to choose model {S9,S10,S11}, that model must also beat the sum of the gains for {S9,S11} and {S10} and the sum of the gains for {S10,S11} and {S9}.

Simulated datasets based on interactions with no marginal effects

We see from Fig. 8a that both methods discover the 2-SNP interaction very well. In fact, Exhaustive-IGain ranked the correct interaction first in 15 of the datasets and 3rd in the remaining dataset, while MBS-IGain ranked it first in 15 of the datasets and 4th in the remaining dataset (this information is not in the figure). In the case of a 2-SNP interaction, MBS-IGain effectively does an exhaustive search, explaining why it performs almost as well as Exhaustive-IGain. Its slightly worse performance is due to its different exit criteria concerning the score: it stops adding predictors when no predictor increases the score, whereas Exhaustive-IGain checks whether any sub-model has a higher score than the model being considered. Exhaustive-IGain achieves this performance with very few false discoveries. The average number of interactions discovered by Exhaustive-IGain is 2.0, whereas the average number discovered by MBS-IGain is 4.75. Figure 8b shows that Exhaustive-IGain also discovers the 3-SNP interactions extremely well, while MBS-IGain exhibits poor performance. This poor performance is to be expected: when there are no marginal effects, if {S1,S2,S3} is our interaction, S2 or S3 would be chosen first on the beam initiating from S1 only by chance. In general, Exhaustive-IGain exhibited this good performance with a low false positive rate. The average number of interactions discovered for 15 of the datasets was 2.47; however, for one of the datasets, 100 interactions (the maximum reported) were identified.
As Fig. 8c shows, Exhaustive-IGain performed well for the 4-SNP interactions, but not as well as it did for the smaller models. This result indicates that higher-order interactions are more difficult to discover. As expected, MBS-IGain again showed very poor performance. For 14 of the datasets, the average number of interactions discovered by Exhaustive-IGain was 1.85; however, for two of the datasets, 100 interactions were discovered.

Real dataset

Table 2 shows the correlations of each of the predictors with breast cancer survival according to both the BNPP and Pearson's chi-square test. With a few exceptions, the two methods are in agreement. Our purpose here is not to discuss these correlations, but rather to provide them as a frame of reference for the learned interactions, which appear in Table 3. Table 3 shows the interactions learned from the METABRIC dataset that have IPs > 0.4. The data indicate that histological interacts with menopausal_status to affect both 5-year and 15-year breast cancer survival. A consultation with a breast cancer oncologist revealed that invasive ductal carcinoma (IDC) has a worse prognosis in premenopausal women, but other histological types do not. Furthermore, Table 2 indicates that neither histological nor menopausal_status is highly correlated with 5-year or 15-year breast cancer survival by itself. Table 3 also shows that the data indicate hormone and menopausal_status interact to affect 10-year breast cancer survival. The breast cancer oncologist indicated that hormone therapy is more effective in post-menopausal women. As Table 2 shows, neither hormone nor menopausal_status is highly correlated with 10-year breast cancer survival by itself. Finally, Table 3 shows that the data indicate that histological and hormone interact to affect 5-year breast cancer survival. The oncologist stated that IDC might respond slightly worse to hormone therapy than other types, but that this difference is not well established.
The BNPP is a relatively new concept, and the IP is a completely new concept. So, we do not have the same intuition for their values as we have for a p-value. That is, we have come to consider a p-value of 0.05 meaningful partly due to Fisher's [55] statement in 1921 that "it is convenient to draw the line at about the level at which we can say: Either there is something in the treatment, or a coincidence has occurred such as does not occur more than once in twenty trials," and also due to years of experience. To provide a context for the results in Table 3, Table 4 shows the average BNPPs and IPs of all 2-, 3-, 4-, and 5-predictor models obtained from the METABRIC dataset. As we would expect, the value of the BNPP decreases as the size of the models increases. However, the IP is small for models of all sizes. The models we learned (Table 3) are all 2-predictor models, so we compare those results to the averages for 2-predictor models. Our IP results of 0.43, 0.47, 0.72 and 0.49 are all substantially larger than the 2-predictor IP average of 0.042. Three of our BNPP results, namely
Conclusions
We compared Exhaustive-IGain to MBS-IGain using simulated datasets based on interactions with marginal effects, and simulated datasets based on interactions with no marginal effects. MBS-IGain performed as well as (actually slightly better than) Exhaustive-IGain when analysing the datasets based on interactions with marginal effects. MBS-IGain is O(Rn²) whereas Exhaustive-IGain is O(n^R), where n is the number of predictors and R is the maximum size of the models considered. So, our results indicate that MBS-IGain achieves similar results to Exhaustive-IGain with this type of dataset, but much more efficiently. On the other hand, as could be expected, MBS-IGain could not discover pure epistatic interactions involving more than two SNPs. Exhaustive-IGain performed very well at discovering 3-SNP interactions, and reasonably well at discovering 4-SNP interactions. We conclude from these results that the combined use of information gain and Bayesian network scoring enables us to discover higher-order pure epistatic interactions if we perform an exhaustive search. When we applied Exhaustive-IGain to a real breast cancer dataset to learn interactions affecting breast cancer survival, we learned interactions that agreed with the judgements of a breast cancer oncologist. We conclude that Exhaustive-IGain can be effective when applied to real data. | 7,863.8 | 2016-05-26T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Implications of Pulsar Timing Array Data for Scalar-Induced Gravitational Waves and Primordial Black Holes: Primordial Non-Gaussianity $f_{\mathrm{NL}}$ Considered
Multiple pulsar-timing-array collaborations have reported strong evidence for the existence of a gravitational-wave background. We study physical implications of this signal for cosmology, assuming that it is attributed to scalar-induced gravitational waves. By incorporating primordial non-Gaussianity $f_{\mathrm{NL}}$, we specifically examine the nature of primordial curvature perturbations and primordial black holes. We find that the signal allows for a primordial non-Gaussianity $f_{\mathrm{NL}}$ in the range of $-4.1\lesssim f_{\mathrm{NL}} \lesssim 4.1$ (68\% confidence intervals) and a mass range for primordial black holes $m_{\mathrm{pbh}}$ spanning from $\sim10^{-5}M_{\odot}$ to $\sim10^{-2}M_{\odot}$. Furthermore, we find that the signal favors a negative non-Gaussianity, which can suppress the abundance of primordial black holes. We also demonstrate that the anisotropies of scalar-induced gravitational waves serve as a powerful tool to probe the non-Gaussianity $f_{\mathrm{NL}}$. We conduct a comprehensive analysis of the angular power spectrum within the nano-Hertz band. Looking ahead, we anticipate that future projects, such as the Square Kilometre Array, will have the potential to measure these anisotropies and provide further insights into the primordial universe.
I. INTRODUCTION
Multiple collaborations in the field of pulsar timing array (PTA) observations have presented strong evidence for a signal exhibiting correlations consistent with a stochastic gravitational-wave background (GWB) [1][2][3][4]. The strain has been measured to be of the order of 10⁻¹⁵ at a pivot frequency of 1 yr⁻¹. Though this GWB aligns with expectations from astrophysical sources, specifically inspiraling super-massive black hole (SMBH) binaries [5], it is important to note that the current datasets do not rule out the possibility of cosmological origins or other exotic astrophysical sources, which have been explored in collaborative accompanying papers [6,7]. Notably, several cosmological models have demonstrated superior fits to the signal compared to the SMBH-binary interpretation. If these alternative models are confirmed in the future, they may provide compelling evidence for new physics.
In this study, our focus lies on the cosmological interpretation of the signal, specifically the existence of scalar-induced gravitational waves (SIGWs) [8][9][10][11][12][13]. This possibility had been used for interpreting the NANOGrav 12.5-year dataset [14] in Refs. [15][16][17][18][19][20][21][22][23][24]. It was recently revisited by the PTA collaborations [6,7], but the statistics of primordial curvature perturbations was assumed to be Gaussian. However, it has been demonstrated that primordial non-Gaussianity f_NL significantly contributes to the energy density of SIGWs [25][26][27][28][29][30][31][32][33]. This indicates noteworthy modifications to the energy-density spectrum, which is crucial for the data analysis of PTA datasets. On the other hand, it has been shown that primordial non-Gaussianity f_NL could generate initial inhomogeneities in SIGWs, leading to anisotropies characterized by the angular power spectrum [33]. Related studies can be found in Refs. [34][35][36][37][38][39][40][41][42]. Our analysis will encompass a comprehensive examination of the angular power spectrum within the PTA band. Moreover, this spectrum is capable of breaking the degeneracies among model parameters, in particular leading to a possible determination of f_NL, and playing a crucial role in distinguishing between different sources of the GWB. Therefore, by interpreting the signal as originating from SIGWs, we aim to study the physical implications of PTA datasets for the nature of primordial curvature perturbations, including their power spectrum and angular power spectrum.
The remainder of this paper is arranged as follows. In Section II, we provide a brief summary of the homogeneous and isotropic component of SIGWs. In Section III, we show implications of the PTA data for the power spectrum of primordial curvature perturbations and then for the mass function of PBHs. In Section IV, we study the inhomogeneous and anisotropic component of SIGWs and show the corresponding angular power spectrum in the PTA band. In Section V, we make concluding remarks and discussions.
II. SIGW ENERGY-DENSITY FRACTION SPECTRUM
In this section, we show a brief but self-consistent summary of the main results of the energy-density fraction spectrum in a framework of SIGW theory.
To quantify contributions of f_NL to the energy density, we express the primordial curvature perturbations ζ in terms of their Gaussian components ζ_g, i.e., [69]

ζ = ζ_g + (3/5) f_NL (ζ_g² − ⟨ζ_g²⟩).  (1)

Here, f_NL represents the non-linear parameter that characterizes the local-type primordial non-Gaussianity. To simplify the subsequent analytic formulae, we introduce a new quantity as follows:

F_NL = (3/5) f_NL.  (2)

It is worth noting that perturbation theory requires the condition A_S F_NL² < 1, where A_S will be defined later. We define the dimensionless power spectrum of ζ_g as

⟨ζ_g(q) ζ_g(q′)⟩ = δ⁽³⁾(q + q′) (2π²/q³) ∆²_g(q).  (3)

In this work, we assume that ∆²_g(q) follows a log-normal distribution with respect to ln q [21, 31, 70-72], i.e.,

∆²_g(q) = [A_S / (√(2π) σ)] exp[−ln²(q/q_*) / (2σ²)],  (4)

where A_S represents the spectral amplitude at the spectral peak wavenumber q_*, and σ denotes the standard deviation that characterizes the width of the spectrum. The wavenumber q is straightforwardly converted into the frequency ν, namely, q = 2πν.

FIG. 1. Unscaled (or equivalently, A_S = 1 and F_NL = 1) contributions to the energy-density fraction spectrum of SIGWs.
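As a quick illustration, the spectrum of Eq. (4) and the perturbativity condition can be coded as follows; the parameter values are arbitrary examples, not fitted values:

```python
# Minimal sketch: the log-normal primordial spectrum of Eq. (4) and the
# perturbativity condition A_S * F_NL^2 < 1 (parameter values are examples).
import numpy as np

def delta2_g(q, A_S, sigma, q_star):
    """Dimensionless power spectrum of the Gaussian component zeta_g."""
    return A_S / (np.sqrt(2.0 * np.pi) * sigma) \
        * np.exp(-np.log(q / q_star) ** 2 / (2.0 * sigma ** 2))

A_S, sigma, f_NL = 0.02, 1.0, -2.0      # example values only
F_NL = 3.0 * f_NL / 5.0
assert A_S * F_NL ** 2 < 1.0, "outside the perturbative regime"
```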
Through a detailed derivation process based on Wick's theorem, we can decompose Ω̄_gw ∼ ⟨ζ⁴⟩ into three components depending on the power of f_NL; the complete derivations have been simplified by employing a Feynman-like diagrammatic approach [25,28,[30][31][32][33]. Here, we present the final results:

Ω̄_gw(η, q) = Ω̄_gw⁽⁰⁾ + Ω̄_gw⁽¹⁾ + Ω̄_gw⁽²⁾,  (5)

where we provide the analytic expressions for Ω̄_gw⁽ⁿ⁾, which are proportional to A_S² (A_S F_NL²)ⁿ with n = 0, 1, 2, in Appendix A. They were evaluated using the vegas package [73], and their numerical results for σ = 1/2, 1, 2 are reproduced in Fig. 1. Specifically, Ω̄_gw⁽⁰⁾ corresponds to the energy-density fraction spectrum in the case of Gaussianity, while Ω̄_gw⁽¹⁾ and Ω̄_gw⁽²⁾ fully describe the contributions of local-type primordial non-Gaussianity f_NL.
The energy-density fraction spectrum of SIGWs at the present conformal time η₀ can be expressed as

h² Ω_gw,0(ν) = Ω_rad,0 h² [g_*,ρ(T) / g_*,ρ(T_eq)] [g_*,s(T_eq) / g_*,s(T)]^(4/3) Ω̄_gw(η, q).  (6)

In the above equation, Ω_rad,0 h² = 4.2 × 10⁻⁵ represents the physical energy-density fraction of radiation in the present universe [74]. T and T_eq correspond to the cosmic temperatures at the emission time and at the epoch of matter-radiation equality, respectively. The frequency ν can be related to T, g_*,ρ(T), and g_*,s(T) [21]. Here, g_*,ρ and g_*,s represent the effective relativistic degrees of freedom in the universe, which are tabulated functions of T as provided in Ref. [75]. To illustrate the interpretation of current PTA data in the framework of SIGWs, we depict Ω̄_gw,0(ν) with respect to ν in Fig. 2, using three specific sets of model parameters.
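A minimal sketch of the transfer in Eq. (6) follows; it assumes the reconstructed form of that equation, and the g_* arguments passed in stand in for the tabulated functions of Ref. [75]:

```python
# Minimal sketch of the transfer in Eq. (6); the g_* values supplied by the
# caller are placeholders for the tabulated functions of Ref. [75].
OMEGA_RAD_H2 = 4.2e-5

def omega_gw_today_h2(omega_gw_emit, g_rho_T, g_s_T, g_rho_Teq, g_s_Teq):
    return (OMEGA_RAD_H2 * (g_rho_T / g_rho_Teq)
            * (g_s_Teq / g_s_T) ** (4.0 / 3.0) * omega_gw_emit)
```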
III. IMPLICATIONS OF PTA DATA FOR NEW PHYSICS
In this section, we investigate the potential constraints on the parameter space of the primordial power spectrum and PBHs using the NANOGrav 15-year (NG15) data. While it is possible to obtain constraints from other PTA datasets using the same methodology, we do not consider them in this study, as they would not significantly alter the leading results of our current work.
By performing a comprehensive Bayesian analysis [7], we obtain the posteriors of four independent parameters, i.e., F_NL, A_S, σ, and ν_*, for which the priors are set to be F_NL ∈ [−30, 30], log₁₀ A_S ∈ [−3, 1], σ ∈ [0, 5], and log₁₀(ν_*/Hz) ∈ [−9, −5]. Here, we also adopt the aforementioned condition of perturbativity, namely, A_S F_NL² < 1. The inference results within 68% confidence intervals are given in Eqs. (8)-(11). We can also recast Eq. (8) into constraints on f_NL, i.e., −4.1 ≲ f_NL ≲ 4.1 at 68% confidence. Fig. 3 shows two-dimensional contours in the log₁₀ A_S - F_NL plane at 68% (dark blue regions) and 95% (light blue regions) confidence levels. There is a full degeneracy in the sign of the primordial non-Gaussianity f_NL, as the energy-density fraction spectrum depends only on the absolute value of F_NL, as demonstrated in Fig. 1. The above results indicate that the PTA observations have already emerged as a powerful tool for probing physics of the early universe.
We can further recast the constraints on the primordial curvature power spectrum into constraints on the nature of PBHs, which is characterized by their mass function. Due to significant uncertainties in the formation scenarios of PBHs (as discussed in reviews such as Ref. [42]), we adopt a simplified scenario [61] to illustrate the importance of primordial non-Gaussianity f_NL. The initial mass fraction of PBHs is described by

β = ∫_{ζ_c}^{∞} P(ζ) dζ,  (13)

where P(ζ) represents the probability distribution function (PDF) of primordial curvature perturbations, σ_g is the standard deviation of the Gaussian component ζ_g in the PDF, and ζ_c stands for the critical fluctuation. We further find σ²_g = ⟨ζ²_g⟩ = ∫ ∆²_g(q) d ln q = A_S by considering the power spectrum defined in Eq. (4). Additionally, it is known that ζ_c is of order O(1), with specific values of 0.7 and 1.2 suggested by Ref. [76].
To evaluate Eq. (13), we divide F_NL into two regimes, i.e., F_NL > 0 and F_NL < 0. In the case of F_NL > 0, we solve the equation ζ(ζ_g) = ζ_c, yielding the relation

ζ_g^± = [−1 ± √(1 + 4 F_NL ζ_c)] / (2 F_NL).  (14)

By substituting it into Eq. (13), we obtain

β = (1/2) [erfc(ζ_g^+ / (√2 σ_g)) + erfc(−ζ_g^− / (√2 σ_g))],  (15)

where erfc(x) is the complementary error function. Similarly, in the case of −(4ζ_c)⁻¹ < F_NL < 0, for which ζ exceeds ζ_c only for ζ_g between the two roots, we obtain

β = (1/2) [erfc(ζ_g^+ / (√2 σ_g)) − erfc(ζ_g^− / (√2 σ_g))].  (16)

In contrast, in the case of F_NL < −(4ζ_c)⁻¹, no PBHs were formed in the early universe, since the curvature perturbations are expected to never exceed the critical fluctuation. As a viable candidate for cold dark matter, the abundance of PBHs is determined as [77]

f_pbh ≃ 2.5 × 10⁸ β [g_*,ρ(T_f) / 10.75]^(−1/4) (m_pbh / M_⊙)^(−1/2),  (17)

where m_pbh represents the mass of PBHs, and T_f denotes the cosmic temperature at the formation occasion. Roughly speaking, m_pbh can be related to the horizon mass m_H and thereby to the peak frequency ν_* [17]. Based on Eq. (11), we infer that the mass range of PBHs is of the order of O(10⁻⁵-10⁻²) M_⊙. However, the inferred abundance of PBHs exceeds unity in the case of a sizable positive F_NL, indicating an overproduction of PBHs. This is because the inferred value of A_S is typically one order of magnitude larger than the value of A_S that leads to f_pbh = 1. To illustrate this result more clearly, we include in Fig. 3 two solid curves corresponding to m_pbh = 10⁻² M_⊙ and f_pbh = 1 in the cases of ζ_c = 0.7 (purple curve) and ζ_c = 1.2 (rose curve), respectively. For comparison, we mark the critical value F_NL = −(4ζ_c)⁻¹ with dotted lines. Therefore, we find that a negative F_NL is capable of alleviating the overproduction of PBHs, especially when considering a sizable negative F_NL, namely F_NL < −(4ζ_c)⁻¹, which prevents the formation of any PBHs. However, due to large uncertainties in model building, it remains challenging to exclude the PBH scenario through analyzing the present PTA data.
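The two regimes of β can be evaluated numerically as in the following sketch, which implements Eqs. (14)-(16) as reconstructed above for the simplified formation scenario; the parameter values passed in are the user's assumptions:

```python
# Minimal sketch of the simplified PBH mass-fraction estimate: beta from the
# Gaussian tails of zeta_g, following Eqs. (14)-(16) as reconstructed above.
import numpy as np
from scipy.special import erfc

def beta_pbh(F_NL, sigma_g, zeta_c=1.2):
    if F_NL == 0.0:                       # Gaussian limit for reference
        return 0.5 * erfc(zeta_c / (np.sqrt(2.0) * sigma_g))
    disc = 1.0 + 4.0 * F_NL * zeta_c
    if disc <= 0.0:                       # F_NL < -1/(4 zeta_c): no PBHs form
        return 0.0
    zg_p = (-1.0 + np.sqrt(disc)) / (2.0 * F_NL)
    zg_m = (-1.0 - np.sqrt(disc)) / (2.0 * F_NL)
    s = np.sqrt(2.0) * sigma_g
    if F_NL > 0.0:                        # zeta > zeta_c on both outer tails
        return 0.5 * (erfc(zg_p / s) + erfc(-zg_m / s))
    # -1/(4 zeta_c) < F_NL < 0: zeta exceeds zeta_c only between the roots
    return 0.5 * (erfc(zg_p / s) - erfc(zg_m / s))
```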
In summary, it is crucial to measure the primordial non-Gaussianity, or at least determine the sign of F_NL, in order to assess the viability of the PBH scenario. However, it is impossible to determine the sign of F_NL through measurements of the energy-density fraction spectrum of SIGWs, due to the sign degeneracy. In the next section, we will propose that the inhomogeneous and anisotropic component of SIGWs has the potential to break the sign degeneracy, as well as other degeneracies in model parameters, opening up new possibilities for making judgments about the PBH scenario in the future.
IV. SIGW ANGULAR POWER SPECTRUM
In this section, we investigate the inhomogeneities and anisotropies in SIGWs by deriving the angular power spectrum in the PTA band, following the research approach established in our previous paper [33].
The inhomogeneities in SIGWs arise from long-wavelength modulations of the energy density generated by short-wavelength modes. As discussed in Section II, SIGWs originate from extremely high redshifts, corresponding to very small horizons. However, due to limitations in the angular resolution of detectors, the signal along a line-of-sight represents an ensemble average of the energy densities over a sizable number of such horizons. Consequently, any two signals would appear identical. Nevertheless, the energy density of SIGWs produced by short-wavelength modes can be spatially redistributed by long-wavelength modes if there are couplings between the two. The local-type primordial non-Gaussianity f_NL could contribute to such couplings.
Similar to the temperature fluctuations of relic photons [78], the initial inhomogeneities in SIGWs at a spatial location x can be characterized by the density contrast, denoted as δ_gw(η, x, q) and given by

δ_gw(η, x, q) = ω_gw(η, x, q) / ω̄_gw(η, q) − 1,

where the energy-density full spectrum ω_gw(η, x, q) is defined in terms of the energy density, namely, ρ_gw(η, x) = ρ_crit(η) ∫ (d³q/q³) ω_gw(η, x, q), and ω̄_gw denotes its spatial average. We specifically get ω_gw ∼ ⟨ζ⁴⟩_x, where the subscript x denotes an ensemble average within the horizon enclosing x [33,34,74]. Using Feynman-like rules and diagrams, an explicit expression for δ_gw(η, x, q) in terms of the long-wavelength curvature modes was derived in [33]. The present density contrast, denoted as δ_gw,0(q), can be estimated analytically using the line-of-sight approach [80][81][82]. It is contributed by both the initial inhomogeneities and propagation effects, and is given by [34]

δ_gw,0(q) = δ_gw(η, x, q) + [4 − n_gw,0(ν)] Φ(η, x).  (22)

Here, n_gw,0(ν) denotes the index of the present energy-density fraction spectrum in Eq. (6), given by

n_gw,0(ν) = d ln Ω̄_gw,0(ν) / d ln ν.

For the propagation effects, we consider only the Sachs-Wolfe (SW) effect [83], which is characterized by Bardeen's potential on large scales, Φ(η, x) = (3/5) ζ_L(x). We assume statistical homogeneity and isotropy for the density contrasts on large scales, similar to the study of the cosmic microwave background (CMB) [84].
The anisotropies today can be mapped from the aforementioned inhomogeneities. The reduced angular power spectrum is useful to characterize the statistics of these anisotropies. It is defined via the two-point correlator of the present density contrast, namely,

⟨δ_gw,ℓm(ν) δ*_gw,ℓ′m′(ν)⟩ = δ_ℓℓ′ δ_mm′ C̃_ℓ(ν),

where δ_gw,0(q) has been expanded in terms of spherical harmonics, i.e.,

δ_gw,0(q) = Σ_ℓm δ_gw,ℓm(ν) Y_ℓm(n̂).

Roughly speaking, we get C̃_ℓ ∼ δ²_gw,0 ∝ ⟨ζ_gL ζ_gL⟩ ∼ ∆²_L. A detailed analysis using Feynman-like rules and diagrams was conducted in our previous paper [33], and the resulting expressions can be recast into the angular power spectrum C_ℓ(ν). Analogous to the CMB, for which the root-mean-square (rms) temperature fluctuation is determined by [ℓ(ℓ + 1)C^CMB_ℓ/(2π)]^(1/2), the rms density contrast for SIGWs is determined by [ℓ(ℓ + 1)C_ℓ(ν)/(2π)]^(1/2), which represents the variance of the energy-density fluctuations. It is vital to note that the rms density contrast is constant with respect to the multipoles ℓ, but depends on the frequency band.
In Fig. 4, we present the rms density contrast as a function of the gravitational-wave frequency, together with the energy-density fraction spectrum for comparison. Roughly speaking, we find that C̃_ℓ is of the order of 10⁻⁴, depending on the specific set of model parameters. It is worth noting that the angular power spectrum can break degeneracies among these parameters. For instance, based on Fig. 4, we observe a coincidence in the energy-density fraction spectra for three different parameter sets; however, the angular power spectrum breaks this coincidence, particularly in the case of the sign degeneracy of f_NL. This result suggests that measurements of the anisotropies in SIGWs have the potential to determine the primordial non-Gaussianity [33]. Recently, an upper limit of C̃_ℓ < 20% was inferred from the NG15 data [86]; however, this limit is not precise enough to test the theoretical predictions of our present work. In contrast, based on Fig. 4, we anticipate that the Square Kilometre Array (SKA) program [85] will offer sufficient precision to measure the non-Gaussianity f_NL.
V. CONCLUSIONS
In this study, we examined the implications of recent PTA datasets for understanding the nature of primordial curvature perturbations and primordial black holes (PBHs). Specifically, we investigated the influence of primordial non-Gaussianity f_NL on the inference of model parameters, and vice versa, by analyzing the recent NG15 data. In particular, at the 68% confidence level, we inferred |f_NL| < 4.1, which is competitive with the constraints from measurements of the CMB. Even when considering the non-Gaussianity f_NL, we found that the PBH scenario is in tension with the NG15 data, except when a sizable negative f_NL is considered, which can significantly suppress the abundance of PBHs. Our results indicate that PTA observations have already emerged as a powerful tool for probing the physics of the early universe and dark matter. Moreover, we proposed that the anisotropies of SIGWs serve as a powerful probe of the non-Gaussianity f_NL in the PTA band. For the first time, we conducted a complete analysis of the angular power spectrum in this frequency band and found that it can effectively break potential degeneracies among the model parameters, particularly the sign degeneracy of f_NL. Additionally, we explored the detectability of the anisotropies in SIGWs in the era of the SKA project.
Notes added. During the preparation of this paper, a related study [87] appeared, which examines the posteriors of the NG15 data. The authors suggest that the Gaussian scenarios for SIGWs are in tension with the current PTA data at a 2σ confidence level, but that non-Gaussian scenarios that suppress the abundance of PBHs can alleviate this tension. Given the significant uncertainties in the formation scenarios of PBHs (as discussed in reviews such as Ref. [42]), the main focus of our research is to simultaneously examine the energy-density fraction spectrum and the angular power spectrum of SIGWs, by incorporating the complete contributions arising from primordial non-Gaussianity f_NL. We also address the importance of primordial non-Gaussianity to SIGWs through a Bayesian analysis over the NG15 data.

Appendix A. Eq. (A2) expresses the integrals of the form ∫ dφ₁₂ cos 2φ₁₂ ⟨J(u₁, v₁, x → ∞) J(u₂, v₂, x → ∞)⟩ entering Ω̄_gw⁽ⁿ⁾. The calculation of the average of the squared oscillation J(u, v, x → ∞) has been provided in Ref. [33], as well as in earlier studies referenced in Refs. [12,13,30,31]; the closed-form expression for ⟨J(u_i, v_i, x → ∞) J(u_j, v_j, x → ∞)⟩ in the variables s and t is given in Eq. (A3). The equations presented in this appendix can be utilized to numerically calculate the energy density of SIGWs in a self-consistent manner.
FIG. 2. Energy-density fraction spectra of SIGWs for different sets of independent parameters. The NG15 data are also shown for comparison. | 4,482.8 | 2023-07-02T00:00:00.000 | [
"Physics"
] |
Experimental and simulation data for point-by-point wire arc additively manufactured carbon steel bars loaded in uniaxial tension
Wire arc additive manufacturing is considered to allow a reduced material consumption for structural steel components by efficiently distributing the material only where necessary. Parts produced with this technology exhibit an irregular, imperfect geometry, which influences their structural behaviour. This paper describes a dataset, which includes geometry information for point-by-point wire arc additively manufactured steel bars, force and displacement measurements from performed uniaxial tensile tests on such bars, and force and displacement values from geometrically and materially non-linear simulations of the bars with imperfect geometry. The geometry data was obtained by 3D scanning the steel bars. Moreover, a script is provided that allows processing the scanned geometry data such that it can be used to generate suitable finite element meshes for geometrically and materially non-linear analyses. The force and displacement data from the uniaxial tensile tests were collected through measurements with a load cell for the force and with the help of digital image correlation measurements for the displacements. The non-linear simulations of the experiments were conducted with the computer aided engineering software Abaqus on processed approximations of the irregular scanned geometry. The described dataset can be used for better understanding the influence of the irregular geometry on the structural behaviour of wire arc additively manufactured parts. Moreover, researchers can apply the data to validate finite element simulation models and approaches for predicting the structural behaviour of different wire arc additively manufactured parts.
Value of the Data
• These data are useful in understanding how the irregular and imperfect scanned geometry of test specimens in general, and wire arc additively manufactured test specimens in particular, can be considered in finite element simulations.
• The data is useful in validating finite element simulation models and approaches for predicting the structural behaviour of wire arc additively manufactured parts.
• Researchers in the field of wire arc additive manufacturing in general, and especially those dealing with wire arc additive manufacturing in structural engineering, may benefit from this dataset for their own research related to the influence of the imperfect irregular geometry of wire arc additively manufactured parts on their structural behaviour.
• The geometry data can be analysed with different methods to evaluate geometry-related parameters of wire arc additively manufactured specimens.
• Researchers can use the experimental and simulation data to validate their finite element models for predicting the structural behaviour of wire arc additively manufactured parts.
Background
Wire arc additively manufactured parts exhibit irregular surface geometries.For characterising such surfaces, generally 3D scanning is used to obtain a point cloud or a mesh that can be further processed and analysed by various mesh-processing software.The meshes obtained from the 3D scanning are highly irregular and not suitable for generating finite element meshes for simulations.A more regular mesh needs to be generated to approximate the 3D scanned one.These mesh-processing steps were performed in the related research article for wire arc additively manufactured steel bars with a script in the Rhino3D/Grasshopper environment.After the related research article was published, the corresponding author received several inquiries on how the geometry for finite element simulations of wire arc additively manufactured parts can be generated from the 3D scanned mesh.This interest in the methodology used to process the 3D scanned mesh for use in finite element simulations represented the original motivation for compiling the dataset described in this data article.
Data Description
The data presented in this article includes geometry information for point-by-point wire arc additively manufactured steel bars, force and displacement measurements from performed uniaxial tensile tests on such bars, and force and displacement values from geometrically and materially non-linear simulations of the bars with imperfect geometry. Data is provided for eighteen specimens - six series of three specimens each, produced with different angles of the steel bar axis to the vertical (build angle b-a) and different angles between torch axis and steel bar axis (nozzle angle n-a). Table 1 gives an overview of the specimen series, while the sketches in the diagrams from Fig. 1 illustrate the different angles.
The odd-numbered specimens for which data is provided had as-printed irregular geometry.The corresponding even-numbered specimens had milled surfaces and were used in [1] for deriving an elastic-plastic material model for the wire arc additively manufactured carbon steel.
The dataset [2] described in this article consists of six files. The ZIP-file "01_scanned-geometry.zip" contains 36 files - one STL-file and one CSV-file for each specimen. The files are named with the specimen name (e.g., "TS01a.stl" and "TS01a.csv" for the specimen TS01a). The STL-files contain the raw irregular triangular meshes obtained through 3D scanning. The CSV-files contain three columns with the coordinates in millimetres of the points with which the triangular meshes are built.
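For example, the point coordinates of a specimen can be loaded as follows (assuming the CSV-files contain no header row; a skiprows argument would be needed otherwise):

```python
# Minimal sketch: load the scanned point coordinates for one specimen
# from its CSV-file (three columns x, y, z in millimetres).
import numpy as np

points = np.loadtxt("TS01a.csv", delimiter=",")   # shape (n_points, 3)
x, y, z = points.T
print(points.shape, z.min(), z.max())
```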
Table 1. Overview of the specimen series (specimen designations TS01a-TS11c with the corresponding build angles b-a and nozzle angles n-a; …). * The odd-numbered specimens mentioned in this table had as-printed irregular geometry. For these specimens, also geometrically and materially nonlinear analyses with imperfect geometry (GMNIAs) were conducted. The corresponding even-numbered specimens had milled surfaces (constant round cross-sections with regular surfaces). They were intentionally not included in the table and the data article, since no GMNIAs were conducted for them.

The CSV-file "02_force-vs-displacement_experiments.csv" contains the measured forces and displacements from the uniaxial tensile tests for all eighteen specimens. The forces are given in [kN] and the displacements in [mm]. The displacements are provided as elongations of a 38 mm long part of the wire arc additively manufactured steel bars. The file consists of three header rows specifying the specimen name, the type of data (displacement or force) and the units, respectively, followed by the rows with the data. The file consists of 36 columns, for each of the 18 specimens one with displacement values and one with force values.
The GH-file "03_GH-script_Scan-mesh_to_FE-mesh.gh" contains a Grasshopper script that was used to cut a 38 mm long part from the 3D scanned wire arc additively manufactured steel bars, to transform the irregular triangular meshes to more regular quad meshes, and to generate from these meshes closed surfaces that can be used for the finite element simulations.The script requires as input one of the STL-files from the ZIP-file "01_scanned-geometry.zip" and offers as output a closed surface that can then be exported for example from Rhino 3D as STP-file (see ZIP-file "04_processed-geometry.zip").
The ZIP-file "04_processed-geometry.zip" contains 54 files -one STL-file, one CSV-file and one STP-file for each specimen.The files are named with the specimen name followed by an underscore and the letters "FE" (e.g., "TS01a_FE.stl","TS01a_FE.csv"and "TS01a_FE.stp"for the specimen TS01a).The STL-files contain the processed more regular quad meshes of the 38 mm long steel bar parts obtained from the irregular triangular 3D scanned meshes with the Grasshopper script "03_GH-script_Scan-mesh_to_FE-mesh.gh".The CSV-files contain three columns with the coordinates in millimetres of the points with which the more regular quad meshes are built.The STP-files contain the closed surfaces that envelope the 38 mm long parts of the wire arc additively manufactured bars and which were imported to the computer aided engineering software Abaqus for performing the finite element simulations of the uniaxial tensile tests.
The ZIP-file "05_input-files_simulations.zip" contains 18 files -one INP-file for each specimen.The files are named with the specimen name (e.g., "TS01a.inp"for the specimen TS01a).The finite element simulations of the uniaxial tensile tests can be started from these input files.They include all the necessary information as geometry of the specimens, finite element mesh, boundary conditions, loading, requested data output intervals.
The CSV-file "06_force-vs-displacement_simulations.csv" contains the forces and displacements from the simulations of the uniaxial tensile tests for all eighteen specimens.The forces are given in [kN] and the displacements in [mm].The displacements are provided as elongations of the 38 mm long parts of the wire arc additively manufactured steel bars used in the simulations.The file consists of three header rows specifying the specimen name, the type of data (displacement or force) and the units, respectively, followed by the rows with the data.The file consists of 36 columns, for each of the 18 specimens one with displacement values and one with force values.
For better understanding the provided data, Fig. 1 illustrates the data in the CSV-file "02_force-vs-displacement_experiments.csv" and the CSV-file "06_force-vs-displacement_simulations.csv" as curves for the different test specimens listed in Table 1.
Experimental Design, Materials and Methods
The data described in this article includes geometry information for wire arc additively manufactured (WAAM) steel bars and force-displacement data pairs obtained for such bars under tensile loading from experiments and finite element simulations. The steel bars were produced with a configuration that included an ABB IRB 4600/40 robot, a Fronius TPS 500i Pulse power source, and a Fronius 60i Robacta Drive Cold Metal Transfer (CMT) torch featuring a 22° neck. A detailed description of the manufacturing process along with the used WAAM process parameters can be found in [1]. Steel bars with a target diameter of 8 mm and a length of 160 mm were printed. Different angles of the steel bar axis to the vertical (build angle b-a) and different angles between torch axis and steel bar axis (nozzle angle n-a) were used, as illustrated by the sketches in the diagrams from Fig. 1. The bars were printed with the material Union SG 2-H [3]. This is a solid wire designed for the gas metal arc welding (GMAW) of unalloyed and low-alloy steels.
The uniaxial tensile test specimens were designed based on specifications from EN ISO 6892-1 [4] and DIN 50125 [5], under consideration of particularities related to the production process by wire arc additive manufacturing. The additively manufactured steel bars were welded by robot into cylinders made of structural steel with a length of 500 mm, an outer diameter of 30 mm and a centrally drilled hole with a diameter of 10 mm. The geometry of the specimens is illustrated in Fig. 2a. The added structural steel cylinders (i) facilitated the 3D scanning process for obtaining the specimen geometry and (ii) allowed minimizing the risk of failure outside the desired measurement length for elongations.
The geometry of the wire arc additively manufactured steel bars exhibited strongly irregular surfaces due to the production process of welding droplet by droplet. The 3D scanner ATOS Core from GOM and the corresponding scan software, which operate based on principles of photogrammetry, were used for obtaining a 3D point cloud and a 3D triangular mesh of each specimen (see Fig. 2b). To conduct the measurements, a nearly imperceptible coating of matt white paint was applied to the WAAM steel bars to prevent reflections. Additionally, markers were affixed to the cylinders to facilitate image correlation among the roughly 30 pictures taken for each specimen from various perspectives. For the geometry data in the file "01_scanned-geometry.zip" from the shared dataset [2], the 3D scanned irregular triangular meshes were aligned in space with the software GOM Inspect [6]. The z-axis corresponded to the average of the axes of the two steel cylinders. The xy-plane was given by the top surface of the bottom cylinder, on which a notch was cut with a milling tool. The positive x-axis was defined in the direction of this notch.
For the uniaxial tensile tests, a Zwick universal testing machine for loads up to 200 kN was used. The tests were conducted under displacement control with a displacement rate of 0.01 mm/s, which corresponds to a strain rate of 0.00025 s⁻¹ ± 20% according to [4]. The test specimens were clamped in the machine on both sides over a length of 40 mm of the steel cylinders. The force was measured with a load cell, while for the displacement a digital image correlation (DIC) system from Correlated Solutions, Inc. with two cameras with a resolution of 12 MP was used. For the DIC measurements, an irregular speckle pattern of black ink dots on a matt white paint was applied on the specimens. The images were captured and evaluated with the VIC-3D 8 system from Correlated Solutions, Inc. For the displacements provided in the file "02_force-vs-displacement_experiments.csv" from the shared dataset [2], a virtual extensometer was applied on the steel bars within the VIC-3D 8 system over a length of 38 mm (see Fig. 2c), with which the elongation of this specimen part was calculated.
Geometrically and materially nonlinear analyses with imperfect geometry (GMNIA) were performed for the eighteen WAAM steel bars previously tested experimentally. For the simulations, only the 38 mm long part of the specimens was used, within which the failure occurred in the experiments and for which the elongation was calculated based on the DIC measurements. Since the very detailed irregular triangular mesh obtained by 3D scanning the specimens was not suitable for generating a finite element mesh, a Grasshopper script was used within the Rhinoceros 3D [7] environment to approximate it with a more regular quad mesh (see Fig. 2d). The Grasshopper script is provided in the file "03_GH-script_Scan-mesh_to_FE-mesh.gh". It first cuts out the middle 38 mm long part of the 3D scanned specimen geometry and then regenerates the irregular surface with a quad mesh defined by section curves and a refinement factor of 0.5 mm. This refinement factor controls the distance between the section curves as well as the interval for segmentation of these section curves. Finally, the script transforms the quad mesh into a closed surface, which can be baked from Grasshopper into Rhinoceros 3D and from there exported as an STP-file.
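The Grasshopper script itself is the authoritative processing tool; purely as an illustration of the slicing idea, the following sketch computes a crude per-layer characterization (centroid and mean radius of 0.5 mm z-slices) of a scanned point cloud loaded as shown earlier:

```python
# Minimal sketch (not the Grasshopper script): per-slice centroid and mean
# radius of the scanned point cloud, mimicking the 0.5 mm section spacing.
import numpy as np

def section_stats(points, dz=0.5):
    z = points[:, 2]
    stats = []
    for z0 in np.arange(z.min(), z.max(), dz):
        sl = points[(z >= z0) & (z < z0 + dz)]
        if len(sl) == 0:
            continue
        cx, cy = sl[:, 0].mean(), sl[:, 1].mean()
        r = np.hypot(sl[:, 0] - cx, sl[:, 1] - cy).mean()
        stats.append((z0 + dz / 2, cx, cy, r))
    return np.array(stats)   # columns: z, centroid x, centroid y, mean radius
```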
The GMNIA simulations were performed with the computer aided engineering software Abaqus [8]. The geometry of the specimens was imported from the STP-files previously generated with the Grasshopper script in the Rhinoceros 3D environment. An overview of the model showing the finite element mesh density, the boundary conditions and the loading is shown in Fig. 2e. A finite element mesh size of 0.5 mm was chosen based on a convergence study. The study aimed at using an as coarse as possible mesh size to allow a shorter computation time, but at the same time a mesh fine enough to reproduce the irregular surfaces accurately enough to correctly predict the failure points along the bar length. Mesh size values that are a divisor of the considered bar length (38 mm), print layer height (1 mm) and section-curves refinement factor for generating the geometry (0.5 mm) were considered. Twenty-node quadratic brick elements with reduced integration (C3D20R) were used. These showed a slightly better suitability, especially for reproducing the behaviour of the test specimens in the necking part after reaching the maximum force, compared to the 8-node linear brick elements with reduced integration (C3D8R), which were used for the simulations in [1]. Boundary conditions and loading were imposed on two reference points linked to the terminal surfaces of the 38 mm long WAAM bar components, employing rigid body constraints. For the bottom reference point (RP-1), all translational and rotational degrees of freedom were restrained, while for the top one (RP-2) all rotational degrees of freedom and the two translational degrees of freedom perpendicular to the axis of the bars were fixed. The loading was applied as a displacement of 15 mm on the top reference point (RP-2) in the positive direction of the bar axis (z-direction). Two loading steps were defined, one until 2.5 mm and the other one until 15 mm displacement. For each of these steps, 125 data pairs of force and displacement were requested as output. The material model used in the simulations was an elastic-plastic one, derived from uniaxial tests on milled WAAM steel bars, as described in [1]. For the elastic properties, a Young's modulus of 195,000 MPa and a Poisson's ratio of 0.3 were defined. Regarding the plastic properties, ten sets of data, consisting of true yield stress and true plastic strain pairs, were specified, as outlined in Table 2.

Table 2. True yield stress and true plastic strain pairs defining the plastic material behaviour (true plastic strain ε_pl,true [-]: 0.000, 0.028, 0.050, 0.100, 0.150, 0.200, 0.400, 0.600, 1.000, 1.500; corresponding true yield stress values: …).

Fig. 3. Uniaxial tensile test specimens after failure and corresponding qualitative stress contour plots from the GMNIA simulations.

The model definitions described here can be
found for each of the eighteen simulated specimens in the Abaqus input files provided in the file "05_input-files_simulations.zip".
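Since the plastic material data consist of true yield stress and true plastic strain pairs derived from uniaxial tests, the conversion from engineering to true values can be sketched as follows. This is a minimal illustration, not the authors' processing script, and the engineering stress-strain values below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical engineering stress-strain pairs from a uniaxial tensile test
eps_eng = np.array([0.002, 0.03, 0.05, 0.10])     # engineering strain [-]
sig_eng = np.array([350.0, 420.0, 450.0, 480.0])  # engineering stress [MPa]

E = 195_000.0  # Young's modulus [MPa], as defined in the simulations

# Standard conversion, valid up to the onset of necking
sig_true = sig_eng * (1.0 + eps_eng)     # true stress [MPa]
eps_true = np.log(1.0 + eps_eng)         # true (logarithmic) strain [-]
eps_pl_true = eps_true - sig_true / E    # true plastic strain [-]

for s, e in zip(sig_true, eps_pl_true):
    print(f"true stress {s:8.1f} MPa, true plastic strain {e:.4f}")
```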
For a better understanding of the performed experimental tests and finite element simulations as well as of the provided data, Fig. 3 shows pictures of the 18 specimens after failure in the uniaxial tensile tests and corresponding qualitative stress contour plots from the performed geometrically and materially nonlinear analyses with imperfect geometry (GMNIA). The stress contour plots from the simulations were printed for the same force at which the failure occurred in the uniaxial tensile experiments after necking, not for the same displacement (see the force versus displacement curves in Fig. 1; the shown contour plots are for the force values at which the test curves end or show an almost vertical drop).
Limitations
There are no significant limitations for the data described in this article regarding data collection and curation. The only aspect worth mentioning is that the orientation of the wire arc additively manufactured steel bars was only tracked starting with the 3D scanning of the geometry. This means that the steel bars were oriented in the same direction for all methods applied to collect the data described in this article (3D scanning of the imperfect geometry, uniaxial tensile tests, and finite element simulations). However, the correlation between the bottom and top ends of the steel bars during manufacturing and their orientation during the subsequent steps was not tracked. This has no relevance for the data described in the current article, but possibly for the geometry data evaluated in [1].
Fig. 1. Force versus displacement data from the uniaxial tensile tests and the corresponding finite element simulations, displayed as curves for the different test series listed in Table 1.
Fig. 2. Methods applied for collecting the data described in the article: (a) uniaxial tensile test specimen geometry, (b) geometry obtained from 3D scanning, (c) setup and method for obtaining force-displacement data from uniaxial tensile tests, (d) geometry used for the finite element simulations, and (e) model for obtaining force-displacement data from finite element simulations.
predicting the structural behaviour of different wire arc additively manufactured parts. © 2024 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

The dataset comprises: 1) Geometry of the wire arc additively manufactured test specimens (mesh in STL file format, points in CSV file format); 2) Table with force and displacement values from the experiments (CSV file format); 3) Grasshopper script for generating the simplified specimen geometry for the finite element simulations (GH file format); 4) Simplified geometry of the test specimens for finite element simulations (mesh in STL file format, points in CSV file format, closed surface in STP file format); 5) Input files for the finite element simulations (text files in INP file format); 6) Table with force and displacement values from the finite element simulations (CSV file format).
Data collection: The geometry of the wire arc additively manufactured test specimens was obtained by 3D scanning with a GOM ATOS Core instrument. The obtained irregular meshes were processed with a Grasshopper script to generate simplified closed surfaces that can be used for finite element simulations. The wire arc additively manufactured specimens were tested in uniaxial tension on a Zwick universal testing machine under displacement control. The displacements were obtained from digital image correlation measurements. The finite element simulations were performed geometrically and materially nonlinear with the computer-aided engineering software Abaqus 2021.

Name: Dataset for point-by-point wire arc additively manufactured carbon steel bars loaded in uniaxial tension - experiments and simulations
Data identification number: https://doi.org/10.3929/ethz-b-000639004
Direct URL to data: https://www.research-collection.ethz.ch/handle/20.500.11850/639004
Related research article: V.-A. Silvestru, I. Ariza, J. Vienne, L. Michel, A.M. Aguilar Sanchez, U. Angst, R. Rust, F. Gramazio, M. Kohler, A. Taras, 2021. Performance under tensile loading of point-by-point wire and arc additively manufactured steel bars for structural components, Mater. Des. 205, 109740. https://doi.org/10.1016/j.matdes.2021.109740 . | 4,554.2 | 2024-01-01T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Multichannel Repeater for Coherent Radar Networks Enabling High-Resolution Radar Imaging
Coherent radar networks allow for spanning very large apertures that include all sensors with their subapertures in the network, resulting in very good angular resolution. However, as a radar network typically has a sparse array, its performance depends on the flexibility of the antenna and network node placement. Thus, in this work, a new type of radar network is presented, allowing for a highly flexible network and array design and providing an excellent performance in direction-of-arrival (DoA) estimation. This is achieved by using multichannel repeaters (MCRs) in combination with a single multiple-input multiple-output (MIMO) radar. Each MCR receives the radar's signal via a line-of-sight path and then retransmits it via multiple transmit channels, each with a dedicated mixer used for multiplexing. To show the feasibility and the performance of this concept, a 77-GHz radar network consisting of a digital 4 × 4 MIMO radar and two four-channel MCRs is presented. It is designed following network array design recommendations mathematically derived in this work and combined with adapted signal processing and network-based DoA estimation. The high performance of the system is demonstrated not only via systematic measurements in an anechoic chamber, but also in various automotive scenarios including multiple road users.
I. INTRODUCTION
DRIVEN by the advances in autonomous driving, recent research shows a rapid development in the imaging capabilities of radars. This not only includes the measurement of range and radial velocity, but also an improved estimation of the target angle [1], [2]. This is accomplished by expanding the virtual aperture of the radar, either by increasing the number of transmit (Tx) and receive (Rx) channels or by designing a network of cooperative radar sensors.
Recent publications discuss multiple-input multiple-output (MIMO) radars with more than 100 [3], [4], [5] or even up to 1700 [6] virtual channels, typically achieved by using a radar frontend with multiple transceiver chips. While these approaches show a high performance in terms of radar imaging, they also lead to large, inflexible hardware designs, as all channels must be placed on a single printed circuit board (PCB). Thus, the capabilities of MIMO radars are often limited not for technical reasons, but by restrictions in placement and available space, which is especially true for automotive applications [5].
With the radar network approach, this drawback can be overcome. As multiple cooperative radars are combined, the number of channels per radar and thus their size can be reduced. The benefits of radar networks are most significant when coherency is established [2], [7], i.e., when phase coherency is achieved between signals from different radar nodes. Then, a bistatic evaluation is possible without the performance degradation caused by phase noise in incoherent networks [8], [9]. Furthermore, the signals of several nodes can be combined for phase-coherent direction-of-arrival (DoA) estimation, and thus the additional radars in the network lead to the same benefit as additional channels on a MIMO radar.
While first efforts in coherent radar networks mainly included widely distributed multistatic radar systems [7], [10], [11] and coherent multistatic synthetic aperture radars (SARs) [12], [13], [14], recently these systems have become a competitor of massive MIMO systems, as they allow building up extended apertures for DoA estimation. However, the approaches differ in system and array design as well as in their way of achieving coherency. An overview is given in Table I.
In [15], a system is proposed where a reference clock is shared among multiple frequency-modulated continuous wave (FMCW) radars. Based on this low-frequency coupling, a coherent DoA estimation including the use of the bistatic signals is possible. However, a distribution network for the reference clock is necessary, and the additional phase noise generated by each radar's phase-locked loop (PLL) and voltage-controlled oscillator (VCO) cannot be eliminated. The work in [16] shows that coherent DoA estimation is possible in an uncoupled radar network, but it still suffers from a phase-noise-induced performance loss.
In [17] and [18], radar-repeater networks are introduced for FMCW as well as digital orthogonal frequency-division multiplexing (OFDM) radars. These networks consist of a single radar and several repeaters. The repeaters receive the radar's Tx signal reflected at the target. Then, they modulate and re-transmit the signal. After a second reflection at the target, the bistatic signal is received and evaluated by the radar. This type of network will be referred to as a symmetric-path radar-repeater network from here on. The repeaters do not down-convert the signal; thus, all mono- and bistatic signals are fully coherent and can be directly used for DoA estimation. Since no connection between the network nodes is necessary, these networks are highly flexible. However, due to the double reflection at the target, they suffer from high path losses for the bistatic signals.
In [19] and [20], the coherency in a network of FMCW radars is achieved by sharing a trigger and a clock, thus leading to a low-frequency coupled setup. Coherent signal processing is achieved by estimating and correcting the phase offset algorithmically. Two different approaches are presented for the exploitation of the network. In [19], all mono- and bistatic virtual apertures are combined, leading to a virtual network array (VNA) consisting of four uniform linear array (ULA) subapertures. While this results in a very large network aperture, it suffers from a high sidelobe level (SLL) due to the combination of several ULAs with gaps in between. In contrast, in [20], only the bistatic arrays are combined into a single ULA. This way, the SLL is massively reduced, but at the expense of a highly reduced aperture size compared to the use of the full network. Thus, the potential of the network in terms of angular resolution is not fully exploited.
When analyzing the relationship between the arrays and positions of the network nodes, the resulting VNA, and the network-based DoA estimation, it becomes apparent that a radar network allowing for a flexible VNA design is crucial for a proper DoA estimation. This is not possible with the networks presented in [15], [16], and in [19], [20], as the bistatic subarrays will always lie in the middle of the two monostatic subarrays.
Thus, in this work, a new architecture for coherent radar networks is proposed. It uses a combination of a radar with a new type of repeater node. Instead of feeding the repeater via a reflection at the targets as proposed in [17] and [18], it is fed via a direct line-of-sight (LoS) path between radar and repeater. This way, the bistatic path losses are drastically reduced, while the advantages of flexible low-cost repeater nodes remain. This new system architecture is combined with a new repeater design, the so-called multichannel repeater (MCR). Each MCR consists of one Rx antenna, after which the signal is split up into multiple independently modulated Tx channels. This way, multiple Tx antennas per repeater are used, leading to an improved VNA. Using the MCR concept, a radar network is designed. It consists of two four-channel MCRs and a 77-GHz 4 × 4 MIMO OFDM radar. The virtual aperture of the network has a size of 239 λ/2, which, according to the Rayleigh criterion [1], leads to a theoretical target separability of 0.584° in azimuth, while an SLL of −6.8 dB before compressed sensing (CS) is achieved. The network is combined with adapted signal processing. Its high DoA estimation and imaging performance is not only systematically evaluated in an anechoic chamber, but also successfully demonstrated in practical scenarios.
The article is structured as follows. At first, the concept of the VNA and considerations regarding the design of a radar network's array are introduced in Section II. This is followed by a detailed description of the proposed radar network concept and signal processing in Section III. The network-based DoA estimation is explained in Section IV. Then, Section V describes the design of the 77-GHz system, including the design of the MCR and the network setup. Finally, Section VI provides a detailed evaluation of the proposed system based on measurements.
II. NETWORK ARRAY DESIGN
The virtual array of a single MIMO radar is calculated based on the spatial convolution of the Tx antenna positions and the Rx antenna positions of the radar [21], [22]. To extend this model to the VNA, the various nodes, i.e., the sensors in the network, and their positions have to be taken into account.
A. Virtual Network Array
A VNA is the virtual array of the entire network and thus determines the sampling of the phase values used for the network-based DoA estimation. It is defined on a spatial grid, which typically is a λ/2-grid. The positions of the $N_{\mathrm{nw}}$ network nodes are described by the vector $\mathbf{d}_{\mathrm{nw}} \in \mathbb{R}^{N_{\mathrm{nw}}}$. Each kth element $d_{\mathrm{nw}}[k]$ equals the position in the spatial grid of the first virtual antenna of the kth node. Furthermore, each node has its own subarray, describing the positions of the node's $N_{\mathrm{sub}}$ virtual antennas in the spatial grid as $\mathbf{d}_{\mathrm{sub},k} \in \mathbb{R}^{N_{\mathrm{sub}}}$, $k = 1, 2, \ldots, N_{\mathrm{nw}}$. The VNA is then calculated by

$$\mathbf{d}_{\mathrm{VNA}} = \bigcup_{k=1}^{N_{\mathrm{nw}}} \left( d_{\mathrm{nw}}[k] + \mathbf{d}_{\mathrm{sub},k} \right), \tag{1}$$

which equals a spatial convolution of the position of each node with the virtual antenna positions in their respective subarray.
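As a concrete illustration of this construction, the following sketch builds a VNA by offsetting each node's subarray by its node position. The λ/2 index grid is assumed, and the node positions and subarrays are hypothetical examples in the spirit of Fig. 1, not the presented system.

```python
import numpy as np

def build_vna(d_nw, d_sub):
    """Virtual network array: offset each node's subarray (given in
    lambda/2 grid indices) by the node position and collect the result."""
    positions = []
    for node_pos, sub in zip(d_nw, d_sub):
        positions.extend(node_pos + np.asarray(sub))
    return np.unique(positions)

# Hypothetical example: three nodes, each with a 4-element ULA subarray
d_nw = [0, 10, 20]              # first virtual antenna of each node
d_sub = [np.arange(4)] * 3      # identical 4-element subarrays

print(build_vna(d_nw, d_sub))
# -> [ 0  1  2  3 10 11 12 13 20 21 22 23]
```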
The notation is shown in Fig. 1 using the example of a network with three subarrays, each consisting of four virtual antennas.
B. Analysis
As the VNA typically is a sparse array with several subarray blocks and gaps in between, it very likely exhibits an ambiguity function [23] with a high SLL. This problem exists in previous works like [17], [18], and [19]. In this section, the relationship between the VNA design and the SLL is analyzed and design recommendations are derived.
In the following, a simple network consisting of $N_{\mathrm{nw}}$ nodes with identical subarrays of size $l_{\mathrm{sub}}$ and antennas in a λ/2-grid is considered, leading to a normalized subarray size of

$$L_{\mathrm{sub}} = \frac{2\, l_{\mathrm{sub}}}{\lambda}. \tag{2}$$

For ULA subarrays, $L_{\mathrm{sub}} = N_{\mathrm{sub}} - 1$ applies. For sparse subarrays, additionally the empty antenna positions within the subarray have to be taken into account.
A DoA estimation based on the Fourier transform [24] is assumed. A target at angle θ introduces a phase progression along the virtual antenna elements in the VNA with an angular frequency of

$$\Omega = \pi \sin\theta, \tag{3}$$

equaling the phase offset between adjacent virtual antennas in the λ/2-grid. This phase progression is then sampled by a VNA consisting of $N_{\mathrm{nw}}$ subarrays. Each subarray is described by a subarray function w(x) equaling the antenna positions in the subarray as a window function centered at the position $d_{\mathrm{nw}}[k] + l_{\mathrm{sub}}/2$. The subarray function w(x) may account for the gaps in sparse subarrays as well as for an optional subarray tapering. In the case of a non-tapered ULA subarray, w(x) equals a rectangular function rect(x). The Fourier transform of the angle-dependent phase progression sampled by the VNA, the so-called steering vector γ(x), thus equals the angular spectrum $\Gamma(\Omega)$ as given in (4) and (5).
Three factors determine the Fourier transform and thus the target separability and SLL: the subarray function w(x), the size $l_{\mathrm{sub}}$ of the subarrays, and the positions $d_{\mathrm{nw}}[k]$ of the subarrays. Their influence on the SLL can be shown by the example of a simple network of two non-tapered ULA subapertures [w(x) = rect(x)], as used, e.g., in [17] and [18]. Assuming $d_{\mathrm{nw}}[1] = 0$, $N_{\mathrm{nw}} = 2$, and $W(\Omega) = \mathrm{si}(\Omega\, l_{\mathrm{sub}}/2)$, (5) simplifies to

$$\Gamma(\Omega) = \mathrm{si}\!\left(\Omega\, \frac{l_{\mathrm{sub}}}{2}\right)\left(1 + e^{-\mathrm{j}\Omega\, d_{\mathrm{nw}}[2]}\right). \tag{6}$$
The DoA estimation is then based on the absolute value of the Fourier transform. With

$$\left|1 + e^{-\mathrm{j}\Omega\, d_{\mathrm{nw}}[2]}\right| = \sqrt{2\cos\!\left(\Omega\, d_{\mathrm{nw}}[2]\right) + 2} \tag{7}$$

the absolute value of the angular spectrum becomes

$$\left|\Gamma(\Omega)\right| = \left|\mathrm{si}\!\left(\Omega\, \frac{l_{\mathrm{sub}}}{2}\right)\right| \sqrt{2\cos\!\left(\Omega\, d_{\mathrm{nw}}[2]\right) + 2}. \tag{8}$$

The sinc function $\mathrm{si}(\Omega\, l_{\mathrm{sub}}/2)$ equals the angular spectrum when only one subarray and no network is used. Due to the multiplication with $(2\cos(\Omega\, d_{\mathrm{nw}}[2]) + 2)^{1/2}$, the main lobe becomes narrower with increasing $d_{\mathrm{nw}}[2]$, which increases the target separability. However, the SLL is also significantly increased. Fig. 2(a) shows the example of a VNA with a size of 239 λ/2 consisting of two ULA subarrays with $N_{\mathrm{sub}} = 16$ virtual antennas each. A DoA estimation is simulated for a target at 0°. The SLL equals −0.1 dB, which makes this exemplary VNA barely usable.
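The qualitative effect of the subarray placement on the sidelobe level can be reproduced with a short simulation. The sketch below is a simplified illustration (the subarray placements are assumptions, not the optimized positions of the presented system); it evaluates the beam pattern of a sparse VNA via a zero-padded FFT and reads off the SLL.

```python
import numpy as np

def beam_pattern(antenna_idx, n_grid, zp=16):
    """Beam pattern (dB) of a sparse VNA for a target at 0 deg."""
    x = np.zeros(n_grid, dtype=complex)
    x[antenna_idx] = 1.0                 # uniform steering vector samples
    spec = np.abs(np.fft.fftshift(np.fft.fft(x, zp * n_grid)))
    return 20 * np.log10(spec / spec.max())

def sll_db(pattern_db):
    """Highest sidelobe: walk outward from the peak to exclude the main lobe."""
    c = int(np.argmax(pattern_db))
    r = c
    while r + 1 < len(pattern_db) and pattern_db[r + 1] < pattern_db[r]:
        r += 1
    l = c
    while l - 1 >= 0 and pattern_db[l - 1] < pattern_db[l]:
        l -= 1
    side = np.concatenate([pattern_db[:l], pattern_db[r + 1:]])
    return side.max()

n_grid = 240                                  # ~239 lambda/2 aperture
two_ulas = np.r_[0:16, 224:240]               # two 16-element ULAs
three_ulas = np.r_[0:16, 112:128, 224:240]    # third subarray in between

print(f"2 ULA subarrays: SLL = {sll_db(beam_pattern(two_ulas, n_grid)):.1f} dB")
print(f"3 ULA subarrays: SLL = {sll_db(beam_pattern(three_ulas, n_grid)):.1f} dB")
```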
C. Interpretation
The previous analysis has clearly shown that an array consisting of two ULA subapertures does not lead to a satisfactory imaging performance. Therefore, recommendations for the design of radar networks are derived in the following.

1) By using at least three subarrays, the multiplication with the term $(2\cos(\Omega\, d_{\mathrm{nw}}[2]) + 2)^{1/2}$ can be avoided. Instead, (5) includes a sum of several exponential terms, each containing the respective node position. When proper distances between the subarrays are used, the exponential terms may sum up destructively at the sidelobes. This becomes obvious when comparing the examples in Fig. 2(a) and (b), where the SLL is improved from −0.1 to −2 dB by adding a third subarray at a position between the other two. The position of the middle node is crucial; thus, the one resulting in the lowest SLL is chosen.

2) With the use of a sparse subaperture, the relation between the subaperture size $l_{\mathrm{sub}}$ and the size of the gaps in between can be improved. This can be clearly seen when comparing the examples in Fig. 2(b) and (c). Again, three subarrays with 16 virtual antennas each are used, but they are sparse with a size of $l_{\mathrm{sub}} = 65\,\lambda/2$. This way, the SLL is further reduced to −6.8 dB.

3) When choosing the distances between the subarrays, the trade-off between the size of the VNA and the SLL has to be considered.

4) Subarray tapering may not be helpful when using small subarrays, as the loss of power at the outer virtual antenna elements can decrease the performance. Using, for example, a Hann window for each subarray in Fig. 2(b) increases the SLL to −0.7 dB.
III. NETWORK CONCEPT
To provide a high-resolution network-based DoA estimation with a low SLL, a network that allows creating a VNA according to the recommendations proposed in Section II is needed. Thus, a concept for a coherent radar network allowing for a highly flexible array design while providing a good link budget is presented.
A. System Concept
The proposed radar network uses a single radar in combination with a new type of repeater, the so-called MCR. In contrast to the repeater presented in [17] and [18], the MCR receives its input signal from the radar via a direct LoS, as illustrated in Fig. 3, forming a so-called triangular-path configuration. The receive signal of the MCR is then distributed to the MCR's different Tx channels. Each channel has its own mixer, which is fed with a modulation signal in the kilohertz region. This way, each channel can be modulated with a different frequency, which allows for a multiplexing of the signals. Due to the low modulation frequency, no significant phase noise is added.
The bistatic signals from the MCR are then transmitted into the channel and, after a reflection at the targets, are received by all radar Rx channels. The virtual subarray of an MCR can therefore be calculated by a convolution of the MCR's transmit channel positions and the radar Rx array. In principle, the system concept allows for an arbitrary number of MCRs, only restricted by the condition that a good LoS between the radar and each MCR can be established. The number of channels per repeater is also not restricted by the concept. However, the multiplexing strategy must be considered.
B. Multiplexing
By using an OFDM radar, a straightforward multiplexing based on subcarrier interleaving is made possible [25]. The OFDM signal consists of N orthogonal subcarriers with a frequency spacing of $\Delta f$ and M OFDM symbols [26], [27]. To ensure the orthogonality of the subcarriers, each OFDM symbol has the length $T = 1/\Delta f$. In each OFDM symbol, each subcarrier is modulated by a complex modulation symbol $d_{\mathrm{Tx}}$. These modulation symbols can be combined to a matrix $\mathbf{D}_{\mathrm{Tx}} \in \mathbb{C}^{N \times M}$. For further details regarding OFDM radar signal processing, the reader is referred to [28].
To enable multiplexing at the radar transmitter, an individual modulation matrix $\mathbf{D}^k_{\mathrm{Tx}}$ is used for each radar Tx channel k. In the case of frequency-division multiplexing (FDM), as in this work, they differ in the subcarriers used for the respective transmitter. The repeater Tx channels are then included in the modulation by leaving empty subcarriers in the radar's Tx signals, which the repeaters use for their transmit signals. Analogously to [18], the modulation frequency of each repeater channel k is thus an integer multiple of the subcarrier spacing,

$$f_{\mathrm{mod},k} = h_k\, \Delta f. \tag{9}$$

To ensure orthogonality, $h_k$ must be different for each channel. Furthermore, in the radar Tx signal, there must remain empty subcarriers leaving room for the MCR signals. When using double-sideband (DSB) mixers in the MCR, both the upper and lower sidebands of each repeater channel must be considered. Furthermore, for the recovery of the phase of the modulation signal, a repeater Tx channel must transmit on two different subcarriers, which is explained in detail in Section IV-A. If multiple modulation signals are derived from the same signal source and thus are aligned in phase, which, e.g., can be the case for all channels on the same repeater, this is only necessary for one channel per signal source. Overall, while in the channel all subcarriers are fully occupied, an OFDM signal sparse in the subcarrier direction is transmitted at each Tx antenna of the radar or an MCR, which ensures orthogonality between the different signals from different Tx antennas. In Fig. 4, the subcarrier assignment is exemplarily shown for a network consisting of a radar with four transmitters and two four-channel repeaters with single-sideband (SSB) mixers.
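The interleaving can be illustrated with a small sketch that assigns subcarrier indices to the radar and repeater channels. The pattern below is a simplified stand-in for Fig. 4 (SSB mixers, channel counts, and ordering are illustrative assumptions, not the exact assignment of the presented system).

```python
N_TX_RADAR = 4          # radar Tx channels
N_MCR = 2               # repeaters
N_TX_MCR = 4            # Tx channels per repeater (SSB mixers assumed)

# one period of the interleaving pattern: each signal gets one subcarrier
owners = ([f"Tx{k+1}" for k in range(N_TX_RADAR)]
          + [f"MCR{m+1}-{c+1}" for m in range(N_MCR) for c in range(N_TX_MCR)])
n_tx = len(owners)      # period length: every n_tx-th subcarrier per signal

N = 48                  # total subcarriers in this toy example
assignment = [owners[i % n_tx] for i in range(N)]

# subcarrier indices used by, e.g., the first repeater channel
idx = [i for i, o in enumerate(assignment) if o == "MCR1-1"]
print(f"period n_tx = {n_tx}, MCR1-1 uses subcarriers {idx}")
```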
In general, considering a network of a radar with $N_{\mathrm{Tx,radar}}$ transmitters and $N_{\mathrm{MCR}}$ repeaters with $N_{\mathrm{Tx,MCR}}$ channels each and one modulation signal source per repeater, only every $n_{\mathrm{Tx}}$th subcarrier is used for the same Tx channel. A drawback of this multiplexing strategy is that the unambiguous range is reduced by the factor $n_{\mathrm{Tx}}$. However, in [29], other multiplexing techniques with a lower reduction of the unambiguous range are presented for symmetric-path radar-repeater networks, which could also be adapted for the MCR network proposed here.
C. Signal Processing
Analogously to standard OFDM radar signal processing [28], at first the sampled Rx signal is reshaped to a matrix containing one OFDM symbol per column, and the cyclic prefix (CP) is removed. The CP is the guard interval included in the transmit signal before every OFDM symbol [28]. By performing a column-wise fast Fourier transform (FFT), the matrix is transformed into the symbol-domain receive matrix $\mathbf{D}_{\mathrm{Rx}}$. In the next step, the different mono- and bistatic channels are separated and the transmit symbols are removed. The receive signal initially contains all signals from all Tx channels in the network. To evaluate a specific one, only the correct subcarriers assigned to this signal must be evaluated. For the monostatic signals, this is done straightforwardly by an element-wise multiplication (⊙) of the receive matrix $\mathbf{D}_{\mathrm{Rx}}$ with the complex conjugate (denoted as $(\cdot)^*$) of the transmit matrix $\mathbf{D}^k_{\mathrm{Tx}}$:

$$\mathbf{D}^k_{\mathrm{Div}} = \mathbf{D}_{\mathrm{Rx}} \odot \left(\mathbf{D}^k_{\mathrm{Tx}}\right)^*. \tag{11}$$

As $\mathbf{D}^k_{\mathrm{Tx}}$ includes zeros on all subcarriers not used by the kth transmit channel, only the wanted subcarriers are evaluated. Afterwards, the complex radar image I for the monostatic channels is calculated by performing a row-wise FFT and a column-wise inverse fast Fourier transform (IFFT) [28].
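A minimal numerical sketch of this monostatic evaluation chain is given below (toy parameters, no CP handling or hardware effects; only the spectral division of (11) and a range profile via IFFT over the subcarriers are illustrated).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 32                         # subcarriers, OFDM symbols

# transmit matrix of channel k: QPSK symbols on every 4th subcarrier only
D_tx = np.zeros((N, M), dtype=complex)
used = np.arange(0, N, 4)
D_tx[used] = np.exp(1j * np.pi / 2 * rng.integers(0, 4, (len(used), M)))

# toy channel: a single target -> linear phase over subcarriers (range)
n = np.arange(N)[:, None]
D_rx = D_tx * np.exp(-2j * np.pi * n * 10 / N)   # target in range bin 10

# element-wise multiplication with the conjugate, cf. (11)
D_div = D_rx * np.conj(D_tx)

# range profile: IFFT over the subcarrier dimension (zero Doppler assumed)
profile = np.abs(np.fft.ifft(D_div.sum(axis=1), N))
# note: the sparse occupancy (every 4th subcarrier) makes the profile
# periodic with period N/4, cf. the unambiguous-range reduction above
print("estimated range bin:", np.argmax(profile))   # -> 10
```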
In the case of the bistatic channels, the frequency shifts introduced by the repeaters have to be considered. This is done by first shifting the transmit matrix of the LoS channel $\mathbf{D}^{\mathrm{LoS}}_{\mathrm{Tx}}$ by the modulation factor $h = f_{\mathrm{mod}}/\Delta f$ in the subcarrier direction. This shifted matrix is defined as $\mathbf{D}^{\mathrm{LoS}}_{\mathrm{Tx}}(h)$. Then, analogously to (11), the division matrix $\mathbf{D}^{\mathrm{bi}}_{\mathrm{Div}}(h)$ of the respective bistatic signal is calculated by

$$\mathbf{D}^{\mathrm{bi}}_{\mathrm{Div}}(h) = \mathbf{D}_{\mathrm{Rx}} \odot \left(\mathbf{D}^{\mathrm{LoS}}_{\mathrm{Tx}}(h)\right)^*. \tag{12}$$
In the example in Fig. 4 as well as in the network presented in Section V, Tx4 is used for the LoS. In this case, it holds that $\mathbf{D}^{\mathrm{LoS}}_{\mathrm{Tx}}(0) = \mathbf{D}^4_{\mathrm{Tx}}$. Afterwards, the phase shift induced by the modulation signal during each cyclic prefix is corrected by means of a correction vector [18]

$$\boldsymbol{\kappa}(h) = \left( e^{-\mathrm{j}2\pi h T_{\mathrm{CP}}/T},\ e^{-\mathrm{j}4\pi h T_{\mathrm{CP}}/T},\ \ldots,\ e^{-\mathrm{j}M 2\pi h T_{\mathrm{CP}}/T} \right). \tag{13}$$

For the correction, every mth column of $\mathbf{D}^{\mathrm{bi}}_{\mathrm{Div}}(h)$ is multiplied by the mth element of $\boldsymbol{\kappa}(h)$. For further details, the reader is referred to [18, Sec. III-B]. Then, analogously to the monostatic evaluation, the bistatic radar image I(h) can be calculated by performing a row-wise FFT and a column-wise IFFT.
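Continuing the sketch above, the bistatic evaluation of (12) and (13) can be illustrated as follows. This is again a toy model; the modulation factor h, the relative CP length, and the matrix contents are assumed for illustration only.

```python
import numpy as np

N, M = 64, 32
h = 5                                   # modulation factor f_mod / delta_f
T_cp_over_T = 0.25                      # CP length relative to symbol length

# hypothetical LoS transmit matrix (here: random QPSK on all subcarriers)
rng = np.random.default_rng(1)
D_tx_los = np.exp(1j * np.pi / 2 * rng.integers(0, 4, (N, M)))

# shift by h in the subcarrier direction, cf. D_Tx^LoS(h)
D_tx_los_h = np.roll(D_tx_los, h, axis=0)

# assume D_rx carries the bistatic signal (placeholder: ideal channel)
D_rx = D_tx_los_h.copy()

# division matrix, cf. (12)
D_div_bi = D_rx * np.conj(D_tx_los_h)

# CP-induced phase correction, cf. (13): one factor per OFDM symbol m
m = np.arange(1, M + 1)
kappa = np.exp(-2j * np.pi * h * T_cp_over_T * m)
D_div_bi = D_div_bi * kappa[None, :]
```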
D. Comparison to Symmetric Path Radar-Repeater Networks
In the following, a short overview of the differences between the triangular-path repeater network using MCRs proposed in this work and the symmetric-path radar-repeater network from [17] and [18] is given.
1) Path Loss: For the MCR, the path loss between the MCR Tx and the radar Rx approximately equals the path loss of a monostatic radar and can be estimated using the radar equation. Additionally, the loss caused by the LoS path with length $d_{\mathrm{LoS}}$ has to be considered, which leads to an MCR Rx power of

$$P_{\mathrm{Rx,MCR}} = P_{\mathrm{Tx,LoS}}\, G_{\mathrm{ant,LoS}}^2\, \frac{\lambda^2}{(4\pi d_{\mathrm{LoS}})^2}, \tag{14}$$

where $P_{\mathrm{Tx,LoS}}$ is the LoS Tx power of the radar, $G_{\mathrm{ant,LoS}}$ the antenna gain of the LoS antennas, and λ the wavelength of the radio frequency (RF) carrier. With a sufficient repeater gain, this loss can be compensated, achieving the same detection range as with monostatic radars. In contrast, using a symmetric-path radar-repeater network, both the path from the radar to the repeater and the path back contain a reflection at the targets. This leads to a $1/R^8$ dependency of the path loss for point targets. The networks from [17] and [18] are therefore only suitable for short-range applications.
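For a rough feel of the LoS budget, the free-space relation above can be evaluated numerically. The sketch below implements the reconstructed form of (14) in dB; the Tx power, antenna gain, and LoS distance are assumed, illustrative values, not parameters of the presented system.

```python
import numpy as np

c0 = 3e8
f_c = 77e9                      # RF carrier frequency
lam = c0 / f_c                  # wavelength, ~3.9 mm

def mcr_rx_power_dbm(p_tx_dbm, g_ant_dbi, d_los_m):
    """Free-space LoS receive power at the MCR, cf. the Friis relation."""
    fspl_db = 20 * np.log10(4 * np.pi * d_los_m / lam)  # free-space path loss
    return p_tx_dbm + 2 * g_ant_dbi - fspl_db

# illustrative assumptions: 10 dBm Tx power, 20 dBi horn antennas, 1 m LoS
print(f"{mcr_rx_power_dbm(10, 20, 1.0):.1f} dBm")   # approx. -20 dBm
```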
2) Array Design: In this work, each MCR provides an additional array of Tx antennas, creating a virtual subarray which can be calculated by the spatial convolution of the MCR's Tx array and the radar's Rx array. If required, the Tx array of each MCR can be individually designed, leading to a high flexibility. In contrast, for the symmetric-path radar-repeater network from [17] and [18], each single-channel repeater leads to a repetition of the radar's virtual array in the VNA.
3) Hardware Complexity: In [17] and [18], fewer hardware components are needed per virtual antenna added by a repeater in comparison to the MCR proposed in this work. Hence, a symmetric-path radar-repeater network leads to a potentially lower hardware complexity and power consumption. However, the hardware complexity of an MCR network is still much lower than that of a network of multiple radar sensors.
4) Risk of Ghost Targets: In the symmetric-path radar-repeater network, the signal is also reflected at different targets on the way from the radar to the repeater and on the way back. This leads to ghost targets [30]. In a triangular-path repeater network such as the one proposed in this work, these ghost targets do not occur.
IV. NETWORK-BASED DOA ESTIMATION
When using a coherent radar network, the DoA estimation is based on a sparse VNA consisting of several subarrays. Thus, the DoA estimation as known for MIMO radars [22], [31] has to be adapted. In this section, the different steps of the network-based DoA estimation are explained in detail. While the first step, the phase reconstruction, has to be performed exclusively in radar-repeater networks, the other steps are independent of the network architecture.
A. Phase Reconstruction
As radar and MCR are not synchronized, the phase of the modulation signal at t = 0 s, which is defined as the start of the radar Tx signal, is not known. Thus, a reconstruction and compensation are necessary to retrieve full phase coherency. The modulation signal of the kth MCR channel can be described by

$$y_{\mathrm{mod},k}(t) = A_k\, e^{\mathrm{j}2\pi f_{\mathrm{mod},k}(t - \Delta t)}, \tag{15}$$

where $A_k$ is the signal amplitude and $\Delta t < 0$ is the point in time closest to t = 0 s where $\arg(y_{\mathrm{mod},k}(\Delta t)) = 0$ holds [18]. The unknown modulation phase then is defined as

$$\varphi_{\mathrm{mod},k} = -2\pi f_{\mathrm{mod},k}\, \Delta t. \tag{16}$$

Assuming that the different modulation frequencies of one MCR are synchronized, the modulation phases of its channels have a fixed relation $H_k = f_{\mathrm{mod},k}/f_{\mathrm{mod},0}$. Assuming channel k = 0 has the lowest modulation frequency, each modulation phase $\varphi_{\mathrm{mod},k}$ can be calculated based on $\varphi_{\mathrm{mod},0}$ as

$$\varphi_{\mathrm{mod},k} = H_k\, \varphi_{\mathrm{mod},0}. \tag{17}$$

This simplifies the phase reconstruction, as only one modulation phase per MCR has to be reconstructed. As described in [18] and [32], $\varphi_{\mathrm{mod},0}$ can be reconstructed by modulating the same MCR channel with a two-tone signal. By modulating channel 0 with $f_{\mathrm{mod},0}$ and $f_{\mathrm{mod,II},0} = b\, f_{\mathrm{mod},0}$, $b > 1$, the modulation phase can be determined by

$$\varphi_{\mathrm{mod},0} = \frac{\varphi_{\mathrm{Rp,II},0} - \varphi_{\mathrm{Rp},0}}{b - 1}, \tag{18}$$

where $\varphi_{\mathrm{Rp,II},0} = \arg(a_{\mathrm{II},0})$ and $\varphi_{\mathrm{Rp},0} = \arg(a_0)$ are the phases measured over the reflection at the same target at the two different modulation frequencies. Alternatively, the direct coupling between MCR Tx and radar Rx can be used. Calculating with complex numbers, this equals

$$z_{\mathrm{mod},0} = e^{\mathrm{j}\varphi_{\mathrm{mod},0}} = \sqrt[b-1]{\frac{a_{\mathrm{II},0}}{a_0}} \tag{19}$$

and is unambiguous for $1 < b \leq 2$.
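The two-tone reconstruction can be sanity-checked numerically. The following sketch uses synthetic complex amplitudes and an assumed tone ratio b = 2; it recovers the modulation phase from the ratio of the two measured tones, following the reconstructed form of (19).

```python
import numpy as np

b = 2.0                              # ratio of the two modulation tones
phi_mod_0 = 1.2                      # ground-truth modulation phase [rad]
phi_target = -0.7                    # common phase from the target reflection

# synthetic measured amplitudes at the two tones (unit magnitude)
a_0 = np.exp(1j * (phi_target + phi_mod_0))
a_II_0 = np.exp(1j * (phi_target + b * phi_mod_0))

# reconstruction: (b-1)-th complex root of the ratio, cf. (19)
z_mod_0 = (a_II_0 / a_0) ** (1.0 / (b - 1.0))
print(np.angle(z_mod_0))             # -> 1.2
```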
When using an OFDM radar, it thus makes sense to modulate the first channel of each MCR onto two neighboring subcarriers.
B. Combination of the Subarrays
By performing the signal evaluation described in Section III-C, a complex range-Doppler (Rv) matrix I is generated for each virtual channel of the VNA. In order to perform a DoA estimation, the phases of all channels have to be combined correctly for each target. However, different aspect angles of the network nodes result in range-angle coupling [15] and differences in the radial velocity observed by each node [33], which have to be considered when identifying the target peaks. Furthermore, the range offset induced by the LoS path and the delay inside an MCR has to be considered, and a discrepancy in the modulation frequency may lead to a velocity offset.
After correctly identifying the target peaks, the complex subarray steering vectors can be combined to a network steering vector γ according to the VNA. In this work, no subarray tapering is used. All elements in γ are thus normalized to have the same amplitude.
C. Zero Padding
When a VNA consists of multiple subarrays with gaps in between, it results in a block-wise sampling structure as described in (7). In this case, wrong results may occur due to sampling errors when performing an FFT-based DoA estimation without extending the steering vector via zero padding (ZP). The effect is most severe in the case of ULA subarrays with large gaps in between, as, for example, in the VNAs shown in Fig. 2(a) and (b); this is illustrated in Fig. 5.
It can be explained using the example of a VNA with two ULA subarrays and (7) and (8). Considering the Nyquist-Shannon theorem [34], the term $\cos(\Omega\, d_{\mathrm{nw}}[2])$ is free from aliasing if the resolution in the Fourier domain is smaller than $(2 d_{\mathrm{nw}}[2])^{-1}$. Thus, using a λ/2-grid, the steering vector γ needs to have a length of $2 d_{\mathrm{nw}}[2]\,(2/\lambda)$ elements.
Although the effect is reduced when more and sparser subarrays are used, ZP is also recommended for other VNAs to avoid this error. In the general case of $N_{\mathrm{nw}}$ subapertures, the length of γ should be larger than $2 d_{\mathrm{nw}}[N_{\mathrm{nw}}]\,(2/\lambda)$, with the $N_{\mathrm{nw}}$th subaperture having the largest $d_{\mathrm{nw}}$. In practice, an even larger ZP is favorable for further interpolation. In this work, the steering vector is extended to more than eight times its length.
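A short helper can make this length condition concrete. The sketch below assumes node positions given in λ/2 grid units (the positions match the 77-GHz setup described in Section V; the used ZP length is the more-than-eightfold extension mentioned above).

```python
def min_steering_length(d_nw_grid):
    """Minimum steering vector length (in lambda/2 grid elements) to
    avoid aliasing of the cos(Omega * d_nw) terms, cf. Section IV-C."""
    return 2 * max(d_nw_grid)          # d_nw given in lambda/2 units

d_nw_grid = [0, 100, 175]              # node positions as in the 77 GHz setup
n_min = min_steering_length(d_nw_grid)
n_used = 8 * 240                       # ZP to more than eight times the array
print(n_min, n_used, n_used >= n_min)  # -> 350 1920 True
```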
D. FFT-Based DoA Estimation Using Compressed Sensing
Before the final steps of the DoA estimation, the steering vectors are corrected based on a calibration, e.g., a calibration with a target at 0°. Furthermore, a near-field correction as proposed in [15] and [35] is applied.
Then, due to the sparsity of the VNA, the DoA estimation itself is performed using a CS algorithm in combination with a Fourier transform [24], [36]. This way, the sparse elements in the steering vector are reconstructed, and sidelobes as well as artifacts are reduced. In this work, an iterative method with adaptive thresholding (IMAT) [37] is used, since this algorithm combines well with ZP.
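A compact sketch of such an IMAT-style reconstruction is given below. It is a generic textbook-style variant, not the exact algorithm of [37]; the threshold schedule and iteration count are illustrative assumptions. The known steering-vector samples at the VNA positions are iteratively re-imposed while the angular spectrum is thresholded with an exponentially decaying threshold.

```python
import numpy as np

def imat_reconstruct(gamma, mask, n_iter=50, beta=0.1):
    """Reconstruct the full steering vector from sparse VNA samples.
    gamma: measured samples (zeros where unknown);
    mask: boolean array, True at known antenna positions."""
    x = gamma.copy()
    t0 = np.abs(np.fft.fft(gamma)).max()      # initial threshold
    for k in range(n_iter):
        spec = np.fft.fft(x)
        thr = t0 * np.exp(-beta * k)          # adaptive (decaying) threshold
        spec[np.abs(spec) < thr] = 0          # sparsify the angular spectrum
        x = np.fft.ifft(spec)
        x[mask] = gamma[mask]                 # re-impose the known samples
    return x

# toy example: target with Omega = 0.2*pi, sampled by a sparse three-block VNA
n = 240
pos = np.r_[0:16, 112:128, 224:240]           # three ULA subarrays
omega = 0.2 * np.pi
full = np.exp(-1j * omega * np.arange(n))
mask = np.zeros(n, dtype=bool)
mask[pos] = True
meas = np.where(mask, full, 0)

rec = imat_reconstruct(meas, mask)
print(np.abs(rec - full).max())               # small reconstruction error
```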
V. SYSTEM DESIGN

Based on the proposed network concept, a system at 77 GHz is designed. It uses the OFDM radar demonstrator presented in [38] and two four-channel MCRs.
A. Multichannel Repeater
The MCRs are designed using Rogers 3003G2 substrate PCBs and GaAs monolithic microwave integrated circuits (MMICs) from UMS. A block diagram and a photograph are presented in Fig. 6. The Rx antenna is connected via a waveguide-to-microstrip transition. Then, the signal is split into four different signals for the four channels of the repeater. Each channel includes a UMS CHM2179b98F DSB mixer. Additionally, before each power divider and after each mixer, UMS CHA2080-98F variable gain amplifiers (VGAs) are used to compensate for the losses of the LoS path, the microstrip lines, and the mixer, leading to a total of seven amplifiers. As Tx antennas, eight-element patch antenna arrays are used. The Tx antennas are placed in a λ/2-grid. The Tx array of the MCR is identical to the Tx array of the radar and can be found in Table II.
B. Radar
The radar consists of a 4 × 4 77-GHz frontend and a digital backend based on a Xilinx RFSoC [39]. Baseband signals are generated and processed by eight digital-to-analog converters (DACs) and eight analog-to-digital converters (ADCs), with two per channel used for the in-phase and quadrature components. Each baseband signal is sampled at 1 GSa/s, which allows for a signal bandwidth of about 400 MHz. Further details can be found in [38].
The same eight-element patch arrays as on the MCRs are used as radar antennas and placed as specified in Table II. The signal of the fourth Tx channel is split by a power divider and feeds the radar transmit antenna as well as a waveguide antenna used for the LoS path to the MCRs.
C. Network Design
The proposed network consists of three nodes, the radar and two MCRs. As shown in Fig. 7, the MCRs are mounted at distances of 100 λ/2 and 175 λ/2. This leads to a network aperture with a size of 239 λ/2 consisting of three sparse subapertures, each with a size of 65 λ/2. The VNA is the same as depicted in Fig. 2(c), with the colors of the network nodes in Fig. 7 matching the colors of the corresponding subarrays in the VNA. It has to be mentioned that the network nodes are offset only along the x-axis, but not in y- or z-direction (see Fig. 7 for the axis definition).
D. Multiplexing
The used multiplexing pattern is depicted in Fig. 8. The first three subcarriers are used by Tx1, Tx2, and Tx3. Then, there are nine subcarriers used by the lower sidebands (LSBs) of MCR2 and MCR1, followed by Tx4, which in this work is connected to a regular Tx antenna as well as to the LoS antenna. Above the subcarrier of Tx4, there are nine subcarriers for the upper sidebands (USBs) of both MCRs. This pattern is then repeated 187 times, which results in a total of N = 4114 subcarriers. The modulation frequencies of the MCRs are listed in Table III. As all modulation signals for both repeaters are generated using the same multichannel arbitrary waveform generator (AWG), the modulation signals of MCR1 and MCR2 are synchronized, and MCR2 thus does not need a channel modulating onto two subcarriers.
VI. MEASUREMENTS AND EVALUATION
Using the proposed radar network, measurements for the evaluation and verification of the system are performed. This includes a systematic evaluation of the radar network's performance based on measurements in an anechoic chamber as well as radar imaging of automotive scenarios. The OFDM parameters are presented in Table IV. All measurements are calibrated using a copper pole with a diameter of 27 mm placed at boresight (θ = 0°).
A. Sidelobe Level
For a validation of the SLL, a measurement using the same copper pole as for the calibration, but placed at $\theta_{\mathrm{GT}} = -3°$ at a range of R = 3.5 m, is performed. Fig. 9 shows the DoA estimation result for this scenario. The theoretical SLL of −6.8 dB from Fig. 2(c) is matched. Thus, this low sidelobe level is achieved not only theoretically but also practically.
In contrast, using only the radar and hence a single subarray, the sidelobe level increases to −3.4 dB. Additionally, the 3 dB width of the radar's main lobe is 1.72°.
B. Target Separability
The target separability is evaluated based on measurements of two aluminum poles with 12 mm diameter at a range of R = 3.5 m. They are measured at different separations x in steps of 1 cm. The results in Fig. 10 include network-based DoA estimations with and without CS as well as DoA estimations using the radar only.
The theoretical target separability based on the Rayleigh criterion [1] equals $\theta_{\mathrm{NW}} = 0.584°$. In the measurements, this separability is not fully reached, as targets with an angle difference of $\theta_{\mathrm{GT}} = 0.66°$ are not separable. However, as shown in Fig. 10(a), at the next measured angle difference, $\theta_{\mathrm{GT}} = 0.82°$, they are. Thus, the measured target separability is slightly higher than the Rayleigh criterion, which is to be expected, as the Rayleigh criterion represents a theoretical minimum. Yet, it is much lower than the target separability of the radar only, which is 2.15° according to the Rayleigh criterion. Fig. 10(b) and (c) shows the results of similar measurements with pole spacings of $\theta_{\mathrm{GT}} = 0.99°$ and $\theta_{\mathrm{GT}} = 1.15°$, respectively. While the targets can be separated successfully in all three DoA estimations in Fig. 10, the deviation between the ground-truth (GT) spacing and the measurement result differs. In Fig. 10(c), the error is only 0.03°, well within the accuracy of the scenario setup; in Fig. 10(a) and (b), it is about 0.2°. However, since in a Fourier transform two closely spaced peaks influence each other depending on their phase [40], an error is to be expected at a target spacing close to the network's separability.
C. Automotive Scenarios
In order to verify the performance of the radar network in practical scenarios, measurements with automotive targets including a car, a pedestrian, a bicycle, and a motorcycle are performed. In the evaluation, at first an Rv-image is created, where for the range evaluation a zero padding to more than eight times the initial length is used. Then, a bird's-eye view image is created, where for every range cell a network-based DoA estimation using CS is performed for the velocity bin with the highest power.
The measurement scenarios shown in Fig. 11 include a small number of road users, which are imaged in great detail due to the high angular resolution of the network. Fig. 11(a) shows a scenario with a car and a pedestrian. Multiple reflection points at the corners and the license plate of the car as well as both legs of the pedestrian are visible. A high number of features can be seen especially in Fig. 11(b), where both fork tubes of the bicycle as well as the left leg of the cyclist are visible. Fig. 11(c) shows a motorcycle approaching the radar at 3.5 m/s. Multiple features like the crash bar can be observed.
Fig. 12 visualizes the measurement results of difficult scenarios with multiple road users close to each other. Furthermore, a comparison of the network-based DoA estimation and a DoA estimation using only the radar's 4 × 4 array is shown. In Fig. 12(a) and (b), the scenario includes a car, a pedestrian, and a bicycle, all at a similar distance between 4 m and 5 m. Despite the combination of a weak pedestrian next to a strongly reflecting car, in both cases all road users are visible in detail when using the radar network. In contrast, in Fig. 12(a), the pedestrian is not detected when using only the radar. Also, additional features like different reflection points at the front of the bicycle are only detected by the network.
In the very dense scenario in Fig. 12(b), where there is a road user every 0.5 m, all targets are still well detectable using the network. However, some features, especially of the bicycle, are lost compared to Fig. 11(b). In contrast, using the radar only, far fewer details and reflection points are visible, and the pedestrian as well as the cyclist are harder to detect.
In Fig. 12(c), a bicycle and a motorcycle are placed handlebar to handlebar. Using the radar network, the motorcycle is visible with a high level of detail. The bicycle is also still detected, despite the strong target directly next to it. With the radar only, both targets are detectable, but just as single peaks.
By the measurements in Fig. 12, it is thus verified that, using the radar network, weak targets can be detected next to strong ones, separated only by the DoA estimation. This is of special importance, as in automotive scenarios weak targets typically include the most vulnerable road users.
VII. CONCLUSION
To show the potential of coherent radar networks, a network system is proposed that allows for a flexible design of the VNA while providing a good link budget. It is based on a digital radar accompanied by two MCRs, leading to three network nodes and thus three subapertures. With a VNA designed based on the proposed recommendations for network array design, a high target separability is reached while a low SLL is maintained. The measurement results not only show the high target separability of the network-based DoA estimation, but also its applicability in automotive scenarios. With the high angular resolution, a very detailed image of the traffic scenario and the different road users is created. Furthermore, the ability to detect weak targets next to strong ones exclusively based on the DoA estimation is demonstrated. All in all, the results show the high potential of network-based radar imaging using a well-designed radar network.
ACRONYMS
In the following, an overview of the most important acronyms is given.
Fig. 1. VNA of a network consisting of three subarrays with four virtual antennas each.
Fig. 2. Simulation results for a DoA estimation of a target at 0° using different VNAs without subarray tapering. Different colors in the VNAs represent different subarrays. (a) VNA with two 16-element ULA subarrays. (b) VNA with three 16-element ULA subarrays. (c) VNA with three 16-element sparse subarrays.
Fig. 3. Concept of a triangular-path radar-repeater network using MCRs, exemplarily shown for a network with one four-channel MCR.
Fig. 4. Subcarrier assignment exemplarily shown for a network consisting of a radar with four transmitters and two four-channel MCRs with SSB mixers. The pattern is repeated.
Fig. 5. Comparison of the DoA estimation results for a target at 5° with ZP to eight times the array size and without ZP. The network with two 16-element ULA subarrays from Fig. 2(a) is simulated.
Fig. 8. Subcarrier assignment as used in the presented radar-repeater network. Channel MCR1-1 modulates onto two different subcarriers to enable the phase reconstruction explained in Section IV-A. The pattern is repeated 187 times.
Fig. 9. DoA estimation of a single pole at $\theta_{\mathrm{GT}} = -3°$. With the network aperture, an SLL of −6.8 dB can be reached, equaling the theoretical performance shown in Fig. 2(c).
Fig. 11. Measurements of static and dynamic low-density automotive scenarios. The measurement results are shown as bird's-eye view; the DoA estimation is performed based on the whole network using CS. The measurement scenarios include (a) a car and a pedestrian at [0, 4] m; (b) a bicycle at boresight, fork at [0, 5] m; (c) a motorcycle approaching at 3.5 m/s.
TABLE I. RADAR NETWORKS WITH COHERENT DOA ESTIMATION
TABLE III. MODULATION FREQUENCIES OF THE REPEATER CHANNELS | 9,321.2 | 2024-05-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Effectively processing medical term queries on the UMLS Metathesaurus by layered dynamic programming
Background: Mapping medical terms to standardized UMLS concepts is a basic step for leveraging biomedical texts in data management and analysis. However, available methods and tools have major limitations in handling queries over the UMLS Metathesaurus that contain inaccurate query terms, which frequently appear in real-world applications. Methods: To provide a practical solution for this task, we propose a layered dynamic programming mapping (LDPMap) approach, which can efficiently handle these queries. LDPMap uses indexing and two layers of dynamic programming techniques to efficiently map a biomedical term to a UMLS concept. Results: Our empirical study shows that LDPMap achieves much faster query speeds than LCS. In comparison to the UMLS Metathesaurus Browser and MetaMap, LDPMap is much more effective in querying the UMLS Metathesaurus for inaccurately spelled medical terms, long medical terms, and medical terms with special characters. Conclusions: These results demonstrate that LDPMap is an efficient and effective method for mapping medical terms to the UMLS Metathesaurus.
Background
Efficiently processing and managing biomedical text data is one of the major tasks in many medical informatics applications. Biomedical text analysis tools, such as MetaMap [1] and cTAKES [2], have been developed to extract and analyze medical terms from biomedical text. However, medical terms often have multiple names, which make the analysis difficult. As an effort to standardize medical terms, the Unified Medical Language Systems (UMLS) [3] maintains a very valuable resource of controlled vocabularies. It contains over 200 million medical terms (also known as "medical concepts"). Each medical term is identified by a unique id known as a Concept Unique Identifier (CUI). The UMLS also records relations between medical terms. As a result, mapping biomedical text data to the UMLS and mining UMLS associated datasets often yield rich knowledge for many biomedical applications [4][5][6][7][8].
In order to effectively query or use the UMLS, one of the fundamental tasks is to correctly map a biomedical term to a UMLS concept. Currently, there are a number of publicly available tools to achieve this goal. One notable approach is to use the official UMLS UTS service (UMLS Metathesaurus Browser) available on the UMLS official website (https://uts.nlm.nih.gov). Users can input a medical term and the system will return a query result. MetaMap [1], which has been developed and maintained by the US National Library of Medicine, has become a standard tool for mapping biomedical text to the UMLS Metathesaurus. cTAKES [2] is an open-source natural language processing system that can process clinical notes and identify named entities from various dictionaries, including the UMLS.
However, after using these tools in our research, we found that they do not work well for medical terms that are only slightly different from the terms in the UMLS. For example, the UMLS Metathesaurus Browser, MetaMap, and cTAKES fail to process the query term "1-undecene-1-O-beta 2',3',4',6'-tetraacetyl glucopyranoside" even though it differs by only one character (the missing "-" between "beta" and "2") from the official UMLS concept "1-undecene-1-O-beta-2',3',4',6'tetraacetyl glucopyranoside". This drawback makes it hard to handle many real-world data sources such as Electronic Health Records, which contain a lot of noisy information, including missing and incorrect data [9]. In addition, these tools often fail to handle long medical terms even if those terms are identical to the terms in the UMLS. For example, the Metathesaurus Browser cannot handle query terms with more than 75 characters, and sometimes cannot even accurately answer a query term that exactly matches a concept name in the UMLS (see the discussion in the results section). MetaMap and cTAKES, on the other hand, often break down a long medical term into several shorter terms. For example, if we query MetaMap with the clinical drug "POMEGRANATE FRUIT EXTRACT 150 MG Oral Capsule", we get several UMLS concepts such as "C1509685 POMEGRANATE FRUIT EXTRACT", "C2346927 Mg++", and "C0442027 Oral", instead of this drug concept, which has the unique CUI C3267394 in the UMLS. The situation becomes even worse when medical terms contain special characters, i.e., characters other than numbers or letters, such as "{", "}", "(", ")", "-", etc. For example, MetaMap completely fails to find any CUI relevant to the medical concept "cyclo(Glu(OBz)-Sar-Gly-(Ncyclohexyl)Gly)2". These drawbacks are very undesirable when handling biomedical texts. By studying the UMLS Metathesaurus, we found that a significant number of medical terms are quite long. About 10.7% of UMLS concepts contain at least 75 characters (including white spaces), and about 50.9% of UMLS concepts contain at least 32 characters. In addition, a large number of medical terms contain special characters. More than 61.3% of UMLS concepts contain at least one special character, and about 11% of UMLS concepts contain at least 5 special characters. In fact, we found that many special characters are optional in a medical term. For example, the term "Cyclic AMP-Responsive DNA-Binding Protein" and the term "Cyclic AMP Responsive DNA Binding Protein" both refer to the same concept "C0056695" in the UMLS Metathesaurus, though the latter is missing two "-". The UMLS handles a medical term with different names by including multiple common names in the Metathesaurus. Given the fact that in many cases special characters are optional, it is practically impossible to let the Metathesaurus contain all possible names. Considering a UMLS concept with 20 special characters, if each special character may be replaced by a white space, then there are approximately 1 million aliases for this concept alone, not to mention that more than 0.3% of UMLS concepts contain 20 special characters or more.
This problem is in fact related to the classical spelling correction problem, in which a misspelled word is corrected to the most closely matched word. The classic measurement of dissimilarity between two words is based on several distance functions, such as edit distance [10], Hamming distance [11], and longest common subsequence distance [12,13]. Thus, spelling correction essentially amounts to finding a valid word with the minimum distance to the misspelled word. Quite a few dynamic programming algorithms have been proposed to solve this problem. Readers can find a survey of these algorithms in [14]. In recent years, spelling correction has evolved to perform query corrections. This correction is often a task of context-sensitive spelling correction (CSSC), where corrections are geared towards more meaningful or frequently searched words [15]. Thus, it is a good idea to use the query log to assist the correction [16].
Unlike many query applications, it is not sufficient to return a frequently searched medical term that best matches the query based on search history, not to mention that such history data is often not available. Accurately identifying a specific biomedical term, such as a drug name or a chemical compound, is demanded by many biomedical applications. Given this consideration, classical spelling correction techniques are more preferable than the CSSC for matching biomedical terms to UMLS concepts. However, we found that the classical dynamic programming algorithm is too slow for this task because of the huge volume of terms in the UMLS Metathesaurus. In addition, it is unable to effectively handle a term with missing words (e.g., "gastro reflux" has a large distance to "gastro oesophageal reflux" though the two terms usually mean the same thing), or words not in their usual order (e.g., "lymphocytic leukemia chronic" has a large distance to "leukemia chronic lymphocytic").
The background described above motivated us to find an efficient and accurate medical term mapping method for the UMLS. To tackle this challenge, in this work we propose a Layered Dynamic Programming Mapping (LDPMap) approach to query the UMLS Metathesaurus.
Methods
We use the longest common subsequence (LCS) to measure the similarity between two words. Given two words A and B, their similarity is defined as

$$\mathrm{WordSimilarity}(A, B) = \frac{2\,|\mathrm{LCS}(A, B)|}{|A| + |B|}.$$

This similarity measure is a variation of the longest common subsequence distance [12]. We can observe that WordSimilarity(A, B) ranges between 0 and 1. The function WordSimilarity(A, B) is the basic building block for LDPMap. In the UMLS, each concept is a sequence of words. We define the similarity between two concepts $\alpha_n = (A_1, A_2, \ldots, A_n)$ and $\beta_m = (B_1, B_2, \ldots, B_m)$ as

$$\mathrm{ConceptSimilarity}(\alpha_n, \beta_m) = \max_{R} \sum_{(i,j) \in R} \mathrm{WordSimilarity}(A_i, B_j).$$

Similar to the word similarity, in our query we normalize the concept similarity by the number of words contained in each concept. We can observe that the normalized concept similarity score ranges between 0 and 1. If two concepts are identical, then this score is 1.
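A direct implementation of this word-level building block could look as follows. This is a sketch; the normalization $2\,\mathrm{LCS}/(|A|+|B|)$ used here is one plausible reading of the paper's definition.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic dynamic programming for the longest common subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def word_similarity(a: str, b: str) -> float:
    """Normalized LCS similarity in [0, 1]; 1 iff the words are identical."""
    if not a and not b:
        return 1.0
    return 2.0 * lcs_length(a, b) / (len(a) + len(b))

print(word_similarity("gastro", "gastro"))   # 1.0
print(word_similarity("reflux", "refux"))    # high, but below 1.0
```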
The key issue in the above definition is R, which is a matching relation between words in $\alpha_n$ and $\beta_m$. We impose two constraints on R, which lead to two different foci. Constraint 1: there do not exist two matching pairs (i, j), (x, y) in R such that i = x or j = y. Constraint 2: in addition to Constraint 1, for any two matching pairs (i, j), (x, y) in R, either i < x and j < y, or x < i and y < j.
Constraint 1 converts the concept similarity problem into a maximum weighted bipartite matching problem [17]. Considering a bipartite graph built on the two vertex sets $\alpha_n$ and $\beta_m$ with the word similarities as edge weights, finding the highest concept similarity score under Constraint 1 is equivalent to finding a maximum weighted matching of the bipartite graph. This model is particularly helpful for identifying the similarity between two terms regardless of their word ordering. We used this as one of the measurements in our final query workflow (Figure 1) and implemented it by maximum weighted matching.
In the following section, we focus on the concept similarity calculation under Constraint 2, which requires that the similarity comparison between two terms follow the word order in those terms, similar to the LCS problem, in which the matching between two words follows the character order. Thus, the concept similarity calculation can be considered a macro-level similarity calculation where each unit is a word instead of a letter, as in the case of the word similarity calculation. This model has many advantages, as we will see in the following section.
Suboptimal structure of the concept similarity under constraint 2
Our next question is how to perform the concept similarity calculation. Unlike the word similarity calculation, in which each match outcome is a binary result (i.e., the same letter or a different letter), each match in the concept similarity calculation is a word similarity value between 0 and 1. The algorithm for the word similarity calculation cannot be applied directly to the concept similarity calculation. However, we find that the concept similarity calculation also has a suboptimal structure. Let S[i][j] denote the concept similarity between the prefixes $\alpha_i$ and $\beta_j$; then

$$S[i][j] = \begin{cases} 0, & \text{if } i = 0 \text{ or } j = 0, \\ \max\big(S[i-1][j],\ S[i][j-1],\ S[i-1][j-1] + \mathrm{WordSimilarity}(A_i, B_j)\big), & \text{otherwise.} \end{cases}$$

The above suboptimal structure holds because for any two words $A_i \in \alpha_i$, $B_j \in \beta_j$, there are at most three possible cases: (1) $A_i$ is not used in the matching, (2) $B_j$ is not used in the matching, or (3) $A_i$ is matched with $B_j$. Note that we do not consider it a valid case that neither $A_i$ nor $B_j$ is used in the matching; in this case, we can always choose to match them without violating Constraint 1, resulting in a higher or at least equal concept similarity score.
Main algorithms
Given the suboptimal substructure, we can design a dynamic programming algorithm to calculate the concept similarity score between two terms, on top of the LCS dynamic programming algorithm for calculating the word similarity. The two layers of dynamic programming not only result in a method less affected by missing words or words in different orders, but also significantly increase the query speed, as we will see below. These enable our search method to be practically applicable to many biomedical applications.
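Based on the recurrence above, the concept-level layer can be sketched as follows. It reuses the word_similarity function from the earlier sketch and is a simplified illustration of the paper's two-layer dynamic programming, with the per-concept normalization assumed to be by the average word count.

```python
def concept_similarity(words_a, words_b):
    """Word-level dynamic programming over the recurrence for S[i][j];
    each cell adds a word similarity score in [0, 1] instead of a
    binary character match."""
    n, m = len(words_a), len(words_b)
    S = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            S[i][j] = max(
                S[i - 1][j],
                S[i][j - 1],
                S[i - 1][j - 1] + word_similarity(words_a[i - 1], words_b[j - 1]),
            )
    # normalize by the number of words in each concept
    return 2.0 * S[n][m] / (n + m)

q = "gastro reflux".split()
c = "gastro oesophageal reflux".split()
print(concept_similarity(q, c))   # high score despite the missing word
```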
The UMLS Metathesaurus (version used in this work: 2012AB) contains around 11 million records in its MRCONSO.RRF files. Each record is a medical term. For query purposes, we discard duplicate terms and non-English terms, which results in about 6.87 million records. A term is considered a duplicate if both its CUI and name are identical to those of another term. However, among these 6.87 million records, there are only 1,874,573 unique words (with white space as the delimiter). Thus, concept similarity on a word basis saves a huge amount of the redundant calculation otherwise needed by classic methods on a character basis. Correspondingly, in our method, we first pre-process the UMLS Metathesaurus into a word vector of unique words, and convert each UMLS concept, which consists of a list of words, into a list of indices with regard to the word vector. Procedure LDPMap-Preprocessing gives the pseudo code.
We process a query using the Algorithm LDPMap_Query. When a query starts, we first build a word similarity matrix between the query term and the word vector (Lines 1-5), using the WordSimilarity function defined above. Then we build a concept score vector between the query term and the 6.87 million UMLS Metathesaurus concepts (Lines 6-8). The construction of the concept score vector uses the WordSimilarityMatrix built previously, so no further word similarity calculations are needed. In addition, it adopts a dynamic programming approach in Function ConceptSimilarityScore, owing to the suboptimal structure of the ConceptSimilarity function.
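A minimal sketch of the pre-processing step in Python, with illustrative names; it builds the word vector of unique words and re-encodes every concept as a list of word indices.

def preprocess(concepts):
    word_index = {}   # word -> position in the word vector
    word_vector = []  # unique words, one entry per distinct word
    encoded = []      # each concept as a list of indices into word_vector
    for concept in concepts:
        ids = []
        for w in concept.split():
            if w not in word_index:
                word_index[w] = len(word_vector)
                word_vector.append(w)
            ids.append(word_index[w])
        encoded.append(ids)
    return word_vector, encoded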
A running example
To facilitate the understanding of our method, we provide a simple running example in Tables 1 and 2. Assume the input query term is "gastro reflux". The Algorithm LDPMap_Query will first build a WordSimilarityMatrix between this query term and the word vector of the Metathesaurus. Partial results are shown in Table 1.
After the WordSimilarityMatrix is available, the Algorithm LDPMap_Query will calculate the concept similarity scores between the query term and UMLS concepts by dynamic programming. The calculation will refer to WordSimilarityMatrix for word similarity score instead of calculating it again. An example of a concept similarity calculation is given in Table 2.
Complexity analysis
The LDPMap method is much faster than the classic LCS-based word similarity calculation, which treats the query term and each UMLS concept as one single word, as demonstrated in our empirical study. The classic LCS-based word similarity calculation uses dynamic programming on a character basis, while we use two layers of dynamic programming: one on a character basis and the other on a word basis. To understand the analytical reason behind this speedup, let us make some simple assumptions. Assume the UMLS Metathesaurus contains M unique concepts, each concept or query term contains t words, and each word has d characters. Also assume the UMLS Metathesaurus contains K unique words. Then, the classic LCS-based word similarity calculation takes approximately O(t^2·d^2·M) time to handle a query, whereas the LDPMap method takes approximately O(t·d^2·K + t^2·M) time. It is easy to observe that K << tM. This explains why LDPMap is much more efficient. In the following, we will see that our LDPMap approach can be further sped up with the pipeline technique.
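For a rough sense of scale (the figures below are illustrative, not measurements), take t = 3, d = 8, M ≈ 6.87×10^6, and K ≈ 1.87×10^6. Then t^2·d^2·M ≈ 4.0×10^9 elementary dynamic programming steps per query, whereas t·d^2·K + t^2·M ≈ 3.6×10^8 + 6.2×10^7 ≈ 4.2×10^8, roughly a ten-fold reduction even before the pipeline speedup described next.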
Speeding up LDPMap with the pipeline technique
In building the WordSimilarityMatrix and the ConceptScore_Vector, the dynamic programming method is run around 1.87 million times and 6.87 million times, respectively. It is interesting to ask whether there are repeated calculations that can be reused to speed up the LDPMap method. By studying both the word vector and the Metathesaurus, we found that the former has many repeated prefixes among words (e.g., the words "4-Aminophenol" and "4-Aminophenyl"), and the latter has many repeated prefix words among concepts (e.g., C1931062 ectomycorrhizal fungal sp. AR-Ny3, C1931063 ectomycorrhizal fungal sp. AR-Ny2). Thus, by lexicographically sorting the word vector and the Metathesaurus, we can use this information to save a large amount of calculation in the LDPMap approach as follows: (1) In calculating the WordSimilarityMatrix, given a word A, if it has p common prefix letters with the previous word B, the dynamic programming only needs to start from iteration p+1, because the previous p+1 columns of the dynamic programming table are exactly the same as the previous results.
(2) In calculating the ConceptSimilarityScore, given a concept a, if it has q common prefix words with the previous concept b, the dynamic programming only needs to start from iteration q+1, because the previous q+1 columns of the dynamic programming table are exactly the same as the previous results. That means the for loop in Line 2 of Function ConceptSimilarityScore shall start with j = q+2.
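The prefix reuse can be sketched in a few lines of Python; the example words are taken from the text, and the comment states the reuse rule under the assumption of lexicographic processing order.

def common_prefix_len(a: str, b: str) -> int:
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1
    return p

prev, curr = sorted(["4-Aminophenol", "4-Aminophenyl"])
p = common_prefix_len(prev, curr)  # p == 11 shared leading characters
# Against the same query term, the DP table columns for the first p
# characters of curr are identical to those already filled for prev,
# so the computation restarts at column p + 1 instead of column 1.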
The mechanism of the speedup technique can be described as a pipeline technique because a computation result can be passed down and partially reused by the subsequent computation. In the empirical study, we will see that the pipeline technique significantly improves the LDPMap speed.

Table 2. An example of calculating the concept similarity score between the query term "gastro reflux" and the UMLS concept "gastro oesophageal reflux" for the ConceptScore_Vector construction. The calculation refers to the WordSimilarityMatrix shown in Table 1. The normalized final similarity score is 2*2/(2+3) = 0.8.
A comprehensive query workflow using the LDPMap approach

Given the above solutions to the concept similarity problem under Constraints 1 and 2, we design a comprehensive query workflow for mapping a query term to UMLS concepts. Our query workflow needs to consider multiple types of input variations and errors. Other than missing words and words in different orders, which are properly handled by the concept similarity formulation, we need to consider the situation in which two words are merged together. In this situation, the concept similarity modelling does not fit well because it operates on a word basis, and it is preferable to use the classic LCS method. However, as we pointed out above, the classic LCS method is too slow for the UMLS Metathesaurus. Fortunately, we can leverage the concept similarity solutions to output a list of concepts with similarity scores greater than a threshold. When we set the threshold to 0.35, in most cases this outputs concepts that are similar to the query term regardless of the word merging issues. The number of concepts output is much smaller than the size of the UMLS Metathesaurus; thus, applying the LCS method on this small subset is much faster than applying it to the whole UMLS Metathesaurus. The query workflow is illustrated in Figure 1.
In the query workflow, we first calculate the concept similarity scores under Constraint 2 between the query term and all UMLS concepts. If there are concepts with scores higher than threshold T_1, we output the results and the query completes. Otherwise, we save any concepts with scores higher than threshold T_2 as SET(T_2), and then perform two additional queries: (1) calculate the word similarity between the query term and each concept in SET(T_2) by treating the query term and each concept as one single word; (2) calculate the concept similarity scores under Constraint 1 between the query term and all UMLS concepts. Finally, we merge and output the results from (1) and (2). The number of results output is adjustable: an application can choose to output concepts with scores higher than a threshold, or only the top-ranked concepts.
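The workflow can be condensed into a short Python sketch that reuses the helper functions sketched earlier (word_similarity, concept_similarity, concept_similarity_matching); the thresholds are the values used in the paper, while the structure of the fallback is a simplified reading of Figure 1.

def ldpmap_query(term, concepts, T1=0.8, T2=0.35):
    words = term.split()
    scored = [(c, concept_similarity(words, c.split())) for c in concepts]
    hits = [(c, s) for c, s in scored if s >= T1]
    if hits:  # confident matches under Constraint 2: done
        return sorted(hits, key=lambda x: -x[1])
    set_t2 = [c for c, s in scored if s >= T2]
    # (1) LCS on the small candidate set, treating each concept as one word.
    lcs_hits = [(c, word_similarity(term, c)) for c in set_t2]
    # (2) Constraint 1 matching against all concepts.
    match_hits = [(c, concept_similarity_matching(words, c.split(), word_similarity))
                  for c in concepts]
    return sorted(lcs_hits + match_hits, key=lambda x: -x[1])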
Results
To understand the actual performance of LDPMap, we implemented it in C++, and subjected it to two sets of empirical studies. In summary, the results demonstrate that LDPMap method performs much better than available methods in terms of query speed and effectiveness. All experiments were carried out on Linux cluster nodes with 2.4GHz AMD Opteron processors. For the LDPMap query workflow, we set two parameters T 1 = 0.8 and T 2 = 0.35.
Query speed comparison
We would like to know how fast LDPMap handles queries in comparison with the standard LCS method, which treats the query term and each UMLS concept as a single word, and how effective the pipeline technique is for LDPMap. Therefore, we tested three algorithms, namely standard LCS, LDPMap (the LDPMap_Query algorithm) without the pipeline technique, and LDPMap with the pipeline technique, on four sets of medical concepts randomly chosen from the UMLS Metathesaurus. The first set consists of 1000 single-word medical concepts. The second, third, and fourth sets consist of 1000 two-word, 1000 three-word, and 1000 four-word concepts, respectively. The results are shown in Figure 2.
From Figure 2, we can observe that the LDPMap algorithm is much faster than the standard LCS. In addition, the standard LCS method is sensitive to the number of words in a query term, while the LDPMap method is much more stable. This result is consistent with the above complexity analysis. In addition, the pipeline technique significantly speeds up the basic LDPMap method, confirming our intuition that it saves huge amounts of redundant computation and thus improves the efficiency of the LDPMap method. As a result, in this set of experiments, LDPMap with the pipeline technique on average answers a query in less than 1 second. However, the standard LCS method takes about 1 to 2 minutes to answer a query, which is virtually unacceptable for many biomedical applications that require near real-time responses or process large amounts of data. In addition to the slow query time, the standard LCS is not good at processing query terms with missing words or words in different orders, as discussed above.
It is worth noting that even for one-word queries, the LDPMap method is significantly faster than LCS, even though the concept similarity is exactly the same as the word similarity in this case. This is because LDPMap pre-processes the UMLS terms on a word basis and builds an efficient index; the similarity measurement operates not directly on the UMLS terms but on the words and the index, which saves a large amount of computation. In contrast, LCS handles the similarity measurement directly over every UMLS term. This can also be explained by our complexity analysis above: when t = 1 (t is the number of words in a query), the LCS complexity is O(d^2·M) while that of LDPMap is O(d^2·K + M). Since K << M, we conclude that LDPMap is much faster than LCS.
Next, we would like to know how effectively LDPMap handles queries, especially when the query terms are slightly different from the terms in the UMLS Metathesaurus.
Query effectiveness comparison
To understand how effective LDPMap (referring to LDPMap query workflow in this set of experiments) handles queries with name variations and errors, we used two available methods, UMLS Metathesaurus Browser and MetaMap as benchmarks. In a cursory examination of cTAKES, we found that it exhibited similar characteristics to MetaMap in its ability to handle name variations and errors and therefore we have excluded it from comparison. Since the study on UMLS Metathesaurus Browser requires manually inputting terms and checking the results, we have to limit the query test to manageable numbers. In addition, since the UMLS Metathesaurus Browser cannot accept a query term with more than 75 characters, we limit all query terms in our test to be no more than 75 characters. Given the above situations, and considering the fact that more than 50% of UMLS concepts contain at least 32 characters, we randomly chose 100 medical concepts with 32-75 characters from the UMLS Metathesaurus.
The 100 medical concepts are divided into two groups. The first group consists of 50 concepts with no special characters (i.e., characters other than letters and numbers), and the second group contains 50 concepts with 5 or more special characters. The two groups are for two different testing purposes.
Group 1: We will use group 1 to test how effective the query workflow handles pure English name terms, and English name terms with input errors, variations, and typos. Thus, in addition to querying the original names, we also query the names with 1, 2, 3, and 4 character variations. Character variations are generated randomly in this study, including (1) deleting a character, (2) replacing a character, (3) merging two words, i.e., deleting the white space between two words.
Group 2: We will use group 2 to test how effective the query algorithm is in handling many professional medical terms, which may contain a good number of special characters, such as chemical compounds and drugs. To simulate the name variations that frequently appear in these terms, we randomly apply 1, 2, 3, and 4 character variations, including (1) deleting a special character, (2) replacing a special character by a white space.
To complement the above test groups, we use the following group to test how effectively the query algorithm handles short terms, which are commonly queried in real situations.
Group 3: We randomly picked 100 medical concepts with 5-31 characters. Since many of these concepts are quite short, we only apply 1 and 2 random character variations, including (1) deleting a character, (2) replacing a character, (3) merging two words.
In these experiments, we found that MetaMap often outputs multiple matching results without ranking them. In contrast, the UMLS Metathesaurus Browser usually outputs a list of ranked concepts, and LDPMap can be configured to output the top k (k >= 1) ranked concepts.
Thus, to be as fair as possible, we use two criteria to measure the correctness of a query. Criterion 1 indicates whether the query processing mechanism is able to handle the query with reasonable accuracy. Criterion 2 is much more stringent and indicates whether a method can be applied to applications that require high accuracy.
Figures 3 and 4 show the error rates for the two groups of experiments under Criterion 1. From both figures, we can clearly see that the LDPMap approach makes very few errors across all tests. In comparison, the error rates of the UMLS Metathesaurus Browser and MetaMap are quite high, especially when multiple character changes are present. MetaMap has a considerable error rate even when querying the original terms (0 character changes). This may be due to the text processing mechanism of MetaMap. Since MetaMap is targeted at finding medical terms in biomedical text, it applies a combination of part-of-speech tagging, shallow parsing, and longest spanning match against terms from the SPECIALIST Lexicon before matching terms against concepts in the UMLS. Therefore, it tends to decompose longer spans of text and medical terms into several shorter medical terms. Figures 5 and 6 show the error rates for the two groups of experiments under Criterion 2. Since MetaMap usually outputs multiple concepts without ranking, we exclude it from the Criterion 2 measurement. From these two figures, we can observe that the error rate of the UMLS Metathesaurus Browser is much higher than under Criterion 1.
Quite surprisingly, there are some errors even when querying a few original terms (such as "Distal radioulnar joint"). This suggests that the UMLS Metathesaurus Browser is not suitable for query processing in applications with a high-accuracy demand. In contrast, LDPMap still has a very low error rate, on average less than 5% across 0-5 character changes, and is free of errors when querying the original terms.
From Figures 7 and 8, we can see that the general performance of LDPMap, the UMLS Metathesaurus Browser, and MetaMap on short query terms is similar to their performance on long query terms. LDPMap still has a clear advantage over the UMLS Metathesaurus Browser and MetaMap. However, we noticed that the LDPMap error rate reaches 27% for 2 character changes under Criterion 2. This is understandable because short terms generally contain fewer words than long terms, so the concept similarity measurement is less favoured. However, the parameter T_1 can be used to adjust the preference between the concept similarity measurement and the word similarity measurement. By increasing T_1 from 0.8 to 0.85, we observed that this error rate is reduced from 27% to 20%. This demonstrates that LDPMap is flexible in handling both long and short term queries.
To provide some details on the medical concepts used in this set of experiments and the character changes applied, we list a few of them in Table 3. From this table, we can see that it contains concepts of different lengths. The randomly generated character variations cover several common cases of text data inaccuracy, including misspellings, merging of two words, and special character omissions. From Table 4 we can see that MetaMap cannot handle them properly; instead, it finds some concepts related to individual words in the query term. The UMLS Metathesaurus Browser does no better on them. In contrast, LDPMap correctly answered all these queries except for "AlbunexIectable Product". Although "Injectable Product" is not correct, it is at least closer to the original term than the results returned by the UMLS Metathesaurus Browser and MetaMap. By reviewing the LDPMap approach, we conclude that this error can be eliminated if we increase the threshold T_1 to a value such that word similarity (LCS) is used to measure the two terms. To confirm this, we increased T_1 from 0.8 to 0.85, and LDPMap successfully returned the original term. However, a high T_1 implies that LDPMap gives more preference to the LCS-based similarity measurement than to the concept similarity measurement defined above. Consequently, LDPMap will be less effective in handling real-world queries that contain incomplete medical terms (i.e., medical terms with missing words). It is evident that no single setting of T_1 and T_2 fits all situations; as a result, we will fine-tune these parameters when deploying LDPMap in our future applications.
Conclusions
In this work, we proposed LDPMap, a layered dynamic programming approach to efficiently map inaccurate medical terms to UMLS concepts. A main advantage of the LDPMap algorithm is that it runs much faster than the classical LCS method, which makes it possible to handle UMLS term queries efficiently. When similarity is counted on a word basis, the LDPMap algorithm may yield a more desirable result than LCS. In other cases (such as word merging), LCS query results may be preferable. Thus, in the comprehensive query workflow of LDPMap, the LDPMap method is complemented by LCS and is adjustable through the parameter T_1. Unlike using LCS alone, the LDPMap query workflow only applies LCS (when needed) to a very limited number of candidate terms and thus achieves a very fast query speed.
In the query effectiveness comparison, we observed that LDPMap has very high accuracy in processing queries over the UMLS Metathesaurus involving inaccurate terms. In contrast, the UMLS Metathesaurus Browser has a very limited ability to handle these queries, though it can handle queries of accurate terms fairly well. Throughout the study, we also observed that MetaMap, in general, is not suitable for mapping long medical terms to UMLS concepts, as it focuses on extracting short medical terms from the query text.
Although LDPMap is very efficient in handling UMLS term queries, it has two major limitations. First, it cannot handle synonyms and coreferences. Fortunately, the UMLS Metathesaurus often lists a concept's preferred names and synonyms, so LDPMap can work effectively in most cases, though the list may still be incomplete. Second, it is not able to perform syntax-level processing as MetaMap does, such as extracting medical terms from an article. Whether the LDPMap approach can be extended to overcome these two limitations remains an open question.
In the future, we would like to investigate this question. We plan to use LDPMap as an efficient pre-processing tool to map medical terms to UMLS concepts, and to use the results in our knowledge discovery applications.
"Computer Science",
"Medicine"
] |
Green High-Yielding One-Pot Approach to Biginelli Reaction under Catalyst-Free and Solvent-Free Ball Milling Conditions
A simple, green, and efficient approach was used to synthesize 3,4-dihydropyrimidine derivatives. We showed that applying the planetary ball milling method with a ball-to-reagent weight ratio of 8 to the Biginelli reaction provides 3,4-dihydropyrimidine derivatives in excellent yields (>98%) and a short reaction time from the one-pot, three-component condensation of aldehydes, ethyl acetoacetate, and urea (or thiourea).
Introduction
The development of new efficient methods to synthesize organic heterocycles that are both economical and eco-friendly presents a great challenge for the scientific community.
Solvent-free reactions are highly significant from both economical and synthetic points of view. These kinds of reactions ensure an essential facet of green chemistry to reduce the risks to humans and the environment.
Multicomponent reactions (MCRs) have gained importance because of their efficiency and effectiveness as a method for one-pot synthesis of a wide range of heterocycles [1][2][3][4][5][6]. The optimal MCR is sufficiently flexible. Thus, it can be conducted to generate adducts with a variety of functional groups that may then be selectively paired to enable different cyclization manifolds, thereby leading to a diverse collection of products.
In this study, we propose a new and highly efficient approach to the one-pot synthesis of Biginelli 3,4-dihydropyrimidine derivatives.
Materials and Techniques
The ball mill used in this study was a Planetary Micro Mill PULVERISETTE 7 (Fritsch, Idar-Oberstein, Germany) classic line with 45 mL tempered steel vials and 10 mm tempered steel grinding balls. The melting points were determined with a Stuart SMP10 melting point apparatus (Bibby Scientific, Staffordshire, UK). All of the compounds used in this study were purchased from Aldrich (St. Louis, MO, USA). IR spectra were obtained with an FT-IR-Tensor 27 spectrometer in KBr pellets (Bruker, Ettlingen, Germany). 1 H and 13 C-NMR spectra were determined with a Bruker 400 NMR spectrometer (Bruker Biospin, Rheinstetten, Germany) in DMSO-d6 with TMS as the internal standard. Chemical shifts were expressed as δ ppm units. The elemental analysis was performed on a PerkinElmer 2400 CHN Elemental Analyzer (Wellesley, MA, USA). The progress of all reactions was monitored through TLC on silica gel 60 with 1:1 hexane/ethyl acetate.
General Procedure for Synthesis of 3,4-Dihydropyrimidine Compound 4a
An equimolar amount (0.02 mol) of benzaldehyde (1a), ethyl acetoacetate (2), and urea (3a) (total mass 5.92 g) was placed into tempered steel vials with 47.36 g of tempered steel balls (22 balls of 10 mm in diameter). The vials were closed and then placed in the Planetary Micro Mill Pulverisette 7, which was set to 750 rpm. The 3,4-dihydropyrimidine compound 4a was obtained in pure form after 30 min of milling without further purification.
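As a consistency check on these quantities (using the standard molar masses of 106.12 g/mol for benzaldehyde, 130.14 g/mol for ethyl acetoacetate, and 60.06 g/mol for urea), the total reagent mass is 0.02 mol × (106.12 + 130.14 + 60.06) g/mol ≈ 5.93 g, in agreement with the stated 5.92 g, and the ball mass follows from the ball-to-reagent weight ratio of 8: 8 × 5.92 g = 47.36 g.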
In this paper, we present an efficient one-pot, three-component solvent-free and catalyst-free approach to synthesize 3,4-dihydropyrimidine derivatives by direct condensation of equimolar quantities of benzaldehyde derivatives, ethyl acetoacetate, and urea/thiourea in a simple planetary ball mill at 750 rpm without adding any solvent or catalyst (Scheme 1). The progress of the reactions was monitored every 10 min of the milling cycle through thin-layer chromatography (TLC).
We examined different ball ratios (Table 1) to improve the efficiency of the ball milling approach for the Biginelli reaction. Equimolar quantities (0.02 mol) of benzaldehyde (1a), ethyl acetoacetate (2), and urea (3a) (with a total mass of 5.92 g) were introduced into the planetary ball mill, and several milling times and ball weights were tested [42]. Table 1 lists the milling parameters and the conversion rates for the synthesis of the 3,4-dihydropyrimidine derivatives. Our investigation showed that no conversion is observed even after 12 h of milling when the ball-to-reagent weight ratio is equal to 1. Increasing the ball weight raises the conversion rate up to its optimal value at a ball-to-reagent weight ratio of 8 (47.36 g of balls) (Table 1). The protocol also provides simple access to 3,4-dihydropyrimidine derivatives (4a-j) (Table 2).
Table 1 reports, for each entry, the balls-to-reagents weight ratio (BRR), the milling time (min), and the conversion (%).
This approach exhibits the advantages of high yield, short reaction time (30 min), and easy workup. It is also environmentally benign. All of the synthesized products have been characterized through NMR ( 1 H and 13 C), IR, and elemental analysis.
Conclusions
We developed a simple, green, and quick method for the one-pot Biginelli reaction. This technique is highly efficient. The important advantage of the present procedure, in addition to its simplicity, is its ability to provide the synthesized 3,4-dihydropyrimidine derivatives in a short reaction time, in pure form, and with excellent yields.
"Chemistry"
] |
Simulation of the stress dependence of hysteresis loss using an energy-based domain model
The assembled domain structure model (ADSM) is a multiscale magnetization model that can be used to simulate the magnetic properties of a core material. This paper reveals the mechanism of the hysteresis loss increase due to compressive stress applied to a silicon steel sheet by conducting a simulation using the ADSM. A simple method of adjusting the simulated hysteresis loss to the measured loss is also proposed. By adjusting the hysteresis loss under a stress-free condition, the stress dependence of the hysteresis loss of a non-oriented silicon steel sheet is quantitatively reconstructed using the ADSM, where the stress-induced anisotropy strengthens the pinning effect along the stress direction. © 2017 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.4993661
I. INTRODUCTION
The magneto-mechanical interaction in iron-core materials has been studied intensively to take into account the magnetic deterioration caused by compressive stress in motor cores. [5][6][7] Phenomenological modeling of the magnetization process is not useful for representing the magneto-mechanical interaction because it requires parameter fitting to measured properties, yet magnetic measurement under an arbitrary vector/tensor combination of magnetization and stress directions is practically difficult. Accordingly, a physical magnetization model is required to predict the stress dependence of hysteresis loss without magnetic measurements under mechanical stress and subsequent model parameter fitting.
Several physical multiscale models 2,3,8 have successfully been used to predict the permeability decrease dependent on mechanical stress. However, the prediction of the stress dependence of the hysteresis-loss property 1,2,11 remains a challenging task because the physical modeling of the pinning field is an open problem.
The assembled domain structure model (ADSM) 3,4 is a physical magnetization model that includes the pinning effect on the crystal-grain scale. The model parameters are given by material constants, such as the anisotropy constant and magnetostriction constants. The magneto-elastic energy causes magneto-mechanical interaction in the core material, yielding stress-dependent magnetic properties. 3 The pinning field is simulated under an assumption of a statistical distribution of pinning sites. 3 The main purpose of this paper is to reveal the mechanism of the loss increase due to the compressive stressing of a silicon steel sheet on the basis of simulation results obtained using the ADSM. A simple method of adjusting the simulated hysteresis loss using the ADSM to the measured loss is also proposed.
II. ADSM WITH A PINNING FIELD
The ADSM 3 (II.A) and pinning model 4 (II.B and C) are briefly explained.
A. ADSM
The ADSM is a multiscale model in which the macroscopic magnetization is constructed by assembling mesoscopic cells called simplified domain structure models (SDSMs) (Fig. 1). An SDSM has six domains corresponding to the three easy axes of cubic anisotropy. The magnetization state in each cell is represented by the volume ratios r_i and the magnetization vectors m_i = (sinθ_i cosφ_i, sinθ_i sinφ_i, cosθ_i) (i = 1, ..., 6) of the six domains. The variable vector in a cell j is denoted x_j = (θ_1, ..., θ_6, φ_1, ..., φ_6, r_1, ..., r_5) (r_6 = 1 - r_1 - ... - r_5). The variable vectors x_j (j = 1, 2, ...) are determined so as to give a local minimum of the total magnetic energy e, which consists of the Zeeman energy, crystalline anisotropy energy, magnetostatic energy, and magnetoelastic energy. For convenience of formulation, the energy components are normalized by the crystalline anisotropy constant, while the magnetic field is normalized by the anisotropy field. For example, the normalized magnetoelastic energy of domain i is given as

e_me,i = -(σ/K) [ (3/2)λ_100 (α_{1,i}^2 γ_1^2 + α_{2,i}^2 γ_2^2 + α_{3,i}^2 γ_3^2) + 3λ_111 (α_{1,i}α_{2,i}γ_1γ_2 + α_{2,i}α_{3,i}γ_2γ_3 + α_{3,i}α_{1,i}γ_3γ_1) ],

where σ is the stress, λ_100 and λ_111 are the magnetostriction constants, K is the crystalline anisotropy constant, and (α_{1,i}, α_{2,i}, α_{3,i}) and (γ_1, γ_2, γ_3) are the direction cosines of the magnetization vector of domain i and of the stress σ, respectively, with respect to the three easy axes of cubic anisotropy.
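A small Python sketch of this energy term, written from the standard cubic magnetostriction expression above; the numerical constants in the example call are placeholders, not the material data used in the paper.

def normalized_magnetoelastic_energy(alpha, gamma, sigma, lam100, lam111, K):
    # alpha: direction cosines of the domain magnetization w.r.t. the easy axes
    # gamma: direction cosines of the applied stress w.r.t. the easy axes
    a1, a2, a3 = alpha
    g1, g2, g3 = gamma
    e = (-1.5 * lam100 * sigma * (a1**2 * g1**2 + a2**2 * g2**2 + a3**2 * g3**2)
         - 3.0 * lam111 * sigma * (a1*a2*g1*g2 + a2*a3*g2*g3 + a3*a1*g3*g1))
    return e / K  # normalized by the crystalline anisotropy constant

# Example with placeholder values: 10 MPa compression along [100].
print(normalized_magnetoelastic_energy((1, 0, 0), (1, 0, 0), -10e6, 2e-5, -1e-5, 3.5e4))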
B. Distribution of the pinning field
Suppose that the macroscopic relationship between the normalized average magnetization m and the applied field h is represented as

h = h_ah(m) + h_p(m),

where h_ah(m) represents the anhysteretic magnetization curve and h_p(m) is the pinning field. The anhysteretic field is determined by the Zeeman energy, crystalline anisotropy energy, magnetostatic energy, and magnetoelastic energy. The pinning field is additionally required to move a domain wall against the friction generated by the pinning sites. The distribution of pinning sites is defined by a density function f(p) satisfying

∫_0^∞ f(p) dp = 1,

where p is the pinning strength. The magnetization proceeds with domain wall motions passing through pinning sites. When the magnetization proceeds from the demagnetized state, a domain wall moves when h_p = h - h_ah exceeds p; accordingly, the magnetization is given by the cumulative contribution of the pinning sites whose strength p has been exceeded. The density function f(p) is determined by the density of impurities or defects in the grains. This paper simply uses a Gaussian distribution as the density function. For convenience of simulation, the relation (3) between h_p and m is reformulated using the scalar stop model, 9 which takes m as input and outputs the hysteretic pinning field. The stop model is identified from the MH loops generated by the pinning model above.
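A minimal sketch of a single stop hysteron in Python, the building block used to represent h_p as a hysteretic function of m; the full pinning model superposes many such hysterons with heights weighted according to the density f(p) (a Gaussian in this paper), and the discretization below is illustrative.

def stop_hysteron(m_sequence, eta, s0=0.0):
    # Stop operator: the state follows the input increment, clipped to [-eta, eta].
    s, m_prev, out = s0, 0.0, []
    for m in m_sequence:
        s = max(-eta, min(eta, s + (m - m_prev)))
        out.append(s)
        m_prev = m
    return out

# A coarse pinning field: weighted superposition over pinning strengths p_j,
# with weights w_j proportional to f(p_j) (normalized so they sum to 1).
def pinning_field(m_sequence, strengths, weights):
    branches = [stop_hysteron(m_sequence, p) for p in strengths]
    return [sum(w * b[k] for w, b in zip(weights, branches))
            for k in range(len(m_sequence))]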
C. Application to the ADSM
In the ADSM, the pinning field is added cell by cell. The pinning field h_p is additionally required to change the volume ratio r_i in a cell, corresponding to the domain wall motion. The energy minimization procedure requires the terms ∂e/∂r_i to decrease the total energy by changing r_i. To consider the pinning effect, the pinning energy e_p in every cell is added to e, where ∂e_p/∂r_i is the pinning field h_p in the cell. Accordingly, the effective field ∂e_p/∂r_i represents the force of friction acting on the domain wall.
For simplicity, suppose that the magnetization proceeds with 180° domain wall motion between two domains i and i' in a cell. The normalized magnetization along direction i in the cell is then given by m_cell = 2r_i - 1. Accordingly, the pinning field generated by the domain wall motion i is given as h_p(m_cell) = h_p(2r_i - 1). Thus, ∂e_p/∂r_i is expressed in terms of h_p(2r_i - 1). The terms ∂e_p/∂φ_i and ∂e_p/∂θ_i are set to zero.
D. Adjustment of hysteresis loss
This paper proposes a method for adjusting the simulated hysteresis loss to the measured loss using a weighting function of the stop model. 10 In the stop model with a weighting function, the hysteretic output S(m) of the stop model is scaled by a weighting function w(m). Ref. 10 introduced an iterative method to determine the weighting function as

w_{k+1}(m) = w_k(m) L_measured(m) / L_simulated(m),

where k is the iteration number, L_simulated is the hysteresis loss for amplitude m computed using the ADSM with w_k(m)S(m), and L_measured is the measured loss. For simplicity, this paper sets w(m) = w_1(m), which means that the adjustment of the loss is not very accurate. Using w(m), the pinning field is described as

h_p(m) = w(m)S(m).

This adjustment is a kind of parameter fitting using loss measurements. This paper aims to predict the hysteresis loss under mechanical stress from the hysteresis loss without stress. Hence, the weighting function is determined from the measured hysteresis loss without stress.
III. SIMULATION RESULTS

A. Stress-dependent magnetic property
The magnetization process of a NO silicon steel sheet is simulated using 8 × 8 × 1 cells, where the unit cell dimension ratio is 1:1:… Figure 2 shows the simulated hysteresis loss at 50 Hz with and without mechanical stress along the rolling direction (RD). The weighting function improves the representation of hysteresis loss above 1 T under the stress-free condition. Quantitative agreement of the loss increase due to compressive stress is obtained. Figure 3 shows BH loops with and without stress; the simulated loops roughly agree with the measured loops, and the decrease in permeability due to compressive stress is predicted by the ADSM. Figure 4 shows the dependence of the hysteresis loss on the applied stress along the RD. The weighting function improves the loss representation at 1.5 T. If the simulated hysteresis loss without stress agrees with the measured loss, the loss increase due to compressive stress is accurately represented. Tensile stress does not affect the loss appreciably.
B. Dependence of hysteresis loss on mechanical stress
Equation (9) has unidirectional anisotropy, where the direction of compressive or tensile stress becomes the hard or easy axis of magnetization, respectively. Figure 5 shows the distribution of magnetization directions in the domains of the 8 × 8 cells with and without stress, where the distribution of the angle difference β between the magnetization direction and the RD is shown. In the demagnetized state (m = 0), the distribution of β is almost symmetric with respect to β = π/2, so that the average magnetization is cancelled by pairs of domains having anti-parallel magnetization. When compressive stress is applied along the RD, the magnetization is nearly perpendicular to the RD in the demagnetized state because of the stress-induced anisotropy. Under compressive stress, the magnetization also proceeds with 180° domain wall motion roughly within 60° ≤ β ≤ 120° when the magnetization is small. Figure 6 plots the difference in the distribution of β between µ0M = 0 and 0.4 T, where the distribution is almost point-symmetric with respect to (π/2, 0), showing a shift of the distribution from β to π - β. When the magnetization becomes large, there is a shift of the distribution from β to nearly β - π/2, which corresponds to 90° domain wall motion. The compressive stress thus increases the 180° domain wall motion roughly within π/3 ≤ β ≤ 2π/3 and the 90° domain wall motion, which strengthens the pinning field as follows.
If the magnetization proceeds with 180° domain wall motion between two domains having magnetization angles of β and β + π and volume ratios of r and 1 - r, the magnetization along the RD is

m_cell = r cos β + (1 - r) cos(β + π) = (2r - 1) cos β.

The domain wall motion ∆r = ∆m_cell/(2 cos β) increases if β is near π/2, which results in a strong pinning field. If the magnetization proceeds with 90° domain wall motion between two domains having magnetization angles of β and β ± π/2, for example, the magnetization along the RD is

m_cell = r cos β + (1 - r) cos(β ± π/2) = r cos β ± (r - 1) sin β.
The corresponding domain wall motion ∆r = ∆m_cell/(cos β ± sin β) exceeds that of 180° domain wall motion when β ≈ 0. The ADSM can predict the loss increase due to compressive stress because it describes the effect of stress on the magnetization state at the crystal-grain scale, depending on the distributed crystal orientations.
IV. CONCLUSION
The mechanism of hysteresis loss increase due to compressive stress was discussed according to simulation results obtained using the ADSM. The dependence of hysteresis loss on the applied stress was predicted using the ADSM, and agrees with the measured loss property quantitatively. The simulated distribution of magnetization angles shows that the compressive stress suppresses the development of magnetic domains having magnetization nearly parallel/antiparallel to the stress direction. As a result, the compressive stress causes mechanically induced anisotropy that increases the hysteresis loss by strengthening the pinning field along the compressed direction in the process of domain wall motion.
ACKNOWLEDGMENTS
This work was supported in part by the Japan Society for the Promotion of Science under Grant-in-Aid for Scientific Research (C) Grant No. 26420232.
"Physics",
"Engineering"
] |
Series DC Arc Fault Detection Using Machine Learning Algorithms
The wide variety of arc faults induced by different load types renders residential series arc fault detection complicated and challenging. Series DC arc faults can cause fire accidents and adversely affect power systems if not promptly detected. However, in practical power systems, they are difficult to detect because of the low arc current, the absence of a zero-crossing period, and the various abnormal behaviors associated with different types of power loads and controllers. In particular, conventional protection fuses may not be activated when they occur. Undetected arc faults can cause false operation of power systems and potentially lead to property damage and human casualties. Therefore, it is imperative to develop a detection system for series arc faults in DC systems for their reliable and efficient operation. In this study, several typical loads, especially nonlinear and complex loads such as power electronic loads, were chosen and analyzed, and five time-domain parameters of the current (the average value, median value, variance value, RMS value, and distance between the maximum and minimum values) were chosen for arc fault detection. Various machine learning algorithms were used for arc fault detection, and their detection accuracies were compared.
I. INTRODUCTION
Recently, renewable energy has drawn attention owing to its advantages, such as green production techniques and low carbon dioxide emissions, and studies have been conducted on integrating renewable sources into existing power networks [1]-[4]. Although DC power systems are becoming an essential part of renewable energy systems, they have some inherent challenges; in particular, the occurrence of arc faults is one of the most critical problems. Arc faults can produce high temperatures, intense light, and noise. Hence, they can set surrounding materials on fire and thereby cause economic loss [5]. There are two main types of arc faults in DC power systems: series and parallel arc faults [6], [7]. A series arc fault is generated by the disconnection of a conductor in transmission power lines, whereas a parallel arc fault results from insulation breakdown between two or more parallel lines because of an external force or heat. For the safety of DC systems, it is vital to detect arc faults promptly; therefore, arc fault detection techniques are essential. Generally, parallel arc faults excite distinctly different current flows, and the rapid increase in the arc fault current can be eliminated by using devices such as fuses. Series arc faults act like an additional impedance in the system and cause the current to decrease; consequently, conventional protection devices cannot be activated [8]. If not detected and eliminated promptly, series arc faults can affect the system's related circuits, damage the power supply sources and system controller, and even cause explosions. In DC networks, most components are connected through electronic circuits or converters, and the electromagnetic distortion noise produced by electronic converters renders arc fault detection more challenging. Therefore, several approaches to detect series arc faults have been proposed, and numerous studies have been conducted to analyze the characteristics of series arc faults.
There have been several lines of research on arc fault detection. Mathematical models of arc faults have been developed using experimental data [9]-[13]. However, these models do not comprehensively describe the external characteristics of the arc and are suitable mainly for theoretical investigations. Furthermore, arc fault detection methods based on the physical characteristics of arc faults, such as intense light, high temperature, considerable distortion noise, and high electromagnetic radiation, have been developed in [14]-[20]. However, their major drawback is their inability to correctly locate the positions of arc faults [21]. With the advancement of information technology, artificial intelligence (AI) methods have become popular and offer potential techniques for fault diagnosis in various areas, such as high-impedance fault detection in medium-voltage networks [22], failure detection in electrical machines [23], and track circuit fault detection in railway systems [24]. Several recent studies have achieved promising results for DC series arc fault detection with AI-based methods, such as the combined use of a support vector machine (SVM) and wavelet packet decomposition [25], the use of a hidden Markov model (HMM) to obtain the maximum likelihood of series arc faults for correct detection [26], and the use of a cascaded fuzzy logic system in a photovoltaic system [27]. Numerous features, such as current variations and high-frequency energy, have been extracted and trained for series arc detection based on weighted least-squares SVM algorithms [28]. Furthermore, an attractor matrix constructed from current signals with feature extraction based on singular value decomposition was proposed in [29], sparse coding features and a neural network were combined for arc fault detection in [30], and the combination of domain adaptation and a deep convolutional generative adversarial network was presented in [31]. A report in [32] presents a comparison between various learning techniques in a DC photovoltaic system. Generally, these studies focus on only one control technique or one particular switching frequency for specific loads. On the other hand, the performance of AI algorithms is greatly affected by the operating conditions, and the effects of different operating conditions on arc detection remain an open question. There is a need for an overview study covering various load types, control techniques, input features, and switching frequencies.
In this paper, eight AI algorithms are executed and compared using five types of input parameters: the average value, median value, variance value, RMS value, and the difference between the maximum and minimum values [33]. The performance of each combination of AI algorithm and input feature is compared for different load types. This paper is organized as follows. Section 2 describes the experimental setup and how the current characteristics in the normal and arcing parts change in the time domain when a series arc occurs. Section 3 details the AI algorithms and the feature analysis techniques used for series arc detection in this study. Section 4 presents the detection results obtained using the eight AI algorithms and five input features in the enclosed and unenclosed cases when a series arc fault occurred, for different load types and operating frequencies.
Finally, the conclusion of arc fault detection according to the AI algorithms is presented in Section 5.

II. EXPERIMENTAL SETUP

Figure 1 shows a circuit diagram for obtaining series arc data. To obtain the data, we designed the arc-generating circuit according to UL1699B [34]. An arc was generated by separating the arc rods, and an oscilloscope was used to record the currents flowing through the rods before and after arcing. MATLAB was used to analyze the arc currents. The arc generation experimental setup comprised a DC power supply, an arc generator, and loads. An N8741A DC power supply (Keysight Technologies, USA) was employed in the experiment. Table 1 presents the specifications of the loads used in the experiment. The three-phase and single-phase inverters were constructed using insulated-gate bipolar transistor modules (SKM50GB123D, SEMIKRON, Germany). The switching frequency of the model predictive control (MPC) technique was variable; in this study, the switching frequency was the average switching frequency obtained from the number of times the switch was turned on and off in a specific interval. The current amplitude was the arc current magnitude before arcing. In the case of inverter loads, the arc current before and after arcing was the inverter's input current [35]. As shown in Figure 1, a DC voltage was supplied to the load. Subsequently, the step motor, which was connected to the arc rods, was switched on to separate the arc rods. The data were sampled by an oscilloscope (Tektronix MSO3054, USA) at a 250 kHz sampling frequency. A Tektronix TCP312 (Tektronix, OR, USA) was used as the current probe to measure the arc current. The recorded data were split into smaller data sets of 2 ms for training and testing the AI algorithms.

Figure 2 presents the structures of the three-phase and single-phase inverters that were used as loads in this study. These inverters converted DC signals into AC signals, and during their operation, only one switch was connected in each phase leg at any given instant. This led to eight and four switching vectors for the operation of the three-phase and single-phase inverters, respectively. This study employed space vector pulse width modulation (SVPWM), MPC, and sinusoidal pulse width modulation (SPWM) to control the three-phase and single-phase inverters. SVPWM is a modulation technique for the control of pulse width modulation; the objective is to use a given DC voltage and control the six switches to emulate three-phase sinusoidal waveforms whose frequency and amplitude are adjustable. MPC is an advanced control method that selects switching states using dynamic models of the circuit while satisfying one or several constraints. SPWM is a typical PWM technique in which a sinusoidal AC voltage reference is compared with a high-frequency triangular carrier wave to determine the switching state of each leg in the inverter. Different control techniques were used to compare the arc detection performance under various conditions and thereby obtain an overview of the effectiveness of the different learning algorithms.

Figure 3 shows the normal-state and arcing-state waveforms for different loads. For all the loads, the shapes of the waveforms before arcing were similar. When an arc was generated, the waveforms showed many abnormal behaviors, such as the addition of harmonic components to the load current, distortion of the load current waveform, and a decrease in the current amplitude. The large-amplitude spikes in the current during the initial arcing state were caused by electrical sparks.
The magnitude of the electrical sparks could be large or small, depending on the type of load. The aforementioned abnormal behaviors could potentially be used for arc fault detection.
III. ARTIFICIAL INTELLIGENCE ALGORITHMS
1) SUPPORT VECTOR MACHINE

The SVM is based on a framework called Vapnik-Chervonenkis (VC) theory. Boser and colleagues later presented an algorithm that maximizes the margin between training data [36]. The SVM aims to find the best hyperplane that can separate data from two different classes with the maximum margin, and it can perform linear or nonlinear classification based on the features of the data. The best classifier is the hyperplane with the largest margin.
2) K-NEAREST NEIGHBOR
Evelyn Fix and Joseph Hodges proposed the K-nearest neighbors (KNN) algorithm. The basic concept of this classification algorithm is that if most of the K most similar neighbors in the vicinity of an object belong to a specific class, the object also belongs to that class [37].
3) RANDOM FOREST
Random forest (RF) is an ensemble learning method for classification, regression, and other tasks; it constructs multiple decision trees during training and combines their outputs [38]. The forest pulls together the decision tree algorithms, taking advantage of the teamwork of many trees to improve on the performance of a single random tree.
4) Naïve BAYES
Naive Bayes (NB) is the simplest form of Bayesian network classifier [39]. Naïve Bayes classifiers require a number of parameters that is linear in the number of variables (features/predictors) of a learning problem. Naive Bayes is a typical method for constructing classifiers that assign class labels to objects represented as vectors of feature values.
5) DECISION TREE
One of the most popular classification models is the decision tree (DT) model. DTs are popular because they are practical and easy to understand. Furthermore, rules can be easily extracted from DTs [40]. Classification trees perform the classification function by using a top-down process that divides the input training data into smaller branches until a branch that shows the most appropriate label is reached.
The DT structure consists of a root, several nodes, branches, and leaves (also known as decision points), which are class labels.
6) DEEP NEURAL NETWORK
The deep neural network (DNN) structure takes n parameters as inputs, and these inputs are passed through a network composed of N layers to obtain the final result. The state of the first layer is

h_1 = f(W_1^T X + b_1),

where X is the input vector, W_1^T is the weight matrix of the first layer, b_1 is the bias of the first layer, f is the activation function, and h_1 is the output of the first layer, which is transferred to the second layer; this computation is repeated over the N layers. The primary learning method of multilayer artificial neural networks is to evaluate one epoch and then use the error to update the weights and biases so as to reduce the error of each layer; this method is called error backpropagation. There are different types of neural networks, but all of them consist of the same components: neurons, weights, biases, and activation functions. These components function similarly to the human brain and can be trained like any other machine learning algorithm. In a fully connected (FC) layer, the neurons are connected to all neurons of the previous layer; this layer type has the simplest structure and plays an important role in connecting all neurons of the preceding and following layers. If all neurons of all layers are fully connected, the neural network is called a DNN.
7) LONG SHORT-TERM MEMORY
Long short-term memory (LSTM) belongs to the family of recurrent neural network (RNN) algorithms. An LSTM unit has three gates: a forget gate, an input gate, and an output gate. These gate structures achieve efficient feedback of relevant information through selective forgetting and memory mechanisms, allowing the network to better approximate complex time-varying nonlinear functions. In LSTM, the long-term memory and the short-term memory are controlled separately. Equation (2) shows the output of each LSTM neuron:

o_t = σ(W_o·[h_{t-1}, x_t] + b_o), h_t = o_t ⊙ tanh(C_t), y_t = h_t, (2)

where σ is the sigmoid activation function, h_{t-1} and h_t are the short-term memory states at the previous moment and at present, respectively, C_{t-1} and C_t are the long-term memory states at the previous moment and at present, respectively, W_o and b_o are the weight and bias of the current LSTM cell, respectively, x_t represents the input data, which come from another LSTM cell, and y_t represents the output data, which are sent to another LSTM cell.
8) GATED RECURRENT UNIT
Gated recurrent unit (GRU) is also an RNN algorithm. Unlike LSTM, the long-term memory and short-term memory are combined in GRU. There are two main gates in GRU, namely the update and reset gates. The function of the update gate is to control how much previous state information flows into the current state. The role of the reset gate is to control the degree to which the state information of the last moment is ignored. Equation (3) shows the output of each GRU neuron:

h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ ĥ_t, (3)

where h_{t-1} and h_t are the memory states at the previous and present time, respectively, z_t and ĥ_t are the update gate vector and the candidate activation vector, respectively, and x_t is the current input.
where h t−1 and h t are the memory states at the previous and present time, respectively. z t ,ĥ t are the update gate vector and candidate activation vector, and x t is the current input. Table 2 shows the layer structures of three deep learning techniques (DNN, LSTM, and GRU). DNN had four FC layers, and the number of neurons in layers 1, 2, 3, and 4 were 4, 5, 5, and 2, respectively. LSTM and GRU had five layers, and the number of neurons in layers 1, 2, 3, 4, and 5 were 16,16,8,8, and 2, respectively. The second and fourth layers of LSTM are different from those of GRU. The properties of hidden layers such as the number of layers and neurons are chosen by the trial and error method. The present structures showed the best performance among various structures. However, there may be other suitable layer configurations.
B. INPUT PARAMETERS
A feature is a critical part of a machine learning implementation. A group of features can describe the original input data, though not represent it completely. Thus, using more features generally makes the ML algorithm more effective; however, if the number of features is too high, the classification performance can degrade or overfitting can occur. Several techniques can obtain features from the input data, such as the fast Fourier transform and the wavelet transform. However, these features pertain to the frequency domain, and their extraction requires a high sampling frequency and computational cost. In practical systems, these drawbacks could delay the processing time and affect accuracy when arc faults occur. By contrast, features in the time domain can be extracted at a low sampling frequency, which allows fast computation. Therefore, time-domain features were utilized for arc fault detection in this study. The data were sampled at a 250 kHz sampling frequency, and the recorded data were split into smaller data sets of 2 ms for training and testing the AI algorithms. For each data set, the signal is processed to obtain one feature set of five values: the average, median, variance, RMS, and distance between the maximum and minimum currents. These feature sets were then used as inputs to the eight learning techniques to detect series DC arc faults.
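A sketch of this windowing and feature extraction with NumPy; at the 250 kHz sampling rate each 2 ms window holds 500 samples, and the column order below is an illustrative convention.

import numpy as np

def extract_features(current, fs=250_000, window_s=2e-3):
    current = np.asarray(current, dtype=float)
    n = int(fs * window_s)  # 500 samples per 2 ms window
    windows = current[:len(current) // n * n].reshape(-1, n)
    return np.column_stack([
        windows.mean(axis=1),                       # average value
        np.median(windows, axis=1),                 # median value
        windows.var(axis=1),                        # variance value
        np.sqrt((windows ** 2).mean(axis=1)),       # RMS value
        windows.max(axis=1) - windows.min(axis=1),  # max-min distance
    ])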
IV. SERIES DC ARC FAULT DETECTION USING ARTIFICIAL INTELLIGENCE ALGORITHMS
Arc detection using AI algorithms can be divided into two types: the first is when the test data of a category have already been trained on, and the second is when they have not. These cases are referred to as the enclosed and unenclosed types, respectively. Figure 4 shows the structure of the confusion matrix. CN and CA are the correctly predicted data sets for the normal and arcing states, respectively. MD indicates ''missing detection'' and refers to an arcing-state data set being predicted as the normal state. FD is ''false detection'' and refers to a normal-state data set being predicted as the arcing state. The numerals 0 and 1 signify the normal state and arcing state, respectively. To evaluate the performance of the AI algorithms, we used the following metrics. The dummy detection rate is the ratio between the number of normal-state data sets predicted as the arcing state and the total number of normal-state data sets:

% of Dummy Det. = (# of normal data sets predicted as arcing state) / (total # of normal data sets).
The missing detection rate is the ratio between the number of arcing state data sets predicted as the normal state and the total number of arcing state data sets. It is expressed as
% of Missing Det. = (# of arcing data sets predicted as normal state) / (total # of arcing data sets).
The accuracy detection rate is the ratio of the number of correctly predicted data sets to the total number of test data sets. It is expressed as
% of Total Acc. = (# of correctly predicted data sets) / (total # of test data sets).
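These three rates follow directly from the confusion matrix entries (CN, CA, MD, FD); a small sketch of how they could be computed from predicted labels is:

```python
import numpy as np

def detection_rates(y_true, y_pred):
    """Dummy, missing, and accuracy rates; labels: 0 = normal, 1 = arcing."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    normal = y_true == 0
    arcing = y_true == 1
    dummy = 100.0 * np.sum(normal & (y_pred == 1)) / np.sum(normal)    # FD rate
    missing = 100.0 * np.sum(arcing & (y_pred == 0)) / np.sum(arcing)  # MD rate
    accuracy = 100.0 * np.mean(y_true == y_pred)                       # (CN+CA)/total
    return dummy, missing, accuracy
```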
The best detection technique is the one with the lowest dummy and missing detection rates or the highest accuracy. The distribution of training and test data is shown in Figure 5. The data were divided into three groups. Group 1 consisted of current data of the three-phase inverter load with SVPWM control at current amplitudes of 3, 5, and 8 A, with switching frequencies varying from 5 to 20 kHz. Group 2 consisted of current data of the resistor load, the three-phase inverter load with MPC, and the single-phase inverter load with PWM control; the current amplitudes were 5 and 8 A, and the switching frequencies varied from 5 to 20 kHz. Group 3 consisted of current data of the three-phase inverter load with SVPWM control at current amplitudes of 4, 6, and 7 A and switching frequencies of 15 and 20 kHz. This group served as a neutral group: it was not used for training but only for testing. Figure 5(a) shows the distribution of training and test data for series DC arc detection for the enclosed type; 22,800 data sets of the normal and arcing states were assigned to the training data, and 18,400 data sets to the test data. The test data were excluded from the training data. Figure 5(b) shows the distribution of training and test data for series DC arc detection for the unenclosed type. There were two cases of this type, and for simplicity, they were named unenclosed type 1 and unenclosed type 2. In unenclosed type 1, the data in group 1 were used for training, and the data in groups 2 and 3 were used for testing. In unenclosed type 2, the data in group 2 were used for training, and those in groups 1 and 3 were used for testing. The ratio between the normal and arcing data sets in all training and test processes was 1:1.
Figure 6 shows dummy detection rates for the enclosed type when different DC current amplitudes and various loads were employed. In the case of the three-phase inverter load with SVPWM, the dummy detection rates of the five machine learning techniques were always lower than those of the three deep learning techniques in all frequency ranges at 3 A. At 5 A, the rates of the KNN, RF, NB, DT, and GRU techniques were lower than those of the remaining three learning techniques in all frequency ranges. At 8 A, the rates of KNN, RF, NB, and DT were lower than those of the other techniques at switching frequencies from 5 to 15 kHz, and higher at 20 kHz. In the case of the three-phase inverter load with MPC, all techniques showed similar performance except DNN for both 5 and 8 A current amplitudes in all frequency ranges. A similar trend was observed for the single-phase inverter and resistor loads. Generally, KNN, RF, NB, and DT showed high performance compared with SVM and the three deep learning techniques.
Figure 7 shows missing detection rates for the enclosed type when different DC current amplitudes and various loads were employed. In the case of the three-phase inverter load with SVPWM, the missing detection rates of KNN, RF, NB, and DT were lower than those of the other techniques in all frequency ranges, except NB at 3 A. In the case of the three-phase inverter load with MPC, all techniques showed similar performance except SVM for both 5 and 8 A current amplitudes in all frequency ranges. A similar trend was also observed for the single-phase inverter and resistor loads. Figure 8 shows accuracy detection rates for the enclosed type when different DC current amplitudes and various loads were employed.
In the case of the three-phase inverter load with SVPWM at 3 A, DT showed the highest accuracy and GRU the lowest at the 5 and 15 kHz switching frequencies. At 10 kHz, NB showed the highest accuracy and GRU the lowest. At 20 kHz, DT showed the highest accuracy and NB the lowest. Among the five machine learning techniques, DT showed the best performance and SVM had the lowest accuracy. Among the three deep learning techniques, DNN showed the best performance and GRU had the lowest accuracy. In terms of the frequency range, the machine learning techniques' accuracy increased as the frequency rose from 5 to 15 kHz, with their highest accuracy observed at 15 kHz. On the other hand, the deep learning techniques showed their highest accuracy at 20 kHz.
When the current amplitude was 5 A, KNN, RF, NB, and DT showed the highest accuracy at 5 kHz, and at 10, 15, and 20 kHz, KNN, RF, and DT showed the highest accuracies, respectively. GRU showed the lowest accuracy at 5, 10, and 15 kHz, and DNN showed the lowest accuracy at 20 kHz. RF showed the best performance among the five machine learning techniques, whereas SVM had the lowest accuracy; however, the differences among the accuracies of KNN, RF, NB, and DT were fairly small. Among the three deep learning techniques, LSTM showed the best performance and GRU had the lowest accuracy. In terms of the frequency range, the accuracies of SVM and the three deep learning techniques increased as the frequency rose from 5 to 20 kHz. At 20 kHz, all AI algorithms showed similarly high performance.
At a current amplitude of 8 A, NB showed the highest accuracy at 5, 10, and 15 kHz, and at 20 kHz, LSTM and GRU showed the highest accuracies. LSTM and GRU showed the lowest accuracies at 5, 10, and 15 kHz, and NB showed the lowest accuracy at 20 kHz. Among the five machine learning techniques, NB showed the best performance and SVM had the lowest accuracy; the differences among the accuracies of KNN, RF, NB, and DT were very small. DNN showed the best performance at 5, 10, and 15 kHz among the three deep learning techniques, and LSTM and GRU had the best accuracy at 20 kHz. In terms of the frequency range, the accuracies of SVM and the three deep learning techniques increased, whereas those of KNN, RF, NB, and DT decreased, as the frequency rose from 5 to 20 kHz.
In the case of the three-phase inverter load with MPC at 5 A, all techniques except DNN showed high accuracy (above 99%) in all frequency ranges. SVM had the lowest accuracy; however, the accuracy difference between SVM and the other machine learning techniques was minimal. When the current amplitude was 8 A, GRU showed the highest accuracy at the 5 kHz switching frequency, and DT showed the highest accuracy at 10, 15, and 20 kHz. DNN showed the lowest accuracy in all frequency ranges. KNN, RF, NB, and LSTM also showed high performance (above 91%) in all frequency ranges. SVM showed mediocre performance compared with the other techniques at the 5 and 10 kHz switching frequencies. The accuracies of SVM, KNN, RF, and NB increased as the switching frequency rose from 5 to 20 kHz.
On the other hand, the accuracies of DT, DNN, LSTM, and GRU increased as the switching frequency rose from 5 to 15 kHz but decreased slightly at 20 kHz. In the case of the single-phase inverter load, the performance of all techniques was high (about 97%). DNN showed the best performance in all frequency ranges, and NB showed the lowest accuracy; however, the differences among the accuracies of all techniques were minimal. When the resistor load was used, the performance of all techniques was high (almost 100%), and the arc fault could be detected correctly without missing or false detections, except for NB at 5 A.
A. ENCLOSED TYPES
Generally, when the data of a given condition have been included in training before testing, all AI techniques achieve high performance for different current levels, load types, and switching frequencies. In the training process, the unique characteristics of each specific condition are learned, and they can subsequently be used to detect arc faults correctly.
B. UNENCLOSED TYPES
This section compares the detection rates with regard to the input parameters and AI structures for the unenclosed types. As mentioned, this type was divided into two cases, with different data sets trained and tested in each case.
Figure 9 shows dummy detection rates for unenclosed type 1 when different DC current amplitudes and various loads were employed. In the case of the three-phase inverter load with MPC, the dummy detection rates of the KNN, RF, and NB techniques were always lower than those of the three deep learning techniques at 5, 15, and 20 kHz. A similar trend was observed for KNN, RF, NB, and DT when the current amplitude was 8 A. In the case of the single-phase inverter load, NB and LSTM showed the lowest rates, whereas SVM, KNN, RF, DT, and DNN reached the maximum rate (about 100%) at 5 A in all frequency ranges. All techniques showed the same performance for both 5 and 8 A current amplitudes when the resistor load was employed. In the case of the three-phase inverter load with SVPWM, KNN, RF, LSTM, and GRU showed high dummy detection rates at the 4, 6, and 7 A current amplitudes in all frequency ranges. On the other hand, SVM, NB, and DT showed the best detection rates, except for NB at 6 A and DT at the 4 A current amplitude.
Figure 10 shows missing detection rates for unenclosed type 1 when different DC current amplitudes and various loads were employed. In the case of the three-phase inverter load with MPC, the performances of all techniques were similar at the 5 A current amplitude in all frequency ranges. A similar trend was observed for SVM, DNN, LSTM, and GRU at the 8 A current amplitude, whereas RF, NB, and DT hit the maximum rate (100%) in all frequency ranges. In the case of the single-phase inverter load, SVM, KNN, RF, DT, DNN, and GRU showed the best rates, whereas NB and LSTM reached the maximum rate. When the resistor load was employed, all techniques reached the maximum rate for both 5 and 8 A current amplitudes, except NB at 5 A. For the three-phase inverter load with SVPWM, KNN, RF, LSTM, and GRU showed high performance in all frequency ranges; the other techniques showed mediocre performance, and NB hit the maximum rate at the 4 and 7 A current amplitudes.
Figure 11 shows accuracy detection rates for unenclosed type 1 when different DC current amplitudes and various loads were employed. The accuracies of the KNN, RF, and NB algorithms were higher than those of the three deep learning algorithms at the 5 A current amplitude and the 5, 15, and 20 kHz switching frequencies. At 5, 15, and 20 kHz, KNN showed the highest accuracy, and at 10 kHz, the SVM and GRU techniques showed the highest accuracies. At 5 and 20 kHz, DNN showed the lowest accuracy, and at 10 and 15 kHz, NB and LSTM showed the lowest accuracies, respectively. Among the five machine learning techniques, KNN showed the best performance. The differences among the accuracies of KNN, RF, and NB were very small in the 5 and 15 kHz frequency ranges. Among the three deep learning techniques, GRU showed the best performance in all frequency ranges and DNN showed the lowest accuracy. In terms of the frequency range, the accuracies of all techniques decreased as the frequency rose from 5 to 20 kHz. When the current amplitude was 8 A, the KNN algorithm showed the highest accuracy in all frequency ranges, whereas RF, NB, and DT showed poor performance in all frequency ranges. The accuracy detection rates of SVM and the three deep learning techniques were similar to one another in all frequency ranges.
Among the five machine learning techniques, KNN showed the best performance, and RF, NB, and DT had the lowest accuracy. Among the three deep learning techniques, LSTM and GRU showed the best performance in all frequency ranges. In terms of the frequency range, the accuracies of all techniques except RF, NB, and DT increased from 5 to 15 kHz; the accuracies of RF, NB, and DT remained constant. The best performance of SVM, KNN, DNN, LSTM, and GRU was observed at 15 kHz. In the case of the single-phase inverter load, GRU showed the highest accuracy in all frequency ranges. However, at 10 kHz, the highest accuracy was only 88.25%, almost 10% lower than for the enclosed type. The other techniques showed poor performance in all frequency ranges. When the resistor load was employed, the performance of all techniques in both cases was poor, except for NB at 5 A; however, the accuracy of NB at 5 A was only 77.56%. It is surmised that the current characteristics of the resistor load were different from those of the single- and three-phase inverter loads. Thus, all learning models trained on inverter loads cannot detect the arc fault with high accuracy. In the case of the three-phase inverter load with SVPWM, SVM showed the highest accuracy and DNN took second place at 4 A and the switching frequency of 15 kHz. By contrast, DNN had the highest accuracy and SVM took second place at the switching frequency of 20 kHz. The remaining algorithms showed poor performance at both 15 and 20 kHz. When the current amplitude was 6 A, DNN showed the highest accuracy at the switching frequency of 15 kHz, and SVM showed the best performance at 20 kHz; the remaining algorithms, except for DT, showed poor performance at both 15 and 20 kHz. A similar trend at 7 A was observed for DNN and DT at the 15 and 20 kHz switching frequencies, respectively.
Figure 12 shows dummy detection rates for unenclosed type 2 when different DC current amplitudes and the three-phase inverter load with SVPWM were employed. SVM and NB showed the best rates, whereas KNN, RF, DT, LSTM, and GRU hit the maximum rate at 3 A in all frequency ranges. When the current amplitudes were 5 and 8 A, all techniques showed high performance, except DNN, and except KNN, RF, LSTM, and GRU at the 8 A current amplitude and 20 kHz switching frequency. When the current amplitudes were 4, 6, and 7 A, the dummy detection rates of all techniques were similar to those in unenclosed type 1, with small differences.
Figure 13 shows missing detection rates for unenclosed type 2 when different DC current amplitudes and the three-phase inverter load with SVPWM were employed. KNN, RF, DT, LSTM, and GRU showed the best rates in all frequency ranges at the 3 A current amplitude, whereas NB and SVM showed the highest and medium rates, respectively, in all frequency ranges. When the current amplitude was 5 A, DNN showed the best performance; a similar trend was observed for KNN, RF, and GRU at the 8 A current amplitude. In contrast, the other techniques showed similar performance for both 5 and 8 A current amplitudes in all frequency ranges. When the current amplitudes were 4, 6, and 7 A, all techniques' missing detection rates were similar to those in unenclosed type 1, with small differences.
Figure 14 shows accuracy detection rates for unenclosed type 2 when different DC current amplitudes and the three-phase inverter load were employed. At 3 A, DNN showed the highest accuracy at the switching frequencies of 5, 15, and 20 kHz, and SVM showed the best performance at the switching frequency of 10 kHz.
The other AI techniques showed poor performance in all frequency ranges. When the DC current amplitude was 5 A, GRU showed the best performance at 5 kHz, and DNN showed the highest accuracy at 15 kHz. Furthermore, DT showed the highest accuracy at the switching frequencies of 10 and 20 kHz. The accuracies of the LSTM and GRU techniques fluctuated with increasing switching frequency, whereas the accuracies of the remaining techniques increased with the switching frequency. When the DC current amplitude was 8 A, GRU showed the highest accuracy at the switching frequencies of 5, 10, and 15 kHz, and DNN exhibited the best performance at 20 kHz. NB showed the lowest accuracy in all switching frequency ranges. The accuracies of KNN, RF, and GRU decreased with increasing switching frequency, whereas those of SVM, DT, and LSTM increased with the switching frequency. When the DC current amplitudes were 4, 6, and 7 A, the performances were similar to those in unenclosed type 1.
Clearly, the performance of all techniques for the enclosed type was generally higher than that for the unenclosed type. The poor performance for the unenclosed type shows that the load type and control technique can affect the accuracy of arc fault detection when the new data have not been trained. Therefore, training on new data is essential whenever a new load or new operating conditions are introduced, in order to achieve high performance in DC series arc fault detection.
V. CONCLUSION
Using eight types of AI algorithms and various input parameters, combinations suitable for arc detection were examined. In the case of the DC series arc enclosed type, KNN, RF, NB, and DT showed high performance in all switching frequency ranges. SVM, DNN, LSTM, and GRU showed mediocre performance at low switching frequencies such as 5 and 10 kHz, but high performance at high switching frequencies such as 15 and 20 kHz. At high switching frequencies, the differences in accuracy among all AI techniques are very small. Generally, the performance of all AI algorithms increases with frequency. A higher frequency may increase the useful information in each data set; thus, the accuracy can be improved.
In the case of DC series arc unenclosed type 1, the performance of all techniques is significantly degraded, especially for the resistor load. It is surmised that the current characteristics of the resistor load are different from those of the single- or three-phase inverter loads. Thus, all learning models trained on inverter loads cannot detect the arc fault with high accuracy. In the case of DC series arc unenclosed type 2, similarly to unenclosed type 1, the performance of all AI techniques is low. In both unenclosed types 1 and 2, the performance of all techniques at high switching frequencies is higher than at low switching frequencies. The poor performance in the unenclosed types shows that different load types or control techniques can affect the accuracy of arc fault detection when the new data are not trained. Therefore, training on new data whenever a new load or new operating conditions are introduced is essential for achieving high performance in DC series arc fault detection.
The performance of the machine learning techniques was better than that of the deep learning techniques in the low-frequency ranges. Deep learning techniques usually process raw data and do not require feature extraction to obtain high accuracy; some useful information might be lost during the feature extraction, resulting in the poor performance of the deep learning techniques here. In addition, deep learning techniques consist of many neurons and layers, which can increase the computational cost and execution time. This can be a drawback for practical applications, where cost and reliability are priorities.
Machine learning techniques showed high performance at low switching frequencies, require only small data sets, and are simple to implement. However, their drawback is the need for feature extraction to maintain high detection rates. On the other hand, deep learning techniques do not require any feature extraction to obtain high accuracy, but they require large data sets and a high computational cost owing to their deeper structures. The depth and width of the layers in the deep learning algorithms (DNN, LSTM, GRU) were chosen by trial and error. Many tests are required to find the best-performing configuration, and there is no way to guarantee that the chosen depths and widths yield the best performance. For example, the optimal depth and width of the layers can be different if the number of trials is different. Furthermore, the performance of the deep learning algorithms varies with the operating conditions (current amplitude, load type, and load converter frequency). This means that the optimal depth and width of the layers in one case are not optimal in other cases. As shown in the detection results, the deep learning algorithms showed higher performance than the other AI techniques in several cases, whereas their performance was poor or mediocre in others.
It is interesting to note that no AI technique showed high performance in all test cases. Some techniques achieved high accuracies under specific conditions, whereas their accuracies were poor otherwise. It is therefore recommended that several AI techniques be combined for arc fault detection, to improve accuracy and maintain high performance under different operating conditions. Furthermore, all the AI algorithms can also be used to detect AC arc faults. However, the input features should differ from those for DC arc fault detection because of the difference in arc current characteristics, such as zero-crossing periods (flat shoulders) and high-frequency harmonic components. Several frequency-domain analysis techniques, such as the fast Fourier transform and the wavelet transform, are helpful for extracting such features. This study offers a specific view of the different learning techniques. It may be helpful for selecting learning techniques and can assist in building more robust and reliable systems when implementing an arc fault detection system under different priorities. | 9,380 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Existence of dissipative solutions to the compressible Navier-Stokes system with potential temperature transport
We introduce dissipative solutions to the compressible Navier-Stokes system with potential temperature transport motivated by the concept of Young measures. We prove their global-in-time existence by means of convergence analysis of a mixed finite element-finite volume method. If a classical solution to the compressible Navier-Stokes system with potential temperature transport exists, we prove the strong convergence of numerical solutions. Our results hold for the full range of adiabatic indices including the physically relevant cases in which the existence of global-in-time weak solutions is open.
Introduction
We consider a compressible viscous Newtonian fluid that is confined to a bounded domain Ω ⊂ R d , d ∈ {2, 3}. Its time evolution is governed by the following system:
∂ t ̺ + div x (̺u) = 0 in (0, T ) × Ω, (1.1)
∂ t (̺u) + div x (̺u ⊗ u) + ∇ x p(̺θ) = div x S(∇ x u) in (0, T ) × Ω, (1.2)
∂ t (̺θ) + div x (̺θu) = 0 in (0, T ) × Ω. (1.3)
Here ̺ ≥ 0, u, p and θ ≥ 0 stand for the fluid density, velocity, pressure, and potential temperature, respectively. The viscous stress tensor S(∇ x u) is given by
S(∇ x u) = µ (∇ x u + ∇ x u ⊤ − (2/d) div x u I) + λ div x u I, (1.4)
where µ and λ are viscosity constants satisfying µ > 0 and λ ≥ −(2/d) µ. Denoting by γ > 1 the adiabatic index, the pressure state equation reads p(̺θ) = a(̺θ) γ , a > 0 . (1.5) This type of Navier-Stokes equations is often used in meteorological applications; see, e.g., [1] and the references therein. System (1.1)-(1.5) governs the motion of viscous compressible fluids with potential temperature, where diabatic processes and the influence of molecular transport on the potential temperature are excluded. Only potential entropy stratification in the initial data is imposed. We refer the reader to Feireisl et al. [2], where the singular limit in the low Mach/Froude number regime of the above Navier-Stokes system with γ > 3/2 was analyzed. For γ > 9/5, Bresch et al. [3] showed that the low Mach number limit of the considered system is the compressible isentropic Navier-Stokes equation. In [4], Lukáčová-Medvid'ová et al. use a slightly more complex version of the above system as the basis for their cloud model; see also Chertock et al. [5], where uncertainty quantification was investigated. Due to the link between the potential temperature and the entropy, system (1.1)-(1.5) is often referred to in the literature as the Navier-Stokes system with entropy transport. To avoid any misunderstanding, we call it in the present paper the Navier-Stokes system with potential temperature transport.
In the literature, we can find several existence results for the Navier-Stokes system (1.1)-(1.5). The question of stability of weak solutions for γ > 3/2, d = 3 was analyzed by Michálek [6]; see also [7], where the stability of weak solutions for the compressible Navier-Stokes equations with a scalar transport was studied for γ > 9/5 by Lions. Under the assumption γ ≥ 9/5 in the case d = 3 and γ > 1 in the case d = 2, system (1.1)-(1.5) is known to admit global-in-time weak solutions; see Maltese et al. [8, Theorem 1 with T (s) = s γ ]. Note that in the aforementioned paper the authors work with the entropy s instead of the potential temperature θ; however, in their framework the specified choice of the function T yields s = θ. We point out that the physically relevant adiabatic indices γ lie in the interval (1, 2] if d = 2 and in the interval (1, 5/3] if d = 3.
A simpler model for viscous compressible fluid flow is the barotropic Navier-Stokes system with the state equation p = a̺ γ , a = const. The first global-in-time existence result for weak solutions of this system allowing general initial data was established in 1998 by Lions [7] for γ ≥ 3/2 if d = 2 and γ ≥ 9/5 if d = 3. In 2001, Feireisl, Novotný, and Petzeltová [9] extended Lions's result to the situation γ > 1 for d = 2 and γ > 3/2 for d = 3; see also Feireisl, Karper, Pokorný [10]. To date, the latter is the best available global-in-time existence result for weak solutions of the barotropic Navier-Stokes system. The main obstacle that hampers the derivation of an existence result for γ ≤ 3/2 in three space dimensions is the lack of suitable a priori estimates for the convective term ̺u ⊗ u. These difficulties are inherited by the full Navier-Stokes-Fourier system, which includes an energy equation, too. In [11], Feireisl and Novotný obtained the existence of global-in-time weak solutions for the Navier-Stokes-Fourier system. However, their result holds only for a very restrictive class of state equations. In particular, the natural example of the perfect gas law p = ̺θ is still open for the existence of weak solutions. In this context, we refer the reader to [12], where the complete Navier-Stokes-Fourier system for the perfect gas was studied in the context of generalized solutions.
The question of uniqueness of weak solutions remains open in general. However, we have a weak-strong uniqueness principle for the barotropic Navier-Stokes equations. It means that weak and strong solutions to the Navier-Stokes system emanating from the same initial data coincide; see, e.g., Feireisl, Jin, Novotný [13] or Feireisl [14].
In [15], Feireisl et al. introduced a new concept of generalized solutions to the barotropic Navier-Stokes system. They work with the so-called dissipative measure-valued (DMV) solutions that are motivated by the concept of Young measures. In this context, a DMV-strong uniqueness principle was established and the existence of global-in-time DMV solutions for a class of pressure state equations including the barotropic case with γ ≥ 1 was achieved. In our recent work [16], we have extended the DMV-strong uniqueness result to the Navier-Stokes system with potential temperature transport (1.1)- (1.5).
In [17, Chapter 13], Feireisl et al. give a constructive existence proof and demonstrate that DMV solutions to the barotropic Navier-Stokes system can also be obtained by means of a convergent numerical method that was originally developed by Karlsen and Karper [18], [19], [20], [21]. However, their result is based on the assumption that γ > 6/5 if d = 3 and γ > 8/7 if d = 2; for the three-dimensional case see also Feireisl and Lukáčová-Medvid'ová [22].
The goal of this paper is to introduce a concept of DMV solutions to the Navier-Stokes system with potential temperature transport and prove the global-in-time existence of such generalized solutions for all γ > 1 by analyzing the convergence of a suitable numerical scheme. To this end, we propose a new version of the mixed finite element-finite volume method of Karlsen and Karper [18]; see also [10], [17,Chapter 13], [22].
The paper is organized as follows: In Section 2, we introduce our notion of DMV solutions to the Navier-Stokes system with potential temperature transport and present our main result. Section 3 is devoted to the numerical method and the collection of its basic properties. In Section 4, we state a discrete energy equality for our method which serves as a basis for several stability estimates. The consistency of the numerical method is established in Section 5 and in Section 6 we conclude that any Young measure generated by the solutions to our numerical method represents a DMV solution to the Navier-Stokes system with potential temperature transport. In particular, we show that the numerical solutions converge weakly to the expected values with respect to the Young measure. The convergence of numerical solutions is strong as long as a strong solution of (1.1)-(1.5) exists.
Dissipative measure-valued solutions
Before defining dissipative measure-valued solutions to the Navier-Stokes system with potential temperature transport, we fix the initial and boundary conditions. The Navier-Stokes system with potential temperature transport (1.1)-(1.5) is endowed with the initial data
̺(0, ·) = ̺ 0 , (̺u)(0, ·) = ̺ 0 u 0 , (̺θ)(0, ·) = ̺ 0 θ 0 in Ω (2.1)
and the no-slip boundary condition
u| (0,T )×∂Ω = 0. (2.2)
We henceforth write Ω t = (0, t) × Ω whenever t > 0. Furthermore, P : [0, ∞) → R, P (z) = a z γ /(γ − 1), (2.3) denotes the pressure potential associated with the state equation (1.5). If {Y t,x } (t,x)∈Ω T is a parametrized probability measure (Young measure) acting on R d+2 , we write ⟨Y t,x ; g⟩ for the expectation of g with respect to Y t,x whenever g ∈ C(R d+2 ). Moreover, we tend to write out the function g in terms of the integration variables ( ̺, θ, u) ∈ R × R × R d ∼ = R d+2 : if, for example, g( ̺, θ, u) = ̺ u, then we also write ⟨Y t,x ; ̺ u⟩. We proceed by defining dissipative measure-valued solutions to the Navier-Stokes system with potential temperature transport (1.1)-(1.5).
and for which there exists a constant c ⋆ > 0 such that is called a dissipative measure-valued (DMV) solution to the Navier-Stokes system with potential temperature transport (1.1)-(1.5) with initial and boundary conditions (2.1) and (2.2) if it satisfies: [1] P(R d+2 ) denotes the space of probability measures on R d+2 .
We proceed by defining the relevant discrete function spaces. The space of piecewise constant functions is denoted by Q h = v ∈ L 2 (Ω) v| Ω\Ω h = 0 and v| K ∈ P 0 (K) for all K ∈ T h [4] .
The Crouzeix-Raviart finite element spaces are denoted by [4] P n (K) denotes the set of all restrictions of polynomial functions R d → R of degree at most n to the set K.
With these spaces we associate the projection Π V,h : Additionally, we agree on the notation
Mesh-related operators
Next, we define some mesh-related operators. We start by introducing the discrete counterparts of the differential operators ∇ x and div x . They are determined by the stipulations respectively. We continue by defining several trace operators. For arbitrary The convective terms will be approximated by means of a dissipative upwind operator. For where ε > 0 is a given constant, Remark 3.1. In the sequel, we tend to omit the letter σ in the subscripts and superscripts of the operators defined in Sections 3.2 and 3.3. In some places, we also suppress the letter h and the superscript in in the notation if no confusion arises.
Time discretization
In order to approximate the time derivatives, we apply the backward Euler method, i.e., the time derivative is represented by the backward difference quotient (s k h − s k−1 h )/∆t, where ∆t > 0 is a given time step and s k−1 h and s k h are the numerical solutions at the time levels t k−1 = (k − 1)∆t and t k = k∆t, respectively. For the sake of simplicity, we assume that ∆t is constant and that there is a number N T ∈ N such that N T ∆t = T .
Numerical scheme
We are now ready to formulate our mixed finite element-finite volume (FE-FV) method. where Remark 3.3. We note that our FE-FV method is a generalization of the scheme presented in [17,Chapter 13]. New ingredients are a modified upwind operator and the artificial pressure terms The latter are added to ensure the consistency of our method for values of γ close to 1, see Sections 4, 5.
Initial data
The initial data for the FE-FV method (3.2)-(3.4) are given as As a consequence of this stipulation, we observe that
Properties of the numerical method
We proceed by summarizing several properties of the FE-FV method (3.2)-(3.4).
If, in addition, there are constants 0 < c < c such that Proof. For the proof we refer the reader to Appendix A.3.
From Lemma 3.4 we easily deduce the following corollary.
starting from the discrete initial data (3.5) has the following properties:
Stability
We continue by discussing the stability of the FE-FV method (3.2)-(3.4) that follows from a discrete energy balance. For its derivation, we rely on the concept of (discrete) renormalization.
The same technique will be used to establish a discrete entropy inequality.
Discrete renormalization
In the sequel, we shall state renormalized versions of (3.2) and (3.3) that describe the evolution Together with suitable choices for the function b, the first two renormalized equations will help us to handle the pressure terms when deriving the discrete energy balance. The last equation will be used to establish the discrete entropy inequality.
Proof. The proof of assertion (i) can be found in [19,Lemma 5.1]. The main idea is to take 3) and to rewrite the results by means of basic algebraic manipulations, Gauss's theorem, and Taylor expansions.
Discrete energy balance
We now have all necessary tools at hand to establish the energy balance for our numerical method.
starting from the discrete initial data (3.5) and P the pressure potential introduced in (2.3). Denoting the discrete energy at the time level k ∈ N 0 by we deduce that Proof. The proof can be done following the arguments in [10, Chapter 7.5]. Therefore, we depict only the most important steps. First, taking Next, we observe that Moreover, by applying Lemma 4. (4.10) Plugging (4.6)-(4.10) into (4.5), we see that we have almost arrived at (4.4). Indeed, it only remains to show that which follows by direct calculations. This completes the proof.
Time-dependent numerical solutions and energy estimates
Next, we formulate appropriate stability estimates for the time-dependent numerical solutions introduced below.
that are piecewise constant in time by setting The most important stability estimates that can be obtained from the discrete energy balance (4.4) read as follows.
Corollary 4.4 (Stability estimates). Any solution
starting from the initial data (3.5) has the following properties: (4.14) Proof. The proof is provided in Appendix A.4.
Discrete entropy inequality
We conclude this section by stating a discrete entropy inequality. It is obtained by taking b = χ in Lemma 4.1(ii).
Consistency
The goal of this section is to establish the consistency of the FE-FV method (3.2)-(3.4).
The structure of the proof of Theorem 5.1 is essentially the same as that of [17,Theorem 13.2]. In particular, we will use similar tools. Apart from the estimates listed in Appendix A.1, we will need the following results.
Then the subsequent relations hold: Then [5] (5.8) Then (5.9) [5] In integrals of the form E(K) we consider the vector n σ in the definition of the trace operators (·) in,σ and (·) out,σ to be replaced by n K .
Remark 5.5. The formula in Lemma 5.3 also holds true when the dissipative upwind term is replaced by the usual upwind term and the last term on the right-hand side of the identity is canceled. The same applies to Corollary 5.4. Then and which follows from the fact that r ∈ Q h . Corollary 5.4 can be proven by applying Lemma 5.3 with Having all necessary tools at our disposal, we can approach the proof of Theorem 5.1. • Recall that the elements of Q h and V h vanish outside Ω h . This allows us to replace Ω h by Ω when appropriate.
The continuity equation.
Next, let us consider the second term on the left-hand side of (5.12). Using Lemma 5.3 with These terms can be further estimated as follows.
The proof of (5.3) can be done by repeating the proof of (5.2) with ̺ h and ̺ 0 h replaced by ̺ h θ h and ̺ 0 h θ 0 h , respectively.
From (3.4) we deduce that
Let us consider the first term on the left-hand side of (5.15). Due to the second estimate in . . , d}, as well as Remark A.1, Hölder's inequality, the third estimate in (4.11), and the fact that ∆t ≈ h, we have Next, we turn to the last three terms on the left-hand side of (5.15). It follows from Lemma 5.6 that Finally, let us examine the second term on the left-hand side of (5.15). Applying Corollary 5.4 with (s, w, g, ψ) = (̺ h u h , u h , ϕ h , ϕ)(t, ·), t ∈ [0, T ], as well as the estimates (A.7)-(A.9) and (A.12), we deduce that We continue by estimating the above terms.
The entropy inequality.
Taking χ = ln in Lemma 4.5, we deduce that where Now we may rewrite the first two integrals in (5.20) following the procedure used to handle the continuity equation. We arrive at Moreover, combining c ⋆ ≤ θ h ≤ c ⋆ with Hölder's inequality, the first estimate in (A.1), the second estimate in (4.11) and the first estimate in (4.13), we deduce that Finally, seeing that (by a computation similar to that in (5.14)) we have we may rewrite (5.21) as where α 3 = min{α 1 , ε − 1}. In particular, we can choose β = min{α 1 , α 2 , α 3 } = min ε − 1, 1−2δ 4 .
Convergence
We proceed by proving our main result, namely Theorem 2.3. (3.5). Here we suppose that the parameters satisfy (5.1).
Proof of Theorem
Due to the second estimate in (4.11), the first estimate in (4.13), the third estimate in (4.12), (A.15), and Corollary 3.
Taking into account the remaining estimates in (4.11)-(4.14) as well as the first estimate in (A.2) and passing to a subsequence as the case may be, we obtain that as h ↓ 0. Following the arguments given in [ Moreover, using Hölder's inequality, (A.15), the first estimate in (A.2), the assumption on the initial data, and Lemma A.2, we easily verify that (Ω) and f ∈ C 1 (0, ∞), (6.9) and as h ↓ 0.
• Applying measure-theoretic arguments to the viscous terms, we conclude that • Using the density of C ∞ c (Ω) in W 1,2 0 (Ω) as well as Gauss's theorem, we easily verify that In particular, we may rewrite (6.12) in the form
Potential temperature equation.
The potential temperature equation can be handled in the same manner as the continuity equation.
In particular, Consequently, (6.16) can be rewritten as It is easy to see that (6.17) also holds for test functions ϕ of the class Accordingly, we may use the dominated convergence theorem to extend the validity of (6.17) to test functions ϕ ∈ C 1 (Ω T ) d satisfying ϕ| [0,T ]×∂Ω = 0.
Entropy inequality.
Due to (6.1), (6.2), and (6.9), we may take the limit h ↓ 0 in (5.5). We obtain for all ψ ∈ C ∞ c ([0, T ) × Ω), ψ ≥ 0. By an approximation argument similar to that in the case of the continuity equation, the validity of (6.21) can be extended to test functions ψ ≥ 0 of the class C c ([0, T ) × Ω) ∩ W 1,∞ (Ω T ). In particular, we may consider test functions of the form ψ = φ τ,δ η, Consequently, The entropy inequality (2.8) follows by performing the limit δ ↓ 0 in the above inequality. For the limit process, we rely on Lebesgue's differentiation theorem as well as the dominated convergence theorem. This completes the proof of Theorem 2.3.
From the proof of Theorem 2.3 it follows that any Young measure generated by a sequence 4) represents a DMV solution to the Navier-Stokes system with potential temperature transport (1.1)-(1.5). Moreover, If there is a strong solution to system (1.1)-(1.5) for given initial data (̺ 0 , θ 0 , u 0 ), then we may use the DMV-strong uniqueness result established in [16] to strengthen the aforementioned convergence statement as follows.
Proof. Let (̺ h , θ h , u h ) h ↓ 0 be a sequence as described above. To prove Theorem 6.1, it suffices to show that every subsequence (̺ h⋆ , θ h⋆ , From the proof of Theorem 2.3 and the DMV-strong uniqueness principle established in [16] we deduce that there is a subsequence
Conclusions
In the present paper, we introduced DMV solutions to the Navier-Stokes system with potential temperature transport (1.1)-(1.5) and proved their existence. For the existence proof we examined the convergence properties of solutions to a mixed FE-FV method that is a generalization of the method developed for the barotropic Navier-Stokes equations; see [22], [17,Chapter 13], [10,Chapter 7]. In particular, we showed that any Young measure generated by a sequence represents a DMV solution to the Navier-Stokes system with potential temperature transport (1.1)-(1.5).
In order to ensure the validity of our existence result for all physically relevant values of the adiabatic index γ -that is, γ ∈ (1, 2] if d = 2 and γ ∈ (1, 5/3] if d = 3 -we added two artificial pressure terms to our method. In the case of values of γ close to 1, these terms provided us with sufficiently good stability estimates for the limit process. In the limit process itself, we profited from the generality of DMV solutions that allowed us to hide the terms arising from the artificial pressure terms in the energy concentration defect and the Reynolds concentration defect, respectively. The strategy of adding artificial pressure terms points out a flexibility of the DMV concept. Indeed, it would not work in the framework of weak solutions. In spite of the generality of DMV solutions to system (1.1)-(1.5), we can show DMV-strong uniqueness, i.e., provided there is a strong solution, we can show that in a suitable sense any DMV solution on the same time interval coincides with it. We will present the detailed result in our upcoming paper [16]. Here, we made use of this result to prove the strong convergence of the solutions to our FE-FV method to the strong solution of the system. are valid for all p ∈ [1, ∞], all v ∈ V 0,h , all K ∈ T h , and all σ ∈ E h (K). Moreover, given φ ∈ C 1 (Ω), an application of Taylor's theorem yields Next, combining [ for all q ∈ [1, ∞], all φ ∈ W 1,q (Ω), and all ψ ∈ W 2,q (Ω). The latter estimates are also known as the Crouzeix-Raviart estimates.
Remark A.1. Clearly, the operators Π Q,h and Π V,h are linear. Furthermore, we may use (A.12) and the triangle inequality to deduce that there exists an h-independent constant C > 0 such that (A.14) Consequently, Π V,h is continuous.
Next, we prove the following auxiliary result that is needed in the proof of Theorem 2.3.
provided h is sufficiently small. Therefore, an application of Lemma A.2(ii), (iii) yields Since ε > 0 was chosen arbitrarily, the desired result follows.
A.3 Properties of the numerical scheme
In this section, we present a proof of Lemma 3.4 that is based on the following lemma.
Lemma A.4 ([27, Theorem A.1]). Let M, N be natural numbers, C 1 > α > 0 and C 2 > 0 real numbers, and Further, let F : V × [0, 1] → R N × R M be a continuous function that complies with the following conditions: (ii) The equation F (f , 0) = (0, 0) is a linear system with respect to f and admits a solution in W .
The proof of Lemma 3.4 is done in two steps.
Proof of Lemma 3.4(i). We start by showing that, given for all φ h ∈ Q h and φ h ∈ V 0,h . The proof of this fact is essentially identical to that of [17,Lemma 11.3]. In order to be able to apply Lemma A.4, we set where ||u k h || ≡ ||∇ h u k h || L 2 (Ω h ) d×d and the numbers α, C 1 , C 2 are yet to be determined. Clearly, we can construe Q 2 h as a subset of R 2N and V 0,h as a subset of R dM , where N is the number of tetrahedra (triangles) and M the number of inner faces (edges) of the mesh T h . Next, we define the continuous map for all φ h ∈ Q h and φ h ∈ V 0,h . Adapting and repeating the arguments from Section 4 to derive the energy estimates, we deduce that in Ω h , where L ∈ T h is chosen in such a way that (Z k h ) L = min R ∈ T h {(Z k h ) R }. In view of (A.22), we can find a constant α ≡ α( Thus, we have in Ω h and, analogously, Consequently, there is a constant C 1 ≡ C 1 (̺ k−1 h , Z k−1 h , u k−1 h ) > 0 such that ̺ k−1 h , Z k−1 h , ̺ k h , Z k h < C 1 in Ω h . Therefore, F fulfills assumption (i) of Lemma A.4. We proceed by proving that F satisfies assumption (ii) of Lemma A.4. To this end, we consider the equation F (((̺ k h , Z k h ), u k h ), 0) = (0, 0) that can be written as Obviously, this is a linear system for ((̺ k h , Z k h ), u k h ) with a positive definite matrix. Thus, the equation F (((̺ k h , Z k h ), u k h ), 0) = (0, 0) has a unique solution. Therefore, F also satisfies assumption (ii) of Lemma A. 4 is a solution to (3.2)-(3.4).
Proof of Lemma 3.4(ii). Suppose the triplet (r
Then [10, Chapter 7.6, Lemma 6] shows that r k h ∈ Q + h . The desired conclusions follow by applying this observation with
A.4 Stability estimates
The aim of this section is to provide the reader with a proof of Corollary 4.4.
Proof of Corollary 4.4. To begin with, we observe that 0 ≤ E k h ≤ E k−1 h for all k ∈ N. This follows from the fact that the second term on the left-hand side of (4.4) is nonnegative and all terms on the right-hand side are nonpositive. Here, the nonpositivity of the terms on the right-hand side is ensured by the convexity of the pressure potential P . Moreover, employing Hölder's inequality and Remark A.1, we see that ||̺ 0 || L ∞ (Ω) ||u 0 || 1.
Using this observation, it is easy to establish the first estimate in (4.11), the first two estimates in (4.12), the estimates in (4.13), and the estimates (4.15)-(4.18). Then, due to Corollary 3.5(i), the second estimate in (4.11) follows from the first estimate in (4.13 Consequently, the last estimate in (4.11) follows from the first two. Furthermore, an application of Poincaré's inequality (A.10) reveals that the last estimate in (4.12) is a consequence of the first. Due to Corollary 3.5(i), the validity of the first estimate in (4.14) results from the third estimate in (4.11). Using Hölder's inequality and the second estimate in (A.1), we deduce that Therefore, the second estimate in (4.14) follows from the third estimate in (4.12), the second estimate in (4.13), and (A.15). Finally, we combine Hölder's inequality, the estimates (A.3) and (4.15), the first estimate in (A.1), and the first and third estimate in (4.12) to conclude that We note in passing that estimate (4.20) can be proven in the same way. | 6,262.2 | 2021-06-23T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Copper-Assisted Direct Growth of Vertical Graphene Nanosheets on Glass Substrates by Low-Temperature Plasma-Enhanced Chemical Vapour Deposition Process
Vertical graphene (VG) nanosheets are directly grown below 500 °C on glass substrates by a one-step copper-assisted plasma-enhanced chemical vapour deposition (PECVD) process. A piece of copper foil is located around a glass substrate as a catalyst in the process. The effect of the copper catalyst on the vertical graphene is evaluated in terms of film morphology, growth rate, carbon density in the plasma and film resistance. The growth rate of the vertical graphene is enhanced by a factor of 5.6 with the copper catalyst, and the resulting vertical graphene is denser. The analysis of optical emission spectra suggests that the carbon radical density is increased with the copper catalyst. Highly conductive VG films with a sheet resistance of 800 Ω/□ are grown on glass substrates with the Cu catalyst at a relatively low temperature. Electronic supplementary material The online version of this article (doi:10.1186/s11671-015-1019-8) contains supplementary material, which is available to authorized users.
Recently, three-dimensional (3D) graphene has attracted attention due to its high surface-area-to-volume ratio [1]. The vertical graphene (VG) nanosheet is one of the popular 3D carbon structure materials [10], and it has been applied in various applications such as field emitters [7,18], supercapacitors [19][20][21][22] and batteries [23,24]. In practice, VG films have typically been grown on metal substrates at relatively high temperatures [10,16], close to 1000°C, which limits the use of various low-melting-temperature substrates. Yang et al. reported the growth of VG films on dielectric substrates (SiO 2 ) [25] at a temperature of 900°C. However, the growth rate drops significantly at temperatures below 900°C. Liu et al. reported the synthesis of carbon nanosheets on a metal-coated glass at a low temperature; in effect, however, this is a process that grows the carbon-based material directly on metal [26]. Recently, the catalytic effect of copper on graphene growth has been reported in high-temperature CVD processes [27,28], whereas it has not been studied in a plasma-enhanced chemical vapour deposition (PECVD) process. Furthermore, a low operation temperature is necessary for an economic and facile process, which can be more feasible for industrial application. In particular, it can pave the way for more applicable substrate materials [29]. For instance, glass is a widely used commercial material with a low price but a low melting temperature, so it must be processed at low temperature.
In this work, VG films were grown on glass substrates by plasma-enhanced CVD with copper foils at a relatively low temperature, and the catalytic effect of the copper foil on the properties of the vertical graphene films in the PECVD process was investigated.
Methods
Vertical graphene nanosheets were grown in a radiofrequency (RF) inductively coupled plasma (ICP) reactor as shown in Fig. 1. A piece of copper foil (50 × 25 cm 2 ) was located inside of the quartz tube reactor. The temperature in the centre of the heating area can be increased up to 900°C by the lamp heater located in the centre of the reactor. A pre-cleaned glass substrate was placed outside (downstream zone) of the direct heating zone, and the temperature around the glass substrates was maintained at about 500°C to prevent glass deformation. Before the growth of the graphene nanosheets, the glass substrate was cleaned by H 2 for 2 min with 100 W of plasma power. After the cleaning, 2 sccm of C 2 H 2 and 1 sccm of H 2 were introduced into the tube reactor and 280 W of plasma power was supplied to the coils outside the reactor. The processing pressure was kept at 12 mTorr throughout the growth process. The growth rate was calculated by dividing the height by the growth time from the scanning electron microscopy (SEM) images. After growing graphene nanosheets on the glass substrate, the system was cooled down to room temperature slowly. A uniform VG film obtained on a glass substrate is shown in Fig. 1b.
Optical emission spectra (OES) were taken during the growth process by a high-resolution spectrometer (HR4000CG-UV-NIR, Ocean Optics). The VG film structure was analysed with field emission SEM (JEOL, JSM7401F). High-resolution transmission electron microscopy (HRTEM, JEM-2100F, JEOL) was performed to confirm the successful growth of graphene at the nanoscale. The chemical elements of the as-prepared films were determined by an energy-dispersive spectrometer (EDS, JEOL, JSM 6700F). The carbon bonding structure was analysed by Raman spectroscopy (Renishaw, RM-1000 Invia) with a wavelength of 532 nm (Ar + ion laser). The optical transmittance of the VG films was determined by a UV-vis spectrophotometer (UV-650, JASCO) in the visible and infrared ranges.
Results and Discussion
In the process of growing carbon materials by CVD, such as carbon nanotubes and graphene, metal foils (Cu, Ni, Co, etc.) are typically adopted as substrates [7,20]. However, during the growth of VG films by PECVD, we found that the VG films can also be grown on dielectric substrates (e.g. glass), even at temperatures as low as 500°C without any metal substrate, although the growth rate is quite low at such a low growth temperature. Thus, in order to grow dense VG at a high growth rate, copper was adopted as a catalyst in the low-temperature PECVD process. The significant growth enhancement effect of copper is clearly shown in the SEM images in Fig. 2. Without the copper catalyst, a 100-nm-thick VG nanosheet film is grown in 20 min on a glass substrate (Fig. 2a inset), corresponding to a growth rate of 5 nm/min. In this condition, the small VG nanosheets are sparsely spread on the substrate, as shown in Fig. 2a.
In contrast, a 270-nm-thick VG film is obtained in 12 min with the copper foil inside the reactor. The growth rate is significantly increased to 28 nm/min, and the VG size is enlarged in the copper-assisted PECVD process. Several VG films have been reported on various substrates, such as Ni, Si, SiO 2 and Cu, grown by PECVD at high temperature [1,7,30]. However, the growth mechanism of VG films on glass substrates with copper catalysts has not been clarified. The evolution of VG films on glass substrates was monitored by varying the growth time from 1 to 12 min, as shown in Fig. 3. A thin layer is grown on the glass surface within the first 1 min, but no obvious VG nanosheets can be found. The thin layer plays a role as a buffer layer [31], connecting the substrate and the VG nanosheets. The existence of the buffer layer is confirmed by a scratch made by tweezers, as shown in the inset. As the growth time is increased to 4 min, a large number of VG nanosheets appear on the buffer layer (Fig. 3b). As the growth time is increased to 8 and 12 min, the VG nanosheets grow further and connect densely. The height of the VG nanosheets increases continuously with the growth time (Fig. 3e).
The vertical structure of the crispate VG film was investigated by cross-sectional SEM analysis, as shown in Fig. 4a. The VG film on the glass substrate is composed of two layers: a horizontal buffer layer and VG nanosheets on that buffer layer. The buffer layer is believed to reduce the mismatch of the atomic structure between the glass and the VG, and similar results have been observed elsewhere [32]; the VG nanosheets thus grow on the buffer layer with less stress. The multi-layer graphene structure of the VG nanosheets was identified by TEM analysis, as shown in Fig. 4b.
Raman spectra of the VG films were taken at different growth times to evaluate the carbon bonding structure, as shown in Fig. 5. The VG grown without the copper catalyst does not show the characteristic D and G peaks in the Raman spectra within a growth time of 10 min, indicating the deposition of the base layer of amorphous carbon. Up to 20 min of growth time, a carbon layer is deposited and a barely visible 2D peak can be found at 2670 cm −1 . The growth rate is significantly enhanced when the copper catalyst is applied in the PECVD process, as shown in Fig. 5b. Significant D and G peaks are observed in the spectrum of the VG film even for a growth time of 1 min. However, the low 2D peak intensity at 2670 cm −1 implies an amorphous carbon structure [33]. This layer of amorphous carbon is believed to serve as a buffer layer, as mentioned earlier. The intensity of the 2D peak starts to increase with increasing growth time, indicating the growth of VG nanosheets on the substrate.
Based on the above experimental results, the growth mechanism of VG films on the glass substrate in the PECVD system is illustrated in Fig. 6. In the initial stage of VG growth, the hydrocarbon source gas, C 2 H 2 , is dissociated into reactive radicals, which are transported onto the glass substrate (Fig. 6a) [34]. A thin layer of amorphous carbon is believed to form first on the substrate due to the lattice mismatch between the glass and graphene (Fig. 6b) [32]. Then, the graphene nanosheets start to grow while the amorphous carbon layer is still being deposited, forming carbon islands (Fig. 6c). A subsequent in-plane-oriented layer growth mode is unfavourable due to the following three mechanisms: (1) the simultaneous growth of graphene and carbon islands leads to discontinuity of horizontal graphene growth; (2) because of the strain energy in the edges and defects of the initial graphene, the intermediate layer may not be able to continue to form a bulk crystal, which causes a transition from complete 2D films to 3D clusters [7]; and (3) in this plasma system, an electric field develops between the bulk plasma and the surface of the substrate, and ions generated in the plasma are accelerated through the sheath. The energetic ions deliver kinetic energy to the substrate by collisions on the surface, creating defects on the graphene film surface that help the graphene grow in the vertical orientation (Fig. 6d). This VG growth is unique to the plasma-enhanced chemical vapour deposition process and is not observed in typical thermal CVD processes. All three mechanisms lead to the graphene growing as 3D clusters (Fig. 6e). In the VG growth process with the copper catalyst, the hydrocarbon gas molecules are dissociated into reactive radicals on the copper surface [34], and a portion of those reactive radicals is expected to be desorbed from the copper surface and return to the plasma. Optical emission spectra (OES) were taken during the deposition process to understand the catalytic effect of copper on the radical densities, as shown in Fig. 7. All the peak intensities increase significantly when the copper catalyst is employed. In particular, the intensities of the C 2 and CH peaks were quantitatively analysed because C 2 and CH radicals have been reported as the major growth source and the terminator of graphene under plasma conditions [35,36], respectively.
Fig. 2 SEM images of the VG grown on a glass substrate for 20 min without copper catalyst (a) and for 12 min with copper catalyst (b). The insets are cross-sectional SEM images of the samples.
Fig. 6 Schematic growth process of the VG film on a glass substrate in a PECVD system. (a) Dissociation of carbon-hydrogen bonds by the plasma. (b) Formation of the carbon buffer layer on the glass substrate. (c) Simultaneous growth of graphene and carbon islands. (d) Sheath effect and ion bombardment between the bulk plasma and the substrate. (e) Sparse distribution of VG nanosheets prepared by the PECVD process without the copper catalyst on the glass substrate. (f) Dissociation of hydrocarbon gas on the surface of copper; the dissociated reactive radicals transport to the bulk plasma and increase the radical density. (g) Dense distribution of the VG nanosheets prepared by the PECVD process with the copper catalyst on the glass substrate.
Thus, the relative intensity of C2:CH can indicate the relative contribution of the growth reaction versus the termination reaction. As marked in the figure, the relative intensity of C2:CH increases significantly from 0.69 to 1.29 after applying the Cu catalyst. Therefore, the increased density of reactive radicals contributes to faster formation of the VG film, resulting in denser and faster growth of VG (Fig. 6f, g).
The chemical elements in the VG film were also analysed by EDS in order to detect the presence of copper in the VG samples. As summarized in Table 1 and Additional file 1: Figure S1, no copper is found in the samples, indicating that the copper acts as a catalyst in this PECVD process without being incorporated into the VG films.
Finally, the sheet resistance, height and transparency of the VG nanosheets are plotted as functions of growth time, as shown in Fig. 8. The transparency of the VG film decreases as the growth proceeds for longer times (Fig. 8a). Meanwhile, taller VG nanosheets provide lower sheet resistance, which can be attributed to the close networking of the VG nanosheets (Fig. 8b). The VG grows with processing time, but the growth rate is not linear over the whole process. The VG shows a relatively slow growth rate of 10 nm/min in the initial stage, within the first 4 min. As discussed earlier, the base layer forms in the first minute and VG seeds form on top of the base layer; this seeding process may require time and energy. Once the VG seeds are successfully formed on the base layer, the VG grows at the high rate of 28.8 nm/min, linearly with time, in the range of 4 to 12 min.
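The reported rates suggest a simple piecewise-linear picture of film height versus time. The sketch below encodes it, using the 4 min breakpoint and the two rates quoted above; the linear functional form is only an illustrative fit to the described behaviour, not the authors' model.

```python
def vg_height_nm(t_min: float, seed_time: float = 4.0,
                 seed_rate: float = 10.0, growth_rate: float = 28.8) -> float:
    """Piecewise-linear estimate of VG film height (nm) after t_min minutes.

    Rates and breakpoint are the values reported in the text; validity is
    roughly limited to the 0-12 min range discussed.
    """
    if t_min <= seed_time:
        # Slow initial stage: amorphous base layer and VG seed formation.
        return seed_rate * t_min
    # Fast linear vertical growth once seeds are established.
    return seed_rate * seed_time + growth_rate * (t_min - seed_time)

# Example: expected heights at the 4 and 12 min samples.
print(vg_height_nm(4.0))   # 40.0 nm
print(vg_height_nm(12.0))  # 270.4 nm
```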
Conclusions
In this work, a vertically oriented conductive graphene film is synthesized directly on glass substrates with a copper catalyst in a low-temperature PECVD process, and the catalytic mechanism of copper in the VG growth process is investigated. The transparency and sheet resistance of the VG films were characterized for different growth times. The direct growth of VG on glass substrates with copper in the PECVD process presented in this work does not require any additional substrate etching or transfer processes. This VG growth process is expected to facilitate scale-up and to make VG production more economical for potential industrial production. The large surface area of VG films offers a substantial advantage for applications in electronic devices and energy storage devices.
"Engineering",
"Materials Science",
"Physics"
] |
MOSIC: Mobility-Aware Single-Hop Clustering Scheme for Vehicular Ad hoc Networks on Highways
As a new branch of mobile ad hoc networks, vehicular ad hoc networks (VANETs) have received significant attention in academic and industrial research. Because of the highly dynamic nature of VANETs, the topology changes frequently and quickly, which makes maintaining the topology of such networks difficult. Clustering is a control mechanism that groups vehicles into categories based on predefined metrics such as density, geographical location, direction and velocity. Clustering can make the network's global topology less dynamic and improve its scalability. Many VANET clustering algorithms are taken from MANETs, and it has been shown that these algorithms are not suitable for VANETs. Hence, in this paper we propose a new clustering scheme that uses the Gauss-Markov mobility (GMM) model for mobility prediction, enabling a vehicle to predict its mobility relative to its neighbors. The goal of the proposed clustering scheme is to form stable clusters by increasing the cluster head lifetime and reducing the number of cluster head changes. Simulation results show that the proposed scheme performs better than existing clustering approaches in terms of cluster head duration, cluster member duration, cluster head change rate and control overhead. Keywords—Vehicular Ad hoc Networks; Mobile ad hoc Networks; Network Topology Control; Clustering Scheme
I. INTRODUCTION
Vehicular ad hoc networks (VANETs) open a new vision in the field of Intelligent Transportation Systems (ITS).
Recently, VANETs have become an important area of research in both academia and industry, because they have the potential to enable numerous applications such as the dissemination of safety information, routing plans, traffic condition messages, entertainment (e.g. information sharing, gaming), e-commerce and control of vehicle flow formations [1], [21], [22]. In principle, a VANET is a special form of MANET, with the difference that the mobile nodes (vehicles) have highly dynamic mobility. In a VANET, vehicles are equipped with an on-board unit (OBU) which enables them to communicate with each other (vehicle-to-vehicle, V2V) and via roadside units (vehicle-to-roadside, V2R), also called RSUs. The communication standard that vehicles use to communicate with each other is Wireless Access for Vehicular Environments (WAVE), an approved amendment to the IEEE 802.11 standard. WAVE is also known as IEEE 802.11p [13].
Due to high mobility, VANET topology changes rapidly, so establishing a new control topology introduces high communication overhead for exchanging information. Several control schemes for media access and topology management have been proposed. One of these schemes is the clustering structure, in which the mobile nodes are divided into a number of virtual groups based on certain metrics. These virtual groups are called clusters [2]. Several cluster-based approaches have been proposed and applied in ad hoc networks, because clustering offers advantages such as reducing delay and overhead and solving the scalability problem in large-scale networks. However, in dynamic environments the clusters are usually unstable and frequently disjointed. Hence, the clustering schemes proposed for mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs) are not suitable for VANETs. In other words, in a VANET, vehicles move at high and variable speeds, which causes frequent changes in the network topology and can significantly reduce cluster stability and efficiency. CH duration is one of the factors behind this reduction: as CH duration increases, cluster stability increases correspondingly. In addition, efficient cluster maintenance has a direct impact on CH lifetime. Hence, these parameters should be considered in the design of a new clustering scheme. The aim of this work is to propose a scheme that constructs stable single-hop clusters with longer CH lifetime, longer CM duration and a lower cluster change rate. In this scheme, CH selection is conducted based on relative mobility, which is calculated from the average relative distance and relative velocity.
The rest of this paper is organized as follows. Section II describes previous work related to cluster formation and maintenance. Section III explains the preliminaries of the proposed scheme. Section IV presents the processing of our proposed algorithm. Section V describes the simulation and comparative results. The paper is concluded in Section VI.
II. RELATED WORKS
As a well-known mechanism for organizing and controlling networks, node clustering is widely used in MANETs and wireless sensor networks (WSNs). Clustering techniques can be used for diverse purposes such as broadcasting, routing and QoS.
Many clustering solutions based on topology, energy or neighborhood information have been proposed [16], [7], [8], [9], [10], [11], [12]. However, these clustering algorithms are largely unsuitable for dynamic environments such as VANETs. One of the well-known clustering schemes frequently used for comparison with other VANET clustering algorithms is MOBIC [4]. This algorithm is based on the lowest-ID algorithm [16]. In MOBIC, cluster head selection is based on the signal power received at each node from its neighbors, derived from successive receptions. The performance of MOBIC is moderate and not effective for dynamic scenarios.
The aggregate local mobility (ALM) algorithm is proposed in [5]. This algorithm uses a relative mobility metric calculated from the distances between a node and its neighbors, and aims to extend cluster lifetime.
Another well-known clustering algorithm is the affinity propagation (AP) algorithm [14]. AP is a distance-based clustering scheme in which vehicles exchange availability and responsibility information with their neighbors, and the CH is selected based on this information. The drawback of AP is that CH changes become more frequent as vehicle speed increases, because AP does not take the speed differences of vehicles into consideration.
The adaptable mobility-aware clustering algorithm based on destination positions (AMACAD) [17] is a clustering scheme proposed for VANETs. This algorithm uses a set of parameters, including position, speed and distance, as metrics for CH selection. DMMAC is a clustering algorithm proposed by Hafeez et al. [15]. DMMAC uses velocity as the main factor for forming clusters and utilizes a fuzzy system to process vehicle velocities in order to enhance cluster stability. Besides the aforementioned aspects, DMMAC uses a temporary cluster head concept, invoked when the main CH is unavailable. However, this algorithm suffers from frequent CH changes as vehicle speed increases.
Finally, we refer to the lane-based clustering (LBC) scheme [19]. This scheme is designed specifically for urban environments, where the number of lanes in the traffic flow is considered as a metric in the CH selection process. However, this scheme does not consider the exact number of vehicles in each flow.
III. PRELIMINARIES
The proposed clustering scheme uses the Gauss-Markov Mobility (GMM) model [3] to calculate future vehicle positions; based on the predicted position and other metrics (e.g. relative velocity and relative distance), the scheme tries to form stable single-hop clusters. We call this the MObility-aware SIngle-hop Clustering scheme (MOSIC). The term single-hop cluster refers to a cluster architecture in which cluster members can communicate with the cluster head directly. MOSIC focuses only on V2V (vehicle-to-vehicle) communication, and the main objective of the proposed scheme is to make a large network of highly dynamic nodes appear smaller and to sustain clusters for long periods by increasing the cluster-head and cluster-member durations. The essential assumptions and definitions used by MOSIC are described in the following.
A. Assumptions and Definitions
The proposed clustering scheme assumes that all vehicles travel in the same direction (one way) on a highway and that all of them are equipped with Global Positioning System (GPS) receivers and On-Board Units (OBUs). The location information of all vehicles needed for the clustering scheme is collected with the help of the GPS receivers. The highway has a maximum allowed velocity (V_max). Each vehicle has the same transmitting capability, so all vehicles have an equal chance of being elected as CH. In addition, Table II provides the notations utilized in this paper. NC denotes a standalone vehicle that does not belong to any cluster; CM a vehicle that belongs to a cluster; CH a vehicle tasked with coordinating the cluster members and managing the cluster structure [20]; and TCM a vehicle that does not receive the information broadcast by the CH for a ∆T interval.
Figure 1 shows the vehicle states in a highway environment. Two vehicles are considered r-neighbors if the distance between them is less than r. Consequently, the neighborhood N_i of a vehicle i is defined as N_i = {j : D_i,j < r}, where D_i,j is the average distance between vehicles i and j. Definition 3 (Nodal degree): The total number of r-neighbors of a vehicle i is called its nodal degree, Deg_i = |N_i|, i.e. the cardinality of the set N_i. Definition 4 (Stable r-neighbors): Two vehicles are considered stable r-neighbors if the speed difference between them is within ±∆V_th, where ∆V_th is a predefined threshold.
B. Gauss-Markov Mobility (GMM) Model
The Gauss-Markov Mobility (GMM) model [3] is a memory-based mobility model that can calculate the next position of a mobile node based on its current mobility metrics. In this model, each mobile node is assigned an initial speed and direction. The GMM model uses a parameter α, 0 ≤ α ≤ 1, which determines the variability in the mobile node's movement. At each fixed time interval n, the mobile node updates its current speed and direction, with the new speed and direction calculated as follows:

s_n = α s_(n−1) + (1 − α) s̄ + √(1 − α²) s_(x_(n−1))    (3)
d_n = α d_(n−1) + (1 − α) d̄ + √(1 − α²) d_(x_(n−1))    (4)

where s_n and d_n are the new speed and direction of the mobile node at time interval n; s̄ and d̄ are the mean values of speed and direction; and s_(x_(n−1)) and d_(x_(n−1)) are random variables from a Gaussian (normal) distribution. At each time interval, the next location is calculated based on the current location, speed and direction of movement. Specifically, at time interval n, a mobile node's position is given by Equations 5 and 6:

x_n = x_(n−1) + s_(n−1) cos(d_(n−1))    (5)
y_n = y_(n−1) + s_(n−1) sin(d_(n−1))    (6)

where (x_n, y_n) and (x_(n−1), y_(n−1)) are the x and y coordinates of the mobile node's position at the nth and (n−1)th time intervals, respectively, and s_(n−1) and d_(n−1) are the speed and direction obtained from Equations 3 and 4.
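A minimal Python sketch of the GMM update, assuming the canonical form of Equations 3–6 as reconstructed above; the noise standard deviations `sigma_s` and `sigma_d` are illustrative parameters that the paper does not specify.

```python
import math
import random

def gmm_step(s_prev, d_prev, s_mean, d_mean,
             alpha=0.85, sigma_s=1.0, sigma_d=0.1):
    """One Gauss-Markov update of speed and direction (Eqs. 3-4).

    alpha tunes memory (0 = memoryless, 1 = constant motion); the Gaussian
    terms model random perturbation of speed and direction.
    """
    k = math.sqrt(1.0 - alpha ** 2)
    s_new = alpha * s_prev + (1 - alpha) * s_mean + k * random.gauss(0.0, sigma_s)
    d_new = alpha * d_prev + (1 - alpha) * d_mean + k * random.gauss(0.0, sigma_d)
    return s_new, d_new

def gmm_position(x_prev, y_prev, s_prev, d_prev):
    """Next position from the previous speed and direction (Eqs. 5-6)."""
    return (x_prev + s_prev * math.cos(d_prev),
            y_prev + s_prev * math.sin(d_prev))
```

In MOSIC this one-step prediction supplies the future positions used later to compute the predicted inter-vehicle distances.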
C. Message passing format
As previously mentioned, the VANET runs under the WAVE (Wireless Access in Vehicular Environments) architecture (IEEE 802.11p), and messages are encapsulated in UDP packets at the network layer. Each vehicle periodically exchanges its status message with the neighbors in its communication range r. The status message contains the vehicle's ID, vehicle state, current speed V, communication range r, CH ID (CHID) and position POS, as shown in Fig. 2.
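The status message maps naturally onto a small record type. The following sketch is illustrative only: the paper specifies the fields (Fig. 2) but not their encoding, so the names and types here are assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class StatusMessage:
    """Fields carried in the periodic status message (cf. Fig. 2)."""
    vehicle_id: int
    state: str                 # one of "NC", "CM", "CH", "TCM" (Table II)
    speed: float               # current speed V (m/s)
    comm_range: float          # communication range r (m)
    ch_id: int                 # CHID of the cluster head, or -1 if none
    pos: Tuple[float, float]   # position POS as (x, y)
```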
D. Cluster Metrics
In this section, the cluster metrics, which play an important role in cluster formation and cluster maintenance, are described.
1) Average relative velocity:
In every time interval, each vehicle becomes aware of all of its r-neighbors through the exchanged status messages, and based on that information the average relative velocity V̄rel_i is calculated as follows:

V̄rel_i = V̄m_i / V_max    (7)

where V_max is the maximum allowed velocity on the road and V̄m_i is the average velocity difference of vehicle i with respect to its r-neighbors, defined as:

V̄m_i = (1/Deg_i) Σ_(j∈N_i) |V_i − V_j|    (8)

where j is a neighboring vehicle, V_i and V_j are the velocities of vehicles i and j, respectively, in m/s, and Deg_i is the nodal degree of vehicle i.
2) Average relative distance: Each vehicle collects its mobility information, such as its location, at every time interval ∆T and sends this information to all its r-neighbors via the Control Channel. Thus, each vehicle is able to calculate its average relative distance with respect to its r-neighbors. Relative distance is one of the measures that plays a key role in electing the CH.
Consequently, the average relative distance is defined and calculated as follows:

R̄_i = (1/Deg_i) Σ_(j∈N_i) R_i,j    (9)

where R_i,j is obtained from the following equation:

R_i,j = 10 log10 (d̂_i,j / d_i,j)    (10)

We use the metric proposed by Basu [4] to calculate the average relative distance (Equation 10), but with the difference that we use the current and predicted distances between two nodes instead of the packet delay used in [4]. In Equation 10, d_i,j is the distance between vehicles i and j, calculated via the Euclidean distance:

d_i,j = √((x_i − x_j)² + (y_i − y_j)²)    (11)

and d̂_i,j is the distance between vehicles i and j predicted with the mobility model, calculated similarly:

d̂_i,j = √((x̂_i − x̂_j)² + (ŷ_i − ŷ_j)²)    (12)

where (x̂_i, ŷ_i) is the future position of vehicle i obtained using the Gauss-Markov Mobility model (see Sect. III-B).
3) Average relative mobility: Average relative mobility is an important measure through which vehicles are informed about their r-neighbors; based on this parameter, the vehicles decide which vehicle is the most suitable to be selected as CH. M̄_i is defined as a function of R̄_i and V̄rel_i (Equation 13), where R̄_i is the average relative distance and V̄rel_i is the average relative velocity described in the previous subsections. Note that as the nodal degree of a vehicle increases, the values of R̄_i and V̄rel_i decrease correspondingly, because according to Equations 8 and 9 the relative distance and relative velocity are inversely proportional to the nodal degree Deg_i. Thus, a vehicle with a lower value of M̄_i is preferable.
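A minimal sketch of the two neighbor-averaged metrics, assuming the reconstructed Equations 7–12 above; since the exact combination in Equation 13 is not recoverable from the text, composing M̄_i from the two values is left to the caller.

```python
import math

def avg_relative_velocity(v_i, neighbor_speeds, v_max):
    """Average relative velocity of vehicle i (Eqs. 7-8 as reconstructed)."""
    deg = len(neighbor_speeds)  # nodal degree Deg_i
    v_m = sum(abs(v_i - v_j) for v_j in neighbor_speeds) / deg
    return v_m / v_max          # normalized by the road speed limit

def avg_relative_distance(pos_i, pred_i, neighbors):
    """Average relative distance of vehicle i (Eqs. 9-12 as reconstructed).

    neighbors: list of (current_pos_j, predicted_pos_j) pairs; positions
    are (x, y) tuples. Coincident positions would make the log ratio
    undefined, so this sketch assumes distinct positions.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    ratios = [10.0 * math.log10(dist(pred_i, pj_pred) / dist(pos_i, pj_now))
              for pj_now, pj_pred in neighbors]
    return sum(ratios) / len(ratios)  # divide by Deg_i
```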
IV. MOSIC PROCESSING
This section describes the procedures that form the proposed clustering scheme. In brief, the proposed clustering scheme consists of four phases (initialization, CH selection, cluster formation and cluster maintenance), described in the following subsections. When a node does not belong to any cluster (i.e. it is in the Non-Clustered state), it executes the initialization phase. After that, depending on whether a cluster head can be found nearby, the node launches either the join procedure or the cluster formation phase. After the cluster formation phase, or after joining a cluster, the maintenance procedure is executed, which checks the validity of the cluster periodically.
A. Initialization phase
This phase is executed by any vehicle whose state is NC (Non-Clustered) and which receives status messages from its r-neighbors. In any time interval ∆T, a vehicle whose state is NC broadcasts its status message to discover whether a cluster head exists in the vicinity. If at least one cluster head can be found, the vehicle launches the join procedure; otherwise, it executes the cluster formation phase. The pseudocode of the initialization phase is shown in Algorithm 1.
B. Cluster Head selection phase
In principle, the CH is a coordinator tasked with coordinating the cluster members and managing the cluster structure [20]. CH duration is directly related to cluster stability and is one of the most frequently used measures for increasing it: selecting a more stable CH helps keep the cluster structure intact for long periods, and a stable cluster reduces the packet loss probability. Consequently, selecting a CH that can remain stable for a long period is an important factor in the design of MOSIC. In the proposed scheme, we define a mobility measure M_i, which each vehicle calculates based on the status messages received from its r-neighbors during the interval ∆T; the vehicle with the greatest value of M_i is selected as CH. The mobility measure is calculated as follows:

M_i = 1 / M̄_i    (14)

where M̄_i is the average relative mobility. As mentioned in Sect. III-D3, a vehicle with the lowest value of M̄_i is the preferred CH candidate; for simplicity of calculation, the value of M̄_i is therefore inverted, so that the lowest value becomes the greatest, which is exactly what Equation 14 expresses. Once the status messages are received, the vehicle with the highest M_i among its r-neighbors elects itself as CH. This vehicle sets its CHID field to its own ID and sends a status message to its r-neighbors, and subsequently all r-neighbors join the cluster (all r-neighbors set their CHID field to the ID of the vehicle selected as CH). It should be noted that nodes in the Non-Clustered state cannot participate in the election process; they must first complete the initialization phase.
The pseudocode of the CH selection phase is shown in Algorithm 2 (Cluster-Head Selection): upon election, the CH broadcasts a head message and its r-neighbors join the cluster.
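A compact sketch of the election step, assuming Equation 14 (M_i = 1/M̄_i) as reconstructed above; `candidates` should contain only election-eligible r-neighbors, since NC-state nodes cannot participate.

```python
def elect_cluster_head(candidates):
    """Pick the CH among eligible r-neighbors (Algorithm 2 sketch).

    candidates: dict mapping vehicle ID -> average relative mobility M-bar.
    Eq. 14 inverts M-bar, so the lowest average relative mobility wins.
    """
    mobility_measure = {vid: 1.0 / m_bar for vid, m_bar in candidates.items()}
    return max(mobility_measure, key=mobility_measure.get)

# Example: vehicle 7 has the lowest average relative mobility, so it wins.
print(elect_cluster_head({3: 0.42, 7: 0.18, 9: 0.55}))  # -> 7
```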
C. Cluster formation phase
The cluster formation phase is executed at every time interval ∆T by nodes in the NC state that have already run the initialization phase and discovered that there is no CH in the vicinity. After the initialization phase (in which all NC-state nodes broadcast their status messages and receive replies), the vehicle whose speed is the slowest among all its NC r-neighbors starts the cluster formation process. This vehicle is called the cluster-forming vehicle (CFV). At the beginning of the process, the CFV selects itself as CH and broadcasts a status message to its r-neighbors. Upon receipt of the status message, each vehicle sets its CHID field to the CFV's ID and updates its state to CM.
The pseudocode of the cluster formation phase is shown in Algorithm 3.
D. Cluster maintenance phase
The main aim of the cluster maintenance phase is to keep the cluster structure as stable as possible. Because of the dynamic nature of VANETs, joining and leaving clusters happen frequently. Three events can affect the stability of a cluster: joining a cluster, leaving a cluster and cluster merging. The cluster maintenance procedures are described in the following.
1) Joining a cluster: When an NC (Non-Clustered) vehicle approaches a CH (comes within the CH's transmission range), the vehicle and the CH compare their relative velocity V̄rel_i; if the velocity difference is within ±∆V_th, the vehicle joins the cluster and the CH adds it to its member list. In some cases, an NC vehicle may come within the transmission range r of multiple CHs; in this situation, the vehicle joins the cluster whose CH has the higher nodal degree.
2) Leaving a cluster: When a cluster member moves out of the CH's transmission range r, it is not immediately removed from the member list maintained by the cluster head. Instead, if a CM does not receive the information broadcast by the CH in a ∆T interval, the state of the node changes from CM to TCM (Temporary Cluster Member). It does not leave the cluster immediately, because the disconnection may be due to weak wireless signal quality.
If the temporary member receives the information broadcast by the CH again within the next m intervals, its state changes back to CM. However, if a temporary member fails to receive the CH information for m consecutive intervals, the node is considered to have moved out of the cluster range. Its state then changes to NC, and the CH deletes this member from the member list. The node can then either join another cluster or form a new one.
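The CM/TCM/NC transitions described above amount to a small per-node state machine. The sketch below assumes a simple counter of consecutively missed CH broadcasts; the `Node` class and its attribute names are hypothetical, introduced only for illustration.

```python
class Node:
    def __init__(self):
        self.state = "CM"   # one of "NC", "CM", "CH", "TCM" (Table II)
        self.ch_id = 1      # current cluster head ID
        self.missed = 0     # consecutive missed CH broadcasts

def on_ch_beacon_missed(node, m):
    """No CH broadcast arrived within the last ∆T interval."""
    node.missed += 1
    if node.state == "CM":
        node.state = "TCM"                  # tolerate transient signal loss
    elif node.state == "TCM" and node.missed >= m:
        node.state, node.ch_id = "NC", -1   # left the cluster range

def on_ch_beacon_received(node):
    """A CH broadcast arrived: restore full membership."""
    node.missed = 0
    if node.state == "TCM":
        node.state = "CM"
```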
3) Cluster merging: Whenever two CHs approach and come within each other's transmission range, stay connected over a period of time, and have a relative velocity within ±∆V_th, the cluster merging process commences. In this process, the CH with the lower nodal degree abandons its CH role and joins the cluster with the higher nodal degree. The other members of the merged cluster, depending on their conditions, either join another cluster or become standalone (NC) nodes.
V. SIMULATION AND PERFORMANCE EVALUATION
The aim of the simulation is to compare the performance of our proposed mobility-aware single-hop clustering scheme (MOSIC) with previously proposed VANET clustering schemes. The performance of a clustering scheme is evaluated using the metrics of cluster head duration, cluster member duration, cluster head change rate, number of clusters and control overhead, which together demonstrate the stability of the clustering scheme [14], [1].
MOSIC is implemented in the NS-3 simulator, version 3.24.1 [18]. The simulation scenario is based on a one-directional highway segment 1000 m in length with three lanes. The vehicles are injected into the road randomly. The maximum velocity varies from 10 to 35 m/s and the transmission range is 200 m. The total simulation time is 600 s. The clustering process starts at the 300th second, when all the vehicles are on the road, and all performance metrics are evaluated over the remaining 300 s. We also consider the maximum allowed velocity on the road to be 55 m/s. The general simulation parameters are listed in Table III. In addition, we use the Gauss-Markov mobility (GMM) model as a prediction model, alongside the vehicles' own mobility, to calculate the next positions of vehicles, as used in Equation 10. We set the tuning parameter α to 0.85, as shown in Table III.
A. Cluster-Head Duration
Cluster head duration refers to the interval during which a vehicle remains in the CH state, until its state changes to CM or NC. The average CH duration is calculated by dividing the total CH duration by the total number of state changes from CH to CM or NC. Figure 3 illustrates the average CH duration of MOSIC and the other clustering schemes for different maximum vehicle velocities. In Figure 3, the average CH duration decreases as the vehicle velocity increases. The reason is that higher vehicle velocities make the network topology more dynamic, which makes it difficult for CHs to maintain a relatively stable relationship with their neighboring vehicles for a long period. As shown in Figure 3, MOSIC performs better in terms of CH duration than N-Hop [1], AMACAD [7], ASPIRE [23] and Lowest-ID [16], respectively.
B. Cluster-Member Duration
Cluster member duration is the interval from the time a vehicle joins a specific cluster to the time it leaves that cluster. The average cluster member duration is calculated by dividing the total cluster member duration by the total number of cluster member changes. Figure 4 shows the CM duration of MOSIC and the other approaches for different maximum vehicle velocities. As shown in Figure 4, the CM duration increases as vehicle velocity increases, which is attributed to the efficient cluster maintenance mechanism. The results in Figure 4 indicate that the MOSIC CM duration is higher than that of N-Hop, AMACAD, ASPIRE and Lowest-ID in most cases.
C. Cluster-Head Change Rate
The number of cluster head changes is the number of vehicles whose state changes from CH to CM or NC during a simulation run, and the CH change rate is defined as the number of changes per unit time. Figure 5 shows the CH change rate of MOSIC and the other clustering schemes for different maximum vehicle velocities. A low CH change rate leads to a stable cluster structure. As shown in Figure 5, the CH change rate increases as vehicle velocity increases. This is because of the dynamic nature of the network: with increasing velocity, it becomes difficult for a CH to remain relatively stable with its CMs for a long period, so a CH may exit the cluster, or may come within range of another CH and be merged with it, which affects the CH change rate.
D. Number of Clusters
In VANETs, because of the highly dynamic movement of vehicles, clusters are created and vanish frequently over time, which increases the number of clusters and, consequently, the maintenance cost. Fewer clusters can enhance the efficiency and performance of a VANET, so a clustering algorithm is suitable if it can reduce the number of clusters in the system. This is achieved by using a relative mobility metric that keeps the current cluster structure as stable as possible. Figure 6 shows the number of clusters under different transmission ranges and velocity scenarios. As shown in Figure 6, the number of clusters changes only minimally with increasing velocity, owing to the effective relative mobility metric utilized in our scheme.
E. Average Control Message Overhead
All clustering schemes incur some additional control overhead to form and maintain their cluster structures, most of which relates to cluster formation and CH selection. Therefore, in this paper we consider the overhead of cluster formation and cluster head selection as the control message overhead. The average control message overhead is the total number of control messages received by each vehicle in the network during the cluster formation and CH selection procedures. Figure 7 shows the average control message overhead of MOSIC, N-Hop, AMACAD and ASPIRE at different velocities. Compared with the above-mentioned clustering algorithms, MOSIC performs better in terms of control overhead. In MOSIC, each vehicle creates a control message during every channel interval and broadcasts it to its single-hop neighbors to calculate the relative mobility between the vehicle and its neighbors; this is equivalent across all the above-mentioned clustering algorithms. However, because of the high stability of the cluster structure in MOSIC, with longer CH duration and a lower CH change rate, fewer control messages are needed to re-establish the cluster structure and reselect CHs, and consequently the control overhead decreases.

VI. CONCLUSION
Clustering is an organizing mechanism that can be adapted to the VANET environment. In this study, a mobility-aware single-hop clustering scheme (MOSIC) was proposed. MOSIC is based on the relative mobility of the vehicles, which is calculated from the average relative velocity, the nodal degree and the relative distance of all same-direction neighbors. It uses the Gauss-Markov mobility model to predict each vehicle's next location; based on the vehicle's location and its predicted location, the relative distance is calculated, and consequently the relative mobility is obtained. MOSIC was simulated in NS-3 and its performance was compared with several clustering approaches. The simulations indicate that MOSIC outperforms the N-Hop, AMACAD, ASPIRE and Lowest-ID clustering schemes in terms of CH duration, CM duration, CH change rate and control message overhead across various vehicle velocity scenarios. As future work, we aim to investigate the use of MOSIC in urban traffic scenarios and to design an efficient routing protocol based on this scheme.
Fig. 5: Average CH change rate.
Fig. 6: Number of clusters.
TABLE II: Notations and descriptions.
Algorithm 1 Initialization (fragment): 1: State_i: state of vehicle i; 2: N_i: r-neighbor set of vehicle i; 3: if (State_i == NC) then ... 6: if (messages received after ∆T interval) then ... after the ∆T interval, broadcast the status message again; 17: end if
Algorithm 2 Cluster-Head Selection (fragment): 1: State_i: state of vehicle i; 2: N_i: r-neighbor set of vehicle i; 3: M_i: mobility measure of vehicle i; 4: CHID: cluster head ID; 5: ID_i: ID of vehicle i; 6: receive status messages from r-neighbors in ∆T; 7: update the N_i set; 8: calculate M_i based on the received status messages; 9: if (N_i > 0 and State_i != NC) then ... broadcast head message; r-neighbors join the cluster
TABLE III: Simulation parameters.
"Computer Science",
"Engineering"
] |
Simulation study of high-performance micro-inductors based on MEMS 3D coils
With the development of flexible electronics and microsystems, the demand for miniaturized electronic components is becoming increasingly urgent. Owing to their advantages of miniaturization, low power consumption and easy integration, micro-inductors fabricated by microfabrication technology are attracting increasing attention. However, traditional two-dimensional inductors are no longer able to meet the growing needs of today's society in terms of occupied area, inductance value and packaging cost. This paper therefore proposes a three-dimensional thin-film solenoid micro-inductor based on MEMS and analyzes it using the finite element method (FEM). The simulation records the inductance values and quality factors of the micro-inductors with varying parameters in the frequency range of 0–2,000 MHz. Additionally, the maximum current values of the micro-inductors with different parameters are recorded at the operating temperature of polyimide, specifically addressing the current-carrying capacity. The simulation results provide theoretical guidance for the design of high-performance micro-inductors.
Introduction
The market has expressed an urgent need for the development of high-power, low-loss integrated micro-inductors as microsystems and flexible electronics move toward higher performance and smaller sizes. For academics both domestically and internationally, enhancing the performance of micro-inductors has emerged as a research priority. Integrated micro-inductors with benefits such as low resistance, high inductance, high quality factor, low cost and suitability for mass manufacturing can be constructed by integrating thin-film technology into MEMS technology [1, 2, 4]. The development potential of MEMS-based micro-inductors is enormous; in the future, they are expected to gradually replace traditional inductors and become key components in RF and other fields. This will drive the development of electronic products towards integration and intelligence.
Currently, both domestically and globally, there have been numerous papers on micro-inductors, the majority of which concentrate on thin-film inductors with two-dimensional planar designs. However, knowledge of three-dimensional micro-inductors remains limited. He et al. researched decoupling and coupling inductors with multi-layer magnetic cores. Selvaraj et al. reported the design and fabrication of an on-chip solenoid inductor with a novel thin-film magnetic core for high-frequency DC-DC power conversion applications [3]. Existing reports have underscored the substantial utility of micro-inductors in integrated devices, as well as the relatively advanced state of micro-inductor research. However, to date, there has been inadequate research on the simulation of 3D structural micro-inductors.
Simulation and analysis
We investigate the effects of several variable factors on the electromagnetic and thermal properties of the three-dimensional solenoid inductor in order to further improve its performance [5]. In the investigation of thermal characteristics, the study records the maximum current passing through the coil of an inductor with varying parameters at the operating temperature of the insulation layer. The primary parameters under consideration are the magnetic core size and the coil size of the inductor.
Figure 1 depicts the structure of a three-dimensional solenoid inductor, in which a coil is wound around a magnetic core in space, forming a solenoid structure. Figure 1(b) illustrates the cuboid connecting columns on both sides of the magnetic core used for electrical connection. In Figure 1(c), the transparent section above the silicon substrate represents the polyimide material serving as an insulating layer.
Effect of magnetic core size
We vary the thickness and width of the magnetic core within the size range enclosed by the conductor coil. The thickness of the magnetic core is set within the range of 20 to 45 μm, while the width of the magnetic core is varied between 780 μm and 880 μm. By computing and analyzing these parameters, we aim to gain insight into the inductor's performance characteristics. Figures 2–5 depict the simulation results illustrating the impact of magnetic core thickness and width on the performance of the micro-inductor. As shown in Figures 2 and 4, the inductance of the micro-inductor improves as the thickness and width of the magnetic core increase. This is because a larger magnetic core allows more magnetic flux to pass through, thereby enhancing the inductance value. Additionally, Figure 2 demonstrates that the maximum quality factor of the micro-inductor also increases with magnetic core thickness. However, in Figure 4, the maximum quality factor initially decreases and then increases with magnetic core width; nevertheless, the overall variation is not significant and remains around 24–25.
Furthermore, Figures 2 and 4 show a significant negative correlation between the quality factor in the high-frequency band and the thickness and width of the magnetic core. This is because the increased size of the magnetic core raises the series capacitance, thereby reducing the inductor's ability to store magnetic energy at high frequencies. However, the changes in core thickness and width in Figures 3 and 5 have almost no effect on the current-carrying capacity of the micro-inductor. This is because the current-carrying capacity of an inductor is primarily linked to its resistance; altering the core size only modifies the capacitance, which does not significantly affect the maximum current allowed to flow through it.
Based on these results, it is evident that changes in the thickness and width of the magnetic core have a substantial impact on the inductance of the micro-inductor, and increasing the size of the magnetic core proves to be an effective means of enhancing the inductance value.
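The trend can be sanity-checked against the textbook long-solenoid formula L = μ0·μr·N²·A/l, in which inductance scales with the core cross-sectional area A. The sketch below uses this first-order formula with assumed values for the turn count, core length and relative permeability, none of which are given in the text; the paper's actual results come from FEM, which also captures parasitics this formula ignores.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def solenoid_inductance(n_turns, core_w_m, core_t_m, length_m, mu_r):
    """First-order solenoid inductance L = mu0*mu_r*N^2*A/l (henries).

    A = core width x core thickness; a textbook approximation only.
    """
    area = core_w_m * core_t_m
    return MU0 * mu_r * n_turns ** 2 * area / length_m

# Example: widening the core from 780 um to 880 um at 30 um thickness
# (assumed N=20 turns, 1 mm length, mu_r=500) raises L with the area.
L1 = solenoid_inductance(20, 780e-6, 30e-6, 1e-3, mu_r=500)
L2 = solenoid_inductance(20, 880e-6, 30e-6, 1e-3, mu_r=500)
print(L2 / L1)  # ~1.13: inductance scales with core cross-section
```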
Effect of wire size
The wire thickness is varied between 20 μm and 50 μm, and the associated performance parameters of the inductor are calculated and analyzed. Figures 6–7 depict the simulation results illustrating the impact of wire thickness on the performance of the micro-inductor. The simulations show that the impact of wire width on inductor performance is similar to that of wire thickness, so the influence of wire width is not charted here. As shown in Figure 6, both the inductance and the maximum quality factor of the micro-inductor decrease with increasing wire thickness and width. As the coil width increases, the self-inductance coefficient of the inductor decreases, leading to a gradual reduction in the inductance value.
Upon further examination of Figure 6, it is evident that the quality factor in the high-frequency band increases slightly with wire thickness and width. Additionally, Figure 7 illustrates an improvement in the current-carrying capacity of the inductor. This is attributed to the larger cross-sectional area of the wire resulting from the increased thickness and width, which reduces the resistive loss. Based on these findings, increasing the wire thickness and width can enhance both the quality factor and the current-carrying capacity of the micro-inductor at high frequencies. However, excessive increases in wire thickness and width can lead to significant parasitic and eddy-current losses, potentially negating the improvement in the quality factor and even causing a decline in performance.
Given the divergent impacts of wire size on the inductance, quality factor and current-carrying capacity of the micro-inductor, it is critical to find a balance between size requirements and performance goals during structural optimization. To meet the required criteria, the wire's thickness and width should be increased appropriately.
Optimized inductor performance
The finite element method and the resulting diagrams are used to analyze the relationships between the magnetic core thickness, magnetic core width, wire thickness and wire width on the one hand, and the inductance, quality factor and current-carrying capacity of the micro-inductor on the other. The simulation results provide a direct basis for optimizing the dimensions and performance of micro-inductors. We compared the performance parameters of the optimized micro-inductors in this study with those of some published core-based micro-inductors, in order to highlight the performance advantages of the device we designed. Table 1 summarizes the relationship between radio-frequency inductor performance and operating frequency, as reported in various literature on magnetic materials. Notably, our analysis is the first to take current-carrying capacity into account. The micro-inductor in this work exhibits significant advantages in terms of inductance value and quality factor in the high-frequency range, and also boasts excellent current-carrying capacity.
Conclusions
The following specific findings came from studying and analyzing how various parameters affect the performance of the 3D solenoid-type micro-inductor.
Increasing the size of the magnetic core can increase the micro-inductor's inductance, but doing so will also lower its quality factor at high frequencies, while having essentially no effect on its current-carrying capacity. In practical applications, because of the opposing trends of the inductance value and the quality factor, it is necessary to consider the operating frequency, size requirements and performance requirements together, choose an appropriate core material, and increase the core size appropriately.
In the conductor coil section, increasing the wire size has the potential to enhance the quality factor and current-carrying capacity of the micro-inductor.However, it is important to note that this increase in wire size may result in a reduction in its inductance.Therefore, in practical applications, it is crucial to strike a balance between the desired performance requirements and the limitations of the current electroplating process.By considering both the size and performance requirements, it is possible to appropriately determine and increase the wire size.
In this paper, a novel simulation method for high-performance micro-inductors based on MEMS 3D coils is proposed. Through optimization, the micro-inductor achieves impressive results, with a maximum inductance of 99.5 nH, a maximum current-carrying capacity of 0.936 A, and a maximum quality factor exceeding 21. These findings demonstrate that the inductor successfully attains the objectives of compact size, high inductance value, high current-carrying capacity and superior quality factor simultaneously. In this paper, we not only investigated the operating frequency, inductance value and quality factor of the inductor but also took on the challenging task of enhancing its current-carrying capacity.
Figure 1. Schematic illustrations of an on-chip solenoid inductor.
Figure 2. The impact of magnetic core thickness on inductance value and quality factor.
Figure 3. The impact of magnetic core thickness on the current-carrying capacity of the micro-inductor.
Figure 4. The impact of magnetic core width on inductance value and quality factor.
Figure 5. The impact of magnetic core width on the current-carrying capacity of the micro-inductor.
Figure 6. The impact of wire thickness on inductance value and quality factor.
Figure 7. The impact of wire thickness on the current-carrying capacity of the micro-inductor.
Table 1. Performance of published inductors.
"Engineering",
"Physics"
] |
Towards 5G: Scenario-based Assessment of the Future Supply and Demand for Mobile Telecommunications Infrastructure
Moving from 4G LTE to 5G is an archetypal example of technological change. Mobile Network Operators (MNOs) who fail to adapt will likely lose market share. Hitherto, qualitative frameworks have been put forward to aid business model adaptation for MNOs facing increasing traffic growth on the one hand and declining revenues on the other. In this analysis, we provide a complementary scenario-based assessment of 5G infrastructure strategies in relation to mobile traffic growth. Developing and applying an open-source modelling framework, we quantify the uncertainty associated with future demand and supply for a hypothetical MNO, using Britain as a case study example. We find that over 90% of baseline data growth between 2016 and 2030 is driven by technological change, rather than demographics. To meet this demand, spectrum strategies require the least capital expenditure and can meet baseline growth until approximately 2025, after which new spectrum bands will be required. Alternatively, small cell deployments provide significant capacity but at considerable cost, and hence are likely to be deployed only in the densest locations, unless MNOs can boost revenues by capturing value from the Internet of Things (IoT), Smart Cities or other technological developments dependent on digital connectivity.
Introduction
The mobile telecommunications industry has a dynamic competitive environment due to widespread and sustained technological change (Curwen and Whalley, 2004;Han and Sohn, 2016). We experience generational upgrades on at least a decadal basis, requiring Mobile Network Operators (MNOs) and other market actors to have an understanding of future digital evolution. Even market leaders with significant advantages in the telecommunications industry can fall behind if they are unable to keep abreast of new developments and actively adapt existing market strategies for new conditions (Asimakopoulos and Whalley, 2017). Indeed, the digital ecosystem is experiencing significant disruption from new digital platforms and services (Ruutu et al. 2017;Wang et al. 2016), with substantial ramifications for MNOs as revenues have been either static or declining (Chen and Cheng, 2010), and these conditions exist alongside the increasing operational costs of serving ever increasing mobile data traffic. Hence, in wireless telecoms, MNOs must be aware of both opportunities and threats arising from technological change, particularly when moving from one generation to the next (du Preez and Pistorius, 2002;Salmenkaita and Salo, 2004).
Telecommunications are essential for modern economic activities, as well as for a fully functioning society. These technologies can enable economic growth through new content, services and applications (Hong, 2017;Krafft, 2010), while also enabling productivity improvements throughout the economy by lowering costs. The ability of Information Communication Technologies (ICT) to interchange data via telecommunications networks is essential for the economic development of the digital economy (Wymbs, 2004;Cheng et al. 2005;Kim, 2006), and the range of industrial sectors it comprises. New cross-sectoral advances have also emerged, such as the Internet of Things (IoT) and Smart Cities (Yang et al. 2013;Hong et al. 2016;Bresciani et al. 2017;Almobaideen et al. 2017), which rely on the availability of digital connectivity for smartphones, sensors and other communications devices. Hence the signal quality of mobile telecommunications infrastructure is an ever more important factor, requiring operators to focus on both network reliability and capacity expansion techniques to meet consumer and industrial requirements (Shieh et al. 2014). This is challenging however, given the weak revenue growth currently experienced, leaving only a modest appetite for infrastructure investment.
Scenario planning is a foresight tool that can be applied to understand how changes in the external environment may affect current or potential market strategies (Ramirez et al. 2015). On the one hand, this approach can be used to foster learning and the adaptive skills of an organisation (Favato and Vecchiato, 2016), while on the other, it supports high-level strategic decision-making (O'Brien and Meadows, 2013;Parker et al. 2015). Quantified approaches allow one to measure the impact of external drivers using systems modelling. Importantly, the choice of how much infrastructure is required, when, and where, is seen to be a problem of decision-making under uncertainty (Otto et al. 2016).
The aim of this paper is to quantify the uncertainty associated with the future demand for mobile telecommunications infrastructure, to test how different strategies perform over the long term. We focus specifically on capacity expansion via 5G mobile telecommunications infrastructure. In undertaking this task, the research questions which we endeavour to answer are as follows:
1. How will the combination of growing data usage and demographic change affect the demand for mobile telecommunications infrastructure?
2. How do different supply-side infrastructure options perform when tested against future demand scenarios?
3. What are the ramifications of the results, and how do they relate to the wider technological change literature, particularly in mobile telecommunications?
As the '5G' standard is still to be determined, the approach taken in this paper is to extrapolate LTE and LTE-Advanced characteristics, and to include those identified frequency bands that may be used for 5G rollout over the next decade, in relation to changing demand. Hence, a spectrum-based strategy includes integrating 700 and 3500 MHz on existing brownfield macrocellular sites, as these will be the newly available frequency bands to MNOs in Europe. Importantly, we also test the impact of increasing network densification using a small cell deployment strategy, as this is a key technological enabler for delivering expected 5G performance.
In the next section, a literature review will be undertaken in relation to the future demand for telecommunications services, as well as the current state-of-the-art of telecommunications infrastructure assessment. In Section 3 the methodology will be outlined, and the results reported in Section 4. The findings will be discussed in Section 5, and finally conclusions will be stated in Section 6.
Literature Review
Although the full specification of '5G' is yet to be determined, it is likely that the technical requirements will include delivering peak rates of 20 Gbps per user in low mobility scenarios, user experienced data rates of 100 Mbps, radio latency of less than 1 ms, significantly higher area traffic capacity (1000 times LTE), and a massive number of devices (ITU, 2015;Shafi et al. 2017). This will provide enhanced mobile broadband, massive machine type communications, and ultra-reliable low latency communications. While new generations of mobile technology can be dominated by marketing spin (Shin et al. 2011), there is consensus that network densification via smaller cells will be a key technique for 5G networks (Andrews et al. 2014). As the research questions outlined in this paper require a focus on technological forecasting, the relevant literature will now be reviewed, before the techno-economic literature on next generation mobile networks is evaluated.
Technological Forecasting in Telecommunications
Modelling and simulation methods are frequently combined with scenario planning approaches to test potential telecommunication strategies. As well as for strategic purposes, MNOs also often rely on demand forecasting methods to justify, internally and externally, the considerable investments required to move into new markets (see Fildes, 2002;2003). This is often related to generational upgrades of technologies, where forecasts can help to understand, for example, how different factors affect the demand for future mobile wireless communications services (Frank, 2004). Commonly used methods include time-series econometric approaches (Lee, 1988), innovation diffusion modelling (Jun et al. 2002;Venkatesan and Kumar, 2002;Meade and Islam, 2006;Michalakelis et al. 2008;Chu and Pan, 2008) and technological forecasting. Systems dynamics approaches have also been applied to model the underlying dynamics of mobile telecommunications diffusion (Mir and Dangerfield, 2013).
Generational changes in mobile wireless technology also provide opportunities for new market niches (Nam et al. 2008), but this can often lead to failure. Shin et al. (2011) focus on the socio-technical dynamics of moving from 3G to 4G LTE telecommunications, and study how 4G strategies have been formed, shaped and enhanced. Importantly, the transition between different generations of mobile technologies requires executives to adapt to a dynamically evolving industrial landscape as technology and regulation both change. Ghezzi et al. (2015) highlight the rapid transformation taking place in the telecommunications industry due to technological change and develop a framework to support MNOs operating in highly competitive markets. Using structured interviews with top- and middle-managers from four Italian MNOs, the authors identify the key drivers of disruptive change and the implications for their current business models. Increasing data traffic and decreasing voice revenues are the key drivers of disruptive change. Indeed, the emergence of Voice-over-IP (VoIP) services is one key driver of decreasing voice-related revenues, as users substitute paid voice services via an MNO for free VoIP access over the Internet (e.g. via Skype), leading the infrastructure owner to lose revenue (Kwak and Lee, 2011). MNOs have addressed this by bundling voice and SMS with data (Stork et al. 2017).
The analysis of digital adoption and the diffusion forecasting of mobile telephony has received significant attention in the technological change literature (Islam et al. 2002;Vicente and Gil-de-Bernabé, 2010;Kim et al. 2010;James, 2012;Kyriakidou et al. 2013;Mayer et al. 2015;Pick and Nishida, 2015;Islam and Meade, 2015;Sultanov et al. 2016;Sadowski, 2017), but relatively little focus has been placed on how this may affect mobile traffic growth, operator cost and competitiveness. However, in the rapidly growing ICT market, the forecasting of new technologies is a difficult yet necessary endeavour for operators and not something that should be purely left for the engineering domain as it has important commercial strategy ramifications.
In one such study, Lee et al. (2016) forecast mobile broadband traffic demand in Korea, using a device-based approach and a three-round Delphi expert elicitation process. Scenario analysis was applied to reflect uncertainty in the future dynamics of the sector, with 'optimistic', 'neutral' and 'pessimistic' scenarios being developed. Unsurprisingly, the conclusion was that mobile traffic will continue to increase, but the authors quantify by how much: Korea is expected to see an increase to approximately 286 Petabytes per month by 2020, which is six times greater than 2012, or approximately 6 GB of monthly mobile traffic per user. Velickovic et al. (2016) develop and apply a forecasting model for the deployment of fixed Fibre-To-The-Home (FTTH) telecommunication infrastructure, where demand forecasting is used to enable the dimensioning of necessary network resources. This is essential for operators to understand which investments are required, spatially and temporally, to meet evolving demand. The network economics of telecommunications networks make it extremely challenging to service low demand areas, as there are inevitably large fixed capital costs in deployment which need to be shared across many users. Hence, demand stimulation needs to take place simultaneously where take-up is poor, to encourage more favourable scale economies (Yoo and Moon, 2006).
There has been relatively little emphasis on how infrastructure, a key mediator in the global economy, affects firm strategy. Often those studies that have addressed this important topic have focused on expert elicitation, Delphi approaches to scenario development or case studies (Schuckmann et al. 2012;Raven, 2014;Bolton and Foxon, 2015;Roelich et al. 2015;Labaka et al. 2016), which are valuable in generating understanding, but can be complemented by modelling, simulation and the testing of 'what if' scenarios (Huétink et al. 2010;Zani et al. 2017).
Techno-economic Assessments of Next Generation Mobile Networks
Undertaking analysis for the UK telecommunication regulator Ofcom, Real Wireless (2012) examined the impact of the 700 MHz band in meeting growing demand for wireless broadband capacity. Different traffic demand scenarios were developed and tested, focusing on variations in mobile traffic by device penetration and type. Demographic change over time (such as fertility, mortality and migration) was not included in that analysis for the period 2012-2030, despite potentially having a multiplier effect, as large increases in the population will affect device growth and traffic demand. Ultimately, the cost of 5G systems will be determined by the large number of new components required to operate enhanced network infrastructure, including basestation units and backhaul transmission, as well as the associated cost of site installation, site operation, network optimisation and maintenance, and edge cache placement (Yan et al. 2017).
With the rapid growth of data and additional required network capacity, the solution to this is not merely an engineering domain issue, but also increasingly a techno-economic problem (Zander, 2017). While some of the underlying difficulties are highly technical, including dealing with interference and noise, escalating energy consumption, and using available bandwidth more efficiently, many of them are economic, including the cost of a dense small cell layer. Hence, there is a growing need for disruptive business model innovation to provide technically scalable solutions for enhanced wide-area capacity, while remaining within certain energy consumption and cost constraints. Breuer et al. (2016) undertook a techno-economic analysis of 5G fronthaul and backhaul, focusing on the convergence benefits with fixed access (ranging from FTTH to Fibre-To-The-Cabinet). Different 'massive 5G' small cell deployments are explored as a broadband data layer coexisting with a macro basestation network. Moreover, economic analysis of '5G superfluid networks' by Chiaraviglio et al. (2017) assessed capex, opex, Net Present Value (NPV) and Internal Rate of Return (IRR) for two case study areas, where operators were migrating from legacy 4G networks, in Bologna, Italy and San Francisco, USA. The analysis found that the cost of deploying dedicated hardware was higher than the cost of deploying commodity hardware running virtual resources. In a profit analysis, the authors also found that the monthly subscription fee could be kept sufficiently low, while still generating profit overall.
An engineering-economic analysis of China's Shanghai region by Smail and Weijia (2017) used cost-benefit modelling to assess the deployment of 5G technologies in relation to legacy 4G mobile networks. A comparative analysis was performed of price, cost, coverage and capacity for different scenarios using heterogeneous basestation types. This included the development of a pricing model. Key findings include the most cost-effective solution being macro cells with improved carrier aggregation, and the use of existing sites being critical to keep down costs. Having reviewed the literature on the modelling of mobile wireless infrastructure, we will now focus on Britain as a case study.
Britain as a Case Study Example
In 2016, the then Chancellor George Osborne tasked the National Infrastructure Commission with advising 'the Government on what it needs to do to become a world leader in 5G infrastructure deployment, and to ensure that the UK can take early advantage of the potential applications of 5G services' (Osborne, 2016:1). Consequently, the UK's 5G strategy was released in 2017, entitled 'Next Generation Mobile Technologies: A 5G Strategy for the UK' (DCMS and HM Treasury, 2017), containing content on the economic case, regulation, governance, coverage and capacity, security, spectrum and technology. The UK is also embarking on its first comprehensive National Infrastructure Assessment, which will include the supply and demand of mobile telecommunications (National Infrastructure Commission, 2017).
Both the 5G Strategy and the National Infrastructure Assessment are important market drivers, reflecting that some consumers are unhappy with the current state of telecommunications in the UK. For example, the British Infrastructure Group (2016), consisting of over 90 cross-party Members of Parliament, has supported calls for reform of the sector in a recent mobile coverage campaign, based on the elimination of areas of no coverage (known as 'not-spots'). In terms of existing provision, recent analysis by the regulator Ofcom (2016) found that 4G coverage by all four operators now reaches 72% of premises indoors and only 4% of premises are not covered by a 4G signal from any operator. However, only 40% of the geographic area is covered by every operator. Indeed, some users feel their experience differs from the reported voice and data coverage statistics, leaving them disgruntled, and this has become a prominent topic in the media. OpenSignal's (2017) State of LTE report shows that the UK ranks 43rd globally in terms of 4G availability and 39th in terms of speed (subject to the usual speed test caveats).
The concern is that not enough digital access infrastructure is being deployed, potentially leading to businesses and consumers being dissatisfied. In some cases, demand could exceed supply, which may have economic impacts, as telecommunications underpin economic activities and bottlenecks can lead to productivity issues. Both the National Infrastructure Assessment and the UK's 5G Strategy aim to eliminate or reduce connectivity problems, but there is still a limited range of metrics to help support future decision making in both industry and government, which we aim to address in this analysis.
There is a lack of open-source modelling frameworks for assessing the supply and demand of telecommunications. In this paper, we apply the Cambridge Communications Assessment Model, testing it annually up to 2030, based on the methodology illustrated in Figure 1. The approach allows us to assess mobile infrastructure against future demand scenarios, including (i) required per user traffic and (ii) fertility, mortality and migration. We utilise Object-Oriented Programming (OOP) principles to deliver the flexibility required for the multi-level modelling of assets, networks and whole systems. The OOP approach has previously been utilised for technological forecasting by applying it to the UK mobile telecommunications industry to support management decisions (Christodoulou et al. 1999). Transparency and reproducibility are central tenets of the research and therefore we have provided open-source access to the model code (https://github.com/nismod/digital_comms).
Figure 1 Methodological sequence
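To make the object-oriented structure concrete, a minimal sketch is given below of how assets, areas and a whole-system view might be composed. It is an illustrative simplification: the class names, attributes and units are assumptions chosen for exposition, not the actual API of the open-source model code linked above.

```python
# Illustrative sketch of a multi-level object model for infrastructure
# assessment. Class names and attributes are hypothetical, not taken
# from the actual Cambridge Communications Assessment Model.

class Asset:
    """A single network asset, e.g. a macrocell carrier or small cell."""
    def __init__(self, asset_type, capacity_mbps_km2, cost):
        self.asset_type = asset_type
        self.capacity_mbps_km2 = capacity_mbps_km2
        self.cost = cost

class PostcodeSector:
    """A small statistical area holding population and deployed assets."""
    def __init__(self, name, area_km2, population):
        self.name = name
        self.area_km2 = area_km2
        self.population = population
        self.assets = []

    def capacity(self):
        # Aggregate capacity density (Mbps/km2) across deployed assets.
        return sum(a.capacity_mbps_km2 for a in self.assets)

class NetworkSystem:
    """Whole-system view: iterate areas, compare capacity with demand."""
    def __init__(self, sectors):
        self.sectors = sectors

    def capacity_margins(self, demand_mbps_km2):
        return {s.name: s.capacity() - demand_mbps_km2[s.name]
                for s in self.sectors}

sector = PostcodeSector("CB1 1", area_km2=2.0, population=8_000)
sector.assets.append(Asset("macrocell", capacity_mbps_km2=20.0, cost=150_000))
system = NetworkSystem([sector])
print(system.capacity_margins({"CB1 1": 50.0}))  # {'CB1 1': -30.0}
```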
The process of producing the exogenous demographic scenarios is articulated in Thoung et al. (2016), with the projections considering fertility, mortality and migration at high resolution, using the Local Authority District as the statistical unit of projection. We then disaggregate these projections annually between 2016 and 2030 to approximately 9000 postcode sectors, using a population weighting from the last UK census (ONS, 2013).
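The census-based weighting can be illustrated in a few lines; the postcode sectors, populations and the Local Authority District projection below are hypothetical stand-ins for the real inputs.

```python
# Minimal sketch of population-weighted disaggregation: a Local Authority
# District (LAD) projection is split across its postcode sectors in
# proportion to their census populations. All numbers are illustrative.

census_pop = {"CB1 1": 9_200, "CB1 2": 6_800, "CB1 3": 4_000}  # census counts
lad_projection_2020 = 22_000  # projected LAD population (hypothetical)

total = sum(census_pop.values())
weights = {sector: pop / total for sector, pop in census_pop.items()}

sector_projection = {sector: round(w * lad_projection_2020)
                     for sector, w in weights.items()}
print(sector_projection)  # {'CB1 1': 10120, 'CB1 2': 7480, 'CB1 3': 4400}
```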
Although there has been exponential traffic growth over the last decade, future data demand is unknown. In the UK, according to data reported by Ofcom (2012c; 2015a), the average data consumption was 110 MB per month in March 2011. This demand more than doubled in one year, to 240 MB in 2012. In 2016, average monthly data consumption per user reached 1.3 GB. Figure 2 shows the historical series for the UK in light blue. The reasons behind this large increase in data demand are mainly related to the rollout of 4G LTE, allowing for higher mobile broadband speeds and driving data consumption, particularly from video and 'data-hungry' applications. The latest data from Cisco (2017) report that video accounts for 63% of all mobile traffic in the UK and that it will grow seven-fold by 2021, meaning that by then video would be 81% of UK mobile data traffic. The same report forecasts a Compound Annual Growth Rate (CAGR) of 38% for mobile traffic in the UK over the next five years, as shown in orange in Figure 2.
As the proportion of video in mobile traffic reaches saturation levels, CAGRs for per-user data demand tend to slow (Cisco, 2011). As 4G reaches maturity, future data demand growth is more uncertain, as this will depend on potential 5G use cases and services. Hence, in this paper we explore three different scenarios regarding data demand for connectivity services once 5G technologies start to be commercially deployed from 2020:
1. High demand, where per user data demand grows exponentially over the long-term.
2. Baseline demand, where data demand follows a linear increase over the long-term.
3. Low demand, modelled by a logistic curve (as stated in equation 1), which represents the 'worst' scenario for 5G adoption, where there is no new 'killer app', causing demand to plateau.

Equation (1) represents the logistic curve used to model the low demand scenario:

$$d_t = \frac{D}{1 + e^{-k(t - t_0)}} \quad (1)$$

where $d_t$ is the monthly data demand (GB) in year $t$, $t_0$ is the inflection year, the growth rate $k$ is 1.764, and the saturation level $D$ is 12 GB. Within the Demand Module, the monthly data demand in a specific year ($d_t$) is then converted to the number of Mbps required per user ($R_t$) in time $t$.
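A compact sketch of the three demand trajectories is given below. The 2016 starting value (1.3 GB per month) and the 38% CAGR follow the text, and the exponential path reproduces the 4.7 GB per-user figure quoted later for 2020; the baseline linear slope and the logistic midpoint year are illustrative assumptions.

```python
# Sketch of the three per-user demand scenarios (GB/month). The 2016 value
# (1.3 GB) and the 38% CAGR come from the text; the baseline slope and the
# logistic midpoint year t0 are assumptions made for illustration.
import math

def high(t):                     # exponential growth at a 38% CAGR
    return 1.3 * 1.38 ** (t - 2016)

def baseline(t):                 # linear increase (slope assumed)
    return 1.3 + 0.8 * (t - 2016)

def low(t, k=1.764, D=12, t0=2022):   # logistic plateau, equation (1)
    return D / (1 + math.exp(-k * (t - t0)))

for t in range(2016, 2031):
    print(t, round(high(t), 1), round(baseline(t), 1), round(low(t), 1))
# e.g. high(2020) = 4.7 GB, matching the per-user figure quoted for 2020
```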
We dimension the network across 30 user activity days per month ($D_m$), based on 15% of traffic taking place in the busiest hour ($B_h = 0.15$) (Holma & Toskala, 2009). We illustrate this in equation (2):

$$R_t = \frac{d_t \times 8000}{D_m} \times \frac{B_h}{3600} \quad (2)$$

where $d_t$ is the data demand at any given time over the study period, as illustrated in Figure 2, the factor 8000 converts gigabytes to megabits, and $R_t$ is the resulting busy-hour rate per user in Mbps.
The user density ($U_{a,t}$) of area $a$ at time $t$ is calculated annually based on the local population ($P_{a,t}$), a smartphone take-up rate of 80% ($s = 0.8$) based on the Ofcom (2017) technology tracker, a market share parameter for a hypothetical operator assumed to be 30% ($m = 0.3$), and the geographical area ($A_a$) of the postcode sector, as outlined in equation (3):

$$U_{a,t} = \frac{P_{a,t} \cdot s \cdot m}{A_a} \quad (3)$$

The Total Demand per km² ($E_{a,t}$) of area $a$ at time $t$ is then calculated in equation (4), using the bit rate required per user ($R_t$) in time $t$ and the user density ($U_{a,t}$):

$$E_{a,t} = R_t \cdot U_{a,t} \quad (4)$$
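The Demand Module steps in equations (2) to (4) can be combined directly; the constants follow the text, the notation follows the reconstruction above, and the example sector is invented.

```python
# Sketch of the Demand Module, equations (2)-(4). Constants follow the text:
# 30 activity days/month, 15% of daily traffic in the busy hour, 80%
# smartphone take-up, 30% operator market share. The example is invented.

def busy_hour_mbps_per_user(monthly_gb, days=30, busy_hour_share=0.15):
    # Eq. (2): GB/month -> megabits -> per day -> busy hour -> per second.
    return monthly_gb * 8000 / days * busy_hour_share / 3600

def user_density(population, area_km2, takeup=0.8, market_share=0.3):
    # Eq. (3): active users of the hypothetical operator per km2.
    return population * takeup * market_share / area_km2

def total_demand_per_km2(monthly_gb, population, area_km2):
    # Eq. (4): Mbps required per km2 in the busy hour.
    return busy_hour_mbps_per_user(monthly_gb) * user_density(population, area_km2)

# Example: a postcode sector of 8,000 people over 2 km2 at 4.7 GB/month.
print(round(total_demand_per_km2(4.7, 8_000, 2), 1))  # ~50.1 Mbps/km2
```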
As high-resolution, purely bottom-up modelling is a challenging task, we instead adopt a geotyping approach that groups areas based on similar characteristics, such as population density, to define the type of clutter environment. This is undertaken for England, Scotland and Wales (with Northern Ireland being excluded due to data discrepancies), using polygon data from Ordnance Survey Codepoint (2015). We use the annual Ofcom (2016) Connected Nations data on 4G geographic coverage by Local Authority District to disaggregate coverage to postcode sectors. This disaggregation is carried out by taking the aggregate percentage of area covered by 4G LTE and allocating it to the most densely populated postcode sectors first. All sites within areas with LTE coverage are considered to have LTE assets. This is a method for taking the site information from Sitefinder (Ofcom, 2012d) and updating the technologies present, considering the basestations belonging to the four major operators (EE, Vodafone, O2 and Three; EE's data were obtained by combining T-Mobile and Orange). Currently, there are two macrocellular networks with sites shared between, firstly, O2/Telefonica and Vodafone and, secondly, EE and Hutchison Three. Hence, to obtain a single network, we split the total number of sites in two in each local area.
We reuse the geotypes outlined in a report for the Broadband Stakeholder Group by Analysys Mason (2010), which have been applied elsewhere in the literature as they align with the 90th percentile of population coverage (see Oughton and Frias, 2016; Oughton and Frias, 2017). Seven geotype segmentations are used based on the minimum number of persons per km², comprising Urban (>7,959 persons per km²), Suburban 1 (>3,119 persons per km²), Suburban 2 (>782 persons per km²), Rural 1 (>112 persons per km²), Rural 2 (>47 persons per km²), Rural 3 (>25 persons per km²) and Rural 4 (>0 persons per km²). The population density bands are held static, while postcode sectors can transition between bands based on demographic change over the study period. Population density is hence a proxy for building density and is used in the network dimensioning based on three clutter types (urban, suburban and rural). In the dimensioning of the Radio Access Network (RAN), LTE and LTE-Advanced characteristics are extrapolated. For each geotype, networks are dimensioned using a model to calculate the minimum number of basestations required to meet different levels of demand, allowing a set of network performance curves to be generated from Inter-Site Distance (ISD) system-level simulations (Frias et al. 2017). The performance of the network is evaluated based on the average per user throughput for different spectrum bands, guaranteeing the desired Quality of Experience 90% of the time. Using the performance curves, a set of lookup tables is developed for reference when simulating the performance of different capacity expansion strategies.
As defined in equation (5), the probability density function of the Signal-to-Interference-plus-Noise Ratio (SINR) is developed for each cell size, allowing the calculation of an average spectral efficiency based on the technology, following Mogensen (2007). The average spectral efficiency of a cell (in bps/Hz) for a particular spectrum band and Inter-Site Distance is defined by

$$\bar{S} = \int_0^{\infty} S(\gamma)\, p(\gamma)\, d\gamma \quad (5)$$

where $S(\gamma)$ is the spectral efficiency achieved at SINR $\gamma$ and $p(\gamma)$ is its probability density function.
Based on the available bandwidth $B$ in the defined carrier frequency for a three-sector cell, the average throughput (Mbps) is calculated, as defined in equation (6):

$$T = B \cdot \bar{S} \cdot n_s \quad (6)$$

where $n_s = 3$ is the number of sectors per cell.
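A direct coding of equations (5) and (6) is sketched below; the SINR samples and the truncated Shannon mapping are simplified placeholders rather than the system-level simulation outputs used in the paper.

```python
# Sketch of equations (5)-(6): average spectral efficiency from a SINR
# distribution, then cell throughput from bandwidth. The SINR samples and
# the Shannon-style efficiency mapping are simplified placeholders.
import math
import random

def spectral_efficiency(sinr_db, cap=6.0):
    # Truncated Shannon bound (bps/Hz); the cap reflects the highest MCS.
    sinr = 10 ** (sinr_db / 10)
    return min(math.log2(1 + sinr), cap)

random.seed(1)
sinr_samples_db = [random.gauss(12, 6) for _ in range(10_000)]  # placeholder pdf

# Equation (5) as a Monte Carlo average over the SINR distribution.
avg_se = sum(spectral_efficiency(s) for s in sinr_samples_db) / len(sinr_samples_db)

bandwidth_mhz = 10    # e.g. a 10 MHz carrier
sectors = 3
throughput_mbps = bandwidth_mhz * avg_se * sectors  # equation (6)
print(round(avg_se, 2), round(throughput_mbps, 1))
```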
Options involving small cells focus on deploying them at 3700 MHz using Time Division Duplexing (TDD). Here, LTE-like spectral efficiency is assumed (1.5 bps/Hz) for estimating the number required, along with 100 MHz of available bandwidth, a maximum coverage of 200 m and a 75% download-to-upload ratio. Parameter values are outlined in Table 1.
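Using the stated small cell parameters, a rough per-cell downlink capacity and the number of cells needed in an area can be estimated as follows; the demand figure in the example is arbitrary.

```python
# Rough small cell dimensioning from the stated parameters: 3700 MHz TDD,
# 1.5 bps/Hz, 100 MHz bandwidth, 200 m maximum coverage, 75% download share.
import math

bandwidth_mhz = 100
se_bps_hz = 1.5
dl_share = 0.75
radius_km = 0.2

dl_capacity_mbps = bandwidth_mhz * se_bps_hz * dl_share   # 112.5 Mbps per cell
coverage_km2 = math.pi * radius_km ** 2                   # ~0.126 km2 per cell

def small_cells_needed(demand_mbps_km2, area_km2):
    # Cells needed is the larger of the capacity- and coverage-driven counts.
    by_capacity = demand_mbps_km2 * area_km2 / dl_capacity_mbps
    by_coverage = area_km2 / coverage_km2
    return math.ceil(max(by_capacity, by_coverage))

print(small_cells_needed(demand_mbps_km2=500, area_km2=2))  # illustrative: 16
```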
Infrastructure is deployed to postcode sectors by the hypothetical operator based on different investment decisions. We assume operators would first deploy LTE if this technology was not already present (integrating 800 and 2600 MHz) on a site. Then, sites with LTE in operation may have 700 MHz and 3500 MHz integrated. Finally, a small cell layer is deployed (to increasing densities of small cell sites) within each postcode sector operating in TDD at 3700 MHz.
At each modelled time-step, postcode sectors are considered in order of high to low population density. Each area's capacity and demand are compared using a capacity margin metric, and upgrades are applied based on the options available until either the demand is met or the annual budget is exhausted. The capacity margin ($M_{a,t}$) for area $a$ at time $t$ is calculated by subtracting the total demand ($E_{a,t}$) from the capacity ($C_{a,t}$), as illustrated in equation (7):

$$M_{a,t} = C_{a,t} - E_{a,t} \quad (7)$$

Within the Cost Module we calculate the costs of infrastructure upgrades over the course of the study period, according to the Total Cost of Ownership (TCO) of new assets. We exclude the costs of operating legacy network assets. We use costs sourced from the Ofcom Mobile Call Termination (MCT) Model (2015b), as well as from 5G NORMA (2016). Costs are calculated by finding the NPV of the TCO for each infrastructure asset over a 20-year period, based on the methodology dictated by Ofcom for the cost of extending coverage for the 4G LTE auction (Real Wireless, 2012). This includes a 20-year NPV calculation with no account for price trends, and a 3.5% social discount rate. We assume a 10-year equipment lifetime for macrosites and a 5-year equipment lifetime for small cells, with civil works not being repeated. There is no residual value at the end of the period.
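The upgrade decision logic can be sketched as a simple loop; the options, capacity gains and costs below are invented for illustration, and each available option is applied at most once per sector in this simplification.

```python
# Sketch of the upgrade decision loop: areas are considered from high to
# low population density, and upgrades are applied until demand is met or
# the annual budget is exhausted. All numbers are invented placeholders.

def run_timestep(sectors, upgrade_options, annual_budget):
    # sectors: dicts with 'density', 'capacity', 'demand' (Mbps/km2).
    spent = 0.0
    for s in sorted(sectors, key=lambda x: x["density"], reverse=True):
        for option in upgrade_options:          # options tried in order
            if s["capacity"] >= s["demand"]:
                break                           # capacity margin now positive
            if spent + option["cost"] > annual_budget:
                return spent                    # annual budget exhausted
            s["capacity"] += option["capacity_gain"]
            spent += option["cost"]
    return spent

sectors = [
    {"density": 8000, "capacity": 40.0, "demand": 120.0},
    {"density": 900,  "capacity": 25.0, "demand": 35.0},
]
options = [{"name": "carrier",    "capacity_gain": 30.0,  "cost": 50_000},
           {"name": "small_cell", "capacity_gain": 100.0, "cost": 120_000}]
print(run_timestep(sectors, options, annual_budget=300_000))  # 220000.0
```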
The cost structure of assets is affected by the fact that we take a brownfield approach for spectrum integration using the existing macrocellular network, and a greenfield approach for small cell deployment. Small cells are deployed on local authority-owned street furniture at no cost. Detailed asset costs are outlined in Table 3, including references to their sources. We assume a 10% mark-up on all costs for upgrades in the core network. The results will now be reported in Section 4. We start by reporting the results by scenario for long-term demand, before progressing to the performance of different supply-side infrastructure strategies. A discussion is then undertaken on the ramifications of these findings in Section 5.
Demand results
In the demographic scenarios tested, the baseline population grew by 5.3 million, to 68.6 million. This contrasts with the high growth scenario, where there were an additional 7.8 million people (reaching 72.8 million in 2030), and the low growth scenario, which saw much smaller growth of 2.7 million people (reaching 64.4 million in 2030). The baseline population density grew from 274 persons per km² and finished at 297 persons per km² (an increase of 23 persons per km²). Figure 3 illustrates population growth graphically by scenario. Population forecasts have minor differences in starting points as the most recently available data are from 2015.
Figure 3 Demand growth metrics by scenario
In terms of the aggregate area demand, there was an increase in the baseline from approximately 0.2 Tbps in 2016 to 4.23 Tbps in 2030. The low growth scenario grew from 0.2 Tbps in 2016 to 2.08 Tbps in 2030, whereas the high growth scenario grew from 0.2 Tbps in 2016 to 9.7 Tbps. As this represents a hypothetical MNO with only 30% market share, the total national demand would be significantly higher. Where this demand takes place spatially is of vital importance to MNOs. Figure 4 also illustrates the demand evolution across Britain over time, reflecting the underlying demographic characteristics of local areas. Across all scenarios, there is lower demand in northern and western regions of Britain, particularly in Scotland and Wales, as the population is either static or declining. In the low scenario, demand is mostly concentrated within major cities such as London, Birmingham, Manchester, Newcastle and Glasgow, as one would expect. However, under baseline or high growth there is considerable demand in lower population density suburban and rural areas too, particularly in the South East and Midland areas of England, which would prove a considerable challenge to meet in a cost-efficient way.
The 'Static' scenario specifically considers no demographic change from 2016 onwards, only focusing on baseline data demand growth. This allows the impact of the population and data exogenous drivers to be isolated and compared. The key finding is that in the 'Static' scenario, the aggregate demand was quite similar to the baseline in 2030 at 3.9 Tbps, whereas the baseline reached 4.23 Tbps. This indicates that population growth in the baseline scenario led to an additional required 0.3 Tbps of capacity at the end of the study period. To summarise for the baseline scenario, only 8% of the growth for 2016 to 2030 results from demographic change, whereas 92% is from per user data demand. Therefore, this means technological progress accounts for more than 90% of the growth in total data demand.
Supply-side strategy results
We tested four different options, including (i) minimum intervention, (ii) spectrum integration, (iii) small cell deployment and (iv) a hybrid approach using both spectrum and small cells. Figure 5 illustrates the performance of all strategy options across each exogenous demand scenario.
In the case of the minimum intervention strategy, we found that the current system is not sufficient in meeting long-term demand. The capacity deficit grew considerably between 2017 and 2030 depending on the scenario, with this metric being smallest under low growth and largest under high growth as one would logically expect. This 'do nothing' scenario provides an important comparison for the effectiveness of other strategies.
The performance of the spectrum strategy had mixed results across the demand scenarios. For example, there was only a minor capacity deficit in the baseline when compared to the high growth scenario. In the low growth scenario, system capacity narrowly managed to meet long-term demand, with a capacity deficit arising only in the London region. Yet these results indicate that a purely spectrum-based approach would not be a robust strategy to meet long-term demand. However, it is promising that this strategy could meet demand in many locations outside of the major urban conurbations.
The small cell deployment option performed well across all scenarios, with aggregate system capacity being positive in all cases, avoiding the capacity deficit that often arose in other options. Small cells provide very high capacity in localised areas (due to small coverage radii per cell), but such a large system capacity surplus may lead to overprovisioning, which could be economically inefficient. More cost efficient wide-area coverage solutions are likely to be needed.
In the hybrid strategy, we see a very similar outcome to the small cell strategy. This is because the additional spectrum (specifically in urban areas) is not sufficient to meet required demand, so the decision layer in the model resorts to small cell deployment. In certain scenarios, a capacity deficit arose in London and the South East. Moreover, in the high demand scenario there was a capacity deficit in many other English regions, particularly around the main conurbations in the Midlands and North West. Appendix A contains detailed results for the five areas with the largest capacity margin deficits, broken down by scenario and year, for the baseline scenario.
Figure 5 Strategy performance over time
To understand the commercial ramifications of these results, each demand scenario and deployment strategy must be evaluated in relation to cost. Moreover, it is particularly helpful to quantify how capital expenditure takes place across urban, suburban and rural areas. Consequently, Figure 6 visualises the results for the rollout of infrastructure over the study period of 2016-2030.
As the UK has incomplete LTE coverage, in all expansion options completing this coverage commonly took place in the early years of the study period, up until 2020 when the 5G spectrum bands became available. Spending on LTE was generally high because it requires a whole new basestation, as opposed to simply integrating additional carriers for 700 or 3500 MHz on an existing site. Spending on urban areas made up a small proportion of the overall capital expenditure, with suburban and the densest rural areas absorbing considerable resources, particularly later in the study period from 2021 to 2030.
During the LTE upgrade in the first three years of the study period, resources were pushed out to some of the lowest population density areas in remote Britain. In the spectrum strategy, the option runs out of new bands to integrate, leading to a decrease in total annual capex over the study period. However, the shape of this decline indicates that not all areas have a negative capacity margin when the bands come online, with spending taking place in two phases. Firstly, in 2020 a range of urban, suburban and rural areas with a capacity deficit receive new spectrum, with this spending tailing off in 2021 before rising again by 2023. In the strategies containing small cells, the limited radii of these assets cause spending to ramp up to the maximum annually allowed. Small cells dominate the spending profile from 2020 onwards, with a small proportion in early years spent on integrating 700 MHz, and an even smaller proportion spent on 3500 MHz.
Having reported both the demand and infrastructure performance results, we will now discuss what this means for future decision making. Estimating the future demand for mobile telecommunications infrastructure is a challenging task, as there is a very high degree of uncertainty. The aim here was to quantify the impact of the industry standard forecast by Cisco up until 2021, and then use high (exponential), baseline (linear) and low (logistic) growth scenarios to explore future mobile traffic demand up to 2030. Population projections were also integrated. We find that data demand is by far the largest driver of mobile traffic (constituting 92% of baseline growth), with demographic change including fertility, mortality and migration having only a marginal impact on future demand (8% in the baseline). This is an important finding because it encourages technological forecasters in industry and government to focus on refining future data demand forecasts, rather than population projections, contrasting strongly with energy or transport systems where demographic growth is often the major driver.
Within the methodology applied here, the results show that per user monthly mobile traffic demand is estimated to be 4.7 GB in 2020 for the UK. This estimate is comparable to the results of Lee et al. (2016), who developed forecasts of mobile broadband traffic using a combination of scenario analysis and the Delphi method, finding that South Korea would have a monthly user demand of approximately 6 GB by 2020. Although the demand scenarios used here for the UK were lower, the difference in adoption and access capacity between each country provides an explanation. As carried out in , further research should put a larger emphasis on modelling devices and their impact on data demand, using these scenarios to drive the existing modelling framework.
Additionally, although this paper included annual migratory trends (predominantly towards urban areas), daily commuting and travel patterns were excluded. This is however important, and a further incremental development would be to better reflect mobility in the busy hour estimation, particularly in cities, as this is where the mobile network can easily become overloaded. Indeed, user mobility can make it a challenge to report mobile statistics; hence one approach is to anchor data usage to the population or premises (as used by Ofcom, 2016). Although the method used here is a simplification, the results still provide useful comparative insight across scenarios. The growth of computational power, combined with new crowd-sourced data, may help in overcoming this issue as user mobility can be tracked through time and space.
To evaluate how options performed against future demand scenarios, the second research question focused on assessing interactions between supply and demand during a mobile telecommunications generational upgrade. Different infrastructure strategies could be assessed against the minimum intervention option. The results found that the spectrum strategy performed well in the baseline and low demand scenarios up until 2025, demonstrating that this will play an important role in meeting midterm demand. However, spectrum-based options were more sensitive to demand uncertainty than small cell deployment, which was more capable of dealing with higher demand growth, although this was at the expense of increased cost. The small cell and hybrid strategies provide huge capacity in some areas, but still fail to meet demand in others. This is due to limited coverage per cell in combination with constrained capital expenditure.
Importantly, if there is a desire for the large per user speeds mooted by 5G, which cannot be achieved by spectrum strategies alone in non-urban areas, new revenue streams consequently need to be found to boost infrastructure investment. Therefore, the obvious prospect is to capture some of the potential value created from the new market opportunities and productivity gains associated with IoT, Smart Cities and other digitally connected systems, as has been identified in the technological change literature (Yang et al. 2013;Sadowski, 2017). Of course, in this analysis we tested a relatively narrow range of supply-side strategies, therefore further research must focus on defining strategies of emerging 5G technologies including higher order MIMO and millimetre wave small cells to enhance supply capacity.
In answering the final research question, we explore the ramifications of the results. Firstly, the quantified future scenarios used in this paper, in combination with the performance of different supplyside options, are a useful tool for understanding the economic costs of increasing mobile demand. Ghezzi et al. (2015) provide a decision framework for MNOs to identify the key drivers of potentially disruptive change to their operations, market strategy and different business models. Importantly, increasing data traffic is treated as one of the key drivers. Hence, the scenario-based analysis produced here provides complementary quantitative information for MNOs to anticipate how changing demand may affect the value proposition, value creation, value delivery and value appropriation of their business model.
Moreover, while this information may be readily available within some organisations, particularly MNOs or telecommunication regulators, the findings can also be valuable elsewhere, where they are not so readily available. For example, it is vital information for small and medium-sized (SME) digital economy firms, as well as for governmental institutions which sit outside of the telecommunication regulator and hence might not be able to access commercially sensitive data. Indeed, Shin et al. (2011) undertake an analysis of the socio-technical dynamics of moving from 3G to 4G LTE in Korea, determining that moving to a new generation of mobile technology cannot be driven by a single organisation, as it requires a tremendous number of partnerships between private and public organisations. Hence, the scenario analysis undertaken here can help in this endeavour by providing a transparent assessment of the supply and demand of mobile telecommunications for MNOs, SME digital economy firms and government institutions.
Indeed, plentiful spectrum resources provide an important short- to mid-term capacity enhancing option, as illustrated in this analysis. Therefore, the UK Government's timely delivery of the bands outlined in the Public Sector Spectrum Release programme (750 MHz of sub-10 GHz spectrum by 2022, with 500 MHz being available by 2020) will be highly important, specifically for rural areas where small cell rollout will be unviable (UK Government Investments, 2016). On the other hand, in urban areas with high data demand, small cell deployments are a cost-efficient strategy for MNOs, so government must ensure these assets can be efficiently and cheaply installed. Often, stringent local planning regulations combined with historical protections for older buildings can prevent timely deployment, requiring innovative private and public solutions to overcome these barriers.
In both the spectrum integration and small cell strategy options, backhaul capacity and its consequential cost are a significant potential issue. Options should be explored that focus on enhancing fibre access in areas with little backhaul capacity, as this could lower deployment costs. Ultimately, in 5G we will see increased convergence within the telecommunications industry, as MNOs who also own a fixed network may allow low-cost access for backhaul to provide a competitive cost advantage.
Conclusion
Rapid technological innovation in mobile telecommunications affects our ability to accurately forecast long-term capacity and demand, making it essential that rigorous examination of this uncertainty is both quantified and visualised to support decision-making. The analysis presented here can help MNOs, SME digital economy firms and government institutions understand the implications of increasing demand (particularly the economic implications) resulting from change in both per user traffic and demographics. Additionally, quantified assessments of the performance of different 5G supply-side strategies were presented as ways for MNOs to cope with dynamic mobile traffic growth.
We find that increasing per user traffic resulting from technological change has a major impact on future demand, whereas demographic change (fertility, mortality and migration) has only a minor effect. For example, in the baseline scenario only 8% of the growth in data for 2016 to 2030 resulted from demographic change, whereas 92% was from per user data demand. Hence, technological progress accounts for more than 90% of the growth in total data demand. Consequently, technological forecasters should be encouraged to focus on refining per user data demand, rather than devoting time to developing population projections, contrasting strongly with energy or transport systems.
The modelled results indicate that spectrum strategies could perform well in most scenarios up until 2025, and hence will play an important role in meeting mid-term demand. However, if demand growth was very high, spectrum failed to meet demand. This contrasts with small cell deployment, which provided huge increases in capacity, but at the expense of much higher capital expenditure due to limited coverage per cell. Unless new revenue can be obtained from the value created by IoT, Smart Cities, or other new technological developments, the investment appetite for rolling out small cells anywhere other than urban areas will be low. In any case, telecommunications capacity needs to be balanced with demand, which will mean MNOs avoid overprovisioning. 5G small cell deployments are highly likely in urban and suburban locations, but more cost-efficient wide-area coverage solutions are required to meet lower population density areas.
Qualitative frameworks have been put forward in the technical change literature to aid with business model adaptation for MNOs facing increasing traffic growth on the one hand and declining revenues on the other. The contribution of this paper is to provide a scenario-based assessment of telecommunications supply and demand as we move towards 5G, to serve as complementary evidence for high-level decision-makers to develop successful market strategies that are robust to different futures.
David Cleevely
Dr David Cleevely CBE FREng FIET is an entrepreneur and international telecoms expert. In 1985, he founded the telecommunications consultancy Analysys Mason, where he made a significant contribution to the theory and practice of calculating Universal Service Obligation costs, as well as identifying 'The Broadband Gap', where the cost of supply would exceed the price consumers were willing to pay. He is an authority on telecommunication policy and has advised numerous governments on policy and innovation frameworks.
"Engineering",
"Computer Science",
"Business",
"Economics"
] |
Stock market price prediction model based on grey prediction and ARIMA
Nowadays more and more people like to invest in volatile assets, and it is the goal of every market trader to maximize the total return by developing a reasonable investment strategy. We predicted the daily value of gold and bitcoin for five years based on known data, building two models: an Improved Metabolic Grey Model (abbreviated as IGM) and the time series model ARIMA. The application of the models helps investors make investment decisions and improve economic returns.
Introduction
Market traders buy and sell volatile assets frequently, with the goal of maximizing their total return. There is usually a commission for each purchase and sale. Two such assets are gold and bitcoin. Gold, as a general equivalent, is a constant in asset allocation. Gold has been widely used throughout the world as money [1], for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity.
Bitcoin is a cryptocurrency based on a decentralized peer-to-peer network and consensus mechanism, with open-source code and blockchain as its underlying technology [2]. Bitcoin is an innovative payment network and a new kind of money, and it is the first decentralized digital currency. Bitcoins are transferred directly from person to person via the network without going through a bank or clearing house. The transaction fees are much lower and bitcoins can be used in every country. From its initial unpopularity to its worldwide recognition, Bitcoin's high returns come with high risks, and it has high volatility [3][4].
GM (1,1)
GM(1,1), pronounced as "Grey Model First Order One Variable", is a time series forecasting model which is able to make accurate predictions for monotonous processes. Details are as follows. Let the original sequence be

$$x^{(0)} = \left(x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n)\right)$$

and form its first-order accumulated generating operation (1-AGO) sequence

$$x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 1, 2, \ldots, n.$$

The accumulated sequence is assumed to satisfy the first-order differential (whitening) equation

$$\frac{dx^{(1)}}{dt} + a x^{(1)} = b,$$

whose parameters $a$ and $b$ are estimated by least squares from the discrete values sampled at equal intervals. With the initial condition $x^{(1)}(1) = x^{(0)}(1)$, the solution of the equation is

$$\hat{x}^{(1)}(k+1) = \left(x^{(0)}(1) - \frac{b}{a}\right) e^{-ak} + \frac{b}{a}, \quad k = 0, 1, 2, \ldots$$

When $k < n$ the solution gives fitted values, and when $k \geq n$ it gives predicted values; forecasts of the original sequence are recovered by the inverse accumulation $\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k)$.
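A compact implementation of these formulas is sketched below; the five-point price series is arbitrary and stands in for the asset data.

```python
# Minimal GM(1,1) sketch following the formulas above. The least-squares
# step estimates a and b from the background values of the AGO sequence.
import numpy as np

def gm11(x0, horizon):
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                  # 1-AGO sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # estimate a, b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # whitening solution
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])         # inverse AGO
    x0_hat[0] = x0[0]
    return x0_hat                                        # fits + forecasts

prices = [1324.6, 1321.8, 1326.0, 1332.4, 1340.1]        # illustrative data
print(gm11(prices, horizon=3).round(1))
```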
Pretest
Whether a given sequence can be used to build a GM(1,1) model with high accuracy is generally determined by the class ratio of $x^{(0)}$, which needs to fall within a certain interval.
The class ratio is defined as

$$\lambda(k) = \frac{x^{(0)}(k-1)}{x^{(0)}(k)}, \quad k = 2, 3, \ldots, n;$$

if every $\lambda(k)$ falls within the admissible interval $\left(e^{-2/(n+1)}, e^{2/(n+1)}\right)$, a GM(1,1) model can be built. Considering the volatility of asset values and the small amount of data available when forecasting begins, we verified that the class ratios of our sequences satisfy this condition, so the model is applicable.
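The pretest can be coded directly from the definition above; the sample sequence is the same illustrative one used earlier.

```python
# Class ratio pretest for GM(1,1): all ratios must lie in the interval
# (exp(-2/(n+1)), exp(2/(n+1))) for the model to be applicable.
import math

def class_ratio_test(x0):
    n = len(x0)
    lo, hi = math.exp(-2 / (n + 1)), math.exp(2 / (n + 1))
    ratios = [x0[k - 1] / x0[k] for k in range(1, n)]
    return all(lo < r < hi for r in ratios), ratios

ok, ratios = class_ratio_test([1324.6, 1321.8, 1326.0, 1332.4, 1340.1])
print(ok, [round(r, 4) for r in ratios])
```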
Process and result
We realize the above modeling process by programming. We improve the model by carefully considering the cycles of asset volatility: several different window lengths are brought into the equation separately, and GM(1,1) is used to predict the data for five years [5]. We find that the best result is obtained by bringing two of the window lengths into the equation separately and averaging the two sets of predicted values obtained. As shown in Figure 1, the prediction is very accurate [6].
Post-test and model evaluation
We test the accuracy of the model by a residual test. First calculate the fitted accumulated values $\hat{x}^{(1)}(k)$, then perform the cumulative reduction calculation

$$\hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1),$$

and then calculate the absolute error (A.E.) and relative error (R.E.):

$$\text{A.E.}(k) = x^{(0)}(k) - \hat{x}^{(0)}(k), \qquad \text{R.E.}(k) = \left|\frac{\text{A.E.}(k)}{x^{(0)}(k)}\right| \times 100\%.$$
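The residual test is a few lines on top of the grey model sketch; `gm11` below refers to the illustrative function defined earlier, and the 20% threshold follows the text.

```python
# Residual test: absolute and relative errors of the GM(1,1) fit.
# Reuses the illustrative gm11() function defined earlier.
import numpy as np

def residual_test(x0, x0_hat):
    x0 = np.asarray(x0, dtype=float)
    abs_err = x0 - x0_hat[:len(x0)]
    rel_err = np.abs(abs_err / x0) * 100      # relative error in percent
    return abs_err, rel_err

prices = [1324.6, 1321.8, 1326.0, 1332.4, 1340.1]
fits = gm11(prices, horizon=0)
abs_err, rel_err = residual_test(prices, fits)
print(rel_err.round(2))   # values above 20% would be flagged as 'abnormal'
```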
We find that almost all of the relative errors are less than 20%, which is reasonable, except for some values. We call these 'abnormal values' and list them in the following table. All the abnormal values come from the bitcoin predictions [7]. By observing the table, we find a very large error in the predicted value: on March 13, 2020, bitcoin plummeted from 7936.65 on the 12th to 4830.21, a fall of 39.14%. By checking the information, we know the plunge on that day was due to the outbreak of the global COVID-19 epidemic. In addition, many investors' disappointment in the market accelerated after US President Trump announced on Wednesday night that travelers from 26 'Schengen Convention countries', in addition to the United Kingdom and Ireland, would be restricted from entering the United States for 30 days from Friday (the 13th) [8]. The plunge on the 13th also had an impact on the forecast value on the 15th, which was low.
We know that bitcoin is very volatile and various factors can affect its value; the influencing factors are twofold: the market and government macro-regulation. The outbreak of the black swan event of the COVID-19 epidemic, the U.S. government's implementation of entry controls, and the greatly reduced market demand caused bitcoin to plummet. There are also other cases: for example, around February 5, 2018, bans by global regulators and UK and US banks prohibiting the use of credit cards to buy bitcoin caused a lot of concern in the market, and similarly led to a forecast that was high relative to the actual value [9].
On September 15, 2017, the virtual currency exchanges Huobi (FireCoin) and OKCoin announced that "only RMB trading business will be stopped, but the rest of the business will not be affected". Some people saw this as a way to continue coin trading on the platform or to provide information aggregation for individual-to-individual virtual currency trading, so market demand was somewhat higher and the forecast was low [10].
Large deviations are thus due to objective causes, namely market changes or government regulation, after which bitcoin plunged following a period of growth or rebounded after a period of decline. In addition, abnormal values accounted for only 0.4939627% of predictions, which shows that the GM(1,1) model we built is relatively accurate in its forecasts.
Time Series Model
We then use the time series model ARIMA [5] to predict the daily prices of the two assets over the five years.
ARIMA
ARIMA models are applied in cases where data show evidence of non-stationarity in the sense of the mean (but not the variance/autocovariance), where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function (i.e., the trend) [6]. The AR part of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. The MA part indicates that the regression error is actually a linear combination of error terms whose values occurred contemporaneously and at various times in the past [7]. The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once). The purpose of each of these features is to make the model fit the data as well as possible. Our modeling process is as follows.
Figure 2. ARIMA algorithm
Here are the ARIMA forecasts for gold. For the value of gold on non-trading dates, we choose to skip them. First we perform the first-order difference operation and the second-order difference operation. The difference between the first-order and second-order results is not large, and using the second-order difference would risk overfitting, so we choose the result of the first-order differencing. We then applied a smoothing method to the data, but the results were not good. For the ADF test, a statistic below the critical values at the 1%, 5% and 10% levels simultaneously means the unit-root hypothesis is very strongly rejected; in this data, the ADF statistic is -6.9, which is below all three critical values.
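The differencing and ADF steps can be reproduced with statsmodels; the price series below is a randomly generated placeholder, so its test statistic will differ from the -6.9 reported.

```python
# Stationarity check: first-order differencing followed by an ADF test.
# The price series here is an arbitrary placeholder, not the gold data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
prices = pd.Series(1300 + np.cumsum(rng.normal(0, 5, 500)))  # random walk

diffed = prices.diff().dropna()                  # first-order difference
stat, pvalue, *_, crit, _ = adfuller(diffed)
print(round(stat, 2), round(pvalue, 4),
      {k: round(v, 2) for k, v in crit.items()})
```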
The p-value of the white-noise test is below the significance level of 0.05, so the hypothesis can be rejected at the 95% confidence level and the series is considered a non-white-noise series.
The model orders are determined from the ACF and PACF plots using their tailing-off and cut-off behaviour.
Figure 7. AIC
The prediction process for bitcoin is the same as for gold. We use the model ARIMA(1,1,7) to predict gold and ARIMA(0,1,10) to predict bitcoin, implemented by programming. The results are shown in Figure 8.
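The stated model orders can be fitted with statsmodels as follows; the input series are placeholders for the actual gold and bitcoin data.

```python
# Fitting the stated models with statsmodels: ARIMA(1,1,7) for gold and
# ARIMA(0,1,10) for bitcoin. `gold` and `btc` are placeholder series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
gold = pd.Series(1300 + np.cumsum(rng.normal(0, 5, 500)))
btc = pd.Series(8000 + np.cumsum(rng.normal(0, 200, 500)))

gold_fit = ARIMA(gold, order=(1, 1, 7)).fit()
btc_fit = ARIMA(btc, order=(0, 1, 10)).fit()

print(gold_fit.forecast(steps=5).round(1))   # five-day-ahead forecasts
print(btc_fit.forecast(steps=5).round(1))
```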
Model Comparison
Comparing the two models' abnormal value percentages, the Improved Metabolic Grey Model predicts more accurately than the time series model.
Conclusion
Investment in volatile assets requires practitioners to formulate reasonable investment strategies to maximize total return. In view of the current need of financial investors to expand their income, this paper establishes an improved metabolic grey model and an ARIMA model based on known data to predict the daily value of gold and bitcoin over five years. Comparing the two models' abnormal value percentages, the Improved Metabolic Grey Model predicts more accurately than the time series model. The application of the models helps investors make correct investment decisions and improve economic benefits.
"Economics"
] |
Pesma Apps as Android-based Integrated Applications for Mahasantri Pesma
Pesantren Mahasiswa Internasional KH Mas Mansur (Pesma) has a shuttle facility for everyone who lives there. However, the use of the shuttle is not optimal because the ordering procedure still uses manual techniques. This contrasts with the rapid development of industry 4.0 and Pesma's mission of digitalization. This paper describes the effort to improve Pesma shuttle bookings by developing an application. The application is built upon the Android Studio 3.5.1 platform and uses the Firebase real-time database. The development method implements the Waterfall model of the System Development Life Cycle (SDLC). The research results in an application called "Pesma Apps" that can be used by staff and mahasantri. Testing of Pesma Apps obtained sufficiently good results: black box testing proves that all functions work well, and a usability test using SUS with 30 respondents produces a good result at the level of 72.6, which suggests that the application is accepted.
Introduction
The rapid change of globalization nowadays makes the development of Information Technology grow very fast. It increases the use of mobile devices, especially smartphones, which greatly encourages the effectiveness of an activity. Many people and institutions utilize the development of Information Technology, especially the use of smartphones in the form of Android-based applications, to facilitate work and daily activities [1]. The rapid development of information technology requires people to follow these developments, especially in the education, commerce, transportation and health sectors. In fact, people nowadays want the ease of accessing things through the smartphones that most people currently have [2].
Pesantren Mahasiswa Internasional KH Mas Mansur (Pesma) Universitas Muhammadiyah Surakarta (UMS) is an institution owned by UMS and under the supervision of Lembaga Pengembangan Pondok Al-Islam dan Kemuhammadiyahan (LPPIK) UMS. An active student of UMS who lives in Pesma is called a mahasantri. Mahasantri receive facilities related to education (language, religion, and soft skills), sports, and transportation (shuttle). The shuttle is an operational car owned by Pesma. It can be used by directors, staff, employees, and students (mahasantri). However, the use of the shuttle is not yet optimal because the ordering procedure still applies manual methods. To begin with, a mahasantri who wants to book the shuttle is required to confirm with the driver that the unit is ready to use. The authors consider this process overcomplicated and not relevant in today's fast-paced era. Besides improving the quality of human resources, Pesma also has another mission, namely digitalization. This digitalization is a response to the rapid development of industry 4.0, whose purpose is to achieve improvements in automation and operational efficiency as well as effectiveness [3]. This can take place in Pesma via smartphones running Android-based applications. Therefore, the authors conducted research on the existing case study at Pesma, especially related to Pesma shuttle bookings.
The output of this research is named "Pesma Apps". Pesma Apps is an integrated application for mahasantri; however, the authors examine a smaller scope which focuses on ordering the shuttle. The application is based on Android. The authors created this application using Android Studio and the Java programming language. The Android mobile OS provides a flexible environment for Android mobile application development, as developers can not only make use of the Android Java libraries but can also employ normal Java IDEs [4]. Moreover, Android can enhance the reliability, usability and other features of existing products [5], since phone features now depend more on software and applications than on hardware.
Method
The method used in this research refers to the System Development Life Cycle (SDLC), a general methodology used to develop information systems. SDLC consists of several phases, starting from planning, analysis, design, implementation and maintenance of the system. This SDLC concept underlies various types of software development models that form a framework for planning and controlling information systems; models that are often used include Waterfall [6]. The authors used the SDLC Waterfall model [7], which has a systematic approach starting from the system's requirements and proceeding through the stages of analysis, design, development, testing or verification, and maintenance, as implemented in this research.
a. Requirement Analysis
The analysis of the equipment required to design the Pesma Apps application includes: 1) Tools and Materials: the tools and materials used for the design are listed in Table 1. A MacBook Pro was used for writing the program code, and an Android smartphone with a minimum version of Android 5.0 was used to build and run the application.
b. Design
Design is the initial stage to analyze the shape and design of the application to be created. This design includes a use case diagram that illustrates the interactions between users and the activities performed in the system. 1) Use Case Diagram: the use case diagram is applied to model the behavior of the information system to be created. A use case describes an activity carried out by an actor; actors are the components involved in using the application. It provides the simplest representation to visualize how actors interact with the system [8]. Based on Figure 2, the admin can manage the application as well as view the users that have been registered, the booking list, and booking verification. Based on Figure 3, it is observed that the user can carry out registration, book the shuttle, view the history, and edit the user profile.
2) Activity Diagram
The UML Activity Diagram (AD) is an important diagram for modeling the dynamic aspects of a system [9]. Figure 4 describes the admin's activity to control the application and to book the shuttle.
c. Development/Implementation
The authors started to create the application using the Java programming language to translate the expected logic, with Android Studio 3.5.1.
d. Testing
Testing is an important stage to test the feasibility of an application that has been made and to make sure that the application conforms to the initial planning in terms of interface and function. The authors used two methods to test the application: black box testing and the System Usability Scale (SUS), the latter chosen because of its versatility, ease of administration, and comparative value [10]. Black box testing is application testing of specific functionality without testing the design and program code; the black box method only tests the functional behavior of the application and can evaluate valid and invalid entries from users (Putri, 2019). For the System Usability Scale (SUS), the authors used a questionnaire with a 5-point Likert scale, consisting of SD (Strongly Disagree), D (Disagree), N (Neutral), A (Agree), SA (Strongly Agree).
e. Maintenance
The last phase is building the application in accordance with the design that has been made. The authors implemented the design with Android Studio 3.5.1 using the Java programming language and the Firebase database, because Firebase combines many products with Google's infrastructure and a developer-friendly environment [12]. The Android Firebase API is needed to gain access to the database [13]. The minimum SDK is Android version 5.0.
1) Introduction to The Initial Menu
The initial menu of the application is the user sign-in display, with an option leading to the admin page and a button for registration, as displayed in Figure 6. The user sign-in menu (Figure 6) appears after the splash screen when the application is opened. Users are required to enter the Student Number (NIM) that has been registered. If the student is not yet registered, the student is required to choose the register option, as shown in Figure 7. After the registration is conducted successfully, the home page is displayed, as shown in Figure 8. The home menu (Figure 8) displays the main menus that can be accessed by the user, including Shuttle, Academic, Pesma, Facility, About, and Meals. This research focuses on the Shuttle menu, displayed in Figure 9.
Figure 9. Shuttle Menu
The application provides a function for booking a shuttle. Users are required to choose the desired destination from the destination list available in the application. After selecting a destination, the user chooses the day of the shuttle booking and picks an available driver. The user presses the book button to start the ordering process after deciding on the destination, time, and driver. The history menu displays the history of the user's transactions, shown in Figure 10. There are several features that can be accessed by the admin, such as add user, user list, order list, and logout, which can be seen in Figure 11. The main feature connected to shuttle booking is shown in Figure 12: it displays a list of incoming orders to be processed by the admin, along with an accept function, which is the approval process for incoming orders. Hence, the user can view the transaction in the history menu after the approval process from the admin.
Figure 14. WhatsApp Message
After the approval process from the admin, the user is able to view the history, as displayed in Figure 13.
The user can carry out further transactions with the driver by clicking the contact button; the display is then directed to WhatsApp with the driver's number, as shown in Figure 14, which displays the default message from the user to the driver to order the shuttle.
b. Black Box Test
The authors employed black box testing to test the application functionally. The purpose of this black box testing method is to find malfunctions in the program [14].
c. Usability Test
The usability test in this study uses the System Usability Scale (SUS) method with 30 respondents from Pesma K.H. Mas Mansur to evaluate the usability of Pesma Apps.
The System Usability Scale (SUS) is conducted using a questionnaire that can be used to measure computer system usability according to the user's subjective perspective [15]. The questions provided, along with the scale, are listed in Table 3. The result of the SUS calculation is shown in Table 4 and converted into statistics in Figure 15. Figure 15 describes the distribution of SUS scores: most respondents gave scores in the 60-70 range, 6 respondents gave scores in the 71-80 range, 4 respondents in the 81-90 range, and 1 respondent in the 91-100 range. The complete SUS scores are given in Table 4.
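The standard SUS scoring rule, which maps the ten Likert responses to a 0-100 score, can be sketched as follows; the sample responses are invented for illustration.

```python
# Standard SUS scoring: odd-numbered items contribute (score - 1), even-
# numbered items contribute (5 - score); the sum is scaled by 2.5 to a
# 0-100 range. The responses below are invented for illustration.

def sus_score(responses):          # responses: 10 Likert answers, 1..5
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))
    even = sum(5 - responses[i] for i in range(1, 10, 2))
    return (odd + even) * 2.5

respondents = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
    [5, 1, 4, 2, 4, 2, 5, 1, 4, 2],
]
scores = [sus_score(r) for r in respondents]
print(scores, sum(scores) / len(scores))   # individual scores and the mean
```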
Conclusion
Based on the experiment, it is concluded that the application created from this research is ready to use. The name of the application is "Pesma Apps". Pesma Apps is made to facilitate students in ordering the Pesma shuttle. This application provides information regarding the shuttle that can be ordered. It can also show the available routes that can be ordered by mahasantri, along with the estimated costs that must be paid.
The black box evaluation of Pesma Apps produces good results: all functions return valid results, meaning the whole application works well. The usability testing using SUS results in a good score of 72.6. Therefore, it can be stated that the application is acceptable and that this research was well-conducted; it can be further developed, for example regarding database management and the academic learning system.
"Computer Science"
] |
Analysis the Effect of Islamic Banks Performance on Depositor’s Fund: Evidence from Indonesia
This study aims to examine the effect of the CAMEL framework on the depositors' funds of Indonesian Islamic banks. The study uses a sample of 11 Islamic commercial banks and 24 Islamic business units. It uses depositors' funds as the endogenous variable, and components of CAMEL such as capital adequacy, assets quality, operational efficiency, profitability, and liquidity as exogenous variables. An econometric model was established and parameters estimated based on secondary data obtained from the Islamic banking statistics database of Bank Indonesia for five years (2010-2015). The results of the paper conclude that the capital adequacy ratio and liquidity are significantly and positively correlated with Islamic deposits, while non-performing financing is significant but negatively related to the Islamic depositors' fund. On the other hand, profitability and operational efficiency are not found to have a significant influence on the depositors' fund. Finally, the results support the proposition that good Islamic bank performance provides a positive image of, and confidence in, the Islamic banking system.
Introduction
The Islamic banking system in Indonesia has been emerging and growing significantly over the last few decades, achieving impressive growth in assets, Islamic financing and deposits respectively. Moreover, most financial indicators are very promising: total assets reached 274 trillion rupiah by the end of 2015, growing by 5 percent compared with 2014. Meanwhile, total financing increased by 17 percent compared with 2014, and Islamic deposits reached 220 trillion rupiah at the end of 2015 (Bank of Indonesia, 2015).
In fact, the financial and operational performance of Islamic banks is very important to the behavior of depositors' funds, because it influences their invested funds. Furthermore, depositors are increasingly careful in managing the money they place in Islamic banks, in order to ensure that their funds are being invested prudently. Therefore, financial performance is commonly classified into several aspects, such as capital requirements, earning assets quality, operational efficiency, profitability and liquidity (Bashir, 2001). These aspects enable the banks to evaluate their financial and operational activities based on the CAMEL framework (Hasbi & Haruman, 2011). Thus, we analyze the assets and liabilities side, besides operational activities, carefully in order to protect and manage Islamic depositors' funds and reinforce public trust in the Islamic banking system.
One important objective of this study is to utilize Islamic bank assets so as to maximize invested depositors' funds. Further, it seeks to improve the components of CAMEL as much as possible in order to attract more depositors' funds. It also notes that Islamic banks are potentially exposed to various kinds of risks, such as liquidity and operational risks, which might affect depositors' funds (Zaini & Rosly, 2008). Thus, Islamic bank performance plays a vital role in managing Islamic depositors' funds.
In Indonesia, the Islamic banking system carries out its funding and financing activities through two kinds of deposits, demand deposits and investment deposits. These funds are invested in the business community through Islamic financing contracts such as Murabahah, Musharakah, and Mudarabah (profit-sharing and sales-based financing), as noted by Ismal (2011) and Wijaya (2008). Good Islamic bank performance can therefore improve trust among the public who hold excess funds, which may increase public saving and the number of depositors in the bank. The performance of Islamic banks in Indonesia is evaluated and regulated according to Bank Indonesia Act No. 9/1/PBI/2007. The aim of this study is to understand the behavior of depositors' funds in relation to the performance of Indonesian Islamic banks. Depositors' funds are therefore analyzed using monthly data series from 2010 to 2015, a period that includes the recovery from the global financial crisis. The study uses five bank-specific variables, capital adequacy, asset quality, operational efficiency, profitability and liquidity, to estimate the size of their effect on depositors' funds. Identifying the effect of financial performance on depositors' funds helps banks understand the financial characteristics that improve their financial policy and increase their asset market share relative to conventional banks; it also strengthens depositors' confidence in investing their funds through Islamic financing.
This article is organized as follows: Section I is the introduction. Section II briefly summarizes the literature. Section III explains the data and research methods. Section IV presents the research variables and hypotheses. Section V discusses the findings, and Section VI concludes.
Literature Review
This section reviews the Islamic banking literature on Islamic deposits, as well as previous studies analyzing the relationship between Islamic bank performance and depositors' funds. The earliest references to reorganizing banking on the basis of profit sharing rather than interest are found in Quershi (1946), Ahmad (1952), and Siddiqi (1981), who replaced the interest rate with a profit-loss sharing (PLS) rate so that Islamic financial transactions avoid interest and comply with Islamic principles. Siddiqi (1981) added that under the PLS system the assets and liabilities of Islamic banks are integrated, in the sense that borrowers share profits and losses with the banks, which in turn share profits and losses with the depositors. In the Islamic banking system, surplus spending units deposit their funds with banks, which in turn lend to deficit spending units; since any financial system is defined by the principles guiding the flow of funds from surplus to deficit spending units, the nature of Islamic banking is relatively simple to explain (Rosly, 2005). Khan and Mirakhor (1987) noted that the nominal value of investment deposits is not guaranteed and fluctuates with the bank's performance: any shock to Mudarabah and Musharakah arrangements changes the value of deposits and capital held by the public. In this context, Sundararajan and Errico (2002) stated that the overall risk of a bank's asset financing may shift to its investment depositors, particularly the risks arising from equity-based financing (Mudarabah and Musharakah). Furthermore, Rosly (2005) argued that Islamic bank deposits are not an attractive option if Islamic investments carry higher transaction costs; that is, operating expenses exceeding operating income may reduce the volume of depositors' funds in Islamic banks. Meanwhile, Zaini and Rosly (2008) analyzed the risk and return of Islamic bank investment depositors' funds and found that bank performance has a significant effect on investment deposits; higher credit risk and nonperforming financing may depreciate the value of capital and depositors' funds in an Islamic bank.
Regarding the evaluation of Islamic bank performance, Manarvi (2011) noted that Islamic banks are assessed for overall soundness and financial health using the CAMEL components, just like conventional banks. Sahajwala and Bergh (2000) argued that the performance of Islamic banks is evaluated on different aspects of financial activity, such as risk-based capital adequacy, asset growth rate, profitability and liquidity, based on the conventional CAMEL model released by the U.S. Federal Reserve in 1980, which is commonly used to assess both conventional and Islamic financial institutions. Sarker (2006) stated that the CAMEL model seeks to assess and track changes in a bank's financial condition and risk profile in order to generate timely warnings that help the regulator initiate warranted action. He argued that CAMEL is a good indicator of a bank's financial condition and of depositors' interests, because it interacts with every item on the asset and liability sides.
Under bank performance theory, each financial institution is evaluated on the basis of the five CAMEL dimensions, which reflect all financial and managerial aspects of a bank: capital adequacy, asset quality, management quality, earnings, and liquidity. Hasbi and Haruman (2011) argued that these five dimensions have a relatively significant influence on depositors' funds; they found that capital adequacy and operational efficiency are significantly and positively related to deposits, while profitability, asset quality, and liquidity have no significant influence. Other studies from Indonesia, such as Muhtarom (2009) and Sumachdar and Hasbi (2010), examined the performance of Islamic banks before and during the economic downturn, as did work on banks in Gulf countries that used Islamic financing/asset and deposit/asset ratios to measure Islamic bank performance and found Islamic banks to be more stable than conventional banks before and during the crisis.
A limitation of prior studies is that most focused on comparing the financial characteristics of the Islamic banking industry with conventional banking. Few previous studies have filled the research gap in understanding the relationship between bank performance and depositors' funds, especially the effect of risks such as nonperforming financing and operational risk on those funds. Depositors' fund modeling has therefore not been a primary topic in Islamic finance. This research accordingly attempts to understand the directional relationship between certain bank-specific factors and the behavior of depositors' funds.
Data and Research Methods
This research attempts to determine the influence of selected aspects of bank performance on depositors' funds. The observations cover all Islamic commercial banks in Indonesia, and the analysis is based on monthly time series from January 2010 to December 2015. The final sample comprises 11 Islamic banks and 24 Islamic business units, giving a total of 60 monthly observations, which is sufficient to obtain meaningful results. Multiple regression analysis is used to examine the effect of selected bank performance indicators {capital adequacy ratio (CAR), nonperforming financing (NPF), operational efficiency (OEOI), profitability (ROA), and liquidity (FDR)} on depositors' funds (DF). For Indonesian Islamic banks there are few primary empirical studies of the relationship between bank performance and the behavior of depositors' funds; Hasbi and Haruman (2011) analyzed the effect of the CAMEL rating system on depositors' funds and found that some financial indicators have a positive effect on Islamic deposits.
As a result, this study contributes to understanding the vital role of banking performance in attracting more depositors' funds and reinforcing public trust in the Islamic banking system. The theoretical proposition of this research is that good Islamic bank performance creates a positive image among the public and depositors of Indonesian Islamic banks. The research also extends Islamic banking practice by addressing how to enhance the profitability and asset market share of Islamic banks relative to conventional banks.
Research Variables and Hypotheses
The main objective of this research is to explain the variability of depositors' funds (DF) in response to the CAMEL components of bank performance. Depositors' funds are therefore the variable of primary interest and are used as a measurable proxy for public trust in the Islamic banking system; the measure follows Hasbi and Haruman (2011). Five bank-specific (CAMEL) characteristics measure bank performance and are defined as follows. Capital adequacy ratio (CAR) describes the situation where adjusted capital is sufficient to absorb all losses and cover the bank's fixed assets, leaving a comfortable surplus for current operations and future expansion (Ebhodaghe, 1991). It compares bank capital (reserves, paid-in capital, retained earnings, and current earnings) with risk-weighted assets (Sarker, 2006) and is calculated as: CAR = Bank capital / Risk-weighted total assets. H1: Capital adequacy ratio has a statistically significant influence on the depositors' funds of Indonesian Islamic banks.
Nonperforming financing (NPF) is defined as the level of bad financing that has been reserved against. It measures the bank's asset quality and describes the bank's capacity to spread risk and recover defaulted loans (Sundarajan & Errico, 2002); the lower the ratio, the better the earning-asset quality. It is calculated as: NPF = Defaulted financing / Total financing. H2: Nonperforming financing has a statistically significant influence on the depositors' funds of Indonesian Islamic banks.
Operational efficiency (OEOI) is used to gauge management soundness through the ratio of operating expenses to operating income, as in Sahajwala and Bergh (2000) and Sarker (2006), who argued that OEOI can serve as an indicator of the bank's management quality. The higher the OEOI, the greater the bank's operational inefficiency. It is calculated as: OEOI = Operating expenses / Operating income. H3: Operational efficiency has a statistically significant influence on the depositors' funds of Indonesian Islamic banks.
Profitability (ROA): among profitability indicators such as ROA and ROE, most studies prefer return on assets (ROA) because it reflects how effectively and efficiently total assets are utilized to generate profit. Rosly (2005) defined return on assets as net income after tax divided by total assets: ROA = Net income / Total assets. H4: Profitability has a statistically significant influence on the depositors' funds of Indonesian Islamic banks.
Liquidity (FDR) indicates the capability of a bank to meet short-term obligations and occasional withdrawals, in other words the degree to which bank assets are convertible to cash without undue losses (Sundarajan & Errico, 2002). Hasbi and Haruman (2011) used the ratio of total financing to total deposits as an indicator of the liquidity of Islamic banks: FDR = Total financing / Total deposits. H5: Liquidity has a statistically significant influence on the depositors' funds of Indonesian Islamic banks.
Depositors' funds have a mean value of 210.742 billion for Islamic commercial banks, with a standard deviation of 25,958.9. The mean capital adequacy ratio is 14.7% (standard deviation 2.47%), meaning that Indonesian Islamic banks hold capital well above the 8% minimum required by regulation. Nonperforming financing has a mean of 4.24% (standard deviation 0.83%), i.e., defaulted financing is about 4.2% on average. Table 1 shows that operational efficiency (OEOI) has a mean of 87.6% (standard deviation 4.39%), indicating that operating expenses are lower than operating income since the ratio is below 1. Average profitability (ROA) is 2.85% (standard deviation 0.3%), which slowed over the study period. The average financing-to-deposit ratio is 97% (standard deviation 8.67%), below 1; Islamic financing thus does not fully absorb depositors' funds, which affects the liquidity position of Islamic banks. Table 3 shows that the skewness statistics of all variables are below the critical value of ±2.58, so the data are normally distributed as univariates; the multivariate test yields a value of 3.086, still below the 10.000 benchmark mentioned by Ghozali (2011), so the data can also be regarded as multivariate normal.
Testing Correlation among Variables
Depositors' funds are positively correlated with the capital adequacy ratio (0.749) and with liquidity measured by FDR (0.613), both significant at the 1% level, meaning that higher capital adequacy and a stronger liquidity position attract more depositors' funds; this is consistent with the research conducted by Hasbi and Haruman (2011). Nonperforming financing is significantly but negatively correlated with depositors' funds (−0.822) at the 1% level, implying that a high level of uncollectible funds from Islamic investments reduces the value of depositors' funds. Depositors' funds are also negatively correlated with operational efficiency (−0.293) and profitability (−0.309), significant at the 5% level.
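To make the variable construction above concrete, here is a minimal Python sketch of the five ratios; the field names and sample figures are hypothetical, chosen only to echo the reported means:

```python
# Compute the five CAMEL ratios exactly as defined in the text above.
def camel_ratios(bank):
    return {
        "CAR":  bank["capital"] / bank["risk_weighted_assets"],
        "NPF":  bank["default_financing"] / bank["total_financing"],
        "OEOI": bank["operating_expenses"] / bank["operating_income"],
        "ROA":  bank["net_income"] / bank["total_assets"],
        "FDR":  bank["total_financing"] / bank["total_deposits"],
    }

# Hypothetical sample figures (not taken from the paper's data set).
sample = {"capital": 14.7, "risk_weighted_assets": 100.0,
          "default_financing": 4.24, "total_financing": 100.0,
          "operating_expenses": 87.6, "operating_income": 100.0,
          "net_income": 2.85, "total_assets": 100.0,
          "total_deposits": 103.0}
print(camel_ratios(sample))  # e.g. CAR = 0.147, NPF = 0.0424, ...
```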
Multiple Regression Analysis
This research uses multiple regression analysis to determine the effect of Islamic bank performance on depositors' funds. Multiple regression is useful for estimating the relationship between the CAMEL components and depositors' funds. Accordingly, the relationship between the five CAMEL dimensions (CAR, NPF, OEOI, ROA, and FDR) and depositors' funds (DF) is formulated through the following hypothesized model:
DF = ƒ(CAR, NPF, OEOI, ROA, FDR)
Based on the above function, the study examines whether depositors' funds can be explained by Islamic bank performance indicators. The multiple regression equation is:
DF_it = β0 + β1 CAR_it + β2 NPF_it + β3 OEOI_it + β4 ROA_it + β5 FDR_it + z1
where the variables are defined in Section 4, the subscripts (i) and (t) denote the Islamic bank and the monthly observation respectively, β1-β5 are the coefficient effects of the variables, and z1 is the error term. Ordinary least squares is used to estimate the model in SPSS. Table 5 shows that the capital adequacy ratio has a significant positive relationship with depositors' funds, with a t-value of 2.215 and a p-value of 3.1%, below the 5% significance level. This indicates that higher capital adequacy makes Islamic banks more attractive to depositors, consistent with the findings of Hasbi and Haruman (2011).
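As a rough illustration of the estimation step (the paper used SPSS; this sketch uses Python's statsmodels instead, and `data` is a hypothetical DataFrame holding the 60 monthly observations):

```python
# Minimal OLS sketch of the model DF = b0 + b1*CAR + ... + b5*FDR + error.
import pandas as pd
import statsmodels.api as sm

def estimate_df_model(data: pd.DataFrame):
    # Add the intercept column, then fit ordinary least squares.
    X = sm.add_constant(data[["CAR", "NPF", "OEOI", "ROA", "FDR"]])
    model = sm.OLS(data["DF"], X).fit()
    return model  # model.params holds b0..b5, model.pvalues the p-values

# Usage: print(estimate_df_model(data).summary())
```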
Nonperforming financing is significantly and negatively correlated with depositors' funds, with a t-value of −2.70 and a p-value of 0.9%, below the 5% significance level. This means that higher uncollectible financing lowers the volume of depositors' funds and public trust in Islamic banks, consistent with the results of Zaini and Rosly (2008).
This research uses FDR as a proxy for liquidity; it shows a significant positive relationship with depositors' funds, with a t-value of 2.051 and a p-value of 0.046, below the 5% significance level. Depositors' funds thus rise with a higher degree of liquidity in Islamic banks, consistent with the findings of Hasbi and Haruman (2011) and Ismal (2011).
Finally, the regression results provide limited support for a direct relationship between depositors' funds and profitability measured by ROA: the return-on-assets ratio is not significant at the 5% level, contrary to Hasbi and Haruman (2011). Likewise, operational efficiency measured by OEOI has no significant influence on depositors' funds at the 5% level.
Conclusion
This paper set out to identify the bank-specific financial variables (CAMEL) that influence depositors' funds in Indonesian Islamic banks. Five CAMEL components were used as explanatory variables: capital adequacy, asset quality, operational efficiency, profitability and liquidity. Only three of the five (capital adequacy, asset quality, and liquidity) have a statistically significant effect on depositors' funds; the remaining two (operational efficiency and profitability) are not significant. We conclude that Islamic banks in Indonesia can influence their depositors' funds through capital adequacy requirements, asset quality as measured by nonperforming financing, and liquidity position. These results are consistent with earlier studies such as Zaini and Rosly (2008), Hasbi and Haruman (2011) and Ismal (2011).
Generally, this research found that some bank performance indicators, such as capitalization and liquidity, positively affect the behavior of depositors and public perceptions of Islamic banks: a high degree of capital adequacy and liquidity generates a high level of depositors' funds. Conversely, an increase in nonperforming financing, i.e., uncollectible funds arising from the default risk of some investment projects, reduces public confidence in Islamic banks. The study also confirms that nonperforming financing lowers the earning-asset utilization of the Indonesian Islamic banking industry. Profitability growth remains slow, owing to a limited Islamic financing strategy and a conservative policy toward taking large risks in sharing-based financing. Operational efficiency, by contrast, appears well managed, so the ratio of operating expenses to operating income has no effect on depositors' funds.
In light of these results, and given the significant slowdown in the profitability of Islamic commercial banks, the researcher suggests optimizing Islamic bank assets so that they contribute revenue to the bank. Proper selection and use of financing would bring profitability (ROA) toward its optimum and improve depositors' confidence in the Islamic banking system. The research further suggests continuing to improve Islamic bank performance and intensifying profit-loss sharing financing, emphasizing it over debt-based financing to optimize short- and long-term profit, while paying attention to credit and operational risk management, because depositor behavior is sensitive to credit and operational risks. Finally, the researcher suggests alternatives for investing funds under PLS financing, such as Islamic banks' joint financing of government projects and Islamic investment portfolios in infrastructure sectors such as education, health and social services.
"Economics",
"Business"
] |
A Novel Design of Grooved Fibers for Fiber-Optic Localized Plasmon Resonance Biosensors
Bio-molecular recognition is detected by the unique optical properties of self-assembled gold nanoparticles on the unclad portions of an optical fiber whose surfaces have been modified with a receptor. To enhance the performance of the sensing platform, the sensing element is integrated with a microfluidic chip to reduce sample and reagent volume, to shorten response time and analysis time, as well as to increase sensitivity. The main purpose of the present study is to design grooves on the optical fiber for the FO-LPR microfluidic chip and investigate the effect of the groove geometry on the biochemical binding kinetics through simulations. The optical fiber is designed and termed as U-type or D-type based on the shape of the grooves. The numerical results indicate that the design of the D-type fiber exhibits efficient performance on biochemical binding. The grooves designed on the optical fiber also induce chaotic advection to enhance the mixing in the microchannel. The mixing patterns indicate that D-type grooves enhance the mixing more effectively than U-type grooves. D-type fiber with six grooves is the optimum design according to the numerical results. The experimental results show that the D-type fiber could sustain larger elongation than the U-type fiber. Furthermore, this study successfully demonstrates the feasibility of fabricating the grooved optical fibers by the femtosecond laser, and making a transmission-based FO-LPR probe for chemical sensing. The sensor resolution of the sensor implementing the D-type fiber modified by gold nanoparticles was 4.1 × 10−7 RIU, which is much more sensitive than that of U-type optical fiber (1.8 × 10−3 RIU).
In the present design, the cladding and jacket layers of the fiber were removed only partially, instead of being removed entirely. The materials wrapped around the core of the optical fiber were ablated using an ultrafast femtosecond laser; the core surface was then modified with Au nanoparticles and exposed to the analyte sample. Transparent, hard and brittle materials can be machined effectively by the femtosecond laser, which induces non-linear, multiphoton absorption in the material during irradiation [8,9]. The main purpose of the present study was to design grooves on the optical fiber for the FO-LPR microfluidic chip and to investigate the effect of the groove geometry on the biochemical binding kinetics through simulations.
Designs of the Grooved Optical Fiber
The schematic illustration of the FO-LPR microfluidic chip is depicted in Figure 1. Figure 1a shows the detection principle of FO-LPR and a sketch of the sensing element; the structure of the chip and the fluidic operation are illustrated in Figure 1b. The solution containing the analyte was injected into the reaction microchannel and reacted with the receptor coated on the optical fiber. The reaction microchannel was 500 μm high and 500 μm wide, with a length of 20 mm, and an optical fiber was placed at the center of the microchannel. A gold nanoparticle monolayer was coated on the unclad portion of the optical fiber via an organosilane linker, and the gold nanoparticle surface was further functionalized with a receptor [1]. The Reynolds number for the microchannel of the FO-LPR device was less than ten; the flow in the microchannel was therefore laminar, and molecular diffusion across the channel was slow. To enhance the biochemical binding on the unclad optical fiber, the geometry of the grooved channel was the same as in our previous work [5]. The grooves generate transverse flows in the microchannel [10] and increase the probability of analytes getting close to the immobilized receptors. Fabricating grooved microchannels (with a cross-section of 500 μm × 500 μm) can be quite complicated. An optical fiber with a silica core 62.5 μm in diameter was employed in the present study. In the original design reported in the literature [1,2,5], the cladding and jacket layers of a 400 μm core optical fiber were removed entirely. However, the mechanical strength of an optical fiber with a 62.5 μm core, as used here, is not sufficient to sustain the stresses of assembly and packaging. To avoid fracturing the fiber, the femtosecond laser was used to remove the cladding and polymeric jacket only partially. The present study presents a novel design of grooved fibers integrated into the FO-LPR device. The grooved optical fibers, illustrated in Figure 2, are termed U-type or D-type based on the shape of the grooves. The length of one groove is Lg, and Ls denotes the spacing between grooves, fixed at 1 mm. The number of grooves (Ng) can be varied. To compare the performance of U-type and D-type fibers, the total effective area was fixed: the total groove length was set at 6 mm, i.e., Ng × Lg = 6 mm. The design of the grooves is expected to improve the mechanical strength of the optical fiber, and the grooves are also expected to induce chaotic advection that enhances mixing in the microchannel. The effect of the number of grooves (Ng) on biochemical binding was then investigated in this simulation study; enhancement of both mechanical strength and biochemical binding performance through chaotic mixing is expected in the proposed design.
Experimental Section
A femtosecond laser micromachining system [9] was used to engrave grooves on the optical fiber. The femtosecond laser was a regeneratively amplified, mode-locked Ti:sapphire laser with a pulse duration of ~120 fs after the compressor, a central wavelength of 800 nm, a repetition rate of 1 kHz, and a maximum pulse energy of ~3.5 mJ. The number of laser shots applied to the sample was controlled by an electromechanical shutter. The laser beam was focused onto the fiber by a 10x objective lens (numerical aperture 0.26, M Plan Apo NIR, Mitutoyo) mounted on a Z stage. The fiber under fabrication was translated by a PC-controlled X-Y micro-positioning stage with a positioning error of less than 1 μm, and the fabrication process was monitored by a charge-coupled device (CCD). The most prominent features of the femtosecond laser, compared with conventional continuous or long-pulsed lasers, are its ultrashort pulse duration and very high instantaneous power. These features allow the femtosecond laser to induce non-linear multi-photon absorption in materials, so it can engrave transparent, hard and brittle materials very precisely without inducing micro-cracks or a heat-affected zone.
The grooved fibers proposed herein are expected to exhibit better mechanical strength, so tensile tests of the fibers were performed after manufacturing. Tensile testing is a standard procedure for determining the mechanical properties of materials. A standard tension test machine was set up, as shown in Figure 3, with the grooved optical fiber placed in the grips of the testing machine. The grips are driven by a stepping motor (minimum displacement 1 μm) through a screw, so the applied load is axial. The testing machine elongates the grooved optical fiber at a slow, constant rate until it ruptures; during the test, continuous readings are taken of the applied load and the elongation. Load-elongation curves for the optical fibers were thus obtained to demonstrate their mechanical strength.
For the sensor, a 62.5/125 multimode all-silica fiber with a 250 μm buffer (Corning) was used, and the femtosecond laser micromachining system was used to engrave U- or D-shaped trenches on the optical fiber. Because gold nanoparticles carry a negative surface charge, positively charged poly(allylamine hydrochloride) (PAH) can serve as a linker between the negatively charged silica surface and the Au nanoparticles [11-13]. The exposed silica surface in the grooves was therefore modified with PAH by immersing the cleaned grooved optical fibers in vials of 3 mM PAH solution. After 15 min, the optical fibers were removed from the solution and rinsed with pure water to remove unbound monomers from the surface. After thorough rinsing, the grooved optical fibers were immersed in the gold nanoparticle solution (prepared following Natan's method [14]) for 30 min to deposit gold nanoparticles on the groove surfaces, and were then rinsed with pure water to remove unbound gold nanoparticles. The schematic diagram of the Au nanoparticle modification is illustrated in Figure 4.
Numerical Simulations
The mathematical model for mass-transport-influenced binding kinetics has been investigated in the literature [5,15]. Three-dimensional simulations of biochemical binding kinetics were performed using CFD-ACE™ software running on a personal computer, and structured grids were employed to solve the governing equations. The governing equations for the FO-LPR microfluidic chip were the continuity equation, the momentum conservation (Navier-Stokes) equation and the analyte convection-diffusion equation, coupled to a surface reaction describing reversible analyte-receptor binding. In the dimensionless formulation, A, R and AR represent the concentrations of analyte, receptor binding sites and analyte-receptor complex (bound analyte), respectively, while ka and kd are the association and dissociation rate constants. Laminar flow was assumed and temperature effects were neglected. The Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm, introduced by Patankar [16] to solve these equations, has been widely applied to computational fluid dynamics (CFD) problems; the SIMPLE-Consistent (SIMPLEC) algorithm, one of the most popular variants of the SIMPLE family, was proposed to improve convergence [17]. The SIMPLEC method was adopted here for pressure-velocity coupling, and all spatial discretizations used the first-order upwind scheme. The simulation was run in the transient state. A fixed-velocity condition was imposed at the inlet of the FO-LPR microfluidic channel and a fixed pressure at the outlet, while the surface of the optical fiber was set as the reactive boundary for biochemical binding. A grid-independence test was performed; the total number of elements was approximately 50,000 for the cases used in this work, with different numbers of grooves (Ng) giving different computational domains. Three-dimensional fields of pressure, velocity and species concentrations, including the binding reaction, were obtained by solving the above equations, and the concentration of bound analyte over the reaction area was averaged at each instant to produce the time histories of the binding kinetics.
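The equations themselves were lost in extraction; as a reconstruction, the standard dimensionless forms consistent with the convection-diffusion-reaction model described in [5,15] are sketched below (an assumption, not a verbatim copy of the paper's equations):

```latex
% Reconstructed standard dimensionless forms (assumed, not verbatim):
\[ \nabla \cdot \mathbf{u} = 0 \]
\[ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
   = -\nabla p + \frac{1}{Re}\,\nabla^{2}\mathbf{u} \]
\[ \frac{\partial A}{\partial t} + (\mathbf{u}\cdot\nabla) A
   = \frac{1}{Re\,Sc}\,\nabla^{2} A \]
\[ \frac{\partial [AR]}{\partial t}
   = k_{a}\, A_{s}\,\bigl(R_{0} - [AR]\bigr) - k_{d}\,[AR] \]
```

Here A_s is the analyte concentration at the reactive surface and R_0 the initial receptor site density; the last equation is the reversible Langmuir-type binding rate the text describes.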
Results and Discussion
The binding-kinetics parameters reported in the literature [18] were adopted in the simulations: the association and dissociation rate constants ka and kd were 57,085 M−1s−1 and 0.0455 s−1, respectively, and the surface concentration of the receptor was 9.742 × 10−8 mol/m2. The physical properties and binding-kinetics parameters are listed in Table 1. For the U-type fiber, the simulated time histories of the average concentration of bound analyte on the fiber for different numbers of grooves are plotted in Figure 5a. In the simulated sensorgram overlays, the concentration of bound analyte increased with the number of grooves (Ng), but the increment diminished as Ng grew; the improvement becomes marginal when Ng exceeds six. For comparison, the transient concentration of bound analyte was also simulated for a fiber whose cladding and jacket layers were removed entirely; since the total effective area had to be the same, the unclad length of this fiber with circular removal was about 3.5 mm. The concentration of bound analyte on the fiber with circular removal was the highest of all the U-type cases. When the analyte sample was injected into the microchannel, the analyte reacted with the binding sites on the unclad portion of the optical fiber; for the fiber with circular removal of cladding, a large area of open sites was exposed to the analyte simultaneously, so the concentration of bound analyte increased much faster than for the U-type fiber with any number of grooves. For the D-type fiber, the simulated transient histories for different numbers of grooves are plotted in Figure 5b. As for the U-type fiber, the concentration of bound analyte increased with the number of grooves; moreover, the concentration for the D-type fiber with six grooves is almost identical to that with eight grooves, so the D-type fiber with six grooves is the optimum design. The concentrations of bound analyte for the D-type fiber are significantly higher than those for the U-type fiber and are comparable to those on the fiber with circular removal of cladding.
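As a sanity check on these rate constants, a minimal sketch of the implied binding equilibrium (the bulk analyte concentration C is an assumed illustrative value, since the injected concentration is not restated here):

```python
# Equilibrium implied by the quoted rate constants (Langmuir model).
ka = 57085.0    # association rate constant, 1/(M*s)  (from the paper)
kd = 0.0455     # dissociation rate constant, 1/s     (from the paper)
R0 = 9.742e-8   # receptor surface concentration, mol/m^2 (from the paper)

KD = kd / ka                  # equilibrium dissociation constant, M
C = 1e-6                      # assumed bulk analyte concentration, M
AR_eq = R0 * C / (C + KD)     # equilibrium surface coverage, mol/m^2

print(f"KD = {KD:.2e} M")                          # ~7.97e-07 M
print(f"equilibrium coverage = {AR_eq / R0:.1%}")  # ~55.6% at C = 1 uM
```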
The U-shaped groove is very narrow, so it is not easy for the analyte to get close to the effective area. The contours of the flow field at the center cross-section of the groove in the FO-LPR microfluidic chip with U-type and D-type fibers are depicted in Figure 6. Since the velocity close to the bottom of the U-shaped groove is low, most of the binding reaction there takes place by molecular diffusion rather than advection. The mixing patterns in the FO-LPR chip for U-type and D-type fibers with different numbers of grooves are shown in Figure 7. The grooves on the optical fiber induce chaotic advection that enhances mixing in the microchannel, and the mixing patterns show that D-type grooves enhance mixing more effectively than U-type grooves. The patterns also show that, for the U-type fiber, the improvement gained by adding grooves diminishes, while for the D-type fiber the degrees of chaos in the microfluidic channel for Ng = 4, 6 or 8 are similar, consistent with the results in Figure 5b. The optimum binding performance of the D-type fiber is close to that of the optical fiber with circular removal of cladding, as shown in Figure 5b: the peak concentration of bound analyte on the D-type fiber with six grooves is 0.7% higher than that on the fiber with circular removal. The performance of the D-type fiber with six grooves is thus as good as that of the fiber with circular removal, and it remains the optimum design because the fiber becomes more fragile as the number of grooves increases.
When the injected volume was fixed at 100 μL, the simulated time histories of the average concentration of bound analyte on the U-type and D-type optical fibers (Ng = 6) for different injected flow rates are shown in Figure 8. The concentration of bound analyte rises faster at the higher injected flow rates; however, the free analyte then stays in the channel for a shorter time, so the bound analyte cannot reach its equilibrium concentration at the higher flow rates (e.g., 100 μL/min). At a flow rate of 50 μL/min, the concentration of bound analyte on the D-type fiber reaches 97% of the equilibrium concentration, whereas that on the U-type fiber reaches only 89%. The response time may be defined quantitatively as the time required to reach 95% of the equilibrium concentration at a flow rate of 20 μL/min; the response times of the D-type and U-type fibers are then approximately 122 and 206 seconds, respectively, indicating that the D-type fiber responds faster than the U-type fiber.
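For intuition about the 95% criterion, a minimal sketch of the response time for an ideal, well-mixed first-order Langmuir binding model; the analyte concentration C is an assumed value, and this transport-free estimate is only a lower bound, since the CFD values above (122 s and 206 s) include the diffusion and convection limits on analyte delivery to the fiber:

```python
# Well-mixed Langmuir kinetics: coverage follows 1 - exp(-k_obs * t),
# with k_obs = ka*C + kd, so the time to 95% of equilibrium is ln(20)/k_obs.
import math

ka, kd = 57085.0, 0.0455   # rate constants quoted in the text
C = 1e-6                   # assumed analyte concentration, M

k_obs = ka * C + kd
t95 = math.log(20.0) / k_obs
print(f"t95 ~ {t95:.0f} s")  # ~29 s for the assumed C (a lower bound)
```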
The load-elongation curves for optical fibers with different groove types are depicted in Figure 9; the symbols represent the experimental data, and linear least-squares fits of these data are also plotted. The curve for the U-type fiber with one groove is close to that for the unfabricated fiber, but the U-type fiber ruptured when its elongation exceeded 160 μm, only half that of a raw fiber. The load-elongation curves for the D-type fibers with one and six grooves are similar, with elongations of about 250 μm; they ruptured when their elongations exceeded 260 μm and 250 μm, respectively. The elongation of the D-type fibers is therefore clearly larger than that of the U-type fiber. Moreover, the slope of the curve for the D-type fiber is smaller than those of the U-type and raw fibers, indicating that the D-type fibers elongate more than the U-type fiber under the same load. The experimental results show that the D-type fiber can sustain larger elongation than the U-type fiber, and the effect of increasing the number of grooves in the D-type fiber is almost negligible in the load-elongation curves. Figure 10 shows SEM images of the optical fibers fabricated by the femtosecond laser and of their exposed surfaces after modification with gold nanoparticles, for the U-type and D-type fibers. The U-shaped groove, illustrated in Figure 10a (left), was 100 μm deep measured from the surface of the polymer jacket layer, 80 μm wide in the jacket layer and 60 μm wide in the cladding layer, with a total length of 6 mm (Ng = 1). This indicates that the core of the fiber was exposed while the remaining jacket layer provided enough mechanical strength for further processing, in line with the results above. The SEM image of the exposed surface of the U-type fiber after gold-nanoparticle modification, shown in Figure 10a (right), reveals Au nanoparticles distributed uniformly on the surface; note, however, that the SEM image could only be taken at a silica surface close to the jacket, because imaging the bottom silica surface by SEM is difficult. The SEM images of the D-type optical fiber and its exposed surface after gold-nanoparticle modification are presented in Figure 10b. The D-shaped groove was 100 μm deep measured from the surface of the polymer jacket layer, with a total length of 6 mm (Ng = 1), and gold nanoparticles were successfully immobilized on the exposed surface of the fiber. The fiber-optic sensing system used to measure the transmission power of the sensor was reported in our previous work [9]; it consisted of a function generator, an LED light source (λ = 530 nm), a sensing grooved fiber modified with Au nanoparticles, a microfluidic chip, a photodiode, a lock-in amplifier and a computer for data acquisition. The ability of the Au-modified grooved optical fibers with one groove to detect changes in the surrounding refractive index was investigated; the surrounding refractive index was controlled by preparing sucrose solutions of various concentrations [19], with refractive indices in the range of 1.333 to 1.403.
For the U-type optical fiber (Ng = 1), a plot of the transmission power as a function of the refractive index was linear (R = 0.9998), and the sensor resolution by transmission-power interrogation (resolution = 3σ/m, where σ is the standard deviation of I when measuring the blank and m is the slope) was 1.8 × 10−3 RIU. For the D-type optical fiber (Ng = 1), the plot of transmission power against refractive index was also linear (R = 0.9983), but the sensor resolution was 4.1 × 10−7 RIU, significantly better than that obtained with the U-type fiber. The reason for such a large improvement is not fully clear; it is likely caused by a significantly lower surface coverage of gold nanoparticles on the U-type fiber core, resulting from poor mass transfer of gold nanoparticles and/or linker molecules to the fiber core surface.
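A minimal sketch of the 3σ/m resolution formula quoted above; the blank noise and calibration slope below are hypothetical values chosen only to reproduce the reported order of magnitude:

```python
# Sensor resolution by the 3-sigma criterion: resolution = 3*sigma/m.
def sensor_resolution(sigma_blank: float, slope: float) -> float:
    """Smallest refractive-index change resolvable above blank noise."""
    return 3.0 * sigma_blank / abs(slope)

# e.g. blank noise of 1e-4 (a.u.) and a calibration slope of 730 a.u./RIU
print(f"{sensor_resolution(1e-4, 730.0):.1e} RIU")  # -> 4.1e-07 RIU
```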
Conclusions
In this study, numerical simulation of the biochemical binding kinetics of the FO-LPR microfluidic chip with grooved optical fibers was successfully performed. The sensing element of the FO-LPR platform was integrated with the microfluidic chip to reduce sample and reagent volume, to shorten both response and analysis time, and to increase sensitivity. The optical fibers were designed and termed U-type or D-type based on the shape of the grooves. The U-shaped groove was so narrow that it was not easy for the analyte to reach the effective area. For the optical fiber with circular removal of cladding, a large area of open sites was exposed to the analyte simultaneously, so the concentration of bound analyte increased much faster than for the U-type fiber; however, the mechanical strength of an optical fiber with a 62.5 μm core, as used here, was not sufficient to sustain the stresses of assembly and packaging. The concentration of bound analyte increases with the number of grooves (Ng) in the simulated sensorgram overlays, and the numerical results indicate that the D-type fiber exhibits efficient biochemical binding performance. The grooves on the optical fiber also induce chaotic advection that enhances mixing in the microchannel, and the mixing patterns indicate that D-type grooves enhance mixing more effectively than U-type grooves. The optimum binding performance of the D-type fiber (Ng = 6) is very close to that of the fiber with circular removal of cladding. The experimental results indicate that the D-type fiber outperforms the U-type fiber in terms of mechanical properties. Furthermore, the present study successfully demonstrates the feasibility of fabricating grooved optical fibers with the femtosecond laser and of making a transmission-based FO-LPR probe for chemical sensing. The sensor resolution of the gold-nanoparticle-modified D-type optical fiber by transmission-power interrogation is 4.1 × 10−7 RIU, much more sensitive than that of the U-type optical fiber (1.8 × 10−3 RIU).
"Physics"
] |
COLLABORATIVE CLUSTERING PROTOCOL IN WIRELESS SENSOR NETWORKS
Cluster formation is one of the strategies most widely used for energy-constrained sensor nodes. The critical problems in clustering are limiting the number of clusters formed and electing appropriate cluster heads. In this paper, we propose a novel collaborative clustering protocol in which two types of nodes are nominated in each data-gathering period: the central cluster head and the highest-energy node in the cluster. The central cluster head is nominated for its proximity to the cluster centroid, while the highest-energy node in the cluster sends the data towards the base station. The protocol is energy efficient in the sense that the most central node collects data from its members, reducing intra-cluster communication cost, while the highest-energy node handles the long-distance transmission towards the base station. Nomination of the central cluster head is based on a rotation scheme to distribute the data-collection burden, and each nominated node elects the next candidate central cluster head, enhancing the deterministic nature of the head-election mechanism. The proposed protocol was simulated using OMNeT++, and the results show an 89% improvement in network lifetime compared with the LEACH protocol.
I. INTRODUCTION
A wireless sensor network (WSN) is usually composed of a large collection of small autonomous sensor devices that can sense physical and environmental conditions. It is an emerging technology used in a variety of applications such as industry, home networking, land and underwater disaster management, habitat monitoring, weather forecasting, medicine and the military. Each sensor network has at least one base station to which the sensor nodes send their data. A sink or base station is a non-energy-constrained device with sufficient storage and processing capacity that acts as an interface between users and the network; users retrieve the required information by injecting queries and gathering results from the sink. In most of these applications, wireless sensors are deployed statically [3]. Developing a routing protocol for such a highly dynamic and energy-constrained network is a challenging task.
WSN development was originally driven by military applications. Advancements in wireless communications, low-power electronics manufacturing [1] and embedded microprocessors, together with attractive attributes of sensor nodes such as tiny size, low power, low cost and multifunction capability, have made WSNs available for a wide range of potential applications.
Wireless sensor networks are characterized by inadequate computational power, poor storage capacity, limited and irreplaceable battery power, short-range radio communication, massive and random node deployment, unreliable environments, node mobility, and low cost compared with traditional sensors.
Since sensor nodes are battery powered and have limited energy capacity, the energy constraint is the biggest challenge for network designers in harsh and hostile environments, such as a battlefield, where it is difficult to access the sensor nodes and recharge or replace their batteries [12]. The range of applications makes WSNs differ greatly from conventional networks such as data and telecommunication networks, mobile ad-hoc networks (MANETs) and cellular systems. These networks have to operate in a self-organizing, ad-hoc fashion, since none of the nodes is likely to have the resources to act as a base station or central manager. A node may also lack its own timeslot while it is in one of the sleep modes, since being asleep implies not transmitting in the traffic-control section of every frame.
As the network communicates, nodes gradually exhaust their energy and eventually stop functioning properly, which degrades network performance. Routing protocols designed for sensor networks should therefore be energy efficient so as to prolong network lifetime. Because cluster heads are also responsible for forwarding incoming packets from other clusters, there is a significant difference in energy dissipation among cluster-head nodes [7], creating unbalanced energy consumption among them. Cluster-head selection thus plays a significant role in determining the lifespan of the sensor network.
Several issues must be addressed when selecting appropriate cluster heads. The process of grouping the sensor nodes in a densely deployed, large-scale sensor network is known as clustering; it involves grouping nodes into clusters and electing a cluster head for each [16]. The main issues in clustering a wireless sensor network are how many clusters should be formed, how many nodes should be grouped into a single cluster, and the procedure used to select the cluster head during cluster formation.
In any hierarchical routing protocol, the selection of CHs is a very important step. On the basis of energy, clustering can be done in two types of networks, homogeneous and heterogeneous [15,24]: in homogeneous networks the nodes have the same initial energy, while in heterogeneous networks the nodes have different initial energies. Static clustering combined with a heterogeneous distribution of nodes has been used to prolong the stability period and network lifetime. In a heterogeneous network, some of the nodes have more power than others.
In hierarchical clustering, the sensor nodes are organized into a hierarchy based on their power levels and proximity [16,24]. In these routing schemes, nodes with higher energy are selected for processing and sending data, while low-energy nodes are used for sensing and for sending information to the cluster heads. Clustering enables the sensor network to work more efficiently: it increases the energy efficiency of the network and hence its lifetime. Traditional routing protocols for WSNs may not be optimal in terms of energy consumption.
Clustering techniques can be efficient in terms of energy and scalability. The objective of clustering is to minimize the total transmission power aggregated over the nodes. Every cluster selects a cluster head (CH) responsible for coordinating data transmission among the nodes in the cluster. Hierarchical protocols conserve energy by grouping the nodes into clusters. Examples include LEACH [1], a protocol for cluster-head selection that does not take into account the distribution of sensor nodes or their battery power, and Multi-Hop LEACH [2], a modification of LEACH that performs cluster formation and elects both a CH and a vice CH; the vice CH takes over when the CH dies. Thus, even if the CH dies from the burden of gathering data from all the nodes, the cluster does not become useless, as the vice CH assumes the role of CH and transmits to the BS.
In this paper, we propose a novel Collaborative Clustering Protocol (CCP) that exploits the collaboration of a central cluster head and the highest-energy node for energy efficiency. The basic philosophy behind the proposed protocol is that the node closest, on average, to most of its nearby nodes should collect the local cluster data, while the node with the highest energy in each cluster should assist the central cluster head by transmitting the data to the distant base station. The protocol rests on two pillars: minimizing the overall energy cost of the cluster data-collection phase by letting the node most central to the others act as cluster head, and sharing the burden of this central cluster head through participation in data reporting towards the base station.
The protocol is energy aware in the sense that it lets only the highest-energy node participate in burden sharing. It also ensures little variation in the number of cluster heads per round, as the currently acting central cluster heads nominate the next round's central cluster heads.
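A minimal sketch of the two election rules just described, assuming node positions and residual energies are known at election time (function and field names are illustrative, not part of the protocol specification):

```python
# CCP election sketch: centroid-nearest node collects, max-energy node relays.
import math

def elect(cluster):
    """cluster: list of dicts with keys 'id', 'x', 'y', 'energy'."""
    cx = sum(n["x"] for n in cluster) / len(cluster)
    cy = sum(n["y"] for n in cluster) / len(cluster)
    # Central cluster head: node closest to the cluster centroid,
    # which keeps intra-cluster collection links short and cheap.
    central = min(cluster, key=lambda n: math.hypot(n["x"] - cx, n["y"] - cy))
    # Relay: highest-residual-energy node, which carries the
    # long-haul transmission to the base station.
    relay = max(cluster, key=lambda n: n["energy"])
    return central, relay
```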
Routing schemes in wireless sensor networks
The WSN topology is highly dynamic because of frequent node mobility [9]. As the network grows, data generated by one or more sources usually have to be routed through several intermediate nodes to reach the destination, owing to the limited range of each node's wireless transmissions. Problems arise when intermediate nodes fail to forward incoming messages; to prevent this, acknowledgments and retransmissions are usually implemented to recover lost data, but these generate a large amount of additional traffic and delay in the network [23]. Without such schemes, system reliability can also be increased through multipath routing, which establishes more than one path between source and destination and provides an easy mechanism to increase the likelihood of reliable data delivery by sending multiple copies of the data along different paths without acknowledgments.
Strategies of Cluster Head Selection
As cluster-head selection plays the most important role in energy optimization in wireless sensor networks, appropriate cluster-head selection must be considered in the design of routing algorithms. We briefly survey and summarize the various cluster-head selection methods of hierarchical routing protocols that maintain efficient energy utilization and data aggregation towards the base station [5,15,19,26]. Since cluster formation is one of the strategies most widely used for energy-constrained sensor nodes, an appropriate number of clusters as well as suitable cluster heads must be selected.
The locations of the selected cluster heads play a significant role in network lifetime and affect the total energy consumption of the entire wireless sensor network. Cluster heads can be dispersed in the sensor field randomly, or deployed in a deterministic fashion. The following are some well-known cluster-head selection methods.
Deterministic Schemes
Sensor nodes select themselves as cluster heads according to deterministic criteria. In each round, hello packets are broadcast to all neighboring sensor nodes in order to decide the cluster heads; the first nodes to receive the pre-defined number of these messages declare themselves cluster heads, and the member (ordinary) nodes send "join me" requests to the respective cluster heads. These techniques are important and effective for energy optimization.
Base Station Assisted Schemes
The base station is a short-range transceiver that connects wireless nodes, computer terminals, or other wireless devices to a central hub and allows connection to a network. In these schemes, the base station itself selects the appropriate cluster heads.
Fixed-Parameter Probabilistic Schemes
Cluster heads are chosen according to two criteria: by evaluating fixed probabilistic expressions, and by using parameters such as the required number of cluster heads.
Resource Adaptive Probabilistic Schemes
The scheme calculates the threshold taking into consideration the residual energy, the energy consumed during the present round, and the average energy of the node as additional parameters. This makes the cluster-head selection strategy energy adaptive.
Cluster Head Selection in Hybrid Clustering (Combined Metric) Schemes
This scheme adjusts the nodes and the threshold function, and the non-cluster-head nodes select the optimal cluster head by considering both the nodes' residual energy and their distance from the base station.
II. LITERATURE REVIEW
A number of clustering protocols have been explored to achieve effective energy usage in wireless sensor networks [22]. The main goal of a sensor network is to forward the sensed data gathered by sensor nodes to the base station. One common technique is direct data transmission, in which each node sends its sensed data directly to the base station; however, if the base station is far from a sensor node, the node will soon die from the excessive energy consumed in delivering data. To solve this problem, clustering algorithms aimed at saving energy have been proposed, such as LEACH (Low-Energy Adaptive Clustering Hierarchy) [1,4,14,21], which is based on randomized rotation of the cluster-head role to distribute the energy load evenly among the sensor nodes across the entire network.
LEACH serves as the benchmark for most hierarchical clustering algorithms developed to reduce the power consumption of WSNs [1]. It is self-adaptive and self-organized. The LEACH protocol operates in rounds, each consisting of a cluster set-up stage and a steady-state stage; to reduce unnecessary energy costs, the steady-state stage must be much longer than the set-up stage. During set-up, each node elects itself as a cluster head based on a probabilistic scheme and broadcasts its availability to all sensor nodes in the area. During clustering, the cluster head plays an important role in providing data communication to the nodes and delivering the data efficiently to the base station. In this protocol, each node has an equal probability of becoming a cluster head.
In LEACH, the clustering task is rotated among the nodes over time. Direct communication is used by each cluster head (CH) to forward the data to the base station (BS). Each node generates a random number between 0 and 1, which is compared with a threshold value. The node becomes a cluster head for the current round if the number is less than the threshold value T(n).
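For reference, the threshold used by LEACH is commonly written as follows (a standard formulation from the LEACH literature, where p is the desired fraction of cluster heads, r is the current round, and G is the set of nodes that have not served as cluster head in the last 1/p rounds):

T(n) = p / (1 − p·(r mod 1/p)) if n ∈ G, and T(n) = 0 otherwise.

Nodes in G draw a uniform random number in [0, 1] and become cluster heads for round r if that number falls below T(n).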
HEED (Hybrid Energy-Efficient Distributed clustering) is primarily designed to achieve power balancing by extending the LEACH protocol [2]. In the HEED algorithm, residual energy plays an important role in determining the possibility of a node becoming a CH. HEED improves on the basic scheme of LEACH by using residual energy as a parameter for cluster head selection to accomplish power balancing. The protocol serves multi-hop networks, using an adaptive transmission power for inter-cluster communication. In HEED, the selection of a CH is based on two parameters: residual energy and the proximity of neighboring nodes.
This algorithm proposes a new energy-efficient homogeneous clustering approach for wireless sensor networks that uses selection criteria such as a holdback value, the residual energy of existing cluster heads, and the nearest hop distance, in which the lifespan of the network is increased by assuring a homogeneous distribution of nodes in the clusters [20]. The algorithm thereby improves energy efficiency and network performance through its node-level selection of cluster heads. It also introduces a new clustering parameter for cluster head election, which can better handle heterogeneous energy capacities. Power-efficient routing in wireless sensor networks is a major challenge for researchers. This protocol deals with cluster formation based on different techniques and criteria, such as cluster head selection, aggregation of the sensed data within a cluster, and sending it to the base station in an energy-efficient way.
Maximizing the smallest distance between cluster heads in a cluster-based sensor network prolongs network lifetime by dispersing the cluster heads, thus lowering the average communication energy consumption [4].
This protocol is used for temperature-sensing applications in a hierarchical approach combined with data-centric schemes. The cluster head broadcasts two thresholds to the nodes [14]. TEEN is not recommended for applications where periodic reports are needed, since the user may not get any data at all if the thresholds are not reached [6]. It is observed that energy saving increases with the number of levels in the node hierarchy. Load balancing using clustering is one of the most practical solutions to the energy limitations of wireless sensor networks.
This study proposes a new energy-efficient (EE) clustering-based protocol for single-hop, heterogeneous WSNs [24]. The newly proposed protocol uses channel state information (CSI) in the cluster head selection process and shows a better stability period than well-known existing routing protocols.
Separate clusters are created using the cluster creation algorithm [9]. Based upon quality-of-service metrics available at each sink node, data packets are routed to the base station. Due to the hierarchical architecture, the performance is unaffected by an increase in the number of mobile nodes, and at the same time packet loss is reduced. Another paper proposes a new clustering approach, MZ-SEP, based on a multiple-triangle-zone distribution and the SEP protocol. The partition of the sensor deployment field into multiple zones enhances the communication between cluster heads and their members [17]. Hierarchical clustering is an energy-efficient communication protocol which can be used by the sensors to report their sensed data to the sink [11]. The routing protocol of a wireless sensor network must minimize energy consumption so as to increase the network lifetime. Many researchers have already proposed a number of distinct algorithms based on different techniques, of which hierarchical clustering algorithms are the most important.
This article proposes Unequal Clustering Size (UCS), designed for more uniform energy dissipation among the cluster head nodes [13]. In one-hop communication, every sensor node can directly reach the destination, while in multi-hop communication, nodes have a limited transmission range and are therefore forced to route their data over several hops until it reaches the final destination. Manjeshwar proposed the Threshold-sensitive Energy Efficient sensor Network protocol (TEEN). Ewa Hansen proposes that the geographical distribution of the cluster heads severely influences the overall energy consumption of the network [5]: spreading the cluster heads more evenly prolongs the lifetime of the network, and a distributed randomized clustering algorithm for generating a hierarchy of CHs is proposed. In another paper, the authors propose a novel clustering algorithm, Front-Leading Energy Efficient Cluster Heads (FLEECH), in which the whole network is partitioned into regions of diminishing size [15], with multiple clusters formed in each region. In a further proposed routing algorithm, cluster head selection is based on residual energy and the distance of each node to the sink as the most significant parameters [16]. That work is based on the concept of finding the cluster head that minimizes the sum of Euclidean distances between the head and the member nodes.
This protocol operates based on local information; there is no demand for centralized control, and it produces relatively small communication overhead [10]. Another paper introduces the energy-efficient and density-control clustering algorithm (EEDCA) [18]. In this approach, the selection of the cluster head depends on residual energy, density, and distance.
The authors propose an efficient algorithm that mainly considers the energy density of the clusters to balance energy consumption and prolong the network lifetime [19,25]. A unique two-step cluster head method is devised to select the proper cluster head, and the shortest path from source to sink is selected using an on-demand routing approach.
B. Elbhiri's approach is based on dividing the network into dynamic clusters [30]. The cluster's nodes communicate with an elected node called the cluster head, and this cluster head then aggregates and communicates the information to the base station. The proposed algorithm guarantees that the entire network stays alive for a longer time than other existing energy-efficient techniques that base cluster head selection on residual energy. Another paper proposes an energy-aware routing protocol (EAP) for a long-lived sensor network; the protocol improves the lifetime of the entire network by minimizing the energy consumed by in-network communications and balancing the energy load among sensor nodes [3].
This study further explains the routing methods of the Enhanced Developed Distributed Energy-Efficient Clustering scheme (EDDEEC) for heterogeneous wireless sensor networks [21]. The algorithm dynamically changes the Cluster Head (CH) election probability.
Arati Manjeshwar proposes a hybrid routing protocol (APTEEN) which allows comprehensive information retrieval [29]. The nodes in such a network not only react to time-critical situations but also give an overall picture of the network at periodic intervals in a very energy-efficient manner.
This work is a comprehensive and fine-grained survey of clustering routing protocols proposed in the literature for wireless sensor networks [22]. It outlines the advantages and objectives of clustering for WSNs and develops a novel taxonomy of WSN clustering routing methods based on complete and detailed clustering attributes. Another paper presents a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. Hierarchical routing algorithms are among the best mechanisms for providing high scalability in large networked systems [23]. That paper primarily demonstrates how certain geometric properties of wireless networks can be exploited to perform clustering with desired properties.
Power-Efficient Gathering in Sensor Information Systems (PEGASIS) is a near-optimal chain-based protocol [8]. In order to extend network lifetime, nodes need only communicate with their closest neighbors, and they take turns communicating with the base station; when the round in which all nodes have communicated with the base station ends, a new round starts.
III. PROPOSED STRATEGIES AND ALGORITHMS
The major aim of this work is to maximize the network lifetime of sensor networks by lowering the total energy dissipation needed to deliver data to the base station. Hierarchical routing is a cluster-based routing scheme in which high-energy nodes are selected for processing and sending data, while low-energy nodes are used for sensing and for sending information to the cluster head. The main objective of clustering is to minimize the total transmission power aggregated over the nodes. Every cluster selects a cluster head (CH) responsible for coordinating data transmission among the nodes in the cluster and delivering it to the base station.
Non-cluster-head nodes do not transfer data directly to the base station; rather, they send the sensed data to the cluster head. CHs aggregate the data received from their member nodes and forward it to the base station. Therefore, the number of nodes communicating with the base station is decreased and the total energy dissipation is significantly reduced.
Basic assumptions taken in the proposed routing protocol are the following: sensor nodes are homogeneous, having the same initial energy, processor power, and memory capacity; the transmission power level of sensor nodes can reach the base station, which is located outside the network area; nodes have built-in GPS for location information; and the MAC layer is ideal with an error-free communication channel. Initially, each sensor node sends its location information to the base station, and the base station then nominates nodes to act as central cluster heads (CCH nodes). Election of CCH nodes is based upon the computed density of neighbor nodes within the transmission region; nodes having a high density within transmission range are central to most nodes and hence are elected as CCH nodes. The base station elects CCH nodes only once; any further election of CCH nodes in subsequent rounds is decided locally without the participation of the base station.
Then CCH nodes send a HEAD_ADVERT message to all neighbor nodes within the competition radius. Upon reception of HEAD_ADVERT messages from CCH nodes, regular nodes decide on their own CCH node based on a distance cost metric and send a HEAD_JOIN message to the respective CCH node, embedding their residual energy and location information within the network area. The CCH nodes receiving HEAD_JOIN messages compute the cluster centroid based on the location information of the nodes and identify the highest energy node (HEN) within the cluster. The cluster member closest to the cluster centroid is nominated as the candidate CCH node for the next round. CCH nodes base the election of HEN nodes upon the predicted energy remaining after each cluster member has sent its own packets to the CCH node over the entire set of TDMA frames. So, in each round of the data gathering period, CCH nodes collect data from cluster members, aggregate the correlated data and send it to the HEN nodes; the HEN node then sends the received data directly to the base station. The collaboration between CCH and HEN nodes is quite logical in the sense that the most central node should collect local data from cluster members to minimize intracluster communication cost, while the node having the highest energy should collaborate with the CCH node to handle the long-distance transmission to the base station and thereby share the burden imposed on the cluster head (CCH) nodes. Computing the cluster centroid based on the positions of member nodes and nominating the node with minimum distance from this centroid as the CCH node for the next round increases the possibility that CCH nodes are located in a region very close to every nearby node, thus reducing the overall energy cost of intracluster communication.
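As an illustration of the centroid-based nomination described above, the following Python sketch (illustrative only; the member-record format and function name are ours, not from the paper) computes the cluster centroid, the candidate CCH node for the next round, and the HEN node:

import math

def nominate_next_heads(members):
    """members: list of dicts with 'id', 'x', 'y', 'energy'.
    Returns (next_cch_id, hen_id) for the next round."""
    # Cluster centroid from member positions.
    cx = sum(m['x'] for m in members) / len(members)
    cy = sum(m['y'] for m in members) / len(members)
    # Member closest to the centroid becomes the next CCH node.
    next_cch = min(members, key=lambda m: math.hypot(m['x'] - cx, m['y'] - cy))
    # Member with the highest (predicted) residual energy becomes the HEN node.
    hen = max(members, key=lambda m: m['energy'])
    return next_cch['id'], hen['id']

members = [{'id': 1, 'x': 2.0, 'y': 3.0, 'energy': 0.41},
           {'id': 2, 'x': 4.0, 'y': 1.5, 'energy': 0.48},
           {'id': 3, 'x': 3.1, 'y': 2.4, 'energy': 0.45}]
print(nominate_next_heads(members))  # (3, 2)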
The protocol ensures that the elected cluster head node reduces the local data collection cost, while the HEN node within the cluster is expected to share the burden of the CCH node by taking on the long-distance transmission towards the base station.
The proposed protocol also eliminates the random nature of the cluster head election mechanism, as current CCH nodes nominate the CCH nodes for the next round based on distance from the cluster centroid. Hence the number of cluster heads elected in each round is deterministic, compared to the unpredictable number of cluster heads in the LEACH protocol. Election of CCH nodes is rotated to distribute the energy consumed in cluster data collection, and because of this, nodes with only a few neighbor nodes may sometimes be elected as CCH nodes. In such a scenario, the current CCH node is not involved in the next CCH election process, since a cluster centroid computed from a small number of nodes, and a next-round CCH nominated accordingly, would reduce the fairness of burden sharing among CCH nodes. Regular nodes that do not receive any HEAD_ADVERT message nominate themselves as CCH nodes and inform the nodes within their transmission range. A node nominating itself as a CCH node but not receiving any HEAD_JOIN message from nearby nodes gives up its bid to be a CCH and joins one of the CCH nodes upon reception of an advertisement message from the newly nominated CCH nodes.
Algorithm (cluster formation)
Nodes send location information to BS
The BS nominates and announces NOPT CCH nodes
for each node receiving BS announcement
    if (nominated as CCH node)
        broadcast HEAD_ADVERT msg
    else
        wait for CCH node adverts
    endif
end for
for each node receiving HEAD_ADVERT msg
    identify its CCH node
    send HEAD_JOIN to CCH node (residual energy & location)
end for
for each node receiving HEAD_JOIN msg
    if (currently CCH node)
        compute cluster centroid
        nominate CCH node for next round
        identify HEN node
    endif
end for
3. Energy Dissipation Model
Energy dissipation is the most important factor in determining the life of a sensor network because sensor nodes are usually battery driven. Accurate estimation of sensor network lifetime requires a precise energy consumption model.
Energy dissipated by sensor nodes for data transmission is directly related to the transmission distance and the size of the data packet to be transmitted. Energy optimization in sensor networks is complicated because it involves not only reducing energy consumption but also prolonging the life of the network as much as possible. Heinzelman [1] proposed a model that considers only microcontroller processing and radio transmission and reception. This model does not consider other important sources of energy consumption, such as transient energy, sensor sensing, sensor logging and actuation.
The energy dissipated by a node in transmitting an N-bit packet to another node located at a distance S is given by:

E_Tx(N, S) = N·E_elec + N·E_fs·S^2, if S < d_0
E_Tx(N, S) = N·E_elec + N·E_amp·S^4, if S >= d_0 (1)

For the reception of an N-bit packet, the radio energy expenditure is:

E_Rx(N) = N·E_elec (2)

where N, S and d_0 represent the number of bits in the packet, the transmission distance and the threshold distance, respectively [27,28]. The electronics energy E_elec depends on factors such as filtering, digital coding, modulation and spreading of the signal, whereas the amplifier energy, E_fs·S^2 or E_amp·S^4, depends on the distance to the receiver and the acceptable bit-error rate. The value of the threshold distance d_0 can be calculated as:

d_0 = sqrt(E_fs / E_amp) (3)

If the distance is less than the threshold d_0, the free-space model is used; otherwise, the multipath model is used. In our scenario, we use both channel models, depending on the distance between the transmitter and receiver [1,26]. To verify the energy efficiency of the proposed protocol, the important modeling features are described in the next subsections.
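A minimal Python sketch of this first-order radio model is shown below. The numeric parameter values are typical of the simulation literature and are assumptions for illustration, not values taken from this paper:

import math

E_ELEC = 50e-9      # electronics energy per bit (J/bit), assumed typical value
E_FS = 10e-12       # free-space amplifier energy (J/bit/m^2), assumed
E_AMP = 0.0013e-12  # multipath amplifier energy (J/bit/m^4), assumed
D0 = math.sqrt(E_FS / E_AMP)  # threshold distance, Eq. (3)

def tx_energy(n_bits, distance):
    """Energy to transmit n_bits over 'distance' metres, Eq. (1)."""
    if distance < D0:
        return n_bits * E_ELEC + n_bits * E_FS * distance ** 2
    return n_bits * E_ELEC + n_bits * E_AMP * distance ** 4

def rx_energy(n_bits):
    """Energy to receive n_bits, Eq. (2)."""
    return n_bits * E_ELEC

print(D0)                      # ~87.7 m
print(tx_energy(4000, 50.0))   # free-space regime
print(tx_energy(4000, 120.0))  # multipath regime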
4. Network Model
The following assumptions are made about the network model for the proposed protocol.
a) The nodes are randomly and uniformly distributed in the region.
b) Non-cluster-head nodes do not transfer data directly to the base station.
c) The base station (BS) is placed at a fixed location far away from the network field where the nodes are deployed. It is a resource-rich device that has adequate storage and processing power.
d) Sensor nodes are left unattended once deployed randomly in the area of interest.
e) Nodes are battery operated and homogeneous, with uniform initial energy allocation. The battery can be neither replaced nor recharged.
f) All sensors have location-determination devices such as GPS.
g) Nodes that are close to one another may have similar or correlated data. Sensor nodes should be time-synchronized to within the order of a second.
h) The sink node possesses the highest energy and is static.
IV. SIMULATION RESULTS AND ANALYSIS
The performance of the proposed protocol is evaluated using the OMNeT++ network simulator, and the simulation results stored in vector and scalar files are analyzed using MATLAB. We have assumed that the underlying MAC layer is ideal and that the communication channel is free of error. We define network lifetime based on the round at which the first sensor node dies (FND), as this metric measures the stability period of sensor networks. The proposed protocol is compared with the low-energy adaptive clustering hierarchy (LEACH) protocol based on the round at which the first node dies (FND), 20% of nodes die (PND), half of the nodes die (HND) and the last node dies (LND). The number of sensor nodes alive in each round and the variance of energy consumption among sensor nodes are also examined in the simulations. The simulation parameters are summarized in Table 1. As shown in Fig. 1, the nodes are randomly and uniformly distributed, and the base station is located outside the region under investigation. Table 2 summarizes the FND, PND, HND and LND performance metrics. The proposed protocol improves network lifetime, showing that the stability period of the CPP protocol is elongated compared to the LEACH protocol; 20% of nodes die at rounds 1007 and 1303 in the LEACH and CPP protocols, respectively. The duration between FND and LND (the instability period) in the CPP protocol is short compared to that of the LEACH protocol, implying that the energy consumption of sensor nodes is relatively homogeneous in our protocol, which avoids the early death of sensor nodes. The number of sensor nodes still working until all nodes die in the target region affects the degree of data gathering and the functionality of the wireless sensor network. Fig. 2 depicts the number of alive nodes in each round. The simulation shows that the CPP protocol has superior performance compared to the LEACH protocol until 62% of the sensor nodes remain alive. Figures 4 and 5 show the number of elected cluster heads in each round for the LEACH and CPP protocols, respectively. There is high variation in the number of cluster heads per round for the LEACH protocol because of the random nature of cluster head election, whereas the variation in the CPP protocol is very small, as the currently acting cluster heads nominate the next candidate cluster heads in each data gathering period. The simulation shows that the nomination of cluster heads in the CPP protocol is relatively deterministic compared to the LEACH protocol, which helps to maintain an optimum number of cluster heads over all data gathering periods.
In the CPP protocol, the CCH nodes are central to most of the cluster members, and hence the average energy consumption of nodes in intracluster communication is expected to be relatively homogeneous compared to the LEACH protocol, which blindly selects cluster heads probabilistically without considering the centrality of the nominated cluster head. In addition, the highest energy node in the cluster shares the burden imposed on CCH nodes by the bulk energy loss that accompanies long-distance transmission to the base station; theoretically, this gives the nodes a good distribution of energy loss over each data gathering period. To examine the homogeneity of energy consumption among sensor nodes, we compute the standard deviation of the residual energy of nodes in each round until all sensor nodes have died. The simulation result in Fig. 6 supports the theoretical explanation: our protocol has relatively uniform energy consumption among nodes compared to the LEACH protocol. Many approaches have been proposed to extend the lifetime of wireless sensor networks by dividing the network into clusters, gathering data from nodes and aggregating it at the base station. Some of the clustering algorithms consider the residual energy of the nodes in the selection of the cluster heads, and others rotate the selection of cluster heads periodically.
The network lifetime of energy-sensitive wireless sensor networks relies on an efficient clustering algorithm. In this paper, we propose an energy-efficient clustering protocol in which the central cluster head and the highest energy node in the cluster collaboratively send the data towards the base station. The proposed protocol also avoids the random cluster head election mechanism, and simulation results show that it has better performance compared to the LEACH protocol.
"Computer Science"
] |
Hemoglobin contrast in magnetomotive optical Doppler tomography
We introduce a novel contrast mechanism for imaging blood flow by use of magnetomotive optical Doppler tomography (MM-ODT), which combines an externally applied temporally oscillating high-strength magnetic field with ODT to detect erythrocytes moving according to the field gradient. Hemoglobin contrast was demonstrated in a capillary tube filled with moving blood by imaging the Doppler frequency shift, which was observed independently of blood flow rate and direction. Results suggest that MM-ODT may be a promising technique with which to image blood flow. © 2006 Optical Society of America. OCIS codes: 170.4500, 170.3880, 160.3820, 290.5850.
Optical coherence tomography (OCT) uses the short temporal coherence properties of broadband light to extract structural information from heterogeneous samples such as biological tissue. The ability to locate the microvasculature precisely is important for diagnostics and treatments that require characterization of blood flow. Recently, several efforts to introduce novel blood flow contrast mechanisms have been reported, including protein microspheres incorporating nanoparticles into their shells [1], plasmon-resonant gold nanoshells [2], and magnetically susceptible micrometer-sized particles with an externally applied magnetic field [3].
A high iron content, due to the presence of four iron atoms in each hemoglobin molecule, and the high concentration of hemoglobin in human red blood cells (RBCs) make this molecule a useful test case for investigating magnetomotive effects on endogenous magnetically susceptible particles in biological tissue [4]. An RBC suspended in plasma and placed in a magnetic field gradient experiences forces and torques that tend to position and align it with respect to the field's direction. The magnetic force, in the direction of the probing light z, is given by

F_z = (V·Δχ / 2μ0)·∂B^2/∂z, (1)

where V is the particle volume, μ0 is the permeability of free space, B is the magnitude of the magnetic flux density along the z axis, and Δχ is the difference between the susceptibilities of the RBC and of the surrounding plasma. The force produces a dynamic RBC displacement z(t) that can be included in the analytic OCT fringe expression I_f:

I_f(t) = 2·sqrt(I_R·I_S)·cos[2π·f0·t + (4π·n/λ0)·z(t)], (2)

where I_R and I_S are the backscattered intensities from the reference and the sample arms, respectively, f0 is the fringe carrier frequency, λ0 is the light source's center wavelength, n is the medium's refractive index, and z(t) is the dynamic RBC displacement.
We present a novel extension of magnetomotive OCT, analogous to functional MRI, to image hemoglobin in blood erythrocytes. The contrast of optical Doppler tomography (ODT) images can be enhanced by activation of iron-containing hemoglobin molecules in blood with an externally applied high-strength magnetic field gradient. Importantly, our approach requires no exogenous contrast agent to detect blood flow and location.
Herein, we describe magnetomotive optical Doppler tomography (MM-ODT) imaging of the Doppler shift of hemoglobin by applying an oscillating magnetic field to moving blood. The ODT light source consisted of a superluminescent diode (Model BWC-SLD-1C, B&W TEK, Inc., Newark, Delaware) centered at 1.3 µm. A rapid-scanning optical delay line was used in the reference arm and was aligned such that no phase modulation was generated when the group phase delay was scanned at 4 kHz. The phase modulation was generated by an electro-optic waveguide phase modulator that produced a single carrier frequency (1 MHz). A hardware in-phase and quadrature demodulator with high-bandpass filters was constructed to improve imaging speed. Doppler information was calculated with the Kasai autocorrelation velocity estimator [5]. A 750 µm inner-diameter glass capillary tube was placed perpendicularly to the probing beam. Fluids used for flow studies were injected through the tube at a constant flow rate controlled by a dual-syringe pump (Harvard Apparatus 11 Plus, Holliston, Massachusetts). A solenoid coil (Ledex 4EF) with a cone-shaped ferrite core at the center (Fig. 1), driven by a current amplifier supplying as much as 960 W of power, was placed underneath the sample during MM-ODT imaging. The combination of the core and solenoid under high-power operation dramatically increased the magnetic field strength (B_max = 0.7 T) at the tip of the core and also focused the magnetic force on the targeted samples. The magnetic force applied to the capillary tube was varied by the sinusoidal current to induce RBC movement.
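The Kasai autocorrelation estimator mentioned above computes the mean Doppler frequency from the phase of the lag-one autocorrelation of the complex demodulated signal. The following Python sketch illustrates this general technique, not the authors' implementation; the demodulated signal and sampling rate are assumed inputs:

import numpy as np

def kasai_doppler_shift(iq, fs):
    """Mean Doppler frequency (Hz) from a complex demodulated signal.

    iq: 1-D complex array of in-phase/quadrature samples.
    fs: sampling frequency in Hz.
    """
    # Lag-one autocorrelation; its phase is the mean Doppler phase shift
    # per sampling interval (Kasai autocorrelation method).
    r1 = np.sum(np.conj(iq[:-1]) * iq[1:])
    return np.angle(r1) * fs / (2.0 * np.pi)

# Example: a synthetic 2 kHz Doppler shift sampled at 100 kHz.
fs = 100e3
t = np.arange(2048) / fs
iq = np.exp(2j * np.pi * 2e3 * t)
print(kasai_doppler_shift(iq, fs))  # ~2000 Hz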
To demonstrate the MM-ODT approach, we recorded M-mode OCT-ODT images of a capillary glass tube filled with a stationary low-susceptibility turbid solution, with and without an external magnetic field, as a control sample. The low-susceptibility turbid solution was a mixture of deionized water and 0.5 µm latex microspheres (µ_s = 5 mm⁻¹). The magnetic flux density and its frequency were approximately 0.7 T and 50 Hz, respectively. The field gradient (∂B/∂z) in the optical measurement zone was measured (220 T/m) with a magnetometer over a 1 mm line segment extending from the tip along the axis of the ferrite core. M-mode OCT-ODT images were acquired at a rate of 100 ms per frame (Fig. 2). Deoxygenated blood was extracted from a vein of a male volunteer's left arm and diluted with saline. During preparation, the blood was not exposed directly to air, so the sample remained deoxygenated. To simulate flow, we injected blood through the capillary tube using a syringe pump at a constant flow rate. As Fig. 3 shows, the oscillating Doppler frequency shift resulting from RBC movement could be observed at two flow rates (5 and 30 mm/s). Because the flow direction was nearly perpendicular to the probe beam, no significant Doppler frequency shift was distinguishable at the 5 mm/s flow rate [Fig. 3(a)] without an external magnetic field. In the case of the 30 mm/s high blood flow rate, as shown in Fig. 3(c), the Doppler frequency shift caused by the flow could be observed. For maximum contrast enhancement, the probe beam should be directed parallel to the gradient of the magnetic field strength. Application of a 50 Hz magnetic field increased the Doppler contrast of blood at both slow and fast flow rates, as shown in Figs. 3(b) and 3(d). Note that in the 30 mm/s high-flow-rate image, higher contrast is observed than in the low-flow-rate image, but the Doppler frequency shift of the former as a function of depth is less homogeneous than that of the latter, which is indicative of perturbation by blood flow. The same blood was diluted to 5% hematocrit, but no RBC movement could be observed below 8% hematocrit. We calculated Doppler frequency shift profiles from the ODT images by averaging 20 lines, indicated by horizontal arrows in Figs. 3(a)-3(d) (Fig. 4). RBCs under the influence of a strong magnetic-field gradient tend to travel either toward or away from the field source, depending on whether their magnetic properties are paramagnetic or diamagnetic [4]. RBCs moving toward the probe are colored red (Fig. 3), while blue indicates RBC movement in the opposite direction. When only a magnetic force is present (recoil and pressure-gradient forces are absent or neglected), direct integration of Eq.
(1) gives z(t) = ε(t) + z0·cos(4π·fm·t), where ε(t) = a0·t^2; a0 and z0 are constants that depend on Δχ, V, and B, and fm is the modulation frequency of the magnetic flux density. Because ε(t) is proportional to time squared, the displacement is strongly dominant in one direction in free space, and the RBC oscillation may not be visible. However, as the local concentration of RBCs increases in the capillary tube in response to an external magnetic field, osmotic and elastic recoil forces increase and hinder further RBC movement into the field. Equivalently, the forces driving the RBCs reach an equilibrium state at which the magnetic force is balanced by the sum of the recoil forces. In a confined system such as a blood vessel, ε(t) becomes negligible at sufficiently long times (within a few seconds) because recoil and drag forces impede the free-space acceleration of the RBCs associated with ε(t). Once these forces balance, the free-space acceleration of the RBCs approaches zero and the sinusoidal variation of the magnetic force dominates the RBC displacement. Sinusoidal variation of the RBC displacement [z(t) = z0·cos(4π·fm·t)] produces harmonics in the interference fringe intensity I_f according to Eq. (2). Because of the coherent detection scheme employed in our experiments, the harmonics remain centered at 0 Hz, even after demodulation. According to Eq. (1), the magnetic-force frequency is two times the magnetic field frequency fm. In our experiments, fm was set to give a value of I_f consisting primarily of the 4fm harmonic [Fig. 4(b)], whereas the fundamental frequency (the magnetic-force frequency) of the RBC displacement is 2fm. Because the magnetic susceptibility of hemoglobin is dependent on oxygen saturation, RBC displacement in vivo may differ from that reported herein and will require further study.
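To illustrate how a sinusoidal displacement produces harmonics in the demodulated fringe signal, the following Python sketch (purely illustrative; all parameter values are assumptions, not the experimental values) evaluates the phase term of Eq. (2) after demodulation and inspects its spectrum:

import numpy as np

# Assumed illustrative parameters (not the experimental values).
lam0 = 1.3e-6      # center wavelength (m)
n = 1.35           # refractive index
z0 = 0.5e-6        # displacement amplitude (m)
fm = 50.0          # magnetic field frequency (Hz)
fs = 10e3          # sampling rate (Hz)

t = np.arange(int(fs)) / fs          # 1 s of data
# Displacement oscillates at twice the field frequency (force ~ B^2).
z = z0 * np.cos(2 * np.pi * (2 * fm) * t)
# Demodulated analytic fringe signal: only the phase term of Eq. (2) remains.
s = np.exp(1j * (4 * np.pi * n / lam0) * z)

spec = np.abs(np.fft.fft(s * np.hanning(len(s))))
freqs = np.fft.fftfreq(len(s), 1 / fs)
# The strongest spectral lines fall at multiples of 2*fm (here 100 Hz),
# consistent with the harmonic structure described in the text.
top = np.argsort(spec)[-8:]
print(sorted(set(np.abs(np.round(freqs[top], 0)))))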
In conclusion, we have demonstrated what is believed to be the first implementation of MM-ODT for improved Doppler imaging of blood flow by use of an external oscillating magnetic field. Mechanical movement of RBCs in blood flow was induced by a temporally oscillating high-strength magnetic field. The controlled and increased Doppler frequency in MM-ODT may provide a new investigational tool with which to study blood transport.
Figures 2(a) and 2(b) show M-mode OCT and ODT images without any external magnetic field. The ODT image [Fig. 2(b)] contains small random phase fluctuations that are due to ambient vibration in the optical path. Figures 2(c) and 2(d) show M-mode OCT-ODT images with a 50 Hz externally applied magnetic field. No distinguishable Doppler shift could be observed in the ODT image [Fig. 2(d)], indicating no interaction between the external magnetic field and the latex microspheres.
Fig. 1. (Color online) Schematic diagram of the probe beam, the flow sample, and the solenoid coil.
Fig. 3. (Color online) (a), (b) M-mode ODT images of 5 mm/s blood flow without and with a 50 Hz magnetic field, respectively. (c), (d) M-mode ODT images of 30 mm/s blood flow without and with a 50 Hz magnetic field, respectively. Black vertical bar, 200 µm; black horizontal bar, 20 ms.
Fig. 4. (Color online) Doppler frequency shift profiles (a) without an external magnetic field and (b) with a 50 Hz magnetic field.
"Physics"
] |
Computational Processing and Quality Control of Hi-C, Capture Hi-C and Capture-C Data
Hi-C, capture Hi-C (CHC) and Capture-C have contributed greatly to our present understanding of the three-dimensional organization of genomes in the context of transcriptional regulation by characterizing the roles of topological associated domains, enhancer-promoter loops and other three-dimensional genomic interactions. The analysis is based on counts of chimeric read pairs that map to interacting regions of the genome. However, the processing and quality control present a number of unique challenges. We review here the experimental and computational foundations and explain how the characteristics of restriction digests, sonication fragments and read pairs can be exploited to distinguish technical artefacts from valid read pairs originating from true chromatin interactions.
Introduction
Three-dimensional folding of chromatin can bring functional elements such as promoters and enhancers into contact, even though they are widely separated in the linear sequence of the genome. Hi-C is a global method for interrogating chromatin interactions that combines formaldehyde-mediated cross-linking of chromatin with fragmentation, DNA ligation and high-throughput sequencing to characterize interacting loci on a genome-wide scale [1]. Although Hi-C has proved to be an extremely powerful method for investigating the large-scale architectural features of the genome such as topologically associating domains (TADs) [2], in most cases, the resolution of Hi-C libraries is not sufficient to investigate interactions between specific gene promoters and their distal regulatory elements [3].
The unique features of the chimeric read pairs, as well as the high frequency of artefactual pairs, complicate even the primary steps of the computational analysis. Here, we present a review of computational approaches to the processing and quality control of Hi-C, Capture Hi-C (CHC) and Capture-C data. In the first section of this work, we present an overview of experimental protocols with an emphasis on experimental parameters that are important for the computational analysis. Based on this, we discuss the main computational pre-processing and quality control steps that have to be performed before downstream analysis and give a brief overview of available tools and literature. Finally, we present the analysis of three representative Hi-C, CHC and Capture-C datasets, pointing out similarities and differences between the protocols.
Experimental Protocols: Hi-C, CHC and Capture-C
Hi-C combines formaldehyde-mediated cross-linking of chromatin with fragmentation, DNA ligation and paired-end short-read sequencing in order to identify pairwise contacts between genomic regions. Capture Hi-C (CHC) and Capture-C methodologies employ a hybridization technology similar to exome capture that enriches Hi-C libraries for viewpoint sequences representing loci of interest using biotinylated complementary RNA (cRNA) probes. The enrichment step adds a layer of complexity to the computational processing and quality control of CHC data. We refer to the original publications for details on the experimental protocols [3][4][5][6][7].
Cross-Linking and Digestion
The experimental specimens, such as cells or tissues, are first cross-linked with formaldehyde to generate covalent bonds between interacting or nearby chromatin regions. In the first step, a restriction enzyme is used to digest DNA that is cross-linked to the same protein or protein complex as a result of chromosomal interactions. This effectively segments the genome into a disjoint set of restriction digests defined by the enzyme (or enzyme combination). In general, it cannot be assumed that the digestion is complete, and therefore digests may contain uncut restriction sites. The average size of the digests defines the lower limit of the resolution of the method (generally around 4000 bp for six-cutters such as HindIII and 900 bp for four-cutters such as DpnII). At this stage, the sample contains a mixture of cross-linked and non-cross-linked DNA digests that have sticky ends on both termini (Figure 1). For instance, the enzyme HindIII has the recognition sequence 5'-A^AGCTT-3', so following restriction the sticky ends carry a 5'-AGCT overhang.
Figure 1. The restriction digestion of cross-linked chromatin results in fragments, also referred to as digests, whose ends correspond to restriction cutting sites of the chosen enzyme (step-like symbols). At this stage, the sample consists of a mixture of cross-linked protein-DNA complexes (A) and non-cross-linked DNA (B). The digestion cannot be assumed to be complete, for instance due to inaccessibility of the DNA. Therefore, uncut restriction sites may also occur within digests.
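As a simple illustration of the segmentation into digests, the following Python sketch performs an in silico digestion (a minimal sketch, not part of any published pipeline; the sequence is a made-up example). The cut is placed at the enzyme-specific offset within each recognition site (1 bp into AAGCTT for HindIII):

def digest(sequence, site="AAGCTT", cut_offset=1):
    """Return (start, end) intervals of restriction digests.

    The cut is placed cut_offset bases into each occurrence of the
    recognition site (HindIII cuts A^AGCTT, i.e., offset 1).
    """
    cuts = [0]
    pos = sequence.find(site)
    while pos != -1:
        cuts.append(pos + cut_offset)
        pos = sequence.find(site, pos + 1)
    cuts.append(len(sequence))
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

seq = "GGGAAGCTTCCCCCAAGCTTTTT"  # toy sequence with two HindIII sites
print(digest(seq))  # [(0, 4), (4, 15), (15, 23)]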
Ligation
For the Hi-C protocol, the sticky ends are filled in (and simultaneously labeled) with biotin-14-dATP together with unbiotinylated dCTP, dGTP and dTTP in a Klenow end-filling reaction, and the resulting blunt ends are re-ligated with T4 DNA ligase. The intermediate sites that link pairs of digests are referred to as ligation junctions, and the biotin labels function as baits that enable DNA fragments that arise from re-ligated digests to be enriched and un-ligated digests to be discarded. The ligation junction sequence is determined by the chosen enzyme; for instance, for HindIII it is AAGCTAGCTT. In contrast to Hi-C, the sticky ends are not filled in and labeled with biotin in the Capture-C protocol. This results in a slightly different sequence at ligation junctions (no repetition of the overhang). More importantly, no enrichment for fragments arising from ligation can be performed. Three types of ligation are possible. In the desired form of Hi-C ligation, interacting restriction digests attach to one another, forming either linear or circular molecules, depending on whether only one or both ends of the digests were ligated; we refer to this category as valid ligation (Figure 2A). The termini of digests from different protein-DNA complexes may also ligate, which we refer to as random cross-ligation. These unintentional ligations can lead to false positive predicted interactions, because the random cross-ligation products cannot be distinguished from valid Hi-C products (Figure 2B). Furthermore, ligation of the two ends of an individual digest may occur, which results in circular molecules and is referred to as self-ligation (Figure 2C). Finally, digests may remain un-ligated (Figure 2D). If we find a read pair that maps to two or more adjacent restriction fragments, we cannot directly observe whether the read pair was the result of incomplete digestion or of ligation of the adjacent restriction fragments. In either case, the resulting read pairs do not represent genuine three-dimensional interactions. We use a size threshold to classify such fragments as "un-ligated" if their length is below the threshold.
Figure 2. Ligation between digests within the same cross-linked protein-DNA complex results in intended chimeric Hi-C products that consist of digest pairs linked by ligation junctions. Such pairs may form circular or linear molecules (A). Beyond that, the ends of digests from different protein-DNA complexes may also ligate, which is referred to as random cross-ligation; these unintentional ligations lead to false positive predicted interactions (B). Furthermore, the two ends of an individual digest may ligate, which results in circular molecules and is referred to as self-ligation (C). Finally, the ends of given digests may remain un-ligated (D).
Shearing by Sonication
After the ligation step, the resulting molecules are sheared by sonication and the sonicated DNA is end-repaired. This re-linearizes the circularized ligation products. In general, the termini of sonication fragments do not coincide with restriction enzyme cutting sites. Conceptually, three different fragment categories can be distinguished at this stage ( Figure 3). Chimeric fragments arising from valid ligation or random cross-ligation consist of two DNA segments from different genomic locations and are linked by a ligation junction. If both segments are located on the same chromosome, the fragments are referred to as cis and otherwise as trans. Fragments arising from un-ligated digests do not contain ligation junctions, whereas fragments arising from self-ligation do. Note that fragments without ligation junctions may also arise from digests involved in ligations because shearing of ligated digests can occasionally produce pieces of DNA without a ligation junction.
In theory, all three fragment types may contain uncut restriction sites due to incomplete digestion. If no fill-in of the sticky ends was performed, ligation junctions and uncut sites have the same DNA sequence, whereas, if the fill-in was performed, ligation junctions occur as two consecutive repetitions of the overhang sequence. Fragment ends that correspond to un-ligated termini of restriction digests are referred to as dangling ends. Dangling ends are most likely to occur at the ends of fragments arising from un-ligated digests, because these digests have two un-ligated ends and sonication will inevitably produce fragments with dangling ends. Other fragment categories that may have dangling ends are chimeric fragments arising from random cross-ligation or from incomplete ligations within given DNA-protein complexes (because only one pair of ends was ligated). Finally, ring-shaped digests are very unlikely to result in dangling-end fragments unless breakpoints are by chance introduced at restriction cutting sites.
Figure 3. Shearing re-linearizes ring-shaped re-ligation products and introduces a new type of fragment end (denoted by flash-like symbols). At this stage, three different categories of fragments can be distinguished: chimeric fragments arising from interactions or cross-ligation (A), as well as fragments arising from un-ligated (B) and self-ligated digests (C). The size distribution of fragments results from digestion and shearing and can be assumed to be the same for all three categories. For chimeric fragments that contain multiple restriction sites, the size cannot be unambiguously determined (marked with an asterisk, see text below).
Sequencing and Mapping
For Hi-C, paired-end sequencing of the two outermost ends of fragments is performed, and the reads are independently mapped (treated as single-end reads) to the corresponding reference genome on the basis of sequence identity. Since each read can map either to the positive strand (forward orientation) or to the negative strand (reverse orientation), four different relative orientations of mapped read pairs are possible (Figure 4). If both reads map to the same strand, they point in the same direction, either to the left or to the right. If the reads map to different strands, the sequential order matters, and the reads point either inwards or outwards. Read pairs from chimeric fragments may have any of these orientations. In contrast, sequencing of un-ligated fragments must result in inward-pointing pairs, whereas sequencing of fragments arising from self-ligation must result in outward-pointing pairs.
Figure 4. Only the two outermost ends of fragments are subjected to paired-end sequencing and mapped to the forward (red) and reverse strand (blue) of the corresponding reference genome. Read pairs arising from chimeric fragments may have all possible relative orientations (A). Read pairs arising from un-ligated fragments can only point inwards (B). Read pairs arising from self-ligation must point outwards (C).
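The orientation of a pair can be determined from the mapped strands and 5' coordinates of the two reads. The following Python sketch illustrates one way to encode this rule (an illustration, not code taken from any of the cited tools):

def pair_orientation(chrom1, pos1, strand1, chrom2, pos2, strand2):
    """Classify the relative orientation of two independently mapped reads.

    pos1/pos2 are 5' mapping coordinates; strand is '+' or '-'.
    Returns 'trans', 'left', 'right', 'inwards' or 'outwards'.
    """
    if chrom1 != chrom2:
        return "trans"
    if strand1 == strand2:
        # Both reads point the same way along the chromosome.
        return "right" if strand1 == "+" else "left"
    # Put the reads in coordinate order, then check which way they face.
    (p_first, s_first) = min((pos1, strand1), (pos2, strand2))
    return "inwards" if s_first == "+" else "outwards"

print(pair_orientation("chr1", 1000, "+", "chr1", 5000, "-"))  # inwards
print(pair_orientation("chr1", 1000, "-", "chr1", 5000, "+"))  # outwards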
Enrichment of Target Fragments
For Hi-C, an enrichment step is performed in which the biotin-marked ligation products are enriched using streptavidin Dynabeads [4]. This effectively depletes the un-ligated fragments, because all other fragment types contain biotinylated ligation junctions. This step is not performed for Capture-C [7,8], which is why, all else being equal, one has to sequence more reads from un-ligated fragments in order to obtain a comparable number of reads from chimeric fragments. Capture Hi-C and Capture-C involve an additional enrichment step using biotinylated oligonucleotides, referred to as baits or probes, that are complementary to target regions in the genome such as promoters [6,7]. In this way, sequencing is focused on a selected set of target regions, thereby reducing the sequencing depth required to obtain the desired coverage of the target regions. Ideally, the specific characteristics of Hi-C fragments are taken into account for bait design, which can be a challenging task for various reasons. For instance, assuming that the shearing breakpoints introduced by sonication are evenly distributed across the genome, the biotin-marked ligation junctions on chimeric fragments would accumulate around the fragment centers. In this situation, it would be sufficient (and possibly more favourable) to target only the outermost ends of digests, that is, near restriction sites. This and other challenges were addressed by GOPHER, an easy-to-use and robust desktop application for CHC probe design [9].
A Processing Pipeline for Read Pair Categorization
The processing and quality control can be divided into three main steps. The truncation step removes sequences from chimeric reads that would impede mapping; the alignment step maps each (potentially truncated) read separately and then rejoins the reads and determines the relative orientation of the "re-paired" reads. The pairs are then classified as artefactual or valid, and the counts of valid read pairs are determined for individual pairs of restriction digests (interactions). The resulting matrix of interactions can be used for downstream analysis.
Truncation of Reads
For Hi-C, the sticky ends are filled in with biotinylated nucleotides and the resulting blunt ends are ligated. The corresponding ligation junctions can then be observed as two consecutive copies of the overhang sequence at restriction enzyme cutting sites (e.g., AAGCTAGCTT for HindIII; see Section 2.2). Depending on the distance of the ligation junction from the terminus of the sonication fragment, the read sequence can consist of sequences from two different digests separated by the ligation junction (Figure 5A). On average, longer read lengths and smaller size-selected fragments following sonication are more likely to produce reads that contain a ligation junction. Read mappers cannot map chimeric reads whose 5' and 3' segments correspond to two different genomic locations. Therefore, sequences are read in the 5'-3' direction and chimeric reads are truncated at the location of the ligation site, thereby removing the downstream sequence (other strategies are also in use [10]). In contrast to Hi-C and CHC, no fill-in of the overhangs is performed for Capture-C, and the ligation junctions occur as plain restriction sites, but the truncation step is performed in an analogous fashion.
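A minimal Python sketch of this truncation step (illustrative only, using the filled-in HindIII junction AAGCTAGCTT described above; real tools such as HiCUP implement additional logic):

def truncate_read(read, junction="AAGCTAGCTT"):
    """Truncate a read at the first occurrence of the ligation junction.

    Everything from the junction onward is removed; exact handling of
    the residual restriction-site bases varies between tools.
    """
    i = read.find(junction)
    return read if i == -1 else read[:i]

read = "ACGTACGT" + "AAGCTAGCTT" + "GGGG"  # toy chimeric read
print(truncate_read(read))  # ACGTACGT

Reads left too short to map after truncation would then be discarded, which is reflected in the "removed by truncation" metric discussed below.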
Independent Mapping of Reads and Re-Pairing
The digestion and ligation steps of the Hi-C protocol require each read of a given pair to be mapped separately. A read mapper such as bowtie2 [11] can be used in single-end mode to map the truncated forward and reverse reads independently. Information about the relative order and orientation needs to be combined subsequently ("re-paired") [12], which results in the four different read pair orientations: left, right, inwards and outwards ( Figure 4). Mapped reads are stored in the SAM format [13], which allows every possible relative orientation to be represented with SAM flags.
Fragment and Digest Size Calculations
In order to decide whether a given read pair originates from a chimeric, un-ligated or self-ligated fragment, thresholds are applied to fragment and digest sizes. For the determination of these sizes, the special characteristics of Hi-C data must be taken into account.
The size of un-ligated fragments is only defined for inward-pointing read pairs that map to the same chromosome (cis) and corresponds, as usual, to the distance between the 5' end positions of the two mapped reads. The size of fragments arising from ligation is defined for all read pairs and is calculated by summing the sizes of the two segments that form a fragment. The size of an individual segment corresponds to the distance between the 5' end position of a mapped read and the next occurrence of a restriction site in the 3' direction (Figure 5B).
Another relevant size is that of self-ligating digests, which is only defined for outward-pointing read pairs mapping to the same chromosome and corresponds to the genomic distance between the two restriction sites that re-ligated. This size can be calculated by adding to the calculated fragment size the length of the un-ligated part of the self-ligated digest, which corresponds to the distance between the two 5' end positions of the mapped reads (Figure 5C).
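The following Python sketch illustrates these size calculations (a simplified illustration under the definitions above; the restriction map is given as a sorted list of cut coordinates, and all inputs are assumed valid):

import bisect

def segment_size(pos, strand, cut_sites):
    """Distance from a read's 5' end to the next restriction site in
    the read's 3' direction (downstream for '+', upstream for '-')."""
    if strand == "+":
        j = bisect.bisect_right(cut_sites, pos)
        return cut_sites[j] - pos
    j = bisect.bisect_left(cut_sites, pos)
    return pos - cut_sites[j - 1]

cuts = [0, 1000, 5000, 9000]  # toy restriction map of one chromosome

# Chimeric fragment size: sum of the two segment sizes (Figure 5B).
frag = segment_size(4200, "+", cuts) + segment_size(7400, "-", cuts)
print(frag)  # 800 + 2400 = 3200

# Self-ligated digest size: fragment size plus the distance between the
# two 5' ends of an outward-pointing pair (Figure 5C).
p1, p2 = 5600, 8300
self_lig = segment_size(p1, "-", cuts) + segment_size(p2, "+", cuts) + (p2 - p1)
print(self_lig)  # 600 + 700 + 2700 = 4000, the digest size 9000 - 5000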
We note that the size calculation procedure for fragments arising from ligation does not take into account incomplete digestion, that is, the next occurrence of a cutting site in 3' direction does not necessarily correspond to a ligation junction (fragment marked with an asterisk in Figure 3). It is impossible to determine with certainty whether incomplete digestion or an interaction has occurred. In such cases the calculated fragment size will be shorter than the actual size.
Elimination of Artefactual Read Pairs
Read pairs that originate from un-ligated or self-ligated digests are not informative and need to be filtered out. The processing of Capture-C, CHC and Hi-C data is based on read pair orientation and on thresholds that are applied to the sizes of fragments and self-ligated digests (Figure 6).
Figure 6. Inward-pointing pairs that map to the same digest must have originated from un-ligated fragments. A size threshold is applied to the remaining fragments to categorize them as valid or artefactual (C). Outward-pointing read pairs that map to the same digest must have originated from self-ligated digests. A second size threshold is applied to the remaining fragments to categorize them as valid or artefactual (D). Read pairs mapping to the same strand can only be chimeric; however, we observe very small proportions of read pairs that are mapped to the same strand and digest, and such read pairs are classified as strange internal (E).
Read pairs that map to different chromosomes obviously originate from chimeric fragments. Furthermore, read pairs can be distinguished by means of their relative orientation. Pairs mapping to different strands of the same chromosome may be valid or may originate from cross-ligated, un-ligated or self-ligated digests. Pairs where both reads map to the same restriction digest are clearly artefactual: if the read pair points inwards, the fragment is classified as un-ligated; if it points outwards, the fragment is classified as self-ligated (Figures 2 and 4). If a read pair maps to two adjacent fragments, this could in principle represent a short-range interaction, or could result from incomplete digestion of an un-ligated fragment or from ligation of adjacent restriction digests. It is impossible to experimentally distinguish between these possibilities. A threshold is applied to the size of un-ligated fragments (l_u; Figure 5B). If l_u is within the expected range for fragments after shearing (not longer than a few hundred base pairs), the read pair is classified as un-ligated even if the reads map to adjacent intervals flanked by different restriction sites. Read pairs that encompass multiple adjacent restriction fragments but whose size is below the threshold are also classified as artefacts.
A second threshold is applied that relates to the original size of self-ligating digests (l_s; Figure 5C). This size corresponds to the distance between the two restriction sites that define the self-ligating digest. If this size is within the expected range for self-ligating digests (not longer than a few thousand base pairs), the read pair is classified as self-ligated.
In contrast to un-ligated and self-ligated pairs, read pairs mapping to the same strand can only be chimeric. However, a very small proportion of read pairs (less than 0.1%) can be observed to be mapped to the same strand and to the same restriction digest. These pairs, which we refer to as strange internal because they do not correspond to any of the categories discussed above, presumably represent technical artefacts.
For the remaining chimeric read pairs, a third threshold is applied to the ligation fragment size (l_r). If this size is outside the expected range for fragments after shearing, the corresponding read pairs are classified as too short or too long. All other chimeric read pairs are classified as valid and are suitable for downstream analysis.
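Putting the rules together, the following Python sketch shows the decision logic in a minimal form (the threshold values and argument names are illustrative assumptions; tools such as HiCUP or Diachromatic implement refined versions of these rules):

def classify_pair(orientation, same_digest, l_u, l_s, l_r,
                  max_unligated=800, max_selflig=4000,
                  min_size=100, max_size=1000):
    """Classify a read pair using orientation and size thresholds.

    l_u: un-ligated fragment size (inward pairs), l_s: self-ligated
    digest size (outward pairs), l_r: chimeric fragment size.
    """
    if orientation == "trans":
        return "valid" if min_size <= l_r <= max_size else "wrong size"
    if same_digest and orientation in ("left", "right"):
        return "strange internal"
    if orientation == "inwards" and (same_digest or l_u <= max_unligated):
        return "un-ligated"
    if orientation == "outwards" and (same_digest or l_s <= max_selflig):
        return "self-ligated"
    if l_r < min_size:
        return "too short"
    if l_r > max_size:
        return "too long"
    return "valid"

print(classify_pair("inwards", False, 450, None, 450))     # un-ligated
print(classify_pair("outwards", False, None, 12000, 350))  # valid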
We note that the chimeric read pairs with a valid size contain an unobservable but presumably large proportion of read pairs that originate from random cross-ligations. Such read pairs cannot be eliminated by the quality-control pipeline because they are indistinguishable from read pairs arising from genuine interactions.
Quality Metrics
It is important to understand how the experimental procedures affect the values of the metrics in order to interpret them in the context of new experiments. A variety of counts and proportions can be useful in assessing the quality of an experimental dataset. Quality metrics are derived for the three major steps of the processing pipeline as well as for the overall result of the experiment (Table 1). It is not currently possible to define thresholds above or below which a dataset must be regarded as low or high quality. Instead, we recommend that these quality metrics be compared across individual experiments to identify outliers or failed experiments that might need to be repeated or omitted from further analysis.
Following truncation and mapping, read pairs are categorized according to fragment size and orientation, and the resulting assignment of read pairs to the artefact categories or to chimeric read pairs of valid size is reported. All trans read pairs must be chimeric because they map to different chromosomes. Trans read pairs may in principle represent genuine interchromosomal interactions, but trans read pairs are enriched in artefactual interactions, and high trans/cis ratios may be indicative of poor library quality [12,14]. This interpretation is supported by the fact that the proportion of trans read pairs among all read pairs that map to a chromosome is approximately linearly related to the number of digests per chromosome, with the largest chromosomes such as chr1 and chr2 having substantially fewer trans pairs than small chromosomes such as chr21 and chr22 (Figure 7).
Table 1. Quality metrics that can be used to assess the quality of an experimental dataset. It is not possible to provide precise cutoffs for the quality metrics; instead, we recommend that researchers use the metrics to compare experiments within a given study to identify potential outliers that may require attention.
| Metric | Description |
|---|---|
| Removed by truncation | Read pairs removed because at least one of the reads was too short to map following truncation at a ligation sequence. Depends on the specificity of the read sequence at ligation junctions, which is typically higher with longer restriction enzyme recognition sequences and if sticky ends are filled in. |
| Unmapped/multimapped | Read pairs removed because at least one of the reads could not be mapped (or could not be mapped uniquely). |
| Duplicated | Removed duplicated read pairs (one pair is retained for downstream analysis). High duplication rates indicate low library complexity that may be due to low amounts of DNA used for library preparation. |
| Dangling ends | Read pairs at least one of whose 5' ends coincides with a restriction enzyme cutting site. Dangling ends may correspond to un-ligated digest ends. |
| Remaining pairs | Total read pairs that were not removed in the course of truncation, mapping and deduplication (usually on the order of 10^8). |
| **Re-paired read pairs** | |
| Un-ligated | Large proportions of un-ligated read pairs indicate inefficient biotin pull-down of fragments with a ligation junction. For Capture-C, the proportion of un-ligated pairs is much higher because no pull-down is performed. |
| Self-ligated | Self-ligation seems to be a rare event. Because fragments arising from self-ligation contain ligation junctions, the proportions may be higher for capture Hi-C (CHC) as compared to Capture-C. |
| Strange internal | Number of read pairs for which both reads map to the same strand and restriction digest. Cannot be explained by un-ligated or self-ligated digests. Typically, this category makes up only a very small proportion (less than 0.1% of re-paired pairs). |
| Chimeric | Read pairs that arise from interactions or random cross-ligations (on the order of 10^7). |
| **Chimeric read pairs** | |
| Trans | Chimeric read pairs whose reads map to different chromosomes. Large proportions indicate a high degree of random cross-ligation. |
| Cis | Chimeric read pairs whose reads map to the same chromosome. |
| Non-singleton index (NSI) | Number of interactions that consist of more than one read pair. A high proportion of singleton interactions may indicate a high degree of random cross-ligation because random cross-ligations for a given digest pair are unlikely to occur more than once. |
| **Global quality metrics** | |
| Yield of chimeric pairs (YCP) | Percentage among input read pairs that are classified as chimeric and used for downstream analysis. Low percentages may indicate low overall performance of the protocol. |
| cis:trans ratio | Low percentages of cis read pairs indicate a high degree of random cross-ligation. |
| Yield of non-singleton pairs | Percentage among input read pairs that belong to interactions with more than one read pair. |
| Target enrichment | Percentage of chimeric read pairs for which at least one read is mapped to a target region. Low percentages indicate poor performance of target enrichment. |
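As a worked illustration of the global quality metrics defined above, the following Python sketch derives YCP, the cis:trans ratio, the NSI and target enrichment from per-category counts; the counts are invented placeholders, not values from a real experiment:

```python
# Hypothetical counts for one experiment (placeholders, not real data).
input_pairs    = 100_000_000  # total raw read pairs
chimeric_valid = 45_000_000   # chimeric pairs of valid size
cis_pairs      = 36_000_000   # chimeric pairs on the same chromosome
trans_pairs    = 9_000_000    # chimeric pairs on different chromosomes
non_singleton  = 20_000_000   # chimeric pairs in interactions with >1 pair
on_target      = 12_000_000   # chimeric pairs with >=1 read in a target region

ycp = 100 * chimeric_valid / input_pairs    # yield of chimeric pairs (%)
cis_trans = cis_pairs / trans_pairs         # high values suggest less cross-ligation
nsi = 100 * non_singleton / chimeric_valid  # non-singleton index (%)
tec = 100 * on_target / chimeric_valid      # target enrichment (%)

print(f"YCP={ycp:.1f}%  cis:trans={cis_trans:.1f}  NSI={nsi:.1f}%  TEC={tec:.1f}%")
```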
Computational Tools for Processing Hi-C and Capture Hi-C Data
Many authors have presented tools for processing Hi-C data that implement the strategies discussed above or variations thereof. HiC-Pro [15], Juicer [16], HiCUP [12], HiCdat [17], HOMER [18] and HiC-bench [19] are some of the best known tools.
This review is focused on pre-processing and quality control. However, we will briefly summarize typical computational analysis procedures that follow the pre-processing. The goal of most experiments is to determine characteristic interactions between genomic regions. The analysis can be carried out on individual restriction fragments, but, especially for Hi-C, interactions are often combined into genomic bins of fixed size (e.g., 5 kb, 20 kb, 40 kb, up to 1 Mb). The counts of chimeric read pairs stemming from different genomic regions reflect the strength of the genomic interactions between them. However, factors including the distance between restriction sites, the GC content of the fragments and sequence uniqueness introduce systematic biases that can affect interpretation [20], and it is therefore desirable to normalize the raw counts prior to downstream analysis [21,22]. Multiple approaches are used to normalize raw read count data, including Poisson regression [22], negative binomial regression [23], iterative correction and eigenvector decomposition [15,24,25], locally weighted linear regression of multiple datasets [21] and others.
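As one concrete illustration of bias removal, the sketch below performs a bare-bones iterative (Sinkhorn-style) balancing of a binned contact matrix, in the spirit of iterative correction [24]; it is a simplified sketch, not the published ICE implementation:

```python
import numpy as np

def iterative_balance(contacts: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Balance a symmetric bin-by-bin contact matrix so that all rows
    (and, by symmetry, columns) have approximately equal sums.

    This removes multiplicative per-bin biases (e.g., GC content or
    mappability) under the usual assumption that true total coverage
    should be uniform across bins.
    """
    m = contacts.astype(float).copy()
    for _ in range(n_iter):
        row_sums = m.sum(axis=1)
        row_sums[row_sums == 0] = 1.0      # avoid division by zero for empty bins
        bias = row_sums / row_sums.mean()  # per-bin multiplicative bias
        m /= np.outer(bias, bias)          # rescale rows and columns symmetrically
    return m

# Toy 3x3 symmetric contact matrix with one over-covered bin.
raw = np.array([[10.0, 4.0, 2.0],
                [ 4.0, 2.0, 1.0],
                [ 2.0, 1.0, 1.0]])
balanced = iterative_balance(raw)
print(balanced.sum(axis=1))  # row sums are now approximately equal
```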
Three Exemplary Datasets
To illustrate the analysis strategy and introduce the quality metrics on real data, we applied our processing pipeline and quality control analysis (Diachromatic, see Methods) to a Capture-C [8], a CHC and a Hi-C dataset [37] (see Methods for a description of the datasets). The resulting quality metrics are shown in Table 2.
If the truncation removes too much of one read for it to be reliably mapped, then the read pair is removed from further analysis. The proportion of reads removed following truncation is much higher for the Capture-C dataset than for the other two datasets (roughly 3% for Hi-C and 20% for Capture-C). Presumably, this reflects the fact that the Capture-C experiments were performed with DpnII (GATC), whereas the other two datasets were generated with HindIII digestion followed by biotin fill-in, which results in a much longer ligation sequence (AAGCTAGCTT) that is less likely to occur by chance. The alignment step performed by Diachromatic uses bowtie2 [11] and records the number of reads that could not be mapped or were multimapped; both categories of pairs are omitted from further analysis. Finally, successfully re-paired reads are examined for duplicates and the duplicates are removed. The target enrichment tends to reduce the complexity of the library, and so the proportion of duplicates is higher for the capture Hi-C and Capture-C libraries (about 3% for Hi-C and 12-30% for CHC and Capture-C). In these datasets, roughly between 40% and 60% of read pairs were then available for further analysis. About 3-4% of read pairs in the experiments analyzed here contained at least one read with a dangling end. In Diachromatic, dangling read pairs are not removed because they may be the result of incomplete ligation (see Section 2.3).
The fact that there are more self-ligated read-pairs in the capture Hi-C library as compared to the Capture-C library probably reflects the fact that the biotin pull down enriches all categories of read pairs with ligation junctions (including self-ligated pairs) and tends to deplete un-ligated read pairs (which were more frequent in the Capture-C dataset). The proportion of trans pairs is the highest for Hi-C. Furthermore, the proportion of read pairs that map to non-singleton interactions is higher for Capture-C and capture Hi-C as compared to Hi-C, presumably because of the enrichment step. Global quality metrics can be used to compare related experiments. The Yield of chimeric pairs (YCP) is defined as the percentage of raw read pairs that pass all quality filters and thereby are classified as valid chimeric pairs for downstream analysis. The YCP reflects the overall efficiency of the Hi-C protocol. Valid read pairs arising from genuine chromatin-chromatin interactions between different chromosomes cannot be distinguished from those arising from cross-ligation events. However, based on the assumption that random cross-ligations between DNA fragments of different chromosomes (trans) are more likely to occur than cross-ligations between DNA fragments of the same chromosome (cis), a low ratio of the numbers of cis and trans read pairs is taken as an indicator of poor Hi-C libraries [12,14]. The fact that the cis:trans ratio is higher for the Capture-C and CHC datasets probably reflects the enrichment of targeted cis interactions (the assumption here is that true interactions are more likely to be cis than trans). The non-singleton index (NSI) is simply the percentage of non-singleton read pairs among all valid chimeric read pairs. It is higher for Capture-C and CHC than for Hi-C, presumably because the enrichment step causes reproducible interactions to be sequenced multiple times. An increased amount of random cross ligation would reduce the NSI, all else equal, because it is unlikely that the same cross-ligation event occurs more than once by chance. The target enrichment coefficient (TEC) is the proportion of read pairs for which at least one of the two reads maps to a digest that was selected for target enrichment. In the experiments analyzed here, the capture Hi-C and Capture-C methodologies yield comparable results between 22% and 30%.
Conclusions
Hi-C, CHC and Capture-C are used in a broad variety of experimental settings to characterize topologically associating domains and functional interactions of promoters with distal regulatory elements such as enhancers. The results of the analysis can be used to understand the effects of single nucleotide polymorphisms (SNPs) and structural variants on gene regulation and chromosomal architecture, and for the analysis of gene regulatory programs in development and disease [28]. We showed that a deep understanding of the data and of potential quality issues is essential for the correct interpretation of experimental results. With appropriate computational analysis, noise from experimental artefacts can be separated from the real signal in order to identify true interactions and reconstruct the three-dimensional folding structure of genomes.
Datasets
We analyzed representative Capture-C, capture Hi-C and Hi-C datasets to generate the data in Table 2. The read length was 100 bp for all the datasets analyzed here.

Table 2. Average read pair counts and quality metrics for Capture-C, CHC and Hi-C datasets. The percentages for truncation, mapping and deduplication were calculated with respect to the total number of read pairs. The percentages for read pair categorization were calculated with respect to the number of reads that could be re-paired (remaining reads from the first processing steps). Percentages of cis and trans read pairs, as well as read pairs in non-singleton interactions, were calculated with respect to the total number of pairs that were categorized as chimeric. See Table 1 for an explanation of the quality metrics.
(Table 2 columns: Item; Capture-C; Capture Hi-C; Hi-C. The table body is not reproduced here.)

The Capture-C dataset [8] captured 446 limb-associated gene loci in mouse at three developmental time stages in forelimb, hindlimb and midbrain. Each experiment was performed in two biological replicates. We analyzed data for 12 (SRR3950556, SRR3950558-SRR3950568) out of 14 experiments (data for two replicate experiments in fore- and hindlimb were not available at the Sequence Read Archive [38] at the time of this writing). A total of 1,123,921,557 read pairs were extracted.
For the capture Hi-C study [37], 22,000 promoters in human CD34 and GM12878 blood cells were captured. The capture Hi-C experiments were performed in two biological replicates for CD34 and in three biological replicates for GM12878 cells. For most biological replicates there are also technical replicates. Altogether, data from 9 runs are available, comprising 1,308,468,350 read pairs (ERR436025-ERR436033). We did not pool technical replicates but analyzed them separately. In addition, Hi-C experiments from the same study [37] were analyzed. One replicate each was performed for CD34 and GM12878. The two datasets comprise 351,972,837 read pairs (ERR436023, ERR436024).
Diachromatic
Diachromatic is a Java application that implements the processing and quality control pipelines described above. As input, Diachromatic expects paired FASTQ files from a Hi-C, CHC, or Capture-C experiment. Furthermore, Diachromatic requires a digest file as input that contains the coordinates of all digests that result from a complete digest of the entire genome using a given restriction enzyme or set of enzymes. Beyond that, the digest file contains information about each digest, such as length and GC content. The digest map can be generated using GOPHER [9]. Diachromatic produces a BAM file containing valid chimeric read pairs intended for downstream analysis. Diachromatic source code and complete documentation are available at the Diachromatic GitHub page (https://github.com/TheJacksonLaboratory/diachromatic).
Digest Map for Andrey et al. 2016
In order to create the digest map that is required as input for Diachromatic, we used GOPHER (v0.5.9). The 446 gene symbols of the target genes were extracted from Table S1 of the original publication [8]. After manual revision of gene symbols that could not be found in the RefSeq annotation, 433 gene symbols were imported into GOPHER. For these gene symbols, we derived viewpoints for mm10 using GOPHER's extended approach with 5000 bp in the upstream and 2000 bp in the downstream direction. The restriction enzyme was set to DpnII. This approach is similar to that taken by Andrey et al. Thresholds for GC and repeat content as well as balanced margins were overridden. Altogether, the design consists of 433 genes, 577 viewpoints, 6402 unique digests and 12,804 probes.
Digest Map for Mifsud et al. 2015
To create the digest map for the analysis of the data of Mifsud et al., we used GOPHER's preset option for all protein-coding genes and the simple approach with HindIII in order to create viewpoints for hg38. Thresholds for GC and repeat content as well as balanced margins were overridden. This results in a design with 18,957 genes, 31,832 viewpoints, 19,873 unique digests and 39,746 probes.
"Computer Science"
] |
Can asymmetric post‐translational modifications regulate the behavior of STAT3 homodimers?
Abstract Signal transducer and activator of transcription 3 (STAT3) is a ubiquitous and pleiotropic transcription factor that plays essential roles in normal development, immunity, response to tissue damage and cancer. We have developed a Venus-STAT3 bimolecular fluorescence complementation assay that allows the visualization and study of STAT3 dimerization and protein-protein interactions in living cells. Inactivating mutations of residues susceptible to post-translational modifications (PTMs) (K49R, K140R, K685R, Y705F and S727A) significantly changed the intracellular distribution of unstimulated STAT3 dimers when the dimers were formed by STAT3 molecules that carried different mutations (i.e., they were "asymmetric"). Some of these asymmetric dimers changed the proliferation rate of HeLa cells. Our results indicate that asymmetric PTMs on STAT3 dimers could constitute a new level of regulation of STAT3 signaling. We put forward these observations as a working hypothesis, since confirming the existence of asymmetric STAT3 homodimers in nature is extremely difficult, and our own experimental setup has technical limitations that we discuss. However, if our hypothesis is confirmed, its conceptual implications go far beyond STAT3, and could advance our understanding and control of signaling pathways.
INTRODUCTION
The signal transducer and activator of transcription 3 (STAT3) is a conserved transcription factor that plays key roles in development, immunity, response to injury and cancer. 1,2 STAT3 dimerization, post-translational modification (PTM) and intracellular location are limiting events in these biological functions. STAT3 is most commonly found as homodimers in the cytosol of unstimulated cells, and is canonically activated by phosphorylation at Y705 upon stimulation with a variety of cytokines and growth factors. 1,2 Phosphorylated STAT3 is then retained in the nucleus, where it activates the transcription of a specific set of genes. However, unstimulated STAT3 is also found in the nucleus, binds to DNA and controls the transcription of a gene set different from phosphorylated STAT3, such as m-Ras, RANTES or cyclin B1. [3][4][5] Stimulation of cells with cytokines from the IL-6 family or angiotensin II also induces accumulation of unphosphorylated STAT3 in the nucleus, where it forms complexes with other transcriptional regulators such as NF-κB and p300/CBP. [6][7][8] Nuclear accumulation of unphosphorylated STAT3 could have relevant physiopathological consequences, as it is correlated with cardiac hypertrophy and dysfunction in mice overexpressing the angiotensin receptor. 3 Furthermore, de novo mutations that force nuclear accumulation of unphosphorylated STAT3, such as L78R, E166Q or Y640F, are associated with inflammatory hepatocellular adenomas. 9,10 STAT3 can also be found in the mitochondria, where it is necessary for normal activity of the electron transport chain. 11 This function is independent of its activity as a transcription factor and of Y705 phosphorylation, but dependent on S727 phosphorylation. 11,12 Mitochondrial STAT3 can also act as a transcription factor on mitochondrial DNA, and has been found to promote Ras-mediated oncogenic transformation. 1,13 Other PTMs can regulate the behavior and function of STAT3, such as acetylation at K49 or K685 3,14,15 or dimethylation at K49 or K140. 16,17 Although dimethylation of the K49 or K140 residues is induced by stimulation with cytokines and is favored by STAT3 phosphorylation, there is basal K49 (but not K140) dimethylation in the STAT3 of unstimulated cells, 16 and the same happens with STAT3 acetylation. 14,15 The role of these and other PTMs on the mitochondrial functions of STAT3 remains unknown.
Three ingenious systems have been developed so far to visualize and study STAT3 dimerization in living cells, based on fluorescence resonance energy transfer (FRET), 18 bioluminescence resonance energy transfer (BRET) 5 or the homoFluoppi tag. 19 The FRET/BRET systems enable the visualization of both STAT3 homodimerization and its interaction with other proteins in real time and in a reversible manner. 5,18 However, they require very skilled users for sampling and analyses and are difficult to adapt for high-throughput experiments. The homoFluoppi system is simpler, but it only allows the visualization of STAT3 homodimerization, and exclusively by microscopy, as there is no change in total fluorescence but rather in the distribution of the fluorescence within the cell, in the form of punctae. 19 Bimolecular fluorescence complementation (BiFC) assays also allow the analysis of protein-protein interactions in living cells, 20 and their particular properties make them complementary to FRET/BRET or homoFluoppi systems. 20,21 In BiFC assays, the proteins of interest are fused to two non-fluorescent, complementary fragments of a fluorescent reporter, such as Venus (Figure 1A). When the proteins of interest dimerize, the fragments are brought together and reconstitute the fluorophore, the fluorescence being proportional to the amount of dimers. This fluorescence can be easily recorded and quantified by microscopy or flow cytometry in living cells, and applied to high-throughput setups.
Here, we developed a suite of Venus-STAT3 BiFC constructs that are not only an important addition to the existing STAT3 toolset, but were also successfully employed to generate an interesting hypothesis on the control of the STAT3 pathway by PTMs. The literature on STAT3 generally assumes that STAT3 homodimers are formed by two identically modified molecules. However, this is highly unlikely in a complex intracellular context, as PTMs do not occur in the whole pool of STAT3 molecules at the same time or with the same efficiency. We aimed to determine the relative contribution of residues K49, K140, K685, Y705 and S727 to the dimerization and intracellular distribution of STAT3 homodimers.
Venus-STAT3 system
We developed a suite of plasmids to study STAT3 dimerization in living cells, based on BiFC systems using Venus fragments as a reporter (Figure 1A), as we did for other proteins in previous reports. 20,22,23 When STAT3 dimerizes, the Venus fragments are brought together and reconstitute the fluorophore, fluorescence being proportional to the amount of dimers (Figure S1A). Transfection of HEK293 or HeLa cells with the wild-type (WT) pair of Venus-STAT3 BiFC constructs led to successful expression of the chimeric proteins V1-STAT3 and V2-STAT3 (Figure 1B,C; Figure S1A). Fluorescence was primarily cytoplasmic in both cell lines, with a low but visible nuclear signal (Figure 1C; Figure S1B). The combination of STAT3 with the corresponding BiFC constructs for the Mdm2 or p53 proteins produced extremely low levels of fluorescence (Figure S2A). This is consistent with the fact that these proteins are not STAT3 interactors and supports the specificity of the Venus-STAT3 BiFC system. Incubation with leukemia inhibitory factor (LIF, 100 ng/mL) induced STAT3 phosphorylation and translocation to the nucleus in HEK293 and HeLa cells (Figure S1B,C), but it did not enhance STAT3 dimerization (Figure 1B; Figure S1D). Incubation with the STAT3 inhibitor Stattic (5 µmol/L) or removal of the C-terminus containing the SH2 domain partially prevented STAT3 dimerization (Figure 1B), consistent with previous reports. 18,24 On the other hand, single or double Y705F/S727A phosphoresistant mutants did not decrease fluorescence (Figure 1B). These results support existing evidence indicating that STAT3 dimerization is actually independent of phosphorylation. 5,19,25 Naturally occurring STAT3 mutations cause hyper-immunoglobulin E syndrome or inflammatory hepatocellular adenoma. 10,26 The L78R mutation, in particular, inhibits STAT3 dimerization but has a strong tendency to go to the nucleus and activate transcription. 10,18 We created an L78R STAT3 mutant in our BiFC system and confirmed first that it inhibited STAT3 dimerization (Figure S2A) and induced nuclear translocation at the expense of cytoplasmic STAT3 (Figure S2B,C). Furthermore, the analysis of microscopy images indicated that it also induces STAT3 aggregation into cytoplasmic inclusions (Figure S2B,C).

Figure 1. A Venus-signal transducer and activator of transcription 3 (STAT3) bimolecular fluorescence complementation (BiFC) system allows the visualization and study of STAT3 homodimers in living cells. A, Venus BiFC fragments constituted by amino acids 1-158 (Venus 1, V1) and 159-238 (Venus 2, V2) were fused to the N-terminus of the STAT3 sequence in two independent constructs. K49, K140, K685, Y705 and S727 residues can be post-translationally modified, and were inactivated in both V1- and V2-STAT3 constructs by site-directed mutagenesis. B, Wild-type (WT) Venus-STAT3 constructs produced fluorescence in HeLa cells, monitored by flow cytometry 24 h after transfection with BiFC constructs, following incubation with leukemia inhibitory factor (100 ng/mL) for 2 h in the absence of serum or in the presence of the indicated drugs or mutant BiFC pairs (n = 3; P < .05). Results were normalized vs the WT STAT3 pair (100%). C, Microscopy pictures of representative cell phenotypes in the different symmetric combinations of BiFC Venus-STAT3 constructs (Incl, inclusions; scale bar, 20 µm). D, Percentage of cells displaying fluorescence predominantly in the nucleus (black bar), predominantly in the cytosol (white bar), homogeneously distributed in cytoplasm and nucleus (nucleocytoplasmic, grey bar), in the mitochondria or in non-mitochondrial inclusions. Data are shown as the average ± SEM of n = 12 (WT) or n = 3 (rest of combinations) independent experiments. *Significant vs the symmetric WT STAT3 pair, P < .05.
Taken altogether, our results indicate that the behavior of the Venus-STAT3 BiFC system is consistent with previous reports for tagged STAT3, and that it could be useful for the analysis of environmental or genetic modifiers of STAT3 dimerization, protein-protein interactions and intracellular traffic.
The dimerization and intracellular distribution of unstimulated symmetric STAT3 homodimers
Next, we tried to elucidate the role that particular residues susceptible to PTMs could play in the dimerization and intracellular localization of STAT3 homodimers without adding exogenous cytokines. The residues chosen were K49, K140, K685, Y705 and S727, susceptible to acetylation, methylation or phosphorylation. The original idea was to establish a baseline for future experiments in the presence of cytokines, which enhance the frequency of these particular PTMs in STAT3. However, low levels of these PTMs in the absence of cytokines have also been described in the literature, 3,14,15 and we also wanted to know whether these basal PTMs, or the residues themselves, had any influence on the dimerization and distribution of STAT3 homodimers. We initially assumed that the two STAT3 molecules that form a dimer are identical in all aspects, including their PTMs. Therefore, our analyses focused first on "symmetric" combinations. No combination had a consistent effect on unstimulated STAT3 dimerization, as determined by flow cytometry (Figure S3). In order to analyze the intracellular location of unstimulated STAT3 homodimers, we classified cells qualitatively into three mutually exclusive categories (their sum is 100% of cells), according to the relative intensity and location of the fluorescence signal (Figure 1C,D; Figure S2): (a) predominantly in the cytoplasm (e.g., the WT pair); (b) predominantly in the nucleus (e.g., upon LIF induction, Figures S1B and S2B); or (c) homogeneously distributed through nucleus and cytoplasm (e.g., the Y705F pair). We also determined the percentage of cells with mitochondrial signal or intracellular inclusions (Figure S4). Although changes in patterns of STAT3 dimer distribution were observed in several symmetric BiFC pairs, only the Y705F pair induced a significant increase in the percentage of cells with homogeneous nucleocytoplasmic fluorescence (Figure 1D).
Relative contribution of specific residues to STAT3 dimerization, intracellular location, and cell proliferation
To the best of our knowledge, the existing scientific literature on STAT3 implicitly assumes, as we did, that STAT3 homodimers are formed by two molecules that are identical in all aspects, including PTMs. For example, it is still relatively easy to find articles and reviews where STAT3 is described as homodimerizing only upon phosphorylation of both molecules at Y705, 27 and we ourselves worked under this same assumption until very recently. 28 Here, we made use of the unique properties of our BiFC system to determine the relative contribution of each residue to the dimerization and intracellular distribution of unstimulated STAT3 dimers, in an experimental paradigm similar to the one we used previously for mutant huntingtin. 20 We combined all possible inactivating PTM mutations with each other, but again no combination had a consistent effect on unstimulated STAT3 dimerization (Figure S3). However, the intracellular distribution of STAT3 homodimers was significantly altered by specific combinations of STAT3 molecules (Figure 2A). Unlike the K49R symmetric pair, K49R asymmetric combinations dominantly induced an increase in cells with homogeneous nucleocytoplasmic fluorescence at the expense of cytoplasmic location (Figure 2A), similar to the Y705F symmetric pair. K140R- or K685R-containing pairs showed some tendency to shift cytoplasmic location toward the nucleus, but only the K140R + S727A combination achieved significance. This phenotype was almost identical to the Y705F + S727A asymmetric pair (Figure 2A).
We then pooled and analyzed all results according to the number and type of PTM mutations present in each BiFC pair. Combinations carrying any one (asymmetric) or two K-R substitutions (symmetric or asymmetric) significantly increased mitochondrial translocation, while decreasing the percentage of cells with STAT3 dimers predominantly in the cytoplasm (Figure 2B). Asymmetric combinations of one K-R substitution and one phosphoresistant mutant also increased nuclear translocation, but only 2×K-R combinations increased homogeneous nucleocytoplasmic distribution. Combinations carrying any two phosphoresistant mutations (symmetric or asymmetric) had no significant effect on the cellular distribution of STAT3 homodimers (Figure 2B). These results indicate that specific asymmetric PTMs on STAT3 dimers can prevent their nuclear import/export. This was later confirmed by pooling the data according to whether the STAT3 pair was symmetric or asymmetric in their PTM mutations (Figure 2C). We found that only asymmetric PTM mutant combinations increased nucleocytoplasmic or nuclear distribution at the expense of decreasing cytoplasmic localization of STAT3 homodimers. Asymmetric combinations were also sufficient to produce an increase in mitochondrial localization of STAT3 dimers (Figure 2C).
Signal transducer and activator of transcription 3 contributes to cancer cell survival, proliferation and malignant transformation even in conditions where it is not stimulated by cytokines, 25,[29][30][31][32] and mitochondrial STAT3 could promote oncogenic transformation in certain biological contexts. 13,33 In order to know whether there were biological consequences of the observed changes in behavior of unstimulated STAT3 dimers, HeLa cells were transfected with the different combinations of constructs and their proliferation was determined 24 hours later (Figure 3). The asymmetric combinations K49R/K140R, K140R/K685R and K685R/S727A significantly increased the number of cells versus control cultures transfected with wild-type STAT3. Among the symmetric combinations, only the K49R pair showed a smaller but significant increase in cell proliferation. These results indicate that asymmetric dimers of STAT3 could have differential biological effects.
DISCUSSION
We have developed and validated a new BiFC assay for the visualization and study of STAT3 interactions in living cells. Our system responds as expected to pharmacological activation or inhibition of STAT3, disease-associated genetic mutations, and potential protein interactors. The Venus-STAT3 BiFC system is complementary to the previously reported FRET, 18 BRET 5 and homoFluoppi 19 systems, as they all have different advantages and limitations. FRET/BRET approaches enable the visualization of any protein-protein interaction and have high temporal and spatial resolution, but they are difficult to scale up to high-content screenings. HomoFluoppi enables high-throughput analysis, but it is not suitable for the visualization of STAT3 heterodimers (e.g., with STAT1) or other protein-protein interactions. Both are especially suitable for microscopy analyses, but not for flow cytometry analyses. BiFC systems can be applied to any type of protein-protein interaction, are easy to use and scale up for high-throughput analysis, and enable both microscopy and flow cytometry approaches. However, BiFC systems have a lower time resolution than FRET, BRET or homoFluoppi systems and usually lower signal-to-noise ratios. 34 The reconstitution of the fluorophore is irreversible, potentially limiting the study of transient interactions, but otherwise having the advantage of accumulating low-frequency events that would otherwise go unnoticed. And finally, only dimers with complementary reporter fragments will be observed (i.e., Venus 1 + Venus 2), but it is possible that dimers are also formed between STAT3 molecules that carry the same Venus fragment. Nevertheless, BiFC systems are widely used, 34 represent an excellent first, simple approach to visualize protein-protein interactions in living cells, 35 and could even be combined with FRET approaches for the visualization of multi-protein complexes. 36 We believe that our STAT3 BiFC system will make a great addition to the existing STAT3 protein-protein interaction toolbox.

Figure 2. Asymmetric signal transducer and activator of transcription 3 (STAT3) post-translational modifications regulate the intracellular distribution of STAT3 homodimers. A, Intracellular distribution of fluorescence in asymmetric combinations of Venus-STAT3 bimolecular fluorescence complementation (BiFC) constructs (and the WT symmetric pair as reference). Data are shown as the average of n = 12 (WT, wild-type) or n = 3 (rest of combinations) independent experiments ± SEM. Statistical analysis was carried out by means of a one-way ANOVA followed by a Bonferroni test adjusted for multiple comparisons. Significant vs the symmetric WT STAT3 pair, *P < .05, **P < .01. B and C, The same original data, but pooled according to the number and nature of substitutions (B) or the symmetry or asymmetry of substitutions (C) in the STAT3 homodimer, and represented as box plots. The limits of the boxes represent the smallest and largest values, the straight line represents the median, the dashed line represents the average, and the dotted line represents the average for the WT STAT3 pair. Statistical analysis was carried out on the average ± SEM of each pool of data (1×YF/SA:1×KR, n = 6; 2×YF/SA, n = 3; 2×KR, n = 6; sym, n = 5; asym, n = 10). Significant vs the symmetric WT STAT3 pair, *P < .05, **P < .01; significant vs the 2×YF/SA substitution (B) or the symmetric mutant pairs (C), #P < .05, ##P < .01.
Our results indicate that asymmetric PTMs could constitute a new level of regulation of unstimulated STAT3 behavior and function. We must emphasize that this observation was very surprising and is put forward cautiously as a working hypothesis rather than a conclusive result. To the best of our knowledge, there is no direct empirical evidence in the literature showing that asymmetrically modified STAT3 dimers actually occur in nature, and such a demonstration would currently be extremely difficult from a technical point of view, even in vitro. Previous studies most often rely on systems that do not differentiate between monomers and dimers, 3,4,11,12,14,15,17,37 and/or that produce a single population of STAT3 molecules, either mutated or normal. 5,18,19 And yet, in the crowded and diverse intracellular environment, the probability for two identical STAT3 molecules to form a dimer (or for a dimer to be modified in both molecules simultaneously and in the same residues) should be low, although it could certainly be enhanced by either the total absence or the presence of stimuli. For example, most STAT3 molecules are not phosphorylated in the absence of extracellular stimuli, and this proportion is reversed shortly after cytokines bind to their corresponding membrane receptors (Figure S1C). However, cells often show small amounts of phosphorylated STAT3 in the resting state (Figure S1C) and, conversely, cytokine-stimulated STAT3 induces the de novo transcription of new STAT3 molecules that are not necessarily phosphorylated. 1,2 This indicates that unphosphorylated and phosphorylated STAT3 should coexist at similar levels in many situations, and the literature presents evidence that this could be equally true for other STAT3 PTMs induced by cytokines. [14][15][16][17] Beyond the technical difficulties of confirming the existence of asymmetric STAT3 homodimers in nature, our experimental design has several limitations that may have determined our observations. First, we have tested our system in cells that express endogenous STAT3, which could somewhat interfere with the system. One argument against this possibility is that we observe changes in asymmetric combinations but not in symmetric combinations. We initially assumed endogenous STAT3 would interfere homogeneously in all possible combinations. If this is incorrect and endogenous STAT3 is interfering, especially with certain combinations, we would expect some degree of similarity between symmetric and asymmetric combinations having at least one mutation in common. However, symmetric combinations are similar to each other and in most cases different from their asymmetric counterparts. It should also be noted that, while endogenous and exogenous STAT3 could carry different PTMs (besides the BiFC tags), a possible differential interference of endogenous STAT3 does not necessarily invalidate our hypothesis. In normal conditions, the pool of STAT3 molecules will be heterogeneous, and the possible differential interference of endogenous STAT3 could actually correspond to the effect that other molecules would have on specific STAT3 homodimers. Nevertheless, the experiments should certainly be repeated in a STAT3 knockout context to remove possible confounders.

Figure 3. Specific asymmetric signal transducer and activator of transcription 3 (STAT3) dimers enhance the proliferation of HeLa cells. HeLa cells were transfected with the different combinations of STAT3 bimolecular fluorescence complementation constructs, and their viability was determined by means of the 3-(4,5-dimethyl-2-thiazolyl) 2,5-diphenyl-2H-tetrazolium bromide assay 24 h later. Statistical analysis was carried out on the average ± SD of data (n = 3). Significant vs the symmetric wild-type (WT) STAT3 pair, *P < .05, **P < .01.
Second, we overexpressed the constructs transiently, probably contributing to the high variability we observe between experiments. Higher-than-normal levels of STAT3 could produce interactions that would not occur in normal conditions. It was suggested to us that stable transfection of STAT3-negative cells could both reduce variability and produce levels of STAT3 similar to the endogenous levels in a parental cell line. This is not necessarily correct, since the expression of proteins highly depends on their promoter, and similar attempts in the literature produced cell lines expressing higher levels than parental cell lines. 16 Although stable transfection or infection with viruses could be pursued in the future, this approach could also be problematic because of the particular features of BiFC systems, such as their irreversibility, which could produce further accumulation of STAT3 dimers over time.
Third, BiFC assays have their own technical limitations. BiFC assays frequently show some spontaneous reconstitution of the fluorophore that adds background and reduces the signal-to-noise ratio. 34 We present results indicating that our system is specific, combining STAT3 with proteins that should not interact with it or introducing pharmacological or genetic modifiers of STAT3 dimerization (Figure 1B; Figure S2). However, we never achieved total inhibition of STAT3 homodimerization, and therefore some possible background cannot be completely ruled out. Such background could produce artifacts, making us believe that we are visualizing actual STAT3 dimers when we are just observing reconstituted Venus, and in this situation STAT3 monomers could behave differently. Furthermore, the irreversibility of BiFC systems could amplify the occurrence of low-frequency interactions, thereby magnifying events that are not biologically relevant. These last two issues could be overcome by confirming our observations in a FRET system, alone or in combination with our BiFC system. 36 Alternatively, STAT3 mutants could be inserted into existing split luciferase systems, which are reversible and have a higher signal-to-noise ratio than BiFC assays. 38 In summary, our results must be considered a working hypothesis, but they point to an exciting possibility: the behavior and function of protein homodimers could be controlled by PTMs on only one of the molecules. If asymmetric STAT3 dimers actually occur and play a relevant biological role, this would open a series of interesting questions: do they regulate specific sets of genes? Do they enable gradation of STAT3 transcriptional or mitochondrial activities? And if they do not occur, how do cells manage to achieve perfectly symmetrical STAT3 dimers with such high efficiency? Given the essential roles of STAT3 in development, immunity, tissue stress and cancer, addressing these questions could have important implications for the diagnosis, treatment and understanding of a wide spectrum of human pathologies.
Cell cultures
HeLa and HEK293 cells were maintained in Dulbecco's minimal essential medium (DMEM; Gibco, Invitrogen) supplemented with 10% fetal bovine serum (FBS) and 1% of a penicillin/streptomycin commercial antibiotic mixture (Gibco; Invitrogen), under controlled conditions of temperature and CO2 (37°C, 5% CO2). Cell culture dishes were purchased from Techno Plastic Cultures (AG) unless otherwise indicated. For all experiments, cells were seeded at a density of 10,000 cells/cm2 regardless of dish size. For flow cytometry assays, cells were grown on 6-well plates (35 mm diameter). For cell viability and adenosine triphosphate (ATP) assays, cells were grown on 96-well and 24-well dishes, respectively. For microscopy, cells were seeded on glass-bottom 35 mm dishes (10 mm glass surface diameter; IBIDI) and fixed with 4% paraformaldehyde in phosphate buffered saline (PBS) right before imaging. For protein extraction (PAGE and filter trap assays), cells were seeded on 60 or 100 mm dishes.
Plasmids
Venus-STAT3 BiFC constructs were designed using the A Plasmid Editor free online software (http://jorgensen.biology.utah.edu/wayned/ape/) and synthesized by Invitrogen. Briefly, the cDNA sequence of STAT3-alpha was fused to the sequence of two complementary, non-fluorescent fragments of the Venus protein (Venus 1, amino acids 1-157; and Venus 2, amino acids 158-238) (Figure 1A), and inserted in a pcDNA 3.3 TOPO backbone (Invitrogen). Mutant constructs were produced by polymerase chain reaction (PCR)-based site-directed mutagenesis using these original constructs as templates. Table 1 shows the primers used for cloning and mutagenesis. All BiFC constructs were deposited in Addgene (https://www.addgene.org/). Deletion mutants lacking the C-terminus (DelCT) of STAT3 were produced by PCR-mediated subcloning using full-length Venus-STAT3 BiFC constructs as templates. The original lysine (K) residues at positions 49, 140 and 685 were replaced by arginine (R) residues, the tyrosine (Y) residue at position 705 by phenylalanine (F), and the serine (S) residue at position 727 by alanine (A) (Figure 1A). Additionally, the L78R mutation associated with inflammatory hepatocellular adenoma was also produced and analyzed during the optimization of the system. Plasmid transfection was carried out by means of JetPrime (Polyplus-transfection) following the manufacturer's instructions. Subsequent cell viability, ATP, immunoblotting, microscopy and flow cytometry assays were carried out 24 hours after transfection.
Flow cytometry
Cells were washed once with PBS (Gibco, Invitrogen), trypsinized (0.05% w/v, 37°C, 5 minutes) and collected into BD Falcon round-bottom tubes (BD Biosciences). Cells were then resuspended in PBS at room temperature and analyzed by means of a Calibur flow cytometer (Becton Dickinson).
Ten thousand cells were analyzed per experimental group. The FlowJo software (Tree Star Inc) was used for data analysis and representation.
Microscopy
Images of transfected HeLa or HEK293 cells were acquired using an Applied Precision DeltaVision CORE system, mounted on an Olympus inverted microscope, equipped with a Cascade II 2014 EM-CCD camera, using a 63× 1.4 NA oil immersion objective, DAPI + DsRed + enhanced green fluorescent protein (EGFP) fluorescence filter sets and differential interference contrast (DIC) optics. Pictures were analyzed by means of the ImageJ free online software (http://rsbweb.nih.gov/ij/).
Western blotting
Twenty-four hours after transfection, cells were washed once with PBS and lysed in a Triton-based lysis buffer (1% NP-40, 150 mmol/L NaCl, 50 mmol/L Tris pH 7.4, supplemented with protease inhibitor and phosphatase inhibitor cocktails [Roche Diagnostics]). Lysates were sonicated for 10 seconds at 5 mA using a Soniprep 150 sonicator (Albra) and centrifuged at 10,000× g for 10 minutes at 4°C, and supernatants were collected for analyses. Protein concentration was quantified by means of the Bradford assay. Equal amounts of protein (30-50 μg) from each extract were prepared for analysis by western blotting under denaturing conditions. Loading buffer (200 mmol/L Tris-HCl pH 6.8; 8% sodium dodecyl sulfate (SDS); 40% glycerol; 6.3% β-mercaptoethanol; 0.4% bromophenol blue) was added to the samples, which were then boiled for 5 minutes at 95°C and resolved on 12% SDS-polyacrylamide gels with SDS-containing running buffer. Proteins were then transferred to nitrocellulose membranes, and transfer efficiency and equal sample loading were confirmed by Ponceau S staining. Membranes were blocked with 5% non-fat dry milk in Tris-HCl buffered saline-Tween (TBS-T) (150 mmol/L NaCl, 50 mmol/L Tris pH 7.4, 0.5% Tween-20) for 1 hour at room temperature before the addition of primary antibodies. Primary antibodies against the following proteins were used at the specified concentrations: STAT3 (1:1000; Cell Signaling), phospho-STAT3 (Tyr705) (1:1000; Cell Signaling) and GAPDH (1:30,000; Ambion). Membranes were then washed 3 times in TBS-T and incubated with a secondary mouse IgG horseradish peroxidase-linked antibody (1:10,000; GE Healthcare Life Sciences). Signals were developed with enhanced chemiluminescence reagents (Millipore) and imaged with a ChemiDoc device (Bio-Rad).
Statistics
SigmaPlot software (Systat Software, Inc) was used to perform the statistical analysis and graphical representation of the data. Results are shown as the average ± SEM of at least 3 independent experiments, as indicated in the figure legends. Statistical analysis was carried out by means of a one-way ANOVA followed by a Bonferroni test adjusted for multiple comparisons. Results were in all cases considered significant only when P < .05.
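For readers who wish to reproduce this style of analysis outside SigmaPlot, a minimal Python sketch is shown below; the group values are invented placeholders, and the use of scipy/statsmodels is our assumption rather than the software actually used here:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical normalized fluorescence values (% of WT) per construct pair.
groups = {
    "WT":         np.array([100.0,  97.0, 103.0]),
    "Y705F":      np.array([ 98.0, 105.0, 101.0]),
    "K49R+K140R": np.array([128.0, 122.0, 131.0]),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pairwise comparisons vs the WT pair, Bonferroni-adjusted.
names = [k for k in groups if k != "WT"]
pvals = [stats.ttest_ind(groups["WT"], groups[k]).pvalue for k in names]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for name, p, sig in zip(names, p_adj, reject):
    print(f"{name} vs WT: adjusted p={p:.4f} {'*' if sig else ''}")
```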
"Biology"
] |
The Impact of Transportation on the Croatian Economy: The Input–Output Approach
The aim of this paper was to determine the economic impact of the transportation sector on the Croatian economy by using input–output analysis. According to the input–output tables for the Croatian economy for 2004, 2010, 2013, and 2015, output and gross value-added multipliers were calculated. The results of the conducted analysis indicated that the multiplicative effects of the transportation sector in Croatia were significant in the observed period, especially for the air transport sector. Furthermore, a comparative multiplier analysis with selected European Union countries was performed to assess the Croatian transportation industry's position from an international perspective. Lower output and gross value-added multipliers for the Croatian transportation sector imply that old European Union member states capitalized on the transportation sector more for growth and development. The Croatian transportation sector recorded lower imported intermediate inputs, average domestic inputs, and higher value-added multipliers, similar to new European Union members. Simulations based on multiplicative effects show that restrictions on movements and human contacts, imposed due to the COVID-19 pandemic, could induce a strong reduction in the economic activity of transport and other sectors that are included in the value-added chain of the transport industry.
Introduction
Transportation is an essential link for the movement of individuals and goods. It undoubtedly contributes to the development of all business sectors and society. Transport plays an important role in the operation of each economy and is considered to be a determining factor for economic development and growth. The role of transportation is visible in its contribution to the creation of an effective connection of the supply chain of goods and services, shipments of intermediate inputs, and delivery of final goods. The interrelationship between transport and other economic sectors has been widely examined by both the business and academic communities from the macroeconomic and microeconomic perspectives. From the microeconomic perspective, the importance is usually assessed by its influence on each specific sector in the economy. The significance of the transport sector on the macroeconomic level is manifested in the overall impact on output, income, economic growth, and employment. According to previous studies, the transport industry provides more than 10 million jobs and accounts for more than 5% of the overall gross domestic product (GDP) in the European Union (European Commission 2020a). An even greater contribution of the transport industry should be noted for developed European countries, varying between 6% and 12% of GDP (Gnap et al. 2018). The transportation sector plays an important role in the Croatian economy, accounting for 5% of the total GDP (CBS 2020b) and generating a significant share of 5.2% of total exports (The World Bank 2020). The importance of the transport sector in the Croatian economy (Božičević et al. 2008) is relatively higher than in other economies (Lejour et al. 2009).
The provision of transport and warehousing services requires considerable capital investment (Rašić-Bakarić 2013), especially investments in transport infrastructure, which are considered to be essential for economic and social development (Ministry of the Sea, Transport, and Infrastructure of the Republic of Croatia 2017). The importance of the transportation industry in Croatia is outlined in the Transport Development Strategy for the period 2017-2030, which defines the concepts of sector strategies (Ministry of the Sea, Transport, and Infrastructure of the Republic of Croatia 2017). Traffic and mobility are included as one of the five priority thematic areas of the Croatian Smart Specialization Strategy, which confirms the importance of the transportation sector in the context of economic and social development. Transport directly affects the expansion of the industrial market and indirectly affects economic growth. It improves living standards and competitiveness among regions and local communities, but also improves the physical expansion and integration of infrastructure (Ministry of Economy, Entrepreneurship and Crafts of the Republic of Croatia 2016). In order to prioritize government-driven strategic development projects, and given the excessive costs of transport infrastructure, there is a need for an economic impact analysis of transportation sectors to provide policymakers with feasible and scientifically well-founded information on the economic impact of transportation industries (Lee and Yoo 2016).
The objective of this paper is to quantify the multiplicative effects of the Croatian transportation sector and to identify changes and trends for the period 2004-2015 by using input-output (IO) analysis. The output and gross value added (GVA) multipliers for the Croatian economy are estimated. Comparative analyses with selected European Union (EU) countries are conducted to assess the Croatian transport industry's position from an international perspective. The contribution of this paper is intended to address the lack of recent studies quantifying the economic effects of transport on the Croatian economy. The estimation of multiplicative effects is especially important when the impact of an exogenous shock is the subject of research. The current COVID-19 pandemic resulted in policy measures that have restricted the movements and contacts of humans. A recent study (Fernandes 2020) concluded that service-oriented economies will record the most pronounced effects caused by the outbreak of the virus. The same author also highlighted the spillover of the crisis from transport and other service industries to the rest of the economy and the spread of negative effects throughout the value-added chain. Fornaro and Wolf (2020) showed that the spread of the epidemic resulted first in a shock of demand reduction, followed by a reduction in supply and continued negative spiral effects. Simulations of different scenarios of the impact of an exogenous shock on the demand for transport services and the resulting reduction in GVA for the total economy, based on estimates of the multiplicative effects, are also provided in this research.
The paper is organized into five sections. After the Introduction, in Section 2, we provide an overview of recent literature which includes IO applications to explore transportation-economic linkages. In Section 3, we provide the methodology concept and modeling framework to quantify the economic effects of the transportation sector. In Section 4, we present the results of our empirical analysis, while in the last section the key conclusions are drawn along with recommendations for future applications.
Importance of the Transportation Sector in Croatia
The influence of the transport industry on the development and growth of national economies has been the topic of numerous studies, including recently published works by Tong and Yu (2018) and Jurgelane-Kaldava et al. (2019). The basis of sectoral development and of increases in the quality and reliability of transport services lies in investments in transport infrastructure, which enable the prosperity and affirmation of the total transport system. The Croatian transport sector uses various forms of transport, such as rail, road, water, and air, which have specific roles in providing support to passengers and providing freight services at the international and national levels. Table 1 presents trends in annual freight traffic for six transportation subcategories in the period 2004-2015. Data show the dominant role of road transportation in total domestic freight transport, with an average share of 58.7% in the observed period. It declined after the economic crisis in 2009, and indicators of mild recovery appeared at the end of the analyzed period. The negative trend is also evident in other transportation modes. High dependence on road transportation could increase the negative implications of transport for the environment, such as emissions of air pollutants or congestion (Bharadwaj et al. 2017), contributing to the overall negative externalities (Alises and Vassallo 2015). The Transport Development Strategy of the Republic of Croatia for the period 2014-2030 implemented policy measures to stimulate a shift to alternative transport modes and reduce negative impacts. It envisages diverting 30% of road freight transport over distances of more than 300 km to rail, sea, and inland waterways by 2030, and more than 50% by 2050, through the construction of green freight corridors (Government of the Republic of Croatia 2014). The average annual share of the transportation and storage sector in the total number of persons employed in legal entities in Croatia amounted to 6% during the period 2004-2015. Total freight volume, investments, and employment confirm the dominant role of road transportation. In 2015, 46.3% of total employees in legal entities of the transportation and storage sector worked in the subsector of land transport and transport via pipelines, 30.2% in warehousing and support activities for transportation, 17.5% in postal and courier services, and only 4.5% in water transport and 1.6% in the air transport subsector (CBS 2020a).

Table 2 presents the comparison of the economic structure of selected old and new EU member states (NMS). The old member group includes Germany (DE), Italy (IT), Spain (ES), and the United Kingdom (UK), which was one of the strongest EU economies before Brexit. The group of selected NMS economies includes economies that are similar to Croatia (HR) according to population size and geographical location: Slovenia (SL), Slovakia (SK), Hungary (HU), and the Czech Republic (CZ). The share of agriculture and industry in GVA is generally higher in the NMS group, while the more developed old EU members recorded a higher share of public, business, and personal services. The significance of the hotel industry and trade is highest in Spain and Croatia due to geographical and climate conditions favoring tourism. The share of transport in the total GVA of the selected economies varies from 4.0% in the United Kingdom to 6.4% in Slovenia. Transportation is a more significant economic sector in NMS economies, while its share in the old members is slightly lower.
Land transport and supporting services recorded a dominant share of GVA created in the transport sector in all analyzed economies.
Literature Review
This section provides an overview of research on the economic impact of the transportation sector using IO analysis. The available literature presents some basic assumptions (Gretton 2013; Gupta 2009) and limitations (Miller and Blair 2009) of the IO model. Yu (2017) provided a comprehensive overview of IO model applications to economic linkages of transportation. Recently, Morrissey and O'Donoghue (2013) and Lee and Yoo (2016) have applied it in identifying the role of transport clusters. Some studies focused on specific transport subsectors. Wang and Wang (2019) and Santos et al. (2018) explored the significance of the port industry. The economic effects of the cruise industry were examined by Vayá et al. (2017) and Chang et al. (2015). Oxford Economics (2017) researched the role of the Croatian shipping industry, which is not directly classified to transport, but whose performance is strongly related to water transport services. Bagoulla and Guillotreau (2020) analyzed the impact of maritime transport in France on the domestic economy, providing a different perspective by assessing the environmental impact of shipping on direct and indirect gas emissions. Yu et al. (2019) constructed the China non-competitive constant price IO model comprising the transport and storage sector. Kwak et al. (2005) examined the status and economic impact of four maritime industries in Korea to present policymakers with the relations of these sectoral industries to the rest of the national economy. The Portuguese maritime cluster was assessed using three different qualitative and quantitative methodologies (Salvador et al. 2016), including IO analysis, indicating intra-sectoral relations of the marine industry as significant while emphasizing weak intermediate linkages. The interaction between air transport and economic development in Greece was studied by Dimitrios et al. (2017) and Dimitrios and Maria (2018). These studies quantified the socio-economic impact and the level of dependence of regions that rely heavily on tourism. The results showed the importance of air transport for the Greek economy, mainly due to the high dependency and correlation between tourism and air connectivity, creating a high indirect effect on the national economic model. Stebbings et al. (2020) used IO data in quantifying the contribution of the marine sector to the United Kingdom economy. The results reveal that the estimated contribution of the marine economy to the overall United Kingdom economy is twice as high when indirect effects are included. The economic impact of the marine sector on Australian coastal communities was the objective of van Putten et al. (2016). Using the IO model, the authors identified the interrelationships of the different maritime sector activities, highlighting key industries that could retain the current level of activity or, perhaps, secure the marine sector's future growth. Chiu and Lin (2012) explored the effects of the maritime industry within other parts of the Taiwan economy. The study reveals the questionable position of the industry in the domestic economy, considering its economic impact and the low intensity of dispersion.
The economic effects of final demand on production, GVA, and employment have been estimated for individual sectors of the Croatian economy. Buturac et al. (2017) found the highest output multipliers in Croatia for the construction (1.68) and manufacturing (1.599) industries. The multiplicative effect for agriculture, estimated at 1.54, is close to the national average (1.53). The lowest multipliers were found for public and personal services, which are labor-intensive low-tech industries. Keček et al. (2019) found that ICT sectors contributed more than 4.5% to total Croatian GVA when indirect effects are included. Ivandić and Šutalo (2019) estimated the GVA multiplier for Croatian tourism at 1.55 and the total contribution of tourism at 16.9% in 2016. Mikulić et al. (2018) quantified the effects of wind-power plant deployment, finding small multiplicative effects due to the high import content of the high-tech products required in those plants. Mikulić et al. (2020) valorized the economic effects of the energy renovation program of public buildings in Croatia and, by applying a closed IO model, estimated the investment multiplier at 2.5. As the significance of the Croatian transportation sector has not yet been adequately evaluated, there is a need for a quantitative analysis of its total effects on the Croatian economy.
General Structure of Input-Output Analysis
The relation between transportation and other economic sectors, and the economic impact of the transport industry on the national economy, has been examined using various analytical frameworks developed for specific purposes. Cost-benefit analysis (CBA) and IO analysis are the most frequently used approaches (Yu 2017). While the central focus of CBA is on the direct contribution of a transport project, IO analysis addresses the structure of wider national economic impacts and the linkages between specific activities (Lakshmanan 2011). IO analysis enables quantitative macroeconomic insight into the influence of final demand on domestic production, GVA, and employment. Leontief (1986) first developed this approach, proposing an inter-sectoral model that captures linkages among the productive industries of a specified economy at the national or regional level. It has been widely used for impact analysis in multiple areas in recent years (Miller and Blair 2009). Using IO tables as a statistical foundation, the mathematical relations of inter-industry transactions are expressed through the Leontief inverse matrix. It should be noted that the IO table of a specified economy is divided into several productive sectors, where columns represent the input values of particular sectors and rows the respective output values. The impact of cross-sector flows on the overall production of each sector is determined by the principal equation of the IO model (Miller and Blair 2009):

x_i = Σ_{j=1}^{n} z_ij + f_i,  (1)

where x_i is the total output of sector i, z_ij represents the amount of product from sector i used as an intermediate input in production by sector j, and f_i represents the final demand for sector i, for i, j = 1, ..., n (n is the number of sectors). This equation represents a system of linear equations, one per sector of the economy, in which the output of each sector is divided between intermediate products and final demand. The relation between the inputs used by sectors and the total produced output is determined by the technical coefficients a_ij = z_ij / x_j. Using matrix notation, the system of Equation (1) for the total economy can be rewritten as

x = Ax + f,  (2)

where A is the n × n matrix of technical coefficients, x is the column vector of outputs, and f is the column vector of final demands. Equation (2) can be rewritten as

(I − A)x = f,  (3)

where I is the identity matrix and (I − A) is called the Leontief matrix. The solution to this system of linear equations is

x = (I − A)^(-1) f = Lf,  (4)

where L = (I − A)^(-1) represents the Leontief inverse matrix or multiplier matrix, with elements l_ij. This matrix can be interpreted as the direct and indirect requirements for the output of each sector needed to support one unit of deliveries to final demand. The primary objective of this research is the calculation of the output and GVA multipliers for the transportation sector. In this research, an open IO model based on domestic demand is used; Grady and Muller (1988) argue for using the open IO model instead of the closed one, which includes induced household expenditures. Multipliers are calculated as the ratio of total (direct plus indirect) effects to direct effects. The simple output multiplier for sector j can be calculated as

m_j = Σ_{i=1}^{n} l_ij,  (5)

i.e., the output multiplier is the sum of the elements of the corresponding industry column of the Leontief inverse matrix.
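To make the algebra concrete, the following minimal Python/NumPy sketch computes the technical coefficients, the Leontief inverse, and the simple output multipliers of Equations (1)-(5) for a hypothetical three-sector economy; the transaction matrix Z and the output vector x are illustrative values, not actual Croatian data.

```python
import numpy as np

# Hypothetical 3-sector economy (illustrative values only).
Z = np.array([[10.0, 20.0, 15.0],    # z_ij: deliveries from sector i to sector j
              [30.0,  5.0, 25.0],
              [20.0, 10.0,  5.0]])
x = np.array([100.0, 120.0, 90.0])   # total output of each sector

A = Z / x                            # technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse L = (I - A)^(-1)

# Simple output multiplier of sector j: the sum of column j of L (Equation (5)).
output_multipliers = L.sum(axis=0)
print(output_multipliers)
```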
Value-added multipliers measure the value added generated in the economy as a result of an additional unit of output of a single sector delivered to final demand. In matrix form, the total GVA effects can be denoted as

m_v = v_c' L,  (6)

where v_c is the vector of value-added coefficients, representing the share of GVA of each sector in its output.
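Continuing the sketch above, GVA multipliers can be computed from hypothetical value-added figures (again illustrative only): the row vector v_c' L gives the total GVA effect per unit of final demand, and dividing by the direct coefficients yields the multiplier.

```python
# Continues the previous sketch (reuses x and L).
gva = np.array([40.0, 65.0, 50.0])          # hypothetical GVA of each sector
v_c = gva / x                               # value-added coefficients

total_gva_effects = v_c @ L                 # total GVA per unit of final demand (v_c' L)
gva_multipliers = total_gva_effects / v_c   # total effects relative to direct effects
```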
Data Sources
The publication of official symmetric IO tables for Croatia facilitated the quantification of data needed to assess the overall contribution of transportation in Croatia in the observed period. The data used for this research comprise four IO tables (for 2004, 2010, 2013, and 2015) from different sources; the IO table for 2015 is available from the Eurostat database (Eurostat 2020b). While the correspondence of the old and new classification systems is full for land, water, and air transport (Table 3), sector 64 in CPA 2002 is not fully comparable to H53, because telecommunication services are now classified into the new information and communication sector. In this research, comparative analyses of the transportation sector are provided for the selected new EU members (Hungary, the Czech Republic, Slovakia, and Slovenia) and the selected old EU member states (Germany, Italy, the United Kingdom, and Spain). The IO tables for these countries are retrieved from the Eurostat database (Eurostat 2020b). The last available IO data refer to 2015, which could affect the main assumption of the IO method on the existence of fixed technological coefficients in the more recent period. Technology could have changed as a consequence of the implementation of more efficient production processes, the use of modern ICT, changes in relative prices, and other factors (Miller and Blair 2009). Dynamic IO analysis with coefficients updated by statistical techniques, such as the RAS method or the Cross-Entropy model, has been described in the economic literature (Miller and Blair 2009). However, if only statistical methods are applied, without the inclusion of more recent official data on changes in the structure of intermediate consumption, the reliability of the estimates could suffer. Rokicki et al. (2020) found noticeable differences at the sectoral level when comparing survey-based and algorithm-based multi-regional IO tables. As a set of EU economies is included in the sample, dynamization of IO data based exclusively on statistical techniques could result in estimates that are not robust and depend on the statistical technique arbitrarily selected by the authors.
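For illustration only, a minimal sketch of the biproportional RAS scaling mentioned above is given below; the fixed iteration count is a simplification (a convergence test would normally be used), and the row and column targets are assumed to share the same grand total.

```python
import numpy as np

def ras_update(Z0, row_targets, col_targets, n_iter=200):
    """Alternately rescale rows and columns of a prior flow matrix Z0
    until its margins match the given row and column targets."""
    Z = Z0.astype(float).copy()
    for _ in range(n_iter):
        Z *= (row_targets / Z.sum(axis=1))[:, None]   # scale rows
        Z *= (col_targets / Z.sum(axis=0))[None, :]   # scale columns
    return Z
```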
Research Results
Indirect effects of an economic sector in the IO model result from the technical requirements of a production process applied and the structure of domestic and imported intermediate inputs used. The higher share of domestic inputs incorporated in the sector's output implies higher integration of the domestic economy and larger indirect effects. This section presents analyses of multiplicative effect trends of the Croatian transportation sector and a comparison to the selected set of EU economies.
The Structure of Output in the Transport Sector in Croatia and EU
The most significant input required by companies operating in the transport sector is energy. In most EU economies, energy balances reveal a dependence on imported energy, especially crude oil and oil derivatives. As a result of this import dependence, a certain share of indirect effects does not remain in the domestic economy but is transferred abroad. A comparison of the output structures in the transportation sectors of Croatia and the selected EU countries is provided in Table A1 in Appendix A. Figure 1 presents the comparison for land transport as the most significant subsector. A general declining trend in the share of domestic intermediate consumption can be noticed in both the land and water transport sectors in most of the economies over the analyzed period (Figure 1). The structure of inputs used by Croatian land transport companies is more similar to that of the new EU member states, where the share of imports is higher compared to the more developed old EU countries.
Warehousing and supporting transport services in most EU economies are more integrated with domestic producers. The share of imported inputs in this sector is highest in Hungary, the Czech Republic, and Croatia. For Croatia, land transport services and transport services via pipelines and the air transport sector (presented in Table A1 in Appendix A) recorded average shares among the analyzed countries. In contrast, the shares for water transport, warehousing and support services for transportation, and postal and courier services were among the lowest, which can be explained by the relatively low integration of the Croatian economy into the overall European market.
Output and GVA Multipliers of Croatian Transportation Sector in Period 2004-2015
Multiplier analysis of the transportation sector is performed based on the IO tables for 2004, 2010, 2013, and 2015. As the transportation industry uses intermediate inputs delivered by other activities, it induces spillover effects on the total economy. Table 4 shows output multipliers for the Croatian transportation sector for 2004, 2010, 2013, and 2015. In the observed period, the highest output multiplier was recorded for the air transport sector; in 2015, it amounted to 1.85, meaning that if the final demand for products of this sector increases by 1 HRK, the total output of the Croatian economy grows by 1.85 HRK. Output multiplier values for land transport services and transport services via pipelines were the most consistent, while the lowest value was observed for the postal and courier services sector. Similarly, GVA multipliers for the transportation sector were calculated for the four analyzed years, as shown in Table 5. GVA multipliers mostly ranked the sectors in the same order of importance as the output multipliers: the air transport sector had the highest values, and the postal and courier services sector had the lowest GVA multipliers.
Comparison of Multiplicative Effects in Croatia and Selected EU Countries
The competitiveness of the Croatian transportation sector can be assessed by a comparative analysis of output and GVA multipliers. The study includes the selected advanced EU economies and the developing new member states. The analysis of output and GVA multipliers is based on the years 2010 and 2015, for which the same classification of activities applies.
In most observed economies, the highest output multipliers are found in air transport and supporting transport services (Table 6). On the other hand, the postal and courier services sector has the lowest output multiplier values. Output multipliers of the Croatian transportation sector were among the lowest compared to the selected old and new member states in the observed period. Inland transport, supporting transport, and postal services in the old EU member group generally recorded higher multiplier values. This can be explained not only by the higher integration of domestic producers, but also by country size and the dominant role of domestic over international transport. Transport companies in small economies, such as the Croatian or Slovenian ones, usually participate in international transport operations to a greater extent and buy a significant share of oil products and other intermediate inputs abroad. Accordingly, the lowest output multipliers were calculated for Hungary, Slovenia, and Croatia, while the largest fluctuations in output multiplier values were found in the water transport sector.
The total GVA results for the transportation sector in the selected EU economies are presented in Table 7. Cumulative effects should be interpreted as the GVA created in the overall economy when final consumption of a specified transport sector increases by a unit monetary value. Total effects in most economies are highest for postal services, where a 1 EUR increase in final demand results in 0.835 EUR of GVA in the overall economy (2015 data). An increase in final demand for water and air transport results in lower amounts of domestic GVA, which can be explained by a higher share of international transport, where a certain proportion of energy products and other intermediates is bought abroad. GVA multipliers, which are to be interpreted as the ratio of total effects created in the national economy to direct effects recorded in the transport sector, are similar to output multipliers, but with higher fluctuations in values, especially for the water and air transport sectors (Table 8). In air transport, the highest GVA multipliers mostly result from low direct effects and the low margins charged by air transporters in an extremely competitive market, while indirect effects are relatively high.
Transportation requires a significant input of imported products, primarily oil derivatives, used in transport vehicle operations. Table 9 presents the total requirements for imported products per unit value of transport industry output. The highest import requirements are estimated for air and water transport, and the lowest for postal services. Import requirements of Croatian transporters are similar to the averages estimated for the other EU economies. A decrease in the import content of Croatian water transport in 2015 can be explained by the restructuring of this activity from international to local transport (ferries and tourist routes) due to the increased demand from foreign tourists visiting Croatia.
Simulation of Total GVA Effects Caused by a Reduction of Transport Services Due to Restrictions in Movements of Persons as a Result of COVID-19 Pandemic
Multipliers estimated by the IO method work in both directions. A reduction of transport activity indirectly affects other domestic companies included in the value-added chain of the transport industry. Although data for all transport modes are not regularly published for all economies, it is clear that the reduction of the volume of transport activity in 2020 will be significant. Available data show that passenger-kilometers realized in rail transport in the second quarter of 2020 in Croatia and Germany dropped to only one third of the same period of 2019. Even worse rail transport performance was recorded in Spain, France, and Italy, where the reduction amounted to over 80%. According to the Air Passenger Market Analysis (IATA 2020), passenger air transport measured in revenue passenger kilometers was down 90% year-on-year in April 2020 and 75% in August. Although freight transport recorded a more modest reduction, GVA data for transport activity will certainly point to a severe reduction when they become available next year. Table 10 presents the simulation results for the total national GVA reduction due to the reduction of transport activities caused by the COVID-19 pandemic. According to all three scenarios, the worst effects are expected for Slovenia and Italy. Under the moderate scenario of a 35% reduction of transport activity, the result is a decrease in total economic activity in the range from 2.6% to 4.1% when indirect effects are included. As the European Commission (2020b) Autumn Forecast estimates the average decline of economic activity in the EU at 7.4%, it is clear that one third to one half of the reduction of GDP could be related to the poor performance of the transport industry under the impact of this exogenous shock. Table 10. Simulation of the effects of the total national GVA reduction in 2020 due to the decrease in transport activity.
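The scenario arithmetic behind Table 10 can be sketched by reusing v_c and L from the methodology sketches above; treating the activity reduction as an equivalent reduction of final demand, and the final demand level itself, are simplifying assumptions.

```python
# Continues the earlier sketches (reuses v_c and L); sector 1 plays the
# role of "transport" and 80.0 is an assumed final demand level.
delta_f = np.zeros(3)
delta_f[1] = -0.35 * 80.0             # moderate scenario: 35% reduction
total_gva_change = v_c @ L @ delta_f  # induced change in economy-wide GVA
```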
Discussion and Conclusions
The significance of transportation for Croatian economic growth was examined by IO analysis, which was used to determine the integration of transportation with other domestic sectors. The utilization of modern and efficient transportation significantly influences the growth of other economic activities and the socio-economic development of Croatia. Multiplicative effects in the transportation sector are notable in the observed period, especially for the air transport sector. While output multipliers for road and water transport are close to the average multipliers for all economic sectors found in recent literature (Buturac et al. 2017), the multiplier for air transport is significantly higher due to the more complex technology applied. The lowest multipliers, for Croatia and for the sampled economies, were detected for postal and courier services, which are relatively simple labor-intensive activities. Output multipliers of moderate intensity were found in water transport, land transport services and transport services via pipelines, and warehousing and support services. The highest GVA multipliers were recorded for air transport services, while the lowest ones were recorded for postal and courier services.
The effects of the transportation sector analyzed in this research are distributed primarily through other activities rather than within the transportation cluster, meaning that indirect effects prevail over direct ones and spread across various activities, especially for the air and water transport sectors. The transportation sector, identified as a loose network of interrelated activities, shows a relatively moderate degree of integration into the whole economy. Looking ahead, a higher level of integration and connection with other industries is needed, at the national and international level, which would generate higher value-added and other multiplicative effects and contribute to broader socio-economic goals. International market trends in 2010 were marked by declining demand and growing competition, but a recovery was perceivable in the increase of the multipliers in 2015.
Compared to other European countries, the Croatian transportation sector recorded lower output and GVA multipliers, which implies that countries like Italy, the United Kingdom, Spain, or the Czech Republic capitalized on the transportation sector for growth and development more than Croatia did. Croatian transportation recorded a lower share of imported intermediates, an average level of domestic inputs, and a higher level of value-added compared to the other examined European economies; in these respects it is most similar to the Slovenian and Hungarian transportation industries.
Multipliers estimated in this study are beneficial not only in the positive direction connected to growth in final demand, but also in a sudden decrease due to exogenous shock. Simulation of the effects of the COVID-19 pandemic points to the transport industry as one of the principal sectors which caused a sharp decline of economic activity in EU economies.
More investment in the technological modernization of transportation is necessary to increase its competitiveness and the share of higher value-added services. A more sophisticated transportation sector yields higher multiplicative effects and enables the valorization of the country's high-value asset base and the quality of its human resources.
IO analysis of the transportation sector has proved very useful, and the results were in line with expectations. The unavailability of more recent data for a longer-term IO analysis and the calculation of the remaining multipliers associated with IO tables are the main limitations of this research. Recommendations for future research are mainly directed toward the inclusion of alternative economic IO approaches and modeling applications to determine transportation-economy linkages, which would enable more detailed insight and perspective in the long term.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Table A1. The share of domestic and import inputs in the output of the transport sector.
"Economics"
] |
A Person Re-Identification Scheme Using Local Multiscale Feature Embedding with Dual Pyramids
In this paper, we propose a new person re-identification scheme that uses dual pyramids to construct and utilize local multiscale feature embedding reflecting the different sizes and shapes of visual feature elements appearing in various areas of a person image. In the dual pyramids, a scale pyramid reflects the visual feature elements in various sizes and shapes, and a part pyramid selects elements and combines them differently for the feature embedding of each region of the person image. In the experiments, the performance of the cases with and without each pyramid was compared to verify that the proposed scheme has an optimal structure. The state-of-the-art studies known in the field of person re-identification were also compared for accuracy. According to the experimental results, the method proposed in this study achieved up to 99.25% Rank-1 accuracy depending on the dataset used in the experiments. On the same dataset, the accuracy was about 3.55% higher than that of previous studies using only person images, and about 1.25% higher than that of studies using additional meta-information besides person images.
Introduction
Person re-identification refers to identifying a pedestrian of interest, based on external information or walking characteristics, among a number of people captured in single- or multi-camera environments. It is regarded as an essential technique for intelligent video surveillance systems such as tracking offenders or searching for missing persons [1,2]. Recent person re-identification methods use deep neural networks to transform the person images of both a query and a gallery set into feature embedding. The similarity between the feature embedding of the query and the ones in the gallery set is then estimated to verify the identity of the person of interest by selecting similar images from the gallery set.
However, in an actual environment, the identity of a person can easily be confused depending on differences in the capture time, place, or viewpoint. This is because such changes of conditions can cause large differences in a person's appearance, such as the color or pattern of clothing, belongings, or exposed skin (face, arms, and legs). Therefore, intra-class variation (identifying the same person as different) or inter-class variation (identifying different persons as the same) can easily occur. Due to such variations, the similarity between images of the same person may be estimated as low, while the similarity between different persons may appear high.
As a way to address such variations, recent studies [3][4][5][6][7][8][9][10][11][12] have mainly used the following two types of methods. In the first method, a neural network uses additional meta-information, such as the time, place, and viewpoint at which the pedestrian was photographed, together with the person image [3,4]. Although this method has the advantage of improving re-identification accuracy, it is difficult to obtain such meta-information automatically in an actual environment. Moreover, additional considerations and resources are required to process the acquired information into a data form that can be input into a neural network. Therefore, the second method, which uses only images as input data, has been studied more extensively. To approach the performance of the first method without additional meta-information, this method utilizes various preprocessing techniques, or employs a number of auxiliary neural networks or special modules to extract more discernible representative features from the input image [5][6][7][8][9][10][11][12]. Although this method can achieve performance close to the first, the model can become excessively complicated due to the requirement of several auxiliary neural networks or an additional algorithm [5,6,9]. Due to the limitations of the image segmentation methods or filter shapes used by existing methods, it is difficult to reflect the visual elements of various sizes and shapes appearing in the person image in the feature embedding [5][6][7][8][9][10][11][12].
Therefore, in order to enable robust re-identification against intra/inter-class variations using only the given person images, without other meta-information, a new person re-identification method with dual pyramids is proposed in this paper; it extracts various visual feature elements scattered across the areas of a person image and reflects them in the feature embedding. In the dual pyramids, a scale pyramid reflects the visual feature elements in various sizes and shapes, and a part pyramid then selects visual feature elements and combines them differently for the feature embedding of each region of the person image. For this purpose, the scale pyramid has differently sized kernels arranged in serial and parallel fashion, and the part pyramid uses divided and combined feature maps extracted from the scale pyramid to compose the feature embedding. As a result, the proposed model can provide accurate re-identification results using multiscale features from various regions of the input image.
This paper is structured as follows. Section 2 introduces existing studies that perform person re-identification based only on person images. In Section 3, the novel person re-identification method proposed in this study is described; the feature embedding extracted from the neural network uses a dual pyramid structure to reflect the various sizes and shapes of the visual feature elements shown in the image. Section 4 shows the experimental results for each module and their combinations to verify the structural validity of the proposed model, and it also describes the accuracy of the proposed person re-identification scheme on publicly available datasets. Section 5 describes the qualitative and quantitative comparisons of the proposed scheme with other state-of-the-art methods. Finally, the conclusions of this paper and future research are described in Section 6.
Related Work
Existing methods that perform re-identification based only on person images [5][6][7][8][9][10][11][12] use neural networks with unique structures to construct more discriminating feature embedding by extracting as much information as possible from the given image. These methods can be classified into two broad categories, depending on how the neural network selects the region of the image reflected in the feature embedding. The first type constructs feature embedding that reflects all visual feature elements existing in the entire image [5][6][7][8]; such embedding is called global feature embedding [5][6][7][8]. Conversely, when a given image is considered as a combination of multiple small area images and multiple feature embeddings are formed based on the visual elements of the corresponding areas, the embedding is referred to as local feature embedding [9][10][11][12].
First, the neural networks of the studies using global feature embedding [5][6][7][8] mostly focus on generating discriminative feature embedding that represents visual elements randomly distributed throughout the image. For example, in [5,6], the input person images were reconstructed into different sizes: variously sized copies with different resolutions were made [5], or small patch images of various sizes were separated from pre-determined points of the foreground [6]. Subsequently, the reconstructed images were provided to sub-networks that accept inputs of the corresponding sizes and then used to form the feature embedding representing the person. This approach has the advantage that a neural network can extract and utilize a large amount of information from the images of various sizes created from a person image, leading to excellent performance without separate meta-information such as shooting time and location. However, as the sizes of the input images vary, several neural networks are required to accommodate them: two or more InceptionV3 networks [13] with 23.5 million weights each were used in [5], and 21 self-designed affiliated neural networks were used in [6], leading to excessively complex networks. In addition, adjusting the previously determined resolutions [5] or the separation points of the patches [6] becomes inevitable when applying the methods to an actual environment for optimal performance. On the other hand, in [7,8], only one image was used as input to a single neural network. Instead of additional preprocessing or auxiliary networks, [7] used a unique module composed of layer sets consisting of a parallel connection of one or two serially connected convolution layers, and [8] placed one to four serially connected convolutional layers with 3*3 kernels, arranged in parallel within the neural network, to provide square-shaped receptive fields of various sizes and reflect the visual feature elements in the feature embedding. However, the important visual feature elements that can be used to identify a person, such as the person's arms, legs, and feet, the clothing or its patterns, and the person's belongings, mostly appear as rectangular shapes in various regions of the input image. Therefore, when the receptive fields that a neural network can apply to the input image are limited to square shapes [7,8], it can be difficult to properly reflect visual elements of other shapes, such as horizontally or vertically elongated rectangles, in the feature embedding.
Previous studies using local feature embedding applied an auxiliary network or similar techniques to find the areas of body parts, such as arms, legs, and torso, as the basic positions for feature embedding extraction [9]. However, this approach was problematic because the re-identification performance could vary greatly depending on the body-part detection result. Therefore, recent studies [10][11][12] simplified the meaning of a local area to a patch of a horizontally divided person image. The feature map extracted from a backbone such as ResNet50 [14] or VGG16 [15] is simply divided horizontally, and each region becomes the origin area of a local feature embedding. Accordingly, a previous study [10] divided the feature map from the ResNet50 [14] backbone into six horizontal regions and constructed local feature embedding representing each region by global average pooling. However, when each feature embedding is configured on divided regions of fixed size, visual feature elements larger than that size, extending over two or more neighboring regions, are difficult to reflect in the feature embedding. Therefore, other studies [11,12] tried to solve this problem using a so-called feature pyramid. In [11], a feature pyramid was configured by first dividing the feature map into six basic horizontal regions and then tying successive neighboring regions into overlapping groups of one to six regions, creating a total of 21 combined regions. The study [12] used a similar feature pyramid, configured by first dividing the map into eight basic horizontal regions and then tying adjacent neighboring regions without overlapping into groups of eight, four, two, and one, creating a total of 15 combined regions. These combined regions are converted into local feature embedding by adding the vectors created by global average pooling and global max pooling. However, when a pyramid is constructed from multiple overlapping combinations of regions, each basic region is used in several combinations; in particular, the basic regions located in the center are used more often than those toward the edges. For this reason, regardless of the actual importance of each basic region, the regions located in the center of the image are unconditionally used more often to configure the combined regions. This is a problem, since crucial visual feature elements near the edges, such as shoes, hair color, and exposed facial features, are then difficult to use as important sources. In addition, when the sum of max pooling and average pooling is used to construct the feature embedding, the values in the embedding become neither an average nor a maximum, which may deteriorate the discriminability of the embedding.
Therefore, this study proposes a person re-identification method that applies a scale pyramid module, which allows the neural network to extract visual feature elements appearing in various sizes and shapes in the input person image, and a part pyramid module, which allows the configuration of more accurate feature embedding by appropriately reflecting the extracted features for each region. The scale pyramid module is designed by connecting convolutional layers with kernels of different sizes and shapes in series and in parallel, so that receptive fields of various sizes, including square and rectangular shapes, can be applied to the input image. In addition, to construct discriminative feature embedding that evenly reflects visual feature elements existing across either narrow or wide areas of the input image, the part pyramid module horizontally divides the feature map into eight segments and then serially connects regions in groups of eight, four, two, and one, without mutual overlap, to produce a total of 15 regions for which feature embedding is configured. Each region is converted into local feature embedding using only global average pooling instead of the summation of max pooling and average pooling. The proposed method can thus form feature embedding that independently reflects the visual feature elements of various sizes and shapes existing in each region of the input person image, achieving good person re-identification without additional information about the person.
Overall Procedure of the Proposed Person Re-Identification Scheme Using Dual Pyramids
In this section, we introduce the overall procedure of the proposed person re-identification scheme using multiscale feature embedding made by dual pyramids, a scale pyramid and a part pyramid. As shown in the training process of (Figure 1), the proposed scheme uses only the person's image as input, without additional meta-information such as the capture time, location, and viewpoint. The input person image is converted into 15 part-level local feature embeddings, and each embedding is used for training by a classification method that infers the person's ID, with the cross-entropy loss computed against the ground truth (GT). The details of the embedding, such as the creation process and the network architecture of each sub-module, are explained in the remainder of Section 3. In the subsequent inference process, in addition to the query image that requires re-identification, the gallery images used to verify the identity of the query image are given, and the pre-trained re-identification model converts the query image and gallery images into part-level feature embedding. The 15 part-level feature embeddings created from each image are concatenated into a single feature embedding, which designates the multiscale features extracted from various locations of the input image and is then used for similarity comparison. The multiscale feature embedding created from the query image is called the query embedding, and that from the gallery images is called the gallery embedding. The similarity comparison between feature embeddings is based on cosine similarity, and the set of gallery embeddings is sorted by similarity with the query embedding. In turn, the identity of the gallery image with the highest similarity is assigned as the identity of the query image.
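A minimal PyTorch sketch of this inference step is given below; the gallery size is illustrative, and random tensors stand in for the embeddings produced by the trained model.

```python
import torch
import torch.nn.functional as F

query = torch.randn(1, 512 * 15)       # 15 concatenated part-level embeddings
gallery = torch.randn(1000, 512 * 15)  # one row per gallery image

q = F.normalize(query, dim=1)          # unit-normalize so the dot product
g = F.normalize(gallery, dim=1)        # equals cosine similarity
similarity = (q @ g.t()).squeeze(0)    # similarity to every gallery image
ranking = similarity.argsort(descending=True)
best_match = ranking[0]                # this gallery image's identity is assigned to the query
```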
The structure of the neural network constituting the person re-identification model is configured in the order of the input, backbone network, scale pyramid, part pyramid, and output embedding module, as shown in (Figure 2). First, the input images of the neural network are rectangular images with a width of 128 and a height of 384 (a 1:3 ratio), instead of the square shapes commonly used by image-handling neural networks, to reflect the general body characteristic of height being greater than shoulder width. The input image is passed through the backbone and the subsequent convolutional layer to be converted into a feature map sized (8, 24, 512). Following the convolutional layer, batch normalization (BN) [16] and dropout [17] are placed to prevent overfitting by the previous layers. At this point, if the entire ResNet50 [14] network were used as the backbone, the output size would be reduced to 1/32 of the input, which would limit the features available to the subsequent blocks. Therefore, the last convolution block of ResNet50 [14] is excised to produce a lower reduction ratio of 1/16, allowing more feature utilization by the subsequent layers. The feature map extracted from the backbone and the subsequent convolution layer is delivered to the scale pyramid to produce six outputs, each sized (8, 24, 512), representing the multiscale features, from large to small and of both square and rectangular shapes, of the visual elements of the input image. These six output feature maps from the scale pyramid are transferred to the input of the part pyramid to generate 15 part-level local feature vectors sized (1, 1, 512*6) that originate from various regions of the input with different configurations, as shown in (Figure 2). Finally, the vectors are converted into local feature embedding with a length of (512) by passing through a convolution layer with a 1*1 kernel, a batch normalization (BN) layer, and a dropout layer, as shown in (Figure 2). The proposed person re-identification model can create local feature embedding that reflects the visual feature elements of various sizes and shapes in each of the 15 regions created by the pyramid, and through this, the model can utilize even the detailed visual feature elements distributed throughout an image during the person re-identification process.
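The backbone preparation described above can be sketched as follows; the dropout rate is an assumption not stated in the text. Note that PyTorch reports shapes as (N, C, H, W), so the printed shape corresponds to the (8, 24, 512) feature map mentioned above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Remove the last convolution block of ResNet50 so the reduction ratio is
# 1/16, then map the 1024-channel output to 512 channels with a 1*1 conv
# followed by batch normalization and dropout.
backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-3])
reduce = nn.Sequential(nn.Conv2d(1024, 512, kernel_size=1),
                       nn.BatchNorm2d(512),
                       nn.Dropout2d(p=0.5))        # dropout rate assumed

x = torch.randn(1, 3, 384, 128)                    # height 384, width 128
feat = reduce(backbone(x))
print(feat.shape)                                  # torch.Size([1, 512, 24, 8])
```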
Scale Pyramid to Extract Multiscale Features
In this section, we introduce the scale pyramid in more detail. This pyramid can convert the visual elements of various sizes and shapes in the input image into the features used to compose the feature embedding.
The scale pyramid allows the person re-identification model to have receptive fields of various sizes, including rectangular shapes in addition to square ones. The scale pyramid takes as input a feature map sized (W, H, C) for (width, height, channel), produced by the previous steps of the re-identification model, and it produces six outputs sized (W, H, C) from receptive fields of different sizes and shapes, as shown in (Figure 3). Looking first at the neural networks leading to Output_1 and Output_2, they share two consecutive convolutional layers with kernels sized 1*1 and 3*3. Accordingly, the feature map up to this point has a 3*3 receptive field with respect to the input feature map, and since the two convolutional layers connected in parallel have kernels sized 1*3 and 3*1, the outputs acquire a vertically long rectangular receptive field of 3*9 (Output_1) and a horizontally long rectangular receptive field of 9*3 (Output_2) with respect to the input feature map. The neural networks leading to Output_3 and Output_4 likewise share a convolutional layer with a 1*1 kernel, and since the two convolutional layers connected in parallel have kernels sized 1*3 and 3*1, the outputs acquire a vertically long rectangular receptive field sized 1*3 (Output_3) and a horizontally long rectangular receptive field sized 3*1 (Output_4) with respect to the input feature map. Unlike the other parts, the neural network leading to Output_5 secures a square-shaped 3*3 receptive field through average pooling and learns the features of the corresponding region through a convolution layer with a 1*1 kernel. Finally, the neural network leading to Output_6 uses a convolutional layer with a 1*1 kernel to secure the smallest, 1*1 square-shaped receptive field. Therefore, the scale pyramid secures receptive fields of six different sizes and shapes, including square shapes sized 3*3 and 1*1 as well as rectangular shapes sized 3*9, 9*3, 1*3, and 3*1, for each arbitrary point of the (8, 24, 512) sized feature map originating from the backbone, as shown in the example of (Figure 4). For the input image, the scale pyramid can extract large and small, square and rectangular-shaped visual elements from all regions of the image as features to reflect in the embedding. Accordingly, the proposed person re-identification model can utilize the visual feature elements of different sizes and shapes that exist at various locations of the input image when forming the feature embedding.
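A minimal PyTorch sketch of the scale pyramid is shown below. The paper does not state the exact strides, dilations, or kernel orientations, so the dilated convolutions used here to realize the 3*9 and 9*3 receptive fields, and the (height, width) kernel convention, are assumptions.

```python
import torch
import torch.nn as nn

class ScalePyramid(nn.Module):
    """Six parallel branches over a (N, C, H, W) feature map, approximating
    the receptive fields described in the text; padding keeps spatial size."""
    def __init__(self, c=512):
        super().__init__()
        # Shared stem for Output_1/Output_2: 1*1 then 3*3 (3*3 receptive field).
        self.stem12 = nn.Sequential(
            nn.Conv2d(c, c, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True))
        self.out1 = nn.Conv2d(c, c, (1, 3), padding=(0, 3), dilation=(1, 3))  # ~3*9
        self.out2 = nn.Conv2d(c, c, (3, 1), padding=(3, 0), dilation=(3, 1))  # ~9*3
        # Shared 1*1 stem for Output_3/Output_4.
        self.stem34 = nn.Conv2d(c, c, 1)
        self.out3 = nn.Conv2d(c, c, (1, 3), padding=(0, 1))   # 1*3 receptive field
        self.out4 = nn.Conv2d(c, c, (3, 1), padding=(1, 0))   # 3*1 receptive field
        # Output_5: 3*3 average pooling then 1*1 convolution.
        self.out5 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(c, c, 1))
        # Output_6: plain 1*1 convolution (smallest receptive field).
        self.out6 = nn.Conv2d(c, c, 1)

    def forward(self, x):
        s12, s34 = self.stem12(x), self.stem34(x)
        return [self.out1(s12), self.out2(s12),
                self.out3(s34), self.out4(s34),
                self.out5(x), self.out6(x)]
```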
Part Pyramid to Generate Local Feature Embedding in Part-Level
In this section, we introduce the part pyramid, which converts part-level information originating from various regions of the input image, with different configurations, into local feature maps.
As shown in (Figure 5), the architecture of the proposed part pyramid is similar to the feature pyramid in [12] in that it receives one input feature map and constructs 15 output regions. However, as described below, there are differences in the sizes of the input/output feature maps, in the method of configuring individual regions, and in the method of global pooling for each region. First, the six feature maps sized (W, H, C) produced by the scale pyramid are transferred to the input of the part pyramid block, as shown in (Figure 5). The part pyramid block reconstructs the received feature maps into 15 different feature maps, each representing a field of one of four different sizes, to organize the local feature embedding evenly with the visual feature elements that appear in wide areas of the input image, such as the patterns or colors of bags, belongings, or clothes, as well as in narrow areas, such as shoes or hair color, without bias toward a particular region. For this purpose, the part pyramid block concatenates the six input feature maps into one feature map and divides it horizontally into eight regions to form the basic local feature maps Base_1 to Base_8, each sized (W, H/8, C*6), as shown in (Figure 5). Subsequently, to extract the visual feature elements distributed over wider areas, four types of combined local feature maps, 15 in total, are configured: one combined feature map (Output_1_1) sized (W, H, C*6), composed of all 8 neighboring basic local feature maps without overlapping; two combined feature maps (Output_2_1 and Output_2_2) sized (W, H/2, C*6), each formed of 4 neighboring maps; four combined feature maps (Output_4_1 to Output_4_4) sized (W, H/4, C*6), each formed of 2 neighboring maps; and eight feature maps (Output_8_1 to Output_8_8) taken directly from each of the 8 basic local feature maps. These 15 local feature maps are converted into local feature vectors sized (C*6) using global average pooling only, to avoid ambiguity of the feature values in the vector. Through this, the model creates part-level local feature embedding originating from various regions with different configurations. As shown by the person in (Figure 6), the feature embedding extracted from the 8 basic regions reflects the visual feature elements appearing, in order, in the hair color, the appearance of the face, the characteristics of clothing or belongings on the chest and abdomen, the characteristics of clothing or belongings on the buttocks, the characteristics of clothing on the thighs and calves, and the shoe color. In addition, the 4 feature maps that each gather 2 basic local areas can extract feature embedding reflecting, in order, the hair color and length and the appearance of the face, the overall characteristics of the top, the characteristics of the upper section of the bottom, and the characteristics of the lower section of the bottom and the shoes. The 2 feature maps that each gather 4 basic local areas can extract feature embedding reflecting the overall characteristics of the top and the overall characteristics of the bottom, respectively. Lastly, the 1 feature map that gathers all 8 basic local areas can extract feature embedding of the overall visual feature elements of the person image.
Therefore, the 15 local feature maps created by the part pyramid cover regions of different extents of the input image, and the feature embedding configured per region reflects the unique characteristics of that region. Preventing overlap between the regions during their configuration simultaneously avoids biased training toward a particular region and the configuration of an excessively large number of local embeddings.
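A minimal sketch of the part pyramid is given below, assuming the six scale-pyramid outputs as input; it reproduces the 15 non-overlapping region groups and uses global average pooling only, as described.

```python
import torch
import torch.nn.functional as F

def part_pyramid(features):
    """Concatenate the six scale-pyramid maps on the channel axis, split the
    height into 8 strips, and average-pool non-overlapping groups of 8, 4,
    2, and 1 strips into 15 local feature vectors of length C*6."""
    x = torch.cat(features, dim=1)        # (N, C*6, H, W)
    strips = torch.chunk(x, 8, dim=2)     # 8 strips of shape (N, C*6, H/8, W)
    vectors = []
    for group_size in (8, 4, 2, 1):       # 1 + 2 + 4 + 8 = 15 regions in total
        for start in range(0, 8, group_size):
            region = torch.cat(strips[start:start + group_size], dim=2)
            vectors.append(F.adaptive_avg_pool2d(region, 1).flatten(1))
    return vectors                        # list of 15 tensors sized (N, C*6)
```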
Experiment and Analysis
The DukeMTMC-reID [18] dataset, which is widely used for the development of person re-identification models, was used in this study. The DukeMTMC-reID [18] dataset, composed of images of 702 persons, includes 16,522 training images, 17,661 gallery images, and 2228 query images. In addition, the Market-1501 [19] dataset, which is also widely used in the field of person re-identification, was used in the experiments to evaluate the performance of the model and increase the objectivity of the evaluation by examining the dependence of the model on a specific dataset. The Market-1501 [19] dataset, composed of images of 750 persons, consists of 12,936 training images, 13,115 gallery images, and 3368 query images.
Prior to model training, all training images went through a data augmentation process consisting of horizontal flip, random erase, and normalization; accordingly, 66,088 images for DukeMTMC-reID [18] and 51,744 images for Market-1501 [19] were used during actual training. As the ground truth label for each training image, a one-hot encoding vector, with the value 1 for the ID of the person in the given image and 0 for all others, was used. The Adam optimization function [20] was used for a total of 80 epochs. The starting learning rate was set at 0.001 and was multiplied by 0.1 every 30 epochs. In addition, 6609 randomly selected images, constituting 10 percent of all training data, were used as validation data to monitor overfitting during the training process. The model weights with the smallest validation loss were saved and used. Moreover, zero padding was applied to all convolutional layers so that the kernels could cover the inputs, and the ReLU [21] function was used to activate the outputs of the layers in the neural network construction.
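The training configuration described above can be sketched as follows; augmentation parameters not stated in the text are assumptions, and a trivial linear layer stands in for the full re-identification network.

```python
import torch
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.Resize((384, 128)),                    # (height, width)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
    transforms.RandomErasing(),                       # random erase augmentation
])

model = torch.nn.Linear(512, 702)                     # placeholder for the full network
criterion = torch.nn.CrossEntropyLoss()               # ID classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(80):
    # ... one pass over the augmented training data, tracking validation loss
    scheduler.step()                                  # lr is multiplied by 0.1 every 30 epochs
```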
As the method for evaluating the performance of the model, the Rank-1 accuracy was used, which checks whether the identity of the top-ranked gallery embedding, after sorting by cosine similarity, matches the query image ID. Because person re-identification belongs to the intelligent video security schemes that require accurate identification, such as tracking abnormal behaviors or searching for missing persons, superior Rank-1 accuracy is a critical performance indicator of the proposed method.
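A minimal sketch of the Rank-1 evaluation is shown below; the embeddings and identity labels are assumed to be precomputed tensors.

```python
import torch
import torch.nn.functional as F

def rank1_accuracy(query_emb, query_ids, gallery_emb, gallery_ids):
    """A query counts as correct when the identity of its most
    cosine-similar gallery embedding matches the query identity."""
    q = F.normalize(query_emb, dim=1)
    g = F.normalize(gallery_emb, dim=1)
    top1 = (q @ g.t()).argmax(dim=1)        # best gallery index per query
    return (gallery_ids[top1] == query_ids).float().mean().item()
```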
The first experiment was conducted by switching among ResNet50 [14], VGG16 [15], and SeResNet101 [22], which were used by previous person re-identification experiments, as the backbone neural network, in order to set the optimal structure of the neural network constituting the re-identification model.
In the experimental results shown in (Table 1), ResNet50 [14] showed the best performance among the three backbones. The shallow VGG16 [15] network had difficulty constructing feature embedding that sufficiently reflected the visual elements and discriminated person IDs. On the other hand, the excessively deep SeResNet101 [22] showed decreased performance, since unnecessary visual elements were reflected in the feature embedding along with the valid ones.
Table 1. Rank-1 accuracy for different backbone settings.

Training Setting for Backbone with ImageNet [23] Pre-trained Weight   Rank-1
ResNet50 [14]                                                         84.51%
SeResNet101 [22]                                                      82.58%
VGG16 [15]                                                            70.51%

In the second experiment, performance evaluation was conducted by applying scale pyramids with five configurations composed of outputs with variously sized receptive fields, including square shapes sized 1*1, 3*3, and 9*9 and rectangular shapes sized 1*3, 3*1, 9*3, and 3*9, on the output feature maps extracted from the backbone, as shown in (Table 2). In (Table 2), a receptive field included in a configuration is marked with "O", and one that is not included is marked with "X". As the results of the experiment show, better performance was obtained for the cases with both shapes of receptive fields than for the cases with only the square-shaped receptive fields sized (1*1, 3*3) or (1*1, 3*3, 9*9). Moreover, the performance increased with the number of available receptive fields and was optimal for the configuration with six outputs sized (1*1, 3*3, 1*3, 3*1, 3*9, 9*3). However, the performance decreased whenever a pyramid branch with a receptive field sized (9*9) was added; a receptive field of (9*9) on an input feature map sized (8, 24) may be too large to extract local features properly. This experiment shows that giving the neural network receptive fields whose shapes are close to those of the visual feature elements appearing in person images, in addition to the commonly used square-shaped receptive fields, is more effective for constructing more discerning feature embedding.
In the third experiment, a performance evaluation was conducted on the 15 feature vectors extracted from the part pyramid, based on the six feature maps from the scale pyramid, by changing the composition of the global pooling mechanism for the local feature embedding that constitutes the output of the part pyramid, as shown in (Table 3). As the results show, better performance was obtained with only global average pooling than with only global max pooling or with the sum of global average pooling and global max pooling. The lowest performance occurred when the sum of global max pooling and global average pooling was used. This can be interpreted as meaning that when these values are used together, the resulting value becomes ambiguous, and the performance may decrease compared to using each alone.
Therefore, as shown in the previous experiments, the model composed of the scale pyramid with six outputs sized (1*1, 3*3, 1*3, 3*1, 3*9, 9*3) on a ResNet50 [14] backbone and the part pyramid with 15 outputs from the 1, 2, 4, and 8 divisions with global average pooling can be said to have the optimal configuration for person re-identification based on the method proposed in this study. Considering that the weights constituting this model number 43 million, whereas the backbones in the model of [5] have 47 million weights, the model of [6] uses 21 auxiliary neural networks, and the model of [10] uses all of ResNet50 [14] as the backbone with more complex composite local feature embedding than the current study, the complexity of the proposed neural network can be considered less than or similar to that of previous studies.
As the final experiment, the proposed model was trained on the DukeMTMC-reID [18] and Market-1501 [19] datasets, and its performance was evaluated as shown in (Table 4).

Table 4. Performance evaluation using DukeMTMC-reID [18] and Market-1501 [19] datasets.

Training and Test Dataset   Rank-1
DukeMTMC-reID [18]          94.79%
Market-1501 [19]            99.25%

As the results show, the proposed model achieved a Rank-1 accuracy of 94.79% on the DukeMTMC-reID [18] dataset. An excellent Rank-1 accuracy of 99.25% was also observed on the Market-1501 [19] dataset. The results suggest that the proposed method performs robustly regardless of the shooting environment or the composition of persons in the dataset used for training and evaluation.
Discussion
The main aim of this study was to develop a dual pyramid structure producing more discriminative feature embedding for person re-identification. To accurately detect the gallery image showing the same person as the query, discriminative feature embedding was used, formed by concatenating the 15 local feature embeddings produced by the proposed scheme with its scale and part pyramid structure. According to the analysis of the experimental results in (Table 5), the performance of the proposed scheme in terms of Rank-1 accuracy is 94.79% and 99.25% for the DukeMTMC-reID [18] and Market-1501 [19] datasets, respectively.
Table 5. Comparison with state-of-the-art methods (Rank-1 accuracy).

Method                               DukeMTMC [18]   Market-1501 [19]   Additional Meta-Information
DL Multi-scale Representations [5]   79.2%           88.9%              Not used
PCB [10]                             83.3%           92.3%              Not used
Horizontal Pyramid Matching [12]     86.6%           94.2%              Not used
OSNet [8]                            88.6%           94.8%              Not used
Pyramidal Person Re-Id [11]          89.0%           95.7%              Not used
Viewpoint-Aware Loss [4]             91.61%          96.79%             View-point info.
St-reID [3]                          94.00%          98.00%             Spatio-temporal info.
Ours                                 94.79%          99.25%             Not used

Comparing the performance of these approaches, the approaches of [3] and [4], which use additional meta-information, reach Rank-1 accuracy rates of 94.00% and 91.61% on the DukeMTMC-reID [18] dataset and 98.00% and 96.79% on the Market-1501 [19] dataset, both lower than the proposed scheme. These results suggest that the person image alone, without additional meta-information such as shooting time, location, and viewpoint, can be used for re-identification with comparatively superior performance. The approaches of [5] and [8] reach Rank-1 accuracy rates of 79.2% and 88.6% on the DukeMTMC-reID [18] dataset and 88.9% and 94.8% on the Market-1501 [19] dataset. Moreover, the approaches of [10], [11], and [12] reach Rank-1 accuracy rates of 83.3%, 89.0%, and 86.6% on the DukeMTMC-reID [18] dataset and 92.3%, 95.7%, and 94.2% on the Market-1501 [19] dataset. All of them perform worse than the proposed scheme, which suggests that using multi-scale and localized features together yields better performance than using them separately.
Conclusions and Future Work
In this study, a novel person re-identification method based on dual pyramids, a scale pyramid and a part pyramid, was proposed to obtain more accurate re-identification results by extracting the visual feature elements of various sizes and shapes appearing in different regions of a person image. The scale pyramid applied to the proposed model enables more diverse and accurate feature extraction by allowing rectangular-shaped feature receptive fields in addition to the square-shaped receptive fields used in previous neural network studies. The part pyramid allows various regions of an image to form their own feature embeddings, thereby creating an embedding that reflects the detailed visual feature elements present in each region. In addition, since the proposed method shows superior Rank-1 accuracy to methods using only square-shaped features or only global feature embedding, the multi-scaled regional features used by the dual pyramid structure are shown to be valuable for person re-identification.
In the future, in order to obtain more accurate re-identification results, research is planned to select relatively more important areas for each region using the attention mechanism during the extraction of regional multi-scale features.
Conflicts of Interest:
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
"Computer Science"
] |
A Novel Technique for Determination of Flow Characteristics of Blast Furnace Slag
A study of the flow characteristics of blast furnace slag helps determine its softening and flow (liquid-mobility) temperatures. A slag with a narrow difference between the two temperatures is termed a "Short Slag". Its formation ensures higher rates of slag-metal reactions: the slag trickles away soon after its formation, exposing fresh mass for faster reactions, while the trickling slag creates fresh interfaces that facilitate slag-metal exchanges. In the present work, a novel technique is adopted to determine the flow characteristics of blast furnace slags obtained from different industrial blast furnaces. The results so obtained agree very closely with the values obtained from the conventional method of determining the liquidus temperature using the "slag atlas". It is observed that, within the range of compositions studied, a high C/S ratio combined with a high MgO content in the slag is beneficial to the B.F. process as it renders a "short slag".
Introduction
The ironmaking blast furnace is a complex high-temperature counter-current reactor in which iron bearing materials (ore, sinter/pellet) and coke are alternately charged along with a suitable flux to create a layered burden in the furnace. The iron bearing material layers start softening and melting in the cohesive zone under the influence of the fluxing agents at the prevailing temperature, which greatly reduces the layer permeability that regulates the flow of materials (gas/solid) in the furnace. The cohesive zone is the zone in the furnace bound by softening of the iron bearing materials at the top and melting and flowing of the same at the bottom [1]. A high softening temperature coupled with a relatively low flow temperature would form a narrow cohesive zone lower down the furnace [2]. This would decrease the distance travelled by the liquid in the furnace, thereby decreasing the silicon pick-up [3,4]. On the other hand, the final slag, which trickles down from the bosh region to the hearth of the furnace, should be a short slag that starts flowing as soon as it softens. Thus, fusion behaviour is an important parameter for evaluating the effectiveness of the B.F. slag.
Fusion behaviour is described in terms of four characteristic temperatures [5]: IDT, the initial deformation temperature, symbolising surface stickiness, important for movement of the material in the solid state; ST, the softening temperature, indicating the start of plastic distortion; HT, the liquidus temperature, symbolising sluggish flow, which plays a significant role in the aerodynamics of the furnace and in heat and mass transfer; and FT, the flow temperature, symbolising liquid mobility.
The slag formed in the cohesive zone is the primary slag, with FeO as the primary fluxing constituent; the solidus temperature, fusion temperature and solidus-fusion interval are significantly affected by FeO [6]. This slag is completely different from the final slag, where the fluxing is primarily caused by basic constituents like CaO or MgO. While it is not possible to obtain primary slag from an industrial blast furnace, it is always possible to prepare a synthetic slag in the laboratory resembling the primary slag and study its flow characteristics. We have kept this venture for future studies, and the present study limits itself to the flow characteristics of the final slag as obtained from industry. However, it must be noted that, from the process point of view, the final slag should be a "Short Slag", a slag with a small difference between the ST and FT. Such a slag acquires liquid mobility and trickles down the furnace, away from the site where it starts distorting plastically, as soon as possible. This action exposes fresh sites for further reaction and is supposedly responsible for enhanced slag-metal reaction rates, influencing blast furnace operations and the quality of the metal.
The flow characteristics of blast furnace slags are strongly influenced by the extent of reduction of iron oxide at low temperature (in the granular zone), besides being influenced by the composition and by the quality and quantity of the gangue in the iron bearing materials. Ray et al. [7] have shown that the C/S (CaO/SiO2) ratio and MgO content of the blast furnace slag greatly influence its softening-melting properties. The MgO content, in addition to the C/S ratio, is felt to be of such great importance that Shen et al. [8] have actually proposed the addition of MgO through tuyere injection to make the MgO available later in the process. They claim that availability of MgO later in the process would result in a small temperature range of the cohesive zone, resulting in better permeability of the bed, which in turn would influence the coke consumption and the quality of hot metal produced.
Keeping the above in mind, the present work employs two different experimental techniques for measuring the flow characteristics of industrial blast furnace slags, compares the liquidus temperatures so obtained, and analyses the data on the basis of the chemical composition of the slag.
Experimental
There are two basic methods, namely the pressure drop method and the slag atlas method, for measuring the melting characteristics of a burden or slag. The first method measures the width of the cohesive zone in a blast furnace directly by recording the softening (plastic distortion) and flow (liquid mobility) temperatures of the burden in terms of pressure drop.
An attempt is made [7] at testing softening and melting characteristics in the laboratory by simulating the changes occurring in a small volume of the iron bearing material in the vicinity of the cohesive zone. The changes undergone by the element as a result of the burden descent pertain to temperature, load, gas flow rate and composition. In order to achieve simulation in laboratory conditions, the sample is subjected to a pre-programmed variation of temperature, composition of reducing gas and load as a function of time. The test consists of measuring the pressure drop across the sample bed, the height of the sample bed, inlet and exit gas compositions/flow rates and the weight of the sample. From the data, the degree of iron oxide reduction (oxygen loss) and bed contraction are calculated, and the data so obtained are presented in the form of a graph (Figure 1).
The interpretation of the graph (as reported) is as follows: T1: temperature in ˚C at which softening starts (the pressure drop reaches 1 kPa); T2: temperature in ˚C at which the sample bed stops contracting (the pressure drop returns to its value at T1, i.e., 1 kPa). However, this method suffers heavily on account of the following: 1) The dynamic nature of charge descent is not simulated.
2) The heat transfer and possible effects on the kinetics of iron oxide reduction are not accurately simulated.
3) This test method maintains constant gas flow rate throughout the test whereas in the actual furnace, the permeability of the charge is ever changing when iron burden layers soften during their descent.
4) This test is conducted at atmospheric pressure, whereas the gas pressure inside a modern blast furnace may be 2 to 4 times higher.
The second available method uses the predicted slag atlas for the different major constituents of the slag (CaO, SiO2, Al2O3, and MgO) to determine the liquidus temperature of a slag of given composition [9]. One of the components is kept constant, and the variations in the other three constituents are simultaneously considered for estimation of the liquidus temperature. In the diagram below (Figure 2), the slag atlas for 20% Al2O3 is presented. The liquidus temperature of a slag with 20% Al2O3, 34.57% CaO, 6.51% MgO and 36.72% SiO2 is found to be 1300˚C (point "L", the intersection of line 1 and line 2, drawn parallel to the CaO-SiO2 line for MgO and the SiO2-MgO line for CaO, respectively).
In the present novel technique, a hot stage microscope is used. The line diagram of the instrument is provided in Figure 3.
To measure the characteristic temperatures, a small cube (3 mm) of the sample is prepared from the powdered sample after adopting an appropriate method of sampling. The sample is gradually heated, and the profile of the sample is photographed and recorded. The photographs in Figures 4(a)-(c) show the different profiles and the related characteristic temperatures of a typical slag sample. The liquidus temperatures obtained from the slag atlas method and from the hot stage microscopy are compared, and the error percentages are tabulated (Table 1).
Liquidus Temperature Measurement
The differences between the liquidus temperatures measured through the two methods reported above are presented in Table 1 and Figure 5.
It is clearly established that the liquidus temperatures measured by the hot stage microscopy method and the slag atlas method agree closely. The average difference is ±0.66%, the minimum deviation being 0.22% and the maximum 1.32%.
Thus, the high temperature microscopy adopted in the present work renders results comparable to the age-old slag atlas method and can be considered a novel method for estimating the characteristic temperatures of blast furnace slag.
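As an illustration of the comparison above, the following sketch computes the percentage deviation between the liquidus temperatures from the two methods; the formula Abs(X − Y)/X × 100 and the sample values are our assumptions, not the paper's Table 1.

```python
# Percentage deviation between liquidus temperatures from hot stage
# microscopy (X) and the slag atlas (Y); the values below are placeholders.
pairs = [(1350, 1347), (1320, 1337), (1300, 1303)]  # (X, Y) in deg C
deviations = [abs(x - y) / x * 100 for x, y in pairs]
for (x, y), d in zip(pairs, deviations):
    print(f"X = {x} C, Y = {y} C -> deviation = {d:.2f}%")
print(f"min = {min(deviations):.2f}%, max = {max(deviations):.2f}%")
```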
Effect of Basicity (C/S Ratio or B2) on Characteristic Temperatures
The variation of ST, HT and FT with the C/S ratio is presented in Table 2 and Figure 6. It is interesting that the flow temperature decreases with increase in the C/S ratio, the rate of decrease of the flow temperature itself diminishing with increase in the C/S ratio. This is because the addition of the basic oxide CaO to the silicate network breaks down the network, resulting in the formation of smaller silicate groups known as anionic units or flow units [10]. The net effect is a reduction of viscosity, i.e., an increase in flowability, noted by the decrease in flow temperature. However, since smaller and smaller flow units require relatively more oxygen, rendered by the addition of higher amounts of basic oxide, the progressive increase in metal oxide content is less and less effective in decreasing the flow unit size. This explains the decrease of flow temperatures at a decreasing rate with increase of the C/S ratio [11].
It is further observed that the ST increases with the C/S ratio, so that with increasing C/S ratio the difference between FT and ST narrows (Figure 7). It may be appreciated that in the blast furnace it is necessary to have a narrow softening-flowing range rather than a sharply low liquidus temperature. Such a situation would generate a "Short Slag": as soon as the slag softens, though the volume shrinks and the permeability of the bed is adversely affected, the slag trickles down away from the site without necessarily requiring any higher availability of thermal energy. From this point of view alone, it may be concluded that, under the experimental conditions and within the range of compositions studied, higher values of the C/S ratio are beneficial to the blast furnace process.
Effect of MgO Content on the Characteristic Temperatures
The variation of the characteristic temperatures with the MgO content is presented in Table 2 and Figure 8. Though the trend is not very clear, it is observed that the FT decreases with increase in the MgO content in general. Thus, higher MgO contents within the range of compositions examined tend to generate "Short Slags", decreasing the difference between the FT and ST in general.
Under blast furnace conditions, if we consider slags with a softening-flowing range of less than 100˚C to be short slags, it can be seen from Table 2 that slag nos. 2, 3 and 6, having a combination of a high C/S ratio and a high MgO content, result in short slags. This is in line with the work done by V.K. Gupta et al. and R.N. Singh [12,13], who observed that a high MgO content combined with a high C/S ratio results in a short slag.
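The classification used above can be stated compactly; the sketch below applies the FT − ST < 100˚C criterion to hypothetical (ST, FT) pairs, since the actual values of Table 2 are not reproduced here.

```python
# "Short slag" check: a slag is short when its softening-flowing range
# (FT - ST) is below 100 deg C. The (ST, FT) values are invented examples.
slags = {1: (1240, 1365), 2: (1285, 1360), 3: (1300, 1380), 6: (1310, 1395)}
for no, (st, ft) in sorted(slags.items()):
    label = "short slag" if ft - st < 100 else "long slag"
    print(f"slag no. {no}: FT - ST = {ft - st} C -> {label}")
```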
Conclusions
1) The high temperature microscopy adopted in the present work is a reliable novel technique for determining the characteristic temperatures (flow characteristics) of blast furnace slag. 2) Under the range of compositions examined, a high C/S ratio is beneficial for the blast furnace process as it ensures the formation of a "Short Slag".
3) An increased MgO content combined with a high C/S ratio forms "Short Slags", which is advantageous to the blast furnace process.
Figure 1. Softening melting test results and derived data.
Figure 4. Different characteristic temperatures of a blast furnace slag.
Figure 5. Comparison between liquidus temperature values obtained from present work and slag atlas.
Figure 6. Variation of different characteristic temperatures with C/S ratio.
Figure 8. Variation of different characteristic temperatures with MgO content.
"Materials Science",
"Engineering"
] |
On Textual Analysis and Machine Learning for Cyberstalking Detection
Cyber security has become a major concern for users and businesses alike. Cyberstalking and harassment have been identified as a growing anti-social problem. Besides detecting cyberstalking and harassment, there is the need to gather digital evidence, often by the victim. To this end, we provide an overview of and discuss relevant technological means, in particular coming from text analytics as well as machine learning, that are capable to address the above challenges. We present a framework for the detection of text-based cyberstalking and the role and challenges of some core techniques such as author identification, text classification and personalisation. We then discuss PAN, a network and evaluation initiative that focusses on digital text forensics, in particular author identification.
Introduction
Personal threats, false accusations, privacy violation and defamation are typical forms of attacks faced by victims of harassment and stalking. The advancement of Information and Communication Technology (ICT) has extended existing attack vectors further to include many online social networks designed for people to interact using multimedia. Content forms such as text, images, audio and video are utilised in the context of Human Computer Interaction (HCI) methods to interface with end users. This is usually enabled through web browsers, mobile applications and other such means.
The significant impact of ICT on the severity of cyberstalking has been reported in the literature. For instance, research from the Electronic Communication Harassment Observation (ECHO) project [23,31] shows that many incidents, although initially emerging in cyberspace, have consequently moved to the physical world. Extreme examples of such incidents have forced victims to disengage from their daily routines, move homes, and/or change jobs resulting in significant financial losses, inducing fear, distress, and disrupting the daily activities of victims. Accordingly, terms such as cyberstalking and cyberbullying have emerged to address the problem with full consideration of the heavily interconnected Cyber-Physical-Natural (CPN) world [18] to accurately define the ecosystem where both victims and attackers practice all their life-related activities. There is evidence that the extreme emotional distress and physical trauma caused by these anti-social offences have also led to suicide and murder [31].
It is important to elaborate on the unique characteristics of cyberstalking; in this paper we define cyberstalking messages to be: 1) unwanted or unwelcome; 2) sent from a known or unknown but determined/motivated party (perpetrator); 3) intentionally communicated to target a specific individual (the victim), and 4) persistent. The National Centre for Cyberstalking Research (NCCR; http://www.beds.ac.uk/nccr/), based in the UK, further recognises the persistent behaviour to be realised when ten or more unwanted messages are sent over a period of time that is equal to or less than four weeks. Clearly, this discussion sets a distinctive line between cyberstalking and any discrete events of online harassing materials. To effectively mitigate the risks associated with cyberstalking, technology must be utilised to support detection, event classification, automated responses, and reporting of incidents. Text analysis and Information Retrieval (IR) play a critical role given that text (emails, SMS, instant messaging (IM), blog posts, Twitter tweets, etc.) is a popular content form reportedly used in the vast majority of incidents.
In the remainder of this paper we elaborate further on the need for technical solutions to tackle cyberstalking. Afterwards, in Sect. 3, we discuss a framework for cyberstalking detection and evidence gathering; this also includes the application of text analysis and machine learning in this context. One of the main emerging challenges within technical solutions is authorship identification. We therefore also introduce the PAN shared task series that tackles this task in Sect. 4. It is one of the aims of this paper to relate this line of research to the context of automatic solutions to detect and handle cyberstalking in text messages.
Finding solutions to curtail cyber harassment and cyberstalking
The importance of an adequate cyber crime response is recognised to be a cross-cutting issue in cybersecurity and law enforcement as it has clear links to serious organised crime, protecting the vulnerable and victims of child sexual exploitation [37]. The growth of the internet has led to the traditional crimes of stalking and harassment being transformed in scale and form. Much of the recent research into cyberstalking has focussed on the comparisons between offline stalking and cyberstalking and the mental health outcomes of the victims of stalkers. Therefore, the necessity to increase understanding of the technological means of detecting and gathering evidence in cases of cyberstalking is paramount.
Although there is no conclusive evidence as to the increasing prevalence of cyberstalking on account of advancements in technology, it can be assumed that the number of cyberstalking incidents has indeed risen dramatically. According to a report released by the UN's International Telecommunication Union (ITU) in 2013, approximately 39 % of the world's population now has access to the internet, which is equivalent to around 2.7 billion people. Online resources can also be utilised unlawfully by criminals. For instance, with regard to cyberstalking, criminals have an infinite number of online users to stalk or harass.
From a broad perspective, the issues surrounding and the consequences of acts such as cyberbullying, harassment, and stalking are most certainly within the public's zeitgeist. For example, TV shows and films are increasingly produced on this subject matter and not necessarily from a fictional standpoint. However, realistic solutions are rarely forthcoming beyond the required narrative closure. Legally, since cyberstalking is a criminal offence in some countries, the system partially contributes to the solution. For instance, in the UK, based on the circumstances of a given case, relevant laws could include the Sexual Offences Act 2003 S.15, Protection from Harassment Act 1997, Crime and Disorder Act 1998, and Domestic Violence, Crime and Victims Act 2004. Additionally, a number of support services (e.g. The National Stalking Helpline) provide the community with advice on how to report harassment, gather evidence and reduce risk. However, relevant technologies have only been very briefly researched to produce applicable solutions. Therefore, beyond advice on best practice (for example see [25]) for those who find themselves as the target of cyberstalking and the like novel technical solutions are needed. These technical solutions are needed not only from a prevention or evidenciary basis but also so that those who find themselves as the focus of these types of attack can feel a sense of regaining control, loss of control being one of the many consequences as reported by the ECHO project [23,31].
Current literature includes proposals aiming to shield unwanted communication; provide training and emotional support through simulators; and facilitate incident reporting and digital investigations [20]. As communication channels are hard to control, current proposals in this area suggest a layer of encryption and integrity checking to preserve privacy and facilitate identity checking [10]. This will presumably prevent unwanted communication, but the scenario adopts a white-list approach where each connection is pre-approved. This can be very efficient within a parentally controlled environment to protect minors, but it is not convenient for adults with extensive online tasks to perform as part of their career or social life. A good solution should ideally empower users with real control over unwanted messages without restricting their online reachability. Other existing methods utilise traditional techniques to restrict contact (e.g. blocking IDs and mobile numbers); although attackers in many stalking scenarios are known to the victims, this approach still fails due to the high degree of online anonymity possible in cyberspace: perpetrators can forge email headers, create new social media accounts, and hide their IP addresses via Privacy Enhancing Technologies (PET) [16].
Reactive proposals are focused on incident response, usually through digital investigation toolkits designed not only to recover the attacker's identity but to preserve admissible evidence for a court of law. Software such as the Predator and Prey Alert (PAPA) system [2] enables remote monitoring of local activities by the police to facilitate investigations and collect evidence; such solutions require agent software to be installed at the end-user's side to be able to also monitor encrypted communications [4]. Clearly, this comes at a price in terms of user privacy but could be effective in many extreme cases. In response to online anonymity, authorship analysis can be performed to establish hypotheses on which content belongs to which user. Eventually, determining particular details such as the age, gender or physical location from contextual clues can help a system to automate a response (e.g., warning, block, report) [3]. Nonetheless, content forms can sometimes be directly linked to the originator; for instance, pictures can be associated with the particular camera they were taken by, or with other images produced by the same camera. This has been tested based on the analysis of Sensor Pattern Noise (SPN); published results suggest a satisfactory outcome for this technique [29].
An example of the other type of reactive responses could be initiated through peer-support simulators, a virtualised application to provide social services including emotional comfort and standardised professional advice to victims [39]. Likewise, proactive solutions could include training through simulators or by means of serious games [8] to educate and raise awareness of the problem.
An important conclusion from these examples is that automated detection through machine learning and text analysis is a fundamental component to provide intelligence in each case. Detection is known to be the first step to trigger a suitable action and mitigate the incident; alert a supervisor, block communication, and preserve evidence. As such, this is currently an active research area at a very early stage where data mining algorithms are trained, pre-processing techniques tested, and new corpora are being built [14,7,9].
The prior techniques were discussed to demonstrate a practical response. While their combination yields a promising plan customised to mitigate cyberstalking, the applicability of such implementations can also be extended to cover other forms of cybercrime. For instance, the functionality of a mobile application designed to report harassment could, in theory, be generalised to consider blackmail, fraud and other anti-social behaviour. The Crown Prosecution Service in the UK categorise criminal offences sent via social media into credible threats that could constitute violence to the person or damage to property; targeted attacks on individuals such as Revenge Porn; communications which may amount to a breach of a court order or a statutory prohibition, and finally, communications which are Grossly Offensive, Indecent, Obscene or False [36]. Any digital evidence created using media recorders could be shared with law enforcement using the same process but using different methods. Likewise, any empirical results on the feasibility of using simulated agents in virtual reality to mitigate cyberstalking by means of training and social support, would trigger advancement in the field to provide algorithms modelled to automated conversations with people suffering depression, anxiety disorder, or even eating disorders.
A framework for automatic cyberstalking detection in texts
Having discussed the need for technical solutions to tackle cyberstalking, we now turn to the question of how different text analysis, information retrieval, and machine learning techniques could be utilised to detect potentially harmful cyberstalking messages and to collect the required evidence for law enforcement. To this end, we present in this section a framework that outlines potential tasks and solutions as well as their relationships. In this respect, author identification is one of the core tasks that enjoys increasing popularity in the research community. We therefore continue our discussion in Sect. 4, where the PAN network on digital text forensics is introduced, which combines different research efforts in this field. Our proposed framework is called Anti Cyberstalking Text-based System (ACTS). The framework generalises the one proposed in [12] from email to general text messages and adds a personalisation module, motivated below, which the earlier framework lacks. It is furthermore adapted from work presented in [11]. The framework proposed here could best be described as a detection and digital readiness system, which specialises in automatic detection and evidence collection of text-based cyberstalking (e.g., in emails, MMS, SMS, chat messages, tweets, social media updates, instant messages). A prototypical implementation of the framework is under development, and the data collection process is ongoing. ACTS is designed with the aim to run on a user's computer/mobile device to detect and filter text-based cyberstalking. The architecture of ACTS is depicted in Fig. 1.
The proposed system combines several techniques to mitigate cyberstalking. It consists of five main modules: attacker identification, detection, personalisation, aggregator, and evidence collector.
When a new message arrives, metadata-based blacklists may be applied to filter messages coming from unwanted senders. Such metadata may for instance consist of the header information in emails or the sender in tweets. However, some systems allow for forging such metadata, for instance by providing a fake 'sender' header in emails or using anonymous or fake accounts. For this reason, messages that pass the blacklist need to be further examined by the identification, personalisation, and detection modules. The results from the three modules are passed to the aggregator for a final decision.

Fig. 1 The ACTS framework. Different text analysis and machine learning modules, based on user profiles, content and writeprint/author identification, are used to determine whether a text message is legitimate or unwanted
Similar to other email filtering systems (like spam detectors), the detection module is employed to detect and classify messages into cyberstalking messages, genuine messages, and grey messages based on their (textual) content. A number of supervised and unsupervised machine learning algorithms can be employed to classify and filter unwanted text messages [30]. To this end, we assume the detection module computes a value β that covers the content-based estimate of the system that the message is unwanted (for instance, based on unwanted words or phrases). A challenging task is to take into account the nature of messages from short SMS and chat messages to potentially longer emails.
Unlike the detection module, the attacker identification module analyses messages based on sender writeprints (in an analogy to fingerprints); these are writing-style features such as structural, lexical, syntactic, and content-specific features [38]. Applying means for authorship attribution and verification (further discussed in Sect. 4), this module is deployed specifically to detect and uncover anonymous and spoofed messages which are sent by a known attacker who may not be detected based on metadata. Furthermore, the evidence provided by the attacker identification module helps classify those messages which could potentially bypass the detection module because they do not contain any unwanted words or phrases. However, authorship attribution on short messages poses specific challenges, for instance due to the character limitation of short messages (e.g., SMS is limited to 160 characters per message, and similar limitations hold for tweets). Nevertheless, because of this character limitation, people tend to use unstandardised and informal language, abbreviations and other symbols, which mostly depend on the user's choice, subject of discussion and communities [13]; some of these abbreviations and symbols can provide valuable information to identify the sender. A possible solution to overcome this shortcoming and enhance the identification process is to combine a cyberstalker's writeprints with their profile, including linguistic and behavioural profiles, utilising already collected writeprints and stored profiles.
The result of the identification module is represented by the value α, for instance based on three outputs: not cyberstalking (α ≥ r1), cyberstalking (α ≤ r2) and grey (r2 < α < r1). The α value is passed to the aggregator component; r1 and r2 are pre-defined threshold values in attacker identification (which have to be determined empirically).
Cyberstalking and cyber-harassment are abusive and threatening attacks; however, the concept of what is considered abusive and threatening in a message is a subjective decision from a victim's perspective; we have to take into account that such a decision is a highly personalised one. For example, bare words or phrases in a message might have no inclination whatsoever towards bad feeling to almost anyone, but they might cause fear and distress to a cyberstalking victim. For instance, sending child birthday wishes may commonly be considered as positive, but not in the case of somebody who lost their child or had undergone abortion. This complicates the process of developing a general tool to combat text-based cyberstalking. For this reason we define a personalisation module which is employed to enhance the overall victim's control over incoming messages, where each victim can outline and define their own rule preferences. Therefore, the personalisation module may consist of rule based components and a code dictionary. The rule based component is optional, where rules are defined based on words, dates and phrases provided by the user. For example, a typical rule might be: if ((date_A < current_date < date_B) ∧ (message contains "happy birthday")) return true. If cyberstalking involves ex-partners, the cyberstalker has background knowledge about the victim and knows which words/phrases at which specific times can cause distress and fear (in this case "happy birthday"). Furthermore, consider the above example where somebody gets birthday wishes for a lost child; they will likely occur around the time of the actual birthday, hence specifying a time range would make sense as a further means of personalised cyberstalking detection.
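A minimal sketch of such a user-defined rule, with hypothetical dates and phrase (all names are ours), could look as follows:

```python
# Optional rule-based personalisation component: the rule fires only when
# the message arrives inside the user-defined sensitive window
# (date_a, date_b) and contains the user-supplied phrase.
from datetime import date

def rule_matches(message, today, date_a=date(2024, 5, 1),
                 date_b=date(2024, 5, 14), phrase="happy birthday"):
    return date_a < today < date_b and phrase in message.lower()

print(rule_matches("Happy Birthday!", date(2024, 5, 7)))   # True
print(rule_matches("Happy Birthday!", date(2024, 8, 7)))   # False
```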
A code dictionary is created from ranked words and phrases which are commonly used in cyberstalking. Furthermore, the code dictionary can be updated by the user. The ranking value for each word and phrase is initially set to zero. Then, each time a word or phrase in the dictionary is matched with words or phrases in a received message, the ranking value of the matched word/phrase in the code dictionary is increased. Obviously, the most common words or phrases will be ranked highest, and messages are first matched against the highest ranked words and phrases.
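The ranking behaviour of the code dictionary can be sketched as follows; the dictionary entries are invented examples:

```python
# Self-updating code dictionary: every match between a message and a
# dictionary entry increments that entry's rank, so frequently occurring
# cyberstalking words/phrases float to the top of future scans.
code_dictionary = {"you can't hide": 0, "i'm watching you": 0, "alone": 0}

def scan(message):
    hits = [p for p in code_dictionary if p in message.lower()]
    for p in hits:
        code_dictionary[p] += 1
    return hits

scan("I'm watching you. You are alone.")
# Highest-ranked (most common) phrases are matched first next time.
for phrase, rank in sorted(code_dictionary.items(), key=lambda kv: -kv[1]):
    print(rank, phrase)
```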
The received message could be preprocessed; for this purpose, k-shingling [6] could be utilised. Shingling is another way to represent features (terms) of a message, which has been used in email classification. A shingle of a message is a sequence of consecutive words in that message; the size k of a shingle is the number of words in that shingle (denoted by k-shingle). If a message m can be represented by a sequence of words w1, w2, ..., wn, then k-shingling of m will result in j features with j = (n − k) + 1, so each feature will cover k terms. For example [6], if we select 4-shingling (k = 4) and the message is "a rose is a rose is a rose", the features are ("a rose is a"), ("rose is a rose"), ("is a rose is"), ("a rose is a"), ("rose is a rose"). Each k-length shingle is run against the dictionary. Probabilistic disambiguation [1] is another possible method to be used; this is a probabilistic technique used to measure the effects of violent and extremist hate language in different online messages. Therefore, such a technique could be used to measure the degree of offensiveness and seriousness of cyberstalking messages in relation to a code dictionary database.
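For illustration, k-shingling as defined above can be implemented in a few lines; applied to the example message, it reproduces the five 4-shingles listed:

```python
# k-shingling: a message of n words yields j = (n - k) + 1 features,
# each covering k consecutive words.
def shingles(message, k=4):
    words = message.split()
    return [" ".join(words[i:i + k]) for i in range(len(words) - k + 1)]

print(shingles("a rose is a rose is a rose"))
# ['a rose is a', 'rose is a rose', 'is a rose is', 'a rose is a',
#  'rose is a rose']
```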
Both the dictionary's returned result and the rule-based result are represented by the value λ, which may for instance be either cyberstalking (1) or not cyberstalking (0, when both returned results are negative). The final decision whether a received message is cyberstalking or not is made in the aggregator module, utilising the outcomes of the previous modules. α, β and λ are the final calculated result values for each individual received message from the identification, detection, and personalisation modules, respectively. Messages are identified as either grey (?), cyberstalking (1) or not cyberstalking (0) based on these values. If a message is classified as grey, it may be flagged, and the final decision should be made by the user.
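A minimal sketch of the aggregator's decision rule, with combination logic chosen by us for illustration (the paper leaves the exact aggregation open), might be:

```python
# Aggregator sketch: each module reports 1 (cyberstalking), 0 (not
# cyberstalking) or "?" (grey), derived from alpha, beta and lambda
# as described above; the combination policy here is an assumption.
def aggregate(alpha_verdict, beta_verdict, lambda_verdict):
    verdicts = (alpha_verdict, beta_verdict, lambda_verdict)
    if 1 in verdicts:                  # any module raises the alarm
        return 1
    if all(v == 0 for v in verdicts):  # all modules agree it is legitimate
        return 0
    return "?"                         # grey: flag and defer to the user

print(aggregate(0, 0, 0), aggregate("?", 0, 1), aggregate("?", 0, 0))
# 0 1 ?
```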
The final module is the evidence collection module, which collects evidence from a newly arriving cyberstalking message: apart from the provided metadata and content, in the case of email this includes the source IP address or, if it is not available, the next server relay in the path, and the domain name (both addresses are automatically submitted to WHOIS and other IP geolocation websites). This information, with timestamp and email headers, is saved, for instance, in the evidence database on the victim's device. The module should also regularly update and add stylometric profiles and related information of the cyberstalking messages to the database. Furthermore, it should utilise statistical methods like the multivariate Gaussian distribution and PCA to analyse the writeprints and profiles of cyberstalkers, and text mining to extract similar features (attacker behaviour, greeting, farewell, etc.), specifically between anonymous messages and non-anonymous ones.
Saving cyberstalking messages and evidence locally or in a (private or shared) cloud is another function of the evidence module. This process will allow law enforcement to have regular access to messages as well as an overview of the cyberstalking progress. Saved cyberstalking messages could be a first step in collecting data on cyberstalking. However, saving data (evidence and emails) is usually an optional function of the system that would only take place when the victim agrees with law enforcement to save data, so that law enforcement could have regular access and monitor cyberstalking incidents. The process of saving cyberstalking messages, for instance in a cloud, requires some safeguarding to preserve the messages' integrity and authenticity and protect them from any malicious act (which might destroy or manipulate potential evidence). Hash functions like SHA could be utilised to make sure the exchanged data is not modified during transmission or by any unauthorised person. Furthermore, asymmetric keys could be used for data encryption. Provided a suitable API is available and corresponding legislation is in place (e.g., Germany's 'quick freeze' data retention approach, see the footnote below), the evidence collection module could also notify the service or content provider.

Fig. 2 Identification module of the ACTS framework. The module comprises components for various relevant digital text forensics tasks that are used to collect evidence against suspects
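As a sketch of the SHA-based integrity safeguard just described (our illustration; field names and record format are assumptions), each preserved message could be stored together with a timestamp and a digest of the whole record:

```python
# Evidence preservation sketch: store each message with a timestamp and a
# SHA-256 digest so later tampering is detectable when the evidence is
# handed to law enforcement.
import hashlib
import json
from datetime import datetime, timezone

def preserve(message, metadata):
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "metadata": metadata,
        "message": message,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record  # to be written to the local/cloud evidence database

print(preserve("unwanted message text", {"sender": "unknown@example.com"}))
```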
Digital text forensics for identification
An important part of our framework to detect cyberstalking is the author identification module. Its purpose is the analysis of arriving messages with respect to authorship and originality. Figure 2 gives an overview of its four major components, namely attribution, verification, profiling, and reuse detection. Each of these components is invoked under specific circumstances, sometimes in parallel, to collect evidence about the origin of a given message or a given collection of messages. Its results are aggregated and then returned to the surrounding framework. In what follows, we briefly explain these components and their underlying problem settings, we outline their relevance to detecting cyberstalkers, and we point to state-of-the-art research for each of them, much of which originates from a number of shared task competitions that have been organized as part of the PAN workshop series on digital text forensics.

Footnote 2: This (controversially discussed) approach means data should be stored only "under court order based on a probable cause" (see also http://www.dw.com/en/germany-calls-for-a-quick-freeze-datacompromise/a-15829029).

Footnote 3: PAN is an excellence network and workshop series on digital text forensics, where researchers and practitioners study technologies that analyze texts with regard to originality, authorship, and trustworthiness. Almost all of the technologies for corresponding tasks are still in their infancy, and active research is required to push them forward. PAN therefore focuses on the evaluation of selected tasks from digital text forensics in order to develop large-scale, standardized benchmarks, and to assess the state of the art. PAN has organized shared task events since 2009. See also http://pan.webis.de.
For the detection and subsequent prosecution of cyberstalking, it is important to collect evidence on the suspect perpetrators. The application of forensic software for author identification may aid in this respect by comparing the messages received from a stalker with other pieces of writing from a suspect, or that of a number of potential suspects. In cases where the stalker attempts to stay anonymous, this may help in revealing their identity. However, even if the stalker apparently acts openly, collecting evidence that connects the stalker's messages to the apparent identity of the stalker is an important part of an investigation, since the stalker may try to deceive the investigators. In this connection, technologies for authorship attribution and verification are required to scale future investigations, which, when given a text of unknown authorship, either attribute it to the most likely author among a set of candidates, or verify whether the text has been written by the same author as another given text. The former task corresponds to a traditional task in forensic linguistics, where investigators first narrow down the set of candidates who may have written a given piece of text using other evidence, and then employ a forensic linguist to determine who of the candidates probably wrote the text in question based on stylistic analyses. This presumes of course that suspect candidates can be identified and that sufficient writing samples from each of them can be gathered. In that case, attribution boils down to a multi-class classification problem, where each suspect candidate corresponds to a class. By contrast, verification corresponds to a so-called one-class classification problem [21,35]: either a text has been written by a given author (the target class), or not, whereas determining the latter would mean to be able to accurately distinguish the given author from all others. While being more challenging to solve automatically, verification problems may frequently arise within cyberstalking detection. For example, one may wish to check whether a message received from a given sender was indeed written by that sender by verifying whether that message corresponds stylistically to messages previously received from the same sender. Altogether, attribution and verification address complementary problem settings within cyberstalking detection.
In situations where little is known about the originator of an offending message, however, neither attribution nor verification technologies are of much use. Here, author profiling technology can be applied to determine at least some demographics about the author of the message in question. Author profiling technology attempts to correlate writing style with demographics, such as age, gender, region of origin, mother tongue, personality, etc., which is typically cast as a multi-class classification problem. This information may help to narrow down the search for suspects. At the same time, author profiling technology may also be used to verify whether the supposed age of the sender of a stalking message is consistent with the results of an automatic analysis, which may raise a flag, or serve as sufficient reasons to doubt the obvious in an investigation. An analysis of personality types may further allow for recommending ways to deal with a supposed stalker in order not to encourage them further.
The automatic assessment of messages with respect to authorship presumes that they have actually been written by their senders. This assumption may not hold under all circumstances; especially when offenders become aware of the fact that their messages are being analyzed with regard to writing style, they may attempt to obfuscate them. While it is still unclear how well humans are capable of adjusting their own writing style so that forensic software or even a human forensic linguist are misled, an easy way to send messages devoid of one's own writing style is to reuse someone else's writing. This is why reuse detection forms an integral part of forensic analysis, where the task is to identify texts or text passages that have been reused, and to retrieve their likely sources. Nevertheless, even in the absence of reference collections to compare a given message with, a writing style analysis of a message may still be useful, namely to identify writing style breaches (i.e., positions in a message where the writing style changes), which would serve as evidence that texts from different authors have been conflated [34].
All of the aforementioned authorship-related tasks, with the exception of reuse detection, are basically addressed using machine learning applied on top of stylometry, the science of quantifying the writing style of texts. The first application of stylometry to tackle an authorship dispute dates back to the 19th century [24], and since then linguists have proposed plenty of features for this task [17]. In general, such features attempt to capture writing style at the character level, the lexical level, the syntactic level, and the semantic level, as well as dependent on the application. It turns out, however, that low-level features at the character level, such as character n-grams, where n ranges from 2 to 4, are among the most effective ones, whereas tapping syntactic or semantic information is less so and may serve only as a complement. Character n-grams indeed carry various forms of stylistic information, including function word usage, inflections, phonetic preferences, and even word and sentence length distribution, dependent on how often white spaces and punctuation occur. Regarding the classification technology applied, the outlined multi-class problems make use of straightforward classifiers, whereas the one-class classification problem of verification requires tailored approaches. One of the best-performing ones is the reconstruction approach "Unmasking", which trains a classifier to separate the passages of the text of unknown authorship from those of the known author, repeating the training iteratively and removing the most discriminative features in each iteration. The decrease of classification performance over iterations is consistently higher if the unknown text has in fact been written by the known author [22]. Besides these notable examples, there are plenty more, many of which have been surveyed in [32]; for authorship attribution and verification, author profiling as well as reuse detection, dozens of approaches have been proposed over the past two decades. Yet, for all of these tasks, little effort has been spent to develop standardised benchmarks, so that results can hardly be compared across papers. To fill this gap, the PAN workshop for digital text forensics has been initiated, where shared tasks for all of the aforementioned problems have been organised starting 2009. While a complete survey of the results of the PAN initiative is out of the scope of this paper, we refer to the latest overviews of the respective tasks, namely for authorship attribution [19], authorship verification [33], author profiling [28], and the two subtasks of reuse detection, text alignment and source retrieval [15,27]. These benchmarks have had a significant impact on the community. In a recent large-scale reproducibility study on authorship attribution, they were employed to reimplement and reproduce the 15 most influential approaches from the past two decades, evaluating them on the standardised datasets [26]. The study finds that some of the approaches proposed early on are still competitive with the most recent contributions.
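The character n-gram representation described above is straightforward to reproduce; the following toy sketch (invented texts and labels, standard scikit-learn API) trains a multi-class classifier on character 2-4-grams:

```python
# Character n-gram stylometry sketch: 2-4-grams at character level feed a
# standard multi-class classifier for authorship attribution.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I told you, don't ignore me!!", "Kind regards, see you Monday.",
         "you CANNOT ignore me forever!!", "Please find the minutes attached."]
authors = ["suspect", "colleague", "suspect", "colleague"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),  # char 2-4 grams
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)
print(model.predict(["why do you keep ignoring me??"]))
```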
With respect to cyberstalking detection, there are still open challenges in authorship identification that need to be addressed, such as the fact that these technologies do not work well on very short texts, unless many short texts from the same author can be gathered. If a stalker sends only very short and only a few well-placed messages, a reliable identification may be circumvented altogether. Moreover, application-dependent style features need to be developed that also take into account the context of the recipient.
Conclusion and future work
Textual analysis and machine learning are cornerstones of a technical response to the problem of cyberstalking. This is evident from the different prevention and mitigation techniques discussed in this paper as well as the Anti Cyberstalking Text-based System (ACTS) framework. ACTS' modules showcase various features to mitigate this type of anti-social offence. By design, it has a prevention mechanism combining the ability to detect, analyse, identify, and block communication. Further, it also has an integrated functionality to quarantine evidence to aid digital forensics investigations. The forensic element of this framework is not limited to logging content but adds a layer of analysis-based metadata to establish relationships between collected evidence, hence supporting investigation. In practice, this capability is also critical for alerting and convincing law enforcement of the severity of the attack, as it consequently provides means to assess potential risk.
Our future work in this regard includes the development of a new mechanism to further empower users with evidence-based advice on how to respond to harassment. Victims usually have a few choices: 1) send a reply to the unwanted message; 2) ignore it; or 3) outsource the response to a third party. Some of these actions include further decisions, such as deciding the content of the response in the case of sending a reply or identifying a suitable third party to contact. We argue that machine learning can eventually provide intelligence to guide users towards personalised suitable actions. Accordingly, this ongoing work should also survey existing experiences of victims to support such a system.
Besides personalisation and content analysis, one of the crucial elements of the ACTS framework relies on effective authorship identification in a cyberstalking context. We therefore discussed existing promising approaches for several facets of this challenging task. It becomes clear that the outcome of this line of research can potentially help to detect cyberstalking more accurately. However, most of the approaches have not directly been applied to the problem of cyberstalking detection. Future efforts should therefore focus on applying these mechanisms, potentially in the context of ACTS, directly to the cyberstalking detection problem. One step in this direction could for instance be the organization of a shared task on cyberstalking detection in the context of PAN.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"Computer Science"
] |
Protection Schemes in HPON Networks Based on the PWFBA Algorithm
In this paper, possibilities for network traffic protection in future hybrid passive optical networks are presented, and reasons for realizing and utilizing advanced network traffic protection schemes for various network traffic classes in these networks are analyzed. Next, principles of the Prediction-based Fair Wavelength and Bandwidth Allocation (PFWBA) algorithm are introduced in detail, focusing on the Prediction-based Fair Excessive Bandwidth Reallocation (PFEBR) algorithm with the Early Dynamic Bandwidth Allocation (E-DBA) mechanism and the subsequent Dynamic Wavelength Allocation (DWA) scheme. For analyzing various wavelength allocation possibilities in Hybrid Passive Optical Networks (HPON), a simulation program with the enhancement of the PFWBA algorithm is realized. Finally, a comparison of different methods of wavelength allocation in conjunction with specific network traffic classes is executed for future HPON networks with the considered protection schemes. Subsequently, three methods are presented from the viewpoint of HPON network traffic protection possibilities, including a new approach to wavelength allocation based on network traffic protection assumptions.
Introduction
A Passive Optical Network (PON) is a fiber-based point-to-multipoint optical network communication technology with no active elements in the signal's path from source to destination: a single optical fiber serves multiple endpoints by using unpowered/passive fiber optic splitters, which divide the fiber bandwidth among them. Its main advantages include [1] longer distances (i.e., as the last-mile technology between internet provider and customer, approx. 20 km) [2], higher bandwidth (with a futuristic view of up to 25G-PON to 50G-PON [3]), downstream video broadcasting, the eliminated need for additional electronic devices in the network (thanks to its passive character), and easy upgrades to higher bit rates (or additional wavelengths).
Future access technologies utilized in passive optical networks must be able to provide high sustainable bandwidths on a per-user basis while keeping capital and operational expenditures as low as possible. Therefore, Next-Generation Passive Optical Networks (NG-PON) need to provide survivability schemes in a cost-efficient way, whereas the growing importance of uninterrupted internet access makes fault management an important challenge [4]. The reliability requirements may depend on user profiles. Thus, NG-PON networks should also support the end-to-end protection for selected users when requested [5]. Within this context, it is significant to discover an effective way to analyze network traffic protection schemes for utilization in future passive optical networks [6].
We introduce new possibilities for network traffic protection in future Hybrid Passive Optical Networks (HPON) [7], including reasons for using advanced traffic protection schemes for various network traffic classes in those passive networks. Furthermore, principles of the Prediction-based Fair Wavelength and Bandwidth Allocation (PFWBA) algorithm are explained in detail, with a close focus on the Prediction-based Fair Excessive Bandwidth Reallocation (PFEBR) algorithm, with an Early Dynamic Bandwidth Allocation (E-DBA) mechanism and a Dynamic Wavelength Allocation (DWA) scheme. Subsequently, three methods are analyzed and evaluated from the viewpoint of HPON network traffic protection possibilities. The first method presents an original approach to the PFEBR algorithm adapted to network traffic protection schemes. The second method introduces our newly developed approach to wavelength allocation based on network traffic protection assumptions. The third method, based on fixed wavelength priority bandwidth allocation, is accommodated directly to specific network traffic classes.
The paper is structured as follows. Section 2 introduces the finer details of HPON networks and the importance of network traffic protection in such networks. Section 3 provides a detailed description of PFWBA, PFEBR and E-DBA algorithm principles. Section 4 provides details about our simulation for HPON, including the results. Last but not least, Section 5 provides conclusions of our findings.
Network Traffic Protection in HPON Networks
The HPON presents an intermediate stage in the migration scenario to the fully operational Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) networks, which have a positive future thanks to their ability to satisfy the growing bandwidth demands [8]. Hybrid passive optical networks, except for other characteristics, can utilize both Time Division Multiplex (TDM) and Wavelength-Division Multiplexing (WDM) techniques. It means that various wavelengths can be considered for the network traffic. Thanks to WDM technology, we can efficiently utilize the optical transmission medium, which is very important from the viewpoint of growing bandwidth demands and cheap services. In the present day, there are different standardized passive optical technologies with various dedicated wavelengths able to cooperate. Moreover, the availability of various wavelengths transmission channels leads to the extension of advanced network traffic protection schemes. Therefore, new wavelength allocation methods must be considered in addition to effective bandwidth capacity utilization in HPON networks [9,10].
Future HPON architectures can be proposed with or without network traffic protection schemes. Therefore, they can be designed keeping in mind different possible paths for network deployment and protection upgrades, and can be proposed with different levels of network traffic protection. The proposed survivable architectures can also be applied to Time-Division Multiplexing-Passive Optical Networks (TDM-PON) with more than one stage of remote nodes based on power splitters [11,12]. A benefit of the network traffic protection deployed in HPON networks can be obtained as a consequence of the reliability performance improvement and the decrease in service interruptions experienced by users [13-16]. It can be beneficial to either provide protection functionalities at the time of HPON network deployment or at least install a sufficient amount of optical fibers in advance. Furthermore, reasons for network traffic protection are very substantial from the viewpoint of signal transmission. A clear benefit can be shown when network planning is completed with a possible protection upgrade in mind, which leads to a decrease in investment costs. This confirms the importance of the right deployment plan for future hybrid passive optical networks.
HPON networks form a basis for converged Fibre-Wireless Passive Optical Networks (Fi-Wi PON) [17]. For that reason, it is necessary to provide a certain level of network traffic protection and restoration [8,11,12]. Different protection schemes can be proposed, ranging from a no-protection scenario to architectures with full protection. However, if no hardware protection is deployed when the network is created, appropriate network traffic protection schemes can still be realized: transmission channels provided on various wavelengths can be used to provide protection paths for high-priority network traffic classes.
Beyond optimization of the optical power budget for passive optical network designs [18], securing network traffic protection and system recovery in the case of failures is one of the most important issues in HPON networks. As future passive optical networks transmit aggregated high-speed data from up to hundreds of customers, network unit and/or distribution failures represent a serious problem. Research on fault-tolerant HPON topologies recommends the utilization of duplicated optical fibers between the Optical Line Terminal (OLT) and Optical Network Units (ONU), duplicated optical components and a supplementary circuit included in the Remote Node (RN) in one common entity [16], even though such schemes are inadequate due to their high redundancy and significant expense [13,19,20]. Apart from hardware realization, more cost-effective solutions that focus on dynamic wavelength and bandwidth allocation algorithms can be utilized for enhancing differentiated network traffic protection schemes for different traffic classes [21,22].
In this paper, the attention is focused on possible advanced network traffic protection in HPON networks utilizing an appropriate enhancement of the PFWBA activity. In Section 3, principles of the PFWBA algorithm are introduced in detail, focusing on the Prediction-based Fair Excessive Bandwidth Reallocation (PFEBR) algorithm with the Early Dynamic Bandwidth Allocation (E-DBA) mechanism and the related Dynamic Wavelength Allocation (DWA) scheme. The PFWBA algorithm comprises the PFEBR and the DWA scheme. These two algorithms are not executed simultaneously: first, the OLT terminal calculates the time intervals for individual ONU units, and subsequently, working and/or protection wavelengths are assigned to these calculated time intervals. To analyze various network traffic protection schemes utilizing the wavelength allocation options possible in HPON networks, a simulation program with the enhancement of the PFWBA algorithm is realized, and an evaluation of different network traffic protection types available for future HPON networks is presented in Section 4. Particular requirements are more or less important for the selected HPON network topology and are satisfied by the presented enhancement of the PFWBA algorithm.
E-DBA and PFEBR Principles
The robust PFWBA contains DWA schemes and the E-DBA mechanism of the PFEBR. In the standard Dynamic Bandwidth Allocation (DBA) scheme, the OLT unit starts the process of time-interval-based bandwidth allocation after receiving REPORT messages from all ONU units. The early DBA mechanism (E-DBA) ensures the sequence of transmitting REPORT messages to the OLT by delaying ONU units with unstable network traffic, based on the B_V group, i.e., the group of ONU units whose variance is higher than the mean variance.
The E-DBA mechanism consists of two operations. In the first step, the OLT executes the DBA scheme after receiving the REPORT messages collected by the end of the ONU_{i-1} transmission. This operation reduces the idle period of the standard DBA algorithm and obtains up-to-date information for ONU units with unstable network traffic, improving the prediction accuracy in the next service cycle. In the second step, a time interval is assigned to each ONU unit based on the network traffic variances of all ONU units in decreasing order, and at the same time, the B_V group is updated by adding unstable ONU units with higher variances. This operation mitigates variance by shortening the waiting time before data transmission from ONU units with unstable network traffic [19].
The PFEBR algorithm calculates a variance for each ONU unit based on previous network traffic information and sorts the variances in decreasing order; in this way, an unstable degree list is acquired. The variance V_i of the ONU_i unit can be expressed as V_i = (1/N_H) ∑_{n∈PC} (B^Total_{i,n} − B^Mean_i)², where B^Total represents the sum of the differentiated network traffic classes (Assured Forwarding (AF), Best Effort (BE) and Expedited Forwarding (EF)) of the ONU_i unit in the given cycle n ∈ PC (Previous Cycles), B^Mean is the mean of the B^Total values, and N_H is the number of historical REPORT messages. The B_V group contains ONU units with a higher variance than the mean variance, calculated as V_Mean = (1/N) ∑_{i=1}^{N} V_i, where N represents the number of ONU units in the system. The E-DBA mechanism moves REPORT messages from the B_V group between previous and current ONU units. The PFEBR algorithm requests actual information from the unstable network traffic ONU list to avoid prediction inexactness.
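A minimal Python sketch of this variance bookkeeping is given below; the function and variable names and the history-window handling are illustrative assumptions rather than the paper's implementation.

```python
def onu_variance(history):
    """Variance V_i of one ONU's total request B_Total over the previous
    cycles (AF + BE + EF requests per cycle, window PC)."""
    b_mean = sum(history) / len(history)
    return sum((b - b_mean) ** 2 for b in history) / len(history)

def unstable_degree_list(histories, n_h=10):
    """Build the unstable degree list (UDL) and the B_V group.

    histories: dict ONU index -> list of its recent B_Total values
    n_h:       number of historical REPORT messages considered (N_H)
    Returns (udl, b_v): ONUs sorted by decreasing variance, and the
    subset whose variance exceeds the mean variance V_Mean.
    """
    variances = {i: onu_variance(h[-n_h:]) for i, h in histories.items()}
    v_mean = sum(variances.values()) / len(variances)
    udl = sorted(variances, key=variances.get, reverse=True)
    b_v = [i for i in udl if variances[i] > v_mean]
    return udl, b_v
```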
After data from all ONU units have been sent according to the unstable degree list (UDL), the PFEBR predicts the future network traffic requirement based on the bandwidth requests ordered by this UDL. Predicted requests R^C for the differentiated network traffic classes of all ONU units are obtained from the current requests and the linear estimated credit α, where B^C denotes the requested bandwidth of the ONU_i unit in the given cycle n for the differentiated network traffic classes C ∈ {AF, BE} [19]. The PFEBR algorithm executes the Excessive Bandwidth Reallocation (EBR) process after finishing the bandwidth prediction for each ONU unit, and it can provide a fair approach to the EBR in accordance with the guaranteed bandwidth. First, the fair EBR operation in the PFEBR must calculate R^Total for each ONU unit. The size of the available bandwidth B_Available is determined by the OLT line capacity C_cap (in bit/s), the maximum cycle interval T_cycle, the guard time g, the number of ONU units N, the number of ONU units N_V in the B_V group, and the control message length L_CM = 512 bits (64 octets). The ONU_i unit with the maximum residual bandwidth is then selected from the unallocated ONU units. The bandwidth G^Total_{i,n} granted to the ONU_i unit in the next cycle is determined by R^Total, the sum of the differentiated network traffic loading after prediction of the ONU_i unit in the given cycle n, and by the share S_i/∑_{k∈UN} S_k of the available bandwidth B_Available assigned to the ONU_i unit, where UN denotes the set of still unallocated ONU units. The granted bandwidth is then divided among the particular differentiated (EF, AF and BE) network traffic classes. This process continues until the bandwidth for each ONU unit is allocated. Finally, the PFEBR organizes a broadcasting sequence and a report time for each ONU unit based on the unstable degree list [19].
PFWBA Principles
Cooperation between the DWA algorithm and the PFEBR scheme can enhance the system performance in the following way. Before wavelengths are allocated, the PFEBR, based on the requests from all ONU units, determines the transmitting time of the given ONU unit in the current cycle. The PFWBA considers the unstable degree list and enhances the prediction accuracy when scheduling the transmitting sequence after collecting REPORT messages from all ONU units. First, the PFWBA scheme divides all ONU units into three groups based on their variances; then, it assigns a wavelength to each ONU unit gradually, group by group. The PFWBA defines two basic variables: the Channel Available Time (CAT), which represents the availability of a wavelength for broadcasting after the expiration of time t, and the Round Trip Time (RTT), which represents the time needed for signal transmission from the OLT terminal to an ONU unit and back. The PFWBA selects requested time intervals within the same group with a minimum transmission time. This scheduling process, representing the DWA, is described in detail in [19]; a minimal sketch follows.
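The following Python sketch illustrates one plausible reading of this greedy scheduling rule; the data structures and the exact tie-breaking are our assumptions, not the specification from [19].

```python
def assign_wavelengths(groups, cat, rtt, durations):
    """Greedy DWA sketch: within each group, give every ONU the wavelength
    on which its grant can start earliest.

    groups:    lists of ONU indices, ordered group by group (B_V first)
    cat:       Channel Available Time per wavelength (seconds)
    rtt:       dict ONU index -> round trip time OLT <-> ONU (seconds)
    durations: dict ONU index -> granted transmission time (seconds)
    """
    schedule = {}
    for group in groups:
        for onu in group:
            # earliest feasible start time on each wavelength
            w = min(range(len(cat)), key=lambda j: max(cat[j], rtt[onu]))
            start = max(cat[w], rtt[onu])
            cat[w] = start + durations[onu]   # wavelength busy until then
            schedule[onu] = (w, start)
    return schedule
```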
Simulation Program of the Advanced HPON Network Traffic Protection
We implemented a simulation program in Java, running in the Java Runtime Environment (JRE) and developed with the Eclipse Integrated Development Environment (IDE), that can simulate stochastic network traffic in the HPON network and can be utilized for providing traffic protection paths for different network traffic classes [23]. This program presents the activities of the basic PFWBA algorithm and its modifications from the viewpoint of network traffic protection schemes. Specifically, the program presents the dynamic bandwidth and wavelength allocation process in each service cycle managed by the OLT unit. The program also incorporates a generator of stochastic network traffic that produces particular requests for differentiated network traffic classes, which appear in the REPORT messages of individual ONU units. The simulation program works in cooperation with the Adobe Integrated Runtime (Adobe AIR) multiplatform runtime system, which simplifies programming of the graphical interface, where many input parameters can be predetermined, for example, the cycle duration, the guard time, the OLT capacity, the number of ONU units, the number of ONU units entering the unstable degree list, the number of wavelengths and the selected method for the wavelength allocation. There is also the possibility of step-by-step operation during the bandwidth and wavelength allocation process. Default input parameters at program initialization are presented in Table 1.
The first three input parameters are determined by the OLT terminal at the beginning of the network traffic transmission. They allow for the distribution of time slots without network collisions, and they remain unchanged until the HPON network architecture changes, in which case these values must be recalculated. As an example, if a specific line capacity of 1 Gbps per wavelength is assumed, the value of the available bandwidth after subtraction of guard times and REPORT messages can be calculated using Equation (5); B_Available is the maximum number of bytes that can be assigned in one cycle. If this number is multiplied by the number of cycles per second, the maximum available capacity can be determined. In the next step, the variance (the difference between the current and previous requests) according to Equation (1) must be considered; the median of the variances over all ONU units is therefore compared with the mean variance V_Mean. The first group, the ONU units with the largest variances, is entered into the unstable degree list, which means that the PFEBR algorithm predicts large changes in their requests and accommodates them first. The second group consists of ONU units with variance larger than the median that are not included in the unstable degree list. The third group consists of ONU units with variance smaller than the median; the PFEBR algorithm assumes that these units will exhibit no large change in requests, so data are assigned to this group last, in decreasing order of variance. In addition, the PFEBR algorithm ensures fair bandwidth allocations: if more ONU units are included in the UDL, the PFEBR will be less fair.
Because the PFEBR algorithm belongs to the one-level prediction techniques, it predicts the bandwidth allocated to particular ONU units. The prediction is based on the linear credit, a ratio of the request in the given (n) cycle to the total transmitting time of all ONU units in the previous (n - 1) cycle. Therefore, the prediction for a given network traffic class is larger than the requested transmission for each ONU unit. In the final step, the bandwidth for ONU units is allocated based on Equations (7)-(9).
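A hedged sketch of this one-level prediction step follows; the exact form of the linear credit is our reading of the description above and should be treated as an assumption.

```python
def predict_request(b_class_current, total_time_prev_cycle):
    """One-level prediction sketch for one traffic class of one ONU.

    The linear estimated credit alpha is read here as the ratio of the
    current request to the total transmitting time of all ONUs in the
    previous cycle; the predicted request R_C = B_C * (1 + alpha) is then
    always larger than the raw request, as the text describes.
    """
    alpha = b_class_current / total_time_prev_cycle
    return b_class_current * (1 + alpha)
```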
The bandwidth allocation is realized in groups, i.e., time intervals are first allocated to ONU units from the B_V group and finally to ONU units with smaller variance. When the OLT unit assigns time intervals to ONU units, it must also assign wavelengths. Without knowing the assigned time intervals, an allocation of wavelengths could be very ineffective. Therefore, the PFEBR algorithm is performed before the DWA algorithm. Three methods for the wavelength allocation are analyzed.
Method 1-Uniform Utilization of Wavelengths
The first method for advanced network traffic protection is characterized by a uniform utilization of all available wavelengths. Its disadvantage is that the transmitters and receivers tuned to the considered wavelengths must be turned on in all ONU units utilized in the HPON network. On the other hand, its advantage is computational simplicity. For specific network traffic classes, two wavelengths are utilized simultaneously for network traffic protection; in this case, both wavelengths serve as working and protection paths. As a consequence, only half of the transmission capacity can be utilized in practice. The measurement data based on method 1 are shown in Table 2. The wavelength allocation using method 1 is shown in Figure 1.
Method 2-Non-Uniform Utilization of Wavelengths
The second method for advanced network traffic protection tries to utilize only one wavelength; the second one is not activated until the network traffic exceeds the line capacity of the first wavelength. This is carried out by subtracting the transmitting times of particular ONU units from the CAT parameter. A disadvantage of our proposed method is its higher computing intensity compared with method 1. The measurement data based on method 2 are shown in Table 3. The wavelength allocation using method 2 is shown in Figure 2 (Number of Wavelengths: 2; Available Bandwidth: 972,952 Mbps; 243,238 Bytes/cycle; Total Grant: 138,226 Bytes; Throughput: 28%). For specific network traffic classes, the first wavelength is realized as the working path, while the second one is considered the protection path. A great advantage is the power saving when the second wavelength is not utilized; therefore, besides realizing advanced network traffic protection, substantial power savings in ONU units can be achieved. In practice, this power saving increases with a higher number of ONU units in HPON networks. A sketch of the allocation rule follows.
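The sketch below illustrates the overflow-triggered activation of the protection wavelength; the per-cycle capacity bookkeeping is a simplifying assumption for illustration.

```python
def allocate_non_uniform(grants, cycle_capacity):
    """Method 2 sketch: fill the working wavelength first; the protection
    wavelength is activated only when the per-cycle capacity would be
    exceeded.

    grants: list of (onu, size_bytes) pairs in allocation order
    """
    remaining = [cycle_capacity, cycle_capacity]   # [working, protection]
    assignment = []
    for onu, size in grants:
        wavelength = 0 if remaining[0] >= size else 1
        remaining[wavelength] -= size
        assignment.append((onu, wavelength))
    return assignment
```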
Method 3-Fixed Wavelength Priority Bandwidth Allocation (FWBPA)
This method can be applied in applications that are very sensitive to delay. Because the third method for advanced network traffic protection works with three different wavelengths, each network traffic class has its own dedicated wavelength, which ensures a small total delay. A disadvantage is the separation of the data stream from the ONU unit, which is transmitted as a single stream in methods 1 and 2. Moreover, method 3 evaluates the RTT parameter for each network traffic class separately, i.e., three times in total. The FWBPA method is therefore not as effective as method 2, but it allows for the optimization of packet losses and delay for the AF network traffic that dominates in present-day access networks [20]. The wavelength allocation using method 3 is shown in Figure 3. For specific network traffic classes, three wavelengths are utilized simultaneously. Each wavelength serves as the working path for a specific network traffic class and simultaneously as the protection path for the other two classes in the case of a wavelength failure. If the channel capacity is sufficient, the allocated bandwidth is always higher than the requested bandwidth; this is caused by the linear credit, which is always positive. As the PFWBA algorithm supports Quality of Service (QoS) requirements, the delay-sensitive EF and AF network traffic classes are preferred over the non-sensitive BE class. Data not transmitted in the current cycle will be included in the next cycle and transmitted if the channel capacity is sufficient. Generally, BE traffic does not require any guarantees on bandwidth or transmission delay.
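A minimal sketch of the fixed class-to-wavelength mapping with protection fallback is shown below; the concrete mapping and the fallback order are illustrative assumptions.

```python
CLASS_WAVELENGTH = {"EF": 0, "AF": 1, "BE": 2}   # fixed working paths (assumed order)

def route(traffic_class, failed=()):
    """FWBPA sketch: each class rides its own wavelength; on a wavelength
    failure the traffic falls back to one of the two surviving ones."""
    w = CLASS_WAVELENGTH[traffic_class]
    if w in failed:
        w = next(j for j in range(3) if j not in failed)
    return w
```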
Conclusions
In this paper, the enhancement of HPON network traffic protection schemes based on the basic features of the PFWBA algorithm is analyzed. The PFWBA algorithm, based on prediction, allows for dynamic and effective utilization of network transmission capacities. Therefore, the selected PFWBA algorithm belongs among the algorithms that could be implemented for network traffic protection in hybrid passive optical networks. Using the realized simulation program, three methods were analyzed and evaluated from the viewpoint of traffic protection possibilities. The first method presents an original approach to the PFEBR algorithm adapted to network traffic protection schemes. The second method introduces a new approach to wavelength allocation based on network traffic protection assumptions. The third method, based on fixed wavelength priority bandwidth allocation, is accommodated directly to specific network traffic classes.
For advanced network traffic protection schemes in HPON networks, two or three wavelengths can be simultaneously utilized. There exist two basic approaches to utilizing wavelength allocation. In the first approach, the same wavelength can be realized as the working and protection path for specific network traffic classes (a method with uniform utilization of wavelengths and the FWBPA method). In the second approach, different wavelengths are utilized as the working and protection paths for all network traffic classes (a method with the non-uniform utilization of wavelengths). Results from the simulation program show that our proposed method with the non-uniform utilization of wavelengths seems to be more effective from the viewpoint of network traffic protection compared with other considered methods.
Based on the simulation results obtained, we intend to implement the presented methods of wavelength allocation in real HPON systems with considered traffic protection schemes. In some scenarios, each ONU can also process differentiated network traffic classes by varying the number of resources for a defined number of wavelengths. Therefore, the effect of using a different number of resources requested by different traffic classes will be considered in future works.
Funding:
The financial support of the project "Network Service Availability Threat Analysis, Detection and Mitigation", No. FW01010474, was granted by the Technology Agency of the Czech Republic. The paper was also supported by research activities in the project KEGA 034STU-4/2021.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Self-duality for $\cal N$-extended superconformal gauge multiplets
We develop a general formalism of duality rotations for $\cal N$-extended superconformal gauge multiplets in conformally flat backgrounds as an extension of the approach given in arXiv:2107.02001. Additionally, we construct $\mathsf{U}(1)$ duality-invariant models for the ${\mathcal N}=2$ superconformal gravitino multiplet recently described in arXiv:2305.16029. Each of them is automatically self-dual with respect to a superfield Legendre transformation. A method is proposed to generate such self-dual models, including a family of superconformal theories.
Motivated by the discovery of the ModMax theory [18] and its supersymmetric counterpart [19,20], the formalism of U(1) duality rotations has recently been extended in [21] to higher-spin conformal gauge fields on conformally flat backgrounds and some of their $\mathcal{N} = 1$ and $\mathcal{N} = 2$ superconformal cousins. Specifically, the following types of U(1) duality-invariant dynamical systems were studied in [21]:
• Self-dual models for a conformal field $\phi_{\alpha(m)\dot\alpha(n)}$, $m, n \geq 1$, with the gauge freedom (2.1), where $\nabla_{\alpha\dot\alpha} = (\sigma^a)_{\alpha\dot\alpha}\nabla_a$ is a conformally covariant derivative. Here $\nabla_a = e_a{}^m \partial_m - \frac{1}{2}\omega_a{}^{bc} M_{bc}$, where $e_m{}^a$ and $\omega_a{}^{bc}$ denote the vielbein and spin connection, respectively. The corresponding actions are functionals of the gauge-invariant field strengths $\hat{C}^{[\Delta]}_{\alpha(m+n)}$ and $\check{C}^{[\Delta]}_{\alpha(m+n)}$, which are defined by (2.3) and have the conformal properties (2.4).
• Self-dual models for an $\mathcal{N} = 1$ superconformal real prepotential $\Upsilon_{\alpha(s)\dot\alpha(s)} = \bar\Upsilon_{\alpha(s)\dot\alpha(s)}$, with $s > 0$, defined modulo gauge transformations formulated in terms of the $\mathcal{N} = 1$ superconformally covariant derivatives $\nabla_A = (\nabla_a, \nabla_\alpha, \bar\nabla^{\dot\alpha})$. The action of such a model is a functional of the gauge-invariant chiral field strength and its conjugate. The $s = 0$ case corresponds to a vector multiplet, and the corresponding duality-invariant models for the $\mathcal{N} = 1$ vector multiplet were studied in [6-8].
At the same time, $\mathcal{N}$-extended superconformal gauge-invariant models have been constructed [22] to describe the dynamics of a complex tensor superfield $\Upsilon_{\alpha(m)\dot\alpha(n)}$, with $m, n \geq 0$, in a conformally flat superspace. Such a prepotential is defined modulo the gauge transformations (3.1), and the corresponding gauge-invariant chiral field strengths $\hat{W}$ are given in (3.5). One of the main goals of this paper is to develop a general formalism of U(1) duality rotations for these $\mathcal{N}$-extended superconformal gauge multiplets in conformally flat backgrounds. As follows from (1.5) and (3.5), the chiral field strengths carry at least two spinor indices in the $\mathcal{N} = 2$ case. A chiral scalar field strength $W$ is known to correspond to a vector multiplet [24]. The only missing choice of chiral spinor field strengths, $\hat{W}_\alpha$ and $W_\alpha$, has been shown to correspond to the $\mathcal{N} = 2$ superconformal gravitino multiplet discovered in [25]. The second goal of this work is to construct duality-invariant models for this multiplet. This paper is organised as follows. In section 2, we review the formalism of U(1) duality rotations for conformal gauge fields. Then, in section 3, we extend this formalism to the case of $\mathcal{N}$-extended superconformal gauge multiplets. Finally, section 4 is devoted to the construction of U(1) duality-invariant models for the $\mathcal{N} = 2$ superconformal gravitino multiplet. The main body of this paper is accompanied by a single technical appendix, appendix A, which is devoted to a reduction of the free action for the $\mathcal{N} = 2$ superconformal gravitino multiplet to $\mathcal{N} = 1$ superspace.
All results in this work are presented within the framework of conformal (super)space. In the non-supersymmetric case, we employ the approach of [26] recast in the modern setting of [27]. For the $\mathcal{N} = 1$ and $\mathcal{N} = 2$ superconformal cases, we refer the reader to the original publications [28] and [29], respectively, as well as to [30,31] for a recent review, whose conventions we utilise. For $\mathcal{N} > 2$, our conventions coincide with those of [32].
The two-component spinor notation and conventions we employ in this work follow [33], which are similar to those of [34]. Additionally, throughout this paper we make use of the convention whereby indices denoted by the same symbol are to be symmetrised over, e.g. $U_{\alpha}V_{\alpha} := U_{(\alpha_1}V_{\alpha_2)} = \frac{1}{2}\big(U_{\alpha_1}V_{\alpha_2} + U_{\alpha_2}V_{\alpha_1}\big)$, with a similar convention for dotted spinor indices.
Conformal gauge fields
This section is devoted to a review of the formalism of U(1) duality rotations for conformal gauge fields developed in [21]. We recall that such a field, $\phi_{\alpha(m)\dot\alpha(n)}$, with $m, n \geq 1$, is defined modulo the gauge transformations (2.1) [35,39,40], where $\nabla_{\alpha\dot\alpha} = (\sigma^a)_{\alpha\dot\alpha}\nabla_a$ is a conformally covariant derivative. It should be pointed out that, for $m = n = s$, it may be consistently restricted to be real, $\phi_{\alpha(s)\dot\alpha(s)} = \bar\phi_{\alpha(s)\dot\alpha(s)}$. Consistency of (2.1) with conformal symmetry implies that $\phi_{\alpha(m)\dot\alpha(n)}$ is a primary field of dimension $2 - \frac{1}{2}(m+n)$, i.e. it is annihilated by $K_a$ and carries a definite weight under $D$, where $K_a$ and $D$ denote the special conformal and dilatation generators, respectively.
From $\phi_{\alpha(m)\dot\alpha(n)}$ we construct the field strengths $\hat{C}^{[\Delta]}_{\alpha(m+n)}$ and $\check{C}^{[\Delta]}_{\alpha(m+n)}$ defined in (2.3); in the $m = n \equiv s$ case the field strengths simplify accordingly. The descendants (2.3) are primary in generic backgrounds, eq. (2.4b), and gauge-invariant in all conformally flat ones, $C_{abcd} = 0$, where $C_{abcd}$ denotes the background Weyl tensor. In what follows, we will assume such a geometry.
It follows from (2.4) that the dimensions of the field strengths (2.3) are determined by $\Delta$, which explains why the field strengths carry the label $\Delta$. We also note that the field strengths (2.3) are related via the Bianchi identity (2.6).

U(1) duality rotations for conformal gauge fields

We consider a dynamical system describing the propagation of $\phi_{\alpha(m)\dot\alpha(n)}$. The corresponding action functional, which we denote $S^{(m,n)}[\hat{C}, \check{C}]$, is assumed to depend only on $\hat{C}^{[\Delta]}_{\alpha(m+n)}$, $\check{C}^{[\Delta]}_{\alpha(m+n)}$ and their conjugates. Considering $S^{(m,n)}[\hat{C}, \check{C}]$ as a functional of these unconstrained fields and their conjugates, we may introduce the primary dual fields (2.7); the variational derivative entering their definition is normalised with respect to $e$, the determinant of the (inverse) vielbein, and the fields (2.7) are primary with conformal properties fixed accordingly. Varying $S^{(m,n)}[\hat{C}, \check{C}]$ with respect to $\bar\phi_{\dot\alpha(n)\alpha(m)}$ yields the equation of motion (2.10). It is clear from the discussion above that the system of equations (2.6) and (2.10) is invariant under the U(1) duality rotations (2.11), where $\lambda = \bar\lambda$ is an arbitrary constant parameter. One may then construct U(1) duality-invariant nonlinear models for such fields. They may be shown to satisfy the self-duality equation (2.12) [21], which must hold for unconstrained fields $\hat{C}^{[\Delta]}_{\alpha(m+n)}$ and $\check{C}^{[\Delta]}_{\alpha(m+n)}$. The simplest solution of this equation is the free action [36-40]. The $m = n = 1$ case in (2.12) corresponds to nonlinear electrodynamics, for which the integral form of the self-duality equation was given for the first time in [7]. In the earlier publications [2,4,5,41] it was given in the differential form (2.14), where $L(F)$ is the Lagrangian of the electromagnetic field. As emphasised in [7], the integral form of the self-duality equation must be used in theories with higher derivatives. Duality-invariant theories with higher derivatives were studied, e.g., in [10,42-44].
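For orientation, the U(1) duality rotations referred to above are of the standard Gaillard-Zumino type; the following display is a sketch of this structure, with $\hat{M}$ denoting the dual field introduced via (2.7) (the label is ours) and with the precise normalisations to be read off from (2.11):

$$\delta_\lambda \hat{C}_{\alpha(m+n)} = \lambda\, \hat{M}_{\alpha(m+n)}\,, \qquad \delta_\lambda \hat{M}_{\alpha(m+n)} = -\lambda\, \hat{C}_{\alpha(m+n)}\,,$$

which exponentiates to the finite rotation

$$\begin{pmatrix} \hat{C}' \\ \hat{M}' \end{pmatrix} = \begin{pmatrix} \cos\lambda & \sin\lambda \\ -\sin\lambda & \cos\lambda \end{pmatrix} \begin{pmatrix} \hat{C} \\ \hat{M} \end{pmatrix}\,.$$

In particular, $\lambda = \pi/2$ maps $(\hat{C}, \hat{M}) \to (\hat{M}, -\hat{C})$; this is precisely the rotation used below to relate U(1) duality invariance to self-duality under Legendre transformations.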
Self-duality under Legendre transformations
In the case of nonlinear (super) electrodynamics, U(1) duality invariance implies self-duality under Legendre transformations; see [7] for a review and [21] for the extension to bosonic higher spins. In this subsection, we will show that this property extends to the present case.
First, we describe the Legendre transformation for a theory described by the action $S^{(m,n)}[\hat{C}, \check{C}]$. To this end, we introduce the parent action (2.15), in which $\hat{C}^{[\Delta]}_{\alpha(m+n)}$ and $\check{C}^{[\Delta]}_{\alpha(m+n)}$ are unconstrained fields, while the dual field strengths are built from a Lagrange multiplier field $\phi^{(\rm D)}_{\alpha(m)\dot\alpha(n)}$. Now, if one varies eq. (2.15) with respect to $\phi^{(\rm D)}_{\alpha(m)\dot\alpha(n)}$, the resulting equation of motion is exactly the Bianchi identity (2.6), whose general solution is given by (2.3). As a result, we recover the original self-dual model.
Next, we vary the parent action (2.15) with respect to the unconstrained fields $\hat{C}^{[\Delta]}_{\alpha(m+n)}$ and $\check{C}^{[\Delta]}_{\alpha(m+n)}$. The resulting equations of motion may be solved for these fields, and inserting the solution into (2.15) we obtain the dual action (2.18). Now, assuming that the action $S^{(m,n)}[\hat{C}, \check{C}]$ satisfies the self-duality equation (2.12), we will show that it coincides with the dual action (2.18),

$$S^{(m,n)}[\hat{C}, \check{C}] = S^{(m,n)}_{\rm D}[\hat{C}, \check{C}]\,. \qquad (2.19)$$

A routine calculation allows one to show that the functional (2.20) is invariant under the infinitesimal U(1) rotations (2.11). Hence, it must also be invariant under the corresponding finite duality transformations. Performing this transformation on the functional (2.20) with $\lambda = \pi/2$ and inserting the resulting expression into (2.18), we obtain (2.19). Thus, the Lagrangian associated with the self-dual theory is invariant under Legendre transformations.
Superconformal gauge multiplets
In this section we extend the formalism of duality rotations for conformal gauge fields reviewed above to the case of $\mathcal{N}$-extended superconformal gauge multiplets in conformally flat backgrounds. We recall that the latter are described by complex tensor superfields $\Upsilon_{\alpha(m)\dot\alpha(n)}$, $m, n \geq 0$. They are defined modulo the gauge transformations (3.1) [22,23], where $\nabla_A$ are the covariant derivatives of $\mathcal{N}$-extended conformal superspace and certain second-order operators built from them appear. It should be pointed out that, for $\mathcal{N} = 2$, the transformation law (3.1c) describes the linearised $\mathcal{N} = 2$ conformal supergravity multiplet [45]. Further, for $\mathcal{N} = 1$, the transformations (3.1) are equivalent to those given in [39,40,46]. The requirement that (3.1) be consistent with superconformal symmetry implies that $\Upsilon_{\alpha(m)\dot\alpha(n)}$ is primary, $K^B \Upsilon_{\alpha(m)\dot\alpha(n)} = 0$, with dilatation weight and U(1)$_R$ charge fixed accordingly, where $K^B$ and $\mathbb{Y}$ denote the special superconformal and U(1)$_R$ generators, respectively. We note that, if $m = n = s$, the gauge prepotential $\Upsilon_{\alpha(s)\dot\alpha(s)}$ may be consistently restricted to be real, $\Upsilon_{\alpha(s)\dot\alpha(s)} = \bar\Upsilon_{\alpha(s)\dot\alpha(s)}$. In this case, the gauge transformations (3.1) reduce accordingly. It should be pointed out that the flat-superspace version of eq. (3.4a) first appeared in [47].
From $\Upsilon_{\alpha(m)\dot\alpha(n)}$ and its conjugate $\bar\Upsilon_{\alpha(n)\dot\alpha(m)}$, we may construct the chiral field strengths (3.5), where we recall that $\Delta = m - n$; the operators entering (3.5) and the totally antisymmetric tensor used there are defined in the text. Their superconformal transformation laws are characterised by definite primary properties. Further, on conformally flat backgrounds, these descendants are gauge-invariant. In what follows, we will assume such a geometry. It is important to note that, in such backgrounds, the field strengths (3.5) obey the Bianchi identity (3.9).
U(1) duality rotations for superconformal gauge superfields
Considering $S^{(m,n;\mathcal{N})}[\hat{W}, W]$ as a functional of the chiral, but otherwise unconstrained, superfields and their conjugates, we define the dual tensors (3.10), where the variational derivative is defined with respect to the chiral measure $\mathcal{E}$. The superconformal transformation laws of the dual superfields (3.10) are characterised by analogous primary properties. Additionally, varying $S^{(m,n;\mathcal{N})}[\hat{W}, W]$ with respect to $\bar\Upsilon_{\alpha(n)\dot\alpha(m)}$ yields the equation of motion (3.13). It is clear that the Bianchi identity (3.9) and the equation of motion (3.13) are together invariant under the U(1) duality rotations (3.14), where $\lambda = \bar\lambda$ is an arbitrary constant parameter. A routine analysis then leads to the self-duality equation (3.15) for $S^{(m,n;\mathcal{N})}[\hat{W}, W]$. We emphasise that this equation must hold for chiral, but otherwise unconstrained, superfields $\hat{W}^{[\Delta]}_{\alpha(m+n+\mathcal{N})}$ and $W^{[\Delta]}_{\alpha(m+n+\mathcal{N})}$. The simplest solution of the self-duality equation (3.15) is the free action (3.16). For $\mathcal{N} = 1$, the flat-superspace version of (3.16) was first given in [39] and then extended to general conformally flat backgrounds in [40]. The extension to $\mathcal{N} > 1$ followed shortly thereafter [22].
Self-duality under Legendre transformations
In section 2.2, we extended the well-known result that U(1) duality invariance implies self-duality under Legendre transformations to the case of general conformal gauge fields. Here, we will extend this result to the case of a superconformal gauge multiplet.
We begin by introducing the parent action (3.17). Here $\Upsilon^{(\rm D)}_{\alpha(m)\dot\alpha(n)}$ is a Lagrange multiplier superfield; if one varies eq. (3.17) with respect to $\Upsilon^{(\rm D)}_{\alpha(m)\dot\alpha(n)}$, the resulting equation of motion is exactly the Bianchi identity (3.9), whose general solution is given by (3.5). Consequently, we recover the original model.
Next, varying (3.17) with respect to the chiral, but otherwise unconstrained, superfields yields equations of motion which we may solve; inserting this solution into (3.17), we obtain the dual action (3.21). A routine calculation allows one to show that the functional (3.23) is invariant under the infinitesimal U(1) rotations (3.14). Hence, it must also be invariant under the corresponding finite duality transformations. Performing this transformation on (3.23) with $\lambda = \pi/2$ and inserting the resulting expression into (3.21), we obtain (3.22).

The N = 2 superconformal gravitino multiplet

Consider a dynamical system describing the propagation of the $\mathcal{N} = 2$ superconformal gravitino multiplet in curved superspace; see e.g. [31] for a review of the latter. The associated action functional $S[\hat{W}, W]$ is assumed to depend on the chiral field strengths $\hat{W}_\alpha$, $W_\alpha$ and their conjugates, which are defined in (4.1). Here $\nabla_A = (\nabla_a, \nabla^i_\alpha, \bar\nabla^{\dot\alpha}_i)$ denote the $\mathcal{N} = 2$ conformally covariant derivatives, and the gauge prepotential $\Upsilon^i$ is defined modulo gauge transformations of the form (4.2). The gauge transformation (4.2) is superconformal provided $\Upsilon^i$ is characterised by the properties (4.3), which imply that the field strengths are primary in general curved backgrounds. However, the gauge transformation (4.2) leaves the field strengths (4.1) invariant only in conformally flat backgrounds, $W_{\alpha\beta} = 0$, where $W_{\alpha\beta}$ denotes the $\mathcal{N} = 2$ super-Weyl tensor. Such a geometry will be assumed in what follows. It is important to note that the field strengths (4.1) satisfy the Bianchi identity (4.6).
U(1) duality-invariant models
We now consider $S[\hat{W}, W]$ as a functional of the unconstrained fields $\hat{W}_\alpha$, $W_\alpha$ and their conjugates. This allows us to introduce the dual chiral superfields (4.7), which are primary with definite superconformal properties. Varying $S[\hat{W}, W]$ with respect to $\Upsilon^i$ yields the equation of motion (4.10), whose functional form mirrors that of the Bianchi identity (4.6).
It is clear from the discussion above that the system of equations (4.6) and (4.10) is invariant under U(1) duality rotations of the same type as before. One may construct U(1) duality-invariant models for $\Upsilon^i$; their actions satisfy the corresponding self-duality equation, which must hold for unconstrained fields $\hat{W}_\alpha$ and $W_\alpha$. The simplest solution of this equation is the free action.
Self-duality under Legendre transformations
We begin by describing a Legendre transformation for a generic theory with action $S[\hat{W}, W]$. For this we introduce the parent action (4.14). Here $\hat{W}_\alpha$ and $W_\alpha$ are chiral, but otherwise unconstrained, superfields, while the dual field strengths are built from a Lagrange multiplier superfield $\Upsilon^i_{\rm D}$. Indeed, upon varying (4.14) with respect to $\Upsilon^i_{\rm D}$ one obtains the Bianchi identity (4.6), and its general solution is given by eq. (4.1) for some primary isospinor superfield $\Upsilon^i$ defined modulo the gauge transformations (4.2) and characterised by the superconformal properties (4.3). As a result, the second term in (4.14) becomes a total derivative, and we end up with the original model. Alternatively, if we first vary the parent action with respect to $\hat{W}_\alpha$ and $W_\alpha$, the resulting equations of motion may be solved to express $\hat{W}_\alpha$ and $W_\alpha$ in terms of the dual field strengths. Inserting this solution into (4.12), we obtain the dual model. Now, given an action $S[\hat{W}, W]$ satisfying the self-duality equation (4.9), our aim is to show that it satisfies (4.18), which means that the corresponding Lagrangian is invariant under Legendre transformations.
A routine calculation allows one to show that the functional (4.19) is invariant under (4.8). The latter may be exponentiated to obtain the finite U(1) duality transformations. Performing such a transformation with $\lambda = \pi/2$ on (4.19) and inserting the resulting expression into (4.14), we obtain (4.18).
Auxiliary superfield formulation
The duality-invariant models described in section 4.1 may be reformulated using an auxiliary superfield approach in the spirit of [16,17]. Such a formulation was recently employed in the study of duality-invariant models for the $\mathcal{N} = 2$ vector multiplet [20] and superconformal higher-spin multiplets [21].
The starting point for such an analysis is the action functional (4.22). Here we have introduced a pair of auxiliary superfields, denoted $\hat\eta_\alpha$ and $\check\eta_\alpha$, which are characterised by appropriate superconformal properties. Their equations of motion are algebraic and, employing perturbation theory, allow one to express $\hat\eta_\alpha$ and $\check\eta_\alpha$ as functions of $\hat{W}_\alpha$, $W_\alpha$ and their conjugates. This means that (4.22) is equivalent to a theory whose interaction takes the form (4.28). Here $F(x, y)$, $G(x, y)$ and $H(x, y)$ are dimensionless, real functions of two real variables, and $\Xi = \bar\Xi$ is a conformal compensator with definite superconformal properties. In (4.28) we have also introduced primary, dimensionless and uncharged descendants of the field strengths. We emphasise that, unless the real functions above are chosen such that the functional (4.28) is $\Xi$-independent, the latter describes a non-superconformal theory. Below, the superconformal case will be studied in more detail.
Superconformal duality-invariant models
As discussed above, the interaction (4.28) describes a superconformal theory if the associated functions $F$, $G$ and $H$ are chosen such that (4.28) is independent of $\Xi$. This is the case if the three functions are determined by a single real function $h(x)$ of a real variable, which leads to the superconformal models (4.31).
It is well known that the requirement of conformal invariance uniquely singles out the ModMax theory [18] within the family of U(1) duality-invariant models for nonlinear electrodynamics without higher derivatives, eq. (2.14). This uniqueness is no longer present if the Maxwell field (conformal spin $s = 1$) is replaced by a conformal higher-spin field with $s > 1$. Even in the conformal graviton case (spin $s = 2$), Ref. [21] constructed a two-parameter family of conformal U(1) duality-invariant models. Furthermore, the $\mathcal{N} = 1$ supersymmetric ModMax theory [19,20] is the unique superconformal representative in the family of U(1) duality-invariant models for nonlinear supersymmetric electrodynamics proposed in [6] (see eq. (2.10) in [6] or eq. (2.22) in [19]). However, for more general supersymmetric models of the functional form (2.10) in [19], the recent paper [50] constructed a family of $\mathcal{N} = 1$ superconformal vector multiplet models with U(1) duality invariance. It is therefore not surprising that the superconformal U(1) duality-invariant model (4.31) has nontrivial functional freedom.
Concluding comments
In this paper, we have developed the general formalism of U(1) duality rotations for the $\mathcal{N}$-extended superconformal gauge multiplets $\Upsilon_{\alpha(m)\dot\alpha(n)}$, $m, n \geq 0$. We recall that associated with each such multiplet is a pair of chiral field strengths $\hat{W}^{[\Delta]}_{\alpha(m+n+\mathcal{N})}$ and $W^{[\Delta]}_{\alpha(m+n+\mathcal{N})}$ carrying at least $\mathcal{N}$ spinor indices. Hence, as discussed in [22], for any fixed $\mathcal{N} > 1$, chiral field strengths carrying fewer than $\mathcal{N}$ indices do not originate from this family of gauge prepotentials. The $\mathcal{N} = 2$ story was recently completed in [25], where the spinor field strengths (4.1) were shown to describe the superconformal gravitino multiplet. The completion of this analysis for $\mathcal{N} > 2$ remains an open problem, which would be interesting to revisit in the future.
Building on the results of [25], in section 4 we extended the formalism of duality rotations to the case of the $\mathcal{N} = 2$ superconformal gravitino multiplet. This formalism was then utilised, in conjunction with an auxiliary superfield formulation developed in section 4.3, to describe new nonlinear models (4.28), including a family of superconformal ones (4.31).
As discussed in the main body of this work, every duality-invariant model arises as a solution of the so-called 'self-duality equation'. We reiterate that the self-duality equation for a conformal gauge field $\phi_{\alpha(m)\dot\alpha(n)}$ takes the form (5.1); see section 2 for the appropriate definitions. One of the main results of this work is a supersymmetric extension of (5.1), in the sense that it describes U(1) duality-invariant models for the superconformal gauge multiplets $\Upsilon_{\alpha(m)\dot\alpha(n)}$. Remarkably, we found that the functional form of the former is similar to that of (5.1), namely (5.2). As aforementioned, there exist more general superconformal gauge multiplets than those described by the superfields $\Upsilon_{\alpha(m)\dot\alpha(n)}$. Specifically, in the $\mathcal{N} = 2$ case, the vector and superconformal gravitino multiplets do not belong to this family. In spite of this, the functional form of their self-duality equations (see [6,7,9] for the vector multiplet and (4.12) for the gravitino multiplet) agrees with that of (5.2). This leads us to believe that the latter is somewhat universal.
In section 4.3, inspired by the approaches of [16,17], we described how duality-invariant models for the $\mathcal{N} = 2$ superconformal gravitino multiplet can be reformulated via an auxiliary superfield approach. This proved to be a powerful technique for generating such models. Below, we sketch such a formulation for duality-invariant models describing the gauge superfields $\Upsilon_{\alpha(m)\dot\alpha(n)}$ introduced above. The starting point for such an analysis is the action functional (5.3). Here we have introduced a pair of chiral auxiliary superfields, denoted $\hat\eta$ and $\check\eta$, which are characterised by the same superconformal properties as $\hat{W}$ and $W$, respectively. By construction, the self-interaction $S_{\rm Int}[\hat\eta, \check\eta]$ contains cubic and higher powers of $\hat\eta$, $\check\eta$ and their conjugates. Varying (5.3) with respect to the auxiliary superfields yields algebraic equations of motion which allow one to express $\hat\eta$ and $\check\eta$ as functions of $\hat{W}$, $W$ and their conjugates. This means that (5.3) is equivalent to the self-dual theory described by (5.4), where $S_{\rm Int}[\hat{W}, W]$ contains cubic and higher powers of $\hat{W}$, $W$ and their conjugates.
This formulation is especially powerful, as the self-duality equation (5.2) takes a remarkably simple form. Specifically, U(1) duality invariance is equivalent to the requirement that $S_{\rm Int}[\hat\eta, \check\eta]$ is invariant under rigid U(1) phase transformations,

$$S_{\rm Int}[e^{i\varphi}\hat\eta, e^{i\varphi}\check\eta] = S_{\rm Int}[\hat\eta, \check\eta]\,, \qquad \varphi \in \mathbb{R}\,. \qquad (5.5)$$

Within the Gaillard-Zumino approach to self-dual nonlinear electrodynamics [1-5], duality rotations are symmetries of the equations of motion. There exist different approaches in which duality transformations are symmetries of the action [55-59]. So far these approaches have not been generalised to include the (super)conformal higher-spin theories studied in our paper. The Pasti-Sorokin-Tonin (PST) formalism in four and higher dimensions (see, e.g., [55,56] and references therein) has been truly successful in formulating (locally) supersymmetric theories with on-shell supersymmetry. It suffices to mention the construction of the covariant action for the super-five-brane of M-theory [60]. There exist several off-shell superfield generalisations of the PST formalism; see e.g. [61,62]. It remains an interesting open problem to develop a superfield generalisation of the PST formalism for the U(1) duality-invariant models of nonlinear $\mathcal{N} = 1$ supersymmetric electrodynamics [6,7].
"Physics",
"Mathematics"
] |
Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation
Parameter-efficient fine-tuning (PEFT) methods have provided an effective way for adapting large vision-language models to specific tasks or scenarios. Typically, they learn a very small scale of parameters for pre-trained models in a white-box formulation, which assumes model architectures to be known and parameters to be accessible. However, large models are often not open-source due to considerations of preventing abuse or commercial factors, hence posing a barrier to the deployment of white-box PEFT methods. To alleviate the dependence on model accessibility, we introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models. Specifically, considering that the backpropagation gradients are blocked, we approximate the gradients of textual prompts by analyzing the predictions with perturbed prompts. Secondly, a lightweight adapter is deployed over the output feature of the inaccessible model, further facilitating the model adaptation process. Empowered with these designs, our CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods. Code is released at https://github.com/guozix/cbbt.
Introduction
Large-scale vision-language (VL) models (Radford et al., 2021; Jia et al., 2021; Li et al., 2021; Yao et al., 2021; Alayrac et al., 2022; Yuan et al., 2021) have demonstrated remarkable performance in a wide range of applications. Various model fine-tuning methods have been proposed to exploit the potential of pre-trained VL models for downstream vision (Zhou et al., 2022b; Lu et al., 2022b; Wang et al., 2022; Sun et al., 2022c; Zhang et al., 2022; Wortsman et al., 2022; Li et al., 2023) and natural language processing (Lu et al., 2022a; Yan et al., 2022) tasks. Most existing methods conduct parameter-efficient fine-tuning (PEFT) (Houlsby et al., 2019), which updates a tiny fraction of the model parameters or introduces a small number of extra parameters for tuning, in order to transfer pre-trained knowledge in a computation- and data-efficient manner.
Although impressive improvements have been achieved, standard PEFT methods need to pass signals forward and backward through the entire pre-trained model to update the parameters, which relies on the availability of the architecture, parameters, and even the inference source code of the model. Nevertheless, the trend of building machine learning models as a service leads to many proprietary services that only provide an API interface for model inference, e.g., ChatGPT, Bard, and GPT-4, where the parameters and inference code of the models are not open-source due to commercial or safety considerations. Under such black-box circumstances, existing PEFT methods can hardly be adopted. Thus, it is worthwhile to develop methods that can tune pre-trained VL models in a black-box setting. Moreover, in the era of large foundation models, running super large pre-trained models on local devices can be very costly as the scale of pre-trained models has constantly increased. Although existing PEFT methods restrict learnable parameters to a fairly small scale, it is still a burden for most users to accommodate models with billions of parameters within limited computing resources.
To tackle the problem of tuning black-box VL models, a few very recent efforts exist. For instance, BlackVIP (Oh et al., 2023) pioneered black-box prompting for VL models by learning an asymmetric autoencoder-style coordinator with zeroth-order optimization to modify visual prompts in the pixel space. However, modifying prompts in the large pixel space causes inefficiency, and the method requires up to 9k parameters in the coordinator to achieve the goal. Besides, the performance of its visual prompts depends on the diverse semantic features of a well-trained generative self-supervised learning model. Even so, the method demonstrates limited performance improvements after prompting, showing that prompt tuning in the black-box setting is very challenging.
In this paper, we propose a collaborative black-box tuning method dubbed CBBT for tuning pre-trained VL models and adapting them to downstream tasks. Unlike BlackVIP (Oh et al., 2023), we learn the prompt for the textual input instead of images, and we adapt the visual features using an adapter. The basic idea is illustrated in Fig. 1.
A query-efficient approximation method (Wierstra et al., 2014) is used to estimate the gradients and optimize the textual prompt with the black-box pre-trained VL model, from which true gradients are not accessible. Specifically, we query the model with randomly perturbed prompts and then summarize the change in the model's prediction loss to estimate the gradient of the learnable parameters (i.e., the prompts). We equip single-step gradient optimization with information from history updates via a momentum strategy, which leads to faster convergence and better results.
Under the circumstance where the output features are available for the pre-trained VL models, we further adapt the visual features by introducing a lightweight adapter module.As demonstrated in Fig. 1, the visual adapter can be learned effortlessly by supervised learning, without having knowledge of the pre-trained VL backbone.
With the joint optimization of the textual prompt and the visual adapter, our CBBT achieves significant model adaptation performance.To evaluate its effectiveness, we conduct extensive experiments on eleven downstream benchmarks, showing superior performance compared to existing black-box VL adaptation methods.
The main contributions of this work can be summarized as follows: • We advocate textual prompting for adapting pretrained black-box VL models to downstream tasks.Satisfactory prompt tuning results are obtained with an effective gradient approximation algorithm.
• We expedite the tuning process by utilizing history updates as beneficial information for each optimization step, which brings about accelerated convergence and better results.
• We adapt the visual features jointly with the textual prompt when output features are available.
The comprehensive comparison shows that our method achieves state-of-the-art performance compared to other black-box tuning approaches.
Related Work
Black-box Prompt Tuning for Large Language Models. BBT (Sun et al., 2022b) adopts derivative-free optimization using the covariance matrix adaptation evolution strategy (CMA-ES) (Hansen et al., 2003) to optimize the prompt in a low-dimensional intrinsic subspace. With this method, the adaptation of large language models works well on natural language tasks, surpassing even white-box prompting performance. BBTv2 (Sun et al., 2022a) further enhances the capacity of BBT by using deep prompt tuning. BDPL (Diao et al., 2022) tunes a set of discrete prompts for language models by modeling the choice of words in the prompt as a policy of reinforcement learning, and a variance-reduced policy gradient estimator (Williams, 1992; Dong et al., 2020; Zhou et al., 2021) is used to optimize the discrete prompt based on the loss value. Black-box Adaptation for VL Models. To the best of our knowledge, BlackVIP (Oh et al., 2023) is the first work to tackle the black-box tuning problem for pre-trained VL models. It designs an asymmetric autoencoder-style coordinator to generate input-dependent image-shaped visual prompts and optimizes the coordinator by zeroth-order optimization using simultaneous perturbation stochastic approximation (SPSA) (Spall, 1992, 1997, 1998). However, the improvement brought by this method (after visual prompting) is relatively limited compared to the baseline, i.e., the pre-trained CLIP (Radford et al., 2021). LFA (Ouali et al., 2023) relaxes the black-box regime by assuming that pre-computed features from pre-trained backbones are accessible. It optimizes a projection layer for better alignment between pre-computed image features and class prototypes by a multi-stage procedure: it first solves the orthogonal Procrustes problem (Schönemann, 1966) by singular value decomposition (SVD) and further refines the projection matrix using an adaptive reranking loss. Albeit superior adaptation performance is obtained, we advocate that this complex multi-phase optimization can be substituted by end-to-end supervised learning with a lightweight adapter, which effortlessly provides comparable results given labeled image features.
[Figure 1: Overview of CBBT. A learnable textual prompt is tuned via q perturbations and the corresponding q losses returned by the black-box model, while a lightweight adapter over the output features is trained by backpropagation.]

Here we introduce the general form of prompt tuning and adapter methods, and the dilemma encountered when they are applied to black-box VL models.
Prompt tuning for VL models. Given a pre-trained VL model, e.g., CLIP (Radford et al., 2021), existing soft prompt tuning approaches (Zhou et al., 2022b,a; Sun et al., 2022c) for classification tasks typically prepend learnable embeddings to the class names of the target dataset:

ϕ(c_i) = [v_1, v_2, ..., v_M, c_i], (1)

where i ∈ {1, ..., C} denotes the index of classes and c_i denotes the word embedding of the i-th class name. For j ∈ {1, ..., M}, v_j is a learnable word embedding whose dimension is the same as the dimension of normal word embeddings in the vocabulary. The prediction for an input image x is obtained by computing similarities between the image feature f and the prompted textual class features {t_i}_{i=1}^{C}:

p(y = i | x) = exp(⟨f, t_i⟩/τ) / Σ_{k=1}^{C} exp(⟨f, t_k⟩/τ), (2)

where the image feature is encoded by the pre-trained image encoder, f = Enc_I(x), the textual class embeddings are generated by the text encoder, t_i = Enc_T(ϕ(c_i)), ⟨·, ·⟩ calculates the cosine similarity, and τ is a temperature parameter. The objective of the prompt module ϕ is to maximize the classification probability of the ground-truth class y of the few-shot image samples, i.e., to minimize the loss

L(y, x, ϕ) = −log p(y | x). (3)

When given a white-box model, it is straightforward to calculate the gradient of the loss with respect to the prompt, and optimization of the prompt can be performed via gradient descent:

ϕ ← ϕ − η ∇_ϕ L(y, x, ϕ). (4)

Unfortunately, in the black-box setting, the gradients cannot be backpropagated through the pre-trained black-box Enc_I and Enc_T via the chain rule, and the term ∇_ϕ L(y, x, ϕ) cannot be directly obtained. Thus, current gradient-based prompt tuning methods are not feasible in this situation.
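To make Eqs. (1)-(2) concrete, the following is a minimal PyTorch sketch of the prediction step; the temperature value and tensor shapes are illustrative, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def class_probabilities(image_feat, text_feats, tau=0.01):
    """CLIP-style prediction of Eq. (2).

    image_feat: (d,) image embedding f = Enc_I(x)
    text_feats: (C, d) prompted class embeddings t_i = Enc_T(phi(c_i))
    tau:        softmax temperature (illustrative value)
    """
    f = F.normalize(image_feat, dim=-1)      # cosine similarity via
    t = F.normalize(text_feats, dim=-1)      # normalized dot products
    return F.softmax(t @ f / tau, dim=-1)    # p(y = i | x) over C classes
```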
Adapter learning for VL models. Adapter learning methods (Gao et al., 2021; Zhang et al., 2022) for VL models usually manipulate the output features of pre-trained models for adaptation to target tasks. For instance, an adapter module ψ can be introduced to transfer the visual features to new domains with f′ = ψ(f), and the prediction (5) is then obtained by using f′ in place of f in Eq. (2). Learning such an adapter module by minimizing L(y, f′, ψ) does not require back-propagation through the entire pre-trained VL model, which makes adaptation convenient without knowing the details of the backbone model. However, access to the output features of the pre-trained model is required to construct and optimize the adapter module (Zhang et al., 2022; Ouali et al., 2023). Further Analyses of Black-box PEFT. Given a black-box pre-trained model, the unavailability of gradients sets a barrier to prompt tuning. Therefore, we intuitively have the idea of optimizing the prompt by estimating gradients. Input gradient approximation has been explored in black-box model attacks (Ilyas et al., 2018b,a) and black-box model reprogramming (Tsai et al., 2020). We employ a perturbation-based gradient approximation method to estimate the gradient of the learnable parameters in the prompt. The estimated gradient serves as an effective guide for the tuning of the prompt.
Although the gradient approximation technique provides serviceable optimization guidance, it is still suboptimal compared to the real gradients. Merely conducting single-step gradient descent based on the estimated gradient leads to inefficient training. Inspired by the design of previous optimizers, we try to expedite the optimization based on the estimated gradient with a momentum. The basic idea is that information from previous updates is useful for the current step, and accumulated gradients possibly provide more promising exploration directions. We empirically find that equipping the gradient approximation with the momentum strategy brings expedited convergence and a remarkable gain in adaptation performance.
Although we have no access to the internal variables of typical black-box models, when the output features of the pre-trained VL backbone are available, post-processing adapter modules can be directly learned from labeled samples for PEFT.
Motivated by the above analyses, we propose to adapt black-box VL models with a collaborative PEFT consisting of optimization from two perspectives. Firstly, we tune a textual prompt under the guidance of the estimated gradient. Perturbation-based gradient approximation and an effective optimization strategy are used to facilitate the training. Secondly, we learn a lightweight adapter to transfer pre-trained visual features. Joint optimization of the prompt and adapter brings superior adaptation performance. The overview of the proposed model is illustrated in Fig. 1.
In the following, we begin by presenting the perturbation-based gradient approximation method in Section 3.2. Then, we explain how to expedite the tuning process by leveraging information from previous updates to achieve better optimization in Section 3.3. Finally, we introduce the adapter module and the joint training schedule in Section 3.4.
Perturbation Based Gradient Approximation
Suppose the prompt module ϕ has parameters θ of dimension D. Let f(θ) be the loss function defined in Eq. (3). To approximate the gradient of the loss function with respect to θ, one possible avenue is to add a small increment to each dimension of θ and sum up the slopes of all dimensions:

∇f(θ) ≈ Σ_{i=1}^{D} (f(θ + β e_i) − f(θ)) / β · e_i, (6)

where e_i is a one-hot vector whose i-th element is equal to 1. Such an approximation may work well for low-dimensional parameters but is not suitable for problems where D is large. For example, the dimension of each word embedding of pre-trained CLIP is 512, i.e., θ ∈ R^{M×512}. Thus M × 512 independent API calls to the black-box model would be required to obtain one complete estimated gradient of θ, which causes inefficiency.
To alleviate the cost of the above gradient estimation method, we adopt a stochastic perturbation-based gradient estimation technique formulated as

g_i = b · (f(θ + β ϵ_i) − f(θ)) / β · ϵ_i, (7)

where g_i is the slope of the loss function along the direction of the perturbation, ϵ_i is a vector randomly drawn from a unit sphere with an L2-norm of 1, β is a small value controlling the scale of perturbations, and b is a scaling factor balancing the bias and variance trade-off of the estimator.
To mitigate noise in the estimated gradients, we sample the random perturbation $\epsilon_i$ q times, and the gradient of θ is approximated by averaging the slopes of the q directions (Wierstra et al., 2014; Ilyas et al., 2018a; Tu et al., 2019):

$$\hat{g} = \frac{1}{q} \sum_{i=1}^{q} g_i.$$

The upper bound of the estimation error of $\hat{g}$ with respect to the true gradient is given in Eq. (9). Setting a smaller β reduces the last error term in Eq. (9) but may increase noise due to numerical precision. Increasing the number of samples q reduces the first error term but consumes more queries to the model API.
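Under the stated assumptions (unit-sphere directions, scaling factor b, perturbation scale β), the averaged estimator can be sketched as follows; the defaults b = D and β = 1/D mirror the settings reported later, and all names are illustrative.

```python
import numpy as np

def estimate_grad(f, theta, q=256, beta=None, b=None, rng=None):
    """Average the slopes of q random unit directions."""
    rng = rng if rng is not None else np.random.default_rng()
    D = theta.size
    beta = 1.0 / D if beta is None else beta      # perturbation scale
    b = float(D) if b is None else b              # bias/variance scaling factor
    base = f(theta)                               # reference loss, one API call
    grad = np.zeros(theta.shape, dtype=float)
    for _ in range(q):                            # q additional API calls
        eps = rng.standard_normal(theta.shape)
        eps /= np.linalg.norm(eps)                # draw from the unit sphere
        slope = (f(theta + beta * eps) - base) / beta
        grad += b * slope * eps                   # g_i = b * slope * eps_i
    return grad / q
```

With q = 256, each iteration in this sketch costs 257 forward queries, illustrating the trade-off between gradient reliability and API budget discussed in the ablations.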
Effective Optimization Based on Estimated Gradient
To expedite the optimization based on the estimated gradient, we facilitate the tuning process by leveraging the momentum strategy. Specifically, we estimate the first-order moment of the parameters' gradient by

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, \hat{g}_t.$$

The first-order moment accelerates the optimization and reduces the noise in the gradient at each step. We obtain the adaptive estimate of the second-order moment by

$$v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, \hat{g}_t^{\,2},$$

which is used to adjust the learning rate of each dimension adaptively.
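In practice this corresponds to a standard Adam-style update driven by the estimated gradient; a compact sketch is shown below, with the usual default hyperparameter values, which are assumptions here rather than values taken from the paper.

```python
import numpy as np

def adam_step(theta, g_hat, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using the estimated gradient g_hat (t counts from 1)."""
    m = beta1 * m + (1 - beta1) * g_hat           # first-order moment
    v = beta2 * v + (1 - beta2) * g_hat ** 2      # second-order moment
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```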
In our experiments, we use optimizers that integrate momentum as a practical implementation. To analyze the optimization results of different optimizers, we illustrate the trend of the normalized loss value |L(θ*) − L(θ)| / |L(θ*) − L(θ0)| in Fig. 2. Adam (Kingma and Ba, 2014) shows fast and steady convergence and satisfactory final results. We have also tried more advanced techniques, e.g., LAMB (You et al., 2019), but no significant improvement in performance is observed. Empirical results show that optimizing the prompt with the Adam optimizer based on the estimated gradient provides expedited convergence and superior adaptation performance.
Visual Adapter Module
The pre-trained VL models can be effectively adapted to downstream tasks through the black-box prompt tuning method mentioned above. Meanwhile, under the assumption that the output features of the black-box model are available (Ouali et al., 2023), a lightweight adapter module can be directly learned from labeled few-shot samples.
Adapter modules (Houlsby et al., 2019; Gao et al., 2021; Zhang et al., 2022) have proven effective for the adaptation of VL models. During the training of the adapter, gradients do not need to be back-propagated through the entire pre-trained model, making it possible to equip the adapter module with black-box models of which only the output features are available.
The text features have already been adapted in our method by tuning the learnable prompt. Thus, we introduce an adapter module only for the visual features to achieve a collaborative adaptation. Specifically, we add an adapter module to the output of the visual encoder of the pre-trained VL model. Access to the computed image features and labels allows the adapter to be learned easily through direct supervised learning. During training, the visual adapter module and the text prompt are optimized in turn to achieve a joint adaptation.
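A minimal sketch of such a visual adapter in the CLIP-Adapter style is shown below; the feature dimension and residual blending ratio are illustrative assumptions, while the hidden width follows the quarter-dimension design described in the implementation details.

```python
import torch
import torch.nn as nn

class VisualAdapter(nn.Module):
    """Two-layer MLP over pre-computed image features with residual blending."""

    def __init__(self, dim: int = 1024, ratio: float = 0.2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim // 4),   # hidden width: a quarter of the feature dim
            nn.ReLU(inplace=True),
            nn.Linear(dim // 4, dim),
        )
        self.ratio = ratio              # blend adapted and original features

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return self.ratio * self.mlp(f) + (1 - self.ratio) * f
```

Because it consumes only the output features, a module of this kind can be trained with a plain supervised loss while the black-box backbone stays frozen, with the prompt and adapter updated in alternation.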
In our experiments, we try two simple but effective adapter designs, CLIP-Adapter (Gao et al., 2021) and Tip-Adapter (Zhang et al., 2022), both of which are well suited to manipulating image features for better adaptation.
Implementation Details
Datasets. We perform few-shot adaptation of black-box pre-trained CLIP (Radford et al., 2021) on image classification tasks following the general protocol in existing methods (Zhou et al., 2022b; Ouali et al., 2023; Oh et al., 2023). In particular, we adopt 11 commonly used datasets to evaluate our method, including ImageNet (Deng et al., 2009), Caltech101 (Li et al., 2004), OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Flowers102 (Nilsback and Zisserman, 2008), Food101 (Bossard et al., 2014), FGVCAircraft (Maji et al., 2013), SUN397 (Xiao et al., 2010), UCF101 (Soomro et al., 2012), DTD (Cimpoi et al., 2014), and EuroSAT (Helber et al., 2019).

Learnable Prompts. The learnable prompts are shared across all classes in the target dataset. By default, the length of the prompt is set to M = 1, which reduces the number of parameters in the learnable prompt. A small parameter optimization space helps maintain the quality of the estimated gradients with limited resources for exploration, resulting in effective tuning. The effect of different prompt sizes is analyzed in Section 4.4. To initialize prompts of different lengths, we use "a", "a photo", "a photo of a", and "a photo of a a photo of a" for M = 1, 2, 4, 8, respectively.

Adapter Module. Following CLIP-Adapter (Gao et al., 2021), our adapter module adopts a two-layer MLP that follows the pre-trained visual encoder. The input and output dimensions are the same as the dimension of the CLIP image feature, and the number of hidden units is a quarter of that dimension. Following Tip-Adapter (Zhang et al., 2022), we use the averaged features of randomly augmented training images from 10 epochs as the initialization of the cache to construct the projection layer.
Training Details. We employ the official CLIP model to evaluate our proposed method. For a comprehensive comparison, we conduct experiments with different visual backbones, i.e., ResNet50 and ViT/B16. The query number is set to q = 256 by default, and its effect is discussed in Section 4.4.
The hyperparameters b and β in Eq. (7) are set to D and 1/D, respectively, where D is the dimension of the prompt parameter.
From Table 1, our black-box prompt tuning method (ViT/B16 backbone) surpasses the previous work of Oh et al. (2023) by an average accuracy margin of 7.3% across 11 datasets, demonstrating the effectiveness of our black-box textual prompting for the adaptation of the VL model. Furthermore, when the context length of the prompt is fixed at M = 1, our black-box prompt tuning method performs comparably to the white-box prompt method, i.e., CoOp (1 ctx), with a slight difference of less than 2%. By assuming pre-computed features are available, LFA (Ouali et al., 2023) optimizes a projection layer in a multi-stage procedure as introduced in Section 2. We advocate that end-to-end learning of adapter methods (Gao et al., 2021; Zhang et al., 2022) provides a much simpler avenue while giving satisfactory performance. As shown in Table 1, optimizing the adapter modules from CLIP-Adapter and Tip-Adapter achieves performance comparable to LFA. Thus, we integrate our black-box prompt tuning method with these more flexible adapter modules. From Table 1, the collaborative adaptation of black-box prompting and the adapter module brings remarkable performance and achieves a new state-of-the-art result.
Comparison with Black-Box Optimizers
Existing black-box prompt tuning methods have explored various effective optimization techniques for when the gradient is unavailable. Here we compare our method with two other optimization algorithms based on our implementation. In particular, the CMA-ES algorithm (Hansen et al., 2003) is considered state-of-the-art in evolutionary computation and was previously used to optimize prompts for large language models (Sun et al., 2022b,a). SPSA-GC was proposed by BlackVIP (Oh et al., 2023) to learn a visual prompt for the adaptation of pre-trained CLIP.
For a fair comparison, we unify the number of API calls per iteration at 10 for all competitors. This is achieved by setting the population size of CMA-ES to 10, the number of repeated two-sided estimations of SPSA-GC to 5, and the number of samplings of our perturbation-based gradient approximation to q = 10. The experiments are conducted on the CLIP ResNet50 model, and the prompt length is set to 1. All optimizers are trained for 750 iterations until convergence, and the results are listed in Table 2. From the table, our method outperforms the SPSA-GC algorithm, which is also based on gradient estimation. Although CMA-ES exhibits faster convergence, noticeable fluctuations are observed even in the later stages of training. Our perturbation-based gradient approximation method is more suitable for the adaptation of the VL model.
Ablation Study
Ablation studies are performed to evaluate the effect of various factors, including the number of queries, the prompt length, the number of few-shot samples, and the collaborative training schedule. The experiments are conducted mainly on the CLIP ResNet50 model.

Effect of the number of queries q. The number of samplings q controls how many times the black-box model is queried in each iteration. It has a significant impact on the number of API calls required for learning the prompt. Fig. 3 illustrates the adaptation performance with different q values. Generally, larger values of q yield more reliable gradients but also require more time and API calls for the black-box model. To trade off performance and computational cost, we use q = 256 for the results presented in Section 4.2.

Effect of prompt length. We further investigate the effect of the prompt length M. For comparison, all experiments are conducted with 16-shot training data, the same number of samplings (q = 256), and the same number of iterations. The results are illustrated in Fig. 4. One can see that the trend of performance on different tasks varies as the context length of the prompt changes. For white-box prompt tuning, longer prompts usually lead to better adaptation on downstream datasets, e.g., DTD and EuroSAT. However, blindly lengthening the context (e.g., M = 16) does not result in continuously rising performance, and increasing the context length brings little improvement on OxfordPets. We attribute these results to the varying degrees of data diversity among the tasks.
In the case of black-box models, however, the experimental behavior changes due to the influence of gradient approximation. Lengthening the context of the prompt brings trivial benefits and may even result in noticeable performance degradation. The expanded parameter space of a long context leads to practical difficulties in gradient estimation, so the optimization may reach a suboptimal result. Increasing the number of samplings q may improve the reliability of the estimated gradients, but scaling up q in proportion to the size of the prompt leads to severe inefficiency. Thus, we use a prompt length of 1 as a trade-off.

Effect of the number of few-shot samples. The number of few-shot samples determines the amount of training data used to adapt the pre-trained VL model. To demonstrate its effect, we keep the default configuration and vary the number of samples used for prompt tuning. Both black-box and white-box models undergo the same number of iterations. As shown in Fig. 5, increasing the number of samples clearly leads to better adaptation results. Moreover, we observe that in extremely data-scarce scenarios with only one sample per class, tuning the prompt based on the estimated gradient outperforms white-box tuning on all three datasets. One possible explanation is that optimizing with true gradients can lead to overfitting when the amount of data is too small; in contrast, gradient approximation provides a more robust optimization direction. As the amount of data increases, the advantages of direct white-box learning become more obvious.

Effect of the collaborative training schedule. In our experiments, the prompt and the adapter module are optimized jointly to maximize their collaborative performance. During training, we alternately update the prompt and the adapter module at different epochs. To assess the effectiveness of this joint optimization schedule, we conducted experiments using three different training schemes: (i) tuning the prompt until convergence and then optimizing the adapter module (P-A); (ii) tuning the adapter module until convergence and then optimizing the prompt (A-P); (iii) our collaborative training schedule (ALT). We train "Ours (CLIP-Adapter)" under these three schedules, and the results are shown in Table 3. As shown in the table, alternately updating the prompt and the adapter (ALT) achieves superior collaborative adaptation performance, demonstrating its effectiveness.
Conclusion
In this paper, we present CBBT, a black-box adaptation approach for VL models. We effectively tune a soft prompt for the text encoder by gradient approximation and jointly learn a lightweight adapter module to transfer the visual features of the pre-trained backbone. Equipped with the textual prompt and the visual adapter, our method achieves a collaborative adaptation of both modalities. Experiments on various datasets show that our CBBT performs favorably against state-of-the-art methods.
Limitations
We optimize the prompt in the original high-dimensional prompt embedding space, which leads to unsatisfactory optimization results for prompts with a long context, as shown in Section 4.4. The high-dimensional parameter in the prompt also makes the gradient approximation more difficult.
We have tried optimizing the prompt in a smaller subspace following the approach in BBT (Sun et al., 2022b), but the adaptation performance degraded substantially even though we released only a small proportion of the original dimensions. The intrinsic dimensionality property (Aghajanyan et al., 2020; Qin et al., 2021) of vision-language pre-trained models needs further investigation.
Besides, we optimize a continuous prompt, which requires access to the token embedding layer of the pre-trained model. Learning a discrete prompt for the adaptation of VL models is worth exploring, considering that a discrete text prompt provides an explicit explanation and that discrete text inputs are better suited to invoking the latest pre-trained model APIs with natural-language inputs and/or outputs.
A Generalization Ability of Black-Box Prompt
To evaluate the generalization ability of our method, we conducted experiments on the extensively evaluated domain-shift benchmarks and on the base-to-new setting (training on samples from base classes, testing on samples from new classes) commonly used in studies on the adaptation of CLIP.
Generalization to other domains. Following CoOp (Zhou et al., 2022b) and CoCoOp (Zhou et al., 2022a), we evaluate the transferability of the prompt learned from ImageNet to three specially designed datasets. The results are shown in Table 4. Given the high variance inherent in these trials, the results are averaged over three random re-runs to ensure reliable comparisons.
Our prompt learned by black-box optimization performs better than CoOp by a clear margin. Moreover, compared to CoCoOp, which relies on input-conditioned prompts generated by a meta-network, our vanilla prompt demonstrates superior performance on two of the three benchmarks.

Generalization from base to new classes. Following CoCoOp (Zhou et al., 2022a), we split the classes of the target dataset into two sets. In the base-to-new setting, the methods are trained on data from the base classes and tested separately on base and new classes to evaluate generalization to classes unseen during training. The results are shown in Table 5.
While CoOp improves pre-trained CLIP on base classes, it fails grievously on novel classes. CoCoOp optimizes for each instance to gain more generalization over an entire task. Our method achieves results comparable to CoCoOp by tuning a single prompt with the black-box optimizer. Optimizing the prompt with the estimated gradient avoids overfitting to the training samples, which accounts for the superior generalization ability of our method compared to white-box prompt tuning.
B More Results with Longer Prompt
In Fig. 4 of our paper, we optimize prompts of different lengths under a fixed training time budget by using the same number of samplings, q = 256, for gradient approximation. Such a setting ensures training efficiency but may lead to suboptimal results for longer prompts, resulting in a performance drop. To demonstrate this, we have conducted experiments in which q is scaled proportionally to the size of the prompt, and the results are reported in Table 6.
From the table, with sufficient training time available, proportionally scaling the number of samplings for tuning longer prompts achieves stable convergence and clear improvements (especially on EuroSAT). Nonetheless, our optimized prompts consistently outperform hand-crafted hard prompts of any length.
C Computational Time Budget
The added computational burden of our method compared to white-box prompting methods lies in the multiple samplings required by the gradient approximation. We report the training durations of the tuning methods presented in Table 1 on the EuroSAT dataset in Table 7. All training is conducted on a single 3090 GPU. We record the minutes used for complete training and divide the time by the number of trained epochs to obtain the time per epoch. While the sampling process inevitably lengthens the training period, the overall time consumed is acceptable.
D Analysis of the Error in Gradient Estimation
The upper bound of the error of the gradient approximation is $4\|\nabla f(\theta)\|_2^2$ according to Eq. (9). It is a theoretical value obtained through multiple bounding steps in the proof. The actual estimation error of the gradient during training is much lower than the theoretical upper bound, since the experiments are conducted on reasonably annotated datasets with pre-trained CLIP and properly initialized prompts. As training proceeds, the true gradient becomes small, and the error of the estimated gradient, which is bounded by the true gradient, becomes small simultaneously. Thus, the results of "Ours (w/o adapter)" are closely comparable to "CoOp (1 ctx)" in Table 1.
E Applying to Larger Black-Box Models
It is promising to apply our method to larger black-box models. In fact, there exist closed-source model APIs, e.g., GPT-3, that provide a feature extraction function, and it is possible to adapt pre-trained models of this kind by transferring the extracted features. Additionally, inspired by recent discrete prompt tuning approaches (Maus et al., 2023; Wen et al., 2023), it is practically feasible to discretize the learned prompts by projecting the continuous embeddings into the discrete token space, supporting a broader range of black-box models that only allow discrete input, e.g., ChatGPT and Bard. Our research will continue to explore more practical adaptation techniques for vision-language models.
Figure 1: Overview of our proposed method. We collaboratively optimize the textual prompt and the image feature adapter for the adaptation of black-box pre-trained VL models. The prompt is optimized by estimated gradients since backpropagation cannot be applied to the black-box model. The visual adapter module is learned by direct supervised learning given output features from the pre-trained model.
Figure 2: Trend of the loss during training on EuroSAT. We adopt the Adam optimizer for expedited convergence and superior adaptation performance.
Figure 4: Ablation results of "Ours (w/o Adapter)" with different context lengths of the prompt.
Figure 5: Ablation results of "Ours (w/o Adapter)" with different quantities of few-shot training data.
Table 1: Few-shot adaptation performance on 11 image classification tasks. Black-box methods are indicated with gray shading.
Table 2: Comparison of different black-box optimizers.
Table 3: Ablation study on the training schedule.
Table 4: Comparison of manual and learned prompts in domain generalization. The prompts are learned on 16-shot data from ImageNet.
Table 6: More results with longer prompts and varying numbers of samplings q. "ctx" denotes the length of the prompt.
Table 7: Comparison of training time budget.
"Computer Science"
] |
Testing for direct genetic effects using a screening step in family-based association studies
In genome-wide association studies (GWAS), family-based studies tend to have less power to detect genetic associations than population-based studies, such as case-control studies. This can be an issue when testing whether genes in a family-based GWAS have a direct effect on the phenotype of interest over and above their possible indirect effect through a secondary phenotype. When multiple SNPs are tested for a direct effect in the family-based study, a screening step can be used to minimize the burden of multiple comparisons in the causal analysis. We propose a 2-stage screening step that can be incorporated into the family-based association test (FBAT) approach, similar to the conditional mean model approach in the Van Steen algorithm (Van Steen et al., 2005). Simulations demonstrate that the type-1 error is preserved and that this method is advantageous when multiple markers are tested. The method is illustrated by an application to the Framingham Heart Study.
INTRODUCTION
Some of the recently published genome-wide association studies identified the same genetic locus as a disease susceptibility locus for different complex diseases (Amos et al., 2008;Thorgeirsson et al., 2008). One possible mechanism is that the marker locus is pleiotropic and has genetic effects on several, different phenotypes. Determining whether the marker acts directly on each of these phenotypes or only indirectly via one or more intermediate phenotypes is an important step in understanding the biological significance of the genetic associations. In order to understand and characterize the underlying genetic effect, methods have been proposed to disentangle these potential direct and indirect genetic effects (Vansteelandt et al., 2009;Vansteelandt, 2010;Berzuini et al., 2012;Vansteelandt and Lange, 2012;VanderWeele et al., 2012). All currently available methods focus on the direct and indirect genetic effects relative to one (group of) secondary phenotypes. Because the magnitude of the indirect effect depends on how strongly these secondary phenotypes affect the primary phenotype, these methods consider adjustment for confounding of the relationship between these phenotypes by measured extraneous factors. Some of these methods quantify both the direct and indirect genetic effects, but assume that none of these extraneous confounding factors is influenced by the considered marker (VanderWeele et al., 2012). Some of these methods allow for some of the extraneous confounding factors to be influenced by the considered marker, but they merely quantify direct genetic effects (Vansteelandt et al., 2009;Vansteelandt, 2010;Berzuini et al., 2012).
Regardless of the considered framework, all available methods test only one gene at a time and need to be corrected for multiple comparisons. This concern over multiple comparisons becomes an issue in family-based genome-wide association studies (GWAS). When a region shows a strong association with both the endophenotype and the phenotype, identifying SNPs in the region that remain associated with the phenotype of interest after accounting for the association with the endophenotype requires testing for a direct causal effect for every SNP in the region. In order to increase the power to detect this direct genetic effect, we propose a 2-stage testing strategy to minimize the burden of multiple comparisons in the causal analysis (Van Steen et al., 2005; Murphy et al., 2008; Won et al., 2009). The application of a screening step when testing for direct genetic effects is an important advantage in this scenario, where the multiple-comparison problem is a major hurdle. The power of our approach is assessed by simulation studies. We show that the type-1 error is preserved, and the method is shown to be advantageous when multiple SNPs are tested for a direct effect on the phenotype of interest.
METHODS
Suppose that in the family-based study, n trios (offspring and both parents) have been genotyped at a specific marker locus. Assuming there is no bias due to ascertainment conditions, the variable X_i denotes the coded genotype of the offspring and S_i denotes the parental genotypes for individual i. If genotypic data are unavailable for the parents but genotypic information is available on the subject's siblings, the variable S_i denotes the sufficient statistic of Rabinowitz and Laird (2000). For offspring i, Y_i denotes the target phenotype in the association study and K_i denotes the secondary phenotype in the study.
Suppose that an association has been observed between the secondary phenotype of interest, K_i, and the marker locus. Given this association, the goal is to test for an association between the target phenotype Y_i and the marker locus that cannot be explained by a possible indirect effect mediated by K_i. To achieve this goal, data are needed on all risk factors of the secondary phenotype K_i that are also associated with the primary phenotype (Cole and Hernan, 2002). Let L_i denote this collection of measured confounding variables. Because L may be high-dimensional, we do not assume that it is related to Y only by means of a causal effect, but allow for their association to be itself confounded by potentially unmeasured factors U. This is shown in the causal diagram of Figure 1, where the presence of U additionally captures the potential for confounding of the genetic association as a result of population admixture (Vansteelandt and Lange, 2012). Throughout, in contrast to other mediation analysis techniques (namely those based on so-called natural direct and indirect effects), we allow for the possibility that some of these confounding variables are themselves affected by the studied marker, as illustrated via the edge from X to L in the causal diagram (VanderWeele et al., 2012).
Consider the model

$$E(Y_i \mid X_i, K_i, L_i) = \gamma_0 + \gamma_1 K_i + \gamma_2 X_i + \gamma_3^{T} L_i, \qquad (1)$$

where the γ_j for j = 0, 1, ..., 3 denote the mean parameters and can be estimated by ordinary least squares. Note that γ_1 represents the true effect of K_i on Y_i and not a spurious association because, by assumption, the above model includes all relevant risk factors of K_i. In order to construct an adjustment principle that tests for a direct genetic effect of the marker locus X on the target phenotype Y, the effect of the secondary phenotype K has to be estimated. Vansteelandt et al. use an estimate for γ_1 based on model (1) to adjust the phenotype Y_i to Y_i − γ̂_1 K_i. A family-based association test (FBAT) on this adjusted phenotype is then a test for the direct genetic effect in the family-based setting (provided that the distribution of the test statistic acknowledges the uncertainty in the estimated effect γ̂_1) (Vansteelandt et al., 2009).
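As an illustration, a minimal sketch of this adjustment with ordinary least squares is given below; the function name and the assumption that L is supplied as a numeric matrix are ours.

```python
import numpy as np

def adjusted_phenotype(Y, K, X, L):
    """Estimate gamma_1 by OLS in model (1) and form the adjusted phenotype.

    Y, K, X are length-n arrays; L is an (n, p) matrix of measured confounders.
    Returns the adjusted phenotype Y - mean(Y) - gamma_1 * (K - mean(K)).
    """
    design = np.column_stack([np.ones_like(Y), K, X, L])  # intercept, K, X, L
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    gamma1 = coef[1]                                      # coefficient on K
    return Y - Y.mean() - gamma1 * (K - K.mean()), gamma1
```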
To reduce the number of multiple comparisons, we adapt the conditional mean model approach of the Van Steen algorithm (Van Steen et al., 2005) to model (1). By replacing the observed marker score in model (1) by the expected marker score conditional upon the parental genotypes or sufficient statistic, the genetic effects of locus X_i can be assessed without having to adjust the α-level of any subsequently computed FBATs (Lange et al., 2003a,b; Van Steen et al., 2005). Following the idea of the conditional mean model approach, model (1) can be rewritten by substituting X_i with its expected value E(X_i | S_i):

$$E(Y_i \mid K_i, L_i, S_i) = \gamma_0^* + \gamma_1 K_i + \gamma_2^* E(X_i \mid S_i) + \gamma_3^{*T} L_i. \qquad (2)$$

As shown in the proof given in the appendix, the parameter γ_1 is the same in both model (1) and model (2) when the null hypothesis of no direct effect holds and, moreover, there is no confounding due to population substructure. For testing the null hypothesis of no direct genetic effect, model (2) can thus be used to estimate the parameter γ_1 in a screening step without biasing the significance level, since X_i is not included in this model, provided there is no confounding due to population substructure. For the screening step, each subject contributes

$$T_i^* = \tilde{Y}_i^*\, E(X_i \mid S_i), \qquad \tilde{Y}_i^* = Y_i - \bar{y} - \hat{\gamma}_1^* (K_i - \bar{k}), \qquad (3)$$

where γ̂*_1 is the ordinary least squares estimate for γ_1 in model (2), which does not involve the genetic marker X. Ỹ*_i is not adjusted for the covariates L_i, since including factors such as L_i in the phenotypic adjustment would introduce bias if the common risk factor L_i is associated with the DSL X_i (Vansteelandt et al., 2009). The parameters ȳ and k̄ are the observed phenotypic means of Y and K in the sample, respectively. The test statistic for the screening step is then

$$Z^* = \frac{\left(\sum_{i=1}^{n} T_i^*\right)^2}{\sum_{i=1}^{n} \widehat{\operatorname{var}}(T_i^*)}, \qquad (4)$$

where var(T*_i) is calculated based on the sample variance of T̂* and ε*_i denotes the residual from model (2).

FIGURE 1 | Causal diagram illustrating the confounding of the target phenotype Y and the marker locus X. S denotes the parental genotype or Rabinowitz and Laird's sufficient statistic. K denotes the secondary phenotype of interest. L allows for confounding between K and Y. U represents a collection of unmeasured factors that allow for confounding due to population stratification or confounding between the two phenotypes K and Y. Note that causal diagrams assume that all variables that jointly affect any two variables are included. The absence of an arrow between any two variables denotes that there is no direct causal effect. For instance, U has no direct causal effect on X.
Here K̂_i is the predicted value for K_i under a linear regression model for K with the covariates L_i and E(X_i | S_i), and σ*²_k denotes the residual variance in that model. The variance correction given in Equation (5) is needed to account for the estimation of γ_1 in the proposed phenotype adjustment Ỹ*_i (Vansteelandt et al., 2009).
For step 1, the test statistic given in Equation (4) can be used for the screening step to pick the SNPs with the highest power, since X is not used in this statistic. For step 2, this smaller subset of SNPs is used to test the null hypothesis of no direct effect using the test statistic based on Equation (1) proposed by Vansteelandt et al. (2009), where each subject now contributes

$$T_i = \tilde{Y}_i \left( X_i - E(X_i \mid S_i) \right), \qquad \tilde{Y}_i = Y_i - \bar{y} - \hat{\gamma}_1 (K_i - \bar{k}), \qquad (6)$$

and γ̂_1 is the ordinary least squares estimate for γ_1 in model (1), which does involve the genetic marker X. Using this association test with the adjusted phenotype Ỹ_i as the target phenotype provides a robust and valid test of the null hypothesis that there is no direct effect between the target phenotype Y_i and the DSL, i.e., that the association between the target phenotype Y_i and the DSL is solely a result of the association between the secondary phenotype K_i and the DSL. Adjusting for the estimation of γ_1 based on model (1), the test statistic

$$Z = \frac{\left(\sum_{i=1}^{n} T_i\right)^2}{\sum_{i=1}^{n} \widehat{\operatorname{var}}(T_i)} \qquad (7)$$

is distributed chi-square with one degree of freedom under the null hypothesis of no direct effect of X on Y, where var(T_i) is calculated based on the sample variance of T̂ and ε_i denotes the residual from model (1), and where the predicted value for K and the residual variance σ²_k come from the corresponding regression model for K. The variance correction given in Equation (8) is needed to account for the estimation of γ_1 in the proposed phenotype adjustment Ỹ_i (Vansteelandt et al., 2009). Note that Equation (3) is similar to Equation (6), but Equation (6) contains the genetic marker X_i; similarly, Equation (5) is similar to Equation (8), but Equation (8) contains the genetic marker X_i.
Note that under the alternative hypothesis, the association between K and Y is different in models (1) and (2), even in the absence of population admixture. Model (1) represents the causal effect of K on Y under the alternative hypothesis, but model (2) does not, because there is a remaining spurious association between K and Y along the path K ← X → Y in Figure 1. Under the null hypothesis, this path does not exist. As a result, the proposed approach is valid for testing in the absence of population stratification, but may have less power when either the X → K or the X → Y link is strong.
This scenario is explored further in the simulation section of this paper.
Because the test statistic for the screening step given in Equation (4) is susceptible to population stratification, we examined this scenario in the simulation section as well. Principal component analysis (PCA) can be used in the screening step to correct for population stratification.
SIMULATIONS
Using simulation studies, we assess the type-1 error rate, the power, and the robustness of this new approach, which uses a trait that estimates γ_1 based on model (2) in the screening step, and compare it to the approach proposed by Vansteelandt et al. (2009), which uses a trait that estimates γ_1 based on model (1). Similar to Vansteelandt et al. (2009), both methods are evaluated under various conditions. All simulations use a sample size of 1000 trios and are based on 5000 replications. The simulations are run for allele frequencies of 5, 10, 15, 20, 25, 30, 35, 40, and 45%. To reflect a realistic setting, the data are simulated to match covariances found in the Framingham Heart Study (Herbert et al., 2006). The phenotype of interest Y is simulated to resemble FEV1, the secondary phenotype K resembles weight, and the set of common confounding variables resembles height and age. As seen in Figure 2, the first scenario assumes there is a direct genetic effect of the marker on the intermediate phenotype K and on the common covariate L. Each genetic effect has a locus-specific heritability of 1%. The intermediate phenotype K explains 1% of the phenotypic variation in Y, creating an association between the SNP and Y. The second scenario is similar to the first except that there is no genetic effect on the confounder L; the genetic association with the intermediate phenotype K is still present. The third scenario is similar to the first except that there is no association between K and Y. The fourth scenario is similar to the second except that there is no genetic effect on the intermediate phenotype K.
As seen in Table 1, the type-1 error rate is similar whether model (1) or model (2) is used to estimate γ_1. For lower allele frequencies, under scenarios 1 and 3, the type-1 error rate is 1-2% higher than expected. For higher allele frequencies under all four scenarios, the type-1 error rate is 0.5% lower than expected. In general, the type-1 error rate is close to 0.05 regardless of how γ_1 is estimated.

FIGURE 2 | The top left figure represents scenario 1. The top right figure represents scenario 2, which is the same as scenario 1 except that X does not cause L. The bottom left figure represents scenario 3, which is the same as scenario 1 except that K does not cause Y. The bottom right figure represents scenario 4, which is the same as scenario 2 except that X does not cause K.

As seen in Table 2, the power is similar whether model (1) or model (2) is used to estimate γ_1, assuming no population admixture. For lower allele frequencies, the method by Vansteelandt et al. (2009) has higher power, and for higher allele frequencies the proposed method has higher power. However, this difference in power is negligible; the power never differs by more than 2%. The advantage of our approach becomes clear when testing multiple SNPs. Table 4 shows how the power to detect the causal SNP for our approach compares to Vansteelandt et al. (2009) when one SNP has a direct effect on the phenotype, as simulated above in Table 2, and 49 other SNPs are not associated with the phenotype of interest. Table 3 shows the type-1 error rate in this scenario, where the one SNP has an indirect effect on the phenotype, as simulated above in Table 1, and 49 other SNPs are not associated with the phenotype of interest or any of the other phenotypes. Table 6 shows how the power to detect the causal SNP for our approach compares to Vansteelandt et al. (2009) when one SNP has a direct effect on the phenotype, as simulated above in Table 2, and 99 other SNPs are not associated with the phenotype of interest. Table 5 shows the type-1 error rate in this scenario, where the one SNP has an indirect effect on the phenotype, as simulated above in Table 1, and 99 other SNPs are not associated with the phenotype of interest or any of the other phenotypes.
Our approach allows for a screening step similar to the Van Steen algorithm (Van Steen et al., 2005), in which the top 3 SNPs out of 50 and the top 5 SNPs out of 100 with the highest test statistic given by Equation (4) are chosen. We chose 3 SNPs out of 50 and 5 SNPs out of 100 since this is roughly 5% of the SNPs. After the top 3 or 5 SNPs are chosen in the screening step, the test statistic described in Equation (7) is used to obtain a p-value, which is compared to α/3 and α/5, respectively. We compare our approach with the screening step to the approach by Vansteelandt et al. (2009) with a Sidak correction. Since our approach allows for a screening step, we are better able to detect the SNP that has a direct causal effect on the target phenotype, as seen in Tables 4, 6.
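A minimal sketch of this two-stage procedure is given below, assuming the stage-1 screening statistics have been computed for all SNPs and that `stage2_pval` returns the p-value of the Equation (7) test for a given SNP; both names are illustrative.

```python
import numpy as np

def two_stage_test(screen_stats, stage2_pval, k=3, alpha=0.05):
    """Stage 1: rank SNPs by the screening statistic (Eq. (4)), which does not
    use the offspring genotype X. Stage 2: test only the top k SNPs for a
    direct effect (Eq. (7)) at the Bonferroni-adjusted level alpha / k."""
    results = []
    for j in np.argsort(screen_stats)[::-1][:k]:   # indices of the top-k SNPs
        p = stage2_pval(j)                         # chi-square p-value for SNP j
        results.append((int(j), p, p < alpha / k))
    return results
```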
Note that the power in Tables 4, 6 is lower than that in Table 2, which is expected since multiple SNPs are tested. For more common allele frequencies, the power of the proposed method with a screening step is more than double that of the Vansteelandt algorithm, while the type-1 error rates are similar, as seen in Tables 3, 5. Therefore, if multiple SNPs are tested, the proposed approach has better power to detect the SNP that has a direct effect on the phenotype of interest.
Since the proposed approach is valid for testing but may have less power when either the X → K or the X → Y link is strong, we examined the effect of increasing the association between X and K when K influences Y (X → K), and between X and Y (X → Y). We increased the correlation between X and K from 0.025 to 0.05 and then 0.075, and the correlation between X and Y from 0.05 to 0.10 and then 0.15. The power of both statistics remained very close; at most, the power of the Vansteelandt et al. approach was only marginally higher.
Since the test statistic for the screening step given in Equation (4) is susceptible to population stratification, we examined a few scenarios where population stratification was present. We simulated half of the subjects to have allele frequencies of 5, 5, 20, and 40% and the other half of the subjects to have allele frequencies of 10, 45, 25, and 45%, respectively. Similar to Tables 3, 4, one SNP has a direct effect on the phenotype of interest and 49 other SNPs are not associated with the phenotype of interest in Tables 7, 8. Similar to Tables 5, 6, one SNP has a direct effect on the phenotype of interest and 99 other SNPs are not associated with the phenotype of interest in Tables 9, 10. As seen in Tables 7, 9, the type-1 error rates are similar for both methods. As seen in Tables 8, 10, even though there is some population stratification present, the proposed method with a screening step still performs better than the Vansteelandt algorithm, especially when the allele frequencies are more common.

Table 3 | This table displays the significance rate (type-1 error rate when 50 SNPs are tested, for allele frequencies of 5-45% under each scenario) when one SNP does not have a direct effect on the phenotype Y but acts as seen in Figure 2 without the arrow from X to Y, and 49 SNPs are not associated with the phenotype Y.
DATA ANALYSIS: AN APPLICATION TO THE FRAMINGHAM STUDY
We evaluated the practical relevance of the proposed adjustment principle by an application to the Framingham Heart Study with 1400 probands (Herbert et al., 2006). For the target phenotype, we selected the lung-function measurement FEV1. For the secondary phenotype K, we selected height. Gender and age represent L, the collection of common risk factors between FEV1 and height. For rs2415815, a SNP associated with both height and FEV1, the test statistic equals 0.044 with a corresponding p-value of 0.83. As a result, we fail to reject the null hypothesis and conclude that there is no evidence that the SNP acts directly on FEV1 other than via body height.
DISCUSSION
Our proposed FBAT assesses the direct genetic effect of a marker locus on the phenotype of interest, other than through another correlated phenotype. The adjustment is based on the conditional mean model approach and can be incorporated into the FBAT approach in a straightforward fashion. The power of the approach is assessed by simulation studies and shown to be similar to the Vansteelandt et al. method when only one SNP is tested and superior when multiple SNPs are tested (Vansteelandt et al., 2009). Unlike the Vansteelandt et al. method, this method uses a screening step and has a unique advantage in situations in which a large number of SNPs are tested for a direct effect on the phenotype of interest. Since the number of tests will be much smaller than the total number of SNPs, this leads to a substantial reduction in the adjustment for multiple comparisons and results in improved overall statistical power. In this process, the screening step works under the assumption of no population admixture, but the final analysis of the selected SNPs is robust against it. While we considered several causal scenarios, if the causal relationships assumed in the DAGs are not true, this could cause problems for the proposed method. For example, a causal arrow K ← Y or L → Y could introduce spurious association for this method. Therefore, one needs to make sure that the assumptions of the DAG are met before using the proposed approach. While the simulations considered 50 and 100 SNPs, a realistic application could involve thousands of GWAS SNPs. This leads to extreme multiple-test corrections and may lead to very different behavior than that observed in the simulation studies (Morris and Elston, 2011). Furthermore, if phenotypes of the founders are known, the proposed method could perform poorly compared to population-based approaches.
For the screening step in the Simulations section, we chose 3 out of 50 and 5 out of 100 SNPs since this is roughly 5% of the tested SNPs. A different number of SNPs could be chosen for the screening step. However, if the majority of SNPs are chosen in the screening step (e.g., 40 out of 50 SNPs), the number of multiple comparisons increases and power can decrease. If too few SNPs are chosen in the screening step (e.g., 1 out of 50 SNPs), the number of multiple comparisons decreases, but one may fail to detect the causal SNP because too few SNPs were carried forward. Care needs to be given to the number of SNPs chosen in the screening step (Van Steen et al., 2005). One cannot simply try different numbers of SNPs for the screening step until significant results are found, since this will inflate the type-1 error rate (Van Steen et al., 2005).
APPENDIX
The following proof shows that the test statistics in the first (screening) and second (testing) stages are uncorrelated under the null hypothesis. As discussed in the Introduction and Methods sections, Ỹ = Y − ȳ − γ_1(K − k̄) is the phenotype adjusted for the effect of K on the target phenotype Y. For ease of notation, we write Ỹ = Y − γ_1 K in this proof. Suppose that the null hypothesis is true, so that X has no effect on Y other than through K. Let

$$E(Y \mid X, K, U) = E(Y \mid K, U) = \ell\{w(U) + \gamma_1 K\}, \qquad (9)$$

where ℓ equals the identity link or the exponential link and w(U) is an arbitrary function. Without loss of generality, for the following proof, let ℓ equal the identity link. This model does not involve X because we are working under the null hypothesis of no direct effect. Furthermore, the parameter γ_1 in this model is the same as in the model E(Y | X, K, L, S) = w*(X, L, S) + γ_1 K for some function w*(X, L, S) of (X, L, S), which can be seen by deriving this model from model (9).
Assuming that Var(Y | K, U) is constant, as we do throughout, it is immediate that the term Part 3 is zero. As a result, this shows that Cov(Ỹ(X − E[X|S]), Ỹ E[X|S]) = 0.
"Biology"
] |
Adaptive Cooperative Control of Multiple Urban Rail Trains with Position Output Constraints
This paper studies the distributed adaptive cooperative control of multiple urban rail trains with position output constraints and uncertain parameters. Based on an ordered set of trains running on the route, a dynamic multiple-train movement model is constructed to capture the dynamic evolution of the trains in actual operation. Aiming at the position constraints and uncertainties in the system, different distributed adaptive control algorithms are designed for all trains by using local information about the position, speed, and acceleration of the train operation, so that each train can dynamically adjust its speed by communicating with its neighboring trains. The control algorithm for each train is designed to track the desired position and speed curve, and the headway distance between any two neighboring trains is kept within a preset safety range, which guarantees the safety of the tracking operation of multiple urban rail trains. Finally, the effectiveness of the designed scheme is verified by numerical examples.
Introduction
Under the conditions of high-speed, high-density, and long-cycle automatic operation of urban rail trains, effective train control strategies are needed to ensure the position and speed tracking accuracy of train operation. To solve these problems, many scholars have proposed different schemes. According to the number of controlled trains, train control methods mainly include single-train operation control and multiple trains cooperative control algorithms. Current control algorithms for a single train mainly include PID control [1], fuzzy control [2], neural network control [3], adaptive control [4], predictive control [5], iterative learning [6], or combinations of several control theories [7][8][9], but in a high-density operation environment where interactions between trains are common, control methods for a single train can hardly meet the control requirements. In contrast, multiple trains cooperative control (MTCC) is applicable to train operation control in the case of mutual interference between trains. This method integrates the operating states of multiple trains and considers them as a whole for optimization, which is a solution that achieves a global optimum and ensures system performance [10,11]. For example, the authors of [12] developed a cooperative robust sampled-data control scheme for multiple trains to track the desired speed. The authors of [13] proposed a cooperative train control method to reduce the energy consumption and peak demand of trains. The authors of [14,15] adopted a centralized control framework to design an MTCC strategy with a safe headway distance. However, multiple trains cooperative methods based on centralized control may reduce the robustness and reliability of the system and increase the computational complexity of the control system. To overcome these problems, the authors of [16] proposed a distributed MTCC based on nonlinear mapping feedback. The authors of [17] proposed an alternating direction method of multipliers to optimize the distributed control of multiple trains. Distributed control is able to equip each train with its own local controller. The main contributions of this paper are as follows.
• Considering the safety of multiple trains operation and fixed station stops, a distributed cooperative control law based on potential functions and POC is designed to ensure that each train tracks the desired position and speed while the distance between each train and its neighboring trains remains within the predefined safety range.
• The adaptive laws automatically estimate the drag coefficients online, and a single-value learning adaptive train cooperative control method is proposed to further simplify the structure of the controller.

• Different distributed control designs are applied to the various trains, covering not only a single train but a collection of n trains (n > 1 is a natural number). When the number of trains in the group varies, there is no need to adjust the control structure or redesign the adaptive laws.
The rest of this paper is organized as follows. Section 2 introduces the multiple trains dynamic system model. Section 3 gives the detailed control schemes. Section 4 discusses the simulation results. Section 5 provides the concluding comments.
Preliminaries

System Model
In the process of multiple trains tracking operation, each train is able to dynamically adjust its own speed based on communication with neighboring trains. In this paper, by treating each train as an agent and the information exchange between trains as the communication between agents, the tracking operation of the trains can be described by the framework of a multi-agent system.
Let G = (V, E, A) be a weighted digraph of order n, where V = {1, 2, ..., n} is the set of nodes, E ⊆ V × V is the set of edges, and A = [a_ij]_{n×n} is the nonnegative adjacency matrix of the digraph G, representing the information interaction between trains. If (i, j) ∈ E holds, the two trains can obtain each other's status data, and a_ij = 1; otherwise a_ij = 0. N_i = {j ∈ V : (i, j) ∈ E} denotes the set of neighbors of node i. The Laplacian matrix L corresponding to the adjacency matrix A is defined by [L]_ii = Σ_{j=1, j≠i}^{n} a_ij and [L]_ij = −a_ij for i ≠ j. For the MTCC system, this paper assumes that each train can communicate with its neighboring trains: a middle train can exchange data with the trains immediately ahead of and behind it, the foremost train can only communicate with the one immediately behind it, and the rearmost train can only communicate with the train immediately in front of it. The adjacency matrix of the n trains is therefore the tridiagonal matrix

$$A = \begin{pmatrix} 0 & 1 & & \\ 1 & 0 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 0 \end{pmatrix},$$

and the corresponding Laplacian matrix is

$$L = \begin{pmatrix} 1 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 1 \end{pmatrix}.$$

In addition, in order to keep the safety separation distance between each train and its neighboring trains within a certain range, a potential function is introduced as follows.
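For illustration, the adjacency and Laplacian matrices of this line topology can be generated as follows (a small sketch with illustrative names):

```python
import numpy as np

def path_graph(n):
    """Adjacency and Laplacian matrices for n trains running in a line,
    where each train communicates only with its immediate neighbors."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    L = np.diag(A.sum(axis=1)) - A
    return A, L

A, L = path_graph(4)
# A: the first and last rows have a single 1; interior rows have two 1s.
# L: diag(1, 2, 2, 1) minus A, so every row of L sums to zero.
```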
Definition 1. U_{i,j}(x_{i,j}) is a differentiable non-negative potential function of the distance x_{i,j} between trains i and j, where l_1 is the minimum safe separation distance between two neighboring trains, l_2 is the maximum allowable separation distance between two neighboring trains, and l_1 < l_2, such that U_{i,j}(x_{i,j}) grows unbounded as x_{i,j} approaches l_1 or l_2, and when the positions of trains i and j are at the desired separation, U_{i,j}(x_{i,j}) attains its unique minimum value.
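One concrete potential satisfying these requirements is sketched below; the specific functional form and the numerical values of l_1 and l_2 are illustrative choices, not the ones used in the paper.

```python
import numpy as np

def potential(x, l1=100.0, l2=300.0):
    """Non-negative on (l1, l2), blows up at both boundaries, and has its
    unique minimum (value 0) at x = sqrt((l1**2 + l2**2) / 2)."""
    mid = (l1**2 + l2**2) / 2.0
    return (x**2 - mid) ** 2 / ((x**2 - l1**2) * (l2**2 - x**2))

def potential_grad(x, l1=100.0, l2=300.0, h=1e-6):
    """Central-difference gradient dU/dx; its negative serves as the control
    term that keeps the spacing inside (l1, l2)."""
    return (potential(x + h, l1, l2) - potential(x - h, l1, l2)) / (2 * h)
```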
Consider the cooperative operation system of n urban rail trains shown in Figure 1. The motion dynamics model of train i can be described as

$$\dot{x}_{i,1}(t) = x_{i,2}(t),$$
$$\dot{x}_{i,2}(t) = u_i(t) - \left(a_{i,0} + a_{i,1} x_{i,2}(t) + a_{i,2} x_{i,2}^2(t)\right) - f_{i,s} - f_{i,c} - f_{i,t}, \qquad (3)$$

where x_{i,1}(t) and x_{i,2}(t) represent the position and speed of the ith train at time t, respectively, u_i(t) is the cooperative control input to be designed, that is, the acceleration of the ith train, a_{i,0}, a_{i,1}, and a_{i,2} are the basic operating resistance coefficients of the ith train, f_{i,s} is the ramp resistance caused by the route slope, f_{i,c} is the curve resistance caused by the route curve, and f_{i,t} is the tunnel resistance caused by the route tunnel. During actual train operation, the basic operating resistance coefficients a_{i,0}, a_{i,1}, a_{i,2} and the additional resistances f_{i,s}, f_{i,c}, f_{i,t} change with different trains, weather conditions, outside environments, and other factors; these parameters are extremely difficult to obtain precisely, resulting in parameter uncertainties in the train dynamics model. Here, f^+_{i,s}, f^+_{i,c}, and f^+_{i,t} denote the unknown upper bounds of the corresponding time-varying functions. Thus, in this paper, an adaptive control scheme is designed to identify the uncertain parameters and ensure the control performance of the multiple trains cooperative control system.
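A minimal simulation sketch of these dynamics with a forward-Euler step is given below; the resistance coefficients and the lumped additional resistance are illustrative placeholder values.

```python
def train_step(x1, x2, u, dt=0.1, a0=0.01, a1=5e-4, a2=2e-5, f_add=0.0):
    """One Euler step of Eq. (3): dx1/dt = x2 and
    dx2/dt = u - (a0 + a1*x2 + a2*x2**2) - f_add,
    where f_add lumps the ramp, curve, and tunnel resistances."""
    dx2 = u - (a0 + a1 * x2 + a2 * x2 ** 2) - f_add
    return x1 + dt * x2, x2 + dt * dx2
```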
In order to ensure that the trains accurately track the operation curve and meet the requirements of fixed station stops in the MTCC operation system, the upper and lower bounds of the train position output need to be strictly restricted, as defined below.
Lemma 1 ([23]). For the train operation control system with position error x̃_{i,1}(t) satisfying |x̃_{i,1}(0)| < k_i, if the associated barrier Lyapunov function remains bounded along the closed-loop trajectories, then |x̃_{i,1}(t)| < k_i for all t ≥ 0.
The design objectives of multiple trains distributed adaptive cooperative control are as follows.

• The position errors of all trains in the multiple trains cooperative operation are limited to a preset range, that is, |x̃_{i,1}(t)| < k_i, and each train can accurately track the desired speed and distance curve.

• The separation distance between each train and its neighboring trains in the MTCC system is kept within a predefined safety range.

• In the process of multiple trains tracking and cooperative operation, the speed of each train approaches the desired speed.
Control Law Design
In this section, two control schemes are proposed for multiple trains distributed adaptive cooperative control with POC and parameter uncertainties. The first is multiple trains adaptive cooperative control with POC based on neighboring trains' data; the second is multiple trains single-value learning adaptive cooperative control with POC. The second scheme is a novel single-parameter adaptive control scheme that requires only one online parameter adjustment and simplifies the structure of the controller. Defining f_i = a_{i,0} + f_{i,s} + f_{i,c} + f_{i,t}, the system model in Equation (3) can be rewritten as

$$\dot{x}_{i,1}(t) = x_{i,2}(t),$$
$$\dot{x}_{i,2}(t) = u_i(t) - a_{i,1} x_{i,2}(t) - a_{i,2} x_{i,2}^2(t) - f_i. \qquad (4)$$
Adaptive Cooperative Control Design of Multiple Trains with POC
For the MTCC system with POC and uncertain parameters, the distributed adaptive cooperative control algorithm based on the data of neighboring trains is designed as

$$u_i(t) = u_{i,1}(t) + u_{i,2}(t) + u_{i,3}(t), \qquad (5)$$

where u_{i,1}(t), given in Equation (6), is the position-output-constrained adaptive part with positive design parameters, x_{i,2d} is the desired speed output value, and α_i > 0 and β_i > 0 are designed control gains.
The component in Equation (6) identifies the unknown parameters of the train model, compensates the desired speed, and limits the position output, enabling the train to accurately track the desired speed and distance curve and achieving the first control objective. u_{i,2}(t) is used to maintain the safe separation distance between each train and its neighboring trains and is designed as the negative gradient of the artificial potential field function,

$$u_{i,2}(t) = -d_i \sum_{j \in N_i} \nabla_{x_{i,j}} U_{i,j}(x_{i,j}), \qquad (7)$$

where d_i > 0 is a control parameter and the artificial potential field function U_{i,j}(x_{i,j}) is introduced according to Definition 1.
Here l_1 is the minimum safe separation distance between two neighboring trains and l_2 is the maximum allowable separation distance. Equation (8) ensures that the separation distance between two neighboring trains is kept within the preset safety range (l_1, l_2), so that rear-end collisions between neighboring trains are avoided.

Remark 1. u_{i,2} is the negative gradient of the artificial potential field function U_{i,j}(x_{i,j}), and u_{i,2} acts on the MTCC system to decrease U_{i,j}(x_{i,j}). When $x_{i,j} = \sqrt{(l_1^2 + l_2^2)/2}$, the gradient of U_{i,j}(x_{i,j}) equals zero, and at this point u_{i,2} = 0; that is, the stable separation distance between two neighboring trains is $x_{i,j} = \sqrt{(l_1^2 + l_2^2)/2}$.

u_{i,3}(t) drives the speeds of the trains to consensus and is designed as

$$u_{i,3}(t) = -h_i \sum_{j \in N_i} a_{ij}\left(x_{i,2}(t) - x_{j,2}(t)\right), \qquad (9)$$

where h_i > 0 is a control parameter. Define the position tracking error x̃_{i,1}(t) = x_{i,1}(t) − x_{i,2d} t and the speed tracking error x̃_{i,2}(t) = x_{i,2}(t) − x_{i,2d}; then Equation (4) can be rewritten as the multiple trains error dynamics system (10).

Theorem 1. For a set of n urban rail trains with dynamics (4) operating on the same route under the adaptive cooperative controller (5) with POC, each train tracks the target speed-distance curve during operation, the separation distance between two neighboring trains is always kept within the safe range (l_1, l_2), and the speed of each train reaches the desired speed. That is, the closed-loop dynamic system satisfies the control objectives.
Proof of Theorem 1. The symmetric barrier Lyapunov function for the POC is chosen as

$$V_{i,1} = \frac{1}{2} \ln \frac{k_i^2}{k_i^2 - \tilde{x}_{i,1}^2(t)}.$$

According to Lemma 1, V_{i,1} is positive definite and well defined for |x̃_{i,1}(t)| < k_i. Differentiating V_{i,1}, and augmenting it with quadratic terms in the speed tracking error and the parameter estimation errors, yields the composite Lyapunov function V_i, whose derivative V̇_i is evaluated along the closed-loop trajectories.
Since β_i > 0, V̇_i is negative semi-definite, so V_i is non-increasing and bounded above by its initial value V_i(0); that is, V_i is bounded, which further ensures the boundedness of x̃_{i,1}, x̃_{i,2}, f̂_i, â_{i,1}, and â_{i,2}. Therefore, the control input u_{i,1} is bounded. It is further guaranteed that V̇_i(t) → 0 as t → ∞. According to linear control theory, the designed controller guarantees asymptotic speed and position tracking for each train, achieving tracking of the target speed-distance curve.
Based on the error dynamics Equation (10) for multiple trains operation, the following global positive definite Lyapunov function is selected: The differential of Q_1 is obtained as follows:
Rearranging the first term in Equation (17), Equation (17) can be expressed as follows: Thus, for U_{i,j} to satisfy continuity and boundedness, l_1 ≤ x_{i,j} ≤ l_2 must hold for any t ≥ 0; that is, the separation distance between neighboring trains must be kept within the safety range (l_1, l_2). Therefore, a safe separation distance is maintained between two neighboring trains, ensuring the safety of multiple trains operation.
Furthermore, according to Equation (19), if x̃_2(t) = 0 then Q̇_1(t) = 0. Therefore, for the error system of Equation (10), consider the set Ω = {x̃ : Q̇_1(t) = 0}. According to LaSalle's invariance principle, every solution of Equation (10) starting from any initial condition converges to the largest invariant set in Ω. Therefore, the speed of each train reaches the desired speed x_{i,2d}.
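The mechanism at the heart of this proof, a potential that diverges at the spacing limits and has zero gradient at the equilibrium spacing, is easy to verify numerically. The sketch below uses one candidate barrier potential with the stated properties; this specific functional form is an assumption, since Definition 1 is not reproduced in this excerpt.

```python
import numpy as np

def potential(x, l1=400.0, l2=600.0):
    # Candidate artificial potential field: grows without bound as the
    # spacing x approaches either boundary l1 or l2 (an assumed form chosen
    # only to reproduce the properties stated in the text).
    return 1.0 / (x**2 - l1**2) + 1.0 / (l2**2 - x**2)

def grad_potential(x, l1=400.0, l2=600.0):
    # Analytic gradient dU/dx; per Remark 1 the spacing control term is the
    # negative gradient, u2 = -d_i * dU/dx.
    return -2.0 * x / (x**2 - l1**2) ** 2 + 2.0 * x / (l2**2 - x**2) ** 2

x_star = np.sqrt((400.0**2 + 600.0**2) / 2.0)   # ~509.9, the stable spacing
print(f"equilibrium spacing: {x_star:.1f}, gradient there: {grad_potential(x_star):.2e}")
for x in (400.5, x_star, 599.5):                # steep growth near the limits
    print(f"x = {x:6.1f}  U(x) = {potential(x):.3e}")
```

With l_1 = 400 and l_2 = 600 the equilibrium spacing evaluates to about 509.9, consistent with the stable separation distance √((l_1² + l_2²)/2) quoted in Remark 1.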
Single-Value Learning Adaptive Cooperative Control Design of Multiple Trains with POC
In order to simplify the structure of the controller, this subsection proposes a novel single-parameter adaptive control method. By selecting a new type of parameter to be estimated, only one online parameter adjustment is required, which effectively improves the engineering practicability of the controller.
Based on the data of neighboring trains, the adaptive cooperative control law for multiple urban rail trains is designed as follows: where Θ_i is a lumped parameter defined in terms of {a_{i,1}, a_{i,2}}, Θ̂_i is the estimated value of Θ_i, and the adaptive law of Θ̂_i is designed as follows: where θ_i > 0 is the control parameter.
Theorem 2. For an ordered collection of n trains, the distributed controller in Equation (20) and the adaptive law in Equation (21) are designed so that each train can track the target speed-distance curve, the separation distance between two neighboring trains is always kept within the safe range (l_1, l_2), and the speed of each train reaches the desired speed. That is, the closed-loop dynamic system satisfies the control objective.
Proof of Theorem 2. Define the parameter estimation error Θ̃_i = Θ̂_i − Θ_i. Based on the coordinates x_{i,1}, x_{i,2}, x_{j,2}, Θ̃_i, and U_{i,j}, the dynamics of the multiple trains closed-loop system can be obtained as follows: The following global positive definite Lyapunov function is selected: The differential of Q_2 is obtained as follows: According to Equation (18), Equation (24) becomes:
where x̃_2 = (x̃_{1,2}, x̃_{2,2}, ···, x̃_{n,2})^T. The specific analysis process is similar to Section 3.1. From the Laplacian matrix L ≥ 0 and h_i > 0, we know that Q̇_2 ≤ 0. Integrating Equation (25) from 0 to t shows that Q_2 is bounded; furthermore, from Equation (23) we know that U_{i,j} is bounded. Thus, for U_{i,j} to satisfy continuity and boundedness, l_1 ≤ x_{i,j} ≤ l_2 for any t ≥ 0; that is, the distance between two trains is kept within the safe range (l_1, l_2). Therefore, a safe separation distance is maintained between two neighboring trains, ensuring the safety of multiple trains tracking operation.
To enhance the robustness of the controller, based on σ-modification robust adaptive control theory [25], the modified adaptive laws for the parameter estimates are as follows:
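As a generic illustration of the σ-modification idea (the exact adaptive laws above are not recoverable from this excerpt, so the update below is the textbook σ-modified law applied to a toy scalar plant, not the paper's design), the leakage term −σΘ̂ keeps the single online estimate bounded:

```python
import numpy as np

# Toy scalar uncertain plant xdot = theta*phi(x) + u with one adapted
# parameter; plant, regressor, and gains are illustrative assumptions.
theta_true = 0.8
gamma, sigma, k = 5.0, 0.1, 2.0      # adaptation gain, leakage, feedback gain
dt, steps = 1e-3, 10_000
x, theta_hat = 1.0, 0.0

for _ in range(steps):
    phi = x + 0.1 * x * abs(x)       # assumed regressor
    u = -k * x - theta_hat * phi     # certainty-equivalence control law
    x += dt * (theta_true * phi + u)
    # sigma-modification: the -sigma*theta_hat leakage bounds the estimate
    # even under persistent disturbance; only this one parameter is adapted.
    theta_hat += dt * gamma * (phi * x - sigma * theta_hat)

print(f"x = {x:.5f}, theta_hat = {theta_hat:.3f} (true value {theta_true})")
```

The single-parameter structure mirrors the point of the single-value learning scheme: one scalar update law, regardless of how many uncertain physical parameters the lumped bound Θ covers.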
Simulation Results
In order to verify the effectiveness of the designed MTCC schemes, five trains are applied to the cooperative operation. The operating resistances of the five trains were set as 0.3 + 0.004x_{1,2} + 0.00016x_{1,2}², 0.6 + 0.002x_{2,2} + 0.00004x_{2,2}², 0.4 + 0.0025x_{3,2} + 0.0001x_{3,2}², 0.5 − 0.003x_{4,2} − 0.00024x_{4,2}², and 0.18 − 0.0015x_{5,2} − 0.0002x_{5,2}². The operating route parameters were taken from the literature [26]. The minimum safe separation distance between two neighboring trains was set as l_1 = 400, and the maximum allowable separation distance was set as l_2 = 600 (these can be adjusted to any constants according to practical operating conditions such as train braking distance, safety redundancy distance, and train length). The separation distance between any two neighboring trains at the beginning of the operation was set as x_{i,j} = √((l_1² + l_2²)/2). According to the control objective of the designed MTCC algorithm, the initial speed of each train was set as x_{i,2}(0) = x_{i,2d}(0). The parameter k_i = 0.2 limits the train position output value (it can be chosen according to the required position output accuracy).
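For reference, the sketch below encodes the five quoted Davis-type resistance polynomials and steps a point-mass train model forward in time; the double-integrator form ẋ_1 = x_2, ẋ_2 = u − f_i(x_2) is an assumption standing in for the paper's Equation (4), which is not reproduced here.

```python
# Davis-type running resistances f_i(v) = c0 + c1*v + c2*v^2, as listed above.
COEFFS = [
    (0.30,  0.0040,  0.00016),
    (0.60,  0.0020,  0.00004),
    (0.40,  0.0025,  0.00010),
    (0.50, -0.0030, -0.00024),
    (0.18, -0.0015, -0.00020),
]

def resistance(i, v):
    c0, c1, c2 = COEFFS[i]
    return c0 + c1 * v + c2 * v**2

def step(pos, vel, u, i, dt=0.1):
    # Assumed point-mass train model: x1dot = x2, x2dot = u - f_i(x2).
    return pos + dt * vel, vel + dt * (u - resistance(i, vel))

# Example: train 1 (index 0) accelerating from rest under constant traction.
pos, vel = 0.0, 0.0
for _ in range(100):                 # 10 s of simulated time
    pos, vel = step(pos, vel, u=1.0, i=0)
print(f"after 10 s: position = {pos:.1f}, speed = {vel:.2f}")
```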
For the multiple trains adaptive cooperative control with POC, the initial parameter values were set as f̂_1(0) = 0.3, f̂_2(0) = 0.6, f̂_3(0) = 0.4, f̂_4(0) = 0.5, f̂_5(0) = 0.18, â_{i,1}(0) = 0, â_{i,2}(0) = 0. The control parameters were fine-tuned using a trial-and-error scheme and chosen as α_1 = α_2 = α_3 = α_4 = 1, α_5 = 1.2, and β_i = 0.05. For the multiple trains single-value learning adaptive cooperative control with POC, the initial parameter values were set as Θ̂_1(0) = 0.4, Θ̂_2(0) = 0.8, Θ̂_3(0) = 0.5, Θ̂_4(0) = 0.7, Θ̂_5(0) = 0.4, and the control parameters were likewise fine-tuned by trial and error. The simulation results of the multiple trains adaptive cooperative control with POC are shown in Figures 2-6. Figures 2 and 3 show the position tracking and position separation distance error curves of each train in the MTCC system, respectively. Figures 4 and 5 show the speed tracking and its error curve of each train in the MTCC system, respectively. Figure 6 shows the corresponding control input curve.
As can be seen from Figure 2, the first train achieves accurate tracking of the desired position outline. Meanwhile, the following trains also show fine tracking performance and maintain the set safe separation distance between neighboring trains, which means the position tracking performance of the MTCC controller is excellent.
As can be seen from Figure 3, the separation distance error of each train stays within the allowable stopping-accuracy range of less than 0.2 m. The slight separation distance errors between neighboring trains ensure the stability of the multiple trains operation and realize effective multiple trains cooperative control. It can also be observed that the position separation distance error does not propagate backward, i.e., x̃_{5,1} < x̃_{4,1} < x̃_{3,1} < x̃_{2,1} < x̃_{1,1}; the position error converges to a small value and the system performance is good.
From Figures 4 and 5, it can be seen that the developed control algorithm achieves high-accuracy speed tracking, and the speed of each train stays close to the desired speed, which effectively guarantees the safety of multiple trains cooperative operation. In particular, the speed control accuracy in cruise mode shows obvious advantages.
The multiple trains control input curves of the proposed control algorithm, i.e., the acceleration curves of the multiple trains operation, are given in Figure 6. The control input curve is relatively smooth overall; when the trains run from 52 s to 248 s, the controller quickly adjusts the control input to maintain control accuracy and overcome concentrated disturbances such as large gradients and complex curves. The magnitude of this control input change matches the performance of train traction and braking. Therefore, the developed control algorithm yields good position and speed tracking results and achieves the design objective of the distributed control laws for the cooperative operation of multiple trains.
The simulation results of the multiple trains single-value learning adaptive cooperative control with POC are shown in Figures 7-12. By applying the single-value learning adaptive cooperative control in Equation (20) to each train in the multiple trains cooperative system with position output constraints, the evolution of the adaptive parameters shown in Figure 7 is obtained. It can be seen from Figure 7 that all the adaptive parameters converge to constants, which indicates the effectiveness of the designed single-value learning adaptive control algorithm in identifying the unknown parameters. The position tracking and error curves of each train in the MTCC are given in Figures 8 and 9, respectively. From these curves, it can be seen that the first train tracks the desired position curve with high precision, the following trains keep the preset safe separation distance from their neighboring trains with minor error, and the tracking error of each train is within the tolerance range, which guarantees the safety of multiple trains cooperative operation and realizes the MTCC.
The speed and speed tracking error curves of each train are given in Figures 10 and 11, respectively. From Figures 10 and 11, it can be seen that the speed tracking performance is good, and each train automatically regulates its control input according to the data of the neighboring trains, as shown in Figure 12. In the section with sudden changes of desired speed and a complex route in Figure 10, i.e., when the trains run from 52 s to 248 s, the speed tracking error of each train (Figure 11) fluctuates, but the speed error of the preceding trains does not affect the speed control of the following trains. The speed tracking error decreases with train number, i.e., the fluctuation of the fifth train's speed tracking error is the smallest, which ensures the stability of multiple trains cooperative operation. In other acceleration and deceleration sections, the speed error of each train is significantly reduced; in particular, in the cruise section, the speed error is almost zero, showing good control accuracy. As can be seen from Figure 12, when the trains start, the controller applies a large control input to overcome the external resistance. When the trains run normally, the control input remains stable. From 52 s to 248 s, the controller adjusts the control input to maintain control accuracy. The control inputs in other sections are relatively stable, even in acceleration and deceleration sections, and the fluctuation is small enough to preserve passenger comfort. Thus, in the multiple trains single-value learning adaptive cooperative control system with POC, the trains can regulate their own operation status according to the operation data of the neighboring trains, which effectively ensures the safety of multiple trains cooperative operation, reduces the complexity of the controller, and provides a reference for the development and application of the new generation of MTCC systems. In addition, in order to demonstrate the performance and engineering practicality advantages of the controller proposed in this paper, the PID control algorithm extensively applied to train operation control systems (such as the Beijing Metro Yizhuang Line) was selected for comparison. The PID controller structure follows reference [27], with the parameters set as K_{p,1} = 0.05, K_{i,1} = 0.01, K_{d,1} = 0.03, K_{p,2} = 0.15, K_{i,2} = 0.03, K_{d,2} = 0.013, K_{p,3} = 0.13, K_{i,3} = 0.035, K_{d,3} = 0.014, K_{p,4} = 0.11, K_{i,4} = 0.032, K_{d,4} = 0.012, K_{p,5} = 0.1, K_{i,5} = 0.03, K_{d,5} = 0.02. Figures 13 and 14 give the position tracking errors and speed tracking errors under the PID controller. It can be seen from Figure 13 that the position tracking error of the PID controller is distributed in the range of [−1.41, 1.58], which is relatively large as a whole.
Due to the lack of an adaptive mechanism, the PID controller is weak at handling changes in train running resistance and the external environment, and the decrease of position tracking error with train number is not obvious. The overall position error is much larger than the 0.2 m achieved by the method proposed in this paper, with continuous fluctuation. It can be seen from Figure 14 that the speed error of the PID controller varies widely: the speed error of the first train is the largest, and that of the fifth train is smaller, but there are fluctuations overall. Thus, the PID controller is poor at adjusting a train's operation state according to the operation information of adjacent trains and lacks adaptability to changes in train operation resistance and the external environment, so it is difficult to ensure the stability of multiple trains cooperative operation. By contrast, the multiple trains cooperative control method designed in this paper has good control performance and self-adaptability, and effectively ensures the safety of multiple trains cooperative operation.
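For reproducibility of the benchmark, a discrete positional PID with the gains listed above might be implemented as below; the exact controller structure of reference [27] is not reproduced here, so this standard form is an assumption.

```python
# Per-train PID gains (Kp, Ki, Kd) as listed in the text.
PID_GAINS = {
    1: (0.05, 0.010, 0.030),
    2: (0.15, 0.030, 0.013),
    3: (0.13, 0.035, 0.014),
    4: (0.11, 0.032, 0.012),
    5: (0.10, 0.030, 0.020),
}

class PID:
    """Standard discrete positional PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controllers = {i: PID(*gains) for i, gains in PID_GAINS.items()}
print(f"train 1 input for a 0.5 m position error: {controllers[1].control(0.5):.4f}")
```

Unlike the adaptive schemes, these gains are fixed offline, which is exactly the lack of an adaptation mechanism discussed above.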
Conclusions
In this paper, the cooperative operation control schemes for a class of nonlinear urban rail transit train systems are studied, and two position output-constrained adaptive control methods based on multi-agent collaboration are proposed. A set of orderly running urban rail trains is considered as a multi-agent system. The position output constrained adaptive learning controller uses the data of the train itself as well as the neighboring trains to enable the position and speed of each train to track the desired position and speed trajectories. The train position errors of the two MTCC algorithms with POC meet a stopping accuracy of less than 0.2 m, and the position tracking error of the following trains is not affected by the position error of the preceding trains and gradually approaches zero, which ensures the stability of the multiple trains tracking operation. The introduction of the potential function ensures that the separation distance between any two neighboring trains is always kept within the designed safety range, which can be flexibly adjusted according to the train braking distance, safety redundancy distance, train length, and operation interval. The two position output constrained adaptive control methods based on multi-agent cooperation proposed in this paper can not only ensure the tracking operation safety of multiple urban rail trains, but also flexibly adjust the train operation interval and effectively improve the operation efficiency of the urban rail system. In particular, the single-value learning adaptive cooperative control method with POC requires only one online parameter adjustment, which guarantees the engineering practicality of the algorithm while minimizing the structural complexity of the MTCC algorithm, and will promote the practical application of theoretical work on urban rail cooperative control. Additionally, the designed method tolerates irregular event disturbances within a certain range.
Although this paper has conducted in-depth research on the cooperative control problem and the model uncertainty problem for multiple urban rail trains, and achieved initial results, the cooperative control problem for adjusting the operation status of urban rail trains still needs further study, owing to the complexity of the cooperative adjustment process and the time-varying characteristics of the train operation environment. Combined with practical engineering, future research will focus on the following areas:
• Multiple trains adaptive cooperative anti-disturbance control with position output constraints under model uncertainty and longer-lasting external multi-source disturbances. The dynamic behavior of multiple trains is affected by many factors, including not only their own traction and braking forces but also air resistance, tunnel resistance, ramp resistance, route condition changes, the running state of the preceding trains, and other random interference factors. It is therefore very difficult to establish an accurate model that captures so many factors, and at the same time a model that considers too many factors becomes overly complicated. Therefore, while ensuring the simplicity of the control algorithm, designs that account for model uncertainties and longer-lasting disturbances are the key to further improving engineering practicability.
• Research on active fault-tolerant control of the running-state adjustment process of urban rail multiple trains. Drift of train sensor parameters, delays in the transmission of multiple trains interaction data, and packet loss are unavoidable. Active fault-tolerant control that takes into account train sensor accuracy and communication channel capacity limitations will help to improve the reliability and applicability of multiple trains cooperative control.
• Research on more scenarios of multiple trains cooperative operation. This paper considers only the typical operation scenarios in multiple trains cooperative operation; other modes of actual train operation are not analyzed. Further exploration of multiple trains cooperative operation in other scenarios is needed to ensure the engineering practicality of the designed algorithm, which will promote the practical application of the related theoretical work.
Author Contributions: Analysis, conceptualization, design, methodology, investigation, and writing, J.Y.; resources, supervision, and review, Y.Z.; conceptualization, investigation, and resources, J.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by Gansu Provincial Department of Education: Excellent Postgraduate "Innovation Star" Project, grant number 2021CXZX-552.
Use of a flexible optical fibre bundle to interrogate a Fabry–Perot sensor for photoacoustic imaging
Photoacoustic imaging systems based on a Fabry Perot (FP) ultrasound sensor that is read-out by scanning a free-space laser beam over its surface can provide high resolution photoacoustic images. However, this type of free-space scanning usually requires a bulky 2-axis galvanometer based scanner that is not conducive to the realization of a lightweight compact imaging head. It is also unsuitable for endoscopic applications that may require complex and flexible access. To address these limitations, the use of a flexible, coherent fibre bundle to interrogate the FP sensor has been investigated. A laboratory set-up comprising an x-y scanner, a commercially available, 1.35 mm diameter, 18,000 core flexible fibre bundle with a custom-designed telecentric optical relay at its distal end was used. Measurements of the optical and acoustic performance of the FP sensor were made and compared to that obtained using a conventional free-space FP based scanner. Spatial variations in acoustic sensitivity were greater and the SNR lower with the fibre bundle implementation but high quality photoacoustic images could still be obtained. 3D images of phantoms and ex vivo tissues with a spatial resolution and fidelity consistent with a free-space scanner were acquired. By demonstrating the feasibility of interrogating the FP sensor with a flexible fibre bundle, this study advances the realization of compact hand-held clinical scanners and flexible endoscopic devices based on the FP sensing concept.
Introduction
Photoacoustic tomography (PAT) is a non-invasive imaging technique that combines the high spatial resolution of ultrasonography with high optical absorption contrast to visualize the structure and function of tissues to centimeter-scale depths [1][2][3]. PAT utilizes the specific absorption features of chromophores such as haemoglobin to visualise vascular structures [4], to obtain physiologically relevant functional information such as oxygen saturation [5,6], or to exploit lipid absorption to characterise vulnerable atherosclerotic plaques [7,8]. A variety of photoacoustic imaging scanners based on the use of piezoelectric detectors have been demonstrated [1]. However, for short-range high-resolution imaging in widefield PA tomography mode [1], the limited acoustic bandwidth and insufficiently fine spatial sampling of piezoelectric-based detection schemes can compromise image quality. An alternative detection method that can address these limitations is based upon the use of a Fabry-Perot (FP) polymer film ultrasound sensor [9]. In this approach, the incident acoustic field is mapped by using an interrogation laser beam to optically address different spatial points on the FP sensor. Two distinct implementations of this sensing concept have been described previously. The first and most common scheme is one in which a free-space focused interrogation laser is scanned across the surface of the FP sensor using a two-axis galvanometer-based conjugate scanner; this approach forms the basis of a range of non-invasive pre-clinical [10][11][12] and clinical photoacoustic imaging [13] instruments. The second embodiment involves reading out the FP sensor by delivering the interrogation beam via the individual cores of a rigid fibre bundle to realise a miniature endoscopic probe [14]. Both configurations have their limitations, particularly for clinical use. With the free-space FP scanner, the large size and weight of the galvanometers and conjugation optics make it challenging to realise a compact lightweight hand-held imaging head. With the rigid fibre bundle approach, the range of endoscopic applications is limited to those where line-of-sight access is available. The limitations of both methods could be mitigated if the FP sensor is interrogated using a flexible fibre bundle. This would obviate the need for a bulky proximal-end scanner, enabling a compact lightweight probe head for non-invasive imaging to be realised. For endoscopic use, it would provide the flexibility required to reach anatomical sites that are inaccessible with a rigid probe.
The aim of the current study is to advance towards the above implementations by investigating the feasibility of interrogating the FP sensor using an inexpensive, commercially available flexible fibre bundle. This involved addressing several distinct technical challenges that do not arise with a free-space interrogated FP sensor. When using a fibre bundle, an optical scanner with higher spatial positioning accuracy is needed to individually address each core of the bundle. An additional optical system is also required in the form of a distal-end relay to achieve a sufficiently large FOV and an optimal interrogation beam spot size. Moreover, there are several potential sources of SNR degradation that are unique to a fibre bundle implementation, particularly when using commercially available fibre bundles [16] which are not designed for the commonly used 1500 nm − 1600 nm FP sensor interrogation wavelength range. In order to explore these issues and assess their impact on photoacoustic imaging performance, a bench-top experimental arrangement comprising an FP sensor interrogated using a flexible fibre bundle has been developed. This was used to assess optical and acoustic performance and compare it to that of a conventional free-space FP scanner [9]. In addition, the ability to acquire photoacoustic images using the system was demonstrated by imaging a range of tissue mimicking phantoms and ex vivo tissues.
Experimental setup
In order to investigate the use of a flexible fibre bundle to read-out the FP sensor and establish its photoacoustic imaging performance, the experimental set-up in Fig. 1 was used. It comprises a flexible coherent fibre-optic bundle with an x-y galvanometer based optical scanner at its proximal end and a telecentric lens relay and a planar Fabry-Perot (FP) ultrasound sensor at its distal end. The scanner is used to scan the interrogation beam from core to core at the proximal end of the bundle in order to address different spatial points on the sensor and map the distribution of photoacoustic signals generated in the target. Previously we described a probe in which the FP sensor was deposited directly on to the tip of a 3.2 mm diameter rigid fibre bundle and a similar core-to-core scanning read-out approach employed [14]. However, depositing the FP sensor on to the tip of the 1.35 mm diameter flexible fibre bundle used in the current study would result in a small acoustic aperture that compromises the PA image quality. In addition, the small core diameter of the bundle used would produce an interrogation beam spot size that is smaller than desirable and compromise sensitivity. The use of the distal end telecentric relay between the fibre bundle tip and the FP sensor addresses these issues.
The fibre bundle (Schott Inc.) was a commercially available leached fibre bundle comprising 18,000 step-index fibre-optic cores (0.393 NA). The core/cladding diameter of each fibre was 6.7 µm/10.6 µm, respectively. In addition, a 0.17 µm-thick second cladding remaining from the leaching process separated each fibre; thus, the total core-to-core spacing was 10.6 µm. The diameter of the bundle was 1.35 mm and it was protected by a flexible plastic coating with a diameter of 2.2 mm. Since a laser source with narrow linewidth is used to interrogate the FP sensor, Fresnel reflections from the bundle endfaces interfere with the reflections from the FP sensor and produce parasitic interference, which acts as a source of noise. To suppress the detection of Fresnel reflections, both bundle endfaces were angle polished and wedged, as described in Appendix A.
The telecentric lens relay comprises two achromatic doublet pairs and projects a magnified image of the bundle endface onto the FP sensor. Thus, when the interrogation beam is scanned over the proximal end of the bundle, corresponding points on the FP sensor are optically addressed. The relay serves two purposes. Firstly, it magnifies the lateral field-of-view (FOV) by a factor of f_2/f_1, where f_1 and f_2 are the focal lengths of lenses L_1 and L_2 as shown in Fig. 1; in this way, a FOV arbitrarily larger than the bundle diameter can be realized by appropriate choice of the relay lens pair focal lengths. Secondly, it reduces the numerical aperture of the laser interrogation beam at the FP sensor by the same factor and reduces the beam walk-off, an important requirement for achieving the high fringe visibility and finesse required for optimum sensitivity [15]. Experiments were conducted using magnifications of 4.5× and 7.5×, attained by using focal lengths f_1 = 16 mm (12.5 mm dia.) and f_2 = 75 and 125 mm (50.8 mm dia.). These magnifications resulted in 30 µm and 50 µm interrogation beam spot diameters at the FP sensor surface, and 6 mm and 10 mm diameter FOVs, respectively.
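As a quick numeric check, the quoted FOVs and spot sizes follow from the relay magnification M = f_2/f_1 applied to the bundle and core diameters; the values in the text are nominal, so small rounding differences remain.

```python
# Relay magnification M = f2/f1 scales both the field of view
# (bundle diameter x M) and the interrogation spot (core diameter x M).
f1 = 16.0                # mm, focal length of L1
bundle_diameter = 1.35   # mm
core_diameter = 6.7      # um

for f2 in (75.0, 125.0):  # mm, the two L2 options used
    M = f2 / f1
    print(f"f2 = {f2:5.1f} mm: M = {M:.2f}, "
          f"FOV = {M * bundle_diameter:.1f} mm, spot = {M * core_diameter:.0f} um")
```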
The interrogation laser was a CW wavelength-tunable external cavity laser (1550 nm center wavelength, Tunics T100s-HP, Yenista Optics) in conjunction with a 100 mW EDFA (Pritel FA-23). The high power provided by the EDFA is required to compensate for the losses incurred by the optical setup; the total one-way loss is approximately 90% and is made up of the Fresnel reflections at the fibre bundle endfaces and the relay optics combined (15%) and the intrinsic attenuation of the fibre bundle (85%). The latter is high because the core and cladding diameters of the specific bundle used are optimized for use at visible wavelengths [16]. However, at the interrogation laser wavelength of 1550 nm, the 6.7 µm core diameter and thin 1.78 µm cladding thickness result in weak confinement and thus significant attenuation. The output of the EDFA was coupled into the fibre cores via the x-y scanner and an achromatic doublet scan lens. The light reflected back from the FP sensor is coupled back into the bundle and directed via a single-mode fibre-optic circulator on to an InGaAs photodiode-amplifier unit with a bandwidth (−3 dB) extending from 0.5 MHz to 50 MHz, connected to a 250 MS/s digitizer with a 125 MHz analogue bandwidth (not shown).
The FP sensor was fabricated on a 10 mm-thick PMMA substrate by depositing a 15 µm-thick transparent polymer spacer layer (Parylene C) between two dielectric mirror coatings. The mirror coatings were designed for high reflectivity in the spectral range 1500 nm to 1600 nm, where the FP sensor is interrogated, and high transmission in the visible and near-infrared region [9]. This allows for transmission of excitation wavelengths through the FP sensor for backward-mode photoacoustic imaging. The −3 dB acoustic detection bandwidth of the sensor used was 53 MHz [17].
Two different excitation laser sources were used to generate photoacoustic waves in the sample. For phantom experiments, the excitation source was a Q-switched Nd:YAG laser (Ultra, Big Sky) that emits 7 ns pulses at 1064 nm with a 20 Hz pulse repetition frequency (PRF). To acquire images in biological tissues, a tunable (410-2100 nm) optical parametric oscillator (OPO) based laser system (Innolas Spitlight 600) emitting 7 ns pulses at 30 Hz PRF was used. In both cases, the excitation light was delivered via a multimode optical fibre. The output of the fibre was directed using a dichroic mirror through the sensor and on to the sample with a spot diameter of approximately 20 mm.
To acquire an image, the proximal end of the bundle was optically scanned from core to core. At each core position, the interferometer transfer function (ITF) [9], the relationship between the optical power reflected from the FP sensor and the interrogation wavelength, is acquired. The interrogation laser wavelength is then set to the point of maximum slope on the ITF. Under these conditions, an acoustic wave (generated by the absorption of a pulse of laser light in the target) incident on the FP sensor will modulate its optical thickness. This results in a corresponding modulation in the reflected power of the interrogation laser beam, which is transmitted back through the relay, along the fibre core, and detected by the photodiode. Following acquisition of the photoacoustic signals from all 18,000 cores (which are arranged in a hexagonal pattern), the measured data is interpolated on to a uniform rectilinear grid before reconstructing the PA images using a time reversal algorithm [18,19].
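The hexagonal-to-rectilinear interpolation step might look as follows with SciPy; the coordinate arrays, grid spacing, and placeholder data are assumptions, and the time-reversal reconstruction itself is not shown.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# core_xy: (18000, 2) sensor-plane coordinates of the imaged cores [mm];
# signals: (18000, n_t) photoacoustic time series, one row per core.
# Random placeholders stand in for the measured data described above.
rng = np.random.default_rng(0)
core_xy = rng.uniform(0.0, 10.0, size=(18000, 2))
signals = rng.normal(size=(18000, 256))

# Triangulate the hexagonal core layout once, then interpolate every time
# sample onto the uniform rectilinear grid required by time reversal.
interp = LinearNDInterpolator(core_xy, signals, fill_value=0.0)
dx = 0.05                                             # assumed grid step [mm]
xg, yg = np.meshgrid(np.arange(0.0, 10.0, dx), np.arange(0.0, 10.0, dx))
grid_data = interp(xg, yg)                            # shape (ny, nx, n_t)
print(grid_data.shape)
```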
For comparison purposes, a free-space FP scanner, similar to that described in [9] by Zhang et al., was used as a benchmark. With this system, the fibre bundle and the relay are not used and the sensor is directly read out by scanning a focused 50 µm diameter interrogation laser beam propagating in free-space over the surface of the FP sensor [9]; from here on, this will be referred to as the "free-space FP scanner". When using this scanner for comparison with the fibre bundle system, the interrogation laser power was adjusted so that the mean power recorded by the photodiode at the bias wavelength is approximately the same as that for the fibre bundle system; this compensates for the optical losses of the latter which are not present with the free-space scanner.
Results
In this section, the influence of the fibre bundle on the optical and acoustic performance of the FP sensor is investigated and compared to that achieved using the free-space FP scanner [9]. In addition, the photoacoustic imaging performance is assessed by measuring the line-spread function and imaging a range of tissue-mimicking phantoms and ex vivo biological tissues.
Round-trip coupling efficiency distribution
When using a fibre bundle to interrogate the sensor, it is desirable that the efficiency with which the interrogation beam is coupled into a specific core of the bundle, transmitted to the FP sensor via the relay, and then re-coupled back into the core is similar for all cores. This requires a precision x-y scanner that can provide sub-micron positional accuracy over the entire 1.35 mm diameter of the bundle. It also requires that the distal-end relay introduces negligible off-axis distortion to the beam. To assess the extent to which these requirements are met, the relative round-trip coupling efficiency of each core was measured. To achieve this, the interrogation laser wavelength was tuned to a point of zero derivative on the ITF. At this wavelength, the reflectivity of the FP sensor is independent of its optical thickness, so the sensor acts as a mirror of spatially uniform reflectance. All 18,000 cores of the bundle were then individually addressed by scanning the interrogation beam over the proximal end of the bundle and recording the mean photodiode voltage (V_dc) at each core location. Figure 2(a) shows the results of such a scan, where the greyscale intensity represents V_dc, and Fig. 2(b) shows a histogram of the same data. For comparison, this histogram also shows data acquired using the free-space FP scanner to scan the same sensor over the same area and number of spatial points. Figure 2(b) shows that for the fibre bundle case there is significant variation in V_dc, with a standard deviation 4 times higher than in the free-space case. This is due to differences in core-to-core input coupling efficiency and off-axis aberrations in the lens relay, which do not arise in the free-space case. Fig. 2. (a) Scan acquired using the set-up in Fig. 1. The greyscale intensity represents the measured photodiode voltage, V_dc, and provides a measure of the relative round-trip coupling efficiency of the system. (b) Histogram of data in (a) (V_dc corresponds to the core centres) and that acquired from an identical scan of the same FP sensor over the same area and number of spatial points (18,000) using the free-space FP scanner. The vertical axis in (b) represents the number of spatial points of value V_dc expressed as a percentage of the total number of points scanned (18,000).
ITF measurement
For high sensitivity, the design of the FP sensor (i.e., mirror reflectivities, spacer thickness, etc.) should be such that it provides high finesse and visibility. However, even if the FP sensor possesses these inherent characteristics, sensitivity can still be compromised if the measurement of the ITF is corrupted by noise introduced by the optical system. This is because such noise can distort the ITF to the extent that the optimum bias wavelength, λ_b, can no longer be accurately identified. This has not been observed to be a significant issue when using a free-space beam to interrogate the sensor [9]. However, use of a fibre bundle can introduce noise on the ITF that does not arise in the free-space case, especially if the core/cladding parameters are designed for visible wavelengths but longer wavelengths in the near infrared are used, as in the current study; the fibre bundle that was used is specified for the 400 nm-600 nm range but is used to transmit the 1500 nm-1600 nm sensor interrogation light. At 1550 nm, for example, the V number of the bundle is 5.34, and the number of guided modes is approximately 14; thus the fibre is weakly multimodal. Wavelength-dependent interference between these multiple propagating modes can arise and introduce spurious baseline fluctuations in the ITF. Similar fluctuations can also be caused by wavelength-dependent optical coupling between adjacent cores. This is a distinct possibility given the small (1.78 µm) cladding thickness separating individual cores relative to the 1500 nm-1600 nm wavelengths used. To investigate the impact of these influences, ITFs were acquired at each of the 18,000 core locations. For comparison, ITFs were also acquired at 18,000 points over the same area on the same sensor using the free-space FP scanner. Example ITFs acquired at a single point using both systems are shown in Fig. 3 and are of similar overall shape, suggesting the fibre bundle does not significantly distort the intrinsic ITF. This is further evidenced by the measured visibility and finesse obtained from the 18,000 ITFs acquired using both configurations. For the bundle, the mean visibility and finesse were 0.63 and 156.8, respectively; for the free-space scanner, they were 0.65 and 155.3. However, although these results suggest the fibre bundle does not negatively impact the intrinsic interferometric characteristics of the sensor, Fig. 3 shows that it does introduce significant baseline fluctuations, most likely due to the above-mentioned inter-modal interference and crosstalk effects. In principle, this could compromise the accurate identification of λ_b. However, previous experience using a rigid bundle [14], which is afflicted by a similar level of ITF baseline noise, suggests this is not significant if a noise-free ITF, obtained by fitting a Lorentzian to the measured ITF, is used to determine λ_b; as described in section 2, this is the approach employed in the current study. Fig. 3. Interferometer transfer function (ITF) of the FP sensor interrogated by a single core of the fibre bundle (orange) and its Lorentzian fit (gray). The ITF of the same sensor interrogated by a free-space beam is also shown (blue) for comparison.
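A sketch of this bias-point selection, fitting a Lorentzian to a noisy per-core ITF and biasing at the point of maximum slope of the fit, is given below; the dip model, wavelength grid, and synthetic noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, lam0, w, depth, offset):
    # Reflection-mode ITF modelled as a Lorentzian dip centred at lam0 with
    # half-width w (a common approximation for an FP resonance).
    return offset - depth * w**2 / ((lam - lam0) ** 2 + w**2)

# Synthetic noisy ITF standing in for a measured per-core wavelength sweep.
lam = np.linspace(1549.0, 1551.0, 400)
rng = np.random.default_rng(1)
itf = lorentzian(lam, 1550.0, 0.05, 0.6, 1.0) + 0.02 * rng.normal(size=lam.size)

popt, _ = curve_fit(lorentzian, lam, itf, p0=(1550.0, 0.1, 0.5, 1.0))
fit = lorentzian(lam, *popt)

# Bias at the maximum-slope point of the noise-free *fitted* ITF, where a
# small acoustically induced thickness change gives the largest power change.
slope = np.gradient(fit, lam)
lambda_b = lam[np.argmax(np.abs(slope))]
print(f"resonance centre = {popt[0]:.4f} nm, bias wavelength = {lambda_b:.4f} nm")
```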
Acoustic SNR
For a given FP sensor interrogated using a fibre bundle, the critical question is whether the acoustic SNR will be lower than that achieved when using a free-space beam to interrogate the same sensor. To assess this, the output of a 25 mm diameter, 3.5 MHz planar ultrasound transducer that emitted a plane wave with an amplitude variation of less than 5% over a 15 × 15 mm² area was directed on to the FP sensor. The peak positive amplitude of the acoustic waveform (which is taken to represent the signal) and the RMS noise (over a 20 MHz measurement bandwidth) were measured at different points on the sensor using firstly the fibre bundle and then the free-space FP scanner [9]. In both cases, an identical circular area of 10 mm diameter was scanned and measurements were made over the same number of spatial points (18,000). For the free-space FP scan, the interrogation laser power was adjusted so that the mean dc voltage measured by the photodiode at the bias wavelength was approximately the same as in the fibre bundle case to ensure a fair comparison.
The signal, noise and SNR distributions are shown in the histograms in Fig. 4. First consider the signal distribution (Fig. 4(a)). The mean signal is similar for both free-space and fibre bundle configurations. However, for the fibre bundle, the variation is larger due to the variations in round-trip coupling efficiency, which do not apply to the free-space case, as described in section 3.1.1 (Fig. 2(b)). The noise distribution is shown in Fig. 4(b). This shows that the mean noise is higher for the fibre bundle system than the free-space case. It was observed that this noise is broadband and encompasses the ultrasonic frequency range and thus cannot be filtered out. Its high-frequency nature suggests it is unlikely to originate from low-frequency vibrations. The most likely source is parasitic interference between two or more optical fields with a pathlength difference on a scale comparable to the laser coherence length. Under these conditions, phase noise arising from the finite laser linewidth is converted to intensity noise. There are several possible sources of such noise. It could arise from superposition of the light reflected from the front and distal ends of the fibre, which involves a significant pathlength difference (>1 m). However, the wedge on the proximal fibre bundle endface ensures that the magnitude of the front-end reflection that is coupled into the circulator and detected by the photodiode is negligible. Parasitic interference arising from these fibre-endface reflections is therefore unlikely to be significant. Although the wedge on the distal end of the bundle also suppresses the Fresnel reflection, the high NA of the fibre and the relatively small wedge angle (8.3°) may result in a non-negligible reflection. This reflection and that from the FP sensor could then form a parasitic interferometer with a multi-cm pathlength difference that introduces noise. Other possible noise sources are inter-modal interference and coupling between adjacent cores, although in both cases the scale of the pathlength differences involved is likely to be small compared to the laser coherence length. The SNR distributions in Fig. 4(c) reflect both signal and noise distributions. The fibre bundle SNR is lower, with a mean of 62 compared to 95 for the free-space case. Since the mean signal is similar for both cases (Fig. 4(a)), the reduced SNR is largely a consequence of the higher noise introduced by the bundle. Fig. 4. Histograms of (a) signal, (b) noise and (c) signal-to-noise ratio (SNR) for fibre bundle and free-space interrogated FP sensor configurations. In both cases the FP sensor was interrogated at 18,000 different points over a circular area of 10 mm diameter. The vertical axes represent the number of spatial points expressed as a percentage of the total number of points scanned (18,000).
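A per-point SNR estimate of the kind histogrammed in Fig. 4 can be computed as below; the array layout, pre-arrival noise window, and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def snr_map(waveforms, fs, noise_samples=200):
    # waveforms: (n_points, n_t) records, one row per interrogated point.
    # Signal = peak positive amplitude; noise = RMS of an assumed
    # pre-arrival window, both band-limited to 20 MHz as in the text.
    b, a = butter(4, 20e6 / (fs / 2), btype="low")
    filtered = filtfilt(b, a, waveforms, axis=1)
    signal = filtered.max(axis=1)
    noise = np.sqrt(np.mean(filtered[:, :noise_samples] ** 2, axis=1))
    return signal / noise

rng = np.random.default_rng(2)
records = 0.05 * rng.normal(size=(1000, 2048))   # synthetic noise floor
records[:, 1000:1040] += 1.0                     # synthetic plane-wave arrival
print(f"mean SNR: {snr_map(records, fs=250e6).mean():.1f}")
```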
Finally, given the propensity for inter-modal interference and cross coupling effects, both of which are sensitive to environmental influences, the fibre bundle was repeatedly twisted, bent and shaken while monitoring the acoustic signal in real time on an oscilloscope. No significant changes in the acoustic signal amplitude were observed.
Photoacoustic imaging performance
In this section, the ability of the system to acquire photoacoustic images is assessed by scanning tissue mimicking phantoms and vascularized ex vivo tissues.
Spatial resolution
The lateral and vertical spatial resolution was evaluated by imaging five rows of absorbing ribbons distributed over a depth of 7 mm, which provide a step edge in the lateral direction. Deionized water was used to provide acoustic coupling between the FP sensor and the ribbon phantom. An excitation wavelength of 1064 nm and a fluence of 20 mJ cm−2 were used. The interrogation beam was scanned in 2D over the proximal face of the bundle in order to map the spatial distribution of the PA waves incident on the FP sensor at the distal end; the 4.5× lens relay was used for this experiment. A 3D PA image was reconstructed from this data. Figure 5(a) shows an x-z slice from the reconstructed 3D PA image, where ribbon cross-sections are clearly visualized up to 7 mm in depth. The lateral resolution was determined by taking the FWHM width of a Gaussian fit to the derivative of the lateral profile (Fig. 5(b)). The lateral spatial resolution at a depth of 1 mm was 57 µm and gradually degraded to 150 µm at 7 mm depth, as shown in the contour plot (Fig. 5(d)), a consequence of the limited view provided by the scan area. The axial line spread function was determined by taking the FWHM width of a Gaussian fit to the axial profile (Fig. 5(c)). It was found to be 28 µm and largely spatially invariant over the x-z plane. The measured axial and lateral resolutions are broadly consistent with those previously obtained using free-space FP based scanners [9][10][11]. This suggests that image degradation due to crosstalk between adjacent cores, and any distortions in the mapping of the hexagonal pattern of the fibre cores on to the FP sensor plane by the relay optics, is not significant.
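The edge-based lateral resolution estimate can be sketched as follows: differentiate the step-edge profile to obtain the line-spread function, fit a Gaussian, and report FWHM = 2·√(2 ln 2)·σ. The synthetic 57 µm edge below stands in for the measured profile.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def gaussian(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma**2))

def fwhm_from_edge(x, edge_profile):
    # Line-spread function = derivative of the edge-spread function.
    lsf = np.gradient(edge_profile, x)
    p0 = (lsf.max(), x[np.argmax(lsf)], (x[-1] - x[0]) / 10)
    (_, _, sigma), _ = curve_fit(gaussian, x, lsf, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)

x = np.linspace(-0.5, 0.5, 400)                    # lateral position [mm]
sigma_true = 0.057 / (2 * np.sqrt(2 * np.log(2)))  # 57 um FWHM, synthetic
edge = 0.5 * (1 + erf(x / (sigma_true * np.sqrt(2))))
print(f"estimated lateral FWHM: {fwhm_from_edge(x, edge) * 1e3:.0f} um")
```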
Arbitrary shaped phantoms
The three-dimensional imaging capability of the system was demonstrated by imaging two arbitrarily shaped phantoms. The top panel in Fig. 6 shows the widefield microscope images of a synthetic hair knot and a leaf skeleton phantom coated in India ink. These phantoms were immersed in a deionized water bath about 1 mm away from the FP sensor surface. PA signals were acquired using a 1064 nm excitation wavelength and a fluence of 20 mJ cm−2. The reconstructed PA images of the phantoms, maximum intensity projected along the x-y and x-z planes, are shown in the bottom panel of Fig. 6. The structural features of the phantoms are accurately reproduced in the PA images. The intricate veins of the leaf skeleton phantom are also clearly visualized in the PA image. Fig. 6. Middle and lower panels: reconstructed PA images of the phantoms shown as maximum intensity projections along the x-y and x-z planes.
Ex vivo tissues
To demonstrate the high-resolution imaging capability in biological media, ex vivo tissues with vascular architectures of different spatial scales were scanned using the system with the 7.5× relay to provide a 10 mm circular FOV.
First, the chorioallantoic membranes (CAM) of fourteen-day-old duck eggs were imaged using a 590 nm excitation wavelength and 18 mJ cm−2 fluence. The procedure was not considered to require Home Office regulation (Animals (Scientific Procedures) Act 1986) as the duck embryos had not reached the last third of gestation (the normal gestation period is 28 days) and each embryo was killed before the start of the final third of the incubation period. Figure 7 shows the maximum intensity projected PA images of the duck CAM, colour-coded according to depth. The images show the dense blood vasculature of the CAM, which is the outermost extra-embryonic membrane and is highly vascularized for gaseous exchange and calcium transport between the embryo and its environment. The fine network of blood vessels in the CAM, some as small as 50 µm across, is clearly visualized in the PA images.
An ex vivo normal term human placenta with larger vessels was also imaged. The placenta was collected with written informed consent after a caesarean section delivery from a healthy term pregnant woman at University College London Hospital (UCLH). The Joint UCL/UCLH Committees on the Ethics of Human Research approved the study (14/LO/0863). At delivery, the umbilical cord of the placenta was clamped to preserve blood in the fetal chorionic microvasculature. After delivery the amniotic membrane was stripped from the chorionic surface. A water-based gel was used for acoustic coupling, and PA images were acquired using a 590 nm excitation wavelength and 18 mJ cm −2 fluence. The top panel in Fig. 8 shows 10 mm diameter widefield microscope images from the chorionic surface of the placenta, and the bottom panel shows the 3D PA images from the same area. Locations marked by letter v indicate areas where sub-surface chorionic vessels are visible in the PA images but are not visible in the widefield microscope images of the chorionic surface. The tissue surrounding the chorionic vessels has strong PA contrast, which originates from the dense fetal villous capillary architecture. The PA images also show patches in the surrounding tissue where there is a pronounced absence of PA contrast. These areas of anomalous negative contrast, marked with letter c, are likely to be the sites of calcium deposits [20] or areas of infarction in the chorionic plate arising from chronic vascular placental impairment; the latter are commonly seen in term placentas, even those with a normal outcome and are classified according to a "Grannum grade" which describes their depth and distribution through the placental tissue [21].
Discussion
This study has explored the use of a flexible coherent optical fibre bundle to interrogate an FP ultrasound sensor. When using the fibre bundle, it was found that variations in core-to-core round-trip coupling efficiency result in acoustic sensitivity variations over the sensor scan area that are significantly greater than observed when using a free-space interrogation beam (Fig. 4(a)). In principle, this can introduce spatial variations in image SNR, but in practice this was not observed to any significant extent. This is because, in widefield photoacoustic tomography mode, each reconstructed image pixel is the superposition of a significant proportion of the total number of photoacoustic signals detected over the entire FOV. This has a smoothing effect such that the spatial variation of the detection sensitivity is not mapped directly on to the reconstructed image but averaged out. Nevertheless, if required, there is scope for improvement, since these variations in sensitivity most likely arise from misalignments between the focused interrogation beam and the cores at the proximal end of the bundle or off-axis aberrations in the relay optics. They could be reduced by optimising the design of the scanner optics and control hardware to improve positioning accuracy and refining the relay lens design to reduce aberrations. The fibre bundle system also exhibits high optical attenuation. As demonstrated, this can be inexpensively compensated for by using an EDFA to increase the interrogation laser power. If required, however, use of an EDFA could be avoided by using a bundle designed for low-loss propagation at telecom wavelengths and depositing anti-reflection (A/R) coatings on the fibre endfaces and the relay lenses to reduce Fresnel reflection losses.
The fibre bundle also introduces noise that is not present with the free-space FP scanner. As described in section 3.1.2, it introduces baseline fluctuations on the ITF. However, this appears not to be a limiting factor, since it does not compromise accurate identification of the optimum bias wavelength. More problematic, however, is the increased broadband noise that the fibre bundle introduces, since it encompasses the ultrasonic frequency range and thus reduces acoustic SNR; as Fig. 4(c) shows, the mean SNR is approximately a factor of 1.5 lower than obtained with a free-space FP scanner. Further investigation is required to establish the precise origins of this increased noise. If it is primarily due to parasitic interference between the reflections from the fibre-bundle endface and the FP sensor, then increasing the angle of the distal-end wedge and depositing an anti-reflection coating on to the tip of the bundle would reduce it. Noise due to inter-modal interference could be minimised by reducing the number of propagating modes (preferably to one) by designing the input coupling optics to provide a lower NA and increasing the scanner positional accuracy so that the focused interrogation beam can be aligned more precisely with the axes of the cores. If the noise is due to core-to-core cross coupling, then increasing the cladding thickness, albeit at the cost of a non-trivial redesign of the bundle, could reduce it. Indeed, most of the above limitations are a consequence of using a commercially available fibre bundle designed for use at visible wavelengths. They could be mitigated by using a fibre bundle designed to provide low-loss single-mode operation at 1550 nm.
The photoacoustic imaging performance was also evaluated. Image resolution and spatial fidelity could be compromised by spatial distortion introduced by aberrations in the distal-end lens relay system or cross-coupling between adjacent cores. However, no evidence of this was observed in the phantom studies undertaken. The lateral spatial resolution was 50 µm at a depth of 1 mm from the sensor surface and gradually degraded to 150 µm at 7 mm depth. The axial resolution was 28 µm, largely invariant across the entire field of view. Both estimates are consistent with those obtained previously with free-space FP scanners [9]. This is further evidenced by the high quality of the PA images of the vasculature in the ex vivo duck embryo and human placenta, where 3D vascular structures are clearly differentiated from the surrounding tissue.
Conclusions
In summary, this study has shown that, when using a standard commercially available flexible fibre bundle to spatially map the output of the FP sensor, it is possible to acquire high-quality PA images, comparable in terms of resolution and spatial fidelity to those obtained with a free-space FP scanner. The acoustic SNR when using a fibre bundle is lower due to higher noise, most likely arising from the distal fibre-end reflection and the non-ideal propagation characteristics of the bundle. Nevertheless, even with these non-optimal characteristics, the SNR was sufficient to acquire photoacoustic images of tissue with a penetration depth adequate for superficial non-invasive and endoscopic imaging applications. Moreover, depending on the noise source, there is scope for improvement by modification of the distal bundle endface, improved design of the scanner optics, and use of a bundle designed for single-mode operation in the 1500 nm to 1600 nm wavelength range. Successful use of a flexible fibre bundle to interrogate the FP sensor would considerably broaden its applicability to biomedical photoacoustic imaging and ultrasound field mapping. Most obviously, it would pave the way to miniaturized, flexible photoacoustic imaging probes for endoscopic use, hand-held non-invasive clinical scanners for imaging the skin, or applications where an electrically passive, non-ferrous imaging head is required, for example for photoacoustic or ultrasound imaging in an MR scanner.
Appendix A: minimizing the detection of Fresnel reflections from bundle endfaces
To suppress the Fresnel reflections, both bundle endfaces were polished at an angle θ > sin−1(NA), where NA is the numerical aperture of the objective lens that focuses the interrogation beam. However, this poses the challenge that only a small part of the 1.35 mm diameter bundle endface remains in the focal plane of the objective lens [22]. This was resolved by applying an optically clear epoxy to the proximal endface and polishing it at an angle α = θ(n − 1)/n (where n is the refractive index of the epoxy) to form a counter wedge, which shifts the focal plane to the angled endface. In addition, at the proximal end, the bundle needs to be tilted by an angle δ so that the axis of the incident focused beam is aligned with the axis of the fibre cores to achieve maximum coupling efficiency. The relation between the angle of the wedge (α) and the angles at which the bundle endface was polished (θ) and tilted (δ) is derived using the ray diagram in Fig. 9. According to the law of refraction, sin α = n sin β, which is equivalent to AD/AB = n·AD/AC. Assuming the paraxial approximation, AC ≈ AE ≈ AB + BE, which gives the relation BE ≈ AB(n − 1), where BE is the apparent shift of the focal plane due to the epoxy wedge from OO' (which represents the location of the focal plane in the absence of the wedge) to ON. This angular shift is given by tan γ = BE/BO ≈ (AB/BO)(n − 1) ≈ tan α · (n − 1), or γ ≈ α(n − 1). The bundle endface needs to be positioned at an angle α(n − 1) to remain in focus, and additionally tilted by an angle δ for maximum coupling efficiency, as described above. These conditions lead to the angle θ at which the bundle endface needs to be polished: θ ≈ α(n − 1) + δ. The proximal end of the fibre bundle was scanned with a 0.12 NA objective lens, so the bundle endface and the epoxy (n = 1.51) surface were polished at angles of 7° and 8.3°, respectively, and the endface was tilted by 2.8° from the objective lens axis. The histogram plot in Fig. 9 compares the RMS noise voltage recorded at the output of the photodiode-amplifier unit when the FP sensor was interrogated using the fibre bundle before (blue) and after (orange) the endfaces were angle-polished and wedged. On average, the baseline noise was reduced by a factor of 3.2 when the Fresnel reflections were suppressed, thereby significantly improving the acoustic sensitivity.
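The angle relations above can be sanity-checked numerically. The following minimal Python sketch (not from the paper; the relation θ ≈ α(n − 1) + δ is reconstructed from the derivation) reproduces the quoted values:

```python
# Numerical check of the reconstructed wedge-angle relations, using the
# values quoted in the text: n = 1.51, alpha = 8.3 deg, delta = 2.8 deg.
n = 1.51        # refractive index of the epoxy
alpha = 8.3     # polish angle of the epoxy counter wedge (degrees)
delta = 2.8     # tilt of the bundle from the objective lens axis (degrees)

# Paraxial shift of the focal plane caused by the epoxy wedge: gamma ~= alpha*(n - 1)
gamma = alpha * (n - 1)
print(f"focal-plane shift gamma ~= {gamma:.2f} deg")      # ~4.2 deg

# Reconstructed polish angle of the bundle endface: theta ~= alpha*(n - 1) + delta
theta = gamma + delta
print(f"endface polish angle theta ~= {theta:.2f} deg")   # ~7.0 deg, as quoted
```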
"Physics",
"Medicine"
] |
Predicting Objective Response Rate (ORR) in Immune Checkpoint Inhibitor (ICI) Therapies with Machine Learning (ML) by Combining Clinical and Patient-Reported Data
ICIs are a standard of care in several malignancies; however, according to overall response rate (ORR), only a subset of eligible patients benefits from ICIs. Thus, an ability to predict ORR could enable more rational use. In this study, an ML-based ORR prediction model was built, with patient-reported symptom data and other clinical data as inputs, using the extreme gradient boosting technique (XGBoost). Prediction performance for unseen samples was evaluated using leave-one-out cross-validation (LOOCV), and performance was evaluated with accuracy, AUC (area under curve), F1 score, and MCC (Matthews correlation coefficient). The ORR prediction model had a promising LOOCV performance on all four metrics: accuracy (75%), AUC (0.71), F1 score (0.58), and MCC (0.4). A rather good sensitivity (0.58) and high specificity (0.82) of the model were seen in the confusion matrix for all 63 LOOCV ORR predictions. The two most important symptoms for predicting the ORR were itching and fatigue. The results show that it is possible to predict ORR for patients with multiple advanced cancers undergoing ICI therapies with an ML model combining clinical, routine laboratory, and patient-reported data, even with a limited-size cohort.
Introduction
Immune checkpoint inhibitors (ICIs) are standard-of-care treatments in several malignancies, both in adjuvant and advanced settings [1][2][3][4][5][6][7][8][9][10][11][12]. However, treatment response assessment of the ICIs differs from traditional cancer therapies, with unique tumor response patterns such as pseudo- and hyperprogression [13]. Furthermore, the temporal association of radiological response to treatment may sometimes be obscure. Since only a subset of patients responds to ICIs, novel tools to assess the treatment response are needed when aiming to improve patient care and the clinical value of ICIs.
Artificial intelligence (AI)-based analytics have gained growing interest in the field of cancer care. Machine learning models have been shown to predict responses to a variety of standard-of-care chemotherapy regimens from gene expression profiles of individual patients with high accuracy [14,15]. Furthermore, deep learning systems have shown promising results, especially in cancer diagnostics [16]. AI-based methods can be used to analyze vast data pools to create predictive and prognostic analytics for generating value-based healthcare assets. In addition, recent data show that machine learning (ML) algorithms could identify patients with cancer who are at risk of short-term mortality [17]. Tumor immunology is a very complex entity, and it is clear that none of the single factors known so far can predict benefit for ICI therapy with high accuracy. Therefore, it is likely that using multiple inputs would result in prediction models with higher sensitivity and specificity.
A comprehensive and timely assessment of patients' symptoms is feasible via electronic (e) patient-reported outcomes (PROs) [18][19][20]. ePROs have been shown to improve quality of life (QoL) and survival and decrease emergency clinic visits in cancer patients receiving chemotherapy and in lung cancer follow-up [21,22]. Numerous studies have linked ICI treatment benefit to the presence of physician-assessed immune-related adverse events (irAEs), but the prognostic role of ePROs is an uninvestigated area [23][24][25]. Since ML-based methodology can incorporate numerous variable data sources to generate prediction models [26], the association of irAEs with ICI treatment benefit, together with the complexity of tumor immunology, generates an interesting landscape for investigating ML-based models.
We have previously shown that the real-world symptom data collected with the Kaiku Health ePRO tool from cancer patients receiving ICI therapy align with the data from clinical trials and that correlations between different symptoms occur, which might reflect therapeutic efficiency, side effects, or tumor progression [27,28]. We first explored the possibilities of ML-based prediction models on ePROs by creating prediction models of symptom continuity for cancer patients receiving ICIs, and showed that this is feasible [29]. Based on our previous work on ML modeling and the ePRO symptom correlations, we speculated that if symptoms can predict irAEs, symptoms could work as a surrogate for irAEs. That hypothesis was confirmed in our latest research, showing that ML-based prediction models using ePRO and electronic health record (EHR) data as input can predict the presence and onset of irAEs with high accuracy [30].
The aim of this study was to investigate whether it is possible to predict objective response rate (ORR) in patients undergoing ICIs for advanced cancers. Thus, pseudonymized and aggregated ePRO symptom data collected with the Kaiku Health ePRO tool, laboratory values, and demographics, in addition to prospectively collected clinician-assessed treatment responses and irAE data, were used to train and tune a prediction model built using the open-source Python library XGBoost (extreme gradient boosting algorithm) to assess clinical response to ICI treatment.
Materials and Methods
The study subjects (n = 31) consisted of patients recruited to the prospective KISS trial investigating ePRO follow-up of cancer patients receiving ICIs at Oulu University Hospital. In brief, the trial included patients with advanced cancers (non-small cell lung cancer, melanoma, genito-urinary cancers, and head and neck cancers) treated with anti-PD-(L)1s in outpatient settings with the availability of internet access and email. At the initiation of the treatment phase (within 0-2 weeks from the first anti-PD-(L)1 infusion), the patients received an email notification to complete the baseline electronic symptom questionnaire of 18 symptoms and did so weekly thereafter until treatment discontinuation or six months of follow-up. The symptoms tracked by the Kaiku Health ePRO tool are potential signs and symptoms of immune-related adverse events, and symptom selection is based on the reported publications of the following clinical trials: CheckMate 017 (NCT01642004), CheckMate 026 (NCT02041533), CheckMate 057 (NCT01673867), CheckMate 066 (NCT01721772), CheckMate 067 (NCT01844505), KEYNOTE-010 (NCT01905657), and OAK (NCT02008227).
Besides recording the presence of a symptom, a severity algorithm of the ePRO tool grades the symptom according to the Common Terminology Criteria for Adverse Events (CTCAE) protocol, from 0 to 4, with no (0), mild (1), moderate (2), severe (3), and life-threatening (4) categories. In addition to ePRO-collected symptoms, data on demographics, treatment responses according to the Response Evaluation Criteria in Solid Tumors (RECIST 1.1), irAEs (nature of AE, date of onset and resolution, dates of change in AE severity, and the highest grade based on CTCAE classification), and laboratory values were prospectively collected prior to and during the treatment period.
The KISS trial was approved by the Northern Ostrobothnia Health District ethics committee (number 9/2017), Valvira (number 361), and details of the study are publicly available at clinicaltrials.gov (NCT03928938). The study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines.
The ML-based prediction model was built using the extreme gradient boosting algorithm, implemented using the open-source Python library XGBoost, which is widely used for classification problems. Gradient boosting is an ensemble learning algorithm: it combines many decision trees (usually tens or hundreds), each of which is a weak learner but which, when combined using the gradient boosting approach, form a strong learner capable of capturing complex relationships in the training data.
The aim of this study was to create an ML-based model for predicting the presence of complete response (CR) or partial response (PR) based on evolving digitally collected patient-reported symptoms, the presence of physician-confirmed irAEs, and laboratory values collected in a prospective manner from cancer patients receiving ICI therapies in the KISS trial [26]. The included data consisted of symptom data graded by the algorithm of the ePRO tool according to CTCAE; laboratory data (bilirubin, hemoglobin, ALP, ALT, platelets, leukocytes, creatinine, thyrotropin, and neutrophils) fetched automatically via an application programming interface (API) from the baseline (prior to the first drug infusion) throughout the treatment phase; demographics (age and sex); treatment responses; the presence of irAEs at response assessment (yes/no); and the time (weeks) from therapy initiation.
ORR was defined as the proportion of patients in whom PR or CR responses were seen as the best overall response (BOR) according to RECIST 1.1. Stable disease (SD) was categorized as a non-response together with progressive disease (PD). Closest preceding laboratory values and reported symptoms, both as changes from the baseline, were linked to the treatment responses; thus, the timelines of ePROs, irAEs, and BORs were synchronized according to dates. In addition, the model accounted for whether the patient had had a diagnosed irAE prior to/at the time of response evaluation.
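As an illustration of this date-based linking, the sketch below uses pandas' merge_asof to attach the closest preceding observation to each response assessment; the DataFrame structure and column names are illustrative assumptions, not the study's actual code.

```python
# Hedged sketch of the timeline synchronization described above. Assumes
# two DataFrames with hypothetical columns 'patient_id' and a datetime
# column 'date'; 'responses' holds BOR assessments, 'observations' holds
# symptom/lab values expressed as changes from baseline.
import pandas as pd

def link_closest_preceding(responses: pd.DataFrame,
                           observations: pd.DataFrame) -> pd.DataFrame:
    # merge_asof requires both frames to be sorted by the merge key
    responses = responses.sort_values("date")
    observations = observations.sort_values("date")
    return pd.merge_asof(
        responses,
        observations,
        on="date",
        by="patient_id",          # match within the same patient
        direction="backward",     # take the closest *earlier* observation
    )
```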
Treatment responses according to RECIST 1.1 were divided into binary categories. The output of the prediction model is a continuous value in the interval [0, 1] depicting the probability of the positive event, i.e., objective response (CR or PR) versus no objective response (SD + PD) (Figure 1). With a classification threshold of 0.5, the continuous probabilities were converted into binary outcomes: when the predicted probability of the positive event is greater than 0.5, the prediction is labeled positive (CR or PR as treatment response), and when less than 0.5, negative (SD or PD as treatment response). Thus, the modeling methodology used in this study follows a general framework of binary classification in ML.
Prediction performance of the model for unseen samples was evaluated using leave-one-out cross-validation (LOOCV), which trained and tested 63 models, each time iteratively leaving one sample (related to one of the clinician-assessed treatment responses) out as a test set. Multiple response assessments across the same patients were used to create a timeline of best overall responses (BORs); however, at every time point analyzed, the parameters differ, comprising a new sample. Furthermore, the gradient-boosted-trees algorithm used (XGBoost) can handle intercorrelated observations or features; thus, correlated input parameters do not cause problems for the modeling.
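A minimal sketch of this LOOCV loop is given below, assuming a feature matrix X (one row per response assessment) and binary labels y (1 = CR/PR); the hyperparameters, variable names, and sample-weighting scheme are illustrative assumptions, not the study's published configuration.

```python
# Sketch of LOOCV evaluation with XGBoost, under the assumptions stated above.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from xgboost import XGBClassifier

def loocv_probabilities(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    probs = np.empty(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = XGBClassifier(n_estimators=100, eval_metric="logloss")
        # Up-weight the rarer positive (CR/PR) samples, as the study describes.
        y_tr = y[train_idx]
        pos_weight = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
        weights = np.where(y_tr == 1, pos_weight, 1.0)
        model.fit(X[train_idx], y_tr, sample_weight=weights)
        probs[test_idx] = model.predict_proba(X[test_idx])[:, 1]
    return probs

# With the 0.5 threshold from the text, probabilities become binary labels:
# y_pred = (probs > 0.5).astype(int)
```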
The prediction performance of the model was evaluated with accuracy, AUC (area under curve), F1 score, and MCC (Matthews correlation coefficient). Accuracy describes how many predictions were correct as a percentage, and 100% indicates a perfect classification. AUC is a commonly used performance metric for binary classification ranging from 0 to 1, where 0.5 is random guessing and 1 is perfect classification. F1 score is the weighted average of precision (i.e., how many of the cases predicted as positive are positive) and recall (how many of the positive cases are detected), which attains values between 0 and 1, with 1 indicating perfect precision and recall. MCC summarizes all possible cases for binary predictions: true and false positives, and true and false negatives. MCC can be considered a correlation coefficient between the observed and the predicted classifications, and it attains values between −1 and 1, where 1 is perfect classification, 0 is random guessing, and −1 indicates completely contradictory classification.
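All four metrics have standard implementations; a small sketch (assuming true labels y_true and the LOOCV outputs probs and y_pred from the sketch above) might look as follows:

```python
# Computing the four reported metrics with scikit-learn.
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             f1_score, matthews_corrcoef)

def report(y_true, probs, y_pred):
    print(f"accuracy: {accuracy_score(y_true, y_pred):.2f}")
    print(f"AUC:      {roc_auc_score(y_true, probs):.2f}")  # uses probabilities
    print(f"F1 score: {f1_score(y_true, y_pred):.2f}")
    print(f"MCC:      {matthews_corrcoef(y_true, y_pred):.2f}")
```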
ML Prediction Model
The initial ePRO dataset included 992 completed symptom questionnaires from the 31 ICI-treated cancer patients in outpatient settings, comprising 18 monitored symptoms collected weekly using the Kaiku Health ePRO tool (Table 1). The irAE data included physician-confirmed, prospectively collected irAE (n = 26) data in the eCRFs of the KISS trial from those 31 patients, containing initiation and end dates, CTCAE class and severity, and nature (colitis, diarrhea, arthritis, rash, hyperglycemia, neutropenia, pneumonitis, itching, cholangitis, mucositis, hypothyroidism, and hepatitis). Prospectively assessed treatment responses (n = 63) by the study physicians were also retrieved from the eCRF. The patients with partial (PR) or complete (CR) responses (n = 19) were characterized as responders, while stable (SD) and progressive disease (PD) (n = 44) were categorized as a non-response. The complete modeling framework for ORR prediction is illustrated in Figure 1. We also tested several other commonly used ML models, such as logistic regression, elastic-net regression, support vector machines, LightGBM, and random forests, but XGBoost had the best performance in the LOOCV evaluation and was thus chosen as the model for the study.
Performance Metrics for ORR Prediction
The model trained to predict ORR had a promising LOOCV performance on all four metrics: accuracy, AUC, F1 score, and MCC. The accuracy of predicting ORR was 75%. The AUC value (0.71) suggests a decent level of model performance. The F1 score (0.58) indicates that the model was feasible in predicting the treatment response, which was supported by the MCC value (0.40). The confusion matrix for all 63 LOOCV ORR predictions shows a rather good sensitivity (0.58) and a high specificity (0.82) of the model (Figure 2). The false negatives (8/63 samples) were the cases where the prediction model did not predict an objective treatment response for a test sample that was actually positive, i.e., CR or PR was present. The false positives (also 8/63 samples), on the other hand, were the cases where the model predicted the presence of CR or PR for a sample that was actually negative, i.e., the response was SD or PD. Figure 3 illustrates the feature importance from a model trained with all available samples (n = 63). The displayed importances depict the relative average improvement in prediction accuracy across all of the 100 decision trees in the model in which a certain feature is utilized. The importance of each feature should be considered relative to the others. As presented in Figure 3, the two most important features for predicting the ORR were itching and fatigue. Figure 3 also reveals that roughly half of the features contributed to the predictions. The features that do not contribute to the predictions could be removed from the prediction model using feature selection, but doing so would not impact the model performance, owing to the tree structure of the gradient boosting algorithm used.
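Gain-based importances of this kind can be extracted directly from a fitted XGBoost model; the helper below is an illustrative sketch (model and feature_names are assumed inputs, not objects from the study):

```python
# Extracting gain-based feature importances, matching the description above
# (relative average accuracy improvement across trees that use a feature).
import pandas as pd

def ranked_importances(model, feature_names):
    booster = model.get_booster()
    # When trained on plain numpy arrays, keys look like 'f0', 'f1', ...
    gains = booster.get_score(importance_type="gain")
    rows = [(feature_names[int(k[1:])], v) for k, v in gains.items()]
    return (pd.DataFrame(rows, columns=["feature", "gain"])
              .sort_values("gain", ascending=False))

# Features absent from the returned dict were never used in a split, i.e.,
# they do not contribute to the predictions.
```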
Discussion
Digitalization is a global megatrend affecting the basic structures of human interaction. Digital transformation of healthcare aims to deliver the positive impact of technology in many forms, e.g., telemedicine, AI-aided medical devices, and vast data pools to create predictive analytics. Evolving data show that electronic health record-based predictive algorithms may improve clinicians' prognostication and decision-making [31]. A recent study on the use of radiomics and machine learning revealed that an algorithm utilizing individual CT scans of advanced melanoma patients receiving single anti-PD-1 therapy outperformed traditional RECIST 1.1 criteria in predicting treatment response [32]. However, the utilization of ePROs in creating ML algorithms is a novel approach.
In this study, we investigated ML modeling that combines prospectively collected data on ePROs, demographics, laboratory values, irAEs, and treatment responses. The aim of the study was to investigate whether these data inputs could be used to predict treatment benefit from ICI therapies in metastatic cancers. The results showed that it is possible to predict ORR with a high specificity, even using data from a patient cohort of multiple cancer types. The study highlights the possibilities of using pooled data from various sources for ML models and the potential of these models to improve the clinical value of cancer treatments.
Parallel to traditional follow-up of cancer patients, ePROs enable capturing symptoms in a timely and comprehensive manner and integrate the patients' perspective into the cancer care continuum [17,33]. Previous studies have provided evidence that ePRO follow-up can improve QoL, reduce emergency clinic visits, and, more importantly, improve survival in chemotherapy-treated patients with advanced cancers and in lung cancer [20,21]. We have previously shown that ePRO follow-up is also feasible for cancer patients receiving ICIs and that ePRO-collected symptom profiles mimic the AE results of ICI registration studies [26,27]. In addition, our earlier studies have highlighted the possibilities of ML models with ePRO data as input in facilitating irAE detection, which could improve irAE management [29]. Furthermore, since irAEs are often linked to improved outcomes in patients treated with ICIs [23][24][25][26], we speculated that ePRO-collected symptom data could be used to predict treatment benefit. Compared to symptoms collected by healthcare professionals, ePROs might provide additional value to the symptom assessment, especially for low-grade symptoms without external presentation, such as itching. As far as we know, the present study is the first to combine ePRO-collected symptom data with ML modeling to predict treatment response to ICIs.
Even though this has been intensively studied for years, there are no known universal predictive factors for ICI benefit in cancer treatment suitable for clinical practice for multiple cancer types [34]. Our results indicate that multiple data points and sources collected over time can be used to generate an adaptive ML model able to predict treatment outcomes. Due to the complexity of cancer immunology, we speculate that no single universal marker for ICI benefit will be discovered, and more effort should be used in analyzing multidimensional data, combining not only tumor features but also clinical data, such as ePRO symptoms and routine laboratory values. Furthermore, our study suggests that ePRO data could be used as a non-invasive indicator for immune activation and, therefore, surrogate for ICI treatment benefit.
There are several limitations when interpreting our results. Our patient cohort is limited in size, which could decrease the generalizability of the results. Our model had high specificity for treatment benefit but only a moderate level of sensitivity, which might relate to the small cohort size or be a general feature of these models. Nevertheless, the modeling methods and approaches used were chosen to overcome the issues related to imbalanced datasets and intercorrelated parameters, in order to minimize such bias. These methods and approaches included, e.g., the utilization of sample weights (giving more emphasis to the rare positive samples in model training), the utilization of F1 score and MCC as performance metrics, and the use of a regularized tree-based model, XGBoost. In addition, our cohort consisted of ICI-monotherapy-treated patients, and the results might not be applicable to patients treated with ICI combination therapies. Thus, our model inevitably requires validation in another, preferably larger, cohort. As far as we know, however, these types of datasets are currently unavailable.
In our opinion, ML models should be incorporated into the digital symptom follow-up of cancer patients for optimal remote monitoring. The tool should include an interactive ePRO approach connecting the patient and care unit in a timely fashion and, preferably, also automatically integrate other clinical data such as laboratory values. When these datasets are available in a single platform, adaptive ML models such as the one built in the present study can be used to bring additional important information, such as irAE and treatment benefit probabilities, into clinical decision-making. This digital tool could personalize cancer care and bring additional clinical value to ICI treatments, especially considering their high costs and undefined predictive factors.
Conclusions
In healthcare, knowledge representation as part of the clinical decision support system is currently the most used AI approach. There are high hopes that AI could improve healthcare with early diagnostics and improved care in a more cost-effective manner compared to current measures. Yet the digital revolution in healthcare provides new ways to both collect clinically relevant data from each patient and connect it to large data pools of existing patient-level data for analysis with AI-based algorithms, aiming to personalize treatment schemas and follow-up based on individual risk assessment.
In conclusion, our study highlights the possibility of generating ML models for predicting ICI treatment benefit. We used multiple inputs for the model, including ePRO symptom data, which could serve as a non-invasive surrogate for immune activation. The main results suggest that these models perform with a high specificity. Even though validation of the results in larger cohorts is required, the promising results favor digital approaches in ICI patient follow-up.
"Medicine",
"Computer Science"
] |
Genotoxic Effects of Exposure to Formaldehyde in Two Different Occupational Settings
Formaldehyde was synthesized by Aleksandr Butlerov in 1859, but it was August Wilhelm von Hofmann who identified it as the product formed from passing methanol and air over a heated platinum spiral in 1867. This method is still the basis for the industrial production of formaldehyde today, in which methanol is oxidized using a metal catalyst. By the early 20th century, with the explosion of knowledge in chemistry and physics, coupled with demands for more innovative synthetic products, the scene was set for the birth of a new material: plastics (Zhang et al., 2009).
Primary formaldehyde is emitted from motor vehicles and fugitive industrial emissions, while secondary formaldehyde is produced by the photochemical oxidation of volatile organic compounds (VOCs) as the result of intense sunlight, especially during summer months (Odabasi & Seyfioglu, 2005). In addition, it has been postulated that formaldehyde can be produced by reactions involving anthropogenic and naturally occurring alkenes (Chen et al., 2002).
Removal of formaldehyde from the atmosphere can occur by chemical transformations, rain and snow scavenging of vapours and particles, by dry deposition of particles, and by vapour exchange across the air-water interface. Particle/gas phase distribution of formaldehyde is an important factor in determining its atmospheric fate, transport, and transformation (Odabasi & Seyfioglu, 2005).
Considering its presence in indoor air, homes containing large amounts of pressed wood products such as hard plywood wall paneling, particleboard, fiberboard, and urea-formaldehyde foam insulation (UFFI) often have elevated levels of formaldehyde emissions, exceeding 0.3 ppm (U.S. Environmental Protection Agency [USEPA], 2007). Since 1985, the Department of Housing and Urban Development has only allowed the use of plywood and particleboard conforming to the 0.4 ppm formaldehyde emission limit in the construction of prefabricated and mobile homes (USEPA, 2007). Formaldehyde emission levels generally decrease as products age. In older homes without UFFI, concentrations of formaldehyde emissions are generally far below 0.1 ppm (USEPA, 2007). This value is close to the indoor limit, 0.1 mg/m3 (0.08 ppm), recommended by the World Health Organization (World Health Organization - Regional Office of Europe [WHO-ROE], 2006), a limit followed by many other countries including the UK (Committee on the Medical Effects of Air Pollutants [COMEAP], 2004) and China (Standardization Administration of China [SAC], 2002).
Moreover, some studies have reported that seasonal variations resulted in higher indoor formaldehyde concentrations during the summer due to increased off-gassing promoted by the higher temperatures (Kinney et al., 2002;Ohura et al., 2006;Yao & Wang, 2005). It seems that, besides the type of materials used and the age of the home, the season (warmer temperatures) also influences formaldehyde concentrations in indoor settings (Viegas & Prista, 2010;Zhang et al., 2009).
Small amounts of formaldehyde are naturally produced in most organisms, including humans, as a metabolic byproduct (IARC, 2006;NTP, 2005), and are physiologically present in all body fluids, cells and tissues. The endogenous concentration in the blood of humans, monkeys and rats is approximately 2-3 mg/L (0.1 mM) (Casanova et al., 1988;Heck et al., 1985). Formaldehyde is also found in foods, either naturally or as a result of contamination (IARC, 2006). Therefore, everyone is continually exposed to small amounts of formaldehyde, environmentally present in the air, our homes and endogenously in our own bodies (Zhang et al., 2009).
Taking into account occupational settings, exposure involves not only workers in the direct production of formaldehyde and products containing it, but also those in industries utilizing these products, such as those related to construction and household goods (Zhang et al., 2009). The most extensive use of formaldehyde is in the production of resins with urea, phenol and melamine, and also polyacetal resins. These products are used as adhesives in the manufacture of particle-board, plywood, furniture and other wood products (IARC, 2006). Formaldehyde is also used in the composition of cosmetics and has an important application as a disinfectant and preservative, which is why relevant workplace exposure may also occur in pathology and anatomy laboratories and in mortuaries (Goyer et al., 2004;IARC, 2006;Zhang et al., 2009).
The exposed workers, commonly found in resin production, textiles or other industrial settings, inhale formaldehyde as a gas or absorb the liquid through their skin. Other exposed workers include health-care professionals, medical-lab specialists, morticians and embalmers, all of whom routinely handle bodies or biological specimens preserved with formaldehyde (IARC, 2006;Vincent & Jandel, 2006;Zhang et al., 2009).
Concerning exposure limits in occupational settings, the Occupational Safety and Health Administration (OSHA) has established the following standards, unchanged since 1992: the permissible exposure limit (PEL) is 0.75 ppm (parts per million) in air as an 8-h time-weighted average (TWA 8h), and the short-term (15 min) exposure limit (STEL) is 2 ppm. The American Conference of Governmental Industrial Hygienists (ACGIH) recommended threshold limit value (TLV) is 0.3 ppm as a ceiling value. The National Institute of Occupational Safety and Health (NIOSH) recommends much lower exposure limits of 0.016 ppm (TWA 8h) and 0.1 ppm (STEL), above which individuals are advised to use respirators if working under such conditions. In Portugal, the Portuguese Norm (NP 1796-2007) also indicates 0.3 ppm as a ceiling value.
The primary metabolite of formaldehyde is formate, which is not as reactive as formaldehyde itself and can either enter into the one-carbon metabolic pool for incorporation into other cellular components, be excreted as a salt in the urine, or be further metabolized to carbon dioxide (Agency for Toxic Substances and Disease Registry [ATSDR], 1999). The metabolic pathway to formate production is catalyzed by cytosolic glutathione (GSH)-dependent formaldehyde dehydrogenase (FDH). The reaction of formaldehyde with GSH yields (S)-hydroxymethylglutathione which, in the presence of NAD+ and FDH, forms the thiol ester of formic acid via the action of (S)-formyl glutathione hydrolase (SFGH) (Pyatt et al., 2008).
There is scientific evidence conclusively demonstrating that inhaled formaldehyde does not enter the systemic circulation to modify normally present endogenous levels (ATSDR, 1999;Heck & Casanova, 2004). This is likely due to the high water solubility of formaldehyde and its rapid metabolism. The lack of systemic distribution is evidenced by a variety of studies in rodents, monkeys and humans (Pyatt et al., 2008).
It seems clear that as long as inhaled levels of formaldehyde are below concentrations that can be rapidly metabolized by tissue formaldehyde dehydrogenase and other highly efficient detoxification enzymes, normal endogenous concentrations (0.1 mM) of formaldehyde in the blood do not increase (ATSDR, 1999;Heck & Casanova, 2004).
Human studies have shown that chronic exposure to formaldehyde by inhalation is associated with eye, nose and throat irritation (Arts et al., 2008). Sensory irritation leads to reflex responses such as sneezing, lacrimation, rhinorrhea, coughing, vasodilatation and changes in the rate and depth of respiration. The latter results in a decrease in the total amount of inhaled material, providing a protective effect for the individual. Stimulation of the trigeminal nerve is not necessarily an indication of cell or tissue damage. At higher concentrations formaldehyde will lead to cytotoxic reactions; this cytotoxic respiratory tract irritation is a localized pathophysiological response to a chemical, involving local redness, swelling, or itching (Arts et al., 2006).
Formaldehyde was long considered a probable human carcinogen (Group 2A chemical) based on experimental animal studies and limited evidence of human carcinogenicity. More recently, several studies have reported a carcinogenic effect in humans after chronic exposure to formaldehyde, in particular an increased risk of nasopharyngeal cancer (Armstrong et al., 2000;Coggon et al., 2003;Hildesheim et al., 2001;Lubin et al., 2004;Vaughan et al., 2000). Since 2006, IARC has classified formaldehyde as carcinogenic to humans (Group 1), based on sufficient evidence in humans and in experimental animals (IARC, 2006). IARC also concluded that there is "strong but not sufficient evidence for a causal association between leukemia and occupational exposure to formaldehyde".
The "strong" evidence for a causal relationship between formaldehyde exposure and leukaemia comes from recent updates of two of the three major industrial cohort studies of formaldehyde-exposed workers (Hauptmann et al., 2003;Pinkerton et al., 2004). These data have strengthened a potential causal association between leukemia and occupational exposure to formaldehyde, especially for myeloid leukemia (Zhang et al., 2009).
Nevertheless, some authors have argued that it is biologically implausible for formaldehyde to cause leukaemia (Cole & Axten, 2004;Marsh & Youk, 2004). Their primary arguments against the human leukemogenicity of formaldehyde are: (1) it is unlikely to reach the bone marrow and cause toxicity due to its highly reactive nature; (2) there is no evidence that it can damage the stem and progenitor cells, the target cells for leukemogenesis; and (3) there is no credible experimental animal model for formaldehyde-induced leukaemia. This led Pyatt et al. (2008) to comment recently that "the notion that formaldehyde can cause any lymphohematopoietic malignancy is not supported with either epidemiologic data or current understanding of differing etiologies and risk factors for the various hematopoietic and lymphoproliferative malignancies". Indeed, IARC itself concluded that "based on the data available at this time, it was not possible to identify a mechanism for the induction of myeloid leukaemia in humans" and stated that "this is an area needing more research" (IARC, 2006;Cogliano et al., 2005;Zhang et al., 2009).
However, IARC recently reaffirmed the classification of formaldehyde in Group 1, based on sufficient evidence in humans of nasopharyngeal cancer. Considering the possible association with leukemia, the epidemiological evidence has become stronger, and IARC has concluded that there is sufficient evidence for leukaemia, particularly myeloid leukaemia (Baan et al., 2009;Hauptmann et al., 2009;IARC, 2006). Moreover, in 2010 Schwilk and colleagues performed an updated meta-analysis focusing on higher-exposure groups and myeloid leukemia, which included two large recent studies, and concluded that formaldehyde exposure is associated with increased risks of leukemia, particularly myeloid leukemia; they highlighted the importance of focusing on high-exposure groups and myeloid leukemia when evaluating the human carcinogenicity of formaldehyde (Schwilk et al., 2010).
In the case of formaldehyde exposure assessment, and considering that health effects seem to be related mainly to high concentration peaks rather than to long-term exposure at low levels, the strategy for exposure assessment in occupational settings must be based on the determination of ceiling concentrations. This option may be the best way to evaluate exposures and to obtain data for risk assessment development (Hauptmann et al., 2009;IARC, 2006).
Numerous in vitro studies have clearly indicated that formaldehyde can induce genotoxic effects in proliferating cultured mammalian cells (IARC, 2006). Furthermore, some in vivo studies have detected changes in epithelial cells (oral and nasal) and in peripheral lymphocytes related to formaldehyde exposure (Speit & Schmid, 2006;Suruda et al., 1993).
The frequency of micronuclei (MN) in buccal and/or nasal mucosa cells is used to investigate local genotoxicity. According to reports on experimental genotoxicity studies, MN are the most sensitive genetic endpoints for the detection of formaldehyde-induced genotoxicity (Merck & Speit, 1998). Thus, the MN test with exfoliated cells can be a powerful tool for the detection of local genotoxic effects in humans, which is fundamental for hazard identification and risk estimation (Speit & Schmid, 2006). MN in peripheral blood lymphocytes has been extensively used to evaluate the presence and extent of chromosome damage in human populations exposed to genotoxic agents. Among its advantages, this MN test provides a reliable measure of chromosomal breakage and loss at lower cost and more easily than chromosomal aberration analysis. Moreover, the availability of the cytokinesis-block technique eliminates potential background caused by effects on cell division kinetics (Bonassi et al., 2001).
Research work has been developed to characterize occupational exposure to formaldehyde in two different occupational settings in Portugal (resin production and pathology and anatomy laboratories) and to study possible health effects related to the exposure. The objective of this chapter is to describe the work developed and discuss the results obtained.
Subjects
This study was carried out in Portugal on 80 workers occupationally exposed to formaldehyde vapours: 30 workers from a factory producing formaldehyde and formaldehyde-based resins and 50 from 10 pathology and anatomy laboratories. A control group of 85 non-exposed subjects was considered. All subjects were provided with the protocol and with the consent form, which they read and signed.
Health conditions, medical history, medication and lifestyle factors for all studied individuals, as well as information related to working practices (such as years of employment) were obtained through a standard questionnaire.
Exposure assessment
Two different exposure assessment methods were applied simultaneously in the 10 anatomy and pathology laboratories in Portuguese hospitals and in the factory producing formaldehyde and formaldehyde-based resins, in order to assess occupational exposure to formaldehyde. Environmental monitoring was performed between September 2007 and March 2008.
Different exposure groups were identified in the two occupational settings. Three were defined in the laboratories, namely pathologists, technicians, and technical assistants. Likewise, three groups were defined in the factory: resin production, impregnation, and quality control. These definitions were based essentially on the similarity of activities.
Method 1
In one of the methods, 30 environmental samples were obtained by personal air sampling with low-flow-rate (0.01 to 0.10 L/min) pumps (Zambelli) during a typical working day. The sorbent tubes used were impregnated with 10% 2-(hydroxymethyl)piperidine. Sampling time was 6 to 7 hours. Two to three samples were collected in each laboratory using electric flow pumps placed on a worker from each exposure group.
Method 2
Ceiling values for formaldehyde exposure were obtained using Photo Ionization Detection (PID) direct-reading equipment (with an 11.7 eV lamp) designated First-Check, from Ion Science. This equipment accurately detects formaldehyde from 1 ppb to 10,000 ppm and automatically logs readings from the sensor every second. Measurements were performed for each task, and readings were stored in the instrument's internal memory with a date and time stamp. At the same time, video recording was performed and synchronized with the real-time exposure data obtained with the PID equipment, allowing the exposure profile to be combined with the video image of worker activity. With this method it was possible to establish a relation between worker activities and ceiling values, and to determine the principal emission sources.
Eighty-three activities were studied in the 10 laboratories and three activities in the factory. All tasks were studied under normal conditions, namely using the ventilation devices, and, as usual, none of the workers wore masks for protection against formaldehyde vapours.
In both methods, sampling/measurements were performed near the workers' respiratory zone. Data obtained with the NIOSH 2541 method were compared with the OSHA reference value (TLV-TWA = 0.75 ppm) because there is no reference in Portugal for this exposure metric.
The ceiling values obtained with the PID method were compared with the reference value from the Portuguese Norm NP 1796-2007 (0.3 ppm).
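As a simple illustration of these comparisons, a sketch like the following could flag exceedances (the limits are those named above; the function and example value are illustrative):

```python
# Illustrative exceedance check against the reference limits named above.
OSHA_TWA8H_PPM = 0.75       # reference for the NIOSH 2541 TWA 8h results
NP_1796_CEILING_PPM = 0.3   # Portuguese Norm reference for PID ceiling values

def exceeds(measured_ppm: float, limit_ppm: float) -> bool:
    """Return True if a measured concentration exceeds the given limit."""
    return measured_ppm > limit_ppm

# e.g., a ceiling reading of 5.02 ppm (the highest laboratory value reported
# below) clearly exceeds the 0.3 ppm ceiling reference:
print(exceeds(5.02, NP_1796_CEILING_PPM))  # True
```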
Biological monitoring
To evaluate the effects of the occupational exposure, a study of effect biomarkers was conducted. The biomarkers of effect studied were specifically genotoxicity biomarkers, namely micronuclei in two different biological matrices: peripheral blood lymphocytes and exfoliated buccal cells.
The protocol used to measure MN in peripheral blood lymphocytes was the fully validated cytokinesis-blocked micronucleus assay (CBMN), developed by Fenech [20], in which cytochalasin B is used to block cytokinesis so that lymphocytes acquire a binucleated appearance. Heparinized blood samples were obtained by venipuncture from all subjects, and freshly collected blood was used directly for the MN test. Lymphocytes were isolated using a Ficoll-Paque gradient and placed in RPMI 1640 culture medium with L-glutamine and phenol red, supplemented with 10% inactivated fetal calf serum, 50 µg/mL streptomycin + 50 U/mL penicillin, and 10 µg/mL phytohaemagglutinin. Duplicate cultures from each subject were incubated at 37 °C in a humidified 5% CO2 incubator for 44 h, and cytochalasin B (6 µg/mL) was added to the cultures to prevent cytokinesis. After a further 28 h of incubation, cells were spun onto microscope slides using a cytocentrifuge. Smears were air-dried, double stained with May-Grünwald-Giemsa, and mounted with Entellan. The frequencies of binucleated cells with MN were determined by analyzing 1000 lymphocytes from two slides for each subject.
The optimal protocol of the MN test for exfoliated buccal cells was established after many experiments. In order to reach the optimal protocol, different techniques of collecting the cells and of staining were tested.
Concerning sample collection, the best way of obtaining the sample was considered to be scraping the inner cheeks of the individuals with an endobrush and directly preparing a smear on two slides. The samples were immediately fixed with Mercofix®, a methanol-based preservative.
The staining protocols tested were selected based on the affinity of the stains for the nucleus: Hematoxylin-Eosin, Hematoxylin, Giemsa, May-Grünwald Giemsa, Papanicolaou, Feulgen with Light Green, and Feulgen.
Reliable results were achieved with Feulgen staining without counterstain (Nersesyan et al., 2006). This technique consists of a first step of hydrolysis with 5 M HCl, followed by washing with distilled water, incubation with Schiff reagent, and a final wash with tap water. The slides were allowed to air dry and were mounted with Entellan®. Two thousand cells were scored for each individual. Only cells containing intact nuclei, neither clumped nor overlapping, were included in the analysis.
The criteria for scoring the nuclear abnormalities in lymphocytes and MN in buccal cells were described by Fenech et al. (1999) and Tolbert et al. (1991), respectively.
Characteristics of the studied population
The characterization of the studied population is summarized in Table 1. Controls and exposed workers did not differ significantly in age or smoking habits. A significant difference between the two groups was found only for gender distribution (p = 0.002), due to the larger number of women in the control group.
Exposure assessment
Formaldehyde exposure values were determined using the methods described above: NIOSH 2541 for average concentrations (TWA 8h) and the PID method to obtain ceiling concentrations (Tables 2 and 3). In contrast, for ceiling concentrations, all of the highest results obtained for each exposure group in each occupational setting exceeded the reference value (0.3 ppm). In the laboratories, values lay between 0.18 ppm and 5.02 ppm, with a mean of 2.52 ppm. In the factory, the concentration values registered each second lay between 0.0 and 1.02 ppm.
All three activities studied in the factory had results above the reference value for ceiling concentrations (0.3 ppm).
In resin production, the highest concentration value was obtained while a production operator collected a sample from the resin reactor. In this case, and during operation of the impregnation machine, there was no local exhaust ventilation device. Only the "quality control" exposure group had a small fume hood, which is not normally used to perform quality analysis of resins.
In the case of the laboratories, each of them had at least one task with a result higher than the reference value (0.3 ppm) (Figure 1).
Fig. 1. Highest ceiling value obtained in each laboratory
Considering all of the 83 tasks studied in the laboratories (Table 4), 93% of the results were higher than the reference value for ceiling concentrations (0.3 ppm).
The highest exposure level was observed during the "macroscopic examination" of formaldehyde-preserved specimens. This task is performed at a macroscopic bench with local exhaust ventilation. In all the laboratories studied, it was verified that the ventilation was functioning normally.
The task "data registration" showed also a high formaldehyde concentration value, being important to note that this task occurs during macroscopic examination (Table 4).
Concerning the 69 macroscopic examinations, the most frequent task performed in these laboratories, nearly 93% of the formaldehyde concentration values were higher than 0.3 ppm.
In this occupational setting, the highest ceiling value was identified in the results of the "Pathologists" exposure group, and the highest mean was obtained for the "Technicians" group (Table 5).
It is important to consider that none of the workers in the two occupational settings were using appropriate respiratory protection during the tasks studied.
Table 6 shows that the frequency of MN in occupationally exposed workers was significantly higher in comparison with the control group, both in peripheral blood lymphocytes (p < 0.001) and in epithelial buccal cells (p < 0.001).
Table 6. Frequency of MN in the studied population (controls; exposed workers in the factory; exposed workers in the pathology and anatomy laboratories)
When analyzing each occupational setting separately, we found significant differences in MN frequencies in peripheral blood lymphocytes (p < 0.001) and in epithelial buccal cells (p < 0.005) between the laboratories and control groups. Concerning the factory group, significant differences in MN frequencies were only detected in epithelial buccal cells (p < 0.001).
Finally, MN frequencies were compared between the two exposed groups: the MN frequency in peripheral blood lymphocytes was significantly higher in the laboratories group (p < 0.005), but with respect to epithelial buccal cells there was no significant difference between them (p = 0.108).
Concerning the three exposure groups studied in the pathology and anatomy laboratories, the pathologists group had the highest mean MN frequency in lymphocytes, and the technicians had the highest mean in buccal cells. The factory results revealed that the quality control group had the highest mean MN frequency in both lymphocytes and buccal cells.
Discussion
As indicated by several studies (IARC, 2006;Orsière et al., 2006;Shaham et al., 2003), the exposure assessment in the present investigation demonstrates that both occupational settings studied involve exposure to high peak formaldehyde concentrations.
The importance of this consideration lies in the fact that the health effects (cancer) linked to formaldehyde exposure are related more to peaks of high concentrations than to long-term exposure at low levels (IARC, 2006;Pyatt et al., 2008). Moreover, the choice of exposure metric should be based on the most biologically relevant exposure measure in order to diminish misclassification of exposure, which otherwise leads to attenuated exposure-response relationships (Preller et al., 2004). Furthermore, high exposures of short duration (peaks) are of special concern because they can produce an elevated dose rate at target tissues and organs, potentially altering metabolism, overloading protective and repair mechanisms, and amplifying tissue responses (Preller et al., 2004;Smith, 2001). In addition, Pyatt et al. (2008) pointed out, as a limitation of most epidemiological studies, the lack of data on exposure to peak concentrations. Therefore, in those studies, health effects resulting from occupational exposure to formaldehyde are associated with exposure based exclusively on time-weighted average concentrations (Pyatt et al., 2008). Until 2004, only two studies concerning the association between formaldehyde exposure and nasopharyngeal cancer presented data on exposure to ceiling concentrations, and these obtained higher relative risk values than the other studies (Hauptmann et al., 2004;Pinkerton et al., 2004;Zhang et al., 2009).
Recently Hauptmann and colleagues have found that mortality rate from leukemia increased significantly not just with number of years of activity, in this case embalming, but also with increasing peak formaldehyde exposure (Dreyfuss, 2010;Hauptmann et al., 2009).
The results from the laboratories indicate "macroscopic examination" as the task involving the highest exposure. This is probably because precision and very good visibility are needed and, therefore, pathologists must lean over the specimen, with a consequent increase in proximity to formaldehyde emission sources. Studies by Goyer et al. (2004) and Orsière et al. (2006) support the finding that proximity to impregnated specimens promotes higher exposure to formaldehyde. "Pathologists" is normally the exposure group that performs this task. However, the "Technicians" group obtained both the higher TWA 8h and the higher mean ceiling values. This may be because it is the group involved in more tasks related to formaldehyde exposure during the working day.
In the case of the factory, the task "collecting a sample from the reactor" involved a manual process. The proximity to the reactor and its opening probably promote exposure.
It is important to note that this type of information (exposure determinants, emission sources and exposed workers) could only be obtained because video recording was performed simultaneously with the concentration measurements.
This resource gives opportunity to directly relate performance with exposure (Mcglothlin, 2005;Ryan et al., 2003;Rosén et al., 2005). Additionally, real-time measurements are useful also for evaluating engineering controls and their efficacy (Yokel & MacPhail, 2011).
In addition, and in agreement with other studies (Kromhout, 2002;Meijster et al., 2008;Susi & Schneider, 1995), it is possible to conclude that TWA 8h measurements give poor information and are of less utility in identifying the tasks that should be targeted for control.
Long exposures to formaldehyde, such as those to which some workers are subjected for occupational reasons, are suspected to be associated with genotoxic effects that can be evaluated by biomarkers (Conaway et al., 1996;IARC, 2006;Viegas & Prista, 2007). In this study, the results suggest that workers in pathological anatomy laboratories are exposed to formaldehyde levels that exceed recommended exposure criteria, and a statistically significant association was found between formaldehyde exposure and biomarkers of genotoxicity, namely MN in lymphocytes and buccal cells.
Chromosome damage and effects upon lymphocytes arise because formaldehyde escapes from sites of direct contact, such as the mouth, causing nuclear alterations in the lymphocytes of those exposed (He & Jin, 1998;Orsière et al., 2006;Ye et al., 2005;Zhang et al., 2009).
Our results corroborate previous reports (Ye et al., 2005) that lymphocytes can be compromised by long-term exposures. Moreover, the changes in peripheral lymphocytes can be a sign that the cytogenetic effects triggered by formaldehyde can reach tissues far away from the site of initial contact (Suruda et al., 1993). Long-term exposures to high concentrations of formaldehyde indeed appear to have a potential for generalized DNA damage. In experimental studies with animals, local genotoxic effects following formaldehyde exposure have previously been demonstrated, giving rise to DNA-protein crosslinks, structural chromosomal aberrations, and aberrant cells (IARC, 2006). In our research, the MN frequency in peripheral blood lymphocytes was significantly higher in the laboratories group than in the factory group, probably because the years of exposure are greater in the first group.
In humans, formaldehyde exposure is associated with an increase in the frequency of MN in buccal epithelium cells (Burgaz et al., 2002;Speit et al., 2007), as corroborated by the results presented here. Suruda et al. (1993) claim that although changes in oral and nasal epithelial cells and peripheral blood cells do not indicate a direct mechanism leading to carcinogenesis, they do indicate that DNA alterations took place. It thus appears reasonable to conclude that formaldehyde is a risk factor for those occupationally exposed in these two occupational settings (IARC, 2006).
In human biomonitoring studies it is important to assess the influence of major confounding factors such as gender, age and smoking habits on the endpoints studied. In our results, however, no significant differences in MN frequencies were obtained between women and men (both in peripheral blood lymphocytes and in epithelial buccal cells), although other studies have found an increase in MN frequencies in women. Current knowledge on the effect of gender on genetic damage indicates a 1.5-fold greater MN frequency in females than in males (Fenech et al., 2003;Wojda et al., 2007), which can be explained by preferential aneugenic events involving the X-chromosome. Surrallés et al. (1996) reported an overrepresentation of this chromosome in micronucleated lymphocytes cultured from women.
Tobacco smoke contains a high number of mutagenic and carcinogenic substances and is causally linked to an elevated incidence of several forms of cancer (IARC, 1985). Hence, smoking is an important variable to consider in biomonitoring studies, and particularly in this study, since formaldehyde is present in tobacco smoke (IARC, 2006). The effect of tobacco smoking on MN frequency in human cells has been the object of study. In most reports the results were unexpected, as in many instances smokers had lower frequencies of MN than non-smokers (Bonassi et al., 2003;Orsière et al., 2006). In the present study, no significant differences were found in MN (peripheral blood lymphocytes and epithelial buccal cells) between smokers and non-smokers. These findings are similar to the results obtained in the study of Bonassi et al. (2003). These authors recommend that quantitative data about smoking habits be collected, because the sub-group of heavy smokers (30 cigarettes per day) can influence the results. Of note, the questionnaire results of this study revealed no heavy smokers in these two worker groups.
Conclusion
Some preventive measures can be applied to reduce exposure to formaldehyde in these two occupational settings. In the case of anatomy and pathology laboratories, exposure reduction can be achieved by the use of adequate local exhaust ventilation, relocation of the specimen containers to areas with isolated ventilation, and the use of hooded enclosures over such containers.
For the factory, preventive measures must consider automating some processes, such as sampling in the reactors, and, additionally, promoting the use of the existing local ventilation devices.
The exposure assessment methods applied in this research permitted the conclusion that TWA 8h measurements give poor information for prioritizing preventive measures, and that the CBMN assay applied to assess genotoxic effects is a screening technique that can be used for the clinical prevention and management of workers under occupational carcinogenic risks, namely exposure to a genotoxic agent such as formaldehyde.
The most recent studies suggest that future research is warranted to more effectively assess the risk of leukemia arising from formaldehyde exposure, to better explain some inconsistencies in mode of action, and to understand the role of short-term peak exposures.
"Chemistry"
] |
Affective field during collaborative problem posing and problem solving: a case study
Educators in mathematics have long been concerned about students’ motivation, anxiety, and other affective characteristics. Typically, research into affect focuses on one theoretical construct (e.g., emotion, motivation, beliefs, or interest). However, we introduce the term affective field to account for a person’s various affective factors (emotions, attitudes, etc.) in their intraplay. In a case study, we use data from an extracurricular, inquiry-oriented collaborative problem posing and problem solving (PP&PS) program, which took place as a 1-year project with four upper secondary school students in Sweden (aged 16–18). We investigated the affective field of one student, Anna, in its social and dynamic nature. The question addressed in this context is: In what ways does an affective field of a student engaging in PP&PS evolve, and what may be explanations for this evolvement? Anna’s affective field was dynamic over the course of the program. Her initial anxiety during the PP&PS program was rooted in her prior affective field about mathematics activities, but group collaboration, the feeling of safety and appreciation, together with an increased interest in within-solution PP and openness for trying new things went hand in hand with positive dynamics in her affective field.
Introduction
This article introduces the notion of "affective field" to denote the complex of students' emotions, attitudes, interests, beliefs, etc. that are at stake during activity. We propose this notion as a way to compensate for the dominant trend to treat each affective construct in its own separate body of literature (as observed, e.g., by Hannula, 2011; Renninger & Hidi, 2016). We have chosen the field metaphor because of the parallel we see with magnetic fields. First, magnetic fields have positive and negative poles, just like the attractions and repulsions that are typical of affect. Second, fields tend to be open, not purely tied to one body, as affective factors can spread across groups (de Freitas, Ferrara, & Ferrari, 2019). Third, the field metaphor allows us to talk about bundles of affective factors involved in learning activity such as mathematical problem solving and problem posing (the topic of this special issue).
Affect is an important part of mathematical activity (e.g., of mathematical problem solving) and a relevant predictor for students' future mathematical behavior (Hannula, 2019). Because of common phenomena such as disengagement and diminishing participation, affect has been of interest for mathematics educators for many years (Grootenboer & Marshman, 2016). Accordingly, it has been of increasing interest to researchers in mathematics education over the last decades (see, e.g., the Special Issue on affect in Educational Studies in Mathematics, Zan, Brown, Evans, & Hannula, 2006; see also Batchelor, Torbeyns, & Verschaffel, 2019; Schukajlow, Rakoczy, & Pekrun, 2017). It involves a variety of factors or constructs such as emotions, attitudes, beliefs, values, motivation, interest, and many more (e.g., Grootenboer & Marshman, 2016; McLeod, 1992), which are related to one another (O'Keefe, Horberg, & Plante, 2017).
Scholars such as Hannula (2002) and Cobb et al. (1989) have shown how students' affect is better viewed holistically and dynamically. These studies, which investigated students' affect during problem solving-focused teaching experiments over a period of time, were an inspiration and starting point for this study. Yet, as observed by Renninger and Hidi (2016), most research still atomistically studies (single) affective constructs as variables and seeks to identify relationships between them as if they were separate entities.
The purpose of our article is to analyze an affective field's emergence and dynamics in entanglement with PS&PP activities, with PP in particular. We use the case of the affective field of a Swedish upper secondary school student, Anna, who took part in an extracurricular mathematics project where four students (aged 16-18) met and collaborated in inquiry-oriented meetings. We chose to focus on Anna, since her affective development was the most dramatic, and hence the most interesting, and because she was open to talking about her feelings, which allowed us to study the dynamics of her affective field. We trace the changes in her affective field over the course of an inquiry-oriented PP&PS program. Based on previous research on affect (e.g., Carmichael, Callingham, & Watt, 2017; Cobb et al., 1989; Grootenboer & Marshman, 2016; Hannula, 2002; Roth & Walshaw, 2019), we assume that various affective factors (emotions, attitudes, etc.) simultaneously play a role when students engage with mathematical PS and PP, which we keep in vision through the use of the notion of affective field. Given the scarcity of research on affect during PP&PS, we are interested in how her affective field may evolve over time. We ask the research question: In what ways does an affective field of a student engaging in PP&PS evolve, and what may be explanations for this evolvement?
2 Problem posing and problem solving during students' mathematical inquiry
Problem posing (PP) is an important activity for school students. In recent years, the potential of integrating PP in classroom practices has been supported both theoretically and empirically (Cai et al., 2015; Singer, Ellerton, & Cai, 2013). PP is associated with improvement of students' PS skills and their understanding of mathematical concepts, and with students' attitudes, motivation, and self-confidence (Cifarelli & Sevim, 2015; Silver, 1994; Silver & Cai, 1996; Singer & Moscovici, 2008).
PP is regarded as essential in mathematical thinking (Bonotto & Dal Santo, 2015) and "as a critically important intellectual activity in scientific investigation" (Cai et al., 2015, p. 5). However, there is no consensus on a definition of the term PP (Singer et al., 2013). In our work, we focus on PP during PS (Silver, 1994), so-called within-solution problem posing (Cifarelli & Sevim, 2015, p. 177). Within-solution PP happens when a student changes the goal and conditions of a problem intentionally, or shifts perspective, for instance, in inquiry-based learning approaches. In students' PS, there can even be "series of self-generated problematic situations within which particular goals and purposes are pursued" (Cifarelli & Sevim, 2015, p. 180), for instance, when students generalize or broaden the scope of mathematical problems (see also Silver, 1994).
The inherent role of PP within PS has been acknowledged and emphasized in research (e.g., Pirie & Kieran, 1994; Schoenfeld, 1985; Silver, 1994) as well as in education: For instance, the NCTM Standards (2000) recommend that students formulate problems themselves, put up and investigate conjectures, and generalize and extend problems. Investigating the interplay of PP and PS in students' exploration, Cai and Cifarelli (2005) found the relation between PP and PS to be recursive, involving "problem posing-solving chains" (p. 62), where PS and PP may alternate and coevolve. Based on such observations, we use the term PP&PS approaches for PS approaches that open up for within-solution PP and provide opportunities for students to pursue their own goals, shift perspectives, and follow their PP interests.
We lean on previous research that has used open-ended problems, inquiry-based learning approaches, and modeling problems (see Hansen & Hana, 2015; Kilpatrick, 2016). A common feature of such approaches is that students inquire into mathematically rich situations or problems, which naturally also involves PP activities (Bonotto & Dal Santo, 2015; Cifarelli & Sevim, 2015; Kilpatrick, 2016; Silver, 1994): Students set their own goals and pose problems to themselves that do not necessarily have to be difficult but may even be shifts in perspective and attention (Bonotto & Dal Santo, 2015; Starko, 2010). Several scholars (e.g., Brown & Walter, 1993; Cifarelli & Sevim, 2005) have pointed out the potential benefits of within-solution PP, including conceptual growth through in-depth inquiry into the problem and increased PS through PP. For instance, Cifarelli and Sevim (2015) illustrated how the coevolving process of PP and PS increased the level of generalization, broadened perspective, and expanded the scope of the problem, which led to conceptual progress.
Researchers have wondered whether inquiry-driven PP&PS can have a positive influence on students' affect, or vice versa, or perhaps both. Cifarelli and Sevim (2015), in a case study on fourth graders, found that "[t]he students were highly motivated to answer questions that arose from their sense of surprise in their results" (p. 188) and that the increased motivation went along with an "ongoing sense of accomplishment" (p. 188), which in turn may be related to students' self-efficacy. Chen et al. (2015) investigated a training program aiming to enhance fourth grade students' PP&PS abilities, and, through using questionnaires, they showed that not only the originality of the posed problems but also students' PP&PS beliefs were affected positively. Chang et al. (2012) found that a PP&PS training program increased students' flow experiences, which "could augment students' motivation and learning process" (p. 776). Further, PP in students' inquiry may influence students' attitudes towards mathematics positively (e.g., Brown & Walter, 1993; see Silver, 1994, for an overview). On the other hand, PP may also have a negative influence on students' affect: Silver (1994) hypothesizes that especially students with a history of success in regular, non-inquiry-based teaching with direct instruction may have little desire or motivation to engage in PP activities. Finally, we think that the distinctions between affect and PP are analytic distinctions of a much more fluid and complex phenomenon. We prefer to think in terms of co-occurring phenomena rather than (in)dependent variables (cf. Barad, 2007).
Affective field
Affect is a complex phenomenon with many factors involved (e.g., Hannula, 2019), while the terminology and concepts used to account for factors of affect are partially used interchangeably with varying meanings (Lomas, Grootenboer, & Attard, 2012). In the following, we describe our theorization of students' affect as affective field, which means the bundles of affective factors involved in particular situations in their intraplay. In doing so, we lean on McLeod's (1992) seminal conceptualization of the affective domain, which includes emotions, attitudes, and beliefs. However, whereas we see that McLeod refers to a conceptual domain when he writes about the affective domain (note the terms reconceptualizing and concepts), our notion of affective field rather refers to the affective factors at work in people and groups of people.
Research into affect typically focuses on one theoretical construct (e.g., emotion, motivation, beliefs, interest). In our article, we take a holistic view on the affective domain: We argue that there is an added value in treating affect as a field to counteract the fragmented literature on a multitude of affective constructs. We build on approaches assuming that emotions, interest, motivation, and engagement are highly related constructs (Hannula, 2006, 2011; O'Keefe et al., 2017; Renninger & Hidi, 2016), and we are inspired by, among others, Hannula's (2002) and Cobb et al.'s (1989) holistic theorizations of affect.
In accordance with the increased acknowledgment of the sociocultural context in research on affect (e.g., Grootenboer & Marshman, 2016; Heyd-Metzuyanim et al., 2016; Middleton et al., 2016; Pantziara, 2016), we assume the affective factors of participants in a social context to interact and to be hardly separable from one another. For instance, if three students in a group are highly motivated and enjoying the work, this positive affect is likely to have an impact on the fourth student (for instance, motivating him/her) (see de Freitas et al., 2019, on affectivity). Also, the norms within these contexts interact with affective fields (Cobb et al., 1989).
In our theorization of affect, we take a holistic stance since we are talking about the affective field as a fluid phenomenon. Constructs such as emotions and beliefs have the advantages of communication and research, but their self-sustained essences "constitute a rather shaky ground" (Sfard, 2008, p. 56), and objectifying talk can lead to controversies about the correctness, even though there is no real right or wrong, leading to an "ontological collapse" (p. 57) of taking statements about discourse to be about the extradiscursive world. Yet, for research, it is important to have concepts to grasp the phenomena under investigation. Therefore, despite our holistic stance, we investigate the affective field (taken as a whole) through researching bundles of factors. Table 1 presents working definitions of some affective factors that are important for describing students' affective field in this article. These comprise emotions, attitudes, beliefs, self-efficacy, motivation, interest, and values.
Table 1 Working definitions of affective factors, with example student utterances
Emotions
Emotions are feelings such as happiness, fear, or anger in a particular situation that are temporary and unstable (Emotion, n.d.; Grootenboer & Marshman, 2016; McLeod, 1992). Example: "It's fun!"
Attitudes
Attitudes are stabilized affective responses within certain situations, or rather a psychological tendency towards an object or entity (Grootenboer & Marshman, 2016; McLeod, 1992; Savelsbergh et al., 2016). Example: "I am always afraid of being wrong."
Beliefs
Beliefs are students' views of some aspect of the world (Philipp, 2007), e.g., beliefs about mathematics and beliefs about problem solving. Example: "We just want to know what the answer is. It's not how we solve it, it's if we get the right answer."
Self-efficacy
Self-efficacy is a student's own assessment/judgment of her capabilities to execute specific behaviors in specific situations, e.g., to pose and solve math problems (Pajares & Miller, 1994). Examples: "I felt like I cannot do this."; "It kind of affects me when I cannot solve it. I do not feel very confident and strong."
Interest and motivation
Interest is a preferred engagement of a person (student) with a certain entity, which can be more or less situational (e.g., finding a problem appealing) or enduring (e.g., general interest in math); it is a continuum (Akkerman, Vulperhorst, & Akkerman, 2019; Renninger, 2009; Schukajlow et al., 2017). Motivation is the ensemble of reasons and influences why students engage in any pursuit, e.g., in mathematical problem solving, or in a particular approach (see Middleton, Jansen, & Goldin, 2016; Motivation, n.d.). Examples: "We would like to see!" (a student referring to another student's approach); "And (…) then you wanna do it. Because it's a challenge."
Values
Value is the appreciation or perceived importance of objects, contents, and actions (see Rokeach, 1973; Schukajlow et al., 2012). Example: "That's good. (…) It's logical." (a student referring to another student's approach)
We use working definitions because generally shared definitions do not exist (see Grootenboer & Marshman, 2016; Hannula, 2019). Of course, for these working definitions, we needed to make decisions: Although some scholars would, for example, consider interest to be an emotion, we follow Renninger and Hidi (2016) in regarding interest as emotionally charged, but not as an emotion itself. Also, we have decided to take values as analytically distinguishable from interest, following Schukajlow et al. (2012), even though others might see them in unity.
This study
This article presents a post hoc analysis of the affective field of a student, Anna, when engaging in PP&PS and its evolvement in an extracurricular inquiry-oriented project. It was in hindsight that we realized how affect was a crucial aspect in students' PP&PS. We saw the potential of the data to illustrate the fluidity and multiplicity of Anna's affective field in entanglement with her PS&PP activities, with PP in particular.
The Creative Math Meetings
The so-called Creative Math Meetings (CMMs) was a program developed at Örebro University, Sweden, in particular by the first author of this article, in collaboration with a local school. It took place roughly every other week during a full school year, spanning from August to May. The university invited students from a local school to participate. The CMMs addressed upper secondary school students, aged 16 to 18. Participation in the CMMs was voluntary.
Prior to the CMMs, the university teacher held a talk at the school with the purpose of recruiting interested students to join the CMMs. The CMMs were announced as a learning and research project aiming to develop mathematical activities for interested students with a focus on students' creative problem solving. During the CMMs, the students worked on different mathematics problems, which often came without explicit instruction about what to inquire into. The students were not asked explicitly to also pose problems; rather, we aimed for self-driven within-solution PP by the students, as described in Section 2. In those meetings in focus in this article (see Fig. 1), the students (1) were encouraged to find different solutions for a particular geometry multiple solution task (Leikin, 2009; Novotná, 2017) in meeting 1, (2) were asked to work on a problem where they tested whether one can lay down domino tiles (displaying the numbers 0-6 and 0-9) in a circle in meeting 2 (Kießwetter & Rehlich, 2005), and (3) worked on a problem where they were supposed to inquire into round tours in a city map (Eulerian graphs) in meeting 3 (see Supplementary Materials 1, 2, and 3 for a description of the problems and of how they each afforded PP).
In the meetings, the students first worked on the problems individually for a short time (ca. 15 min) and then worked in groups for most of the time (ca. 60 min). At the end of each meeting (except M0), reflections took place. These reflections were minimally guided: The students reported, in their own words, how it went, about their group work, and about their discoveries. The reflections took approximately 20-30 min each. In these reflections, the students were not explicitly asked to report about their affect, for example, about their emotions and self-efficacy, since it was not the program's original aim to investigate student affect. However, very early, it became clear that the PP&PS was an affective experience for the students, and they, especially Anna, reported much about their anxieties, pride, or other emotions. All in all, the project comprised 15 meetings (Fig. 1).
The group
Over the course of the first CMMs, a stable group of four students attended the CMMs regularly: Anna and David, two 18-year-old students in Swedish upper secondary grade 3 (grade 12 in the K-12 system), and Jakob and Linda, 16-year-old students in grade 1 (grade 10; all names are pseudonyms). In this project, the first author of this paper was the teacher. This was because the school teachers were not used to an inquiry-oriented collaborative PP&PS approach towards teaching and wanted to sit in on and get to know the program. Two school teachers attended the CMMs irregularly as observers. The teacher in the CMMs had 5 years of experience as a school teacher and had furthermore taught for 6 years at university level in teacher education programs before this project. In the CMMs, the students spoke English fluently: They were partly used to doing so in their international program at school (English as teaching language: Anna, Jakob, and Linda), had been on longer exchange programs to English-speaking countries, or were raised bilingually (Swedish-English).
This case study focuses on one of the four students: Anna. Her affective development was the most dramatic: There was an incident in the first meeting, involving only Anna (not the rest of the group), where she told the teacher individually that she could not bear the feeling of being unable to solve the problem and wanted to quit the program. Yet, during the project, her view on mathematics and her affective field changed substantially. Furthermore, it was not the project's original aim to inquire into the affective factors involved in PS&PP, and the students were not explicitly encouraged to talk about their feelings. Yet, Anna's openness allowed us to trace her affective field as an analytic case.
Data and data analysis
All meetings (except for M0, which was a getting-to-know-each-other meeting) were video recorded with one or two cameras. Inspired by the studies of Hannula (2002) and Cobb et al. (1989), we intended to let the data speak when analyzing our data-in line with the idea of affective field: We used the case of what we conceptualized as Anna's affective field during PP&PS as "inspiring narrative" (Hannula, 2002, p. 31), which is not created from a void but rather is extracted from episodes observed during the meetings as well as from the stories the students told in their reflections. To trace the dynamics of Anna's affective field, we used the data from the group reflections of the meetings, where the students described their emotions, attitudes, or beliefs related to PP&PS. In particular, the reflections in meetings 2 and 3 were significant with respect to students' affect and are in focus. In the transcripts, we coded all affective factors related to students' PP&PS that came up. For example, when Anna uttered "I am always afraid of being wrong," we paraphrased this as "anxiety of being wrong" and coded it as an attitude (see Table 1 for a list of all factors that were coded). We did so for all students: This was because we did not want to cut out and isolate Anna's PP and her affect from the group's but to take into account the social nature of PS&PP and of affect.
Note that in the coding process, we decided to code interest and motivation together, since intrinsic motivation and interest are partially considered similar (Wigfield & Cambria, 2010) and they were practically difficult to separate in the coding of students' group work. Based on our codings, we created snapshots of students' affective fields (Figs. 3, 5, 6, and 7): Overviews with bundles of beliefs, attitudes, etc. as described by the students (mostly paraphrased).
For finding explanations for the dynamics in Anna's affective field, we also used data from later reflections (meeting 9) and from an important affective incident in the first meeting, when Anna initially wanted to quit the program because of her insecurity and anxiety related to the PP&PS. Further, we used all group work data of the meetings 2 and 3 and coded and paraphrased the affective factors. To trace the factors in their fluidity, we captured them in "flow tables" (see Supplementary Materials 4 and 5), which are tables capturing students' group work, where the affective factors together with a paraphrase of the respective student's utterance (e.g., PP interest to find a proof, brought forward by Anna) were noted in the chronological order of appearance in the group work. These tables indicate the flow of affectivity connected to PS&PP in the students' work. Based on these tables, we (a) report on the group work with focus on affect and PP as well as its major themes and (b) created snapshots of the affective field in the group work (structurally similar to the ones for the student reflections).
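To give a concrete picture of such a flow-table row, here is a minimal illustrative sketch in Python; the field names and the example values are our shorthand for illustration, not the actual coding scheme used in the project.

```python
# Minimal illustrative sketch of one flow-table row; the field names are
# shorthand for illustration, not a fixed coding scheme. Each row records
# an affective factor in the chronological order in which it surfaced.
flow_table_row = {
    "meeting": 2,                     # which CMM the episode belongs to
    "order": 14,                      # chronological position in the group work
    "student": "Anna",
    "factor": "interest/motivation",  # coded category (cf. Table 1)
    "paraphrase": "PP interest to find a proof",
}
```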
The case of Anna's affective field
To demonstrate the dynamics in Anna's affective field, we use the same model (Fig. 2) in all following figures to elaborate on the affective factors entangled with PP&PS. As is to be expected, we did not find evidence of all factors in all situations under investigation. The boxes stay empty when aspects were not explicitly at stake.
Incident in first meeting: Anna's initial anxiety and students' prior affect
We begin with an incident in meeting 1 in minute 106ff, where the students worked on a problem (a geometry multiple solution task, see Supplementary Material 1) that afforded PP in a way that the students were asked to find different solutions and perspectives. Anna took the opportunity to talk to the teacher who was passing by:
Anna: I'm not sure I will be coming back next time.
Teacher: Why not?
Anna: I don't know. It kind of affects me when I cannot solve it. I don't feel very confident and strong when I leave. So… I don't know. I don't get a good feeling here. So, I will have to think about it.
Teacher: Yes, do so. (Inaudible) But research says the girls are not as self-confident as boys are. So, this should not be a reason. If I know that this is something… (inaudible) So I would strongly suggest that you come back. (inaudible)
Anna: Now it was also the same problems [referring to the geometry multiple solution task that they had already worked on in the prior getting-to-know-each-other meeting]. Maybe another problem is like more in my style. And maybe it's easier. And maybe this can be another thing to, like, motivate me. But I don't know. I'll have to think about it.
Teacher: But do so. (inaudible, referring to the next problem in the next session)
Anna: I'll think about it.
Teacher: Yes. Do so. Please.
Our interpretation is that Anna's experience of failure in PS came with an affective reaction. The fact that she could not solve the problem caused negative emotions ("I don't get a good feeling here") and appeared to address her self-efficacy (feeling "not strong," "it affects me," "I don't feel very confident and strong") (Fig. 3). The (female) teacher attributed Anna's low self-concept to gender differences and thus depersonalized the issue, tried to show empathy, and, in a certain way, tried to bond with Anna (in the sense of "we women should not be frightened and need to counter this difference"). Anna then hypothesized that the failure may be related to the type of mathematical problem: that other problems might be more "her style" and could "motivate her." In the second meeting, Anna described the same incident in retrospect, stating "Yeah, I was... Yeah, I remember. I was, like, I can't do this, okay, crying (rubbing her eyes, laughing) (everybody laughing)." Jakob, in turn, added "I was expecting something extremely hard. I was, like, tense," which was affirmed by Linda. Anna's utterances reflect a low self-efficacy ("I can't do this") and helplessness/sadness ("crying"). Anna even mimicked crying by rubbing her eyes and laughed, possibly because she was embarrassed that she had had bad feelings in the first place. In meeting 9, Anna detailed: Because for me like in the beginning in the very, very first lesson we had that problem with a triangle. I was looking at it and I did not have a solution; I did not have an answer. That's why I didn't want to come back because I felt like I can't do this. (…) Because usually if I don't get the answer it's like, damn, I suck at this. Then you just want to quit.
The emotional experience of not being able to solve the problem caused avoidance strategies and affected her self-efficacy. Anna was a very good mathematics student at school. Her experience of discomfort in this "new" mathematical inquiry activity can be related to what Silver (1994) hypothesized: that PP&PS may have a negative affective influence on students who have a history of success in regular, non-inquiry-based teaching. Most likely, Anna was not used to this kind of frustration in mathematics, and maybe she wanted to protect herself from a decreased self-efficacy or from changing her mathematical identity (Horn, 2008; Kaspersen, Pepin, & Sikko, 2017).
Fig. 3 Anna's affective field during the 1st meeting (top) and students' affective field prior to the project (bottom) (blue entries relate to Anna)
In retrospective reflections in meeting 9, David and Anna described their affect prior to the project. Their descriptions help to explain why Anna experienced anxiety and tenseness at the beginning of the project (see Fig. 3). For example, she believed that there is only one way to solve a problem and was generally scared when she did not find that particular way. The students' beliefs about mathematics and their values can be related to typical non-inquiry-based teaching, the attitudes they describe are predominantly negative (boredom, hatred, anxiety), and their motivation is extrinsic rather than intrinsic. In particular, the students were not used to within-solution PP in mathematics, finding different perspectives, or trying new things. In our interpretation, Anna's negative affective reaction, a repulsion, is connected to her unfamiliarity with PP&PS approaches. We use the data from the following meetings to trace how Anna increasingly became acquainted with within-solution PP, together with positive dynamics in her affective field.
Second meeting: PP interest and positive affect in group work
In the second meeting, the students worked on a problem (the domino problem, see Supplementary Material 2) that afforded within-solution PP in the sense that a trial-and-error approach hardly leads to a (quick) solution, which set the stage for the students to inquire into regularities, patterns, and generalizations in a self-driven way. After their individual work, the students were encouraged by the teacher to sit together and share their ideas with each other: "I would suggest that you present to each other your ideas. If you have not finished with your ideas, perhaps you can help each other out (…). Yeah?" Group work: In the following, we characterize the students' group work with regard to PP and affect. Jakob voluntarily started the group work by reporting about his failure experience in individual work: I was hoping for an epiphany (smiling). But that's never smart (all students laughing). But I wasn't really sure where to go. So, I listed this, and … then I gave up and desperately tried to make a circle (all students laughing).
This move by Jakob indicated his emotion in individual work, where he resigned, and it conveyed a mathematics-related value, the acceptance of failure. In turn, Anna showed empathy and emotional support for him and the interest to include his idea even though it failed in the end ("Well, maybe we can find use for it [your approach] later?"). Jakob then showed interest in the other students' approaches ("Did someone solve this problem?"). Anna confirmed this, yet indicated insecurity ("I'd like to think I did it."), and Jakob decided that David, who was more confident, should go on ("Okay, so, confident guy goes first."). David then, reporting his approach, brought forward his PP interest: generalizing the particular mathematics problem to a general problem (about even and odd numbers). This PP focus also caught Anna's attention ("You said something earlier about even and odd numbers. Because…"), which deepened David's interest in generalization and made him interested in even finding a proof. When David finished his explanations, Jakob turned again to Anna, asking: "You think you solved it as well?", indicating interest in her approach. Anna again showed hesitance ("Yeah, I'd like to think I solved it."), but, this time, was encouraged by Jakob to talk about it ("We would like to see (smiling)."). Anna then, telling her approach, revealed that she also had had the PP interest to generalize the problem (to even and odd numbers). This was appreciated by the others as "logical" (Jakob), "sounds like it would work" (Jakob), and "it makes sense" (Linda). In turn, David showed interest to check whether the two approaches (by David and Anna) were compatible and combinable. This went along with a certain emotional arousal: First, there was tension, since David thought that their approaches were different. When Anna then explained and confirmed that they actually were compatible, David was relieved, first throwing his hands in his face with the other students observing him tensely (Fig. 4, left), and then he started chuckling. This relief then spread among the group, together with happiness and joy, the students smiling and chuckling (Fig. 4, right), with Anna being in the center of the group, being empowered. In the following group work, Anna was more proactive in suggesting what to do, bringing her PP interest more into play (e.g., suggesting to find a proof for the general solution and to combine the approaches by David and herself).
To summarize, we identified two major themes in the group work: The first theme was the creation of an atmosphere of safety and appreciation through students' handling of failure experiences and their interest in one another's ideas. Jakob, Linda, and Anna each reported, several times, about negative feelings during the individual work preceding the group work (insecurity/failure experiences). Every time the students mentioned individual insecurity or failure experiences, the group comforted the respective student, showed emotional support, or showed interest in and appreciation for the approaches, despite the mistakes. The second theme was students' increasing establishment of norms about PS and interest in within-solution PP. In their PS, they, for instance, frequently mentioned their appreciation of logical, elegant, simple, fast, "sense-making," or "believable" approaches. The students also developed an increased interest in within-solution PP (e.g., Cifarelli & Sevim, 2015), which was entangled with positive affect: In particular, they developed an interest to generalize the problem, to find a proof for their generalization, and to combine their solutions for such a proof, and they began to establish norms about what to strive for in open problem situations (generalizations, proofs, and combinations of their solutions). This can be related to Cifarelli and Sevim's (2015) descriptions of within-solution PP as "transformations of the original problem" (p. 178), which expand "the scope of the original problem" (p. 178). The PP of the students in our case study was accompanied by happiness, relief, and appreciation when the students found out that their approaches complemented each other. We think that the first theme (safety and appreciation), together with a happy atmosphere where students were smiling, chuckling, laughing, and joking (indicating positive emotions such as enjoyment and pleasure), set the stage for them to engage in within-solution PP in a safe and appreciating environment: to try different ideas and approaches without fearing consequences of failure.
Student reflection: At the end of the second meeting, the group reflected on their group work and collaboration (see Fig. 5). The students valued individual work for its purpose for the group work: getting the group to work, enabling the group to come up with a solution, and enabling everyone to contribute to the group. Anna stated, "I mean, to a certain extent you need to work alone to actually have something to say to the group." She also mentioned the risk that, without individual work before group work, the group could be steered too easily by single persons. David valued individual work because there are no distractions. However, both Anna and Jakob mentioned that they feel little enjoyment (if at all) during the individual work, "the solitary path" (Jakob), and insecurity (Anna: "you're not sure about anything, you want confirmation"). The students related group work to enjoyment and to a feeling of safety: Either they got confirmation that their solutions from individual work were correct or, if they had made a mistake, they got corrected, which made them feel safe. Especially Anna emphasized the feeling of safety. They also valued everyone's contribution to the group. Yet, the students did not value group work only for safety reasons: They also mentioned the benefit of having all the group members' ideas, which indicates the value they placed on content-related collaboration.
At the beginning of the third meeting, when the students recalled what they were working on in the second meeting, Anna said: "The dominos! (chuckling) (everyone chuckling, laughing) I was just explaining that to my friend. This felt so professional! (throwing her hair back with one hand) (everyone laughing)." Her utterance and chuckling indicate joy and pride, and that the collaborative work influenced her self-efficacy in a positive way. The group appeared to realize their collective efficacy, which "is developed when a group works" (Pantziara, 2016, p. 7).
Taking together the data from group work and student reflection in meeting 2 (snapshots in Fig. 5), we see that Anna's affective field, together with the group's, was positively affected: For example, her positive emotions of feeling happy and safe went along with an increased interest in within-solution PP, which again went along with her feeling professional (self-efficacy). PP activities and attractions in the affective field went hand in hand.
Third meeting: increased PP interest and belief change
In the third meeting, the students worked on a problem (the city tour problem, see Supplementary Material 3), which was open and afforded within-solution PP in the sense that the students were encouraged to find their own goals, perspectives, and problems to pursue. After their individual work, the students were to sit together. The teacher did not give an explicit instruction, nor was a particular problem-solving focus given on the task sheet.
Group work: The group work in this meeting started with Anna asking the teacher for guidance. Right after the students sat down together as a group, Anna asked:
Anna: (to the teacher) Do you want us to go through the other ones as well or should we just look at Melbourne, because that's the one-
Teacher: (cheerfully) Do whatever you want.
Anna: Okay.
Anna's initial question possibly reflected her belief that in mathematics the teacher normally poses the question or tells what to investigate, or her insecurity about how to proceed when there is no teacher guidance. The cheerful answer by the teacher indicated the teacher's aim not to steer students' PP&PS interests, in particular so that the students would set a focus themselves and develop their own PP interests. In turn, the students pursued different PP interests: for example, to inquire into the unsolvable instance (as Anna had suggested in the first place: to focus on the case of Melbourne) or to add a street to the map, which came along with enjoyment by Anna and Linda when thinking about how to rebuild the map of the city. Their PP activities were entangled with enjoyment and happiness. When it turned out that Linda had made a mistake in her solution and felt uncomfortable about it, Anna showed empathy and emotional support ("Yeah, I also, like when I did it, I was like: Oh I forgot this street! (…) And I'm like, damn it."), which was supported by David, and which encouraged Linda to stay in the game and to stay active in the group's inquiry activities. The group work went along for some minutes with different within-solution PP interests, the students playing around with different ideas about what to inquire into. Then, one student, Jakob, brought forward the idea to inquire into the number of roads at each intersection (vertex degree). The moment when Jakob brought forward this PP interest was, in our interpretation, the emotional peak of the collaborative PP&PS in this meeting: All students got excited ("Oh!," "Mhhhhhh! Oh, yeah, yeah!," "Oh yeah, that's true, that's true!," "You are great!"). In our interpretation, the students had something like an Aha! moment, which comes along with sudden certainty and affective responses (Liljedahl, 2013). David then gave an impulse to develop this idea even further by generalizing the PP focus to the parity of the number of roads at each intersection (even/odd), which was similar to the PP interest in meeting 2 (to generalize the problem to even/odd numbers). The PP interest to generalize the problem was again appreciated by the others and accordingly pursued in students' within-solution PP activities.
We identified two themes in students' group work, similar to the second meeting: The first theme again touched on students' efforts to comfort each other when they reported failure experiences from individual work. They showed empathy and bonding efforts, which created a safe environment for the students. The second theme was students' inquiry drive together with PP interest. Even more than in the preceding meeting, students' group work was driven by their interest in within-solution PP (e.g., Cifarelli & Sevim, 2015): The students changed perspectives, set their own goals, and pursued them. In doing so, they picked up each other's ideas and developed them further. This can be related to "problem posing-solving chains" (Cai & Cifarelli, 2005, p. 62), where the students pose new problems, pursue them, find new angles, etc.
Student reflection: At the end of this meeting, the students reflected, among other things, about how the activities in the CMMs differed from their regular teaching at school, which hints at their beliefs about mathematics PS and the role of PP. All students in the group explained how their prior activities at school differed from those in the CMMs. Whereas prior school activities in their view appeared to be largely product focused, the students described that their activities in the CMMs also involved "finding things," "discovering the problem," or "creating our new problems," which may be related to within-solution PP in inquiry-based approaches (e.g., Cifarelli & Sevim, 2015). This was related to their beliefs about mathematics: They perceived mathematics in the CMMs as "general, it's way more like continuous" (Anna) (see Fig. 6).
Further, the students emphasized their fun and enjoyment (e.g., "It's getting more and more fun."). The students got acquainted with working in an inquiry-based way and with posing problems. They further mentioned their feeling of each making an individual contribution ("adding" to the group), and this appeared to influence their self-efficacy positively. Together with their perception of "unusually good team work" (Jakob), this indicates their perceived collective efficacy (Pantziara, 2016).
Fig. 6 Affective field during the group work (top) and after the 3rd meeting (bottom) (blue entries relate to Anna)
Taking together the data from group work and student reflection in meeting 3 (snapshots in Fig. 6), we see that Anna's and the group's increased interest and engagement in within-solution PP went along with positive emotions, such as fun, enjoyment, and excitement. These attractions to PP were entangled with their beliefs about mathematics: Whereas Anna had perceived mathematics as finding correct answers in her previous schooling (Fig. 3), she now described mathematics as finding and discovering things and as continuous (Fig. 6).
Overall changes in the PP&PS affective field
We use data from the ninth meeting, where David and Anna reflected on how they developed during the CMMs, to illustrate the overall changes in Anna's affective field: In their descriptions, they explicitly contrasted their views prior to the project (Fig. 3) with their current views (Fig. 7).
Changes in Anna's affective field are captured pointedly in her own words: In the beginning it was really hard for me at least to let go of-'cause I am always afraid of being wrong. But when you're trying to be creative you need to try different ways (pointing into different directions) and realize, okay that didn't work, next thing, that didn't work, and I mean just accepting that you were wrong, that's fine, and you move on, has been, like: What? I'm wrong! I'm so wrong! And then I get so stressed about it, instead of, okay, that's fine, move on. And that also comes from if you think that there is one way of doing it, and then you realize that that way isn't the right way, then you have nothing else to do, besides being scared that it didn't work. I don't know. But then, eventually, when you get to practice being wrong, and here (referring to the project) in other people's points of view there are other options, then it got easier and easier. But it was really hard 'cause it's so easy to hold on to this way of thinking, this way of feeling about it. And then, as it went along, it got better, 'cause you realize you could do it… if you only tried a little bit. It's so emotional… (chuckling).
Fig. 7 Snapshot of the affective field that emerged in the project (blue entries relate to Anna)
Comparing Anna's affective field at this point in time to the one in the beginning of the project (Fig. 3), the positive evolvement is apparent. Anna emphasizes trying new things, trying different things, and looking into different directions, which all relates to within-solution PP. Many factors in her affective field have changed: Her attitudes shifted (from, e.g., "Mathematics is boring" to "Trying new things makes PS more free and is fun"), together with her beliefs about mathematics. This is again connected to her values (with a focus on outcomes initially, turning to valuing the process of trying new things and the acceptance of being wrong). Even though we refrain from making causal claims (about within-solution PP being causal for the changes in her affective field), we see that the increased interest in within-solution PP and positive dynamics in Anna's affective field co-occur.
Discussion
In this article, we proposed the notion of affective field to account for a person's various affective factors (emotions, attitudes, etc.) in their intraplay. In a case study, we investigated the affective field of Anna, an upper secondary school student, in its social and dynamic nature. We used data from an extracurricular, inquiry-oriented collaborative problem posing and problem solving (PP&PS) program, which took place as a 1-year project with four upper secondary school students in Sweden (aged 16-18) and asked: In what ways does an affective field of a student engaging in PP&PS evolve, and what may be explanations for this evolvement?
In short, we focused on Anna, who initially wanted to quit the collaborative meetings but became an active and positive participant. In line with similar studies that focused on affect as a broad domain rather than on particular constructs, we saw how many related affective factors were at stake, for example, emotions, attitudes, self-efficacy, and interest. Moreover, what we have come to characterize as an affective field, somewhat similar to a magnetic field with attractions and repulsions, proved to be dynamic, also in line with studies that were interested in the flux of affective factors rather than in the influence of one construct on another.
Our case study illustrated the dynamics of Anna's affective field over the course of the school year. This concerned many affective factors in their intraplay. In the beginning, Anna's affective field was characterized by repulsions: negative emotions and low self-efficacy, which went along with beliefs about mathematics, for instance, as being a means to an end, with extrinsic motivation (doing mathematics to get grades), and with negative attitudes ("maths is boring," a "negative attitude" towards mathematics). However, over the course of the project, the students became increasingly interested in posing problems themselves, connecting problems, generalizing them, etc. Their attractions for PP went along with beliefs about mathematics as being continuous and open, with appreciation for generalization, with an open-minded attitude, and with positive emotions such as fun and excitement.
Over time, many factors were driving forces for students' engagement in the PP&PS process and for the evolvement of Anna's affective field. Our analyses illustrated how the group established a safe atmosphere through students' appreciation for and interest in each other's ideas and approaches, their positive handling of failure, and their efforts to comfort each other, to bond, and to show empathy. This set the stage for the students to get acquainted with inquiry-based PP&PS and within-solution PP (Cifarelli & Sevim, 2015), in which the group developed an increasing interest over the course of the project. In their collaborative work, they set their own goals, modified the problems, and inquired in different directions. They spontaneously aimed at generalizing problems and proving their discoveries, and thus posed themselves new problems. The PP&PS, along with increasingly positive affect, mostly seemed to be self-driving: The joy of solving a simple problem led them to pose a more difficult, generalized problem. When they got stuck, they helped each other and aimed to combine their approaches. Affect and cognition seemed part and parcel of the same process. The PP experiences appeared to be highly affective for the students; they included Aha! moments (Liljedahl, 2013) and were related to belief change about mathematical PS.
In the reflective phases, the students articulated, among other things, how they felt. Anna repeatedly explained how her history of being used to one solution to mathematics problems (her "way of thinking" and "feeling" about it) had hindered her when she was confronted with the first problem in the inquiry-oriented meetings, which she was not able to solve. Her own explanation made sense to us: Failure decreased her motivation and self-efficacy.
Of course, the dynamics of Anna's affective field need to be regarded against the backdrop of the setting in which they took place. We think that the informal and voluntary nature of the ("out-of-regular-school") setting may have had a significant impact on the data, both in terms of the positive nature of Anna's affective field and in terms of the students' inclination to engage in PP while working on PS tasks. For the students, within-solution PP was different from regular schooling, where "[u]sually, like, when you're solving a problem in the math book, it's like find the right answer, check it in facit [i.e., sample solution at the end of the book], be done with it" (Anna) and where "in the math book, in the end, x equals a nice number. If it doesn't, something's wrong" (Anna). In the out-of-regular-school context, there was no grading, and the students did not even go to the same school classes, so the meetings may have offered the opportunity for a "fresh start," which may have facilitated the establishment of new interests and norms.
It is a challenge to identify all affective factors involved in students' activities, as educational science and psychology tend to focus on operational definitions and on studying affective factors in isolation or in small sets. Taking seriously a fluid ontology thus comes with methodological challenges (Sfard, 2008). Our case study illustrated how affective factors can be studied in an affective field (e.g., beliefs, emotions, interest, self-efficacy). Our analyses also illustrated how students' affective field was social: Affect was not separated between the persons but rather "contagious." The group dynamics were essential for Anna to overcome her anxiety, to feel she was contributing, and to increase her self-efficacy.
We think that future research could aim to study the social nature of the affective field more explicitly: To take the whole group's various affective factors as one affective field and to investigate interactions between affective factors within the group in their entanglement and intraplay. Yet, our project's original aim was not to investigate affective fields, and it was only in hindsight that we realized how affect was a crucial aspect in students' PP&PS. Therefore, the students were not explicitly encouraged to reflect on their affect by the teacher; they only did so self-driven, with Anna being open and telling more about her feelings than others. We believe that future research with a focus on groups' affective fields and even richer data will provide valuable findings on the contagion of affect within groups and also be able to investigate affective factors and their intraplay in more detail. Also, we feel that future research can further deepen the analysis of students' affect. For example, Roth and Walshaw's (2019) analysis of pitch of voice could offer further indicators of affect. Also, the analysis of body movement (de Freitas et al., 2019), including the analysis of gestures, facial expressions, and eye movements, appears to offer valuable further insights. We think that micro-genetic multimodal data can be a rich basis to observe the multitude of affective factors during PP&PS. Further, we recommend that future research focus even more on the entanglement of PP and the affective field: In our study, PP emerged self-driven based on open mathematical problems, where the students inquired into different directions, tried new things, etc. Future projects could put even more emphasis on PP than ours did and could help to understand the dynamics of affective fields connected to PP even more deeply.
Yet, we think that our study did make a step to understand better the entanglement of affect and PP&PS. Both the theorization of students' affective field and the empirical insights from the case study help to describe and explain the intricate relations between the various affective factors involved. We are well aware that through our choices, for instance, through the set of affective factors we coded in the data and through the definitions we made, we restricted our view (like with all choices) and maybe could have got further or deeper insights with other choices. Thus, we hope that our study can be a springboard for other researchers to take a holistic stance towards student affect and to develop our ideas further.
Funding Information Open Access funding provided by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 12,139.4 | 2020-09-02T00:00:00.000 | [
"Mathematics",
"Education"
] |
Estimation of pulsatile flow and differential pressure based on multi-layer perceptron using an axial flow blood pump
This study proposes a non-invasive method for estimating the pulsatile flow and pressure difference of a blood pump, using an estimation model based on a multi-layer perceptron to calculate the flow and pressure difference under pulsating conditions. The model takes 11 parameters, such as the rotational speed, power, and pulsation waveform of the blood pump, as input, and uses the pressure difference and flow as output. The experimental results on 119,590 samples show that the flow error of the training set of the estimation model is 0.14 l/min and the pressure difference error is 7.50 mmHg; the flow error of the test set is 0.14 l/min and the pressure difference error is 7.50 mmHg. Compared with traditional flow and pressure prediction methods, this method has higher precision, providing a technical foundation for accurately estimating the flow and pressure difference of a blood pump under pulsating conditions.
Introduction
Implanting a blood pump is the best treatment option for patients with heart failure in the absence of a cardiac donor. The output flow and pressure are important characteristic parameters of the design and operation of a blood pump, and their measurement accuracy and method are directly related to the practical performance of the blood pump in animal experiments and clinical practice [1,2]. The measurement of blood pump output flow and pressure can be divided into direct and indirect measurement; direct measurement has shortcomings such as implantation difficulty and sensor malfunction. Therefore, many research teams have investigated non-invasive flow and pressure-difference measurement for blood pumps.
The studies by Funakubo et al. [3] and Ayre et al. [4] have shown that using the pump's characteristic curves to estimate the average flow and pressure difference can yield accurate results under steady-state conditions. Current clinical devices such as the HeartMate III operate with pulsating flow, and studies have shown that this approach can reduce thrombus formation compared to blood pumps operating under steady-state conditions [5]. However, because the estimation of blood pump flow and pressure difference under pulsating conditions must account for numerous factors, there are few studies on this aspect. Tsukiya et al. [6] used the power and rotational speed to estimate the dimensionless instantaneous flow. Karantonis et al. [7] used an autoregressive with exogenous input model to accurately estimate the pulsating flow and differential pressure, but they did not consider the influence of different pulsation waveforms on the estimates.
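For readers unfamiliar with the traditional characteristic-curve approach, the following is a minimal sketch of steady-state flow estimation by interpolating calibration curves; all numbers below are invented for illustration and are not taken from the cited studies.

```python
import numpy as np

# Hypothetical calibration data (invented for illustration): at each fixed
# speed, measured (power, flow) pairs define one characteristic curve.
speeds = np.array([8000.0, 9000.0, 10000.0])      # rpm
power_grid = np.linspace(2.0, 10.0, 9)            # W
flow_curves = np.array([                          # l/min, one row per speed
    np.linspace(0.5, 4.0, 9),
    np.linspace(1.0, 5.0, 9),
    np.linspace(1.5, 6.0, 9),
])

def estimate_flow(speed, power):
    """Steady-state flow estimate by interpolating the characteristic curves:
    first along power within each curve, then across speed."""
    flows_at_power = [np.interp(power, power_grid, row) for row in flow_curves]
    return float(np.interp(speed, speeds, flows_at_power))

print(estimate_flow(9500.0, 6.0))  # interpolated steady-state flow in l/min
```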
In this study, a new method for estimating the pulsatile flow and pressure difference of a blood pump using a multi-layer perceptron model is proposed. First, the principle of the method and the selection of related parameters are introduced, and the proposed method is then verified using an in vitro simulated circulation loop. Finally, to verify the reliability of the method, the estimation error of the traditional method is compared with that of the proposed method.
Principle of the estimation model
The blood pump converts the input electrical energy into electromagnetic energy, which drives the impeller to rotate and generate mechanical energy; the rotating impeller in turn produces a flow field change that generates hydraulic energy. To know the output flow and pressure difference of the blood pump, we need to establish the relationship between the output hydraulic parameters and the input electrical parameters. A neural network can approximate arbitrary functions to arbitrary precision. Based on this idea, we use a multi-layer perceptron to model the experimental data of the blood pump and learn the relationship between its input and output parameters by training on sample data. With this model, given the input of the blood pump, the output flow rate and differential pressure under pulsating conditions can be obtained.
The multi-layer perceptron consists of an input layer, several hidden layers, and an output layer. The first layer is the input layer, the last layer is the output layer, and the middle layers are the hidden layers. Each layer is composed of nodes; each node is connected to all nodes in the adjacent layers, and the output of a node in one layer serves as an input to the nodes in the next layer. The ultimate training goal of a multi-layer perceptron is to find the optimal combination of connection weights and offsets in the network, which allows it to accurately capture the relationship between the input vectors and the output vectors. The structure of the multi-layer perceptron used in this paper is shown in Fig. 1. Its input layer is composed of 11 parameters, its output layer is composed of 2 parameters (flow and differential pressure), and it includes 3 hidden layers.
As described in Fig. 1, the model has five layers. The input layer is recorded as l_1, the three hidden layers as l_2, l_3, and l_4, and the output layer as l_5. The output of node j in layer l is

    y_j^l = f(u_j^l),

where f(·) is the activation function, y_j^l is the output of node j of layer l, and u_j^l is the input of the node. The formula for u_j^l is

    u_j^l = Σ_i w_ji^l y_i^(l−1) + b_j^l,

where w_ji^l is the weight from node i in layer l − 1 to node j in layer l and b_j^l is the offset of node j in layer l. Commonly used activation functions include the sigmoid function, tanh function, rectified linear unit (ReLU) function, and swish function. In this paper, the tanh and ReLU functions are selected as the activation functions, respectively. Their mathematical expressions are

    tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)),    ReLU(x) = max(0, x).
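To make the forward pass concrete, here is a minimal NumPy sketch of these equations; the hidden-layer widths and the linear output layer are illustrative assumptions, not values from the paper.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def mlp_forward(x, weights, biases, activations):
        """Forward pass through a multi-layer perceptron.

        Each weight matrix W has shape (n_out, n_in) so that u = W @ y + b
        matches u_j = sum_i w_ji * y_i + b_j from the text.
        """
        y = x
        for W, b, f in zip(weights, biases, activations):
            u = W @ y + b   # node inputs for this layer
            y = f(u)        # node outputs after activation
        return y

    # Toy dimensions only: 11 inputs -> 3 hidden layers -> 2 outputs
    # (hidden-layer widths are illustrative; Table 1 gives the real ones).
    rng = np.random.default_rng(0)
    sizes = [11, 32, 32, 32, 2]
    weights = [rng.normal(size=(m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]
    biases = [np.zeros(m) for m in sizes[1:]]
    activations = [relu, relu, relu, lambda u: u]  # linear output layer assumed

    y_out = mlp_forward(rng.normal(size=11), weights, biases, activations)
    print(y_out)  # [flow, differential pressure] in model units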
Selection of input elements in the estimation model
The flow and pressure difference of the blood pump under pulsating conditions are affected by many factors, so it is important to select the parameters with an important influence on flow and differential pressure estimation as the input elements of the multi-layer perceptron. The parameters affecting the flow and pressure difference mainly include the input power and rotational speed. When the blood pump operates in a pulsating manner, its power and rotational speed are constantly changing. Therefore, five parameters are added to the input of the model: the rotational speed change rate, power change rate, power/rotational speed, power change rate/rotational speed, and power change rate/rotational speed change rate. At the same time, different speed pulsation waveforms yield different flow and pressure difference outputs; the parameters related to the speed pulsation waveform mainly include the centre rotational speed, amplitude, period, and waveform [8,9]. In summary, the following 11 parameters are used as input elements of the multi-layer perceptron: rotational speed, power, rotational speed change rate, power change rate, power/rotational speed, power change rate/rotational speed, power change rate/rotational speed change rate, centre speed, waveform, amplitude, and period. A sketch of how the derived quantities can be computed from the logged signals is given below.
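As an illustration, the derived input features can be computed from time-stamped speed and power logs roughly as follows; the column names and the finite-difference derivative are assumptions, not taken from the paper.

    import pandas as pd

    def add_derived_features(df, dt):
        """Add the five derived MLP inputs to a log with 'speed' (r/min)
        and 'power' (W) columns sampled every dt seconds."""
        df = df.copy()
        df["speed_rate"] = df["speed"].diff() / dt   # rotational speed change rate
        df["power_rate"] = df["power"].diff() / dt   # power change rate
        df["power_per_speed"] = df["power"] / df["speed"]
        df["power_rate_per_speed"] = df["power_rate"] / df["speed"]
        df["power_rate_per_speed_rate"] = df["power_rate"] / df["speed_rate"]
        return df.dropna()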
Modelling and training of estimation model
The network structure of the model for estimating the flow and pressure difference of the blood pump under pulsating conditions is shown in Table 1. The hidden layers use dropout with a dropout ratio of 0.5.
In this paper, TensorFlow [10,11] and Keras are used to build a multi-layer perceptron model for estimating the flow and pressure difference. The training set data are used to train the model, and the test set data are used to test the training results. The training parameters are batch_size = 350 and epoch = 1000, and the Adam optimiser is used to optimise the model. To measure how well the model is trained, a loss function is needed; in this paper, we use the mean-square error (MSE) as the loss function. Its mathematical expression is

    MSE = (1/n) Σ_{i=1}^{n} (Ŷ_i − Y_i)²,

where Ŷ_i is the predicted value of sample i and Y_i is its true value. The larger the function value, the worse the prediction performance of the model.
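A minimal Keras sketch of this setup follows; the hidden-layer widths are assumptions (the actual structure is given in Table 1), while the 11-input/2-output layout, the ReLU activation, dropout ratio 0.5, Adam optimiser, MSE loss, batch size 350, and 1000 epochs come from the text.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_model(hidden_units=(64, 64, 64)):
        """MLP with 11 inputs, three hidden layers, and 2 outputs."""
        model = keras.Sequential()
        model.add(keras.Input(shape=(11,)))
        for units in hidden_units:               # three hidden layers per Fig. 1
            model.add(layers.Dense(units, activation="relu"))
            model.add(layers.Dropout(0.5))       # dropout ratio 0.5
        model.add(layers.Dense(2))               # outputs: flow, pressure difference
        model.compile(optimizer="adam", loss="mse")
        return model

    model = build_model()
    # model.fit(x_train, y_train, batch_size=350, epochs=1000,
    #           validation_data=(x_test, y_test))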
Traditional estimation methods
The traditional method of predicting the flow and pressure difference of a blood pump is mainly to extract signal characteristics from the blood pump, including the speed, current, and power of the pump, and then estimate the output flow and the pressure difference between the pump inlet and outlet. Malagutti et al. [12] proposed a flow and differential pressure estimation model based on power and speed, in which the predicted flow rate Q_est and the predicted pressure difference P_est are expressed as functions of the rotational speed ω and the input power p of the blood pump, with fitted coefficients multiplying the ω and p terms. The traditional prediction model is fitted with experimental data under pulsating conditions; the results are shown in Tables 2 and 3.
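As a concrete illustration of fitting such a model, the sketch below uses ordinary least squares; the basis functions (a constant, ω, p, and p/ω) are an assumed stand-in, since the exact form of Malagutti et al.'s model is not reproduced here.

    import numpy as np

    def fit_traditional_model(omega, p, target):
        """Least-squares fit of target (flow or pressure difference)
        against an assumed basis in speed omega and power p."""
        X = np.column_stack([np.ones_like(omega), omega, p, p / omega])
        coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
        return coeffs

    def predict_traditional(coeffs, omega, p):
        X = np.column_stack([np.ones_like(omega), omega, p, p / omega])
        return X @ coeffs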
Experimental device
The experimental system must support the flow and pressure difference estimation experiments of the axial flow blood pump under pulsating conditions, and must collect experimental data such as speed, input voltage, current, power, output flow, and pressure difference in real time. The overall layout of the test bed is shown in Fig. 2. In Fig. 2, 1 is the system blood circulation line; 2 is the constant-temperature water bath, which keeps the circulating blood at normal physiological temperature so that temperature effects such as haemolysis do not compromise the accuracy of the experimental results; 3 is a damping valve used to simulate different resistance conditions; 4 is the ultrasonic flow sensor, installed with an upstream straight-pipe section longer than five pipe diameters and a downstream section longer than three pipe diameters so that the circulating flow of the system is measured accurately; 5 and 6 are high-precision pressure sensors, which use diffused silicon piezoresistive elements to measure the pressure difference between the inlet and outlet; 7 is the exhaust valve; 8 is the axial flow blood pump; 9 is the driving coil; and 10 is the driving circuit board.
The axial flow blood pump used in this experiment was designed by Central South University. It consists of a pre-turning vane, an impeller, and a rear guide vane. The design flow is 5 l/min, the design pressure difference is 100 mmHg, the design speed is 8000 r/min, and the material is titanium alloy.
Experimental process
Experiments were carried out at three centre speeds, with two pulsating waveforms (square wave and sine wave) and pulsation frequencies of 0.5, 1.0, and 2.0 Hz. The experimental groups are shown in Table 4. Data were received through the serial port, and a total of 119,590 samples were collected, including target speed, actual speed, voltage, current, flow, inlet pressure, and outlet pressure. Through data processing, the values of the rotational speed change rate, power change rate, power/speed, power change rate/rotational speed, and power change rate/rotational speed change rate were obtained.
The collected 119,590 samples were divided into training and test sets: 60 per cent of the samples (71,754) formed the training set and 40 per cent (47,836) formed the test set. Pandas is used to clean the training set data. The main steps are to read the data, randomly shuffle it, standardise it to zero mean and unit variance, and store it in an HDF5 file. The Python language is used to build the multi-layer perceptron model described above, and the file is then imported into the programme. Running the programme produces the results and the model error of the method.
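A rough sketch of this preprocessing with pandas follows; the file and column names are hypothetical, while the shuffle, the 60/40 split, the standardisation, and the HDF5 storage follow the description above. Standardising the test set with the training-set statistics is an assumed (standard-practice) detail.

    import pandas as pd

    df = pd.read_csv("pump_samples.csv")            # hypothetical raw log
    df = df.sample(frac=1.0, random_state=0)        # randomly shuffle the rows

    n_train = int(0.6 * len(df))                    # 60/40 train/test split
    train, test = df.iloc[:n_train], df.iloc[n_train:]

    mean, std = train.mean(), train.std()           # standardise: zero mean, unit variance
    train = (train - mean) / std
    test = (test - mean) / std

    train.to_hdf("train.h5", key="train", mode="w") # store in an HDF5 file
    test.to_hdf("test.h5", key="test", mode="a")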
Results and discussion
Based on the training results of the model, flow and pressure difference estimation experiments were carried out for the two pulsating waveforms, a sine wave and a square wave. The results are shown in Figs. 3 and 4.
The flow estimation error of the model on the training set is 0.14 l/min and the pressure difference estimation error is 7.50 mmHg; on the test set, the flow estimation error is 0.14 l/min and the differential pressure estimation error is 7.50 mmHg.
The comparison between the multi-layer perceptron estimation method and the traditional one is shown in Table 5.
In this study, a multi-layer perceptron model for pulsatile flow and pressure difference estimation was designed and validated. The proposed model produced superior results in the experiments, with an error markedly smaller than that of the traditional estimation method, showing that the prediction method has high accuracy. The estimation for the sine wave was slightly better than that for the square wave, which may be due to the larger speed change rate of the square wave under pulsating conditions. Accurately estimating the pulsatile flow and pressure difference helps optimise the closed-loop control system of the pump.
Conclusion
In this paper, a new method for estimating the flow and pressure difference of an axial flow blood pump using a multi-layer perceptron model was proposed. The feasibility of the method was demonstrated both theoretically and experimentally, and satisfactory results were obtained. Compared with the traditional estimation method, the model error is smaller and the accuracy is higher. Owing to limitations of the experimental conditions, this method has not yet been tested on a centrifugal blood pump; subsequent experiments will apply the method to a centrifugal blood pump.
"Engineering"
] |
Mapping between the OBO and OWL ontology languages
Background: Ontologies are commonly used in biomedicine to organize concepts describing domains such as anatomies, environments, experiments, taxonomies, etc. NCBO BioPortal currently hosts about 180 different biomedical ontologies, mainly expressed in either the Open Biomedical Ontology (OBO) format or the Web Ontology Language (OWL). OBO emerged from the Gene Ontology and supports most of the biomedical ontology content. In comparison, OWL is a Semantic Web language supported by the World Wide Web Consortium, together with integral query languages, rule languages, and a distributed infrastructure for information interchange. These features are highly desirable for OBO content as well, and a convenient method for leveraging them for OBO ontologies is to transform OBO ontologies to OWL. Results: We have developed a methodology for translating OBO ontologies to OWL using the organization of the Semantic Web itself to guide the work. The approach reveals that the constructs of OBO can be grouped together to form a similar layer cake, allowing us to decompose the problem into two parts. Most OBO constructs have an easy and obvious equivalence to a construct in OWL; a small subset of OBO constructs requires deeper consideration. We have defined transformations for all constructs in an effort to foster a standard common mapping between OBO and OWL. Our mapping produces OWL-DL, a Description Logics based subset of OWL with desirable computational properties for efficiency and correctness. Our Java implementation of the mapping is part of the official Gene Ontology project source. Conclusions: Our transformation system provides a lossless roundtrip mapping for OBO ontologies, i.e. an OBO ontology may be translated to OWL and back without loss of knowledge. In addition, it provides a roadmap for bridging the gap between the two ontology languages in order to enable the use of ontology content in a language-independent manner.
Background
Two ontology based systems, the Open Biomedical Ontologies (OBO) [1] and the Semantic Web [2,3], each associated with a large community are being developed independently. Ontologies in biomedicine are used for organizing biological concepts and representing relationships among them. Major results include the Gene Ontology (GO) [4] and the Zebrafish Anatomy Ontology (ZFA) [5]. OBO format, which originated with GO, continues to evolve in support of the needs of the biomedical community. Over 100 OBO ontologies are available on the NCBO BioPortal [6]. Thus OBO is the backbone for ontology tools in this domain.
The Semantic Web is an evolving extension of the World Wide Web based on ontologies. Intended to facilitate search and information integration, and built on the foundations of artificial intelligence, the Semantic Web envisions the Web becoming a global knowledgebase through distributed development of ontologies using formally defined semantics, global identifiers and expressive languages for defining rules and queries on ontologies. The Semantic Web has been organized in the form of a layer cake where each layer provides a representation language of increasing expressive power (see Figure 1). The Web Ontology Language (OWL) [7], a component of the Semantic Web, provides the capability of expressing ontologies in multiple dialects. OWL-DL, a Description Logics based dialect, has become its language of choice due to the availability of reasoning tools. In the biomedical domain, some important ontologies such as NCI Thesaurus [8] and BioPAX [9] have been modelled in OWL.
Given the volume and growth of OBO content, integrating the features promised by Semantic Web technologies with OBO content would provide significant benefit to the biomedical community. One way to provide these features is to create a system that allows back and forth translation of OBO ontologies between the two systems. This paper describes precisely such a round-trip and the methodology that was followed in the course of its creation. The results in this paper represent a community effort to create a standard transformation mapping, initiated by the OBO Foundry. One goal was to reconcile a number of independent efforts. In addition to this paper, a summary of this collaboration is in additional file 1, which lists the transformation choices of the respective contributors and a mediated set of transforms, called the 'common mapping'. Supplemental material on the mapping is also available [10]. The final results produce OWL-DL, as validated by the WonderWeb OWL Ontology Validator [11]. A full implementation was done in Java, and is a part of the Gene Ontology project source [12], hosted at sourceforge.net. It provides a lossless roundtrip mapping for OBO ontologies, i.e. ontologies that are originally in OBO can be translated into OWL and back into OBO.

Figure 1. Layer cakes for OBO and the Semantic Web: a layer cake for OBO, with some examples and a comparison with the Semantic Web layers; the mapping between the two layer cakes is generally quite straightforward, which makes it easy to understand the constructs in OBO and their mappings in OWL.
A basis for reconciling the efforts was an observation that the Semantic Web layer cake itself could serve as a guideline for studying the representation of ontologies in OBO and creating the transformation system. We found that most of OBO can be decomposed into layers with direct correspondence to the Semantic Web layer cake. Compared to an approach that deals with each construct individually, we found that this method gave a better organization to our work and enabled us to identify matches and mismatches between the two languages more efficiently. Discussions became a two-step process where it was first determined if an OBO construct had a clear correspondence to a Semantic Web layer, with respect to its intended expressive power, and if so, to which level it belonged. It followed that constructs that fell into the same equivalence class should be handled similarly. Deep discussion could be limited to those OBO constructs that could not be easily situated in this structure. These include: (1) local identifiers in OBO compared to global identifiers in OWL, (2) various kinds of synonym elements in OBO, and (3) defining subsets of an OBO ontology. Even these constructs can be expressed in OWL-DL, albeit not by obvious construct substitution. We conclude that OWL-DL is strictly more expressive than OBO.
An additional consequence of this work is that it effectively defines a subset of OWL-DL that captures the expressive power of OBO, and can be seen as a way of introducing formal semantics to OBO. We include a discussion of how OWL tools can be restricted to this subset so as to assure that ontologies developed with OWL tools may be translated to OBO. Similarly, and perhaps more importantly, we discuss how to assure that OWL tools do not break OBO ontologies that have been translated to OWL, so that, after using OWL tools, an updated ontology may be returned to OBO form. The exception handling in the Java-based OWL-to-OBO translator was developed such that the translator itself serves double duty as a validator for this subset of OWL. At least two biomedical ontology tools, OBO-Edit [13] and Morphster [14], already exploit this translator.
OWL and OBO continue to evolve. OWL 2 [15] has recently been ratified by the World Wide Web Consortium, and a new version of OBO (1.3) is under active development [16]. Given that the older versions of these languages still support most ontologies, we have focused on those versions. However, later in the paper we provide a discussion on the new versions and their impact on the transformation system.
Related work
Each of the authors of this paper, as well as Mikel Egana, Erick Antezana, and the LexBio group at Mayo Clinic, contributed earlier independent efforts at creating a transformation system [17][18][19]. The results of these efforts are documented in our spreadsheet as well. No single effort survived in its entirety in the common mapping.
Another independent and important effort was that of Golbreich et al [20,21] (hereafter Golbreich) that was not included in the standardized mappings. Golbreich developed a BNF grammar for OBO syntax, as well as a mapping between OBO and OWL 1.1 (now known as OWL 2). The differences between the Golbreich work and the common mapping effort presented in this paper comprise a difference of methodology and practical focus. Golbreich's work laid out valuable syntactic groundwork to formalize the semantics of a large subset of OBO. Much like most of the other first efforts, a complete transformation system was not specified. This particular effort deferred resolving OBO annotations, synonyms, subsets, and deprecation tags. Golbreich's work also did not address the mapping of local identifiers in OBO into global identifiers. However, the transformations that are specified by Golbreich are largely consistent with the common mappings.
Definitions
Ontology
In knowledge-based systems, an ontology is a vocabulary of concepts and describable relationships among them [22]. Ontologies are extensively used in areas like artificial intelligence [23,24], the Semantic Web [7,[25][26][27] and biology [4][5][6] as a form of knowledge representation. They generally describe individual objects (or instances), classes of objects, attributes, relationship types, and relationships among classes and objects within a domain.
OBO ontologies
An ontology in OBO consists of two parts: the first part is the header, which contains tag-value pairs describing the ontology, and the other part contains the domain knowledge described using term and typedef (more commonly known as relationship type) stanzas [28]. A stanza generally defines a concept (term or typedef) and contains a set of tag-value pairs to describe it. The terms and typedefs defined in OBO ontologies are assigned local IDs and namespaces.
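For illustration, a small hypothetical OBO file (a header followed by one term stanza; the identifiers and names are invented, not drawn from any actual ontology) might look like this:

    format-version: 1.2
    default-namespace: example_ontology

    [Term]
    id: EX:0000001
    name: example concept
    def: "A placeholder concept used for illustration." []
    is_a: EX:0000000 ! parent concept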
The OBO format is human friendly, and useful GUI-based tools like OBO-Edit are available for building ontologies in it [13]. We deal with OBO version 1.2, and refer to it as simply OBO in this paper.
Semantic Web ontologies
Semantic Web ontologies give well-defined meaning to content on the World Wide Web and enable computers and people to work in cooperation. Some key technologies that form the Semantic Web are: 1. The Resource Description Framework (RDF) [29], which expresses the meaning of data using triples; a triple is a binary predicate that defines a relationship between two entities. 2. Universal Resource Identifiers (URIs), which give each entity a globally unique identifier. 3. The ontology languages RDF Schema (RDF-S) and the Web Ontology Language (OWL). RDF-S allows the description of valid classes and relationship types for an application, and some properties such as subclasses, domains, and ranges; OWL further allows describing constraints on instances and provides both ontology-level and concept-level annotations, set combinations, equivalences, cardinalities, deprecated content, etc.
A common syntax for representing ontologies on the Semantic Web is RDF/XML. OWL is based on RDF and RDF-S, and on occasion, we use OWL as an encompassing term for all these languages.
OBO and the Semantic Web layers
The Semantic Web was envisioned as an expressive hierarchy that is often illustrated as a layer cake [3] (see Figure 1c). At the beginning of this research it was our conjecture that the precise organization of the hierarchy transcends the Semantic Web and could be used, retroactively, to formalize the structure of other data and concept modelling systems. Thus, as a first step towards the creation of a transformation mechanism between OBO and OWL, we created a layer cake for OBO whose structure mirrored that of the Semantic Web layer cake. This allowed us to identify straightforward mappings as well as the cases that do not match as well. We term this the 'two layer cakes' methodology. This methodology has also been successfully applied towards the transformation of SQL databases into OWL ontologies [30].
OBO layer cake
We methodically examined each of the constructs of OBO. We find that most of OBO can be decomposed into layers with direct correspondence to the Semantic Web: OBO Core, OBO Vocabulary, and OBO Ontology Extensions (see Figure 1a, 1b).
1. OBO Core: In OBO, a concept can either be a term (class) or a typedef (relationship type). OBO Core deals with assigning IDs and ID spaces to concepts, and representing relationships as triples. 2. OBO Vocabulary: OBO Vocabulary allows annotating concepts with metadata like names and comments. It also supports describing sub-class and sub-property relationship types, as well as the domains and ranges of typedefs. 3. OBO Ontology Extensions: In addition to concept-level tags, OBO Ontology Extensions (OBO-OE) layer defines tags for expressing metadata on the entire ontology as well. It also allows defining synonyms, equivalences and deprecation of OBO concepts. OBO-OE layer can also express specific properties of OBO terms (e.g. set combinations, disjoints etc.), and typedefs (e.g. transitivity, uniqueness, symmetry, cardinalities). Table 1 provides assignments of OBO constructs to appropriate layers in the OBO layer cake.
Since we mostly have an exact mapping of layers between the two languages (see Figure 1), deciding which constructs to use for each kind of transformation is simplified. OBO Core tags can be transformed using RDF. OBO Vocabulary tags require using RDF Schema constructs. OBO Ontology Extensions tags require constructs defined in OWL.
Incompatibilities between OBO and OWL
We classify incompatibilities between the two languages into one of the two categories. First, in certain cases, the semantic equivalent of a construct in one language is missing from the other language. Second, sometimes the semantics of constructs in OBO are not sufficiently well-defined to map to a formally defined OWL construct, which forces us to define new vocabulary in OWL in order to allow the lossless transformation.
1. Entities in OWL have globally unique identifiers (URIs). On the other hand, OBO allows local identifiers. Transforming OBO into OWL requires transforming the local identifiers in an OBO ontology into URIs. Also, in order to make the roundtrip possible, it is necessary to extract the local identifier back from the URI. 2. OBO language has the 'subset' construct, which does not have an equivalent construct in OWL. An OBO subset is a collection of terms only, and is defined as a part of an ontology. An ontology can contain multiple subsets and each term can be a part of multiple subsets. In order to make the transformation possible, we need to define an OWL construct equivalent to OBO subset, and some relationship concepts to represent terms being in a subset, and a subset being a part of an ontology. 3. There are multiple kinds of synonym tags in OBO, e.g. related, narrow, broad, exact etc. The differences between these constructs are not formally documented. This requires defining new concepts in OWL, which can perhaps be mapped to new or already existing constructs in OWL.
Elements of OBO "missing" in the Semantic Web are few, and can still be constructed in OWL. Thus, OBO ontologies may be translated to the Semantic Web. However, in order to make the roundtrip possible, we find it important to store some ancillary information about the OBO ontology in the OWL file, e.g. a base URI, so it can be translated back without any loss of knowledge. It is important to note that even changing a local identifier within the whole knowledgebase counts as loss of knowledge from the original source, even if the overall structure of the ontology remains intact.
The presence of such incompatibilities requires us to make some complex choices regarding the transformation process. Our solutions to these problems are explained in detail later.
OBO and sub-languages of OWL
OWL has three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. Each of these sublanguages extends its simpler predecessor with richer constructs that affect the computational completeness and decidability of the ontology.
Our investigation shows that a major portion of the OBO Ontology Extensions maps to OWL Lite and provides similar level of expressiveness. Overall, OBO features are a strict subset of OWL DL.
In OBO, the definition of a term, or a typedef, is rigid and not as expressive as OWL Full. OWL Full allows restrictions to be applied on the language elements themselves [7,26]. In other words, an OWL Full Class can also be an OWL Full Property and an Instance and vice versa. Such features are not supported in OBO.
Recall that the primary concern is the use of Semantic Web technology and tools for OBO ontologies. Thus, the fact that OBO is less expressive than OWL is the convenient direction of containment. It does mean that round trips cannot be supported unless the editing of an OBO ontology while in its OWL representation is restricted; we discuss editing transformed ontologies in the OWL language in a later section. While transforming OBO ontologies into OWL, we must ensure that we produce a representation that can be used by description-logic-based inference engines. One of the intended goals of our transformation is to produce OWL DL, and not OWL Full.
Transformation metadata and rules
In this section, we present some of the rules for the transformation of OBO ontologies into OWL. For more complex transformations we describe the transformations and explain our approach.
In order to facilitate the transformation, we have defined a set of OWL meta-classes that correspond to the vocabulary of OBO tags. Complete listing of mappings between OBO and OWL are available in additional file 1.
Simple transformation rules
Most of the transformations follow simple rules. For most header and term/typedef tags, there is a one-to-one correspondence between OBO tags and OWL elements, either pre-existing or newly defined. In this section, we list the elements with this kind of simple transformation. Table 2 Example A provides some examples.
Header: The set of tag-value pairs at the start of an OBO file, before the definition of the first term or typedef, is the header of the ontology.
When translated into OWL language, each of the OBO header tags gets translated into the corresponding OWL element. The whole ontology header is contained in the owl:Ontology element in the new OWL file, and can appear anywhere within the file, as opposed to the start of file in OBO language.
Terms: A term in OBO is a class in OWL. So, a term declaration is translated into an owl:Class element and the tags associated with a term are contained within this element. Some tags that have straightforward transformations to OWL elements are: 1. The elements for 'name' and 'comment' about a term fall into the OBO Vocabulary layer, and are translated into rdfs:label and rdfs:comment respectively. A 'definition' tag is translated into hasDefinition annotation property, and is therefore placed in the OBO Ontology Extensions layer. 2. The 'is_a' tag in OBO specifies a subclass relationship, and is placed in the OBO Vocabulary layer. It is translated into an rdfs:subClassOf element (Table 2 Example B).
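As a hypothetical illustration in the spirit of Table 2 Example B (not a quotation from it, and with an invented base URI), a term with a name, comment, and is_a parent would map roughly as follows:

    [Term]
    id: EX:0000002
    name: child concept
    comment: An illustrative child term.
    is_a: EX:0000001

    <owl:Class rdf:about="http://example.org/owl#EX_0000002">
      <rdfs:label>child concept</rdfs:label>
      <rdfs:comment>An illustrative child term.</rdfs:comment>
      <rdfs:subClassOf rdf:resource="http://example.org/owl#EX_0000001"/>
    </owl:Class>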
Typedefs: A typedef in OBO is an object property in OWL. A typedef stanza in an OBO file is translated into an owl:ObjectProperty element in OWL. The other information associated with the typedef is expressed as elements nested within this element. Some simple transformations are: 1. OBO typedefs can have associated domains and ranges. These are expressed by 'domain' and 'range' tags, and are in the OBO Vocabulary layer. These tags are translated into RDF Schema defined elements rdfs:domain and rdfs:range respectively. 2. Just like subclasses for terms, a property can be a sub-property to another property. A sub-property relationship is expressed using the 'is_a' tag, from OBO Vocabulary layer, in a typedef stanza. This tag is translated into an rdfs:subPropertyOf element defined in RDF Schema.
3. Typedefs may be cyclic ('is_cyclic' tag), transitive ('is_transitive' tag) or symmetric ('is_symmetric' tag). These tags fall into the OBO Ontology Extensions layer. The corresponding elements in OWL are annotation property isCyclic, and property types owl:TransitiveProperty and owl:SymmetricProperty respectively. The isCyclic property specifies a Boolean value.
Identifiers and ID spaces
OBO has a local identifier scheme. As OBO has evolved, ID spaces have been introduced to allow specifying global identifiers. OBO identifiers have no defined syntax, but they are recommended to be of the form "<IDSPACE>:<LOCALID>". However, OBO ontologies may contain flat identifiers, i.e. ones that do not mention the ID space. OBO identifiers must be converted to URIs for use in OWL. The rules for converting OBO identifiers to URIs in the current mapping are as follows. If the OBO header declares an ID space of the form "idspace: GO http://www.go.org/owl#", all OBO identifiers with the prefix GO: will be mapped to the provided URI, e.g. "http://www.go.org/owl#GO_0000001".
If an OBO ID space prefix does not have a declaration in the header, all identifiers that mention that prefix will be transformed using a default base URI; for example, an identifier of the form "SO:0000001" will become "<default-base-uri>SO_0000001". In case the OBO identifier is flat, e.g. foo, the transformation again uses the default base URI to create "<default-base-uri>UNDEFINED_foo". Notice that the URI contains "UNDEFINED_", which clarifies that the URI should be translated into a flat identifier when translating the OWL version back to OBO. Flat identifiers are discouraged in OBO since they are not globally unique; our transformation scheme only attempts to enable the roundtrip, and does not guarantee uniqueness of the identifiers. Typedefs defined in the OBO Relations Ontology [31] are often used as a common vocabulary in OBO ontologies. Such typedefs have OBO identifiers prefixed with the ID space OBO_REL. OBO ontologies assume the presence of this ID space with the URI "http://www.obofoundry.org/ro/ro.owl" even if it is not explicitly stated. When translated into OWL, an XML namespace xmlns:oboRel with the same URI is added to the ontology, and the newly created object property is assigned that namespace. As a result, we ensure that all Relations Ontology constructs are mapped to the same URIs across ontologies. A sketch of these identifier rules is shown below.
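A minimal Python sketch of these identifier rules follows; the function name and example values are hypothetical, and only the three rules above are taken from the text.

    def obo_id_to_uri(obo_id, idspaces, default_base_uri):
        """Map an OBO identifier to a URI per the common mapping rules.

        idspaces: dict of declared ID space prefixes to base URIs,
                  e.g. {"GO": "http://www.go.org/owl#"}.
        """
        if ":" in obo_id:
            prefix, local = obo_id.split(":", 1)
            if prefix in idspaces:                           # declared ID space
                return idspaces[prefix] + prefix + "_" + local
            return default_base_uri + prefix + "_" + local   # undeclared prefix
        # flat identifier: mark it so the roundtrip can restore it
        return default_base_uri + "UNDEFINED_" + obo_id

    # obo_id_to_uri("GO:0000001", {"GO": "http://www.go.org/owl#"}, base)
    #   -> "http://www.go.org/owl#GO_0000001"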
Relationships
Relationships between OBO terms can be defined using the 'relationship' tag. A defined relationship is like a binary predicate and consists of a subject (the term being described in the stanza), a relationship type and an object.
There are multiple kinds of restrictions on relationships that can be expressed using OWL. OBO specifications do not specify any formal semantics for the 'relationship' tag that match a specific relationship type restriction defined in OWL. Therefore, we have selected the most general restriction to transform OBO relationships into OWL.
An example of relationship transformation is shown in Table 2 Example C. The owl:someValuesFrom element specifies the type of restriction that is applied to the OWL relationship. This restriction is similar to the existential quantifier of predicate logic [7,26]. In the existing OBO ontology content, we have only seen OBO relationships of this kind. It is possible that some ontologies use a different semantics of relationships; currently, we do not have a way of differentiating between the two uses of OBO relationships, so our transformation is based on the common semantics.
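As a hypothetical illustration in the spirit of Table 2 Example C (with invented identifiers), a term stanza containing "relationship: part_of EX:0000001" would map to an existential restriction along these lines:

    <owl:Class rdf:about="http://example.org/owl#EX_0000002">
      <rdfs:subClassOf>
        <owl:Restriction>
          <owl:onProperty rdf:resource="http://example.org/owl#part_of"/>
          <owl:someValuesFrom rdf:resource="http://example.org/owl#EX_0000001"/>
        </owl:Restriction>
      </rdfs:subClassOf>
    </owl:Class>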
Subsets
Terms in an OBO ontology can be organized into subsets. A term can belong to multiple subsets.
In order to declare a subset, a value for the tag 'subsetdef' is specified in the OBO ontology header. This value consists of a subset ID (or subset name) and a quoted description about the subset. A term can be assigned to a defined subset using the 'subset' tag. Multiple 'subset' tags are used to assign the term to multiple subsets of the ontology.
When the ontology is translated into OWL, the mapping of subsets is one of the more complex processes. This is due to the fact that subsets do not have a semantic equivalent in OWL. Therefore, we use some OWL features to construct elements that serve as subsets. Subsets fall in the OBO Ontology Extensions in the OBO layer cake.
The local ID (or name) assigned to the subset, which is locally unique, becomes the OWL ID of a subset resource. A subset resource is declared using an oboInOwl:Subset element. The inSubset annotation is used to assign terms to a subset, and it is expressed within the owl:Class element.
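Illustratively (with hypothetical names, and with serialization details assumed rather than quoted from the mapping), a header declaration 'subsetdef: goslim_example "An example subset"' and a term tag 'subset: goslim_example' might be rendered as:

    <oboInOwl:Subset rdf:about="http://example.org/owl#goslim_example"/>

    <owl:Class rdf:about="http://example.org/owl#EX_0000002">
      <oboInOwl:inSubset rdf:resource="http://example.org/owl#goslim_example"/>
    </owl:Class>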
Obsolete content
OBO format supports obsolete content. A term or typedef can be marked as obsolete using the 'is_obsolete' tag with a 'true' Boolean value. The 'is_obsolete' tag is in the OBO Ontology Extensions.
Obsolete terms and typedefs are not allowed to have any relationships with other terms or typedefs, including the subclass and sub-property relationships.
When translated into OWL, an obsolete term becomes a subclass of oboInOwl:ObsoleteClass (Table 2 Example D). Similarly, an obsolete typedef becomes a sub-property of oboInOwl:ObsoleteProperty.
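As a hypothetical illustration (the term ID is invented, and the oboInOwl namespace URI shown is the commonly published one, assumed here rather than taken from the text), a term marked "is_obsolete: true" would become:

    <owl:Class rdf:about="http://example.org/owl#EX_0000003">
      <rdfs:subClassOf rdf:resource="http://www.geneontology.org/formats/oboInOwl#ObsoleteClass"/>
    </owl:Class>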
Notice that while OWL provides elements to handle deprecation, obsolescence in OBO has different semantics, hence requires a different mapping.
OBO semantics by transformation
The transformation system has the additional effect of formalizing the semantics of the OBO language. The semantics of OBO are operationally defined by means of GO and the software systems that support GO. The semantics of OWL have been formally defined using model theory [25,29]. Though we have not written it out, a formal document specifying (or suggesting) OBO semantics can be generated. The contents of that document would comprise an enumeration of the pair-wise mapping of constructs between the two languages, restating, in each mapping, the semantics stated for the involved OWL construct.
In Table 3, we present a few examples where our transformation mapping could provide formal semantics for OBO constructs, taken directly from OWL semantics specifications. So, 1. x is_a y: all instances of x are also instances of y.
2. x is domain of y: the subject entity for all relationships of type y is an instance of x.
3. x is disjoint from y: x and y do not have any common instances.
While the identification is straightforward in these cases, in certain other situations it is not very clear. Finding the semantics of relationships in OBO is one such case: OBO specifications do not provide the semantics of the construct used to specify relationships between two terms using a typedef. Therefore, it is hard to decide which of the available relationship constraints in OWL (owl:allValuesFrom, owl:someValuesFrom) to use, the former being similar to a universal quantifier and the latter to an existential quantifier. In our transformations, we use owl:someValuesFrom, since already built ontologies show examples of use of the OBO relationship construct in a way compatible with owl:someValuesFrom. We recommend that in practice the semantics of OBO relationships always match the owl:someValuesFrom restriction.

Table 3. Formal semantics for OBO constructs suggested by the transformation mapping:
    x is a subclass of y          | is_a          | rdfs:subClassOf    | CEXT(x) ⊆ CEXT(y)
    x is a sub-property of y      | is_a          | rdfs:subPropertyOf | EXT(x) ⊆ EXT(y)
    x is the domain of property y | domain        | rdfs:domain        | <z,w> ∈ EXT(y) implies z ∈ CEXT(x)
    x is the range of property y  | range         | rdfs:range         | <w,z> ∈ EXT(y) implies z ∈ CEXT(x)
    x is disjoint from y          | disjoint_from | owl:disjointWith   | CEXT(x) ∩ CEXT(y) = ∅

Other OBO tags that do not clearly match OWL elements, such as synonyms and subsets, as well as the semantics for the 'is_obsolete' tag, also present a more significant challenge in the identification of semantics.
Updating OBO ontologies in OWL
The set of constructs for ontology representation provided by OWL is considerably larger than the set of constructs provided by OBO. Therefore, in order to allow roundtrip transformations on OBO ontologies, it is important to restrict the editing of such ontologies per some guidelines while they are being represented in OWL.
Our transformation mappings essentially provide a subset of OWL elements that may be used for adding or updating contents of the ontology.
Compared to the general use of OWL, there are two key points to keep in mind: 1. To create relationships, use owl:someValuesFrom restrictions, since OBO does not have a corresponding relationship mechanism for owl:allValuesFrom.
2. Obsolescence of terms in the ontology should be done using the obsolete elements oboInOwl:ObsoleteClass and oboInOwl:ObsoleteProperty. OWL has seemingly similar, but semantically different deprecation elements, which must not be used for obsolescence.
Interconnecting OBO and the Semantic Web
The implications of our work in providing semantics to OBO strongly suggest the use of this mapping as a potential bridge between the OBO and the Semantic Web worlds. Compared to the existing work by Golbreich et al. [20,21], our ability to make roundtrips between OBO and OWL could enable seamless interconnections between the two worlds.
Our roundtrip tool could also be used as a validator for ontologies updated in OWL. It is common for biologists to develop and refine their OBO ontologies as their work progresses. Our work provides a path for accessing and querying the Semantic Web as well as OBO content in an integrated fashion, and to assimilate linked data available on the Semantic Web.
An implementation of our roundtrip mappings is provided by the Morphster tool [14] to jumpstart the integration of OBO ontologies with the Semantic Web. Morphster has successfully accomplished the use of a Semantic Web based triple store Jena SDB [32] for storage of large OBO ontologies and querying by the SPARQL query language for RDF. It also enables the use of XML Web Services with OBO ontologies to obtain and link diverse data such as images from Morphbank [33], and authoritative taxonomic names from uBio [34] etc.
OBO 1.3 and OWL 2
OBO and OWL both continue to evolve as ontology languages, providing new features based on real applications and user experience. A new version of OWL, commonly known as OWL 2 [15], has recently been ratified by the World Wide Web Consortium. Meanwhile, a new version of OBO, OBO 1.3, is under active development with draft documents available for comment [16]. As the languages change, tools as well as ontology content will be updated to utilize their new features.
Of particular concern to our work are the changes that are taking place in each language and their impact on the transformations. In this section, we discuss our understanding of these issues.
New features of OWL 2 mainly concern easier syntax for common ontology statements and new constructs that increase expressivity. Hence, we can expect simpler transformation rules for going from OBO to OWL 2.
The biomedical ontology community now understands that OBO and OWL are both useful ontology languages and the intention is to make these languages entirely interconvertible in the long term. One of the objectives behind the updates to OBO is to bring the feature set of OBO 1.3 closer to that of OWL.
• OBO 1.3 promises to provide a specification of formal syntax and semantics, hence taking a big step towards making provably correct mappings to OWL possible. The syntax for OBO 1.3 is specified as a BNF grammar, and the semantics are defined using the Obolog language, a collection of logical constructs defined using the ISO standard Common Logic [35]. In addition to the logical semantics of Obolog, the new specifications will also provide interpretations for Obolog to simplify translations into OWL-DL as well as OWL 2.
• The new version of OBO will accompany a recommendation [36] for globally unique identifiers for OBO that will have a one-to-one mapping with OBO Foundry compliant URIs, hence making the ID mapping obvious. The design goals behind this recommendation are to make sure that the URIs resolve to useful information about an OBO term, and that it is possible to maintain those URIs over time so they keep pointing to useful information. The recommendation document provides an example of how existing OBO IDs, new URIs, and the existing transformed URIs from the standard mapping may be related in the future (see Figure 2).
• The new version of OBO will introduce newly supported stanzas (sections) of OBO ontologies, i.e., 'Annotation' and 'Formula'. Annotation stanzas will allow the representation of annotations, and the attachment of metadata to them. Formula stanzas will be used to represent logical or mathematical formulas. A transformation system for OBO 1.3 will need to accommodate these stanzas as well.
Conclusions
Building ontologies is not a new idea for the biology community, and precedes the development of the Semantic Web. While ontologies are a central part of the architecture of the Semantic Web, the Semantic Web vision includes a broad range of technologies from the Artificial Intelligence field, such as inferring and querying mechanisms, as well as additional elements for distributed computing, such as global identifiers and the use of XML and HTTP as middleware. OBO, on the other hand, has appropriate tool support for building ontologies and hosts a number of important biomedical ontologies. Hence the OBO community has the biggest and most immediate need for the features being developed by the Semantic Web community.
We have standardized the mapping between the two systems to allow the OBO community to utilize the tool base developed for the Semantic Web world. We have indirectly formalized the semantics of OBO by creating a roundtrip transformation between OBO and OWL. We have also implemented our transformation tool in Java and it is available as part of the open source Gene Ontology project, and also as a web service. We believe our work is an important step towards building interoperable knowledge bases between OBO and the Semantic Web communities.
A key difference between the OBO community and the Semantic Web is the methodology for content development across ontologies. The Semantic Web has adopted a completely distributed development mechanism for ontologies that may be integrated using URIs. On the other hand, the OBO community uses a hybrid of centralized and distributed development. While the users of OBO develop ontologies independently, the OBO Foundry has the goal of collaboratively creating a suite of orthogonal, interoperable reference ontologies, such as the Relations Ontology, in the biomedical domain. Our transformation system enriches the Semantic Web by providing this additional structured ontology content and access to the wealth of data annotated using it.
Epilogue
As of September 2010, OBO 1.3 has been deprecated, to be replaced by OBO 1.4. In addition to describing the syntax and semantics of OBO 1.4, work is in progress on defining a mapping for the new version of OBO to OWL2-DL. The new mappings are expected to be a part of the final specification of OBO 1.4. Readers should refer to the editor's draft [37] for further developments.
Methods
Based on the mapping rules, we have developed a Java implementation of the OBO-to-OWL transformation. Our implementation is part of the official Gene Ontology project source [12]. The Gene Ontology project is an open source project on SourceForge.net, and is home to the OBO ontology editor OBO-Edit. Our implementation is part of the OBO API, which provides data structures for storing OBO ontologies, as well as read and write capabilities for OBO and OWL, among other operations. The source code for our transformation tool is available at [38].
Finally, we have deployed our transformation as a web service for general use: http://www.cs.utexas.edu/~hamid/oboowl.html.

Figure 2. Mappings between OBO IDs and URIs: a mapping between the existing OBO IDs, the newly recommended Foundry-compliant URIs, and the URIs produced by the standard mapping, referred to as OBO legacy URIs. This figure has been taken from the draft of the recommendation, and refers to the mappings of IDs described in the recommendation document.
In the OBO API, we have created the NCBOOboInOWLMetadataMapping class in the package org.obo.owl.datamodel.impl. This class implements the roundtrip mapping between OBO and OWL. In order to provide console-based use of the transformation tool, we have created the Obo2Owl and Owl2Obo classes in the org.obo.owl.test package.
To evaluate the OWL output of our implementation, we tested our tool on the Gene Ontology, Zebrafish Anatomical Ontology, Spider Ontology, and Adult Mouse Gross Anatomy, obtained from NCBO BioPortal. After transforming these ontologies into OWL, we successfully loaded the OWL files into Protégé [39], an ontology development tool for the Semantic Web. Using the 'summary' feature of Protégé, we compared the overall class and object property counts with the term and typedef counts obtained for the original OBO files using OBO-Edit's 'extended information' feature. The results of the comparison (Table 4) show equal values for both versions of the ontologies. Similarly, to test the roundtrip, we compared the original OBO file with the roundtrip version, again using OBO-Edit's 'extended information' feature; our evaluation showed that the two OBO ontologies had the same term and typedef counts (Table 4). Class counts do not include obsolete classes, or ancillary information required for roundtrips. ZFA = Zebrafish Anatomical Ontology, MA = Adult Mouse Gross Anatomy, SPD = Spider Ontology, GO = Gene Ontology.
"Computer Science",
"Medicine"
] |
Formation of Unipolar Outflow and Protostellar Rocket Effect in Magnetized Turbulent Molecular Cloud Cores
Observed protostellar outflows exhibit a variety of asymmetrical features, including remarkable unipolar outflows and bending outflows. Revealing the formation and early evolution of such asymmetrical protostellar outflows, especially the unipolar outflows, is essential for a better understanding of the star and planet formation because they can dramatically change the mass accretion and angular momentum transport to the protostars and protoplanetary disks. Here we perform three-dimensional nonideal magnetohydrodynamics simulations to investigate the formation and early evolution of the asymmetrical protostellar outflows in magnetized turbulent isolated molecular cloud cores. We find, for the first time to our knowledge, that the unipolar outflow forms even in the single low-mass protostellar system. The results show that the unipolar outflow is driven in the weakly magnetized cloud cores with the dimensionless mass-to-flux ratios of μ = 8 and 16. Furthermore, we find the protostellar rocket effect of the unipolar outflow, which is similar to the launch and propulsion of a rocket. The unipolar outflow ejects the protostellar system from the central dense region to the outer region of the parent cloud core, and the ram pressure caused by its ejection suppresses the driving of additional new outflows. In contrast, the bending bipolar outflow is driven in the moderately magnetized cloud core with μ = 4. The ratio of the magnetic to turbulent energies of a parent cloud core may play a key role in the formation of asymmetrical protostellar outflows.
INTRODUCTION
Protostellar outflows play essential roles in the formation process of protostars and planets. The protostellar outflows are one of the observable signs of the birth of protostars in the dense regions of molecular cloud cores. The mass accretion and angular momentum transport to protostars and protoplanetary disks are regulated by the driving and subsequent evolution of the protostellar outflows, contributing to the determination of the star formation efficiency in isolated protostellar cores (Machida & Hosokawa 2013). Moreover, grown dust with a size of about a centimeter in the inner region of the protoplanetary disk is entrained by the protostellar outflow and refluxed from the outflow onto the outer region of several tens of astronomical units of the disk (Tsukamoto et al. 2021b). The above process, which is referred to as the ashfall phenomenon, can circumvent the radial drift barrier that typically hinders planet formation. Therefore, the protostellar outflows contribute to crucial physical processes for the formation of protostars and planets.
One of the remarkable discoveries is that the molecular outflows driven by YSOs exhibit a variety of asymmetrical features. For instance, a statistical study of the properties of molecular outflows conducted by Wu et al. (2004) shows that 50 sources (13%) out of 391 targets present unipolar outflows, of which 28 are redshifted, indicating that redshifted unipolar outflows are about as abundant as blueshifted ones and that the unipolar outflows are intrinsic. Recent high-resolution observations of the protostellar environment in the Orion A molecular cloud using the Atacama Large Millimeter/submillimeter Array (ALMA) have more clearly detected isolated protostars driving unipolar outflows (Hsieh et al. 2023). The resolved unipolar cavities in the circumstellar envelopes are delineated by near-infrared images of the scattered light around protostars, which typically trace the hot shocked gas regions in the outflows, obtained by the survey of protostellar outflow cavities in the Orion molecular clouds using the Hubble Space Telescope (Habel et al. 2021). Although optical and near-infrared observations are inherently affected by extinction, the result provides complementary evidence for the existence of the unipolar outflows. Additionally, many detections of unipolar outflows in low-mass and high-mass YSOs have been reported from recent ALMA observations (e.g., Aso et al. 2018; Louvet et al. 2018; Kong et al. 2019; Aso et al. 2019; de Valon et al. 2020; Garufi et al. 2020; Li et al. 2020; Dutta et al. 2020; Baug et al. 2021; Sato et al. 2023a; Dutta et al. 2023; Sato et al. 2023b). Moreover, observations have frequently detected bipolar molecular outflows with asymmetrical features, such as bending and different sizes of the redshifted and blueshifted lobes (Arce et al. 2013; Yen et al. 2015, 2017a; Aso et al. 2018, 2019; Okoda et al. 2021; Hsieh et al. 2023; Kido et al. 2023). These observational results suggest that asymmetrical features frequently emerge in outflows.
Although unipolar outflows have been observed, the formation and early evolution of unipolar outflows driven by low-mass protostars remain unclear. Understanding the formation and early evolution of outflows with outstanding asymmetrical features, in particular unipolar outflows, is crucial because they can strongly regulate the mass accretion and angular momentum transport to the protostars and protoplanetary disks. Observed molecular cloud cores are often threaded by magnetic fields (Crutcher 2012; Pattle et al. 2023), and in the context of low-mass star formation, bipolar outflows magnetically driven by first cores and protoplanetary disks have been successfully reproduced by many numerical simulations of the gravitational collapse of rotating magnetized cloud cores (e.g., Tomisaka 1998, 2002; Matsumoto & Tomisaka 2004; Banerjee & Pudritz 2006; Machida et al. 2008; Tomida et al. 2010, 2013; Bate et al. 2014; Tomida et al. 2015; Tsukamoto et al. 2015a, 2017; Wurster et al. 2018a; Vaytet et al. 2018; Tsukamoto et al. 2018, 2020; Hirano et al. 2020; Tsukamoto et al. 2021b; Wurster et al. 2021; Marchand et al. 2023). On the other hand, isolated low-mass cloud cores are often associated with turbulent velocity fields as well (e.g., Goodman et al. 1993; Barranco & Goodman 1998; Burkert & Bodenheimer 2000; Misugi et al. 2019, 2023), which are typically subsonic or at most transonic (Ward-Thompson et al. 2007). Turbulence in low-mass cloud cores can naturally produce complex asymmetrical structures such as warped protoplanetary disks and filamentary infalling envelopes, and the rotation directions of the disks change dynamically as time proceeds due to chaotic accretion (e.g., Tsukamoto & Machida 2013; Takaishi et al. 2020, 2021). Thus, turbulence is expected to generate asymmetry of the magnetic field, leading to unipolar outflows. However, few studies have shown that asymmetrical bipolar outflows form in realistic simulations of the collapse of magnetized turbulent low-mass cloud cores with or without coherent rotation (Matsumoto & Hanawa 2011; Joos et al. 2013; Matsumoto et al. 2017; Lewis & Bate 2018), and they have not yet identified unipolar outflows driven by low-mass protostars.
In contrast to the above simulations, Mignon-Risse et al. (2021a) recently reported that a weak and transient unipolar outflow is driven by evolving binary massive protostars formed in the most turbulent case of their simulations, which calculate the gravitational collapse of magnetized turbulent massive cores of 100 M⊙ by solving the magnetohydrodynamics (MHD) equations including ambipolar diffusion and hybrid radiative transfer in the context of high-mass star formation (see also Mignon-Risse et al. (2021b) for details of their simulations). The results indicate that the initial turbulence of the parent cloud core, i.e. the environmental ram pressure, can strongly affect the outflow driving. Machida & Hosokawa (2020) also showed that the ram pressure caused by the infalling envelope with a high accretion rate suppresses the outflow driving when the rigidly rotating cloud core initially has a weaker magnetic field. These results suggest that the ram pressure caused by stronger turbulence and a weaker magnetic field may be crucial for the formation of unipolar outflows in low-mass protostar formation as well as in high-mass protostar formation.
This paper reports, for the first time to our knowledge, the formation of unipolar outflows driven by single low-mass protostellar systems. We perform simulations of the gravitational collapse of magnetized turbulent low-mass cloud cores of 1 M⊙ with strong, moderate, and weak magnetic fields to investigate the formation and early evolution of unipolar outflows, which have not been explored in previous studies. In addition, this paper presents the subsequent evolution of the protostellar system driving the unipolar outflow, which resembles the launch and propulsion of a rocket.
The rest of the paper is organized as follows. Section 2 describes the numerical method and the initial conditions. Section 3 presents the numerical results. Finally, Section 4 summarizes and discusses the results and findings.
Basic Equations and Numerical Method
The simulations solve the non-ideal MHD equations including the self-gravity of the gas (written out below), where v is the gas velocity, ρ is the gas density, P is the gas pressure, B is the magnetic field, ϕ is the gravitational potential, and G is the gravitational constant. The unit vector along the magnetic field is denoted by B̂ ≡ B/|B|. η_O and η_A are the resistivities for the Ohmic dissipation and ambipolar diffusion, the non-ideal MHD effects included here. We note that the Hall effect, another important non-ideal MHD effect, is currently ignored because simulations incorporating the Hall effect require extremely small time steps and are thus computationally too demanding for a long-term simulation up to ∼ 10^4 yr after protostar formation. Instead of solving the radiative transfer, we adopt a barotropic equation of state that mimics the thermal evolution of the cloud core found in radiation hydrodynamics simulations (Masunaga & Inutsuka 2000), where c_s,iso = 1.9 × 10^4 cm s^−1 is the isothermal sound speed at a temperature of 10 K, and ρ_crit = 4 × 10^−14 g cm^−3 is the critical density at which the thermal evolution changes from isothermal to adiabatic. The equations are solved with the smoothed particle hydrodynamics (SPH) method (Lucy 1977; Gingold & Monaghan 1977; Monaghan & Lattanzio 1985). The ideal MHD part of the equations is solved with the Godunov smoothed particle magnetohydrodynamics (GSPMHD) method (Iwasaki & Inutsuka 2011). In addition, we adopt the hyperbolic divergence cleaning method proposed by Iwasaki & Inutsuka (2013) so that the divergence-free condition of the magnetic field is satisfied. The Ohmic dissipation and ambipolar diffusion are calculated with the methods of Tsukamoto et al. (2013a) and Wurster et al. (2014), and are accelerated by the super-time-stepping (STS) method (Alexiades et al. 1996) with the parameters ν_sts = 0.01 and N_sts = 5 (Tsukamoto et al. 2013a). The self-gravity of the gas is computed by the Barnes-Hut octree algorithm with an opening angle parameter of θ_gravity = 0.5 (Barnes & Hut 1986). The spline interpolation for the gravitational softening is adopted with the technique of the adaptive softening length (Price & Monaghan 2007). The numerical code is parallelized with the Message Passing Interface (MPI) and has already been applied to a variety of problems (e.g., Tsukamoto & Machida 2011, 2013; Tsukamoto et al. 2013a,b, 2015a,b,c, 2017, 2018; Takaishi et al. 2020; Tsukamoto et al. 2020, 2021a,b; Takaishi et al. 2021; Tsukamoto et al. 2023a).
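The displayed equations themselves were lost in extraction. A plausible reconstruction, inferred from the variable definitions above and from the standard forms used in the cited GSPMHD literature (so the exact notation may differ from the original), is:

```latex
\frac{d\rho}{dt} = -\rho\,\nabla\cdot\boldsymbol{v}, \qquad
\frac{d\boldsymbol{v}}{dt} = -\frac{\nabla P}{\rho}
  + \frac{(\nabla\times\boldsymbol{B})\times\boldsymbol{B}}{4\pi\rho} - \nabla\phi,
\qquad \nabla^{2}\phi = 4\pi G\rho,

\frac{d\boldsymbol{B}}{dt} = (\boldsymbol{B}\cdot\nabla)\boldsymbol{v}
  - \boldsymbol{B}\,(\nabla\cdot\boldsymbol{v})
  - \nabla\times\Bigl\{\eta_{\mathrm{O}}(\nabla\times\boldsymbol{B})
  + \eta_{\mathrm{A}}\bigl[(\nabla\times\boldsymbol{B})\times\hat{\boldsymbol{B}}\bigr]\times\hat{\boldsymbol{B}}\Bigr\},

P = c_{s,\mathrm{iso}}^{2}\,\rho\,\Bigl[1 + \bigl(\rho/\rho_{\mathrm{crit}}\bigr)^{2/5}\Bigr],
```

where the exponent 2/5 in the barotropic equation of state corresponds to an effective adiabatic index of 7/5 above ρ_crit, consistent with the γ_T = 7/5 quoted in the resistivity model below.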
Resistivity model
The simulations use the tabulated resistivities for η_O and η_A that were presented for the single-sized dust model with a dust size of a_d = 0.035 µm in Tsukamoto et al. (2020). The resistivities are generated by a chemical reaction network calculation using the method of Susa et al. (2015). The calculation includes the neutral and singly charged dust grains G_0, G^+, and G^−, and takes into account cosmic-ray ionization, gas-phase and dust-surface recombination, and ion-neutral reactions. The indirect ionization by high-energy photons emitted through direct cosmic-ray ionization (described as CRPHOT in the UMIST database) is also considered. The initial abundances and reaction rates are taken from the UMIST2012 database (McElroy et al. 2013). The grain-ion and grain-grain collision rates are calculated using the equations in Draine & Sutin (1987). The chemical reaction network calculation is conducted with the CVODE package (Hindmarsh et al. 2005), assuming the system is in chemical equilibrium, and the resistivities are calculated from the abundances of the charged species in the equilibrium state. The momentum transfer rate between the charged and neutral species is calculated using the equations in Pinto & Galli (2008). The temperature for the chemical reaction network calculation is modeled as T_chem = 10[1 + γ_T(ρ/ρ_crit)^(γ_T−1)] K, where γ_T = 7/5. The dust is assumed to have an internal density of ρ_d = 2 g cm^−3 and a size of a_d = 0.035 µm, the dust-to-gas mass ratio is fixed at 0.01, and the cosmic-ray ionization rate is assumed to be ξ_CR = 10^−17 s^−1.
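As a quick sanity check, the quoted temperature model can be evaluated directly; the following is a minimal sketch (the function name and sample densities are ours, not from the paper):

```python
def T_chem(rho, rho_crit=4e-14, gamma_T=7.0/5.0):
    """Temperature model used in the chemical network, in Kelvin:
    T = 10 * (1 + gamma_T * (rho/rho_crit)**(gamma_T - 1))."""
    return 10.0 * (1.0 + gamma_T * (rho / rho_crit) ** (gamma_T - 1.0))

# The gas stays near 10 K well below rho_crit and heats up above it.
for rho in (1e-18, 4e-14, 1e-11):   # sample densities [g/cm^3] (ours)
    print(f"rho = {rho:.0e} g/cm^3 -> T_chem = {T_chem(rho):.1f} K")
```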
Initial conditions for the density profile
The simulations start from the collapse of isolated molecular cloud cores that include the magnetic field and turbulence simultaneously. The initial cloud core follows the density profile presented by Tsukamoto et al. (2020) (a reconstructed form is given below), where f is the density enhancement factor, ρ_c is the characteristic density, and R_c = 6.45a is the radius of the initial cloud core. ϱ_BE is the non-dimensional density profile of the Bonnor-Ebert sphere (Bonnor 1956; Ebert 1955), a pressure-confined, self-gravitating isothermal gas sphere in hydrostatic equilibrium that fits well the density distribution of observed isolated molecular cloud cores (e.g., Alves et al. 2001; Kandori et al. 2005).
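The profile equation itself was lost in extraction. Based on the definitions above and the ρ(r) ∝ r^−4 surrounding medium mentioned repeatedly in the text, a plausible reconstruction (the continuous matching at R_c is our assumption) is:

```latex
\rho(r) =
\begin{cases}
  f\,\rho_c\,\varrho_{\mathrm{BE}}(r/a), & r < R_c,\\[4pt]
  f\,\rho_c\,\varrho_{\mathrm{BE}}(R_c/a)\,(r/R_c)^{-4}, & R_c \le r < 10R_c,
\end{cases}
```

so that the r^−4 envelope joins continuously onto the truncated Bonnor-Ebert core at R_c.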
The density enhancement factor f controls the strength of gravity. More specifically, f can be written as f = 0.84/α, where α ≡ E_thm/|E_grav| is the ratio of the thermal to gravitational energies of the initial cloud core (without the surrounding medium of ρ(r) ∝ r^−4); see Appendix A of Matsumoto & Hanawa (2011). The density profile of the initial cloud core with f = 1 in the region r < R_c corresponds to that of the critical Bonnor-Ebert sphere, and the initial cloud core with f > 1 is gravitationally unstable.
The initial cloud core has a temperature of T_c = 10 K, a radius of R_c = 4.8 × 10^3 au, a mass of M_c = 1 M⊙ within r < R_c (∼ 2.1 M⊙ in the entire domain of r < 10R_c), and a ratio of α = 0.4, which are determined by specifying f = 2.1 and the central density ρ_0 = f ρ_c = 7.3 × 10^−18 g cm^−3. The free-fall time is t_ff = (3π/(32Gρ_0))^(1/2) = 2.5 × 10^4 yr. The simulations resolve M_c = 1 M⊙ with N_SPH = 10^6 SPH particles, corresponding to a mass resolution of M_c/N_SPH = 10^−6 M⊙; at this resolution, the total mass of ∼ 2.1 M⊙ in the entire domain of r < 10R_c corresponds to ∼ 2.1 × 10^6 SPH particles in total.
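The quoted characteristic numbers are easy to verify; a minimal sketch of the arithmetic in cgs units follows (the mean molecular weight of 2.33 used for the sound speed is our assumption, not stated in the text):

```python
import numpy as np

G   = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16    # Boltzmann constant [erg K^-1]
m_H = 1.673e-24    # hydrogen mass [g]
yr  = 3.156e7      # one year [s]

rho0 = 7.3e-18                          # central density [g cm^-3]
t_ff = np.sqrt(3*np.pi / (32*G*rho0))   # free-fall time
print(f"t_ff    = {t_ff/yr:.2e} yr")    # -> ~2.5e4 yr, as quoted

# isothermal sound speed at T = 10 K (mu = 2.33 assumed)
c_s = np.sqrt(k_B*10.0 / (2.33*m_H))
print(f"c_s,iso = {c_s:.2e} cm/s")      # -> ~1.9e4 cm/s, as quoted
```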
Turbulence and magnetic fields
The initial cloud core has a divergence-free turbulent velocity field with a velocity power spectrum of P_v(k) ∝ k^−4 (Burkert & Bodenheimer 2000), where k is the wavenumber. The amplitude of the turbulence is characterized by the mean sonic Mach number M_s, or equivalently by the ratio of the turbulent to gravitational energies, γ_turb = E_turb/|E_grav|, where σ_v and E_turb = 3σ_v²M_c/2 are the one-dimensional velocity dispersion and the turbulent energy of the initial cloud core without the surrounding medium of ρ(r) ∝ r^−4. We adopt M_s = 0.86, which corresponds to γ_turb = 0.1 initially. Owing to the stochastic nature of the turbulent velocity field, the initial cloud core has a non-vanishing net angular momentum of |J_c,net| = 4.4 × 10^53 g cm² s^−1, and the direction of J_c,net is set as the z-axis of the simulations.
The turbulent velocity field is generated with the method of Tsukamoto & Machida (2013), which is also adopted in our previous studies (Takaishi et al. 2020, 2021). All the simulations use the same turbulent velocity field, which is assigned to the cloud core of r < R_c alone and vanishes for R_c ≤ r < 10R_c. We note that the initial velocity field consists only of the turbulent velocity field; it is not a superposition of turbulent and rotational velocity fields.
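For readers who want to reproduce such initial conditions, a minimal sketch of a divergence-free Gaussian random field with P_v(k) ∝ k^−4 on a periodic grid is given below. This is not the authors' implementation; the grid-based FFT construction, the solenoidal projection, and the normalization convention M_s = √3 σ_v/c_s,iso are our assumptions.

```python
import numpy as np

def turbulent_velocity_field(n=64, slope=-4.0, mach=0.86, c_s=1.9e4, seed=1):
    """Divergence-free random velocity field with P_v(k) ~ k**slope."""
    rng = np.random.default_rng(seed)
    k1d = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    amp = k2 ** (slope / 4.0)              # |v_k| ~ k^(slope/2) => P ~ k^slope
    amp[0, 0, 0] = 0.0                     # no mean flow
    vk = [amp * (rng.normal(size=(n, n, n)) + 1j * rng.normal(size=(n, n, n)))
          for _ in range(3)]
    # Solenoidal projection: remove the component of v_k parallel to k.
    kdotv = kx * vk[0] + ky * vk[1] + kz * vk[2]
    vk = [vk[0] - kdotv * kx / k2, vk[1] - kdotv * ky / k2, vk[2] - kdotv * kz / k2]
    # Taking the real part suffices for a sketch; amplitude is renormalized below.
    v = np.array([np.fft.ifftn(c).real for c in vk])
    sigma_v = v.std(axis=(1, 2, 3)).mean()       # 1D velocity dispersion
    v *= mach * c_s / (np.sqrt(3.0) * sigma_v)   # assume M_s = sqrt(3) sigma_v / c_s
    return v

v = turbulent_velocity_field()
print(v.shape, v.std(axis=(1, 2, 3)))  # (3, 64, 64, 64), each ~0.86*c_s/sqrt(3)
```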
The initial cloud core has the axisymmetric magnetic field modeled by Tsukamoto et al. (2020); the expressions for the components in cylindrical coordinates (R, φ, z) are given in that paper, where B_c denotes the strength of the central magnetic field. The field has a constant uniform z-component in the central region (B_R → 0 and B_z → B_c as r → 0) and approaches an hourglass-shaped structure with |B| ∝ r^−2 in the outer region (except at the midplane) as r → ∞ (see Appendix A of Tsukamoto et al. (2020) for details). With this configuration, the simulations avoid the emergence of low-β_plasma regions in the surrounding medium of ρ(r) ∝ r^−4, where β_plasma is the plasma beta parameter. Such hourglass-shaped magnetic field structures have been inferred from previous observations of protostellar cores (e.g., Girart et al. 2006; Kandori et al. 2020a) and starless cores (e.g., Kandori et al. 2017, 2018, 2020b).
The initial cloud cores are parameterized by the strength of the central magnetic field B_c, which can be expressed through the dimensionless mass-to-flux ratio µ (see the expressions below), where Φ_mag(R) is the magnetic flux of the initial cloud core and (M_c/Φ_mag)_crit = (0.53/3π)(5/G)^(1/2) is the critical mass-to-flux ratio for the stability of uniform spheres (Mouschovias & Spitzer 1976).
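The defining expressions were lost in extraction. A plausible reconstruction consistent with the definitions above is given below; the evaluation of the flux through the midplane within R_c, and the uniform-field form of µ_const (defined in the next paragraph), are our assumptions rather than quotations from the original:

```latex
\mu \equiv \frac{M_c/\Phi_{\mathrm{mag}}(R_c)}{(M_c/\Phi_{\mathrm{mag}})_{\mathrm{crit}}},
\qquad
\Phi_{\mathrm{mag}}(R) = 2\pi\int_{0}^{R} B_z(R', z=0)\,R'\,\mathrm{d}R',
\qquad
\left(\frac{M_c}{\Phi_{\mathrm{mag}}}\right)_{\mathrm{crit}} = \frac{0.53}{3\pi}\sqrt{\frac{5}{G}},

\mu_{\mathrm{const}} \equiv \frac{M_c/(\pi R_c^{2} B_c)}{(M_c/\Phi_{\mathrm{mag}})_{\mathrm{crit}}}.
```

The µ_const form, which treats the core as uniformly threaded by the central field strength B_c, is at least consistent with the pairing µ = 4 (µ_const = 2) quoted later in the text.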
The simulations are conducted with four different dimensionless mass-to-flux ratios, µ = 2, 4, 8, and 16, which correspond to B_c = 252 µG, 126 µG, 63 µG, and 31 µG, respectively. The dimensionless mass-to-flux ratio µ_const, defined with the constant value of the central magnetic field B_c (cf. the definitions above), is also used. As noted in Tsukamoto et al. (2020), µ_const is a suitable indicator for comparing the magnetic field strength of this study with those of previous studies because we focus on the time evolution of the central region of the cloud core. µ can also be converted into the ratio of the magnetic to gravitational energies, γ_mag = E_mag/|E_grav|, where E_mag is the magnetic energy of the initial cloud core without the surrounding medium of ρ(r) ∝ r^−4. The model names and corresponding parameters are summarized in Table 1; the last column of Table 1 lists the resulting outflow morphologies.

RESULTS
Overview of the 3D structure of the driven outflows and magnetic field
First, we present an overview of the simulations, which clearly shows the very different morphologies of the driven outflows. Figure 1 shows three-dimensional views of the density distribution and magnetic field structure for the three outflow morphologies: no outflow (model MF2, top green box), a bipolar outflow (model MF4, middle blue box), and a unipolar outflow (model MF8, bottom magenta box). The simulations are conducted from the protostellar collapse to t_p ∼ 10^4 yr, where t_p denotes the elapsed time after the protostar formation epoch, defined as the time when the central density first exceeds 1.0 × 10^−11 g cm^−3.
The top green box of Figure 1 shows that no outflow is driven in model MF2, in which the initial cloud core has a strong magnetic field of µ = 2. The hourglass-shaped structure of the magnetic field forms and is maintained until the end of the simulation. No protoplanetary disk forms in the central region within t_p ∼ 1.5 × 10^4 yr, whereas the disk-shaped flattened infalling envelope, the so-called pseudo-disk (Galli & Shu 1993a,b), does form in this model. This result indicates that the relatively strong magnetic field rapidly extracts the angular momentum delivered by the turbulent accretion from the central region via magnetic braking and suppresses the formation of a Keplerian rotating disk.
The middle blue box of Figure 1 shows that a bipolar outflow is driven in model MF4, in which the initial cloud core has a magnetic field of µ = 4. The bipolar outflow is mainly driven by the magneto-centrifugal mechanism (Blandford & Payne 1982), although the spiral-flow mechanism (Matsumoto & Hanawa 2011; Matsumoto et al. 2017) also contributes. The magnetic field gradually becomes twisted as time proceeds, and the driven bipolar outflow grows to a size of ∼ 5 × 10^2 au in the lower region and ∼ 10^3 au in the upper region, i.e., the outflow is larger in the upper region than in the lower region. Furthermore, the driven bipolar outflow bends slightly, and its driving directions in the upper and lower regions are not perfectly antiparallel. The driving directions of the bipolar outflow are also misaligned with the direction of the large-scale global magnetic field (roughly the z-axis direction). The warped structures of the infalling envelopes emerge from the perturbations of the turbulent accretion. Our results suggest that the bending and misaligned structures of the bipolar outflow are created by the turbulent accretion via the surrounding infalling envelopes. Therefore, the turbulence of molecular cloud cores can naturally explain the observed asymmetrical features of bipolar molecular outflows, such as the bending and the different sizes of the redshifted and blueshifted lobes (Arce et al. 2013; Yen et al. 2015, 2017a; Aso et al. 2018, 2019; Okoda et al. 2021; Hsieh et al. 2023; Kido et al. 2023). The results also demonstrate that bipolar outflows can form not only in rotating cloud cores but also in turbulent cloud cores, consistent with previous theoretical studies (Matsumoto et al. 2017).
The bottom magenta box of Figure 1 shows that a unipolar outflow is driven in model MF8, in which the initial cloud core has a weak magnetic field of µ = 8. The unipolar outflow grows to a size of ∼ 10^3 au in the lower region and has a large opening angle in the bottom right panel of Figure 1. The magnetic field lines gradually become twisted only in the lower region as time proceeds; in contrast, the field lines in the upper region remain extended and spread out without twisting. The results indicate that the unipolar outflow is mainly accelerated by the magnetic pressure gradient force, although the magneto-centrifugal force (Blandford & Payne 1982) contributes slightly to driving and accelerating it. Indeed, Tomisaka (2002) reports that an outflow driven by the magnetic pressure gradient force appears in the weak-field case, while an outflow driven by the magneto-centrifugal force appears in the strong-field case, because the toroidal components of the magnetic field are amplified more easily by the disk rotation when the field is weak. Our results indicate that the strong turbulent accretion can locally amplify the toroidal components of the magnetic field relative to the poloidal ones, in contrast to the bipolar outflow formed in model MF4. The results also show that the infalling envelopes acquire warped and elongated filamentary structures through the turbulent accretion. As shown in Figure 2 of Section 3.2, these structures generated by the turbulent accretion can naturally explain the observed arc-like structures on scales of ∼ 10^3 au (e.g., Tokuda et al. 2014).
A unipolar outflow is also driven in model MF16, in which the initial cloud core has a very weak magnetic field of µ = 16. The outflows and initial parameters of the cloud cores are summarized in Table 1, which shows that the unipolar outflows are driven with E_mag/E_turb < 1, whereas the bipolar outflow is driven with E_mag/E_turb ∼ 1. The results of Figure 1 and Table 1 suggest that the ratio of the magnetic to turbulent energies of the parent cloud core, E_mag/E_turb, may play a key role in determining the morphology of the driven outflow.
Formation and evolution of the unipolar outflow
Next, we focus on the formation and subsequent evolution of the unipolar outflow. Figure 2 shows the evolution of the surface density distributions along the y-direction for model MF8. We note that the origin of the coordinate system is shifted to the center of mass of the system.
Panel (b) of Figure 2 shows that the unipolar outflow is driven around the protostar at t_p ∼ 8 × 10^3 yr after the protostar formation. During the subsequent evolution from panel (b) to panel (c), the unipolar outflow grows with an outflow speed of 1−5 km s^−1 and expands to a scale of ∼ 10^3 au within ∼ 5 × 10^3 yr. Panels (c) to (f) show that the unipolar outflow has a very wide opening angle. The velocity of the unipolar outflow gradually increases during the evolution from panels (b) to (f). As time proceeds from panels (b) to (f), the velocity of the central protostar v_p evolves from subsonic (v_p ∼ 0.5c_s,iso) to supersonic (v_p ∼ 1.6c_s,iso).
All the panels of Figure 2 show arc-like structures appearing on scales from ∼ 10^2 au to ∼ 10^3 au in the x−z plane. These arc-like structures are infalling material produced by the turbulent accretion, not outflowing gas. Tokuda et al. (2014) reported an arc-like structure on a ∼ 10^3 au scale around a very low-luminosity protostar in the dense cloud core MC27/L1521F with ALMA observations. Many interferometric observations have also detected similar arc-like structures, recently called accretion streamers, on scales ranging from the cloud core scale of ∼ 10^4 au (e.g., Pineda et al. 2020) down to the disk and envelope scales of 10^2−10^3 au (e.g., Yen et al. 2014, 2017b; Akiyama et al. 2019; Yen et al. 2019; Thieme et al. 2022; Garufi et al. 2022; Kido et al. 2023; Aso et al. 2023). Our results suggest that the accretion streamers on scales of 10^2−10^3 au can be naturally explained by the filamentary envelope accretion caused by the turbulence of the parent cloud cores, as found in many previous numerical simulations of the collapse of self-gravitating turbulent low-mass cloud cores (e.g., Matsumoto & Hanawa 2011; Tsukamoto & Machida 2013; Matsumoto et al. 2017; Takaishi et al. 2020).

Figure 2. Evolution of the surface density distributions along the y-direction for model MF8, in which the unipolar outflow is driven around the protostar. Panels are labeled by t_p, the elapsed time after the protostar formation epoch defined as the time when the central density first exceeds 1.0 × 10^−11 g cm^−3. v_p is the velocity of the protostar. White arrows in the panels show the cut-plane velocity at y = 0; the reference arrow plotted at the top right corresponds to 1 km s^−1. Sky-blue solid lines show the velocity contours at v_r = 0, tracing the front lines of the outflowing gas, where v_r is the radial velocity of the gas. The origin of the coordinate system is shifted to the center of mass of the system.
Figure 3 shows the evolution of the ratio of the ram pressure P_ram to the magnetic pressure P_mag at the cut-plane of y = 0 for model MF8, following the analysis of Machida & Hosokawa (2020). In panels (a) to (f) of Figure 3, the ram pressure P_ram is lower than the magnetic pressure P_mag inside the lower region of the driven unipolar outflow (inside the sky-blue line). The result indicates that the magnetic pressure gradient force caused by the twisted magnetic field, shown in the model MF8 panels of Figure 1, gradually becomes large and continuously accelerates the unipolar outflow until the end of the simulation at t_p ∼ 2.3 × 10^4 yr. In the upper region, however, the ram pressure P_ram is always higher than the magnetic pressure P_mag from panels (a) to (f) of Figure 3, suggesting that outflow driving there is suppressed by the ram pressure of the infalling envelopes.
In order to examine the detailed evolution of the ram and magnetic pressures separately, we introduce the ram beta parameter β_ram = P_thm/P_ram, where P_thm is the thermal pressure of the gas. Figure 4 shows the evolution of the ram beta parameter β_ram outside the outflow (v_r < 0, outside the sky-blue line) and the plasma beta parameter β_plasma = P_thm/P_mag inside the outflow (v_r > 0, inside the sky-blue line) at the cut-plane of y = 0 for model MF8.
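These diagnostics are straightforward to compute from simulation snapshots; a minimal sketch in Gaussian cgs units is given below (the function name and array layout are ours):

```python
import numpy as np

def beta_maps(rho, v, B, P_thm):
    """Ram beta (P_thm/P_ram) and plasma beta (P_thm/P_mag), cgs units.
    rho, P_thm: scalar fields; v, B: vector fields of shape (3, ...).
    In Figure 4 of the paper, beta_ram is shown where v_r < 0 (infalling
    gas outside the outflow) and beta_plasma where v_r > 0 (inside it)."""
    P_ram = rho * np.sum(v**2, axis=0) + 1e-300            # rho * v^2
    P_mag = np.sum(B**2, axis=0) / (8.0 * np.pi) + 1e-300  # B^2 / (8 pi)
    return P_thm / P_ram, P_thm / P_mag
```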
Panel (a) of Figure 4 shows that β_ram (color map outside the outflow) is asymmetrically distributed and that its value in the upper region is smaller than that in the lower region before the unipolar outflow is driven. This indicates that, when the turbulence dominates, the asymmetric accretion due to the turbulence of the initial cloud core generates such different ram pressure distributions at the initial accretion stage. Therefore, outflow driving is initially delayed and suppressed in the region with large ram pressure, and the outflow is driven earlier in the region with smaller ram pressure.
Figure 4 shows that the low-β_ram region gradually expands to ∼ 10^3 au in the upper region from panels (b) to (f), indicating that the ram pressure of the infalling envelopes increases and suppresses outflow driving as time proceeds. Figure 4 also shows that the low-β_plasma region expands inside the driven unipolar outflow. Therefore, the ram pressure P_ram is expected to keep overcoming the magnetic pressure P_mag in the further evolution, so that no outflow driving is sustained in the upper low-β_ram region.
Protostellar rocket effect
Figure 5 shows the projected protostellar trajectories on the x−z plane relative to the center of the initial cloud core for all models. As shown in Figure 5, the protostellar systems driving the unipolar outflows in models MF8 and MF16 move from the inner to the outer regions of their parent cloud cores over a distance of approximately 5 × 10^2 au within t_p ∼ 2 × 10^4 yr, because the unipolar outflow ejects the protostellar system through linear momentum transport from the outflow to the system. Figure 5 also shows that the projected protostellar velocities of the systems with unipolar outflows increase substantially as time proceeds, indicating that the protostellar systems are accelerated by driving the unipolar outflow. This phenomenon is similar to the launch and propulsion of a rocket. In the following, we refer to a protostellar system accelerated by the linear momentum transport of a driven unipolar outflow as a protostellar rocket.
The protostellar rocket amplifies the relative velocity of the infalling envelopes with respect to the central protostar and protoplanetary disk. Therefore, the results suggest that the increase of the ram pressure of the infalling envelopes in the upper region, shown in panels (b) to (f) of Figures 3 and 4, is caused by the combination of the protostellar rocket motion and the infalling envelopes.
One interesting finding is that the driving of additional new outflows in directions different from that of the unipolar outflow is prevented once the protostellar rocket forms, suggesting that the unipolar outflow is sustained through this mechanism. Furthermore, the protostellar rocket feeds back on itself, further enhancing the unipolar outflow driving as shown in Figure 2, so that the feedback system behaves as an instability. Hereafter, we refer to this instability as the protostellar rocket effect.
We note that the protostellar systems driving no outflow and the bipolar outflow also move, but over shorter distances of ∼ 10^2 au compared with those driving the unipolar outflows, owing to the linear momentum transport from the initial chaotic accretion caused by the turbulence. The velocities of the protostellar systems driving the bipolar outflow and no outflow remain subsonic throughout the simulations.
Observational signature of unipolar outflows
Although unipolar outflows have been detected in many observations, it is possible that a bipolar outflow is observed as a unipolar one because of extinction effects caused by the geometry and/or the surrounding gas. We propose the expected linear polarization maps of the thermal emission from dust grains aligned with the magnetic field as an observational signature to identify whether a unipolar outflow is real or not.
Figure 6 shows the expected polarization maps of the thermal emission from dust grains aligned with the magnetic field for model MF8, in which the unipolar outflow is driven. The polarization maps are calculated from the relative Stokes parameters q and u using the method described in Tomisaka (2011) (see also Appendix A for details). Panel (a) of Figure 6 shows that the expected polarization degree in both the upper and lower regions has a large value of ∼ 10% on the scale of ∼ 10^3 au. This suggests that the expected polarization in the upper region is roughly the same as that in the lower region before the unipolar outflow is driven. However, panels (b) to (c) of Figure 6 show that the expected polarization degree inside the region of the driven unipolar outflow (inside the sky-blue line) decreases considerably, to ∼ 0−5%. As shown and pointed out in Tomisaka (2011), a protostellar outflow has a low polarization degree because the toroidal components cancel each other out inside the outflow region. The depolarization along the line of sight therefore emerges from the configuration of the twisted magnetic field around the unipolar outflow.

Figure 5. Projected protostellar trajectories on the x−z plane relative to the center of the initial cloud core (gray point) for all models: µ = 2 (no outflow), µ = 4 (bipolar), µ = 8 (unipolar), and µ = 16 (unipolar). t_p is the elapsed time after the protostar formation. v_p is the velocity of the protostar. Gray arrows indicate the projected protostellar velocity on the x−z plane; the reference arrow plotted at the top right corresponds to 1 c_s,iso = 1.9 × 10^4 cm s^−1. The symbols "S" and "E" denote the positions of the protostar at its formation (t_p = 0) and at the end of the simulation, and the numbers beside them give the dimensionless mass-to-flux ratio µ. A schematic illustration of the protostellar rocket induced by the unipolar outflow is plotted near the positions of "E8" and "E16". Gray circles indicate equal distances from the center of the initial cloud core.
During the subsequent evolution from panels (d) to (f) of Figure 6, the expected polarization in the upper region gradually increases to ∼ 14% and becomes large relative to that in the lower region, whereas the polarization increases only slightly inside the unipolar outflow. The results indicate that the expected polarization differs markedly between the lower and upper regions once the unipolar outflow has been driven. Thus, the unipolar outflow can be identified by observations of the thermal dust polarization, although the required sensitivity should be investigated in our subsequent work. It should also be pointed out that in Figure 6 the accretion streamers caused by the turbulent accretion of the infalling envelope have a low expected polarization degree.
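A minimal sketch of how such maps can be produced from a density and magnetic field cube is given below. It follows the relative Stokes q/u formulation for optically thin aligned-grain emission (cf. Tomisaka 2011); the exact conventions there may differ, and the intrinsic polarization efficiency p0 = 0.1, the angle conventions, and the function name are our assumptions, not the authors' implementation.

```python
import numpy as np

def dust_polarization_map(rho, B, p0=0.1, los=1):
    """Expected linear polarization of thermal dust emission for grains
    aligned with B, integrated along axis `los` (here y, as in Figure 6).
    rho: (nx, ny, nz); B: (3, nx, ny, nz), Cartesian components."""
    Bx, By, Bz = B
    Bpos2 = Bx**2 + Bz**2 + 1e-300       # plane-of-sky (x-z) field, squared
    cos2gam = Bpos2 / (Bpos2 + By**2)    # cos^2(gamma), gamma: B vs sky plane
    cos2psi = (Bz**2 - Bx**2) / Bpos2    # psi: position angle of B from z-axis
    sin2psi = 2.0 * Bz * Bx / Bpos2
    q  = (rho * cos2psi * cos2gam).sum(axis=los)   # relative Stokes q
    u  = (rho * sin2psi * cos2gam).sum(axis=los)   # relative Stokes u
    S  = rho.sum(axis=los)                          # column density
    S2 = (rho * (cos2gam - 2.0 / 3.0)).sum(axis=los)
    p   = p0 * np.hypot(q, u) / (S - p0 * S2)       # polarization degree
    chi = 0.5 * np.arctan2(u, q) + np.pi / 2        # E-vector, perp. to B
    return p, chi
```

The toroidal-field cancellation discussed above appears here naturally: where the plane-of-sky field reverses along the line of sight, q and u average toward zero and the polarization degree p drops.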
SUMMARY AND DISCUSSION
Recent observations of star-forming regions have revealed that the protostellar outflows driven by YSOs exhibit a variety of asymmetrical features, in particular unipolar outflows. Although observations have reported unipolar outflows, the formation and early evolution of unipolar outflows driven by low-mass protostars remain unclear. This study investigates the formation and early evolution of protostellar outflows with asymmetrical features using three-dimensional non-ideal MHD simulations of the gravitational collapse of magnetized turbulent isolated low-mass cloud cores. This paper presents, for the first time to our knowledge, the formation of a unipolar outflow in the early evolution of a low-mass protostellar system. Our results and findings are summarized as follows.
1. The unipolar outflows are driven by the protostellar systems formed in the weakly magnetized cloud cores with dimensionless mass-to-flux ratios of µ = 8 and 16. In contrast, the bending bipolar outflow is driven by the protostellar system formed in the moderately magnetized cloud core with µ = 4, and no outflow is driven by the protostellar system formed in the strongly magnetized cloud core with µ = 2. These results explain the observed asymmetrical features of protostellar outflows.
2. The protostellar system is ejected by the unipolar outflow from the central dense region to the outer region of the parent cloud core. As a result, the protostellar system driving the unipolar outflow is gradually accelerated as time passes, and the protostellar velocity evolves from subsonic to supersonic. This is very similar to the launch and propulsion of a rocket, and so we call a protostellar system ejected and accelerated by a unipolar outflow a protostellar rocket.
3. We find that subsequent additional outflows cannot be driven by the protostellar rocket until the end of the simulation, because they are suppressed by the ram pressure of the infalling envelopes, which is enhanced by the acceleration of the protostellar rocket itself. In the context of low-mass star formation, the unipolar outflows can thus put the protostellar rocket into an unstable, self-sustaining state, and we call this phenomenon the protostellar rocket effect.
The remaining question is how the outflow morphologies change with turbulent energies E_turb different from the one adopted in this study. Our results show that the unipolar outflows are driven with E_mag/E_turb < 1, while the bipolar outflow is driven with E_mag/E_turb ∼ 1. This implies that the outflow morphologies may depend on the ratio of the magnetic to turbulent energies of the parent cloud core, E_mag/E_turb.
We suggest that the outflow morphologies can be used as a new tracer to indirectly estimate the magnetic field strengths of the parent molecular cloud cores. Measurements of magnetic field strengths using the Zeeman effect show that typical observed cloud cores are slightly supercritical, with dimensionless mass-to-flux ratios of µ_obs ∼ 2 (e.g., Troland & Crutcher 2008; Falgarone et al. 2008; Crutcher 2012), meaning that the magnetic pressure is not sufficient to prevent the gravitational collapse of the cloud cores. Measurements using the Davis-Chandrasekhar-Fermi (DCF) method also suggest that cloud cores are magnetized with µ_obs ∼ 2−3 (e.g., Kirk et al. 2006; Karoly et al. 2023). The cloud cores may actually form from somewhat subcritical initial conditions (e.g., Pattle et al. 2017; Karoly et al. 2020; Yin et al. 2021; Priestley et al. 2022; Ching et al. 2022; Karoly et al. 2023). However, measurements of µ_obs suffer fundamentally from statistical and systematic uncertainties, such as the selection bias towards sources with strong magnetic fields in Zeeman effect observations and the overestimation of magnetic field strengths in the DCF method (Liu et al. 2021).
In our simulations, a bipolar outflow forms in a cloud core with µ = 4 (µ_const = 2) and M_s = 0.86. Typical cloud cores have turbulence of M_s ≲ 1 (e.g., Ward-Thompson et al. 2007). Therefore, our results suggest that the formation of bipolar outflows in turbulent cloud cores requires a relatively strong magnetic field in the parent cloud core, whereas the unipolar morphology may form when the magnetic field of the parent cloud core is relatively weak. Wu et al. (2004) show that bipolar outflows are observed more frequently than unipolar outflows, suggesting that typical molecular cloud cores tend to have relatively strong magnetic fields. Note, however, that the turbulence strength is fixed in our current study, and the impact of different turbulence strengths remains unclear. We will investigate the relation between the outflow morphologies, magnetic field strengths (E_mag), and turbulence strengths (E_turb) in future work.
The protostellar rocket effect would drive shock waves into the envelope around the protostar, which might be detected with chemical shock tracers in molecular line emission. However, the detectability of the shock waves depends on the chemical species of the shock tracers and their lifetimes in the post-shock gas. In addition, even simply accreting gas, without the protostellar rocket effect, is capable of driving accretion shocks. Therefore, the detectability of the shock waves in the ambient cloud material needs to be investigated in detail to identify probable chemical shock tracers and their characteristics.
This study ignores the Hall effect because of the computational cost of including it. Here, we briefly discuss the possible impact of the Hall effect on our conclusions. The magnetic braking is strengthened or weakened by the Hall effect when the rotation and magnetic field vectors are aligned or anti-aligned, respectively, and the Hall effect introduces interesting phenomena in the formation and early evolution of protostars and protoplanetary disks in collapsing cloud cores (e.g., Wardle & Ng 1999; Krasnopolsky et al. 2011; Li et al. 2011; Braiding & Wardle 2012; Tsukamoto et al. 2015b; Wurster et al. 2016; Tsukamoto et al. 2017; Wurster et al. 2018a,b, 2021). If the Hall effect is strong enough even in our simulation environments, we speculate that when the rotation and magnetic field vectors are aligned, the magnetic field may be twisted more strongly and the outflow becomes stronger, making it harder for a unipolar outflow to form. On the other hand, when the rotation and magnetic field vectors are anti-aligned, the magnetic field may be twisted more weakly and the outflow is less likely to overcome the ram pressure, which may favor the formation of a unipolar outflow. However, it should be noted that the magnetic resistivity depends on the dust model, the cosmic-ray ionization rate, and also on the strength of the magnetic field (e.g., Wardle & Ng 1999; Nakano et al. 2002; Padovani et al. 2014; Marchand et al. 2016; Wurster et al. 2018c; Koga et al. 2019; Tsukamoto & Okuzumi 2022; Kawasaki et al. 2022; Kobayashi et al. 2023; Tsukamoto et al. 2023b). These predictions will be verified by future simulations including the Hall effect.
How long the protostellar rocket effect continues in the subsequent evolution is still an open question. More evolved protostellar rockets are expected to be found outside the region of the initial cloud core as long as the unipolar outflow is maintained. Thus, the unipolar outflow and the protostellar rocket may be observationally identified by combining proper motions, the launch direction of the unipolar outflow on the plane of the sky, and the morphology of the infalling envelope cavity using, for example, the Global Astrometric Interferometer for Astrophysics (GAIA), the James Webb Space Telescope (JWST), ALMA, and/or other facilities. However, even if peculiar proper motions are detected, it may be difficult to distinguish a protostellar rocket origin from, for example, a stellar encounter origin. The long-term evolution and observability of protostellar rockets will be considered in our future studies.
Figure 1. Three-dimensional views of the density distribution and magnetic field structure for the models with no outflow (model MF2, top green box), a bipolar outflow (model MF4, middle blue box), and a unipolar outflow (model MF8, bottom magenta box). Yellow isosurfaces show the densities of 3.2 × 10^−18 g cm^−3, 10^−17 g cm^−3, 3.2 × 10^−17 g cm^−3, and 10^−16 g cm^−3 with radial velocity v_r > 0, representing the outflows. White lines show the magnetic field lines. Blue and green isosurfaces show the densities of 3.2 × 10^−17 g cm^−3 and 10^−16 g cm^−3 with v_r < 0, representing the infalling envelopes. Cut-plane densities on the x−y (z = 0), x−z (y = 0), and y−z (x = 0) planes are projected for each panel. t_p denotes the elapsed time after the protostar formation epoch, defined as the time when the central density first exceeds 1.0 × 10^−11 g cm^−3. The scale of each box is ∼ 2,000 au. The origin of the coordinate system is shifted to the center of mass of the system.
Figure 3. Evolution of the ratio of the ram pressure P_ram to the magnetic pressure P_mag at the cut-plane of y = 0 for model MF8. Black lines show the contours of the ratio. t_p in each panel corresponds to that in Figure 2; v_p is the velocity of the protostar. White lines show the contours of the surface density in Figure 2, ranging from −0.75 to 1.25 in steps of 0.25 on a logarithmic scale. Black arrows in the panels show the cut-plane velocity at y = 0; the reference arrow plotted at the top right corresponds to 1 km s^−1. Sky-blue solid lines show the velocity contours at v_r = 0, tracing the front lines of the outflowing gas. The origin of the coordinate system is shifted to the center of mass of the system.
Figure 4. Evolution of the ram beta parameter β_ram = P_thm/P_ram outside the outflow (v_r < 0) and the plasma beta parameter β_plasma = P_thm/P_mag inside the outflow (v_r > 0) at the cut-plane of y = 0 for model MF8, where P_thm is the thermal pressure of the gas. Black lines show the contours of these parameters. Sky-blue solid lines show the velocity contours at v_r = 0, indicating the boundary between the outside and inside of the outflow. t_p in each panel corresponds to that in Figure 2; v_p is the velocity of the protostar. White lines show the contours of the surface density in Figure 2, ranging from −0.75 to 1.25 in steps of 0.25 on a logarithmic scale. White arrows in the panels show the cut-plane velocity at y = 0; the reference arrow plotted at the top right corresponds to 1 km s^−1. The origin of the coordinate system is shifted to the center of mass of the system.
Figure 6. Expected linear polarization along the y-direction for the model with the unipolar outflow (model MF8). The color maps in each panel show the polarization degree. The polarization degree vectors are plotted as red-brown bars in each panel. The elapsed times t_p of the panels are the same as in Figure 2. White lines show the contours of the surface density in Figure 2, ranging from −0.75 to 1.25 in steps of 0.25 on a logarithmic scale. White arrows in the panels show the cut-plane velocity at y = 0. The references for the polarization degree vector and the cut-plane velocity are plotted at the top right. Sky-blue solid lines show the velocity contours at v_r = 0, tracing the front lines of the outflowing gas. The origin of the coordinate system is shifted to the center of mass of the system.
Table 1. The model names and parameters.

Model | µ | B_c (µG) | µ_const | γ_mag | E_mag/E_turb | Outflow
MF2 | 2 | 252 | … | … | … | no
MF4 | 4 | 126 | 2 | … | ∼1 | bipolar
MF8 | 8 | 63 | … | … | <1 | unipolar
MF16 | 16 | 31 | … | … | <1 | unipolar

Note: µ is the dimensionless mass-to-flux ratio. B_c is the strength of the central magnetic field. µ_const is the dimensionless mass-to-flux ratio with the constant value of the central magnetic field. γ_mag = E_mag/|E_grav| is the ratio of the magnetic to gravitational energies. E_mag/E_turb = γ_mag/γ_turb is the ratio of the magnetic to turbulent energies. The last column indicates the morphology of the driven outflow. Ellipses mark values not quoted in the surrounding text.
"Physics",
"Environmental Science"
] |
Post-translational protein lactylation modification in health and diseases: a double-edged sword
As more is learned about lactate, it is now recognized to act as both a product and a substrate, functioning as a shuttle system between different cell populations that provides the energy to sustain tumor growth and proliferation. Recent discoveries of lactate-mediated protein lactylation show that this modification plays an increasingly significant role in human health (e.g., neural and osteogenic differentiation and maturation) and disease (e.g., tumors, fibrosis, and inflammation). These views are critically significant and are described in detail for the first time in this review. Here, we focus on a new target, protein lactylation, which may be a “double-edged sword” for human health and disease. The main purpose of this review is to describe how protein lactylation acts in multiple physiological and pathological processes, and to outline its potential mechanisms through an in-depth summary of preclinical in vitro and in vivo studies. Our work aims to provide new ideas for treating different diseases and to accelerate translation from bench to bedside.
Introduction
Post-translational modifications (PTMs) refer to the chemical modification of a protein after translation; they regulate protein activity, localization, and folding, as well as critical interactions between proteins and other biomacromolecules [1,2]. Many important life activities and the occurrence of diseases have been linked not only to the abundance of proteins but also to their various post-translational modifications [3]. In-depth study of changes in protein post-translational modification levels is of great significance for revealing the mechanisms of life activities, screening clinical markers of diseases, and identifying drug targets [4]. It has long been believed that lactate is a metabolic waste product of glycolysis generated by cellular activities under hypoxia, which cemented the stereotype of lactate as a harmful substance [5]. However, biological functions of lactate are being progressively discovered, including intracellular energy supply, signal transduction, modulation of the tumor microenvironment, and inflammation regulation; lactate is also involved in the progression of cancer, inflammatory diseases, and metabolic diseases [6][7][8][9][10][11][12].
Several common PTMs, such as acetylation, methylation, ubiquitination, and phosphorylation, have received widespread attention and have been well characterized [13]. Interestingly, a lactate-induced lactylation modification of histone lysine residues was first identified in 2019 by Zhang et al. [14] and was shown to be involved in the homeostatic regulation of M1 macrophages under bacterial infection. Protein lactylation, proposed as a new PTM, not only opens up a new field for the study of proteins but also indicates a novel direction for exploring lactate in cancer, metabolism, immunity, and beyond. This review elaborates on this topic based on lactate metabolism and the effects of histone and non-histone lactylation on cell biology. It contributes to a further understanding of protein lactylation and elucidates the role of lactate in the regulation of cell function. Finally, we explore the possibility of targeting lactylation modification for the treatment of various diseases.
Lactate
The lactate shuttle

Lactate is the end-product of glycolysis and a major substrate for oxidative metabolism, serving as a bridge connecting many cellular pathways [15]. Lactate is transported and subsequently accumulated in different important organs via the blood circulation, and it also plays a role in regulating cellular energy and redox homeostasis through intracellular and cell-cell lactate shuttles [16]. Lactate can be exchanged between cells and the extracellular matrix, and between the inner and outer mitochondrial membranes, by monocarboxylate transporters (MCTs) and lactate dehydrogenase (LDH) [17]. Glycolytic cells take up large amounts of glucose for glycolysis in the cytoplasm, where pyruvate is converted into lactate by LDHA; lactate is then excreted to the extracellular matrix by MCT4. Of note, lactate taken up by oxidative cells via MCT1 is converted back to pyruvate in the cytoplasm by LDHB and then transported into the mitochondria via MCT1 to complete the tricarboxylic acid (TCA) cycle and contribute to energy metabolism [18]. Moreover, under stimulation by hypoxia, hydrogen peroxide, or lactate itself, the expression of hypoxia-inducible factor-1 (HIF-1) is upregulated, which promotes the expression of MCT4 and the export of lactate (Fig. 1) [19]. This lactate shuttle enables intercellular lactate sharing and links glycolysis with aerobic oxidation, allowing more efficient allocation and exploitation of energy by tumor cells.
Lactate: the classic and new perspectives of metabolism
Conventional wisdom suggests that glucose is the major source of nutrient supply and produces energy via two metabolic routes: glycolysis and mitochondrial oxidative phosphorylation [20]. Both metabolic pathways proceed through pyruvate, an intermediate product of glucose breakdown, accompanied by the production of small amounts of ATP and NADH [21]. Under aerobic conditions, pyruvate and NADH-derived electrons enter the mitochondria, where pyruvate is converted to acetyl-CoA, which then enters the tricarboxylic acid (TCA) cycle to produce large amounts of ATP [22]. Under pathological hypoxic conditions, when electrons fail to enter the mitochondria, pyruvate generated from glycolysis is instead converted to lactic acid by LDH. Lactic acid then dissociates into lactate and H+, causing lactate to accumulate in the body [23]. However, this view has recently been updated and refined. Aerobic glycolysis, also known as the Warburg effect, occurs and provides a way to quickly produce energy and lactate under stressful conditions such as tumors, exercise, trauma, sepsis, and heart failure, even though the cells are in an aerobic environment [6,24]. The key mechanism of aerobic glycolysis may lie in the upregulation of LDHA and pyruvate dehydrogenase kinase (PDK) in tumors and other states, which synergistically promotes the conversion of abundant pyruvate into lactate [25].
It is traditionally believed that lactate is a "metabolic waste product" whose catabolism occurs mainly in the liver, where it undergoes gluconeogenesis to regenerate glucose, a process known as the Cori cycle [26]. However, new concepts of lactate are gradually being established. In a 13C-isotope tracer and metabolomics study by Jang C et al. [27], lactate showed a higher circulatory turnover flux in fasted pigs despite glucose being the most abundant circulating carbohydrate; that is, the TCA cycle feeds primarily off circulating lactate, and glucose mainly provides nutrients for the TCA cycle through circulating lactate, suggesting that many organs simultaneously produce and consume circulating lactate. Moreover, lactate is widely used as a fuel not only in pigs but also in mice and humans, confirming that lactate is a common carbohydrate fuel in mammals [28]. The ubiquitous expression of MCTs and the oxidation of lactate into pyruvate for the TCA cycle by LDHB in cells also confirm that lactate has become a nearly universal carbohydrate fuel [27]. Aerobic glycolysis has been intensively studied in pathophysiological processes. An increased pyruvate kinase muscle isozyme 2 (PKM2)/PKM1 ratio plays an important role in promoting the metabolic "switch" from glucose oxidation to aerobic glycolysis, which utilizes glycolytic intermediates and upregulates glutaminolysis, the pentose phosphate pathway, and one-carbon metabolism to facilitate nucleoside biosynthesis, thus contributing to cell proliferation [29]. The production of lactate by aerobic glycolysis has also been shown to create a highly acidic local microenvironment, which may alter immune cell infiltration to promote immunosuppression and cell proliferation [30].
Lactate as a ligand for GPR81: a cell transduction molecule
Lactate is not only the most common carbohydrate fuel under specific physiological conditions but also carries a deeper biological significance. It is thought that lactate acts as a ligand for the G-protein-coupled receptor 81 (GPR81), which mediates signal transduction to facilitate the effects of lactate [31]. By activating GPR81, lactate is involved in extracellular signal-regulated kinase (ERK) dephosphorylation and promotes cell apoptosis and susceptibility to ischemic injury in ischemic brain injury, suggesting that GPR81 antagonists might be a potential strategy for brain ischemia [32]. In contrast, several studies support a possible protective role of lactate in ischemic brain damage, possibly by supplying energy to compensate for the bioenergetic crisis caused by ischemia [33,34]. Collectively, lactate at low concentrations may exacerbate neuronal injury by activating the GPR81 receptor, while at high concentrations it protects nerve cells through the supply of ATP. The lactate/GPR81 pathway can also inhibit lipolysis by downregulating cellular cAMP levels, making it an important target for intervening in lipid metabolism and treating metabolic syndrome [35]. In cancer, lactate/GPR81 signaling is also required for tumor growth. On the one hand, when lactate is the main energy source for tumor cells because of the Warburg effect, deletion of GPR81 results in mitochondrial functional inactivation and markedly attenuated tumor growth [36]. On the other hand, lactate/GPR81 can promote tumor progression through multiple signaling pathways. For example, lactate-induced GPR81 activation activates the transcription factor TEAD by reducing intracellular cAMP levels, further mediating programmed death-ligand 1 (PD-L1) promoter activation and an increase of PD-L1 protein levels in lung cancer, which confirms the key role of lactate in enabling cancer cells to evade immune surveillance [37]. Another example is lactate/GPR81 signaling through activation of the phosphoinositide 3-kinase (PI3K)/protein kinase B (Akt)-cAMP response element binding protein (CREB) pathway, resulting in increased production of the pro-angiogenic mediator amphiregulin (AREG) to promote angiogenesis [38]. Additionally, on the positive side, increased lactate release in the context of infection mediates signal transduction through the bone marrow endothelial lactate receptor GPR81, thereby promoting neutrophil mobilization by regulating the expression of endothelial VE-Cadherin, and hence vascular permeability in the bone marrow, as well as inducing the release of neutrophil mobilizers such as granulocyte colony-stimulating factor (G-CSF) [39]. Elevated lactate levels attenuate inflammation during delivery by acting on uterine GPR81 to downregulate key pro-inflammatory genes in a feedback manner, such as interleukin (IL)-1β, IL-6, and chemokine ligand 2 [40]. Collectively, these examples illustrate the functions of lactate in ischemic damage or neuroprotection, angiogenesis, tumor growth, and inflammation regulation.

Fig. 1 Regulation of lactate metabolism in normal, glycolytic, and oxidative cells. Glucose metabolism mainly comprises glycolysis and the TCA cycle in the mitochondrion. With sufficient oxygen, normal cells produce energy mainly through the TCA cycle. Under stimulation by hypoxia, tumors, and inflammation, glycolytic cells take up large amounts of glucose for glycolysis in the cytoplasm, where pyruvate is turned into lactate by LDHA and then excreted to the extracellular matrix by MCT4. Of note, lactate taken up by oxidative cells via MCT1 is converted back to pyruvate in the cytoplasm by LDHB and then transported into the mitochondria via MCT1 to complete the TCA cycle and contribute to energy metabolism
Mechanisms of histone lactylation
In 2019, a mass shift of 72.021 Da on histone lysine residues was first identified through mass spectrometry analysis of MCF-7 cells by Zhang et al. [14]; this shift corresponds to the addition of a lactyl group to the ε-amino group of a lysine residue. To further corroborate this modification, they showed that lactate exposure promotes lactylation of lysine residues in metabolic labelling experiments using isotopic L-lactate (13C3) [14]. Some amino acid residues with substrate specificity, such as lysine, arginine, and histidine, present positively charged side chains at physiological pH [41]. Moreover, lysine and arginine are often located on the hydrophilic surface of proteins, and the ε-amino group of lysine and the guanidinium group of arginine, exposed to the solvent, are susceptible to post-translational modification [42]. Sokalingam S et al. [43] demonstrated, through analysis of the electrostatic interactions in green fluorescent protein (GFP), that most ε-amino and guanidinium groups of lysine and arginine residues in protein structures are exposed to the solvent, which increases their potential for interactions with various physicochemical factors. Lysine is not only the most frequently modified amino acid but also the amino acid most widely affected by PTMs in comparison with arginine, making it a hot topic in enzymatic and chemical PTMs [44]. Arginine plays a major structural role in driving protein folding and stability because its guanidinium group forms three-dimensional ionic interactions. While the geometry of lysine residues is less stable than that of arginine, its ε-amino group can form single-ion interactions, making lysine more functionally flexible and therefore easier to modify [45]. The relatively free coordination of ions and the chemical reactivity of the ε-amino group make lysine a key component of various enzymatic catalyses. Compared with bacteria, mammals appear to have more varieties of lysine PTMs. Interestingly, lysine is one of the essential amino acids that mammals must obtain from the diet, which makes lysine PTMs nutrient-sensitive and able to sense cellular metabolic states to varying degrees [45]. Thus, the nature of lysine makes the related PTMs crucial for the regulation of cell health.
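The 72.021 Da shift can be rationalized by simple mass arithmetic: lactylation formally condenses lactic acid (C3H6O3) onto the lysine ε-amine with loss of water, leaving a lactyl group of composition C3H4O2. A minimal sketch of the check is given below (the atomic masses are standard monoisotopic values; the helper function is ours):

```python
# Monoisotopic atomic masses [Da]
MASS = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221}

def monoisotopic(formula):
    """Monoisotopic mass of a formula given as {element: count}."""
    return sum(MASS[el] * n for el, n in formula.items())

# Lactyl group = lactic acid (C3H6O3) minus water (H2O) = C3H4O2
lactyl = monoisotopic({"C": 3, "H": 4, "O": 2})
print(f"lactyl group: {lactyl:.3f} Da")  # -> 72.021 Da, the observed shift
```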
Exogenous and endogenous L-lactate, but not D-lactate, accumulates to a certain extent and directly promotes lactylation of specific lysine residues [46]. Glycolysis inhibitors correlate directly with both decreased lactate production and decreased lysine lactylation (Kla), while mitochondrial inhibitors and cellular hypoxia may increase lactate production and enhance Kla [47]. In the majority of investigated lactylation-modified proteins, lactyl moieties from L-lactate-derived lactyl-CoA are attached to the target protein via the ε-amino group of lysine residues.
Generally, this process starts with the relevant enzymes. First, the "writers", a series of specific acylases, transfer the lactyl group of lactyl-CoA, as a substrate, to histone or non-histone lysine residues, which alters protein structure and function. Then, the "erasers" stop the Kla cycle and prevent the lactylated lysine from having lasting effects; that is, they act as deacylases to remove part or all of the lactyl groups from the target proteins. Finally, effector proteins called "readers" specifically recognize this change in Kla to affect downstream signaling pathways and initiate various biological events (Fig. 2) [48]. Moreover, non-enzymatic lysine acylation involves the deprotonation of the ε-amino group of the substrate lysine by a general base, such as aspartate or glutamate. The deprotonated ε-amino group then initiates a nucleophilic attack on the thioester bond of acyl-CoA [49]. Importantly, the terminal carboxylic acids of acyl-CoA can form a highly reactive cyclic anhydride intermediate through intramolecular catalysis, which reacts strongly with free lysine ε-amino groups to produce non-enzymatic acyl-lysine modifications [50]. However, lactyl-glutathione (LGSH), rather than lactyl-CoA as in conventional studies, has been reported to be involved in another unique non-enzymatic lactyl transfer [51]. A typical non-enzymatic acyl-transfer mechanism occurs with acetyl-glutathione, whose acetyl group is transferred to the ε-amino group of a lysine residue to generate lysine acetylation (Kac) [52]. Gaffney DO et al. [51] further investigated non-enzymatic Kla based on LGSH, considering the chemical similarity between acetyl-glutathione and LGSH. The main conclusions are as follows: methylglyoxal, a reactive glycolytic by-product, rapidly binds to glutathione via glyoxalase 1 (GLO1) to produce LGSH, which transfers its lactyl group to a protein lysine residue through a non-enzymatic acyl-transfer mechanism. LGSH is also hydrolyzed by glyoxalase 2 (GLO2) to recycle glutathione and produce D-lactate. Thus, GLO1 amplification without a compensatory increase in GLO2 may shift the balance of the glyoxalase cycle towards LGSH and, in turn, Kla production.
Interestingly, glycolytic proteins are major targets for Kla through feedback regulation, manifested as inhibition of glycolytic enzyme activity and reduction of glycolytic metabolites [51]. Overall, knowledge of Kla as a novel PTM remains limited, particularly regarding its substrates, modification reactions (enzymatic or non-enzymatic), and lactylation dynamics.
Crosstalk between lactylation and other acylations
There is potential crosstalk among lysine acylations because they are intertwined in the metabolic networks of cells. A deeper understanding of crosstalk among PTMs may help in further mining of lactylation. Therefore, attention must be paid to the interrelated and co-regulated metabolic pathways when explaining the relationship between lactylation and other acylations [53]. Most proteins function through interactions with other proteins. It is reported that many proteins carry at least one PTM, and many carry more than one, suggesting that crosstalk among different PTMs is ubiquitous [54]. In particular, there is a high degree of similarity and coordination between lactylation and acetylation, which are important processes linking metabolism and epigenetics [55]. For instance, the two types of PTMs tend to target lysine and share some enzymes, e.g., p300 as the writer [14]. Li L et al. [56] showed that Gli-like transcription factor 1 (Glis1) enhanced levels of acetyl-CoA and lactate and synergistically drove histone acetylation and lactylation through transcriptional activation of glycolytic genes and higher glycolytic flux. Moreover, lactate may act as an important transcriptional regulator and induce histone hyperacetylation by promoting the expression of histone deacetylase (HDAC)-associated genes to inhibit HDAC activity [57]. Interestingly, several studies have shown that a portion of histone acetyltransferases (HATs) and HDACs catalyze the lactylation and delactylation of histones, respectively [14,46]. Since lactylation and acetylation are both subject to regulation by HATs and HDACs, it is reasonable to examine the correlation between them [46,58]. The link between lactylation and acetylation has been shown in further studies. For instance, lactate simultaneously promotes the lactylation and acetylation of high mobility group protein B1 (HMGB1) in macrophages by activating p300 acetylase and inhibiting the activity of SIRT1 deacetylase [59]. Additionally, cold exposure can trigger metabolic reprogramming towards aerobic glycolysis driven by mitochondrial damage in macrophages and increase histone acetylation to promote the release of inflammatory factors. In turn, the accumulation of intracellular lactate results in histone lactylation as a self-protective mechanism to initiate the transcription of anti-inflammatory genes [60]. However, the changes in Kla and Kac differ across cell types. Under hypoxic conditions, both human HeLa cells and murine macrophages show increased Kla levels, but Kac levels decrease in the former and remain unaffected in the latter [14]. For this reason, the changes in Kla and Kac in different cells responding to different stimuli are not always consistent. Therefore, simply attributing the relationship between Kla and Kac, or other acylation modifications, to synergy or competition seems unreasonable.
Fig. 2 Lactate from the extracellular matrix or glycolysis results in lactylation. A Lactate may be converted to lactyl-CoA, whose lactyl group is then transferred by a "writer" to lysine, leading to lactylation of histones or non-histones that affects gene expression or downstream signaling pathways. B Three types of interactions (in different color blocks) between lactylation (pink residues with La) and other PTMs (green residues with R) are shown.
In addition to acetylation, other post-translational acylations, such as succinylation, crotonylation, and butyrylation, have also been reported to crosstalk with lactylation. For instance, succinylation of PKM2 at lysine residue K311 in LPS-induced macrophages helps PKM2 enter the nucleus to promote the expression of IL-1β and HIF-1α-dependent genes and the metabolic shift to aerobic glycolysis (lactate production) [61,62]. In contrast, SIRT5 acts as an "eraser" of succinylation, effectively desuccinylating and activating PKM2, thereby reversing the above process [61]. Moreover, histone lysine crotonylation (Kcr) and Kla are distributed widely throughout the brain, and HDACs have been shown to "erase" histone Kcr and Kla. Inhibition of HDACs stimulates the levels of histone acylation modifications (H3K9cr and H3K18la) in vivo and in vitro and broadly promotes neuronal differentiation and cell proliferation [63]. Lactylation may also be associated with butyrate-mediated butyrylation, which contributes to increased levels of protein Kla in human HeLa cells and may be prevented by inhibition of HDACs [64]. Overall, lactylation may be related to other acylation modifications in ways we do not yet understand, including propionylation, glutarylation, beta-hydroxybutyrylation, and 2-hydroxyisobutyrylation [65][66][67][68].
Lactylation in health and diseases
Protein lactylation has been extensively detected and studied in various disease models. Lactate accumulation from metabolic reorganization controls disease progression in multiple diseases. At the same time, lactate and histone lactylation also appear highly necessary for neurodevelopment and for orchestrating gene expression changes [63]. The role of histone or non-histone lactylation in neuronal development, cancer, inflammation, embryogenesis, cerebral disease, fibrosis, and related processes is discussed in the following sections (Table 1).
Cellular development and differentiation
Histone lactylation marks numerous genes, is widely distributed throughout the developing telencephalon, and changes dynamically over the course of development, indicating that histone lactylation is an intrinsic pathway for regulating gene expression during mammalian development [63]. There is a metabolic transition from glycolysis to mitochondrial oxidative phosphorylation during neurogenesis, which contributes to lower levels of lactate and lactyl-CoA and could affect total histone Kla levels during development [69]. Dai SK et al. [63] showed that the levels of H3K18la, and even of total histone H3 Kla, declined over time during neurogenesis and differentiation in mice, while increased levels of multiple histone lactylations pre-activated neuronal transcriptional programs and promoted the differentiation and maturation of neural stem cells upon inhibition of the "erasers" HDAC1-3. In contrast, p300/CBP acts as a "writer" of histone lactylation, and its knockdown inhibits embryonic neural differentiation in the normal and Rubinstein-Taybi syndrome brain [70]. Similarly, genes associated with neural development and differentiation remain primed in the early stages of neurogenesis, and HATs and HDACs respectively promote the lactylation and delactylation of histones at these primed genes, thereby regulating neurogenesis [71,72]. In fact, the switch for histone lactylation depends on the balance between "writers" (such as CBP/p300 and HATs) and "erasers" (such as HDACs) acting as regulatory elements of genes that determine neural fate. Overall, the crosstalk of multiple histone acylations, not just lactylation, plays a key role in the regulation of neural development and disease. In addition, glucose tends to be metabolized predominantly to lactate through aerobic glycolysis during osteogenic differentiation, characterized by elevated LDHA levels [73]. JunB, a component of the activator protein-1 (AP-1) transcription factor family, is involved in osteoblast differentiation and bone formation [74]. Lactate-derived histone H3K18la levels gradually increase and are remarkably enriched on the promoter of JunB to activate its expression, which contributes to the formation of mineralized nodules and alkaline phosphatase activity [73]. Moreover, lactate supplementation also facilitates transcriptional elongation through enhanced histone lactylation on germline and embryo cleavage-related genes, inducing global up-regulation of genes involved in embryo cleavage [55] (Table 2).
Inflammation
There is growing evidence that histone or non-histone lactylation is strongly associated with inflammation [14,59,75-77]. Current research on lactylation in inflammation focuses mainly on macrophages, which are highly plastic cells of the innate immune system that can promote or resolve inflammation depending on their functional phenotype [78]. In the colitis model, the toll-like receptor (TLR) stimulated by LPS activates PI3K-Akt in a B-cell adapter for PI3K (BCAP)-dependent manner, which further leads to the accumulation of lactate and histone lactylation and thereby enhances the expression of reparative macrophage genes associated with the M2-like phenotype, such as ARG1 and KLF4 [75]. Conversely, the loss of BCAP may exaggerate the inflammatory response following TLR activation. In addition, lactate promotes lactylation of PKM2 at the K62 site, which inhibits its tetramer-to-dimer transition and nuclear distribution and thus activates PKM2, ultimately inducing a macrophage phenotypic switch toward reparative M2 macrophages, manifested by decreased expression of inflammatory factors [77]. Similarly, during polymicrobial sepsis with elevated lactate levels, macrophages can enhance the uptake of extracellular lactate via MCT and promote the lactylation of HMGB1 in a manner dependent on the "writer" p300/CBP. HMGB1 with elevated lactylation levels then accumulates in the cytoplasm and is released from macrophages via exosome secretion, further inducing endothelial barrier dysfunction [59]. Concerning histone lactylation, which is the most widely studied, hypoxia and bacterial challenges boost lactate production and elevate histone H3 lactylation at the K18 site via glycolysis in the late phase of M1 macrophage polarization, which induces the expression of genes involved in damage-repair homeostasis [14]. Significantly, histone lactylation and acetylation have different temporal kinetics, with lactylation occurring later than acetylation, which explains the expression of repair genes in the late phase of M1 macrophage polarization to promote homeostasis. Moreover, multiple studies have shown that long non-coding RNAs (lncRNAs) play a crucial role in pathogenic infections (such as in the host immune response and pathogen transmission) [79]. LPS treatment and bacterial infection upregulate the expression of LINC00152 in human colon cell lines by introducing histone lactylation on its promoter and decreasing the binding efficiency of the repressor YY1, which counters both Salmonella invasion and the inflammatory response [76].
Brain diseases
Lactate has been proposed as an energy substrate for neurons and a valuable cell-cell signaling molecule in the brain, linked to neurological and psychiatric diseases [80]. Abnormal expression of lymphocyte cytosolic protein 1 (LCP1) is closely related to various cancer stages and severities [81]. Recently, Wen et al. [82] showed through proteomic analysis that LCP1 was significantly up-regulated on day 14 in the middle cerebral artery occlusion (MCAO) rat model. Elevated lactylation of LCP1, driven by excessive glycolysis in cerebral infarction, reduces its degradation, lowers cell viability and enhances the apoptosis rate in vitro, and increases brain water content, infarct area, and neurological deficit scores in vivo [83]. However, LCP1 knockdown or inhibition of glycolysis reverses this process and relieves cerebral infarction injury [83]. HMGB1 is typically loosely bound to DNA in the nucleus but is released into the cytoplasm or extracellular space when cells are damaged by external stimuli, inducing apoptosis and inflammatory responses [84]. Yao X et al. [85] showed that upregulated LDHA increased the lactate content and promoted histone H3K18la, which was significantly enriched on the HMGB1 promoter and upregulated HMGB1 expression, hence inducing cell pyroptosis and aggravating cerebral ischemia-reperfusion injury. Similarly, elevated lactate and histone H4K12la levels are also observed in Alzheimer's disease and further promote the expression of the glycolytic gene PKM2, forming a positive feedback loop that contributes to the abnormal activation and dysfunction of microglia as well as neuroinflammation [86]. Interruption of this loop by blocking PKM2 can ameliorate microglial dysfunction and Aβ pathology [86]. Interestingly, stress-associated neural excitation and social defeat stress also increase lactate and histone H1 lactylation levels in the brain, which is associated with decreased social behavior and increased anxiety-like behavior [87].
Fibrosis
There is compelling evidence that upregulation of glycolysis in trophoblast cells, macrophages, and myocardial endothelial cells contributes to the progression of placental, pulmonary, and myocardial fibrosis, respectively [88][89][90]. Reduced blood flow to the uteroplacental unit in preeclampsia leads to a hypoxic placenta, which in turn promotes excessive lactate production by trophoblast cells and induces histone lactylation to regulate the expression of genes associated with preeclamptic placental fibrosis (FN1 and SERPINE1) [88]. In addition, TGF-β1 (transforming growth factor-β1) stimulates lactate production in lung myofibroblasts, which secrete it into the extracellular milieu to promote histone lactylation at the promoters of profibrotic genes in macrophages, thereby inducing the expression of profibrotic mediators [89]. Additionally, after myocardial infarction, high lactate levels induce lactylation of Snail1, a TGF-β transcription factor, thereby activating the TGF-β/Smad2 pathway to further up-regulate endothelial-to-mesenchymal transition and exacerbate cardiac dysfunction and fibrosis [90]. In these processes, p300 acts as a "writer" mediating lactate-induced lactylation of histones and Snail1. Intriguingly, Wang N et al. [91] showed that GCN5, as a writer of histone lactylation, promoted histone H3 lactylation in monocytes in an IL-1β-dependent manner after myocardial infarction and activated the reparative genes Lrg1, Vegf-a, and IL-10, which is conducive to a reparative environment and improved cardiac function. Overall, these findings shed light on the key contribution of lactate and lactylation to the pathogenesis of different fibrotic diseases.
Tumors
The tumor microenvironment is often characterized by lactate, a core metabolite produced by the Warburg effect [92]. Over recent decades, lactate has come to be considered a biological marker of malignancy and has been found to be strongly associated with shorter overall survival and a higher incidence of metastasis in tumor patients [93]. This association raises the question of whether lactate has a role in cancer progression. The available data already suggest that tumor-associated lysine lactylation occurs on both histone and non-histone proteins. In hepatocellular carcinoma (HCC), glypican-3 knockdown reduces the lactylation of c-myc and thus its protein stability and expression, thereby inhibiting the progression of liver cancer [94]. Another example is that high lactylation of adenylate kinase 2 in HCC significantly reduces its activity, perturbs ATP metabolism, and down-regulates the intrinsic apoptosis pathway, promoting cancer cell proliferation and migration and predicting poor prognosis in HCC patients [95]. Concerning colorectal cancer (CRC), hypoxia-induced glycolysis promotes the lactylation of β-catenin, further enhancing its protein stability and expression and ultimately aggravating the progression of CRC through the Wnt signaling pathway [96]. In prostate cancer, elevated lactate promotes the lactylation and stability of HIF1α to induce KIAA1199 transcription and KIAA1199-mediated angiogenesis, vasculogenic mimicry, and depolymerized hyaluronic acid levels [97]. Moreover, Gu J et al. [98] showed that lactylation of MOESIN at Lys72 enhanced TGF-β and downstream SMAD3 signaling in Treg cells through TGF-β receptor I, regulating the development and function of Treg cells to increase tumorigenesis and tumor growth.
In addition, lactylation of histone H3 at multiple sites (e.g., K9, K18, and K56) is also found to be involved in the regulation of various cancer types, including lung, prostate, kidney, colon, and liver cancers and melanoma [99][100][101][102][103][104]. He Y et al. [99] demonstrated that prostate and lung adenocarcinomas exhibited preferential utilization of aerobic glycolysis and concomitant histone hyperlactylation due to impairment of Parkin-mediated mitophagy, which subsequently led to metabolic reprogramming and neuroendocrine differentiation following upregulation of neuroendocrine gene expression. However, the cell fate determinant Numb reversed this process by binding to Parkin. Lactate also regulates cellular metabolism, at least in part, through histone lactylation-mediated down-regulation of HK-1 (a glycolytic enzyme) and up-regulation of IDH3G (a TCA cycle enzyme) gene expression in non-small cell lung cancer [100]. In clear cell renal cell carcinoma, inactive von Hippel-Lindau (VHL) induces histone lactylation in a HIF-dependent manner, thereby transcriptionally activating the expression of platelet-derived growth factor receptor β (PDGFRβ) to promote tumor progression. In turn, overexpression of PDGFRβ positively stimulates histone lactylation [101]. Concerning hepatocellular carcinoma, demethylzeylasteral reduces the lactate level and attenuates histone lactylation, playing an anti-cancer role by regulating the glycolytic metabolic pathway [102]. In ocular melanoma, elevated histone lactylation effectively promotes tumorigenesis by up-regulating the transcription of YTHDF2, which in turn induces the degradation of PER1 and TP53 mRNAs via binding to their respective m6A sites [103]. Moreover, in colorectal cancer, elevated lactate in tumor-infiltrating myeloid cells induces METTL3 expression by promoting histone lactylation and hence m6A modification of Jak1 mRNA, which promotes Jak1 protein translation and strengthens downstream STAT3 signaling, enhancing the immunosuppressive functions of myeloid cells to promote tumor immune escape [104].
Other epigenetic regulations
In addition to the processes above, lactate-induced lactylation can also contribute to DNA repair, embryo implantation, and the improvement of fatty liver, but it can likewise worsen pulmonary hypertension and proliferative retinopathies [105][106][107][108][109][110]. Using an alkynyl-functionalized bioorthogonal chemical reporter, YnLac, Sun Y et al. [105] found that hyperlactylation of PARP1 regulates its ADP-ribosylation activity and may contribute to DNA repair. This bioorthogonal chemical reporter of lactylation opens new avenues for the functional analysis of this newly discovered modification in normal physiology and disease. During pregnancy, increased levels of histone H3K18 lactylation and lactate help to maintain glutathione-based redox homeostasis and apoptotic balance, which are essential for successful embryo implantation [106]. Conversely, inhibition of LDHA activity reduces lactate and histone lactylation, thereby impairing embryonic pre-implantation development [107]. Another important benefit is that MPC1 knockout induces lactate accumulation and promotes lactylation of FASN at the K673 site in hepatocytes, inhibiting FASN activity and mediating the down-regulation of liver lipid accumulation, as reported by Gao R et al. [108]. On the downside, hypoxia-induced mitochondrial reactive oxygen species (mROS) trigger lactate accumulation and histone lactylation in pulmonary artery smooth muscle cells (PASMCs) by upregulating the HIF-1α/PDK1&2/p-PDH-E1α axis, which further promotes PASMC proliferation and vascular remodeling and exacerbates hypoxic pulmonary hypertension [109]. Moreover, hyperlactylation of non-histone YY1 under hypoxia is regulated by p300 as a "writer". Highly lactylated YY1 binds directly to the promoter of FGF2 and promotes its transcription, thus promoting neovascularization. This effect is reversed by the p300/CBP inhibitor A-485 [110].
Lactylation in different cell biology processes
An increasing number of studies focus on the role of lactate-mediated lactylation in different cell biology processes to understand the function of protein lactylation. We specifically discuss lactylation-mediated antitumor immunity and also focus on macrophages, immune cells, and other cell types.
Macrophages
Macrophages are a highly heterogeneous cell population that act as scavengers, regulate immune reactions, and participate in the maintenance and restoration of immune homeostasis [111]. Activated macrophages are generally divided into two phenotypes: proinflammatory, so-called M1-type macrophages, and anti-inflammatory M2-type macrophages [112]. In the early stage of tumor development, tumor-associated macrophages facilitate a proinflammatory environment in the tumor, but in later stages, glycolysis-driven elevation of lactate-derived histone H3K18la skews macrophage polarization toward the M2 phenotype [14]. In polymicrobial sepsis, HMGB1 in macrophages shows elevated lactate-derived lactylation levels, accumulates in the cytoplasm, is released via exosome secretion, and results in endothelial dysfunction [59]. Elevated histone H4K8la levels in macrophages upregulate LINC00152 by reducing the negative regulatory efficiency of YY1 on LINC00152, thereby inhibiting Salmonella invasion and the inflammatory response and promoting tumor growth [76]. Moreover, TGF-β1 stimulates lactate production in myofibroblasts, which secrete it into the extracellular milieu to promote histone lactylation in macrophages, thereby inducing the expression of profibrotic mediators [89]. Overall, lactylation affects the metabolic reprogramming and immunomodulatory functions of macrophages; mainly, it promotes polarization changes that favor damage repair and the tumor-supportive phenotype.
Immune cells
Elevated lactylation of MOESIN at Lys72 in Treg cells, mediated by cancer-derived lactate, enhances TGF-β and downstream SMAD3 signaling, regulating Treg cell development and function with consequences for tumorigenesis and antitumor therapy [98]. A high level of histone lactylation in tumor-infiltrating myeloid cells induces METTL3 expression and m6A modification of Jak1 mRNA, which promotes Jak1 protein translation and stimulates downstream STAT3 signaling that enhances myeloid immunosuppressive functions [104]. Overall, these studies show that lactylation may exert an immunosuppressive effect on several types of immune cells in the tumor microenvironment.
Neurocytes and osteoblasts
The lactylation level of histone H3 in neural stem cells has been observed to decrease over time during mouse neurogenesis. However, levels of multiple histone Kla marks rise significantly to orchestrate gene expression changes and broadly participate in neuronal differentiation and cell proliferation when the "erasers" HDAC1-3 are inhibited or the "writer" p300/CBP is activated [63]. Similarly, increased H3K18la expression in osteoblasts promotes the formation of mineralized nodules and alkaline phosphatase activity, which plays an important role in osteoblast differentiation [107]. In addition, elevated lactylation of LCP1 and histone H3 in neurocytes after cerebral infarction reduces LCP1 degradation, lowers cell viability, and enhances the expression of IL-18 and IL-1β and the apoptosis rate of neurocytes [83,85]. Activated microglia in Alzheimer's disease show excessive lactate accumulation and histone lactylation, further promoting glycolytic gene PKM2 expression and resulting in abnormal activation and dysfunction [86]. Moreover, increases in lactate and histone lactylation levels in neurocytes occur in response to social defeat stress and stress-associated neural excitation and are associated with increased anxiety-like behavior [87]. In summary, existing studies have revealed non-canonical functions of lactylation during neural and osteogenic differentiation.
Tumor cells
The lactylation of c-myc and AK2 in HCC cells affects their degradation and activities, with consequences for cell viability, migration, and invasion [94,95]. Similarly, enhanced lactylation of β-catenin in colon cancer cells increases the stability and expression of β-catenin, exacerbating the progression of colon cancer via the Wnt signaling pathway [96]. HCC cells also exhibit elevated lactylation of histone H3, which demethylzeylasteral (DML) can reduce, playing an anticancer role by regulating the glycolytic metabolic pathway [102]. Elevated histone lactylation levels have also been reported in lung cancer cells, renal cancer cells, and melanoma cells to promote tumor occurrence and development by mediating HK-1 and IDH3G gene expression or by transcriptionally activating the expression of PDGFRβ and YTHDF2 [100,101,103]. Moreover, in prostate cancer cells, elevated lactylation of HIF-1α activates KIAA1199, stimulates KIAA1199-mediated angiogenesis and vasculogenic mimicry, and increases depolymerized hyaluronic acid [97].
Other cell types
Generally, research suggests that lactylation functions differently in different cell types and conditions. In retinal microglia under hypoxia, hyperlactylated non-histone YY1, regulated by the "writer" p300, directly interacts with the promoter of FGF2 and promotes its transcription, thereby activating neovascularization [110]. Moreover, histone H3 lactylation is reported in endometrial cells, oocytes, and embryonic cells, where it contributes to the maintenance of glutathione-based redox homeostasis and apoptotic balance, both essential for successful embryo implantation [106,107]. Furthermore, pulmonary hypertension and fatty liver disease are exacerbated by lactylation changes of histone H3 in pulmonary smooth muscle cells and of FASN in hepatocytes, respectively [108,109].
Lactate/lactylation-targeting drugs
The abundance and immunomodulatory effects of lactate and lactylation may offer a novel direction for targeted therapy in various diseases, alone or in combination with other therapeutic strategies. Lactate and its transporter proteins are likely to serve as new therapeutic targets, for example MCT1, MCT4, and LDHA, which are currently under preclinical investigation and in clinical trials. High LDH levels in the blood and tumor microenvironment are associated with a poor prognosis [113]. The transport capacity of MCT1/4 is critical for intracellular and extracellular lactate levels, moving lactate into and out of the cell according to the substrate concentration [114]. Several MCT inhibitors, including syrosingopine, AR-C155858, 7ACC2, BAY8002, SR13800, and AZD3965, have been shown to inhibit MCT activity, but only the MCT1 inhibitor AZD3965 is currently in human clinical trials (NCT01791595) [115][116][117][118][119]. For example, metabolic changes induced by the MCT1 inhibitor AZD3965 (particularly the decrease in lactate export) promote increased infiltration of anti-tumor immune cells (dendritic and natural killer cells), thereby inhibiting tumor growth in mice [120]. Moreover, clinical trials in humans have shown AZD3965 to be well tolerated at doses that achieve target engagement, with the most common adverse events being electroretinogram changes, fatigue, and anorexia, all reversible [121]. Co-treatment with anti-PD-1 and the LDHA inhibitor GSK2837808A has a stronger anti-tumor effect than anti-PD-1 therapy alone; mechanistically, lactate degradation reduces regulatory T (Treg) cell induction and tumor growth and enhances anti-tumor immunity [98]. In addition, oxamate and dichloroacetate also inhibit lactate production for the treatment of a variety of tumors and metabolic diseases by targeting LDHA and pyruvate dehydrogenase kinase (PDHK), respectively [99,101]. Furthermore, 2-deoxy-D-glucose (2DG), 3-bromopyruvic acid (3-BrPA), tristetraprolin, and lonidamine have been reported to inhibit hexokinase and thus regulate glycolysis [122][123][124]. Interestingly, the lactate receptor GPR81 induces chemoresistance in hepatic cancer cells by binding to lactate [125], and curcumin and LRH7-G5 can restore the sensitivity of resistant tumor cells to chemotherapy by targeting GPR81 [118,125]. Overall, targeting lactate and its transporters not only enhances the antitumor responses of the immune system but also significantly increases therapeutic efficiency and acts synergistically in combination with checkpoint inhibitors.
Several drugs targeting protein post-translational modifications (i.e., the enzymes that catalyze lactylation and delactylation) have shown therapeutic promise for a variety of diseases in clinical trials. Highly selective delactylase agonists (e.g., ITSA-1, targeting HDACs) and inhibitors of lactylation induction (e.g., garcinol, targeting HATs) affect various physiological processes regulated by histone or non-histone lactylation and can be exploited therapeutically in a variety of diseases [126,127]. Moreover, the p300/CBP inhibitor A-485 exerts an anti-retinal-neovascularization effect in proliferative retinopathies through inhibition of YY1 lactylation at the K183 site [110]. Similarly, CPTH6, a selective GCN5 HAT inhibitor, can induce apoptosis in human leukemia cells [128]. Controlling the switch from lactate production and lactylation to acetyl-CoA production and the TCA cycle may provide new opportunities for targeted cancer therapies. Therefore, the exact mechanism of lactylation requires further study to identify novel targets for drug development.
Conclusions and perspectives
In summary, the new target, protein lactylation, is a "double-edged sword" for human health and disease, because it is closely related to multiple physiological and pathological processes such as neuronal development, embryogenesis, cancer, inflammation, cerebral disease, and fibrosis. Briefly, the positive effects of protein lactylation on human health are typically manifested as key regulatory roles in the maturation of neural stem cells, in osteoblast differentiation and bone formation, and in transcriptional elongation of embryo cleavage-related genes. Lactylation also contributes to DNA repair, embryo implantation, and the improvement of fatty liver. On the other hand, elevated lactylation may have a causative or predisposing role in the worsening of cancer, inflammation, fibrosis, brain diseases, pulmonary hypertension, and even proliferative retinopathies. These perspectives are critically significant and are described in detail for the first time in this review. Although the functions of protein lactylation in health and disease have been reported, additional studies exploring the concrete molecular mechanisms of lactylation are needed to aid the development of more targeted lactylation inhibitors/agonists and facilitate their application in clinical practice.
Table 1
The function of lysine lactylation modification in physiology and disease
"Medicine",
"Biology"
] |
Carrier trapping and recombination: the role of defect physics in enhancing the open circuit voltage of metal halide perovskite solar cells
One of the greatest attributes of metal halide perovskite solar cells is their surprisingly low loss in potential between bandgap and open-circuit voltage, despite the fact that they suffer from a non-negligible density of sub-gap defect states. Here, we use a combination of transient and steady-state photocurrent and absorption spectroscopy to show that CH3NH3PbI3 films exhibit a broad distribution of electron traps. We show that the trapped electrons recombine with free holes unexpectedly slowly, on microsecond time scales, relaxing the limit on the obtainable open-circuit voltage (V_OC) under trap-mediated recombination conditions. We find that the observed V_OCs in such perovskite solar cells can only be rationalized by considering the slow trap-mediated recombination mechanism identified in this work. Our results suggest that existing processing routes may be good enough to enable open-circuit voltages approaching 1.3 V in ideal devices with perfect contacts.
Introduction
Metal halide perovskite solar cells owe their rapid rise in power conversion efficiency to over 22%1,2 to several key properties. They benefit from low exciton binding energies,3,4 high ambipolar mobilities,5-7 high absorption cross-sections,8 and long carrier lifetimes.5,9,10 These properties have allowed this class of materials to function effectively not just as photovoltaic devices, but also as light-emitting diodes (LEDs) and optically pumped lasers.11-13 Still, the materials are known to suffer from a significant density of sub-gap states that should induce non-negligible recombination losses.9,10,14,15 Extensive time-resolved photoluminescence and terahertz spectroscopy on the most commonly employed CH3NH3PbI3 perovskite has shown that, at solar fluences, the photocarrier dynamics are limited by a monomolecular trapping process, while the radiative bimolecular recombination process is surprisingly slow and hence only dominates at high excitation densities.7,9,10,16 While it is accepted that carrier trapping plays a dominant role in perovskite photocarrier dynamics at solar fluences, the nature of the traps and the recombination pathway has remained unexplored.
Generally, carrier trapping into deep sub-gap states is considered to lead to rapid non-radiative recombination, which severely limits the quasi-Fermi-level splitting of the material and hence the photovoltage of the solar cell. This follows the Shockley-Read-Hall (SRH) framework, where recombination occurs through a state within the forbidden band of the semiconductor. SRH behavior can be categorized into two distinct regimes, where the semiconductor is either doped or closer to intrinsic. In a highly doped semiconductor, trapping into a sub-gap state leads to immediate annihilation by the many excess carriers of the opposite charge, while in a lightly doped or undoped material trapping into such a state does not necessarily lead to immediate recombination. The SRH model has generally been applied to highly doped silicon solar cells, where trapping results in immediate recombination and hence the trapping lifetime of the minority carrier becomes the most relevant parameter.17 Indeed, SRH recombination has generally been proposed to dominate in lead halide semiconductors.7,10,17,18 Despite evidence that the perovskite layers are generally only lightly doped, previous works have primarily assumed that the recombination rate of the trapped electron or hole is the same as the trapping rate, and hence the trapping lifetime has been used to estimate both electron and hole diffusion lengths.6,7,19 With the reported sub-gap trap densities of around 10^16 cm^-3 and an effective trapping lifetime of about 100 ns,9,10 rapid trap-mediated recombination would severely limit the attainable photovoltages of perovskite solar cells. Still, this relatively new technology boasts voltages already approaching 1.2 V,20,21 which is remarkably high for a semiconductor with a bandgap of only 1.6 eV. In the limit where all recombination is due to radiative band-to-band recombination, the material should be able to achieve ideal V_OCs of around 1.3 V,22 not much higher than what has already been obtained experimentally. This suggests that the sub-gap states, thought to be almost unavoidable in a solution-processed and low-temperature-crystallized material, may not form highly detrimental recombination centers. Previous photoconductivity measurements led us to suggest that the carrier trapping process leads to a photodoping effect, which implies a long-lived trapped species and an associated long-lived free-carrier species.14 Such slow trap-mediated recombination would allow for far greater Fermi-level splitting and V_OCs than rapid trap-mediated recombination, where the trapped carriers recombine almost instantaneously with free carriers. Still, such a phenomenon has hitherto remained unexplored within the field of perovskite solar cells. While several photophysical models have been developed to explain photoluminescence decays,7,9,10 none has been extended to consider the recombination lifetimes of the trapped charge, even though this may be one of the most relevant parameters in determining how detrimental a given density of trap sites is to the total recombination flux, quasi-Fermi-level splitting, and photovoltage of solar cells. Some important questions that remain to be addressed can be summarized as follows: (1) do the predominant defects act as electron or hole traps? (2) What is their energetic distribution? (3) How rapid is trap-mediated recombination?
(4) How does the effective carrier lifetime affect the theoretically obtainable V_OCs of perovskite solar cells?
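To frame these questions, it helps to recall the standard single-level SRH rate; the following is a textbook expression quoted for reference, not a result of this work:

$$R_{\mathrm{SRH}} = \frac{np - n_i^2}{\tau_p\,(n + n_1) + \tau_n\,(p + p_1)}, \qquad n_1 = n_i\,e^{(E_t - E_i)/kT}, \quad p_1 = n_i\,e^{-(E_t - E_i)/kT},$$

where E_t is the trap level, E_i the intrinsic level, and τ_n, τ_p the capture lifetimes. In the heavily doped limit (e.g., p >> n, n_1, p_1) this collapses to R ≈ n/τ_n, so trapping of the minority carrier is immediately rate-limiting; in a near-intrinsic film no such collapse occurs, and the lifetime of the trapped carrier enters explicitly.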
In this work, we directly monitor trapped-electron-free-hole recombination kinetics in metal halide perovskite films for the first time, establishing that CH3NH3PbI3 suffers from a significant and broad density of sub-gap electron traps. Surprisingly, after an initial fast electron trapping process (100 ns lifetime), the trapped electrons slowly recombine with free holes on tens of μs timescales, thus deviating significantly from the expected rapid trap-mediated recombination pathway. This results in a situation where most of the traps are filled at solar fluences, allowing the solar cells to obtain improved photovoltages.
We finally address the implications for the theoretically obtainable V_OCs in perovskite solar cells using simple Fermi-Dirac statistics. If we account for the slow trapped-charge recombination and the associated trap filling, we estimate maximum obtainable V_OCs close to 1.3 V, about 150 mV higher than expected for rapid trap-mediated recombination, and clearly more consistent with the experimental results.20,21 These findings shed light on the high photovoltages achieved in this system despite the inevitable presence of significant trap densities inherent to solution-processed semiconductors.
Results and discussion
Nature and energetic distribution of trap sites
To first measure the trap energy distribution, we performed Fourier transform photocurrent spectroscopy on a perovskite layer with two lateral Ohmic contacts, which serves as a photoresistor. Any photocurrent collected upon sub-gap excitation directly implies the presence of sub-gap sites, so this measurement allows us to obtain the energetic distribution of such states.
The sample structure is shown in Fig. 1a, and the normalized photocurrent spectrum is shown in Fig. 1b. We used a gold/perovskite/gold structure (the perovskite deposition method for all measurements, except where otherwise noted, is the PbCl2-derived perovskite), which guarantees an Ohmic response limited by the semiconductor layer rather than the contacts (see Fig. S1, ESI†),23,24 applying a bias of 10 V over a channel of 4 mm. Since the device functions as a planar photodetector with symmetric contacts, we only require the presence of one free carrier to measure a photocurrent under an external applied bias.14 This allows us to detect transitions that result in only one free carrier. Consistent with previous reports of low Urbach energies, we observe a sharp band-edge onset in the photocurrent corresponding to an Urbach energy of 25 meV25 (Fig. 1), but we also observe an additional broad tail with a distinct slope in the photocurrent, extending from the band edge to the instrument limit at almost 1.1 eV. This is direct evidence for the presence of a broad distribution of trap states down to at least 0.5 eV from either the valence or conduction band edge. Previous theoretical studies have focused on identifying distinct types of defects with discrete energy levels, with the most recent work suggesting that iodide interstitials are likely to manifest themselves as relatively deep electron traps.26-28 The shape of our sub-gap photocurrent spectrum is not completely consistent with this scenario. It seems possible that the broad distribution of sub-gap states could be due to inhomogeneity in crystallinity, and perhaps stoichiometry, on the nano-to-micro scale, or even to the presence of multidimensional defects, which have not yet been well studied.
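To make the two-slope reading of such a spectrum concrete, the sketch below fits a photocurrent tail with an Urbach edge plus a broad deep-trap exponential. The data are synthetic: only the 25 meV Urbach energy is taken from the measurement above, and the tail parameters are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic photocurrent spectrum: a sharp Urbach edge (25 meV, as measured
# above) plus a broad, shallow-sloped deep-trap tail (illustrative values).
E = np.linspace(1.10, 1.65, 250)                    # photon energy (eV)
Eg, EU = 1.60, 0.025                                # bandgap, Urbach energy
spectrum = (np.exp(np.clip((E - Eg) / EU, None, 0.0))
            + 1e-4 * np.exp((E - Eg) / 0.20))       # edge + deep tail

def model(E, Eg, EU, A, Et):
    # Urbach edge saturating above the gap, plus a deep-trap exponential
    edge = np.exp(np.clip((E - Eg) / EU, None, 0.0))
    return edge + A * np.exp((E - Eg) / Et)

popt, _ = curve_fit(model, E, spectrum, p0=[1.61, 0.03, 2e-4, 0.25])
print("fitted Urbach energy: %.1f meV" % (1e3 * popt[1]))

The two clearly different slopes are what allow the band-edge and deep-trap contributions to be separated in practice.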
Having established that our material suffers from a broad distribution of sub-gap trap sites, we aim to determine whether this distribution is associated with electron or hole trapping, or both. Here, we measure the photocurrent from the same device architecture shown in Fig. 1a upon monochromatic excitation both above and below the gap. We compare the pristine perovskite covered by a thin layer of inert PMMA with one covered by a thin hole-accepting (Spiro-OMeTAD, referred to as Spiro) or electron-accepting layer (PCBM).5 The perovskite is directly excited, and the vast majority of the detected current comes from the carriers in the perovskite layer only (see S1, ESI†).14 We point out that the photocurrent measured here is proportional to the carrier densities and their mobilities. Under steady-state illumination, the carrier density is determined by the carrier lifetime. This can be formally represented by eqn (1):29

I_p ∝ q(n·μ_n + p·μ_p) = q(G·τ_n·μ_n + G·τ_p·μ_p)   (1)

where I_p is the photocurrent, q is the elemental charge, n and p are the electron and hole densities, respectively, μ_n and μ_p are the carrier mobilities, G is the generation rate, and τ_n and τ_p are the effective carrier lifetimes under the relevant conditions. Considering that PCBM and Spiro have previously been demonstrated to be effective electron and hole acceptors,5 reducing PL by over 90%, it is fair to consider only hole densities and mobilities within the perovskite in the presence of the PCBM acceptor, and mainly electron densities and mobilities in the presence of the Spiro acceptor. The results obtained upon above-gap excitation are displayed in Fig. 1c. The steady-state photocurrent in samples with PCBM electron-accepting layers is higher than that of samples with an inert top layer. This is expected, since electron transfer to PCBM will result in a longer-lived free-hole population in the perovskite. Lifetimes will be associated with the recombination rate between a hole in the perovskite and an electron in the PCBM layer. Such lifetimes have been found to be on the order of 1-10 μs via transient photovoltage measurements for recombination at both the perovskite-PCBM and perovskite-Spiro interfaces.30 Surprisingly, the samples with the Spiro hole acceptor exhibit a photocurrent orders of magnitude lower even than the neat samples, despite the fact that they should also exhibit enhanced lifetimes associated with slow recombination across the perovskite-Spiro interface (electrons in the perovskite with holes in the Spiro). This leads us to conclude that either the electron mobility is orders of magnitude lower than the hole mobility, or that electrons are predominantly trapped. Since the effective masses of electrons and holes have been repeatedly shown to be roughly the same,3,31,32 we believe that our results indicate that electrons are trapped and hence suffer from a low effective long-range mobility. So far, the results suggest that the material suffers from a significant and broad density of sub-gap electron traps, which limit the effective long-range electron mobility. To relate the photocurrent response upon sub-gap excitation observed in Fig. 1b to the behavior in Fig. 1c, we excited the samples using a sub-gap excitation source (an 850 nm laser) and monitored the photocurrent. The results are plotted in Fig.
1d and show that sub-gap excitation leads to little or no detectable photocurrent (over three orders of magnitude lower than the neat samples) when a hole acceptor is placed on top of the samples. On the other hand, the presence of an electron acceptor has a similar effect as under above-gap illumination. This allows us to conclude that deep electron traps are present, which can be directly populated by excitation from the valence band to yield trapped electrons and free holes. The free holes can be collected as a photocurrent in neat samples, but no photocurrent is collected in samples with the Spiro hole acceptor simply because only trapped electrons are left in the film. The proposed mechanism is displayed in Scheme 1. It is worth noting that upon sub-gap excitation, in principle, one would expect the same photocurrent for PCBM- and PMMA-contacted thin films. Nevertheless, in Fig. 1d we do notice a small deviation. We speculate that this may be due to a different chemical interaction between the interfaced materials, which may cause the density, nature, distribution, and lifetime of trapped electrons to differ.
Note that in Fig. 1c the sublinear behavior of the electron-accepting sample suggests that recombination across the perovskite-PCBM interface has a charge-density dependence, while this is not observed for the perovskite-Spiro interface. This agrees well with the scenario in which free electrons in the PCBM and free holes in the perovskite recombine in the first case, while free holes in the Spiro recombine with localized, trapped electrons in the perovskite in the second case.
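To see how eqn (1) rationalizes the orders-of-magnitude contrast between the quencher-coated samples, consider a rough numerical sketch; the lifetimes and mobility below are illustrative magnitudes consistent with the discussion above, not measured values.

# Steady-state photocurrent per eqn (1): I_p ~ q*G*(tau_n*mu_n + tau_p*mu_p)
q, G, mu = 1.602e-19, 1e21, 10.0   # C; generation rate (cm^-3 s^-1); cm^2/Vs

# PCBM on top: electrons extracted, so free holes persist for ~10 us
I_pcbm = q * G * 10e-6 * mu        # hole term of eqn (1)
# Spiro on top: holes extracted; electrons survive only ~100 ns before being
# trapped, and trapped electrons are immobile, adding no long-range current
I_spiro = q * G * 100e-9 * mu      # electron term of eqn (1)
print("I(PCBM) / I(Spiro) ~ %.0f" % (I_pcbm / I_spiro))   # ~100x

Any additional loss of mobile electrons to deep traps only widens this ratio, consistent with the more-than-three-orders-of-magnitude contrast observed.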
Trap-mediated recombination lifetimes and mechanism
In an effort to directly monitor the trapped-electron lifetimes, we performed transient photocurrent measurements on the same samples used for the steady-state photocurrent measurements, this time using a pulsed excitation analogous to that used in transient PL measurements rather than a steady-state excitation. This measurement allows us to monitor the transient photoconductivity of the perovskite layer with various charge-quenching layers, and thus directly probe the free-carrier population as a function of time after excitation. Monitoring the photoconductivity rather than the photoluminescence means that we are not limited to radiative recombination but can monitor any free carrier. We start by performing an above-gap fluence dependence on non-quenching samples (Fig. 2a). In the early stages (<1 μs) the decays become steeper at higher excitation densities, as previously observed via photoluminescence spectroscopy when moving from monomolecular to bimolecular recombination regimes. Interestingly, we also observe an extremely slow component in the photoconductivity traces that makes up an increasingly large fraction of the decay as the excitation density is reduced. This component has not generally been observed in the transient photoluminescence data that we and others7,9,10,19,33 have recorded for CH3NH3PbI3 (see Fig. S2, ESI†), which means that whichever mobile photoexcited species is still present on these long time scales cannot relax radiatively, or has a very low radiative efficiency and needs more care to be detected. It is reminiscent, however, of some of the slow decays observed when measuring transient voltage decays.34 Notably, this slow component, which appears to decrease in decay rate over time, makes up less than 10% of the total decay upon high excitation (10^17 cm^-3) but approximately 50% of the total decay upon low (10^15 cm^-3) excitation.

Scheme 1 Schematic illustration of carrier dynamics upon above- and below-gap excitation when the perovskite is contacted by electron- (PCBM) and hole- (Spiro-OMeTAD) accepting layers.
Since the signal is directly proportional to the photoconductivity, and hence the carrier density, its relative magnitude is used as a proxy for carrier density. We performed the same measurements (at a 'low' 10^15 cm^-3 excitation density) on a sample with the hole-accepting layer (Fig. 2b). It shows an extremely rapid decay in the photocurrent and no observable slow tail, unlike the PMMA- and PCBM-covered samples (see Fig. S2 and S3, ESI†). This decay is consistent with rapid hole transfer to the Spiro,5 leaving only electrons in the material, which clearly do not contribute to any photocurrent on time scales beyond tens of ns. As evidenced by both these and the steady-state photocurrent measurements in Fig. 1, the electrons do not contribute any significant photocurrent, at least for long-range transport. This is direct proof that electrons are predominantly being trapped in the CH3NH3PbI3 perovskite with monomolecular lifetimes in the ns time window.
We can now explain the fluence-dependent transient photocurrent kinetics for the neat samples shown in Fig. 2a. As the excitation density approaches the trap density, the slow component takes up an increasingly large fraction of the decay. At low excitation densities, most of the generated electrons are trapped on tens-of-ns timescales, as has been previously reported for these materials and as we show here (see fits in Fig. S4, ESI†), and the free holes are left behind until they recombine with the trapped electrons. These holes are responsible for the remaining slowly decaying photocurrent. The fact that the slow component takes up a large fraction of the decay only once initial densities of 10^15 cm^-3 are used means that the trap density lies somewhere between 10^15 and 10^16 cm^-3, similar to what we have previously found from photoluminescence decays in these materials.9,10 While a rapid trap-mediated recombination model would suggest that once the electrons are trapped they should recombine with free holes at a similar rate, our data show that this recombination process is actually extremely slow and takes place via a density-dependent process that can be as slow as many microseconds. This is more akin to the situation in materials such as ZnO or TiO2, where holes can be trapped at surfaces for times as long as seconds, leaving free electrons.35,36 This is known as a "photodoping" effect, which is what we propose to be happening in our perovskite thin films. Since the material is ionic and defects are expected to be charged,26,37,38 a filled trap is likely to be neutral and hence relatively unlikely to lead to rapid recombination.
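A minimal rate-equation sketch of this photodoping picture reproduces the qualitative behavior, i.e. fast electron trapping followed by a long-lived free-hole tail. All rate constants below are assumptions chosen to match the orders of magnitude discussed above, not fitted values.

import numpy as np
from scipy.integrate import solve_ivp

N_t  = 5e15    # trap density (cm^-3), between the 1e15-1e16 inferred above
k_t  = 2e-9    # trapping coefficient -> ~100 ns lifetime with empty traps
k_tr = 1e-11   # trapped-electron / free-hole recombination (slow, assumed)
B    = 9e-10   # radiative bimolecular coefficient (cm^3 s^-1)

def rhs(t, y):
    n, nt, p = y                    # free electrons, trapped electrons, holes
    trap = k_t * n * (N_t - nt)     # trapping into the remaining empty traps
    return [-B * n * p - trap,           # free electrons: radiative + trapping
            trap - k_tr * nt * p,        # trapped electrons: fill, slow decay
            -B * n * p - k_tr * nt * p]  # holes: radiative + trap-mediated

n0 = 1e15                           # 'low' initial excitation density (cm^-3)
sol = solve_ivp(rhs, (0.0, 50e-6), [n0, 0.0, n0], method="LSODA",
                t_eval=np.logspace(-9, np.log10(50e-6), 50), rtol=1e-8)
print("holes surviving at 50 us: %.0f%%" % (100 * sol.y[2][-1] / n0))

With these numbers, essentially all electrons are trapped within a microsecond, while more than half of the holes survive to 50 μs, mirroring the large slow fraction seen at low fluence in Fig. 2a.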
If such an effect is observable via the photoconductivity across thin films, it should also be observable in transient absorption kinetics. Indeed, since the long-lived photoconductivity in neat samples and samples with the PCBM electron acceptor represents the presence of a long-lived free-hole population, this should be observable as a bleach at the perovskite band edge due to state filling in the valence band.39 We therefore performed transient absorption studies on neat films and films with PCBM and Spiro accepting layers. We display transient absorption decays probed at the peak of the band-edge bleach at 750 nm in Fig. 3. The high initial excitation density (necessary to detect the small long-lived signal) results in a rapid initial decay, corresponding to bimolecular recombination in the PMMA-coated samples and to a combination of bimolecular recombination and charge transfer for the Spiro- and PCBM-coated samples. Still, by measuring the decay to longer time scales than previously reported, we find that the transient absorption decays closely mimic the transient photocurrent decays, exhibiting a significant long-lived free-carrier population only in the presence of PMMA and PCBM, which we can now assign to remnant free holes in the valence band. Of course, this implies that hole diffusion lengths in the perovskite films are likely to be much longer than the electron diffusion lengths.
The fact that recombination of the trapped electrons with free holes is extremely slow has significant implications for perovskite solar cells. Since the balance between the generation and recombination rates of the trapped carriers determines their depopulation, the slower the depopulation rate, the lower the illumination intensity required to fill all the trap states at steady state. This effect would in principle increase the expected V_OC value at a fixed density of trap states, since the total non-radiative recombination rate will be lower, enabling operation closer to the radiative limit.
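This balance can be made explicit with a simple steady-state occupancy argument, a sketch assuming a single trapping coefficient k_t and a single trapped-electron-free-hole recombination coefficient k_tr (neither of which is a quantity extracted in this work). Equating trapping into empty traps with recombination out of filled ones, k_t n (1 − f) = k_tr p f, gives the filled fraction

$$f = \frac{k_t\,n}{k_t\,n + k_{tr}\,p},$$

so the smaller k_tr is, the closer f sits to unity at a given illumination intensity; slow depopulation fills the traps at lower fluence.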
To further quantify steady-state trap filling, we have taken films formed via different preparation routes, and hence likely with different trapping rates and densities, and studied the illumination intensity at which the traps are primarily filled. To accomplish this, we monitor the photocurrent contribution from a modulated sub-gap excitation (850 nm) as a function of a steady-state above-gap excitation (650 nm). We modulate only the 850 nm laser and use a lock-in amplifier to detect the photocurrent signal from this modulation. Based on the discussion above, we expect to observe a point at which the above-gap excitation background has filled most of the trap sites, and the sub-gap contribution should shrink. The background fluence at which the sub-gap contribution becomes less than it was in the absence of any above-gap excitation background gives an idea of the illumination intensity required to fill the traps at steady state and achieve optimum Fermi-level splitting.
We have chosen to use three MAPbI3 preparation routes that we have previously optimized to provide efficient devices: the PbCl2-derived perovskite, the Pb(Ac)2-derived perovskite,40 and the Pb(Ac)2 perovskite treated with hypophosphorous acid (HPA).41 These routes provide a wide range of crystal sizes (see SEM images in Fig. S6, ESI†), with the Pb(Ac)2 route giving the smallest crystals, HPA-treated Pb(Ac)2 increasing the crystal size somewhat, and the PbCl2 route giving the largest crystals.40,41 We show the measurements of the sub-gap photocurrent in Fig. 4, where the HPA-treated sample demonstrates significant trap filling at 1.7 × 10^17 cm^-2 s^-1, while the non-HPA-treated Pb(Ac)2-derived film demonstrates less significant trap filling at equivalent fluences. Interestingly, the PbCl2-derived perovskite shows the fastest trap-state filling, with the IR photocurrent contribution diminishing at a fluence of 2 × 10^16 cm^-2 s^-1. Again, this is established by the above-gap excitation fluence at which the IR photocurrent rapidly declines and drops below its value in the absence of any above-gap excitation.
The results indicate that of the three perovskite routes, the PbCl2 route may be the most favorable for achieving a material with low trap densities. However, it has been notoriously difficult to obtain PbCl2-derived films with 100% substrate coverage,42 resulting in pinholes and losses in open-circuit voltage. This has motivated the use of the Pb(Ac)2-derived perovskite, which forms extremely smooth and continuous films. However, this now appears to come at the price of a slightly increased trap density. This points to traps being localized predominantly on the surface of the crystals, since this route attains smaller grain sizes.40,41 The HPA treatment still allows the formation of smooth and continuous films, but clearly seems to decrease the trap density and to result in a material in which most of the traps are filled, consistent with a slight increase in grain size (though not to the extent of the PbCl2 films).
The most significant behavior observed here is that the different samples exhibit very different points at which their sub-gap contribution is strongly diminished, consistent with varying trapping and trapped-electron-hole recombination rates. We confirm this again by plotting the transient photocurrent of Pb(Ac)2-derived perovskite films with and without HPA in Fig. S7 (ESI†), where we find that the HPA treatment slows the trapped-electron-hole recombination rate as well as the trapping rate itself. This is further evidence that the absolute trap density and the processing route of the films affect the rate at which trapped electrons can recombine with free holes, and that not all traps behave the same. Of course, this was already expected from the broad distribution of sites shown in Fig. 1b.
Implications for V_OC
We can take this analysis slightly further and estimate the obtainable photovoltage due to the effectiveness of Fermi level splitting, bearing in mind what we have learned from the measurements presented here. If the 100 ns (taken as a typical value for many of the perovskite films used throughout different laboratories) 6,19,20 electron trapping process resulted in immediate recombination of the trapped electron with a free hole, the effective electron and hole lifetimes would both be 100 ns. Of course, if the trapped electron to free hole recombination rate is extremely slow, then it is likely many traps can be filled at solar fluences (as is the case for the PbCl2-derived and HPA-treated Pb(Ac)2-derived perovskite films), high hole densities are reached, and only the radiative bimolecular recombination rate becomes increasingly relevant. Using the simple relations below, 43 it is possible to estimate the maximum obtainable Fermi level splitting and hence a rough approximation of the maximum V_OC for the three cases: rapid 100 ns trap-mediated recombination; slow trap-mediated recombination; and complete trap filling at 1 Sun with resultant purely bimolecular recombination:

$$G = R(n, p), \qquad R_{mono} = \frac{n}{\tau}, \qquad R_{bimol} = B\,n\,p$$

$$E_{Fn} - E_{Fp} = E_G - kT \ln\!\left(\frac{N_C N_V}{n\,p}\right)$$

where G is the generation rate (based on a J_SC of 23 mA cm⁻² and a 500 nm thick film), R(n, p) is the recombination rate of the electrons and holes, τ is the monomolecular recombination lifetime, B is the bimolecular recombination coefficient (9 × 10⁻¹⁰ cm³ s⁻¹), E_Fn and E_Fp are the quasi-Fermi levels for the electrons and holes respectively, E_G is the bandgap (1.6 eV), kT is the thermal energy in eV, and N_C (1.9 × 10¹⁸ cm⁻³) and N_V (2.4 × 10¹⁸ cm⁻³) are the effective densities of states of the conduction and valence bands, respectively. We calculate the effective densities of states based on the reported effective masses of approximately 0.18 and 0.21 m₀ for the electrons and holes, respectively. 3,31 Here, we simply estimate the steady-state carrier densities based on the rate equations shown above for the different cases: for the first case we assume a 100 ns monomolecular lifetime for both the electrons and holes, for the second case we assume a 100 ns lifetime for the electrons but a 10 μs lifetime for the holes, and for the third case we simply use the literature value for the bimolecular recombination coefficient and calculate the corresponding electron and hole densities at 1 Sun's worth of excitation. Once the carrier concentrations are known, we can use the calculated densities of states to determine the degree of quasi-Fermi level splitting for each type of carrier. Table 1 shows our estimates of the electron and hole densities as well as the resultant Fermi level splitting and theoretically obtainable V_OC for the two extreme cases. We also describe the situation where electron traps are not filled but the trapped electron to free hole recombination has a slow monomolecular lifetime of 10 μs (a conservative approximation based on the transient decays shown in Fig. 2 and 3).
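For concreteness, this estimate can be reproduced in a few lines of code. This is a hedged sketch, not the authors' exact script: it uses only the values quoted in the text and the simple steady-state relations G = n/τ (monomolecular) and G = Bnp (bimolecular).

```python
import numpy as np

# Sketch of the quasi-Fermi-level-splitting estimate described above.
# Values follow the text; the rate-equation treatment is a simplification.
kT  = 0.0257                   # thermal energy at 300 K, eV
EG  = 1.6                      # bandgap, eV
NC, NV = 1.9e18, 2.4e18        # effective densities of states, cm^-3
B   = 9e-10                    # bimolecular coefficient, cm^3 s^-1
d   = 500e-7                   # film thickness, cm
Jsc = 23e-3                    # A cm^-2
G   = Jsc / (1.602e-19 * d)    # generation rate, cm^-3 s^-1

def splitting(n, p):
    """E_Fn - E_Fp = E_G - kT ln(NC NV / (n p)), in eV."""
    return EG - kT * np.log(NC * NV / (n * p))

# case 1: 100 ns monomolecular lifetime for both carriers
n1 = p1 = G * 100e-9
# case 2: 100 ns electron lifetime, 10 us hole lifetime
n2, p2 = G * 100e-9, G * 10e-6
# case 3: traps filled, bimolecular only: G = B n p with n = p
n3 = p3 = np.sqrt(G / B)

for i, (n, p) in enumerate([(n1, p1), (n2, p2), (n3, p3)], 1):
    print(f"case {i}: n={n:.2e}, p={p:.2e} cm^-3, "
          f"max V_OC ~ {splitting(n, p):.2f} V")
```

Case 1 reproduces the ~1.14 V quoted for rapid trap-mediated recombination and case 2 the ~1.26 V for slow trapped-carrier recombination; the bimolecular case lands near the thermodynamic limit discussed below.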
This analysis makes it very clear that a rapid trap-mediated recombination model with 100 ns trapping and recombination lifetimes would lead to very low electron and hole densities, yielding low theoretical V_OC values of approximately 1.14 V, which is incompatible with the high experimentally observed values of up to 1.19 V. 20 We note that we ignore recombination across the ETL and HTM interfaces with the perovskite, and that the values presented here are very clearly the maximum attainable values assuming ideal contacts. When we consider the results from Fig. 4, which indicate that traps are starting to be filled for the PbCl2-derived material, we must consider the situation where primarily bimolecular recombination affects the carrier dynamics and densities, or at least a situation where most traps are filled and hence the behavior is more akin to the bimolecular case. The photoluminescence quantum yields of perovskite films made in this way have been reported to be 10-30% 9,12 at solar fluences, which is in line with a situation where most, but likely not all, traps are filled. In a perovskite film with traps filled, where bimolecular recombination is the dominant mechanism, it becomes possible to obtain high V_OC values of approximately 1.3 V, in line with the thermodynamic limit for a 1.6 eV semiconductor and consistent with the highest reported value of 1.19 V in a real device. Considering the situation where traps are far from completely filled, as is the case for the Pb(Ac)2-derived perovskite, but including the fact that trapped electrons only recombine with free holes on slow (μs) timescales, we find that it is possible to obtain high V_OC values of approximately 1.26 V, still consistent with the high observed voltages even in non-optimized films with significant electron trap densities. In this case, the high hole densities obtained at 1 Sun's worth of excitation mean that radiative recombination will start to compete with the trapping process, i.e. the extremely slow hole recombination will result in increasingly high PLQEs even at low fluences such as 1 Sun. We make a rough estimation of the relative contribution due to radiative recombination for the fast and slow trap-mediated recombination (scenarios 1 and 3 in the table) and find that this yields photoluminescence quantum yields of 0.3 and 26%, respectively (see supplemental discussion for details, ESI†). This analysis shows that it is not possible to obtain high quantum yields or high Fermi level splitting in our perovskite materials if we simply consider 100 ns trapping and recombination time constants. In fact, we now find that the reported quantum yields of 10-30% are only well explained by the fact that trapped carriers are long lived, allowing high enough carrier densities to be reached to facilitate radiative recombination even at 1 Sun. We note that our estimations ignore any non-radiative recombination due to the introduction of the selective contact layers or even other extrinsic loss pathways.
Conclusions
We have used a combination of transient and steady-state photocurrent, absorption, and photoluminescence spectroscopy to study the carrier dynamics in perovskite films over long time scales. Electron trapping is a predominant decay pathway, but the trapped electrons are surprisingly long lived; they only recombine with associated free holes over the course of many microseconds. This allows most of the traps in perovskite films made with typical deposition methods to be filled at solar fluences, and hence allows us to rationalize the high V_OC values reported for perovskite solar cells, which exceed the limits imposed by a rapid trap-mediated recombination model. We furthermore find that, due to these fortuitously long-lived traps, perovskite films made via existing processing routes exhibit, or are close to exhibiting, high enough optoelectronic quality to enable solar cells with V_OC values approaching 1.3 V, provided that non-radiative decays due to contact layers can be mitigated.
Perovskite fabrication method
Glass substrates were sequentially cleaned with Hellmanex soap, acetone, and isopropanol. Most of the measurements (unless otherwise noted) were performed on perovskite films made via the PbCl2 precursor method. Here, 0.8 M solutions of 3 : 1 (by molar concentration) methylammonium iodide : PbCl2 in DMF were spin coated on oxygen-plasma-cleaned glass substrates at 2000 rpm for 45 seconds in a nitrogen-filled glovebox. The substrates were allowed to dry at room temperature for 30 minutes, then they were annealed at 90 °C for 90 minutes, followed by 120 °C for 20 minutes. The gold electrodes were then thermally evaporated onto the perovskite films through a shadow mask. Then polymethylmethacrylate (PMMA) (20 mg ml⁻¹) or PCBM (20 mg ml⁻¹) or Spiro-OMeTAD (100 mg ml⁻¹) was spin coated onto the perovskite films at 2000 rpm for 45 seconds. For the PbAc2-derived perovskite films, 1 M solutions of 3 : 1 MAI : PbAc2 with or without 0.0075 M hypophosphorous acid were spin coated at 2000 rpm for 45 seconds. The films were allowed to sit at room temperature for 5 minutes, after which they were annealed at 100 °C for 5 minutes.
Steady state photocurrent measurements
Samples were illuminated using a mechanically chopped laser source (either 650 or 850 nm, as detailed in the main text). A power supply was used to provide a voltage bias across the devices, and the current was recorded on a lock-in amplifier in current mode, set to the chopping frequency. The chopping frequency was set to 23 Hz.
In the case where a visible light bias was used and only the sub-gap contribution was measured, a 690 nm laser was used to continuously illuminate the samples while a mechanically chopped 850 nm laser excitation was used to detect the sub-gap contribution. Again, the modulated photocurrent was detected using a lock-in amplifier. In all cases the laser excitation was defocused to cover the entire area between the electrodes. The noise at the output of the lock-in amplifier used here (SR530) is 0.13 pA/√Hz, and with a specified bandwidth of 0.01 Hz we have a noise level of 6 fA. This gives more than enough room to measure the pA signals, which were the lowest reported in this work.
Excitation density was estimated by assuming that 90% of the above-gap excitation was absorbed within the perovskite. The red excitation was used to ensure a fairly uniform absorption profile, and for the sake of simplicity, the total generated carriers were assumed to be uniformly distributed throughout the bulk.
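As a sketch, this excitation-density estimate might look as follows; the laser power density is a hypothetical example value, since the text specifies only the 90% absorption assumption and the uniform-profile simplification.

```python
# Hedged sketch of the excitation-density estimate described above:
# 90% of the above-gap photon flux absorbed, carriers taken as uniform
# through the film. The power density is an example value, not from the text.
h, c = 6.626e-34, 3.0e8          # J s, m s^-1
wavelength = 650e-9              # m, above-gap excitation
power_density = 10e-3            # W cm^-2 (assumed example)
thickness = 500e-7               # cm

flux = power_density * wavelength / (h * c)   # photons cm^-2 s^-1
G = 0.9 * flux / thickness                    # carriers cm^-3 s^-1
print(f"flux = {flux:.2e} cm^-2 s^-1, G = {G:.2e} cm^-3 s^-1")
```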
Transient photocurrent spectroscopy
The same samples were excited by 1 ns laser pulses (690 nm, 1 Hz repetition rate), making sure to illuminate the entire area between the electrodes. A power supply was used to bias the sample, while the photocurrent was amplified with a transimpedance amplifier (gain × 10 000) and then measured using an oscilloscope.
To confirm that we are not simply measuring the time for the carriers to be swept out by the electric field, we calculate the sweep-out time using a mobility of 20 cm² V⁻¹ s⁻¹ as an upper limit. This yields a lower-limit sweep-out time of 8 ms, far longer than any of the events we have described above.
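A minimal sketch of this transit-time bound follows, assuming a hypothetical electrode gap and bias (the text quotes only the mobility upper limit); the placeholder geometry below happens to reproduce the quoted 8 ms figure but is not taken from the paper.

```python
# Transit-time estimate: t = L^2 / (mu * V). Higher mobility gives a
# shorter transit, so the upper-limit mobility bounds the time from below.
mu = 20.0        # cm^2 V^-1 s^-1, upper-limit mobility from the text
L  = 0.4         # cm, electrode gap (assumed placeholder)
V  = 1.0         # V, applied bias (assumed placeholder)

t_transit = L**2 / (mu * V)   # s
print(f"transit time ~ {t_transit*1e3:.1f} ms")   # ~8 ms with these values
```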
Transient absorption spectroscopy
Transient absorption (TA) spectroscopy was conducted using an amplified Ti:sapphire laser (100 fs pulses at 800 nm) focused into a sapphire plate to generate a broadband white light probe. The frequency-doubled output of a Q-switched Nd:YVO4 laser acted as a pump (700 ps FWHM pulses at 532 nm), synchronized to the Ti:sapphire laser via a digital delay generator. This setup enables us to perform TA over pump-probe delays from one nanosecond to hundreds of microseconds, covering the timescales of both band-to-band recombination and long lived trap recombination.
FTPS
Fourier transform photocurrent spectroscopy was performed using a modified FTIR setup. The excitation was focused onto the perovskite device which was biased by an external power supply. The photocurrent was amplified, recorded, and the interferogram converted to a photocurrent spectrum using a custom designed program. | 8,250.2 | 2016-11-02T00:00:00.000 | [
"Physics"
] |
Investigation of the asperity point load mechanism for thermal elastohydrodynamic conditions
The rolling contact fatigue damage called pitting or spalling develops more frequently in surfaces with negative slip than with positive slip. Since normal line loads do not cause any tensile surface stresses, this investigation considers the effects of small, point-shaped asperities. Shear traction causes tensile stresses at the trailing edge of asperities entering the contact at negative slip. At positive slip, the tensile stresses appear at the leading edge when the asperities exit the contact. It was found that the trailing edge of the asperity breaks through the lubrication film at contact entry. This causes negative slip to be more detrimental than positive slip. At negative slip, the locations of large frictional shear stresses and of tensile stresses from normal asperity contact coincide.
Introduction
Highly loaded gear and bearing surfaces will eventually fail due to rolling contact fatigue (RCF). One form of RCF is pitting, also called spalling, where the complete damage is the result of fatigue crack growth until the detachment of a piece of surface material. The damage can initiate at the surface or below it. This work analyses one initiation mechanism of surface-initiated cracks. The surface cavity often has the characteristic sea-shell shape presented in Fig. 1a. It is established [1] that the damage almost exclusively develops where friction acts against the rolling direction (negative slip). The process starts with fatigue initiation at the tip of the sea-shell shaped pit. It then slowly grows in the forward rolling direction [1]. The damage driving mechanism is, however, still debated.
The asperity point load mechanism for RCF resides in surface peaks creating point contacts which are surrounded by a tensile surface stress. Inside the contact, the nominal pressure ensures that the surface stresses remain compressive. However, outside the contact of an entering or exiting asperity, the tensile stress may be high enough to cause crack initiation and growth, see Fig. 1b. Experiments show that the crack propagation profile from a point load follows the characteristic cross-sectional profile of pits [2]. Simulations, based on rolling contacts with asperities and a mode I crack direction criterion, show that the full profile agrees with the sea-shell pit shape [3]. In the current study, several thermal elastohydrodynamic (TEHL) simulations were performed to investigate the effect of sliding by varying the slide-to-roll ratio, SRR. Surface fatigue was evaluated through the Findley criterion. The goal was to show that the asperity point load mechanism can explain why pits initiate more often at negative than at positive slip.
Fig. 1 a) A tilted surface view of an RCF pit together with its cross-sectional profile. b) Schematic view of the asperity point load mechanism inducing a surface-initiated RCF crack at asperity entry.
Theoretical background
The rolling contact was modelled fully flooded with Reynolds' equation for thin films:

$$\frac{\partial}{\partial x}\left(\frac{\rho h^3}{12\eta}\frac{\partial p}{\partial x}\right) + \frac{\partial}{\partial y}\left(\frac{\rho h^3}{12\eta}\frac{\partial p}{\partial y}\right) = u_m\frac{\partial(\rho h)}{\partial x} + \frac{\partial(\rho h)}{\partial t} \quad (1)$$

In Eq. (1), p is the pressure, h is the local film thickness, u_m is the mean entrainment velocity, ρ is the density and η is the viscosity. Since the width of the contact was larger than 1000·h, average values were used in the thickness direction. Asperity effects were captured by solving the differential equation in both the rolling direction (RD) and the transverse direction (TD). Cavitation was treated by forcing p ≥ 0. The asperity shape was modelled with an axisymmetric cosine profile; see Everitt and Alfredsson [4]. The pressure-viscosity relation of the lubricant was described by Roelands' equation:

$$\eta(p) = \eta_0 \exp\left\{(\ln\eta_0 + 9.67)\left[\left(1 + \frac{p}{p_r}\right)^Z - 1\right]\right\} \quad (2)$$

Expressions for the density ρ, the ambient viscosity η₀ and the index Z, along with material parameters, were collected from Larsson et al. [5]. The shear limit proposed by Bair et al. [6] was used to limit the shear tractions:

$$\tau_L = \tau_0 + \Lambda p \quad (3)$$

where Λ is the limiting shear stress coefficient and τ₀ is the shear limit at p = 0. Coulomb friction was used for metal contact, with the friction coefficient μ_Dry = 0.3.
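A minimal sketch of the lubricant models in Eqs. (2) and (3) follows; the parameter values are assumed placeholders (the study takes its PAO B parameters from Larsson et al. [5]), and only the functional forms are intended to be illustrative.

```python
import numpy as np

# Roelands viscosity (Eq. (2)) and the limiting shear stress (Eq. (3)).
# eta0, Z, tau0 and Lam are hypothetical values for demonstration only.
eta0 = 0.05      # Pa s, ambient viscosity (assumed)
Z    = 0.6       # Roelands pressure-viscosity index (assumed)
p_r  = 1.96e8    # Pa, Roelands reference pressure
tau0 = 5e6       # Pa, shear limit at p = 0 (assumed)
Lam  = 0.05      # limiting shear stress coefficient (assumed)

def eta_roelands(p):
    """Pressure-viscosity relation, Eq. (2)."""
    return eta0 * np.exp((np.log(eta0) + 9.67) * ((1 + p / p_r)**Z - 1))

def tau_limit(p):
    """Limiting shear stress, Eq. (3)."""
    return tau0 + Lam * p

for p in (0.0, 0.5e9, 1.0e9):   # contact pressures, Pa
    print(f"p={p:.1e} Pa: eta={eta_roelands(p):.3e} Pa s, "
          f"tau_L={tau_limit(p):.2e} Pa")
```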
Thermal model
The energy dissipation per lubricant volume was estimated by integrating the shear stresses times the shear rates and dividing by the film thickness. It was combined with compressibility and the assumption of a constant temperature through the fluid thickness. To incorporate these effects, the incremental change of energy density was formulated as in Eq. (4), where $u_a = u_m - \frac{h^2}{12\eta}\nabla p$ is the average fluid velocity and $u_s$ is the sliding velocity; see for example Cheng and Sternlicht [7] for a similar formulation. The energy transport through the solids was modelled with the same equation, except that the energy source terms, i.e. the first four terms on the right-hand side of Eq. (4), were excluded since the solids were elastic without internal heat generation. Any effects on the lubricant film from thermal expansion of the solids were omitted.
The power generated by sliding metal contacts was added to the surface nodes of the metals instead of to the lubricant. The power was then equally distributed between the two surfaces. The metal contact had a high capacity for heat conduction since there was no insulating lubricant present.
Fatigue evaluation
Repeated contact pressure and traction gave rise to cyclic stresses in the solids. Fatigue was evaluated using the Findley criterion. It was selected based on earlier studies of cases with large compressive mean stresses in combination with tensile in-surface stresses [8,9]. In dimensionless form, the Findley criterion is

$$Fi = \frac{\tau_{amp} + k_F\,\sigma_{n,max}}{\tau_{eF}} \quad (5)$$

where τ_amp is the shear stress amplitude on a plane and σ_n,max is the maximum normal stress on the same plane. k_F = 0.627 and τ_eF = 625 MPa are the normal stress parameter and the endurance limit for the case-carburized gear steel considered here [4]. The plane that maximizes Fi is searched for. An index value above unity predicts fatigue damage.
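A hedged sketch of how the critical-plane search for the Findley index might be implemented is given below; the simplification to a 2D plane sweep and the synthetic stress cycle are assumptions, while k_F and τ_eF follow the values quoted above.

```python
import numpy as np

# For each candidate plane: shear stress amplitude over the load cycle
# plus the weighted maximum normal stress (Eq. (5)); keep the worst plane.
kF, tau_eF = 0.627, 625e6   # normal stress parameter, endurance limit (Pa)

def findley_index(stress_history, n_planes=180):
    """stress_history: array of 2x2 stress tensors over one load cycle."""
    worst = 0.0
    for theta in np.linspace(0, np.pi, n_planes, endpoint=False):
        n = np.array([np.cos(theta), np.sin(theta)])   # plane normal
        t = np.array([-np.sin(theta), np.cos(theta)])  # in-plane direction
        tractions = stress_history @ n                 # traction per step
        sigma_n = tractions @ n                        # normal stress history
        tau     = tractions @ t                        # shear stress history
        tau_amp = 0.5 * (tau.max() - tau.min())        # shear amplitude
        Fi = (tau_amp + kF * sigma_n.max()) / tau_eF
        worst = max(worst, Fi)
    return worst                                       # Fi > 1 -> damage

# synthetic two-step cycle: compressive pass followed by surface tension
cycle = np.array([[[-2e9, 3e8], [3e8, -1e9]],
                  [[ 4e8, 0.0], [0.0,  0.0]]])
print(f"Findley index: {findley_index(cycle):.2f}")
```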
Numerical setup
The numerical program was built on Huang's code [10]. It starts by solving a time-independent case of a smooth cylinder rolling on a plane. To minimize the time spent obtaining the steady-state solution, the simulations were first roughly solved on a coarse mesh. Then the mesh was refined in three steps. At the approximate convergence of P, T and h₀ on a coarser mesh, the solution was transferred to a refined mesh. Each refinement doubled the resolution in all directions. The final mesh contained 257 × 49 nodes in the lubricant. When the time-independent model had converged, the transient problem was introduced and the asperity travelled through the model in 257 time steps. In order to get a fully coupled solution, both P and T were updated in each iteration of each time step. To stabilize the solution, the relaxation parameters in the Gauss-Seidel and Jacobi iteration methods were adjusted based on the residual. The temperature field was updated with a Newton forward iteration scheme. To enable a stable temperature solution, sub-time steps were introduced inside the temperature routine. The global structure of the program is shown in Fig. 2. A more detailed description has been provided by Everitt and Alfredsson in their article from 2019 [11].
Results
The effect of different SRR was investigated through 14 simulations. The speed of the fast surface was 6.3 m/s and the speed of the slow surface was adjusted to achieve the desired SRR. The asperity was high enough to break through the lubricant and cause metal-to-metal contact. Other key parameters are presented in Table 1. The lubricant parameters were taken from Larsson et al. [5] for the oil named PAO B. The key result of the investigation is presented in Fig. 3. It shows that the fatigue risk, illustrated by the Findley index, is higher for asperity surfaces subjected to negative slip than to positive slip. It also shows that pure rolling is the least detrimental case, but that fatigue damage is still predicted for the contact conditions in Table 1. Fig. 6 shows where large σ₁ stresses develop relative to the asperity. When the asperity entered the contact, the tensile stress developed at the trailing edge of the asperity, as shown in Fig. 1b and Fig. 6. When exiting the contact, the tensile stress instead developed on the leading edge of the asperity. The position of large shear stresses from metal contact did, however, explain why the trailing edge was more critically loaded from a fatigue perspective. Fig. 7 shows that the large shear stresses developed only at the trailing edge. Since Fi is a combination of both normal and shear stresses, the trailing edge at SRR < 0 was critical. The reason for the shear stresses being higher at the trailing edge was that metal contact developed close to the trailing edge; the film height in Fig. 8 shows that metal contact occurred at the trailing edge of the asperity.
Conclusions
The contact loads were simulated using thermal elastohydrodynamic lubrication in order to account for the effects of the lubricant. Findley's multiaxial criterion was used, and the loads from negative slip were found to be more detrimental than those from positive slip. The explanation resides in the trailing edge of the asperity breaking through the lubricant as it enters the rolling contact. At negative slip, the trailing asperity edge was also the location of the largest tensile stress. At positive slip, on the other hand, the high tensile stress developed at the leading edge, which separated the large shear and tensile stresses. Therefore, negative slip is more detrimental than positive slip, and pitting is primarily found in surfaces subjected to negative slip.
"Engineering",
"Materials Science"
] |
The influence of institutional ownership, independent commissioners, dividend policy, debt policy, and firm size on firm value
This study aims to examine the effect of institutional ownership, independent commissioners, dividend policy, debt policy, and firm size on firm value. The dependent variable used in this study is firm value, while the independent variables are institutional ownership, independent commissioners, dividend policy, debt policy, and firm size. The population in this study is manufacturing companies, especially in the food and beverage sub-sector listed on the Indonesia Stock Exchange from 2012 to 2017. The sample was selected using the purposive sampling method, yielding 36 observations. The analytical technique used in this research is multiple linear regression analysis. The results of this study indicate that institutional ownership, debt policy, and firm size have a negative effect on firm value, while independent commissioners and dividend policy have no effect on firm value.
In the current era of corporate competition, the many companies emerging and developing in Indonesia help boost the Indonesian economy toward stability. In this competition, companies try to put themselves in a stable position and be ready to compete so that they can survive and develop.
Food and beverage companies are one of the industrial sector categories on the Indonesia Stock Exchange with the opportunity to grow and develop, and they play an important role in the development of economic growth in Indonesia. This is because the sector is one of a number of sectors prioritized by the Government in encouraging industry as a driver of the national economy.
Basically, every company has a purpose. These goals can be categorized as short term and long term. In the short term, the company aims to maximize current profits, while in the long term it aims to increase the value of the company itself. Firm value summarizes the collective assessment of investors about how well a company is doing, covering both current performance and future projections. The value of the company can be seen through the company's stock price. If the stock price increases, the value of the company will also increase, and vice versa (Setiawati & Lim, 2018). Optimizing firm value, which is the company's goal, can be achieved through the implementation of the financial management function, whereby one financial decision taken will affect other financial decisions and have an impact on firm value.
LITERATURE REVIEW
The divergence of interests between the principal (shareholders) and the agent (management) is called an agency problem. The existence of the agency problem will cause the company's financial goal, namely increasing the value of the company by maximizing shareholder wealth, not to be achieved. This requires control from outside parties, where good monitoring and supervision will direct the objectives as they should be (Sukirni, 2012).
One of the company's internal factors that can affect firm value is good corporate governance. The Good Corporate Governance (GCG) mechanism is used as a control so that companies stay within proper limits (Syafitri et al., 2018). Achieving good corporate governance requires the roles of institutional ownership and independent commissioners. Institutional ownership is believed to reduce the occurrence of agency conflicts. Shleifer & Vishny (1997), in Tambunan et al. (2017), argue that the company will be well controlled by the institution. Companies with large institutional ownership indicate their ability to monitor management. The greater the institutional ownership, the more efficient the utilization of company assets by management. Thus, the proportion of institutional ownership acts as a prevention against waste by management (Melia, 2015). The independent commissioner is in the best position to carry out the monitoring or supervisory function in order to achieve good corporate governance in the company (Tambunan et al., 2017). Firm value can also be influenced by dividend policy. Dividend policy is often considered a signal for investors in assessing whether a company is good or bad, because dividend policy can influence the company's stock price. The size of the dividends a company pays to shareholders depends on the dividend policy of each company.
In addition, the value of the company can also be influenced by debt policy. Sources of funding within the company can be obtained from internal and external companies. From the internal company, it can be in the form of retained earnings and from the external company in the form of debt or the issuance of new shares. Companies that use debt have obligations for interest and principal costs. The use of debt (external financing) has a considerable risk of non-payment of debt, so the use of debt needs to pay attention to the company's ability to generate profits. Leverage can be understood as an estimator of the risks inherent in a company, meaning that the greater the leverage, the greater the investment risk (Prasetyorini, 2013). According to Sofyaningsih & Hardiningsih (2011), debt policy can be used to create company value. But the debt policy depends on the size of the company. Large companies have the advantage that it is easy to meet funds from debt on the capital market. So linking debt with firm size and firm value becomes very relevant.
Another factor that affects firm value is firm size. The relative market share shows that the company's competitiveness is higher than that of its main competitors. Although it does not rule out bankruptcy, large companies are considered more robust in the face of shocks. According to Prasetyorini (2013), the size of the company is considered capable of influencing firm value because the larger the size or scale of the company, the easier it will be for the company to obtain sources of funding, both internal and external. Pratama & Wiksuana (2016) found in their research that firm size, leverage and profitability have a significant positive effect on firm value, and that firm size and leverage have a significant positive effect on profitability. Hasan & Mildawati (2020) found in their research that Good Corporate Governance, represented by the institutional ownership proxy, has a significant positive direct effect on firm value, and also a significant indirect effect on firm value when financial performance is used as an intervening variable. Given the differences in the results of previous studies, this research re-examines corporate governance, dividend policy, debt policy and company size in relation to firm value. Berliani & Riduwan (2017) found in their research that managerial ownership, institutional ownership, independent commissioners, ROA and ROE affect firm value, while firm size has no effect on firm value. Thaharah & Asyik (2016) found in their research that managerial ownership has no effect on firm value, while institutional ownership, independent commissioners and audit committees have an effect on firm value. However, profitability is not able to mediate the effect of firm size on firm value.
A. Agency Theory
In relation to this research, agency theory is connected with Good Corporate Governance (GCG) because it highlights the direct relationship between principal and agent (Lestari & Priyadi, 2017). The agency relationship perspective is the basis used to understand corporate governance. Agency theory results in an asymmetric relationship between owners and managers; to avoid this asymmetric relationship, a concept is needed, namely Good Corporate Governance, which aims to make the company healthier (Windasari & Riharjo, 2017).
B. Signaling Theory
According to Brigham & Houston (2006), a signal is an action taken by the company to give investors clues about how management views the company's prospects. This signal takes the form of information, notes or descriptions of past, present and future conditions relevant to the survival of a company. Signal theory explains how signals of management's success or failure are conveyed to owners. In the agency relationship, managers hold asymmetric information relative to the company's external parties, including investors and creditors. Asymmetry occurs when managers have more internal company information, and obtain information faster, than external parties. In order to reduce information asymmetry, companies must disclose their information, both financial and non-financial (Yusuf, 2020).
C. Value Of Company
According to Yusuf (2020), company value reflects the company's past performance and future prospects, with the aim of generating large profits in order to provide maximum prosperity to shareholders as the share value of the company increases.
The higher the share price of the company, the higher the prosperity for shareholders.
In this study, the Tobin's Q ratio is used to measure firm value. The Tobin's Q ratio is considered able to provide the best information, because Tobin's Q includes all elements of the company's debt and share capital (Agustina et al., 2015). The Tobin's Q model defines firm value as a combination of tangible and intangible assets. Tobin's Q is the ratio of the market value of the company's assets, as measured by the market value of the outstanding shares and debt (enterprise value), to the replacement cost of the company's assets. According to Yusuf (2020), the calculation of the Tobin's Q ratio is more rational considering that the elements of liability are also included as the basis for the calculation. The Tobin's Q ratio provides an overview not only of the fundamental aspects, but also of the extent to which the market values the company from various aspects seen by wider parties, including investors. The measurement of the Tobin's Q ratio as an indicator of the company's performance is more meaningful when the ratio value is viewed year by year. With such a comparison, it can be seen whether the company's financial performance increases every year, so that investors' expectations for investment growth will be higher. Institutional ownership is expressed as a percentage (%), which is measured by comparing the number of shares owned by institutional investors to the total number of shares outstanding (Santoso, 2017).
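Returning to the Tobin's Q proxy described above, one common operationalization (an assumed but standard form, since the study does not reproduce its exact formula) is:

$$Q = \frac{MVE + DEBT}{TA}$$

where MVE is the market value of equity (share price × shares outstanding), DEBT is total liabilities, and TA is the book value of total assets.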
D. Institutional Ownership
Institutional ownership is ownership of company shares by institutions such as insurance companies, banks, investment companies and other institutions (Thaharah & Asyik, 2016). Institutional ownership is one of the main GCG mechanisms that helps address the agency problems described by Jensen and Meckling (in Yusuf, 2020). According to Jensen and Meckling, as cited in Yusuf (2020), institutional ownership has a very important role in minimizing agency conflicts that occur between managers and shareholders. The existence of institutional investors is considered capable of being an effective monitoring mechanism for every decision made by managers. This is because institutional investors are involved in strategic decisions and thus do not easily believe in earnings manipulation (Berliani & Riduwan, 2017).
E. Independent Commissioner
Independent commissioners are commissioners who are not affiliated with or related to the controlling shareholder. The independent board of commissioners plays a very important role in the company, especially in implementing the corporate governance mechanism (Syafitri et al., 2018). Independent commissioners are in the best position to carry out their functions in order to achieve and realize a company that has good corporate governance.
F. Dividend Policy
According to Ouma (2012) dividend policy is one of the most important decisions. That is, the dividend policy can increase the value of the company through the company's ability to pay dividends. According to Yusuf & Suherman (2021) dividend policy is a policy that is associated with determining whether the profits earned by the company will be distributed to shareholders or will be retained in the form of retained earnings. The policy on dividend payments is a very important decision in a company. This policy will involve two parties with different interests, namely the first party the shareholders, and the second party the company itself. The amount of dividend distribution by the company to shareholders will make investors interested in investing in the company. The greater the value of shares distributed to shareholders, the more investors will invest.
G. Debt Policy
According to Rahmawati & Haryanto (2012), debt policy is a very important decision for every company because this policy is taken by the company's management in order to obtain sources of financing for the company's operational activities. The concept of leverage is important for investors in making stock valuation considerations. Investors generally tend to avoid risk. The risk that arises in the use of financial leverage is called financial risk, namely the additional risk that is charged to shareholders as a result of the company's use of debt. The higher the leverage, the greater the financial risk, and vice versa (Horne & Wachowicz, 2012).
According to Weston and Copeland, in Sukirni (2012), debt policy is a policy that determines how much of the company's funding needs are financed by debt. Debt policy includes the company's funding policy from external sources. If investors see a company with high assets but also high leverage risk, they will think twice about investing in that company. Debt policy is proxied by the Debt to Equity Ratio (DER), given below.
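In its standard form (the study does not show its exact computation, so this is the conventional definition):

$$DER = \frac{\text{Total Debt}}{\text{Total Equity}}$$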
H. Company Size
Company size is one indicator of the magnitude of the political costs that must be borne. Company size can be measured by looking at the total assets owned by a company (Yusuf & Suherman, 2021). Company size is an indicator that shows the company's financial strength.
According to Ghofir & Yusuf (2020), firm size affects firm value differently across firms. Company size can be seen from the total assets owned by the company, which can be used for company operations. If the company has large total assets, management is more flexible in using the assets in the company.
Firm size is stated to be a determinant of financial structure in almost every study and for a number of different reasons. The size of the company can determine the level of ease of the company in obtaining funds from the capital market and determine the bargaining power (bargaining power) in financial contracts. Large companies can usually choose funding from various forms of debt, including special offers that are more profitable than small companies. The greater the amount of money involved, the more likely it is to make a contract that can be designed according to the preferences of both parties, instead of using a standard debt contract (Hasnawati & Sawir, 2015).
Moh'd, Perry and Rimbey, as cited in Hasnawati & Sawir (2015), suggest that large companies will more easily access funding through the capital market. This ease of access is good information for making investment decisions and can also reflect the value of the company in the future. Company size describes the scale of a company, which can be expressed by total assets or total net sales. The greater the total assets and sales, the greater the size of the company.
METHODOLOGY
In this study, the type of research used is causal research, which is to explain the effect of an independent variable on the dependent variable. The independent variables in this study include institutional ownership, independent commissioners, dividend policy, debt policy, and firm size, while the dependent variable is firm value.
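As an illustration of this analytical technique, the sketch below fits a five-predictor linear regression on synthetic data; the coefficients and data are fabricated for demonstration and are not the study's results.

```python
import numpy as np

# Multiple linear regression of Tobin's Q on five predictors, mirroring
# the study's design (36 observations); all numbers here are synthetic.
rng = np.random.default_rng(0)
n = 36
X = rng.normal(size=(n, 5))          # IO, IC, DPR, DER, SIZE (standardized)
beta_true = np.array([-0.4, 0.1, -0.05, -0.6, -0.3])
y = 1.5 + X @ beta_true + rng.normal(scale=0.5, size=n)   # Tobin's Q

A = np.column_stack([np.ones(n), X])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
y_hat = A @ coef
ss_res = np.sum((y - y_hat)**2)
ss_tot = np.sum((y - y.mean())**2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 5 - 1) # adjusted for 5 predictors
print("coefficients:", np.round(coef, 3))
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```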
RESULTS
The following describes the descriptive statistics of the variables. 4. The dividend payout ratio (DPR) relates to the use of profits that are the right of shareholders; these profits can be distributed as dividends or retained to be reinvested. In the descriptive statistical test, the minimum value of the dividend payout ratio variable is 1.01%, at PT. Delta Jakarta Tbk in 2015, meaning that the dividend per share given to investors is 1.01% of earnings per share. The maximum value of the dividend payout ratio variable is 88.48%, at PT. Delta Jakarta Tbk in 2012, meaning that the dividend per share given to investors is 88.48% of earnings per share. The average (mean) is 37.537%.
The variables show a good average (mean) value because the mean value is greater than the standard deviation value. The standard deviation reflects the deviation, so the spread of the data shows normal results and does not cause bias. A higher Tobin's Q will attract investors to buy shares because it shows that the company has good growth prospects. Table 2 shows that the Asymp. Sig. (2-tailed) value is 0.997, which is greater than 0.05 (0.997 > 0.05). It can therefore be concluded that the data in this study are normally distributed. The data have met the assumption of normality and can be analyzed further using regression analysis.
The percentage of institutional ownership is measured by dividing the number of shares owned by institutional investors by the total number of shares outstanding.
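Expressed as a formula (an assumed but standard form of the proxy):

$$IO = \frac{\text{Shares held by institutional investors}}{\text{Total shares outstanding}} \times 100\%$$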
Based on the results of the multicollinearity test in Table 3, the variance inflation factor (VIF) values of the five variables are smaller than 10 and the tolerance values are above 0.10, so it can be assumed that there is no multicollinearity between the independent variables. Based on Table 4, the Asymp. Sig. (2-tailed) value obtained is 0.398, which is greater than 0.05 (0.398 > 0.05). An asymp. sig. value of more than 5% indicates that the data do not contain autocorrelation problems.
Figure 1. Heteroscedasticity Test
Based on Figure 1 above, there is no clear pattern, and the points spread above and below zero on the Y axis, so it can be said that the regression model used is feasible for this study because there is no heteroscedasticity in the regression model.
Coefficient of Determination (R²)
The coefficient of determination (R²) essentially measures how far the model is able to explain the variation of the dependent variable. The value of the coefficient of determination is between zero and one (Ghozali, 2016). In this study, the Adjusted R² value is used to measure the magnitude of the coefficient of determination. A small R² value means that the ability of the independent variables to explain the variation of the dependent variable is very limited. A value close to one means that the independent variables provide almost all the information needed to predict the dependent variable (Ghozali, 2016). Table 5 shows that the coefficient of determination, indicated by the R-square value, is 0.499. This means that 49.90% of firm value can be explained significantly by institutional ownership, independent commissioners, dividend policy, debt policy, and firm size, while the remaining (100% − 49.90%) = 50.10% of firm value is explained by other variables.
Simultaneous Significant Test
The F test, or ANOVA test, aims to test whether all independent variables simultaneously affect the dependent variable. In this test, a significance level of 0.05 is used.
1. If the probability value is < 0.05, there is a jointly significant influence of the independent variables on the dependent variable.
2. If the significance value is > 0.05, there is no jointly significant influence of the independent variables on the dependent variable.
Table 6. Simultaneous Significance Test (ANOVA)
Based on Table 6, it can be concluded that institutional ownership, independent commissioners, dividend policy, debt policy, and firm size jointly affect firm value, which means the model is suitable for the research, as seen from the sig. value of 0.001 < 0.05.
Individual Parameter Significant Test
The t-statistical test shows how far one explanatory (independent) variable individually explains the variation of the dependent variable. Basis for decision making:
1. Probability > 0.05: H0 is accepted.
2. Probability < 0.05: H0 is rejected.
The results of the t-statistical test of each independent variable on the dependent variable can be explained as follows:
1. The institutional ownership variable has a t-count value of −2.309 and a sig. value of 0.028 < 0.05. This shows that institutional ownership has a negative effect on firm value; H1 is accepted, meaning institutional ownership affects firm value.
2. The independent commissioner variable has a t value of 1.244 and a sig. value of 0.223 > 0.05. This shows that the independent commissioner variable has no effect on firm value; H2 is rejected.
3. The dividend policy variable, proxied by the dividend payout ratio, has a t value of −0.462 and a sig. value of 0.648 > 0.05. This shows that dividend policy has no effect on firm value; H3 is rejected.
4. The debt policy variable, proxied by the debt to equity ratio, has a t value of −4.134 and a sig. value of 0.000 < 0.05. This shows that debt policy has a negative effect on firm value; H4 is accepted.
5. The firm size variable has a t value of −2.678 and a sig. value of 0.012 < 0.05. This shows that firm size has a negative effect on firm value; H5 is accepted.
Based on the results of the multiple linear regression testing described previously, the discussion in this study is as follows. 1. The Effect of Institutional Ownership on Firm Value. The results of this study found that institutional ownership has a negative effect on firm value. This means that high institutional ownership will reduce the value of the company. This condition can occur because the institutional ownership of some sample companies is constant every year, while for others it is unstable, both decreasing and increasing.
Institutional investors with majority share ownership are more likely to side and cooperate with management, prioritizing their personal interests over the interests of minority shareholders. This is a negative signal for outsiders, because the alliance strategy of institutional investors with management tends to produce sub-optimal company policies, and this is detrimental to company operations. As a result, investors will not be interested in investing their capital, the volume of stock trading will decrease, and the company's share price and value will also decrease. The results of this study are in line with research conducted by Rahma (2014), which states that institutional ownership has a negative effect on firm value.
The Influence of Independent Commissioners on Firm Value
The results of this study found that the independent commissioner variable has no effect on firm value. This is because the existence of an independent board of commissioners in a company is considered not effective enough in monitoring company managers, and market participants do not fully trust the performance of the independent board of commissioners, resulting in a lack of investor interest in investing in the company, which has an impact on decreasing company value. The results of this study are in line with research conducted by Fiadicha (2016), which states that independent commissioners have no effect on firm value.
The Effect Of Dividend Policy On Firm Value
The results of this study found that the dividend policy variable has no effect on firm value. These results indicate that the level of dividends distributed to shareholders is not related to the level of firm value. Dividend policy does not affect firm value because investors regard the dividend payout ratio as merely a detail that does not affect shareholder welfare. An increase in the value of dividends is not always followed by an increase in the value of the company, because firm value is determined only by the company's ability to generate profits from company assets or by its investment policy. Kusumastuti (2013) adds that another reason dividend policy has no effect on firm value is that shareholders only want to take capital gains. The results of this study support the research conducted by Wibowo and Aisjah (2013), whose results show that dividend policy, proxied through the dividend payout ratio (DPR), has no effect on firm value.
The Effect Of Debt Policy On Firm Value
The results of this study found that the debt policy variable has a negative effect on firm value. This shows that the lower the debt level of a company, the higher the firm value, because the company's obligation to pay debts to creditors decreases, so the profits generated by the company increase, causing the stock price to rise and the value of the company to increase, both in the eyes of prospective creditors and of the market. 5. The Effect of Firm Size on Firm Value. The results of this study found that the firm size variable has a negative effect on firm value. This is because small companies, even though their investment is not large, can also provide optimal profits. Conversely, large companies whose total assets are dominated by receivables and inventories may not necessarily be able to pay dividends, because their assets accumulate in receivables and inventories. Companies are more likely to retain profits than distribute them as dividends, which can affect stock prices and firm value. Referring to these findings, companies with large total assets do not necessarily give investors confidence in management's ability to increase firm value.
CONCLUSIONS AND SUGGESTIONS
A. Conclusions
Based on the data processing, the following can be concluded:
1. Institutional ownership has a negative effect on firm value. Institutional investors with majority share ownership are more likely to side and cooperate with management, prioritizing their personal interests over the interests of minority shareholders. This is a negative signal for outsiders, because the alliance of institutional investors with management tends to produce sub-optimal company policies that harm company operations. As a result, investors will not be interested in investing their capital, the volume of stock trading will decrease, and the company's share price and value will also decrease.
2. Independent commissioners have no effect on firm value. This is because the existence of an independent board of commissioners in a company is considered not effective enough in monitoring company managers, and market participants do not fully trust the performance of the independent board of commissioners, resulting in a lack of investor interest in investing in the company, which in turn decreases firm value.
3. Dividend policy has no effect on firm value. These results indicate that the level of dividends distributed to shareholders is not related to the level of firm value. An increase in the value of dividends is not always followed by an increase in the value of the company, because firm value is determined by the company's ability to generate profits from company assets or by its investment policy. According to Kusumastuti (2013), an additional reason is that shareholders only want to take capital gains.
4. Debt policy has a negative effect on firm value. This shows that the lower the debt level of a company, the higher the firm value, because the company's obligation to pay debts to creditors decreases, so the profits generated by the company increase, causing the stock price to rise and the value of the company to increase, both in the eyes of prospective creditors and of the market.
5. Firm size has a negative effect on firm value. This is because small companies, even though their investment is not large, can also provide optimal profits, whereas large companies whose total assets are dominated by receivables and inventories may not necessarily be able to pay dividends, because their assets accumulate in receivables and inventories. Companies are more likely to retain profits than distribute them as dividends, which can affect stock prices and firm value. Referring to these findings, companies with large total assets do not necessarily give investors confidence in management's ability to increase firm value.
B. Suggestions
Suggestions that the researchers can give based on the research results are as follows:
1. Change the company sample, because the total sample does not reflect the actual condition.
2. For the good corporate governance mechanism variable, add other elements of ownership structure such as managerial share ownership, the board of directors, and the audit committee.
3. Use other measures of firm value.
"Economics",
"Business"
] |
A Self-Adaptive Reinforcement-Exploration Q-Learning Algorithm
To address various problems of the traditional Q-Learning algorithm, such as heavily repeated and unbalanced exploration, the reinforcement-exploration strategy was used to replace the decayed ε-greedy strategy in the traditional Q-Learning algorithm, and thus a novel self-adaptive reinforcement-exploration Q-Learning (SARE-Q) algorithm was proposed. First, the concept of behavior utility trace was introduced in the proposed algorithm, and the probability for each action to be chosen was adjusted according to the behavior utility trace, so as to improve the efficiency of exploration. Second, the attenuation process of the exploration factor ε was designed in two phases, where the first phase centered on exploration and the second transited the focus from exploration to utilization, with the exploration rate dynamically adjusted according to the success rate. Finally, by establishing a list of state access times, the exploration factor of the current state is adaptively adjusted according to the number of times the state has been accessed. A symmetric grid-map environment was established via the OpenAI Gym platform to carry out symmetrical simulation experiments on the Q-Learning algorithm, the self-adaptive Q-Learning (SA-Q) algorithm and the SARE-Q algorithm. The experimental results show that the proposed algorithm has obvious advantages over the first two algorithms in the average number of turns, average inside success rate, and number of times the shortest route is planned.
Introduction
Reinforcement learning (RL), one of methodologies of machine learning, is used to describe and solve how an intelligent agent learns and optimizes the strategy during the interaction with the environment [1]. To be more specific, the intelligent agent acquires the reinforcement signal (reward feedback) from the environment during the continuous interaction with the environment, and adjusts its own action strategy through the reward feedback, aiming at the maximum gain. Different from supervised learning [2] and semisupervised learning [3], RL does not need to collect training samples in advance, and during the interaction with the environment, the intelligent agent will automatically learn to evaluate the action generated according to the rewards fed back from the environment, instead of being directly told the correct action.
In general, the Markov decision-making process is used by the RL algorithm for environment modeling [4]. Based on whether the transition probability P of the Markov decision-making process in the sequential decision problem is already known, RL algorithms are divided into two major types [5]: model-based RL algorithms with known transition probability and model-free RL algorithms with unknown transition probability. The former is a dynamic programming method, while the latter mainly includes strategy-based RL methods, value function-based RL methods, and integrated strategy-value function RL methods. The value function-based RL method is an important solution to the model-free Markov problem, and it mainly makes use of Monte Carlo (MC) RL and temporal-difference (TD) RL [6].
As one of the most commonly used RL algorithms, the Q-Learning algorithm, which is value-based, off-policy and founded on the TD method [7], has been widely applied to route planning, manufacturing and assembly, and dynamic train scheduling [8][9][10]. Many researchers have been dedicated to improving the low exploration efficiency of the traditional Q-Learning algorithm. Qiao, J.F. et al. [11] proposed an improved Q-Learning obstacle avoidance algorithm in which the Q-table was substituted with a neural network (NN), in an effort to overcome the deficiency that Q-Learning is inapplicable to continuous states. Song, Y. et al. [12] applied the artificial potential field to the Q-Learning algorithm, proposed a Q-value initialization method, and improved the convergence rate of the Q-Learning algorithm. In [13], a dynamic adjustment algorithm for the exploration factor in the ε-greedy strategy was raised to improve the exploration-utilization equilibrium problem existing in the practical application of RL methods. In [14], a nominal control-based supervised RL route planning algorithm was brought forward, with tutor supervision introduced into the Q-Learning algorithm, thus accelerating the algorithm's convergence. Andouglas et al. came up with a reward matrix-based Q-Learning algorithm, which satisfies the route planning demands of marine robots [15]. Pauline et al. introduced the concept of partially guided Q-Learning, initialized the Q-table through the flower pollination algorithm (FPA), and thus optimized the route planning performance of mobile robots [16]. Park, J.H. et al. employed a genetic algorithm to evolve robotic structures as an outer optimization, and applied a reinforcement learning algorithm to each candidate structure to train its behavior and evaluate its potential learning ability as an inner optimization [17].
It can be seen from the above description that current research on improving the Q-Learning algorithm mainly focuses on two aspects. The first is to study the attenuation law of the exploration factor ε as the number of training episodes increases. The second is to adjust the exploration factor ε of the next episode according to the training information of the previous episode. This article creatively proposes the idea of dynamically adjusting the exploration factor of the current episode according to the number of times states have been accessed. Based on the decayed ε-greedy strategy, the concept of behavior utility trace is introduced, and a states-accessed list and an on-site adaptive dynamic adjustment method are added, so that a self-adaptive reinforcement-exploration method is realized. Taking path planning as the application object, the simulation results verify the effectiveness and superiority of the proposed algorithm. The main contributions of the proposed algorithm are as follows: (1) The traditional Q-Learning algorithm selects actions randomly with equal probability, without considering the differences between actions. In this article, the behavior utility trace is introduced to increase the probability of effective actions being randomly selected. (2) In order to better resolve the contradiction between exploration and utilization, the attenuation process of the exploration factor is designed in two stages. The first stage is the exploration stage, in which the exploration factor is made to decay slowly to maintain a high exploration rate. The second stage is the transition from exploration to utilization, in which the decay speed of the exploration factor is dynamically adjusted according to the success rate. (3) In addition, because the exploration factor is fixed within each episode, the agent explores too much near the initial location. This article proposes to record the number of visits to the current state through the states-accessed list, so as to reduce the exploration probability of frequently visited states and increase the exploration rate of states visited for the first time, thereby extending the covered range of exploration.
Markov Process
In a timing-sequence process, if the state at time t + 1 only depends on the state $S_t$ at time t while being unrelated to any state before time t, the state $S_t$ at time t is considered to have the Markov property. If each state in a process has the Markov property, the random process is called a Markov process [18].
The key to describing a Markov process lies in the state transition probability matrix, namely the probability of the transition from the state S_t = s at time t into the state S_{t+1} = s' at time t+1, as shown in Equation (1):

P_{ss'} = \mathbb{P}[S_{t+1} = s' \mid S_t = s]  (1)

For a Markov process with n states (s_1, s_2, ..., s_n), the state transition probability matrix P from every state s into all follow-up states s' is expressed by Equation (2):

P = \begin{pmatrix} P_{s_1 s_1} & \cdots & P_{s_1 s_n} \\ \vdots & \ddots & \vdots \\ P_{s_n s_1} & \cdots & P_{s_n s_n} \end{pmatrix}  (2)

In Equation (2), each row of the state transition matrix P contains the probabilities of transitioning from one state into all n states, and the sum of each row is always 1.
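As a concreteness check, the row-stochastic property stated for Equation (2) can be verified numerically; the 3-state matrix below is a hypothetical example, not taken from the paper.

import numpy as np

# A hypothetical 3-state transition matrix; each row holds the probabilities
# of moving from one state into all states, so each row must sum to 1.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
])
assert np.allclose(P.sum(axis=1), 1.0)   # the property stated for Equation (2)
# Probability of going from state s1 to state s3 in exactly two steps:
print((P @ P)[0, 2])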
An MRP is formed when the reward element is added into a Markov process. An MRP, which is composed of a finite state set S, a state transition probability matrix P, a reward function R and an attenuation factor γ (γ ∈ [0, 1]), can be described by the quadruple ⟨S, P, R, γ⟩.
In an MRP, the sum of all attenuated rewards collected from a state S_t until the terminal state is called the gain, expressed by Equation (3):

G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{k} R_{t+k+1} = \sum_{j=0}^{k} \gamma^{j} R_{t+j+1}  (3)

In Equation (3), R_{t+1} is the instant reward at time t+1, k is the total number of subsequent steps from time t+1 until the terminal state, and R_{t+k+1} is the instant reward of the terminal state.
The expected gain of a state in the MRP is called its value, which describes the importance of the state. The value v(s) is defined in Equation (4):

v(s) = \mathbb{E}[G_t \mid S_t = s]  (4)

Unfolding the gain G_t in the value function according to its definition yields Equation (5):

v(s) = \mathbb{E}[R_{t+1} + \gamma G_{t+1} \mid S_t = s] = \mathbb{E}[R_{t+1} + \gamma v(S_{t+1}) \mid S_t = s]  (5)

In Equation (5), the expected value of the state S_{t+1} at time t+1 is obtained from the probability distribution over the states at the next time step. Letting s denote the present state and s' any possible state at the next time step, Equation (5) can be rewritten as:

v(s) = R_s + \gamma \sum_{s' \in S} P_{ss'}\, v(s')  (6)

Equation (6), called the Bellman equation of the MRP, expresses that the value of a state consists of its reward plus the values of the subsequent states weighted by transition probability and attenuation ratio.
Markov Decision-Making Process
In addition to the MRP, the RL problem also involves the individual behavior choice. Once the individual behavior choice is included into the MRP, the Markov decision-making process is obtained.
The Markov decision-making process can be described by the quintuple ⟨S, A, P, R, γ⟩, where the finite behavior set A, the finite state set S and the attenuation factor γ are identical with those of the MRP. Different from the MRP, the reward function R and the state transition probability matrix P in the Markov decision-making process depend on behaviors.
When the action A_t = a is chosen in the state S_t = s, the reward R_s^a can be expressed as the expected instant reward R_{t+1} at time t+1, as shown in Equation (7):

R_s^a = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]  (7)

The state transition probability P_{ss'}^a from the state S_t = s into the subsequent state S_{t+1} = s' is defined in Equation (8):

P_{ss'}^a = \mathbb{P}[S_{t+1} = s' \mid S_t = s, A_t = a]  (8)

In the Markov decision-making process, an individual chooses one action from the finite behavior set according to its own recognition of the present state; the basis for this choice is called a strategy, a mapping from states to actions. The strategy is usually denoted by π, referring to a distribution over the behavior set under a given state s, namely:

\pi(a \mid s) = \mathbb{P}[A_t = a \mid S_t = s]  (9)

The state value function v_π(s) describes the value generated when the state s abides by a specific strategy π, and is defined by Equation (10):

v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]  (10)

The behavior value function describes the expected gain obtained by executing an action a in the present state s while following a specific strategy π. As a behavior value is always attached to a state, the behavior value function is also called the state-behavior pair value function. The behavior value function q_π(s, a) is defined in Equation (11):

q_\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]  (11)

By substituting Equation (3) into Equations (10) and (11), respectively, the two Bellman expectation Equations (12) and (13) are obtained:

v_\pi(s) = \mathbb{E}_\pi[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s]  (12)

q_\pi(s, a) = \mathbb{E}_\pi[R_{t+1} + \gamma q_\pi(S_{t+1}, A_{t+1}) \mid S_t = s, A_t = a]  (13)

In the Markov decision-making process, a behavior serves as the bridge of state transition, and the behavior value is closely related to the state value. More specifically, the state value can be expressed through all behavior values under this state, and the behavior value can be expressed through all state values the behavior can reach, as shown in Equations (14) and (15), respectively:

v_\pi(s) = \sum_{a \in A} \pi(a \mid s)\, q_\pi(s, a)  (14)

q_\pi(s, a) = R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a\, v_\pi(s')  (15)

Substituting Equation (15) into Equation (14) yields Equation (16), and substituting Equation (14) into Equation (15) yields Equation (17):

v_\pi(s) = \sum_{a \in A} \pi(a \mid s)\Big( R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a\, v_\pi(s') \Big)  (16)

q_\pi(s, a) = R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a \sum_{a' \in A} \pi(a' \mid s')\, q_\pi(s', a')  (17)

Each strategy corresponds to a state value function, and the optimal strategy naturally corresponds to the optimal state value function. The optimal state value function v_*(s) is defined as the maximum state value function over all strategies, and the optimal behavior value function q_*(s, a) as the maximum behavior value function over all strategies, as shown in Equations (18) and (19), respectively:

v_*(s) = \max_\pi v_\pi(s)  (18)

q_*(s, a) = \max_\pi q_\pi(s, a)  (19)

From Equations (16) and (17), the optimal state value function and the optimal behavior value function satisfy the Bellman optimality Equations (20) and (21), respectively:

v_*(s) = \max_a \Big( R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a\, v_*(s') \Big)  (20)

q_*(s, a) = R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a \max_{a'} q_*(s', a')  (21)

If the optimal behavior value function is known, the optimal strategy can be acquired by directly maximizing q_*(s, a), as shown in Equation (22):

\pi_*(a \mid s) = \begin{cases} 1, & a = \arg\max_{a \in A} q_*(s, a) \\ 0, & \text{otherwise} \end{cases}  (22)
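As an illustration of how Equations (20)-(22) are used in practice, the following minimal Python sketch solves a toy MDP by value iteration; the transition and reward arrays are randomly generated placeholders, not data from the paper.

import numpy as np

# A hypothetical MDP with 4 states and 2 actions.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)       # each P[s, a, :] sums to 1
R = rng.random((n_states, n_actions))   # R[s, a] plays the role of R_s^a

q = np.zeros((n_states, n_actions))
for _ in range(500):
    v = q.max(axis=1)                   # Equation (20): v*(s) = max_a q*(s, a)
    q = R + gamma * (P @ v)             # Equation (21)
pi_star = q.argmax(axis=1)              # Equation (22): the greedy optimal strategy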
Q-Learning Algorithm
Compared with the TD reinforcement learning method based on the value function and the state-action-reward-state-action (SARSA) algorithm, the iteration of the Q-Learning algorithm is a trial-and-error process: one of the conditions for convergence is to try every possible state-action pair many times, finally learning the optimal control strategy. At the same time, Q-Learning is an effective reinforcement learning algorithm under unknown environments; it does not need an environment model, and it has the advantages of guaranteed convergence, few parameters and strong exploration ability. Especially in the field of path planning, which focuses on obtaining optimal results, the Q-Learning algorithm has clear advantages and is a current research hotspot [19]. The Q in the Q-Learning algorithm is the value Q(s, a) of a state-behavior pair, representing the expected return when the Agent executes the action a (a ∈ A) in the state s (s ∈ S). The environment feeds back a corresponding instant reward value r according to the behavior a of the Agent. The core idea of the algorithm is therefore to use a Q-Table to store all Q values: in the continuous interaction between the Agent and the environment, the expected value (the evaluation) is updated through the actual reward r obtained by executing the action a in the state s, and in the end the action contributing the maximum return according to the Q-Table is chosen. Reference strategy TD learning means updating the behavior value with the TD method under a reference (target) strategy, with the updating rule shown in Equation (23):

V(S_t) \leftarrow V(S_t) + \alpha \left( \frac{\pi(A_t \mid S_t)}{\mu(A_t \mid S_t)} \big( R_{t+1} + \gamma V(S_{t+1}) \big) - V(S_t) \right)  (23)

In Equation (23), V(S_t) is the value assessment of the state S_t at time t, V(S_{t+1}) is the value assessment of the state S_{t+1} at time t+1, α is the learning rate, γ is the attenuation factor, R_{t+1} denotes the instant return value at time t+1, π(A_t | S_t) is the probability for the target strategy π(a|s) to execute the action A_t in the state S_t, and µ(A_t | S_t) is the probability of executing the action A_t in the state S_t according to the behavior strategy µ(a|s). In general, the chosen target strategy π(a|s) is greedy to a certain degree. If π(A_t | S_t)/µ(A_t | S_t) < 1, the probability of the target strategy choosing the action A_t is smaller than that of the behavior strategy, and the updating amplitude is relatively conservative. When this ratio is greater than 1, the probability of the target strategy choosing the action A_t is greater than that of the behavior strategy, and the updating amplitude is relatively bold.
The behavior strategy µ of the reference strategy TD learning method is replaced by the ε-greedy strategy based on the value function Q(s, a), and the target strategy π is replaced by the completely greedy strategy based on the value function Q(s, a), thus forming the Q-Learning algorithm. Q-Learning updates Q(S_t, A_t) using the temporal-difference method, as expressed by Equation (24):

Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma Q(S_{t+1}, A') - Q(S_t, A_t) \big)  (24)

In Equation (24), the TD target value R_{t+1} + γQ(S_{t+1}, A') uses the Q value of the behavior A' generated by the reference strategy π. Thanks to this updating method, the value of the behavior chosen in the state S_t according to the ε-greedy strategy is updated by a certain proportion towards the maximum behavior value determined by the greedy strategy in the state S_{t+1}, and finally converges to the optimal strategy and the optimal behavior value. The concrete behavior updating formula of the Q-Learning algorithm is shown in Equation (25):

Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \max_{a'} Q(S_{t+1}, a') - Q(S_t, A_t) \big)  (25)

In Equation (25), a' is the action acquiring the maximum behavior value in the subsequent state.
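A minimal sketch of the tabular update of Equation (25) in Python, using the learning rate and attenuation factor reported later in the experimental settings; the function name and Q-Table layout are our own.

import numpy as np

def q_update(Q, s, a, r, s_next, alpha=1.0, gamma=0.9):
    # One tabular update of Equation (25): move Q(s, a) toward the TD target
    # r + gamma * max_a' Q(s', a'); alpha and gamma follow the paper's settings.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q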
Behavior Utility Trace
In order to improve the low exploration efficiency of the traditional exploration strategy, the behavior utility trace [20] is introduced into the action selection probability in this study, and a self-adaptive reinforcement-exploration strategy is proposed based on the improved ε-greedy algorithm.
In RL, the behavior utility trace is used to record the influence of each state on its subsequent states and to adjust the step size during the state value update. The trace is defined in Equation (26):

E_t(s) = \gamma \lambda E_{t-1}(s) + \mathbf{1}(S_t = s)  (26)

In Equation (26), 1(S_t = s) is a truth-value expression: it is 1 when and only when S_t = s, and 0 otherwise.
Meanwhile, whether a state has been accessed is also an important signal. When an executed action transfers the Agent to a brand-new state, this indicates to some extent that the exploration action was effective. Therefore, the state access function v(s_t) is used in this study to describe whether a state has been accessed, as shown in Equation (27):

v(s_t) = \begin{cases} 1, & s_t \notin V \\ 0, & \text{otherwise} \end{cases}  (27)

In Equation (27), s_t represents the present state and V is the set of accessed states; the value of v(s_t) is 1 when and only when this state is accessed for the first time.
By combining the instant reward with the features of the utility trace, and the relationship between state access and the action itself, an action utility trace based on the instant reward and the state access function was designed in this study, as shown in Equation (28). Here the action a_i ∈ A belongs to the behavior space of the Agent, a_{t−1} and a_{t−2} are the actions taken at the previous time and the time before that, r_t is the instant reward of the present state, and e is the exploration incentive value set according to the practical situation. When the same action is chosen consecutively, the E value E_t(a_i) of the action a_i satisfies:

E_t(a_i) = \begin{cases} \lambda E_{t-1}(a_i) + e, & r_t \geq 0 \text{ and } v(s_t) = 1 \\ \lambda E_{t-1}(a_i), & \text{otherwise} \end{cases}  (28)

That is, when the instant reward is non-negative and the present state has not been accessed before, the E value at the present time is the attenuated E value of the previous time plus e; when the instant reward is negative or the present state has been accessed, the E value at the present time is only the attenuated E value of the previous time. When the action selection is not consecutive, the E value at the present time is e under a positive instant reward, and 0 under a negative instant reward or any other circumstance.
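A minimal sketch of the trace update, rebuilt from the prose description of Equation (28); since the extracted text does not reproduce the equation itself, the exact branch structure below is our reading of that description, and only the chosen action's E value is touched.

def update_utility_trace(E, a, a_prev, r, first_visit, lam=0.75, e=1.0):
    # Rebuilt from the prose description of Equation (28); lam (lambda) and e
    # follow the experimental settings. E maps each action to its E value;
    # first_visit flags a state accessed for the first time (Equation (27)).
    E = dict(E)
    if a == a_prev:                      # the same action chosen consecutively
        E[a] = lam * E[a] + e if (r >= 0 and first_visit) else lam * E[a]
    else:                                # a newly switched-to action
        E[a] = e if r > 0 else 0.0
    return E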
Adaptive Reinforcement-Exploration Strategy
Based on the introduced utility trace, a self-adaptive reinforcement-exploration strategy is put forward in this study. This strategy, a symmetric improvement of the ε-greedy strategy, is improved mainly in the following three aspects:
• Introduction of the behavior utility trace to improve the probability of different actions being chosen and to enhance the effectiveness of exploration actions.
When the strategy chooses to explore the environment at a certain probability, the probability for each action to be randomly chosen is given by Equation (29), in which the action is a_i ∈ A, E_{a_i} is the utility-trace E value of each action, and n is the size of the behavior space (a hedged code sketch of this selection rule is given after this list). When a positive instant reward is obtained by executing an action, or the Agent is transferred to a new state, the E value of this action is enlarged, and so is the probability of it being chosen. Through such a design, the algorithm is encouraged to explore outward, expanding the radiation range of the initial exploration.
• Real-time adjustment of the exploration factor ε in different phases until it meets the objective needs.
For the adjustment of the exploration factor ε, the whole attenuation process of ε is divided into two phases. The first phase is the initial training phase of the Agent, in which the Agent has almost no understanding of the environmental information and its main task is to explore the environment; in this phase, the attenuation of the exploration factor ε should be slightly slow. The concrete adjustment formula for the exploration factor ε in this phase is given in Equation (30), where ε_t is the exploration factor at the present time, t is the present number of iteration episodes, T is the maximum number of training episodes, and β is the self-adaptive exploration factor, with a range of [0, 0.5].
When the iteration episode satisfies t > βT, the second phase begins, in which the focus shifts from exploration to utilization. In this study, the exploration factor ε was designed to attenuate approximately linearly and gradually stabilize at the set minimum value during this phase. Meanwhile, in order to adjust the attenuation rate according to the practical situation, the concept of success rate proposed in the literature was adopted to design the concrete adjustment formula of the exploration factor ε in the second phase, given in Equation (31), where ε_1 is the final value of ε in the first phase, T_1 is the target number of episodes at which ε reaches its minimum, R is the probability of the Agent successfully arriving at the target position in every ten experiments, i is the accelerated utilization factor of the success rate, and ε_min is the set minimum value of the exploration factor.
Through such a design, the higher the Agent's success rate, the faster the exploration factor ε attenuates, making better use of the learned information and accelerating convergence while preserving the algorithm's effectiveness.
• Adaptive adjustment of the exploration factor ε of the present action according to the number of times the state has been accessed.
If the traditional exploration strategy is used, although the previous information is shared across subsequent iterations, the Agent explores far more near the initial position than near the target position. In order to improve the exploration rate near the target position and reduce ineffective exploration near the initial position, a dynamic adjustment method is proposed in this study for the exploration factor of a state according to the number of times this state has been accessed. The adjustment formula of the exploration factor ε_cur of the present state within the iteration episode is given in Equation (32), where ε_t is the basic exploration factor of the present episode, µ and ϕ are the utilization factors of the state access count, x is the access count of the present state, x_max is the maximum count on record, and ϕ satisfies ϕ·x_max ≤ 1.
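The exact forms of Equations (29)-(32) were lost in extraction, so the sketch below shows only hypothetical forms that satisfy the stated constraints: probabilities that increase with the E values and sum to 1, and an exploration factor boosted for rarely visited states and shrunk as the access count grows, kept non-negative by ϕ·x_max ≤ 1. It is a plausibility sketch, not the paper's formulas.

import numpy as np

def action_probabilities(E):
    # Hypothetical concrete form of Equation (29): the chance of randomly
    # picking action a_i grows with its utility-trace value E(a_i), and the
    # probabilities sum to 1 over the n actions.
    w = 1.0 + np.asarray(E, dtype=float)
    return w / w.sum()

def current_epsilon(eps_t, x, mu=0.5, phi=0.1):
    # Hypothetical concrete form of Equation (32): boost exploration for a
    # rarely visited state and shrink it as its access count x grows.
    return float(min(1.0, max(0.0, eps_t * (1.0 + mu) * (1.0 - phi * x))))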
Design of Reward and Penalty Functions
In essence, solving an RL problem means maximizing the return during the continuous trial and error and interaction between the Agent and the environment. The reward function, given after the Agent executes an action, serves as the bridge through which the Agent acquires environmental information, and its quality has a direct bearing on whether the final strategy converges [21]. The traditional reward function is designed as shown in Equation (33):

r = \begin{cases} -r_1, & \text{collision} \\ r_2, & \text{get\_target} \\ 0, & \text{otherwise} \end{cases}  (33)

In Equation (33), both r_1 and r_2 are positive; collision means that a collision occurs, upon which the instant reward −r_1 is received, and get_target means that the Agent successfully arrives at the target position, upon which the instant reward r_2 is received. Obviously, a reward function of this design suffers from an obvious problem: the reward values are too sparse. The Agent obtains reward feedback only when experiencing a collision or reaching the target position; under all other states it receives no feedback, so it wanders continuously and can hardly converge. Therefore, the reward density needs to be reasonably increased.
In the reward function designed in this study, any action of the Agent obtains an instant reward. The Agent receives a penalty −r_1 when colliding, a positive reward r_2 when approaching the target, a negative reward −r_3 when moving away from the target, and the final reward r_4 when reaching the target position. In the path planning task, the Agent needs to reach the target location without collision. Reaching the target location is the ultimate goal and the main task, so the reward value r_4 is set to the maximum. The setting of r_1 is to avoid collisions and can be regarded as a secondary task, so it is smaller than r_4. The settings of r_2 and r_3 prevent the reward values from being too sparse for the Agent to reach the target location, playing an auxiliary role; therefore, the values of r_2 and r_3 are set about two orders of magnitude smaller than r_4 and r_1. The reward function with symmetry designed in this study is shown in Equation (34):

r = \begin{cases} -r_1, & \text{collision} \\ r_2, & \text{close\_to\_target} \\ -r_3, & \text{far\_from\_target} \\ r_4, & \text{get\_target} \end{cases}  (34)
In Equation (34), r_1, r_2, r_3 and r_4 are all positive; close_to_target means approaching the target position, and far_from_target means moving away from the target position.
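A minimal sketch of a reward function with the structure of Equation (34); the distance-based helper arguments are our own illustration of close_to_target/far_from_target, and the numeric defaults are the values reported in the parameter-setting section.

def reward(collision, got_target, d_now, d_prev,
           r1=20.0, r2=0.0, r3=0.2, r4=30.0):
    # Reward with the structure of Equation (34). d_now and d_prev are
    # illustrative helper arguments: the agent-to-target distances after and
    # before the move, used to decide close_to_target / far_from_target.
    if got_target:
        return r4                         # main task: reaching the target
    if collision:
        return -r1                        # secondary task: avoiding collisions
    return r2 if d_now < d_prev else -r3  # dense auxiliary shaping reward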
Algorithm Implementation
The basic idea of the proposed SARE-Q algorithm is as follows: the self-adaptive reinforcement-exploration strategy is used as the behavior selection strategy of the Q-Learning algorithm to improve the environmental exploration efficiency and better balance the exploration-utilization trade-off; meanwhile, the optimized reward function is combined to enhance the agent-environment interaction efficiency and improve the performance of the original algorithm. The algorithm pseudocode is shown in Algorithm 1:
Initialization:
  Initialize the minimum exploration factor ε_min, the maximum number of iterative episodes T_max_episodes, the maximum step size of a single episode T_max_steps, the learning rate α, the reward attenuation factor γ, the utility-trace attenuation factor λ, the exploration incentive value e, the self-adaptive exploration factor β, the utilization factor i of the success rate, and the utilization factors µ and ϕ of access times. Set the terminal state set S_T. For each state s ∈ S and action a ∈ A, randomly set the initial value of the Q-Table. Initialize the T-Table of state access times and the behavior utility trace table (E-Table).
Loop over episodes (for episode = 1, 2, 3, ..., episode < T_max_episodes):
  Initialize the initial state s
  Update the basic exploration factor ε_t
  Loop over steps (for step = 1, 2, 3, ..., step < T_max_steps):
    Update the exploration factor ε_cur for the present state
    Choose an action a according to the self-adaptive reinforcement-exploration strategy and the Q-Table
    Execute a, observe the instant reward r and the next state s'
    Update the Q-Table by Equation (25), and update the E-Table and T-Table
    s ← s'; end the episode when s is a terminal state in S_T
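A compact runnable skeleton of Algorithm 1, reusing the helper sketches above; the gym-style environment interface and the linear stand-in schedule for Equations (30)/(31) are assumptions, and the E-Table update of Equation (28) is elided for brevity.

import numpy as np

def sare_q(env, n_states, n_actions, episodes=1000, max_steps=100,
           alpha=1.0, gamma=0.9, eps_min=0.01):
    # Assumes a gym-style env whose reset()/step() work with integer state
    # indices, and reuses q_update, action_probabilities and current_epsilon
    # from the sketches above.
    Q = np.random.rand(n_states, n_actions)   # randomly initialized Q-Table
    E = np.zeros(n_actions)                   # behavior utility trace (E-Table)
    T = np.zeros(n_states)                    # state access counts (T-Table)
    for ep in range(episodes):
        s = env.reset()
        # Stand-in schedule for Equations (30)/(31), which adapt the decay:
        eps_t = max(eps_min, 1.0 - ep / (0.8 * episodes))
        for _ in range(max_steps):
            eps_cur = current_epsilon(eps_t, T[s])
            if np.random.rand() < eps_cur:    # explore, biased by the E values
                a = int(np.random.choice(n_actions, p=action_probabilities(E)))
            else:                             # exploit the Q-Table greedily
                a = int(np.argmax(Q[s]))
            s_next, r, done, _ = env.step(a)
            Q = q_update(Q, s, a, r, s_next, alpha, gamma)
            T[s] += 1
            s = s_next
            if done:
                break
    return Q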
Simulation Experimental Environment and Parameter Setting
In this study, the simulation experiments were carried out in the n × n symmetric grid map environments commonly used in route planning experiments. Three experimental scenarios, multi-obstacle grid map environments consisting of 10 × 10, 15 × 15 and 20 × 20 grids, respectively, were designed to compare the Q-Learning algorithm, the self-adaptive Q-Learning (SA-Q) algorithm proposed in [22], and the SARE-Q algorithm proposed in this study. The simulation environment design is introduced here taking the 20 × 20 grid environment as an example, as shown in Figure 1.

As shown in Figure 1, there were in total 20 × 20 = 400 grids. The grey grids represented obstacles, with eight different obstacles set in total. The green grid was the initial position of the Agent, with coordinates (2,14); the blue grid was the target position of the Agent, with coordinates (17,6); and the red circle was the Agent itself. The initial position and target position of the Agent were fixed, and the Agent could move only within the white grids, since neither obstacles nor the boundary could be entered.

The Agent being located in different grids represented different states. It could move in four directions (up, down, left and right), but only by one grid length in one direction at a time. When the Agent moved in a direction, if the next position was an accessible grid, it was shifted to that grid; if the next position went beyond the edge or was blocked by an obstacle, the Agent stayed at its original position.
The experimental simulation platform in this study was as follows: the Windows 10 operating system, an i5-8300H CPU, 8 GB of memory, a 1050Ti graphics card with 4 GB of video memory, conda version 4.8.4, Python version 3.5.6, and the OpenAI Gym simulation platform. The related algorithm parameters were set as follows [23]: initial exploration factor ε_init = 1; minimum exploration factor ε_min = 0.01; through repeated experiments, learning rate α = 1.0; reward attenuation factor γ = 0.9; maximum step size of a single episode Step_max = 100; maximum number of iterative episodes T_max_episodes = 1000; maximum number of exploration episodes 0.8 × T_max_episodes, namely 800; utility-trace attenuation factor λ = 0.75; exploration incentive value e = 1; self-adaptive exploration factor β = 0.1; success-rate utilization factor i = 0.1; access-count utilization factors µ = 0.5 and ϕ = 0.1; and reward-function parameters r_1, r_2, r_3 and r_4 set to 20, 0, 0.2 and 30, respectively.
Simulation Experiment and Result Analysis
In the experimental environment as shown in Figure 1, three different algorithms were used for the route planning, and their typical return curves are displayed in Figure 2, where the x-coordinate denotes the present iterative episode, and y-coordinate represents the total return obtained by each episode.
It could be observed from Figure 2 that in the 20 × 20 grid environment, in comparison with the Q-Learning algorithm and SA-Q algorithm, the Agent could reach the target position earlier by using the SARE-Q algorithm, and the number of times for it to reach the target position was the maximum. From the curve shape, the curves obtained by the Q-Learning algorithm and the SA-Q algorithm were straighter than that obtained by the proposed algorithm after the initial convergence, which is ascribed to the dynamic adjustment of exploration factor ε. For any state not accessed or the state accessed for just a few times, the proposed algorithm would reinforce the exploration nearby such state, thus leading to a great curve fluctuation. However, for the other two algorithms, the exploration factor ε was stabilized at a small value in the later iteration phase, and it almost did not fluctuate after being approximately converged.
The optimal routes obtained by 100 route planning experiments using the three different algorithms are shown in Figure 3. It could be shown that all three algorithms could give the shortest routes, among which the number of turning times of the optimal route given by the Q-Learning algorithm was six, that by the SA-Q algorithm was seven and that by the SARE-Q algorithm was four.
The statistical performance results of the three algorithms executing the route planning task 100 times are listed in Table 1. It can be seen from Table 1 that the average operating time of the SARE-Q algorithm differed little from those of the SA-Q algorithm and the Q-Learning algorithm. The average number of turning times of the proposed algorithm was the smallest, much smaller than those of the Q-Learning algorithm and the SA-Q algorithm. As for the average success rate, the proposed algorithm was 16.3% ahead of the SA-Q algorithm and a considerable 35.8% ahead of the Q-Learning algorithm. In the aspects of average step size and number of times the shortest route was found, all three algorithms might miss the shortest route, but the SARE-Q algorithm was superior to the other two algorithms.
To explore the influence of different grid environments on algorithm performance, the simulation experiments were also implemented in the 15 × 15 and 10 × 10 symmetric grid environments; the corresponding experimental results are shown in Tables 2 and 3. As shown in Tables 1-3, all performance indexes of the three algorithms declined as the number of map grids increased. During this process, the SA-Q algorithm was ahead of the Q-Learning algorithm on most performance indexes, though some of its indexes were lower than those of the Q-Learning algorithm, and it was always superior to the other two algorithms in terms of operating time. Although its operating time was not as good as that of the SA-Q algorithm, all other performance indexes of the SARE-Q algorithm were better than those of the other two algorithms.
Conclusions
The SARE-Q algorithm was proposed in this study to tackle problems of the traditional Q-Learning algorithm such as slow convergence and easy trapping in local optima. Route planning was then simulated on the OpenAI Gym platform, and a symmetric comparison with the Q-Learning algorithm and the SA-Q algorithm verified the superiority of the proposed SARE-Q algorithm. The following conclusions were obtained through the theoretical study and simulation experiments:

1. The problems existing in the Q-Learning algorithm were studied; the behavior utility trace was introduced into the Q-Learning algorithm and combined with the self-adaptive dynamic exploration factor to put forward a reinforcement-exploration strategy, which substitutes for the exploration strategy of the traditional Q-Learning algorithm. The route planning simulations show that the SARE-Q algorithm outperforms, to different extents, the traditional Q-Learning algorithm and the algorithms proposed in other references in terms of average number of turning times, average success rate, average step size, number of times the shortest route is found, and optimal number of turning times of a route.

2. Though of a certain complexity, the environment used in this study is obviously simpler than real environments. Therefore, the algorithm should be explored in environments with dynamic obstacles and dynamic target positions in the future. Meanwhile, the algorithm was verified only through simulation, so a corresponding physical system should be established for further verification. In addition, the parameter selection of the SARE-Q algorithm remains to be further optimized; how to optimize the related parameters through intelligent algorithms is follow-up research content.

3. Restricted by the action space and sample space, the traditional RL algorithm is inapplicable to actual scenarios with very large state spaces and continuous action spaces. With the integration of deep learning and RL, deep RL methods can overcome the deficiencies of traditional RL by virtue of the powerful representation ability of deep learning. Studying deep RL methods and verifying them in route planning is the subsequent research content.
Investigation of Terahertz Emission from BiVO4/Au Thin Film Interface
We demonstrate emission of terahertz pulses from a BiVO4/Au thin film interface, illuminated with femtosecond laser pulses. Based on the experimental observations, we propose that the most likely cause of the THz emission is the Photo-Dember effect caused by the standing wave intensity distribution formed at the BiVO4/Au interfaces.
Introduction
Terahertz (THz) pulses can be generated in non-linear optical crystals, metals, and semiconductors using ultrashort laser pulses that excite currents and polarizations [1,2]. For example, THz radiation can be generated by illuminating thin semiconductor layers deposited on metal surfaces [3,4]. When a femtosecond laser pulse is incident on such a semiconductor-metal junction, a transient current is formed in the Schottky depletion layer of the metal/semiconductor interface, which gives rise to the emission of an electromagnetic transient in the THz range. In the past, THz emission from, mostly, conventional semiconductors like gallium arsenide, silicon, germanium, and some unconventional ones, such as cuprous oxide, has been studied [5].
In general, there are two generation mechanisms which are important in the case of THz generation from semiconductor surfaces and interfaces. When a metal comes into contact with a semiconductor, the metal-semiconductor junction forms either a Schottky, or an ohmic contact depending on the barrier height. Due to the depletion field present near the Schottky barrier, the photoexcited carriers accelerate and form a photocurrent normal to the surface which gives rise to THz emission [6][7][8][9].
In the case of narrow-bandgap semiconductors, where the surface depletion field is weaker, we can have THz emission through the photo-Dember effect. In the photo-Dember effect, the incident light is absorbed near the surface of the semiconductor thin film. When the absorption is strong, more charge carriers are generated near the surface of the semiconductor compared to deeper into the material. As a result, a non-uniform carrier distribution is built up and a carrier gradient is formed. When, in addition, the mobilities of the electrons and holes are also different, they diffuse with different velocities. As a result, a transient dipole is rapidly formed near the surface. This time-dependent dipole, which is parallel to the concentration gradient and perpendicular to the excited surface, gives rise to THz radiation [10,11]. THz emission from the photo-Dember effect has been reported for many different materials, like InAs, InN etc. [12][13][14]. In 2010, Klatt et al. showed THz emission from lateral photo-Dember currents by partially masking the semiconductor surface with a metal layer. When excited with femtosecond laser pulses, this gives rise to a laterally non-uniform carrier distribution, which emits THz radiation [15].
The study of THz pulses emitted from semiconductors and semiconductor/metal interfaces gives us information about the carrier dynamics and the carrier transport properties in the semiconductors. In cases when the THz generation mechanism is initially not known, careful examination of the properties of the emitted waveforms can help us discover the source of the emission.
Here, we present results on the emission of THz radiation from the material bismuth vanadate (BiVO4), a promising material for solar water splitting and photocatalysis [16,17], after illumination with femtosecond laser pulses. We find that the emitted THz amplitude increases linearly with the incident pump power, indicating a second-order nonlinear optical process as the source of the emission. We examine the topography of the deposited BiVO4 thin films using atomic force microscopy (AFM) and investigate the optical properties using reflection spectroscopy. Samples with an insulating SiO2 layer between the BiVO4 and the gold are found to emit THz signals comparable in strength to samples without such a layer. This indicates that built-in electric fields, associated with Schottky barrier formation, are less likely to be the source of the emission. Instead, we propose the longitudinal photo-Dember effect as the generation mechanism responsible for the THz emission.
Sample Preparation
For the sample, a 100-nm-thick gold film was deposited on a glass substrate using electron beam evaporation; for adhesion purposes, a 10-nm-thin chromium layer was deposited first. Then, BiVO4 layers of different thicknesses (20-300 nm) were deposited on top of the gold film using a spray pyrolysis technique, which is discussed in detail elsewhere [18,19]. Due to its high visible light absorption and high chemical stability, BiVO4 acts as an efficient catalyst and splits water into hydrogen and oxygen upon illumination. Apart from its high visible-light photoactivity, it has many other interesting properties, such as ferroelasticity, photochromism and ionic conductivity [20,21]. BiVO4 prepared using the spray-pyrolysis method is reported to be monoclinic and an n-type semiconductor [22,23]. Among the three available crystal phases of BiVO4, monoclinic BiVO4 (m-BiVO4) is an important material with many applications: it is a wide-bandgap semiconductor with a bandgap of 2.4 eV (≈520 nm), and it exhibits much higher photocatalytic activity than the other polymorphs [24][25][26]. Since the bandgap of BiVO4 corresponds to around 520 nm (2.4 eV), it should ideally have little or no absorption of the 800 nm (1.5 eV) pump light. However, when 800 nm pump light is incident on the BiVO4 film at a 45° angle of incidence, a significant amount of absorption is observed, presumably due to impurities/defects introduced during preparation. The amount of absorption is not exactly known, since some scattering of the light also occurs, which may give rise to an additional, apparent absorption. During the deposition of BiVO4, the sample (glass substrate coated with a gold thin film) is heated at 450 °C for around 2 h. As a result of the heating, the surface of the bare gold becomes rough and discontinuous, and large islands form. In Fig. 1a, we show a scanning electron microscope (SEM) image of a 100-nm-thick gold film after heating at 450 °C for around 2 h. Heating the sample while spraying leads to drying and crystallization of BiVO4. In Fig. 1b, we show the surface of a deposited BiVO4 film, which exhibits a large surface roughness.
THz Generation and Detection Setup
The experimental setup is schematically shown in Fig. 2. We use a Ti:Sapphire oscillator with a center wavelength of 800 nm, a pulse duration of 50 fs, an average power of 800 mW and a repetition rate of 11 MHz. When ultrashort pulses from the laser oscillator are incident on the sample at a 45° angle of incidence, THz emission is observed. A pair of gold-coated parabolic mirrors is used for collecting, collimating, and finally focusing the THz radiation onto a 500-μm-thick zinc telluride (ZnTe) (110) electro-optic detection crystal. The electric field of the THz radiation induces a small birefringence in the electro-optic crystal. At the same time, a part of the same ultrashort laser pulse that was used to generate the THz pulse is incident on the detection crystal. When the linearly polarized probe beam propagates through the electro-optic crystal, it acquires a small elliptical polarization due to the THz-induced birefringence. The probe beam then passes through a Wollaston prism, which separates the beam into two orthogonal components. A differential detector, consisting of two photodiodes, measures the difference in the intensities, which is proportional to the instantaneous THz electric field. By varying the time delay between the pump pulse and the probe pulse, the THz electric field is measured "stroboscopically" as a function of time.
Results and Discussion
In Fig. 3a, we show the measured THz electric field as a function of time, emitted from a 100-nm-thick BiVO4 film deposited on a gold film with a 100 nm average thickness. The amplitude of the emitted THz electric field is roughly 0.2% of the THz emission from a conventional semi-insulating GaAs (100) surface depletion-field emitter and is comparable to the emission from percolated gold [27].

(Fig. 3: (a) measured THz electric field emitted from a 100-nm-thick BiVO4 film deposited on a 100-nm-thick gold film, plotted vs. time; (b) measured THz amplitude plotted as a function of the incident laser power.)

Tight focusing of the pump beam is avoided to prevent any damage to the sample. The emitted THz amplitude increases linearly with the laser power incident on the sample, as shown in Fig. 3b. This suggests that a second-order non-linear optical process is responsible for the THz emission. The BiVO4 layer deposited using the spray pyrolysis method is not uniform. Combined with the large surface roughness, already shown in Fig. 1b, this makes it difficult to get an accurate estimate of the thickness of the BiVO4 layer. This non-uniformity in the thickness gives rise to a variation in the THz amplitude: as we go from one spot to another on the sample, the peak-to-peak amplitude of the emitted THz radiation varies by around ±10%.
In the literature, BiVO4 thin films deposited on a gold surface are reported to show diode-like behavior, suggesting that the BiVO4/Au junction forms a Schottky interface [28]. In Fig. 4a, we schematically show the energy band bending between gold and BiVO4, which is an n-type semiconductor. In order to determine whether the depletion field is responsible for the THz generation in our case, we included SiO2 dielectric layers of varying thickness between the BiVO4 layer and the gold layer. Due to the SiO2 layer, the carrier transport between gold and BiVO4 is strongly reduced, which should hinder the formation of a depletion field. However, during the deposition of BiVO4, the heating risks destroying the ultrathin SiO2 layer; in that case, the BiVO4 film could again come into direct contact with the gold layer and form a Schottky junction. To check whether the SiO2 layer is still intact after heating for 2 h, we inspect the samples using their current-voltage (I-V) characteristics. In Fig. 4b, we show the I-V curves for the bare gold thin film and the gold thin film with a 20 nm SiO2 layer on top. We measure a significant current for the bare gold thin film but very little current for the SiO2/Au layers, which confirms that the SiO2 layer is not destroyed and remains intact even after heating.
Moreover, the sample was also characterized using scanning electron microscopy (SEM). In Fig. 4c, we show the SEM image of the gold thin film coated with a 20 nm SiO2 layer, after heating at 450 °C for 2 h. Interestingly, when a 20-nm-thin SiO2 layer is present on top of the gold, the surface of the annealed sample is much smoother than the surface of the annealed bare gold film. This gives a clear indication that the SiO2 layer sandwiched between the gold thin film and the BiVO4 thin film remains intact even after heating. In Fig. 4d, we compare the amplitude of the THz radiation emitted from BiVO4/Au and from BiVO4/SiO2/Au samples with a silica layer thickness of 5 nm. We observe that the THz amplitude remains largely unaffected by the inclusion of the thin SiO2 layer. Similar results are observed with thicker layers of silica sandwiched between the gold and the BiVO4. This is shown in Fig. 4e, where we plot the measured THz electric field emitted from an additional set of three BiVO4/SiO2/Au samples as a function of time, with silica thicknesses of 5, 10, and 20 nm. The above results make it less likely that the THz generation is due to carrier acceleration in the depletion field associated with the BiVO4/Au Schottky interface. Excitation of BiVO4 thin films deposited on a glass substrate with femtosecond laser pulses does not produce any measurable THz emission, which excludes the possibility of THz emission due to a surface depletion field or the surface photo-Dember effect [29][30][31][32]. We propose a new generation mechanism based on the longitudinal photo-Dember effect; the mechanism of a typical longitudinal photo-Dember effect is shown schematically in Fig. 5a. In a typical longitudinal photo-Dember effect, electron-hole pairs are generated in the vicinity of a semiconductor surface by photo-excitation, forming a concentration gradient. Due to the difference in mobilities, electrons and holes move with different velocities; as a result, a dipole perpendicular to the surface is formed, which emits THz radiation. Alternatively, the longitudinal photo-Dember effect may be realized in a slightly different way. When 800 nm laser light is incident on the BiVO4 thin film deposited on gold, the interference of the incident light and the light reflected from the gold surface forms a standing wave. This standing wave has a low intensity at the metal surface, a consequence of Maxwell's boundary conditions, and an increasing intensity further away from the metal surface. Because of this, the absorption is higher near the BiVO4/air interface and, as a result, more carriers are generated there than near the BiVO4/Au interface. In this way, we get a concentration gradient which, combined with the difference in the mobilities of electrons and holes, gives rise to a THz-emitting transient dipole. In BiVO4, the hole mobility is higher than the electron mobility [33] (in the typical photo-Dember effect of Fig. 5a, the electron mobility is greater than the hole mobility). When the optical thickness of the beam path within the BiVO4 approaches λ/4, the layer begins to act like an antireflection coating, trapping more light inside the layer and giving rise to more absorption, as we recently observed for several semiconductors on gold substrates [5].
Assuming that the metal behaves like a perfect metal, we calculate that the BiVO4 thickness for which the coating acts as an antireflection coating at 800 nm is about 71 nm, assuming a refractive index of 2.9 for the BiVO4. This is thinner than the thickness (100 nm) at which we observe a maximum in the emitted THz amplitude (discussed below). The most likely explanation is that the deposited BiVO4 layer thickness is only approximately known and might differ from 100 nm.
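The quoted ~71 nm can be reproduced with the oblique-incidence quarter-wave condition d = λ / (4·sqrt(n² − sin²θ)); this formula choice is our assumption, since the paper does not spell out its calculation. A short Python check:

import numpy as np

# Quarter-wave (antireflection) thickness of BiVO4 on a perfect metal for
# 800 nm light at 45 degrees incidence; sqrt(n^2 - sin^2(theta)) is the
# effective index along the surface normal after refraction (Snell's law).
wavelength_nm, n_film, theta = 800.0, 2.9, np.radians(45.0)
n_eff = np.sqrt(n_film**2 - np.sin(theta)**2)
d_quarter = wavelength_nm / (4.0 * n_eff)
print(f"quarter-wave thickness: {d_quarter:.1f} nm")   # about 71.1 nm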
To further test whether the standing-wave induced photo-Dember effect provides a plausible explanation for the THz emission, we have also studied samples in which the gold layer has been replaced by indium tin oxide (ITO). ITO is a conducting oxide which reflects THz light and transmits light with a wavelength of 800 nm quite well. In the absence of the gold layer, multiple reflections are strongly reduced and no significant charge carrier gradient is formed. As a consequence, no THz emission is observed when the gold layer is replaced by ITO layer. This provides supporting evidence for our proposed generation mechanism of THz radiation from the BiVO 4 /Au interfaces.
In Fig. 6, we show the amplitude of the THz radiation emitted from the BiVO4/Au interface as a function of the thickness of the BiVO4 layer. As the thickness of the BiVO4 thin film increases, the THz emission initially grows, peaks around 100 nm, and then decreases. The results suggest that increasing the BiVO4 thickness increases the absorption of pump light, as expected since the interaction length of the pump light with the BiVO4 increases (Lambert-Beer law). More absorption leads to the generation of more charge carriers and, as a result, the THz amplitude increases. As the BiVO4 thickness is increased further, the absorption becomes high and less light reaches the gold surface.
As a result, the standing wave is much less pronounced and so the THz amplitude decreases again.
Conclusion
In conclusion, we demonstrate THz emission from BiVO4/Au thin film interfaces and investigate the possible generation mechanisms of the emitted THz radiation. Based on the experimental results and observations, we propose that the longitudinal photo-Dember effect is the mechanism responsible for the THz generation.
Visualization Method of Key Knowledge Points of Nursing Teaching Management System Based on SOM Algorithm and Biomedical Diagnosis
The traditional nursing teaching knowledge point recommendation algorithm based on collaborative filtering has difficulty coping with data sparsity, while the traditional recommendation algorithm based on matrix decomposition scales poorly on high-dimensional data, and both determine their recommendation results only from the predicted score, resulting in low recommendation accuracy. In view of this, a nursing teaching knowledge point recommendation method based on a SOM neural network and a ranking factorization machine is proposed. Firstly, the SOM neural network is used to cluster users based on their academic background information; then, the partial order relation over nursing teaching knowledge points is constructed from users' explicit and implicit web access behavior; finally, with the factorization machine as the ranking function, characteristic information such as users' academic background, web access behavior, borrowing records, and nursing teaching introduction texts is modeled, and a pairwise ranking learning algorithm is used to recommend nursing teaching knowledge points accurately. Experimental results show that the proposed method can effectively alleviate the data sparsity problem and improve the accuracy and efficiency of recommendations.
Introduction
With the continuous advancement of the digital construction of nursing teaching knowledge points, the number of electronic nursing teaching knowledge points has increased sharply, leading to problems such as information overload and cognitive loss when users search for nursing teaching knowledge points [1]. Therefore, how to provide users with nursing teaching knowledge point recommendation services according to their preferences has become an important problem to be solved to improve the quality of personalized nursing teaching knowledge point services. Most existing personalized nursing teaching knowledge point recommendations are realized by the traditional recommendation method based on user collaborative filtering. Its basic principle is to find the nearest neighbor users similar to the target user by calculating the similarity between users, then predict the target user's score for each nursing teaching knowledge point from the historical score data of the similar users, and recommend nursing teaching knowledge points based on the predicted score. However, in the case of sparse data, a user-based collaborative filtering algorithm performs poorly [2]. Therefore, most scholars are committed to improving the above algorithms. For example, Song Chuping integrates reader characteristics and nursing teaching characteristics into the user similarity calculation to improve recommendation accuracy [3]. However, as the number of users increases, the amount of user similarity calculation grows, reducing recommendation efficiency. Therefore, the SOM algorithm is used for clustering: by calculating the similarity between the target user and each clustering center, the matching cluster is found and the nearest neighbor user set is constructed, which reduces the amount of user similarity calculation. However, because the traditional clustering algorithm is affected by the initial K value and the clustering time is long, the accuracy of the clustering results is not high [4].
SOM Structure of Key Knowledge Points of Nursing Teaching Management System

A nursing knowledge map is a specific knowledge base for the nursing field, which includes a series of entities in the nursing field and their associations. There is a precedent for the construction of a nursing knowledge map [5]. Using the technologies of text extraction, relational data conversion, and data fusion, this paper explores the automatic construction method and standardized process of a TCM knowledge map, in order to realize template-based TCM knowledge Q&A and auxiliary prescription based on knowledge map reasoning [6]. The SOM network structure is shown in Figure 1.
A SOM network is a self-organizing feature mapping network. Its basic principle is that, for each input vector, the neuron in the output layer whose weight vector is closest to the input vector wins by receiving the maximum stimulation. Some neurons around the winning neuron are also strongly stimulated due to lateral interaction. The network then performs a learning operation: the winning neuron and its surrounding neurons modify their own weight vectors to move toward the input vector [7]. As more vectors are submitted, each neuron moves within the input space, settling close to the input vectors it matches best, so that the output layer yields a classification. According to experience, the SOM neural network computes best when all input and output values lie between 0 and 1 [8]. The weights of each neuron in the network are assigned random numbers in the [0,1] interval as initial values w_ij, a large neighborhood radius N is set, and the number of learning neurons t is set. A training pattern x(t) = (x_1(t), x_2(t), ..., x_n(t)) is randomly selected and provided to the input layer of the network. The neuron best matching the input vector is selected as the winning neuron c; if the Euclidean distance is adopted, c is

c = \arg\min_i \| x(t) - w_i \|

The weights of the neurons in the neighborhood are then updated to move toward the input vector. Assuming that an n-dimensional input eigenvector can be expressed as x = (x_1, x_2, ..., x_n) ∈ R^n, with y_i as the target prediction value corresponding to the input eigenvector, an FM can use decomposed interaction parameters to model all nested interactions of the n input variables of x in d dimensions [9]. When d = 2, the factorization machine model can be expressed as follows:

\hat{y}(x) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \hat{w}_{ij}\, x_i x_j

where w_0 is the global bias and w_i is the unary interaction parameter of the input variable x_i; the decomposition parameter ŵ_ij between v_i and v_j is defined as follows:

\hat{w}_{ij} = \langle v_i, v_j \rangle = \sum_{f=1}^{k} v_{i,f}\, v_{j,f}

where k is a hyperparameter that defines the decomposition dimension. Suppose there is an input space X ∈ R^n, where n is the number of features, and an output (scoring) space in which the labels Y = {r_1, r_2, ..., r_q} represent the user's preference order for items, maintaining the fixed order r_q ≻ r_{q−1} ≻ ... ≻ r_1, where ≻ represents the preference relationship. In order to determine the order relationship between items, a set of ranking functions f ∈ F must be selected such that each candidate function f in F determines the following partial order relationship, namely:

x_i \succ x_j \iff f(x_i) > f(x_j)

Suppose that on X × Y there is a set of ranking instances S = {(x^{(i)}, y^{(i)})}_{i=1}^{T}, where y^{(i)} is the preference ranking and T is the number of instances. The ranking task is to find an optimal function f* ∈ F that minimizes the loss function over the given ranking instances; here, the FM function defined above is selected as the ranking function. Any instance pair and its order relationship are converted into a new instance, which is given a new label. Assuming that p and q respectively represent the instances in an instance pair, and y_p and y_q represent their rankings, the pair (p, q) is labeled

z = \begin{cases} +1, & y_p \succ y_q \\ -1, & y_p \prec y_q \end{cases}

According to the above method, a new training set S' = {(p^{(t)}, q^{(t)}, z^{(t)})}_{t=1}^{L} can be created from the given training set S, where L is the number of newly constructed instances.
Thus, the hinge loss function of the $t$-th instance pair in the training set $S'$ is
$$\ell^{(t)} = \left[1 - z^{(t)}\left(f(p^{(t)}) - f(q^{(t)})\right)\right]_+,$$
where the subscript "+" represents the positive part; $f(p^{(t)}) - f(q^{(t)})$ can be calculated by the FM function within linear time complexity $O(k \cdot n)$. Define a global loss function on the whole training set $S'$:
$$L(S') = \sum_{t=1}^{L} \ell^{(t)} + \lambda \|\Theta\|^2,$$
where $\Theta$ denotes the model parameters and $\lambda$ the regularization parameter. With suitable initial parameters, knowledge point feature extraction based on the above algorithm can better organize and display the knowledge context.
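A hedged sketch of the pairwise hinge loss and the regularized global loss, reusing the fm_predict function from the previous sketch (again illustrative, not the paper's implementation; `pairs` is assumed to be an iterable of (p, q, z) triples with p and q feature vectors):

```python
import numpy as np

def pairwise_hinge(f_p, f_q, z):
    """Hinge loss for one instance pair (p, q) with order label z in {+1, -1}."""
    return max(0.0, 1.0 - z * (f_p - f_q))

def global_loss(pairs, w0, w, V, lam):
    """Sum of pairwise hinge losses over S' plus L2 regularization on all parameters."""
    data = sum(pairwise_hinge(fm_predict(p, w0, w, V),
                              fm_predict(q, w0, w, V), z)
               for p, q, z in pairs)
    reg = lam * (w0 ** 2 + np.sum(w ** 2) + np.sum(V ** 2))
    return data + reg
```

In practice this loss would be minimized by stochastic gradient descent over the instance pairs.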
Characteristic Identification of Knowledge Points in Personalized Nursing Teaching.
The purpose of personalized nursing teaching knowledge point recommendation based on the SOM neural network and the ranking factorization machine model is to cluster users accurately by exploiting the nonparametric character and high accuracy of the SOM neural network, and then to use the factorization machine, which easily integrates high-dimensional data, as a ranking function to evaluate the academic background, quality, and accuracy of users in the same cluster. A variety of characteristic information, such as nursing teaching introduction texts and web access behavior, is used to build the model, and a pairwise ranking learning algorithm trains it, so as to realize accurate sorting and recommendation of nursing teaching knowledge points. The flow chart of sorting recommendations based on SOM and RFM is shown in Figure 2.
Step 1. Initialize the network; that is, set up the SOM network and initialize each training parameter [10, 11]. The values to be initialized are: the connection weights $W$, which are given random numbers in the $[0, 1]$ interval; the initial value $\eta(0)$ of the learning rate $\eta(t)$ (with $0 < \eta(t) < 1$); and the initial value $N(0)$ of the neighborhood $N(t)$ centered on the winning neuron. Calculate the Euclidean distance between each weight vector $w_j = (w_{1j}, w_{2j}, \ldots, w_{nj})$ and the input sample $x = (x_1, x_2, \ldots, x_n)$, and select the neuron with the minimum distance as the winning neuron [12]. Adjust the connection weights $W$ and update the neighborhood $N(t)$ of the output layer. The update formula for the connection weight between neuron $i$ in the input layer and neuron $j$ in the output layer is
$$w_{ij}(t + 1) = w_{ij}(t) + \eta(t)\left[x_i(t) - w_{ij}(t)\right], \quad j \in N(t),$$
where $w_{ij}(t + 1)$ represents the connection weight between input neuron $i$ and output neuron $j$ at time $t + 1$, and $N(t)$ is the neighborhood range centered on the winning neuron at time $t$; then update the learning rate $\eta(t)$ and the neighborhood $N(t)$ (a minimal training-loop sketch follows this paragraph). The knowledge points have an inevitable sequence in the learning process. Whether a knowledge point can currently be learned often depends on whether other knowledge points have been learned; that is, the latter are the preparatory knowledge of the former [13]. Before learning a certain knowledge point, one must first learn another related knowledge point, and the relationship between the two is the precursor relationship [13]. After a certain knowledge point has been learned, the knowledge points directly supported by it form a successor relationship with it, as shown in Figure 3. As a knowledge system, there is an inherent relationship of mutual restriction and mutual influence between concepts and principles [14-16]. The relationship reveals that there is a network structure between knowledge points and points out that knowledge is composed of a group of interconnected and interacting nodes. Association is conducive to the mastery of knowledge and the formation of a knowledge system. The association relationships between knowledge points can be divided into two categories: one-to-one association (1 : 1), which means that one knowledge point corresponds to only one other knowledge point, and one-to-many association (1 : m), which indicates that a knowledge point can be associated with multiple knowledge points; the graph indicates the one-to-many associations between knowledge points. Figure 4 shows the characteristics of the tree structure of nursing knowledge points.
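A minimal sketch of the SOM training loop described above (illustrative, not the paper's code): weights initialized at random in [0, 1], the winner chosen by Euclidean distance, and a Gaussian neighborhood that, together with the learning rate, decays over time:

```python
import numpy as np

def train_som(X, grid_w, grid_h, epochs=100, eta0=0.5, sigma0=None, seed=0):
    """Train a SOM on data X of shape (m, n); returns weights (grid_w*grid_h, n)."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid_w * grid_h, X.shape[1]))      # random init in [0, 1]
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])
    sigma0 = sigma0 or max(grid_w, grid_h) / 2.0
    n_steps = epochs * len(X)
    for t in range(n_steps):
        x = X[rng.integers(len(X))]                    # random training pattern x(t)
        c = np.argmin(np.linalg.norm(W - x, axis=1))   # winner c by Euclidean distance
        eta = eta0 * np.exp(-t / n_steps)              # decaying learning rate eta(t)
        sigma = sigma0 * np.exp(-t / n_steps)          # shrinking neighborhood N(t)
        d2 = np.sum((coords - coords[c]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))           # neighborhood function around c
        W += eta * h[:, None] * (x - W)                # move weights toward the input
    return W
```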
Knowledge points are very important for teaching activities. After completing the division of knowledge points and determining the relationships between them, we should consider how to organize teaching according to knowledge points in a specific knowledge field, because teaching is composed of the teaching of knowledge points [17-19]. The knowledge point structure model can better organize and describe the content of knowledge fields. The knowledge point structure model is basically a hierarchical tree structure. In the hierarchy of relationships, the parent-child relationship and the sibling relationship are the two most important; they serve as the basis for building the tree structure of knowledge points, while the association relationship enriches the content of the knowledge tree. Using these two relationships to describe knowledge points constitutes a knowledge point structure model. On the basis of this model, the knowledge points of a specific discipline can be listed and organized according to the relationships between them, and the knowledge point structure diagram of the discipline can be constructed. The knowledge in textbooks is generally arranged in linear order; in fact, the relationships between knowledge points are complex [20, 21]. To learn a knowledge point, one must first have certain basic knowledge (a precursor relationship), that is, master some knowledge points; to learn those knowledge points, one may need to master still others. In this way, all knowledge points and the relationships between them constitute a knowledge point network. The knowledge point network is a network composed of several related knowledge points based on their internal relations [19]. The nodes of the network represent knowledge points, and the links between nodes represent the links between knowledge points (a minimal data-structure sketch follows this paragraph). After learning a knowledge point, students should also understand the "environment" of the knowledge point, that is, the content preceding and following it and on every side of it, so as to mark the "status" of the knowledge, make students aware of the network structure between knowledge points, and establish a networked consciousness. Through this network diagram, teachers and students can gain a clear understanding of, for example, certain theorems of solid geometry [22, 23]. In the teaching process, if we apply the knowledge point network when explaining knowledge points to students, it helps students understand the knowledge structure, build their own cognitive systems, and facilitates the transfer of memory and knowledge skills, because the knowledge point network contains information about the learning path, and this learning path should be reasonable and optimal for students. In a word, this paper deeply analyzes the relevant content of knowledge point representation and establishes a knowledge point model that is suitable for teaching in form, reflects the connotation of knowledge points in content, and helps realize the teaching process, which provides a new perspective for teachers to design teaching according to the attributes and laws of knowledge points.
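As a minimal sketch of the precursor/successor network described above (the class and the example knowledge points are hypothetical, purely for illustration):

```python
from collections import defaultdict

class KnowledgePointNetwork:
    """Directed graph of knowledge points: edges encode precursor -> successor."""

    def __init__(self):
        self.successors = defaultdict(set)   # point -> points it directly supports
        self.precursors = defaultdict(set)   # point -> points it depends on

    def add_precursor(self, before, after):
        """Record that 'before' must be mastered before 'after'."""
        self.successors[before].add(after)
        self.precursors[after].add(before)

    def learnable(self, point, mastered):
        """A point can be studied once all of its precursors are mastered."""
        return self.precursors[point] <= set(mastered)

# Hypothetical example: vital-signs measurement precedes shock assessment.
net = KnowledgePointNetwork()
net.add_precursor("vital signs", "shock assessment")
print(net.learnable("shock assessment", ["vital signs"]))  # True
```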
Realization of Knowledge Point Context Visualization.
The construction of a knowledge map generally proceeds in one of two ways: top-down or bottom-up. The top-down construction method is based on ontology and takes highly structured encyclopedias and similar websites as data sources, extracting the ontology and rule constraints and filling them into the knowledge base. Figure 5 shows the technical architecture of the knowledge map. The three steps of knowledge extraction, fusion, and processing shown in the box are the core of knowledge map construction. It can be seen from the figure that knowledge is easily extracted from structured data because of its high degree of standardization; semistructured and unstructured data are poorly standardized, and knowledge is difficult to obtain from them directly. Therefore, it is necessary to extract the entities and associations of knowledge with the help of a series of operations such as attribute extraction, relationship extraction, and entity extraction, and then store them in the knowledge base. The construction process of a knowledge map is a continuous cycle. The iterative process can be roughly divided into three stages: knowledge extraction, knowledge fusion, and knowledge processing.
In the traditional teaching of nursing management, the teaching goal is above all else: it is both the starting point and the destination of the teaching process. However, in the network teaching classroom, because students are emphasized as cognitive subjects and active constructors of meaning, students' meaning construction of knowledge is regarded as the ultimate goal of the whole learning process. The whole teaching process starts from a situation conducive to students' meaning construction and closely surrounds the center of "meaning construction." Whether in students' independent exploration, cooperative learning, or teachers' guidance, all aspects of the learning process should serve this center and be conducive to completing and deepening the meaning construction of the learned knowledge. Combined with the characteristics of nursing management and based on constructivism theory, this paper constructs the structure of nursing management teaching modes in a network environment. The specific operation flow is shown in Figure 6.
Acquiring knowledge from a well-structured relational database or a third-party knowledge base is also a good way to build a knowledge map.
Merging the ontology of a third-party knowledge base into one's own library is one approach. As another important knowledge source for knowledge mapping, relational databases can usually use the resource description framework (RDF) as a data model and be integrated into a knowledge map. At present, a considerable number of open-source tools support transforming data in structured relational databases into RDF triples to realize the construction of a knowledge map (a sketch follows this paragraph). This view is a local view, used only to show the association between the outcome entity and the interaction entities in a specific domain. In the above view, it can be seen that, for a certain nursing symptom, the associated nursing measures are highly concentrated in a few fields, so visual recommendation can be made among nursing measures in the same field. Therefore, we hope to design a view that can not only provide a more detailed expression of information but also reflect the hierarchical information of nursing measures and their fields. The package layout view has the ability for hierarchical expression and can classify and display data according to categories. However, the package view is not suitable for expressing network-class information. Atlas data is a kind of network data. Network data can be expressed by force-directed diagrams, radar diagrams, chord diagrams, and so on. Among them, the force-directed diagram is a node-link diagram, while the package diagram is a content-filling diagram; used together as a mixed view, they can achieve a complementary visual effect. Therefore, we consider combining the two views. At the view level, the system interface is mainly divided into three parts. On the right is a general introduction to the atlas data, divided into the data source, data description, overall data analysis, and node selection details. At the bottom is the system toolbar, which switches the interaction mode of the system. The middle part is the data view, used for data display and data view interaction.
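As a minimal sketch of the relational-to-RDF mapping mentioned above, using the open-source rdflib library; the namespace, table rows, and predicate name are all hypothetical:

```python
from rdflib import Graph, Namespace

NURS = Namespace("http://example.org/nursing/")  # hypothetical namespace

# Hypothetical rows from a relational table linking symptoms to measures.
rows = [("pressure_injury", "reposition_every_2h"),
        ("pressure_injury", "skin_assessment")]

g = Graph()
for symptom, measure in rows:
    # Each row becomes one RDF triple: (subject, predicate, object).
    g.add((NURS[symptom], NURS.hasMeasure, NURS[measure]))

print(g.serialize(format="turtle"))  # rdflib >= 6 returns a str here
```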
Analysis of Experimental Results
In order to verify the effectiveness of the proposed SOM method, comparative experiments are carried out. FM is the traditional factorization machine model, used to test whether SOM based on ranking learning achieves higher accuracy than traditional FM; BPRMF is a matrix factorization model based on the pairwise ranking method, used to judge the influence of ranking learning algorithms on recommendation accuracy; and RSVM is a support vector machine based on the pairwise ranking method, used to test whether FM, as a ranking function, can express user preferences more accurately than SVM, as shown in Table 1.
For ranking recommendation, because users pay more attention to the recommendation quality of the top-ranked items in the recommendation list, this study selects two ranking position-sensitive evaluation indicators: mean average precision (MAP) and normalized discounted cumulative gain (NDCG). MAP is defined as follows:
$$\mathrm{MAP} = \frac{1}{N_a} \sum_{r=1}^{N} \frac{n_r}{r}\, l(r),$$
where $r$ is the ranking ordinal; $N$ is the number of recommended items; $n_r$ is the number of relevant items ranked at or above $r$, so that $n_r / r$ is the precision at truncation rank $r$; $l(r)$ is a binary relevance function for rank $r$ (1 if relevant, 0 if not); and $N_a$ is the total number of relevant items. The larger the MAP value, the higher the ranking of items related to user preferences, and the better the overall ranking effect of the algorithm. NDCG is defined as follows:
$$\mathrm{NDCG@}p = Z_p \sum_{i=1}^{p} \frac{2^{K(i)} - 1}{\log_2(i + 1)},$$
where $p$ represents the position of the item in the list, $Z_p$ is the normalization factor, and $K(i)$ represents the correlation level between the item at position $i$ and the user's preference. The value range of NDCG is $[0, 1]$; the larger the value, the more consistent the ranking results are with the user's interests and preferences. The entity information comes from nursing guides: by crawling the nursing guides, we extract sets of nursing entities and the sets of connecting edges between entities, and construct an entity network in the nursing field. The main extracted entity information and the association information between some entities are shown in Table 2. The traditional algorithm cannot be applied directly to the research object of this paper. To verify the effectiveness of the improved algorithm, this section compares the clustering quality of the improved k-medoids algorithm with that of the improved SOM algorithm. The interclass and intraclass distances of the improved SOM algorithm and the k-medoids algorithm are shown in Table 3. In the experiment, the two algorithms were each run ten times, and the average values of SSE and SSB over the repeated experiments were calculated. The experimental results of the two algorithms are compared in Table 4.
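A minimal sketch of the two metrics as defined above (illustrative, not the paper's evaluation code):

```python
import numpy as np

def average_precision(rel):
    """AP over a ranked binary relevance list l(r): mean of the precision n_r / r
    taken at the ranks where l(r) = 1."""
    rel = np.asarray(rel, dtype=float)
    prec_at_r = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return (prec_at_r * rel).sum() / rel.sum() if rel.sum() else 0.0

def ndcg_at_k(gains, k):
    """NDCG@k for graded relevance levels K(i), gain 2^K - 1, log2 discount."""
    gains = np.asarray(gains, dtype=float)

    def dcg(g):
        g = g[:k]
        return np.sum((2.0 ** g - 1.0) / np.log2(np.arange(2, len(g) + 2)))

    ideal = dcg(np.sort(gains)[::-1])
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Example: items at ranks 1, 3, and 4 of a 10-item list are relevant.
print(average_precision([1, 0, 1, 1, 0, 0, 0, 0, 0, 0]))  # ~0.806
```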
From the perspective of stability, the repetition rates in the clustering results of the SOM improved algorithm and the k-medoids improved algorithm are compared. Here, eleven experiments are also carried out on different algorithms to calculate the average repetition rate of different algorithms.
In many experiments and simulations, the clustering results of the improved SOM algorithm achieve a 67.0% repetition rate, similar to the clustering effect of the improved k-medoids algorithm, which shows that the improved SOM algorithm in this paper also has good stability. The following figure shows the MAP@10 and NDCG@10 results of the proposed SOM algorithm and the comparison algorithms on the evaluation indicators.
It can be seen from Figure 7 that the performance of the algorithms varies with the value of $K$. When $K = 15$, the performance of all four algorithms is best. At the same time, SOM obtains the best performance for every value of $K$. This is because, compared with FM, SOM adopts a pairwise ranking learning algorithm, so its performance is better than FM's; compared with BPRMF, SOM can integrate not only users' explicit and implicit feedback information but also the text of the borrowed nursing teaching introductions and the borrowing log information, so its performance is also better than that of the traditional methods. Moreover, SOM's ranking function FM uses interaction parameters rather than independent parameters to model the interactions between features, so SOM achieves better performance, especially in the case of sparse data.
Conclusion
In courses that require a great deal of practice, the implementation of a project-based teaching method can quickly improve students' practical operation abilities; at the same time, the establishment of students' theoretical knowledge systems cannot be ignored. Nursing is a highly practical specialty: it can adhere to the project-based teaching method without leaving students with a weak theoretical foundation further behind, and cultivating practical talents is one way to realize this. The cultivation of operation skills is an important task in nursing teaching. Students' nursing operation level directly affects the effects of clinical practice, the future development of the nursing specialty, and the quality of its practical talents. Years of nursing practice show that nursing teaching management is a systematic, phased, complex, and carefully organized process. In order to ensure the effect of nursing operation teaching, this paper uses a SOM neural network to cluster users according to their academic background information, analyzes the explicit and implicit web access behavior around nursing teaching knowledge points, constructs the partial order relationships of nursing teaching knowledge points, classifies users' academic background and web access behavior, and uses a pairwise ranking learning algorithm to recommend nursing teaching knowledge points accurately. Different teaching aims can then be implemented for each type of student, and students with weak theoretical knowledge can be purposefully integrated so that excellent students and weaker students help each other and learn from each other.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science"
] |
Utopian/Dystopian Visions: Plato, Huxley, Orwell
This paper attempts to theorize two twentieth-century fictional dystopias, Brave New World (2013) and Nineteen Eighty-Four (1984), using Plato’s political dialogues. It explores not only how these three authors’ utopian/dystopian visions compare as types of narrative, but also how possible, desirable, and useful their imagined societies may be, and for whom. By examining where the Republic, Brave New World, and Nineteen Eighty-Four stand on such issues as social engineering, censorship, cultural and sexual politics, the paper allows them to inform and critique each other, hoping to reveal in the process what may or may not have changed in utopian thinking since Plato wrote his seminal work. It appears that the social import of speculative fiction is ambivalent, for not only may it lend itself to totalitarian appropriation and application—as seems to have been the case with The Republic—but it may also constitute a means of critiquing the existing status quo by conceptualizing different ways of thinking and being, thereby allowing for the possibility of change.
INTRODUCTION
We usually imagine utopias as communitarian societies, such as the one proposed in Sir Thomas More's Utopia (2003), yet it is easy to forget that Capitalism also began as a utopian project in early modernity promising unlimited individual and social progress through a combination of unfettered private enterprise and "trickle-down" economics. The early twentieth century subsequently saw totalitarian regimes of all hues offering radical alternatives to parliamentary democracy, enticing or coercing the citizenry to trade in its civil rights and liberties for paternalistic government in a social experiment that cost countless lives. Historically, dystopias have the habit of presenting themselves as utopias to conceal the fact that they are promoting agendas quite contrary not only to their subjects' interests but also to their stated aims. No-one, perhaps, understood this better than George Orwell, who had learned from his experience in the Spanish Civil War that… All historical changes finally boil down to the replacement of one ruling class by another. All talk about democracy, liberty, equality, fraternity, all revolutionary movements, all visions of Utopia, or "the classless society", or "the Kingdom of Heaven on earth", are humbug … covering the ambitions of some new class which is elbowing its way to power. (Orwell, 1947) Orwell's comments come from his 1946 review of James Burnham's The Managerial Revolution (1960) which argued that the two ideologies of the superpowers would one day merge in a global bureaucracy run by technocrats. Besides its relevance to the post-Cold War era of globalized capital where we are told that history has ended with the triumph of Western liberal democracy, 1 Burnham's predictive sociology may be useful in theorizing such technocratic dystopias as Brave New World and Nineteen Eighty-Four, in which ideological conflicts have either been eradicated or are used as an alibi by the ruling elite to enforce a scientific dictatorship on a global scale.
However, more useful in unraveling the complexities of Huxley's and Orwell's utopian/dystopian visions may be Plato's Republic, the first social engineering project in Western culture which set the stage for subsequent discussions of ideal societies in scientific, philosophical, and fictional terms. In the present paper, I begin by arguing that it is difficult to distinguish between fictional utopias and dystopias not only because of the indeterminacy of authorial intent and the irony which typifies the genre, but also because there can be no universal agreement as to what constitutes an ideal society. Moreover, even if we assume that a project like that outlined by Plato in The Republic is both desirable and realizable, the means necessary to carry it out may be deemed unacceptable. What is interesting in the modern utopias/dystopias under investigation is that coercion is seen to be less effective than suggestion and punishment less useful than ideological indoctrination-ideas that we originally find in Plato's Laws (1975, 720d). Indeed, realized political utopias are able to employ both coercion and indoctrination when necessary, while the latter is preferred in today's globalized technocracy which covers up its collectivist nature with a neoliberal and philanthropic façade. In any case, whether we are speaking of militaristic or scientific dictatorships of the left or the right-fictionally represented by Nineteen Eighty-Four and Brave New World, respectively-all forms of cultural as well as sexual expression are deemed to be the preserves of the state and appropriately fettered using a combination of cultural control and biopolitics that was first proposed in Plato's Republic more than two millennia ago. Nevertheless, for all the oppressive potential of utopian/dystopian visions and the totalitarian uses they may be put to, such narratives can also function as important vehicles for cultural critique, offering alternative perspectives on the official narratives or the totalizing rhetoric of a given regime. Thus, more important arguably than the way speculative fiction imagines alternative societies is its position on creative writing or poiesis itself, for to restrict or abolish that would be to deny the very conditions of its existence.
THE PROBLEM OF CLASSIFICATION AND AUTHORIAL INTENTION
Although a utopia may simply be "an imagined society put forward by its author as better than any existing society, past or present" (Morrison, 2007, p. 232), it is in fact a very elusive animal. Firstly, it is defined as much by what it includes as by what it excludes: those sociopolitical realities deemed by its author to be undesirable. Secondly, not only may one person's utopia be another's dystopia, but the categories are always more or less overlapping and difficult to delineate. 2 Thus, as Lyman Sargent originally observed, "[t]he major problem facing anyone interested in utopian literature is the definition, or more precisely, the limitation of the field" (Sargent, 1975, p. 137). Responding to Sargent's challenge, Antonis Balasopoulos has recently enumerated no less than ten different types of utopia/dystopia: 1. Satirical anti-Utopias, 2. Dogmatic fictional anti-Utopias, 3. Dogmatic non-fictional anti-Utopias, 4. Pre-emptive anti-Utopias, 5. Critical anti-utopias, 6. Dystopias of tragic failure, 7. Dystopias of authoritarian repression, 8. Dystopias of catastrophic contingency, 9. Nihilistic dystopias, and 10. Critical dystopias. 3 Of course, as this critic admits, these categories depend largely on the interpretation of the texts to which they are applied and also inevitably converge. Nevertheless, Balasopoulos' nuanced albeit over-schematic typology aptly illustrates how the problem of definition is crucial in any discussion of the subject.
G.R.F. Ferrari, in City and Soul in Plato's Republic, singles out four different types of utopia: "idealistic," "realistic," "ironic," and "writerly" (Ferrari, 2005, pp. 117-18), arguing that Plato's version cuts across all four of them. The Republic presents a society based on consummate reason with the philosopher-king at the top of a more-or-less rigid class structure made up of Guardians, Auxiliaries, and Producers. However, one could argue that, in positing such a conflict-less and static world, the Republic attempts to abolish politics altogether. Given that Plato's magnum opus is considered the founding text of Western political philosophy, this could be something of a bad omen. Brave New World similarly presents a system which, by a combination of eugenics, biochemical control, and "hypnopaedia," has brought about the so-called "ultimate revolution" 4 after which society need not evolve any further. Oceania too, in Orwell's Nineteen Eighty-Four, has reached a kind of historical stasis in which social and personal development is arrested by means of a contrived state of permanent emergency, such as that theorized by Giorgio Agamben in The State of Emergency (2005). Thus, the systems presented in these utopian/dystopian visions can be said to aim at or to have achieved a final solution to humanity's problems, like that envisaged by Socrates when he says that "[t]here will be no end to the troubles of states, or indeed of humanity itself, till philosophers become kings in this world" (Plato, 1987, 473d).
However, a closer reading of the Republic, Brave New World, and Nineteen Eighty-Four reveals that not all the goals of their respective societies have in fact been reached. At the beginning of Book VIII of the Republic, Socrates confesses that the all-wise rulers of Callipolis are unable to calculate the so-called "marriage number" without which it is impossible to maintain the eugenic separation of the different castes that make up the "good" or "beautiful city," resulting in a "chaotic mixing of iron with silver and of bronze with gold" (Plato, 1987, 547a). The irony is that Socrates was married to a proverbially shrewish wife, Xanthippe, and so was unable to create an ideal domestic environment, even as he outlines his vision of the ideal state. Thus, it could be argued that, for all his optimistic rationalism, Plato despairs of being able to control human sexuality: it is the one thing that foils his plans to create a perfectly harmonious social system in which every citizen is content with their place and function. In Huxley's novel too, for all its plentiful supply of hallucinogenic drugs, the World State cannot stop some of its citizens from falling in love and feeling melancholy, or preferring Shakespeare to sex, pain to joy: the irrational element in the human soul cannot be entirely eradicated, it seems. Despite the gloomy picture that Nineteen Eighty-Four paints of the possibility of resistance, Orwell also adds something to the appendix of his novel which throws a spanner in the works of the Party's plans: the translation of such writers as Shakespeare, Milton, Swift, Byron, and Dickens has proved so difficult for the Party, claims the frame narrator, that the final adoption of Newspeak has been postponed until the year 2050. The individual may be doomed in the coming totalitarian Superstate, implies Orwell, but the human spirit as expressed in great works of literature cannot be so easily suppressed. Thus, irony is employed by all three authors to signal gaps and inconsistencies in the official scripts of their respective utopias as well as in their full implementation, suggesting that not all is well in paradise.
The authorial intention behind philosophical and literary utopias/dystopias is invariably difficult to gauge, complicating the way we respond to and classify such works. The famous Orwell critic Bernard Crick notes that Nineteen Eighty-Four has been read as "deterministic prophecy," "science fiction," "a humanistic satire on contemporary events," and as "a total rejection of socialism of any kind" (Crick, 2007, p. 146). Crick's personal preference is to view the novel as a social satire in the Swiftian mode-which is hardly easier to define-adding that "we should no more expect the future to resemble Nineteen Eighty-Four than we should expect to find the islands of Lilliput or Brobdingnag" after reading Gulliver's Travels (Crick, 2007, p. 147). But things are not as simple or as apolitical as this critic suggests. Nineteen Eighty-Four is arguably worse than a dystopian vision of the future. As Richard Voorhees has observed, "[f]ar from being a picture of the totalitarianism of the future 1984 is, in countless details, a realistic picture of the totalitarianism of the present" (Voorhees, 1961, pp. 85-86). Brave New World is even more difficult to categorize, despite the author's meta-fictional elucidations. Huxley referred to his futuristic novel as a "negative utopia," and claimed that he wrote it in revolt against what he called the "horror of the Wellsian Utopia" (Huxley, 1969, p. 438). However, as Christopher Hitchens has observed, Huxley "often held and expressed diametrically opposed views," while in Brave New World "one can often detect strong hints of a vicarious approval of what is ostensibly being satirized" (Hitchens, 2003, p. xii). So is Brave New World a dystopia or a utopia? 5 If, like Orwell, Huxley wanted to tell his readers, "Don't let it happen. It depends on you" (Crick, 1980, p. 395), his work could be unambiguously classified on the basis of authorial intent as a dystopia, or a "Pre-emptive anti-Utopia," according to Balasopoulos' scheme. But if Huxley, like Plato, is outlining what he considers the closest thing to an ideal society, then we would have to classify it as a utopia in disguise, a kind of ideological Trojan horse. The early twentieth century produced many such speculative narratives designed to covertly promote the idea of the World State and act as a vehicle for the social Darwinist agenda of the scientific elite. H.G. Wells' non-fictional Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought (1999) was subtitled "An Experiment in Prophecy," whereas the utopian novel Men Like Gods (2007), which clearly promotes Wells' futuristic creed, was described by the author as a "scientific fantasy" (Wells, 1934, p. x); but what is the difference, one might ask, between a prophecy and a fantasy? Michael Hoffman has called this kind of science fiction "predictive programming," which works by propagating "the illusion of an infallibly accurate vision of how the world is going to look in the future" (Hoffman, 2001, p. 205) that, once ingested on a cognitive level, becomes a self-fulfilling prophecy, subtly conditioning readers to fatalistically accept the vision of the future presented to them.
Prophetic ability can only be verified after the prophesied event, so to indulge in predictions of the future-utopian or dystopian-is to promote a certain ideology and a concomitant kind of social subject. As Louis Althusser famously claimed, "in ideology the real relation is inevitably invested in the imaginary relation, a relation that expresses a will … a hope, or a nostalgia, rather than describing a reality" (Althusser, 2008, p. 234). Exploring this issue further, Balasopoulos has argued for a deconstruction of Mannheim's distinction between ideology as a distortion which "occurs in the interests of preserving 'a certain order'," and utopia as one tending to "shatter the order of things prevailing at the time" (Balasopoulos, 2019, p. 59). 6 However, the rhetorical sleight of hand in which what is merely proposed or imagined is presented as logically incontrovertible or teleologically inevitable is not the preserve of novelists like Aldous Huxley and H.G. Wells who dreamed of building a technocratic Eden on Earth in the twentieth century; philosophers have been known to practice it too. The form of the Platonic dialogue is intended to make it appear dialectical/dialogical rather than prescriptive or sermon-like, and sometimes this is achieved to masterful effect, as in the opening pages of the Republic in which Socrates debates with various sophists on the meaning of justice. However, in some dialogues, such as the Laws, there is little real debate because the conclusions have been drawn beforehand by the Platonic mouthpiece, and the interlocutors are ill-matched. Literary dialogues, on the other hand, may offer more room for discussion because they are not usually committed to promoting a particular political agenda. However, it is interesting to find a comparable false debate taking place between Mustapha Mond and John the Savage on individual freedom vs. collective happiness, in Brave New World, as well as between O'Brien and Winston Smith on the nature of power in Nineteen Eighty-Four. The dice are thus loaded in all kinds of utopian/dystopian narratives to conceal the fact that philosophical or scientific expertise, in itself, is no failsafe basis for political authority, nor indeed for social organization.
Plato has been accused by liberal theorists of being a totalitarian. Karl Popper, in The Open Society and its Enemies (1966), famously argued that Plato was a reactionary who rejected the emancipation of the individual that resulted from the rise of democracy in fifth-century Athens. If we define a totalitarian state as one in which the ends of society are not in dispute, the Republic, in enforcing the maximum degree of ideological consensus amongst its citizens, is indeed totalitarian. Ironically, it is often argued that the closest thing humanity has ever known to an ideal political system is the democracy which spawned Plato's utopian visions, despite the fact that this was the same city which condemned Socrates to death for allegedly preaching heresies. 7 Athens also limited the franchise to adult males and employed slaves; but it was a city which encouraged political debate and demanded participation of the citizens, not only in the deliberative and legislative process, but in government itself. What other kind of society, one might ask, would allow people to write such anti-democratic political tracts as the Republic if not a consummate democracy? In Plato's Laws, too, the population of Magnesia is prevented from doing exactly what Socrates' interlocutors were encouraged to do most of all: challenge what they believe and what they are told. Thus, it seems that Plato's utopian/dystopian visions have the negative function of confirming the importance of precisely those things which they seem to deny: freedom from coercion and freedom of thought. Leo Strauss, in his essay "Plato" (1987), has argued that this may have been the philosopher's underlying intention, which, if true, would place the Republic in the same category of negative fictional utopias as Nineteen Eighty-Four. However, Plato is writing about the needs of the state in a very specific albeit philosophical fashion and must be judged on the basis of the discourse he employs; one does not draft an extended manuscript of proposals dealing with every detail of social and political life as a mere rhetorical game.
SOCIAL ENGINEERING: THE QUESTION OF MEANS
A central presupposition of the present paper is that Plato was the first social engineer, viewing human beings as raw material to be fashioned and refashioned at will for the ultimate goal of creating an actual utopia on Earth-even if Socrates calls it a "city in speech" (Plato, 1987, 369a). It could be claimed that, even if Plato's programme is impractical or only approximately realizable in day-to-day politics, it can still offer the statesman an ideal to aim at. However, as Donald R. Morrison points out, "the precondition for Callipolis is so dramatic, and the revolution it requires so total, that this utopian vision cannot be approached gradually" (Morrison, 2007, p. 244). All utopias, by their very nature, presuppose that existing social forces be neutralized so that the ideal society may be built on a socio-political tabula rasa, as it were. The Republic is no exception. As a culminating requirement for the city to come into being, Socrates proposes that the rulers "send out to the country all those in the city who happen to be older than ten" (Plato, 1987, 541a). What does Socrates mean by this surprising statement? It is possible that Socrates is implying that those whose basic education is complete will be useless for the Callipolis, and need to be exiled; but Socrates may also be euphemistically saying that everyone over the age of ten will have to be got rid of. Whichever way we interpret Plato's words, their ramifications are disturbing and remind us of latter-day utopian/dystopian projects which included the wholesale elimination of undesirable citizens.
In Brave New World, dissent is nipped in the bud from birth and then systematically dissuaded by induced euphoria; only as a last resort are those dissatisfied by the regime physically removed by being exiled. One of the major disagreements between Orwell and Huxley is the latter's belief that persuasion and suggestion were far more effective means of social control than coercion. Thus, recalling Michel Foucault's argument that biopolitics superseded the death penalty as the prime means of social control in modernity (Foucault, 1990, pp. 137-38), Brave New World regulates the citizen's psychological development and mental processes in such a way as to preempt the need for the uglier methods of tyrannical rule practiced from time immemorial. As Huxley writes in Brave New World Revisited (1994), control through the punishment of undesirable behaviour is less effective, in the long run, than control through the reinforcement of desirable behaviour by rewards, and that government through terror works on the whole less well than government through the non-violent manipulation of the environment and of the thoughts and feelings of individual men, women and children. (Huxley, 1994, pp. 5-6) Plato would have lauded Huxley's preference for "soft power" over coercion. However, Orwell felt that such methods were unrealistic and contrary to the nature of power which can only affirm itself through conflict and opposition. "How does one man assert his power over another," asks O'Brien; "Obedience is not enough. Unless he is suffering, how can you be sure that he is obeying your will and not his own?" (Orwell, 1984, pp. 229-230) Nevertheless, the constant surveillance of the citizen is of paramount importance for Ingsoc, for if no-one is perceived to be disobeying, then the security apparatus would be redundant and the power of the state much curtailed. However, the telescreen is not only a means of rooting out subversive activity, in Nineteen Eighty-Four; it would be practically impossible, in any case, for all the members of the Outer Party to be constantly scrutinized by the guards, as though in an enormous Panopticon. The primary function of the telescreen is to abolish the distinction between the private and the public so as to render the citizen physically and psychologically malleable. This is the main goal of biopolitics, after all. Just as children are encouraged to spy on their parents in Oceania and report any instance of "thoughtcrime," so Plato pronounced in a way eerily prescient of twentieth-century dictatorships, "Anyone who makes any effort to assist the authorities in checking crime should be declared to be a great and perfect citizen of the state, winner of the prize for virtue" (Plato, 1975, 730b).
In Nineteen Eighty-Four, as in Stalinist Russia, apprehended (or constructed) dissidents are erased from all official records, becoming "unpersons." This is a much safer method of neutralizing potential threats in the socialist utopia of Ingsoc than public execution because it denies opponents of the regime the possibility of becoming martyrs and setting an example for others. Thus, as Winston realizes in the novel, "[h]e who controls the past controls the future. He who controls the present controls the past" (Orwell, 1984, p. 213). The past is also thoroughly and systematically purged from people's minds and hearts in Brave New World to make room for the scientific dictatorship masquerading as a Riviera hotel. 8 "History is bunk" (Huxley, 2013, p. 38), explains the Resident Controller for Western Europe, "we don't want people to be attracted by old things. We want them to like the new ones" (Huxley, 2013, p. 172). This would explain the mantras, "Ending is better than mending. The more stitches the less riches" (Huxley, 2013, p. 43), which are drummed into the citizens' minds through hypnopaedic indoctrination every night-just as consumerism is instilled into the consciousness of the modern citizen through endless advertising. Moreover, one of the reasons why the regime encourages the New Worldians to be perennially intoxicated with the standard-issue hallucinogenic drug is that it forces them to live in a constant present, without regard for past or future: "Was and will make me ill … I take a gramme and only am" (Huxley, 2013, p. 89), recites Lenina Crowne as she swallows her soma pill. Keeping the past alive is one of the main themes of Orwell's novel, and the reason why Winston Smith starts writing the diary which later becomes the basis of the book. As Krishan Kumar writes, The importance of the past, as the only storehouse of alternative values and practices, is dwelt on throughout: in the old diary and the antique paperweight that Winston conceals and treasures, in the old-fashioned room above the junk-shop where the lovers meet,… in memories of his mother and sister, in the word "Shakespeare" that is on Winston's lips when he wakes from the dream. (Kumar, 1984, pp. 22-23) In both Brave New World and Nineteen Eighty-Four, Shakespeare possesses a signal importance for the rebels, not only because of the cultural capital which the Bard represents, but also because of his historical resonance. Totalitarian regimes of all ideological hues would like nothing better than to make humanist writers like Shakespeare unpersons; half the battle against individual freedom would then have been won.
This brings us to the question of censorship in the utopias/dystopias under investigation. Literature is mechanically produced in Orwell's novel, as in Book III of Gulliver's Travels, with the minimum of human intervention. Creative individuals, it seems, are deemed to be particularly dangerous to realized utopias. In Brave New World too, all the great cultural achievements of the past (e.g. ancient cities, mythologies, religions, works of art) are forbidden to everyone except the Resident Controllers. Mustapha Mond would therefore entirely agree with Plato that "some of the many authors of [classical] works have left us writings that constitute a danger" (Plato, 1975, 810b). Ironically, this would no doubt include Plato's dialogues too. Plato was the first to make mimetic art a political problem in Books III and X of the Republic, where he argued that almost all existing poetry had to be banished from the ideal city as mendacious and morally corrupting for the Guardians. Totalitarian regimes also feel threatened by uncontrolled scientific research. In Plato's Statesman, for example, the Eleatic jokes that, if the laws of a given state were deemed inviolable, then all research leading to new knowledge would have to be outlawed, leading to the necessity of executing all those who showed themselves wiser than the law (Plato, 2003, 297d-300a). For Plato's mentor, Socrates, who paid for his love of knowledge with his life, this would not have been a funny proposition, but neither is it for Winston Smith, who risks everything in order to research Oceania's history and find the occluded truth. Interestingly, "Newspeak has no word for 'science', and nothing in its vocabulary that expresses the empirical mode of thought" (Orwell, 1984, p. 249); to paraphrase one of the basic principles of Oceania, the ignorance of the citizen is the strength of the Party. Science is also viewed as a powerful social engineering tool in Brave New World that cannot be left to the discretion of scientists. As Mustapha Mond says, "all our science is just a cookery book, with an orthodox theory of cooking that nobody's allowed to question, and a list of recipes that mustn't be added to except by special permission from the head cook. I'm the head cook now" (Huxley, 2013, p. 192). Brave New World suggests that, although unregulated art and science are both potentially subversive, art is the more dangerous cultural product since it can more easily escape state control. 9 Although, as we have seen, the Appendix of Nineteen Eighty-Four paints a similar picture, more crucial ultimately for Orwell seems to be the shared objectivity of rational positivism, since, as Winston ruminates, "Freedom is the freedom to say that two plus two make four. If that is granted, all else follows" (Orwell, 1984, p. 73).
Plato held that all cultural activity had to be equally regulated by the rulers in the name of social and political stability. "Change," he wrote, "except in something evil, is extremely dangerous" (Plato, 1975, 797d). Callipolis therefore permits only a governmentally sanctioned form of religion, while Magnesia, the constitutional utopia of Plato's mature thought, also has laws against impiety and unacceptable religious beliefs. Just as in Nineteen Eighty-Four religion has been replaced by the cult of Big Brother, the semi-divine leader whom all citizens must worship with complete self-abasement, so in Brave New World "Our Lord" has been replaced by "Our Ford" (Huxley, 2013, p. 21), the patron saint of the modern production line. Of highest political importance in The Republic are the education of the rulers, the unity of the city, and the correct ethos of the citizen; so Plato advocates only those forms of cultural activity which, to his mind, promote these goals, banning everything else. However, content is relatively unimportant for the brand of censorship we find in modern utopias/dystopias-it does not really matter what the official deity is called. What matters for the regimes of Brave New World and Nineteen Eighty-Four is that citizens are denied genuine freedom of conscience and thought. Two plus two could thus be five, or anything else the authorities want it to be; Orwell makes clear that this is not so much a scientific as a political problem.
PRIVACY, SEX, AND THE LIBIDINAL ECONOMY
In modern times, Foucault has pioneered the analysis of the state's encroachment into what has traditionally been regarded as the private sphere, up to and including the subject's body and biological functions. It may surprise us to find, however, that this topic was not unknown to Plato, nor excluded from his social-engineering project. It is not uncommon to find Plato using the doctor/patient paradigm to describe the relationship between ruler and ruled, as when lying to the state is compared to someone in training lying to his doctor about his physical condition (Plato, 1987, 389b), or when laws imposed on the citizen without explanation are likened to a slave-patient being treated by a slave-doctor (Plato, 1975, 720d). To the ancient Greeks, who lived in tightly-knit communities in close quarters with their fellow citizens, the strict division between the public and private was unknown. Just as in Plato's Magnesia, marriage inspectors invade and survey the homes of citizens under the pretense of aiding family life (Plato, 1975, 784a), so, in modern totalitarian regimes, nothing that the citizen does in private should be allowed to go unseen, unrecorded, and unregulated. Plato thus seems to have harboured great mistrust for the institution of the family and feared that all sorts of undesirable practices might go on undetected behind closed doors. As he writes in the Laws, "Our ideal, of course, is unlikely to be ever realized fully so long as we persist in our policy of allowing individuals to have their own private establishments, consisting of house, wife, children, and so on" (Plato, 1975, 807b). Thus, in Callipolis, procreation and child-rearing are entirely in the hands of the state, while in Nineteen Eighty-Four the Party eventually intends to take children from their mothers at birth, "as one takes eggs from a hen" (Orwell, 1984, p. 230). Mothers actually feature prominently in the utopian/dystopian visions of Orwell and Huxley, as they do in their common source, We. The protagonist of Zamyatin's novel, D-503, laments the fact that he has no mother of his own, while Winston Smith clings to the memories of his mother as a vital link to the unadulterated past. However, there is something anachronistic in the maternal figure that Winston recollects, while the selfless caritas that she represents appears as hapless in the sociopolitical realities of Oceania as the mother is in Huxley's denatured World State, in which children are not raised in homes but "hatcheries" (Huxley, 2013, p. 5). In Nineteen Eighty-Four, as in Brave New World, the family as a source of private identity and security is anathema to the state, as it is in Plato's political dialogues.
There are certain recommendations that Plato makes which, unadulterated, can be said to offer a theoretical basis for Communist totalitarian practices. There are also ways in which Capitalist utopias/dystopias seem to have taken Plato's guidelines and reversed them so as to achieve the same results without appearing to be oppressive. A case in point is Brave New World's position on sex. In the Republic, as in Nineteen Eighty-Four, sex is divorced from pleasure and regarded by the regimes in question as merely a means of providing the state with new citizens. As O'Brien tells Winston, "Procreation will be an annual formality like the renewal of a ration card. We shall abolish the orgasm" (Orwell, 1984, p. 230). The added advantage of such a policy is that libidinous energies that might have been invested in fulfilling sexual relationships are sublimated and channeled into collectivist activities, such as war. Plato was a keen psychologist, and although in his later writings he seems prepared to make concessions to accommodate the citizens' physical desires into his political programme, in the Republic-which is also a soul-fashioning project-he is very strict about controlling the Guardians' bodily appetites. As he writes, luxurious desires make a city "feverish" (Plato, 1987, 372e) and are "the sources of the worst evils for cities and individuals" (Plato, 1987, 373e). Huxley reverses this principle, making sex solely a means of recreation, in keeping with the general hedonistic outlook of the World State. Not only adults but children are educated from an early age to indulge in non-procreative sexual activity without restraint and without emotional attachments. According to Freud's repression hypothesis, this should result in rebellious behaviour and the breakdown of social order, but Huxley shows that, in a society which has abolished the family and by extension the Oedipus Complex, it is an even more effective means of social control than the most extreme enforced celibacy. Not only is the city not destroyed by the "fever" of uncontrolled sexual passions, as Plato would have expected, but the citizens have no time or surplus energy to participate in any activity deemed dangerous to the state (such as falling in love or reading Shakespeare). As Plato writes in the less puritanical Laws, Every living creature has an instinctive love of satisfying desire whenever it occurs, and the craving to do so can fill a man's whole being, so that he remains quite unmoved by the pleas that he should do anything except satisfy his lust for the pleasures of the body, so as to make himself immune to all discomfort. (Plato, 1975, 782d) Thus, even if Brave New World promotes recreational sex-something which is only allowed for procreation in Nineteen Eighty-Four-it is only in the interests of maintaining the status quo by inhibiting natural and free sexual relations; the objective in both works is the same: the total libidinal and, by extension, political control of the citizen.
If Plato could see what these authors have done with his wisdom, he would probably be taken aback, but it seems to keep society functioning harmoniously, which is, after all, the primary goal of his own utopian project. Or is it? There is a passage in Book II of the Republic where Glaucon debates with Socrates about the so-called "city of pigs": an alternative utopia to Callipolis in which people are said to live well-ordered and sensible lives, catering primarily for their physical needs and their security-indeed, very much as most people do in the modern world today. But the citizens of huopolis do no philosophy, and therefore neglect what was for Plato the most important constituent of human happiness, the cultivation of the soul. In Platonic terms, the "city of pigs" is a contradiction in terms because a polis, as the Republic tells us, is the social embodiment of justice and the love for the good, which animals presumably have no need for. Besides anticipating Aristotle's distinction between zein and eu zein, biological as opposed to political life, the paradigm of huopolis suggests that, for Plato, the human soul loses its characteristically human quality when it leads a purely physical or hedonistic existence. However, this is exactly what Huxley's vision of utopia is based on, and it seems to work. Thus, as Christopher Burlinson writes with regard to the plethora of animals that appear in More's Utopia but which could also apply to Plato's huopolis, "animals provide a figure in which the human and non-human can be brought together, where the differences between animals and humans are acknowledged but where their place in the world of our ethical concerns is re-evaluated" (Burlinson, 2008, p. 38). 10 When we compare the "city of pigs" with Brave New World, certain difficult questions arise, such as whether human beings need philosophy to be happy. Perhaps Plato is wrong in assuming that what he regards as the summum bonum of political life, i.e. a philosophical utopia ruled over by philosopher-kings, is or should be subscribed to by all right-minded citizens. As Claude Lefort argues, "the whole utopian reorganization of polis life [in The Republic] is not only directed by the superior insight of the philosopher but has no other aim than to make possible the philosopher's way of life" (Lefort, 1998, p. 51). Brave New World also asks whether happiness based on unlimited drugs and sex is true happiness. Does society actually need art, religion, and philosophy, or are these things expendable, a form of collective neurosis, as Huxley suggests? If we had to choose between contentment and art, or happiness and God, as the Savage is made to do in Brave New World, what would we choose? The answers that we might give to these questions reveal a fundamental disparity between the way Plato and the Greeks understood eudemonia and the way the moderns understand happiness, a difference predicated not only on the way the relationship between the animal and the human is conceptualized, but also on the hierarchy of values implicit in the classical Greek triad, nous-psyche-soma, or mind, soul, and body. Moreover, viewed from a political perspective, if the role of philosophy is to help people make choices, as Socrates demonstrates, then the regimes of Brave New World and Nineteen Eighty-Four are essentially the same in doing away with that need: slaves do not need philosophy, nor do those not required to make choices.
PLATO'S LEGACY: CONCLUDING REMARKS
Ironically, the practical value of Plato's political writings, regardless of what the philosopher intended, has turned out to be greater for those bent on founding dictatorships than for those genuinely interested in improving society. Indeed, in the Phaedrus, Socrates warns against precisely such an eventuality when he asserts that writing is inherently vulnerable to misinterpretation, since it is "incapable of speaking in [its] own defense as … of teaching the truth adequately" (Plato, 1995, 276c). What we are left with in such twentieth-century utopias/dystopias as Brave New World and Nineteen Eighty-Four is merely the shell of the Platonic ideal, i.e. the totalitarian social structure and the absolute authority figure of the philosopher-king in the guise of Big Brother or the Resident Controller. The ethical and political goal that Plato envisaged for his ideal city is absent. On the other hand, one could argue that the socially beneficial telos of Plato's political project would not have justified the more-or-less oppressive means deemed necessary to achieve it, anyway. If so, then nothing much can be said to have changed in the two and a half millennia since the appearance of the first political utopia which proposed to put an end to history by crowning philosophers. What remains when the Nietzschean will-to-power flies in the face of Plato's political idealism is O'Brien's ultra-cynical view of government expounded in the Ministry of Love: "Power is not a means, it is an end. The object of persecution is persecution. The object of torture is torture. The object of power is power" (Orwell, 1984, p. 227).
More crucial, arguably, in a discussion of utopias/dystopias is not how speculative fiction envisages a better or a worse society, but where such narratives self-reflexively stand on the issue of freedom of expression and the intellectual and artistic forms which this may take. We have seen how Plato in the Republic takes issue with creative writers who do not follow strictly ethical principles in their work, but may represent the unseemly, corrupt, and immoral as freely as they do the beautiful, the just, and the good. Also, in the Laws, the Athenian proclaims, "[n]o one should be allowed to show his work to any private person without first submitting it to the appointed assessors and to the Guardians of the Laws, and getting their approval" (Plato, 1975, 801d). However, is Plato entitled to condemn strictly non-philosophical ways of viewing the world, such as epic and tragic poetry, when he himself employs myths, allegories, symbols, and other figures of speech to teach, paradoxically, the difference between truth and falsehood? On the other hand, there is a certain consistency to Plato's thinking for, if a society had achieved the ideal state, any literature which promoted contrary opinions and entertained alternative values would indeed be dangerous for the health of the body politic. But does this not contradict the very essence of the polis, which, as Balasopoulos, following Aristotle, has argued, resides in its plurality and heterogeneity (Balasopoulos, 2007, p. 133)?
If one could envisage an "ultimate revolution," 11 it would most likely approximate Huxley's vision of a society reduced to a bee colony, in which human beings have been entirely deprived of their individuality and have been educated to "love their servitude" (Huxley, 1994, p. 154). Alternatively, social progress may be imagined as a species of "permanent revolution," but not in the sense that Marx and Engels describe in The Holy Family (1956), which Orwell can be said to parody through the state of permanent war in Nineteen Eighty-Four, or even in Trotsky's transnational sense, reflected perhaps in the World State of Huxley's utopia/dystopia. The "permanent revolution" which speculative fiction encourages the reader to visualize may be closer to Nietzsche's notion of a "transvaluation" or "re-evaluation of all values," given that it does not allow any social ideal to escape critical scrutiny, including, paradoxically, the freedom to criticize itself. One could argue that herein lies a crucial difference between Plato's absolutist cultural theory and Socrates' dialectical method of debate, a difference which is not allowed to emerge as clearly as it could in the political dialogues, since Socrates' voice is subsumed into Plato's and is never heard directly. Thus, we could assert that the social import of speculative fiction, philosophical or otherwise, is ambivalent, for not only may it lend itself to totalitarian appropriation and application, as seems to have been the case with The Republic, but it may also constitute a means of critiquing the existing status quo by conceptualizing different ways of thinking and being, thereby allowing for the possibility of change, which even Plato recognized as necessary "in something evil" (Plato, 1975, 797d).
END NOTES
1. This is the famous thesis of Francis Fukuyama's The End of History and the Last Man (1992), a book which ominously alludes to the provisional title of Nineteen Eighty-Four, "The Last Man in Europe," as well as to Friedrich Nietzsche's concept of "the last man." 2. See Laurence Davis (1999), "At Play in the Fields of Our Ford: Utopian Dystopianism in Atwood, Huxley, and Zamyatin", Transformations of Utopia: Changing Views of the Perfect Society, George Slusser et al., eds.
"Philosophy",
"Political Science"
] |
Computer Aided Diagnosis System for Stone Detection and Early Detection of Kidney Stones
Problem statement: Most previous studies on kidney stone diagnosis identify only the presence or absence of stones in the kidney. The method proposed in this study additionally provides early detection of kidney stones, which can prompt dietary changes that prevent stone formation. Approach: The study presents a scheme for diagnosing stones, including early-stage stones, in ultrasound kidney images, based on improved seeded-region-growing segmentation and classification of kidney images by stone size. Within the segmented portions of the images, the intensity threshold variation helps identify multiple classes, so that images can be classified as normal, stone, or early-stone stages. The improved semi-automatic Seeded Region Growing (SRG) segmentation process grows homogeneous regions depending on the image granularity features, extracting the structures of interest whose dimensions are comparable to the speckle size. The shape and size of the growing regions depend on look-up table entries, and region merging after region growing suppresses high-frequency artifacts. Diagnosis is based on the intensity threshold variation obtained from the segmented portions of the image and on the size of those portions compared with standard stone sizes (less than 2 mm: absence of stones; 2-4 mm: early stages; 5 mm and above: presence of kidney stones). Results: The parameters of texture values, intensity threshold variation, and stone sizes were evaluated experimentally on various ultrasound kidney image samples taken from a clinical laboratory. The texture extracted from the segmented portion of the kidney images precisely estimates the size and position of the stones in the kidney, which was not done in earlier studies. Conclusion: The integrated improved SRG and classification mechanism presented in this study diagnoses the presence and absence of kidney stones along with the early stages of stone formation.
INTRODUCTION
This study focuses on kidney image segmentation and diagnosis of the presence or absence of stones in ultrasound images. First, the kidney moves within the patient owing to breathing, with an amplitude of several centimeters (Van Sornsen de Koste et al., 2006), and irrigation liquids are used during the operation. These two constraints have a direct effect on image quality. Moreover, it is not uncommon to break some optical fibers, resulting in black dots on the image. Second, kidney stones have different chemical compositions, resulting in different shapes, colors, and textures (Leusmann, 1991). Finally, the system must be fast enough to work at the laser shooting rate. Although medical image segmentation is a very active domain (Duncan and Ayache, 2000; Rao, 2004), few studies have examined this particular subject. Most of them have focused on MRI (Vivier et al., 2008; Makni et al., 2009), CT scan, or ultrasound images (Sridhar et al., 2002; San Jose Estepar et al., 2009). Except for laparoscopic images (Voros et al., 2006), only a few studies have directly examined video images. In this study, an integrated image processing scheme (with improved image segmentation and diagnosis) performs the task semi-automatically, without the need for user interaction, to diagnose the presence, absence, and early stages of kidney stones from ultrasound images. The initial stage has been implemented, and its precision, robustness, and speed have been examined.
Integrated image segmentation and diagnosis of ultrasound kidney images: Among the various imaging modalities, ultrasound is non-invasive and offers low-cost imaging, minimal scan time, flexible operation, and reduced exposure to harmful radiation. The segmentation and classification techniques evaluate the features of the acquired image and display the result of the kidney stone diagnosis.
Ultrasound kidney image segmentation: The semi-automated SRG algorithm for kidney image segmentation comprises three steps: seed point selection, seed region growing, and optimal threshold selection. A seed is a prerequisite, and a seed point needs to be selected automatically, replacing selection through user interaction. With a given seed, SRG can start to grow, but a threshold value has to be determined so that only reasonable pixels are covered.
The local variance-to-mean ratio of the granularity in the fully developed ultrasound speckle of the kidney image is used as the measured parameter for seed point selection. Based on this parameter, it is possible to decide whether the processed pixel lies within a homogeneous region or not. In general, if the local variance-to-mean ratio is larger than that of speckle, the corresponding pixel can be considered a resolvable object; otherwise, it belongs to a homogeneous region. The shape of the speckle pattern and the average speckle size vary at different locations of the stored images. Therefore, homogeneous regions of arbitrary shape and size are highly desirable for smoothing. This is achieved through the region growing procedure, which effectively fits the grown region to the homogeneous area without imposing any shape constraint. The region growing procedure employs a look-up table consisting of statistical bounds for different values of the local statistics.
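As a rough illustration of this seed-selection statistic, the following Python sketch computes a local variance-to-mean ratio map with a sliding window; it assumes a grayscale NumPy image and SciPy, and the speckle bound used here (the median of the ratio map) is only an illustrative stand-in for the paper's statistical bounds.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_to_mean_ratio(image, k=3):
    """Local variance/mean ratio over a (2k+1)x(2k+1) window; pixels
    whose ratio exceeds that of fully developed speckle are treated as
    resolvable objects, the rest as homogeneous region."""
    img = image.astype(np.float64)
    size = 2 * k + 1
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    var = np.maximum(mean_sq - mean ** 2, 0.0)  # clamp rounding error
    return var / np.maximum(mean, 1e-9)

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.float64)
ratio = variance_to_mean_ratio(img, k=3)
homogeneous = ratio <= np.median(ratio)  # illustrative speckle bound
print(homogeneous.mean())
```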
The aim of region-based segmentation techniques is to extract the homogeneous seed points from the filtered ultrasound image. Region growing performs better in noisy images where borders are extremely difficult to detect, such as ultrasound medical images. For the region growing method, homogeneity is an important property, which can be based on gray level, shape, or a model. For region-based segmentation, the basic requirement is to satisfy region similarity in the kidney image.
The proposed semi-automatic seeded region growing algorithm for ultrasonic kidney images (a minimal sketch of the growing step follows the list):
• Choose a window of size (2k+1)×(2k+1) centered at (i, j) for the seed point
• Generate the look-up table of local statistics for each pixel:
• Calculate the homogeneity
• Calculate the statistical similarity bound
• Implement region growing for every pixel:
• Take each image pixel as a seed pixel
• Store the neighboring pixel information for every seed point
• Grow a region from the seed point according to the statistical similarity criterion
• Implement region merging:
• Label each region with a unique number
• Store the neighboring region information for every seed region
• Merge the neighboring regions with the seed region according to the merging criteria
• Update the segmented image output
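The paper's look-up table and statistical bounds are not reproduced here; as a minimal sketch of the growing step alone, the following Python fragment grows a region from one seed, using a fixed gray-level similarity threshold in place of the statistical similarity criterion. Per-pixel seeding and region merging would be layered on top of this kernel.

```python
import numpy as np
from collections import deque

def seeded_region_growing(image, seed, threshold=15):
    """Grow a region from `seed` (row, col): a pixel joins when its gray
    level differs from the running region mean by at most `threshold`
    (a stand-in for the paper's statistical similarity bound)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                mean = region_sum / region_count
                if abs(float(image[nr, nc]) - mean) <= threshold:
                    mask[nr, nc] = True
                    region_sum += float(image[nr, nc])
                    region_count += 1
                    frontier.append((nr, nc))
    return mask

# Toy usage: a bright square on a dark background.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200
region = seeded_region_growing(img, seed=(30, 30), threshold=15)
print(region.sum())  # 400 pixels: exactly the bright square
```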
Image diagnosis: Image diagnosis is based on the texture of the segmented portions of the images compared with standard benchmarked kidney image texture values. Texture extraction is the process of quantifying the texture patterns within a specified neighborhood of size M by N pixels around a pixel of interest. There are four main categories of texture analysis: structural, statistical, model-based, and transform-based approaches. In this study, two sets of texture features, namely statistical (first- and second-order coefficients) and structural (Haralick texture descriptors), are used separately to compare their performance in characterizing the type of renal calculi. Let x(i, j) denote the ultrasound kidney image and T_R the total number of pixels in kidney region R. The first-order gray-level statistical features include the mean pixel value, μ = (1/T_R) Σ_{(i,j)∈R} x(i, j), and the median; in the case of an even number of samples N, the median is estimated as the average of the two middle values. The integration of image segmentation and classification is done with the intensity threshold variation associated with the texture features for the characterization of regions in the images. The two methods used in this study for texture description are statistical and structural. The Spatial Gray Level Dependence Method (SGLDM) is adopted for statistical texture description; all known visually distinct texture pairs can be discriminated using this method. These second-order statistical features are computed in a two-step process. The first step delivers the co-occurrence matrices containing the elements P_k(i, j); each (i, j)-th entry of a matrix represents the probability of going from a pixel with gray level i to another with gray level j under one of the newly suggested predefined angles of 0, 30, 60, 90, 120, and 150 degrees. Based on the co-occurrence matrices, texture features are estimated to diagnose the presence, absence, and early stages of stones in the kidney images. A sketch of this computation is given below.
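A hedged sketch of these first- and second-order computations, assuming scikit-image is available (version 0.19 or later, where the functions are named graycomatrix/graycoprops) and using the six angles named above:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def sgldm_features(region, levels=64):
    """Second-order (co-occurrence) texture features of a segmented
    kidney region over the six suggested angles at pixel distance 1."""
    # Quantize gray levels to keep the co-occurrence matrix small.
    q = (region.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    angles = np.deg2rad([0, 30, 60, 90, 120, 150])
    glcm = graycomatrix(q, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    # Average each Haralick-style descriptor over all angles.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# First-order statistics over a (synthetic) region R of T_R pixels.
region = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
mean, median = region.mean(), np.median(region)
print(mean, median, sgldm_features(region))
```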
Experimentation of kidney image segmentation and classification:
Ultrasound kidney images, both normal and with stones of various sizes, were obtained from the medical laboratory. The images acquired by the ultrasound scanner were processed using discriminative features. The edges of the stones were clearly observed in the segmented portions of the images, and the classification indicates the size of the stone present. For the segmented and classified images, the average, mean, and median intensity values of the pixels were computed.
The seed is the starting point of the region growth. The position of the seed must be inside the calculus in the kidney image. The segmented image is then a binary image representing the inside and the outside of the calculus. The similarity criterion of the region growing algorithm allows the calculus to be differentiated from the rest of the image. Hence, it is possible to work with calculi of different colors and textures to ensure proper functioning in most clinical scenarios. The similarity criterion gives a score representing the similarity between the region already found and the pixel or group of pixels examined. The stopping criterion is defined as a threshold on the value of the similarity criterion; its value must be optimized to give the best possible segmentation.
The parameters used in the experimentation were set after a study of a dataset composed of more than 150 images. A practical way to obtain accurate results is to preset the algorithm with the found parameters and to allow the clinicians to adjust the parameters around this preset at the beginning of the diagnosis. The reference results, obtained by manual detection of kidney stones in the ultrasound images, were traced by an expert urologist, and even these are prone to variability. The proposed automated integration of segmentation and diagnosis provides precise stone detection in the kidney for any number of ultrasound kidney images. The detection rate for the red spot is nearly 95%; the impact of specular reflections on the surface of the kidney and of eventual blood drops flowing through the kidney was addressed with noise preprocessing stages (Rizon et al., 2005).
Performance evaluation of kidney stone diagnosis and early detection:
The improved semi-automatic region growing algorithm was developed and set up for the most frequent clinical scenario. Manual segmentations were made on real ultrasound images by the medical experts of the clinical laboratory to establish the actual positions and a quantitative error measurement. An adaptation of the local homogeneity criterion was used for gray-scale ultrasound kidney images. The stopping criterion was an intensity threshold variation of 15, shown in Fig. 1 (stone-detected images), and the window size was set to 6×6 pixels.
Manual initialization obtained good results on the validation set for ultrasound kidney image classification and diagnosis of stone presence and early stages, composed of images coming from different clinical situations. The stones formed in normal patients are less than 2 mm in size, whereas stones of 5 mm and above cause serious problems for the kidney. Stones between 3 and 4 mm in size are classified as early stages of stone formation in the kidney (Fig. 2); a sketch of this size-based rule is given below. The tabulation shown in Table 1 (Fig. 3) depicts the effectiveness of the proposed method in diagnosing US kidney images for stone detection and early stages of stone formation using various texture features. The improved SRG used for segmentation in this study identifies the various intensity threshold segments of the ultrasound kidney images.
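A minimal rendering of this size-based rule; note that the text leaves the 4-5 mm band unspecified, so this sketch assigns it to the early stage by assumption:

```python
def classify_stone(size_mm):
    """Size-based staging as described in the text: below 2 mm is
    treated as normal, roughly 2-4 mm as early-stage formation, and
    5 mm and above as a confirmed stone."""
    if size_mm < 2.0:
        return "normal"
    if size_mm < 5.0:
        return "early stage"
    return "stone"

for s in (1.5, 3.2, 6.0):
    print(s, "->", classify_stone(s))
```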
DISCUSSION
With a threshold variation greater than 15, the texture features and the sizes of the stones and segmented portions characterize the images, allowing diagnosis of the presence, absence, and early stages of stones in the kidney. The proposed integration of improved SRG and classification is fast enough to run at a frame rate of 20 Hz and can be integrated into an automated system to sweep the surface of kidney stones and remove them by remedial measures, such as sound shocks that fragment the stones into negligible pieces.
CONCLUSION
This study diagnoses kidney stones and the early stages of their formation with integrated image segmentation and classification. The segmentation is carried out with improved SRG to identify the intensity threshold variation. The texture extracted from the segmented portions is mapped to the size of the pixel squares together with its size magnitude. The size and texture properties of the segmented image determine the absence or presence of kidney stones. Experimentation was conducted for easy and effective diagnosis of stones and their early stages from ultrasound image samples taken from various patients. The parameters for diagnosing and classifying US kidney stone images were identified and analyzed. The quality features describing kidney stone size and position are made viable. Significant differences were observed in many parameters for each type of renal calculus. Analysis of different kidney stone sizes is required; based on these numerical feature values, it is highly feasible to develop a universal reference for kidney stone categories.
ACKNOWLEDGMENT
The researchers are thankful to the Adharsh scan centre, Erode, Tamilnadu, India, for providing kidney ultrasound images for the research study.
"Medicine",
"Physics"
] |
Evidence-Based Interactive Management of Change
Evidence-based interactive management of change means hands-on experience of modified work processes, given evidence of change. For this kind of pro-active organizational development support we use an organizational process memory and a communication-based representation technique for role-specific and task-oriented process execution. Both are effective means for organizations to become agile by interactively modelling the business at the process level and re-constructing or re-arranging process representations according to various needs. The tool allows role-specific workflows to be experienced, as the communication-based refinement of work models allows for executable process specifications. When the interactive processes are presented to the individuals involved in the business processes, changes can be explored interactively in a context-sensitive way before business processes and information systems are re-implemented. The tool is based on a service-oriented architecture and a flexible representation scheme comprising the exchange of messages between actors (roles) and business objects. The interactive execution of workflows enables not only the individual reorganization of work but also changes at the level of the entire organization, owing to the represented interactions.
Introduction
Currently, the competitiveness of enterprises is mainly driven by their capability to implement process-driven information systems. These should enable structural flexibility, besides addressing quality, cost, and partner/customer relationships (Stephenson et al., 2007). The need for velocity emerged when cross-enterprise operations became relevant in the course of economic globalization. Structural flexibility requires adapting to global supply chains or evolving networks while increasing operational efficiency and effectiveness (Haeckel et al., 1999). In this context, process-driven business and information systems development is of crucial importance at the tactical and operational level (cf. Laudon et al., 2005).
Velocity Depends on Accurate Process Specifications.
For (cross-)organizational operation, process specifications have to be tailored to the specific business (opportunity) at hand. Of particular importance is their coherence when different organizational roles or units are involved in a business case. Coherence requires the consistent propagation of business objectives to operational structures, e.g., reducing engineering cycle times through reporting loops, both on the level of process specifications and on the level of processing them, e.g., using workflow management systems.
Each process can be considered a functional entity with dedicated objectives and particularities that have to be integrated with those of the enterprise network or networked enterprise. Intelligible and flexible representational schemes for process specification and information systems architecting allow enterprises to keep up with the dynamics of change. Process design and redesign have to be tightly coupled with the control flows of enterprise information systems (cf. Rouse, 2006). However, a variety of techniques is used to capture process elements and the flow of control. In most cases, process specification (targeting implementation) involves a language switch from natural to formal languages. The effects of that shift are of economic (costs), social (conflicts, negotiations), and organizational scale (iterations, quality control), mostly in the case of mismatches between business requirements and information system features.
Continuous Change.
Those effects are of particular importance when dealing with all types of changes, due to the snapshot nature of specifications and models (cf. Lewis et al., 2007, p. 15). They are caused by cognitive mismatches when using particular notations for specification, and lead to incoherent transformation of information throughout analysis, design, and implementation. Current tools do not resolve semantic-gap issues (as identified in the realm of semantic web applications; cf. Ehrig, 2007) for developments driven by business process modelling languages, e.g., Rumpfhuber et al. (2002). However, such problems become substantial when tasks are increasingly pushed to users (Chakraborty, 2004), as the control flow of integrated processes has to be semantically coherent on the level of work tasks and syntactically correct on the level of technology support. If users are not supported along the workflows they are involved in when doing their business, interactive developments can easily lead to reduced productivity, decreased quality of work, lack of efficiency in the course of task accomplishment, and problematic management of tasks (cf. Preece, 1993).
Bridging Semantic Gaps.
For our approach we use experiences from model-based technology development (Stary, 2000) and subject-oriented business process engineering (Fleischmann et al., 2009). The ultimate goal of development should be a role- and task-conform workflow in which each stakeholder is able to experience a certain task in an effective and efficient way, from a mutually tuned global organizational and individual perspective. The latter might require the adaptation of existing work processes towards individual work styles and personal preferences. Thereby, the interactive process artefact reflects the world of tasks as perceived or envisioned by stakeholders or responsible managers. It takes into account skills and preferences, as required for task accomplishment and for mutual interaction. As such, representations have to be negotiated before coming into operation on the organizational level; task and role models need to be communicated. Semantic interaction requires an ontology and a notation supporting mutual understanding.
The Process Perspective.
Both characteristics, the representation of task knowledge and its communication, are crucial in organizational learning processes (cf. Nonaka et al., 1999; Davenport, 1998; Senge, 1990), in particular when knowledge is considered a production factor. Common to the development problem in systems design is that knowledge, whether a production factor or a product, needs to be located, acquired, analyzed, and developed. Analogous to design knowledge, organizational knowledge has to be considered as a product as well as a process. As a process, it addresses the individual and group-wise processing of knowledge items, i.e., learning at both levels. Managing this transfer process has turned out to be a challenging organizational activity (Nonaka et al., 1995; Probst et al., 1997). For analysis and development, knowledge needs to be represented and stored in organizational memories (Ackermann, 1994). According to Kühn et al. (1997), such a memory captures "accumulated know-how and other knowledge assets and makes them available to enhance the efficiency and effectiveness of knowledge-intensive work processes".
Evidence-based Interactive Management of Change.
In the course of (continuous) improvement of business processes and changes in organizations, stakeholders play an important role. Herrmann (2000) proposes to qualify stakeholders by letting them develop parts of business processes individually. Using the workflow modelling language SeeMe (Herrmann, 1998), stakeholders create or edit diagrams describing their particular view of the assigned work tasks and their specific procedures for task accomplishment. The diagrams should be linked directly to the software applications supporting task execution. By activating those links, the stakeholders can immediately get an impression of executing the specified workflow and using the assigned applications, eventually involving different applications and different user interfaces. This example demonstrates the utility of tools enabling hands-on experience.
In order to execute workflow specifications in the course of organizational learning, such tools have to capture the organizational process on an individual level and to ensure process visibility for other stakeholders on the level of the organization. We achieve that in terms of executable specifications. For individual stakeholders and management we provide workflow definitions and executable interactive artefacts presenting the data and communication facilities according to the individual workflow definitions.
As soon as stakeholders adapt business workflows to their specific view, individual learning is stimulated. The process changes can be made visible by prototypical process execution. Together with the process description, they are a means of communication between stakeholders, and an organizational learning step is thereby initiated.
In order to implement the above-mentioned concepts, we have used a language for human-centred process modelling and a corresponding tool to execute individual workflow specifications immediately. Hence, the following work is located at the interface of knowledge management, organizational learning, and business process engineering (see figure 1). The interdisciplinary approach is given by the nature of our research objectives: for evidence-based interactive management of change, we need to integrate business process engineering into knowledge management processes. In this way, procedures dealing with work process changes can be improved. In order to achieve these improvements both on the individual and the organizational level, we use concepts from organizational learning. As evidence we consider all inputs triggering learning steps. They might stem from market or production developments, organizational brainstorming sessions, idea management, or continuous improvement activities set by individual stakeholders. Any evidence should be documented in process specifications for reflection and further processing. The latter enables the interactive and collaborative management of change.
Figure 1. Interfacing knowledge management, organizational learning, and business process engineering
In the next section we describe the underlying conceptual framework stemming from organizational learning. In section 3 we elaborate the representation scheme for business process engineering and its interactive processing support for organizational knowledge creation. In section 4 we give the service-oriented architecture of the tool. In our conclusion (section 5) we summarize the objectives of our work and the presented achievements, and sketch our future research.
The Operational Frame of Reference
Organizational learning encompasses individual learning as well as learning on the organizational level and the transfer of knowledge between these levels. Brown and Duguid (1998) emphasize the importance of so-called boundary objects, which help to transfer knowledge between individuals or groups with different attitudes and presuppositions. For them, business processes are proper candidates for boundary objects because "business processes can enable productive cross-boundary relations as different groups within an organization negotiate and propagate a shared interpretation." For this reason, we have selected business process representations as enablers of individual and organizational learning processes and of the transfer process (cf. process-based knowledge management in Abecker et al., 2002).
In the following we describe how organizational learning processes occur and how they can be supported with business processes and business process modelling, a technique for representing business processes that encompasses structural and behavioural aspects.
In order to support stakeholders on the individual level, individual learning has to be stimulated. Kolb (1984) has proposed an experiential learning cycle which describes the learning process of individuals. In figure 1 we incorporate the experiential learning cycle into our framework for organizational learning processes. Four steps can be used to explain "design" as an activity for mutual discourse. Observation and assessment lay the ground for evidence for the management of change in terms of (re-)designs. They trigger a continuous experiential learning cycle:
• Design: Stakeholders express their role-specific view of the business processes according to their task assignments and experiences.
• Implement: The workflow specifications of the refined business processes can be executed. In this way, an interactive artefact is generated that enables hands-on experience of role-specific task accomplishment. The visualization and prototypical execution of the workflow specification allow the modified business processes to be put into action.
• Observe: Through the interactive artefact generation and workflow execution, stakeholders observe the possible effects the executed processes have on their work and the organization of tasks.
• Assess: If the results fit the expectations of the stakeholders, the process concerned serves as input for the learning process on the organizational level. If further process refinements or modifications are required, the cycle starts again.
In order to transfer the knowledge acquired by the stakeholder to the organizational level, we designed the following learning steps (see figure 1). Negotiation of the business process takes place with all stakeholders whose work is affected by the process: the stakeholder who changed the process presents it. The artefact generation and workflow execution help to visualize the changes in a straightforward way and to initiate discussions. Participants in the negotiation process can vote for or against the changes or propose modifications themselves. The negotiation ends when all participants have accepted or rejected the proposed changes. Negotiation parameters should be set at the beginning of the negotiation, such as the percentage of votes required for acceptance or rejection, or a time limit which breaks off the negotiation. The moderator of the negotiation process should be the stakeholder responsible for the business process (the process owner).
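As an illustration only (the paper does not give an algorithm for this), the acceptance test of such a negotiation step might be sketched as follows, with the required ratio as a parameter fixed at the start (time limits omitted):

```python
def negotiate(votes, accept_ratio=0.5):
    """Evidence-based negotiation step: stakeholders vote on a proposed
    process change; the change is accepted when the share of positive
    votes exceeds the ratio agreed on before negotiation starts."""
    yes = sum(1 for v in votes.values() if v)
    return "accepted" if yes / len(votes) > accept_ratio else "rejected"

print(negotiate({"Max": True, "Nils": True, "Elisabeth": False}))
```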
The negotiated business process has to be documented in the form of a business process model. The model has to be stored in the organizational memory, to which all stakeholders have access. The organizational memory is represented by a business process repository which stores the original process model and all negotiated versions. The latest negotiated process model serves as the basis for further changes.
The organizational learning step is complete when the modified business process represents the new basis for the work of the stakeholders. Figure 2 shows the operational frame of reference for learning.
Figure 2. The Operational Frame of Reference of Interactive Change Management (according to Heftberger et al., 2004)
The directed link from the individualized business process model refers to the entire organizational memory, because all business processes of an organization are interrelated. The links between the business process model and workflow prototyping mean that experiences gained from using the interactive artefact generation and workflow execution influence the adaptation of business process specifications; vice versa, modified processes result in alternative actions of the artefact (at the user interface).
The organizational learning cycle might be iterated as soon as stakeholders have used the newly acquired and specified knowledge on the level of the overall organization (denoted by the dotted line across the individual learning cycle). On the individual as well as the organizational level, the ability can be acquired to put process specifications into practice. In this way, anticipated changes can be implemented prototypically, tested without disturbing the running business, and, in case of acceptance, incorporated. If the proposal is rejected, no further implications occur.
Archetyping
In the following we detail the representation scheme and the interactive tool support from a stakeholder perspective.We also give the procedure for proposing and experiencing workflow executions, i.e. initiating the organizational learning life cycle.
(Re-)Thinking Business in Terms of Socially Valid Processes
Understanding organizations as social systems, we have defined a notation and specification language that:
• is capable of describing an organization, and is therefore as expressive as existing business process modelling languages, but provides an actor-specific and communication-oriented perspective
• makes use of semantic-net features to employ natural language expressions
• contains a minimal set of elements and relations in order to describe an organization as accurately as possible in an intelligible form
• provides descriptive items and relations to identify relevant codified (documented) knowledge for task accomplishment.
The language captures the tasks to be supported,
Lessons learnt from model-based design.
When designing information systems in a user-centered and task-driven way, several models can be distinguished; they were explored in the development of the ProcessLens approach (cf. Dittmar et al., 2004; Stary, 2000):
• The user model sets up a role model by defining specific views on tasks and data (according to the functional roles of users). Business processes as such are modeled partially through the task model. Comprehensive processes can be composed of subprocesses; this step can be executed recursively until elementary task actions become evident. Additionally, temporal constraints can be set between tasks and processes. The task model can be designed according to the organization of work and the stakeholders' perception of work, usually being part of the business intelligence representation.
• The problem domain or data model describes the data derived from the tasks and user organization in the problem domain. In contrast to traditional data modeling, ProcessLens captures both aspects of the data required for task accomplishment: the static and the dynamic properties. Typically, objects of work are specified in the data model.
• Last but not least, the interaction model provides all interactive elements and interaction styles that are needed to make the business processes visible for stakeholders on the one hand and executable on the other.
ProcessLens provides dedicated relationships to ensure context-sensitive representations and to prohibit models that cannot be executed prototypically due to incoherent refinement and inconsistencies. Each of those relationships, such as "is handled" between roles of a user model and subtasks of a task model, is checked through particular algorithms according to its semantics and syntax. UML (www.uml.org) is used for specifying the various models and their relationships. Each object relevant to actor behavior or task accomplishment is described using attributes and stereotypes, which indicate which category of model element is addressed (e.g., task, activity, role). Methods are described using activity diagrams.
Activity diagrams model the behavior of actors, task accomplishment, data management, and user interaction with the information system. They contain activity states, (optional) transitions, and fork and join vertices. Using these elements, XOR, OR, or AND relations can be modeled. Connecting states with transitions allows constraints to be expressed when operating a business; these conditions are triggers for reaching the next state. Activity diagrams are prerequisites for adapting business processes to the understanding of the stakeholders.
Behavior descriptions might be useless when the context of application is not intelligible. Therefore, a dedicated connection between activity diagrams had to be introduced. For instance, the activity diagram of "Order Entry" has to be connected to the activity diagram of "Order Form (electronic)" through synchronisation links, since the form (part of the interaction model) enables order entries (part of the task model). These links allow the detailed flow of control to be specified. Hence, as soon as a certain state, such as "input order data", is reached, the expected behavior continues with a state from the activity diagram of "Order Form (electronic)" and returns when filling in the form has been completed. A synchronisation link is a special kind of transition which had to be added to standard UML in order to ensure the automated execution of task flows.
Besides the usefulness of keeping an actor- and task-specific perspective on process specifications, the ProcessLens project revealed the benefits of automated execution in terms of immediate experience of task flows, not only for stakeholders but also for customers, managers, and organization developers.
Utilizing an integrated, generic scheme for specification.
The integration of the various perspectives is enabled through communication links that synchronize behaviour sequences (similar to ProcessLens). A respective template is used as the representational frame of reference. The following figure shows a template of type 3, since it contains 3 involved parties. All parties exchange messages. In order to distinguish concrete persons from roles in a process, we call the parties in a process subjects. Subjects are the acting elements in processes, just as in sentences of natural languages the subject is the acting element. The subject which starts the message exchange is marked with a small white triangle (subject1).
Figure 3. A process template with 3 involved parties
Each subject can send messages with the name "Message" to any other subject at any time. Figure 4 shows the behaviour of the subject with the name "subject1". Since subject1 is the subject which starts a process, its initial state is the state "select". The "start" state is framed.
The state "start" and the transitions to the state "select" are never executed in the start subject.This state is the "start" state in all the other subjects.All the other subjects are waiting for a message from (all) other subjects.In this way each subject not being a start subject has to receive at least one message before they can start to send messages.The start subject sends a message to any other subject.The receiving subject can reach now the state "select".In that state a subject can decide upon its next action without restriction.A subject which is in state "select" can send a message to other subjects which are still in the state "start".Now these subjects can also reach the "select" state and can send messages.Finally, all subjects are in the state "select" and can communicate whenever being addressed.
In the "select" state the start subject decides whether it wants to send or to receive a message.In the beginning it does not make sense to receive a message since the other subjects are waiting for messages.All the other subjects are in the state "start" which is a "receive" state.This means the start subject will start with sending messages.After that mutual message exchange can start.When becoming active in the "select" state a subject decides to use the "send" transition.In the state "prepare message and select address" the subject instantiates the business object that is transmitted by the message "message".After that a subject decides to which subject the message with the business object as content will be sent.In the "select" state a subject can also decide whether it wants to receive a message.If a message from the expected subject is available the message can be accepted and a follow-up action can be executed.It is not specified what the follow up action is.It works like receiving an e-mail.The receiver is able to interpret the content of an e-mail and knows what the corresponding follow-up action is.The "abort" transitions back to the "select" state enable to step back in case a subject has made the wrong choice.
Figure 5. Behaviour of Subject2
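The figures are not reproduced here; as a rough Python rendering of the template's message exchange, the following sketch models subjects as objects with inboxes, with the start/select/send/receive states collapsed into a send operation and a blocking receive (names are illustrative):

```python
import queue

class Subject:
    """A party in a generic process template. The start/select/send/
    receive states are collapsed here into a send operation and a
    blocking receive on the subject's inbox."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()

    def send(self, other, content):
        print(f"{self.name} -> {other.name}: {content}")
        other.inbox.put((self.name, content))

    def receive(self):
        sender, content = self.inbox.get()  # waits like a 'receive' state
        print(f"{self.name} received from {sender}: {content}")
        return sender, content

# Type-3 template: subject1 is the start subject and must send first;
# the others begin in a receiving state.
s1, s2, s3 = Subject("subject1"), Subject("subject2"), Subject("subject3")
s1.send(s2, "Message")
s2.receive()
s2.send(s1, "Message")   # once in 'select', any subject may send
s1.receive()
```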
Above we have shown a generic process scheme with three participants. Such schemes are universal, as they can easily be created for any number of participants. The following figure shows the generic structure of a process with four participants (a type-4 template). The behaviour of each subject has to be adapted to the corresponding number of subjects in a generic process. Figure 7 shows the behaviour of the start subject "subject1". The modeler or the tool needs to add the respective "send" and "receive" transitions between corresponding states. In the "send" area, transitions must be added to send a message to the new subject, and likewise for the "receive" area; in the "receive" state a corresponding transition has to be added. Using that extension principle, the behaviour for each type of generic process scheme can be generated automatically. With the message "Message" a corresponding business object is sent. The structure of this business object corresponds to the structure of a mail with some extensions, such as keyword and signature. The following figure shows the specification of the message business object in XSD notation.
Figure 8. Structure of the Mail Business Object
Whenever a message "message" is sent, such a business object is transmitted. The values of the components of the business message object correspond to the content of a traditional mail.
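Figure 8's XSD is not reproduced here; as an assumed Python stand-in, a mail-like business object with the keyword and signature extensions named in the text might look like this (all other field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class MailBusinessObject:
    """Business object carried by the generic message 'Message'.
    keyword and signature are the extensions named in the text; the
    remaining field names are assumed, mirroring a traditional mail."""
    sender: str
    receiver: str
    subject_line: str
    body: str
    keyword: str = ""
    signature: str = ""

form = MailBusinessObject(sender="subject1", receiver="subject2",
                          subject_line="Message",
                          body="content of a traditional mail")
print(form)
```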
Adopting the Generic Scheme.
Subjects are abstract resources; they represent the parties involved in a process. For the specification of an actual workflow, the various subjects of a process must be assigned to existing roles, persons, or agents. The assignment of persons or agents to subjects means that the process is embedded in a concrete environment. The following example shows the application of a generic process scheme of type 3 in an actual work organization.
Figure 9. Embedding a process in its environment
The persons Max Mustermann, Tobias Heinzinger, Uwe Hofmann, and Johannes Luther are assigned to the subject "subject1". Since these persons are assigned to the start subject, all of them can start the process. For instance, Max Mustermann creates a message and sends it to Nils Meyer. Nils Meyer can accept that message and can send a message back to "subject1"; the message is received by Max Mustermann. Max Mustermann receives the message because the process was started in his work environment or context. If another person assigned to "subject1" starts a process, this process instance is executed in his or her environment.
The embodiment of a process depends on the usage of a generic process specification, i.e., on the business events to be handled. In our example, vacation requests are handled. Nils Meyer, as head of a development department, is responsible for Max Mustermann, Tobias Heinzinger, Uwe Hofmann, and Johannes Luther. Subject3 is assigned to Elisabeth Schwarzmeier; it represents the human resource department of the organization.
Evidence-based Interactive Experience.
The execution of work processes can be supported by an appropriate workflow system. In our research we use a suite developed for subject-oriented process specifications and workflows (Fleischmann et al., 2009). The following figure shows a screenshot as Max Mustermann creates a new instance of a generic process template for 3 parties (i.e., of type 3). This new process instance has the title "Request for vacation" (see the ellipse in the figure).
Figure 10. Max Mustermann creates a process instance with the title 'Request for Vacation'
After creating the process instance, Max Mustermann is guided through the process. He is asked by the workflow system which transition he wants to follow. He knows that he has to fill in the business message form with the corresponding data and that the form has to be sent to Nils Meyer. Consequently, Max Mustermann follows the transition "send". In the state "prepare message and select address", following the transition "send", he fills in the business object data required for the vacation application.
In the following figure the user interface of the workflow system and the direct experiencing of the specification are shown. Max Mustermann can enter all the data required for a vacation request into the business object and send it to Nils Meyer, who is the owner of subject2. This is all Max Mustermann needs to know, since the behaviour description of his subject would also allow sending the vacation request to the Human Resource department, much as in a mail system: each stakeholder needs to know to whom he/she has to send a mail in the course of task accomplishment. The workflow used in this example produces a protocol recording who executed a certain action at a certain time. The following figure shows an example of an execution path for handling a vacation request with a generic process scheme. The steps executed in each subject are shown in a corresponding column with the subject name as the headline.
Figure 13. Execution path of a generic 3-party process scheme for a vacation application
Subject1 starts with the "select" activity and selects the "send" transition. Then the action "prepare message and select address" is executed, and in state "state2" the message is sent to subject2. Now subject1 again reaches the state "select". In state "start", subject2 receives the message. In the subsequent state "follow up action" the content of the received message is read and the corresponding action is executed by Nils Meyer, the owner of subject2. In the case of a vacation application this follow-up action is Nils Meyer's decision whether the vacation application is accepted or denied. This decision must be sent to subject1. In state "select", subject2 decides to follow the "send" transition, prepares the message with the result of the decision, and sends it to subject1. This swim-lane diagram shows which subject executes which actions in which sequence. If a subject sends a message, the "send" state is connected with the corresponding "receive" state in the receiving subject: subject1 sends a message to subject2 in state "state2", and subject2 receives that message in state "start".
A subject-oriented workflow system can guide each party involved in a process. It can be used to produce an execution protocol recording the sequence in which messages have been exchanged between the involved parties. Another advantage is that a workflow for a generic process scheme can be generated automatically; the only parameter that must be known is the number of involved parties, besides the subject starting the process execution (a minimal sketch of such generation follows). Once completely detailed, a proposed business process can be experienced and immediately negotiated using the subject-oriented workflow system. We exemplify the required steps in the next subsection.
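A minimal sketch of this generation, reduced to the communication structure only (the per-subject behaviour diagrams are omitted, and the representation as a dictionary is an assumption, not the suite's actual format):

```python
def generic_template(n):
    """Generate the communication structure of a type-n template:
    every subject may exchange 'Message' with every other one, and
    subject1 is the start subject."""
    subjects = [f"subject{i}" for i in range(1, n + 1)]
    paths = [(a, b) for a in subjects for b in subjects if a != b]
    return {"start": subjects[0], "subjects": subjects, "paths": paths}

t4 = generic_template(4)
print(len(t4["paths"]))  # 12 directed message paths in a type-4 scheme
```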
Submitting a Process Specification to an Organizational Memory
Detailing a generic process scheme can be described as omitting all items that are not required for task accomplishment, or "refinement through restriction". There are several restriction steps (a data-level sketch of the first step follows the list):
1. Remove message connections between subjects that are not required.
2. Name the subjects according to the application domain.
3. Name the messages and introduce message types according to the application domain.
4. Adapt the specification to the actual subject behaviour.
5. Refine the structure of the business objects transmitted by the various messages.
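As a data-level sketch of restriction step 1, with the communication structure again reduced to a set of directed sender-receiver pairs (an assumption, not the tool's actual representation):

```python
# Communication structure of a type-3 template: every ordered pair of
# subjects may exchange messages.
comm = {("subject1", "subject2"), ("subject2", "subject1"),
        ("subject1", "subject3"), ("subject3", "subject1"),
        ("subject2", "subject3"), ("subject3", "subject2")}

def restrict(comm, removed):
    """Restriction step 1: drop the message connections that the
    application at hand does not require."""
    return comm - removed

# For the vacation example below: employees and HR never talk
# directly, and HR never writes to the manager.
comm = restrict(comm, {("subject1", "subject3"),
                       ("subject3", "subject1"),
                       ("subject3", "subject2")})
print(sorted(comm))
```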
After each restriction step the process can be executed by a subject-oriented workflow system. With each restriction step, the guidance for the subject holders becomes more stringently aligned with task accomplishment.
In the following the application of these restriction steps is exemplified for handling vacation requests, starting with a generic process scheme of type 3.
Since the process specification can be embedded into the organization after each restriction step, the corresponding workflow can be used as shown above, according to the available resources.
Remove unnecessary communication paths.
We assume that Subject1 represents the stakeholder asking for vacation, Subject2 represents the manager, and Subject3 the human resource department. When handling the process "application for vacation", there is no communication between the employees and the HR department; therefore, the modeler needs to remove the corresponding exchange of messages. In addition, HR never sends a message to the manager, so the modeler needs to remove that exchange of messages between Subject3 and Subject2. The following figure shows the resulting communication structure.
Figure 14. Restricting communication
According to the changes in the communication structure, the behaviour of the subjects has to be adapted: the corresponding "send" and "receive" branches have to be removed. The following figure shows the adapted behaviour of Subject1; the "send" branch to Subject3 and the "receive" branch from Subject3 are removed.
Figure 15. Communication to Subject3 is removed in Subject1
Analogously, the behaviour of Subject2 is adapted: the "receive" transition for messages from Subject3 is removed. The following figure shows the resulting behaviour.
Figure 16. Subject3 does not send any messages
The changes in the workflow system restrict the possible interactions between subjects according to the vacation handling process, but message exchanges are still allowed which do not meet the requirements of the anticipated vacation process: Subject1 can send another message with an application for vacation, Subject2 can send several messages with an answer to Subject1, etc.
Getting Concrete.
In the next step the subjects have to be named: the generic names are replaced to fit the application. Since the intention is to create a process and workflow for handling applications for vacation, the subjects are named according to that domain: Subject1 is renamed "employee", Subject2 "manager", and Subject3 "HR". The following figure shows the communication structure of the resulting specification with the domain-specific subject names.
Figure 17. Communication structure with domain-specific subject names
The naming of the subjects also has some impact on the communication behaviour. In the subject behaviour specifications, the corresponding sending or receiving subjects must be adapted as well. The following figure shows the adapted behaviour of the subject "employee".
Figure 18. Behaviour of subject 'employee' with actual names of communication partners
The behaviour of the other subjects is adapted in an analogous way.
Instead of using the generic message name "message", the modeler needs to rename the messages in accordance with the application domain. He/she might introduce additional message types. The names of the messages exchanged between subjects can give some indication of their content and meaning. Instead of sending a message of type "message", with the subject "manager" learning only after reading it that this is an application for vacation, the subject "employee" can send a message with the name "application for vacation". In this way the name of the message type indicates the intention of sending a message in the context of a certain process specification.
The following figure shows the message types relevant for handling vacation requests. The subject "manager" can send two different message types to the subject "employee": one for acceptance and the other for denial. The new message names also improve the readability of the process.
Figure 19. Communication structure with actual message types
The new message types also have an impact on the behaviour specifications of all subjects. The message type "message" must be replaced with the new message names, and for the new message types additional "send" and "receive" transitions must be added. The following figures show the adapted behaviour of the subjects "employee" and "manager". The simple behaviour of the subject HR can be adapted in a similar way.
Figure 20. Behaviour of subject 'employee'
Through renaming message types and adding new ones, the process becomes more intelligible: proper names for message types help users grasp the meaning of a message type. But the same problems remain as after renaming the subjects. Although the process can be better understood, users might still send messages to subjects that are out of scope for the addressed task accomplishment, such as a second "application for vacation" message to the manager. It still requires cognitive effort to organize effective and efficient communication with a certain process in mind.
Figure 21. Behaviour of subject 'manager'
Up to now, the already restricted behaviour of the involved subjects still allows messages to be sent which are not in line with the intended handling of a business event. In order to achieve a coherent picture, the behaviour of the involved subjects has to be restricted further. The following figure shows the modified behaviour specification of the subject "employee". According to that specification, the subject "employee" can only send a single message "application for vacation" to the subject "manager". After sending that message, the employee has to wait for the answer from the manager. If the subject "employee" receives the message "denied" from the manager, the "end" state is reached and the process execution is finished. If the subject "employee" receives the message "approved", the state "vacation" is reached, which means the action "vacation" is executed. When this action is finished, the "end" state is reached.
Figure 22. Complete behaviour refinement of subject 'employee'
The following figure shows the restricted behaviour of the subject "manager". After receiving the message "application for vacation", the subject "manager" decides whether he/she accepts or rejects that application. In the acceptance case, the subject "manager" sends the message "accepted" to the subject "employee", after which the message "accepted vacation application" is sent to the subject "HR". Then the subject "manager" reaches the "end" state. In the case of rejecting the application, the subject "manager" sends the message "denied" to the subject "employee" and reaches another "end" state.
Figure 23. Final behaviour refinement of subject 'manager'
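To make the refined protocol concrete, the following Python sketch runs the two behaviours as message-passing state machines. The queue-based runtime, the thread scheduling, and the boolean approve flag are illustrative assumptions for demonstration only, not part of the S-BPM tooling; the message names mirror the specification above.

```python
import threading
from queue import Queue

def employee(to_manager, from_manager):
    to_manager.put("application for vacation")  # the single allowed send
    answer = from_manager.get()                 # wait for the manager's reply
    if answer == "accepted":
        print("employee: action 'vacation' executed")
    print("employee: 'end' state reached")

def manager(from_employee, to_employee, to_hr, approve):
    assert from_employee.get() == "application for vacation"
    if approve:                                 # decision: accept or reject
        to_employee.put("accepted")
        to_hr.put("accepted vacation application")
    else:
        to_employee.put("denied")
    print("manager: 'end' state reached")

e2m, m2e, m2hr = Queue(), Queue(), Queue()
t1 = threading.Thread(target=employee, args=(e2m, m2e))
t2 = threading.Thread(target=manager, args=(e2m, m2e, m2hr, True))
t1.start(); t2.start(); t1.join(); t2.join()
if not m2hr.empty():
    print("HR received:", m2hr.get())
```

Because each subject can only take the transitions defined here, a second "application for vacation" or a message to an out-of-scope subject is simply not expressible, which is exactly the restriction the refinement aims at.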
After those changes in the behaviour, individuals can be guided through the process in a task-conform way without generating behaviour that is out of the scope or context of the addressed task. Only one issue remains open: in the message specifications, the subjects still use a generic business object for transferring the details about the vacation. This problem can be solved by defining a business object that represents the application domain of the process.
Finally, each addressed business object needs to be detailed, as shown for the vacation application form.
Figure 24. Sample Structure Specification
Then negotiation at the level of the organization is initiated. It might lead to further adaptations. The versioning support of the organizational memory allows tracing the process of gathering inputs and reaching compromises.
Service-Oriented Architecting of the Tool Support
Subject-oriented models have to be considered from the perspective of (distributed) software architecting for implementation. This conversion must be executed quickly and at reasonable cost. Although functions can be derived through stepwise transformation of model elements, service-oriented architecting is more effective and efficient. Users (i.e., subjects) could compose information systems based on components implementing subject activities. Such an assembly would not require programming at all. The concept of service-oriented architectures (SOA) has been designed to support this type of implementation.
In the following we demonstrate the straightforward use of subject-oriented models for SOA implementations. The fundamental concept of SOA is the decomposition of software applications into coherent parts. Each part can be linked according to the organization's, users', or applications' needs. Such a composition allows generating applications along business processes in an effective and efficient way. Sequences of service calls replace change requests for existing applications involving IT and organization departments.
Using SOA requires business applications to be decomposed into functions or services, such as checking a vacation request, or updating a days-off sheet.These functions might stem from various platforms.Once functions become available as independent parts, developers might group them to establish novel functionalities.Neither the entire program nor the business logic is located in a single program, but rather distributed over several systems, including organizations.
A complete SOA requires control mechanisms assigning services to activities or tasks and implementing the overall flow of control. Each service has to be triggered accurately in order to meet the requirements of the application. As these control mechanisms are the starting point of activities, they represent the subjects of SOA. For the definition of process control, service orchestration and service choreography can be used.
Service Orchestration and Service Choreography.
Orchestration describes the sequence of services as required for business operation. Orchestrating operations or services involves some kind of flow-diagram mechanism to specify the sequences of utilized service operations. BPEL (Business Process Execution Language) has been defined as a formal description language for this purpose; cf. Havey (2005). The execution of an orchestration requires central control to follow the defined flow of control. So far, the integration of user interactions is very costly, as BPEL targets automated processes (workflows). Yet each process has at least one or two users: one triggering the process and one interested in the result.
The choreography of services, in contrast, is the description of direct interactions of the parties involved in a process, e.g., a sequence of messages. No activity is orchestrated centrally; instead, the activities synchronize themselves by exchanging messages. Each party involved in a process has its own flow of control, synchronized by exchanging messages. Hence, each user is responsible for executing process steps. In contrast to orchestration, when using choreography the system elements follow an agreed plan while pursuing individual paths to accomplish the involved tasks (initially promoted and then monitored by a controlling party). The table at the end of this section summarizes the respective concepts. Service operations, their orchestration or choreography, and the recognition of users follow a 3-tier concept. The top level concerns the interaction with users: activating operations, providing inputs to functions of a process, and processing their results. Below it, the implementation of the business processes is addressed; at this level, the sequence of functions (implemented in the bottom layer) is defined, or user inputs (top level) are expected. As such, level 2 contains most of the business process logic. The bottom level provides all service operations used by level 2.
Orchestration
The subjects or actors correspond to the presentation level in SOA. Activities correspond to the operations assembled according to the orchestration. In Figure 26 the process description is given using EPC notation (IDS, 2000), orchestrating all services including user interactions. The service infrastructure contains objects whose operations are used as activities (termed predicates in the figure).
Figure 26. Orchestration and SOA
Using jPASS! (www.jcom1.com), operations can be used to provide meaningful subject-specific steps. Such services in use are triggered in a synchronous way. Hence, a subject switches to a subsequent state once all concerned services have been completed. This approach corresponds to orchestration. In the corresponding tool jPASS!, subjects also comprise users besides technical elements. The services in use can either represent complex business-logic elements or elementary reading or writing operations.
The services assigned to the nodes might also contain user input/output as a part of their processing scheme. In this way users become part of the service of a process. A service can provide data sent via messages to other subjects, e.g., implemented by form-based user interaction. For instance, such an internal function would allow a user to fill in a holiday application form. The service meets the technical requirement to provide the relevant data for the holiday application. User inputs can also be combined with calculated information or information extracted from data repositories. User services can be standardized to construct (process) portals that might serve as the controller component for process or service execution.
Subjects themselves can also be considered services. A subject can request a service from another subject. Subjects can be considered service providers producing the requested services in a self-contained way. In these cases subjects offer so-called 'request services' that are requested by messages. The requesting service might continue the service provision once a service provider has accomplished the requested task. The results delivered by the requested subject are accepted by the requesting service when suitable. Figure 27 visualizes the various aspects of subject-oriented process description, service-oriented architecture (SOA), and resulting portals.
Figure 27. jPASS! and SOA
In the business context, portals offer a user-specific view on information and relevant business processes. The subject-oriented perspective of processes reflects, for all users, their relevant part(s). By allocating subjects to users (stakeholders), all portal processes or services can be identified that are relevant for individual task accomplishment, streamlined with the task procedures of other members of the organization.
Conclusions
Organizations have to act and react according to environmental changes and internal developments. An inappropriate (re)action might lead to problematic situations. In our research we look at computer workplaces, the stakeholders' knowledge, and the resulting potential for individual and organizational learning. If each person is enabled to adapt business processes, data, and artefacts according to his/her needs and role characteristics, profound knowledge of the organization, seen as the incorporation of all its members, can be stored and made tangible via workflow generation. It can also be challenged and modified in a participatory style of organizational development.
Our research goal was and still is to support stakeholders so that they can (re)define business processes according to their view on the organization of work in a consensual way. In doing so, they are able to contribute actively to organizational learning. The proposed subject-oriented representation and execution support captures the structure and dynamics of task accomplishment, addressing actors (roles), problem domain data (objects), and interaction modalities (message exchanges) for task accomplishment. Each specification can be visualized and directly experienced by all involved parties.
What still needs to be done is to perform cross-organizational field studies to evaluate the impact of actor-specific and communication-driven workflow generation for networked organizational learning processes. It has to be investigated how stakeholders can be motivated to adapt business processes according to their experience in cross-organizational networks and to start organizational learning processes. Of particular interest is the observation of the evolution of business processes in the organizational memory, as well as of the impact the individual views have on the organizational changes of the business processes.
Observe: individuals observe stimuli and their consequences from the environment.
Assess: the observations are assessed, partly consciously and partly unconsciously.
Design: on the basis of the assessment, individuals form abstract concepts to react to the stimuli in an adequate way.
Implement: the developed concepts are implemented in real situations. The observation and reception of the effectiveness re-iterates the cycle.
users" perception of tasks in terms of involved actors, problem domain data required for task accomplishment, and exchange of messages to complete work tasks.
Number 1 in Figure 12 shows the name of the current state: "Prepare message and select address". Number 2 shows the title of that process instance: "Request for vacation". Number 3 shows the creation date of that process instance. Number 4 shows the form for instantiating the business object.
Figure 12. User interface of the workflow system in state "prepare message and select the person(s) to be addressed"
Orchestration vs. choreography:
Orchestration: central control for process execution; central process execution cannot be implemented for multi-threaded organizations; tendency towards serialization of process activities.
Choreography: each party is responsible for the correct execution sequence; multi-threaded organizations can be supported through decentralized process execution; tendency towards parallel process activities.

Service-Oriented Architecture (SOA). Service operations, their orchestration or choreography, and the recognition of users follow a 3-tier concept, as shown in the figure.
"Business",
"Computer Science"
] |
Multi-Area State Estimation: A Distributed Quasi-Static Innovation-Based Model with an Alternative Direction Method of Multipliers
In modern power system networks, grid observability has greatly increased due to the deployment of various metering technologies, which enhance real-time monitoring of the grid. The collected observations are processed by the state estimator, on which many applications rely. Traditionally, state estimation on power grids has been performed in a centralized architecture. With grid deregulation and growing awareness of information privacy and security, much attention has been given to multi-area state estimation. State-of-the-art solutions consider a weighted norm of the measurement residual, a model that can miss gross errors masked in the range space of the Jacobian matrix. Towards a solution, a distributed innovation-based model is presented. Measurement innovation is used for error composition: the measurement error is an independent random variable, whereas the residual is not, so the masked component is recovered through measurement innovation. The model is solved with an Alternating Direction Method of Multipliers (ADMM), which requires minimal information communication. The presented framework is validated using the IEEE 14 and IEEE 118 bus systems. An easy-to-implement model, built on the classical weighted norm of the residual solution and without hard-to-design parameters, highlights its potential for real-life implementation.
Introduction
Power System State Estimation (PSSE) was originally introduced by Schweppe in the early 1970s [1], and the operation of the grid has relied on this model ever since. Through State Estimation (SE), operators in control rooms are able to make decisions and perform actions in order to operate the grid efficiently and reliably. Hence, SE is an essential part of monitoring the status of the grid in real time. With the advent of the Smart Grid (SG) concept, the architecture of the grid has changed through the integration of many different technologies [2]. Specifically, new sensor technologies are deployed in order to enhance monitoring of the grid. Such integration has not only advanced SE accuracy, but has also increased the computational burden of real-time monitoring.
As existing power networks migrate to the SG paradigm, one of the numerous additional challenges is privacy and security [3]. The power network is typically composed of a large number of buses that are operated and monitored by several regional centers. With grid deregulation, a distributed real-time monitoring process for large interconnected power grids becomes imperative for reliable operation. Since PSSE is traditionally performed in a centralized architecture, substantial research efforts have been aimed at developing distributed approaches for advancing the SE process.
One main characteristic of Distributed State Estimation (DSE) is enabling regional control centers to perform their own SE process with enhanced robustness. Considering DSE, the state-of-the-art literature can be divided into two categories: hierarchical and neighbor-to-neighbor SE. In the hierarchical approach, each local control center performs its own SE and, upon convergence, the estimated states are communicated to a central control for coordination. The works in [4][5][6][7][8][9][10] have explored this approach. The main shortcoming of this method is that a central controller is still needed to reconcile the global estimate of the states by matching the boundary bus measurements. The neighbor-to-neighbor SE approach, on the other hand, eliminates the need for the central processor; communication happens between neighboring control centers. The authors in [11][12][13][14][15] have proposed different techniques for designing a fully distributed SE. In [11], a relaxed semidefinite programming non-linear SE is used to achieve a near-optimal solution; however, the method may fail in the absence of voltage magnitude meters at all buses. In [12], a distributed non-linear SE is presented, whose main drawback is the need to estimate the global state vector in each local area. In [14], a decomposition method is proposed; however, local observability is required and convergence is not always guaranteed. In [15], the authors developed a DSE based on the Alternating Direction Method of Multipliers (ADMM); however, the method considers only a linear measurement model, known as DC SE. In the DC SE model, measurements are assumed to be linearly related to the system states; it is an approximation that might not be suitable for applications where an accurate representation of the underlying physical system is needed. In fact, the relationships between recorded measurements and state variables are non-linear [16,17]. A review of the concept of multi-area SE can be found in [18]. This paper presents a distributed non-linear multi-area state estimator based on the Gauss-Newton solution. The specific contributions of this work towards the state-of-the-art are two-fold:
• A distributed measurement model for non-linear multi-area state estimation which utilizes ADMM;
• Applying the Innovation Concept, which takes into account the masked error component in the Jacobian range space.
State Estimation with the Innovation Concept
The power system with n buses and m measurements is modeled as a set of non-linear equations as follows [19]:

$$z = h(x) + e \quad (1)$$

where $z \in \mathbb{R}^m$ is the measurement vector, $x \in \mathbb{R}^N$ is the state variable vector, $h(x): \mathbb{R}^N \rightarrow \mathbb{R}^m$ ($m > N$) is a non-linear differentiable function that relates the states to the measurements, $e$ is the measurement error vector, assumed to have zero mean, standard deviation $\sigma$, and a Gaussian probability distribution, and $N = 2n - 1$ is the number of unknown state variables. Weighted Least Squares (WLS) is a classical state estimator that searches for the best estimates of the states $x$ by minimizing the well-known cost function:

$$J(x) = [z - h(x)]^T R^{-1} [z - h(x)] \quad (2)$$

where $R$ is the measurement covariance matrix. The index $J(x)$ is a norm in the measurement vector space. Let $\hat{x}$ be the solution of the aforementioned minimization problem. Then the estimated measurement vector is $\hat{z} = h(\hat{x})$, and the residual is defined as the difference between $z$ and $\hat{z}$: $r = z - \hat{z}$. Linearizing (1) at a certain operating point $x^*$ yields:

$$\Delta z = H \Delta x + e \quad (3)$$

where $H = \partial h / \partial x$ is the Jacobian matrix of $h$ calculated at the point $x^*$, and $\Delta z = z - h(x^*) = z - z^*$ and $\Delta x = x - x^*$ are the corrections of the measurement and state vectors, respectively.
Under an observable condition, i.e., $\mathrm{rank}(H) \geq N$, the vector space of measurements can be decomposed into two mutually orthogonal sub-spaces. Let $P$ be a linear operator such that $\Delta\hat{z} = P \Delta z$, and let the residual vector be $\Delta r = \Delta z - \Delta\hat{z}$. Then the vector $\Delta\hat{z} = H \Delta x$ is orthogonal to the residual vector $\Delta r$, since $P$ projects the measurement mismatch $\Delta z$ onto the range space of $H$. Equivalently, in mathematical form one can write:

$$H^T R^{-1} (\Delta z - H \Delta x) = 0 \quad (4)$$

Solving for $\Delta x$, one obtains:

$$\Delta x = (H^T R^{-1} H)^{-1} H^T R^{-1} \Delta z \quad (5)$$

Using the solution in (5), the estimated increment in the measurements is:

$$\Delta\hat{z} = H \Delta x \quad (6)$$

Therefore, by substituting (5) into (6), the idempotent projection matrix $P$ can be calculated using the expression:

$$P = H (H^T R^{-1} H)^{-1} H^T R^{-1} \quad (7)$$

It is possible to decompose the measurement error vector into two components: the detectable error $e_D$, which is the residual in the classical WLS model, and the undetectable error $e_U$. $e_D$ lies in the space orthogonal to the range space of the Jacobian, whereas $e_U$ is hidden in the Jacobian range space. Hence, the error can be written as:

$$e = e_D + e_U = (I - P)e + Pe \quad (8)$$

The error vector in (8) is called the Composed Measurement Error (CME). In order to find the masked error and compose it, the Innovation Index (II) introduced in [20] is used to quantify the undetectable error:

$$II_i = \frac{\lVert e_{D,i} \rVert}{\lVert e_{U,i} \rVert} \quad (9)$$

A low Innovation Index indicates that there is a large component of error which is not reflected in the residual; the residual will then be very small even if there is a gross error. Using (8) and (9), the CME in its normalized form is:

$$CME_i^N = \frac{r_i}{\sigma_i} \sqrt{1 + \frac{1}{II_i^2}} \quad (10)$$

where $\sigma_i$ is the standard deviation of measurement $i$. Since the measurement error has a unique decomposition, the authors in [19] showed that the minimization should be performed on the norm of the composed error in the detection stage. Therefore, the objective function to be minimized in SE is:

$$J(x) = \sum_{i=1}^{m} \left( CME_i^N \right)^2 \quad (11)$$
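As a numerical illustration, the following Python sketch computes the WLS increment (5), the projection matrix (7), and the normalized composed error (10) for a toy three-measurement, two-state system. All matrices are made up, and taking the per-measurement innovation index from the diagonal of P is a simplifying assumption made here for brevity.

```python
import numpy as np

def innovation_analysis(H, R, dz):
    """Sketch of the innovation-based error composition on dz = H dx + e."""
    Rinv = np.linalg.inv(R)
    G = H.T @ Rinv @ H                       # gain matrix
    dx = np.linalg.solve(G, H.T @ Rinv @ dz) # WLS increment, Eq. (5)
    P = H @ np.linalg.solve(G, H.T @ Rinv)   # projection matrix, Eq. (7)
    r = dz - P @ dz                          # residual (detectable component)
    sigma = np.sqrt(np.diag(R))
    # Per-measurement innovation index approximated from diag(P) (assumption)
    Pd = np.clip(np.diag(P), 1e-12, 1 - 1e-12)
    II = np.sqrt((1 - Pd) / Pd)
    cme_n = (r / sigma) * np.sqrt(1 + 1 / II**2)  # normalized CME, Eq. (10)
    return dx, II, cme_n

# Toy 3-measurement / 2-state example (illustrative values)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.diag([1e-4, 1e-4, 1e-4])
dz = np.array([0.01, -0.02, 0.05])
dx, II, cme_n = innovation_analysis(H, R, dz)
print(dx, II, cme_n)
```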
Alternating Direction Method of Multipliers (ADMM)
The algorithm was introduced in the 1970s by Gabay, Mercier, Glowinski, and Marrocco [21]. ADMM is a simple algorithm that combines the features of dual ascent and the method of multipliers. The powerful feature of dual ascent is that, in some cases, decomposition can be applied. The method of multipliers, on the other hand, provides robustness to dual ascent in the sense that convergence can be achieved without strict assumptions on the objective function. Therefore, ADMM can be used for the solution of multi-area state estimation, treating the interconnected system measurement model in a distributed architecture. The general form of ADMM can be written as follows [21]:

$$\min_{x,z} \; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = c \quad (12)$$

with variables $x \in \mathbb{R}^n$ and $z \in \mathbb{R}^m$, where $A \in \mathbb{R}^{p \times n}$, $B \in \mathbb{R}^{p \times m}$, and $c \in \mathbb{R}^p$. Using the method of multipliers, the augmented Lagrangian can be written as:

$$L_\rho(x, z, v) = f(x) + g(z) + v^T (Ax + Bz - c) + \frac{\rho}{2} \lVert Ax + Bz - c \rVert_2^2 \quad (13)$$

Therefore, the structure of ADMM consists of the following iterations:

$$x^{t+1} = \arg\min_x \; L_\rho(x, z^t, v^t) \quad (14)$$
$$z^{t+1} = \arg\min_z \; L_\rho(x^{t+1}, z, v^t) \quad (15)$$
$$v^{t+1} = v^t + \rho (A x^{t+1} + B z^{t+1} - c) \quad (16)$$

where ρ > 0. The iteration in (14) is simply a minimization over x while fixing the other variables, i.e., z and v. Then a minimization over z is performed in (15) while fixing v and using the updated x from (14). The last iteration is simply an update of the dual variable v. The dual variable can be viewed as a running sum of the error, analogous to an integral controller in control theory. This version of ADMM is often called the unscaled form. Another version, known as the scaled form, can be obtained by defining the residual $r = Ax + Bz - c$ and combining the linear and quadratic terms in the augmented Lagrangian:

$$L_\rho(x, z, u) = f(x) + g(z) + \frac{\rho}{2} \lVert r + u \rVert_2^2 - \frac{\rho}{2} \lVert u \rVert_2^2 \quad (17)$$

where $u = \frac{1}{\rho} v$ is the scaled dual variable. Hence, the ADMM cycles in the scaled version can be expressed as:

$$x^{t+1} = \arg\min_x \left( f(x) + \frac{\rho}{2} \lVert Ax + Bz^t - c + u^t \rVert_2^2 \right) \quad (18)$$
$$z^{t+1} = \arg\min_z \left( g(z) + \frac{\rho}{2} \lVert Ax^{t+1} + Bz - c + u^t \rVert_2^2 \right) \quad (19)$$
$$u^{t+1} = u^t + A x^{t+1} + B z^{t+1} - c \quad (20)$$

The two versions of the ADMM iterations are equivalent; the scaled form, however, is relatively shorter in formulation.
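The scaled iterations (18)-(20) can be exercised on any problem with the structure of (12). The Python sketch below applies them to a lasso problem ($f(x) = \frac{1}{2}\lVert Ax - b \rVert^2$, $g(z) = \lambda \lVert z \rVert_1$, constraint $x - z = 0$), which admits closed-form sub-updates. This is a generic illustration of the method on synthetic data, not the state estimation model itself.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Scaled-form ADMM, Eqs. (18)-(20), on the split
    f(x) = 0.5||Ax-b||^2, g(z) = lam*||z||_1, constraint x - z = 0."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = AtA + rho * np.eye(n)        # constant across iterations
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))  # x-update, Eq. (18)
        z = soft_threshold(x + u, lam / rho)         # z-update, Eq. (19)
        u = u + x - z                                # dual update, Eq. (20)
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=30)
print(np.round(admm_lasso(A, b), 2))   # recovers the sparse coefficients
```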
Multi-Area AC State Estimation Model Using ADMM
The power system network is a wide-area interconnected system that is typically partitioned into regions based on geographical location. The system states of each region are monitored and supervised by a local control center. The overall PSSE goal is to estimate the states in each control region in an optimal way. If each control center estimates its local states without communicating with neighboring regions, the estimated solution is sub-optimal. At the same time, if there are measurements pertaining to lines connecting boundary buses between areas, or injection measurements associated with those boundary buses, then communication between those control centers is needed in order to utilize those measurements. To illustrate an interconnected power network, the IEEE 14 bus system is partitioned into four areas as shown in Figure 1. As seen, area 1 is connected with area 2 through the tie-lines connecting buses 5 and 2 (in area 1) with buses 3 and 4 (in area 2), respectively. At the same time, area 1 is connected with area 3 through the transmission line connecting bus 5 with bus 6. Therefore, towards optimal estimation, area 1 would need to communicate with areas 2 and 3. This could be done through information sharing of the states of buses 3, 4, and 6 if there are power flow measurements in those lines and/or power injections at buses 2 and 5. A similar scenario applies to the other areas. Hence, for multi-area state estimation, each area augments the states shared with neighboring areas to its local states when performing local SE. The advantage of the ADMM algorithm in the state estimation process pertains to the decomposition of the state estimation process while constraining the estimates of bordering buses shared with neighboring areas. In that way, communication is limited to neighboring regions only, which leads to estimating the system states in a fully distributed manner and without a central process. In addition, upon convergence, the global estimation is optimal and carried out with minimal computations (and requirements of information sharing).
The framework for multi-area state estimation is initialized with the partition of the grid into K control centers. Each control center aims to solve the following model:

$$z_k = h_k(x_k) + e_k \quad (21)$$

where $z_k \in \mathbb{R}^{M_k}$ is the measurement vector of region k, $x_k \in \mathbb{R}^{N_k}$ is the state variable vector of region k associated with the measurements in $z_k$, $h_k(x_k): \mathbb{R}^{N_k} \rightarrow \mathbb{R}^{M_k}$ is a non-linear differentiable function that relates the states to the measurements, $e_k$ is the measurement error vector assumed with zero mean, standard deviation σ, and a Gaussian probability distribution, $M_k$ is the number of measurements in region k, and $N_k$ is the number of unknown state variables in region k. The model in (21) is non-linear; hence, a linearization is required in order to solve the non-convex optimization problem. Using the Gauss-Newton approximation, as described in Section 2.1, one can write the following model:

$$\Delta z_k = H_k \Delta x_k + e_k \quad (22)$$

Solving for $\Delta x_k$, one obtains:

$$\Delta x_k = (H_k^T R_k^{-1} H_k)^{-1} H_k^T R_k^{-1} \Delta z_k \quad (23)$$

Then the estimated state can be updated simply as:

$$x_k \leftarrow x_k + \Delta x_k \quad (24)$$

The process is solved iteratively until a convergence criterion is achieved. In [15], a framework for applying ADMM to distributed state estimation is established considering a multi-area linear measurement model. In some applications, such as fault diagnosis of power systems, a more accurate model is required. Hence, the ADMM framework considering a multi-area non-linear state estimation model is developed in this work.
The main characteristic of the power system grid is that some of the states in the state vector $x_k$ are common between region k and its neighbors, due to measurements collected that relate to those states. Hence, for neighboring regions k and l, define $x_k[l]$ ($x_l[k]$) as the sub-vector of the states in region k (l) that are shared between the two regions. One can define auxiliary variables for each state shared between two regions: for instance, the auxiliary variable $x_{kl}$ can be introduced to represent $x_k[l]$; similarly, for region l, the auxiliary variable $x_{lk}$ represents $x_l[k]$. Define for a region k the set $N_k$ of regions that share states with region k and, for each entry $x_k(i)$ of the state vector of region k, the set $N_k^i$ of regions sharing the state $x_k(i)$ with region k. Since the state update relies on the increment (as described in Section 2.1), the augmented Lagrangian (as described in Section 2.2) can be written for the increments of the states, with consensus constraints enforcing agreement of the auxiliary variables, $x_{kl} = x_{lk}$, for every pair of neighboring regions. ADMM then consists of alternately minimizing the augmented Lagrangian over the local state increments and the auxiliary variables, and updating the multipliers. By applying the minimization of the increment of states as in (23), the ADMM iterations can be simplified [15] and outlined in the following steps:

Step 1 (local update): each region solves its penalized local estimation problem,

$$\Delta x_k^{t+1} = \arg\min_{\Delta x_k} \; \lVert \Delta z_k - H_k \Delta x_k \rVert_{R_k^{-1}}^2 + \rho \sum_{l \in N_k} \left\lVert x_k[l] + \Delta x_k[l] - \bar{x}^{t}[l] + \frac{v_{k,l}^{t}}{\rho} \right\rVert_2^2$$

whose closed-form solution involves the matrix $(H_k^T R_k^{-1} H_k + \rho D_k)^{-1}$.

Step 2 (consensus): every shared state is averaged over the regions estimating it,

$$\bar{x}^{t+1}(i) = \frac{1}{|N_k^i|} \sum_{l \in N_k^i} x_l[i]^{t+1}$$

Step 3 (dual update):

$$v_{k,l}^{t+1} = v_{k,l}^{t} + \rho \left( x_k[l]^{t+1} - \bar{x}^{t+1}[l] \right)$$

where $D_k$ is a diagonal matrix with entry (i, i) equal to $|N_k^i|$, i.e., the number of regions sharing the state $x_k(i)$ with region k, $v_{k,l}$ is the Lagrange multiplier associated with the constraint between region k and region l, and $x_l[i]$ is the entry in the vector $x_l$ corresponding to the state $x_k(i)$.
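The Python sketch below mimics the three-step structure (local solve, consensus averaging, dual update) for two areas that share one boundary state under a linearized measurement model. It is a generic consensus-ADMM toy, not the exact simplified iterations of [15]; the selection matrices, penalty ρ, and all measurements are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(H, z, S, xbar, u, rho):
    """Step 1: min 0.5||z - Hx||^2 + (rho/2)||Sx - xbar + u||^2, where the
    selection matrix S picks the shared entry of the local state vector."""
    A = H.T @ H + rho * S.T @ S
    b = H.T @ z + rho * S.T @ (xbar - u)
    return np.linalg.solve(A, b)

x_true = np.array([1.0, 0.5, -0.3])             # [local1, shared, local2]
H1 = rng.normal(size=(4, 2)); z1 = H1 @ x_true[:2]   # area 1 sees states 0,1
H2 = rng.normal(size=(4, 2)); z2 = H2 @ x_true[1:]   # area 2 sees states 1,2
S = np.array([[0.0, 1.0]])   # area 1 shares its 2nd local entry
T = np.array([[1.0, 0.0]])   # area 2 shares its 1st local entry
rho, xbar = 1.0, np.zeros(1)
u1 = u2 = np.zeros(1)
for _ in range(100):
    x1 = local_update(H1, z1, S, xbar, u1, rho)      # Step 1, area 1
    x2 = local_update(H2, z2, T, xbar, u2, rho)      # Step 1, area 2
    xbar = 0.5 * ((S @ x1 + u1) + (T @ x2 + u2))     # Step 2: consensus avg
    u1 = u1 + S @ x1 - xbar                          # Step 3: dual updates
    u2 = u2 + T @ x2 - xbar
print(x1, x2, xbar)   # shared entries agree and approach 0.5
```

Note that each area only ever exchanges its copy of the boundary state and the consensus average, which is exactly the minimal-communication property exploited by the multi-area model.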
Case Study
The presented framework is tested on two IEEE systems, i.e., the IEEE-14 bus and the IEEE-118 bus system. To apply multi-area state estimation, the IEEE-14 bus system is divided into 4 control regions [4] while the IEEE-118 system is partitioned into 3 regions [13]. In all cases, the MATLAB package MATPOWER [22] is used to generate measurement sets with Gaussian noise for each scenario. The measurement sets consist of line power flows (real and reactive) for each line, bus power injections, and all voltage magnitudes. The implementation of the proposed framework and the evaluation of results were conducted using MATLAB.
IEEE-14 Bus System
In this case study, the partitioned IEEE-14 bus system illustrated in Figure 1 is used. The multi-area non-linear state estimation model presented previously is applied and, upon convergence, the final estimates of the states are shown in Figures 2 and 3. On the "Area Number" axis, area number 5 is the power flow solution, while the different bars associated with a bus across areas indicate that the specific bus is shared with the corresponding areas. The results show that the converged solution is very close to the power flow solution. To show the performance of the presented model, the error metric $e_k = \lVert x_k - x_{pf} \rVert_2 / N_k$ in log scale is utilized. A set of 100 Monte Carlo simulations is generated, and the average error curve for each area is shown in Figure 4. From the results, one can see that the presented model is able to achieve a per-area mean error smaller than $10^{-2}$ within a few more iterations than the centralized solution, which in this case took 11 iterations. Using the Monte Carlo simulations, Tables 1 and 2 present the error statistics (mean) per area per state; in other words, the difference between the estimated states and the power flow solution is calculated for each state. In Table 1, local and shared states are included in the calculation, while in Table 2 only local states are considered. Table 2 further presents the final estimates of the states in each area.

To evaluate the presented model's robustness, measurement noise conditions are simulated. Figures 5 and 6 show the performance index J(x), which is the norm of the measurement error in each area after convergence, over the iterations. In these figures, the index J(x) and the χ² threshold associated with each area are plotted in the same window. As one can see, upon convergence, the performance index is smaller than the threshold value, which indicates that no gross error is present. Hence, the estimated states are optimal and the measurement errors are minimal. To further analyze the robustness of the presented model, the noise level in the measurements is varied from 0 to 1.4% of the measurements' standard deviation (sd). The test result is presented in Figure 7, which highlights the effect of the noise level in the measurements associated with each area on the performance index J(x). Each area has a different χ² threshold because of the different measurement sets across areas. The results show that even with a noise level above 1%, the performance index J(x) per area is smaller than the threshold value, which reduces false positive alarms.
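For reference, the χ² threshold used in such tests can be obtained directly from the inverse chi-square distribution. The sketch below assumes a 95% confidence level and made-up measurement/state counts for one area; both values are illustrative, not taken from the case study.

```python
from scipy.stats import chi2

# J(x) is compared against the chi-square threshold with (m - N) degrees
# of freedom at a chosen confidence level (0.95 assumed here).
m, N = 41, 27                         # illustrative counts for one area
threshold = chi2.ppf(0.95, df=m - N)
J_x = 18.2                            # converged performance index (made up)
print("gross error suspected" if J_x > threshold else "no gross error detected")
```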
IEEE-118 Bus System
In this case study, the multi-area non-linear state estimation model is applied to a partitioned IEEE-118 bus system. Since the number of states in this system is large compared to the IEEE-14 bus system, it is difficult to visualize the final estimates. However, the same error metric used for the IEEE-14 bus system is applied. A set of 100 Monte Carlo simulations is generated, and the average error curve for each area is shown in Figure 8. From the shown results, one can see that the presented model achieved an error below the order of $10^{-2}$ within fewer iterations than for the system in Figure 1. The main reason is that the number of shared states within an area is smaller, which depicts a realistic real-life scenario. Tables 3 and 4 present the error statistics per area per state for the considered system. In Table 3, local and shared states are included in the calculation, while in Table 4 only local states are considered. Table 4 also presents statistics regarding the final estimates of the states in each area.
The behavior of the performance index J(x) is recorded over the iteration process and is presented in Figures 9 and 10. Upon convergence, the performance index is less than the threshold value; hence, no gross errors are detected based on the χ² test. The robustness of the presented model against the noise level in the measurements is also evaluated on this test system. Similar to the previous case study, the noise level is varied from 0 to 1.4% of the measurements' standard deviation. The result is presented in Figure 11. The results show that with a noise level of up to 1.1%, the performance index J(x) per area is below the threshold value, which reduces false positive alarms.
Conclusions
In this paper, a multi-area non-linear state estimation model is presented. The model considers the Innovation Index (II) concept, which estimates the masked error component pertaining to the Jacobian range space and then composes the measurement error. The multi-area measurement model is solved with the Alternating Direction Method of Multipliers, which enables minimal information sharing. Validation is performed on two IEEE test systems, partitioned to emulate different control centers controlling and monitoring an interconnected power system. Tests include a comparison with the benchmark centralized quasi-static measurement model. Test result statistics regarding robustness to measurement noise and computational speed are analyzed. The presented results show that the model is capable of handling noise in the states shared among areas while maintaining computational speed and precision. Further, the multi-area model converges to the centralized solution within an acceptable range of errors. Considering that most utility companies' software relies on centralized non-linear state estimation, the presented model can be easily integrated in real-life applications without major changes.
Conflicts of Interest:
The authors declare no conflict of interest.
"Engineering",
"Computer Science"
] |
Categorical Vehicle Classification and Tracking using Deep Neural Networks
The classification and tracking of vehicles is a crucial component of modern transportation infrastructure. Transport authorities invest heavily in it, since it is one of the most critical transportation facilities for collecting and analyzing traffic data to optimize route utilization, increase transportation safety, and build future transportation plans. Numerous novel traffic evaluation and monitoring systems have been developed as a result of recent improvements in fast computing technologies. However, camera-based systems still lag in accuracy, as most systems are constructed using limited traffic datasets that do not adequately account for weather conditions, camera viewpoints, and highway layouts, forcing the system to make trade-offs in the number of actual detections. This research offers a categorical vehicle classification and tracking system based on deep neural networks to overcome these difficulties. The capabilities of the generative adversarial network framework to compensate for weather variability, Gaussian models to identify roadway configurations, a single shot multibox detector for categorical vehicle detection with high precision, and the boosted efficient binary local image descriptor for tracking multiple vehicle objects are all incorporated into the research. The study also includes the publication of a high-quality traffic dataset with four different perspectives in various environments. The proposed approach has been applied to the published dataset and its performance has been evaluated. The results verify that, using the proposed flow of approach, one can attain higher detection and tracking accuracy.

Keywords—Vehicle classification; generative adversarial networks; single shot multibox detector; vehicle tracking; deep neural networks
I. INTRODUCTION
With a rising number of vehicles on the road, in a huge variety, resulting in traffic congestion and a slew of related difficulties, it is necessary to address these issues [1]. This motivates us to consider an intelligent and smart traffic monitoring system that could assist traffic agencies in addressing issues such as routing traffic based on the density of vehicle movement on the road; collecting traffic data like the count of vehicles, vehicle type, and vehicle motion parameters; and managing roadside assistance in the event of an accident or other anomalous incident. Such a system conducts traffic analysis using the acquired data to optimize the use of highway networks, forecast future transportation demands, and enhance transportation safety [2]. The primary functions of an intelligent traffic monitoring system are vehicle categorization and category-based tracking. Due to the substantial technological problems associated with these functions, several research directions have been studied, resulting in the creation of numerous vehicle categorization and tracking systems. Classifying vehicles and maintaining their trajectories properly in a variety of environmental circumstances is critical for efficient traffic operation and transportation planning.
Scientific advancements have resulted in the development of several novel vehicle categorization systems. Three types of categorical vehicle classification systems may be found in use today: in-road, over-road, and side-road. Each category is further divided into subcategories depending on the sensors utilized, the techniques used to deploy the sensors, and the processes used to classify vehicles [3]. While both in-road and side-road approaches are capable of accurate categorical vehicle classification, they differ significantly in terms of sensor types, hardware configurations, configuration process, parameterization, operational requirements, and even costs, making it difficult to determine the most suitable solution for a given setting in the first instance. These techniques have limitations when more than one vehicle is in the same location at the same time [4], so they cannot be utilized for tracking vehicles.
To circumvent these restrictions, over-road-based methods for categorical vehicle classification and tracking are used. Camera-based systems are the most popular technology for over-road-based systems [5] [6]. The cameras are mounted at a height sufficient to cover the road's wide field of vision and can span several lanes. There are two primary obstacles associated with camera-based systems in attaining our aim. First, their performance is significantly impacted by weather and lighting conditions, resulting in blurred, hazy, and rainy observations in the collected pictures; similar degradations appear in pictures captured when vehicles are travelling at high speed on the road. Second, a higher viewing angle allows more distant road surfaces to be considered; however, the apparent size of a vehicle then changes significantly, and the detection accuracy of tiny objects located far down the road suffers because of this shift. We focus on these two difficulties in this work to provide a feasible solution, and we demonstrate how to adapt the categorical vehicle recognition findings to multiple object tracking.
A. Image Restoration
Image restoration problems such as image deblurring, dehazing, and deraining are all aimed at creating an accurate representation of a clear final picture from an insufficiently clear input image. Numerous studies have been conducted in this area. A multi-layer perceptron technique for deblurring that eliminates noise and artefacts was proposed in [7]. To cope with outliers, a CNN based on single value dissemination is used [8]. Certain techniques [9], [10] begin by estimating blur kernels with convolutional neural networks and subsequently deblur images using traditional restoration methods. Many edge-adaptive neural networks have been developed for the purpose of recovering clear images instantly [11], [12]. Recent deep-learning-based approaches for image dehazing [13], [14] estimate transmission maps first and subsequently restore clear images using conventional methodologies [15]. Typically, traditional methods for image deraining are created using the statistical characteristics of rain streaks [16][17][18][19]. The author in [20] built a neural network for removing rain and/or dirt from pictures. Developed with the aid of ResNet [21], a deep network for image deraining was built in [22]. The author in [23] introduced the Generative Adversarial Network (GAN) architecture for generating realistic pictures from random noise. Numerous techniques for visual tasks have been developed on the basis of this framework [24][25][26][27]. The authors in [28][29][30][31] have also applied the GAN framework to low-level vision issues. We chose to apply the capabilities of the physics-model-based GAN framework [32] for picture restoration tasks due to these positive findings.
B. Detection of Vehicles
Vehicle detection can now be accomplished using both standard machine vision techniques and sophisticated deep learning techniques. Traditionally, machine vision techniques employ a vehicle's motion to distinguish it from a fixed backdrop picture. This approach may be classified into three categories [33]: background subtraction [34], continuous frame subtraction [35], and optical flow [36]. In frame subtraction, variance is determined by comparing the pixel data of two or three successive frames, and thresholding separates the moving foreground region [35]. By employing this technique and reducing noise, a vehicle's halt may also be recognized [37]. When the video's background is stationary, background data is used to build the model [37]; it then becomes possible to segment the moving objects in the frame images by comparing each frame image to the background model. The optical flow approach is exploited to detect motion regions in frames; the resulting optical flow field encodes the direction of motion and speed of each pixel [36]. While classic machine vision approaches detect vehicles more quickly, they do not perform well when the image brightness varies, when there is continuous motion in the backdrop, or when there are slowly moving vehicles or complicated sceneries. Vehicle identification using deep convolutional neural networks [52] may be classified into two broad groups. The two-stage technique begins by generating candidate boxes for the objects using multiple methods and then classifies them using a CNN. The single-stage technique, in contrast, does not produce candidate boxes but instead turns the object bounding-box localization problem directly into a regression problem. Region-CNN (R-CNN) [38] employs a two-stage technique that utilizes selective search [39] over regions of the image; the CNN input must be of fixed size, and the network's deep structure needs a lengthy training period and uses a significant amount of storage capacity. SPP-NET [40], which is based on the concept of spatial pyramid matching, enables the network to accept pictures of varying sizes and provide fixed-size outputs. Among the one-stage techniques, the Single Shot Multibox Detector (SSMD) [41] and You Only Look Once (YOLO) [42] frameworks are the most important. For many categories, SSD is significantly faster than previous single-shot detectors (YOLO) and as accurate as slower techniques that undertake explicit region proposals and pooling, such as Faster R-CNN [43]. SSMD's central idea is to forecast category scores and box offsets for a specific set of default bounding boxes by applying tiny convolutional filters on feature maps. We chose to use the SSD framework [41] for categorical vehicle identification and classification tasks due to these positive findings.
C. Tracking of Vehicles
Aspects of the functioning of an intelligent traffic system that need advanced vehicle object identification applications, such as multiple object tracking, are also crucial [44]. Detection-Based Tracking (DBT) and Detection-Free Tracking (DFT) are the two most common methods of initializing objects in multi-object tracking systems. To detect moving objects in video frames, the DBT method first uses background modelling to detect them before tracking them. The DFT technique, however, is only capable of initializing the tracked objects and cannot deal with the addition of new objects or the removal of current ones. Multi-object tracking algorithms must consider the similarity of items within a frame, as well as the association problem of objects across frames. The normalized cross-correlation function may be used to determine the similarity of objects within a frame. As shown in [45], the Bhattacharyya distance is used to calculate the distance between two objects based on the colour histograms of their respective images. When connecting inter-frame items, it is critical to specify that each item may appear on no more than one track at a time and that each track may include no more than one object; this issue can be handled by using either detection-level exclusion or trajectory-level exclusion. SIFT and ORB feature points were used for object tracking to overcome the difficulties caused by size and illumination changes in moving objects in [46] and [47]; however, this approach is slow and requires many feature points. The feature description technique Boosted Efficient Binary Local Image Descriptor (BEBLID) [48] is therefore used in this study; BEBLID is considerably faster than SIFT and ORB in extracting feature descriptors.
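As an illustration of this choice, the sketch below detects ORB keypoints, describes them with BEBLID, and matches them across two frames with a Hamming-distance brute-force matcher. It assumes opencv-contrib-python (BEBLID lives in the xfeatures2d module); the file names and the 0.75 scale factor (the value suggested for ORB keypoints in the OpenCV examples) are illustrative.

```python
import cv2

img1 = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.ORB_create(1000)                    # detect up to 1000 keypoints
descriptor = cv2.xfeatures2d.BEBLID_create(0.75)   # BEBLID binary descriptor

kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)
kp1, des1 = descriptor.compute(img1, kp1)          # describe ORB keypoints
kp2, des2 = descriptor.compute(img2, kp2)

# Hamming distance suits binary descriptors; cross-check keeps symmetric matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")
```

In the tracking stage, the matched keypoints inside a detected vehicle box can be used to associate that box with its counterpart in the next frame.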
D. Our Contributions Comprise the Following Items
On the foundation of this work, a large-scale dataset of vehicle movement on roads has been developed, which offers many distinct categorical vehicle objects, thoroughly annotated under diverse situations captured by high-mounted cameras. The dataset can be utilized to test the performance of a variety of vehicle detection methods.
For recovering blurred, hazy, or rainy images recorded in road scenes, a method based on the GAN framework for image restoration has been developed. This approach is utilized to increase the accuracy of vehicle detection in road scenes.
A technique based on convolutional neural networks, i.e., SSMD, is implemented for categorical vehicle detection.
A system for tracking and analyzing several vehicles is presented for road situations. The BEBLID method extracts and matches the detected object's feature points.
The findings of this investigation are discussed in further detail in the following sections. Section III introduces the vehicle dataset utilized in this work. Section IV describes the general procedure of the suggested system. Section V shows the results of the experiments as well as the relevant analyses. Section VI provides a comprehensive summary of the complete method.
III. VEHICLE DATASET
Because of concerns about copyright, privacy, and security, traffic datasets are rarely made public despite the widespread use of traffic surveillance cameras on highways across the world. With images of highway sceneries and typical road scenes, the KITTI benchmark dataset [31] aids in the solution of issues such as 3D object identification and tracking, which are commonly encountered in automated vehicle driving applications. The Tsinghua-Tencent Traffic-Sign Dataset [32] contains pictures captured by automobile cameras in a variety of lighting and weather situations; however, no vehicles are annotated. The Stanford Car Dataset [33] and the Comprehensive Cars Dataset [34] are vehicle datasets captured by non-monitoring cameras and featuring a clear car appearance; they are used in research and development. Some datasets are captured by security cameras; one such dataset is the BV Dataset [35]. Even though this dataset categorizes vehicles into 6 categories, the shooting angle is positive and the vehicle object in each image is too tiny, making generalization impossible for CNN training. A dataset called Traffic and Congestions [36] comprises photos of cars on roads collected by security cameras; however, most of the images contain some degree of occlusion. This dataset has a small number of images and contains no information on vehicle classification, making it less helpful. As a result, only a few datasets have pertinent annotations, and only a few images of traffic scenes are available. This section provides an overview of the vehicle dataset, built from road surveillance footage, that we created. The dataset is available at: https://drive.google.com/drive/folders/1vYwLPkZZ2OX1cIIPQZA4SgB3dum7vPwV?usp=sharing. The video in the dataset was taken from the DND road in Delhi, India, as shown in Fig. 1. The road monitoring camera was put on the side of the road at a height of 10 meters with a fixed angle of view. The photos taken from this vantage point span a large portion of the road into the distance and include vehicles of all types. The pictures in the dataset were taken from four surveillance cameras at different times of day and under varied lighting situations to provide a diverse range of photographs. The vehicles in this dataset are divided into three categories: two-wheelers; Light Motor Vehicles (LMV), which include three-wheelers, automobiles, minivans, and other similar vehicles; and Heavy Motor Vehicles (HMV), which include buses, trucks, and other similar vehicles (Fig. 2). Table I summarizes the dataset.
IV. METHODOLOGY
The technique of the categorical vehicle classification and tracking system is described in detail in this section. First, the video data from the road traffic scenario is imported into the system. Second, the GAN framework is used to recover the captured pictures. After that, the road area is excavated. The SSMD deep learning object detection technique is then used to recognize the presence of vehicles belonging to three different categories in the road traffic environment. Finally, BEBLID feature extraction is carried out on the detected vehicle boxes to complete the tracking of multiple vehicle objects. The proposed technique covers the essential components of picture restoration, vehicle detection, propagating object states into future frames, linking current detections with existing objects, and controlling the lifespan of tracked objects. A block diagram of the methodology is depicted in Fig. 3.
A. Image Restoration
As previously stated, weather and lighting circumstances have a significant impact on the performance of camera-based systems, resulting in blurred, hazy, and rainy observations in the captured pictures. Similar observations arise in images of high-speed vehicle movement on the road. The former scenario is caused by environmental changes and is thus less likely to occur, but the latter situation occurs almost without fail, necessitating restoration. To achieve precise vehicle detection, it is necessary to repair the images to eliminate these issues. Following a study of the literature on picture restoration approaches, we were encouraged by the positive results to apply the capabilities of the physics-model-based GAN framework [32] to image restoration problems in our own research.
1) Image Restoration with GAN:
An image restoration task is to predict a clear picture x from a provided input image y. Fundamentally, the estimated x should be compatible with the input y under the picture creation paradigm:

$$y = H(x) \quad (1)$$

The operator H is used to transfer the unknown outcome x to the observed picture y; depending on the situation, it may be the blur, haze, or rain operation. It is required to apply extra constraints on x to regularize it, since the estimation of x from y is not well-posed. In the maximum a posteriori (MAP) paradigm, one frequently used method is predicated on the assumption that x may be solved by

$$\hat{x} = \arg\max_x \; p(y \mid x)\, p(x) \quad (2)$$

In the above equation, $p(y \mid x)$ and $p(x)$ are probability density functions, referred to in the scientific literature as the likelihood term and the image prior, respectively. Alternatively, the mapping function between y and x can be learned directly, $\hat{x} = G(y)$, where G is the mapping function; G can be considered an inverse operator of H. If the mapping function can be predicted accurately, G(y) should theoretically be close to the ground truth.
The GAN algorithm uses adversarial learning to learn a generative model. It trains a generative network and a discriminative network at the same time by optimizing

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \quad (3)$$

in which z represents random noise, x represents a genuine picture, and D represents the discriminative network; for convenience, we also refer to the generative network as G. As part of the training process, the generator generates samples G(z) that may be used to deceive the discriminator, while the discriminator learns to discriminate between actual data and samples generated by the generator. A binary classifier is used as the discriminator. If the observed image y serves as the input to the generator, the adversarial loss is

$$\mathcal{L}_{adv} = \log(1 - D(G(y))) \quad (4)$$

The value of (4) is close to zero (its maximum) if the distribution of the produced picture G(y) differs considerably from the distribution of clear images, and it decreases as the two distributions become similar. Applying the negative log of the discriminator output as a prior, the image restoration problem can be addressed by minimizing

$$\min_x \; \lVert y - H(x) \rVert^2 + \lambda\, \rho(x) \quad (5)$$

where the data term $\lVert y - H(x) \rVert^2$ ensures that the recovered image x and the input image y are consistent under the appropriate image degradation model, and the regularization term $\rho(x)$ models the characteristics of the recovered image. In vision tasks, the function $\rho(x) = -\log D(x)$ acts through a discriminator: its value is considerably smaller if x is clear and much bigger otherwise. In other words, training the networks with the goal function in (3) drives $\rho(x)$ down for clear intermediate estimates, so the predicted intermediate picture will be significantly more detailed. Accordingly, in order to regularize the solution space of picture restoration, the adversarial loss can be employed as an image prior. Fig. 4 depicts the major components of the GAN method, which include two discriminative networks, one generative network, and one picture degradation model [32], as well as their interactions.
For image deblurring, the degradation model is $\tilde{y} = k \otimes x$, where $k$ is the blur kernel and $\otimes$ represents the convolution operator. For image dehazing and deraining, $\tilde{y} = x \odot t + A(1 - t)$, where $A$ represents an atmospheric factor and $t$ is the transmission map. The discriminative network $D_g$ is used to determine whether the distributions of the generator G outputs are comparable to those of the ground-truth images. The discriminative network $D_h$ is used to classify whether the regenerated result $\tilde{y}$ is consistent with the observed image $y$. All the networks are trained jointly in an end-to-end manner.
During training we rely on an Adam optimizer with an initial learning rate of 0.0002, following the method outlined in [24]. We choose a batch size of one and a slope of 0.2 for the Leaky-ReLU, and use the same weight initialization strategy as [24]. We first let the generator G create G(y) and the re-degraded result $\tilde{y}$; since we know the training data as well as the physics model parameters, the generator output can be passed through the degradation model. The discriminators $D_g$ and $D_h$ accept the input data sets $\{x, G(y)\}$ and $\{y, \tilde{y}\}$, respectively. We update the discriminators using a history of produced pictures (rather than only the most recent generative network's images), according to the methods discussed in [24]. The update ratio between the generator and the discriminators is set to one-to-one.
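A minimal PyTorch sketch of one such training step is given below, assuming placeholder networks G, Dg, and Dh (ending in a sigmoid, with LeakyReLU(0.2) internally) and a physics-model function degrade that re-synthesizes the blur/haze/rain from the restored image. The L1 content term is an additional assumption for stability, not taken from the text.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def train_step(G, Dg, Dh, degrade, y, x, opt_g, opt_dg, opt_dh):
    # --- discriminator updates: real samples vs. generated / re-degraded ---
    x_hat = G(y).detach()
    loss_dg = bce(Dg(x), torch.ones_like(Dg(x))) + \
              bce(Dg(x_hat), torch.zeros_like(Dg(x_hat)))
    opt_dg.zero_grad(); loss_dg.backward(); opt_dg.step()

    y_tilde = degrade(x_hat)            # physics-model consistency check
    loss_dh = bce(Dh(y), torch.ones_like(Dh(y))) + \
              bce(Dh(y_tilde), torch.zeros_like(Dh(y_tilde)))
    opt_dh.zero_grad(); loss_dh.backward(); opt_dh.step()

    # --- generator update: fool both discriminators, stay close to x ---
    x_hat = G(y)
    loss_g = bce(Dg(x_hat), torch.ones_like(Dg(x_hat))) + \
             bce(Dh(degrade(x_hat)), torch.ones_like(Dh(degrade(x_hat)))) + \
             nn.functional.l1_loss(x_hat, x)   # assumed content term
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Optimizers as described in the text: Adam with lr = 0.0002, e.g.
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
```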
B. Excavation of the Road Area
This section covers the procedure for extracting the road surface. We developed it using an image processing approach based on the Gaussian mixture model, which results in superior vehicle detection when combined with the deep learning object detection method, as shown in Fig. 2. The video picture of traffic on the road has a wide field of vision. In this investigation, the vehicles are the primary centre of attention, and the road area is the region of interest in the image. Meanwhile, depending on the camera's view angle, the road area is confined to a certain range of the image's horizontal and vertical extents; we were able to extract the road segments from the video using this property. In a traffic scenario, a perfect background is not always available and may continually be modified in crucial circumstances by the introduction or removal of objects from the picture, as well as by the presence of objects that are slow-moving or immobile. The Gaussian mixture model (GMM) was used to account for all these factors correctly. The method assumes that the background is visible more frequently than the foreground and that its model variance is small [49].
The recent history of the intensity values of each pixel, X_1, ..., X_t, is modeled by a mixture of K Gaussian distributions. The probability of observing the current pixel value is given by

P(X_t) = Σ_{k=1}^{K} ω_{k,t} · η(X_t, μ_{k,t}, Σ_{k,t}),

where K is the number of Gaussian distributions, ω_{k,t} is the weight of the k-th Gaussian in the mixture at time t with mean μ_{k,t} and covariance matrix Σ_{k,t}, and η is the Gaussian probability density function

η(X_t, μ, Σ) = (2π)^{-n/2} |Σ|^{-1/2} exp(-(1/2)(X_t - μ)^T Σ^{-1} (X_t - μ)),    (10)

where n is the dimension of the colour space. As soon as the parameters have been initialized, the K Gaussians are sorted in decreasing order of the ratio ω_k/σ_k. Because backgrounds are more prevalent in scenes than moving objects, and because their values are almost constant, a background pixel corresponds to a high weight with low variance. The first B Gaussian distributions whose cumulative weight surpasses a threshold T_1 are kept as the background distribution; the remaining distributions represent the foreground. The system computes and compares every new value X_t with the K Gaussian distributions until a match is found: a pixel value matches a distribution if it lies within 2.5 standard deviations of that distribution's mean. Once the road section has been extracted as the background picture, the background image is smoothed using a Gaussian filter, and the MeanShift method smoothes the image's colours. The final step is to fill the holes and carry out morphological operations in order to obtain most of the road surface. We applied the procedure to a variety of landscapes; the extracted road regions are shown in Fig. 5.
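For readers wanting to reproduce the background-extraction step, the sketch below uses OpenCV's built-in Gaussian-mixture background subtractor followed by MeanShift colour smoothing and morphological hole filling, mirroring the pipeline described above. The parameter values and the input file name are assumptions, not values from the paper:

```python
import cv2

# MOG2 is OpenCV's Gaussian-mixture background model; K, T_1 and the
# learning rate are not given in the paper, so these values are assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("traffic.mp4")       # hypothetical input video
background = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    subtractor.apply(frame)                 # update the per-pixel mixtures
    background = subtractor.getBackgroundImage()
cap.release()

if background is not None:
    # Colour smoothing with MeanShift, then morphology to fill holes,
    # matching the post-processing steps described in the text.
    smooth = cv2.pyrMeanShiftFiltering(background, sp=15, sr=30)
    gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    road = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
```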
C. Categorical Vehicle Detection using SSMD
Here is a description of the object detection approach employed in this study. The SSMD network was used to develop and deploy the categorical vehicle detection framework. The SSD approach produces a fixed-size set of bounding boxes and scores for object class instances with a feed-forward convolutional network, followed by a non-maximum suppression phase that yields the final detections. Adding an auxiliary structure to a base network, such as VGG-16, results in detections with the following important characteristics:
1) Multi-scale feature maps for detection:
Convolutional feature layers are added at the end of the truncated base network. These layers decrease in size progressively, allowing predictions of detections at multiple scales.
2) Convolutional predictors for detection: A set of convolutional filters is associated with each added feature layer, producing a fixed set of detection predictions. For a feature layer of size m×n with p channels, the basic element for predicting parameters is a small 3×3×p kernel that produces either a score for a category or a shape offset relative to the default box coordinates. The kernel is applied at each of the m×n locations, producing an output value at each. The bounding box offset output values are measured relative to a default box position on each feature map, so it is crucial to keep track of how measurements differ between the various feature maps.
3) Default boxes and aspect ratios: A set of default bounding boxes is associated with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so the position of each box relative to its corresponding cell is fixed. For each feature map cell we predict the offsets relative to the default box shapes, as well as the per-class scores indicating the presence of a class instance in each of those boxes; specifically, for each box we compute the class scores and four offsets to obtain the final bounding box, as seen in the illustration. Applying (c + 4)k filters around each location in an m×n feature map yields (c + 4)kmn outputs. a) Training: For SSD training to be effective, the ground truth information must be assigned to specific detector outputs in the fixed set of detector outputs. Once this assignment is decided, it is applied end-to-end to the loss function and back propagation. One must additionally choose the set of default boxes and scales, as well as the hard negative mining and data augmentation methods.
i) Matching strategy: For training, we need to determine which default boxes correspond to the ground truth boxes and train the network accordingly. Each default box is predefined with a fixed set of attributes, such as box size, aspect ratio, and placement. We begin by matching each ground truth box to the default box with the best jaccard overlap. We then also match default boxes to any ground truth whose jaccard overlap exceeds a threshold (0.5). This simplifies the learning problem: the network may predict high scores for multiple overlapping default boxes, instead of needing to select only the single box with the largest overlap.
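A minimal numpy sketch of this matching strategy is given below; the function names are our own, and it assumes at least one ground truth box is present:

```python
import numpy as np

def jaccard_overlap(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def match_defaults(defaults, truths, threshold=0.5):
    """Per default box: index of the matched ground truth, or -1."""
    iou = np.array([[jaccard_overlap(d, t) for t in truths] for d in defaults])
    matches = np.full(len(defaults), -1)
    # match any default box whose best overlap exceeds the threshold ...
    above = iou.max(axis=1) > threshold
    matches[above] = iou.argmax(axis=1)[above]
    # ... and force-match the best default box for every ground truth box
    matches[iou.argmax(axis=0)] = np.arange(len(truths))
    return matches
```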
ii) Loss function: The training objective is designed to deal with a variety of vehicle types. Let x_ij^p ∈ {0, 1} be an indicator for matching the i-th default box to the j-th ground truth box of category p; under the matching strategy shown above, Σ_i x_ij^p ≥ 1. The overall objective loss function is the weighted sum of the localization loss (loc) and the confidence loss (conf):

L(x, c, l, g) = (1/N) (L_conf(x, c) + α L_loc(x, l, g)),    (12)

where N is the number of matched default boxes and the weight term α is set to 1 by cross validation. If N equals 0, the loss is set to zero. The localization loss measures the discrepancy between the predicted box (l) parameters and the ground truth box (g) values.
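The following PyTorch sketch illustrates the shape of this objective; it omits the hard negative mining of the confidence term (discussed below) and the offset encoding, so it is a simplified illustration rather than the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def ssd_loss(conf_pred, loc_pred, labels, loc_target, alpha=1.0):
    """Simplified Eq. (12): (L_conf + alpha * L_loc) / N.

    conf_pred  : (boxes, classes) class logits per default box
    loc_pred   : (boxes, 4) predicted offsets
    labels     : (boxes,) matched class index, 0 = background
    loc_target : (boxes, 4) encoded ground-truth offsets
    """
    pos = labels > 0                       # positive (matched) default boxes
    n = pos.sum()
    if n == 0:                             # the loss is defined as 0 when N = 0
        return conf_pred.sum() * 0.0
    l_conf = F.cross_entropy(conf_pred, labels, reduction="sum")
    l_loc = F.smooth_l1_loss(loc_pred[pos], loc_target[pos], reduction="sum")
    return (l_conf + alpha * l_loc) / n
```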
iii) Scales and aspect ratios for default boxes: To handle diverse object scales, feature maps from several different layers in a single network are used for prediction, with parameters shared across all object scales. It has also been shown that feature maps from the lower layers can improve semantic segmentation quality, since the lower layers capture finer details of the input objects. For detection we therefore make use of both the lower and the higher feature maps. By tiling the default boxes appropriately, individual feature maps learn to be sensitive to objects of particular sizes and shapes. Suppose we wish to make predictions using m feature maps. The scale of the default boxes for each feature map is computed as

s_k = s_min + (s_max − s_min)/(m − 1) · (k − 1),  k ∈ [1, m],    (13)

where s_min = 0.2 and s_max = 0.9, so the lowest layer has a scale of 0.2, the topmost layer has a scale of 0.9, and all levels in between are evenly spaced. Various aspect ratios a_r are imposed on the default boxes, from which the width (w_k^a = s_k √a_r) and height (h_k^a = s_k/√a_r) of each default box can be determined. The centre of each default box is set to ((i + 0.5)/|f_k|, (j + 0.5)/|f_k|), where |f_k| denotes the size of the k-th square feature map.
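A small sketch of the scale computation in Eq. (13) and the resulting box shapes:

```python
def default_box_scales(m, s_min=0.2, s_max=0.9):
    """Scale s_k for each of the m prediction feature maps, Eq. (13)."""
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

def default_box_shape(s_k, a_r):
    """Width and height of a default box with scale s_k and aspect ratio a_r."""
    return s_k * a_r ** 0.5, s_k / a_r ** 0.5

print(default_box_scales(6))   # [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]
```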
iv) Hard negative mining: We rank the default boxes by their individual confidence losses and pick only those at the top of the list, so that the ratio of negatives to positives is at most 3:1. This results in faster optimization and more stable training.
v) Data augmentation: To make the model more robust to a broad range of input object sizes and shapes, each training image is randomly processed with one of the following options: Utilize the whole original input image.
Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9.
Take a sample of a patch at random.
Each sampled patch has a size between 0.1 and 1 of the original image's size, with an aspect ratio between 1/2 and 2. After this sampling step, each sampled patch is resized to a fixed size, and the patches are horizontally flipped with a probability of 50%.
D. Multiple Vehicle Object Tracking
This section describes how multiple vehicle objects are tracked using the object boxes found in the preceding section. During this stage the BEBLID algorithm was employed to extract vehicle features, with good results. The BEBLID method surpasses comparable descriptors by a considerable margin in terms of computing performance and matching cost, making it a superior alternative to other image description algorithms previously described in the literature. BEBLID features are computed from differences in grey values between pairs of box image regions, with the integral image serving as the basis of the computation. The technique uses AdaBoost to train a descriptor on an imbalanced data set, addressing the challenge of highly asymmetric image matching. The descriptor is binarized by minimizing a similarity loss in which all weak learners share a common weight. A local coordinate system is established by taking the feature point as the centre of a circle and using the centroid of the point region to define the x-axis; when the image rotates, the coordinate system rotates with it, giving the feature point descriptor rotation consistency, so a point remains consistent when viewed from a different angle. After binarization, the feature points are matched using the XOR operation, which improves the overall efficiency of the matching process. Fig. 6 illustrates the tracking method. When the number of matched points reaches a predefined threshold, the point set is regarded as successfully matched, and the object's matching box is drawn around it. The prediction box is obtained as follows: feature points are purified using the Maximum Likelihood Estimator Sample Consensus (MLESAC) algorithm, which excludes incorrect noise points caused by matching errors, and the homography matrix is then estimated from the purified matches. A perspective transform of the original object detection box with the estimated homography yields the matching prediction box. For a prediction box in the first frame and a detection box in the second frame to be associated with the same object, the distance between their centre points must be smaller than a threshold. Specifically, we define a threshold T equal to the largest pixel displacement of the centre of a vehicle object box between two subsequent video frames; the positional movement of the same vehicle between two successive frames is smaller than T. When the centre point of the vehicle object box moves by more than T between two subsequent frames, the vehicles in those two frames become unrelated, and the data association fails. The threshold T is proportional to the size of the vehicle object box, taking scale change into account, so the threshold differs for every vehicle object box. This definition is flexible enough to accommodate vehicle motion and a variety of video input sizes.
The height of the vehicle object box is used as the input parameter, with T = box height/0.25. We discard any trajectory that has not been updated in ten consecutive frames, which is suitable for a camera scene with wide-angle image collection along the route under investigation. If the prediction box does not match any object in future frames, the object is determined to be absent from the video scene and the prediction box is removed. The method outlined above yields global object identification and tracking trajectories from the viewpoint of the whole road surveillance video.
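The following OpenCV sketch illustrates the per-frame prediction-box step: BEBLID descriptors on detected keypoints, Hamming (XOR-style) matching, robust homography estimation, and a perspective transform of the previous detection box. OpenCV does not expose MLESAC, so RANSAC is used here as a stand-in robust estimator, and the detector choice and thresholds are assumptions:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(500)                     # keypoint detector (assumption)
beblid = cv2.xfeatures2d.BEBLID_create(0.75)  # opencv-contrib; 0.75 suits ORB
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def predict_box(prev_img, cur_img, box):
    """Carry a detection box (x, y, w, h) from prev_img into cur_img."""
    kp1 = orb.detect(prev_img, None)
    kp1, d1 = beblid.compute(prev_img, kp1)
    kp2 = orb.detect(cur_img, None)
    kp2, d2 = beblid.compute(cur_img, kp2)
    matches = matcher.match(d1, d2)
    if len(matches) < 10:                     # matching-point threshold
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # MLESAC stand-in
    if H is None:
        return None
    x, y, w, h = box
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
    return cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H).reshape(-1, 2)
```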
E. Analysis of Trajectories
This section discusses the analysis of moving objects' trajectories and the gathering of data on multiple objects in a traffic flow. The majority of roadways are split into two lanes separated by isolation barriers. We identify the vehicle's orientation in the world coordinate system from its tracking trajectory and mark it as approaching or receding from the camera. A straight line is drawn across the traffic scene image to serve as a detection line for computing vehicle classification statistics. The detection line is centred at the mid-point of the traffic image's height. Concurrently, the road's traffic flow is counted in both directions. When an object's trajectory crosses the detection line, the object's record is accessed, and at the end of the operation the number of objects of different orientations and categories over a certain period can be calculated.
V. SIMULATION AND RESULTS
Many measures have been developed in the past for evaluating system performance quantitatively. The proper choice depends heavily on the application, and the search for a single, universal evaluation criterion is still ongoing. On one hand, it would be ideal to condense the results into a single number that can be compared directly. On the other hand, one does not want to lose knowledge about an algorithm's specific faults, and presenting a large number of performance estimates makes a clear verdict impossible. We therefore evaluate performance with more than one parameter.
A. For Image Restoration
1) Peak signal to noise ratio (PSNR): For a reference image f and a test image g, both of resolution M×N, the PSNR score between f and g is calculated as

PSNR(f, g) = 10 log_10 (255² / MSE(f, g)),    (14)

MSE(f, g) = (1/MN) Σ_{i=1}^{M} Σ_{j=1}^{N} (f_ij − g_ij)².    (15)

The PSNR score increases as the mean squared error (MSE) decreases; this indicates that a greater PSNR value corresponds to a higher image quality.
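A direct numpy implementation of Eqs. (14)-(15):

```python
import numpy as np

def psnr(f, g, peak=255.0):
    """PSNR (dB) between a reference image f and a test image g, Eq. (14)."""
    mse = np.mean((f.astype(np.float64) - g.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```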
2) Structural similarity index (SSIM):
The SSIM is a well-known quality metric used to compare two images; it is thought to be connected to the human visual system's perception of quality. The SSIM score is calculated as

SSIM(f, g) = l(f, g) · c(f, g) · s(f, g),    (16)

where l, c and s are the luminance, contrast and structural comparison functions, respectively:

l(f, g) = (2 μ_f μ_g + C_1)/(μ_f² + μ_g² + C_1),    (17)
c(f, g) = (2 σ_f σ_g + C_2)/(σ_f² + σ_g² + C_2),    (18)
s(f, g) = (σ_fg + C_3)/(σ_f σ_g + C_3).    (19)

A few results of the GAN framework for image restoration are shown in Fig. 7.
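In practice the SSIM of Eq. (16) is usually computed with a library routine, e.g. scikit-image; the images below are synthetic placeholders:

```python
import numpy as np
from skimage.metrics import structural_similarity

# structural_similarity implements the l*c*s product of Eq. (16)
f = np.random.rand(64, 64)              # hypothetical reference image
g = f + 0.05 * np.random.rand(64, 64)   # hypothetical distorted image
score = structural_similarity(f, g, data_range=1.0)
print(score)
```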
The images are randomly selected, and their performance is quantified in terms of PSNR and SSIM. The average scores of the two parameters are shown in Table II.
B. For Vehicle Detection
It was necessary to use the test set to compute the mean average precision (mAP), the mean over classes of the average precision (AP), which is defined through the area under the precision-recall curve for a given object class [43]. The experiment is divided into three classes: two-wheelers, light motor vehicles, and heavy motor vehicles. For each category, AP is the mean of 11 precision values taken at the recall thresholds [0, 0.1, 0.2, ..., 1] on the category's precision/recall curve. For recall values larger than each threshold there is a matching maximum precision value, denoted p_max(recall); in this experiment the IoU barrier for a correct detection is 0.25. AP is the average of these 11 maximum precisions, and this number was used to describe the overall quality of our model. Precision, recall and IoU (intersection over union) are calculated as

Precision = TP/(TP + FP), Recall = TP/(TP + FN), IoU = area of overlap / area of union,    (24)

in which TP, FN, and FP denote the number of true positives, false negatives, and false positives, respectively (a minimal sketch of the AP computation is given after the comparison below). We computed the parameter scores for two configurations: 1) When the dataset was sent directly into the object detection algorithm, that is, when no image restoration procedure was used.
2) When the dataset was first restored using the GAN framework and then fed into the object detection algorithm.
Tables III and IV provide the parameter results for the two configurations. There is a 13.7 percent difference between the two configurations in the metric mAP. This improvement clearly demonstrates that restoring the pictures has a significant influence on the quality of object identification and, indirectly, on the accuracy of object tracking.
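A minimal sketch of the 11-point interpolated AP described above, with a toy precision/recall curve standing in for real detector output:

```python
import numpy as np

def eleven_point_ap(recall, precision):
    """11-point interpolated AP: mean of max precision at recall >= t."""
    ap = 0.0
    for t in np.arange(0.0, 1.01, 0.1):
        mask = recall >= t
        p_max = precision[mask].max() if mask.any() else 0.0
        ap += p_max / 11.0
    return ap

# toy precision/recall curve (hypothetical values)
recall = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
precision = np.array([1.0, 0.9, 0.8, 0.6, 0.4])
print(eleven_point_ap(recall, precision))
```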
A few results of the SSMD approach for categorical vehicle detection are depicted in Fig. 8.
C. Multiple Vehicle Object Tracking
The performance evaluation for multiple vehicle object tracking is done through the following parameters [51]:
1) Multiple Object Tracking Accuracy (MOTA):
This parameter takes into account three different types of errors: false positives, missed targets, and identity switches. For better tracking accuracy, a high MOTA value is preferred. It is calculated as

MOTA = 1 − Σ_t (FN_t + FP_t + IDSW_t) / Σ_t GT_t,    (25)

where t is the frame index and GT is the count of ground truth objects. MOTA can be negative if the count of mistakes produced by the tracker exceeds the total object count in the scene. The MOTA score is a solid indicator of a tracking system's overall performance.
2) Multiple Object Tracking Precision (MOTP):

MOTP = Σ_{t,i} d_{t,i} / Σ_t c_t,    (26)

where d_{t,i} is the bounding box overlap of target i with its assigned ground truth object, and c_t is the count of matches in frame t. MOTP gives the average overlap between all correctly matched hypotheses and their corresponding objects, and spans between t_d = 50% and 100%.
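A small sketch computing MOTA and MOTP from per-frame error counts and matched overlaps (Eqs. (25)-(26)); the input format is our own convention:

```python
def mota_motp(frames):
    """frames: list of dicts with per-frame counts 'fn', 'fp', 'idsw', 'gt'
    and a list 'overlaps' of IoUs for the correctly matched hypotheses."""
    gt = sum(f["gt"] for f in frames)
    errors = sum(f["fn"] + f["fp"] + f["idsw"] for f in frames)
    matches = [o for f in frames for o in f["overlaps"]]
    mota = 1.0 - errors / gt
    motp = sum(matches) / len(matches) if matches else 0.0
    return mota, motp

frame = {"fn": 1, "fp": 0, "idsw": 0, "gt": 5, "overlaps": [0.8, 0.7, 0.9, 0.6]}
print(mota_motp([frame]))
```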
3) False Alarms per Frame (FAF):
It reflects the per-frame number of false alarms. A lower value of FAF is desirable for better tracking.
4) Mostly Tracked (MT):
It indicates the number of trajectories that are mostly tracked, i.e. the target has kept the same label for at least 80% of its lifetime. A high value of the MT parameter is desirable for better tracking.
5) Mostly Lost (ML):
It indicates the number of trajectories that are mostly lost, i.e. the target is not tracked for at least 20% of its lifetime. A lower value of the ML parameter is desirable for better tracking.
8) IDsw: The number of times an ID switches to a formerly tracked object. A lower value of the IDsw parameter is desirable for better tracking.
9) Frag: The number of times a track is fragmented due to a missed detection. A lower value of the Frag parameter is desirable for better tracking.
The scores of the various tracking parameters are given in Table V. Trajectory estimation on the dataset is depicted in Fig. 9: it summarizes the movement of vehicles with direction information and maps the future state predictions.
VI. CONCLUSION
This research developed, from the standpoint of surveillance cameras, a dataset of vehicle objects and presented a technique for image restoration, object detection, and tracking in road traffic video scenes. The use of the GAN framework for picture restoration, together with the GMM for road area extraction, resulted in a more effective detection system. The annotated road vehicle object dataset was used to train the SSMD object detection algorithm, yielding an end-to-end vehicle detection model. The location of the object in the image is evaluated by the BEBLID feature extraction method based on the results of the object detection step and the image data, so the trajectory of a vehicle can be determined by tracking the binary features of many objects. Lastly, the vehicle trajectories were examined to obtain information on the road traffic scene, such as driving direction, vehicle category and traffic density. Testing confirmed that the suggested vehicle identification and tracking approach for road traffic scenes performs well and is practicable, as demonstrated by the outcomes of the experiments. The method described in this paper is low in cost and high in stability compared to the traditional hardware-based method of monitoring vehicle traffic, and it requires no large-scale construction or installation work on existing monitoring equipment, which is a significant advantage over the traditional method. | 9,776.2 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Elastic electron scattering by halocarbon radicals in the independent atom model approach
In order to study the elastic scattering of electrons by CFn (n = 1 − 4) molecular targets the independent atom model (IAM) is used with the optical potential (OP) method. The scattering cross sections were calculated in two approximations of the model – the IAM approach is used for the differential, while the Additivity Rule (IAM-AR) is used for the integral cross sections. The amplitudes of electron scattering by the carbon and fluorine atoms of the target molecules are calculated from the corresponding phase shifts, using the real and complex optical potential method. The parameter-free real part of the OP is calculated from the corresponding atomic characteristics – nuclear charge, electron density and static dipole polarizability. The differential and integral cross sections are calculated at equilibrium internuclear distances of the CFn molecules. They were compared with the available experimental data and with other theoretical results. A good overall agreement was observed while comparing our integral cross sections with the measured data. The level of the agreement however strongly depends on the target molecule, and a good consistency is observed typically above certain collision energies: from 10 eV in case of CF2, above 15-20 eV for CF3 and from 40 eV in case of CF4. Similar tendencies were found in case of the differential cross sections for a wide range of scattering angles at collision energies above 10 eV in case of CF2, above 15–20 eV for CF3, while in case of CF4 – above 20 eV.
Introduction
Physical electronics is a wide scientific research area, and its achievements can be applied in several state-of-the-art technologies: from low-temperature plasma, semiconductor production and material science up to light industry and environmental protection [1]. For example, a plasma discharge can be treated as a medium where a high number of electron collisions take place with atoms and/or molecules. Electron collisions with molecules play a very important role in several scientific and applied fields: from the investigation of the effects of ionizing radiation on the human body and the DNA up to the kinetic modelling of plasma environments. The absolute values of electron-molecule scattering cross sections play a crucial role in plasma reactor modelling as well as in the control of the plasma processing efficiency of gas mixtures. However, there are only limited data about low-energy collisions in these gases. Therefore, different scattering approaches should be used in order to obtain relevant knowledge about the collisions, e.g. on the energy and angular dependencies of the processes. Such theoretical predictions can be used for the modelling of complex processes which are not easy to handle experimentally (for example, due to toxic or highly reactive species).
The fluorine-containing radicals are key components of different gas-discharge and low-energy plasma environments (for more details see the numerous experimental and theoretical works [1-13]). Among the traditional feedstock gases in plasma-assisted semiconductor production, the following molecular gases can be mentioned: CF4, C2F6, C3F8, c-C4F8 as well as GeF4 and SiF4. In the plasma these molecules undergo fragmentation due to inelastic collisions, which leads to the production of ionized and neutral radicals, including CF, CF2 and CF3.
It is worth noting that similar molecules could also be important in processes in which the carbon and fluorine atoms are exchanged with Si and Ge or with Cl, Br and I, respectively [14,15]. The relative ratio and the role of the different radicals in the plasma processes are poorly understood at the moment. The fluorocarbon radicals have some common properties, e.g. a large dipole moment and dipole polarizability, and in most cases they also have an open-shell electronic structure. The large scattering cross-section amplitudes may also be related to these properties. Unfortunately, experimental studies of the above-mentioned radicals are rather limited, because of the complexity of preparing stable fluorocarbon beams with an appropriate particle density.
The title of a recent work [2], "Anomalously large low-energy elastic cross sections for electron scattering from the CF3 radical", directly states its aim: how can the large integral cross section of electron scattering below 15 eV collision energies be explained? The authors of reference [3], which is the continuation of paper [2], presented a series of experimental and theoretical cross sections for e− + CF3 elastic scattering at collision energies between 7 and 50 eV. The target radicals were obtained by pyrolysis from CF3I molecules at 817 °C. However, in these processes other atomic and molecular fragments were produced as well, with the following relative concentrations: CF3 (23%), I (33%), CF3I (25%), I2 (7%) and C2F6 (12%). The relative ratios of these fractions were also used in the determination of the absolute cross-section values (see [2,3] for more details). At the same time, the states of the particular radicals in these experiments are not well identified. For example, if the CF3 and CF2 target radicals are produced in vibrationally excited states, this could lead to overestimated cross-section values compared to the ground state.
It is worth noting here that the experimental cross sections in [2,3] were compared with the results of several theoretical calculations. The authors applied the Schwinger multichannel method (SMC) in the simple static-exchange (SE) approximation as well as the independent atom model with screening corrections (IAM-SCAR), both with and without ground-state dipole corrections. Unfortunately, none of the theoretical data reproduce the behaviour of the integral cross sections (ICS) below 20 eV. They are also only in qualitative agreement with the experiments for the differential cross sections (DCS) at these energies; quantitatively, they underestimate the measured data by an order of magnitude. The independent atom model (IAM) is a relatively simple and widely used theoretical approach for studying electron scattering dynamics by molecules. The model uses the interaction potentials, phase shifts and scattering amplitudes calculated for electron scattering by the particular atoms of the molecule. Thus, the molecular target in the IAM framework is treated as a collection of atoms (without symmetry) located at well-defined distances from each other. The model was proposed by Mott and Massey [16], and it was intensively used in recent studies [3,14,15,17] to calculate differential and integral scattering cross sections. The IAM-SCAR method is a novel version of the model, which takes into account the interatomic screening effects by a multiplicative, energy-dependent factor. This factor was derived both for total [18] and for differential cross sections [19], increasing the precision of the model at lower collision energies.
Nowadays more sophisticated methods are also available, which treat the interaction potentials and the scattering amplitudes in a more convenient manner, taking into account purely molecular properties. These models use, for example, a symmetry-adapted, single-centre expansion of the molecular wave function to calculate the electron densities. Such methods were proposed with spherical [20] and single-centre [21] potentials.
In the present work we propose a joint theoretical analysis for elastic electron scattering by the CF 4 molecule and its CF n (n = 1 − 3) radicals, using two approximations of the well-known independent atom model. The method is based on quantum-mechanical electron-atom scattering amplitudes. To calculate the amplitudes, the real and the complex optical potential (OP) methods were used. The cross sections in this work are compared with the available experimental and theoretical data for CF n systems.
Scattering cross sections and amplitudes
The scattering of an electron with momentum k on an N-atomic molecule by angle θ can be characterized theoretically by the F(θ, k) (direct) and G(θ, k) (spin-flip) scattering amplitudes. Within the independent atom model framework, they correspond to the sum of the atomic scattering amplitudes f_m(θ, k) and g_m(θ, k) of the constituent atoms (see for example [1-3,5,6,9,16,17,22]):

F(θ, k) = Σ_{m=1}^{N} f_m(θ, k) exp(i s·r_m),  G(θ, k) = Σ_{m=1}^{N} g_m(θ, k) exp(i s·r_m).    (1)

For the differential cross section we then have

dσ_el/dΩ = |F(θ, k)|² + |G(θ, k)|² = Σ_{i,j=1}^{N} [f_i f_j* + g_i g_j*] exp(i s·r_ij),    (2)

where r_ij = r_i − r_j are the internuclear distance vectors. The OP method is used to study the behaviour of the differential as well as the integral elastic and momentum-transfer cross sections of electron scattering by molecules [23-26]. In the IAM framework the DCS of elastic electron scattering by an N-atomic molecule, after averaging over the random vibrational and rotational degrees of freedom of the molecule, can be expressed as follows [6,16,17] (atomic units ħ = e = m_e = 1 are used throughout the work, unless otherwise noted):

dσ^IAM_el/dΩ = Σ_{m,n=1}^{N} [f_m(θ, k) f_n*(θ, k) + g_m(θ, k) g_n*(θ, k)] · sin(s r_nm)/(s r_nm).    (3)

Here θ is the scattering angle; f_m and g_m are the direct and spin-flip scattering amplitudes of the m-th atom, respectively; s = 2k sin(θ/2) and k = √(2E), where E is the energy of the incident electron; r_nm is the internuclear distance between the m-th and n-th atoms of the molecule.
On the other hand, according to the "Additivity Rule" (IAM-AR) approximation, the DCS (3) can be expressed as the sum of the DCSs of scattering on all the constituent atoms, i.e.

dσ^IAM−AR_el/dΩ = Σ_{m=1}^{N} dσ_el,m/dΩ = Σ_{m=1}^{N} [|f_m(θ, k)|² + |g_m(θ, k)|²].    (4)
The DCSs of electron scattering by an XY_n heteronuclear molecule have a complex character in the IAM framework. For example, in the case of scattering by the CF4 molecule, for which all internuclear distances between the C and F atoms are equal and very close to the F-F distances, the DCS can be calculated as follows:

dσ^IAM_el/dΩ = dσ_el,C/dΩ + 4 dσ_el,F/dΩ + 8 Re[f_C f_F* + g_C g_F*] sin(s r_CF)/(s r_CF) + 12 [|f_F|² + |g_F|²] sin(s r_FF)/(s r_FF),    (5)

dσ^IAM−AR_el/dΩ = dσ_el,C/dΩ + 4 dσ_el,F/dΩ.    (6)

As one can see in equations (5) and (6), the features and the behaviour of the electron-molecule DCSs in the IAM framework are largely determined by the energy and angular behaviour of the atomic DCSs dσ_el,A/dΩ (in our case the dσ_el,C/dΩ and dσ_el,F/dΩ atomic cross sections).
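As an illustration, Eq. (5) can be evaluated directly once the complex atomic amplitudes are available. The sketch below is our own helper, with the C-F and F-F distances converted to bohr; the s → 0 limit of sin(x)/x is handled via numpy's sinc:

```python
import numpy as np

BOHR_PER_ANGSTROM = 1.8897
R_CF = 1.3370 * BOHR_PER_ANGSTROM   # equilibrium distances (see the section
R_FF = 2.1831 * BOHR_PER_ANGSTROM   # on interatomic distances), in bohr

def dcs_iam_cf4(theta, k, f_C, g_C, f_F, g_F):
    """Eq. (5): IAM elastic DCS for e- + CF4 from complex atomic amplitudes.

    theta in rad, k in atomic units; f/g are amplitudes evaluated at theta.
    """
    s = 2.0 * k * np.sin(theta / 2.0)
    sinc = lambda x: np.sinc(x / np.pi)   # sin(x)/x with the correct x -> 0 limit
    d_C = np.abs(f_C) ** 2 + np.abs(g_C) ** 2
    d_F = np.abs(f_F) ** 2 + np.abs(g_F) ** 2
    cross = 8.0 * np.real(f_C * np.conj(f_F) + g_C * np.conj(g_F)) * sinc(s * R_CF)
    return d_C + 4.0 * d_F + cross + 12.0 * d_F * sinc(s * R_FF)
```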
The integral elastic scattering cross sections can be calculated by direct integration of the corresponding DCSs over the scattering angles:

σ_el = ∫ (dσ_el/dΩ) dΩ = 2π ∫_0^π (dσ_el/dΩ) sin θ dθ.    (7)

The σ^IAM−AR_el integral cross section can also be calculated via the optical theorem [16,17,27],

σ_el = (4π/k) Im F(θ = 0°, k),    (8)

which coincides with the IAM-AR approximation [6,17-19]. Therefore, according to equations (7) and (8), and since sin(s r_nm)/(s r_nm)|_{θ→0} → 1 and sin(s r_nm)/(s r_nm)|_{r_nm→0} → 1, the following expression can be derived:

σ^IAM−AR_el = (4π/k) Σ_{n=1}^{N} Im f_n(θ = 0°, k).    (9)

The spin-flip amplitude does not contribute to the cross sections at θ = 0° at all, since g_n(θ = 0°, k) = 0. The corresponding σ^IAM−AR_mom and σ^IAM_mom momentum-transfer cross sections are determined analogously, using the (1 − cos θ) weighting function (see [28]), e.g.:

σ_mom = 2π ∫_0^π (1 − cos θ) (dσ_el/dΩ) sin θ dθ.    (10)

Based on our previous experience [24-26], we suppose that the scattering cross sections for the whole molecule can be described well when a sufficiently good theoretical description of scattering by the constituent atoms is used, not only for fast incident electrons, when k(r_nm)_min ≫ 1, but also at lower energies, when k(r_nm)_min > 1.
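Given a DCS sampled on a θ grid, Eqs. (7) and (10) reduce to one-dimensional quadratures; a toy angular shape is used below in place of real amplitudes:

```python
import numpy as np

def integral_cs(theta, dcs):
    """Elastic (Eq. 7) and momentum-transfer (Eq. 10) ICS from a sampled DCS."""
    sigma_el = 2.0 * np.pi * np.trapz(dcs * np.sin(theta), theta)
    sigma_mom = 2.0 * np.pi * np.trapz((1.0 - np.cos(theta)) * dcs * np.sin(theta),
                                       theta)
    return sigma_el, sigma_mom

theta = np.linspace(0.0, np.pi, 721)
toy_dcs = 1.0 / (1.0 + (2.0 * np.sin(theta / 2.0)) ** 2) ** 2   # placeholder shape
print(integral_cs(theta, toy_dcs))
```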
The electron-atom scattering amplitudes can be calculated by determining the real partial phase shifts δ_ℓ^±(E) = ε_ℓ^±(E) (in the case of a real interaction potential [23]) or the complex ones δ_ℓ^±(E) = ε_ℓ^±(E) + iμ_ℓ^±(E) [29] (in the case of a complex OP, taking into account the absorption effects). Using real partial phase shifts, the scattering amplitudes are calculated as

f(θ, k) = (1/2ik) Σ_ℓ {(ℓ + 1)[exp(2iδ_ℓ^+) − 1] + ℓ[exp(2iδ_ℓ^−) − 1]} P_ℓ(cos θ),    (11)

g(θ, k) = (1/2ik) Σ_ℓ [exp(2iδ_ℓ^−) − exp(2iδ_ℓ^+)] P_ℓ^1(cos θ),    (12)

while with complex partial phase shifts the same expressions, with δ_ℓ^± replaced by the complex values, define the amplitudes (13) and (14). In equations (11)-(14) P_ℓ(cos θ) are the Legendre polynomials, while P_ℓ^1(cos θ) are the first-order associated Legendre functions. At the initial ℓ ≤ ℓ_min angular momenta of the incident electron the partial phase shifts are determined by the variable-phase method, using the real or complex OP approach (see [23,28] and Refs. therein). The asymptotic values of the phase shifts at ℓ_min < ℓ < ℓ_max are calculated from the Born approximation for the long-range polarization potential:

δ_ℓ(k) = π α_d k² / [(2ℓ + 3)(2ℓ + 1)(2ℓ − 1)],    (15)

where α_d is the static dipole polarizability of the corresponding atom. It can be calculated by any time-dependent ab initio quantum approach, and its empirical value can also be used (see [30,31]). For example, at 50 eV ℓ_min(C) = 13 and ℓ_min(F) = 11 for the carbon and fluorine atoms, while at 1000 eV collision energy these values equal 40 and 34, respectively. The ℓ_max values were not larger than 295, and they changed with the collision energy.
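A compact numpy/scipy sketch of Eqs. (11)-(12) (and, with complex phase shifts, Eqs. (13)-(14)); the sign convention of scipy's associated Legendre routine is assumed consistent with the document's:

```python
import numpy as np
from scipy.special import eval_legendre, lpmv

def amplitudes(theta, k, delta_plus, delta_minus):
    """Direct f and spin-flip g amplitudes from phase shifts, Eqs. (11)-(12).

    delta_plus[l] and delta_minus[l] are the j = l + 1/2 and j = l - 1/2
    phase shifts (real, or complex for Eqs. (13)-(14)), l = 0 .. l_max.
    """
    x = np.cos(theta)
    f = np.zeros_like(x, dtype=complex)
    g = np.zeros_like(x, dtype=complex)
    for l in range(len(delta_plus)):
        ep = np.exp(2j * delta_plus[l])
        em = np.exp(2j * delta_minus[l])
        f += ((l + 1) * (ep - 1.0) + l * (em - 1.0)) * eval_legendre(l, x)
        g += (em - ep) * lpmv(1, l, x)     # P_l^1(cos theta)
    return f / (2j * k), g / (2j * k)
```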
It is worth noting here that any published partial phase shift data for electron-atom scattering could be used in the IAM framework to calculate the different cross sections of electron scattering by those molecules, which consist of these atoms.
Electron-atom interaction potentials
In the Relativistic-Static-Exchange-Polarization (RSEP) approximation our electron-atom interaction potential does not contain any empirical or fitting parameters [23]:

V^±(r, E) = V_S(r) + V_E(r, E) + V_P(r) + V_R(r, E) + V^±_SO(r),    (16)

where the "±" sign in the spin-orbit interaction potential corresponds to the total angular momenta j = ℓ ± 1/2 of the incident electron. The V_S, V_E, V_P, V_R and V^±_SO parts of the OP are the static, exchange, polarization, scalar-relativistic and spin-orbit interaction potentials, respectively. These components are basically determined by the total and spin electron densities of the particular atoms of the molecule. The electron densities can be calculated by different theoretical models: Thomas-Fermi, Hartree-Fock, density functional theory (DFT), etc. The calculated densities can usually be approximated by analytical functions, which is especially useful in systematic calculations. It is worth noting that in references [20] and [21] the interaction potentials are derived from purely molecular electron densities.
The static potential is determined by the Coulomb interaction of the incident electron with the atomic nucleus as well as with the bound electrons (with electron density ρ(r)) of the target atom [32,33]:

V_S(r) = −Z/r + ∫ ρ(r′)/|r − r′| d³r′.    (17)

We used the Hartree-Fock electron densities and static potentials [32] of the C and F atoms, which are the constituents of the investigated molecular targets. The spin-orbit interaction potential V^±_SO (equation (18)) is expressed through the radial derivative of the static potential (see [32]), and the scalar part V_R(r, E) of the relativistic potential (equation (19)) is likewise expressed through derivatives of V_S (see [35,36]). As equations (18) and (19) involve these derivatives, using analytical expressions for the static potential is very favourable, because its derivatives can then also be calculated analytically.
For the exchange interaction potential the inhomogeneous electron gas approximation is used (see [33]), equation (20), where an expression involving the atomic ionization potentials (I) was used for the local electron momentum; it can be treated as a multiplicative factor for the non-relativistic potential (20). The polarization potential is determined in the local, spin-unpolarized inhomogeneous electron gas approximation (see [24,28]) and can be divided into short-range (SR) and long-range (LR) parts. A parameter-free electron correlation-polarization interaction potential is used for the short-range part V^SR_P (see [23]). In the local density approximation (LDA) of DFT it can be expressed using the correlation energy functional

E_c[ρ] = ∫ ρ(r) ε_c[r_s(r)] dr,    (21)

where ε_c is the correlation energy per electron of a homogeneous electron gas with local density ρ(r) and r_s(r) = [3/(4πρ(r))]^{1/3}. Applying the variational principle to equation (21), the following polarization potential is obtained:

V^SR_P(r) = δE_c/δρ = ε_c[r_s(r)] − (r_s/3) dε_c/dr_s.    (22)

The polarization potential can be expressed simply using the ε_c[r_s(r)] correlation energy density, as in reference [38], but equation (22) is a more precise form. At asymptotic distances the polarization potential has the well-known form V^LR_P(r) = −α_d(0)/2r⁴, where α_d is the static dipole polarizability of the particular atom. We used the values α_d^C = 11.26 a₀³ and α_d^F = 3.76 a₀³ for the carbon and fluorine atoms, respectively. The V^SR_P(r) and V^LR_P(r) potentials are matched at a given distance r_c.
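To make Eq. (22) concrete, the sketch below evaluates the short-range correlation-polarization potential using the Perdew-Zunger parametrization of ε_c as an example functional; the paper's reference [23] may use a different parametrization, so this is illustrative only:

```python
import numpy as np

# Perdew-Zunger parametrization of eps_c (unpolarized, Hartree units).
A, B, C, D = 0.0311, -0.048, 0.0020, -0.0116      # r_s < 1
G0, B1, B2 = -0.1423, 1.0529, 0.3334              # r_s >= 1

def eps_c(rs):
    rs = np.asarray(rs, dtype=float)
    low = A * np.log(rs) + B + C * rs * np.log(rs) + D * rs
    high = G0 / (1.0 + B1 * np.sqrt(rs) + B2 * rs)
    return np.where(rs < 1.0, low, high)

def v_sr_pol(rho):
    """Short-range correlation-polarization potential, Eq. (22)."""
    rs = (3.0 / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    h = 1e-6 * rs                                  # numerical d(eps_c)/d(r_s)
    deps = (eps_c(rs + h) - eps_c(rs - h)) / (2.0 * h)
    return eps_c(rs) - (rs / 3.0) * deps

def v_lr_pol(r, alpha_d):
    """Long-range polarization tail, -alpha_d / (2 r^4)."""
    return -alpha_d / (2.0 * r ** 4)
```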
The absorption effects in electron-atom collisions are studied in the complex optical potential (RSEPA) approximation, where V^±_opt(r, E) = V^±(r, E) + iV^±_A(r, E). They have an impact on the scattering characteristics at collision energies E > ∆, where ∆ is the energy of the first inelastic threshold of the atom. For the carbon and fluorine atoms the inelastic effects should be taken into account above ∆_C = 7.50 eV and ∆_F = 12.70 eV, respectively [31]. The absorption effects can be determined, for example, by the non-empirical Staszewska-type potential [40] (see also [28]), which has the following form:

V_A(r, E) = −(1/2) ρ(r) v_loc(r, E) σ_b(r, E),    (23)

where the local velocity of the incident electron is determined from its local kinetic energy, v_loc(r, E) = √(2T_loc(r, E)). The values of σ_b(r, E) (average binary collision cross sections) depend on the expressions chosen for the α(r, E) and β(r, E) functions [40]; in the 2nd version of the Staszewska potential (23) the specific parameterizations of α and β given in [40] are used. For qualitative calculations the empirical McCarthy potential can be a very useful option (see [40]); it is proportional to the valence-shell density, V_A(r, E) ≈ −W(E) ρ_H(r), where ρ_H(r) is the density of the highest occupied (valence) electron subshell. The energy-dependent function W(E) can be evaluated by fitting the absorption (excitation or ionization) cross sections to the experimental data; the W(E) function can then be used at all collision energies.
In the spherical [20] and single-centre [21] approaches the absorption effects are taken into consideration more accurately, calculating the absorption of the whole molecule. It is widely known that taking into account the absorption effects slightly decreases the calculated values of the differential and integral cross sections.
Interatomic distances of the molecules
The equilibrium internuclear distances of the CF_n (n = 1−4) molecules were calculated by ab initio geometry optimization, using the GAUSSIAN quantum chemistry software [41]. The calculations were performed at the CCSD(T) level of theory, using the "aug-cc-pvdz" basis set. The following internuclear distances were obtained:
• for the CF molecule: r_CF = 1.3071 Å;
• for the CF2 molecule: r_CF = 1.3071 Å, r_FF = 2.0922 Å;
• for the CF3 molecule: r_CF = 1.3365 Å, r_FF = 2.2053 Å;
• for the CF4 molecule: r_CF = 1.3370 Å, r_FF = 2.1831 Å.
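As a quick consistency check on these optimized geometries, the F-C-F angle implied by r_FF = 2 r_CF sin(α/2) recovers the ideal tetrahedral angle for CF4 and a pyramidal geometry for the CF3 radical:

```python
import numpy as np

r_cf = {"CF2": 1.3071, "CF3": 1.3365, "CF4": 1.3370}   # Angstrom
r_ff = {"CF2": 2.0922, "CF3": 2.2053, "CF4": 2.1831}

for mol in r_cf:
    angle = 2.0 * np.degrees(np.arcsin(r_ff[mol] / (2.0 * r_cf[mol])))
    print(mol, round(angle, 1))
# CF4 gives ~109.5 deg (ideal tetrahedron); CF3 ~111 deg (pyramidal radical)
```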
As one can see, the r_CF internuclear distances slightly increase as the number of fluorine atoms increases. The r_FF internuclear distances are not so monotonous: they have a maximum at n = 3, in the case of CF3. For the CF, CF2 and CF3 radicals the following r_CF internuclear distances were reported in reference [42] (in Å, respectively): 1.2912, 1.3018, 1.3388. As one can see, our calculated values are in good overall agreement with these data.
Results and discussion
Integral cross sections
Figures 1 and 2 show the integral elastic and momentum-transfer cross sections calculated for e− + CF_n (n = 1−4) collisions using the IAM-AR approach, which corresponds to the optical theorem (see Eq. (9)). All elastic and momentum-transfer integral cross sections are calculated up to 1000 eV. The elastic ICSs for e− + CF_n (n = 1−4) scattering are also listed in Table 1 for collision energies from 10 to 1000 eV. For the e− + CF3 and e− + CF4 collisions the electron-atom scattering amplitudes are calculated in the RSEPA approximation (including the absorption effects), while for the e− + CF and e− + CF2 collisions we excluded the inelastic effects (RSEP approximation). As mentioned above, taking the absorption effects into account slightly decreases the absolute values of the cross sections, but it does not affect their qualitative behaviour. We found that the integral cross sections of scattering by the studied molecules can be characterized by very similar energy behaviour. For the e− + CF4 collision our ICS agrees with the measured data at 50 eV; below this energy we overestimate the data of work [10], while above 50 eV we obtained good agreement with the data of both references [4] and [10].
For low collision energies, up to 10 eV, the main contribution to the calculated ICSs originates from the cross sections of the e− + C collision, while above 10 eV it comes from e− + F scattering (see the cross-section analysis in [26]). The significant overestimation of our calculated cross sections compared to the experimental ones is due to the fact that the contribution of the carbon atom is substantially smaller than that of the fluorine atoms at these energies; in other words, the carbon atom is screened by the fluorine atoms. As the energy increases, the cross-section amplitudes for both the C and F atoms decrease, and above 100 eV they are almost equal. Therefore, the total cross section of e− + CF4 scattering is mainly determined by the contribution of the fluorine atoms.
The momentum-transfer cross sections of this process (Fig. 2a) slightly overestimate the experimental data [4,7] in some energy regions, but the overall agreement is very good.
There are published theoretical integral and differential cross sections for the e− + CF4 collision [5,9] between 100 and 700 eV, calculated within the IAM approach. The elastic ICSs of [9] are comparable with our data and the measured ones [4] only at higher energies, above 400 eV. The momentum-transfer cross sections calculated in reference [9] overestimate our cross sections up to 150 eV, while at higher energies they are in good agreement with our data and with those of experiment [4]. It is worth noting that in reference [9] all components of the OP were used, while the authors of reference [5] used only the static and exchange potentials (SE approximation). The elastic and momentum-transfer ICSs calculated in [5] overestimate the cross sections obtained by the authors of [4] and [9].
e− + CF2 and e− + CF3. In Figure 1b our cross sections are compared with the available experimental data for the e− + CF3 [2,3] and e− + CF2 [43] processes (see also [26,44,45]). It is worth noting that the experimental data were obtained with rather high uncertainty (see Fig. 1b), which can be related to the issues of CF3 radical production in pyrolysis (as described in Sect. 1).
The effect of an additional fluorine atom leads to a slight increase of the e− + CF3 cross sections compared to those of e− + CF2 scattering. As one can see in Figure 1b, the energy behaviour of the experimental cross sections for the CF2 and CF3 molecules is not well described by the theoretical ICSs below 20 eV. Our calculated cross sections for e− + CF2 are higher than the corresponding experimental ones, while in the case of e− + CF3 scattering our data are smaller. For the e− + CF2 collision we slightly overestimate the experiments even above 20 eV. A possible reason is the neglect of absorption effects in our calculation; including them would decrease the ICS values. However, as one can see in Figure 1b, as the number of fluorine atoms in the radicals decreases, the amplitudes of the experimental and calculated cross sections also decrease. Therefore, in the case of e− + CF2 scattering the calculated cross sections are mainly determined by the contribution of the C atom up to ∼35 eV collision energies, while above this energy by the contribution of the fluorine atoms (see the e− + C and e− + F cross sections in [26]). The overestimation of the calculated cross sections relative to the experimental ones up to this energy allows one to conclude that the contribution of the carbon atom is again smaller than the contribution of the fluorine atoms: the carbon atom is somewhat screened by the fluorine atoms, but less effectively than in the case of e− + CF4 scattering. As the energy increases, the cross sections for the C and F atoms decrease and approach each other, so the total cross section for the CF2 radical is mainly determined by the contribution of the fluorine atoms.
Fig. 3. The angular behaviour of differential cross sections for elastic electron scattering by CF and CF4 (a,c) and also by CF2 and CF3 (b,d) molecules at 10 and 15 eV collision energies.
The 10-eV minimum of the experimental e− + CF2 cross sections [43] (see Fig. 1b) is not reproduced by our calculations. Our ICSs are quantitatively comparable with the experimental data for the CF2 and CF3 molecules above 20 eV collision energies. It is worth noting here that none of the theoretical methods (SMC, IAM-SCAR, R-matrix) used in references [2,3] can reproduce the qualitative and quantitative behaviour of the measured cross sections for e− + CF3 scattering. Only the IAM-SCAR method can produce integral cross sections within the experimental error bars above 25 eV, but even these ICSs are smaller than the measured ones. For this particular collision system the experimental cross section significantly exceeds our calculated one up to 20−25 eV collision energies, which does not match the patterns observed earlier (see [26]). It is possible, however, as was also mentioned in reference [26], that this is evidence of electron scattering by vibrationally excited CF3 radicals. Figure 2b shows the momentum-transfer integral cross sections. The energy behaviour of these cross sections is similar for all CF_n target molecules. It is worth noting that these ICSs for the e− + CF2 and e− + CF3 collisions almost completely coincide above ∼30 eV.
Differential cross sections
The angular behaviour of our calculated DCSs is shown in Figures 3-5 for the different e− + CF_n (n = 1−4) scattering processes. The cross sections are calculated in the IAM framework at 10, 15, 20, 25, 35 and 50 eV collision energies. The scattering amplitudes were calculated for the particular atoms in the RSEP (e− + CF/CF2) and RSEPA (e− + CF3/CF4) approximations of the optical potential model. As we found for the ICSs, the inclusion of absorption effects somewhat decreases the absolute DCS values, but does not affect their qualitative behaviour. We found that using the IAM-AR approach for DCS calculations leads to a pronounced decrease of the cross-section values at small scattering angles, compared with the results of the IAM approach. For example, at 7 eV this angular range is [0°−90°]; with increasing collision energy the interval shrinks substantially, to about [0°−30°] at 50 eV. Moreover, equation (3) introduces more structure into the DCSs than equation (4), due to the role of the interference terms in the angular behaviour of the cross sections. The angular features of the calculated DCSs for the mentioned molecules are similar, and the absolute value of the cross sections increases step-by-step with the increasing number of fluorine atoms.
In the case of the diatomic CF molecule no experimental data could be found in the literature at all. The angular behaviour of the differential cross sections for the CF and CF4 molecules is similar; however, the DCSs for e− + CF2 and e− + CF3 are much closer to each other. The DCSs calculated by the authors of reference [5] for e− + CF4 scattering overestimate the corresponding experimental [4] and theoretical [9] cross sections at all collision energies (from 100 to 700 eV) and all scattering angles. The angular behaviour (oscillations) of the theoretical DCSs is typically similar between the calculations and also to the experimental ones. The results of the calculations in reference [9] overestimate the experimental data [4] at 100 and 150 eV, but at higher collision energies (200-300 eV) they are in good overall agreement.
In order to calculate the DCSs of the e− + CF_x (x = 1−3) scattering processes, the authors of references [42] and [46] used the R-matrix method below 10 eV collision energies. It is worth noting that our method does not allow an adequate quantitative description of the differential cross sections at these low energies. For their calculations in the inner region the authors of references [42,46] used the close-coupling method with molecular wavefunctions; in the outer region they used the coupled equations of a single-centre expansion. At small scattering angles the DCSs for molecules with a large dipole moment are generally characterized by very high values. For example, for the e− + CF collision the DCS at 7.5 eV and a 10° scattering angle equals 40.6 × 10−20 m²/sr. Our calculated DCS, with a value of 4.42 × 10−20 m²/sr at 7 eV, is close to the theoretical value of 3 × 10−20 m²/sr calculated in reference [42] using a small dipole moment (0.12 Debye). In the 45°−130° angular range the DCSs calculated in [42] have a clear structure: they can be characterized by 3 minima and 2 maxima with values of ca. 0.5 × 10−20 m²/sr and 0.4 × 10−20 m²/sr. Our cross sections for this molecule are characterized by only one wide gap at all collision energies.
10 eV. The theoretical cross sections for the e− + CF and e− + CF4 collisions are similar at this energy. The highest difference between them is observed at small scattering angles, up to 70°. Near the minimum at 110° the calculated DCS for e− + CF4 is higher than for the e− + CF collision. We obtained less good agreement with the angular dependencies of the experimental data [7,8] at this energy: the measured cross sections [7] somewhat overestimate our results near the minima, where the DCSs for the two processes are similar.
Our cross sections for e− + CF3 and e− + CF2 are very similar; they differ only at forward angles and in the region of the minimum, and in the 120°−180° angular range they coincide. Our DCSs are close to the measured data [2,3] from 40° up to 75°. These experimental cross sections have a strong angular structure at this energy, with a minimum at 60° and a maximum at 115°, which is not reproduced by any of the theoretical calculations. The angular behaviour of the calculated e− + CF2 cross sections is closer to the experimental data of reference [44]; however, it strongly overestimates them below 90°. Near the minimum at 115° these cross sections are in good agreement with each other and also with the calculated cross sections for the e− + CF3 collision.
15 eV. The angular dependencies of the calculated cross sections for e − + CF and e − + CF 4 collisions already strongly differ at this collision energy. The angular structure of the e − +CF 4 DCSs is more complex -another minimum is formed around 70 • . Our data are well comparable with the measured ones [7,8] above 80 • . The absorption effects are still negligible.
The calculated e− + CF3 and e− + CF2 cross sections are in good agreement; they are almost equal in backward directions, above 125°. At 30°−70° scattering angles our cross sections for the e− + CF3 collision are close to the lower error bar of the experimental data [2,3]. These experimental DCSs can be characterized by an almost stationary value of ca. 4 × 10−20 m²/sr between 60° and 135°, which is not reproduced by any theoretical cross sections. The calculated e− + CF2 cross sections slightly overestimate the measured data of reference [44] at 40°−45° scattering angles. We also found that the corresponding cross sections are close to each other near the minimum between 90° and 120°.
20 eV. A higher discrepancy is observed between the calculated differential cross sections of electron scattering by the CF and CF4 molecules. The e− + CF DCSs preserve their previous structure with a single minimum. However, the cross sections for the e− + CF4 collision have a more interesting character: a clear formation of another minimum is observed at 65°. Our data are in good overall agreement with the experimental ones [7] for this process above 70°. The absorption effects are still negligible.
The calculated e − + CF 3 cross sections are also close to the measured ones [2,3] nearly in the whole forward direction (from 20 • to 90 • ). Our DCSs for the e − + CF 2 and e − + CF 3 collisions are similar, but there are some slight differences between them. For example, in case of the CF 3 molecule a second minimum is formed at 60 • . In backward direction, from 120 • to 180 • , they still coincide. The calculated cross sections for the e − + CF 2 collision slightly overestimate the corresponding experimental data [43] at 45 • − 75 • scattering angles, and they are also close to the calculated ∼0.2 × 10 −20 m 2 /sr value for e − + CF 3 scattering near the minimum around 105 • .
25 eV. At this collision energy there are no published experimental data for the e − + CF 4 scattering process. There is a slightly higher difference between our calculated cross sections for e − + CF and e − + CF 4 scattering. In the e − + CF DCSs only a single minimum is observed, as seen for lower energies. The angular dependencies of the e − + CF 4 cross sections show a clear additional minimum at 60 • . The absorption effects are already noticeable at this energy.
Our calculated cross sections for e− + CF3 scattering are close to the measured ones [2,3], reproducing their features in the whole measured angular range from 40° up to 135°. The e− + CF2 and e− + CF3 DCSs are still close to each other, but there is a slightly higher difference between their absolute values. They are in perfect agreement above 110°. The calculated cross sections for e− + CF2 scattering are close to the experimental ones from reference [43] for all measured angles, from 40° to 135°; however, they slightly overestimate the corresponding experimental data at angles below 90°. In the region of the minimum around 105° the calculated DCSs for the CF2 and CF3 radicals are nearly equal (∼0.2 × 10−20 m²/sr).
35 eV. Slightly higher discrepancies were observed between the e− + CF and e− + CF4 differential cross sections at this energy. The single minimum in the e− + CF DCS is shifted to smaller angles and is now found around 100°. Our cross sections for the e− + CF4 collision are in good qualitative and in satisfactory quantitative agreement with the experimental data published by the authors of reference [7]. An additional minimum is located at a scattering angle of ca. 45°. The absorption effects have a clear impact on the cross-section values, observable over the full angular range.
Our DCSs for the e− + CF3 scattering process reproduce the angular structure of the measured cross sections in [2,3] and are close to them on the absolute scale for all investigated scattering angles, from 40° up to 135°. The calculated e− + CF2 and e− + CF3 cross sections are very similar; only a slight difference can be found between them at small forward angles.
50 eV. At this collision energy a good overall agreement (both qualitative and quantitative) is obtained between our calculated DCSs for the e− + CF4 collision and the corresponding experimental ones [7]. This holds for all scattering angles except for the minimum at 100°, where our theoretical DCS is ca. 0.03 × 10−20 m²/sr while the experimental value is ca. 0.14 × 10−20 m²/sr. The absorption effects are rather strong here: they reduce the absolute values of the cross sections approximately by a factor of 2 in a wide angular range above ∼35°.
The calculated e− + CF3 cross sections are within the estimated uncertainty of the measured data in [2,3]; they are close to each other for all scattering angles between 20° and 135°. At very small angles, below 30°, our DCSs for e− + CF2 and e− + CF3 scattering slightly differ, but at higher angles they are close to each other. The calculated cross sections for the e− + CF2 collision are similar to the measured ones [43] for all investigated angles. Around the minimum at ca. 105° they slightly overestimate the corresponding experimental data and are close to our DCSs for e− + CF3 scattering. At this energy the absorption effect plays an important role, so using the RSEPA approximation instead of RSEP leads to a better agreement between our calculated data for CF2 and the corresponding experimental ones [43].
To summarize, the DCSs for the e− + CF4 scattering process are in good agreement with the experimental data published in reference [7] above 20 eV collision energies, especially in backward scattering directions. As the energy increases the agreement improves, even at small angles, in forward directions. With increasing collision energy the absorption effects also grow: the values of our cross sections are considerably reduced due to the absorption, bringing them closer to the measured data.
The theoretical e− + CF3 differential cross sections are quantitatively comparable with the experimental ones at small angles above 15 eV collision energies. The theoretical DCSs for e− + CF2 scattering reproduce the measured cross sections in backward directions (90°–130°) above 10 eV collision energies. Therefore, for the smaller molecular targets, better agreement between our theoretical data and the corresponding experimental DCSs is observed already at low collision energies.
It is worth noting that the only theory in references [2,3] that qualitatively reproduces the experimental angular behaviour of the DCSs for e− + CF3 scattering is the Schwinger multichannel method. Even this method, however, underestimates the measured cross sections by an order of magnitude at 7 eV and by at least a factor of 5 at 20 eV. Accordingly, none of the theoretical methods proposed in [2,3] can correctly reproduce the absolute values and behaviour of the measured cross sections. The results of the IAM-SCAR calculations [2,3] (which are similar to our IAM-AR approximation) are in good overall agreement with the measured elastic scattering data above 25 eV collision energies; these cross sections coincide with the lower boundary values of the experimental error bars.
Conclusions
To study the elastic scattering of electrons by molecular targets, the independent atom model is used along with parameter-free real and complex electron-atom interaction potentials. The features of electron-molecule scattering generally follow the features of scattering by its constituent atoms. The integral cross sections of electron scattering by the CF, CF2, CF3 and CF4 molecular targets are calculated in the IAM-AR approach, while the differential cross sections are obtained with the IAM approach.
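For illustration, the sketch below assembles a molecular DCS from atomic scattering amplitudes using the standard IAM interference formula, dσ/dΩ(θ) = Σᵢⱼ fᵢ(θ)fⱼ*(θ)·sin(q·rᵢⱼ)/(q·rᵢⱼ) with q = 2k·sin(θ/2). The amplitudes and geometry passed in are placeholders; the paper's actual optical potentials are not reproduced in this extract:

```python
import numpy as np

def iam_dcs(theta, k, amplitudes, positions):
    """Differential cross section in the independent atom model (IAM).

    theta      : scattering angles in radians (1D array)
    k          : electron wave number (atomic units)
    amplitudes : list of complex atomic scattering amplitudes f_i(theta),
                 one array per atom (assumed precomputed elsewhere)
    positions  : (N, 3) array of atomic coordinates (atomic units)
    """
    q = 2.0 * k * np.sin(theta / 2.0)  # momentum transfer
    dcs = np.zeros_like(theta)
    for i, fi in enumerate(amplitudes):
        for j, fj in enumerate(amplitudes):
            r_ij = np.linalg.norm(positions[i] - positions[j])
            # sinc-type interference factor; equals 1 when i == j (r_ij = 0)
            s = np.where(q * r_ij > 1e-12,
                         np.sin(q * r_ij) / np.maximum(q * r_ij, 1e-12), 1.0)
            dcs += np.real(fi * np.conj(fj)) * s
    return dcs
```

In the IAM-AR (additivity rule) limit the interference terms are dropped, so the integral cross section reduces to the plain sum of the atomic cross sections.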
The comparative analysis of our integral cross sections with the available experimental ones shows that our data can be used to describe scattering by CF2 radicals above 10 eV, by CF3 above 15-20 eV, and, in the case of the CF4 target molecule, above 40 eV. For the latter, good agreement was also observed between the momentum-transfer cross sections above 50 eV collision energy.
Comparing our theoretical differential cross sections with the corresponding measurements allows one to draw some conclusions about the limits of our methods. They can be used to adequately calculate the DCSs above 10 eV for CF2 (between 100° and 180° scattering angles), above 15-20 eV for CF3 (0°–90°) and above 20 eV for the largest molecule, CF4 (from 80° to 180°).
Comparing our calculated results with the experimental data for e− + CF2 and e− + CF3 collisions leads to the conclusion that, in the case of the CF3 radical, the scattering characteristics in references [2,3] were most likely measured for vibrationally excited target molecules.
The performed calculations and their comparison with the available experiments confirm that more sophisticated methods need to be developed in order to adequately describe the scattering cross sections at lower energies. These methods, along with the electron-molecule interaction potentials, should take fully into account the characteristics of the targets: molecular wavefunctions, electron densities, polarizabilities and dipole moments.
"Chemistry"
] |
Improved Locating Method for Local Defects in XLPE Cable Based on Broadband Impedance Spectrum
The safety of crosslinked polyethylene (XLPE) cables is affected by environmental factors and artificial defects during operation. This work proposes an improved locating method based on the broadband impedance spectrum (BIS) to locate local defects in XLPE cables. The calculation process of the algorithm is analyzed, and the selection of the incident Gaussian signal and the peak recognition method are discussed; the pulse width of the Gaussian signal was found to be determined primarily by the upper limit frequency of the traveling wave transmitted in the cable. Centroid and function fitting methods were established to reduce the peak recognition error caused by the test sampling rate. The accuracy of the algorithm was verified experimentally: a vector network analyzer (VNA) was used to measure the BIS of a 20 m-long cable containing abrasion and a nail inserted to different depths. Both the abrasion and the nail could be located. The locating deviation for abrasion was within ±1%, and the centroid and function fitting methods effectively reduced it. The locating deviation was also within ±1% when the depth of the nail inserted into the cable was less than 50% of the insulation thickness; when the depth exceeded 75% of the insulation thickness, the deviation of each method became more significant, with a maximum absolute value of 4%.
Introduction
Crosslinked polyethylene (XLPE) cable is widely used in urban distribution networks due to its high performance [1,2]. The safety of cable operation is of great significance to the stability of power systems [3,4]. With the development of cities, XLPE insulation is affected by multiple environmental factors, including artificial defects introduced during construction [5,6]. Therefore, an effective new method to locate local defects before cable failure needs to be studied to ensure the reliability of the power supply.
The traveling wave methods for diagnosing local defects in cables have been widely studied. When a high-frequency signal is transmitted along the line, signal refraction and reflection occur wherever the cable wave impedance is mismatched [7,8]. This method uses the traveling wave to locate defects by analyzing the time difference between the reflected and incident signals. Time domain reflectometry (TDR) is the most common method utilizing this principle. TDR can accurately locate degradation such as local moisture and thermal aging of the cable [9,10]. A narrow pulse-width Gaussian signal is used as the incident signal in the TDR test. However, the bandwidth of the Gaussian signal is limited, and electromagnetic interference easily occurs during its transmission. To solve this problem, the sequential time domain reflectometry (STDR) and spread spectral time domain reflectometry (SSTDR) methods have been used to change the waveform of the incident signal. A PN code is used as the incident signal in the STDR method, and the cross-correlation coefficient of the incident and reflected signals is calculated to locate defects [11-13]. The calculation method of SSTDR is similar to that of STDR, but the signal used in SSTDR is a PN code modulated with a sinusoidal signal [14-16]. Compared with TDR, STDR and SSTDR have the advantages of higher resolution and anti-interference ability [17]. Although the above methods can detect local defects effectively, the signal's frequency band must be selected before the test, because the cable's size affects the signal's transmission characteristics: the test signal experiences different attenuation and dispersion when transmitted in different cables. When the cable information is unknown, the signal must be attempted repeatedly, increasing the test's difficulty [18]. Some scholars have recently proposed the broadband impedance spectrum (BIS) method. The test signal used by BIS is a swept signal with an amplitude of 5 V. The power of the swept signal in each frequency band is equal, so the signal does not need to be attempted repeatedly due to attenuation [19]. The single-ended impedance obtained from the BIS test is converted into the location spectrum of the cable through a mathematical algorithm. The inverse fast Fourier transform (IFFT) method has been used to transform the impedance from the frequency domain into the time domain, and it has been found that local moisture and irradiation in long cables can be located by IFFT. However, the input swept signal has significant energy in the high-frequency band, and the Gibbs phenomenon occurs in the time-frequency domain transformation, which seriously affects the resolution of the test results [20]. Some scholars have tried to decompose the signal into real and imaginary parts; the effect of spectral leakage is reduced by transforming the imaginary part by IFFT [21]. It has also been found that interpolating and windowing the signal inhibits the Gibbs phenomenon [22]. Although the above research has confirmed that the improved algorithms can increase the locating accuracy, the algorithmic principles of the window function and the time-frequency domain transformation process are still unclear. Therefore, it is necessary to study the principle and identification ability of the algorithm further. At the same time, there are few algorithms for identifying the peaks of reflected signals. Locating the peak by its maximum amplitude value in the BIS signal is based mainly on the judgment of the maximum value; however, the top point depends on the sampling frequency of the BIS and the reflected signal frequency, and it is easily affected by noise. In order to avoid misjudgment of the defect location, a new peak identification method needs to be applied in the locating algorithm.
This work proposes an improved locating method to detect local defects in long cables. The process of the time-frequency domain transformation is analyzed, and different peak identification methods are discussed. Location spectra of 20 m-long cables containing local mechanical abrasion and nail insertion were then measured. The locating accuracy of the different identification methods was analyzed, and the influencing factors were discussed.
Transmission Line Model
The test signal of BIS is a swept signal in the range of kHz to MHz. When the signal wavelength is comparable to or smaller than the cable length, the signal oscillates multiple times during transmission in the cable, so the lumped parameter model is no longer applicable. The transmission line model of micro-element parameters used to analyze the wave transmission process is shown in Figure 1 [23]. A cable can be treated as equivalent to a series of ∆l-length transmission line element circuits. R0, L0, G0, and C0 are the cable's distribution parameters, representing the cable's resistance, inductance, conductance, and capacitance per unit length, respectively. The differential equations can be established and solved through the transmission line model. The calculation process of the single-ended broadband impedance spectrum has been deduced in many research works [24,25]. The BIS at the measurement port can be obtained by (1), where Zc represents the cable's characteristic impedance, k represents the propagation constant of the cable, and l represents the length of the cable. k and Zc can be calculated from the distribution parameters, where w represents the angular frequency of the test signal. The BIS of the cable is related only to the distribution parameters of the cable. The distribution parameters at the defect location change when local defects occur in the cable. Therefore, the BIS can reflect the insulation condition of the cable.
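Equations (1)–(3) are not reproduced in this extract. A minimal sketch of the standard transmission-line expressions they refer to is given below, assuming an open-circuited far end (for which the input impedance is Zc·coth(k·l)); the per-unit-length parameter values are illustrative placeholders:

```python
import numpy as np

def input_impedance(f, R0, L0, G0, C0, length):
    """Single-ended input impedance of an open-ended cable (the standard
    transmission-line result corresponding to Eq. (1) for an open far end)."""
    w = 2 * np.pi * f                                        # angular frequency
    Zc = np.sqrt((R0 + 1j * w * L0) / (G0 + 1j * w * C0))    # characteristic impedance
    k = np.sqrt((R0 + 1j * w * L0) * (G0 + 1j * w * C0))     # propagation constant
    return Zc / np.tanh(k * length)                          # Z_in = Zc * coth(k*l)

# Placeholder per-unit-length parameters (illustrative only)
f = np.linspace(1e5, 1e8, 10_000)   # 100 kHz - 100 MHz sweep
Z = input_impedance(f, R0=0.1, L0=2.5e-7, G0=1e-10, C0=1e-10, length=20.0)
```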
BIS Algorithm
Currently, the locating method is based mainly on time-frequency transformation of the BIS directly. However, the IFFT method causes serious spectral leakage, affecting locating accuracy and resolution. This work uses the transfer function method to weaken the influence of spectral leakage. The algorithm process is shown in Figure 2. The BIS of the cable is measured by a vector network analyzer (VNA). Then, the transfer function of the cable can be calculated by (4) and (5) [26], where Zi represents the wave impedance of the cable, which is constant and depends on the cable parameters; µ0 represents the permeability of a vacuum; ε0 represents the dielectric constant of a vacuum; εr represents the relative permittivity of XLPE; rs represents the shield's radius; and rc represents the radius of the core.
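Equations (4) and (5) are likewise not shown in this extract. Since the text lists exactly the coaxial-geometry symbols (µ0, ε0, εr, rs, rc), the sketch below computes Zi from the standard coaxial wave impedance formula and forms a reflection-coefficient-style transfer function, a common choice in BIS work; treat both as assumptions rather than the paper's exact formulation:

```python
import numpy as np

def wave_impedance(eps_r, r_s, r_c):
    """Coaxial wave impedance Zi from the cable geometry (standard formula)."""
    mu0, eps0 = 4e-7 * np.pi, 8.854e-12
    return np.sqrt(mu0 / (eps0 * eps_r)) * np.log(r_s / r_c) / (2 * np.pi)

def transfer_function(Z_meas, Zi):
    """Reflection-coefficient form H(f) = (Z - Zi) / (Z + Zi); an assumed
    stand-in for the paper's Eqs. (4)-(5)."""
    return (Z_meas - Zi) / (Z_meas + Zi)

Zi = wave_impedance(eps_r=2.3, r_s=9.5e-3, r_c=3.35e-3)  # illustrative radii
```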
The Selection of the Gaussian Signal
A Gaussian signal is selected as the simulated input signal in the algorithm. Its frequency domain spectrum is a single-lobe waveform, and the amplitude decreases as the frequency increases. The mathematical expression of the Gaussian signal is given in (6) and (7), where a represents the maximum amplitude of the Gaussian signal, b represents the time shift of the signal, c represents the pulse width of the signal, and fsample represents the sampling frequency of the time domain signal. a and b do not affect the frequency domain waveform of the Gaussian signal, so a = 1 and b = 1 × 10⁻⁶ are fixed in this work. However, c mainly affects the amplitude-frequency characteristics of the signal. In the algorithm, it is necessary to select an appropriate c to ensure that the signal's frequency band lies within the cable's transmission range as far as possible. c is chosen according to (8), where p represents the ratio of the amplitude of the signal at the maximum frequency to its amplitude at 0 Hz. p < 1 follows from (7). p = 0.01 is fixed to ensure that the energy is not attenuated too much in the high-frequency band. fm represents the cable's upper limit transmission frequency of the traveling wave.
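Equations (6)–(8) are not reproduced here. Assuming the usual Gaussian form g(t) = a·exp(−(t − b)²/c²), its Fourier magnitude is proportional to exp(−(π·f·c)²), so requiring |G(fm)|/|G(0)| = p gives c = √(−ln p)/(π·fm). The sketch below uses that relation; it is an inference from the stated variables, not the paper's printed equations:

```python
import numpy as np

def pulse_width(p, f_m):
    """Pulse width c such that the Gaussian spectrum falls to a fraction p
    of its 0 Hz value at the upper limit frequency f_m."""
    return np.sqrt(-np.log(p)) / (np.pi * f_m)

def gaussian_signal(t, a=1.0, b=1e-6, c=None):
    """Time-domain Gaussian incident signal g(t) = a * exp(-(t - b)^2 / c^2)."""
    return a * np.exp(-((t - b) ** 2) / c ** 2)

c = pulse_width(p=0.01, f_m=98e6)   # 20 m cable: f_m = 98 MHz (from Figure 3)
t = np.arange(0, 2e-6, 1 / 1e9)     # 1 GHz sampling, illustrative
g = gaussian_signal(t, c=c)
```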
The accuracy of the location spectrum is affected by the selection of c. As c becomes smaller, the pulse width of the reflected peak becomes smaller, and the accuracy of the location spectrum becomes higher. The selection of c is determined mainly by the cable's upper limit transmission frequency of the traveling wave. Therefore, it is necessary to analyze the transmission characteristics of traveling waves in cables of different lengths. Figure 3 shows the BIS amplitude-frequency characteristics of cables with different lengths, measured by the VNA. The amplitude of the BIS shows a trend of oscillating attenuation with frequency. The upper limit test frequency is defined as the frequency of the lowest amplitude in the curve. Three cables with lengths of 20, 60, and 100 m are used here to illustrate the limit frequency selection. As shown in Figure 3, the upper limit frequency of traveling waves in the 20 m cable is 98 MHz, while it is 52 MHz in the 60 m cable and 37 MHz in the 100 m cable.
Figure 4a shows the location spectra of intact cables with different lengths. The incident signal is selected according to (6)–(8), and the location spectrum is calculated from the BIS. The area covered by the incident peak and the reflected peak at the ends of the cable is defined as the test blind zone. The blind zone of the 20 m cable is approximately 3 m, that of the 60 m cable is approximately 4 m, and that of the 100 m cable is approximately 6 m. The width of the blind zone is proportional to the width of the reflected peak, so as the length increases, the width of the reflected peak increases, and the resolution of the location spectrum decreases.
Figure 4b shows the location spectra of intact 20 m cables with different values of c, where c′ is the value calculated according to (6)–(8). It can be seen from Figure 4b that when c = 1.5c′, the blind zone is wider, and when c = 0.75c′, the noise amplitude in the location spectrum is greater due to spectral leakage. This proves that the selection of c affects the recognition of the reflected peak. As a result, c is selected using (8) in the following work.
The Process of Time-Frequency Transformation
The time domain function of the Gaussian signal is transformed into a frequency domain function by the fast Fourier transform (FFT) method. Then, the frequency domain function is multiplied by the transfer function to calculate the frequency domain function of the reflected signal. Finally, the time domain signal is obtained by IFFT. The calculation process is shown in (9), where G(f)* represents the frequency domain continuation of G(f), whose value in the negative frequency domain is the conjugate of that in the related positive frequency domain. By multiplying the time domain function of the reflected signal by the propagation velocity of the electromagnetic wave in XLPE, the location spectrum of the cable can be obtained. In this paper, the propagation speed of electromagnetic waves in XLPE is approximately 1.70 × 10⁸ m/s [27].
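A compact sketch of this FFT → multiply → IFFT pipeline is given below, reusing the hypothetical helpers sketched earlier. The halving of the distance axis accounts for the round trip of the reflected wave; that factor is an assumption consistent with TDR practice, as the paper's Eq. (9) is not shown here:

```python
import numpy as np

def location_spectrum(g_t, H_f, v=1.70e8, fs=1e9):
    """Location spectrum via FFT -> multiply by transfer function -> IFFT.

    g_t : incident Gaussian signal samples (time domain)
    H_f : transfer function sampled on the FFT grid of g_t, with conjugate
          symmetry in the negative-frequency half (the G(f)* continuation)
    v   : wave speed in XLPE (~1.70e8 m/s); fs : sampling frequency
    """
    G_f = np.fft.fft(g_t)              # incident signal to frequency domain
    s_r = np.fft.ifft(G_f * H_f).real  # reflected signal back to time domain
    t = np.arange(len(g_t)) / fs
    x = v * t / 2.0                    # distance axis (round trip assumed)
    return x, s_r
```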
Location of Reflected Peaks
Since the signals analyzed in this work are discrete, a distortion phenomenon may occur when the reflected signal is recovered by IFFT, as shown in Figure 5. The maximum value of the reflection peak is then unable to locate the defect accurately. To improve the identification of the reflected peak, this work uses the centroid and function fitting methods to find the peak position.
The centroid method takes the centroid position of the peak waveform as the peak position. The principle of the centroid method is to compare the amplitude of the signal at different times with the mass of an object at different positions; the centroid is defined as the ratio of the sum of the moments at different positions to the total mass. Therefore, the centroid of the peak signal can be calculated as x_c = Σᵢ₌₁ᵐ xᵢ·s_r(xᵢ) / Σᵢ₌₁ᵐ s_r(xᵢ), where xᵢ represents the coordinate of each sampling point in the reflected peak, s_r(xᵢ) represents the amplitude of each sampling point in the reflected peak, and m represents the number of coordinate points in the reflected peak.
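A direct implementation of this weighted-average peak estimate (a sketch; variable names follow the text):

```python
import numpy as np

def centroid_peak(x, s_r):
    """Centroid of a reflected peak: sum of 'moments' over total 'mass'.

    x   : coordinates of the m sampling points inside the peak
    s_r : amplitudes of those sampling points
    """
    x, s_r = np.asarray(x), np.asarray(s_r)
    return np.sum(x * s_r) / np.sum(s_r)
```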
The function fitting method fits the reflected peak of the signal waveform. Attenuation and dispersion occur when the signal is transmitted in the cable, but they do not affect the waveform characteristics of the function; therefore, the reflected signal retains the characteristics of a Gaussian signal. The expression of the reflected Gaussian signal is obtained by fitting, and the fitted signal's peak position can then be calculated. Because the sampling frequency in the test is limited, the sampling points in the location spectrum are discrete, as shown by the blue points in Figure 5. If the amplitudes of the sampling points contain no error, the fitting function is the same as the original reflected signal, and the locating accuracy is the highest.
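A sketch of such a fit using SciPy's nonlinear least squares; the Gaussian model and the starting values are assumptions, since the paper's fitting details are not given in this extract:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, c):
    """Gaussian peak model; x0 is the fitted peak position."""
    return a * np.exp(-((x - x0) ** 2) / c ** 2)

def fitted_peak(x, s_r):
    """Fit a Gaussian to the sampled peak and return the fitted position x0."""
    p0 = [np.max(s_r), x[np.argmax(s_r)], (x[-1] - x[0]) / 4]  # crude initial guess
    (a, x0, c), _ = curve_fit(gaussian, x, s_r, p0=p0)
    return x0
```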
Experimental Setup
The experimental platform used in this work is shown in Figure 6. A PC was used to record and analyze the BIS of the tested cable, which was measured by the VNA. The test frequency band of the VNA was set to 100 kHz–100 MHz, and the frequency interval of the swept signal was set to 10 kHz. The VNA and cable were connected through alligator clips, and the copper shield was grounded during the experiment.
In this work, the location spectra of cables with different abrasion conditions were measured, as shown in Figure 7. The tested cable was a YJLV-1 × 35 − 8.7/15 kV XLPE insulated cable, and the length of the cable was 20 m. The thickness of the XLPE insulation was 4.5 mm. The length of the abrasion was set to 5 cm, and the abrasion was located 9 m away from the cable measurement port. In sample a (shown in Figure 7a), the sheath of the cable was partly removed and the copper shields were not damaged. The copper shields of samples b and c were worn down, the difference between b and c being the size of the defect area. Sample d can be characterized as abrasion of the semiconducting layer, with the cable insulation not worn. Samples e and f can be characterized as insulation abrasion defects: sample e was less worn and sample f more worn, with the inner semiconducting layer of the cable visible through the insulation in sample f.

To analyze the locating ability for multiple defects, a nail was inserted into the same cable at 12.35 m after the tests on abrasion defects, as shown in Figure 8. Here d represents the insertion depth of the nail, and ri represents the thickness of the XLPE insulation. The relative inserted depth is defined as n = d/ri. The BIS at different n (25%, 50%, 75%, and 100%) was measured.
Location Spectra of Samples
The location spectra of the tested cable are shown in Figure 9. Each location spectrum shows two peaks at the position 9 m away from the measurement port, which correspond to the locations where the wave impedance changes. When the sheath of the cable is broken and the copper shields are intact, no reflected peaks appear in the location spectrum (sample a). This is because the sheath does not influence the distribution parameters of the cable, which are related only to the properties of the cable conductors and insulation. When the copper shields of the cable are worn down, there are two peaks in the location spectrum, with maximum amplitude values of 3.64 × 10⁻³ and 3.68 × 10⁻³ (sample c). The location of each reflected peak corresponds to an intersection of the abraded part and the intact part. When abrasion occurs on the semiconducting layer of the cable, the maximum amplitude values rise to 4.80 × 10⁻³ and 4.72 × 10⁻³ (sample d). When the XLPE insulation is worn down, the maximum amplitude values increase to 9.56 × 10⁻³ and 8.64 × 10⁻³ (sample f).
The location spectra with the nail inserted are shown in Figure 10. Each location spectrum shows a high peak at approximately 12.35 m, which corresponds to the location of the inserted nail. When n reaches 25%, the maximum amplitude value of the reflected peaks is 2.43 × 10⁻³. When n reaches 50%, the value increases to 2.45 × 10⁻³. As n increases to 75%, the corresponding value increases to 4.80 × 10⁻³. When the nail has punctured the insulation completely, the value reaches 9.13 × 10⁻¹, and the distortion of the location spectrum is noticeable. When the relative inserted depth is less than or equal to 75%, the mechanical abrasion at 9 m can still be correctly located. However, when n reaches 100%, it is challenging to identify the reflected peak at 9 m due to noise interference. The reason for the signal distortion is the short circuit between the cable core and the copper shield: total reflection of the traveling wave occurs at the short circuit position, and the incident and reflected signals are superimposed between the measurement port and the short circuit location. The amplitude of the location spectrum increases, and the locating resolution decreases.
Locating Results of Abrasion and Nail Insertion
Since the reflected peaks appear at the two endpoints of the cable defect segment, the midpoint of the two peaks can be defined as the location of the local defect. The locations of the reflected peaks are calculated using the maximum value method, the centroid method, and the function fitting method. The locating deviation can be defined as (ls − lm)/l × 100%, where ls represents the defect location in the location spectrum, lm represents the actual location of the defect, and l represents the length of the tested cable.
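Expressed as code (a sketch; the normalization by cable length follows the definition above):

```python
def locating_deviation(l_s, l_m, l):
    """Locating deviation in percent: (measured - actual) / cable length."""
    return (l_s - l_m) / l * 100.0

# Example: abrasion found at 9.08 m on the 20 m cable, actual position 9 m
dev = locating_deviation(9.08, 9.0, 20.0)  # -> 0.4 %
```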
Table 1 shows the locating results for abrasion obtained by the three peak recognition methods. Since the two reflected peaks in the location spectrum at the starting point and the endpoint of the abrasion area correspond to the impedance mismatch points, the positioning result can be obtained from the midpoint of the reflected peaks. The actual abrasion position is 9 m, and the locating results of the three methods are near this value. The abrasion locating deviations of the different methods are shown in Figure 11. The absolute value of the deviation of the maximum value method is approximately 1%, which is larger than that of the other two methods. The wave impedance at the abrasion location is more mismatched with severe abrasion, and the reflected peak is more pronounced, improving the positioning accuracy; therefore, the deviation decreases with more severe abrasion. The deviation of the function fitting method is approximately 0.5%. The deviation of the centroid method is approximately 0.25%, except for the sizeable locating deviation of sample c, which reaches 0.58%. The results show that different defect conditions have little effect on the accuracy of the centroid and function fitting methods; these two methods can significantly improve the locating accuracy.

Table 2 shows the locating results for the nail inserted into the insulation. The measured nail insertion location is 12.35 m. Since the nail insertion area is small, the traveling wave has a considerable reflection at the nail location, and the defect can be located directly through the reflected peak location. When n reaches 50%, the error in the locating result is small. When n increases to 75%, the position of the reflected peak shifts towards the measuring end. When the nail completely penetrates the insulation, the reflected peak shifts towards the open circuit end.

The locating deviations of the different peak recognition methods are shown in Figure 12. When n is less than or equal to 50%, the deviation in locating the nail insertion is within 0.5%. The maximum absolute value of the deviation is approximately 4% when n reaches 75%, and the deviation is approximately 3% when the nail penetrates the insulation. The trends of the deviations obtained by the different peak recognition methods are consistent. When the insertion depth of the nail is small, the locating accuracy of the centroid and function fitting methods is higher; when the depth of the nail is larger (75%), their locating accuracy is lower than that of the maximum value method. The reason for the larger locating error is the superimposition of the incident and reflected signals when total reflection occurs: the totally reflected signal with high amplitude has a significant influence on the location spectrum, and the accuracy decreases.
Conclusions

In this work, a new defect-locating method was proposed, and the calculation process of the algorithm was analyzed. The selection principle of the incident Gaussian signal was discussed, and the results of three different peak recognition methods were analyzed. The location spectra of long cables with different defects were measured, and the locating deviations of the different methods were analyzed. In future work, we will focus on locating defects in 10 kV cables and apply this algorithm to on-site cable maintenance; the location algorithm will also be further improved to reduce noise and improve the locating resolution. The main conclusions are as follows:

1. The pulse width parameter c of the Gaussian signal is determined by the upper limit frequency of the BIS. A proper c will improve the locating resolution.
2. The location spectrum can locate mechanical abrasion and an inserted nail in a 20 m cable. The location of the abrasion shows two reflected peaks, and the location of the inserted nail shows a single reflected peak.
3. In the location of abrasion, the deviation is within 1%. The centroid and function fitting methods can effectively reduce the positioning error.
4. When the depth of the nail insertion is small, the locating deviation is within 1%, and the centroid and function fitting methods can reduce the locating error. When the nail insertion depth is greater, the absolute value of the deviation is more significant, with a maximum absolute value of 4%.
Figure 2. The calculation process of the BIS locating algorithm.
Figure 3. Upper limit frequencies of cables with different lengths.
Figure 4. Location spectra of different cables: (a) different lengths; (b) different pulse widths c.
Figure 5. Error in peak identification of the location spectrum.
Figure 7. The abrasion conditions of the cable samples: (a) the sheath partly removed; (b) slight abrasion of the copper shields; (c) serious abrasion of the copper shields; (d) abrasion of the semiconducting layer; (e) slight abrasion of the insulation; (f) serious abrasion of the insulation.
Figure 8. The cable sample with an inserted nail.
Figure 9. The location spectra of the cable samples with abrasion.
Figure 10. The location spectra of the cable samples with an inserted nail.
Figure 11. The deviations of locating abrasion with the three methods.
Figure 12. The deviations of locating nail insertion with the three methods.
Table 2. The nail locating results at different inserted depths.
"Physics"
] |
Federal Inland Revenue Service Tax Awareness Index: Development and Validation
Taxation is not just a means of generating revenue for the country; it is also a tool used to regulate the economy through fiscal policy, to control inflation and the prices of goods and services, and to bridge the gap between the rich and the poor. However, awareness of this all-important component of any goal-oriented government is poor. It is against this backdrop that this study aimed to develop a Tax Awareness Index for use in the Nigerian context. The study adopted a descriptive survey approach with a cross-sectional design, and participants were recruited through convenience sampling. Data analysis involved principal component analysis (PCA) with Varimax rotation. Results showed that the initially proposed 10-factor solution of 94 questionnaire items was not supported by the data gathered. However, a 5-factor solution emerged that made substantive sense for the purpose of developing a Tax Awareness Index. A composite score on the 5-factor solution indicates a respondent's level of tax awareness. It was concluded that this index would serve as a viable tool to measure how aware the general populace is of tax and its components.
Introduction
Tax is conceptualized as a compulsory contribution by taxpayers regardless of any matching return of services or goods by the government (James & Nobes, 2000). Tax, or taxation, does not occur in a vacuum: governments levy and raise tax revenue to finance various public expenditures (Palil, 2010).
Taxation is a compulsory levy imposed by the government on its citizens' profits, consumption and income, while tax awareness is basically how informed people are about, and how familiar they are with, the relevant taxes levied on them by the government and the impact of those taxes on their individual lives and the economy as a whole. Taxation is not just a means of generating revenue for the country; it is also a tool used to regulate the economy through fiscal policy. With taxation, inflation can be controlled, the prices of goods and services can be controlled, and the gap between the rich and the poor can be bridged, provided proper tax awareness is created and enforced.
According to Steinmo (1996), governments need money; modern governments need lots of money. How they get this money, and whom they take it from, are the two most difficult political issues faced in any modern political economy. Tax awareness is key to tax compliance: when the citizens of a country are aware of their tax responsibilities, they are more likely to remit their tax liabilities to the relevant tax authorities as and when due. Tax awareness has a positive and significant impact on taxpayer compliance (Santi, 2012). When the level of tax compliance is high, the tax-to-gross domestic product (GDP) ratio will be high, and the government will have more money to fund projects and to effectively regulate and steer the economy in the right direction. Where compliance is low, however, the opposite is the case. Fowler (2018) argued that, with the current state of the oil industry, a huge infrastructure deficit and increasing external debt over the last eight years, it is now clear that reliance on oil is not sustainable: Nigeria's tax-to-GDP ratio of only 6 per cent is one of the lowest in the world, with Ghana at 15.9% and most developed countries at about 30%. There is a lot of work to be done in creating awareness and bringing people into the tax system. Akinfala (2017) argued that tax revenue is a reliable and sustainable way of generating income for government to prosecute government business. Prosecuting government business is not just about paying salaries but, more importantly, about providing public goods and services for all citizens, whether they are in government employment or not, and whether they are paid employees or self-employed. The irony, however, is that the level of tax awareness is still poor across the country (Adeosun, 2017). This implies that since the product "tax" is not popular and not everyone wants to buy it, many people will feign ignorance and consequently choose to remain "unaware" of its existence. One of the major functions of the tax authorities, as spelt out in the recently released Tax Policy (2017), is "To promote tax awareness and a tax culture in Nigeria, the Federal and State tax authorities through the Joint Tax Board shall set aside a uniform day in the year as a National Tax Day". Adeosun (2017) opined that many state governments get away with wastage and corruption because of non-compliance with tax payment by many Nigerians, especially at the state level. This could be explained by several factors: a lack of trust based on previous experiences where taxes were paid but not remitted to the government's coffers, or cases where they were remitted but the funds were diverted for personal aggrandizement. These could explain the increased non-compliance. Furthermore, Adeosun (2017) submitted that governments at the state level will become more responsible when Nigerians hold them accountable, which is only possible when they pay their taxes, adding that such action is part of their civic duties. Officially, the number of taxpayers paying N10 million and above as tax per year, as at 2017, was 943. Of these, 941 were based in Lagos, with only two based in Ogun State. The implication is that in all the other states and the Federal Capital Territory there is officially no billionaire or multi-millionaire. That can hardly be the case, given the assets scattered around the country and the vehicles on our mostly decrepit roads. What it means is that many property owners in urban areas have not been paying tax, or have been underpaying taxes.
The Nigeria Bureau of Statistics estimated Nigeria's population at about 193,392,517 people as at 2016. The economically active or working-age population (15-64 years of age) was 111.1 million in Q3 2017, according to the National Bureau of Statistics (NBS). Furthermore, as released by the Joint Tax Board (JTB), only about 10 million people out of this number are registered for personal income tax across the 36 states and the Federal Capital Territory (FCT). These numbers just do not add up: how does one explain that 10 million people carry a tax burden that at least 77 million people are expected to share? That is roughly 13 taxpayers carrying the load of every 100. There are four possible groups within the Nigerian tax bracket: enterprises (multinational and domestic), High Net Worth Individuals (HNIs), the formally employed and the informally employed. According to Adefeko (2018), oil revenues still account for about 70 per cent of government income, a reality that leaves the economy very vulnerable to fluctuations in the oil market.
The above lends credence to the need for a reassessment of the government's current approach to improving the country's tax base through the Voluntary Assets and Income Declaration Scheme (VAIDS). VAIDS was introduced in 2017 to create awareness of the obligations of Nigerians as they pertain to tax payments; raise additional revenue of $1bn to reduce Nigeria's borrowing needs, allow investment in vital infrastructure and spur development; capture an additional 4 million taxpayers in the net; increase Nigeria's tax-to-Gross Domestic Product (GDP) ratio from six per cent to 15 per cent by 2020; broaden the Federal and State tax brackets; curb non-compliance with existing tax laws; and discourage the use of tax havens, prevent illicit financial flows and reduce tax evasion in exchange for amnesty from criminal prosecution and penalties (Osinbajo, 2017). Nigeria declared every Thursday "Tax Thursday", and the scheme ended on 30 June 2018.
Before oil was discovered in Nigeria, most Nigerians paid their taxes. The decline in taxation as a key source of government revenue came about when oil was discovered in Nigeria in 1956 by Shell-BP at Oloibiri in the Niger Delta area, after half a century of exploration. The focus then shifted from an agriculture-driven economy to an oil-driven economy, and less and less emphasis was placed on tax as a key source of government revenue because of the large revenue coming from the oil sector, except for the very obvious taxes such as PAYE, which is deducted at source, company income tax (especially for registered companies), petroleum profit tax, and VAT charged on consumption. The refocusing of the Nigerian economy on oil not only led to a decline in the level of tax awareness but, because the level of tax awareness is low, also significantly reduced the revenue accruable to the government from tax payments. This, in turn, has implications for the government's responsibilities to its citizens. The focus on a single product (oil) at the expense of others has also led to a fall in the level of trading activities in Nigeria, especially among small and medium-scale enterprises (SMEs). Businesses find it hard to cope, as little emphasis was placed on strengthening other sectors. This caused many businesses that needed special government support to fold, and thus the tax revenue that should have been collected from these businesses was lost.
This scenario has spread into different parts of the economy, such as the agricultural, mining, manufacturing and tourism sectors. It has also made it hard for infant indigenous industries to survive, because competition with imported goods is high and imports sell at lower prices. The high cost of production for manufacturing firms, especially SMEs, causes them either to go out of business, lay off workers or reduce salaries to stay in business. All these factors, and many more, have implications for the economy of Nigeria as a whole, and for taxation specifically. Although this is a problem that needs to be addressed, there is a dearth of literature on tools that measure the level of tax awareness among expected taxpayers. The objective of this study is to fill this knowledge gap in terms of the dearth of an appropriate instrument for measuring and predicting tax awareness and other variables affecting taxation. The development and validation of the Tax Awareness Index (FIRS-TAI) is a bold attempt at remediating an aspect of this policy defect, in view of the fact that tax awareness is one of the independent variables affecting tax compliance in Nigeria.
Literature Review
Tax is something undeniable for every citizen, so citizens' awareness of taxation is important in pursuing tax compliance. Alstadsaeter (2013), in a study on the effect of awareness and incentives on tax evasion, found that tax awareness offers an explanation as to why some taxpayers engage in legal tax avoidance activities while others do not; the taxpayer's awareness of tax rules depends on the salience of taxes. He concluded that a lack of tax awareness, together with the complexity of the tax code, can result in accidental tax evasion through overstatement of the dividend allowance. Studies have shown that there exists a positively significant relationship between tax knowledge, tax awareness and tax compliance (Adekanola, 1997; Ola, 2001). These studies concluded that when people are adequately informed about taxes, their level of awareness increases and, consequently, so does their compliance. However, Palil (2010) found that tax knowledge has a significant impact on tax compliance even though the level of tax knowledge varies significantly among respondents, and opined that being informed about tax is not always a significant predictor of tax compliance in all cases and among different populations.
Contrary to Palil (2010), Berhane's (2011) study showed that tax compliance is influenced by tax education.
Third variables have also been implicated as mediators of tax compliance. For instance, Savitri and Musfialdy (2016) found that service quality fully mediates the relationship between taxpayers' awareness, tax penalties, compliance cost and taxpayer compliance. Mukhlis, Utomo, and Soesetio (2015) reported that tax education has a positive and significant impact on tax knowledge, that tax knowledge has a positive effect on perceived tax fairness, that tax fairness has a significant positive effect on tax compliance, and that tax knowledge has a significant and positive effect on tax compliance. Furthermore, Hastuti (2014), in a study of tax awareness and tax education among potential taxpayers, found no significant difference in contextual tax awareness between groups, indicating that the function of tax and the obligation of self-assessment are internalized regardless of whether students come from business or non-business programmes of study; the same holds for ethical tax awareness. This signals to the government that tax awareness has already been generated among the youth. Rahayu et al. (2017) found that knowledge and understanding of tax regulation in society, operating through taxpayer awareness, do not have a significant influence on tax compliance, offering a new understanding and a better perspective on the influence of knowledge, understanding, and awareness on taxpayer compliance.
Different factors might affect tax compliance attitudes that would otherwise be a fallout of low awareness. Such factors can be economic, institutional, social, or individual. Previous empirical studies (Lemessa, 2007; Beza, 2014; Amina & Saniya, 2015) also identified the following determinants of tax compliance: tax knowledge; perception of government spending; perception of the impartiality and fairness of the tax system; penalties; personal financial constraints; changes in current government policies; and referral groups such as friends and relatives. Furthermore, Hai and See (2011) and Clotfelter (1983) found that high tax rates result in high tax noncompliance.
Much has been said and written on tax compliance, tax education, and the dangers of evading tax. But a core determinant of tax compliance (which every government craves) is tax awareness. High awareness in society encourages people to fulfill their obligation to register as taxpayers. Reporting and paying taxes properly are forms of national and civic responsibility. Most citizens do not have much understanding of what tax laws mean or why the tax system is structured and administered as it is (Braithwaite, 2007). Developing a tool to understand the level of awareness that exists in a particular population will help inform strategies for creating awareness and tax education. This is the gap identified in the literature.
Methodology
The study adopted a descriptive survey approach using a cross-sectional design: data were gathered at different locations, across ages and various demographic characteristics. The Nigerian adult population aged 18 and above was the target of the study. The study utilized non-probability sampling, specifically purposive, convenience, and accidental sampling techniques. The purposive technique was employed because of the age criterion, while the convenience and accidental techniques were employed to reach various strata of Nigerians across the six geopolitical zones. A further criterion for inclusion was that each prospective respondent be able to read and write the English language and be willing to volunteer his or her time to complete the questionnaire.
The principal researcher employed the services of trained research assistants to help administer the questionnaire in the six major geopolitical zones. Their job was to meet prospective participants in these zones and ask them to volunteer their time to complete the questionnaire. In addition, questionnaires were distributed to the state offices of the Federal Inland Revenue Service (FIRS) for onward distribution to customers and visitors who met the research criteria. The procedure was to obtain the informed consent of each prospective participant, followed by the request and instructions on how to fill out the questionnaire.
A total of 1030 participants duly completed the questionnaire, and less than 4% of the cases contained missing data. The sample comprised 629 males (60.4%) and 400 females (38.4%); the remaining 1.2% did not indicate whether they were male or female. The mean age was 33.81 and the median 32.00, while the modal age was 28. The coverage of the questionnaire across the geopolitical zones of Nigeria was: North East = 56 (5.4%), North West = 55 (5.3%), North Central = 240 (23.3%), South East = 207 (20.1%), South West = 318 (30.9%), and South-South = 138 (13.4%); the remaining 1.6% of the sample did not indicate their geopolitical zone. In this sample, 913 (88.6%) of the participants had a graduate or equivalent academic qualification, and the remaining 11.4% ranged from no formal education to secondary qualifications or equivalent. By vocation, the sample included: Unemployed = 15 (1.5%), Students = 67 (6.5%), Business/Entrepreneur = 256 (24.9%), Private-sector workers = 474 (46.1%), Civil servants = 169 (16.4%), and Others = 4 (0.4%); 45 participants (4.4%) did not indicate the nature of their vocation. Also, 746 (72.4%) participants indicated that they were taxpayers and 174 (16.9%) that they were not; the remainder gave no indication either way.
Research Instrument
The questionnaire items in this study were developed based on inputs from experts in tax and related disciplines. Thereafter, a panel of tax experts confirmed the face and content validity of the questionnaire items before the study proceeded to the exploratory analytical stage.
The questionnaire was tagged the Tax Awareness Index (TAI) at that stage. The TAI questionnaire consisted of 11 major sections: Demographic information, General tax obligations, Value added tax, Income tax, Non-profit organizations, Offences and penalties, Mode of payment, Petroleum profit tax, Capital gains tax, Stamp duties, and Education tax. The questionnaire initially consisted of a total of 88 items, with between 3 and 26 items per component. The response format was a 5-point Likert scale with options ranging from 1 (Strongly disagree) through 3 (Not sure) to 5 (Strongly agree). Cronbach's alpha coefficients for the reliability of the various components ranged from α = 0.61 to α = 0.89, meaning that the components ranged from fair to excellent in reliability.
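As a rough illustration of the reliability analysis reported above, the sketch below computes Cronbach's alpha for a single component from an item-response matrix; the responses are hypothetical placeholders, not the study's dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of
    Likert responses: alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 8 respondents x 4 items on a 5-point scale.
responses = np.array([
    [4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4], [3, 3, 2, 3],
    [4, 4, 4, 5], [1, 2, 1, 2], [5, 4, 5, 5], [3, 3, 3, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Values in the reported 0.61-0.89 band would be obtained component by component in the same way.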
Results
Using varimax rotation, a principal component analysis (PCA) was run with the ten factors in the initial factor analysis. The Tax Awareness Index (TAI) factors included General Tax Obligations (GT), Value Added Tax (VA), Income Tax (IN), Non-Profit Organisation (NP), Offences & Penalty (OP), Mode of Payment (MP), Petroleum Profit Tax (PP), Capital Gains Tax (CG), Stamp Duty (SD), and Education Tax (ED). These ten factors had a total of ninety-four (94) questionnaire items. As a precondition for the factor analysis, the appropriateness of the sample size was tested: the Kaiser-Meyer-Olkin measure of sampling adequacy in SPSS 18.0 was .89, an indication that the sample size was excellent. The initial unrotated solution with eigenvalues greater than one indicated a 10-factor solution (see Table 1); the scree plot is captured in Figure 1. Based on the scree plot, the 10-factor solution indicated in Table 1 was questionable, in the sense that there were likely fewer than 10 factors in the solution. Furthermore, the rotated factor solution for 10 factors did not show a meaningful pattern (see Table 2). This meant that, owing to the inconsistency between the initial 10-factor solution and the scree plot, more analysis was needed to determine the factor structure of the Tax Awareness Index.
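To make the workflow concrete, here is a minimal sketch using the third-party factor_analyzer package; the DataFrame df of the 94 item responses is assumed, and this illustrates the general procedure rather than the exact SPSS 18.0 run used in the study.

```python
# pip install factor-analyzer pandas
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def explore_factors(df: pd.DataFrame, n_factors: int = 10):
    # Sampling adequacy: a KMO near .89 is usually read as excellent.
    _, kmo_model = calculate_kmo(df)
    print(f"KMO = {kmo_model:.2f}")

    # Principal-component extraction with varimax rotation.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                        method="principal")
    fa.fit(df)

    eigenvalues, _ = fa.get_eigenvalues()  # eigenvalues for the scree plot
    loadings = pd.DataFrame(fa.loadings_, index=df.columns)
    return eigenvalues, loadings
```

Counting eigenvalues greater than one and inspecting the scree plot reproduces the two criteria that disagreed in this study.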
[Table 2 residue: the rotated 10-factor component matrix, listing each questionnaire item with its code (GT21-GT26, VA1-VA8, NP6-NP7, OP1-OP15, MP1-MP4, PP1-PP3, CG1-CG9, SD1-SD6, ED1) and factor loadings of roughly .43-.74, spilled into the text here; only the item wordings and loadings were recoverable.]
After a series of exploratory analyses to find the most suitable factor structure that makes substantive sense, it was discovered that only a 5-factor solution showed potential for the Tax Awareness Index (see Table 3). The highlighted scores in the table indicate the factor loadings and structure. However, due to cross-loadings, further review may be needed before further testing.
[Table 3 residue: the rotated 5-factor component matrix, listing items GT2-GT26, VA1-VA10, IN1-IN8, NP5-NP6, OP1-OP15, MP1-MP3, PP1-PP3, CG1-CG9, and SD1 with factor loadings of roughly .40-.74, spilled into the text here; only the item wordings and loadings were recoverable.]
Discussion
The aim of this study was to develop a Tax Awareness Index. The study hypothesized an initial 10-factor solution based on inputs from tax experts and the available literature. The 10-factor solution had a total of 94 sample items. However, under varimax rotation the 10-factor solution did not provide the meaningful outcome initially hypothesized. Subsequently, a series of exploratory analyses conducted to uncover a more enduring factor structure produced mixed results, because there were many cross-loadings. These cross-loadings were early signs of systematic error, due either to the nature of the questionnaire or to its structure.
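A simple way to surface the cross-loading pattern just described is to filter the rotated loading matrix; the sketch below assumes the loadings DataFrame from the earlier snippet and uses an illustrative 0.40 cutoff matching the magnitudes reported in the tables.

```python
import pandas as pd

def flag_cross_loadings(loadings: pd.DataFrame, cutoff: float = 0.40) -> pd.DataFrame:
    """Return items whose absolute loading exceeds `cutoff` on two or
    more factors -- the pattern treated above as a sign of trouble."""
    strong = loadings.abs() >= cutoff
    return loadings[strong.sum(axis=1) >= 2]
```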
In situations like these, the researcher has to find ways of establishing what makes sense by looking through the factor loadings and making an appropriate decision on the most plausible interpretation of the factor solution. For example, in this study, Mode of Payment (MP), Stamp Duties (SD), Education Tax (ED), Capital Gains Tax (CG), and Petroleum Profit Tax (PP) all loaded as one factor. Does this mean that all these components are more or less the same in content and construct? The answer could be yes or no. What needs to be examined is whether all these components are connected in one way or another.
If that is the case, they can all be collapsed and classified as one factor. Nevertheless, based on the PCA, a five-factor model depicting a probable Tax Awareness Index was revealed. Factor one consisted mainly of questionnaire items from General Tax Obligations (GT), factor two of items from Value Added Tax (VA), and factor three of items from Income Tax (IN) and Non-Profit Organization (NP).
Factor four consisted of questionnaire items from Offences & Penalties (OP), and factor five comprised items from Mode of Payment (MP), Petroleum Profit Tax (PP), Capital Gains Tax (CG), Stamp Duties (SD), and Education Tax (ED). The five factors emerging from the factor analysis appear valid with respect to measuring tax awareness among Nigerians.
Conclusion
The main objective of this study was to develop a Tax Awareness Index for use in the Nigerian context. The study found that the initially proposed 10-factor solution did not make substantive sense; rather, a 5-factor structure of the Tax Awareness Index was most plausible in the given circumstances. The next stage of this process will require reviewing the factors in terms of content and construct, and renaming some of them, in a confirmatory factor analysis (CFA).
Best practice requires that different samples be used for the exploratory and confirmatory stages of the research; for now, it suffices to suggest moving from the exploratory to the confirmatory stage. A major limitation of this study is therefore that the 5-factor structure proposed here cannot be considered final, but it offers the prospect of arriving at a true structure of a Tax Awareness Index in a follow-up study to the present one.
Table 1. Output of the variance of the initial 94 items of the Tax Awareness Index.
Table 3. Rotated component matrix of the 5 factors of the Tax Awareness Index.
"Economics"
] |
PathVisio Analysis: An Application Targeting the miRNA Network Associated with the p53 Signaling Pathway in Osteosarcoma
MicroRNAs (miRNAs) are small single-stranded, non-coding RNA molecules involved in the pathogenesis and progression of cancer, including osteosarcoma. We aimed to clarify the pathways involving miRNAs using new bioinformatics tools. We applied WikiPathways and PathVisio, two open-source platforms, to analyze miRNAs in osteosarcoma using miRTar and ONCO.IO as integration tools. We found 1298 records of osteosarcoma papers associated with the word "miRNA". In osteosarcoma patients with a good response to chemotherapy, miR-92a, miR-99b, miR-193a-5p, and miR-422a expression is increased, while miR-132 is decreased. All identified miRNAs seem to be centered on the TP53 network. This is the first application of PathVisio to determine miRNA pathways in osteosarcoma. MiRNAs have the potential to become a useful diagnostic and prognostic tool in the management of osteosarcoma. PathVisio is a full pathway editor with the potential to illustrate biological events, augment graphical elements, and elucidate physical structures and interactions with standard external database identifiers.
Introduction
Osteosarcoma (OS) is the most common primary malignant bone tumor, comprising about 20% of primary bone sarcomas. It is a high-grade malignant tumor characterized by cells forming immature bone or osteoid. The tumor is considered primary when the underlying bone is normal and secondary when it is altered by a pre-existing condition such as prior irradiation or Paget disease (Osasan et al., 2016; Sergi and Zwerschke, 2008). OS is slightly more prevalent in males (male:female = 3:2) and has a bimodal age distribution with a preference for the adolescent and geriatric age groups; most primary OS cases (60-70%) affect adolescents and young adults (from 15 to 25 years of age). In the elderly, OS is usually associated with Paget disease of the bone, post-radiation sarcoma, and dedifferentiated chondrosarcomas (Sergi and Zwerschke, 2008). Primary OS may arise in any bone, generally in the long bones of the appendicular skeleton (80-90%), most commonly in the distal femur, proximal tibia, and proximal humerus. Within the long bones, the tumor is usually located in the metaphysis and arises as an enlarging, palpable mass, which results in progressive pain. OS originating in the mid-shaft of bones is uncommon; conversely, tumors arise more often near the epiphysis, where the growth plate is located. Less than 1% of OS is found in the bones of the hands and feet. The relative incidence of osteosarcoma in non-long bones, including the jaws, pelvis, spine, and skull, increases within the senior age group. The standard first-line treatment regimens for OS include surgery and multi-agent chemotherapy. Almost all patients receive a neoadjuvant intravenous combination of doxorubicin and cisplatin, with or without methotrexate, as the initial chemotherapy regimen. In cases where surgical resection is not feasible or the margins are inadequate, radiation therapy may improve local control, but this is not considered a standard of care in pediatric and young adult patients. There has been a significant increase in the 5-year survival rates of patients with OS due to advances in clinical management. Survival rates at most centers now exceed 50%, but patients presenting with metastatic and recurrent disease have a survival rate below 20%. The lung is the leading site of metastatic deposits (Abarrategi et al., 2016; Chen et al., 2016b).
Osteosarcoma and genetics. Some genetic syndromes are associated with an increased risk of OS. They include hereditary retinoblastoma (germline mutation of the RB gene), Li-Fraumeni syndrome (germline mutation of the TP53 gene), Bloom syndrome (germline mutation of the RECQL2 gene), Werner syndrome (germline mutation of the RECQL3 gene), and Rothmund-Thomson syndrome (germline mutation of the RECQL4 gene) (Osasan et al., 2016). The two most prominent genes harboring germline mutations in patients with OS are the retinoblastoma (RB1) and TP53 tumor suppressor genes. Most OS demonstrate inactivation of both the retinoblastoma (Rb) and p53 pathways. OS has a disorganized genome characterized by complex, unbalanced karyotypes with varying patterns of abnormalities. The most consistent finding beyond the dysregulation of the TP53 and RB genes is significant aneuploidy with some evidence of chromothripsis. Chromothripsis is the phenomenon by which up to hundreds or thousands of clustered chromosomal rearrangements occur in a single event in localized and confined genomic regions of one or a few chromosomes (Ly and Cleveland, 2017; Poot, 2017; Smida et al., 2017). These findings suggest an early defect in DNA repair/surveillance as a mechanism for the pathogenesis of OS (Behjati et al., 2017). Tumor suppressor genes function to control cell growth by inhibiting cell proliferation and tumor development; they also play roles in cell repair and apoptosis. When tumor suppressors mutate, resulting in a loss or reduction of function, the likelihood of developing cancer increases. The retinoblastoma gene (RB) was the first tumor suppressor gene described and encodes a protein that functions as a negative regulator of the cell cycle (Ren and Gu, 2017). This protein stabilizes constitutive heterochromatin to maintain overall chromatin structure. RB1 is the checkpoint that binds the E2F family of transcription factors and inhibits cell cycle progression. Defects in this gene are associated with retinoblastoma, urinary bladder cancers, and OS. The RB gene is critical for the regulation of the G1-to-S cell cycle transition. In the absence of mitogenic stimuli, Rb remains dephosphorylated and binds E2F family transcription factors, preventing their activation of the cell cycle. Mutations that result in the loss of function of the RB protein occur in approximately 70% of OS, mostly due to a loss of heterozygosity; structural rearrangements and point mutations in the RB gene can also occur (Ren and Gu, 2017). The TP53 gene functions as a tumor suppressor in essentially all tumors. It encodes a tumor suppressor protein containing transcriptional activation, DNA binding, and oligomerization domains. This protein plays a crucial role in maintaining genomic stability, functioning as a transcription factor that regulates the expression of various genes involved in cell cycle arrest, DNA repair, changes in metabolism, and apoptosis. Mutations in this gene are associated with a wide variety of cancers, including OS. The function of p53 can be affected by mutations in the gene itself or by mutations of up- or downstream mediators of its activity. Mutations that result in the loss of function of the p53 gene occur in approximately 75% of OS cases. The mutations in the TP53 gene include allelic loss (75-80%), rearrangements (10-20%), and point mutations (20-30%) (Braithwaite et al., 2017; Duffy et al., 2017; Gold, 2017; Guha and Malkin, 2017; Kastenhuber and Lowe, 2017; Merkel et al., 2017).
In the RB pathway setting, E2F3 and CDK4, both of which counteract RB control of cell cycle progression, are estimated to possess gain-of-function mutations: E2F3 is found in 60% of tumors, while CDK4 is found in 10% of tumors (Sampson et al., 2015). Within the p53 pathway, MDM2 is an E3 ubiquitin ligase that acts as a negative regulator of p53; the MDM2 gene is amplified in 3-25% of OS. COPS3 promotes the proteasomal degradation of p53, and COPS3 amplification is seen in 20-80% of OS cases. In the c-Myc pathway, the c-Myc gene is a key transcription factor that functions as a general amplifier of gene expression (Iaccarino, 2017), enhancing the transcription of essentially all genes with active promoters within the cell. This gene is amplified in 7-67% of OS cases and overexpressed in at least 30% of tumors (Morrow and Khanna, 2015; Sampson et al., 2015).
Role of MiRNA in osteosarcoma
MicroRNAs (miRNAs) are small single-stranded, non-coding RNA molecules (from 18 to 25 nucleotides in length) usually found in eukaryotic cells. They are involved in various biological processes that regulate differentiation, apoptosis, and proliferation in numerous non-neoplastic and neoplastic diseases (Dong et al., 2016; Hashimoto and Tanaka, 2017; Leichter et al., 2017; Nugent, 2014; Ram Kumar et al., 2016; Sampson et al., 2015; Sergi et al., 2017a; Sergi et al., 2017b; Zhao et al., 2013). This is achieved by complementary pairing with the 3' untranslated region (3' UTR) or 5' untranslated region (5' UTR) of target genes, thus inhibiting the mRNA translation of these genes. In 1993, the first miRNA was discovered in the nematode species C. elegans and named lin-4. Since this discovery, it has been estimated that as many as 1000 miRNAs exist in the human genome, with more than 30% of the human genome regulated by miRNAs that simultaneously target multiple genes. In the last decade, it became clear that miRNAs are implicated in the pathogenesis of cancer, including OS (Ram Kumar et al., 2016). This was demonstrated by the differences in the miRNA expression profiles detected between normal and cancer cells. The expression of many different types of miRNA was found to be altered (either over-expressed or reduced) in malignancy. MiRNAs can function as tumor suppressors, oncogenes, or both. The dysregulation of miRNA expression may contribute to cancer development through the loss of control of biological processes. These properties can make miRNAs useful diagnostic and prognostic tools in the management of various cancers, including OS, and of non-oncological diseases (Agarwal et al.; Kobayashi et al., 2012; Leichter et al., 2017; Lin et al., 2016; Nugent, 2014; Ram Kumar et al., 2016; Sampson et al., 2015; Zhao et al., 2013; Zhou et al., 2016). There is increasing evidence that multiple miRNAs may play a role in determining the response to chemotherapy in the treatment of OS (Ram Kumar et al., 2016; Sampson et al., 2015).
Bioinformatics
In the last two decades, numerous bioinformatics tools have been developed to manage the increasing abundance of data. The massive flow of miRNA data can be handled effectively and efficiently using specific bioinformatics tools. In targeting miRNAs, we can address the identification, expression, and analysis of explicit and multiple miRNAs; establish miRNA regulatory networks, miRNA metabolic and signaling pathways, and miRNA-transcription factor interplay; and thereby link miRNAs to particular diseases or disease states.
WikiPathways is an open, collaborative platform for drawing, editing, and sharing biological pathways, built using the same software underlying Wikipedia. This platform can be used to integrate, visualize, and analyze system-wide transcriptomics, proteomics, and metabolomics data. Several studies have demonstrated miRNAs' involvement in the pathogenesis, diagnostic potential, and therapeutics of OS. As indicated above, these miRNAs have been re-emphasized most recently because they intrinsically regulate the expression of different genes that play essential roles in tumorigenesis, cell invasion, migration, and metastasis. In this review, we aimed to discuss the current knowledge of miRNAs' role and their target genes in OS and attempt to develop an OS pathway involving miRNA integrating WikiPathways and other bioinformatic tools.
Materials and Methods
PubMed, Scopus, and Google Scholar were used to systematically search for reviewed publications that investigated the functions of miRNA in the pathogenesis, treatment, and prognosis of osteosarcoma. Publications in the time frame "2008-2018" and targeting "miRNA" and "Osteosarcoma" were retrieved from the archives. The findings from these publications were used to compile a list of miRNAs that are associated with OS. This study relies on a systematic search, but it does not comply with PRISMA eligibility criteria (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
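For readers wishing to reproduce the retrieval step, a hedged sketch using Biopython's Entrez wrapper is shown below; the query string, date range, and e-mail address are placeholders approximating, not reproducing, the original search.

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # hypothetical; NCBI requires an address

handle = Entrez.esearch(
    db="pubmed",
    term="miRNA AND osteosarcoma",
    datetype="pdat", mindate="2008", maxdate="2018",
    retmax=2000,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")  # the study reports 1298 records
pmids = record["IdList"]                   # PubMed IDs for further screening
```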
PathVisio, a no-cost open-source pathway editor, visualization, and analysis software, has significantly enhanced the capacity to explore large-scale data. It provides an invaluable tool for investigating genes, proteins, and metabolites in both the healthy and diseased states of complex tissues and related diseases, including OS. We used PathVisio as a pathway editor, visualization, and analysis software. Since the first publication of PathVisio in 2008, the software has been cited more than 170 times and used in many different biological studies. As an online editor, PathVisio is also integrated into the community-curated pathway database WikiPathways. WikiPathways is one of the most popular freely available databases for accessing biological pathways; it is an open, collaborative platform used to create and share pathways and is available as a plugin for PathVisio. PathVisio 3 is a free open-source pathway editor, visualization, and analysis toolbox implemented in Java, a class-based, object-oriented programming language able to run on all major operating systems (Bhat et al., 2018; Kutmon et al., 2015). The miRTar bioanalysis tool was used to determine the interaction of miRNAs with genes in the TP53 pathway (Hsu et al., 2011). In particular, the miRTar tool adopts seven scenarios to identify putative miRNA target sites on gene transcripts. It illustrates the biological functions of miRNAs with respect to their targets in metabolic pathways. The prediction system helps biologists to quickly identify the regulatory relationships between crucial miRNAs and their targets.
The results were used in assembling the pathway for OS. Common miRNAs previously identified in the literature as having a role in the development and progression of OS were selected and entered into this tool to identify the targeted genes. A pathway network was constructed using the ONCO.IO micro-analysis tool. A pathway for miRNAs linked to OS was then built using PathVisio and the WikiPathways plugin. The URLs of the website platforms we used are https://onco.io/, http://mirtar.mbc.nctu.edu.tw/human/, https://www.pathvisio.org/, and https://www.wikipathways.org/index.php/WikiPathways.
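As an illustration of the network-building step, the sketch below assembles a toy TP53-centered graph with networkx instead of ONCO.IO; the edges are a small subset of the relations discussed in this review (p53 induction of the miR-34 family, miR-34 targets, and the MDM2 feedback loop), not a full export of the pathway.

```python
import networkx as nx

# Directed edges meaning "regulates/targets", drawn from relations
# stated in this review; this is illustrative, not exhaustive.
edges = [
    ("TP53", "miR-34a"), ("TP53", "miR-34b"), ("TP53", "miR-34c"),
    ("miR-34a", "CDK6"), ("miR-34a", "E2F3"), ("miR-34a", "BCL2"),
    ("TP53", "MDM2"), ("MDM2", "TP53"),  # negative-feedback pair
]
g = nx.DiGraph(edges)

print(sorted(g.successors("TP53")))          # direct p53 targets in the toy graph
print(nx.shortest_path(g, "TP53", "BCL2"))   # TP53 -> miR-34a -> BCL2
```

A graph of this form can then be exported (e.g., as GPML) for curation in PathVisio/WikiPathways.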
Results
There is a significant number of miRNAs that we found to be associated with OS. We found 1298 records of osteosarcoma papers associated with the word "miRNA". Three studies were selected, from which the miRNAs associated with osteosarcoma were used for further detailed analysis (Chen et al., 2016a; Kobayashi et al., 2012; Nugent, 2014). In these studies, a total of 6 miRNAs were found on chromosome 1, making chromosome 1 the most frequent miRNA location. Chromosomes X and 11 were the second most frequent locations, each hosting five miRNAs. The third most common chromosomal location is chromosome 19, with four miRNAs. In addition, miRNAs are also located on chromosomes 3, 4, 5, 6, 7, 9, 13, 14, 15, 16, 17, 18, 20, and 21. All types of cellular pathways, from development to oncogenesis, are affected by miRNAs. Tab. 1 highlights the miRNAs associated with OS. Tabs. 2 and 3 recapitulate the roles and target genes of miRNAs in OS, with Tab. 2 displaying those with increased expression and Tab. 3 those with decreased expression. A careful perusal of the literature showed that OS has increased expression of miR-21, miR-93, miR-135b, miR-150, miR-210, miR-221, miR-199b-5p, miR-218, miR-542-5p, and miR-652. While target genes are known for each of these miRNAs, the role they play is known only for miR-21, miR-93, miR-221, and miR-199b-5p. Conversely, there was decreased expression of miR-16, miR-24, miR-29a, miR-29b, miR-31, miR-34a, miR-34b, miR-34c, miR-125b, miR-132, miR-133a, miR-143, miR-145, miR-183, miR-199a-3p, miR-200, miR-206, miR-335, miR-340, and miR-424. Roles are defined for all these miRNAs except miR-29b, miR-34b, and miR-143. Target genes have been identified for miR-16, miR-29a, miR-29b, miR-31, miR-34a, miR-34b, miR-133a, and miR-206. The interconnection of these miRNAs with signaling pathways was the next step in our analysis; the miRNAs with intrinsic regulation in OS are key and are displayed in Tab. 4. Fig. 1 shows the TP53 network built with ONCO.IO, a bioinformatic tool, on the PathVisio software, and the miRNA regulation of TP53 in OS. The weight of miR-34 in transcriptional regulation is prominent.
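The chromosome bookkeeping above reduces to a simple tally; the sketch below shows the idea with an illustrative miRNA-to-chromosome mapping that stands in for the curated assignment used in the study.

```python
from collections import Counter

# Illustrative placeholder mapping, not the study's curated assignment.
mirna_chromosome = {
    "miR-21": "17", "miR-34a": "1", "miR-200b": "1",
    "miR-221": "X", "miR-222": "X", "miR-125b": "11",
}
counts = Counter(mirna_chromosome.values())
for chrom, n in counts.most_common():
    print(f"chr{chrom}: {n} miRNA(s)")
```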
The purpose of this study was to construct a pathway involving the miRNA regulation of the p53 signaling pathway in OS. No unique, established, and solid pathway involving miRNAs for OS was found, but there are multiple pathways related to the TP53 gene that are associated with different conditions. Data regarding miRNAs and target genes involved in the development and progression of OS corresponded to information published in previous studies. These data were entered into ONCO.IO to generate a signaling pathway for p53 showing miRNA regulation. Fig. 1 is only an example of the many genetic interactions that can be revealed using PathVisio; it is not comprehensive of all gene-miRNA interaction networks. The gray shadow in the left corner of Fig. 1 is intended to draw attention to some molecules of interest, but it could be placed in other locations according to different research questions.
Discussion
The mechanism of action of miRNAs in OS remains unclear. However, the TP53 gene is mutated in more than 20% of OS, with mutations demonstrated to be involved in tumorigenesis. MiRNAs are involved in the control of many cellular processes, and the dysregulation of miRNA expression can influence carcinogenesis when tumor suppressor genes or oncogenes encode the relevant target mRNAs. Even a small variation can have significant implications for the cell, since each miRNA can have many targets. In humans, many miRNA genes are located in cancer-associated regions or at fragile sites of chromosomes, which are prone to deletion, amplification, and mutation in cancer cells. Since miRNAs can function as negative regulators of gene expression, over-expression of oncogenic miRNAs can contribute to tumor development by promoting cellular proliferation and evasion of apoptosis. A similar effect occurs if there is a reduction in the expression of tumor-suppressive miRNAs. Research has demonstrated both increases and decreases in the expression of specific miRNAs in cancer, apparently varying with the particular tissue and cancer type (He et al., 2007; Kao et al., 2012; Kobayashi et al., 2012). Several miRNAs have been identified as direct targets of p53.
The miR-34 family (miR-34a, miR-34b, and miR-34c) is an important component of the p53 tumor suppressor pathway. p53 induces the expression of these miRNAs in response to DNA damage and oncogenic stress in many cancers. He et al. (2007) reported that the miR-34 family induces G1 arrest and apoptosis via their targets CDK6, E2F3, Cyclin E2, and BCL2 in a p53-dependent manner in OS cells (Bhat et al., 2018). The expression of miR-34 is decreased in OS, and miR-34 enhances p53-mediated cell cycle arrest and apoptosis. Also, p53 induces the upregulation of miR-192, miR-194, and miR-215 in U2OS cells, which carry wild-type p53. The loss of miR-31 is associated with defects in the p53 pathway, while overexpression of miR-31 significantly inhibits the proliferation of OS cells. Moreover, miR-31 seems to have the potential to prevent disease progression or the development of pulmonary metastasis in OS (Kao et al., 2012; Kobayashi et al., 2012). Biological pathways are descriptive: complex diagrams are used to summarize and describe physical processes, showing the potential interactions among genes, proteins, and metabolites. Pathway diagrams are a common way to graph the wealth of information available on these biological processes. To the best of our knowledge, no established pathway involving miRNAs for OS has been confirmed so far; however, there are multiple pathways related to TP53 that are associated with different disease conditions. The purpose of this study was to construct a pathway involving the miRNA regulation of the p53 signaling pathway in OS using PathVisio. Data regarding miRNAs and target genes involved in the development and progression of OS correspond to information available in the biomedical research literature. There is significant involvement of miRNAs in the development, progression, and metastasis of OS, spanning from gene expression to epigenetics. MiRNAs and their identified target genes are associated with multiple biological pathways and functions related to bone biology and cancer development and progression; dysregulation of miRNAs is thereby associated with tumorigenesis in OS. Andersen et al. (2018) investigated miRNA expression in 101 OS samples: a total of 752 miRNAs were profiled, with 33 identified as deregulated in OS. They found a significant role of miRNAs in the tumorigenesis of OS, with 29 deregulated miRNAs strongly correlated with cancer development and progression. MiR-221 and miR-222 are significantly associated with time to metastasis. The significantly downregulated miRNAs were miR-100-5p, miR-125b-5p, miR-127-3p, miR-370-3p, miR-335-5p, and miR-411-5p. Scott et al. (2007) and Sempere et al. (2004) showed that miR-125b is an important regulator of both proliferation and differentiation in different cell types, while Mizuno et al. (2008) indicated that miR-125b inhibits normal osteoblast proliferation in mouse cells and plays a role in bone development and OS tumorigenesis. Andersen et al. (2018) also identified miR-181a-5p, miR-181c-5p, miR-223-3p, and miR-342-3p as significantly upregulated in OS.
Our study was done to summarize and further increase our understanding of the roles played by various miRNAs at various stages of the signaling pathway regulated by TP53 in OS. Improved knowledge would allow for the development of specific miRNAs as biomarkers for diagnosis, disease monitoring, and OS progression. The possibility exists that miRNAs may have a therapeutic role in managing OS in the near future, particularly with the adoption of protocols of personalized medicine, renewed gene technologies, and digital pathology (Burnett et al., 2020; Jin et al., 2020; Sergi, 2019). MiRNA-directed gene regulation will pave the way for improving traditional gene therapy approaches to cancer, including OS. Presently, miRNA pathways and targets in metastatic osteosarcoma have not been validated. Still, miRNA plays a role in the progression of OS by regulating proliferation, invasion, adhesion, metastasis, apoptosis, and angiogenesis. Identifying dysregulated miRNAs in patients with OS may contribute to the development of biomarkers for diagnosis and prognosis. There are challenges in identifying all the targets of miRNAs and establishing their contribution towards malignancy. Circulating miRNAs are considered predictive biomarkers for various types of cancer; they can be used as non-invasive disease biomarkers since they exist in human serum and plasma in remarkably stable forms. Comprehensive screening of miRNA profiles would allow for earlier detection of OS and nullify the need to collect tissue samples through invasive procedures such as biopsies. Despite the clinical potential for the use of miRNAs as diagnostic biomarkers, several limitations are present. In most studies, the cohort of patients has been relatively small, and therefore evaluations of large sample sizes with long-term follow-up are required. There is a lack of standardized approaches to the normalization of circulating miRNAs, and a refined approach is needed in future studies to establish miRNAs as circulating biomarkers for clinical use. The role of miRNAs in OS has been studied in detail, but it is not clear whether it can be utilized to treat patients with OS. The involvement of miRNA function in the progression of OS has raised the possibility of utilizing miRNAs as a novel therapy; extensive toxicity studies and preclinical safety trials would need to be conducted before considering a miRNA-based therapeutic approach.
[Table note displaced into the text here: The CASP8 gene encodes a member of the cysteine-aspartic acid protease family; the sequential activation of caspases is critical in the execution phase of programmed cell death, or apoptosis. CHEK1 is the gene for a serine/threonine-specific protein kinase that coordinates the DNA damage response and the cell cycle checkpoint response, preventing damaged cells from progressing through the cell cycle. FAS forms the death-inducing signaling complex upon ligand binding, and in several settings there is evidence of crosstalk between the extrinsic and intrinsic pathways of apoptosis. Mouse double minute 2 (MDM2) homolog is a protein encoded in humans by the MDM2 gene; MDM2 is an essential negative regulator of the p53 tumor suppressor. SESN1, or Sestrin 1, the p53-regulated protein PA26, is encoded by the SESN1 gene; the p53 tumor suppressor protein induces Sestrins, which play significant roles in the cellular response to DNA damage and oxidative stress.]
A greater understanding of the roles that different miRNAs play in the development and progression of OS could ultimately improve the management of this tumor (Abarrategi et al., 2016; Bhat et al., 2018; He et al., 2007; Jones et al., 2012; Kao et al., 2012; Kobayashi et al., 2012; Kutmon et al., 2015; Leichter et al., 2017; Nugent, 2014; Ram Kumar et al., 2016).
Moreover, EIMMO, MicroInspector, miRU, MMIA, RNA22, and StarMir are additional web-based tools, curated with variable data by biologists, that are specific for identifying miRNA binding sites (Hsu et al., 2011).
There are a few additional limitations to our study. First, the most common weakness of bioinformatics tools is the generation of large amounts of false-positive data. We considered other open-source tools, such as DIANA, TargetScan, and miRanda, but we chose miRTar because of our familiarity with it. Although based on available scientific data, many of the proposed gene interactions in these databases may be speculative. Second, the current method of pathway analysis depends on existing databases: not all the miRNAs and genes linked to OS were found in the ONCO.IO miRNA analysis tool database, which was used to construct the pathway network. Third, results from pathway analysis tools need to be interpreted with caution, because the miRNA field is an evolving platform spanning from genomics to proteomics.
In conclusion, although the field of miRNA research is still relatively new, its rapid expansion has the potential to bring these small molecules into the management of cancer. The PathVisio analysis of WikiPathways may be a useful bioinformatic tool for cancer research. Several miRNAs have been implicated in OS, with some demonstrated to be overexpressed while others are downregulated. Our analysis indicates that miRNAs indeed have the potential to play a critical role in the management of OS, both as promising diagnostic biomarkers and as predictive or prognostic indicators. Bioinformatics is advancing daily, and we expect the PathVisio analysis of WikiPathways to become a useful tool readily available to cancer research investigators worldwide. The miRTar bioanalysis tool can be used to determine the interaction of miRNAs with genes in the TP53 pathway, and the ONCO.IO miRNA analysis tool database was used to link miRNAs and OS. In OS patients considered good responders to chemotherapy, miR-92a, miR-99b, miR-193a-5p, and miR-422a expression increased, while miR-132 decreased. To the best of our knowledge, this is the first application of PathVisio to determine miRNA pathways in osteosarcoma. PathVisio is a full pathway editor with the potential to illustrate biological events, augment graphical elements, and elucidate biological structures and interactions with standard external database identifiers. MiRNAs have the potential to become a useful diagnostic and prognostic tool in the management of OS.
"Biology"
] |
Atomic-scale study of the amorphous-to-crystalline phase transition mechanism in GeTe thin films
The underlying mechanism driving the structural amorphous-to-crystalline transition in Group VI chalcogenides is still a matter of debate, even in the simplest system, GeTe. We exploit the extreme sensitivity of 57Fe emission Mössbauer spectroscopy, following dilute implantation of 57Mn (T½ = 1.5 min) at ISOLDE/CERN, to study the electronic charge distribution in the immediate vicinity of the 57Fe probe substituting Ge (FeGe), and to interrogate the local environment of FeGe over the amorphous-crystalline phase transition in GeTe thin films. Our results show that the local structure of as-sputtered amorphous GeTe is a combination of tetrahedral and defect-octahedral sites. The main effect of the crystallization is the conversion from tetrahedral to defect-free octahedral sites. We discover that only the tetrahedral fraction in amorphous GeTe participates in the change of the FeGe-Te chemical bonds, with a net electronic charge density transfer of ~1.6 e/a0 between FeGe and neighboring Te atoms. This charge transfer accounts for a lowering of the covalent character during crystallization. The results are corroborated by theoretical calculations within the framework of density functional theory. The observed atomic-scale chemical-structural changes are directly connected to the macroscopic phase transition and resistivity switch of GeTe thin films.
Based on extended X-ray absorption fine structure (EXAFS) measurements, some groups showed that, upon amorphization, the average coordination of Ge atoms decreases from six-fold in the crystalline phase (c-GeTe) to four-fold in the amorphous state (a-GeTe) 17,18 . Other groups, though, also on the basis of EXAFS results, proposed alternative scenarios 22 . Based on X-ray photoelectron spectroscopy (XPS), Betts et al. 23 observed a relatively large shift in the Ge 3d level upon crystallization, which was attributed to a covalent-to-ionic change of the Ge-Te chemical bonding without a strong change in the bond lengths; on the other hand, Shevchik et al. concluded the opposite, i.e. that the phase change in GeTe has to be attributed mainly to local symmetry changes with no change in the charge density around Ge 24 . The latter interpretation has been supported by synchrotron-based XPS experiments 25 , while different groups have reported changes in the electronic structure of a-GeTe and c-GeTe 26 . The evident controversy in the interpretation of XPS results underlines the need for an experimental method more sensitive to the very small valence-state changes occurring at the Ge site during the amorphous-to-crystalline GeTe phase transition. In particular, while the structure of c-GeTe seems quite well understood, the main remaining questions concern the local structure of a-GeTe and, particularly, the atomic-scale mechanisms driving the a-GeTe to c-GeTe phase transition 21 . Andrikopoulos et al. have applied Raman scattering to show that the structure of a-GeTe contains only tetrahedral GeTe 4−n Ge n species (n = 0, 1, 2, 3, 4), whereas Te-Te bonds are absent 27 . They observed that the n = 0 case gradually dominates with increasing annealing temperature (before the phase transition), finally driving the phase change to c-GeTe.
Mössbauer spectroscopy (MS) is an ideal tool for measuring local variations of charge density and symmetry around the Mössbauer-active probe in materials undergoing macroscopic phase transformations, and 119 Sn and 125 Te MS have previously been conducted on both glassy and c-GeTe compounds [28-33]. By 119 Sn MS at Ge sites, the local structure of amorphous Ge x Te 1−x (x ≤ 0.2) alloys has been described as the co-existence of tetrahedral and so-called defect-octahedral (i.e., Ge in an octahedral configuration with two nearest-neighbor (nn) Te vacancies) local configurations 33 . Again by 119 Sn MS, it has been shown that Ge atoms in a-GeTe are tetrahedrally coordinated with the Te nn in a covalent type of bonding, while upon crystallization Ge acquires the 2+ charge state, as expected in the c-GeTe crystal, with Ge surrounded by six Te nn [28-30]. The isomer shift at 125 Te sites in amorphous and crystalline GeTe has been reported to be the same within experimental error, while a strong change in the electric field gradient has been observed 31 .
Here, we present results obtained by temperature-dependent 57 Fe emission Mössbauer spectroscopy (eMS) in GeTe, as performed at the radioactive ion beam facility ISOLDE at CERN. This experimental method is sensitive to the nuclear hyperfine interactions between the 57 Fe nuclei and their nearest-neighbor (nn) and next-nearest-neighbor (nnn) ions. In particular, eMS is used to investigate the Fe site location in GeTe following the implantation of 57 Mn, and to determine the atomic-scale mechanisms at the basis of the phase change occurring in GeTe upon thermal annealing. When compared to 119 Sn and 125 Te MS experiments 28-33 , 57 Fe MS is characterized by a higher sensitivity to potentially small variations in the local valence states of the probe ions and in the local symmetry around the Mössbauer probe, owing to the smaller intrinsic linewidth of the 14.4 keV transition 34 . A special feature of the eMS approach is that the implantation fluence is kept very low (10 10 -10 12 ions/cm 2 ), corresponding to a concentration of 10 −4 -10 −3 at.%. This ensures single-ion implantation, without overlapping damage cascades, and rules out any prospect of Mn/Fe precipitation. The eMS measurements are done at the implantation temperature, and the atomic-scale information is obtained with the Mn/Fe probes at rest, 1.5 min after the implantation. More importantly, eMS allows in situ monitoring of the local changes occurring during and across the a-GeTe to c-GeTe phase transition. The experimental approach in this work is unique, since eMS was carried out on thin films of GeTe previously characterized by temperature-dependent resistivity measurements, whose preparation is described in the Methods section (see also ref. 35 and references therein). By doing so, we seek a correlation between the resistivity switching and the thermally induced crystallization tracked at the atomic scale in the in situ eMS study.
Our experimental findings are corroborated by simulations based on first principles calculations in the framework of density functional theory (DFT).
Results and Discussion
Basic properties of the GeTe films. Two samples, labelled GeTe-1 and GeTe-2, were cut from the same wafer and are the subject of the present study. The electrical resistivity (ρ) of sample GeTe-1 was measured as a function of temperature in a vacuum chamber, while sample GeTe-2 was used for in situ temperature-dependent eMS measurements. Figure 1 shows the resistivity of the GeTe-1 sample, as recorded during the thermal annealing. The sample is initially in its amorphous state, showing a resistivity ρ ≈ 10 Ωcm. Upon heating, the resistivity sharply drops at the transition temperature T ac ≈ 180 °C as a result of the amorphous-to-crystalline phase transition, and the crystalline structure is retained until the end of the thermal treatment at 250 °C, when complete transformation is achieved. Once crystallized, the film remains in a low resistivity state down to RT, since the re-amorphization requires melting followed by fast quenching. The resistivity values in both the crystalline and amorphous states are in agreement with those previously reported for GeTe thin films [35][36][37] .
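The transition temperature can be read off such a trace as the point of steepest drop in log-resistivity; the sketch below illustrates this on synthetic data shaped like the curve in Fig. 1 (the arrays and the sigmoid profile are assumptions, not measured values).

```python
import numpy as np

def find_t_ac(temperature: np.ndarray, rho: np.ndarray) -> float:
    """Locate the steepest decrease of log10(resistivity)."""
    d_logrho = np.gradient(np.log10(rho), temperature)
    return float(temperature[np.argmin(d_logrho)])  # most negative slope

# Synthetic example: ~10 Ohm.cm amorphous plateau dropping near 180 C.
T = np.linspace(25, 250, 500)
rho = 10 ** (1 - 4 / (1 + np.exp(-(T - 180) / 3)))  # sigmoid switch, ~4 decades
print(f"T_ac = {find_t_ac(T, rho):.0f} C")
```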
GIXRD was performed on the two GeTe samples following thermal annealing performed in the Van der Pauw set up on GeTe-1 and during eMS measurements on GeTe-2. Both samples were found to crystallize in the rhombohedral structure R3m:H of GeTe, as shown in Fig. 2(a). This is the expected distorted NaCl structure of GeTe below 670 K 38 . The small variation of the diffracted intensity may evidence a slight variation of the preferential orientation of the crystallites and/or a different structure factor. The lattice parameters were extracted from the Rietveld refinement of the diffraction spectrum of sample GeTe-2, with an arbitrary texture and imposing a micro-strain of 1%. Figure 2(b) shows the obtained simulation within the whole explored 2Θ range. The extracted lattice parameters are a = 4.15 Å and c = 10.51 Å, which are slightly lower than those reported for stable and stoichiometric GeTe (a = 4.21 Å, c = 10.60 Å) 39,40 .
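As a quick consistency check on the refined lattice parameters, d-spacings and Bragg angles for the hexagonal setting of R3m GeTe can be computed directly; the sketch below assumes Cu K-alpha radiation and an illustrative set of (hkl) reflections.

```python
import numpy as np

A, C = 4.15, 10.51    # refined lattice parameters (Angstrom)
WAVELENGTH = 1.5406   # Cu K-alpha1 (Angstrom); an assumption, not stated above

def d_hex(h: int, k: int, l: int) -> float:
    """Hexagonal-setting d-spacing: 1/d^2 = 4/3 (h^2+hk+k^2)/a^2 + l^2/c^2."""
    inv_d2 = 4 / 3 * (h * h + h * k + k * k) / A**2 + l**2 / C**2
    return 1 / np.sqrt(inv_d2)

for hkl in [(0, 0, 3), (1, 0, 1), (0, 1, 2), (1, 1, 0)]:
    d = d_hex(*hkl)
    two_theta = 2 * np.degrees(np.arcsin(WAVELENGTH / (2 * d)))  # Bragg's law
    print(f"{hkl}: d = {d:.3f} A, 2theta = {two_theta:.2f} deg")
```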
The resistivity measurements of sample GeTe-1 show that the amorphous-to-crystalline phase change occurs at T ac ≈ 180 °C. Hence, the eMS measurements on the as-grown (amorphous) sample GeTe-2 were conducted in four stages: (a) implantation and measurement at 36 °C; (b) implantation and measurement at 150 °C (i.e., 30 °C below T ac ); (c) implantation and measurement at 210 °C (i.e., 30 °C above T ac ); (d) implantation and measurement back at 150 °C. The extremely low total concentration of the implanted ions makes the 57 Fe nuclei a local probe of the macroscopic a-GeTe to c-GeTe structural transition. The respective spectra are presented in Fig. 3(a)-(d). The insets in Fig. 3 show the resistivity curve of the twin GeTe-1 thin film, with dot markers indicating the temperatures at which eMS was carried out on GeTe-2. Before the phase transition in GeTe-2, the eMS spectra are interpreted in terms of two components, labelled A (Lorentzian single line) and D (Voigt line-shape quadrupole doublet), while after the phase transition, in both GeTe-1 and GeTe-2, the eMS data are fitted by including the additional single line C. Both the A and C components show unresolved quadrupole splitting (ΔE Q < 0.1 mm/s). The fitting of all the eMS spectra of crystallized GeTe-2 and GeTe-1 was conducted simultaneously, forcing the isomer shifts of all the components to follow the second-order Doppler shift 41 . The quadrupole splitting of the D component showed the typical T 3/2 temperature dependence observed for damage components in group IV semiconductors 42 , suggesting a highly disordered local Fe environment, also manifested in a rather large linewidth. Table 1 summarizes the Mössbauer parameters at RT of the identified A, C, and D components: the isomer shift (δ), the quadrupole splitting (ΔE Q ), and σ free, the additional Gaussian broadening of the linewidth (see Methods).
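Schematically, the fitting model described above is a single resonance line plus a symmetric quadrupole doublet on a baseline; the sketch below fits synthetic data with scipy and deliberately simplifies the real analysis (one shared Lorentzian width instead of Voigt profiles, and no second-order Doppler constraint).

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, center, fwhm, area):
    # Area-normalized Lorentzian line.
    return area * (fwhm / (2 * np.pi)) / ((v - center) ** 2 + (fwhm / 2) ** 2)

def model(v, bg, d_A, a_A, d_D, dq_D, a_D, fwhm):
    singlet = lorentzian(v, d_A, fwhm, a_A)                    # component A
    doublet = (lorentzian(v, d_D - dq_D / 2, fwhm, a_D / 2)    # component D,
               + lorentzian(v, d_D + dq_D / 2, fwhm, a_D / 2)) # split by dq_D
    return bg + singlet + doublet  # sign convention depends on the set-up

v = np.linspace(-4, 4, 512)                    # velocity axis (mm/s)
truth = (100.0, 0.8, 6.0, 0.3, 1.9, 4.0, 0.4)  # synthetic "true" parameters
rng = np.random.default_rng(0)
counts = model(v, *truth) + rng.normal(0, 0.2, v.size)

popt, _ = curve_fit(model, v, counts, p0=(90, 0.5, 5, 0, 1.5, 3, 0.3))
print(dict(zip(["bg", "d_A", "a_A", "d_D", "dq_D", "a_D", "fwhm"], popt.round(2))))
```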
The eMS spectrum obtained following implantation at 150 °C (Fig. 3(b)) does not show any major changes compared with the 36 °C measurement (Fig. 3(a)), with only the D component showing a slightly lower relative intensity. On the other hand, the spectrum collected at 210 °C (Fig. 3(c)) shows major changes once T ac is passed: the relative intensity of the A component is drastically reduced compared to that observed at 150 °C, and the spectrum is dominated, instead, by the new single line C, with a different value of the isomer shift δ (Fig. 3(c)). The change in isomer shift accompanying the transformation of the spectral component A in a-GeTe into the C component in c-GeTe corresponds to an energy change of ΔE = 3.715 × 10 −8 eV. The relative intensity of the D component also drops across the phase transition, without displaying any change of the isomer shift. After lowering the temperature to 150 °C (Fig. 3(d)), the eMS spectrum shows that the amorphous structure does not recover, consistent with what is shown by the resistivity measurements. However, the full thermal budget furnished to the system enhances the A to C transformation (see Supplementary Information). Figure 4 shows the variation with temperature of the relative area intensities of the spectral components A, C and D. A 20% fraction of the Fe atoms remains in the A-type spectral component, even after implantation and measurement above T ac .

Table 1. Mössbauer parameters at RT for the C, A, and D components, as determined by fitting the eMS data of GeTe-2, being: δ the isomer shift, ΔE Q the quadrupole splitting, and σ free the additional Gaussian broadening free to vary in the fitting procedure.
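The quoted energy change follows from the first-order Doppler relation ΔE = (Δv/c) E γ , with E γ = 14.4 keV for the 57 Fe Mössbauer transition. A minimal sketch of the conversion is given below; the isomer-shift difference of ~0.77 mm/s used as input is inferred back from the quoted ΔE and is not stated explicitly in the text.

```python
C_MM_S = 2.998e11      # speed of light in mm/s
E_GAMMA_EV = 14.4e3    # 57Fe Moessbauer transition energy in eV

def shift_to_energy(delta_v_mm_s):
    """Convert a Doppler-velocity shift (mm/s) into an energy change (eV)
    using the first-order Doppler relation dE = (v / c) * E_gamma."""
    return delta_v_mm_s / C_MM_S * E_GAMMA_EV

# An A -> C isomer-shift change of ~0.77 mm/s (inferred, see text)
print(shift_to_energy(0.77))   # ~3.7e-8 eV, matching the quoted value
```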
The eMS measurements of GeTe-1 were conducted at 36 and 150 °C, following the resistivity measurement depicted in Fig. 1 (see Supplementary Information). They show the dominant C component already at 36 °C, as expected after the crystallization that occurred during the Van der Pauw measurements (Fig. 1), where the temperature was higher than T ac .
Calculation of Fe hyperfine parameters in GeTe.
In order to proceed with the lattice site assignments, and to elucidate the configurational changes occurring across the a-c phase transition, six different configurations were simulated. I, II): Fe substituting Ge (Fe Ge ) surrounded by six and four Te atoms, as nn in c-GeTe and a-GeTe, respectively; III, IV): Fe substituting Ge surrounded by six Te atoms with an additional one and two Te vacancies in c-GeTe, respectively; V, VI): Fe substituting Te surrounded by six and four Ge atoms, as nn in c-GeTe and a-GeTe, respectively. Figure 5(a) shows configuration I): Fe Ge in an octahedral configuration in the rhombohedral structure (space group R3m) formed by a 2 × 2 × 2 supercell of c-GeTe 17 , with lattice parameters of 6.02 Å 43 , six-fold coordinated by Te with three short (2.83 Å) and three long (3.15 Å) bond-lengths 17,44 . Figure 5(b) shows configuration II): Fe Ge in a fourfold tetrahedral coordination with the Fe Ge -Te distance in the unit cell reduced to 2.5 Å. To simulate the tetrahedral amorphous structure, we forced consistency between the obtained lattice parameters and those calculated from the interatomic distances obtained by EXAFS analysis in a-GeTe 17 . Figure 6 shows the charge densities corresponding to configurations I, II) depicted in Fig. 5, which were calculated in order to monitor the Fe, Ge and Te valence electron states and charge transfer properties in the c-GeTe and a-GeTe phases. The legends in Fig. 6 indicate the magnitude of the charge density Δn(r) (same color code as in Fig. 5). The charge densities around the Fe and Te atoms are mainly formed by d and p orbital states, respectively. Clearly, there is a higher degree of covalency along the Fe Ge -Te bonding in the amorphous case (Fig. 6(b)) than in the crystalline state (Fig. 6(a)).

Fe lattice site identification in GeTe. Following the implantation of radioactive 57 Mn + ions, the daughter Fe probe ions could in principle substitute for Ge (Fe Ge ) and/or Te (Fe Te ). Moreover, owing to the 〈E R 〉 = 40 eV recoil energy imparted on the 57* Fe daughter nucleus in the β − decay of 57 Mn, a fraction of the daughter 57 Fe probe ions could be expelled from the initial site occupied by the implanted 57 Mn ions to interstitial sites (Fe I ) 45,46 . Indeed, our eMS measurements on 57 Mn/ 57 Fe implanted Si and Ge 42,47 show appreciable interstitial fractions of the Fe ions. However, these studies also show that the Debye temperatures for substitutional Fe (Fe Si and Fe Ge ) extracted from the eMS resonance spectra agree well with estimates based on the mass defect approximation, while those of interstitial Fe are at least 100 K lower. In the present study, the average Debye temperature (θ D ) for Fe in the GeTe samples, determined from the temperature dependence of the resonance area, is 〈θ D 〉 = 175(25) K. This value is in good agreement with the value θ D = 205 K for Fe substituting Ge estimated using the mass defect approximation, assuming a value θ D = 180 K for the GeTe host lattice 48 . This allows us to exclude any significant contribution from interstitial Fe to the A, C and D components, as also confirmed by measurements on GeTe-1 (see Supplementary Information). In principle, the 57 Mn + ions are expected to adopt the more electropositive site (Ge site).
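The quoted estimate can be reproduced with the simplest form of the mass-defect scaling, θ imp ≈ θ host √(M replaced /M imp ); the sketch below assumes the replaced atom is Ge and is meant only as a plausibility check, not as the exact procedure of ref. 48.

```python
import math

M_GE, M_FE = 72.63, 56.94   # atomic masses in u
THETA_HOST = 180.0          # Debye temperature assumed for the GeTe host (K)

def debye_mass_defect(theta_host, m_replaced, m_impurity):
    """Mass-defect estimate of the Debye temperature for a substitutional
    impurity: the local vibrational frequency scales as 1/sqrt(mass)."""
    return theta_host * math.sqrt(m_replaced / m_impurity)

print(debye_mass_defect(THETA_HOST, M_GE, M_FE))  # ~203 K, close to the quoted 205 K
```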
In the case of Fe substituting Te (Fe Te ), the bonding with neighboring Ge atoms would require the charge state of Fe to lower to Fe + / Fe 0 , which would be expected to give a higher isomer shift than that measured for the A component (Table 1). Moreover, the preferential substitution of the cation as a dopant in group VI chalcogenides has been previously reported, with substitution of Te sites having a much higher formation energy 13,49 . Liu et al. performed comparative XPS studies of Ge 1−x Fe x Te films (x = 0.02-0.25) and FeTe 50 . The Fe 2p core-level XPS spectra revealed the two components Fe 2p 3/2 and Fe 2p 1/2 , which were coincident in the GeFeTe and FeTe samples, indicating that Fe occupies substitutional Ge (cation) sites and is bonded with Te in the Fe incorporated GeTe films.
We now compare the measured δ and ΔE Q for the A and C components (Table 1) with the respective values calculated for the different local configurations listed in Table 2. The hyperfine parameters of components C and A match very well those simulated for configurations I) and II) in Table 2, respectively. We therefore assign the A component to the Fe Ge -4Te nn tetrahedral configuration in a-GeTe, and the C component to the octahedral Fe Ge -6Te nn configuration. The small calculated quadrupole splitting for the octahedral configuration I) in Table 2 is of the same order of magnitude as the additional line-width broadening observed for the C component in the eMS measurements (Table 1). The null quadrupole interaction in the tetrahedral configuration (A component) originates from the equal and opposite contributions to the electric field gradient given by the d(x 2 −y 2 ) and d(z 2 ) orbitals 51 . The displaced Ge atoms presumably take up interstitial sites and diffuse away upon thermal annealing. We exclude any incorporation of Ge in the immediate neighbourhood of Fe Ge (nn or nnn), since this would readily generate a non-zero electric field gradient, and hence quadrupole splittings for the A and C components, which we do not observe.
It is evident from the calculations (Table 2) that the introduction of one and two Te vacancies around Fe Ge in the c-GeTe configuration strongly enhances the quadrupole splitting. In particular, configuration IV) in Table 2, with two Te vacancies, yields δ and ΔE Q values matching very well the experimental values for the D component (Table 1), which is characterized by a larger line broadening compared to components A and C (cf. Table 1). Consequently, we assign the D component to the defect-octahedral configuration proposed in ref. 33. In a-GeTe thin films, the fraction of the defect-octahedral configuration has been shown to increase with the film thickness, and was reported to be ≤30% for 100 nm layers 52 . By assuming the trend of the defect-octahedral fraction vs film thickness reported in ref. 52, we expect a fraction of ≤35% in a 150 nm thick GeTe film. In GeTe-2, we detect a D fraction of 60% in a-GeTe (Fig. 4). Therefore, we conclude that our D component consists of two contributions: a fraction of ≤35% of Fe Ge in the defect-octahedral configuration and a fraction of ≥25% in a more disordered local configuration (distribution of bond angles and/or additional Te vacancies), due to the lattice damage induced by the implantation process. At the phase transition temperature of 180 °C, the ion-implantation induced damage is expected to disappear 42 . We therefore conclude that the ≤20% fraction of the D component that is left in c-GeTe (Fig. 4) is due to a persisting fraction of Fe Ge in the defect-octahedral configuration, with a further ≥20% fraction (A) due to tetrahedral sites and the remaining 60% (C) due to octahedral sites. Results obtained by inelastic Raman light scattering on bulk c-GeTe 53 report the local structure of crystalline GeTe as including 16.7% of Ge atoms in tetrahedral configurations and 29.9% in defective octahedra, in reasonable agreement with our findings.
By normalizing the fraction of Fe Ge in a-GeTe only to the pure tetrahedral + defect-octahedral contributions (i.e. not considering the implantation damage), we estimate fractions of ∼53% and ∼47%, respectively, for the two configurations. Compared with ref. 33, one could expect a lower amount of the tetrahedral configuration; on the other hand, it is known that the tetrahedral fraction increases at lower thicknesses 52 . These values must also be compared to the results of Raty et al. 54 , based on DFT simulations of a-GeTe generated following the melt-quench procedure. Raty et al. predict a fraction of 30% for the tetrahedral Ge atoms, lower than the 53% that we detect in a-GeTe. It is important to underline the difference between as-deposited and melt-quenched amorphous GeTe-based alloys in determining their atomic-scale structure. Indeed, a higher tetrahedral fraction is typically reported in as-deposited GeTe-based materials when compared to their melt-quenched counterparts [55][56][57] . This is due to the fact that the amorphization induced by laser or pulsed current (i.e. the melt-quenched cases) forms a kind of intermediate structure between the as-deposited amorphous and crystalline phases, thus typically exhibiting a higher concentration of distorted octahedral Ge sites 55,56 .

Atomic-scale mechanisms of the amorphous-to-crystalline phase transition. Our results define a scenario in which the macroscopic structural (Fig. 2) and resistivity (Fig. 1) changes occurring in GeTe thin films at 180 °C are connected to the local transformation at Fe Ge sites from a combination of pure tetrahedral (∼53%) and defect-octahedral (∼47%) configurations to a dominant pure octahedral structure (60%), with residual fractions of ≥20% tetrahedral and ≤20% defect-octahedral sites.
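The fraction bookkeeping used in the two preceding paragraphs can be made explicit with the short arithmetic sketch below; the 40% tetrahedral fraction in a-GeTe is taken as the complement of the 60% D fraction, which is an assumption of this illustration rather than a number quoted directly in the text.

```python
f_A = 0.40           # tetrahedral (A) fraction in a-GeTe, complement of the D fraction
f_D = 0.60           # total D fraction in a-GeTe (Fig. 4)
f_defect_oct = 0.35  # upper estimate for the defect-octahedral part of D
f_damage = f_D - f_defect_oct  # implantation-damage part, ~0.25

# Normalize to the intrinsic (tetrahedral + defect-octahedral) contributions only
norm = f_A + f_defect_oct
print(round(f_A / norm, 2), round(f_defect_oct / norm, 2))  # 0.53 0.47
```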
Certainly, the most significant effect across the phase change is the transformation of the pure tetrahedral into the pure octahedral fraction (Fig. 3). This is in accordance with Raman studies, which have demonstrated that it is the n = 0 configuration in amorphous GeTe 4−n Ge n that dominates the phase transition from a-GeTe to c-GeTe 27 .
Simultaneously with the structural change, there is an electronic charge transfer, which transforms the chemical bond character between Fe Ge and the neighbouring Te atoms. In particular, the measured isomer shift change between components A and C corresponds to an electronic charge transfer of approximately 1.6 e/a 0 (e denotes the electronic charge and a 0 = 0.53 Å the Bohr radius) between Fe Ge and the neighbouring Te atoms, which takes place during the phase transition 34 . This electronic density variation at the Fe Ge site is directly connected to a change in the character of the chemical bonding, i.e. to the lowering of the covalence when transforming from a-GeTe to c-GeTe (Table 1 and Fig. 3). This is in accordance with the charge density calculations, which confirm the higher degree of covalence along the Fe Ge -Te bonding in the local tetrahedral configuration of a-GeTe when compared to the octahedral one of c-GeTe (Fig. 6). The change in covalence is due to the lower shielding originating from the d-orbitals in the tetrahedral a-GeTe configuration 34 , where a higher p-d hybridization is observed compared to the octahedral c-GeTe configuration. Kolobov et al. have shown that the energy-efficient phase transition in GeTe occurs through a bond switch, where the pairs of non-bonding valence p-electrons (residing in the same orbital and not participating in the formation of conventional covalent bonds) mediate the bond switch without the rupture of the strong covalent bonds of the amorphous state 21 . With eMS, we probe the chemical rearrangements occurring at the Ge site indirectly, i.e. through the hyperfine interactions experienced by Fe substituting Ge at the Fe Ge site. It is not possible to compare quantitatively the electronic configuration changes across the Fe Ge -Te bonds with those of Ge-Te bonds, due to the additional contribution of the d-orbitals to the chemical bond in the case of Fe Ge -Te. However, it is of much interest to attempt a comparison with ref. 21, since experimental verification of the mechanism proposed there is still lacking and challenging. The small charge transfer of 1.6 e/a 0 between Fe Ge and the neighbouring Te atoms, as measured by eMS, is expected to be a particularly cost-effective process in terms of energy. Moreover, on the atomic scale, the switch from the local tetrahedral to the octahedral configuration is evidently not complete: even for the full thermal budget furnished to GeTe-2, which corresponds to a fully achieved macroscopic phase transformation (see Supplementary Information), ≥20% of the Fe Ge atoms remain in the A-type spectral configuration (see Fig. 4). This is also the case for GeTe-1 (see Supplementary Information). The eMS results evidence that the macroscopic phase transition (Figs 1 and 2) is not accompanied by a full tetrahedral-to-pure-octahedral transformation on the atomic scale. We suggest that the coexistence of the A and C components (Fig. 3) following the phase transition is a marker for the very delicate and simultaneous change of structure and chemical bonding around Fe Ge during the macroscopic phase transition. It is therefore tempting to associate our experimental evidence with the energy-efficient bond switch process proposed in ref. 21, and in particular with the suggested absence of a real rupture of the strong covalent bonds of a-GeTe following the phase transition.

Table 2. DFT calculated electric field gradient (V zz ) and Mössbauer parameters δ and ΔE Q for Fe at Ge and Te sites in GeTe, in the indicated symmetry structure. For Fe at the Ge site, the situation in which Te is replaced with 1 and 2 vacancies in c-GeTe is also simulated.
There is an additional defect-octahedral Fe Ge fraction of ≤20% (component D) left in c-GeTe, but this component does not show any change in its isomer shift, meaning it is not directly involved in the change of the chemical bonding. This demonstrates that the change in the nature of the chemical bond across the phase change is uniquely associated with the tetrahedral-to-pure-octahedral transformation.
Summary. The macroscopic phase change and electrical conductivity switch occurring in GeTe at 180 °C were studied. A clear correlation with atomic-scale chemical-structural changes was established by monitoring the amorphous-to-crystalline phase transition by emission Mössbauer spectroscopy on 57 Fe probes, substituting Ge in GeTe thin films.
Certainly, the most debated questions are: "what is the local structure of a-GeTe and which mechanism drives the fast and reversible phase transition to and from c-GeTe?" Our results show that the Ge environment in as-sputtered a-GeTe is a combination of tetrahedral (53%) and defect-octahedral (47%) configurations. With the experimental method applied here, employing the extreme sensitivity of the 57 Fe probe, we followed in situ the local transformation occurring at Ge sites during thermal annealing. We show that the phase and resistivity changes characterizing the prototypical GeTe chalcogenide are attributable to a local symmetry variation around Fe Ge from tetrahedral and defect-octahedral (both surrounded by four Te atoms) in a-GeTe to octahedral (surrounded by six Te atoms) in c-GeTe (60%), with remaining fractions of ≥20% tetrahedral and ≤20% defect-octahedral sites, respectively.
Simultaneously, a small net electron charge density transfer of ~1.6 e/a 0 between Fe Ge and the neighbouring Te atoms was measured. This was found to be associated with the gradual change of the character of the chemical bonding from covalent to ionic. Most importantly, these chemical changes are uniquely associated with the transformation from the Fe Ge tetrahedral fraction in a-GeTe to the local octahedral symmetry in c-GeTe, without any apparent involvement of the defect-octahedral fraction in a-GeTe. Our experimental results were corroborated by DFT calculations of the hyperfine parameters of the Fe probes in the different local symmetries.
Methods
Sample preparation. Amorphous 150 nm-thick Ge 50 Te 50 stoichiometric thin films were deposited onto Si(550 μm)/SiO 2 (80 nm) substrates by DC magnetron sputtering of a GeTe target in Ar atmosphere. Two samples, labelled GeTe-1 and GeTe-2, cut from the same wafer, were the subject of the present study.
GIXRD measurements.
Grazing incidence X-ray diffraction (GIXRD) measurements were performed at an incidence angle ω = 1°, in order to investigate the crystalline structure of the films prior to and following the thermal treatment and ion implantation. Measurements were performed with an upgraded XRD3000 (Italstructure) diffractometer with monochromated Cu Kα radiation (wavelength 0.154 nm) and a position sensitive detector (Inel CPS120).
Resistivity measurements.
The resistivity measurements on GeTe-1 were conducted during thermal annealing by using a four-probe setup in the Van der Pauw configuration. The sample was heated in contact with a heater-chuck, from RT to 250 °C and back to RT, at a constant rate of 10 °C/min, in a chamber which had been previously evacuated to <10 −5 mbar in order to prevent oxidation and contamination. The maximum temperature of 250 °C was chosen in order to ensure complete GeTe crystallization.

eMS measurements. eMS was conducted following the implantation of radioactive 57 Mn + (T 1/2 = 1.5 min) ion beams at the ISOLDE facility at CERN. The beam was produced by 1.4 GeV proton-induced fission in UC 2 targets and subsequent laser ionization 58 . Pure beams with intensities of ~5 × 10 8 ions/s were implanted at 50 keV (fluence <10 12 cm −2 ) into the GeTe sample held at temperatures from RT up to 210 °C in vacuum (10 −6 mbar), in an implantation chamber. Under the implantation conditions reported here, the Mn ion range was estimated (TRIM) to be 32 nm. This rules out any possible effect of surface oxidation, which according to X-ray reflectivity measurements (not shown) is limited to 11 nm in GeTe-2. Each eMS spectrum was recorded following an average 5 min implantation and measurement time. Each sample received a maximum implantation fluence of ~1.5 × 10 12 at./cm 2 , which is well below the threshold of overlapping damage cascades (typically 10 13 -10 14 cm −2 ) in semiconductors and insulators 58 . Heating was performed with a halogen lamp mounted behind the sample. In the eMS experiments performed on GeTe-2, a temperature ramp rate of ~5 °C/min was used. 57 Mn β-decays to the 14.4 keV Mössbauer state of 57 Fe (T 1/2 = 100 ns), allowing eMS spectra to be recorded using a resonance detector equipped with enriched 57 Fe stainless steel electrodes, mounted on a conventional drive system outside the implantation chamber. The intrinsic line-shape and line-width of the detector were determined from implantations into an α-Fe foil, yielding a Voigt profile with a Lorentzian line width (FWHM) of Γ = 0.34 mm/s and an additional Gaussian broadening of σ = 0.08 mm/s. Isomer shifts and velocities are given with respect to the centre of the spectrum of α-Fe at RT. The eMS spectra were analyzed using the Vinda analysis program 41 .

Calculation details. Theoretical calculations of the hyperfine interaction parameters were conducted by employing the generalized gradient approximation (GGA) within density functional theory (DFT). The full potential linearized augmented plane wave (FP-LAPW) method, as implemented in the WIEN2K code 59 , was employed together with the Perdew-Burke-Ernzerhof (PBE) GGA functional for all of the DFT calculations 60 . In particular, simulations were done both with and without including a Hubbard-like Coulomb term U in the PBE parametrization. In the calculations, the radii of the muffin-tin atomic spheres of Ge, Te and Fe were 2.3, 2.5 and 2.11 a.u., respectively. The atomic radii were chosen such that the mutual overlaps between all combinations of interstitial and atomic spheres are within the permissible limit of the atomic sphere approximation. Moreover, the distinction between the valence and core states was made through an energy criterion, with a value of −6 Ry taken as the boundary separating the core and valence electron states.
The cut-off parameter in the calculations (R MT K MAX ) was set to 7.0, and a 2 × 2 × 2 supercell with a mesh of 4 × 4 × 4 k-points in the irreducible part of the first Brillouin zone was used in the GGA approximation. In this approach, the isomer shift δ and the quadrupole splitting ΔE Q were calculated from the contact densities (ρ) and the principal component (V zz ) of the electric field gradient, respectively, as reported in the literature 61 . In particular, ΔE Q is calculated in the axially symmetric electric field gradient approximation 34 .
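As a rough illustration of the second conversion, the sketch below evaluates ΔE Q = eQV zz /2 for an axially symmetric EFG and expresses the result in Doppler-velocity units; the quadrupole moment Q = 0.16 b for the 14.4 keV level of 57 Fe is a commonly used literature value assumed here, not a number taken from this paper.

```python
Q_57FE_M2 = 0.16e-28   # assumed quadrupole moment of the 14.4 keV state (m^2)
E_GAMMA_EV = 14.4e3    # Moessbauer transition energy in eV
C_MM_S = 2.998e11      # speed of light in mm/s

def quadrupole_splitting_mm_s(vzz_V_per_m2):
    """Quadrupole splitting of the 57Fe 14.4 keV level for an axially
    symmetric EFG (eta = 0): dE_Q = e*Q*Vzz/2. Since 1 eV = e * 1 V,
    Q*Vzz/2 is numerically the splitting in eV; it is then converted
    to the equivalent Doppler velocity."""
    dE_eV = 0.5 * Q_57FE_M2 * vzz_V_per_m2
    return dE_eV / E_GAMMA_EV * C_MM_S

print(quadrupole_splitting_mm_s(6.0e21))  # ~1.0 mm/s for Vzz = 6e21 V/m^2
```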
The non-negligible hybridization between the d-valence band of Fe and the p-valence band of Te in the tetrahedral configuration makes it necessary to include a Hubbard-like Coulomb term U in the GGA approximation. The U term is generally estimated by comparing calculated and measured physical properties. Assuming U = 3 eV, the total magnetic moment of Fe Ge was calculated in both the GGA and GGA + U approximations: for a-GeTe it is 0.90 and 2.35 μ B , respectively, while for c-GeTe it is 2.38 μ B without including the U term. The difference between a-GeTe and c-GeTe is due to the fact that in the local octahedral configuration (c-GeTe) the hybridization between d and p orbitals is lower than in the amorphous state. Therefore, even without inclusion of the U term, the total magnetic moment is close to the value for an isolated Fe atom. | 7,680.4 | 2017-08-15T00:00:00.000 | [
"Materials Science",
"Physics"
] |
An Analysis of Illocutionary Acts of Hillary Clinton’s Concession Speech to Donald Trump in Presidential Election
As social beings, people always want to relate to other human beings. They want to know and interact with their surrounding environment, and to do so they need language to communicate. Language is foremost a means of communication. Communication always takes place within some sort of social context and is integrally intertwined with our notions of who we are on both the personal and the broader, societal levels. According to Wardhaugh (2010), language is a system of vocal symbols used for human communication. When we use language, we communicate our individual thoughts, as well as the cultural beliefs and practices of the communities of which we are a part: our families, social groups, and other associations.
Language cannot be separated from pragmatics, because pragmatics is the study of the ability to connect and relate sentences to contexts outside the language itself. By definition, pragmatics is the study of the relations between language and context that are encoded in the structure of a language. In another view, from Yule (1996), pragmatics is the study of the relationship between linguistic forms and the users of those forms.
Speech act theory is among the most interesting topics in the study of pragmatics and seems relevant to language teaching and language learning. A speech act is an action performed by someone in saying and doing something. According to Austin (1962), there are three types of speech acts, namely locutionary acts, illocutionary acts and perlocutionary acts. A locutionary act is the literal meaning of the utterance, whereas an illocutionary act refers to the extra meaning of the utterance: it is what the speaker wants to achieve by uttering something, an utterance which has a particular conventional force. With regard to this, the illocutionary acts in Hillary Clinton's speech are interesting to analyze. The purpose of this research is to analyze the types of illocutionary acts found in Hillary Clinton's concession speech to Donald Trump. The writer used descriptive qualitative research. The main research instrument was the writer herself, supported by the data analysis sheet. The data analysis was performed by categorizing the data based on Searle's categorization of speech acts (2005), which includes assertives, directives, commissives, expressives and declaratives. Each category was thoroughly observed to find the answers to the research questions. The final step was presenting the data and drawing a conclusion in reference to the findings of the research. The research findings show that the types of illocutionary acts found in Hillary Clinton's concession speech to Donald Trump consist of assertives, directives, commissives, expressives and declaratives. Assertives have the highest frequency of occurrence, with 13 instances (36.1%). They are followed by directives, commissives, expressives and declaratives, which occur 9 times (25%), 3 times (8.3%), 9 times (25%) and 2 times (5.6%), respectively. The dominant illocutionary acts in Hillary Clinton's speech are assertives, with assertion showing the highest frequency among the assertives. In total, 36 instances of illocutionary acts were found in Hillary Clinton's concession speech to Donald Trump.
Keywords:
Illocutionary acts, assertives, directives, commissives, expressives and declaratives.

A locutionary act is produced on the basis of its literal meaning, while perlocutionary acts deal with the effects of the utterances on the hearer, depending on specific circumstances.
Of those three acts, Yule (2014:49) states that illocutionary acts are the most often discussed acts in pragmatics. Indeed, the concept of speech acts is often narrowed down to illocutionary acts, and illocutionary acts can be classified based on their functions. This classification is according to how illocutionary acts relate to the social goals of establishing and maintaining politeness. The four types of illocutionary act functions are competitive, convivial, collaborative and conflictive. Illocutionary acts are therefore the main focus of the analysis of Hillary Clinton's speech.
A related study was conducted by Saputro (2015), entitled "The Analysis of Illocutionary Acts of Jokowi's Speeches". His research analyzed two selected speeches delivered by Jokowi at the APEC CEO Summit 2014 Forum and the World Economic Forum. The research focused on the types of illocutionary acts found in Jokowi's speeches, how Jokowi performed such illocutionary acts viewed from the context of situation underlying the speeches, and the possible perlocutionary effects of performing the dominant illocutionary acts. In this study, by contrast, the writer analyzed one of Hillary Clinton's speeches and did not focus on perlocutionary effects or the context of situation, but rather on the types of illocutionary acts in the speech. The writer compared the studies above with this study as references that contribute to this research.
A number of U.S. and international media outlets described Hillary Clinton's speech as elegant and exquisite. Furthermore, the speech inspired supporters to uphold worthy values in public life and to fight for what they believe in. Most importantly, she has inspired women around the world and a new generation to build a better United States in the future. With regard to this, the writer analyzes the types of illocutionary acts used by Hillary Clinton in her speech.
In sum, this study addresses the following problem: what types of illocutionary acts are performed by Hillary Clinton in her concession speech to Donald Trump?
VIII. Method
This study uses descriptive qualitative research. Descriptive qualitative research is a systematic, subjective approach used to describe life experiences and give them meaning. Sugiyono (2016:12) states that qualitative research is a method that is used to collect data in the form of words or pictures rather than numbers. In this study, the writer uses descriptive qualitative research because it can systematically analyze the facts and characteristics of the data, especially the types of illocutionary acts used by Hillary Clinton in her speech to Donald Trump.
A. Data Analysis
The writer uses the speech act theories of Austin (1962) and Searle (2005) in the data analysis. Data analysis is the process of systematically searching and arranging the interview transcripts, field notes and other materials that you accumulate, to increase your own understanding of them and to enable you to present what you have discovered to others (Sugiyono, 2016:244).
The writer categorized the data based on Austin's (1962) theory of illocutionary acts and Searle's (2005) typology of illocutionary acts, which consists of assertives, directives, commissives, expressives and declaratives. The writer observed and calculated the frequency of occurrence of the illocutionary acts so that the data could be easily read, as illustrated in the following table. Table 4.2 shows that the illocutionary acts found in Hillary Clinton's speech consist of assertives, directives, commissives, expressives and declaratives. Assertives have the highest frequency of occurrence, with 13 instances (36.1%). They are followed by directives, commissives, expressives and declaratives, which occur 9 times (25%), 3 times (8.3%), 9 times (25%) and 2 times (5.6%), respectively. Furthermore, the types of assertives are claims and conclusions. The types of directives consist of requesting, commanding and suggesting. Commissives include promising and offering. The types of expressives are thanking, congratulating, apologizing and deplore. Finally, the type of declaratives is declaring.
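The frequency table can be reproduced with a simple tally, as sketched below; the label list merely mirrors the reported counts and is not the annotated data itself.

```python
from collections import Counter

# Labels mirroring the reported counts (36 instances in total)
labels = (["assertive"] * 13 + ["directive"] * 9 + ["expressive"] * 9
          + ["commissive"] * 3 + ["declarative"] * 2)

counts = Counter(labels)
total = sum(counts.values())
for act, n in counts.most_common():
    print(f"{act}: {n} ({100 * n / total:.1f}%)")
# assertive: 13 (36.1%), directive: 9 (25.0%), expressive: 9 (25.0%),
# commissive: 3 (8.3%), declarative: 2 (5.6%)
```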
VI. Conclusion
After analyzing the data, it is important to conclude what was elaborated before. This research is concerned with the pragmatic analysis of the types of illocutionary acts in reference to Searle's categorization of speech acts (2005). The conclusion of this research is based on the statement of the problem, ''what types of illocutionary acts are performed by Hillary Clinton in her concession speech to Donald Trump?'', as the writer focused on analyzing the types of illocutionary acts in Hillary Clinton's concession speech to Donald Trump in reference to Searle's categorization of speech acts (2005). The writer found the following types of illocutionary acts: assertives, with 13 instances (36.1%), followed by directives, commissives, expressives and declaratives, which occur 9 times (25%), 3 times (8.3%), 9 times (25%) and 2 times (5.6%), respectively. The types of assertives include assertion, claims and conclusions. The types of directives consist of requesting, commanding and suggesting. Commissives include promising and offering. The types of expressives are thanking, congratulating, apologizing and deplore. Finally, the type of declaratives is declaring. In total, 36 instances of illocutionary acts were found in Hillary Clinton's concession speech to Donald Trump. | 1,951.6 | 2019-03-05T00:00:00.000 | [
"Linguistics"
] |
Learning to Read Maps: Understanding Natural Language Instructions from Unseen Maps
Robust situated dialog requires the ability to process instructions based on spatial information, which may or may not be available. We propose a model, based on LXMERT, that can extract spatial information from text instructions and attend to landmarks on OpenStreetMap (OSM) referred to in a natural language instruction. Whilst OSM is a valuable resource, as with any open-source data there is noise and variation in the names referred to on the map, as well as variation in natural language instructions, hence the need for data-driven methods over rule-based systems. This paper demonstrates that the gold GPS location can be predicted from the natural language instruction and metadata with 72% accuracy for previously seen maps and 64% for unseen maps.
Introduction
Spoken dialog systems are moving into real-world situated dialog, such as assisting with emergency response and remote robot instruction, which require knowledge of maps or building schemas. Effective communication of such an intelligent agent about events happening with respect to a map requires learning to associate natural language with the world representation found within the map. This symbol grounding problem (Harnad, 1990) has been largely studied in the context of mapping language to objects in situated simple (MacMahon et al., 2006; Johnson et al., 2017) or 3D photorealistic environments (Kolve et al., 2017; Savva et al., 2019), static images (Ilinykh et al., 2019; Kazemzadeh et al., 2014), and to a lesser extent on synthetic (Thompson et al., 1993) and real geographic maps (Paz-Argaman and Tsarfaty, 2019; Haas and Riezler, 2016; Götze and Boye, 2016). The tasks usually relate to navigation (Misra et al., 2018) or action execution (Bisk et al., 2018; Shridhar et al., 2019) and assume giving instructions to an embodied egocentric agent with a shared first-person view. Since most rely on the visual modality to ground natural language (NL), referring to items in the immediate surroundings, they are often less geared towards the accuracy of the final goal destination.

Figure 1: User instruction and the corresponding image, displaying 4 robots and landmarks. The users were not restricted or prompted to use specific landmarks on the map. The circle around the target landmark was added for clarity for this paper; users were not given any such visual hints.
The task we address here is the prediction of the GPS coordinates of this goal destination by reference to a map, which is of critical importance in applications such as emergency response, where specialized personnel or robots need to operate at an exact location (see Fig. 1 for an example). Specifically, the goal we are trying to predict is in terms of: a) the GPS coordinates (latitude/longitude) of a referenced landmark; b) a compass direction (bearing) from this referenced landmark; and c) the distance in meters from the referenced landmark. This is done by taking as input to a model: i) the knowledge base of the symbolic representation of the world, such as landmark names and regions of interest (metadata); ii) the graphic depiction of a map (visual modality); and iii) a worded instruction.
Our approach to the destination prediction task is two-fold. The first stage is a data collection for the "Robot Open Street Map Instructions" (ROSMI) (Katsakioris et al., 2020) corpus based on OpenStreetMap (Haklay and Weber, 2008), in which we gather and align NL instructions to their corresponding target destinations. We collected 560 NL instruction pairs on 7 maps with a variety of layouts and landmarks, in the domain of emergency response, using Amazon Mechanical Turk. The subjects are given a scene in the form of a map and are tasked to write an instruction to command a conversational assistant to direct robots and autonomous systems to either inspect an area or extinguish a fire. The setup intentionally emulated a typical 'Command and Control' interface found in emergency response hubs, in order to promote instructions that accurately describe the final destination with regard to its surrounding map entities.
Whilst OSM and other crowdsourced resources are hugely valuable, there is an element of noise associated with the metadata collected in terms of the names of the objects on the map, which can vary for the same type of object (e.g. newsagent/kiosk, confectionery/chocolate store etc.), whereas the symbols on the map are from a standard set, which one hypothesizes a vision-based trained model could pick up on. To this end, we developed a model that leverages both vision and metadata to process the NL instructions.
Specifically, our MAPERT (Map Encoder Representations from Transformers) is a Transformer-based model based on LXMERT. It comprises up to three single-modality encoders, one for each input (i.e., vision, metadata and language), early fusion-of-modalities components, and a cross-modality encoder, which fuses the map representation (metadata and/or vision) with the word embeddings of the instruction in both directions, in order to predict the three outputs, i.e., the reference landmark location on the map, the bearing and the distance.
Our contributions are thus three-fold: • A novel task for final GPS destination prediction from NL instructions with accompanying ROSMI dataset 1 .
• A model that predicts GPS goal locations from a map-based natural language instruction.
• A model that is able to understand instructions referring to previously unseen maps.
Related Work
Situated dialog encompasses various aspects of interaction. These include: situated Natural Language Processing (Bastianelli et al., 2016); situated reference resolution (Misu, 2018); language grounding (Johnson et al., 2017); visual question answering/visual dialog (Antol et al., 2015); dialog agents for learning visually grounded word meanings and learning from demonstration (Yu et al., 2017); and Natural Language Generation (NLG), e.g. of situated instructions and referring expressions (Byron et al., 2009; Kelleher and Kruijff, 2006). Here, work on instruction processing for destination mapping and navigation is discussed, as well as language grounding and referring expression resolution, with an emphasis on 2D/3D real-world and map-based applications.
Language grounding refers to interpreting language in a situated context and includes collaborative language grounding toward situated human-robot dialog (Chai et al., 2016), city exploration (Boye et al., 2014), as well as following high-level navigation instructions. Mapping instructions to low-level actions has been explored in structured environments by mapping raw visual representations of the world and text onto actions using Reinforcement Learning methods (Misra et al., 2017; Xiong et al., 2018; Huang et al., 2019). This work has recently been extended to controlling autonomous systems and robots through human language instruction in 3D simulated environments (Ma et al., 2019; Misra et al., 2018; Blukis et al., 2019) and Mixed Reality (Huang et al., 2019), and using imitation learning. These systems perform goal prediction and action generation to control a single Unmanned Aerial Vehicle (UAV), given a natural language instruction, a world representation and/or robot observations. However, where this prior work uses raw pixels to generate a persistent semantic map from the system's line-of-sight image, our model is able to leverage both pixels and metadata, when available, in a combined approach. Other approaches include neural mapping of navigational instructions to action sequences (Mei et al., 2015), which does include a representation of the observable world state, but this is more akin to a maze than a complex map. With respect to the task, our model looks to predict GPS locations. There are few related works that attempt this challenging task. One study, as part of the ECML/PKDD challenge (de Brébisson et al., 2015), uses Neural Networks for Taxi Destination Prediction as a sequence of GPS points. However, this does not include processing natural language instructions. SPACEREF (Götze and Boye, 2016) is perhaps the closest to our task, in that it entails both GPS tracks in OSM and annotated mentions of spatial entities in natural language. However, it is different in that these spatial entities are viewed and referred to in a first-person view, rather than as entities on a map (e.g. "the arch at the bottom").
In terms of our choice of model, attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017; Xu et al., 2015) have proven to be very powerful in language and vision tasks, and we draw inspiration from the way Xu et al. (2015) use attention to solve image captioning by associating words with spatial regions within a given image.
Data
As mentioned above, the task is based on OpenStreetMap (OSM) (Haklay and Weber, 2008). OSM is a massively collaborative project, started in 2004, with the main goal of creating a free editable map of the world. The data is available under the Open Data Commons Open Database Licence and has been used in some prior work (Götze and Boye, 2016; Hentschel and Wagner, 2010; Haklay and Weber, 2008). It is a collection of publicly available geodata that is constantly updated by the public and consists of many layers of various geographic attributes of the world. Physical features such as roads or buildings are represented using tags (metadata) that are attached to its basic data structures. A comprehensive list of all the possible features available as metadata can be found online 2 . There are two types of objects, nodes and ways, with unique IDs, that are described by their latitude/longitude (lat/lon) coordinates. Nodes are single points (e.g. coffee shops) whereas ways can be more complex structures, such as polygons or lines (e.g. streets and rivers). For this study, we train and test only on data that uses single points (nodes) and polygons (using the centre point), and leave understanding more complex structures as future work.

2 wiki.openstreetmap.org/wiki/Map_Features

We train and evaluate our model on ROSMI, a new multimodal corpus. This corpus consists of visual and natural language instruction pairs in the domain of emergency response. In this data collection, the subjects were given a scene in the form of an OSM map and were tasked to write an instruction to command a conversational assistant to direct a number of robots and autonomous systems to either inspect an area or extinguish a fire. Figure 1 shows an example of such a written instruction. These types of emergency scenarios usually have a central hub for operators to observe and command humans and Robots and Autonomous Systems (RAS) to perform specific functions, where the robotic assets are visually observable as an overlay on top of the map. Each instruction datapoint was manually checked, and if it did not match the 'gold standard' GPS coordinate per the scenario map, it was discarded. The corpus was manually annotated with the ground truth for: (1) a link between the NL instruction and the referenced OSM entities; and (2) the distance and bearing from this referenced entity to the goal destination. The ROSMI corpus thus comprises 560 tuples of instructions, maps with metadata, and target GPS locations.
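For ways represented as polygons, the centre point used above can be approximated, for illustration, by averaging the polygon's vertex coordinates; the sketch below shows this simple scheme (a true area-weighted centroid would differ for irregular shapes), and the example coordinates are invented.

```python
def polygon_centre(points):
    """Approximate centre of an OSM way given its node coordinates.
    points: list of (lat, lon) tuples. A plain vertex average is used,
    which only approximates the true area-weighted centroid."""
    lats, lons = zip(*points)
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Hypothetical park outline as (lat, lon) pairs -- illustrative values only
park = [(32.6851, -117.1702), (32.6853, -117.1689),
        (32.6845, -117.1687), (32.6843, -117.1700)]
print(polygon_centre(park))
```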
There are three linguistic phenomena of note that we observe in the data collected. Firstly, Landmark Grounding: each scenario has 3-5 generated robots and an average of 30 landmarks taken from OSM. Each subject could refer to any of these objects on the map in order to complete the task. Grounding the right noun phrase to the right OSM landmark or robot is crucial for accurately predicting the gold-standard coordinate, e.g. send husky11 62m to the west direction or send 2 drones near Harborside Park.
Secondly, Bearing/Distance factors need to be extracted from the instruction such as numbers (e.g. 500 meters) and directions (e.g. northwest, NE) and these two items typically come together. For example, "send drone11 to the west about 88m".
Thirdly, Spatial Relations are where prepositions are used instead of distance/bearing (e.g. near, between), and are thus more vague. For example, "Send a drone near the Silver Strand Preserve".
Task Formulation
An instruction is taken as a sequence of word tokens w = <w 1 , w 2 , . . . w N > with w i ∈ V, where V is a vocabulary of words, and the corresponding geographic map I is represented as a set of M landmark objects o i = (bb, r, n), where bb is a 4-dimensional vector with bounding box coordinates, r is the corresponding Region of Interest (RoI) feature vector produced by an object detector, and n = <n 1 , n 2 . . . n K > is a multi-token name. We define a function f : (w, I) → ŷ that maps the instruction and the map to the predicted goal GPS coordinates ŷ (Equation 1). Since predicting ŷ directly from w is a harder task, we decompose it into three simpler components, namely predicting a reference landmark location l ∈ M, the compass direction (bearing) b 3 , and a distance d from l in meters. Then we trivially convert to the final GPS position coordinates. Equation 1 now becomes ŷ = f(w, I) = convert(l, b, d), where convert denotes the deterministic conversion from reference landmark, bearing and distance to GPS coordinates.
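To make the final conversion step concrete, the sketch below implements the standard great-circle destination-point formula; the function name and example coordinates are ours, and the paper does not state which geodesic convention it actually uses.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in meters

def destination(lat, lon, bearing_deg, distance_m):
    """Goal GPS coordinates given a reference landmark (lat/lon in deg),
    a compass bearing (deg, clockwise from north) and a distance (m)."""
    phi1, lam1 = math.radians(lat), math.radians(lon)
    theta = math.radians(bearing_deg)
    delta = distance_m / EARTH_R            # angular distance
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(
        math.sin(theta) * math.sin(delta) * math.cos(phi1),
        math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

# e.g. "send husky11 62m to the west": bearing 270 deg, distance 62 m
print(destination(32.684, -117.169, 270.0, 62.0))
```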
Model Architecture
Inspired by LXMERT (Tan and Bansal, 2019), we present MAPERT, a Transformer-based (Vaswani et al., 2017) model with three separate single-modality encoders (for NL instructions, metadata and visual features) and a cross-modality encoder that merges them. Fig. 2 depicts the architecture. In the following sections, we describe each component separately.

Metadata Encoder OSM comes with useful metadata in the form of bounding boxes (around the landmark symbols) and names of landmarks on the map. We represent each bounding box as a 4-dimensional vector bb meta k and each name (n k ) using another Transformer initialized with pretrained BERT weights. We treat metadata as a bag of names, but since each word can have multiple tokens, we output position embeddings pos n k for each name separately; h n k are the resulting hidden states, with h n k,0 being the hidden state for [CLS].
Instructions Encoder
Visual Encoder Each map image is fed into a pretrained Faster R-CNN detector (Ren et al., 2015), which outputs bounding boxes and RoI feature vectors bb k and r k for k objects. In order to learn better representations for landmarks, we fine-tuned the detector on around 27k images of maps to recognize k objects {o 1 , .., o k } and classify landmarks into 213 manually-cleaned classes from OSM; we fixed k to 73 landmarks. Finally, a combined position-aware embedding v k was learned by adding together projections of the vectors bb k and r k , as in LXMERT: v k = FF(r k ) + FF(bb k ), where FF are feed-forward layers with no bias.
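A minimal numpy sketch of such a position-aware embedding is shown below; the 2048-dimensional RoI feature size and the random projection weights are assumptions for illustration and are not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 768                                          # hidden size, as in BERT-base
W_roi = rng.normal(scale=0.02, size=(2048, D))   # RoI features -> hidden (2048-d assumed)
W_box = rng.normal(scale=0.02, size=(4, D))      # bounding box -> hidden

def position_aware_embedding(r_k, bb_k):
    """Combine a RoI feature vector and its bounding box into one
    position-aware landmark embedding via two bias-free projections."""
    return r_k @ W_roi + bb_k @ W_box

v_k = position_aware_embedding(rng.normal(size=2048),
                               np.array([0.1, 0.2, 0.4, 0.5]))
print(v_k.shape)  # (768,)
```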
Variants for Fusion of Input Modalities
We describe three different approaches to combining knowledge from maps with the NL instructions:
Metadata and Language
The outputs of the metadata and language encoders are fused by conditioning each landmark name n i on the instruction sequence via a uni-directional cross-attention layer (Fig. 3). We first compute the attention weights A k between the name tokens n k,i of each landmark o k and the instruction words in h w 4 , and re-weight the hidden states h n k to get the context vectors c n k . We then pool them using the context vector for the [CLS] token of each name, h meta k = c n k,0 . We can also concatenate the bounding box bb meta k to the final hidden states, giving h meta k = [c n k,0 ; bb meta k ].

Metadata+Vision and Language All three modalities were fused to verify whether vision can aid the metadata information for the final GPS destination prediction task (Fig. 4). First, we filter the landmarks o i based on the Intersection over Union between the bounding boxes found in the metadata (bb meta k ) and those predicted with Faster R-CNN (bb k ), thus keeping their corresponding names n i and visual features v i . Then, we compute the instruction-conditioned metadata hidden states h meta i , as described above, and multiply them element-wise with every object v i to get the final context vectors, h meta+vis i = h meta i ⊙ v i .

Figure 4: Fusion of metadata, vision and language modalities. Metadata are first conditioned on the instruction tokens as shown in Fig. 3. Then, they are multiplied with the visual features of every landmark.
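The uni-directional conditioning step can be sketched as single-head dot-product cross attention, as below; the dimensions, the single attention head and the scaling are simplifications of ours.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head dot-product cross attention: each query row attends
    over the rows of keys_values and returns its context vector."""
    d = queries.shape[-1]
    weights = softmax(queries @ keys_values.T / np.sqrt(d), axis=-1)
    return weights @ keys_values

# Landmark-name token states attending over instruction token states
h_name = np.random.randn(5, 768)    # tokens of one landmark name ([CLS] first)
h_instr = np.random.randn(12, 768)  # instruction tokens
c = cross_attention(h_name, h_instr)
h_meta = c[0]                       # pooled via the [CLS] position, as in the text
```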
Map-Instructions Fusion
So far we have conditioned modalities in one direction, i.e., from the instruction to the metadata and visual features. In order to capture the influence between map and instructions in both ways, a cross-modality encoder was implemented (right half of Fig. 2). Firstly, each modality passes through a self-attention and a feed-forward layer to highlight inter-dependencies. Then these modulated inputs are passed to the actual fusion component, which consists of one bi-directional cross-attention layer, two self-attention layers, and two feed-forward layers. The cross-attention layer is a combination of two unidirectional cross-attention layers, one from the instruction tokens (h w ) to the map representations (either of h meta k , v k or h meta+vis k ; we refer to them below as h map k ) and vice-versa: h′ w = CrossAtt(h w , h map k ) and h′ map k = CrossAtt(h map k , h w ). Note that representing h map k with vision features v k only is essentially a fusion between the vision and language modalities. This is a useful variant of our model for measuring whether the visual representation of a map alone is as powerful as metadata, specifically for accurately predicting the GPS location of the target destination.
Output Representations and Training
As shown in the right-most part of Fig. 2, our MAPERT model has three outputs: landmarks, distances, and bearings. We treat each output as a classification sub-task, i.e., predicting one of the k landmarks in the map; identifying in the NL instruction the start and end positions of the sequence of tokens that denotes a distance from the reference landmark (e.g., '500m'); and a bearing label. MAPERT's output comprises two feature vectors, one for the vision and one for the language modality, generated by the cross-modality encoder.
More specifically, for the bearing predictor, we pass the hidden state out w,0 , corresponding to [CLS], to a FF followed by a softmax layer. Predicting distance is similar to span prediction in Question Answering tasks; we project each of the tokens in out w down to 2 dimensions corresponding to the distance span boundaries in the instruction sentence. If there is no distance in the sentence, e.g., "Send a drone at Jamba Juice", the model learns to predict, both as start and end position, the final end-of-sentence symbol, as an indication of the absence of a distance. Finally, for landmark prediction we project each of the k map hidden states out map k to a single dimension corresponding to the index of the i th landmark.
We optimize MAPERT by summing the cross-entropy losses for each of the classification sub-tasks. The final training objective becomes: L = L landmark + L distance + L bearing (17).

Experimental Setup

Implementation Details We evaluate our model on the ROSMI dataset and assess the contribution of the metadata and vision components as described above. For the attention modules, we use a hidden layer size of 768, as in BERT BASE , and we set the number of all the encoder and fusion layers to 1. We initialize the embedding layers with pretrained BERT weights (we also show results with randomly initialized embeddings). We trained our model using Adam (Kingma and Ba, 2015) as the optimizer with a linear-decayed learning-rate schedule (Tan and Bansal, 2019) for 90 epochs, a dropout probability of 0.1 and a learning rate of 10 −3 .

Evaluation Metrics We use 10-fold cross-validation for our evaluation methodology. This results in a less biased estimate of the accuracy than a single train/test split, given the modest size of the dataset. In addition, we performed a leave-one-map-out cross-validation, as in Chen and Mooney (2011). In other words, we use 7-fold cross-validation, and in each fold we use six maps for training and one map for validation. We refer to these scenarios as zero-shot 5 since, in each fold, we validate our data on an unseen map scenario. With the three outputs of our model, landmark, distance and bearing, we indirectly predict the destination location. Success is measured by the Intersection over Union (IoU) between the ground truth destination location and the calculated destination location. IoU measures the overlap between two bounding boxes and, as in Everingham et al. (2010), must exceed 0.5 (50%) for a prediction to count as successful, by the formula IoU = area of overlap / area of union. Since we are dealing with GPS coordinates but also image pixels, we report two error evaluation metrics. The first is a size-weighted Target error (T err) in meters, which is the distance in meters between the predicted GPS coordinate and the ground truth coordinate. The second is a Pixel Error (P error), which is the difference in pixels between the predicted point in the image and the ground truth converted from the GPS coordinate.
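A reference implementation of this success criterion for axis-aligned boxes might look as follows; the (x1, y1, x2, y2) box format is an assumption of the sketch.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2);
    a prediction counts as successful when IoU > 0.5."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143 -> not successful
```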
Comparison of Systems We evaluate our system on three variants using different fusion techniques, namely Meta and Language; Meta+Vision and Language; and Vision and Language. Ablations for these systems are shown in Table 1 and are further analyzed in Section 6. We also compare MAPERT to a strong baseline, BERT. The baseline is essentially MAPERT but without the bidirectional cross-attention layers in the pipeline (see Fig. 2). Note that the Oracle of Meta and Language has 100% (upper bound) on both cross-validation splits of ROSMI, whereas the oracle of any model that utilizes visual features is 80% in the 10-fold and 81.98% in the 7-fold cross-validation (lower bound). In other words, the GPS predictor can only work with the automatically predicted entities output by Faster R-CNN, of which 20% are inaccurate. Table 1 shows results on both oracles, with the subscript lower indicating the lower bound oracle and upper indicating the "Upper Bound" oracle. In Table 2, all systems are projected onto the lower bound oracle, so as to compare them on the same footing. Table 2 shows the results of our model for Vision, Meta and Meta+Vision on both the 10-fold cross-validation and the 7-fold zero-shot cross-validation. We see that the Meta variant of MAPERT outperforms all other variants and our baseline. However, looking at the 10-fold results, Meta+Vision's accuracy of 69.27% comes almost on par with Meta's 71.81%. If we have the harder task of no metadata, with only the visuals of the map to work with, we can see that the Vision component works reasonably well, with an accuracy of 60.36%. This Vision component, despite being at a disadvantage, manages to learn the relationship between visual features and an instruction and vice-versa, compared to our baseline, which has no crossing between the modalities whatsoever, reaching only 33.82%. When we compare these results to the zero-shot paradigm, we see only a 10.5% reduction using Meta.

Error Analysis In order to understand where the Vision and Meta models' comparative strengths lie, we show some example outputs in Fig. 5. In examples 1 & 2 in this figure, we see the Meta model failing to identify the correct landmark because the instruction is formulated in a way that allows the identification of two landmarks. It is a matter of which landmark to choose, and of the bearing and distance that come with it, in order to successfully predict the destination location. However, the Meta model mixes up the landmarks and the bearings. We believe the Meta model perhaps struggles with spatial relations such as "near". The Vision model, on the other hand, successfully picks up the three correct components for the prediction. This might be helped by the familiarity of the symbolic representation of the robots (husky, drones, AUVs), which it is able to pick up and use as landmarks in situations of uncertainty such as this one. Both models can fail in situations of both visual and metadata ambiguity. In the third example, the landmark (Harborside Park) is not properly specified and both models fail to pinpoint the correct landmark, since further clarification would be needed. The final example in Fig. 5 shows a situation in which the Meta model works well without the need for a specific distance and bearing. The Vision model manages to capture that, but it fails to identify the correct landmark.
Conclusion and Future Work
We have developed a model that is able to process instructions on a map using metadata from rich map resources such as OSM, and can do so for maps that it has not seen before with only a 10% reduction in accuracy. If no metadata is available then the model can use Vision, although this is clearly a harder task. Vision does seem to help in examples where there is a level of uncertainty, such as with spatial relations or ambiguity between entities. Future work will involve exploring this further by training the model on these types of instructions and on metadata that are scarce and inaccurate. Finally, these instructions will be used in an end-to-end dialog system for remote robot planning, whereby multi-turn interaction can handle ambiguity and ensure reliable and safe destination prediction before instructing remote operations. | 5,710.6 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Geography"
] |
Three-Dimensional Cellular Automata Simulation of the Austenitizing Process in GCr15 Bearing Steel
On the basis of a two-dimensional cellular automaton model, a three-dimensional cellular automaton model of the austenitizing process was established. By considering the orientation of the pearlite layers and the direction of austenite grain growth, the velocity of the interface during the austenitizing process was calculated. The austenitizing process of GCr15 steel was simulated, and the anisotropy of the grain growth rate during austenitization was demonstrated by the simulation results. By comparing the simulation results with experimental data, it was found that the calculated results of the three-dimensional cellular automaton model established in this paper were in good agreement with the experimental results. By using this model, the three-dimensional austenitizing process of GCr15 steel at different temperatures and for different processing times can be analyzed, and the degree of austenitization can be predicted.
Introduction
The structure of a bearing steel continuous casting billet at normal atmospheric temperature is mainly composed of lamellar pearlite and carbide. Pearlite is a mixed structure formed by interleaved lamellae of ferrite and cementite. When the steel temperature exceeds the transition temperature, the steel spontaneously undergoes an austenitizing transformation. It is generally believed that the austenitizing process is a transformation process of nucleation and growth; the nucleation rate of the nuclei and the growth rate of the grains together determine the rate of the austenitizing transformation.
Speiche et al. [1] pointed out that austenite can nucleate at the interface between cementite and ferrite. Roosz et al. [2] believed that austenite can nucleate at three kinds of cementite–ferrite interfaces: the internal interfaces of a pearlite group, the interfaces at the edge of the pearlite, and the interfaces at the corners of the pearlite. A relationship between the nucleation rate of austenite and the morphology of pearlite was obtained by experiment. Shtansky et al. [3] observed austenite nucleation inside and at the boundary of the pearlite using transmission electron microscopy. Another view is that austenite nucleation occurs mainly at the boundaries of pearlite clusters [4]. Li et al. [5] further pointed out that austenite mainly nucleates at high-angle pearlite interfaces. Combining the two viewpoints, it can be considered that austenite nucleates mainly at the boundaries of the pearlite groups, with some nucleation also occurring inside the pearlite groups.
For the austenitizing process, in addition to experimental observation of the metallographic structure of samples, many scholars have also carried out numerical simulation studies on the transformation of pearlite to austenite. Akbay et al. [6] established a simple model for the transformation of lamellar ferrite and cementite into austenite, and obtained analytical solutions and numerical results.
Three-Dimensional Cellular Automaton Model
The cellular automaton model is a mathematical model that is time-discrete and spatially discrete. Each discrete point is assigned a state parameter whose possible states are also discrete and finite. The cellular automaton model follows the rules of local evolution. In the evolution process of each time step, the state of a certain point is determined by the state of the points around it, defined as neighbors, and is independent of other points. With the cellular automaton model, many complex continuous changes can be discretized into simple local evolution processes to achieve the reproduction of some complex phenomena.
A cellular automaton model consists of cells, cell states, the cell space, neighbor types, and evolution rules.
(1) The cell. The cell is the basic unit of the cellular automaton model; it is the carrier of the various state parameters and the executor of the evolution rules. In the simulation of the evolution of a material, the cell is the unit into which the material is discretized. For a two-dimensional model, the cell shape is mainly a square, regular hexagon, or equilateral triangle; for a three-dimensional model, it is mainly a truncated octahedron, cube, or sphere.
(2) The state of the cell. The set of cell states comprises all states of the material, which are finite in number and discrete. In principle, one cell can have only one state, but in practical applications multiple state variables can coexist.
(3) The cell space. The space is a collection of the cells. The set of multiple cells can be seen as a cell space.
(4) The neighbor type. For a specific lattice point, the lattice points of the local area that are involved in the evolution rule are called its neighbor lattice points. The distance between a lattice point and the specific lattice point is generally used to determine whether that lattice point is a neighbor. For a space composed of square or cubic cells, there are two main neighbor types, namely Von Neumann neighbors and Moore neighbors, as shown in Figure 1.
(5) The evolution rules. The cell and cell space are the basic units of the model. To realize the dynamic evolution process of the model, it is necessary to add evolution rules. An evolution rule is built according to a specific physical process, and in an evolution rule only the states of the specific lattice point and its neighbors are involved. The boundary conditions of the cellular automaton model mainly include fixed boundary conditions, symmetric boundary conditions, and periodic boundary conditions. The periodic boundary condition was used in the calculations in this paper. During the calculation process, all lattice points were updated synchronously.
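To make the machinery above concrete, the following is a minimal sketch (not the paper's C#.NET program) of a cubic 3D grid with Moore-type neighbors, periodic boundary conditions and synchronous updates. The growth rule here is a placeholder probability; the paper's actual rule is given later via Equations (5)–(7).

```python
import numpy as np

PEARLITE, AUSTENITE = 0, 1

def moore_offsets():
    # All 26 offsets of the 3x3x3 Moore neighborhood, excluding the center.
    return [(dx, dy, dz)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0)]

def step(state: np.ndarray, p_grow: float, rng) -> np.ndarray:
    new_state = state.copy()  # synchronous update: read old grid, write new grid
    has_austenite_neighbor = np.zeros_like(state, dtype=bool)
    for off in moore_offsets():
        # np.roll implements the periodic boundary condition.
        has_austenite_neighbor |= (np.roll(state, off, axis=(0, 1, 2)) == AUSTENITE)
    grow = (state == PEARLITE) & has_austenite_neighbor \
           & (rng.random(state.shape) < p_grow)
    new_state[grow] = AUSTENITE
    return new_state

rng = np.random.default_rng(0)
grid = np.zeros((40, 40, 40), dtype=np.int8)
grid[20, 20, 20] = AUSTENITE          # one instantaneous nucleus
for _ in range(30):
    grid = step(grid, p_grow=0.3, rng=rng)
print("austenite fraction:", (grid == AUSTENITE).mean())
```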
Three-Dimensional Cellular Automata Simulation of the Austenitizing Process
The austenitizing process can be decomposed into three processes: nucleation, austenite nucleus growth, and austenite grain collision. The nucleation process occurs on the pearlite matrix and is related to the local morphology of the pearlite matrix and the driving force of the pearlite phase transition. In general, nucleation occurs more easily at the triple junctions and interfaces of the pearlite mass, and nuclei may also form between two pearlite layers. After the nucleation process is completed, the newly formed austenite nuclei form a new phase interface with the pearlite matrix. This phase interface is propelled into the pearlite matrix under the action of the diffusion driving force, so that the pearlite transforms into austenite. The moving velocity of the interface between the austenite and the pearlite is related to the solute diffusion coefficient in the material and the thickness of the pearlite layers. As the austenite nuclei continue to grow, when two austenite grains meet at a certain point, the interface between the pearlite and the austenite transforms into an austenite grain boundary; this is the austenite grain collision. The boundaries of the two collided austenite grains then continue to move, driven by the grain boundary curvature; this is the austenite grain growth.
Mathematical Description of the Austenitizing Process
Speiche et al. believe that the austenite nucleation process is instantaneous, that is, the nucleation positions are exhausted in the initial stage of the austenitizing transformation. However, Roosz et al. believe that the austenite nucleation process is continuous, that is, the nucleation rate remains constant during the austenitizing transformation. In the study by Speiche et al., the C content of the steel was 0.96%, and in the study by Roosz et al., the C content was 0.78%. Studies by Dernfeld [16] have confirmed that this difference in nucleation behavior is mainly due to the difference in C content. The composition of the GCr15 steel studied in this paper is shown in Table 1. Its C content was about 1%, which is closer to Speiche's case, so the nucleation process can be considered instantaneous. Therefore, in this paper, the number of austenite grains in a sample rapidly cooled after complete austenitization was used instead of the number of austenite nuclei at the beginning of the transformation, thereby obtaining the number of austenite nuclei per unit volume. As shown in Figure 2, after counting more than 1000 grains, the instantaneous nucleation density (N/V) of austenite was 1.687 × 10¹⁵ m⁻³, where N is the number of austenite nuclei and V is the volume.
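To make the measured nucleation density concrete, a short worked check of what it implies for a simulation domain follows; the 50 µm cubic domain is an illustrative assumption, not a value taken from the paper.

```python
# Worked example: expected number of instantaneous nuclei in a cubic domain,
# given the measured nucleation density N/V (Figure 2). The domain edge
# length is an ASSUMED illustrative value.
N_over_V = 1.687e15          # nuclei per m^3, from grain counting (Figure 2)
edge = 50e-6                 # domain edge length in meters (assumed)
volume = edge ** 3           # 1.25e-13 m^3
print(N_over_V * volume)     # about 211 nuclei expected in this domain
```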
The interfacial moving velocity of austenite in different directions determines the austenitizing process of GCr15 steel. Research on the interface velocity of austenite growth in pearlite mainly includes numerical simulation based on solute diffusion and the nucleation and growth model proposed by Roósz et al. [2] on the basis of experimental research. Gaude-Fugarolas et al. [17] proposed that the austenitizing process is controlled by C diffusion, and the interface velocity can be calculated from Equation (1), where v is the average velocity of the austenite interface, in meters per second; rf and r0 are the farthest and closest diffusion distances, respectively, in meters; rf is half of the thickness of the pearlite layer; r0 is a few lattice thicknesses, about 10⁻⁸ m; D is the diffusion coefficient of the main diffusing element in austenite, in meters squared per second; cγθ is the molar concentration of solute in austenite at the interface between austenite and cementite; cγα is the molar concentration of solute in austenite at the interface between austenite and ferrite; and cαγ is the molar concentration of solute in ferrite at the interface between ferrite and austenite. For the austenitizing and cementite dissolution processes in high-carbon low-alloy steels, Hillert [18] proposed two models. In the case of low superheat, the pearlite dissolution process depends on the diffusion of alloying elements; when the temperature is above a certain critical temperature, the pearlite dissolution process does not depend on the diffusion of alloying elements, and the austenitizing process depends on the diffusion of carbon. Because of the difficulty in calculating the transition temperature, it is difficult to determine the controlling solute elements, and thus it is difficult to implement the model in Equation (1). Therefore, this paper uses the semi-empirical model proposed by Roósz, Equation (2), where E is an empirical constant and Q is the activation energy of the austenite growth process. According to the experimental results [19], the empirical constant and activation energy are 3.406 × 10⁻¹⁹ m³/s and 6.995 × 10⁻²² J/atom, respectively; k is the Boltzmann constant, ΔT is the degree of superheat, and σ0 is the pearlite layer spacing. As shown in Figure 3, the measured value of σ0 was 0.227 µm.

The above applies to an austenite boundary perpendicular to the pearlite layers, that is, the austenite growth rate calculated when the austenite interface moves parallel to the pearlite layers. According to reference [20], the direction in which the austenite grains grow into the pearlite has a significant effect on the austenite interface moving velocity. The austenite growth process consists of solute atoms diffusing from cementite through austenite to ferrite. Therefore, the flow conservation relationship of Equation (3) should be satisfied at the austenite–ferrite interface, where J is the solute flow rate through austenite, in mol·s⁻¹·m⁻², and σα is the thickness of the ferrite layer, as shown in Figure 4, in meters.
The diffusion distance from cementite to ferrite is half of the layer thickness, so the flow rate J can be approximated by Equation (4). The two distance parameters (σ0 and σα) in Equations (3) and (4) are proportional to the spacing of the layers, so it can be inferred that the austenite interface moving velocity is inversely proportional to the square of the layer thickness. This result is consistent with the form of the semi-empirical model in Equation (2).
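Equations (1)–(4) themselves are not reproduced in this extracted text. Purely as an illustration, the sketch below assumes a dimensionally consistent reading of the Roósz semi-empirical model, v1 = (E/σ0²)·exp(−Q/(k·ΔT)); this matches the stated units of E (m³/s), the 1/σ0² dependence inferred above, and the listed variables, but the exact published form should be checked against references [2] and [19].

```python
# Hedged sketch of a Roósz-type semi-empirical interface velocity.
# The functional form v1 = (E / sigma0**2) * exp(-Q / (k * dT)) is an
# ASSUMPTION chosen only for dimensional consistency with E in m^3/s and
# the 1/sigma0^2 dependence stated around Equation (4).
import math

E = 3.406e-19        # empirical constant, m^3/s [19]
Q = 6.995e-22        # activation energy, J/atom [19]
k = 1.380649e-23     # Boltzmann constant, J/K
sigma0 = 0.227e-6    # pearlite layer spacing, m (Figure 3)

def interface_velocity(dT: float) -> float:
    """Interface velocity in m/s for a superheat dT in kelvin (assumed form)."""
    return (E / sigma0**2) * math.exp(-Q / (k * dT))

for dT in (10, 20, 30, 50):
    print(dT, f"{interface_velocity(dT):.2e} m/s")   # micrometer-per-second scale
```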
As shown in Figure 5, when the direction of austenite growth is at an angle ω to the pearlite layers, the distance parameters in Equations (3) and (4) are multiplied by 1/sin ω. Considering this situation, the moving velocity in Equation (2) is calculated as in Equation (5), where n1 is the normal vector of the pearlite layers and n2 is the moving direction vector of the austenite grain boundary. In the calculation process, n1 is assigned randomly by the program when the initial pearlite structure is formed, and all cells in the same pearlite group share the same value. n2 is determined by the relative positions of the austenite point and the pearlite point: once the position of an interface pearlite cell is determined, a vector can be determined relative to a given austenite neighbor.
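Equation (5) is likewise elided by the extraction. The sketch below assumes that scaling both distance parameters by 1/sin ω, combined with the inverse-square spacing dependence noted above, yields v(ω) = v1·sin²ω, and takes sin ω = |n̂1·n̂2|; both the functional form and the angle convention are our assumptions, not the paper's stated equation.

```python
# Hedged sketch of the anisotropic growth velocity of Equation (5).
# ASSUMPTIONS: effective layer spacing scales as 1/sin(omega), so
# v(omega) = v1 * sin(omega)**2, with sin(omega) = |n1_hat . n2_hat|,
# where n1 is the pearlite layer normal and n2 the growth direction.
import numpy as np

def anisotropic_velocity(v1: float, n1: np.ndarray, n2: np.ndarray) -> float:
    n1_hat = n1 / np.linalg.norm(n1)
    n2_hat = n2 / np.linalg.norm(n2)
    sin_omega = abs(float(np.dot(n1_hat, n2_hat)))
    return v1 * sin_omega**2

v1 = 1.2e-6   # m/s, illustrative reference velocity
print(anisotropic_velocity(v1, np.array([0., 0., 1.]), np.array([0., 0., 1.])))
print(anisotropic_velocity(v1, np.array([0., 0., 1.]), np.array([1., 0., 0.])))
```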
Simulation Results and Analysis
A three-dimensional cellular automaton model was established in this paper. The shape of the cell was a cube, and the neighbor type was a Moore-type neighbor. There were two states of the cell, one state was pearlite, and the other was austenite. For each pearlite cell, a laminar normal direction vector was assigned, and adjacent cells having the same normal vector formed a pearlite cluster. For each newly formed austenite nucleus, an austenite orientation was imparted, austenite grains were formed by austenite nucleation, and new austenite was formed by austenite nucleus growth. The orientation of the newly formed austenite cell was the same as the orientation of the austenite nuclei.
In this paper, C#.NET was used to program the three-dimensional cellular automaton model of the austenitizing process of bearing steel, and the simulation was carried out using the self-compiled program. The initial pearlite structure was formed by the Monte Carlo method when the calculation started, as shown in Figure 6. The size of the pearlite in the initial structure was the same as that measured in the actual initial structure [19]. According to the conclusions in reference [5], it was assumed that the austenite nuclei formed at the triple junctions of the pearlite cluster boundaries. The austenite nucleation process was instantaneous, and no new austenite nuclei formed during the growth of the austenite grains.
In this paper, there were three kinds of distance between the central cell and a neighbor cell. The first was a neighbor sharing a face with the central cell, at a distance of one cell side length a; the second was a neighbor sharing an edge, at a distance of √2·a; the third was a neighbor sharing a vertex, at a distance of √3·a. For a pearlite cell with an austenite neighbor, the probability that the central cell transforms to austenite under the action of that neighbor is determined by Equation (6), where L is the distance between the neighbor and the central cell, in meters, and Δt is the calculation time step, in seconds. The austenite interface moving velocity v1 is determined by Equation (5), and the moving direction of the austenite grain boundary is related to the direction of the pearlite layers. The normal vector n1 of the pearlite layers is randomly assigned in the initial structure. The direction of austenite movement n2 is the vector pointing from the center of the neighboring austenite cell to the central cell, which can be calculated from the cell coordinates. The transformation of the central cell is the result of the action of all austenite cells among its neighbors, so the total transition probability of the central cell is given by Equation (7), where pi is the transition probability due to austenite neighbor cell i acting on the central cell.
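Equations (6) and (7) are also elided in this extraction. The sketch below assumes the per-neighbor probability p_i = v_i·Δt/L (the fraction of the cell spacing crossed by the interface in one step) and combines neighbors independently as P = 1 − Π(1 − p_i); the paper's exact Equation (7) may instead sum the p_i, so treat both choices as assumptions.

```python
# Hedged sketch of the neighbor-driven transition rule of Equations (6)-(7).
import math

def neighbor_distance(offset, a: float) -> float:
    # Face-, edge- and vertex-sharing Moore neighbors lie at a, sqrt(2)*a, sqrt(3)*a.
    return a * math.sqrt(sum(d * d for d in offset))

def total_transition_probability(austenite_neighbors, a: float, dt: float) -> float:
    """austenite_neighbors: iterable of (offset, v_i) pairs, one per austenite neighbor."""
    p_stay = 1.0
    for offset, v_i in austenite_neighbors:
        L = neighbor_distance(offset, a)
        p_i = min(1.0, v_i * dt / L)   # ASSUMED Equation (6): p_i = v_i * dt / L
        p_stay *= (1.0 - p_i)
    return 1.0 - p_stay                # ASSUMED Equation (7): independent combination

# Example: two austenite neighbors of a 1-um cell, time step dt = 0.05 s.
neighbors = [((1, 0, 0), 1.2e-6), ((1, 1, 0), 0.6e-6)]
print(total_transition_probability(neighbors, a=1e-6, dt=0.05))
```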
Using this model, the austenitizing process was calculated under isothermal conditions at 755 °C, 765 °C, 770 °C, 780 °C, and 800 °C. The calculation results are shown in Figure 7. It can be seen from the figure that the calculation results are in good agreement with the experimental results [19]. The maximum relative error at 765 °C, 770 °C, 780 °C, and 800 °C was less than 4%, and the maximum relative error at 755 °C was less than 8%. This indicates that the cellular automaton model proposed in this paper can simulate the transformation of pearlite to austenite well. It can also be seen that, with the initial pearlite structure of this paper, the time for 80% of the pearlite to transform to austenite was about 5 s, 10 s, 15 s, 27 s, and 100 s at 800 °C, 780 °C, 770 °C, 765 °C, and 755 °C, respectively. Increasing the temperature speeds up the conversion of pearlite to austenite. When the temperature was increased from 755 °C to 765 °C, the conversion speed increased by about four times; at 770 °C, by about seven times; at 780 °C, by about 10 times; and at 800 °C, the conversion was almost instantaneous.
While accurately simulating the austenitizing process, the model of this paper can also clearly demonstrate the anisotropy caused by the direction of the pearlite layers during austenite grain growth. As shown in Figure 8, the orange line in the figure is the boundary of the original pearlite mass, and the austenite nuclei are located at the triple junctions of the pearlite mass. It can be seen from the figure that the velocity of austenite grain growth was not the same in all directions, similar to the phenomenon observed in the experimental micrographs. The cellular automaton model in this paper thus reproduces the anisotropy of austenite growth. Because of the different growth conditions, grains with better growth conditions attained a relatively large volume, while grains with poor growth conditions remained small.
Conclusions
In this paper, a three-dimensional cellular automaton model for the transformation of bearing steel pearlite to austenite was established. In the austenitizing process, because of the angle between the orientation of the pearlite layers and the growth direction of the austenite, the austenite grows at different velocities in different directions. The austenitizing process of pearlite was predicted by the three-dimensional cellular automaton model, using an expression for the interfacial moving velocity that comprehensively considers the pearlite layer orientation and the austenite grain growth direction, and the anisotropy of grain growth in the pearlite was analyzed. The calculation results of the three-dimensional cellular automaton model under isothermal conditions were in good agreement with the experimental results; the maximum relative error between calculated and experimental results at 765 °C, 770 °C, 780 °C, and 800 °C was less than 4%, and the maximum relative error at 755 °C was less than 8%. Increasing the temperature speeds up the conversion of pearlite to austenite. When the temperature was increased from 755 °C to 765 °C, 770 °C, and 780 °C, the conversion speed increased by about 4, 7, and 10 times, respectively; when the temperature was increased to 800 °C, the conversion was almost instantaneous.
| 8,038.8 | 2019-09-01T00:00:00.000 | [
"Materials Science"
] |
Novel Poly(3-hydroxybutyrate-g-vinyl alcohol) Polyurethane Scaffold for Tissue Engineering
The design of new synthetic grafted poly(3-hydroxybutyrate) as composite 3D-scaffolds is a convenient alternative for tissue engineering applications. Chemically modified poly(3-hydroxybutyrate) is receiving increasing attention for use as a biomimetic copolymer for cell growth. As of yet, these copolymers cannot be used efficiently because of their poor mechanical properties. Here, we address this challenge, preparing a composite scaffold of grafted poly(3-hydroxybutyrate) polyurethane for the first time. However, it is unclear whether the composite structure and morphology can also support a biological application. We obtained the polyurethane by mixing a polyester hydroxylated resin with polyisocyanate and the modified polyhydroxyalkanoate. The results show that poly(3-hydroxybutyrate) grafted with poly(vinyl alcohol) can be successfully used as a chain extender to form a chemically crosslinked thermosetting polymer. Furthermore, we propose a mechanism for the polyurethane synthesis, analyze its morphology, and assess the ability of the scaffolds to support the growth of mammalian cells. We demonstrate that astrocytes isolated from mouse cerebellum and HEK293 cells can be cultured on the prepared material and efficiently express fluorescent proteins by adenoviral transduction. We also tested the metabolism of Ca2+ to obtain evidence of biological activity.
molecules with gradual stress compression 40. It is known that this grafted P(3HB) is also biodegradable and biocompatible and could be used to prepare nanoparticles with potential application as drug delivery systems 41. Therefore, we proposed the use of P(3HB-g-VA) for the synthesis of a polyurethane foam scaffold. The lack of existing research on the fabrication of these materials prompted us to study their synthesis in depth. It is not yet known whether this type of polyurethane can be successfully obtained and used for biomedical purposes.
Here, we describe for the first time a novel method in which a grafted P(3HB) is combined with a polyester hydroxylated resin and polyisocyanate to yield a chemically crosslinked polyurethane. Our strategy relies on adding the P(3HB) grafted with poly(vinyl alcohol) as a chain extender in the presence of a porogen to prepare a foam scaffold. This approach enabled the evaluation of the activity of mammalian cells on the polymeric structure. To the best of our knowledge, this research is the first of its kind in which a gamma-radiation-induced P(3HB) graft copolymer is successfully used to synthesise a polyurethane scaffold. We also propose a polymerisation mechanism and demonstrate the great potential of this structural component in tissue engineering.
Results
Synthesis and characterisation of the P(3HB-g-VA) polyurethane scaffold. We prepared round scaffolds of roughly 10 mm in diameter and 2.5 mm in height, with an average dry weight of 525 ± 3 mg. The scaffolds, hereafter called P1M3DH, displayed a mean compressive modulus and compressive strength of 20 ± 2 and 2 ± 0.1 MPa, respectively (p < 0.05). Figure 1a–d present the scanning electron microscope (SEM) micrographs of the cross-section of the P(3HB-g-VA) polyurethane scaffold at different magnifications. The cross-section SEM images revealed a porous structure with pore sizes ranging from 1 to 10 µm and an average porosity of approximately 92 ± 2%. The magnified view of the surface showed a rough morphology divided into three main areas. The first region consisted of an open non-directional network of pores, with average pore size ranging from 5 to 10 µm; these multi-scale pore structures are caused by the salt leaching. The second area displayed uneven porosities with pore sizes of less than 5 µm and showed a high degree of interconnectivity, as did the third area. The third area exhibited a rounded geometry and suggested a random non-directional pore structure with sizes of approximately 1 ± 0.14 µm. Therefore, owing to the large degree of interconnectivity and the wide range of macro-porosity, the P(3HB-g-VA) polyurethane scaffold appears suitable for cell growth and proliferation 12,16,22,25,28,31,[42][43][44][45][46][47][48][49]. On the other hand, the mechanical properties of the P(3HB-g-VA) polyurethane scaffolds are consistent with those obtained for a thermally remolded poly(3-hydroxybutyrate)/nanohydroxyapatite composite scaffold and for three-dimensional scaffolds prepared from lyophilized poly(3-hydroxybutyrate-co-hydroxyhexanoate) 44,49. One might expect a higher compressive strength, but it is strongly influenced by the high porosity formed in the leaching process and by the brittleness of P(3HB-g-VA) 40. In addition, porosity testing of the P1M3DH scaffolds revealed that the porosity of the samples increased as the concentration of the copolymer increased. Consequently, the compressive strength decreased with increasing weight percentage of P(3HB-g-VA) in the prepared dough.
A proposed mechanism for the preparation of the P(3HB-g-VA) polyurethane scaffold. A mechanism by which the three-component polyurethane forms a 3D scaffold is proposed. First, the resin (aliphatic isocyanate) is mixed with the hardener (polyol). In this step, the 1,4-diisocyanatobutane (BDI) is attacked by the poly(ethylene oxide) (PEO) molecules, yielding ionic species (i). Then, a PEO hydrogen is transferred to the BDI nitrogen (ii). Species ii can react with BDI through another alcohol group to end up with two isocyanate groups (iii). This product is called the prepolymer; in this case it is a urethane dimer intermediate. In the next step (chain extension), it reacts with a chain extender to yield the final polyurethane. There are two main chain extenders, the polyol and the hydroxylated polyester (P(3HB-g-VA)). A linear segmented material is produced by the first (iv), while a chemically crosslinked polyurethane is obtained with the polyester (v). This route is known as the prepolymer process.
Additionally, the formation of the P(3HB-g-VA) polyurethane foam (see Fig. 2) should be attributed to two main mechanisms. The first is the addition of a porogen, in this case sodium acetate (NaAc); this salt aggregates randomly in three dimensions until a high pore volume is reached. The second mechanism involves the reaction of water molecules with the isocyanate to yield urea and carbon dioxide (vii).
As a result, a complex chemically crosslinked structure is achieved (vi). The novel structure contains a soft segment formed by the prepolymer, and two arbitrary segments from the chain extenders (1,4-butanediol (BDO) and P(3HB-g-VA)), which are called hard segments. The foam structure depends on the BDI/PEO/BDO/P(3HB-g-VA) equivalent ratio as well as on the porogen size and the quantity added to the mixture. The mixture is cured in a mould, where the polyurethane hardens to form a thermoset. The NaAc is leached out by Soxhlet extraction in water after the product is cured. Consequently, three-dimensional pores of up to 10 µm are formed.
Astrocytes and HEK293 cells grown in P1M3DH scaffolds.
To test the ability of P1M3DH scaffolds for growing mammalian cells, we cultured astrocytes isolated from mouse cerebellum and HEK293 (Fig. 3a,b). After three days in vitro we imaged the cells expressing either the fluorescent protein mCherry (Fig. 3a,b) or eGFP (Fig. 3c,d). From the first day, the cells attached well to the surface of P1M3DH and adapted to the rough surface of the scaffold; by the second day both cell types developed complex processes normally observed in vivo, and in many cases the cells touched each other forming clumps associated with the cavities of the scaffold surface. When the fluorescence emitted by the cells was observed in an epifluorescence microscope it was evident that much of the surface of the scaffold generated autofluorescence (Fig. 3a,c); this autofluorescence was efficiently filtered when observed by confocal microscopy (Fig. 3b,d), which revealed the complexity of the cell morphologies.
Calcium imaging in HEK293 cells grown in P1M3DH scaffolds. The previous experiments showed that mammalian cells grow efficiently on P1M3DH and efficiently express fluorescent proteins by means of adenoviral transduction. This appeared to prove that astrocytes and HEK293 cells are physiologically active when grown on P1M3DH. However, to have clear-cut evidence of their biological activity, we tested the metabolism of Ca2+, a cellular second messenger necessary for vital metabolic cascades. Figure 4a shows that HEK293 cells incorporated the fluorescent Ca2+ indicator Fluo-4AM. This was considered the basal Ca2+ activity of these cells and was recorded for 60 s. If, as suggested by the previous observations, the cells were metabolically active, a sudden rise in fluorescence intensity should be detected when they are challenged by changes in the plasma membrane resting potential. This was proven when high potassium was added to the cellular medium (Fig. 4b); in this image, the increased fluorescence showed up quite clearly in most of the cells attached to P1M3DH. Fifteen of these cells were randomly chosen for quantitative analysis (Fig. 4c), and ∆F/F was plotted as a function of time. ∆F/F recordings from individual cells are shown in Fig. 4d and plotted in Fig. 4e. On average, the cells emitted five times more fluorescence when exposed to high potassium, unequivocally indicating that HEK293 cells are physiologically active. A supplementary video illustrates the time course of this protocol (see Video 1).
Discussion
The structure of P(3HB) molecules is very inert, lacking functional groups, which gives the polymer a hydrophobic nature. The introduction of poly(vinyl alcohol) groups into the polyester structure provides greater hydrophilicity, biodegradability, and functionality, as demonstrated in our previous works [39][40][41]. This strategy opens a new alternative for application: in this case, the PVA moieties can be used in a crosslinking reaction that creates a new kind of polyurethane. This approach paves the way for the synthesis of a novel family of thermosets with potential applications in tissue engineering. The combination of different polyisocyanates, prepolymers, chain extenders, porogens and gamma-radiation-induced grafted polyhydroxyalkanoates brings about a large new family of what we herein define as "polyurethanoates". We demonstrated that our first attempt succeeded in improving the mechanical properties of the scaffolds and in obtaining an appropriate morphology, which allows the attachment of mammalian cells as a prerequisite for evaluating biocompatibility. Additionally, it was unequivocally demonstrated that the attached cells were physiologically active. By careful manipulation, the pore architecture of the scaffolds can be controlled and, therefore, the surface presentation can be improved. The interconnected porous structure suggests a favourable environment for the ingrowth of cells, vascularisation, and the diffusion of nutrients for cell proliferation. The experiments have focused on studying the P(3HB-g-VA) polyurethane scaffold in vitro, but further research should evaluate the novel scaffolds in vivo. Nevertheless, advancing tissue engineering technology for the study of astrocytes and HEK293 cells contributes to increasing our knowledge about neural connections and gene expression, respectively. This research reveals a promising candidate for tissue engineering and shows the way for the discovery of novel chemical structures that enable the diversity of design in advanced biomaterials systems to keep growing 50,51.
Methods
Fabrication and characterisation of the P(3HB-g-VA) polyurethane scaffold. The synthesis of P(3HB-g-VA) was carried out through the simultaneous irradiation method. P(3HB) and vinyl acetate (VAc) were subjected to the same source of 60Co gamma radiation in air (Gamma Beam 651 PT, Nordion International), with a dose rate of about 1 kGy/h and a dose of 10 kGy (measured with a Fricke dosimeter). The experiment involved approximately 10 hours of exposure to high-energy radiation. We used glass ampoules sealed under vacuum, containing approximately 250 mg of P(3HB) and 3 mL of VAc in bulk. The product was washed with acetone to eliminate the ungrafted PVAc and was then dried to constant weight at 50 °C in a vacuum oven. Subsequently, the graft copolymer P(3HB-g-VAc) was hydrolysed in a 0.05 M methanol solution of sodium hydroxide (NaOH) for approximately 10 hours to yield P(3HB-g-VA). In the latter case, the grafted P(3HB) was also dried to constant weight. The powder obtained was ground to a mesh size of 200 (74 µm). Sodium acetate (NaAc) (J.T. Baker) (74 µm) was used as the pore former, and 0.3828 g of the graft copolymer was mixed with 0.1148 g of hydroxylated resin (Reichhold Química, Mex.), 0.0287 g of polyisocyanate and 0.2297 g of the porogen. The homogeneous dough was loaded into a stainless steel mould, applying a pressure of approximately 5 MPa for 15 minutes. We obtained round scaffolds of roughly 1 cm in diameter and 2.5 mm in thickness. The scaffolds were aerated for 24 hours and then filed down in order to expose the pores. The salt was leached from the scaffold by Soxhlet extraction in water for 24 hours. The morphology of the porous foam, previously coated with gold, was surveyed with SEM (JEOL-JSM-6060LV) operated at 15 kV. The compressive properties of the scaffolds were estimated using an Instron mechanical testing machine, model Adamel Lhomargy DY.22, with a load cell of about 1 kN at a crosshead speed of 0.5 mm/min. The maximum load was used to determine the compressive strength, while the compressive modulus was obtained from the slope of the initial linear region of the stress/strain curve. Ten specimens were tested to ensure reproducibility. All data presented herein are reported as mean ± standard deviation, and the statistical analysis was carried out using one-way analysis of variance (ANOVA). P values < 0.05 were considered statistically significant (n = 15 for cell attachment and proliferation; n = 10 for the mechanical and porosity tests). Ten scaffolds were chosen for porosity testing. We used a previously reported method that involves filling a density bottle with ethanol. The bottle was then weighed (m1) in an analytical balance (Sartorius; Sartorius AG, Göttingen, Germany). Afterwards, the bottle was weighed while containing the scaffold and filled with ethanol (m2), and the dry scaffold (ms) was also weighed. Finally, the porosity (ϕ) is calculated from m1, m2, ms and the density of ethanol (ρethanol) using the equation given in ref. 42.
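The porosity equation itself is lost in this extraction. The following is only a plausible reconstruction of a three-weighing, fixed-volume density-bottle measurement under the assumptions stated in the comments; the exact formula in ref. 42 may differ, and all mass values here are illustrative.

```python
# Plausible reconstruction of the ethanol density-bottle porosity measurement.
# ASSUMPTIONS: the bottle has a fixed volume, ethanol fully fills the open
# pores, the skeleton volume follows from V_skeleton = (m1 + ms - m2) / rho,
# and porosity is taken against the geometric (bulk) scaffold volume.
import math

rho_ethanol = 789.0        # kg/m^3 at room temperature
m1 = 52.000e-3             # kg, bottle filled with ethanol (illustrative value)
m2 = 52.512e-3             # kg, bottle + scaffold + ethanol (illustrative value)
ms = 0.525e-3              # kg, dry scaffold (mean dry weight reported above)

V_skeleton = (m1 + ms - m2) / rho_ethanol      # volume of solid material, m^3
V_bulk = math.pi * (5e-3) ** 2 * 2.5e-3        # 10 mm diameter x 2.5 mm disc
porosity = 1.0 - V_skeleton / V_bulk
print(f"porosity = {porosity:.1%}")            # ~92%, matching the reported value
```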
Primary cerebellar cultured astrocytes. All experimental procedures were conducted in accordance with the ethical policies for animal care and handling of the National University of Mexico. All experimental protocols were approved by the Instituto de Neurobiología Animal Care and Use Committee. Two male P5 CD1 mice were decapitated for each preparation; their brains were rapidly removed and placed in a Petri dish with cold phosphate buffer solution (PBS) (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4·2H2O, 2 mM KH2PO4; pH 7.4). The cerebellum was isolated in cold PBS buffer, chopped into small pieces and placed in an Eppendorf tube containing 200 µl of Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum, 2 mM glutamine, 100 UI/ml penicillin and 100 µg/ml streptomycin. The small pieces of cerebellum were mechanically dissociated into individual cells by several passes through a fire-polished Pasteur pipette tip previously treated with Sigmacote® (Sigma). The supernatant was suspended in an Eppendorf tube with 500 µl of DMEM. The cell suspension was plated on sterilised P1M3DH scaffolds and diluted with 3 ml of DMEM in a 35 mm Petri dish. The cultures were kept five days in vitro (5 DIV) at 37 °C under a mixed air and 5% CO2 atmosphere. The medium was changed every 2 days 52,53. Twenty-four hours before confocal imaging, the cells were transduced with mCherry using an adenoviral vector (all culture reagents were purchased from Gibco BRL).
Loading cells for calcium imaging. P1M3DH scaffolds were sterilised under UV light for 10 min and glued to the surface of a Petri dish; HEK293 cells were then placed on top of the scaffolds and maintained in Dulbecco's Modified Eagle Medium (Gibco™) containing 10% fetal bovine serum and antibiotics (100 UI/ml penicillin and 100 UI/ml streptomycin). After two days at 37 °C and 5% CO2, the cells were loaded for 30 min with the calcium indicator Fluo-4AM (Molecular Probes®) dissolved in 50 µl DMSO containing 20% Pluronic F-127 (Sigma) and further diluted in a calcium-free dye buffer (125 mM NaCl, 2 mM MgCl2, 4.5 mM KCl, 10 mM glucose, 20 mM HEPES) to yield a final concentration of 0.5 mM Fluo-4AM. The dynamics of cytosolic Ca2+ were monitored by imaging changes of fluorescence intensity before and after adding a high-potassium solution (140 mM KCl, 2 mM MgCl2, 2 mM CaCl2, 20 mM glucose, 10 mM HEPES) to the medium. Time-lapse confocal recordings (Zeiss LSM510) were taken for 1.5 s at 1.5 Hz using a wavelength of 488 nm for excitation, reconstructed with ImageJ and analysed with MATLAB.
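The sketch below illustrates the ROI time-course (∆F/F) quantification applied to such recordings, detailed in the Data analysis paragraph that follows. The analysis in the paper was done in MATLAB; this is an equivalent illustration in Python with synthetic data standing in for the fluorescence movie.

```python
import numpy as np

def roi_dff(stack: np.ndarray, roi_mask: np.ndarray, n_baseline: int) -> np.ndarray:
    """stack: (frames, H, W) fluorescence movie; roi_mask: (H, W) boolean ROI."""
    trace = stack[:, roi_mask].mean(axis=1)   # average all ROI pixels per frame
    f0 = trace[:n_baseline].mean()            # baseline fluorescence F0
    return (trace - f0) / f0                  # dF/F time course

# Synthetic example: 90 frames at 1.5 Hz; "high potassium" arrives at frame 60.
rng = np.random.default_rng(1)
stack = rng.normal(100.0, 2.0, size=(90, 64, 64))
stack[60:] += 400.0                           # mimics the stimulus-evoked response
mask = np.zeros((64, 64), dtype=bool)
mask[10:30, 10:30] = True                     # one rectangular ROI, for illustration
dff = roi_dff(stack, mask, n_baseline=60)
print(dff[:3].round(3), dff[-3:].round(3))    # ~0 at baseline, ~4 after stimulus
```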
Data analysis. Frames of 512 × 512 pixels were acquired at 1.5 Hz. Image sequences were analysed with custom programmes written in MATLAB. Cell outlines were detected using a semi-automated algorithm based on cell fluorescence intensity, cell size and shape. After labeling the cell-based regions of interest (ROIs), all pixels within each ROI were averaged to give a single time course (∆F/F). | 3,921.4 | 2016-08-09T00:00:00.000 | [
"Biology",
"Materials Science",
"Engineering"
] |
Gait Analysis Using the Physics Toolbox App
Sensors in new smartphones can be an excellent tool to measure different physical quantities, and these can be useful for developing real experiments with applications in human health. Therefore, in this research work, the design and construction of a gait analyser was carried out using the Physics Toolbox app for data acquisition, with MATLAB used for the analysis. This system analyses people's walking behaviour, and its purpose is to diagnose any health problem caused by the way a human being walks. For this, the g-force meter was used in its x, y, and z components. The application uses inputs from the device's sensors to save and export the data in comma-separated values (CSV) format through a .csv file. The results are very interesting because they can be used in medical diagnoses such as reduced gait speed and loss of regularity, symmetry, or synchronization of body movements. Since many organs are involved in gait, there are several types of gait disturbances that cause gait to be abnormal. With this system, it is possible to identify the hemiplegic gait, festinating gait, paraparetic gait, and waddling gait, among others. This analysis was carried out using the Pearson, Spearman, and Kendall correlation coefficients, which indicate the possible diagnoses in the patients.
I. INTRODUCTION
The continuous increase in technology dedicated to increasing life expectancy has gained attention in the last two decades. Therefore, new diagnostic and monitoring systems are needed, with the characteristics of being non-invasive, low-cost, reliable and accurate, in order to provide affordable health care services. The most recent advances in the different areas of engineering have provided increasingly smaller sensors that make it possible to propose smart, fast and cost-effective solutions for various health-related problems. One system that can be useful here is a device that analyses the way of walking in order to monitor and predict the health status of people, since an individual's gait patterns are linked to their health conditions; that is, people with diseases tend to walk differently, and their walking patterns differ from those of healthy people. Until now, walking patterns were mostly analyzed for the purpose of predicting falls. For example, in ref. [1] the authors proposed continuously monitoring the walking patterns of the elderly to identify the quality of their joints or diseases related to them in order to predict falls. However, there are few studies on the analysis of general walking patterns that may be related to joint instability and musculoskeletal disorders. In refs. [2], [8], the authors relate walking patterns to the ages of the individuals; this is because the shape of the human body and muscular strength change with age. Thus, a group of people of similar ages have similar gait patterns [3]. Therefore, a person can be classified by their way of walking; that is, there is a correlation between the way they walk and their age. For example, for older adults the way of walking can indicate the existence of health problems such as limb imbalance, weak joints or asymmetric acceleration of the limbs [4]. Thus, analyzing the patterns in an individual's gait can be a good indicator of their state of health. Refs. [5], [6] precisely indicate the need to develop new alternatives for gait analysis using low-cost resources, building systems that integrate high-performance devices to provide these services. In ref. [7], the authors also analyzed cows' walking using accelerometers and gyroscopes installed on ear tags and collar tags. The Physics Toolbox application has been used to acquire signals through cell phone sensors in different situations, as in refs. [10], [11]. In general, biometric gait recognition can be categorized into three approaches: machine-vision based, floor-sensor based and wearable-sensor based [12]–[29]. In this research work, we used the wearable-sensor-based method. In our work, the design and construction of a gait analyser is carried out using the Physics Toolbox application to acquire the signals through cell phone sensors and then store them in a .csv file. The aim of this article is to use the mobile phone to evaluate gait and to predict future problems. We use smartphones every day, and they can be used to analyze human gait. The advantage of our system is that it has no data storage limit, since the cell phone is being used, and data can even be saved in the cloud.
Also, the sensors used by cell phones are very well calibrated, so the error rate is minimal. One of the important objectives in the development of embedded systems is precisely to reduce costs while achieving a high level of integration, and this system fully meets that objective. The Physics Toolbox application is widely used and highly reliable, and the system developed here therefore benefits from that reliability.
II. MATERIALS AND METHODS
The design of this embedded system is based, primarily, on the Physics Toolbox application. The coordinate system in this application is as follows: the total vector represents the relative total gravitational force aligned with the plane of the device screen. Vector components are displayed in red along the x-axis, in green along the y-axis, and in blue along the z-axis. While the device screen is vertical with respect to the ground and the device is not accelerating, the vector reads a value of one unit pointing down. The system is set up as shown in Figure 1.
The graphical user interface of the Physics Toolbox application is shown in Figure 2. Here Fg is measured as a function of time, together with its components in x, y, and z. The + symbol starts saving data to a .csv file. This application is useful in education, academia, and industry, since it covers practically everything that can be measured and generated with a smartphone. It uses inputs from the device's sensors to record and export data in comma-separated values (CSV) format via a .csv file. The data can be recorded over elapsed time on a graph or displayed digitally, and users can export the data for further analysis in a spreadsheet. The g-force meter measures the ratio between the normal force and the gravitational force (Fn/Fg) in three dimensions. The g-force changes when the mobile device accelerates, decelerates, or changes direction. When the mobile device is not accelerating and is face up relative to the earth's surface, it reads g-force values of 0, 0, 1; this means that a normal force is experienced only in the upward direction and that it has the same magnitude as the force of gravity. An object experiencing a vertical g-force of 2 feels a force twice as strong as gravity in the upward direction (interpreted as ''feeling twice as heavy''), and an object experiencing a g-force of 0 is in free fall (interpreted as ''feeling weightless''). The g-force data are extracted directly from the accelerometer. Accelerometers are typically sensors built around one of two components: piezoresistive or capacitive cantilevers. As the mobile device accelerates, the cantilever bends, changing the resistance of the silicon, which is interpreted as acceleration. Alternatively, a capacitive accelerometer contains three comb-shaped inertial masses attached to springs, one for each dimension. When the mobile device is not accelerating and is lying flat, a total g-force of 1 is measured, due to the gravitational force pulling downward (and the resulting upward reaction force of equal magnitude). The linear accelerometer measures acceleration along a straight line in three dimensions. Linear acceleration changes when the mobile device accelerates, decelerates, or changes direction, and when the device is at rest relative to the earth's surface it reads acceleration values of 0, 0, 0. Linear acceleration differs from the raw accelerometer reading because the displacement of the inertial mass in the z direction includes the reaction to gravity; this is appropriate when the whole Earth is taken as the frame of reference, but not for a local frame of reference. Linear acceleration is therefore derived from the g-force meter, but it also uses the gyroscope and magnetometer to cancel out the effect of the earth's gravitational field on the sensor. All of these sensors are integrated in an ordinary cell phone, and here they were used to build the gait-analyser system.
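As an illustration of this acquisition pipeline, the following is a minimal Python sketch of how an exported recording could be loaded and the total g-force reconstructed from its components. The file name and the column names (time, gFx, gFy, gFz) are assumptions based on typical Physics Toolbox exports, not taken from the paper, which performs this step in MATLAB.

```python
import numpy as np
import pandas as pd

# Load a hypothetical Physics Toolbox g-force export.
df = pd.read_csv("gait_recording.csv")

# Total g-force magnitude; with the device at rest this hovers around 1.
df["TgF"] = np.sqrt(df["gFx"]**2 + df["gFy"]**2 + df["gFz"]**2)

print(df[["time", "gFx", "gFy", "gFz", "TgF"]].describe())
```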
III. RESULTS
For the acquisition of signals, a healthy 30-year-old person was taken as a model. The mobile device was placed on the left foot; it can also be placed on the right foot, but in our experience the left foot provides more information. Figure 3a shows the g-force data in the x direction, Figure 3b in the y direction, and Figure 3c in the z direction. The total g-force data are shown in Figure 4. The sample analysed here contains only 10 steps; the recorded data extend to 1000 steps, but for simplicity we take this smaller sample. As can be seen, the total g-force takes positive values and starts at 1, which is correct, because if the device were at rest it would give a constant line with a value of 1. To compare our results, we used a commercial device called RunScribe [30], shown in Figure 5. In this case the patient background is: 30-year-old male, no history of major injuries, healthy apart from minor knee and hip complaints. In the RunScribe results, there are three primary pivots in gait: the heel pivot, the ankle pivot, and the forefoot pivot. Their approach segments the foot for a more detailed analysis of these three primary pivots. The gait curve shows the total vertical ground reaction force from heel strike to toe-off. The force-versus-time graphs for the forefoot and heel show the specific loading patterns during heel contact and forefoot contact, independently of and in conjunction with the gait curve. Their gait curves and ours are quite similar, as can be seen in Figure 5a, where the x, y, and z directions have the same shapes.
To analyse the data, there are several methods; see, for example, refs. [31]-[37]. In refs. [38]-[42], there is an analysis of the performance of different algorithms from a systematic review, presenting the influence of sensor position, analysed variable, and computational approach on gait-timing estimation from IMU measurements. In ref. [43], the authors investigated the validity and reliability of a smartphone-based application to measure postural stability. In this research work, we used statistics and calculated the Pearson, Spearman, and Kendall correlation coefficients.
Pearson's correlation coefficient is a measure of the linear dependence between two quantitative random variables, and it is defined as in equation (1):

r = cov(X, Y) / (σ_X σ_Y), (1)
where cov(X, Y) is the covariance, σ_X is the standard deviation of X, and σ_Y is the standard deviation of Y. Unlike the covariance, Pearson's correlation is independent of the measurement scale of the variables. Less formally, we can define Pearson's correlation coefficient as an index that measures the degree of relationship between two variables, as long as both are quantitative and continuous. Table 1 shows the Pearson correlation coefficients. A value of 1 indicates a perfect correlation, and a value of 0 indicates no linear relationship, although this does not necessarily imply that the variables are independent. If 0 < r < 1 there is a positive correlation, and if −1 < r < 0 there is a negative correlation. In our case, the greatest correlation is between the variables gFy and TgF.
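As a sketch of how this coefficient can be computed on the recorded signals, the snippet below uses scipy; the file and column names are the same hypothetical ones as in the earlier loading sketch.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("gait_recording.csv")  # hypothetical export, as before
df["TgF"] = (df["gFx"]**2 + df["gFy"]**2 + df["gFz"]**2) ** 0.5

# Pearson correlation between the y component and the total g-force.
r, p_value = pearsonr(df["gFy"], df["TgF"])
print(f"Pearson r(gFy, TgF) = {r:.5f} (p = {p_value:.3g})")
```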
On the other hand, a statistical analysis was also carried out and Spearman's correlation coefficient was found, which is a measure of the correlation between two random variables. To calculate Spearman's correlation coefficient, the data are ordered and replaced by their respective ranks, as shown in equation (2):

ρ = 1 − 6 Σ D² / (N(N² − 1)), (2)
where D is the difference between the corresponding order statistics of X and Y, and N is the number of data pairs. The Spearman correlation coefficients are shown in Table 2. Spearman's correlation coefficient is less sensitive than Pearson's to values that are far from expected. The interpretation of Spearman's coefficient is the same as that of Pearson's correlation coefficient: it ranges between −1 and +1, indicating negative or positive associations respectively, while 0 means no correlation (but not independence). In our case, the greatest correlation is again between the variables gFy and TgF, which coincides with the Pearson correlation coefficient; in both cases the value is close to 0.77. This confirms that the greatest correlation is indeed between these two variables. Similarly, another statistical analysis was performed and Kendall's rank correlation coefficient was found, commonly known as Kendall's τ coefficient, which measures the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the coefficient τ. It is a measure of rank correlation: the similarity in the ordering of the data when they are ranked for each of the quantities. It can be written as τ = (n_c − n_d) / (N(N − 1)/2), where n_c and n_d are the numbers of concordant and discordant pairs. The denominator is the total number of pair combinations, so the coefficient must lie in the range −1 ≤ τ ≤ 1. If the agreement between the two rankings is perfect (that is, they are identical), the coefficient has the value 1; if the disagreement between the two rankings is perfect (one ranking is the inverse of the other), the coefficient has the value −1; and if X and Y are independent, we would expect the coefficient to be approximately zero. The Kendall correlation coefficients are shown in Table 3. In our case, the results are consistent with those obtained for the Pearson and Spearman correlation coefficients: the greatest correlation is between the variables gFy and TgF.
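In the same spirit as the Pearson sketch, the following snippet computes the Spearman and Kendall coefficients with scipy and draws the dispersion (scatter) matrix discussed below with pandas; the file and column names are again assumptions.

```python
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import kendalltau, spearmanr

df = pd.read_csv("gait_recording.csv")  # hypothetical export
df["TgF"] = (df["gFx"]**2 + df["gFy"]**2 + df["gFz"]**2) ** 0.5

rho, _ = spearmanr(df["gFy"], df["TgF"])
tau, _ = kendalltau(df["gFy"], df["TgF"])
print(f"Spearman rho = {rho:.5f}, Kendall tau = {tau:.5f}")

# Dispersion matrix of the g-force components against the total g-force.
pd.plotting.scatter_matrix(df[["gFx", "gFy", "gFz", "TgF"]], figsize=(8, 8))
plt.show()
```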
From the results obtained, we observe that the correlation between gFy and TgF is high and similar across the three coefficients. These results are for a patient whose gait is healthy. This dependence between variables is even more noticeable when the dispersion matrix is calculated, which is shown in Figure 6: the dispersion matrix graphically shows the relationship that exists between the analysed variables, and in this case there is a strong correlation between gFy and TgF. For the analysis of the users' gait, about 100 people of different ages were recruited; the results for these 100 people, across different age ranges, are presented in Table 4.
IV. DISCUSSIONS
Gait is a complex motor function that requires the interplay of locomotion, balance, motor control, and adequate musculoskeletal function. Gait disturbances are frequent, sometimes with important functional consequences for patients, which is why an adequate assessment is essential. The classic elements of the evaluation of gait or locomotion include the dependent or independent ability to walk, the predominant pattern the patient presents, the identification of the main deficits, the attitude of the trunk and limbs during the cycle, and the need for assistive products. Finally, the description of the type or pattern of gait can help in diagnosing the problems patients present during locomotion. Along these lines, the most common gait patterns in patients with neurological pathology are the following.

Reaper (hemiplegic) gait is a disorder characterized by a flexion posture of the upper limb and extension of the lower limb, so that with each step the leg describes a circumduction movement with activation of the quadratus lumborum; this gait pattern is observed in patients who have suffered cerebrovascular accidents.

Festinating gait is typical of advanced parkinsonian syndromes: the trunk is flexed, the center of gravity is internalized with bent hips and knees, and the arms are in semi-flexion at the elbow; the steps are very short and also fast, as if chasing the patient's center of gravity, and freezing is common when passing through doorways or making turns.

Toe gait: children with idiopathic toe walking do so bilaterally but are able to perform a plantar gait when expressly asked. When walking, the child leans only on the toes, and physical examination shows decreased dorsiflexion of the ankle as a consequence of a shortened Achilles tendon. The neurological examination of a child with toe walking is normal, without evidence of muscle weakness, and the pattern is usually associated with a syndrome of minimal brain dysfunction.

Scissor or paraparetic gait is a disorder characterized by the crossing of the lower extremities at each step, as a result of increased tone or hypertonia in the leg muscles; it is observed in spastic paraparesis, for example in children with cerebral palsy.

Ataxic gait, also called hesitant or wobbly gait, is due to cerebellar involvement and is characterized by hypotonia, incoordination, balance alterations, an increased base of support, and short steps. It should not be confused with a tabetic gait, due to proprioceptive impairment, which presents very loud steps because the patient does not know where their lower limbs are located; nor with the compass or star gait, typical of vestibular impairment, which presents long steps.

Duck or waddling gait is characterized by an exaggerated lateral displacement of the trunk and elevation of the hips when walking; it is typical of people with muscular dystrophy, who present hyperlordosis, falls, and difficulty running, jumping, or getting up.
Steppage gait is characteristic of patients who have difficulty dorsiflexing the ankle, so the foot drops; in order not to drag it during the gait cycle, the patient raises the hip and knee exaggeratedly, and when the foot lands it touches first with the tip. It is produced by an impairment of the muscle group innervated by the external popliteal sciatic (common peroneal) nerve, and therefore of the tibialis anterior.

Trendelenburg gait is the typical gait that appears when the hip abductors are impaired: the patient's pelvis tends to drop on the opposite side during the stance phase, and to avoid falling the patient shifts their center of gravity toward the affected side, moving the trunk and head in that direction; the result is a gait with a lateral lurch toward the affected side. If the patient has dysfunction of both hip abductors, the lateral swaying of the trunk occurs to both sides, which has often been called duck gait and is typical of patients with muscular dystrophy.

Choreoathetotic gait is characterized by rapid, irregular, abrupt movements that worsen with walking, associated with abrupt anterior and lateral propulsion movements of the pelvis that can resemble dance steps; however, falls are unusual, no matter how aberrant this gait pattern may look.
On the other hand, measurements were made for people diagnosed with abnormalities in their gait. For patients with festinating or parkinsonian gait, a Pearson correlation coefficient of 0.56783 was obtained, which is consistent, since the classic appearance is ''shuffling'', caused by a decrease in both the length and the height of the step. For patients with a reaper (hemiplegic) gait, the Pearson correlation coefficient was 0.45567; here we do notice a change in the g-force in the z direction, since the movement is like a scythe and naturally produces a change in the z direction. Patients diagnosed with waddling (duck) gait, characterized by a rocking gait due to disorders in the pelvic area, were also characterized; for this case a Pearson correlation coefficient of 0.49923 was obtained, which makes sense, since the steps are short.
V. CONCLUSION
The design and construction of a simple gait analyzer using the Physics Toolbox application for data acquisition was successfully carried out in this research work, with MATLAB used for data analysis. The objective of this system was to analyze people's walking behavior and thus be able to diagnose health problems caused by the way a human walks, as well as to help diagnose anomalies in the gait itself. For this, the cell phone was placed on the patient's left foot and the Physics Toolbox application was activated with the g-force meter, a ratio of Fn/Fg in the x, y, and z directions. Once the data file was obtained, it was processed in MATLAB. The results are very interesting because they can be used in medical diagnoses such as reduced gait speed and loss of regularity, symmetry, or synchrony of body movements; with this system it is therefore possible to identify the reaper (hemiplegic) gait, the festinating gait, the scissor or paraparetic gait, and the duck or waddling gait, among others. This analysis was carried out using the Pearson, Spearman, and Kendall correlation coefficients, which indicate the possible diagnoses in the patients. | 5,153.4 | 2022-01-01T00:00:00.000 | [
"Computer Science"
] |
Development of a Charge-Multiplication CMOS Image Sensor Based on Capacitive Trench for Low-Light-Level Imaging
This paper presents an electron multiplication charge coupled device (EMCCD) based on capacitive deep trench isolation (CDTI) and developed using complementary metal oxide semiconductor (CMOS) technology. The CDTI transfer register offers a charge transfer inefficiency lower than 10−4 and a low dark current of 0.11 nA/cm2 at room temperature. In this work, the timing diagram is adapted to use this CDTI transfer register in an electron multiplication mode. The results highlight some limitations of this device in such an EM configuration: for instance, an unexpected increase in the dark current is observed. A design modification is then proposed to overcome these limitations; it relies on the addition of an electrode on the top of the register. This new device preserves the good transfer performance of the register while adding an electron multiplication function. Technology computer-aided design (TCAD) simulations in 2D and 3D are performed with this new design and reveal a very promising structure.
Introduction
The latest advances in complementary metal oxide semiconductor (CMOS) technologies have led to their adoption in image sensors. Compared to Charge Coupled Devices (CCD), CMOS image sensors offer a higher level of integration with on-chip CMOS functions and lower supply voltage [1]. However, high sensitivity for low-light-level imaging is still tricky to achieve in CMOS image sensors, especially compared with EMCCD devices. Various kinds of noise sources are present in CMOS image sensors, such as temporal and spatial noise, dark current, and read noise, as well as the degradation of the signal-to-noise ratio when a small amount of photo-generated carriers has to be detected, which is very difficult to avoid. One solution is to amplify the signal before its charge-to-voltage conversion in order to minimize the effect of the noise sources, and especially the dark current. This can be achieved thanks to electron multiplication in an EMCCD [2]. Although a new kind of high-performance low-light-level image sensor has been developed thanks to the advent of the quanta image sensor [3], EMCCDs are still wanted for particular applications such as time delay integration for earth observation.
The EMCCD using impact ionization was proposed in the 1980s [4], and it is known for its good capability to record a low-light-level scene [5][6][7] thanks to the multiplication of the amount of charge in the CCD register with good noise control. The latest advances in this topic have been achieved thanks to the use of a buried channel and short poly-gap distances. For instance, the following performances have been demonstrated recently: a gain of 3% per stage, a CTI of 0.01, a dark current of 2-3 nA/cm 2, and a Full Well Charge (FWC) of 116 ke − [8]. Although a few other CMOS EMCCDs have been developed on various technologies [9][10][11][12], none of them relates to the development of an EMCCD with deep-trench-based devices. Therefore, in this paper, we propose an innovative structure based on Capacitive Deep Trench Isolation (CDTI) [13]. The goal is to demonstrate a well-controlled charge multiplication in such a device.
A trench CCD was proposed in 1989 [14], in which electrons are carried between two trenches, but its development seems to have been discontinued. Then, a CCD-on-CMOS trench structure based on CDTI was developed in 2020 [15], and this device is used for the present study. In this particular concept, CDTIs shape the electrostatic potential, leading to carrier displacement in a buried channel mode. This device presents attractive characteristics such as a Charge Transfer Inefficiency (CTI) lower than 10 −4 [15], a FWC of 57.6 ke −, a dynamic range of 78 dB [16], and a dark current of 0.11 nA/cm 2 at room temperature for a pixel size of 3 × 12 µm 2. Therefore, this register combines a low CTI, a high FWC, and a low dark level. For comparison, state-of-the-art characteristics of CCD-on-CMOS devices are a CTI of 1 × 10 −5, a FWC of 30 ke −, and a dark current of 3.7 nA/cm 2 [17].
Therefore, the aim of this new development is to propose an EMCCD device based on an innovative architecture, with state-of-the-art performance concerning the dark current and the CTI. The advantages and drawbacks of this device for electron multiplication are studied experimentally and with TCAD simulations. Furthermore, a new device structure is suggested and investigated.
Experimental Setup
The CDTI structure under investigation is a two-phase structure based on two kinds of CDTI: storage CDTIs along the path of the charge transfer, and transverse CDTIs separated by a pass and perpendicular to the storage CDTIs (see Figure 1a). The transverse CDTI is used to create a lower potential barrier between phases by design. Thus, the charges can be stored in one phase even when adjacent phases are biased at the same voltage, and the transfer happens in one preferential direction, as shown by the schematic representation of the potential in Figure 1b. Passivation of the Si-SiO2 interface defects at the CDTI edges is achieved by biasing the CDTI in inversion at V CDTI = −1 V in the low state, while the transfer is performed at V CDTI = 3 V. In inversion, a thin hole layer accumulates at the Si-SiO2 interfaces, thus passivating the trench surfaces and leading to a very low dark current generation [18].
The n-well doping profile has a maximum doping concentration at about 1 µm depth, which makes charge storage possible deep in the volume of the silicon, far from the surface. The n-well doping concentration has been adjusted so as to allow a full depletion of the buried layer, as described in reference [15]. The n-well storage and transfer volume is delimited by a p+ pinning implantation on the top, by the p-epitaxial layer at the bottom, and laterally by the CDTI gates. The CDTI trenches are filled with p-doped polysilicon and contacted at their surface. CDTIs have a depth of 2.1 µm and a width of 0.2 µm.
The transfer register is operated with the two-phase timing diagram presented in Figure 2. To transfer the integrated signal from phase 1 to phase 2, the latter phase is biased to a positive voltage. The surface potential as well as the depletion potential between the CDTIs are lowered compared to the potential of the first phase. The electrostatic potential gradient then allows the charge transfer from phase 1 to phase 2, similarly to classical CCDs. The charge transfer device is based on three transfer fingers fabricated in STM IMG140 technology between the input stage and the output stage, as presented in Figure 3. The three transfer fingers are used here to increase the charge transfer capacity. Several design variations of the two-phase shift register have been fabricated, addressing the pixel-stage pitch, the number of stages, and the pass width. The nominal design has a pitch of 12 µm, 220 stages, and a pass width of 0.2 µm. The input stage relies on an n+ drain injection node, followed by a sampling barrier CDTI gate φin, allowing the generation of synchronous charge pulses using a common diode cut-off technique [19]. The output stage is based on a DC-biased decoupling gate V2, followed by an n+ floating diffusion. The input and output stages are surrounded by CDTIs VF and VL to avoid parasitic collection. The floating diffusion is reset by means of a reset transistor and is read thanks to a source follower, as can be found in pixel arrays. The CDTI register is characterized at room temperature using a Cascade semi-automatic prober equipped with a probe card and a Pulse Instruments data generator. The charge-to-voltage Conversion Factor (CVF) is evaluated with the common mean-variance method [20] at 22 µV/e −. As this CVF does not allow us to reach the FWC because of the output stage saturation, a second, lower CVF is implemented and activated once the output voltage exceeds 600 mV. This latter CVF allows the full FWC evaluation. In order to quantify the correct number of electrons beyond 600 mV, a corrective factor is applied. The CTI is evaluated using the Extended Pixel Edge Response (EPER) method [20,21], which consists in measuring the deferred charges following an injected charge sequence. The dark current is measured by subtracting the reference level from the signal level without injection, in operating conditions alternating between a low and a high state as shown by the timing diagram.
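To make the mean-variance method concrete, the following is a minimal sketch of the idea: for a shot-noise-limited signal, the temporal variance of the output voltage grows linearly with its mean, and the slope of that line is the CVF in V/e−. The synthetic data below are placeholders fabricated around the 22 µV/e− value quoted in the text, not real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
cvf_true = 22e-6                       # assumed CVF (V/e-) used to fabricate data
means = np.linspace(0.05, 0.5, 10)     # mean output voltage per injection level (V)

# 200 frames per level; shot noise gives var(V) = cvf * mean(V).
frames = rng.normal(loc=means[:, None],
                    scale=np.sqrt(cvf_true * means)[:, None],
                    size=(10, 200))

mean = frames.mean(axis=1)
var = frames.var(axis=1, ddof=1)

cvf_est, _ = np.polyfit(mean, var, 1)  # slope of variance vs mean = CVF
print(f"estimated CVF ~ {cvf_est * 1e6:.1f} uV/e-")
```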
Preliminary Measurements
CTI and dark current measurements are performed to evaluate the performance of the CDTI register, which had never before been operated in an avalanche configuration in any study. The CTI measurement is shown in Figure 4 and achieves low values, under 1 × 10 −4. A CTI increase is visible at low injection, as is usually the case in CCDs, due to the presence of trapped charges. At high injection, the CTI also increases, which is attributed to the change between the buried and surface transport regimes. A dark current of 0.11 nA/cm 2 at room temperature is also measured, comparable to the best CCD-on-CMOS transfer registers. These measurements demonstrate the very good performance of the CDTI register operated in transfer mode.
Experimental Setup
In order to use the CDTI register under impact ionization conditions, some adjustments to the timing diagram need to be made. First, the electric field has to be strong enough to reach impact ionization; usually it needs to be greater than 1 × 10 5 V/cm [8][9][10]. Second, all charges to be multiplied have to go through a high and constant electric field during the transfer, with the aim of reaching a sufficient ionization rate [22]. The latter involves the creation of an intermediate storage well isolated from the phase and submitted to the high potential via a fixed potential barrier, as represented in Figure 5a. In our case, the storage and transverse CDTIs can be operated independently, so phase 1 can be used as a temporary storage well and phase 2 as a multiplied-charge collection well, while the transverse CDTI φ02 represents a charge transfer barrier. By doing so, the potential between φ02 and φ2 is kept constant during the avalanche, as required. The adapted timing diagram is presented in Figure 5b. At the beginning of the multiplication phase, the charges are stored under phase 1 with the CDTIs φ1 biased at 3 V; then φ02 is biased at 3 V and φ2 is biased at a high voltage between 3 V and 15 V. The avalanche process begins when φ1 is biased at −1 V and the charges leave the temporary storage well to go through φ02. The impact ionization occurs between φ02 and φ2, where the high electric field region is located. Following this charge multiplication step, the charges are stored in an inversion condition in the volume between the CDTIs φ2 and can be transferred normally from phase 2 to phase 1.
Measurements
The measurements of the output signal as a function of the injected signal in avalanche mode are presented in Figure 6a. Two aspects can be observed in this result. First, for a small injection level (charge < 30% FWC), the output signal increases with the φ2 bias. On the contrary, for a strong injection level, the output signal decreases as the φ2 bias increases. The net signal is the subtraction of the dark signal from the output signal and, as shown in Figure 6b, this net transferred signal decreases with the increase in φ2, a phenomenon that is even more pronounced at a strong injection level. It suggests the absence of impact ionization of the injected charges. In order to investigate the root cause of these results, TCAD simulations with Synopsys Sentaurus 2020.09-SP1 software have been performed. To reduce the complexity and the simulation time, only one stage is simulated in 2D, with doping profiles extracted from a Sprocess simulation and with the same timing diagram and biases as used for the measurements. The injection is performed by controlling the bias of the n+ injection drain. To deplete the structure, the n+ output drain is biased at 3 V and one transfer cycle is simulated to empty the register. The device structure is simulated with the Structure Editor tool and electrical simulations are performed with Sdevice, with the following models activated [23]: hydrodynamic models for current densities, the Philips unified mobility model with doping dependence, band-to-band, Auger, and SRH recombination with doping dependence, and an avalanche model for impact ionization.
In order to understand the decrease in the output signal at a large injection level, potential profiles between two storage CDTIs are extracted from the TCAD simulations at one injection level, as a function of the φ2 bias. The results are displayed in Figure 7. Two different situations can be observed: a buried channel regime, where the charges are confined between and away from the two CDTIs, and a surface regime, where the charges are localized near the surfaces of both CDTIs. The change from the buried channel to the surface regime defines the FWC of the device, depending on φ2. Increasing the φ2 CDTI bias has the effect of shrinking the potential well, since the surface pinning dominates over the vertical p+/n/p pinning. Therefore, surface trapping is quickly promoted for large biases on φ2.
To investigate the increase in the output signal at a small injection, the TCAD simulations allow the extraction of electrostatic potential and electric field maps during the avalanche process. As can be seen in Figure 8, the high electric field regions are located between CDTIs φ02 and φ2, close to the interfaces where the dark current is strongly suspected to be generated. An electric field of 2 × 10 5 V/cm, high enough to create an impact ionization regime, is reached in the injected charge path (between CDTIs φ02) at φ2 = 16 V, while a higher electric field is reached between φ02 and φ2, where the dark current generation is located. Therefore, as soon as φ2 is equal to or higher than 6 V, impact ionization of the dark charges occurs because of the poor localization of the high electric field, and a strong increase in the dark current is visible. Moreover, tests with different integration times have been performed and show a linear increase in the dark signal with the integration time, indicating the presence of a strong dark current. It can be noted that the strong electric field is also present at the top of the CDTI, where the p+ pinning layer is located. We can therefore reasonably think that this device is also affected by clock-induced charge (CIC), the latter being linked to the multiplication of holes under the effect of the strong electric field [24].
To conclude, two effects are observed here and hide the avalanche of the injected charges: the decrease in the FWC for large injections, and the increase in the dark current for a low injection level. In order to have a well-controlled multiplication, the excess noise factor (ENF), i.e., the ratio between the noise with and without the avalanche, must not exceed 1.4 [9]. In our case, for 220 stages, 60 noise e − can be measured without the avalanche, so about 84 e − at most should be measured with the avalanche activated. These values are largely exceeded. In the following, two solutions are tested with the aim of suppressing these limitations.
FWC Optimization with Larger Pitches
First, as the FWC is strongly reduced by the increase in the φ2 bias, registers with larger pitches are characterized with the goal of increasing the FWC of the device. Three design variations are tested in addition to the nominal design: registers with 440 stages and a 6 µm pitch, 110 stages with a 24 µm pitch, and 55 stages with a 48 µm pitch. The results are presented in Figure 9, where the FWC signal and the dark current signal are plotted as a function of the φ2 bias for the different stage pitches. As expected, the FWC increases when the stage pitch increases. Thus, for a large stage pitch, a higher φ2 bias can be used; however, the dark current still limits the rise in the φ2 bias because it increases exponentially. Therefore, a larger pitch is not a solution for obtaining the avalanche in these devices, because of the strong dark current multiplication.
Dark Current Optimization with Lower Temperature
To prevent the dark current increase, measurements at lower temperatures are performed, as presented in Figure 10. To do so, the devices are packaged in order to be usable in a thermal chamber, where the temperature is varied from 20 °C to −40 °C. A noticeable effect on the FWC can be observed here: the FWC increases linearly as the temperature decreases. This can be attributed to electrons escaping the potential well through thermionic emission, a phenomenon that increases with temperature. Indeed, if the electron energy is high enough, it can jump into the Si-SiO2 interface region, lowering the FWC [20]. In addition, the freezing of the interface states may slow down the capture and emission processes of traps, contributing to the observed tendency. Unfortunately, the dark current at a high φ2 voltage does not seem to be affected by the temperature reduction. The small shift at high φ2 voltage is attributed to the FWC variation, as more dark-generated electrons can fill the larger FWC. For low φ2 biases, a reduction in the dark current can be observed from 20 °C to 0 °C, showing that temperature has an influence on the thermo-generated dark current, which means that the increase observed at high φ2 biases is more likely due to CIC. Consequently, it will not be possible to avoid dark current multiplication in this device by simply lowering the temperature.
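To illustrate why a purely thermo-generated dark current would respond strongly to cooling, the following back-of-the-envelope sketch applies the textbook depletion-generation scaling I_dark ∝ T^(3/2) exp(−Eg/2kT). This is a generic silicon model, not one fitted to the device in this paper; it only supports the argument that the residual high-φ2 dark signal must come from CIC rather than thermal generation.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant (eV/K)
E_G = 1.12       # silicon bandgap (eV), assumed temperature-independent here

def relative_dark_current(t_celsius, t_ref_celsius=20.0):
    """Thermo-generated dark current relative to its value at t_ref."""
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    return (t / t_ref) ** 1.5 * np.exp(-E_G / (2 * K_B) * (1 / t - 1 / t_ref))

for t_c in (20, 0, -20, -40):
    print(f"{t_c:>4} degC : I_dark / I_dark(20 degC) = {relative_dark_current(t_c):.3g}")
```

Under this model, cooling from 20 °C to −40 °C suppresses the thermal component by more than two orders of magnitude, so a dark signal that survives cooling points to a non-thermal origin such as CIC.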
The solutions tested in this part were therefore not effective to achieve charge multiplication in these structures.For this reason, a design modification is proposed in the next section.
Modified CDTI Structure Investigated Using TCAD Simulations
The problem of the CDTI avalanche structures is that they present large CDTI interface areas that are not in the path of the injected charge yet are exposed to the high electric field. This leads to an important dark current generation and CIC, which dominate the signal and prevent any observation of charge multiplication. For comparison purposes, 0.5 µm 2 of the silicon-oxide interface is exposed to the strong field in the planar-electrode design proposed by Dunford et al. [11], while 7.2 µm 2 are exposed to the high electric field in the CDTI registers. One should also note that the oxide quality of CDTIs might be worse compared to surface gate oxides, leading to a higher dark current generation. With the aim of reducing the CDTI Si-SiO2 interface area exposed to the high electric field, solutions involving only CDTI components have been rejected, because they do not allow a strong electric field in the charge injection path and lead to large interface areas under the high electric field. Based on this conclusion, a new hybrid structure between the CDTI register and classical CCD registers is suggested.
This structure is presented in Figure 11a. The modification consists in the addition of a top electrode (TE) above phase 2, biased at a high voltage to obtain the avalanche, while all other CDTIs remain at normal biases. The CDTI transfer concept is kept, and the device retains the benefit of the very good transfer properties and low dark current in storage mode. The main advantages are the reduction of the surface exposed to a strong electric field to 0.12 µm 2 and the use of a surface gate oxide, rather than CDTIs, under the high electric field, which is known to offer a better interface quality and, therefore, a lower dark current. In this structure, the strong electric field is at the same place as the multiplication of the charges, avoiding the problem of the multiplication of dark charges. This hybrid structure involves two operation modes, which can be seen in the timing diagram in Figure 11b: first, an avalanche mode, where the top electrode is biased at a high voltage and the charges are attracted near the TE surface; then, a classic buried channel charge transfer mode, when TE is not activated.
The new structure is studied using 3D TCAD simulations with the timing diagram presented in Figure 11b. The CDTIs are biased at −1 V in the low state and 3 V in the high state, in the same way as previously for the charge transfer. The TE electrode is set at 0 V when disabled and is biased up to 15 V for the charge multiplication. Electric field cross-sections with the top electrode activated at 10 V or disabled at 0 V are presented in Figure 12. Streamlines are also shown; they represent the path followed by electrons guided by the quasi-fermi gradient from a starting point in the previous storage well. The streamlines show a correct charge transfer from one phase to the other in transfer mode, and from one phase to TE in avalanche mode. This observation is valid for V TE ≥ 8 V. For a lower TE bias, the electrons remain partly trapped in potential pockets in the buried channel. As the structure is validated regarding the charge transfer behavior, it can be studied more precisely for charge multiplication through variations of the TE position and size as well as its optimum operating voltage. The optimization of the TE size and position is performed with the simulations displayed in Figure 13. This figure shows a planar cut at 0.05 µm depth from the top of the register for a better understanding of the electric field distributions. A comparison between a large electrode of 2.4 × 0.9 µm 2 (Figure 13a) and a small electrode of 0.6 × 0.2 µm 2 (Figure 13b) is represented, as well as the small electrode at different positions under phase 2 (Figure 13c,d). In order to limit the charge multiplication of the dark current, the high electric field should be far from the CDTIs. At V TE = 15 V, in the case of the large electrode, an electric field of 3.2 × 10 5 V/cm is reached on the edges of the CDTI, whereas it is equal to 1.6 × 10 5 V/cm in the center. Since the injected charges arrive mainly at the middle of the electrode while the dark current comes from the CDTI interfaces, this structure would induce a higher multiplication of the dark charges compared to the injected charges. For the small electrode centered in the middle of phase 2, TE is far enough from the CDTI edges to avoid this phenomenon, and the multiplication of the dark current should be limited. Moreover, the electric field must be as homogeneous as possible under the whole TE area, in order to obtain a similar impact ionization under the TE center and near the edges; the goal is to avoid areas of high electric field outside the localization of the signal electrons. According to these simulations, an electric field variation between the center and the edges of 33% with the small electrode and 50% with the large electrode is visible. Therefore, a small electrode at the center of the phase is preferable to limit the dark current generation.
In order to confirm this, additional 2D simulations are performed with the aim of extrapolating the dark current generation in avalanche mode. For this purpose, the TCAD simulation is calibrated according to dark current measurements performed on the same technology:
• dark current on the CDTI structure up to V φ2 = 11 V; the test structure is identical to the one presented in Figure 8;
• dark current on a surface gate structure up to V SG = 8 V.
A cross-section view of the simulated and measured structure is given in Figure 14. For these simulations, the surface SRH model is activated and the parameter S 0 controlling the surface recombination velocity [25] is calibrated at a low gate voltage; traps are then introduced if necessary at the Si-SiO2 interfaces, with tunable concentrations, in order to match the measurements at all gate voltages. The 3D dark current value is deduced through multiplication by the third dimension of the simulated device. The calibration leads to the following parameters for the CDTI interface: S 0 = 2000 cm/s and [trap] = 3.65 × 10 12 /cm 2. For the surface gate structure, it is found that S 0 = 20 cm/s and that no traps are needed, due to the better oxide quality. Then, taking into account these calibrated parameters and the geometry of the new structure with the smallest TE gate, it is possible to predict the dark current generation in avalanche mode. The results are shown in Figure 15 and demonstrate a much lower dark current with the new structure. Indeed, on the one hand, the oxide area under the high electric field is reduced, and on the other hand, the oxide under the high electric field is of higher quality, so the dark current is lower. The effect is even more pronounced if the storage CDTI φ2 is biased at 0 V during the avalanche process; however, this latter configuration still has to be demonstrated by measurements. In order to determine the minimum TE operating voltage needed to obtain the avalanche, simulations of the electric field under the TE electrode as a function of the TE voltage are performed. For this purpose, the oxide thickness used under TE is 18 nm, with the goal of avoiding any oxide breakdown. The electric field value is extracted at the depth where the quasi-fermi gradient is maximum. The resulting curve of the electric field as a function of the TE voltage is presented in Figure 16 and shows that the electric field under TE exceeds 1 × 10 5 V/cm for V TE ≥ 8 V. From this voltage, the electrons are exposed to a sufficient electric field over a distance ranging from 30 to 100 nm; thus, the avalanche regime should be reached under these conditions. The expected impact ionization coefficient at V TE = 12 V, inducing an electric field of 2.3 × 10 5 V/cm and taking into account the analytical models given by Moll and Overstraeten [26], is 1800 cm −1. Knowing that the register has 220 stages, this should lead to an expected multiplication gain of 50.
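As a quick sanity check of the gain quoted above, one can model each of the 220 stages as multiplying the signal by (1 + p), with p = α·d, where α is the impact ionization coefficient (1800 cm −1 at V TE = 12 V) and d is the length of the high-field region crossed by the electrons. The value of d is an assumption taken from the 30-100 nm range given in the text, not a measured parameter.

```python
N_STAGES = 220
ALPHA = 1800.0  # impact ionization coefficient (/cm) at V_TE = 12 V

for d_nm in (30.0, 100.0):
    d_cm = d_nm * 1e-7              # nm -> cm
    p = ALPHA * d_cm                # per-stage multiplication probability
    gain = (1.0 + p) ** N_STAGES    # compound gain over the register
    print(f"d = {d_nm:5.1f} nm -> p = {p:.4f}, gain = {gain:.1f}")
```

With d at the upper end of the quoted range, the compound gain indeed comes out close to the expected factor of 50.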
This newly developed structure looks very promising, as it combines the advantages of the CDTI register with the impact ionization feature. However, measurements have to be carried out on that device in order to confirm the TCAD simulations and to evaluate the EMCCD key parameters. A test vehicle including this design is under realization and should be characterized in the next few months.
Conclusions
The CTI and the dark current have been measured on a CDTI transfer register, showing a CTI lower than 10 −4 and a dark current of 0.11 nA/cm 2. This register has been tested with an adapted timing diagram in order to obtain an electron multiplication similar to that of EMCCD devices. Several problems were highlighted, such as the decrease in the FWC and the strong dark current increase as the CDTI gate voltages are raised to reach the avalanche state. In particular, the strong dark current generation is imputed to the large CDTI Si-SiO2 interfaces, which cannot be reduced. These limitations do not allow the observation of signal multiplication in these structures, so a modified structure has been proposed, based on a hybrid design between a classical CCD and CDTI. This new structure keeps the very good transfer and dark current properties of the CDTI registers, and the TCAD simulations show a drastic reduction in the dark current thanks to the smaller Si-SiO2 areas subjected to the high electric field and to the use of a surface oxide instead of the CDTI oxide. This structure is currently under realization, and the first measurements should be performed in the next few months. This work demonstrates that the adaptation of a known 2D EMCCD device to a vertical, three-dimensional EM-CDTI device is not straightforward, although the CDTI device offers competitive performance.
Figure 1. (a) Top schematic view of the CDTI phase register and (b) schematic representation of the electrostatic potential (U) profile along the transfer path. The schematics are redrawn from reference [15].
Figure 4. CTI measured by EPER against the injected charge, with a pulse duration of 1 µs and rising/falling edge times of 10 ns.
Figure 5. (a) Schematic representation of the electrostatic potential (U) profile along the transfer path in avalanche mode and (b) timing diagram in avalanche mode for the CDTI register with low- and high-state potential (U), a pulse duration of 2 µs, and rising/falling edge times of 10 ns.
Figure 6. (a) Output signal measurement and (b) net electron signal as a function of the injected signal for φ2 voltages between 3 V and 11 V after 220 stages.
Figure 7. (a) Planar cross-section at 1 µm depth of a 2D TCAD simulation of the electrostatic potential map. (b) Simulated potential profile between two CDTIs as a function of φ2 for a given injection level, showing the buried channel and the surface regime.
Figure 9. FWC signal and dark current signal after 220 stages as a function of the φ2 voltage for different stage pitches. The dark current curves overlap.
Figure 10. FWC signal and dark current signal as a function of the φ2 voltage for different temperatures (20 °C, 0 °C, −20 °C, and −40 °C). The inset shows a zoom-in on the dark current signal as a function of the low φ2 voltage.
Figure 11. (a) Schematic of the new hybrid structure with the addition of the top electrode and (b) the associated timing diagram, with a pulse duration of 2 µs and rising/falling edge times of 10 ns.
Figure 13. Planar cut at 0.05 µm from the top of the register of the electric field TCAD simulation, with different sizes of TE biased at 15 V.
Figure 14. TCAD 2D distribution of the doping concentration of the surface CCD device used for surface gate oxide calibration. SG is the storage gate, RST the reset transistor, FD the floating diffusion, and P+ the ground contact.
Figure 15. Comparison of dark current measurements on the CDTI structure and simulations of the new CDTI structure including the TE gate, for two CDTI biases. ''Old'' refers to the initial CDTI register and ''new'' to the new design with the TE gate.
Figure 16. Electric field under TE vs the TE voltage, at the depth where the quasi-fermi gradient is maximum, for an oxide thickness of 18 nm. | 7,064.4 | 2023-11-30T00:00:00.000 | [
"Engineering",
"Physics"
] |
“Internal and external determinants of Iraqi bank profitability”
The determinants of bank profitability are very important, as bank profitability significantly affects the economies of countries. This study aims to examine the internal determinants (bank-specific characteristics) and external determinants (macroeconomic factors and government variables) of bank profitability in Iraq. The study uses unbalanced panel data from 18 conventional banks in Iraq, selected for their data availability, over thirteen years from 2005 to 2017, and the relationship is estimated using a fixed effects approach. Based on the panel data method, the results show that bank size, the equity-to-total-assets and total-loans-to-total-assets ratios, GDP growth, and government effectiveness have a significant and positive impact on the profitability of Iraqi banks. Meanwhile, credit risk, inflation, the interest rate, unemployment, and political instability have a significant negative influence on bank profitability. To the authors' knowledge, this study is one of the earliest of its kind to determine the main factors affecting Iraqi bank profitability. As such, this paper makes a significant contribution to the theoretical literature, the industry, and policymakers, so that the performance of Iraqi conventional banks can be improved.
INTRODUCTION
In the context of most developed and less developed countries, traditional, or conventional, banks have become the basis of financial sectors. In developing and emerging market countries, banks stand out as dominant financial institutions. Many countries have characteristics such as low per capita income and asset levels, lax accounting standards, and a corporate sector primarily driven by small, family-owned businesses. Since developing countries lack the necessary infrastructure, it is not surprising that banks and other financial intermediaries have become superior in the financial arena, and capital markets have not developed fast enough (Sharma, Chami, & Khan, 2003).
In most MENA countries, including Iraq, money and stock markets are currently underdeveloped. Consequently, commercial banks play a significant and essential role in regional economies. Therefore, a fragile banking system, which can be exacerbated by low profitability, can damage the entire financial system of the affected country or even spill over to other countries, especially in the case of international banking operations. Banks hold a substantial amount of deposits from households, private companies, government sectors, and other institutions. Performing the function of financial intermediation, banks redirect funds from savers to borrowers, contributing to macroeconomic activity, which usually stimulates economic growth. Banks are also a channel through which monetary policy can be pursued and implemented.
LITERATURE REVIEW
The bank performance literature goes back more than three decades, building on the market power theory and the efficiency structure theory (Athanasoglou, Brissimis, & Delis, 2008). The market power theory argues that profits arise when market forces are stronger. In contrast, the efficiency structure theory assumes that profits come from effective management, which in turn leads to higher concentration.
In past decades, various studies were conducted on the factors affecting bank profitability in developed and developing economies. Nonetheless, no study has explored the determinants of bank performance in Iraq. The factors influencing bank profitability can be divided into several groups, namely bank-specific variables, economic indicators, and government variables; these areas are therefore considered in the literature review. The bank-specific characteristics related to bank performance are bank size, equity to total assets, liquidity, credit risk, and total loans to total assets. The literature further argues that the critical external factors influencing bank profitability are economic indicators (for instance, GDP growth, the interest rate, the inflation rate, and the unemployment rate), as well as government variables (e.g., regulatory quality, political instability, and government effectiveness).
Reviewing the literature made it possible to identify the gaps this study addresses, as well as the shortcomings of the existing empirical studies.
Bank-specific characteristics
Many studies have been conducted using data from different nations. Bank size, the equity-to-total-assets ratio, liquidity, credit risk, and the total-loans-to-total-assets ratio are the characteristics that relate to bank profitability. Previous studies have recorded both positive and negative influences of bank size on bank performance. Bukhari and Qudous (2012), Naceur and Omran (2011), and Athanasoglou et al. (2008) report, however, that the size of banks does not have any influence on performance. In particular, De Andres and Vallelado (2008) state that huge banks enjoy low costs and therefore high market power. Thus, the bank size hypothesis is as follows:

H1: Bank size has a positive and significant impact on Iraqi banks' profitability.
It has been predicted that well-capitalized banks are safer and will therefore earn lower profits (Athanasoglou et al., 2008); empirically, however, the literature tends to show a positive association between the capital ratio and bank profitability. Buser, Chen, and Kane (1981) argue that, theoretically, banks with a high franchise value tend to hold more capital and need to be well capitalized. Previous empirical studies establish that the equity-to-total-assets ratio is positively and significantly associated with bank profitability. Consequently, the next hypothesis is as follows:

H2: A high proportion of equity to total assets leads to high bank performance.

Miller and Noulas (1997) pointed to an inverse association between credit risk and bank performance; that is, a higher proportion of loans to total assets tends to make a bank more susceptible to doubtful debts, which brings down profit margins. Nevertheless, Valverde and Fernandez (2007) show that credit risk has a significant positive influence on bank profitability, and Bukhari and Qudous (2012) find a significant association between credit risk and Pakistani banks' profitability. Theory proposes that a high level of credit risk is usually associated with lower bank performance, and therefore the authors suggest that:

H3: Credit risk negatively impacts bank performance.
Low liquidity can force banks to borrow at penal rates, at which point their reputation becomes critical. The findings of previous studies are recognized as inconsistent: while some researchers pointed to a significant adverse association between the liquidity ratio and bank profitability, several scholars found a positive link between liquidity and bank performance. Meanwhile, Bukhari and Qudous (2012) found that liquidity does not affect performance.
H4: The liquidity ratio is significantly related to bank performance.
The proportion of total loans to total assets (TL/TA) is considered as a proxy for bank asset quality. A higher ratio leads to a deterioration in the quality of bank assets, as banks hold provisions because they expect losses after defaults in the credit portfolio (Poghosyan & Cihak, 2009). Previous empirical studies reveal a significant positive association between the total-loans-to-total-assets ratio and bank profitability (Şanlısoy, Aydın, & Yalçınkaya, 2017). However, these data contradict the results of Vong and Chan (2009). Meanwhile, Liang, Xu, and Jiraporn (2013) found an insignificant association between the ratio of total loans to total assets and bank profitability. Thus, the hypothesis is:

H5: The total loans to total assets ratio is significantly and positively associated with bank profitability.
Economic factors
External factors are economic indicators that are beyond the control of a bank and affect bank profitability. Bank performance may be affected by one of the leading macroeconomic indicators, namely economic growth, or GDP growth. Some scholars find a significant negative association between GDP growth and firm profitability. In theory, however, GDP growth lowers the risk of default on bank loans and makes people more demanding of banking services, thereby improving bank profitability. Therefore, the next hypothesis is as follows:

H6: GDP growth is positively and significantly related to Iraqi banks' profitability.
The inflation rate reflects the change in the price level over the last period. Colander (2001) argued that the price level is an index of all prices in the economy, making it a common tool as an inflation index. The CPI measures the price of a fixed basket of consumer goods, weighted by the proportion of each component in average consumer spending. The influence of inflation on bank performance depends on whether the inflation rate is unanticipated or expected (Perry, 1992). First, in the case of expected inflation, banks can adjust interest rates in time, so revenues can increase faster than costs, which has a positive influence on profitability. Second, in the case of unanticipated inflation, banks may not adjust interest rates quickly, so bank spending will grow faster than bank returns; in effect, this will have an inverse impact on bank profitability. Many practical studies revealed a positive and significant association between inflation and bank profitability (Gyamerah & Amoah, 2015; Noman et al., 2015). Nevertheless, some other studies found that inflation had a negative impact on bank performance (Bilal, Saeed, Gull, & Akram, 2013). That said, in theory, a positive influence of inflation on bank profitability is expected, because a high inflation rate is associated with high bank performance; hence, it is hypothesized that:

H7: Inflation is positively and significantly related to bank performance.
Regarding the interest rate, a substantial body of empirical research exists. Unemployment can be another macroeconomic determinant of bank profitability. Unemployment means the proportion of the labor force that is out of work, and the unemployment rate is commonly a key factor in assessing economic conditions. A higher level of unemployment affects the cash flow streams of households and is also an indicator of the relationship between production and demand (the lower the production, the lower the effective demand); this situation will lead to a decrease in firm revenue. Furthermore, an increase in the unemployment rate leads to lowered total demand and an increased loan default rate; in effect, the firm's profit will be at stake (Heffernan & Fu, 2008). Bordeleau (2010) argued that unemployment adversely influenced bank performance, while Ferrouhi (2017) and Owusu-Antwi, Mensah, Crabbe, and Antwi (2015) revealed that unemployment did not have any effect on bank profitability. Given this inconsistency, the hypothesis is established as follows: H9: A high percentage of unemployment leads to deterioration in bank performance.
Government variables
Government effectiveness, political stability, and the quality of regulation are equally vital for banking activities. However, few studies have explored these areas and analyzed their impact on bank performance. Moreover, Berger, Clarke, Cull, Klapper, and Udell (2005) argue that, with the same approach to banks' performance, the static, selection, and dynamic effects of all forms of governance are important for bank performance. Three indicators used in the literature, namely, regulatory quality (REGQU), political instability (POLINS), and government effectiveness (GOVEF), will be the focal point, as they indicate the degree of institutional harmony.
As for the influence of regulation on bank profitability, the literature yields inconclusive results: some authors highlight a positive influence of regulation on bank performance, while others show a negative effect. Regulation mitigates the impact of managerial decisions on shareholder wealth, substituting for internal control mechanisms that are unable to soften agency conflicts. The presence of regulatory authorities interfering with the discipline of a leader limits the latter's discretion. Demirguc-Kunt, Laeven, and Levine (2004) found low financial intermediation costs in countries with better property rights, stronger judicial systems, and greater commitment to contract enforcement.
However, Barth, Caprio, and Levine (2001) noted that bank nationalization correlates adversely with banking sector development and positively with bank inefficiency measures. As such, Arun and Turner (2002) argued that the inefficiencies associated with bank management forced governments in developing countries to retreat slowly from the banking sector. In this work, based on the arguments of these authors, it is established that: H10: Regulation is significantly and positively related to bank performance.
Political instability is another type of country-specific risk that can change economies, bank outputs, and performance. Yahya, Akhtar, and Tabash (2017) examined the influence of political instability on the profitability of banks in Yemen and found a positive association. Nevertheless, Şanlısoy et al. (2017) analyzed the influence of political instability in Turkey and found a significant negative relationship. Based on banks from MENA countries, Ghosh (2016) studied the association between political instability and bank performance and found an inverse association. Likewise, Jebnoun (2015) explored the impact of political instability in Tunisia and confirmed a significant negative relationship. Hence, the next hypothesis is as follows: H11: Political instability negatively and significantly influences bank performance.
Regarding legal implementation and regulatory power, Levine, Loayza, and Beck (2000), in their cross-country examination of South East Asian banks, indicated that government restraints allowed banks to increase their credit facilities and retain large market shares, which brought higher returns. La Porta, Lopez-de-Silanes, Shleifer, and Vishny (1998) studied bank performance determinants and found that a poor legal system fails to protect creditors, which leads to a decrease in bank performance in the economy. In the same vein, Demirguc-Kunt et al. (2004) found that better legal and regulatory systems are associated with less corruption, reducing the frictions and shortcomings that are common in the financial system. As for Asian banks, one can expect that fragile law enforcement and high corruption will improve when effective regulatory and legal systems appear, eventually supporting a positive association with bank performance. Likewise, Chan and Abd Karim (2016) revealed that government effectiveness and bank efficiency are positively associated. In the same vein, Lensink and Meesters (2007) discovered that government effectiveness reduces banks' costs of dealing with bureaucracy. Thus, it is hypothesized that: H12: Government effectiveness has a positive influence on bank profitability.
Data and sample
Appropriate variables expected to affect the performance of banks were selected after considering the Iraqi economy and following the literature. The data for the study were collected from annual reports, World Development Indicators (WDI), and Worldwide Governance Indicators (WGI). Bank-specific variables are bank size, the equity to total assets ratio, the liquidity ratio, credit risk, and the total loans to total assets ratio; the data on these variables were obtained from the annual reports published on the ISX and on banks' websites. The economic indicators include GDP growth, inflation rate, interest rate, and unemployment, all obtained from the WDI. In addition, governance data were obtained from the WGI. Banks with data for less than ten years were removed from the study sample. The study sample included Iraqi listed commercial banks with data available for the study period; 18 commercial banks were involved, and the panel is unbalanced because data for some banks were not available for the entire period.
Variable measurement
Since the banking sector consists of various categories of banks, both external and internal factors determine bank profitability. As noted earlier in the literature on bank profitability, bank performance is usually tested by three measures: NIM, ROA, and ROE. As shown in this paper, appropriate independent variables expected to affect the performance of banks were selected with reference to the current economic situation in Iraq and in accordance with previous literature. The definition and measurement of the study variables are shown in Table 1.
Model specification
This paper explores the potential determinants of performance for Iraqi banks using a panel data approach. The panel data approach sheds light on the heterogeneity of independent variables and yields more precise findings through a larger number of observations (Wooldridge, 1999; Baltagi, 1995). Using the panel data approach to study the critical determinants of Iraqi bank performance is what is newly introduced in this paper.
The panel data specification can be written as follows:

Y_it = β_0 + β_1 BNKZ_it + β_2 ETA_it + β_3 LIQU_it + β_4 CRDR_it + β_5 TL/TA_it + β_6 GDPG_t + β_7 INFLR_t + β_8 INTR_t + β_9 UNEMP_t + β_10 REGQU_t + β_11 POLINS_t + β_12 GOVEF_t + ε_it,

where Y_it is bank profitability measured by NIM, ROA, and ROE; β_1 to β_12 are coefficients of the explanatory variables; BNKZ denotes bank size (natural log of total assets), ETA is the equity to total assets ratio, LIQU is liquidity (the liquid assets to total assets ratio), CRDR is credit risk, TL/TA is the total loans to total assets ratio, GDPG is the GDP growth rate, INFLR is the inflation rate, INTR is the interest rate, UNEMP is unemployment, REGQU is regulatory quality, POLINS is political instability, GOVEF is government effectiveness, and ε_it is an error term.
Performance models are estimated in NIM, ROA, and ROE terms (models 1, 2, and 3), each taking the form above with the respective profitability measure as the dependent variable; a minimal estimation sketch follows below.
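As an illustration only, the following Python sketch estimates the three performance models with bank fixed effects via dummy variables. The file name and the lower-case column names are hypothetical placeholders standing in for the variables defined above, not the authors' data or code.

```python
# Minimal sketch, assuming a long-format panel with one row per bank-year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("iraqi_banks_panel.csv")  # hypothetical file and columns

regressors = ("bnkz + eta + liqu + crdr + tlta + gdpg + inflr + intr"
              " + unemp + regqu + polins + govef")

# One regression per profitability measure (models 1-3); C(bank) adds
# bank dummies, i.e., a fixed-effects panel estimator.
results = {}
for dep in ["nim", "roa", "roe"]:
    results[dep] = smf.ols(f"{dep} ~ {regressors} + C(bank)", data=df).fit()
    print(dep, results[dep].rsquared)
```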
Descriptive statistics
Descriptive statistics are measures of central tendency and dispersion. They summarize a data set parsimoniously, allowing one to grasp the elementary features of the data. Dispersion measures include the variance, range, and standard deviation. This study uses the standard deviation as a measure of dispersion and the mean as a measure of central tendency; the mean gives a general idea of the data without inspecting each observation separately. In this study, the maximum value, minimum value, standard deviation, and mean value are used as descriptive statistics to characterize each variable. Table 2 shows the descriptive statistics.
Correlation analysis
The interrelationship between variables was examined using the Pearson correlation. Correlation analysis is carried out to detect multicollinearity among the study variables (see Table 3).
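For illustration, the descriptive statistics of Table 2 and the Pearson correlations of Table 3 can be reproduced along the following lines, reusing the hypothetical DataFrame and column names from the sketch above.

```python
# Mean, standard deviation, minimum and maximum per variable (Table 2 style),
# followed by the pairwise Pearson correlation matrix (Table 3 style).
cols = ["nim", "roa", "roe", "bnkz", "eta", "liqu", "crdr", "tlta"]
desc = df[cols].agg(["mean", "std", "min", "max"]).T
corr = df[cols].corr(method="pearson")
print(desc)
print(corr.round(3))
```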
Multiple regression analysis
Before developing the regressions, some tests of the goodness of the adjustment are important; key tests follow Newey (1985).
Note: ***, **, and * mean that correlation is significant at 1%, 5%, and 10%, respectively. See Table 1 for the definition and measurement of variables.
The overall significance of each model was tested using the Fisher test. The F-statistics of 2.9, 1.8, and 2.1 for model 1, model 2, and model 3, respectively, are significant, with p-values below 0.05. Also, the R² values for the three models, which approach 1, indicate that the models are well adjusted. The estimated results are shown in Table 4.
Robustness checks
To enhance the robustness of the study results, the associations between bank-specific, macroeconomic, and government determinants and bank performance were explored further. The tables are not exhibited as they are too space-consuming. Firstly, it is checked whether the link between bank-specific characteristics (bank size, equity to total assets, liquidity, credit risk, and total loans to total assets), economic variables (inflation, GDP growth, interest rate, and unemployment), government variables (regulatory quality, political instability, and government effectiveness), and performance is non-linear. Here, quadratic terms of all variables are entered into Equation (1), Equation (2), and Equation (3). In the non-tabulated results, the fixed-effects estimates of the modified equations with quadratic terms show no significant coefficients on any of the quadratic bank-specific, economic, or government variables. This finding suggests that the influence of bank-specific characteristics, economic factors, and government factors on bank performance is linear. Secondly, an alternative measure is used for bank size: bank size is dichotomized at the median of total assets. Banks are classified as small when total assets are below the median and as large when total assets exceed the median. Finally, the regressions of the primary model are re-tested using this alternative bank size measure, a dummy variable (rather than the log of total assets) equal to 1 if total assets exceed the median, and 0 otherwise. In all these cases, the main findings remained similar to those shown in Table 4.
Note: ***, **, and * mean significance at 1%, 5%, and 10%, respectively. P-values (in parentheses) and the variable definitions are explained in Table 1. Table 4 shows that bank profitability is positively and significantly affected by bank size for all models (ROA, ROE, and NIM) at the 1% level. This result is similar to the findings of Jadah and Mohammed (2016) and Jadah, Murugiah, and Adzis (2016). The positive association between bank size and bank performance indicates that big banks benefit from economies of scale and, therefore, achieve high performance.
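The two robustness checks described in this section can be sketched as follows; `total_assets` and the other column names are hypothetical, and this is not the authors' code.

```python
# (1) Quadratic terms for the bank-specific regressors to test non-linearity.
for v in ["bnkz", "eta", "liqu", "crdr", "tlta"]:
    df[f"{v}_sq"] = df[v] ** 2

# (2) Bank size dichotomized at the median of total assets: 1 if above, else 0.
df["big_bank"] = (df["total_assets"] > df["total_assets"].median()).astype(int)

# Re-estimate the primary model with the dummy replacing log total assets
# (imports reused from the sketch above).
m = smf.ols("roa ~ big_bank + eta + liqu + crdr + tlta + C(bank)", data=df).fit()
print(m.params["big_bank"], m.pvalues["big_bank"])
```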
DISCUSSION
In addition, the regression results of the current study show that bank capitalization, measured by the ratio of total equity to total assets (EQ/TA), is positively and significantly associated with banks' profitability (measured by ROA, ROE, and NIM). The positive association arises because banks with high equity capital are safer and are unlikely to face a bankruptcy crisis; this helps banks to reduce capital costs and boost their performance. This finding is in tandem with the results of many scholars who revealed a positive link between the equity to total assets ratio and bank performance. It means that well-capitalized banks can: 1) exploit investment opportunities; 2) avoid the expected bankruptcy costs for their customers and for themselves, which lowers the cost of capital; and 3) overcome problems resulting from unforeseen losses better than other banks; this positively affects the cost of capital and increases their profitability.
In addition, the findings illustrate an adverse association between credit risk and bank performance. This result can be interpreted by the circumstances in which financial institutions are exposed to high-risk loans: a larger stock of unpaid loans signals that these loans have become losses, which reduces the income of commercial banks. These findings encourage a focus on credit risk management, given its negative influence on bank performance. The results also show that Iraqi banks can improve their performance through effective credit risk management, which improves the forecasting of future risks.
The study findings also show that the liquidity ratio is only weakly related to the performance of banks, as evidenced by all models. Therefore, the fourth hypothesis, that the liquidity ratio is significantly related to bank performance, is rejected, which indicates that the liquidity ratio does not determine Iraqi banks' performance. This finding agrees with the previous study by Bukhari and Qudous (2012). Nevertheless, it is not consistent with the results of Gyamerah and Amoah (2015) and some other scholars, who found a significant influence of liquidity on bank performance. However, the study findings show that the total loans to total assets ratio (TL/TA) significantly and positively influences NIM only, implying that Iraqi banks can significantly increase their net interest margin through increased lending activities. This finding is consistent with expectations and in line with the previous results of Şanlısoy et al. (2017). However, it contradicts the conclusions of Vong and Chan (2009).
The regression results in Table 4 show that GDP growth significantly and positively affects the performance of banks in all models (NIM, ROA, and ROE). Finally, the study reveals that, in all the study models, the influence of government effectiveness is positive. This is consistent with the expected results in the twelfth hypothesis. Indeed, effective government intervention is estimated to have a positive influence when it comes to improving financial performance. This result deserves mention, as this condition is not fulfilled in Arab countries. It is supported by Levine et al. (2000), who state that a better institutional environment helps to develop markets and stimulate financial development, leading to high efficiency of the banking industry. In the same vein, Lensink and Meesters (2007) studied the link between institutions and bank performance and revealed that government effectiveness reduces banks' costs of dealing with bureaucracy. The finding is in good agreement with the study by Chen (2009), who states that government effectiveness leads to higher banks' cost-efficiency. In addition, Barth et al. (2001) found that effective government leads to high institutional quality, gradually leading to higher bank efficiency.
CONCLUSION
The research subject defines the purpose of this study, within which the determinants of bank performance were analyzed using a panel data approach for eighteen Iraqi banks over 13 years (from 2005 to 2017). An unbalanced panel of 220 observations was used for the econometric analysis. The results show that most bank-specific characteristics, economic factors, and government variables have a statistically significant impact on the performance of Iraqi commercial banks. The regression results show that the size of Iraqi banks and the total equity to total assets ratio are among the key determinants of Iraqi banks' profitability. There is support for the view that large banks exploit economies of scale and that well-capitalized banks face low costs of obtaining external finance, features that can lead to increased performance. However, the total loans to total assets ratio (TL/TA) is significantly related to bank performance in terms of NIM only; otherwise, it is not significant. Consequently, the loan ratio cannot explain the variability of Iraqi banks' performance. Moreover, regarding the influence of external factors on bank profitability, the findings indicate that GDP growth and government effectiveness have a positive association with the performance of Iraqi banks. Nonetheless, inflation, interest rates, unemployment, and political instability have a negative impact on the performance of Iraqi banks.
For Iraqi banks to achieve their goals, it is useful to recognize the factors that determine the performance of successful banks when developing policies to strengthen and maintain the stability and strength of the banking sector in Iraq. While all this shows a close relationship between the welfare of the banking sector and economic growth, the factors affecting the profitability of the financial sector matter both for administrators and for stakeholders in banks. Raising awareness of these factors is key to helping regulators and bank administrators develop good future strategies to make Iraq's banking sector more profitable. | 5,957 | 2020-05-13T00:00:00.000 | [
"Economics",
"Business"
] |
Insights into the dynamics of SARS-CoV-2 pandemic via Shannon-Fisher causality plane
This paper performs a systematic investigation into the temporal evolution of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic considering 15 diverse countries. Based on the foundations of Information Theory, we apply the Shannon-Fisher causality plane (SFCP) to map the dynamical behavior inherent to SARS-CoV-2 and the respective locations of the countries along the SFCP. Our results show that this dynamics varies widely along the SFCP, from the lower-right region, characterized by high entropy and a low degree of reliability of the information extracted from the analyzed data set, to the top-right region, characterized by lower entropy and a high degree of reliability of the information extracted from the analyzed data set. It reveals three different groups of countries in terms of controlling the SARS-CoV-2 pandemic. A country that was proactive in implementing measures such as social distancing, quarantine, stay-at-home orders, testing of symptomatic and asymptomatic carriers, and hygienic measures to limit the impacts of SARS-CoV-2 (China) is today clearly in the decay phase, with the number of cases tending to zero, and is no longer in a pandemic situation (efficient). Moderately proactive countries, i.e., those that implemented measures only when the spread of SARS-CoV-2 was already reaching the country (France, Germany, United Kingdom, Spain, Sweden, Italy, Ireland, USA, Austria, and Canada), are moderately efficient. The reactive countries, which took a long time to implement the measures and/or where the infection arrived later, and which as a result are not managing to reduce the number of daily SARS-CoV-2 cases (Russia, Iran, Brazil, and India), are inefficient and are the new epicenters of the SARS-CoV-2 pandemic. Besides, we apply the Bandt & Pompe permutation entropy ( H ) and the Fisher information ( F ) to rank the countries most efficient in the fight against SARS-CoV-2. To the best of our knowledge, no studies have ranked the most proactive countries in the fight against SARS-CoV-2 dissemination. We believe that the empirical results in this research offer new perspectives that can assist in the formulation of more efficient public health policies to combat the spread of SARS-CoV-2.
Introduction
A variety of preventive strategies have been put in place by different governments across the world. These strategies include testing symptomatic and asymptomatic carriers, social distancing, quarantine, stay-at-home orders, and hygienic measures 1,7 . These strategies had different effects in diverse countries according to the time at which they were applied and other factors.
Thus, the chief goal of this research is to investigate the temporal evolution of the SARS-CoV-2 pandemic, considering the daily incidence of COVID-19 cases for 15 countries, based on the Shannon-Fisher causality plane (SFCP) 8,9 . We apply the Bandt & Pompe permutation entropy 10 and the Fisher information measure 11,12 , which allow us to map and rank the countries most efficient in controlling the SARS-CoV-2 pandemic. The SFCP is a powerful tool of Information Theory to investigate the global and local characteristics of the Bandt & Pompe probability density function (PDF) related to the dynamics of SARS-CoV-2 dissemination.
The highlights of this research for the literature are as follows. First, it analyzes the complex dynamics of the SARS-CoV-2 pandemic through the estimation of the Bandt & Pompe permutation entropy combined with the Fisher information measure. Second, it builds the SFCP for SARS-CoV-2, which allows us to map the virus's dynamical behavior and the respective locations of the countries along the SFCP. Third, it ranks the countries most efficient in controlling the SARS-CoV-2 pandemic taking into account the complexity hierarchy.
Results
The insights promoted by our fitting procedure are based on data comprising the number of daily incidences of COVID-19 cases in 15 countries (see the Methods section for more details). For each time series, we investigate the phenomenology inherent to the temporal evolution of the SARS-CoV-2 pandemic. These types of time series are widely known to be non-stationary. Despite that, the time series of the daily number of COVID-19 cases can be used to characterize fractional Brownian motion, also a non-stationary process [13][14][15] .
Given this, taking into account the permutation entropy as a way to discriminate time series, previous empirical results showed that better results 16,17 are provided by the original series, in this case the daily incidence of SARS-CoV-2 infection. For each country, the time series of the temporal evolution of the SARS-CoV-2 pandemic are shown in Fig. 1.
The temporal evolution of the number of daily incidences of COVID-19 depicts three distinct phases: an exponential increase phase, a plateau phase, and a decrease phase. In view of this, China, France, and Germany appear to be in the decrease phase, while Brazil and India are in the exponential increase phase. In turn, Iran deserves attention, since it peaked in April, followed by a decay phase and then a new increase since May. The US presented a critical exponential increase phase but now seems to be decreasing systematically.
The SARS-CoV-2 pandemic generated strong negative effects in the countries investigated in this research, causing abrupt changes in the worldwide status quo of people's lives 7 . For each country, we use a boxplot 18 to identify anomalous values (outliers). Fig. 2 presents the boxplots.
The analysis of Fig. 2 reveals the number of outliers for each country, indicating that extreme events in the time series of the daily incidence of COVID-19 are more recurrent for Austria, Brazil, China, France, Germany, India, Ireland, and Spain. Extreme events can be understood as statistically improbable occurrences. Even with a high probability of not occurring, such extreme events are observed in financial, social, and natural systems, bringing severe consequences 19 .
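The boxplots flag as outliers the points lying outside the interval [Q1 − 1.5·IQR, Q3 + 1.5·IQR]; a minimal sketch of that rule, applied to one country's daily-incidence series, could read as follows.

```python
import numpy as np

def boxplot_outliers(series):
    """Return the values flagged as outliers by the 1.5*IQR boxplot rule."""
    q1, q3 = np.percentile(series, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in series if x < lo or x > hi]
```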
In this way, we also applied the univariate quantile-quantile (Q-Q) plot 20 , which is a powerful tool used to examine the distributional similarities and differences between two independent samples [20][21][22] . Fig. 3 depicts the Q-Q plot.
The Q-Q plot is a non-parametric approach, more robust than the classical histogram, for studying the statistical properties inherent in measures of central tendency, dispersion, and skewness. For all countries investigated in this analysis, there is a common statistical behavior of the daily observations of COVID-19 cases. Especially at the tails of the Q-Q plot, the daily incidences follow a strongly non-linear pattern, revealing that the data are not Gaussian-distributed (X ∼ N(0, 1)).
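A sketch of the normal Q-Q diagnostic behind Fig. 3, where `daily_cases` stands for a hypothetical one-dimensional array of a country's daily incidences:

```python
import scipy.stats as stats
import matplotlib.pyplot as plt

# Plot sample quantiles against the quantiles of a standard normal; strong
# departures from the reference line, especially in the tails, indicate
# non-Gaussian data.
stats.probplot(daily_cases, dist="norm", plot=plt)
plt.title("Q-Q plot of daily COVID-19 incidences")
plt.show()
```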
Based on the Information Theory quantifiers, namely the permutation entropy (H), the statistical complexity (C), and the Fisher information (F), all evaluated by the Bandt & Pompe method, it is possible to define three causality information planes 23 . Here, we use the Shannon-Fisher causality plane (HxF), which performs an analysis based on the global and local characteristics of the Bandt & Pompe probability density function (PDF). We emphasize that there are some distinctive differences between the Shannon-complexity causality plane (HxC) and the Shannon-Fisher causality plane (HxF). The focus of HxC comprises the global characteristics of the Bandt & Pompe PDF associated with the time series. Thus, the range of HxC is [0, 1] x [C_min, C_max] 24 .
The Shannon-Fisher causality plane considers the global and local characteristics of the Bandt & Pompe PDF. Its range is [0, 1] x [0, 1]; no limit curves have been shown to exist so far 25 . This approach has already been used successfully to distinguish noise from chaos 26 , to investigate the characterization of motor imagery movements in electroencephalograms 27 , to study the physiology of the cerebral cortex 23 , to analyze the complex dynamics of observed and simulated ecosystem gross primary productivity 25 , and in the info-quantifiers of the logistic map 9 .
Figure 1. For each country, the temporal evolution of the number of daily incidences of COVID-19 from January 22, 2020 until May 28, 2020 (128 observations). These time series present peculiar characteristics such as non-linear dynamics and noisy and chaotic behavior.
China is the only country located within the ideal position zone for a stochastic process related to a random walk. This reveals that China is no longer in a pandemic situation and presents no community contagion. France is the country closest to entering this ideal position zone, followed by Germany and the UK; these countries will soon no longer be in a pandemic situation with community contagion.
The other countries are far from this ideal position zone for a stochastic process related to a random walk. However, with the exception of Sweden, Brazil, India, Iran, and Russia, the other countries are already in the decay phase. Sweden shows a behavioral anomaly in terms of the spread of SARS-CoV-2: apparently, it is the only one in the plateau phase. Brazil, India, Iran, and Russia are in the exponential growth phase of the daily number of COVID-19 cases.
The SFCP method measures the magnitude of the total impact of the SARS-CoV-2 pandemic over the entire period of analysis. Thus, it does not carry out a temporal analysis of the number of COVID-19 cases, but instead provides a general picture of the pandemic situation for the countries investigated. In this sense, what is compared is the general picture of the 15 countries, which allows us to verify which ones were most impacted in terms of the number of daily incidences of SARS-CoV-2 infection.
The temporal evolution of the number of daily COVID-19 cases tends to have its starting position close to the lower-right region of the Shannon-Fisher causality plane. The countries located in this region or its surroundings are characterized by high entropy and a low degree of reliability of the information extracted from the analyzed data set, so their behavior is closer to a random walk. These countries have adopted more efficient measures for controlling the SARS-CoV-2 pandemic. These measures are directly associated with compliance with the determinations of the World Health Organization (WHO) and with greater political and social engagement; we can highlight social distancing, quarantine, stay-at-home orders, testing of symptomatic and asymptomatic carriers, and hygienic measures to control the SARS-CoV-2 pandemic.
The countries located in the intermediate region of the SFCP or its surroundings are characterized by lower entropy and a high degree of reliability of the information extracted from the analyzed data set. These countries took longer to adopt the measures recommended by the WHO, and their political and social engagement was delayed; as a result, they still carry a moderate risk of SARS-CoV-2 spread.
The countries located in the top-right region of the SFCP or its surroundings are characterized by lower entropy and a higher degree of reliability of the information extracted from the analyzed data set. These countries took even longer to adopt the measures recommended by the WHO and have a low level of political and social engagement in complying with these measures, thus presenting a high risk of contagion by SARS-CoV-2. In addition, this health crisis is compounded by the collapse of the health system, poverty, and low average years of schooling.
The permutation entropy H and the Fisher information F are used to quantify the degree of efficiency in combating the impacts of the spread of SARS-CoV-2, considering the time series of daily incidences of COVID-19 cases. For these time series, the temporal evolution is characterized by a pattern that deviates from the ideal position of a stochastic random-walk process. Table 1 presents a ranking of the countries successful in controlling the SARS-CoV-2 pandemic and its impacts based on the complexity hierarchy. For each country, the greater the distance from the point (1, 0), the higher the level of efficiency in controlling the SARS-CoV-2 pandemic. In this sense, the results show that China, France, Germany, and the U.K. are near the lower boundary of the Shannon-Fisher causality plane. Because of this, they can be considered the more efficient countries 16,[28][29][30] . Moreover, the Euclidean distance presents a relevant discrepancy between China and France, revealing that China is less complex per entropy value than France.
Otherwise, a lower distance from the point (1, 0) reflects a lower level of efficiency in the fight against SARS-CoV-2. Our results show that India, Brazil, Russia, and Iran lie significantly farther from the right corner, displaying behavioral dynamics of SARS-CoV-2 far from a random walk. Thus, these countries are less efficient 16 and show long-term correlations. Moreover, they are characterized by lower entropy and a lower degree of randomness. Given this, these countries are the most inefficient in the fight against SARS-CoV-2.
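A hedged sketch of the ranking logic described above: each country's (H, F) pair is compared with the reference point (1, 0) of the plane, and countries are ordered by the Euclidean distance; the country labels and coordinates below are invented placeholders, not the paper's Table 1 values.

```python
import numpy as np

# hypothetical (H, F) coordinates of three countries on the plane
hf = {"country_A": (0.98, 0.05), "country_B": (0.90, 0.20), "country_C": (0.72, 0.45)}

# Euclidean distance of each (H, F) pair from the reference point (1, 0)
dist = {c: float(np.hypot(h - 1.0, f)) for c, (h, f) in hf.items()}
ranking = sorted(dist, key=dist.get)  # countries ordered by distance from (1, 0)
print(ranking, dist)
```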
Discussion
Based on the empirical results presented in Fig. 1, Fig. 2, Fig. 3, Fig. 4, and Table 1, we perform individual analyses to understand the complex phenomenology inherent in the number of daily COVID-19 cases. Our results show that the 15 countries investigated can be clustered into three different groups with respect to the SARS-CoV-2 pandemic: the efficient countries, the moderately efficient countries, and the inefficient countries.
China is located in the region of highest entropy and lowest Fisher information. These results may be a consequence of a quick and proactive stance in controlling the spread of SARS-CoV-2 since the epidemic outbreak emerged 31 , although China had a high peak in the daily incidence of COVID-19 cases at the beginning of the pandemic. China has been preparing to contain future pandemics by applying lessons learnt from SARS ever since 2003 32 . Within a matter of weeks, China implemented all the available tools, ranging from case detection with immediate isolation to contact tracing with quarantining and medical observation of all contacts 33 .
Although Europe does not have the same preparation as China, Germany, France, and the U.K. are the countries closest to reaching the stage where China is today, a country whose daily incidence of SARS-CoV-2 infection is tending to zero. These countries clearly show a drastic reduction in COVID-19 cases in the last days studied. However, they did not invest in the strategy of mass testing, contact tracing, and physical distancing in the early stage of SARS-CoV-2 spread, which led to failures in the initial containment of the COVID-19 pandemic 34,35 . However, we believe that the public health policies implemented, together with factors such as population adherence to social isolation measures, genetic and age composition 36 , and a high human development index (HDI), may have helped to decrease the impact of SARS-CoV-2 in these countries.
Spain, Italy, Ireland, the United States, Austria, and Canada already present a decaying curve of confirmed SARS-CoV-2 cases, but still carry a moderate risk of new outbreaks if sanitary measures are not followed or are relaxed too early. Sweden has adopted a more relaxed strategy to control the SARS-CoV-2 pandemic, with no massive testing of suspected individuals and no strict lockdown in its most affected regions; probably because of this, we observe a different pattern of COVID-19 cases.
In the top-right region of the Shannon-Fisher causality plane are Russia, Iran, Brazil, and India. These countries present high daily numbers of SARS-CoV-2 cases, their curves are still growing, and, therefore, their positions on the Shannon-Fisher causality plane show a greater distance from the point of efficiency. Brazil and Iran confirmed cases later than the other countries already mentioned, which contributed to the pandemic-control stage they now present. Among the factors that may explain Russia's low efficiency in responding to COVID-19, the lack of clear political leadership has been pointed out, just as in Brazil 37 .
Brazil and India are two very populous countries undergoing economic development; facing a COVID-19 outbreak under these conditions is quite challenging. Specifically in the case of Brazil, where we are familiar with the situation, several factors such as a permissive culture, political polarization 38 , high social inequality, poor sanitation in many regions, and often crowded public transportation in large cities, among other characteristics, may be contributing enormously to the low efficiency in controlling the pandemic.
In conclusion, our results reveal different behavioral dynamics of the SARS-CoV-2 epidemic in the countries evaluated. It is important to have instruments to analyze a country's efficiency in controlling the pandemic, so that the strategies used by the most successful countries can be identified. Thus, a more effective response can be mounted in the face of a new SARS-CoV-2 outbreak, or by countries still experiencing the first outbreak. However, it is worth noting that each country has its own intrinsic economic, social, behavioral, genetic, and age characteristics, among others, that influence the final result, bringing about more or less effective control of SARS-CoV-2 outbreaks.
Methods
This section is divided into two subsections presenting the theoretical framework of the methods used in this research: the Bandt & Pompe method (BPM) and the Shannon-Fisher causality plane (SFCP). Fig. 5 depicts the flowchart of the theoretical framework used in this research.
Data
The main database used in our analysis consists of the time series of daily incidences of SARS-CoV-2 for the 15 countries. For each country, the period covers 128 days, from January 22, 2020 until May 28, 2020, giving 128 observations. The data were obtained via the public COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University, https://github.com/CSSEGISandData/COVID-19. According to the terms of use, the data are public and can be used for public health, educational, and academic research purposes. A list of the countries considered in this research, with country names, geographical coordinates, and Human Development Index (HDI), is presented in Table 2. Specifically, the HDI is an index that considers three variables: life expectancy at birth, expected years of schooling for children and mean years of schooling for adults, and per capita income.
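A hedged loading sketch for the repository cited above; the raw-file path follows the JHU CSSE layout as of 2020 and is an assumption that may have changed since.

```python
import pandas as pd

URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")  # assumed path

cum = pd.read_csv(URL)
# Sum provinces to the country level; date columns start at index 4.
china = cum[cum["Country/Region"] == "China"].iloc[:, 4:].sum()
daily = china.diff().fillna(china.iloc[0])  # daily incidence from cumulative totals
```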
In this research, we take into account countries with very high human development (Austria, Canada, France, Germany, Ireland, Italy, Russia, Spain, Sweden, the U.S., and the U.K.), high human development (Brazil, China, and Iran), and medium human development (India).
For each country, regardless of its HDI value, it is notable that SARS-CoV-2 has dramatically altered social and economic structures and the health system, enforcing the isolation of people on a global scale.
Permutation entropy
The permutation entropy (PE) approach was proposed by Bandt & Pompe 10 as the Shannon entropy 39 of the probability distribution of ordinal patterns, which are constructed through a symbolization technique that respects time causality by comparing neighboring values of the time series under analysis 40,41 .
The permutation entropy is a measure of the information content of a time series 42 , which is applied to evaluate the uncertainty, disorder, state-space volume, and lack of information 43 inherent in a process, as displayed by discrete measurements of a parameter of the system 44 .
Thus, more predictable signals (which show a tendency to repeat only a few ordinal patterns) have lower permutation entropy than less predictable signals (which tend to exhibit all possible ordinal patterns).
This method considers a symbolic sequence of ordinal patterns of segments (words) of a given size (the embedding dimension), denoted by d 16,40 . Then, for each ordinal pattern, the respective instances are counted in order to construct an ordinal pattern probability distribution 17 . More specifically, the ordinal patterns underlying the Bandt & Pompe probability distribution are obtained by observing the local ordering of consecutive values within each word.
In view of this, the Bandt & Pompe method (BPM) can be described as follows. For a given time series x_t, t = 1, ..., T, initially T − (d − 1) overlapping segments X_t = (x_t, x_{t+1}, ..., x_{t+d−1}) of length d (embedding dimension) are generated, and within each segment the values are sorted in ascending order, which provides the set of indices r_0, r_1, ..., r_{d−1} such that x_{t+r_0} ≤ x_{t+r_1} ≤ ... ≤ x_{t+r_{d−1}}. The corresponding sequences π = (r_0, r_1, ..., r_{d−1}) can take on any of the d! possible permutations of the set {0, 1, ..., d − 1} and are symbolic representatives of the original segments. The permutation entropy of order d ≥ 2 is then defined as the Shannon entropy of the probability distribution P(π):

H(d) = − Σ_{π} p(π) log p(π),

where the sum runs over all the d! possible permutations of order d, and p(π) denotes the relative frequency of occurrence of the permutation π. It follows that 0 ≤ H(d) ≤ log d!, where the lower limit is reached for a strictly increasing or decreasing series (only one permutation appears), and the upper limit for a completely random series where all d! possible permutations appear with the same probability. The ideal d is strongly related to the phenomenology of each event studied; as a rule of thumb for good statistics, it is typically recommended 40 to choose the maximum d such that T > 5d!. In addition, the application of the BPM is recommended not only for time series of low-dimensional dynamical systems, but for any type of time series (regular, chaotic, noisy, or reality-based) 29 . Given this, the BPM has been successfully applied in distinct research areas such as finance 28,29,41 , geophysics 45 , engineering [46][47][48] , physiology 49-51 , hydrology 52,53 , and climatology 54 .

Fisher information

Fisher (1922) proposed a statistical measure of indeterminacy now called the Fisher information 55 . There are several ways of interpreting this statistical measure: (i) as a measure of the ability to estimate a parameter, (ii) as the amount of information that can be extracted from a set of measurements (the "quality" of the measurements), and (iii) as a measure of the state of disorder of a system or phenomenon 11,12,56 , its most important property being the so-called Cramér-Rao bound (CRB) for nonlinear parameter estimation 9,57-60 . The Cramér-Rao inequality 26 is obeyed by the analyzed time series.
Specifically, the Fisher information is a robust measure given by

F[f] = ∫ [f′(x)]² / f(x) dx,

which evaluates the gradient content of the distribution f (a continuous PDF) and is thus quite sensitive even to tiny localized perturbations.
In view of this, it is essential to note that the gradient operator makes the contribution of tiny local variations of f significant to the value of the Fisher information, so the quantifier is called "local". Note that Shannon's entropy decreases for skewed distributions, while the Fisher information increases in this case. Local sensitivity is useful in scenarios whose description requires an appeal to a notion of "order" 8,9,26 .
The concomitant problem of loss of information due to discretization has been thoroughly studied (see 61-63 and references therein); in particular, it implies the loss of Fisher's shift invariance, which is of no importance for our current purposes. For the calculation of the Fisher information measure (discrete PDF), we follow the proposal of Dehesa and collaborators 64 based on the amplitude of the probability, f(x) = ψ(x)², so its discrete normalized version (0 ≤ F ≤ 1) is defined by

F[P] = F_0 Σ_{i=1}^{N−1} (√p_{i+1} − √p_i)².
In this paper, the normalization constant F_0 reads F_0 = 1 if p_{i*} = 1 for i* = 1 or i* = N and p_i = 0 ∀ i ≠ i*, and F_0 = 1/2 otherwise.
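A compact sketch of the two quantifiers defined above, assuming a fixed lexicographic ordering of the d! ordinal patterns for the Fisher sum; this is an illustration, not the authors' code.

```python
import math
from itertools import permutations
import numpy as np

def bandt_pompe_pdf(x, d=4):
    """Ordinal-pattern probabilities p(pi) in a fixed (lexicographic) order."""
    x = np.asarray(x)
    counts = {pi: 0 for pi in permutations(range(d))}
    for t in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[t:t + d]))] += 1
    total = float(len(x) - d + 1)
    return np.array([counts[pi] / total for pi in sorted(counts)])

def permutation_entropy(p):
    """Normalized Shannon entropy H in [0, 1]; p is the ordinal PDF."""
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / math.log(len(p)))

def fisher_information(p):
    """Discrete normalized Fisher information F in [0, 1] (amplitude form)."""
    f0 = 1.0 if (p[0] == 1.0 or p[-1] == 1.0) else 0.5  # normalization constant
    return float(f0 * np.sum(np.diff(np.sqrt(p)) ** 2))
```

Applying these two functions to a country's daily-incidence series yields the (H, F) coordinates plotted on the Shannon-Fisher causality plane.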
Based on the level of ordering of the investigated system, we can consider a very ordered state described by a probability density function (PDF) given by P_o = {p_k ≅ 1; p_i ≅ 0 ∀ i ≠ k; i = 1, ..., N} (with N the number of states of the system); consequently, we have a Shannon entropy S[P_o] ≅ 0 and a normalized Fisher information measure F[P_o] ≅ F_max = 1. Otherwise, if the investigated system is in a very disordered state, this state is described by the uniform distribution P_e = {p_i = 1/N ∀ i = 1, ..., N}. | 5,713.4 | 2020-06-22T00:00:00.000 | [
"Computer Science"
] |
New DTW Windows Type for Forward- and Backward-Lookingness Examination. Application for Inflation Expectation
This study provides an application of the dynamic time warping algorithm with a new window constraint to assess consumer expectations' information content regarding current and future inflation. Our study's contribution is the novel application of DTW for testing expectations' forward-lookingness. Additionally, we modify the algorithm to adjust it to a specific question on the information content of our data. DTW overcomes constraints of the standard tool that examines forward-lookingness: DTW does not impose assumptions on time series properties. In the empirical study, we cover seven European countries and compare the DTW outcomes with the results of previous studies of these economies using the standard methodology. The research period covers 2001 to mid-2018. The application of DTW provides information on the degree of expectations' forward-lookingness. The results, after standardization, are similar to the standard parameters of the hybrid specification of expectations. Moreover, the rankings of the most forward-looking consumers are replicated. Our results confirm economic intuition, and they do not contradict previous studies.
Introduction
This study investigates consumer inflation expectations' forward-lookingness using a dynamic time warping (DTW) algorithm. Expectations are private agents' beliefs regarding the economic situation. Their formation and properties are at the centre of interest for central banks, as they are the driving force of the transmission of central bank signals into economic outcomes (Woodford 2003). During the post-crisis era of low inflation, expectations play an even more important role as the standard interest rate transmission remains ineffective. Numerous studies investigate whether the properties of expectations have changed since the Great Recession and their implications for policymakers (Ehrmann 2014; Łyziak and Mackiewicz-Łyziak 2014; Łyziak and Paloviita 2018).
When non-specialists form their expectations about inflation, they may consider past inflation only or forecast inflation on the basis of numerous forward-looking factors. The notion of forward-lookingness is closely related to the rationality of expectations. The rational expectations hypothesis was introduced to economics by J.F. Muth (1961). It gained recognition after the seminal papers by R.E. Lucas (Lucas 1972, 1976) and brought a revolution in macroeconomics. The rational expectations hypothesis is by far the most common assumption applied in macroeconomic modelling and analysis. This remains true regardless of the obvious empirical evidence that the hypothesis does not hold.
The simplest description of the rational expectations hypothesis states that economic agents' expectations are the same as the forecasts of the model used to describe the economy. The model adequately reflects the economic system and its relations. Consequently, private forecasts are, on average, equal to the realization of the variable. The intuition behind the hypothesis is far from the econometric approach of incorporating past values in forecasting. The rational expectations story is that economic agents, including consumers, can ignore past information about inflation and refer only to the value of future inflation. They are believed to know the economic model as well as policy makers do; there is no need to stick to past inflation values when expressing their forecasts. Rational expectations are thus fully forward-looking (focused on the future) and equal, on average, to the actual realization of inflation. Backward-looking expectations stand in opposition to rationality: they stick to past inflation. The information content of expectations, forward- or backward-looking, is the primary concern of our study. Bearing in mind that this description involves some simplification, we refer to the former approach as forward-looking expectations and call the latter backward-looking.
Unlike the existing literature, our study is primarily methodological. We propose an alternative method for assessing the degree of expectations' forward-lookingness. Our search for a novel solution is motivated by the shortcomings of existing approaches. The standard procedure estimates a hybrid specification of expectations following economic theory and intuition about the information content of private forecasts. However, its application can be questioned due to the properties of the time series (expectations are quite often non-stationary) and the robustness of the results (they are estimator-dependent and react strongly to small adjustments in the research period).
With the findings of previous studies, our own experience with hybrid specification estimations, and the topic's relevance to central banks in mind, we have decided to apply an alternative approach, dynamic time warping (DTW), to assess the degree of expectations' forward-lookingness. The DTW technique originates from speech recognition, where it has found numerous applications (Itakura 1975; Myers et al. 1980; Rabiner and Juang 1993; Rabiner et al. 1978; Sakoe and Chiba 1978; Benkabou et al. 2018). In time series analysis, DTW is a non-parametric technique for measuring the similarity or distance between two temporal sequences that may vary in time or speed. DTW applications in economics are rare despite its advantages: it does not impose assumptions on the time series properties or the lag structure. To the best of our knowledge, few examples of DTW application to economic analysis are available in the literature. DTW has been used to detect recessions (Raihan 2017), to cluster business cycles (Franses and Wiemann 2018), to build similarity networks among 35 currencies in international foreign exchange markets (Wang et al. 2012), and to study commodity prices' co-movements (Śmiech 2015). Arribas-Gil and Müller (2014) present a pairwise dynamic time warping and show its application opportunities for online auction data.
Apart from a methodological contribution to the research on expectations, our study provides an alternative understanding of the forward-lookingness (FL) and backward-lookingness (BL) of expectations, as DTW allows for different perspectives while searching for similarities in time series. We compare our findings under the theoretical approaches to forward- and backward-looking expectations with the results under modified assumptions about the horizons of the information incorporated. The DTW algorithm provides both a distance measure that is insensitive to local compressions and stretches, and the warping path, which optimally deforms one of the two input series onto the other. DTW solves the problem of local time shifting in time series; thus, it does not assume constancy of delays over time.
This paper offers a practical solution but does not propose a new forecasting method. As expectations are forecasts made by non-specialists, we do not wish to forecast forecasts; we would like to assess their properties properly. From the policy-maker's point of view, the value added arising from the recognition of expectations' FL and BL is sufficient for policy analysis purposes, including inflation modelling. It allows for determining the inflation equation (the New Keynesian Phillips curve or its hybrid specification) that better replicates the empirical evolution of inflation, and it could be useful for calibrating the parameters of the inflation equation.
In summary, the value added of our paper is as follows: we build on the existing literature on the assessment of expectations' properties. This methodological novelty is extended by the provision of an alternative understanding of forward- and backward-lookingness. Moreover, we offer a modification of the DTW algorithm that allows for tackling this specific problem.
Our sample covers consumer expectations derived from the European Business and Consumer Surveys held under the auspices of the European Commission. We present DTW results for seven monetary areas: Croatia, the Czech Republic, Hungary, Poland, Romania, Sweden, and the UK. This sample covers economies for which we conducted previous examinations using the standard methodology; thus, we are able to compare the DTW results with our previous findings and those of others. Beyond the need for comparable results, the economies we cover operate under European Union monetary policy principles (price stability as a priority, a high degree of central bank independence). The research period covers 2001 to mid-2018.
The rest of the paper proceeds as follows. Section 2 presents the materials and methods: we briefly outline the standard methodology for estimating the degree of expectations' FL and then describe the DTW technique in detail. The next section describes the results for both versions of the algorithm and juxtaposes them with the standard estimations of FL. The last section provides the conclusions.
Standard Estimations of the Degree of Expectations' Forward-Lookingness
The state of the art in this field refers not to the theoretical understanding of expectations but to the methodology for examining their properties. We describe the standard procedure that returns the degree of forward-lookingness in order to highlight its drawbacks and to provide the background for comparing our results with the findings of previous studies. Once the rationality of expectations is rejected (which means that expectations are not unbiased predictors of inflation), the search for a hybrid specification of expectations is legitimate. With a hybrid specification we can then consider the extent to which expectations are forward-looking and backward-looking. The search for a hybrid specification of expectations reflects theoretical models of expectations formation. The forward-looking component of expectations is identified using rational expectations, and the backward-looking component is identified using adaptive (Eq. 1) or static (Eq. 2) expectations. The specification of the hybrid nature of expectations involves the estimation of Eq. 1 or Eq. 2:

π^e_{t+12|t} = α_1 [π^e_{t−2|t−14} + γ (π_{t−2} − π^e_{t−2|t−14})] + α_2 π_{t+12} + ε_t,   (1)

π^e_{t+12|t} = α_1 π_{t−2} + α_2 π_{t+12} + ε_t,   (2)

where π^e_{t+12|t} is the expected inflation rate at time t + 12 formed at time t, π_{t+12} is the actual inflation at t + 12 (with analogous meaning of the other subindices), and ε_t is a white noise error.
For both equations, if α_1 = 0 and α_2 = 1, the expectations are fully forward-looking. The standard specification imposes a certain, constant structure of lags, reflected in the subindices of our equations. The t + 12 months horizon of expectations relates to the survey questions: consumers are asked about their estimates of the price level within the next 12 months. Expectations are juxtaposed with actual inflation at the one-year horizon and with past inflation. Past inflation from two months before the survey is considered (π_{t−2}), as this is the latest inflation available to consumers. If we use the June survey as an example, respondents could be aware of April inflation (published at the end of May). Moreover, consumers need time to process economic information, so a two-month lag is the shortest that seems justified. Equation (1) presents the adaptive hypothesis of expectations in its backward-looking part. It assumes that respondents refer to past expectation errors (expectations formed 14 months ago can be compared with inflation lagged two months, the latest available figure). The adaptive specification relates expectations to their past values, corrected by past expectation errors. Additionally, Eq. 1 incorporates the possible impact of a change in current inflation on inflation expectations.
The second specification (Eq. 2) reflects more strictly the view of forward- and backward-lookingness in terms of distance. This static specification's backward-looking part relates only to the latest available past inflation, which is why we compare our results with this specification.
We identify forward-lookingness with the rationality of expectations, knowing that this is only a simplification, as rationality carries much more meaning than just unbiasedness. The above-mentioned specifications are broadly used in empirical examinations to assess the degree of expectations' forward-lookingness (Carlson and Valev 2002; Gerberding 2001; Heinemann and Ullrich 2006; Łyziak 2013; Łyziak and Mackiewicz-Łyziak 2014). The authors provide estimations of both the adaptive and static specifications and then interpret the equation with the better goodness of fit.
This simple, theory-related procedure is suitable for stationary time series; however, expectations are not always stationary. Usually, stationarity is not discussed and tests for the presence of a unit root are not reported; this is the case in the majority of the papers cited above, meaning that the time series properties are neglected. This approach is supported by some studies acknowledging that, when dealing with expectations' properties, namely their rationality, a stationary structure of the time series over some medium horizon may be assumed. The advocates of such approaches claim that producing unbiased expectations in a non-stationary environment would, in many circumstances, be an implausibly demanding task (Evans and Gulamani 1984). Moreover, even though tests of expectations' properties for a non-stationary environment exist, they remain almost unused in examinations that aim at delivering economic interpretations and implications rather than a methodological solution.
Setting aside the assumptions about time-series properties, once one decides to test the degree of expectations' forward-lookingness using standard tools, a decision on estimators must be made. Owing to the endogeneity problem, the ordinary least squares (OLS) estimator is inconsistent. Hence, instrumental-variables regression (the two-stage least squares (2SLS) estimation method) could be applied. However, the IV estimator is imprecise (large standard errors), biased when the sample size is small, and biased in large samples if one of the assumptions is even slightly violated (Martens et al. 2006). Moreover, choosing the right instrument is crucial: to justify the validity of an instrument, one needs to show that it is correlated with the endogenous independent variable but not with the residual, which is often not trivial. Numerous papers discuss weak instruments and their consequences (Hahn et al. 2004; Staiger and Stock 1994; Stock and Wright 2000). The choice of an estimator is, to some extent, arbitrary, and it affects the results.
Dynamic Time Warping Algorithm
A fundamental task in time series analysis is to quantify the similarity (or dissimilarity) between two numerical sequences. The method used to measure the distance is key to the performance of many data-mining tasks such as classification, clustering, and retrieval. For time series in an economic context, conventional distance measures such as the Euclidean distance are often not suitable, because time shifts and distortions are common and unpredictable. On the other hand, classical econometric methods usually rest on assumptions about the distribution and stationarity of variables that are not easily satisfied, which is the case for hybrid-specification estimations of expectations. The DTW is, as the name suggests, alignment-based. The goal of the algorithm is to find an optimal alignment between two time series; by optimal we mean that it achieves the minimum global cost (distance) while preserving time continuity. The global cost is the sum of the costs between each pair of aligned points. A comparison of the traditional Euclidean-distance-based and the DTW-based approach is shown in Fig. 1.
Let us assume that we want to compare two time series: a test/query X = (x_1, x_2, …, x_N) of length N and a reference Y = (y_1, y_2, …, y_M) of length M. We choose a non-negative local dissimilarity function f between any pair of elements x_i and y_j: d(i, j) = f(x_i, y_j) is small (i.e., low cost) if x_i and y_j are similar to each other; otherwise d(i, j) is large (i.e., high cost). The most commonly used functions are the Euclidean and Manhattan distances. Having chosen a distance function, the local cost measure for each pair of elements of the sequences X and Y is evaluated and stored in a cost matrix C ∈ R^{N×M}. A warping path p = (p_1, …, p_T) is a contiguous set of matrix elements that defines a mapping between the time indices of X and Y and satisfies the following conditions:

• The boundary condition: p_1 = (1, 1) and p_T = (N, M), which ensures that the first elements of X and Y, as well as the last elements of X and Y, are aligned to each other.
• The monotonicity condition: for all i, p_i = (r, c) and p_{i+1} = (r′, c′) imply r′ ≥ r and c′ ≥ c. This reflects the requirement of faithful timing: if an element in X precedes a second element of X, this should also hold for the corresponding elements in Y, and vice versa.
• The continuity condition: for all i, p_i = (r, c) and p_{i+1} = (r′, c′) imply r′ − r ≤ 1 and c′ − c ≤ 1, which means that no element in X or Y can be omitted and there are no replications in the alignment.
Given a warping path p, the total cost d_p and the average normalized accumulated cost $\bar{d}_p$ between the warped time series X and Y are computed as follows:

$$d_p(X, Y) = \sum_{t=1}^{T} m_t\, d(p_t), \qquad \bar{d}_p(X, Y) = \frac{d_p(X, Y)}{M_p},$$

where m_t is a per-step weighting coefficient and M_p is the corresponding normalization constant. To determine an optimal DTW path and avoid exponential computational complexity, we use dynamic programming. The cumulative cost matrix D satisfies the following identities:

$$D(1, 1) = d(1, 1), \quad D(i, 1) = \sum_{k=1}^{i} d(k, 1), \quad D(1, j) = \sum_{k=1}^{j} d(1, k),$$
$$D(i, j) = d(i, j) + \min\bigl\{D(i-1, j),\; D(i, j-1),\; D(i-1, j-1)\bigr\}.$$

The goal is to find an alignment between X and Y having a minimal average accumulated cost.
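To make the recursion concrete, the following is a minimal Python sketch (ours, not the authors' implementation), assuming unit per-step weights m_t = 1 and the Manhattan local cost |x_i − y_j|; the optional `window` argument anticipates the constrained variants discussed below.

```python
import numpy as np

def dtw(x, y, window=None):
    """Classic DTW via dynamic programming.

    x, y   : 1-D numeric sequences of lengths N and M
    window : optional boolean (N, M) mask; True marks admissible cells
    Returns the total accumulated cost and the optimal warping path.
    """
    N, M = len(x), len(y)
    # Local cost matrix: Manhattan distance between all element pairs
    C = np.abs(np.subtract.outer(np.asarray(x, float), np.asarray(y, float)))
    if window is not None:
        C = np.where(window, C, np.inf)  # forbid cells outside the window

    # Cumulative cost matrix D with the standard step pattern
    D = np.full((N, M), np.inf)
    D[0, 0] = C[0, 0]
    for i in range(N):
        for j in range(M):
            if i == 0 and j == 0:
                continue
            prev = min(D[i - 1, j] if i > 0 else np.inf,                # vertical
                       D[i, j - 1] if j > 0 else np.inf,                # horizontal
                       D[i - 1, j - 1] if i > 0 and j > 0 else np.inf)  # diagonal
            D[i, j] = C[i, j] + prev

    # Backtrack the optimal path from (N-1, M-1) to (0, 0)
    path, (i, j) = [(N - 1, M - 1)], (N - 1, M - 1)
    while (i, j) != (0, 0):
        candidates = []
        if i > 0 and j > 0:
            candidates.append((D[i - 1, j - 1], (i - 1, j - 1)))
        if i > 0:
            candidates.append((D[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((D[i, j - 1], (i, j - 1)))
        _, (i, j) = min(candidates, key=lambda t: t[0])
        path.append((i, j))
    return D[N - 1, M - 1], path[::-1]
```

A normalized distance can then be obtained by dividing the total cost by the path length, e.g. `total, path = dtw(expectations, inflation); total / len(path)`.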
The optimal path is computed in reverse order of the indices, starting with (N, M). Intuitively, such an optimal alignment runs along a "valley" of low cost within the cost matrix C (Müller 2007). The density plot of the cost matrix with the optimal warping is presented in Fig. 2, and the procedure is sketched in the code above. One common DTW variant is to impose global constraint conditions. A global constraint, or window, forbids warping curves from entering a given region of the (i, j) plane.
Formally, the window requires |i − j| ≤ T_0, where T_0 is the maximum allowable absolute time deviation between the two matched elements. The two most commonly used global constraint regions are the Sakoe–Chiba band (Sakoe and Chiba 1978) and the Itakura parallelogram (Itakura 1975), as shown in Fig. 3. The aligned cells cannot be selected from the whole matrix, but only from the white area. In our study, we also impose constraints on windowing to obtain two versions of the degree-of-forward-lookingness assessment.
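For illustration, a Sakoe–Chiba band of half-width T_0 can be expressed as a boolean mask and passed to the `dtw()` sketch above; the mask construction below is ours, not the authors' implementation.

```python
import numpy as np

def sakoe_chiba_band(N, M, T0):
    """Boolean mask admitting only cells near the main diagonal.

    The column index is rescaled to the row axis so the band still
    follows the diagonal when N != M; a cell (i, j) is admissible
    when its rescaled deviation is at most T0.
    """
    i = np.arange(N)[:, None]
    j = np.arange(M)[None, :]
    return np.abs(i - j * (N - 1) / max(M - 1, 1)) <= T0

# Hypothetical usage with the dtw() sketch above:
# mask = sakoe_chiba_band(len(x), len(y), T0=3)
# d, path = dtw(x, y, window=mask)
```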
DTW for Testing Expectations Forward-Lookingness
As mentioned earlier, windowing constraints limit the number of points that each point can link to. This is an intuitive solution to the pathological alignment problem, and it speeds up the algorithm. In our study, it also plays a significant role in the economic interpretation.
Forward- vs backward-lookingness. The windowing application makes it possible to modify the notions of forward- and backward-lookingness. The classic approach to the hybrid specification of expectations relates backward-looking expectations to the latest inflation figure known to consumers (Eq. 2). Thus, the algorithm assesses the Euclidean distance between expectations and past inflation and the distance between expectations and future inflation, and compares the two. This is purely related to a theoretical understanding of static and rational expectations: it simply checks the distance of expectations to the most recent inflation and the distance of expectations to the inflation realisation 12 months ahead.
The newly presented alternative approach provides an intuitive meaning of the forward- and backward-lookingness of expectations. It diverges from the standard, theory-related understanding of expectations formation. Thus, in this alternative specification, the forward-looking component cannot be identified with rational expectations, and the backward-looking component cannot be identified with the static specification of expectations. We allow consumers to consider any past or forecasted value of inflation, even if they miss the exact, theory-related horizons. This approach is especially applicable to consumers, as they are unqualified economic agents; their limited awareness of economic conditions is even reflected in how their expectations are surveyed (briefly discussed in the next section). Thus, our main assumption, which differentiates this study from existing work, is that we define expectations as forward-looking when, in formulating them, consumers refer to any future inflation, and as backward-looking when they formulate expectations based on any past value of inflation.
DTW windowing for forward- and backward-lookingness. From an algorithmic point of view, the above-mentioned modification of the BL and FL definitions means that we do not compare the value of expectations to one specific lag, as in Eqs. 1–2. If consumers formulate expectations based on inflation from the past, regardless of whether it occurred two (the classic static specification of expectations), three, or even four months ago, then they are backward-looking. Therefore, to measure the level of forward-lookingness, we search for the warping path only within the upper-triangular cost matrix. Similarly, to measure the level of backward-lookingness, we use the lower-triangular cost matrix. As we can see in the left panel of Fig. 4, with such a restricted area the algorithm looks for the shortest path only among points in time that come after the i-th point of the x series, so we can say that it is forward-looking. In the right panel we see the lower-triangular window, where the area of acceptable solutions is always associated with points occurring earlier in time than the given point; x points are connected with variously delayed points from y, which is the situation we call backward-looking.
The algorithm that incorporates windowing constraints for the alternative specification of forward- and backward-lookingness is sketched below. We present and interpret the results for both specifications: the theory-related one and the one with relaxed assumptions on the time relation between the series.
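A hedged sketch of how such windowing can be expressed, reusing the `dtw()` sketch above (the function names and the equal-length assumption are ours): the upper-triangular mask admits only alignments of x_i with contemporaneous or later points of y (forward-looking), and the lower-triangular mask only with contemporaneous or earlier points (backward-looking).

```python
import numpy as np

def triangular_window(N, M, forward=True):
    """Upper-triangular (forward-looking) or lower-triangular
    (backward-looking) admissible region. The diagonal is included so
    the boundary cells (1, 1) and (N, M) remain reachable; with equal
    series lengths (N == M, as in the monthly data here) a warping
    path always exists."""
    i = np.arange(N)[:, None]
    j = np.arange(M)[None, :]
    return (j >= i) if forward else (j <= i)

# Hypothetical usage with the dtw() sketch above:
# n = len(expectations)
# d_fl, _ = dtw(expectations, inflation, window=triangular_window(n, n, True))
# d_bl, _ = dtw(expectations, inflation, window=triangular_window(n, n, False))
```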
Empirical Study
This section presents our sample and the empirical results for the standard and alternative understanding of FL. We also compare our results with the estimations of hybrid specification of expectations.
Data
Our sample covers Croatia, the Czech Republic, Hungary, Poland, Romania, Sweden, and the UK for the period from 2001 to mid-2018. The evaluation data of this study are consumer inflation expectations, derived from the European Business and Consumer Survey held under the auspices of the European Commission. Regular monthly harmonised surveys are conducted by the Directorate General for Economic and Financial Affairs in the European Union and in the applicant countries; our sample is thus covered by a methodologically consistent survey. The survey questions and methodology are presented in a guidebook (European Commission (2020)).
Fig. 4. New forward- and backward-looking window types
Consumer expectations are examined in qualitative surveys: respondents express their opinion about the direction of inflation change in the future, and their responses are then quantified. The Carlson and Parkin (1975) probability method, in a modified version adjusted to the Batchelor and Orr (1988) five-question survey, is applied to quantify consumer expectations. This is the most commonly applied procedure for transforming consumers' qualitative assessments of perceived and expected price-level change into quantified inflation expectations. In this examination, we first quantify consumers' inflation perception and then use the perceived inflation rate as a scaling factor in the quantification of expectations (the subjectified version of quantification). The 'normal level of inflation' is represented by the 36-month moving average of inflation (the scaling factor for the question on perceived inflation). The inflation rates used are official statistical offices' figures. For each economy we obtained 210 monthly observations (pairs: expectations–inflation). The raw data are presented in Fig. 5.
Results
Before we present the aggregate results of our study, an illustrative example covering two subperiods between June 2008 and December 2010 for the United Kingdom is shown (see Fig. 6). During the first 15 months, expectations were clearly more forward-looking: the distance between expectations and inflation was shortest for future inflation (the curve representing inflation was visibly shifted to the right). From August 2009 (the sixteenth month of this subsample), inflation was visibly shifted to the left relative to expectations; expectations were mostly shaped by past inflation, and so they were backward-looking. These relations are confirmed by the distance measures: in the first subperiod (months 1–15) the normalised FL distance is 0.3863, while the BL distance is almost twice as high, 0.6137. During the second subperiod (months 16–30) the FL distance is 0.6847 and the BL distance is 0.3153. Thus, in the first subsample expectations are closer to future inflation (FL), and in the second to past inflation (BL), which confirms the intuitive reading of the graph.
Full sample results. First, we present the distance results between the tested series under the stated assumptions and measures. Then, we present the results of the forward-lookingness estimations for our sample and compare them to existing results. Distance and normalized distance estimations for our sample are presented in Table 1.
First, the Euclidean distance between expectations and inflation one year ahead, followed by the Euclidean distance between expected inflation and its latest realisation, represents the theory-related approach to forward- and backward-lookingness; the time-series shifts are imposed to mimic the hybrid specification of expectations. Second, we present the DTW distance with forward and backward window constraints (DTW: one-direction-constrained version without lags). Finally, as a reference point, we present the standard DTW distance without constraints. Longer time series naturally have higher total distances, which makes a direct comparison impossible. Analogously to the total-distance presentation, we begin with the Euclidean distance results, then the constrained DTW results, and finally the unconstrained DTW results. In each case, the standard DTW algorithm returns the shortest distance. This confirms that expectations are neither fully forward- nor backward-oriented, and that their formation pattern changes over time. The distances measured by DTW with windows (forward- or backward-looking) are also smaller than those obtained using the theory-related approach, because they are allowed to consider different shifts. The common point of our results is that the forward-looking distance exceeds the backward-looking distance for consumers' expectations, regardless of the country.
This result confirms what standard hybrid specifications of expectations suggest: backward-lookingness is far more prominent among consumers than forward-lookingness. To compare our results with examinations applying the standard methodology, we express the forward DTW distance in terms of FL coefficients, applying a notation analogous to γ2 to express the degree of expectations' forward-lookingness. The coefficients of forward- and backward-lookingness, similar to the standard approach presented in Eq. 2, sum to 1. Table 2 lists the ranking of consumers' FL in decreasing order.
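The text does not pin down the exact normalization, so the following is an illustrative assumption rather than the authors' published formula. One mapping consistent with the description (coefficients summing to one, with a smaller forward distance implying a higher degree of forward-lookingness) is

$$\hat{\gamma}_2 = \frac{\bar{d}_{BL}}{\bar{d}_{FL} + \bar{d}_{BL}}, \qquad \hat{\gamma}_1 = \frac{\bar{d}_{FL}}{\bar{d}_{FL} + \bar{d}_{BL}} = 1 - \hat{\gamma}_2,$$

where $\bar{d}_{FL}$ and $\bar{d}_{BL}$ are the normalized forward- and backward-constrained DTW distances.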
The results returned by the DTW algorithm and its modifications largely confirm the previous results obtained using the standard methodology. Our coefficients cannot be compared to γ2 in terms of levels; hence, we provide a ranking of countries according to consumers' FL and refrain from a direct juxtaposition of the degrees of forward-lookingness obtained using the standard procedure and ours. Nonetheless, we can point out some common points with previous research. Firstly, similar to Clinton et al. (2017), the Czech consumers ranking second in our results was expected and confirms previous findings. Furthermore, consumer expectations in advanced economies are generally found to be more FL than in transition economies; this is not the case in our study, as Czech and Polish consumers outperform Swedish consumers in terms of their FL. With the caveats related to our methodology and to the comparability with other studies' results in mind, we conclude that DTW returns promising results, allowing for an international comparison of expectations' forward- and backward-lookingness.
Conclusion
The study aimed at investigating consumer expectations' forward-lookingness by applying a dynamic time warping algorithm and its modifications. Our goal was mainly methodological: we searched for a method that would overcome the disadvantages of standard methodology applied to capture the degree of expectations' FL.
With the results of standard estimations as a possible robustness check, we produced a ranking of consumers' FL for seven economies. Our rankings replicate, to a large extent, the findings of the standard methodology. Additionally, we extended the theory-related understanding of forward- and backward-looking expectations formation by allowing more intuitive lag shifts between the time series (expectations, past inflation, and inflation realization). Relaxing the assumptions on the horizons to which economic agents refer is especially applicable to consumers, who are the least qualified group of economic agents. We found the DTW an interesting tool for detecting the degree of expectations' FL. Its main advantages, the lack of assumptions about time-series properties and about the time relation of the considered variables, should be highlighted once again. Through the DTW application, we avoided the objections that can be raised against standard examinations that ignore the properties of the time series. We also note that the majority of authors presenting results on expectations' rationality and the degree of their FL set aside the econometric shortcomings of their estimations in favour of the interpretation of the results. This approach can be justified to some extent; however, a search for more adequate methods is necessary. While discussing our results, we need to bear two caveats in mind. First, as the DTW is algorithm-based, we cannot estimate the statistical significance of the numbers that represent forward- and backward-lookingness. This seems natural when algorithms are involved: they measure distance, and, analogously to distances in space, there is no need to check whether a distance is significant. Still, the lack of a statistical-significance measure may be questioned by econometricians accustomed to parameter estimation. The second caveat concerns the proxies of expectations applied in this study. Survey-based expectations quantified with the Carlson and Parkin probabilistic approach do not avoid the original sin of this method; criticism of the probabilistic approach is spreading in the economic literature (Lahiri and Zhao 2015; Lolić and Sorić 2018), and we are aware of it. No broadly accepted alternative quantification has appeared so far. The most innovative proposals move the verification of expectations, or studies of their formation, to the laboratory (Becker et al. 2009; Cornand and Hubert 2020); at the other extreme, some authors avoid quantification and use balance statistics or fractions of responses (Acedański and Włodarczyk 2016). We are not ready for the former, and the latter is not enough for our study. Thus, we decided to apply a standard and well-recognised quantification procedure, being aware of its drawbacks.
Finally, further applications of DTW for analysing the properties of expectations should be examined. The next step could be imposing local weights to favour the vertical, horizontal, or diagonal direction in the alignment; one can introduce an additional weight vector (w_d, w_h, w_v) ∈ R^3, yielding the modified recursion D(i, j) = min{ D(i−1, j−1) + w_d · d(i, j), D(i−1, j) + w_v · d(i, j), D(i, j−1) + w_h · d(i, j) }. In the application under consideration, this could mean a preference for the horizontal alignment direction. A second interesting development could be applying DTW to moving subsequences. Such research would allow determining the moment when expectations change from forward- to backward-lookingness, and vice versa. It would also help check whether this is related to economic events or the activities of central banks.
Funding This work was supported by the National Science Centre, Poland, grant No. 2018/31/B/HS4/00164. Availability of data and material Data used in the empirical study are available at https://github.com/rutkowskaa/DTWforInflationExpectation
"Economics"
] |
Non-Linear Hopped Chaos Parameters-Based Image Encryption Algorithm Using Histogram Equalization
Multimedia wireless communications have rapidly developed over the years. Accordingly, an increasing demand for more secure media transmission is required to protect multimedia contents. Image encryption schemes have been proposed over the years, but the most secure and reliable schemes are those based on chaotic maps, due to the intrinsic features of such multimedia contents regarding the pixels' high correlation and data handling capabilities. The novel encryption algorithm introduced in this article is based on a 3D hopping chaotic map instead of fixed chaotic logistic maps. The non-linear behavior of the proposed algorithm, in terms of both position permutation and value transformation, results in a more secure encryption algorithm due to its non-convergence, non-periodicity, and sensitivity to the applied initial conditions. Several statistical and analytical tests such as entropy, correlation, key sensitivity, key space, peak signal-to-noise ratio, noise attacks, number of pixels change rate (NPCR), unified average changing intensity (UACI), and other tests were applied to measure the strength of the proposed encryption scheme. The obtained results prove that the proposed scheme is very robust against different cryptographic attacks compared to similar encryption schemes.
Introduction
Multimedia data such as text, audio, video, and image play a very important role in information security. One of the most important types of multimedia content is digital images, due to their use in military applications, biometric authentication, medical science, and personal albums. In order to protect privacy and maintain the security of private images against unauthorized use or vulnerable attacks while passing through a public network, we need a trustworthy image encryption process. Many encryption schemes have been proposed, standardized, and widely adopted since the 1970s. These encryption schemes range from the data encryption standard (DES) to advanced encryption standard (AES) techniques [1,2]. In 1963, Edward Lorenz described chaotic behavior in a simple deterministic computer model [3]. Afterward, cryptography schemes based on chaos theory became a primary choice for most cryptographers when proposing new encryption algorithms. Logistic map-based algorithms together with higher-dimensional chaos functions lead to more secure encryption schemes against cryptanalytic attacks [4][5][6][7][8][9].
Recently, many low-dimensional chaotic systems have been developed [10][11][12]. These researchers proposed encryption schemes with good chaotic performance. Although these systems have low complexity, they are based on a fixed chaotic map, which makes these low-dimensional systems vulnerable to brute force attacks. Some encryption algorithms depending on logistic maps have been proposed in [13][14][15][16][17][18][19][20][21]. Digital image encryption schemes are mainly based on two processes, position permutation and value transformation, or a combination of both. Position permutation is executed by fixing the pixel values and permuting the image positions. On the other side, value transformation is accomplished by fixing the image positions and assigning new values to the pixels. Due to its applicability and simplicity of implementation, the position permutation process is considered a primitive operation in most image encryption schemes. Encryption algorithms based on permutation-only processes show poor resistance against ciphertext-only attacks and/or known/chosen-plaintext attacks and are only used in moderate or low-level security applications. The main purpose of the value transformation technique is to establish linear independence relations among several variables. Such operations can be accomplished simply through an XOR operation. The main advantage of the value transformation process is its one-way character: reversing the value transformation requires the initial values of the two arguments used to create it, which is infeasible for an attacker without the key.
In order to achieve optimal security performance, several researchers proposed encryption schemes based on both processes, starting with position permutation and then applying value transformation. Most of the proposed algorithms for generating new pixel values during the value transformation process depended on a fixed 3D chaotic map. To further increase the security of such image encryption schemes, we suggest a new encryption cryptosystem that generates a logistic parameter hopped 3D chaotic map, which is used to generate the new pixel values during the value transformation process. We applied our proposed digital image encryption scheme to previously analyzed well-known images to compare our test results with previous encryption schemes. The obtained results for our encryption scheme showed better performance than other encryption schemes based on a fixed 3D chaotic map in terms of several types of attacks.
The rest of the article is organized as follows: related image encryption schemes depending on 3D chaotic maps are briefly covered in Section 2; Section 3 explains the proposed image encryption cryptosystem that depends on a logistic parameter hopped 3D chaotic map; statistical tests used to evaluate the performance of our encryption scheme and the simulation results are presented in Section 4; and, finally, Section 5 concludes the proposed algorithm.
Related Work
In different encryption schemes, a variety of strategies and different chaotic algorithms are adopted. Xiaoling Huang et al. [22] offered an encryption algorithm depending on the permutation-diffusion operation. The chaotic map output was revised through a middle parameter influenced by secret keys, yielding a temporal delay. Xu, L. et al. [23] introduced a bit-level image encryption algorithm depending on piecewise linear chaotic maps (PWLCM). The authors transformed the plain image into two identical binary sequences. The two generated sequences were diffused mutually through a new diffusion strategy. Finally, they applied bit permutation by swapping the binary sequences by means of the chaotic map.
El-khamy, S.E. et al. [24] proposed a new chaotic image encryption algorithm depending on permutation and substitution in the Fourier domain. The authors achieved a large degree of randomization by applying a fractional Fourier transform. The Baker map, together with a key generated from a modified logistic map, was used for the permutation process, increasing the encryption key space. Dongdong Lin et al. [25] offered an image encryption cryptosystem based on information entropy. The authors evaluated the validity of the security metric and the security properties of the algorithm. They identified some insecure issues commonly arising in such algorithms and showed how to avoid them.
Chengqing Li et al. [26] reevaluated the security of the image scrambling encryption algorithm. They stated that the internal correlation remaining in the cipher image disclosed corresponding information about the plain image, and concluded that the scrambling elements could be exploited to support plaintext attacks. Chunhu Li et al. [27] presented an image encryption algorithm depending on a three-dimensional (3D) chaotic logistic map. A chaos-based key stream was generated through a modified 3D chaotic logistic map. The proposed encryption scheme included diffusion and confusion properties. Several security tests were applied to measure the performance of the proposed scheme and assess its suitability for cryptographic applications.
Parameter Hopped 3D Chaotic Map Image Encryption Scheme
The proposed image encryption scheme is shown in Figure 1a and is based on the parameter hopped 3D chaotic map. The image encryption scheme is generated through five main steps, namely parameter hopped 3D chaotic map generation, histogram equalization, row rotation, column rotation, and exclusive-OR (XOR) logic operation. Figure 1b represents the flowchart of the proposed algorithm.
Generation of Initial Conditions
In this section, we describe our proposed algorithm to generate a pseudorandom bit sequence based on a logistic parameter hopping 3D chaotic map. The varying parameters of the 3D hopping chaotic map are a_i, b_i, and c_i; they are generated through Eqs. (1)-(4) under the specified initial conditions.
Generation of 3D Parameter Hopping Logistic Map
The 3D parameter hopping logistic map is generated through Eqs. (5)-(7) [28], where a1 = 3.7900, b1 = 0.0185, c1 = 0.0125, x1 = 0.2350, y1 = 0.3500, and z1 = 0.7350. Figure 2a shows the chaotic behavior of the 3D parameter hopping logistic map under the varying parameters a_i, b_i, and c_i of the 3D hopping chaotic map. Figure 2b displays the bifurcation diagram of the hopped variables x, y, and z obtained from Equations (5)-(7) with the initial values above. It is clear that the bifurcation diagram of the proposed chaotic map shows an enhanced parameter range of the hopped chaotic sequence compared with the fixed chaotic parameters used in [28]. The generated values and histograms of the hopped chaotic sequences x, y, and z obtained through Eqs. (1)-(7) are depicted in Figure 3: Figure 3a,c,e shows the generated values for x, y, and z, while Figure 3b,d,f presents the histogram of each obtained value of x, y, and z, respectively. Evidently, the histograms of the generated chaotic sequences have a non-uniform distribution, which may affect the security of the system.
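Equations (5)-(7) are not reproduced in the text above, so the following Python sketch is illustrative only: it assumes the coupled 3D logistic map form published in the line of work cited as [28] (an assumption on our part), and represents the hopping rule of Eqs. (1)-(4), whose exact form is likewise not given here, by a placeholder `hop` callback.

```python
import numpy as np

def hopped_3d_logistic(n, a=3.7900, b=0.0185, c=0.0125,
                       x=0.2350, y=0.3500, z=0.7350, hop=None):
    """Generate n samples of a 3D logistic map with per-step ("hopped")
    parameters. The map form below is one published 3D logistic map and
    is an assumption here; `hop` stands in for Eqs. (1)-(4), updating
    (a, b, c) each iteration within their chaotic ranges. With
    hop=None the parameters stay fixed, recovering the base map."""
    xs, ys, zs = np.empty(n), np.empty(n), np.empty(n)
    for i in range(n):
        if hop is not None:
            a, b, c = hop(a, b, c)  # parameter hopping step (Eqs. (1)-(4))
        x, y, z = (a * x * (1 - x) + b * (y ** 2) * x + c * (z ** 3),
                   a * y * (1 - y) + b * (z ** 2) * y + c * (x ** 3),
                   a * z * (1 - z) + b * (x ** 2) * z + c * (y ** 3))
        xs[i], ys[i], zs[i] = x, y, z
    return xs, ys, zs
```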
Histogram Equalization
The histograms displayed in Figure 3 are non-uniformly distributed. To further increase the security of the generated sequences, we apply an equalization process to x, y, and z through Eqs. (8)-(10):

x_new = (integer(x × η2)) mod N (8)
y_new = (integer(y × η4)) mod M (9)
z_new = (integer(z × η6)) mod 256 (10)

where η2, η4, and η6 are large random numbers, chosen for simplicity to be equal and greater than 100,000, while M and N are chosen to be equal to the image dimensions (256 × 256). It is clear from Figure 4b,d,f that, after applying the above constraints, we obtain equalized histograms for x_new, y_new, and z_new.
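A direct transcription of Eqs. (8)-(10) in Python (interpreting integer(·) as truncation; the default η values and the 256 × 256 image size follow the text, while the function name is ours):

```python
import numpy as np

def equalize(x, y, z, M=256, N=256, eta2=100000, eta4=100000, eta6=100000):
    """Map the raw chaotic sequences onto uniform integer ranges per
    Eqs. (8)-(10): x_new in [0, N), y_new in [0, M), z_new in [0, 256)."""
    x_new = (x * eta2).astype(np.int64) % N    # Eq. (8)
    y_new = (y * eta4).astype(np.int64) % M    # Eq. (9)
    z_new = (z * eta6).astype(np.int64) % 256  # Eq. (10)
    return x_new, y_new, z_new
```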
Row Rotation
For a gray image of M×N dimensions, row rotation is executed by applying an offset value η1, selecting M elements of the chaos sequence x beginning from the offset η1, and finally applying the chaos value x obtained through Equation (5) to rotate each row. To increase the security of the generated sequence, the row rotation can be to the right or to the left according to the parity of the chaos value (odd or even).
Column Rotation
Column rotation is similar to row rotation and is applied by selecting N elements of the chaos sequence y, choosing η3 as an offset value, and, starting from η3, applying the chaos value y obtained from Equation (6). At this point, we have an encrypted image with rotated rows and columns but with the same histogram as the original image. To overcome histogram attacks, we need one more step to change the pixel values, as described in the following point.
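A sketch of both rotation steps under our reading of the procedure (the offsets η1 and η3 select where to start reading the equalized sequences, and the parity of the chaos value picks the rotation direction; the function name and the exact direction convention are our assumptions):

```python
import numpy as np

def rotate_rows_cols(img, x_new, y_new, eta1=0, eta3=0):
    """Rotate each row by the corresponding value of x_new (read from
    offset eta1) and each column by y_new (read from offset eta3); odd
    chaos values rotate one way, even values the other. The sequences
    must hold at least eta1 + M and eta3 + N elements, respectively."""
    out = img.copy()
    M, N = out.shape
    for r in range(M):
        k = int(x_new[eta1 + r])
        out[r, :] = np.roll(out[r, :], k if k % 2 else -k)  # row rotation
    for c in range(N):
        k = int(y_new[eta3 + c])
        out[:, c] = np.roll(out[:, c], k if k % 2 else -k)  # column rotation
    return out
```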
XOR Operation
The final step of the encryption process is to XOR the sequence obtained after the row and column rotations so as to produce pixel values other than the original ones. The XOR operation is performed by converting the M×N image into a 1 × MN vector, then, using an offset value η5, XORing it with M × N elements of the chaos sequence z starting from η5, finally yielding a well-secured encrypted image.
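A minimal sketch of this step as described (flatten, XOR with the equalized z stream from offset η5, reshape back; since XOR is an involution, applying the same function to the cipher image with the same key stream restores the pixels):

```python
import numpy as np

def xor_transform(img, z_new, eta5=0):
    """XOR every pixel with the byte stream z_new[eta5 : eta5 + M*N].
    z_new is assumed to contain values in [0, 256) per Eq. (10)."""
    M, N = img.shape
    stream = z_new[eta5:eta5 + M * N].astype(np.uint8)
    return (img.reshape(-1).astype(np.uint8) ^ stream).reshape(M, N)
```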
Simulation Setup
The simulations were implemented in MATLAB R2015b (MathWorks, Natick, MA, USA) on a computer with Windows 10, an Intel Duo Core I5 @ 2.53 GHz, and 8 GB DDR3 RAM. The proposed cryptosystem was applied to a group of four gray images, Lena, Deblur, Mandrill, and Peppers, each with dimensions of 256 × 256, as shown in Figure 5a. The proposed 3D mapping encryption algorithm described in the previous section was applied using the system parameters and initial values given in Table 1, resulting in an encrypted version of the four selected images as shown in Figure 5b. We then decrypted the cipher image to recover the original image using the correct key, as shown in Figure 4c.
Statistical Analysis
Statistical attacks are a common type of image encryption attack due to the high correlation between adjacent pixels within an image. Such attacks can be thwarted by randomly redistributing the pixels within the image and assigning a new value to each pixel. Figure 6 shows the histograms of the tested images for both the original and encrypted versions. The encrypted images' histograms, shown in Figure 6b,d,f, are uniformly distributed in terms of pixel values compared to those in Figure 6a,c,e. Such uniformity of the pixel-value distribution is a good indication of the strength of the proposed encryption scheme.
Key Sensitivity Analysis
Key sensitivity is a reliable test to measure the strength of an encryption cryptosystem for digital images. The better the encryption algorithm, the more sensitive it should be, even to a slight change in a single key. Table 2 depicts the parameters and initial values used to measure the key sensitivity of our proposed cryptosystem. Even with a one-bit variation in a single parameter between the correct encryption key (K1) and the wrong key (K2) for the same image, we observed a clear difference in the resulting histograms, as shown in Figure 7. Table 2. List of the keys used for key sensitivity analysis.
NPCR and UACI Randomness Tests
Two of the most common tests used to measure an image encryption algorithm's resistance to differential attacks are NPCR and UACI. Mao and Chen [5,21] first introduced both randomness tests in 2004.
To measure resistance to differential attacks, a randomly chosen pixel of a plain image is slightly changed in value to obtain a new plain image. Then, the encryption algorithm is applied to both images to produce the cipher images C1 and C2 of the original and new images, respectively. The computed NPCR and UACI values are listed in Table 3. Sufficiently high NPCR/UACI values for the pair of cipher images are usually considered a sign of strong resistance to differential attacks. The results in Table 3 show that a slight variation in the original image has little effect on existing cryptosystems, whereas a significantly larger difference was observed for our proposed method, i.e., the proposed cryptosystem is highly sensitive even to a slight variation in the original image. The comparison of NPCR and UACI between the proposed and different algorithms on the Lena image is shown in Table 4.
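The two measures follow their standard definitions, reproduced here as a sketch since the exact expressions did not survive in the text: NPCR is the share of pixel positions that differ between the two cipher images, and UACI is the mean absolute pixel difference relative to the 255 gray-level range, both as percentages.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """Standard NPCR and UACI between two equally sized 8-bit cipher
    images, returned as percentages."""
    c1 = c1.astype(np.int64)
    c2 = c2.astype(np.int64)
    npcr = 100.0 * np.mean(c1 != c2)                  # differing positions
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)   # mean intensity change
    return npcr, uaci
```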
Correlation Properties Analysis and Tests
The correlation between two neighboring pixels in the original image is high, close to 1, in the horizontal, vertical, and diagonal directions. Cryptanalysts usually exploit correlation to break ciphers. To resist such attacks on the ciphered image, adjacent pixels must be de-correlated, with values low and close to 0. The correlation coefficient is given by

$$r_{ps} = \frac{\operatorname{cov}(p, s)}{\sqrt{D(p)}\,\sqrt{D(s)}}, \qquad \operatorname{cov}(p, s) = \frac{1}{N}\sum_{i=1}^{N}\bigl(p_i - E(p)\bigr)\bigl(s_i - E(s)\bigr),$$
$$E(p) = \frac{1}{N}\sum_{i=1}^{N} p_i, \qquad D(p) = \frac{1}{N}\sum_{i=1}^{N}\bigl(p_i - E(p)\bigr)^2 \quad (14)$$

In Equation (14), N represents the total number of adjacent pixel pairs and (p_i, s_i) are the values of adjacent pixels. The correlations between two pixels for both the original and ciphered images are depicted in Table 5 and Figure 8, respectively. Consequently, the proposed cryptosystem achieves near-zero correlation and offers strong resistance against correlation attacks.
The comparison of correlation coefficient for the proposed algorithm and other algorithms for Lena image is demonstrated in Table 6.
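Equation (14) is the sample (Pearson) correlation over paired adjacent pixels. A sketch of the test for the horizontal direction follows (the vertical and diagonal directions are analogous; the sample size and function name are our choices):

```python
import numpy as np

def adjacent_correlation(img, n=2000, seed=0):
    """Pearson correlation of n randomly sampled horizontally adjacent
    pixel pairs, per Eq. (14); near 1 for a natural image, near 0 for a
    well-encrypted one."""
    rng = np.random.default_rng(seed)
    M, N = img.shape
    rows = rng.integers(0, M, n)
    cols = rng.integers(0, N - 1, n)
    p = img[rows, cols].astype(float)       # pixel values
    s = img[rows, cols + 1].astype(float)   # right-hand neighbors
    return np.corrcoef(p, s)[0, 1]
```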
Peak Signal-to-Noise Ratio (PSNR)
PSNR is a quality estimator for an image after compression or other modification, based on the mean square error (MSE). Equations (15) and (16) give the PSNR and MSE, respectively:

$$\mathrm{PSNR} = 20 \log_{10}\!\left(\frac{P_{\max}}{\sqrt{\mathrm{MSE}}}\right) \quad (15)$$
$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[P(i, j) - C(i, j)\bigr]^2 \quad (16)$$

where P_max is the highest pixel value of the gray image (255), and P(i, j) and C(i, j) are the pixel values at point (i, j) in the original and encrypted images, respectively. The smaller the PSNR between the plain and cipher images, the more robust the encryption algorithm. The MSE and PSNR values for the tested input images are listed in Table 7. The PSNR results show that the proposed algorithm is very robust.
Noise Attack
During the data transmission procedure, an opponent may try to decrypt the encrypted data. When the opponent fails to decrypt the ciphered data, he may use active or passive attacks to prevent the receiver from decrypting it. A noise attack is one of the most common ways to distort communication between sender and receiver. Therefore, salt-and-pepper noise attacks of different intensities were used to measure the effect on the decrypted image. The results are provided in Figure 9, showing that the proposed cryptosystem is robust against salt-and-pepper noise attacks.
Entropy Analysis and Test Results
The entropy H of a message source S is obtained through the following formula:

$$H(S) = -\sum_{i=0}^{255} P(S_i)\,\log_2 P(S_i) \quad (17)$$

where P(S_i) denotes the probability of symbol S_i. For a message source S emitting 256 pixel values with equal probability, the resulting entropy is 8, representing a truly random source; this is the ideal value. A more uniform distribution indicates greater information entropy. An encrypted image with information entropy noticeably below the ideal value carries a risk of predictability, which means that real image security is threatened. The information entropy values obtained with our proposed encryption scheme, as seen in Table 8, are close to the ideal value of 8, giving a good indication of the strength of the proposed algorithm against security threats.
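For reference, the global entropy of an 8-bit image can be computed directly from the gray-level histogram (a sketch; the function name is ours):

```python
import numpy as np

def shannon_entropy(img):
    """H(S) = -sum P(s_i) log2 P(s_i) over the 256 gray levels;
    8 bits/pixel is the ideal value for a uniform cipher image."""
    counts = np.bincount(img.reshape(-1).astype(np.uint8), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```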
Local Shannon Entropy
Local Shannon entropy (LSE) is a newer performance test that measures randomness more precisely by selecting non-overlapping blocks inside the cipher image. It is computed as the mean of the entropy of the previous section over each block of the cipher image:

$$\mathrm{LSE} = \frac{1}{k}\sum_{i=1}^{k} H(S_i) \quad (18)$$

where S_1, S_2, …, S_k are the k selected image blocks and l is the number of pixels in each block. Table 9 lists the LSE values for the cipher images. The results show that the LSE value for the proposed algorithm is close to the optimum (≈8); therefore, the proposed cryptosystem has high randomness.
Time Efficiency
Time efficiency was measured on a computer with Windows 10, an Intel Duo Core I5 @ 2.53 GHz, and 8 GB DDR3 RAM (Dell, Round Rock, TX, USA). The time covers both the encryption and decryption processes, and the test was applied to the proposed images of size 256 × 256 pixels. Table 10 records the time efficiency of the proposed system and of different encryption schemes. The results show that the proposed algorithm is sufficiently fast compared with other schemes and meets real-time performance requirements. To summarize the performance analysis, Table 11 compares the proposed algorithm with different schemes on the Lena image.
Conclusions
The main contribution described in this article is a novel non-linear algorithm based on a logistic parameter hopped 3D chaotic map, using chaotic hopped parameters instead of fixed parameters for the chaotic map, together with histogram equalization, to increase the security of image encryption. First, dimensional permutation of the rows and columns of the image was obtained through our generated code. Second, we assigned the generated random values to the pixels during the value transformation stage. The steps required to build our encryption scheme were: code generation, position permutation with shuffling of rows and columns, value transformation of the image pixels, and, finally, the XOR operation. Most previous encryption techniques depended on chaotic maps that used codebooks as a source for code generation. The modulated algorithm adds more randomness and scattering to the generated code, making it very difficult to predict. The proposed encryption scheme was evaluated with several statistical tests: entropy analysis, key sensitivity, correlation properties, peak signal-to-noise ratio, noise attacks, and the NPCR and UACI randomness tests. The obtained test results were compared to similar encryption schemes based on 3D chaotic maps to evaluate the strength of our proposed scheme. The results showed a significant improvement in system security and resistance against different types of cryptanalytic threats compared to other image encryption schemes based on similar algorithms.
"Computer Science"
] |
Herbivory Amplifies Adverse Effects of Drought on Seedling Recruitment in a Keystone Species of Western North American Rangelands
Biotic interactions can affect a plant’s ability to withstand drought. Such an effect may impact the restoration of the imperiled western North American sagebrush steppe, where seedlings are exposed to summer drought. This study investigated the impact of herbivory on seedlings’ drought tolerance for a keystone species in this steppe, the shrub Artemisia tridentata. Herbivory effects were investigated in two field experiments where seedlings were without tree protectors or within plastic or metal-mesh tree protectors. Treatment effects were statistically evaluated on herbivory, survival, leaf water potential, and inflorescence development. Herbivory occurrence was 80% higher in seedlings without protectors. This damage occurred in early spring and was likely caused by ground squirrels. Most plants recovered, but herbivory was associated with higher mortality during the summer when seedlings experienced water potentials between −2.5 and −7 MPa. However, there were no differences in water potential between treatments, suggesting that the browsed plants were less tolerant of the low water potentials experienced. Twenty months after outplanting, the survival of plants without protectors was 40 to 60% lower than those with protectors. The percentage of live plants developing inflorescences was approximately threefold higher in plants with protectors. Overall, spring herbivory amplified susceptibility to drought and delayed reproductive development.
Introduction
The capacity of plants to withstand drought varies at different stages of development [1,2]. Seedlings and juveniles are typically the most vulnerable to drought [1]. At these stages, the lack of an extensive root system markedly limits the plant's ability to maintain water uptake as the soil dries out [2,3]. As a result, drought stress is a major factor limiting plant recruitment from natural seed banks or seeds and seedlings planted in restoration and reforestation projects [1,4]. The adverse effects of drought on seedling establishment will likely worsen due to the expected increase in the frequency and intensity of drought with climate change [5].
In addition to drought, the seedling stage tends to be more susceptible to other stresses and disturbances than later stages of development [1]. One such disturbance is herbivory [6]. While exceptions exist, there is often an increase in plant chemical and structural defenses from the seedling to the mature stage, making the former more prone to attack by herbivores [6,7]. Furthermore, the limited storage reserves present in seedlings can limit their recovery following herbivory, resulting in low herbivory tolerance [8].
Herbivory and drought may overlap or succeed each other in either order. Because the plant's requirements for coping with drought and herbivory differ, the effect of one stressor can reduce the plant's ability to withstand the other [9]. For example, maintaining water uptake during drought is often mediated by the preferential allocation of photosynthates to root growth rather than to plant defenses or shoot growth, thus leading to more susceptibility to herbivory. Preliminary observations suggested that these treatments would lead to different levels of herbivory. Similar experiments were started in two consecutive years. In both experiments, we evaluated the effect of the treatments on the time course of herbivory and mortality. In addition, we measured variables indicative of plant water status in the second experiment. We hypothesized that, in browsed plants, the carbon demand for shoot regrowth would occur at the expense of root growth, reducing the plant's ability to extract water from deeper soil and resulting in lower plant water potential and higher summer mortality than in uneaten plants.
Climatic Conditions during the Experimental Period
Temperature and precipitation followed patterns typical of the area, with most precipitation occurring during the winter and spring ( Figure 1). However, there were some differences between the years. In 2020, significant rainfall occurred in late spring and early summer, and soil moisture did not decline as low as in the summer of 2019 ( Figure 1B). In addition, the summer of 2021 presented climatic conditions more conducive to drought than in the previous two years. Precipitation in the winter and spring of 2021 was about 40% lower than in the winter and spring of 2019 and 2020. Furthermore, summer temperatures in 2021 were higher than in the previous two years ( Figure 1A).
First Field Experiment
The first experiment started in October 2018 in Kuna Butte, ID, USA (43°26′47.32″ N, 116°26′48.61″ W). We outplanted 750 seedlings in a lattice at a distance of about 1.5 m from each other. The seedlings were randomly assigned to one of three treatments (n = 250): without tree protector, with plastic tree protector (25.2 mm mesh, 44 cm height, and 10 cm in diameter), or with metal tree protector (6 mm mesh and closed at the top) (Figure 2). Independent of the protector treatment, damage to or losses of seedlings were minimal during the fall of 2018 (Figure 3). In contrast, significant damage due to herbivory occurred by the late winter of 2019. At this time, the percentage of seedlings that experienced herbivory was about 90%, 7%, and 1% for the no-protector, plastic, and metal protector treatments, respectively (Figure 3A, p < 0.0001 between no-protector and the other two treatments). The damage varied between seedlings that experienced herbivory, but a representative example of the observed damage is shown in Figure 3C,D. Subsequently, herbivory damage markedly declined (Figure 3A), and by 17 April 2019, 68% of the injured seedlings had begun to resprout. During the summer, additional herbivory occurred in the vicinity of harvester ant nests. These plants were defoliated entirely and did not recover from herbivory. However, only a few plants were affected, and the loss was similar between treatments. In contrast to the first winter and spring in the field, herbivory during the winter and spring of 2020 was negligible (Figure 3A).
In plants without protectors, mortality occurred during spring 2019 and continued during the summer (Figure 3B). In contrast, mortality primarily happened during the summer in plants with plastic and metal protectors. By the end of summer 2019, survival was 23.4, 75.6, and 85.2% for the no-, plastic, and metal protector treatments, respectively (Figure 3B). Moreover, even though herbivory was minimal during the summer, plants without protectors showed lower summer survival than the other treatments. Starting with the plants that were alive on 29 June 2019, summer survival was 59.1% for plants without protectors, 77.9% for plants within plastic protectors, and 85.5% for plants within metal protectors (p < 0.0001 between plants without and with protectors). Subsequently, survival slightly declined for all treatments. At the end of July 2020, survival from the beginning of the experiment was 19.5, 70.7, and 77.5% for the no-, plastic, and metal protector treatments, respectively (Figure 3B). These differences were significant between the no-protector and the other treatments (p < 0.0001) but not between the plastic and metal protector treatments (p = 0.08). For the plants alive at the end of July 2020, there were differences in the percentage of plants bearing inflorescences: 2.2% in plants without protectors, 23.5% in plants within plastic protectors, and 41.9% in plants within metal protectors. Significant differences were detected between each pair of treatments: p = 0.001 for the metal vs. plastic protector comparison, p = 5.3 × 10⁻⁷ for metal vs. no protector, and p = 0.0002 for plastic vs. no protector.
Second Field Experiment
This experiment started in October 2019 in a plot adjacent to that used in the previous experiment. We followed identical outplanting methods and protector treatments but with only 150 seedlings per treatment (n = 150). The time course of herbivory damage observed in this experiment was similar to that observed following the 2018 outplanting. Damage to or loss of seedlings was minimal during the fall of 2019 and most of the winter of 2020 (Figure 4A). In contrast, significant damage due to herbivory occurred in March 2020. In that month, the percentage of seedlings that experienced herbivory was 85%, 25%, and 5% for the no-protector, plastic, and metal protector treatments, respectively (p < 0.0001). Herbivory damage markedly declined by April 2020 (Figure 4A), and the injured plants began to resprout. However, these plants did not fully recover in terms of their size. At the end of summer 2020, the projected shoot area in the no-protector treatment was lower than in the other treatments (p < 0.0001), with values of 16.03 (±2.05), 45.53 (±4.97), and 49.06 (±4.27) cm² for the no-protector, plastic, and metal protector treatments, respectively. In the summer, a few plants suffered terminal damage from harvester ants, but, other than this damage, herbivory was minimal during the rest of the experiment. On 1 May 2020, after most herbivory had occurred, survival was similar between treatments (Figure 4B). From then on, however, survival rates began to differ. In particular, seedlings without protectors had lower survival at the end of summer 2020 than those with protectors (p < 0.001). This trend continued until August 2021. At this time, the survival of seedlings without protectors was 51.9%, that of seedlings with plastic protectors 76.7%, and that of seedlings with metal protectors 89% (Figure 4B). These differences in survival were significant between each pair of treatments (p = 0.012 for the metal vs. plastic protector comparison, p = 2.9 × 10⁻⁹ for metal vs. no-protector, and p = 3.56 × 10⁻⁵ for plastic vs. no-protector). Additionally, herbivory in early spring reduced the proportion of live plants with inflorescences. In July 2020, this proportion was about three times higher in seedlings with protectors than in those without them (Table 1, p < 0.0001). Similar results were observed in July 2021. In this experiment, an additional measure of survival and inflorescence development was made one year later, in July 2022 (supplementary data). Survival remained similar to August 2021, being 49.3% for seedlings without protectors, 74.3% for those within plastic protectors, and 86.9% for seedlings within metal protectors. These differences were significant between each pair of treatments. In contrast, the percentage of live plants bearing inflorescences increased in all treatments, and no statistical differences were noted between them (Table 1).
To characterize the degree of water stress the plants experienced, we measured predawn and midday leaf water potential (Ψl) during the summer of 2020, and the midday Ψl and stomatal conductance (gs) during the spring and summer of 2021. These measurements were conducted only in the no-protector and metal protector treatments because these treatments showed the largest difference in the extent of herbivory. In 2020, predawn Ψl ranged from −1.4 to −7 MPa, and midday Ψl ranged from −1.6 to −8 MPa (Figure 5A,B). The variation in Ψl increased as the summer progressed. However, except for a day in mid-August, the median values of midday Ψl remained relatively constant from July to December (Figure 5A). In addition, differences in predawn or midday Ψl between seedlings without and with metal protectors were not significant.
In 2021, midday Ψl declined from about −1 MPa in spring to about −2.5 MPa in mid-summer (Figure 5C). As in 2020, differences in Ψl between the no- and metal protector treatments were not significant. However, there were some differences between the years. Although 2021 was drier than 2020 (Figure 1), Ψl values were higher in the summer of 2021 than in 2020 (Figure 5A,C). Furthermore, for comparable periods, the variability in Ψl between plants was much lower in 2021 than in 2020. Stomatal conductance showed a similar pattern to Ψl, with a decline from spring to summer and no apparent differences between the two treatments (Figure 5D).
(Figure 5 note: the days when water potentials were measured are plotted as categorical variables rather than as a continuous time sequence to make the boxplots more noticeable.)
Discussion
This study identified a period about five months after outplanting in which A. tridentata seedlings suffered intense herbivory ( Figures 3A and 4A). Most seedlings resprouted following this damage. However, herbivory increased the plants' susceptibility to abiotic stresses, including drought, resulting in lower survival in unprotected seedlings when compared with protected seedlings (Figures 3B and 4B). In addition, herbivory delayed reproductive development ( Table 1).
Most of the herbivory observed in the two field experiments occurred during the late winter and early spring when ground squirrels (Urocitellus endemicus) emerged after a prolonged period of estivation followed by hibernation [41,42]. This timing and the type of cut noted in the seedlings ( Figure 3D) strongly suggest that ground squirrels were the primary cause of herbivory. Interestingly, this damage only occurred during the first winter and spring following outplanting. Subsequently, plants showed much less susceptibility to herbivory. These observations suggest changes in plant chemistry or structure that discouraged herbivory. Herbivory may have triggered some of these changes, but they could also have resulted from developmental processes [6,43]. Some observations support this notion; by the second winter in the field, many plants within protectors had branches extending out of them. These branches experienced minimal herbivory.
While most plants regrew after herbivory, the damage decreased subsequent survival. This decrease occurred in both outplantings, but the pattern and extent of the survival decline varied between them. For the first outplanting, most of the mortality happened during the first spring and summer following outplanting. In contrast, for the second outplanting, little mortality occurred during the spring, but mortality became considerable during the summer and continued during the fall and winter. Additionally, twenty-two months after the fall 2018 outplanting, plants without protectors had 51 and 58% lower survival than plants in the plastic and metal protector treatments. Such differences in survival were smaller for the fall 2019 outplanting, where survival for the no-protector treatment was 25 and 37% lower than in the plastic and metal protector treatments. Thus, the capacity to tolerate herbivory was lower in the first than in the second outplanting.
A possible reason for the differences in herbivory-induced mortality between the two outplantings was the lower precipitation during the spring and early summer of 2019 compared to the same period in 2020 (Figure 1). Due to these differences in precipitation, the onset of drought may have occurred earlier in 2019 than in 2020. Under this scenario, plants that suffered herbivory in 2019 would have had less opportunity to recover than those that experienced herbivory in 2020, leading to earlier and higher mortality. Such an explanation is consistent with the compensatory continuum hypothesis that predicts a decrease in herbivory tolerance under resource-limiting conditions [44]. The reduced tolerance to herbivory in the year with lower precipitation is also in agreement with results in other species, where the ability to regrow and reproduce after herbivory, also known as plant compensation, diminished with less water availability [45][46][47].
In our study, the timing of herbivory and its delayed and negative impact on summer survival allowed us to investigate possible mechanisms by which herbivory reduced drought tolerance. We hypothesized that in browsed plants, the carbon demand for shoot regrowth would occur at the expense of root growth, reducing the plant's ability to extract water from deeper soil and resulting in lower plant water potentials than in uneaten plants. This hypothesis was tested by measuring the predawn and midday water potential during the summer following the 2019 outplanting. Contrary to our hypothesis, we did not detect differences in water potential. During the summer, plants reached water potentials between −2.0 and −8.0 MPa. Values in the upper end of this range, between −2 and −4 MPa, while low, are above those that cause hydraulic failure in A. tridentata [48]. In contrast, water potentials below −4 MPa and down to −8 MPa were within a range where significant losses in xylem hydraulic conductivity were likely to occur [48]. The proportion of measured plants reaching a water potential below −4 MPa was 18% in plants with metal protectors and 25% in plants without protectors, but the difference was not significant (χ² = 0.31). Because values below −4.0 MPa represented only 29% of the Ψl measured during the summer, a larger sample size may have revealed statistical differences. However, even if this was the case, dissimilarities in Ψl alone seem insufficient to account for the observed differences in survival.
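As an illustration of this sample-size argument, a minimal R sketch is given below. The proportions are taken from the text, but the calculation itself is ours and is only indicative, since it assumes a simple two-proportion comparison rather than the chi-square test actually applied.

```r
# Hypothetical power calculation: plants per group needed to detect the
# observed difference in the proportion reaching water potentials below
# -4 MPa (18% with metal protectors vs. 25% without), at alpha = 0.05
# and 80% power.
power.prop.test(p1 = 0.18, p2 = 0.25, sig.level = 0.05, power = 0.80)
# The result (roughly 500 plants per group) far exceeds the subset of
# plants actually measured, consistent with the suggestion that a larger
# sample might have revealed a statistical difference.
```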
An additional or alternative factor that may have contributed to the higher mortality of browsed plants is a reduction in their ability to withstand the low Ψl experienced. Water potentials below −2.5 MPa corresponded with low to minimal stomatal conductance (Figure 5C,D). This decrease in stomatal conductance may have led to periods when plants had a negative carbon balance and depended on non-structural carbohydrates (NSCs) to maintain metabolism [49]. Moderate to severe defoliation often reduces NSC concentrations [13,50,51]. Consequently, a possibility is that plants that suffered herbivory had, before the drought, fewer NSCs than unbrowsed plants. Low levels of NSCs can reduce drought tolerance through several effects, such as higher vulnerability to cavitation, impaired ability to osmoregulate and maintain phloem function, and less capacity to recover from xylem embolism after a drought [52][53][54][55]. These effects could have led to higher mortality in unprotected A. tridentata seedlings [56]. Non-structural carbohydrates also play an important role in cold tolerance [57,58]. Consequently, fewer NSCs in plants that suffered herbivory could account for their higher mortality during the winter.
Besides its effect on survival, herbivory markedly decreased the percentage of live plants that developed inflorescences in the year of the direct damage and the subsequent year ( Table 1). The capacity of plants to compensate for herbivory, resulting in similar or more flower and seed production in browsed than uneaten plants, is highly variable and affected by resource availability [44,59,60]. In our experiments, the timing of herbivory and drought likely prevented a compensatory response. Herbivory occurred in young plants, which, despite resprouting, remained smaller than the protected plants. The loss of vegetative structures combined with drought within a few months of herbivory may have delayed the transition from juveniles to adults and decreased the production of internal signals and photosynthates that promote flowering [61][62][63].
The main differences in survival and flowering occurred between plants without protectors and those with metal or plastic protectors. However, we also detected differences between the metal and plastic protector treatments. In both outplantings, the percentage of plants that experienced herbivory was higher in plants within plastic protectors than in those within metal ones. For the 2018 outplanting, the difference in herbivory between these treatments did not impact survival, but it correlated with a lower percentage of plants developing inflorescences in the plastic treatment. In contrast, for the 2019 outplanting, the higher herbivory in the plastic treatment compared to the metal one was associated with 12% higher survival in the latter but no differences in the percentage of plants developing inflorescences. Thus, in both outplantings, the metal protectors provided some benefits over the plastic ones. Whether these benefits justify using metal over plastic protectors is unclear, but some considerations suggest that the former may have other advantages. In plants smaller than those used in this study, recovery from herbivory may be more difficult. Under these circumstances, metal protectors may have a higher impact on survival. In addition, metal protectors can provide some defense against grasshoppers in years with high herbivory by these insects and are much more durable than plastic ones. Consequently, metal protectors can be used in multiple succeeding outplantings.
As was noted in the introduction, high seedling mortality is a significant factor hindering the reestablishment of A. tridentata in disturbed areas. Based on the mortality caused directly or indirectly by herbivory in this study, practices aimed at reducing it are likely to increase recruitment in A. tridentata and thereby contribute to restoring sagebrush habitats. Particularly in habitats where the abundance of ground squirrels or other herbivores is high, the application of protectors seems worth the additional cost associated with their use. However, for large outplantings, the logistics of placing and ultimately removing protectors make their use somewhat impractical [64]. Consequently, developing more efficient methods to reduce herbivory would be valuable. In this regard, one of this study's results suggests an intriguing possibility. The incidence of herbivory was much lower during the second spring in the field, implying significant developmental or environmental plasticity in plant defenses. Identifying the triggers of this change may provide an opportunity to prime the seedlings before outplanting to reduce their susceptibility to herbivory. Such priming could involve modifying fertilization and watering regimens during late nursery growth to resemble particular field conditions or applying compounds that trigger palatability changes and increase chemical defenses [65][66][67].
Independent of the treatment applied, some of the results collected during the summer are informative of the seedlings' physiological characteristics and of developmental differences in their ability to cope with drought. The relationship between predawn and midday Ψl was close to one (Figure 6). Based on the work of Martínez-Vilalta [68], a slope of 1 indicates strict anisohydric stomatal behavior. Anisohydric behavior means that plants keep a relatively constant soil-to-leaf Ψ gradient as drought develops, allowing them to maintain high gs and photosynthesis [49,69]. Assuming that the predawn Ψl represents the water potential of the soil from which the roots took water [70], A. tridentata seedlings showed a behavior close to anisohydric. Such anisohydric behavior is consistent with results recently reported for adult plants of the same subspecies; Sharma et al. [71] showed that A. tridentata ssp. wyomingensis was more anisohydric than A. tridentata ssp. vaseyana. However, the relationship between the predawn-midday Ψl gradient and gs is unclear for our young plants since we did not conduct parallel measurements of these variables. Nevertheless, based on the midday Ψl and gs observed in the following year (Figure 5C,D), it seems very likely that gs decreased independently of the predawn-midday Ψl gradient. In Eucalyptus gomphocephala DC., Franks et al. [72] observed a constant plant water potential gradient with increasing water deficits but decreased stomatal conductance. They described this behavior as anisohydric but isohydrodynamic and indicative of parallel reductions in gs and hydraulic conductivity with drought [72]. Whether A. tridentata seedlings followed this behavior requires further experimentation [73], but it would explain the apparent discrepancies between water potential gradients and changes in gs with declining Ψl.

The midday Ψl also showed less variation and higher values during the summer of 2021 than in the summer of 2020 (Figure 5). For comparable periods (mid-July to late August), the average midday Ψl was 1.4 MPa higher in 2021 than in 2020 (p < 0.0001). This increase occurred even though the weather was more conducive to drought in 2021 than in 2020. Such results are not entirely unexpected. The plants were larger in 2021 than in 2020 and likely had a more extensive root system to extract water from moister and deeper soil. Nevertheless, the Ψl data revealed a notable increase in the plants' ability to maintain higher water potentials with drought, which explains the marked decrease in mortality between the first and second summer in the field.
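As a minimal illustration of this classification, the R sketch below fits the regression from which the slope is read, following the Martínez-Vilalta framing described above. The data frame wp and its column names are hypothetical; this is a sketch, not the analysis used in the study.

```r
# Hypothetical data frame `wp` with columns `predawn` and `midday`,
# holding paired leaf water potentials in MPa.
fit <- lm(midday ~ predawn, data = wp)
summary(fit)$coefficients["predawn", ]
# A slope close to 1 implies a roughly constant soil-to-leaf water potential
# gradient as drought develops (strict anisohydry); slopes well below 1
# imply progressively tighter, more isohydric stomatal regulation.
```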
Plant Material
The plant material used in the experiments was Artemisia tridentata ssp. wyomingensis seedlings provided by the Bureau of Land Management; this agency uses similar seedlings in restoration projects. Seeds to grow these seedlings had been collected within five miles from our experimental field site at the Morley Nelson Snake River Birds of Prey National Conservation Area (Murphy, ID, USA). The seeds were sown in 150 mL cone-tainers filled with a 3:1 peat moss to vermiculite mix and subsequently grown for ten months before outplanting, as described by Fleege [74].
Experimental Approach
The study involved two similar experiments that started in two consecutive years (2018 and 2019). Both experiments were conducted in adjacent plots in Kuna Butte, ID, USA (43°26′47.32″ N, 116°26′48.61″ W). The soil at this site is Power-McCain silty loam, which is classified as fine-silty, mixed, superactive, mesic Xeric Calciargids [75]. The first experiment started in October 2018. At this time, most of the vegetation at the site was dry and consisted of stalks of non-native plants, mainly crested wheatgrass (Agropyron cristatum (L.) Gaertn.), cheatgrass (Bromus tectorum L.), and tumble mustard (Sisymbrium altissimum L.). We outplanted 750 seedlings in a lattice at a distance of about 1.5 m from each other. The seedlings were randomly assigned to one of three treatments (n = 250): without tree protector, with plastic tree protector (25.2 mm mesh, 44 cm height, and 10 cm in diameter), and with metal tree protector (6 mm mesh and closed at the top) (Figure 2). The seedlings were watered immediately after outplanting through a PVC tube inserted about 20 cm from the soil surface, and this watering was repeated two weeks later. After these watering events, the plants only received natural precipitation. A weather station at the site recorded temperature, precipitation, and moisture in the top 20 cm of soil.
The efficacy of the tree protectors in reducing herbivory was assessed by counting the plants that showed significant signs of herbivory, as judged by extensive removal of branches or leaves. These observations, and those of seedling mortality, were made approximately monthly between October 2018 and December 2019 and less frequently during the spring and summer of 2020. In addition, at the end of the 2020 summer, we counted the number of plants bearing inflorescences.
The second experiment started in October 2019. Conditions at the site were similar to those described earlier. Additionally, we followed identical outplanting methods and protector treatments but with only 150 seedlings per treatment. Seedlings showing herbivory damage and seedling mortality were measured nearly monthly between November 2019 and September 2020 and less frequently during the fall of 2020 and spring and summer of 2021. Plants bearing inflorescences were counted in the late summer of 2020 and 2021. In addition, at the end of summer 2020, we estimated the shoot area for each treatment. For this purpose, we took pictures of 25 randomly selected seedlings per treatment. These photos were used to measure the shoot areas using ImageJ software [76].
To assess the effect of herbivory on plant water status, we also measured leaf water potential (Ψl) and stomatal conductance (gs) in the second experiment. These measurements were only conducted in the no-protector and metal protector treatments to reduce the workload. The determination of midday Ψl started in the early summer of 2020 and continued to the late summer of 2021. We measured midday Ψl bi-weekly during the summer and less frequently in fall and spring. In addition, during the summer of 2020, we took measurements of predawn Ψl. Midday and predawn Ψl measurements were made in eight (2020) or five (2021) plants per sampling day and treatment using a pressure chamber (PMS Instrument Company, Albany, OR, USA). For this purpose, small lateral shoots were wrapped in Saran wrap, excised, and immediately used to determine their Ψl. Stomatal conductance was measured during the summer of 2021 in the same plants used to measure midday Ψl. Three measurements were taken per plant between noon and 2 pm using an SC-1 leaf porometer (Meter Group, Pullman, WA, USA).
Data Analyses
The effect of the tree protectors on the number of plants that experienced herbivory and on survival was analyzed using the ggsurvplot and pairwise_survdiff functions in the survminer R package [77]. To examine the impact of the treatments on the shoot area and the number of plants bearing inflorescences, we used a one-way ANOVA and a chi-square test, respectively. Possible differences in Ψl and stomatal conductance between the no-protector and metal protector treatments were evaluated by boxplot comparisons. All statistical analyses were conducted using base functions in R 4.0, except for the boxplots, which were generated with the seaborn library in Python [78,79].
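For readers unfamiliar with these functions, a minimal sketch of the survival comparison is given below. The data frame d and its column names are hypothetical; only the survminer functions named above are taken from the text.

```r
library(survival)
library(survminer)

# Hypothetical data frame `d`: one row per seedling, with `time` (days since
# outplanting), `status` (1 = dead, 0 = censored), and `treatment`
# (no-protector / plastic / metal).
fit <- survfit(Surv(time, status) ~ treatment, data = d)
ggsurvplot(fit, pval = TRUE, conf.int = TRUE)       # Kaplan-Meier curves
pairwise_survdiff(Surv(time, status) ~ treatment,   # pairwise log-rank tests
                  data = d)
```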
Conclusions
This study showed that in A. tridentata, herbivory by small mammals can markedly increase plants' susceptibility to abiotic stresses, indirectly limiting the re-establishment and recruitment of this species. Herbivory damage mainly occurred during the first winter and spring following outplanting. Most plants recovered from this damage, but herbivory was associated with higher mortality during summer drought. Based on the water potentials measured, browsed plants were less tolerant of the low water potentials experienced in the summer, presumably resulting in mortality at higher water potentials than in unbrowsed seedlings. In addition to its effect on survival, herbivory decreased the percentage of live plants that underwent reproductive development. The causes that determined this reduction require further investigation but are likely linked to the browsed plants' smaller size and leaf area [61]. Interestingly, herbivory markedly diminished after the first spring in the field. This result strongly suggests that developmental or environmental factors triggered increases in plant defenses or other changes that deter herbivory. Identifying the causes of these changes may allow for the development of more effective approaches to decrease the incidence of herbivory. To maximize the re-establishment of A. tridentata, reducing herbivory and its adverse effects on drought tolerance is likely to become more critical due to the expected rise in the frequency and severity of drought associated with climate change [80].
Data Availability Statement:
The data presented in this study are available within the article and its supplemental materials.
"Environmental Science",
"Biology"
] |
The financial burden from non-communicable diseases in low- and middle-income countries: a literature review
Non-communicable diseases (NCDs) were previously considered to only affect high-income countries. However, they now account for a very large burden in terms of both mortality and morbidity in low- and middle-income countries (LMICs), although little is known about the impact these diseases have on households in these countries. In this paper, we present a literature review on the costs imposed by NCDs on households in LMICs. We examine both the costs of obtaining medical care and the costs associated with being unable to work, while discussing the methodological issues of particular studies. The results suggest that NCDs pose a heavy financial burden on many affected households; poor households are the most financially affected when they seek care. Medicines are usually the largest component of costs and the use of originator brand medicines leads to higher than necessary expenses. In particular, in the treatment of diabetes, insulin – when required – represents an important source of spending for patients and their families. These financial costs deter many people suffering from NCDs from seeking the care they need. The limited health insurance coverage for NCDs is reflected in the low proportions of patients claiming reimbursement and the low reimbursement rates in existing insurance schemes. The costs associated with lost income-earning opportunities are also significant for many households. Therefore, NCDs impose a substantial financial burden on many households, including the poor in low-income countries. The financial costs of obtaining care also impose insurmountable barriers to access for some people, which illustrates the urgency of improving financial risk protection in health in LMIC settings and ensuring that NCDs are taken into account in these systems. In this paper, we identify areas where further research is needed to have a better view of the costs incurred by households because of NCDs; namely, the extension of the geographical scope, the inclusion of certain diseases hitherto little studied, the introduction of a time dimension, and more comparisons with acute illnesses.
Background
The 2010 WHO Global Status Report on non-communicable diseases (NCDs) showed that they are now the most important cause of mortality worldwide. Indeed, more than 36 million people died from NCDs in 2008, mainly from cardiovascular diseases (48%), cancers (21%), chronic respiratory diseases (12%), and diabetes (3%). Nearly 80% of these deaths occurred in low- and middle-income countries (LMICs), where, on average, NCDs now exceed communicable diseases as the major cause of disease burden [1]. Even in the remaining countries where infectious diseases are the main health problem, NCDs are growing rapidly. NCDs are expected to exceed communicable, maternal, perinatal, and nutritional diseases on the list of leading causes of death in all countries by 2020. The increasing importance of NCDs has caused them to no longer be viewed simply as a health issue but rather as a development issue worthy of discussion at a High-level Meeting of the 66th General Assembly of the United Nations [2].
Considerable literature exists on the impact of NCDs on households in high-income countries [3][4][5][6][7]; researchers are now beginning to examine the implications of NCDs in low- and middle-income settings as well [8]. Indeed, the impact is expected to differ because there is little financial risk protection in many LMICs, and thus financial costs are largely borne by households themselves rather than by governments or insurance schemes [9]. The framework presented in Figure 1 describes the channels through which NCDs can affect the economic welfare of households.
We conducted a literature review to present existing evidence on the financial burden from NCDs in low- and middle-income settings at the individual and household level. The aim is to provide accurate and relevant information on this important issue to policymakers and to determine where further research is needed.
Methods
We performed a literature search with CAB Direct, ScienceDirect and Web of Knowledge, using combinations of the following key words: "non-communicable disease", "chronic illness", "diabetes", "cardiovascular disease", "cancer", and "chronic respiratory disease" with "cost", "impoverish", "financial burden", "health expenditure", "expense", "out-of-pocket", "health spending", "catastrophic expenditure", "catastrophic expense", and "catastrophic spending". A total of 8,966 results (including duplicates) were obtained. After duplicate removal, titles and abstracts of the remaining papers were reviewed to assess their relevance according to the following inclusion criteria: i) papers in English or French; ii) published from 1990 onwards; iii) covering at least one low-, lower-middle- or upper-middle-income country [11]; iv) measuring the household or individual financial costs; v) of one condition (or more) falling under the definition of "chronic diseases" [12] or classified in "Group II diseases" according to the ICD-10 code [8]. This screening led to the selection of 43 articles, and a secondary literature search was performed using the references cited in these selected papers. Finally, a total of 49 papers were identified, whose full-length versions were obtained for this review. Each of these studies was examined for information on disease(s), study population, analysis methods and findings. These details are presented in Additional file 1: Table S1.
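As an illustration of how such a search strategy expands, the short R sketch below builds the Cartesian product of the two keyword lists given above. The query format (quoted terms joined by AND) is an assumption for illustration, not the exact syntax used in each database.

```r
# Build the 6 x 10 = 60 keyword combinations from the two lists above.
disease <- c("non-communicable disease", "chronic illness", "diabetes",
             "cardiovascular disease", "cancer", "chronic respiratory disease")
cost <- c("cost", "impoverish", "financial burden", "health expenditure",
          "expense", "out-of-pocket", "health spending",
          "catastrophic expenditure", "catastrophic expense",
          "catastrophic spending")
queries <- apply(expand.grid(disease, cost), 1,
                 function(x) paste0('"', x[1], '" AND "', x[2], '"'))
length(queries)  # 60 query strings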
Overview of the methods used in the literature
The studies found in the literature reflect the diversity of methods used to assess the household financial burden from NCDs. The methodological differences between the studies inherently prevent a formal meta-analysis from being performed. However, at the same time, these differences offer opportunities to explore results through the lens of different techniques. In this section, we present a discussion of the methodologies used. Some studies look at a specific NCD (e.g., diabetes, cancers, cardiovascular diseases), while a majority consider NCDs in general or a combination of two or more specific NCDs. We found only one previous literature review that included studies on multiple NCDs, but it covered only a few countries and included no studies from Africa or Latin America [13].
The original studies found also differed according to data sources and sample sizes. Some authors conducted their own surveys for the purpose of the studies, while others used data from existing surveys carried out by another entity (e.g., National Institute of Statistics, Ministry of Health, health insurance plans). In these surveys, households and individuals were generally chosen randomly, through simple, stratified or cluster sampling [14][15][16][17][18][19][20][21][22]. However, many studies used convenience samples of patients suffering from a specific illness in health care facilities, something that we report when presenting the results [23][24][25][26][27][28][29][30][31]. Additionally, studies looking at specific diseases generally used relatively small samples, while those considering a broad set of diseases usually relied on larger samples. For the assessment of diabetes costs, for example, some studies selected a small number of diabetic patients: 50 in North India, 53 in Cape Town (South Africa) and 77 in Ghana [23,25,32]. Similarly, in a study in Enugu (Nigeria), Obi and Ozumba used a sample of 95 patients suffering from cervical cancer [27]. On the other hand, up to 206,700 individuals from 48,600 households were included in a study on chronic diseases in Mexico [33]. In terms of the internal validity of findings, some studies used hospital registries or insurance reimbursement records to verify the information reported by patients and/or their relatives during face-to-face interviews [34][35][36]; a majority of studies, however, simply accepted the answers of the respondents as being valid. Finally, some studies use data from focus group discussions and key informant interviews to complement their analyses [18,32,[37][38][39].
In the studies looking at NCDs in general, the term "chronic diseases" is frequently used, and even if the major NCDs are usually taken into account, the definitions vary from one study to another. For example, Shi et al. defined a chronic ailment as an ailment that lasts or is expected to last for at least 12 months, resulting in functional limitations or the need for ongoing medical services, and includes disability [15]. In Kenya, Chuma et al. defined chronic illnesses as those reported to have lasted three months or more [38], while for Goudge et al., any illness that had persisted for longer than a month was defined as chronic [37]. Mondal et al. considered that a chronic illness is a condition that lasts more than three weeks, which needs to be managed on a long-term basis [40]. However, many of these studies provide the list of diseases they considered as chronic, and thus it was possible to know whether NCDs were included along with some communicable diseases (for example, HIV/AIDS). In these cases, we report results related only to chronic NCDs. Nevertheless, in some studies it was not possible to be sure that the focus was limited to only chronic NCDs.
Irrespective of the diseases considered, many studies assessing the direct costs incurred by households for the treatment of NCDs also focus on impoverishment and catastrophic health expenditure due to these expenses. Impoverishment occurs when a respondent would have had a net income above the poverty line in the absence of the expenditure on the disease, but falls below it once that expenditure is taken into account. Different poverty lines are used across studies: US$ 1 per day, US$ 1.08 per day, US$ 1.25 per day and US$ 2 per day [28,35,39,41,42].
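A minimal R sketch of this impoverishment measure, assuming a hypothetical data frame h with daily per-capita income and out-of-pocket spending on the condition (both in US$ per day), might look as follows:

```r
# One of the poverty lines used in the literature (US$/day).
poverty_line <- 1.25
# Flag households above the line before health payments but below it after.
impoverished <- with(h, income >= poverty_line &
                        (income - oop_spending) < poverty_line)
mean(impoverished)  # share of households pushed below the poverty line
```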
Catastrophic health expenditure occurs when people spend a disproportionate amount of their income (sometimes of their non-food expenditure) on the condition, as described in Xu et al. [43]. However, a great variety of specific definitions for catastrophic health expenditure were used in the studies presented here. The thresholds for determining a disproportionate level of expenditure vary from 10% to 60%, and some studies deviated from this more standard approach. For example, Mukherjee et al. used the concept of "high health care expenditure" instead of catastrophic health payments [44]. In this study, a household was identified as having incurred high out-of-pocket expenditure on health care if its annual health care expenditure was high in comparison to those of other households within the same caste group in India [44].
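The corresponding catastrophic-expenditure indicator can be sketched with the same hypothetical data frame h; the 40% threshold below is just one of the many used across the studies reviewed, and the denominator (income vs. non-food expenditure) also varies between studies.

```r
# Example threshold: 40% of capacity to pay (here, non-food expenditure).
threshold <- 0.40
catastrophic <- with(h, oop_spending / nonfood_expenditure > threshold)
mean(catastrophic)  # incidence of catastrophic health expenditure
```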
The evidence on the direct costs from non-communicable illnesses
Many of the studies assessed direct costs, which include all costs incurred by individuals and households for the treatment of NCDs. In theory, these costs should be net of any reimbursement from insurance. We present evidence on these direct costs organized by disease.
Diabetes
Diabetes is a leading NCD, and 16 studies included in this review looked at the direct costs incurred for both outpatient and inpatient services. All studies except one relied on convenience samples, so the results need to be interpreted carefully. Overall, the studies found that varying shares of household income are allocated to paying for diabetes care, ranging from as low as 5% of income for a rural low-income population in India to up to 24.5% for a low-income group in Madras (India) [34,36,45]. Spending can also differ between richer and poorer households, and studies found that poorer households spend a higher proportion of their income on care for diabetes than richer households. These differences can be quite striking: one study from India found that in urban areas, the share of income spent on diabetes care in the poorest households was seven times that of the richest households [45]. Spending on diabetes can also be a considerable share of overall household health spending. A study in Sudan reported that on average 65% of household health expenditure was spent on caring for a child with diabetes [46].
Medications are frequently found to be the largest component of expenditure on diabetes [47]. Spending on medications represented from 32% to 62% of total expenditure on diabetes care in various settings such as India, Mexico, Pakistan and Sudan (Table 1). In rural Ghana, spending on insulin alone represents around 60% of the monthly income of those on the minimum daily wage [32]. Using originator-brand medication resulted in much higher spending in the only diabetes study that used random sampling rather than convenience samples. This study found that in Yemen and Mali, purchasing an originator brand of glibenclamide (a medicine used to treat type II diabetes) in the private sector could impoverish an additional 22% and 29% of the population, respectively, versus 3% and 19%, respectively, if the lowest-priced generic product was purchased [41]. Laboratory and transportation costs were generally the second largest component of expenditure. Some studies also document expenditure related to special dietary regimes (up to 20% of the direct costs in North India [23]).
The presence of complications and the duration of the illness are usually associated with an increase of the direct costs. For example, Khowaja et al. found that in Pakistan, the direct cost for patients with co-morbidities was 45% higher than the direct cost for patients without co-morbidities [50]. Similarly, in India, those without complications were found to have an 18% lower cost compared to the mean annual cost for outpatient care for all patients with diabetes, while those with three or more complications had a 48% higher cost [51]. Similar results were found in India, China, Thailand and Malaysia [34,36,45,48]. These studies also highlight the fact that treatment at an early stage is much cheaper for households than treatment at a later stage with complications.
Some studies looked at coping strategies used by households to pay for these direct costs. In India, the majority of patients (89%) used their household income to fund the monitoring and treatment of their diabetes, while household savings were used by 22% of retired patients and by 19% of those in the lowest income bracket. When faced with hospitalization, 56% of patients had to dip into their savings or borrow in order to fund the costs [51]. Additionally, very few households are reimbursed by insurance. In India, Kapur found that only 1% of patients claimed the costs of treatment on insurance [51], while Ramachandran et al. observed that medical reimbursement was obtained by 14.2% of urban patients but by only 3.2% of rural patients [45]. Moreover, Khowaja et al. found that in Pakistan, none of the persons with diabetes indicated that their cost was borne by an insurance company or their employer [50].
Cardiovascular diseases
Five studies examined spending on cardiovascular diseases. In a study using data from a household survey in Kazakhstan, people with cardiac problems were found to pay on average 24% more for health care than people with other health problems [22]. As with diabetes, studies from Congo and Uganda also found that the use of originator brand drugs increases spending on cardiovascular diseases [24,41]. Once again, there was only one cardiovascular disease study that did not use a convenience sample [41].
Out-of-pocket payments for the treatment of cardiovascular diseases also lead to significant costs for households. Up to 71% of patients who had experienced an acute stroke were found to face catastrophic health expenditure [35]. The study of Heeley et al. also found that catastrophic payments and impoverishment due to cardiovascular diseases are more common in people with no health insurance than in those with health insurance [35]. In a study covering 35 states and union territories in India, Rao et al. investigated the coping strategies used by households to deal with expenses incurred for hospitalizations due to cardiovascular diseases [52]; 57% of these expenses were paid from household savings, 35% from borrowings, and 8% from the sale of assets. In the poorest group, up to 55% of out-of-pocket spending was financed through borrowings, and only 38% through savings [52].
Cancer
Cancers also represent an emerging health problem in LMICs and seeking health care for these diseases can have a significant effect on families' welfare. We found three papers which focus specifically on the direct cost from cancers. In a study using data from a randomized household survey in Pakistan, 27.1% of those who sought care for cancer at private facilities were found to finance their care through unsecured loans, while 7.1% relied on assistance from others [53].
Two studies using convenience samples also shed some light on components of spending on cancer care. Indeed, Zhou et al. found that health insurance facilitates the financial access of treatment for patients suffering from oesophageal cancer in China, particularly for purchasing drugs [31]. Meanwhile, transportation, multiple investigations, radiotherapy and chemotherapy were the main components of direct costs for cervical cancer in Nigeria [27].
Other non-communicable diseases
The financial burden from other NCDs, such as epilepsy, cirrhosis, chronic obstructive pulmonary disease (COPD), rhinitis and depressive disorders, is also estimated in some studies. Even if they are not as well studied as the major NCDs presented previously, these illnesses can also exert considerable pressure on household finances. For example, a study from Mumbai (India) based on a random sample of households found that the share of annual personal income spent on outpatient care for allergic rhinitis was 1.7% when treatment was sought in public facilities. Similarly, care for COPD represented 13.3% of annual personal income among those using private facilities. With hospitalization at public facilities, out-of-pocket payments for COPD represented up to 62.3% of annual personal income, compared to 50.7% for hospitalization in private facilities [54]. Using a focus group, Russell and Gilson document the case of an individual suffering from asthma, who incurred a direct cost representing 15% of his monthly wage when seeking care for a sore chest in a private clinic and pharmacy [39]. Multiple laboratory tests and the presence of complications were also found to cause high expenses for a convenience sample of patients suffering from cirrhosis in Brazzaville (Congo) [26].
Coping strategies used to pay for care associated with these NCDs are similar to those used to cope with the better documented NCDs. In Pakistan, for example, Mahmood and Ali Mubashir, using a random sample, found that 22.9% of patients with circulatory diseases (heart diseases, rheumatic fever and blood pressure) who visited private doctors/clinics for treatment financed care through unsecured loans, while 8.8% relied on assistance from others [53]. Among those who did not visit any facility, 67.4% reported financial constraints as the reason for not seeking care.
Non-communicable diseases combined
We found a large number of studies, all based on randomized household surveys, looking at NCDs in general instead of focusing on specific illnesses. Some studies highlight the association of having a household member suffering from a chronic disease with a significant increase in health care expenditure and a higher risk of impoverishment. In Russia, for example, each additional case of chronic disease in a household was found to increase the probability of incurring health care expenditure by 8% and the amount of health care expenditure by 6.2% [19]. Similarly, in Uganda, households with a member suffering from a chronic illness were found to be three times more likely to incur costs for health care than other households [18]. In Kazakhstan, people with chronic illness were found to pay on average 18% more than people with other health problems, while in Georgia, the mean cost for outpatient care in case of chronic illness was almost two times higher than in case of acute illness [21,22]. On the other hand, a study from India found that the relative importance of chronic diseases for spending may be lower: the mean annual per capita health expenditure for a chronic episode was 11% lower than for an acute one [44].
Undeniably, expenses incurred when seeking health care for chronic diseases represent an important financial burden for households, as presented in Table 2. In fact, the costs of health care for chronic illnesses were found to represent from 5.0% of household income in rural Kenya to up to 30-50% of monthly income for vulnerable households in South Africa, where care for these illnesses was unaffordable without gifts from social networks [37,38]. Similarly, household spending on chronic illness represented 4.14% of households' total annual health care expenditure in urban areas and 5.73% in rural areas of West Bengal in India; however, it was up to 11% in Vietnam and 32% in the Maharashtra, Bihar and Tamil Nadu states of India, with a higher share for hospitalization and drugs [20,40,55]. All these studies used a random sample. Another proxy of households' capacity to pay used in the literature is their non-food expenditure. Sun et al. found that in China, the average proportion of chronic disease expenditure to annual non-food expenditure was about 27% in Shandong Province and 35% in Ningxia Province for patients covered by the New Cooperative Medical Scheme (NCMS), a public health insurance scheme for rural residents [16]. For non-NCMS members, these proportions were 47% and 42%, respectively.
In several studies, the presence of household members with chronic ailments was also found to lead to catastrophic health expenditure and impoverishment. The probability of catastrophic expenditure was 4.4 times higher among households having incurred expenses for treating chronically ill persons in Georgia, and up to 7.8 times higher in Burkina Faso [17,56]. Similar results were found in West Bengal (India), in Lebanon and in China [15,40,57,58]. Up to 11.6% of households in Western and Central China were pushed under the US$ 1.08 poverty line after incurring outpatient expenses associated with chronic diseases [42]. Moreover, Shi et al. found the incidence of medical impoverishment to reach 19.6% in households where more than 50% of members had a chronic illness [16].
As with diabetes, when households are covered by health insurance, the reimbursement rates for chronic diseases are relatively low. In Shandong and Ningxia in China, for example, only 11.16% and 8.67%, respectively, of overall medical expenditure for chronic diseases was reimbursed by the NCMS [16]. However, another study from Western China found that health insurance provided protection against impoverishment due to expenses for chronic diseases [42]. Government subsidies for medicines were also found to lower the expenses for many chronic diseases in Vietnam [29].
Coping strategies documented in the literature combining chronic diseases are similar to those described in the studies on specific NCDs. In Georgia, when households were lacking financial means, the most dominant strategy was to borrow from a friend or relative (70%), followed by selling household valuables (10%) and/or household goods/products (10%) [21].
Literature on the indirect costs due to non-communicable diseases in low-and middle-income countries
Households and individuals also bear indirect costs when they are affected by NCDs. These costs mainly include time and productivity loss by patients and caregivers because of the illness as well as income lost by patients and family members. Whereas there is no doubt that these indirect costs can pose a substantial burden on households, there are numerous methodological challenges in measuring this burden adequately; these challenges have been discussed in detail in a previous study [59]. Nonetheless, in this section, we present the available evidence on the indirect costs of NCDs as reported in the literature. This constitutes findings from 11 studies, which mainly use convenience samples, on loss of income, loss of time and other forms of financial loss related to these illnesses. We discuss possible limitations of these findings in the discussion section.
Loss of income
In India, one study suggests that the indirect cost for diabetes patients and their caregivers was 28.76% of the total treatment cost. It was claimed that loss of income of the patient comprised the greatest portion of indirect costs (60.54%), followed by loss of income of caregivers (39.46%) [23]. Rayappa et al. found that in Bangalore (India), 30.9% of respondents suffering from diabetes reported a change in personal income, and on average, they faced a reduction of 20.9% of their personal income [48]. In addition, 20.8% of the respondents reported a change in family income, with a mean reduction of 17.4%. Similarly, Arrossi et al. found that in Argentina, 39% of households with a member suffering from cervical cancer lost family income, partially or totally [28]. Among households that lost income, 47% lost less than 25% of family income, 34% lost 25-50% and 19% lost 50% or more of their income. As a result of the reported loss of income, it was estimated that the proportion of patients' households living in poverty increased from 45% to 53%. Likewise, Obi and Ozumba found that in Nigeria, all patients suffering from cervical cancer and their relatives lost income from workplaces due to absenteeism, disengagement from work and missing business appointments [27]. In a study covering 19 countries (one of only two studies documenting indirect costs that used randomized household survey data), Levinson et al. found that serious mental illness was associated with a potential reduction in earnings of 10.9% of average national earnings in LMICs [60]. The second study using randomized household survey data was from Russia and found that labour income decreased by 4.8% per additional case of chronic disease in the household [19]. Some studies only estimate the NCD-related indirect costs for patients and their families in absolute value (local currencies or US$) [50,51,61].
Loss of working time
The loss of income borne by patients suffering from NCDs is mainly due to self-reported absenteeism from their usual economic activity. In fact, the treatment of NCDs usually requires repeated visits to health facilities, in addition to the inability to work due to poor health. This can lead to additional losses of working time both for patients and caregivers. In the literature, the mean loss of working time reported by patients was found to vary from 2.8 ± 1.7 hours per visit for diabetes in Pakistan to 58 ± 105 days per year for epilepsy in India [30,50]. Episodes of respiratory diseases can also cause important losses of working time, as shown in a case study in Colombo (Sri Lanka), where Russell and Gilson found a patient suffering from asthma took two days off work for a sore chest, losing 6% of his monthly wage [39]. However, time costs are not limited to patients, but also affect caregivers. In Buenos Aires (Argentina), for example, Arrossi et al. found that in 45% of households with a member suffering from cervical cancer, at least one member reduced his/her working hours [28]. For diabetes patients in Thailand, caregivers were found to spend on average 42.21 ± 39.94 hours per month on health care activities (e.g., giving medicines) and 21.87 ± 31.81 hours on activities of daily living (e.g., helping with eating and dressing) [61].
Other forms of indirect costs
Some other forms of indirect costs due to NCDs were found in the literature; these generally concern households' livelihood and welfare. The study on cervical cancer in Buenos Aires (Argentina) by Arrossi et al. examined these and also found that due to a loss of income, there were delays in payments for essential services such as telephone or electricity and as a result 43% of households had the service cut [28].
There were also significant effects on self-reported daily food consumption, which was reduced in 37% of households, while 38% of households reported that they sold property or used savings to offset the income loss. Some impacts on education were found: school absences were more prevalent in 28% of households, and 23% of households had problems paying for education. Furthermore, 45% of patients were cared for by one or more informal caregivers who did not live with them; one-third of these caregivers' households reduced their daily consumption of food, and 26% had delays in payments for essential services such as electricity or telephone services. It should be noted that these are the types of welfare losses that have shaped the concept of catastrophic health expenditure.
There were also direct impacts on employment: at least one member stopped working in 28% of households affected by cervical cancer. Several interviewees who stopped working expressed the hope of going back to their jobs after treatment, fearing at the same time that this would no longer be possible. Similarly, a study from Bangalore (India) by Rayappa et al. found that only 33.4% of diabetes patients worked, and among those working, 23% experienced problems at their job, affecting their productivity and at times requiring a change to a less strenuous job (5.9%) or giving up the job (14.7%) [48]. Considering NCDs in general, Abegunde and Stanciole found that in Russia, chronic illnesses, which included NCDs, impose a reduction of 5% in household consumption of non-health-related items [19].
Discussion
This literature review has presented the available evidence on the household financial burden related to NCDs in LMICs. However, before discussing its most important results, it is important to highlight some of the methodological issues in many of the studies that were included. First, the heavy reliance on convenience samples taken from people who are seeking and obtaining treatment, often at hospitals, will almost certainly result in an upward bias in costs for the average person with the condition. The people who do not seek treatment or who seek treatment at a lower level of care, implying lower costs, have no chance of being selected.
Second, self-reported costs, even from random samples of patients, are likely to be biased upwards when there are no controls. Some of the people with the condition would have incurred some health expenses in any case, and this can only be captured by including controls without the condition [59,62]. In other words, it is likely that part of the costs reported by patients with NCDs were not directly associated with those conditions. This issue is particularly important when considering indirect costs. It is clear that the method of asking people how many days they could not work overestimates the true loss in work time from a disease because many of the people, particularly in low-income countries, would not have been working on those days, or for all of those days, in the absence of the disease [59]. Nor do the studies consider whether absent workers are replaced by other family members in family enterprises or farms. For example, other family members frequently fill in for a sick person during the planting season in agriculture, so that the same area of land is planted despite the illness [63]. This does not, of course, mean that there are no opportunity costs associated with the illness, but that the measured production from the family enterprise is not altered as much. In general, therefore, we expect the costs from studies with no controls to be overestimates of both direct and indirect costs.
The substantial variations in study designs and definitions described earlier also make comparisons tricky and meta-analysis infeasible. There is considerable heterogeneity in the objectives and methodologies of the papers. While we have more confidence in the studies relying on randomized samples, we present more details about each study in Additional file 1: Table S1 to give readers further information and to allow them to consider possible generalizations of the results. Taking into consideration the methodological issues highlighted here and in earlier sections, we can still conclude that NCDs already impose substantial financial costs on some of their sufferers in lower-income countries. As a result, the cost of obtaining treatment for NCDs is also becoming a cause of impoverishment and financial catastrophe in these countries. While this is not particularly surprising given the growing burden of disease associated with these conditions, it has not been documented before.
Again not surprisingly, complications related to the severity of illness were found to increase the household financial burden, both for the patient and for caregivers. Health promotion, prevention and early treatment would reduce some of these costs although each country would need to choose the appropriate mix of prevention and treatment according to their relative costs and impact. We also found strong evidence that costs could be reduced by more rational use of medications for NCDs. The costs of medication for all the different types of NCDs considered here accounted for the highest proportion of the direct costs; where addressed, originator brand medicines were frequently used instead of available generics and costs were then substantially higher than they needed to be. While many LMICs already have strategies to promote the rational use of medicines, there is still some way to go particularly in promoting the use of lower cost generics.
The weakness or non-existence of mechanisms to protect households financially from the burden of NCDs is, however, probably the most important finding in this study. In the studies that considered insurance and provided information on reimbursement rates, reimbursement for NCD-related treatment is generally uncommon, and frequently patients and their relatives do not report having claimed any reimbursement from insurance or employers. Likewise, none of the studies we reviewed reported a system of social security that provides compensation for loss of income incurred by patients and their families because of NCDs. Poor households are more likely to suffer disproportionately from the financial effects of this lack of social protection. To meet the costs, households reported taking unsecured loans, using savings or selling household assets, all of which can lead to longer-term problems for the household. For example, the wider literature suggests that many of the loans taken by households for health expenses are at very high interest rates that can take generations to repay [64]. This is part of a bigger problem in LMICs, many of which rely extensively on direct out-of-pocket payments to fund health services. Recently, many have recognized the need to modify the way they raise funds and, more generally, to modify their health financing systems so as to improve financial risk protection and ensure greater access to needed health services [65]; it is important to note that it will be increasingly necessary to include NCDs in whatever type of financial risk protection strategy is developed. This is particularly important for poor families because NCDs no longer affect only the more affluent people in society [1,8,13,66,67].
While we think that the financial costs reported in this review overestimate the costs of a typical patient with NCDs, such that the numbers cannot be used to extrapolate the costs of NCDs to a country, they highlight the other consequences of the lack of financial risk protection in LMICs. In the random sample studies, many people with NCDs reported that they did not seek care at all because of financial reasons (Additional file 2). Many of their conditions are likely to become more severe in the absence of treatment, leading to early death and greater problems for caregivers and households. The effects of not seeking care are of particular concern for poorer households, given that the ability to work is one of the most important poverty escape routes [68][69][70][71][72]. Strategies to improve financial risk protection will also lead to increased financial access to health services, while demand-side responses, such as cash transfers, can help reduce some of the financial barriers to seeking care, such as transport costs. Nevertheless, demand-side approaches in LMICs are, to our knowledge, limited largely to maternal and child health (and education) and some communicable diseases [73][74][75][76][77].
Through this review, we are also able to identify areas where further research is needed. Among the four major NCDs, the financial costs of chronic respiratory diseases are very poorly documented, although they cause four times more deaths than, for example, diabetes, which has been researched more [1,78]. According to the WHO, almost 90% of COPD deaths occur in LMICs, and the highest prevalence of smoking, the primary cause of COPD, among men is in these countries [1,78,79]. It would therefore be valuable to have more assessments of the financial costs of these diseases in future studies. Additionally, while all studies reviewed here used cross-sectional data, panel data would be very useful for assessing the evolution of costs incurred by households because of NCDs. The comparison of the relative importance of the cost of NCDs with that of acute illnesses is also of great interest here, as the papers reviewed show no clear trend. Indeed, some studies show that NCDs are more costly for households, while others observe the opposite; sometimes, within the same country, different results are found depending on the area (urban vs. rural), the type of health care (outpatient vs. inpatient) and household socioeconomic status (poor vs. better-off) [17,21,38,39,44,55]. More studies, introducing for example a time dimension and a distinction between private and public providers, are therefore needed to shed more light on this issue. It may also be important to expand the geographical outlook in future research to be more representative of a wider group of developing countries. This is true even after accounting for the influence of the languages used in this review: of the 49 studies found, most were from Asia, compared with only a handful from Latin America or Eastern Europe, and 10 studies from Africa.
Conclusions
The literature on the social, financial and economic consequences of NCDs in developing countries has not kept pace with the epidemiological evidence. It has been known for some time that the burden of disease associated with NCDs and injuries is already higher than that associated with the health conditions included in the Millennium Development Goals (HIV/AIDS, tuberculosis, malaria, and maternal, child and reproductive health), even in developing countries. Moreover, it has been well documented that the share of NCDs in the overall disease burden will continue to increase globally. Indeed, the UN's 2011 conference on NCDs stressed the importance of these diseases as a development issue.
The literature we reviewed sheds some light on the financial consequences of NCDs for households in LMICs. Nonetheless, methodological challenges limit the generalizability of these findings. Valid estimates of the average costs of NCDs will require random samples, with controls, to account for people who have costly and less costly treatments and for what would have happened in the absence of the diseases. Panel data would be ideal, although such studies are more expensive than cross-sectional designs. Importantly, however, this review suggests that it is equally important to focus on people who could not seek care for NCDs for financial reasons. Little is known about the subsequent development of disease, the impacts on these people's health, and the financial, social and other consequences associated with foregone treatment.
The push to develop health-financing systems that improve financial risk protection and help achieve universal health coverage in LMICs is promising. However, policymakers need to ensure that both the health and the financial burden of NCDs are adequately addressed in future reforms, while at the same time improving access and financial protection for all other health services needed by the population.
Endnotes: (a) US$ 995 or less, US$ 996 to US$ 3,945, and US$ 3,946 to US$ 12,195, respectively. (b) Defined as out-of-pocket expenses that accounted for ≥30% of the total annual household income reported at baseline. | 8,930.6 | 2013-08-16T00:00:00.000 | [
"Medicine",
"Economics"
] |
Real Time Distributed and Decentralized Peer-to-Peer Protocol for Swarm Robots
This contribution proposes an approach to enhance the capability of robotic agents to join the Internet of Things (IoT) and act autonomously in extreme and hostile environments. This capability supports deployments in environments where the connectivity, availability, and responsiveness of devices are subject to variation and noise. A real-time, distributed, decentralized peer-to-peer protocol was designed to allow Autonomous Unmanned Surface Vessels (AUSV) to extend their context awareness. The developed middleware enables real-time communication and is designed to run on top of a Real Time Operating System (RTOS). Furthermore, the proposed middleware will give researchers access to a large amount of data collected by sensors, and thus address one of the major problems encountered while training artificial intelligence models: the lack of sufficient data. Keywords—Autonomous robots; smart objects; peer-to-peer; real time communication; ROS2; ZeroMQ; middleware
I. INTRODUCTION
In the past, static robots, such as industrial arms, were used to perform repetitive tasks on production lines where the environment was well controlled and known in advance; at that time, collaboration between robots was not a priority. However, we are increasingly seeing the emergence of applications involving swarms of robots that share a common ultimate goal, e.g., Autonomous Unmanned Surface Vessels (AUSV) or unmanned ground and aerial robots that must carry out missions such as first response, coast guarding, area search, target detection and tracking, formation keeping, and rendezvous [1][2][3].
Given these facts, research on collaborative robots has increased considerably, and many researchers have started to focus on the internal design of the robot's context awareness [4][5]; the trend is to use mobile robots in hostile environments where the stability of the surrounding conditions and the connectivity are limited.
Mobile robots require collaborative capabilities to achieve complex missions in hostile environments; e.g., AUSVs may need to collaborate to build a mesh network where each AUSV serves as a network node. However, most proposed AUSVs were designed to operate in an already known environment and cannot adapt themselves to changes in the context. We propose a middleware for collaboration, communication, and device hardening for deployments in extreme environments. We explore Multi Agent Systems (MAS) as a solution to enhance collaboration by increasing the autonomy, flexibility, and composability of robotic agents together with the IoT devices available in their surrounding environment, so as to promote the self-awareness of those agents. Not only sensing and actuation are considered; we also look at the distribution of decision-making in terms of collaboration between the components of the application.
Our proposed middleware, named Collaborative Open Platform for Distributed Artificial Intelligence (COPDAI), allows real-time communication within a community of robots while tolerating link and component degradation. The community takes distributed decisions that position agents at strategic locations to mitigate the risk of disconnection. Positioning also depends on capabilities such as sensing and actuating. Agents are interconnected and maintain this interconnection as the principal vehicle of communication among them, in a peer-to-peer mode.
Another problem that COPDAI tries to solve is the difficulty of accessing sufficient data to train artificial intelligence models: COPDAI promotes the sharing of sensor data and trained models within the scientific community, as well as among the mobile robots.
II. RELATED WORK
In a recent study [6], the authors presented multiple node communication mechanisms (simple messages, ports, topics, events and services) and, based on pre-established criteria, compared several Robotics Software Frameworks (RSF) to evaluate how well each of them covers the defined criteria. It is worth mentioning that while robotic systems are often designed over Ethernet, field buses such as CANBus, I2C, EtherCAT, serial lines, FireWire, PROFIBUS, and even PCI are also often used. Unfortunately, most RSFs and MASs use only the IP protocol.
Generally, MAS has been used for its great flexibility and the ability to reuse components across different projects. Several patterns have been proposed for its implementation in multi-robot systems, demonstrating a gain in development time [7]; in this work, the Jade middleware was used to ensure communication.
Agent distribution can be categorized into three forms [8]: agents embedded at the robot level, agents located at a server level, or a hybrid distribution in which intelligence and computational agents are external to the robot while acquisition and control agents are embedded. In [9], the authors worked on the control of soccer robots, and three schemes based on the multi-agent system paradigm were established. The first scheme controls the robots from a remote computer; in this configuration the robots have no embedded intelligence. The second scheme is based on a distributed architecture where vision and decision-making are done on a central computer and motor control is delegated to embedded systems attached to the robots. The third scheme allows greater robot autonomy: sensor data acquisition, decision-making, and motor control are all done at the level of the robot, in addition to occasional communication between robots.
In [10], the authors proposed a distributed knowledge base that is shared between the agents. The agents are organized hierarchically, and if an error occurs in an agent belonging to the lower level, the agents of the higher level replan the trajectory of the robot.
In [11], the authors based their middleware on the Real-time CORBA specification [12], which extends the basic CORBA model to support real-time constructs. A client/server model was adopted, and the predictability improvement relied on Real-time CORBA mechanisms such as thread pooling and priority assignment.
In [13], the authors focused on networking and middleware support for mobile embedded systems: a TDMA-based communication protocol handled data transmission and managed the uncertainty related to communication, and a shared memory named RTDB was defined to allow the agents to share data.
The authors in [14] developed a humanoid robot using the XBotCore middleware; for real-time communication the middleware uses the EtherCAT protocol, and the software was built on top of the Xenomai RTOS. The middleware was designed to satisfy a 1 kHz control frequency and implements four tasks with real-time behavior, among others: the robot kinematic chain, robot joints, and robot force/torque sensors. In [15], a middleware based on the concept of a control kernel was developed. Different types of nodes were designed on top of two protocols, CAN bus and Ethernet; the nodes have different capabilities and can provide different types of services depending on their computing power. Lightweight nodes communicate on top of CAN bus and powerful nodes on top of Ethernet. Also, in [16] we studied 14 middlewares oriented either to robotics applications or to smart-object applications; we concluded that most of them do not meet the real-time constraint, e.g., UBIWARE [17], LMAARS [18], ACOSO [19], Voyager [20], JCAF [21], Aura [22], UBIWARE [23], LMAARS [24] and SOCRADES [25], while others suffer from a centralized architecture, e.g., ROS [26], ICARS [27], COROS [28].
III. COPDAI COMMUNICATION ARCHITECTURE
Each sensor, actuator or decision module can be attached to the robot body or located in its external environment; we represent each of these components by a node.
Due to the constraints of a hostile environment, our architecture must be robust to the instability of the physical communication links: each node can appear and disappear at any time, so the middleware must allow each node to detect the presence of the other nodes and must implement a recovery mechanism in case of communication failure.
In addition, our architecture must not have a Single Point of Failure (SPOF): the degradation of a node must not compromise the whole robot's mission, or at least we must be able to switch to a safe position. For that, the architecture must be decentralized, so we propose peer-to-peer communication between the nodes. We also need to allow distributed computing between nodes: a node located on a computer/server with more resources (CPU, RAM, etc.) can contribute to computations that a node located on an embedded board with limited resources cannot do by itself.
In addition, the real-time constraint requires us to define priorities between transmitted messages, allowing each node to process these messages with a minimum level of guarantee and predictable behavior.
Finally, the middleware must promote collaboration within the scientific community through the sharing of content and of data collected during experiments (sensor data, actuator data, etc.) and, optionally, results or trained models. We distinguish four families of possible communication between these nodes (Fig. 1). A further question is distributed logging: what strategy should be adopted to trace communications and collect logs from the nodes in order to detect possible failures or to debug?
A. Transport Layer
We chose the concurrency framework ZeroMQ [31] as the transport layer; it provides sockets that carry atomic messages across various transports, among others IPC and TCP. Researchers have evaluated the performance of the OpenDDS, ORTE and ZeroMQ middlewares in terms of latency and scalability, using the publish/subscribe pattern to study their performance, and the results show that ZeroMQ has the best performance with minimal latency [32]. Other researchers [33][34] have found that ZeroMQ scales much better and can smoothly handle high data loads and even bursts of requests, which was not the case in their old middleware version based on CORBA. The sketch below illustrates the transport flexibility this gives us.
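To make the transport choice concrete, here is a minimal pyzmq sketch (our own illustration, not COPDAI source code) showing that the same socket code runs over TCP between machines and over IPC within one machine; only the endpoint string changes. The port number and IPC path are arbitrary examples.

import zmq

ctx = zmq.Context.instance()

# Receiver: a single ROUTER socket bound on both transports at once.
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://*:5555")            # cross-machine transport
router.bind("ipc:///tmp/copdai-demo")  # same-machine transport

# Sender: a DEALER socket; swapping the endpoint is the only change needed.
dealer = ctx.socket(zmq.DEALER)
dealer.connect("ipc:///tmp/copdai-demo")
dealer.send(b"hello over IPC")

# ROUTER prepends the sender identity frame to every received message.
identity, payload = router.recv_multipart()
print(payload)  # b'hello over IPC'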
B. Transport Mechanisms
COPDAI will support in its first version the following transport mechanisms: TCP/IP, UDP/IP, and IPC; other mechanisms, such as Bluetooth, serial wire and acoustic communications, will be supported in future releases.
Nodes within the same embedded card/computer will use IPC to communicate with each other and nodes located on different embedded boards/computers will communicate using IP protocols.
We were inspired by the ZeroMQ Realtime Exchange Protocol (ZRE), which governs how a group of peers on a network discover each other, organize into groups, and send each other events [35]. ZRE runs over the ZeroMQ Message Transfer Protocol (ZMTP). ZRE was designed to run in a smart home and can accept only a limited number of nodes: each node establishes a connection to every other one, which means that with N nodes we end up with N(N-1)/2 connections, which can quickly saturate a network. Another problem is that ZRE supports only IP communication, which represents an unjustifiable overhead in our case for nodes that must communicate within the same embedded card/computer. Finally, ZRE does not implement any notion of service.
COPDAI supports four messaging types:
Node-to-node messaging: nodes that belong to the same hierarchical group and are located on the same physical medium (embedded card / network segment) can communicate directly, peer to peer.
Topic messaging: the case where several nodes want to share messages about the same topic.
Hierarchical messaging: Nodes are organized into groups that accept a maximum number k of members, and each group contains a leader. Communication between members of the same group is direct, but communication between two nodes belonging to different groups must go through the respective leaders of each group. Bridging messaging: Communication between nodes belonging to two physical boundaries (two embedded cards or two network segments) passes through a dedicated node, elected among the group leaders. Fig. 2 shows a use case of these communication types with k = 3. Comparing ZRE with COPDAI in this use case, Network Segment 1 needs only 3 IP connections instead of the 171 a ZRE mesh would require; moreover, ZRE does not allow communication between nodes in segments 1 and 2. The sketch after this list makes the connection-count comparison concrete.
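The connection-count argument can be checked with a few lines of Python. This is a back-of-the-envelope sketch under an assumed hierarchy (members link only to their leader, leaders form a mesh among themselves); COPDAI's exact link budget may differ, but the flat-mesh count of 171 is consistent with a segment of 19 nodes.

import math

def zre_mesh_connections(n: int) -> int:
    """Flat peer-to-peer mesh: every node connects to every other node."""
    return n * (n - 1) // 2

def hierarchical_connections(n: int, k: int) -> int:
    """Assumed hierarchy: k members per group, each non-leader linked to
    its leader, plus a mesh between the group leaders."""
    groups = math.ceil(n / k)
    member_links = n - groups            # each non-leader links to its leader
    leader_links = groups * (groups - 1) // 2
    return member_links + leader_links

for n in (10, 19, 100):
    print(n, zre_mesh_connections(n), hierarchical_connections(n, k=3))
# zre_mesh_connections(19) == 171, matching the mesh figure quoted above;
# the 3 IP connections in the paper count only cross-machine links.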
C. Discovery on the Same Machine
In a specific folder location within the user home directory, each node creates a file with its UUID as the file name, and periodically refreshes this file's modification timestamp. Each node lists the files modified within a configured time window and is thereby able to detect new nodes that have just appeared and nodes that have disappeared (Fig. 3). A minimal sketch of this mechanism follows.
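A minimal sketch of this file-based discovery, assuming illustrative values for the directory location, refresh period, and staleness threshold (the mechanism above leaves these configurable):

import os, time, uuid

DISCOVERY_DIR = os.path.expanduser("~/.copdai/discovery")  # assumed location
TOUCH_PERIOD = 1.0    # seconds between timestamp refreshes (assumed)
STALE_AFTER = 3.0     # mtime older than this => node considered gone (assumed)

os.makedirs(DISCOVERY_DIR, exist_ok=True)
my_file = os.path.join(DISCOVERY_DIR, str(uuid.uuid4()))

def announce() -> None:
    """Create/refresh our presence file so peers see a fresh mtime."""
    with open(my_file, "a"):
        pass
    os.utime(my_file, None)  # bump mtime to 'now'

def live_peers() -> set[str]:
    """UUIDs of nodes whose file was touched within the staleness window."""
    now = time.time()
    return {name for name in os.listdir(DISCOVERY_DIR)
            if now - os.path.getmtime(os.path.join(DISCOVERY_DIR, name)) < STALE_AFTER}

announce()
print(live_peers())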
D. Discovery over IP
We want to keep backward compatibility with the ZRE protocol for discovery over IP, so we use the same mechanism: ZRE uses UDP IPv4 beacon broadcasts to discover nodes. Each ZRE node shall listen to the ZRE discovery service, which is UDP port 5670, and shall broadcast on UDP port 5670, at regular intervals, a beacon that identifies itself to any listening nodes on the network [35].
The header shall consist of the letters "Z", "R", and "E", followed by the beacon version number, which shall be %x01. The body shall consist of the sender's 16-octet UUID, followed by a two-byte mailbox port number in network order. If the port is non-zero, this signals that the peer will accept ZeroMQ TCP connections on that port number; if the port is zero, it signals that the peer is disconnecting from the network. The body also contains another two-byte mailbox port number for the real-time communication channel, and, since in our case the Bridge node hides behind it several nodes that should be discoverable to the outside world, we extend the ZRE beacon so that the body contains the UUIDs of these nodes (Fig. 4). A node that receives a valid beacon with a non-zero port number is considered a new peer. UDP messages are limited to 1500 bytes on LANs and 512 bytes on the Internet, so a bridge node cannot announce more than 92 nodes on a LAN and 30 nodes on the Internet; if the bridge node reaches its limit, a new one is elected to handle the rest of the nodes. A sketch of building such a beacon follows.
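The beacon layout can be sketched as follows. The ZRE part (the "ZRE" header, version %x01, 16-octet UUID, and two-byte port in network order) follows the quoted specification; the trailing rt-port and child-UUID list implement the COPDAI extension, and their exact wire layout here is our assumption for illustration.

import socket, struct, uuid

BEACON_PORT = 5670  # ZRE discovery port

def build_beacon(node_id: uuid.UUID, port: int, rt_port: int,
                 children: list[uuid.UUID]) -> bytes:
    # ZRE-compatible head: "ZRE" + version byte + UUID + mailbox port.
    head = b"ZRE" + bytes([0x01]) + node_id.bytes + struct.pack("!H", port)
    # Assumed extension layout: rt-port, then the raw child UUIDs.
    ext = struct.pack("!H", rt_port) + b"".join(c.bytes for c in children)
    return head + ext

def broadcast(beacon: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(beacon, ("255.255.255.255", BEACON_PORT))

beacon = build_beacon(uuid.uuid4(), port=40123, rt_port=40124,
                      children=[uuid.uuid4(), uuid.uuid4()])
broadcast(beacon)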
Another problem is that a bridge node can receive the first beacon from a peer only after it has already started receiving messages from it; in this situation we get a message from a node whose IP address and port we do not know (Fig. 5).
So we must also consider discovery over TCP: our first command to any new peer to which we connect is a "Hello" command carrying our IP address and ports. Below are the steps we follow: if we receive a UDP beacon from a new peer, we connect to the peer through a TCP socket.
Each message must contain the UUID of the sender.
If it"s a Hello message, we connect back to that peer if not already connected to it.
If it"s any other message, we must already be connected to the peer, if it is not the case, we raise an assertion.
We send messages to each peer using the per-peer socket, which must be connected.
When we connect to a peer, we also tell our Node that the peer exists.
Every time we get a message from a peer, we treat that as a heartbeat. Fig. 6 shows the message format for the "Hello" command over IP. Below we explain the meaning of each part of the "Hello" message:
1) Part 1: the event type (4 bytes); it is equal to %d1.
2) Part 2: the signature, which lets us check that the received message is a COPDAI message; it must always equal %xAAA2.
3) Part 3: the protocol version.
4) Part 4: a sequence number that allows our node to check, for each peer, whether any messages were lost between the current received message and the last received one.
5) Part 5: a string that concatenates the IP address of the peer and its port; the endpoint is specified as "tcp://ipaddress:mailbox".
6) Part 6: a string that concatenates the IP address of the peer and its real-time port; the rt-endpoint is specified as "tcp://ipaddress:rt-mailbox".
7) Part 7: the list of UUIDs of the nodes under the responsibility of the sender and, for each UUID, the list of proposed services.
8) Part 8: the list of groups to which the peer belongs.
9) Part 9: the "group status sequence", a one-octet number that is incremented each time the peer joins or leaves a group. Each peer may use this to assert the accuracy of its own group management information.
10) Part 10: the list of services offered by the sender.
11) Part 11: a human-friendly peer name.
12) Part 12: headers, a hash table (key/value map) of additional information that the peer can optionally send.
E. Detecting Disappearances over IP
Several factors can distort the decision that a peer has really disappeared: under high TCP traffic, UDP packets can be dropped (causing a long delay before the next beacon arrives), or a message on top of TCP, which also counts as a heartbeat, may arrive with high latency.
To overcome this problem, if we do not get a beacon from the peer after a while, we switch to TCP heartbeats, which consist of sending a PING command and receiving a PING_OK response; the PING command is described in the ZRE protocol as follows (Fig. 7). Below we explain the meaning of the new part of the "PING" message: Part 1: the event type (4 bytes); it is equal to %d6.
If the peer is still alive, it must respond with a PING_OK, as described in Fig. 8. Below we explain the meaning of the new part of the "PING_OK" message: Part 1: the event type (4 bytes); it is equal to %d7. The sketch below illustrates this liveness logic.
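A sketch of the liveness logic just described, using the %d6/%d7 event types quoted above; the timeout values are assumptions, since they are left configurable:

import time

EVT_PING, EVT_PING_OK = 6, 7       # event-type values quoted above
BEACON_TIMEOUT = 5.0               # assumed: silence before we probe via TCP
PING_TIMEOUT = 2.0                 # assumed: how long we wait for PING_OK

class PeerLiveness:
    def __init__(self, send_tcp):
        self.send_tcp = send_tcp   # callable taking an event-type int
        self.last_seen = time.time()
        self.ping_sent_at = None

    def on_any_message(self, event_type: int) -> None:
        """Every message from the peer, of any type, refreshes the heartbeat."""
        self.last_seen = time.time()
        self.ping_sent_at = None
        if event_type == EVT_PING:
            self.send_tcp(EVT_PING_OK)   # answer probes so we look alive too

    def check(self) -> bool:
        """Return False once the peer should be declared gone."""
        now = time.time()
        if self.ping_sent_at is not None:
            return now - self.ping_sent_at < PING_TIMEOUT
        if now - self.last_seen > BEACON_TIMEOUT:
            self.send_tcp(EVT_PING)      # beacon went quiet: probe over TCP
            self.ping_sent_at = now
        return True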
F. Greeting Message over IPC
The following (Fig. 9) illustrates the Hello message in the case of IPC communication. Below we explain the meaning of the new part of the "Hello" message over IPC: Part 1: the event type (4 bytes); it is equal to %d8. Fig. 10 shows a typical example of the links between nodes in the COPDAI middleware: nodes of the same hierarchical group communicate with each other and with their leader, leaders communicate with each other and with the Bridge Node, and Bridge Nodes communicate with each other. Each time a leader detects a change among the nodes under its responsibility, e.g., a node in the group has disappeared or a new node has joined its group, it notifies the other leaders by sending the topology heartbeating message shown in Fig. 11.
G. Topology Heartbeating
Below we explain the meaning of the new part of the "Topology Heartbeating" message: Part 1: the event type (4 bytes); it is equal to %d9. The Bridge Node, being itself a leader, is responsible for notifying the other leaders on the same machine of any change in its group.
A Bridge Node is elected among the leaders, so it is responsible for propagating the topology to the other Bridge Nodes once a change has happened at the level of its own group or at the level of another leader's group; the message (Fig. 11) is then sent to the other Bridge Nodes, with the difference that it concatenates all the nodes present on the machine with their respective services, not only the nodes that belong to its group.
In the opposite direction, once a Bridge Node receives a topology message from another one, it notifies the leaders on its machine using the message format shown in Fig. 12; in the same way, the leaders propagate this message to each member of their group.
H. Communication between Two Peers
One of the problems we encountered in trying to have true peer-to-peer communication is that ZeroMQ sockets are not symmetric. To overcome this, we adopted the harmony pattern: for outgoing messages, we use one DEALER socket per peer so we can safely send messages.
For incoming messages, we chose the ROUTER socket. The harmony pattern thus comes down to these components (Fig. 13 and 14): one UDP socket where we listen to the broadcast beacons (in the case of a Bridge Node).
One ROUTER socket that we bind to an ephemeral port, and where we receive incoming messages from peers.
One DEALER socket per peer that we connect to the peer"s ROUTER socket.
One ROUTER socket (named RT-ROUTER) that we bind to an ephemeral port, and where we receive incoming messages from peers that must be processed in real time (we suppose here that the node is of type RTCyclicNode and that the listener is decorated properly to behave in real time; more details are given in our recent contribution [36]).
One DEALER socket (named RT-DEALER) per peer that we connect to the peer's RT-ROUTER socket.
Reading from our ROUTER/RT-ROUTER socket.
Writing to the peer's DEALER/RT-DEALER socket. If the peer disappears and comes back with a different IP address and/or port, we have to disconnect our DEALER sockets and reconnect to the new ports.
In the case of IPC communication, a folder hierarchy is adopted as shown in Fig. 15: a file is created in a folder named "dealer", which is used as the medium for the DEALER socket; another file is created in the folder "dealer/rt" for real-time communication; and the same tree structure is adopted for the ROUTER and RT-ROUTER sockets. The message exchanged between peers in the same hierarchical group has the format shown in Fig. 16. Below we explain the meaning of the new parts of the message exchanged between two nodes: Part 1: the event type (4 bytes); it is equal to %d2. Part 3: the service name to invoke. Part 6: the message content, serialized using Protocol Buffers [37] (it is the serialized object we pass to the service as a parameter). The sketch below shows the socket layout of the harmony pattern.
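A minimal pyzmq sketch of this socket layout (our illustration, not the COPDAI sources): one ROUTER bound to an ephemeral port for all incoming traffic and one DEALER per peer for outgoing traffic; the RT-ROUTER/RT-DEALER pair would simply duplicate this structure.

import zmq

ctx = zmq.Context.instance()

# One ROUTER for everything we receive; bind_to_random_port picks the
# ephemeral port mentioned above.
router = ctx.socket(zmq.ROUTER)
my_port = router.bind_to_random_port("tcp://*")

dealers: dict[str, zmq.Socket] = {}   # peer UUID -> outgoing DEALER socket

def connect_peer(peer_uuid: str, endpoint: str) -> None:
    """Create (or recreate) the per-peer DEALER aimed at the peer's ROUTER.
    If a peer reappears on a new address, we close and reconnect, as the
    text requires."""
    if peer_uuid in dealers:
        dealers[peer_uuid].close(linger=0)
    d = ctx.socket(zmq.DEALER)
    d.connect(endpoint)               # e.g. "tcp://10.0.0.7:40123" (example)
    dealers[peer_uuid] = d

def send_to(peer_uuid: str, payload: bytes) -> None:
    dealers[peer_uuid].send(payload)

def poll_incoming() -> None:
    if router.poll(timeout=0):
        frames = router.recv_multipart()   # [router identity, payload...]
        print("got", frames[1:])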
I. COPDAI Node Topology Knowledge Management
Each node, according to its position in the COPDAI hierarchy (normal node, Leader, or Bridge), maintains some knowledge about the current topology; for each Bridge Node, for instance, it keeps the endpoint, port, and rt-port.
J. Message Routing
When a node wants to send a message to another node that does not belong to its hierarchical group, it constructs the message (Fig. 17) and sends it to its leader. If the leader finds that the target node is managed by another leader on the same machine, it sends the message to that leader, which transmits it to the target node; otherwise, it transmits the message to the Bridge Node on the local machine, which sends it to the Bridge Node that manages the target node. The message is thus routed until it reaches its destination; an example is illustrated in Fig. 18, and a sketch of the forwarding rule follows.
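The forwarding rule can be sketched as a pure function. The lookup tables (group membership, local leaders, remote bridges) are illustrative stand-ins for the topology knowledge described in the previous subsection, not COPDAI's real data structures.

def next_hop(target: str, role: str,
             group: set[str], leader: str, bridge: str,
             local_leaders: dict[str, str],
             remote_bridges: dict[str, str]) -> str:
    """Return the node this message should be forwarded to next, mirroring
    the chain: node -> leader -> sibling leader / local bridge -> remote bridge."""
    if target in group:
        return target                       # same group: direct peer-to-peer
    if role == "node":
        return leader                       # escalate to our group leader
    if role == "leader":
        # Another leader on this machine manages the target? Else the bridge.
        return local_leaders.get(target, bridge)
    # We are the bridge: hand off to the bridge that manages the target.
    return remote_bridges[target]

# Hypothetical topology: node n1 sends to n9, which lives on another machine.
hop = next_hop("n9", role="node", group={"n1", "n2"},
               leader="L1", bridge="b1", local_leaders={}, remote_bridges={})
print(hop)  # "L1": the first hop is always our own leader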
K. Group Messaging
For group messaging we want to be able to join and leave groups, discover the existence of nodes in other groups, and send a message at once to several nodes belonging to the same group. This gives us some new protocol commands. JOIN: we send this to all peers when we join a group (Fig. 19). Below we explain the meaning of the new parts of the join message: Part 1: the event type (4 bytes); it is equal to %d4.
Part 4: the group the node wants to join.
LEAVE: we send this to all peers when we leave a group (Fig. 20). Below we explain the meaning of the new parts of the leave message: Part 1: the event type (4 bytes); it is equal to %d5. Part 4: the group the node wants to leave.
Fig. 21 illustrates the format of a message sent to a group. When a leader receives a JOIN, LEAVE, or multi-part message, it propagates it to the other leaders; likewise, when a Bridge Node receives a JOIN, LEAVE, or multi-part message, it propagates it to the other Bridge Nodes.
Leaders and Bridge Nodes are also responsible for propagating this message to the nodes they manage.
L. Election and Membership
1) When a node starts for the first time, it sends a request to find a free group to all nodes on the same machine (Fig. 22).
2) If a leader receives a group search request and there is free space in its group, it reserves a space for the requester and sends an invitation (Fig. 23).
3) Once an invitation is received, the node sends a request to join the leader's group (it must return the same invitation code received in the previous step) (Fig. 24). 4) The leader then sends a confirmation with the name of the group of which the node has just become a member (Fig. 25).
5) Once this is done, the node sends a "JOIN" message to notify everyone that it has just joined the group (Fig. 19), and then creates the necessary IPC sockets with the other group members.
Note: all these operations are time-stamped; if a response takes too long to arrive, the operation is cancelled and the process is resumed.
6) If the node does not receive any response from leaders, or if it has failed to become a member of a group after a configurable amount of time, the node creates a new group, joins it, and sends a JOIN command to the outside world; having become the leader of this new group, it notifies peers with the message shown in Fig. 26. 7) If leaders receive a "LEADERSHIP" message, they send back a congratulation message and specify which one of them is the Bridge Node (Fig. 27).
8) Once the leader receives a congratulation message, it creates the necessary IPC sockets with the other leaders. 9) If, after a while, the leader does not receive any congratulation message, it considers itself a Bridge Node, creates the necessary TCP sockets, and starts listening on the UDP port to detect other Bridge Nodes over IP.
10) If a leader disappears, the remaining nodes in the group start an election by exchanging a message containing their start date (Fig. 28): the node that started first becomes the new leader and continues the process explained in step 6. 11) Each node saves/updates the last time it signalled its presence to the outside world; if a while has passed since it last notified peers of its presence (a pre-parameterized value in the COPDAI middleware) and it was a leader, it concludes that it is no longer the leader, considers itself a normal node, and starts the process again from step 1.
12) If a Bridge Node disappears, the remaining leaders start an election by exchanging the message shown in Fig. 28: the leader that started first becomes the new Bridge Node and continues the process from step 9. The new Bridge Node is responsible for propagating the new topology to the outside world (Fig. 11). A sketch of this seniority election follows.
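A sketch of the seniority election used in steps 10 and 12: peers exchange start timestamps and the longest-running node wins. Tie-breaking by UUID is our addition, since two nodes could report identical start times.

def elect_leader(candidates: dict[str, float]) -> str:
    """candidates maps node UUID -> start time (seconds since epoch).
    Returns the UUID of the node that started first; ties broken by UUID."""
    return min(candidates, key=lambda u: (candidates[u], u))

peers = {"node-a": 1700000050.0, "node-b": 1700000003.2, "node-c": 1700000003.2}
print(elect_leader(peers))  # "node-b": earliest start; UUID breaks the tie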
M. Content Sharing
We used the InterPlanetary File System (IPFS) [38], a peer-to-peer distributed file system that stores and retrieves files in a BitTorrent-like way.
So, to allow the sharing of data captured by sensors (images, videos, etc.) or of artificial intelligence models between researchers/robots, we installed on each machine the ipfs daemon, which connects it to the global distributed network, by running the following commands:
$> ipfs init (1)
$> ipfs daemon (2)
IPFS requires 512 MiB of memory and the installation takes only 12 MB; if the machine does not have the necessary resources, we simply skip the IPFS installation.
The first time a node starts, it verifies that it has the ipfs capability by running a capability-check command (3). If a node wants to add a file to the distributed file system, it just runs:
$> ipfs add filename (4)
To allow nodes located on machines that do not have sufficient resources to share files, we run dedicated COPDAI nodes (named IPFS Nodes) on servers that have enough resources; these nodes offer the "ipfs" service, and each node can send files to them using the "SEND MESSAGE" command (Fig. 16). The IPFS nodes persist the message content in a file and then add it to the distributed file system (command 4). After adding a file to ipfs, command 4 returns a hash code. IPFS already offers the possibility to retrieve files via a browser or a command line interface (CLI), but this requires that the hash code is already known; to overcome this limitation, an event is fired associating each hash code with the UUID of the node that generated it and the file creation time, and the event is sent to the distributed tracing system. In the future we plan to add an interface to browse the files by agent UUIDs/names.
Researchers can share their content by simply sharing the hash code. The sketch below shows one way a node could wrap these CLI commands.
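One plausible way a node could wrap the quoted CLI commands (our own sketch, not COPDAI's "ipfs" service; the capability check via a PATH lookup is an assumption, and the file name is hypothetical):

import shutil, subprocess

def has_ipfs() -> bool:
    """Capability check (assumed): is an ipfs binary on PATH at all?"""
    return shutil.which("ipfs") is not None

def ipfs_add(path: str) -> str:
    """Run `ipfs add <path>` (command 4) and return the content hash.
    The CLI prints lines like: 'added <hash> <filename>'."""
    out = subprocess.run(["ipfs", "add", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()[1]   # second token of the 'added ...' line

if has_ipfs():
    print(ipfs_add("sensor_capture.bin"))  # hypothetical file name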
N. Distributed Logging
An Android version of the COPDAI agent has also been developed [43] to allow any robot to benefit from the sensors existing on a smartphone (accelerometer, GPS, gyroscope, magnetometer, etc.); this allowed us to validate our communication architecture as well as to extend the capabilities of the robot used in our contribution [44] (Fig. 29) after attaching the smartphone to its body.
To challenge our middleware, we compared its performance with ROS2 [45], the upgrade of ROS1 that utilizes the Data Distribution Service; the main goal of ROS2 is to provide real-time capability, and it is under heavy development. It supports communication over IP, but ROS2 does not support ARM boards, even though most mobile robots use embedded cards based on the ARM architecture (Jetson TX2, Raspberry Pi, BeagleBone, Orange Pi, etc.) because they are energy efficient.
For each type of communication (COPDAI over IPC, COPDAI real-time over IPC, COPDAI real-time over IP, and ROS2), we measured the latency that a message takes to pass from one node to another. We studied three scenarios: a node communicates with only one other node, a node communicates with 10 nodes, and a node communicates with 100 nodes at the same time; for each scenario we sent 10k messages. To limit network noise, all nodes were deployed on the same machine (Asus Zephyrus ROG, Intel i7 CPU at 2.3 GHz, 16 GB RAM), with Ubuntu 20.04 patched with PREEMPT_RT as the RTOS. Table I shows the average latencies in each scenario: for real-time communication using the COPDAI middleware on top of IPC, the average latency changed little when scaling from 10 to 100 nodes, which shows great stability of the system, whereas the average latency climbed exponentially in the case of ROS2; likewise, for COPDAI RT over IP and COPDAI over IPC, the average latency is stable and robust to scaling up. Table II shows the maximum latencies obtained in each scenario: the highest latency was obtained when communicating between 100 nodes using ROS2, with more than 6 minutes of delay between sending and receiving the message, while the maximum latency using COPDAI RT did not exceed 9 seconds over IP and 7 seconds over IPC. Table III shows the minimum latencies: communication between two nodes using COPDAI RT over IPC gives the best result, and even with 100 nodes the same mechanism achieves a good result. As shown in Fig. 30, our middleware exhibits very good performance and scales efficiently; real-time communication over IPC is the most optimized, which justifies our choice of this mechanism for communication within the same machine, and in general the real-time communication shows great stability. A minimal sketch of this kind of latency harness follows.
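A minimal sketch of the kind of latency measurement described above (our own harness, not the paper's benchmark code): each message carries its send timestamp, and the receiver records the elapsed time; PAIR sockets over IPC stand in for the node-to-node channel.

import time
import zmq

ctx = zmq.Context.instance()
a, b = ctx.socket(zmq.PAIR), ctx.socket(zmq.PAIR)
a.bind("ipc:///tmp/latency-test")
b.connect("ipc:///tmp/latency-test")

latencies = []
for _ in range(10_000):                  # 10k messages, as in the evaluation
    a.send(time.perf_counter_ns().to_bytes(8, "big"))
    sent_ns = int.from_bytes(b.recv(), "big")
    latencies.append(time.perf_counter_ns() - sent_ns)

latencies.sort()
print("min", latencies[0], "avg", sum(latencies) // len(latencies),
      "max", latencies[-1], "ns")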
V. CONCLUSION
In this paper, a distributed, decentralized, real-time peer-to-peer protocol has been designed to allow robots and smart objects to act autonomously and improve their capabilities. The COPDAI middleware thus allows Autonomous Unmanned Surface Vessels to share their knowledge in extreme and hostile environments where links and components are subject to degradation. The designed protocol allows COPDAI nodes to build a mesh network and be aware of their environment. In addition, COPDAI addresses the difficulty of accessing enough data to effectively train artificial intelligence models by easily enabling the sharing of collected sensor data among the members of the scientific community. A first version of this middleware has been developed in Python, Java and Android. We were also able to increase the perception capabilities of a mobile robot by attaching to its body an Android smartphone on which COPDAI nodes are deployed; the nodes collect mobile sensor data (accelerometer, GPS, gyroscope, magnetometer, etc.) and push them to the node deployed on the robot's embedded card. We compared the performance of COPDAI and the ROS2 middleware and found that COPDAI has lower latency and better response time, in addition to more stable communication when scaling the number of deployed nodes.
In future work, we will address the security of communication between nodes, and we will detail discovery and communication for nodes that are behind firewalls or routers. | 7,827.4 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |