Effect of Prebreakdown Time on Shock Wave Generation Characteristics of Underwater Plasma Sound Source
Acoustic array bunching is an effective method to realize the directional radiation of an underwater plasma sound source (UPSS). The prebreakdown time is one of the decisive factors for the performance of acoustic array bunching. The working characteristics of the underwater plasma sound source and the prebreakdown process of underwater plasma arc discharge are analyzed in depth. The precondition for plasma sound source array bunching is given by analyzing the waveforms of electrical signals and acoustic signals. Through arc discharge experiments based on different aqueous solutions, a feasible scheme for underwater plasma sound source array bunching in salt water is proposed. Through arc discharge experiments based on different system discharge parameters, the variation of prebreakdown time under the influence of charging voltage, conductivity, and other parameters was studied. The experimental results show that the prebreakdown time decreases with increasing charging voltage and conductivity. At the same time, the discharge current increases, the oscillation time decreases, and the energy injection rate at the electrode accelerates. The research results help clarify the prebreakdown mechanism of the plasma sound source and optimize the underwater plasma sound source array bunching scheme and the acoustic shock wave waveform.
Introduction
The high-power underwater strong sound source has a strong military and civilian demand background. During its development, many underwater strong sound sources with different generation mechanisms have appeared [1,2], such as active material driven, electromagnetic, explosive, laser, parametric array, fluid driven, and plasma. The discharge of a UPSS can be divided into two forms: underwater high-voltage pulsed arc discharge and underwater corona discharge [3,4]. Compared with traditional underwater strong sound sources, the shock waves generated by these two kinds of discharge have the advantages of high instantaneous emission power, wide frequency coverage, high electro-acoustic conversion efficiency, good repeatability, and controllable acoustic radiation direction [4-6]. By using the acoustic lens method, the curved surface reflection method, or the array bunching method, a high-intensity bunched acoustic shock wave with sharp directivity can be formed in a specified direction [7-10]. Therefore, this technology has been widely used in many fields, such as marine strategic resource development, oil pipeline blockage removal, medical extracorporeal shock wave lithotripsy, underwater ultra-wideband target detection, and remote secure communication [3-6, 11, 12]. The traditional acoustic lens method and curved reflector method achieve acoustic bunching based on geometric laws, and their radiation directivity and bunching gain are limited by the focus of the lens or curved reflector [7]. Therefore, for a high-power UPSS, in order to flexibly control the radiation directivity and further improve the radiation intensity, the UPSS array bunching method can be used to form a strongly bunched acoustic shock wave in a specified area [8-10].
This method not only reduces the requirement on the emission power of a single plasma sound source but also improves the safety and reliability of the system. However, the UPSS undergoes a variety of complex physical and chemical processes during instantaneous discharge, producing phenomena such as a huge pulse current, electromagnetic radiation, an acoustic shock wave, and bubble pulsation.
In order to achieve multi-source array bunching, it is necessary to ensure that the arrival times of the acoustic shock waves generated by each source (array element) at the bunching center point are consistent. However, the arc discharge process of UPSS is very complex, and many factors affect the array bunching of the shock wave. Taking the bunching of two sound sources as an example, in order to realize acoustic shock wave bunching at a specified position, the arc discharges of the plasma sound sources must be well synchronized. In particular, the triggering of the dual-channel trigger switch should be synchronized, and the prebreakdown time and the generation time of the acoustic shock wave should also be consistent. The research results of references [10,18] show that after the trigger switch of the underwater plasma sound source is turned on, the time interval between the moment of shock wave generation and the moment of plasma channel formation is strictly consistent. However, the time dispersion of the bubble wave generated by the bubble pulsation is larger. Therefore, in theory, as long as the plasma channel formation time of the two sound sources is controllable, the acoustic shock wave can be bunched at a specified position. After the trigger switch is triggered and turned on, the formation of the acoustic shock wave still needs to go through the two stages of prebreakdown and plasma channel formation, where the plasma channel formation completes almost instantaneously. Therefore, it is very important to study the effect of prebreakdown time on plasma source array bunching.
In this paper, the prebreakdown stage of the UPSS arc discharge process is analyzed in depth, and the effect of prebreakdown time on UPSS array bunching is studied through experiments. At the same time, the influence of different discharge parameters, such as water conductivity, discharge voltage, and discharge current, on the prebreakdown time is analyzed to provide theoretical support for the subsequent research and design of UPSS array bunching.
In the second part of this paper, the working characteristics of UPSS are introduced, the experimental equipment of underwater plasma arc discharge is given, the typical electrical signal waveform and acoustic signal waveform are analyzed, and the concept of prebreakdown time is given; in the third part, the design idea of UPSS double-sound source array bunching experimental device is proposed, and the prebreakdown process in different aqueous solutions is analyzed; in the fourth part, the influence of different discharge parameters on the prebreakdown time is analyzed; finally, the research conclusions and future research directions are summarized.
Operating Characteristics of UPSS
The arc discharge working process of a UPSS converts the electric energy stored in the energy storage capacitor into sound energy. The arc discharge occurs between the discharge electrodes. When the field strength between the discharge electrodes reaches 10^3∼10^4 V/cm [14], that is, when the injected electric energy is enough to dissociate and ionize the water medium, the high-voltage electrode will extend several highly conductive "leaders" toward the low-voltage electrode [12].
This process is usually called the prebreakdown process, or the electro-thermal breakdown process. When one of the leaders reaches the other electrode while the high field strength is still maintained between the electrode tips, an electron avalanche occurs and a highly conductive plasma discharge channel is generated; the prebreakdown process then ends and the arc discharge stage begins [14,19]. At the moment the plasma discharge channel forms, the acoustic shock wave radiates omnidirectionally into the water and gradually attenuates into a direct wave (shock wave) acoustic pulse during propagation. If the acoustic shock wave needs to be radiated directionally to improve the radiation intensity in a specified direction, the center of the discharge electrode can be placed at the geometric focus of a curved reflector, and the direct wave will converge according to geometric laws to form a bunched shock wave [7]. As the arc discharge between the electrodes ends, the plasma discharge channel gradually extinguishes and evolves into a bubble with high internal temperature and pressure. The bubble undergoes expansion-contraction motion and produces a strong bubble shock wave [13-16]. The bubble collapses gradually during the pulsation process, and the arc discharge process then ends.
The experimental device for UPSS arc discharge used in this paper is shown in Figure 1. The device is mainly composed of five parts: a high-voltage charging system, a discharge circuit, a high-voltage trigger system, a charge and discharge control system, and measuring devices. Directional radiation of a single sound source can be realized through an ellipsoidal reflector: the discharge electrode is placed at the first focal point of the ellipsoidal reflector, and the generated shock wave is reflected by the reflector and forms a bunching wave at the second focal point. The discharge voltage on the discharge electrode was measured by a high-voltage probe, the discharge current by a Rogowski coil, and the acoustic shock wave waveform (including the direct wave (shock wave), bunching wave, and bubble wave) by a pressure sensor. The sound wave measuring point is located on the acoustic axis of the ellipsoidal reflector, 240 mm away from the center of the sound source. The measured electro-acoustic characteristics of the UPSS during arc discharge are shown in Figure 2.
According to Figure 2(a), in addition to the acoustic shock wave signal measured by the pressure sensor, an electromagnetic interference (EMI) signal can also be clearly seen. The EMI signal is caused by crosstalk of the trigger pulse and the intense discharge current into the pressure sensor. The EMI signal corresponds to the discharge voltage start time (6.0 ms) and the discharge current start time (6.252 ms) measured on the discharge electrode, as shown in Figure 2(b).
In Figure 2(a), the direct wave propagating to the measurement point at 240 mm appears at 6.416 ms, so the propagation time is t = L/C = 0.240 m / (1480 m/s) ≈ 0.162 ms, where L is the propagation distance of the direct wave (shock wave) and C is the propagation velocity of the direct wave in water. The sound velocity is related to many factors and is taken as approximately 1480 m/s in this paper. Therefore, the formation time of the direct wave is 6.416 ms − 0.162 ms = 6.254 ms. The experimental results show that the direct wave is formed at the moment the strong oscillating discharge current appears (6.252 ms), rather than at the moment the discharge voltage appears on the secondary side of the trigger switch after the switch is turned on. The strong oscillating discharge current mentioned here means that when the plasma channel discharges violently, the resistance in the channel is very small, so the discharge current exhibits underdamped oscillation [12]. The secondary side of the trigger switch is connected to the discharge electrode, and the discharge electrode is placed in the water medium. When the trigger switch is turned on, the plasma arc discharge does not occur immediately; only after a period of time are the plasma arc discharge and acoustic shock waves generated. This period of time is called the prebreakdown time. That is to say, the prebreakdown time starts at the moment the discharge voltage appears on the secondary side of the trigger switch, and ends at the moment the secondary discharge voltage oscillates rapidly or the strong oscillating discharge current appears. Good prebreakdown time consistency is one of the important prerequisites for UPSS array bunching. In this paper, the prebreakdown process and the electrical parameters affecting the prebreakdown time are analyzed experimentally, providing a reliable basis for the realization of UPSS array bunching.
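As a quick numerical cross-check of the timing argument above, the following sketch reproduces the quoted propagation delay, direct-wave formation time, and prebreakdown time. It is illustrative only: the numbers are the ones quoted in the text, and the assumed sound speed of 1480 m/s is the value adopted in this paper.

```python
# Timing arithmetic for the waveforms in Figure 2 (values quoted in the text).
L = 0.240            # propagation distance of the direct wave, m
C = 1480.0           # assumed sound speed in water, m/s

t_arrival = 6.416e-3   # direct wave arrival at the pressure sensor, s
t_voltage = 6.000e-3   # discharge voltage start on the trigger secondary, s
t_current = 6.252e-3   # strong oscillating discharge current start, s

t_prop = L / C                           # ~0.162 ms propagation delay
t_formation = t_arrival - t_prop         # ~6.254 ms direct-wave formation time
t_prebreakdown = t_current - t_voltage   # ~0.252 ms prebreakdown time

print(f"propagation delay : {t_prop * 1e3:.3f} ms")
print(f"formation time    : {t_formation * 1e3:.3f} ms")
print(f"prebreakdown time : {t_prebreakdown * 1e3:.3f} ms")
```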
Analysis of Prebreakdown Process
During the underwater plasma arc discharge, the discharge electrode is placed in water. When the field strength in the discharge electrode gap is strong enough to dissociate and ionize the water medium, the prebreakdown process will be affected not only by the energy release rate on the energy storage capacitor but also by the electric energy injection rate on the discharge electrode [12,14].
In this paper, the design idea of a dual sound source array bunching experimental device based on UPSS is proposed. The experimental principle diagram is shown in Figure 3. The high-voltage charging system charges two groups of energy storage capacitors separately, and the capacitance values of the two groups are the same. In order to ensure synchronous triggering of the two sound sources, the same trigger signal is used to control the two sets of high-voltage trigger systems, which have the same structure. The high-voltage trigger systems of the two channels are connected to two sets of discharge electrodes, and the two sets of discharge electrodes are placed in the same water medium. Based on this design idea, and provided the experimental conditions are satisfied, the bunching of the dual sound source based on UPSS can be realized as long as the prebreakdown times of the two sound sources are strictly controlled.
However, for the prebreakdown process, the main factors affecting the prebreakdown time include the synchronization of trigger switches, the distance between discharge electrodes, discharge voltage, discharge current, conductivity, and so on. Secondly, discharge electrode material, discharge electrode structure (tip-tip or tip-plate structure), water temperature, and other factors will also affect the prebreakdown time [20][21][22][23][24]. In the experimental study, it is assumed that the trigger switch has good synchronization performance, and the discharge electrode adopts a tip-to-tip structure, which can reduce the discreteness of plasma channel formation. In addition, the two sets of discharge electrode materials and electrode spacing are the same, and the water temperature is constant at 15°C. On this basis, the feasibility of array bunching is analyzed, and the influence of different discharge voltage, discharge current, conductivity, and other parameters on the prebreakdown time is discussed.
A UPSS is usually used in tap water or salt water, that is, a water medium with a certain conductivity. Under the influence of the conductivity of the water medium, the highly conductive "leader" propagates toward the other end of the discharge electrode. After a period of time, the field strength at the leader head increases, causing the motion of the conductive particles to accelerate and thus preparing for the vaporization and ionization of the aqueous medium. When the density of conductive particles in the discharge electrode gap is high enough, the leader continues to propagate to the other end of the discharge electrode, forming a high-voltage arc across the two electrodes. At this time, energy continues to be injected, forming a plasma arc discharge channel that achieves water dielectric breakdown [14,20]. It can be seen that the influence of the conductivity of the water medium on the prebreakdown time is significant. Next, we analyze the prebreakdown time during plasma discharge in tap water and salt water through experiments. The experimental conditions are as follows: the discharge electrode material is copper, the structure is tip-tip, the water temperature is 15°C, the capacitance of the energy storage capacitor is 5 μF, the charging voltage is 20 kV, the conductivity of tap water is 0.3 mS/cm, and the conductivity of salt water is 17.8 mS/cm. For a single plasma sound source, the arc discharge is discrete [7]. Under repeated experimental conditions, five groups of arc discharge experiments were carried out in tap water and salt water respectively, and the changes of the secondary discharge voltage of the trigger switch were measured. The measurement results are shown in Figures 4 and 5, respectively.
It is obvious from Figure 4 that during plasma arc discharge in tap water, the prebreakdown time is random, ranging from tens of microseconds to hundreds of microseconds. This conclusion is consistent with the findings of references [14,20]. Therefore, in tap water, even if the parameters of each plasma sound source are the same, the prebreakdown time of each discharge still differs considerably, so it is difficult to realize synchronous discharge of two sound sources, let alone array bunching of two sound sources. It can be seen from Figure 5 that, since the conductivity of salt water is higher than that of tap water, the prebreakdown time in the arc discharge process is several microseconds, much less than in tap water, and the time required for prebreakdown is very stable. Therefore, it is feasible to realize dual-source array bunching in salt water. To further illustrate this conclusion, we set the charging voltage to 16 kV, 20 kV, and 24 kV, the capacitance of the storage capacitor to 10 μF, and kept other parameters unchanged. Five plasma arc discharge experiments were carried out for tap water and salt water respectively. The resulting prebreakdown time comparison is shown in Table 1.
Table 1 shows that the prebreakdown time in tap water is random, while the prebreakdown time in salt water is much less random. In addition, in both tap water and salt water, the prebreakdown time decreases as the charging voltage increases. Comparing the average prebreakdown time for a 10 μF storage capacitor at 20 kV in Table 1 with the average prebreakdown time for a 5 μF storage capacitor at 20 kV in Figure 4, it is also found that as the capacitance of the energy storage capacitor increases, the prebreakdown time becomes shorter and less random. The above analysis shows that the prebreakdown time is related to the injection rate of electric energy at the discharge electrode. By analyzing the changes of discharge voltage, discharge current, and conductivity, the variation law of prebreakdown time under different discharge parameters can be established, providing theoretical support for research on UPSS array bunching.
Variation of Prebreakdown Time under Different System Parameters
In this paper, the precondition for underwater plasma dual sound source array bunching is that the trigger switches of the two sound sources are triggered synchronously, the arc discharges occur synchronously, and the direct waves form synchronously. According to the research results of reference [10], the time discreteness of the direct wave and the bunching shock wave formed after the arc discharge of UPSS is small, because the time interval between the formation time of the direct wave and the conduction time of the trigger switch is consistent. After the direct wave is reflected and converged according to the geometric law of the curved reflector, the bunching shock wave is formed; therefore, there is also a strict time interval between the bunching shock wave and the direct wave. Consequently, after the trigger switch is turned on, dual sound source array bunching can be realized as long as the prebreakdown times are consistent. The prebreakdown time is affected by the charging voltage, the distance between the discharge electrodes, and the conductivity. Studying the variation of prebreakdown time under different system parameters helps us understand the prebreakdown process. The timing budget can be summarized as shown below.
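The bunching precondition just described can be condensed into a simple timing budget. The notation below is ours, not the paper's: t_trig is the trigger conduction time, t_pre the prebreakdown time, Δt_form the fixed interval between channel formation and shock wave emission, and d_i/c the propagation delay from source i to the bunching point.

```latex
% Illustrative timing budget for two-source bunching (assumed notation):
\begin{equation*}
  t^{(i)}_{\mathrm{arrival}}
    = t^{(i)}_{\mathrm{trig}} + t^{(i)}_{\mathrm{pre}}
      + \Delta t_{\mathrm{form}} + \frac{d_i}{c}, \qquad i = 1, 2,
  \qquad
  t^{(1)}_{\mathrm{arrival}} = t^{(2)}_{\mathrm{arrival}} .
\end{equation*}
```

With synchronized triggers, a fixed Δt_form, and equal propagation distances by geometry, the condition reduces to equal prebreakdown times for the two sources, which is exactly the consistency requirement studied in this section.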
The Conditions of the First Experiment.
The capacitance of the energy storage capacitor is 10 μF, the charging voltage is 16 kV, the water temperature is 15°C, the distance between the discharge electrodes is 4 mm, and the conductivities are 0.3 mS/cm, 22.1 mS/cm, and 44.1 mS/cm, respectively. Under different conductivity conditions, the waveforms of discharge voltage and discharge current measured in the experiment are shown in Figure 6. The results show that the conductivity of the water medium has an obvious effect on the prebreakdown time.
The larger the conductivity, the smaller the prebreakdown time; when the conductivity is only 0.3 mS/cm (tap water), the prebreakdown time increases greatly. In Figure 6(b), when the conductivity is relatively large, the discharge current reaches several tens of kA. At this time, the current density at the tip of the discharge electrode is very high, so the water medium near the tip is continuously heated, vaporized, and ionized to generate microbubbles. With the increase of the field strength between the electrodes, the "leader" quickly propagates to the other electrode, forming a plasma channel and then triggering the breakdown of the water medium. When the conductivity is 0.3 mS/cm, the discharge current on the electrode appears later and has a smaller amplitude.
This is because when the discharge voltage is large, the strong field between the electrodes still makes the "leader" propagate from the high-voltage electrode to the low-voltage electrode, forming the plasma channel discharge, and a discharge current is generated on the electrode. Figure 6(b) also shows that with increasing conductivity, the peak value of the discharge current increases and the oscillation time decreases after the breakdown of the water medium. This indicates that the injection rate of electric energy at the discharge electrode increases, and that the waveform parameters of the discharge current directly affect the waveform parameters of the acoustic shock wave. The electric energy mentioned here is the electric energy injected into the discharge electrode after prebreakdown, which can be obtained by integrating the product of the discharge current waveform and the discharge voltage waveform over time, as sketched below. When the conductivities are 22.1 mS/cm and 44.1 mS/cm, the waveforms of the acoustic shock waves generated by the arc discharge are shown in Figure 7. As can be seen from Figure 7, when the conductivity is large, the peak pressure of the shock wave is large and the bottom width of the shock wave waveform is small. At the same time, due to the influence of the prebreakdown time, the shock wave is generated earlier than when the conductivity is small. This shows that the parameters of the shock wave are related to the parameters of the discharge current and the prebreakdown time.
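As a hedged sketch of this integration, the snippet below computes the injected energy from sampled voltage and current waveforms. The waveforms here are synthetic damped oscillations standing in for the measured signals; only the trapezoidal integration of u(t)·i(t) is the point.

```python
import numpy as np

# Synthetic stand-ins for the measured discharge voltage u(t) and current i(t);
# damped underdamped oscillations roughly mimicking Figure 6(b).
t = np.linspace(0.0, 50e-6, 5001)                             # time axis, s
u = 20e3 * np.exp(-t / 10e-6) * np.cos(2 * np.pi * 1e5 * t)   # voltage, V
i = 30e3 * np.exp(-t / 10e-6) * np.sin(2 * np.pi * 1e5 * t)   # current, A

p = u * i                                          # instantaneous power, W
E = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))    # trapezoidal integral, J
print(f"injected electric energy ~ {E:.2f} J")
```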
The Conditions of the Second Experiment.
The capacitance of the energy storage capacitor is 10 μF, the conductivity is 22.1 mS/cm, the water temperature is 15°C, and the distance between the discharge electrodes is 4 mm. The charging voltages are 16 kV, 20 kV, and 24 kV, respectively. Under different charging voltage conditions, the waveforms of discharge voltage and discharge current measured in the experiment are shown in Figure 8.
From Table 1 and Figure 8(a), we can see that during the arc discharge of UPSS, the prebreakdown time in salt water decreases with increasing system charging voltage, although the decrease is not pronounced. As is evident from Figure 8(b), the peak value of the discharge current increases as the charging voltage increases. The electric field strength at the electrode tip is then larger, and the prebreakdown time is reduced. The peak value of the discharge current is approximately linearly related to the energy injection rate at the discharge electrode and to the peak value and pulse width of the acoustic shock wave. Figure 9 shows the acoustic shock wave waveforms for different charging voltages. It can be clearly seen that the waveform parameters of the acoustic shock wave, such as peak pressure and full width at half maximum (FWHM), closely track the waveform parameters of the discharge current. In summary, the results of the two experiments can provide theoretical support for the design of the acoustic shock wave parameters generated by UPSS arc discharge and for UPSS array bunching.
Conclusions
In this paper, the prebreakdown process of UPSS arc discharge in different aqueous solutions is studied, and it is concluded that UPSS array bunching can be realized in salt water. Through UPSS arc discharge experiments based on different system discharge parameters, the variation law of prebreakdown time under the influence of different system parameters is analyzed, and some meaningful conclusions are obtained: (i) When the trigger switch is turned on, a discharge voltage appears on the secondary side of the trigger switch; this moment is defined as the prebreakdown initiation time.
The moment when the discharge voltage oscillates rapidly or a large oscillatory discharge current appears on the discharge electrode is defined as the prebreakdown end time. At the end of the prebreakdown process, almost simultaneously, the arc discharge produces an acoustic shock wave. Therefore, the precondition for UPSS array bunching is that the prebreakdown time and the formation time of the acoustic shock wave are controllable and well synchronized. (ii) The experimental results show that the randomness of the prebreakdown time is pronounced when arc discharge occurs in tap water, while the prebreakdown time is much more stable in salt water. Therefore, UPSS array bunching can be realized in salt water with a certain conductivity. (iii) The prebreakdown time decreases with increasing charging voltage and conductivity of the UPSS system. At the same time, the peak value of the discharge current on the discharge electrode increases, the oscillation time decreases, and the electric energy injection rate becomes faster, which increases the current density at the electrode tip and accelerates the prebreakdown heating process.
In studying the variation law of the prebreakdown time, this paper assumes that the water temperature is constant, the discharge distance is fixed, the triggering conditions are consistent, and the ablation of the electrode material is negligible. The experimental research is carried out under these assumptions. In follow-up work, we will further study the arc discharge process of UPSS, the relationship between system parameters and waveform parameters, the statistical analysis of waveform generation characteristics, and the bunching of the dual sound source.
These studies will provide theoretical support for the optimal design of UPSS array bunching and the acoustic shock wave waveform, and have important guiding significance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Positivity from J-Basis Operators in the Standard Model Effective Field Theory
In effective field theory (EFT), the positivity bound on dim-8 effective operators tells us that the $s^2$ contribution in the scattering amplitude of a 2-to-2 process geometrically corresponds to a convex cone whose external rays are composed of the ultraviolet (UV) states. The J-Basis method can provide a complete group theory decomposition of the scattering amplitude on the direct product of the gauge group and the Lorentz group, and thus search for all UV states. Compared to previous methods, which can only perform direct product decomposition on the gauge groups, the J-Basis method greatly improves the strictness of the restrictions and also provides a systematic scheme for calculating the positivity bounds of dim-8 operators.
Introduction
The Standard Model Effective Field Theory (SMEFT) framework provides a systematic approach to parameterize new physics (NP) effects at high energy using low-energy degrees of freedom. As a non-renormalizable theory, the SMEFT Lagrangian contains many operators of higher mass dimension, written as $\mathcal{L}_{\rm SMEFT} = \mathcal{L}_{\rm SM} + \sum_{n>4}\sum_i C_i^{(n)} O_i^{(n)}$, where $C^{(n)}$ and $O^{(n)}$ are the Wilson coefficients (WCs) and effective operators, respectively, of mass dimension $n$. These effective operators are written in terms of the Standard Model field building blocks, following the Lorentz and gauge symmetries [1-3]. They are enumerated order by order via the canonical mass dimension and form a complete and independent basis up to dimension 8 and higher in Refs. [4-11], with generalization to any mass dimension in Refs. [12,13]. The WCs parameterize ultraviolet (UV) information from the NP theory. In the top-down approach, once the heavy states of a UV theory are integrated out, the effective operators at the low energy scale can be obtained; this is called the matching procedure. Since the WCs comprise the information from the UV theory, if the experimental data show deviations from the Standard Model (SM) prediction, the WCs can be determined.
Given the null signal of NP, the WCs can only be restricted by current data or bounded theoretically. Using various processes, it is possible to restrict the WCs with experimental data via global fits [14-22]. On the other hand, the positivity bound was proposed [23-28] to constrain WCs based on the unitarity, analyticity, and locality properties of quantum field theory. Many works discuss positivity restrictions on SMEFT operator coefficients. The earliest work on the positivity bound can be traced back to Ref. [23], which established a positivity bound in the forward scattering limit of 2-to-2 elastic scattering (see also [24-28] for earlier discussions and applications in strong dynamics). The main idea of the elastic positivity bound is to use unitarity and analyticity to show that the 2-to-2 elastic forward scattering amplitude is non-negative. Recent literature uses the mathematical concept of the Arc to give a positivity bound in the form of a semi-positive-definite Hankel matrix filled with the WCs of the relevant effective operators at different mass dimensions [29,30]. Partial wave analysis and unitarity are also used to restrict the WCs of dim-6 operators [31], and various motivations for going beyond dim-6 have been discussed in Refs. [32-37].
Since WCs contain UV information, it is possible to enumerate possible NP particles based on the effective operators; this is called the inverse problem [38,39]. The top-down approach is a well-studied and systematized procedure via matching and running [40-49]. The bottom-up inverse problem [50], however, has been rarely discussed in the literature. The main difficulty is that each effective operator can be mapped to infinitely many UV theories; this is referred to as "degeneracy". Some articles propose to search for the possible UV states based on group representation decomposition [23,51-58]. Positivity can also be used to find possible UV states in a bottom-up way by combining theoretical bounds in the SMEFT with its UV states. The theoretical framework of positivity is that, from a geometric perspective, the $s^2$ contribution of a SMEFT amplitude lies in a salient cone formed by the external rays linked to the corresponding UV completions with different quantum numbers [52,59-61]. Thus, the whole procedure relies only on principles of quantum field theory, i.e., unitarity and the locality of the UV, so the positivity framework is quite universal.
In this work, a local UV quantum field theory (QFT) is assumed in order to link the $s^2$-order contribution of the scattering amplitude to convex geometry, and thus the positivity bound is linked to the cone shaped by the UV particles, as discussed in Refs. [52,59,60,62-64]. Starting with the analyticity behavior of the forward scattering amplitude $M_{ij\to kl}(s)$ and the generalized optical theorem, the dispersion relation (1.2) can be derived. Here $i, j, k, l$ denote the color and polarization of the four external legs, while $X$ stands for the heavy states. By applying convex hull theory, one shows that the salient cone containing the $s^2$ contribution of the amplitude is built from sums over all possible UV amplitude products $m^K_{X,ij} m^K_{X,kl}$ ($i, j$ denote external particles, $X$ the heavy state, and $K = R, I$ the real and imaginary parts of the amplitude), where every UV state is a possible external ray of the cone; this provides a geometric perspective on the UV physics of the SMEFT operators.
From the geometric perspective, it is essential to find a complete list of the UV states in a systematic way. In previous works [59,60,65,66], gauge group projectors formed from Clebsch-Gordan (CG) coefficients were utilized, and the UV states were enumerated to form the cone and obtain bounds for scattering processes in the SMEFT. This is called the projection method. However, without a systematic program for the UV completion search, this method cannot guarantee finding all possible UV states. In recent work [7,9,12,53,57,67,68], the Pauli-Lubanski operator $W^2$ and the Casimir operator are introduced to decompose the contact scattering amplitude into eigenstates with specific quantum numbers. By identifying these eigenstates as UV particles with corresponding quantum numbers, our work provides a systematic method to exhaust all possible UV states for the effective operators in the SMEFT, called the J-Basis method [12,53,57].
In this work, both convex geometry and the J-Basis method are applied to the dispersion relation to derive positivity bounds in the SMEFT. After utilizing the J-Basis method to find the complete UV completion, the previous positivity bounds based on the salient cones formed by external rays are updated with the complete set of UV states. Comparing our results with the previous projection method [69], we point out that the previous method of searching for UV states ignores some Lorentz structures in the group decomposition and is therefore defective. From the comparison of the results, a more complete UV completion for a specific 2-to-2 scattering process at the Lagrangian level can be obtained, so our bounds are more precise than before.
The paper is organized as follows. In Sec. 2, we derive the dispersion relation for the 2-to-2 forward scattering amplitude and show how to use it to give a geometric perspective on amplitudes. In Sec. 3, we introduce the relevant Pauli-Lubanski operator for the momentum and the Casimir operator for the gauge structure, and show how to build a set of amplitudes representing possible UV states with definite angular momentum $J$ and gauge quantum number $R$, i.e., the J-Basis method. In Sec. 4, for some typical scattering processes discussed in previous works, we derive our bounds by using the J-Basis method and the UV selection to search for a more complete UV completion at tree level, and compare them with previous bounds to show the rigour of the J-Basis.
Dispersion Relation
Any 2-to-2 forward scattering amplitude $M_{ij\to kl}(s,t)$ for the full UV theory can be written as in Eq. 2.1. By taking derivatives of the amplitude, applying the analyticity of amplitudes, and considering the contour integral shown in Fig. 1, the dispersion relation can be obtained (e.g., Ref. [70], replacing $0$ and $4m^2$ in the contour $\Gamma$ with $m_-^2$ and $m_+^2$), defining $m_\pm \equiv m_1 \pm m_2$. Here $\mathrm{Disc}\,M(s,0) = M(s+i\epsilon,0) - M(s-i\epsilon,0)$. After setting $t=0$ and applying the variable replacement $u = m_+^2 - s$, we obtain Eq. 2.3. The above discussion is quite general: the second derivative of the low energy scattering amplitude is related to the imaginary part of the high energy scattering amplitude in the forward limit. This statement applies to both elastic and inelastic scattering.
For the elastic scattering $ij \to ij$, by further applying the optical theorem, the positivity dispersion relation Eq. 2.4 can be obtained, where $\sigma_t$ is the total cross section of the process $ij \to X$. Further, taking $m_\pm < \epsilon\Lambda < \Lambda$ to subtract the SM contribution, the general expression for the elastic positivity bound takes the form $c_2 > 0$ [71,72]. For the inelastic scattering $ij \to kl$, to utilize the more general optical theorem, a little more work is needed. By adding the conjugate term to $M_{ij\to kl}(s,t)$, $\mathcal{M}_{ij\to kl}(s = m_+^2/2)$ is defined from the real part of the derivative of the forward amplitude $M_{ij\to kl}(s,t)$ for the $ij \to kl$ process. By applying $M^*_{kl\to ij}(s+i\varepsilon) = M_{ij\to kl}(s-i\varepsilon)$ to connect the time-reversed process $ij \to kl$ and its conjugate, Eq. 2.3 becomes Eq. 2.5. From this equation, we note that in the forward limit a twice-subtracted dispersion relation can be derived for $M_{ij\to kl}(s,t)$, assuming that a UV completion exists and is consistent with the fundamental unitarity principles of QFT.
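For orientation, a minimal sketch of the elastic relation just described (conventions vary across references; this is illustrative, not Eq. 2.4 verbatim):

```latex
% Sketch of the twice-subtracted forward dispersion relation for elastic
% ij -> ij scattering; the optical theorem makes the integrand non-negative.
\begin{equation*}
\left.\frac{d^{2}M_{ij\to ij}(s,0)}{ds^{2}}\right|_{s=m_{+}^{2}/2}
  = \frac{2}{\pi}\int_{(\epsilon\Lambda)^{2}}^{\infty}
    \frac{\operatorname{Im}M_{ij\to ij}(s',0)}{\left(s'-m_{+}^{2}/2\right)^{3}}\,ds'
    \;+\;\big(s'\to u'\big)\;\geq\;0,
\qquad
\operatorname{Im}M_{ij\to ij}(s,0)\;\propto\;\sigma_{t}(s)\;\geq\;0 .
\end{equation*}
```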
In the above Eq. 2.5, the contributions of the kinematic poles are subtracted out [73-76]. Furthermore, assuming $\Lambda$ is the scale of the UV theory, we can compute the amplitude in the IR to the desired accuracy within the EFT in the energy range $-(\epsilon\Lambda)^2 \le s \le (\epsilon\Lambda)^2$ ($\epsilon \le 1$). We then set the lower limit of the integral in Eq. 2.5 to the value $\epsilon\Lambda$, larger than $m_+$, so that the low energy part of the dispersion integral corresponding to the EFT is subtracted out and the denominator of the integrand stays positive. Besides, the SM contribution to Eq. 2.6 is suppressed by inverse powers of $\epsilon\Lambda$, as shown in Ref. [77].
The above dispersion relation can be much simplified to (2.6) This equation can be traced back to the improved positivity bounds discussed in Ref. [52,78,79], and can also be regarded as the Arc defined in Ref. [29,30], with a radius (ϵΛ) 2 .Now by applying the more general form of the optical theorem, the dispersion relation can be written as The power of analyticity is that the EFT and UV amplitude can be connected [80].Considering the s 2 contribution corresponding to the dim-8 effective operators O (8) i , we obtain that EFT: which means we establish the link between the dispersion relation of the full theory and the EFT theory to obtain the convex geometry of the EFT.
Several comments are in order. First, choosing $ij = kl$ recovers the elastic bounds. Second, the sum in the integrand on the r.h.s. runs over all intermediate states, denoted by $X$, which might contain infinitely many states. Thus it provides a geometric perspective: the UV physical amplitudes $\sum_X M_{ij\to X\to kl}$ live in a cone $\mathcal{C}$ spanned by many rays, each ray representing the contribution of a UV particle $X$ with certain quantum numbers.
With the shorthand notation $M_{ij\to X} \to m_{ij}$, all the UV amplitudes constitute the cone. To find the boundary of the cone, it is necessary to find all possible intermediate states with certain quantum numbers. So the problem becomes how to find all possible UV states for a given scattering process.
Cone Construction
From the above, we notice that the $s^2$ contribution of the 2-to-2 amplitude must stay in the cone formed by the UV states. The problem now becomes how to find all possible UV states: one way is the projection method, using the Irrep (irreducible representation) projectors formed from CG coefficients; another is the J-Basis method, discussed in Sec. 3. Here we focus on introducing the projection method and showing its incompleteness in searching for UV completions.
If we do not know all possible UV states, we can naturally use the CG coefficients to build projectors that expand the EFT operators [59,64,66,71,81]. For the dim-$n$ Irrep $X$ coming from the direct product of the two basic representations, the projectors can be written as follows. Here $m^n_{X,ij}$ is the CG coefficient, where $X$ labels the Irrep with given quantum numbers, $n$ is the dimension of the Irrep $X$, the indices $i, j$ label the components of the two basic representations, and $j \leftrightarrow l$ indicates that crossing symmetry [82,83] is imposed on the projectors.
Taking the 4H scattering as an example of the concrete steps to find all projectors: $H$ is a complex field with $SU(2)_w$ symmetry, which can be written as $H = (H_2 + iH_1, H_4 - iH_3)$. By considering the direct product of the two basic representations, where $X$ is the heavy state (the Lorentz and gauge group indices are omitted for simplicity of notation), we can obtain the projectors listed in Table 1 for expanding the 4H scattering amplitudes.
However, in Ref. [8], there are only six projectors. Once hypercharge is taken into account, the same-dimension Irreps of $HHX$ and $H^\dagger H^\dagger X$ should be merged, so the number of UV states representing $HHX$ and $H^\dagger H^\dagger X$ is only 2, and the number of projectors reduces to 6.
$M^{X,n}_{kl\to X}$ is the matrix formed by CG coefficients for the Irrep $X$ and its components $n, k, l$, while $i(j|k|l)$ means that crossing symmetry in QFT is imposed on the projectors. However, by using the J-Basis method and the UV selection, nine UV states can be found in Table 2. This shows that finding UV states by decomposing the gauge group direct product alone misses the spin-2 UV states in this case.
Table 2. Tree-level UV completion in the 4H scattering process. The (A) and (S) after $SU(2)_w/Y$ mean anti-symmetry and symmetry of the amplitude $ij \to X$ under $ij$ exchange. In this paper, $\vec{c}(M)$ denotes the UV-EFT matching results in the basis defined in Ref. [8], while $\vec{c}(p)$ denotes the UV-EFT matching results in the Partial Wave (P-)Basis defined in Ref. [7].
Except for the spin-2 states, the remaining UV states can be checked against Ref. [69]. Similarly, for the 4W scattering we obtain the projectors of Eq. 2.13. With $N = 3$, these projectors represent SU(2) adjoint representation decompositions, while $N = 2$ stands for decompositions in SO(2), i.e., spin space. After imposing crossing symmetry on these projectors, as we did in Sec. 2.1, we reach the conclusion that for the tree-level UV completion of 4W scattering there are 9 possible UV states. However, at tree level, we point out that the old framework for searching UV completions can lead to mistakes. By applying the UV selection analysis to the vector boson scattering (VBS) case, we find that some UV states in the tree-level completion corresponding to projectors cannot exist, because their Lagrangian is zero or they are eliminated by the equation of motion (EOM); i.e., the UV state corresponding to such a projector does not exist.
Besides, the construction of the projectors for 4-fermion scattering amplitudes is a little more complicated [65]. The crossing symmetry changes from $j \leftrightarrow l$ to $i\bar{k} \leftrightarrow k\bar{i}$ in this case, so the projectors of the 4-fermion scattering take a correspondingly modified form, and the cone for the 4-fermion scattering can be defined analogously.
Cone Calculation and Obtaining Bounds
Now we know how to construct the projectors that represent UV states. The projectors can then be used to expand the corresponding EFT amplitudes, and positivity bounds can be calculated. First, we need to determine the dimension of the projectors, then choose a set of basis elements $B^Y_{ijkl}$ to expand the projectors and the EFT amplitudes, acquiring a group of vectors $\{c_{XY}\}$ for the different UV states $X$ in the basis space by applying Eq. 2.16. If a basis $B^Y_{ijkl}$ other than the operators $O_{n,ijkl}$ is chosen, the two are linked by the basis transformation relationship Eq. 2.17.
Then, the amplitude $M_{ijkl}$ is expanded by applying Eq. 2.18 to obtain the corresponding vector $C_n c_{nY}$.
Here $C_n c_{nY}$ is the $\vec{c}$, while $N_{mY}$ is the $\vec{n}$ that we need to search for. Finally, we obtain the cone spanned by the set of vectors $\{c_{XY}\}$ representing UV states in the $B^Y_{ijkl}$ space, while the EFT amplitude, represented by the vector $C_n c_{nY}$, lies in the interior of the cone. By the mathematical character of the cone, for any vector $\vec{c}$ in the cone, the dot product between $\vec{c}$ and every normal vector $\vec{n}$ corresponding to the faces of the cone is larger than 0.
Since the vectors representing UV states form the cone, we can naturally search for the facets (dimension $n-1$) of the cone to describe it. The unique feature of a facet is its normal vector. In fact, if we choose the inward direction as the positive direction for normal vectors, the dot product of every normal vector with any vector in the cone is always positive. This is essentially the positivity bound we search for. For a simple linear cone, once we acquire $\vec{c}$, it is easy to obtain all normal vectors of the cone by using a specialized mathematical program such as polymake [84].
In conclusion, every facet of the cone can be characterized by its normal vector. For a specific 2-to-2 forward scattering process with determined particle types, we use group decomposition to find all projectors forming the cone that contains the EFT amplitudes, and then find all the relevant subsets $A_i(\vec{c})$. The collection of $A_i(\vec{c})$ must contain all faces of the cone; equivalently, we can calculate the normal vector $n_i(\vec{c})$ for every $A_i(\vec{c})$ and select the $n_i(\vec{c})$ satisfying Eq. 2.19 to obtain the positivity bounds.
In Sec. 4.1, we give the detailed calculation of the bounds on the operators involved in the 4H scattering following the steps introduced above. In more complicated cases, such as the 2-to-2 scattering involving W and B in Sec. 4, intermediate states coupling to different external particles may have a degeneracy relationship measured by a parameter $x$, e.g., $WWX + xBBX$, where $X$ is the UV state (the Lorentz and gauge indices are omitted for notational convenience). This means the cone has curved surfaces parameterized by $x$; similarly, the normal vectors corresponding to these surfaces are also parameterized by $x$. Finally, by solving the positivity conditions for these multivariate quadratic polynomials, the positivity bounds, with roots, can be obtained.
Poincare Casimir and Partial Wave Basis
For the Lorentz structures, we briefly introduce the Poincare Casimir operator, which has been elaborated in Refs. [12,53,57,67,68]. When the Poincare Casimir operator $W^2$ acts on an eigenstate of spin $J$ and momentum $P$, we obtain $W^2\,|P, J\rangle = -J(J+1)\,P^2\,|P, J\rangle$, where $W^\mu$ is the Pauli-Lubanski operator.
Our framework is established in spinor notation. The specific form of $W^2$ is introduced in Ref. [53]. Here $P = P_\mu \sigma^\mu_{\alpha\dot\alpha}$, $P^T = P_\mu \bar\sigma^{\mu\,\dot\alpha\alpha}$, and $M$, $\bar M$ are the chiral components of the Lorentz generator. Now we consider how $W^2$ acts on the scattering amplitude. When $W^2_I$ acts on a process $I \to I'$, we obtain an expansion in which $C^J_N$ is the CG coefficient corresponding to the intermediate state of $N$ particles with total angular momentum $J$, and $s_I = (\sum_{i\in I} p_i)^2$ is the Mandelstam variable in the scattering channel.
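For reference, the textbook definitions assumed in this discussion (signs depend on the metric convention):

```latex
% Standard Pauli-Lubanski vector and the Casimir eigenvalue used above:
\begin{align*}
W^{\mu} &= \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma} P_{\nu} M_{\rho\sigma}\,,
&
W^{2}\,|P, J\rangle &= -J(J+1)\,P^{2}\,|P, J\rangle\,.
\end{align*}
```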
Gauge Eigen-Basis and SU(N) Casimir
In the previous subsection, we introduced how to construct the partial wave basis using Poincare Casimir operators. Moreover, the decomposition of the gauge structure needs to be considered as well.
In fact, the projection framework [66] enumerates possible UV states via CG coefficients: projectors $P_{I\to I'}$ are written to expand amplitudes $W_{I\to I'}$, which amounts to searching for all invariant subspaces of the direct product of gauge groups. Despite the similar principle, we introduce a more systematic tool: the $SU(N)$ Casimirs from [7,9,12,53]. First we introduce the $SU(2)$ and $SU(3)$ Casimirs. In positivity, we consider multi-particle states for the external particles; accordingly, we write $T$ for the direct product representations, with $T^{\otimes\{r_i\}}$ and $E_{r_i}$ being the generator and identity matrix for the different Irreps. The action of $T$ on a state $\Theta_{I_1 I_2 \ldots I_N}$ can then be written down explicitly. Let us take $\pi\pi$ scattering as an example, noting that $\pi$, with generator $T^A_{IJ} = i\epsilon_{AIJ}$, is not in the basic representation of the $SU(2)$ group. Considering the decomposition of $T^m_{\{12\}}$, we first find all independent color tensors; by applying properties of the Levi-Civita symbol (Eq. 3.10) and diagonalizing, three eigenstates in the m-Basis are obtained (Eq. 3.11). A numerical cross-check is sketched below.
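The decomposition is easy to verify numerically (a hedged sketch, not the paper's code): build the adjoint generators, form the two-particle Casimir, and diagonalize; the eigenvalues $j(j+1) = 0, 2, 6$ appear with multiplicities 1, 3, 5, reproducing $3 \otimes 3 = 1 \oplus 3 \oplus 5$.

```python
import numpy as np

# Adjoint SU(2) generators (T^A)_{IJ} = i*eps_{AIJ}, as quoted in the text.
eps = np.zeros((3, 3, 3))
for a, i, j in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, i, j], eps[a, j, i] = 1.0, -1.0
T = [1j * eps[a] for a in range(3)]
I3 = np.eye(3)

# Two-particle generator T_tot^A = T^A x 1 + 1 x T^A and quadratic Casimir C2.
C2 = sum(
    (np.kron(Ta, I3) + np.kron(I3, Ta)) @ (np.kron(Ta, I3) + np.kron(I3, Ta))
    for Ta in T
)

vals = np.round(np.linalg.eigvalsh(C2), 6)
print(sorted(set(vals)))                                  # [0.0, 2.0, 6.0]
print([list(vals).count(v) for v in sorted(set(vals))])   # [1, 3, 5]
```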
Lorentz Eigen-Basis Construction
Now we show that $W^2$ is appropriate for constructing the Lorentz Eigen-Basis with angular momentum decompositions.
Amplitude Operator Correspondence
First, according to the spinor notation, the relationship between the spinor blocks and the operator blocks is obtained [7,9,85-87]. Taking the amplitude $\langle 12\rangle[23][24]s_{14}$ as an example (Eq. 3.12): from this we find that the spinor notation may not correspond to a single operator monomial, and different operator form choices are related by the EOMs. Nevertheless, we see the possibility of constructing local operators from polynomials of amplitudes in the spinor notation.
According to Ref. [53], multiplying by the Mandelstam variables of the scattering channel does not alter the angular momentum of the scattering states, so we can obtain a general form of the operators corresponding to different angular momenta.
Poincare Casimir and Lorentz Eigen-Basis
Now that we have introduced the correspondence between amplitudes and operators, for operators of a specific category, finding the complete spinor amplitude basis to construct eigenstates of $W^2$ is what we discuss in this section.
In Refs. [12,53,68,88,89], a complete basis of local amplitudes and the corresponding operators are defined as the Young Tableau (Y-)Basis. The name comes from the construction based on a Young tableau of the $SU(N)$ group [7,9,90,91], where $N$ is the number of particles involved in the amplitude. For the type of operators we are interested in, we define the relevant parameters of the Young tableau, where $k$ is the number of derivatives in the operator type and $h_i$ is the helicity of particle $i$.
The above parameters give such a Young Tableau in Fig. 2. Next, we just need to fill labels 1 to N into Young Tableau to acquire the basis represented by a specific Young Diagram, while the number of each label (particle i) is given by #i = ñ − 2h i for the particular class of scattering state to satisfy it's a Semi-Standard Young Tableau (SSYT): in each row the labels are non-decreasing from left to right; in each column, the labels are increasing from top to bottom.
For example, for the dim-8 4H operators, the Young diagram has 2 rows and 4 columns, with each label $i$ appearing twice. The number of its SSYTs is three (see the brute-force check below). After considering the gauge tensor, we can get a Y-Basis (Eq. 3.17). Finally, we note that all other bases can be reduced to the Y-Basis through the Schouten identity, momentum conservation, and the on-shell conditions. In fact, this is a simplified approach to searching for the amplitude basis in spinor notation.
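This count is easy to verify by brute force. The sketch below (our illustration, assuming the 2×4 shape and label content stated above) enumerates the fillings and confirms there are exactly three SSYTs.

```python
from itertools import permutations

# Fill a 2x4 Young diagram with labels {1,1,2,2,3,3,4,4}:
# rows non-decreasing left to right, columns strictly increasing downward.
labels = (1, 1, 2, 2, 3, 3, 4, 4)

tableaux = set()
for p in set(permutations(labels)):
    top, bottom = p[:4], p[4:]
    rows_ok = all(top[i] <= top[i + 1] and bottom[i] <= bottom[i + 1]
                  for i in range(3))
    cols_ok = all(top[i] < bottom[i] for i in range(4))
    if rows_ok and cols_ok:
        tableaux.add((top, bottom))

print(len(tableaux))        # 3
for tab in sorted(tableaux):
    print(tab)              # the three semi-standard fillings
```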
Gauge J-Basis from Gauge Casimir
The correspondence of the gauge structures between operators and amplitudes is simple.The invariant tensors of group factors in the amplitudes exactly correspond to the invariant tensors that are used to contract the fields in operators to form gauge singlets.
The gauge factors were not considered in the last section, so that the Y-Basis may become polynomials when it's acted by Casimir operators.However, a complete and independent monomial basis called the gauge m-Basis, can always be calculated from these polynomials by linear transformations.An efficient algorithm to find the gauge m-Basis has been proposed in [12].
We can achieve this in three steps: first, determine the Young tableaux of the particles we consider; then use the Littlewood-Richardson (L-R) rule to find all direct products expressed in terms of the group structure constants; finally, use the gauge Casimir operator to find all its invariant subspaces, just as we did in Sec. 3.2.
The updated External Ray Positivity Bounds
In this section, we consider the tree-level UV completion for the external ray positivity bound via the J-basis method. The J-basis yields not only the UV states but also any decomposition of the amplitudes with specific angular momentum $J$ and quantum numbers. Hence, the J-basis method can be applied to analyze the positivity bound for loop-level scattering amplitudes, which can be decomposed into several angular momentum combinations.
The whole procedure for the J-basis method is described in the following.First we process the amplitude decomposition for the amplitude basis of the specific process matching the s 2 contribution in the SMEFT to obtain the possible UV states.Then we use the UV selection based on repeat field, the EOM and other redundancy to obtain the UV states in the tree level formally.We present a flow chart Fig. 3 to the whole procedure and compare this method with the projection method.Then we discuss several typical 2 → 2 scattering processes following the procedure in flowchart and show differences in the results.
SM Higgs Scattering
The 2 Higgs to 2 Higgs scattering is a typical example discussed in Ref. [92].It involves several dim-8 operators as In the external ray method, first in Ref. [59], the gauge group SU (2) w CG-coefficients of the SU (2) w gauge group are used to form projectors in Table 1.Projectors in Table 1 match the UV states B 1 , S, B, W, Ξ 0 , Ξ 1 in Table 2.After utilizing the J-Basis method, we find extra new spin-2 UV states G, H 0 , H 1 .
Here we present the details of applying the J-Basis method to the 4H scattering. First, we list the 6 P-Basis operators for the type $D^4H^4$ involved in the 4H scattering. Acting with the Poincare Casimir operator $W^2$ on these P-Basis operators, we obtain the eigenstates and eigenvalues of the J-Basis in Table 3. In detail, we carry out these steps using the program ABC4EFT of Ref. [12]. Then we transform the P-Basis to the basis of Ref. [8].
By applying the UV selection, all the possible UV states that match the nine eigenstates are written out, so we obtain Table 2 in Sec. 2.2, corresponding to Table 3.
After obtaining all the UV states, we can apply Eqs. 2.17, 2.18, and 2.19 to obtain the positivity bounds. In more detail, we choose the EFT operators $O_{n,ijkl}$ as the basis $B_{ijkl}$ to expand the UV amplitude, so we just need to search for all the normal vectors of the cone constructed directly from the matching results in the fifth column of Table 2.
The number of rank-2 subsets of {⃗ c(p)} is C 2 9 = 36.Thus we could obtain 36 normal vectors corresponding to every rank-2 subset which represents the corresponding possible facet of the cone.To select the correct facets of the cone, we need to select the normal vectors which satisfy the positivity argument Eq. 2.19.However only the following 4 normal vectors ⃗ n(p) in these 36 normal vectors which are listed Eq.4.4 satisfying that for every 3) The normal vectors that satisfies the positivity argument Eq. 4.4 are ), (5, 9, 1), (1, 3, 2) .(4.4) The EFT amplitude (C 1 , C 2 , C 3 ) should exist in the cone, so we obtain new positivity bounds, Thus external rays are changed to H 1 , H 0 , B 1 , B 0 .In the perspective of the cone's bottom, we obtain Fig .4. Based on Fig. 4, the Monte Carlo Sampling shows that the allowed area of the WC space is larger than the one obtained by the projection method, and the cone is a quadrangular pyramid actually.
By applying the J-Basis method to the SM Higgs sector, we find that the projection onto the UV states representing potential external rays in Ref. [59] provides overly tight bounds.
1. Let us write out a UV Lagrangian term of the form $WWV$, where $W$ is the W boson and $V$ represents a heavy vector; in the term $WWV$, the Lorentz and gauge group indices are omitted for simplicity of notation. Its first leading contribution would match to $D^2W^4$, which corresponds to dim-10, so $WWV$ couplings can be excluded.
2. Meanwhile, Table 3 allows for spin-2 UV couplings of the form $W_LW_LX$. However, calculating the matching of the UV state $W^{I\,\mu\nu}_{L} W^{I}_{L\,\nu\rho} G^{\rho\mu}$ to the P-Basis, the matching result for this UV state lies on the ray $(1, 0, 0, 0, 0, 0)$ of the WC space. This violates the J-Basis analysis result $(-4, -3, 0, 0, 0, 0)$ for the UV state $(2, 1, 1, 0)$ in the channel $(W_L, W_L, W_L, W_L)$. Besides, Ref. [59] provides another character of the dispersion relation in Eq. 2.8: the amplitude cone is a salient cone. This means there should not exist any other UV state in the direction opposite to the UV state with quantum number $(0, 1, 5, 0)$ in the 4W scattering case. Table 4, however, shows that in the direction opposite to $(0, 1, 5, 0)$ there exists the UV state with quantum number $(2, 1, 1, 0)$. The three results, from the J-Basis method, from the UV matching, and from the geometric perspective, thus seem incongruous. However, there is no conflict among them, because the UV states of tensor particles with the form $WWG$ can be eliminated by the EOM. For the tensor coupling $W^I_{L\mu\nu} W^{I\;\nu}_{L\,\rho} G^{\mu\rho}$, the interaction Lagrangian can be rewritten as in Eq. 4.9. By applying the properties of the $\sigma$ matrices, Eq. 4.9 can be expanded as Eq. 4.10. There are many kinds of terms in the expansion of Eq. 4.10, but all of them can be transformed back to the form $W^I_{L\mu\nu} W^{I\;\nu}_{L\,\rho} G^{\mu\rho}$ by using $\mathrm{Tr}(\sigma^\lambda \bar\sigma^\rho) = 2g^{\lambda\rho}$ (Eq. 4.12). Finally, the transformation relationship Eq. 4.13 can be obtained.
3. By applying the EOM of the massive spin-2 particle, we can show that Eq. 4.13 vanishes. The free Lagrangian of the massive spin-2 theory [93, 94] consists of a kinetic term S_LG and a mass term S_m. Applying the Euler-Lagrange equation, we obtain the EOMs and find that h_μν is traceless. The above discussion shows that not every amplitude decomposition corresponds to a realizable UV state: the results of the amplitude decomposition must pass the UV selection imposed by the EOMs, repeated fields, and other identities. Finally, we can write out all possible UV states for the 4W scattering in Table 5. According to Eqs. 2.17, 2.18, and 2.19, the normal vectors of the 4W scattering amplitude cone can then be calculated to obtain bounds. The cone has three categories of normal vectors, whose form in the WC space is given in Eq. 4.17. The EFT amplitude (C_1, C_2, C_3, C_4, C_5, C_6) must lie inside the cone, so the product of the EFT WCs with each normal vector above must be positive; this yields the positivity bounds.
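For reference, the standard Fierz-Pauli form of the free massive spin-2 action is quoted below. This is the common textbook convention, given here as an assumption, since the exact sign and normalization choices of Refs. [93, 94] are not reproduced in the text; varying it and taking the divergence and trace of the resulting EOM yields the constraints that h_{μν} is transverse and traceless, consistent with the statement above.

```latex
S_{\rm FP} = \int d^4x\, \Big[
   -\tfrac{1}{2}\,\partial_\lambda h_{\mu\nu}\,\partial^\lambda h^{\mu\nu}
   + \partial_\mu h_{\nu\lambda}\,\partial^\nu h^{\mu\lambda}
   - \partial_\mu h^{\mu\nu}\,\partial_\nu h
   + \tfrac{1}{2}\,\partial_\lambda h\,\partial^\lambda h
   - \tfrac{1}{2}\,m^2 \left( h_{\mu\nu} h^{\mu\nu} - h^2 \right)
\Big], \qquad h \equiv \eta^{\mu\nu} h_{\mu\nu}.
```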
The positivity conditions for the vectors of Eq. 4.17 in the WC space can then be obtained by solving a system of bivariate quadratic inequalities. To solve the third inequality there is a useful trick: regard x_1 as a known number, so that the inequality becomes a single quadratic inequality; the discriminant condition b^2 ≥ 4ac then yields a quadratic inequality in x_2. Finally, we obtain the bounds. The allowed volume of the WC space is 0.435% by Monte Carlo sampling. Although the cone is described by as many as 6 WCs, we can still display its structure in 3D space, as in Fig. 5, by choosing a specific slicing of the 6-dimensional WC space. In the slicing scheme of Fig. 5, the UV state (2, 1, 3, 0) is projected to the origin, while the UV states (2, 1, 5, 0) and (2, 1, 1, 0) are projected onto the y axis. Moreover, the circle corresponds to the UV state (0, 1, 5, 0), and (0, 1, 1, 0) degenerates to the linear ray y = 4x. All of them lie in the interior or on the surface of the slice.
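Quoted volume fractions such as the 0.435% above can be reproduced by simple rejection sampling once the facet normals are known. A minimal sketch follows; both the reference region (a unit ball, since a cone has no intrinsic volume) and the normal vectors used are assumptions for illustration, as the paper's sampling region and the entries of Eq. 4.17 are not reproduced here.

```python
import numpy as np

def allowed_fraction(normals, dim, n_samples=1_000_000, seed=0):
    """Fraction of the unit ball satisfying every bound n . c >= 0."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n_samples, dim))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)       # directions
    pts *= rng.uniform(size=(n_samples, 1)) ** (1.0 / dim)  # uniform radius
    N = np.asarray(normals, dtype=float)
    inside = np.all(pts @ N.T >= 0.0, axis=1)
    return inside.mean()

# Hypothetical normals, placeholders for the cone facets of Eq. 4.17.
normals = [(1, 0, 0, 0, 0, 0), (0, 1, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0)]
print(f"allowed volume fraction: {allowed_fraction(normals, dim=6):.3%}")
```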
In the previous result of Ref. [59], projectors formed from SO(2) and SU(2)_w CG coefficients were used to represent UV states. That work considered the CP-conserving case and arrived at 9 possible external rays (UV states), denoted E_{m,n}, where m and n label the different irreps of SO(2) and SU(2)_w. However, we conclude that there are only 5 possible UV states in the tree-level completion. For example, E_{1,2} corresponds to (0, 1, 3, 0) in Table 4, whose contribution vanishes after the decomposition of the Lorentz and gauge groups. In conclusion, we find that not all irrep projectors can be realized by a UV completion.
Comments on the 4 Gluon Scattering
According to the detailed discussion of the UV completion of the 4W scattering in Sec. 4.2.1, the number of UV states in the tree-level completion that restrict the vector-boson cones is smaller than in the previous results obtained by projection in Ref. [69]. The 4 gluon scattering case is similar. More specifically, the color-group direct-product decompositions (projectors) are listed below; the 10 and 10-bar representations in 8 ⊗ 8 = 1 + 8 + 8 + 10 + 10-bar + 27 from Ref. [66] are eliminated because they do not respect the inversion symmetry (ij → ji, kl → lk). The projectors corresponding to the group decompositions of the direct product of two SU(3)_c adjoint representations are listed in Eq. 4.20, and the SO(2) group decompositions are the same as in Eq. 2.13. Ref. [69] concludes that there are 15 possible UV states for the 4 gluon scattering case (Eq. 4.20). However, according to the discussion in Sec. 4.2.2, five spin-1 UV states cannot exist, since their leading contributions correspond to dim-10 EFT operators. Besides, the UV state G^{μν}_i G_{jνρ} f^{ijk} S, corresponding to the quantum numbers (Spin = 0, SU(3)_c = 1), manifestly vanishes in the Lagrangian. This shows that, based on the J-Basis method, searching for UV states with the UV selection yields more reliable bounds.
4 Lepton Scattering
In this case, the involved P-Basis operators can be divided into four categories based on the symmetry of their corresponding Young tableaux in Eq. 4.21.
Here p_i represents the generation of particle i; the corresponding Young tableau thus fixes the tensor structure of the operators' generation indices, and the WC space can be defined accordingly. By applying the J-Basis method to the amplitude decomposition, we obtain Table 6 as the list of possible UV completions. Table 6 (columns: State, Spin, SU(2)_w/U(1)_Y, Interaction, c(p)) gives the UV completion for the 4 lepton scattering; here g_{p_i} g_{p_j} denotes the coupling constants of fermions of different generations p_i, p_j.
One Generation
In this case, the involved operators become degenerate, so we obtain the positivity bounds of Eq. 4.24, which define the cone marked in green in Fig. 6, whose external rays represent the UV states H and Ξ_1. In Ref. [69], however, only the UV states B_1, B, Ξ_1, and W were obtained.
Hence, the bounds of Ref. [69] are C_1 ≤ 0 and C_1 + C_2 ≤ 0, marked in purple in Fig. 6, which are looser than the result in Eq. 4.24.
How to Deal with the Multi-Generation Case
We need to expand the generation indices of the operators in Eq. 4.22, because when we choose different generations (p_1 p_2 p_3 p_4) within the same type of operator, the coefficients g_{p_1 p_2} g*_{p_3 p_4} differ. We use the UV state B^μ_0 as an example of how to expand the generation indices, and for simplicity we consider only the two-generation lepton coupling. We use the combination (p_i p_j p_k p_l), where the index p_i represents the generation of particle i in the operator, to label operators with different generation combinations. Based on the permutation group, the combination of generation indices (p_1 p_2 p_3 p_4) can take the values (1212), (1221), (2112), (2121). For operators of type O^(p), we find (1212) = (1221) = (2121) = (2112); a sketch of this bookkeeping is given below. Writing the matching vectors with components labeled by the generation tensor (p_1, p_2, p_3, p_4) in the WC space and expanding the generation indices, we obtain the matching results in Table 7.
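This index bookkeeping can be checked mechanically. The sketch below enumerates all two-generation assignments (p_1 p_2 p_3 p_4) and groups them under an assumed operator symmetry, here invariance under p_1 ↔ p_2 together with p_3 ↔ p_4, which reproduces the identification (1212) = (1221) = (2112) = (2121) quoted above; the actual symmetry of each operator type follows from its Young tableau.

```python
from itertools import product

# Index permutations (0-based positions) implementing the assumed
# symmetry p1 <-> p2 and p3 <-> p4 of a type-O^(p) operator.
SYMMETRY = [(0, 1, 2, 3), (1, 0, 2, 3), (0, 1, 3, 2), (1, 0, 3, 2)]

def canonical(p):
    """Smallest representative of (p1, p2, p3, p4) under SYMMETRY."""
    return min(tuple(p[i] for i in perm) for perm in SYMMETRY)

classes = {}
for p in product((1, 2), repeat=4):
    classes.setdefault(canonical(p), []).append(p)

for rep, members in sorted(classes.items()):
    print("".join(map(str, rep)), "<->",
          ["".join(map(str, m)) for m in members])
# The class of 1212 comes out as {1212, 1221, 2112, 2121}, as in the text.
```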
We then obtain the positivity bounds of Eq. 4.26.
The Full Flavor Case
Considering two generations of fermions, the UV Lagrangian can be written in a form where X is the UV state; we omit the derivative D_μ, the σ matrices, and other indices for brevity. The WC space, a tensor indexed by generations and by type of Lorentz structure, can be defined as follows: here p_1, p_2, p_3, p_4 in (p_1 p_2 p_3 p_4)_e represent the particle generations of the operator, while e is the serial number in Eq. 4.21 that stands for the Lorentz structure and Young-tableau form of the operator.
The matching results are shown in Table 8. The corresponding cone, parametrized by the ratios of couplings between different generations, g_22/g_11 = x and g_12/g_11 = y, lives in a 14-dimensional space, so its analytical solutions are very hard to obtain. However, analytical constraints can be obtained in special cases such as Minimal Flavor Violation (MFV).
The MFV Case
MFV means that all flavor violation is generated by Yukawa coupling terms of the form Y_{ij} L_i L†_j, and that only EFT operators that are Yukawa singlets can exist [95-97]. This gives strong constraints on both the EFT and the UV theory. Considering first the UV lepton sector in Table 6, the Yukawa matrix is an identity matrix, which excludes the first four coupling terms of the form L_i L_j; we therefore only need to consider the L_i L†_i couplings in Table 9. Hence we must find all operators whose tensors of generation indices (p_1 p_2 p_3 p_4) are singlets. For the two-generation case, the singlet tensor combinations are (1111), (2222), and (1212). Defining the WC space accordingly and applying the previous UV selection, Table 8 reduces to Table 9, from which the corresponding normal vectors are easily obtained. Using Eqs. 2.17, 2.18, and 2.19, we obtain the positivity bounds for the 4 lepton scattering with two generations under the MFV assumption (Eq. 4.28). The allowed volume of the WC space is 0.974% by Monte Carlo sampling.
For the three-generation case, the amplitude cone is an 18-dimensional cone with curved surfaces, parameterized by x = g_22/g_11 and y = g_33/g_11. The matching results are listed in Table 10. Although complicated, numerical solutions can be obtained by applying the SM particle data of Ref. [98].

2-to-2 Scattering involving W and B in the CP-Conserving Case

For convenience, we consider only the CP-conserving case. The operators involved in this scattering process are listed as follows; the spin-2 term is again eliminated by the EOMs, while the spin-1 term contributes to dim-10 operators at leading order. Defining the WC space accordingly, we obtain the matching results in Table 7. Here the matching result for the quantum numbers (0, 1, 3, 0) is W^{Iμν} B_{μν} W^{Iλρ} B_{λρ} in the m-Basis; however, it is eliminated by the repeated-field redundancy in the transformation between the m-Basis and the P-Basis.
After calculation, we obtain the positivity bounds of Eq. 4.31. A 2D slice of the 3D cone is plotted in Fig. 7; the last bound in Eq. 4.31 is represented by the circle in Fig. 7, which corresponds to the UV state (0, 1, 1, 0).
2-to-2 Scattering involving W and Higgs
For simplicity, we limit the involved particles to W_L and H. Since W_L and H have been discussed separately, at the UV-selection step we only need to consider the J-Basis decompositions for the type D^2 HH† W_L W_L. In Table 12 we give the possible UV states corresponding to the amplitude decompositions. First, consider coupling terms of the form W_L W_L X + x HH†X, where the Lorentz and gauge-group indices are omitted for brevity; here x is the coupling constant describing the degeneracy between W_L W_L X and HH†X with the same quantum numbers, and X is the heavy state. We already know from Sec. 4.1 that the W W V term is impossible at tree level, so the degenerate coupling exists only in (0, 1, 1, 0). Then, following Ref. [7], the operators involving W_L H in the P-Basis are listed, and the matching results of the UV terms of the form W_L H X are given in Table 12. Since Table 12 gives complex solutions and O^(2)_{W_L H} makes no contribution to the matching results of W_L W_L X + x HH†X, we can exclude C^(2)_{W_L H} from the WC space and obtain real matching results. The WC space can then be defined accordingly, and the full matching results are given in Table 13.
We obtain the six normal vectors of the cone spanned by the matching results in Table 13, among them (0, 1, 0, 0, 0, 0). The first and second normal vectors provide the positivity bounds for the 4W_L scattering. When x goes to negative infinity, W_L and H decouple, and the last four normal vectors provide the bounds for the 4H scattering case, and similarly in the W and quark scattering case. The total positivity bounds are listed in Eq. 4.34. The bounds in the first and second lines can be obtained directly from scattering processes involving the same particle, while the other bounds represent the degeneracy between the W and H particles.
2-to-2 Scattering involving W and Quark
For convenience, we consider only W_L, W_R, and one generation of quarks. The involved operators in the P-Basis are listed, and the WC space is defined accordingly. The J-Basis analysis for the 4Q scattering is listed in Table 14. Given the possible UV resonances from the J-Basis, we select the UV completion in the following steps, first excluding certain coupling terms by assuming that the UV states are color singlets.

The matching result of each UV state corresponds to a possible external ray of the cone in the WC space of the EFT operators. This means that the more complete the set of UV states we find, the more accurately the shape of the cone can be determined, and hence the more exact the bounds on the WCs. Previously, using the projection method based on CG coefficients to represent the UV, or enumerating all possible UV states, either provided redundant UV states or omitted some of them, yielding constraints that were not strict. Among the previously obtained results, the bounds for the 4W scattering show a significant difference.

The J-Basis Method and the UV Selection

We introduced the J-Basis method in Sec. 2. The J-Basis takes the Lorentz structure into consideration to provide direct-product decompositions of the spin structure, and uses the Casimir operators to decompose the gauge structure. Then, according to the quantum numbers of the decompositions, all possible tree-level UV Lagrangians can be written down. After that, the UV selection checks whether each contribution to the tree-level matching is eliminated by the EOMs, repeated fields, or other redundancies, so as to give an accurate UV completion. We applied the J-Basis method and the UV selection to calculate the bounds of some typical processes, such as the 4H, 4W, and 4 lepton scattering, and presented the results in Sec. 4. Although the J-Basis gives a systematic scheme for finding all the UV states, in some cases it is hard to obtain analytical bounds; in particular, for 4 fermion scattering with multiple generations, fully analytical solutions cannot be obtained because too many parameters represent the couplings between different generations. However, by imposing restrictions such as the MFV case, numerical solutions can be obtained. In summary, the J-Basis idea and the UV selection provide a systematic framework for finding all the UV states and give more rigorous constraints in positivity-bound problems.
Discussion
The positivity bounds based on external rays are by themselves a powerful tool: they determine the exact boundary of the UV-completable EFTs, supersede the bounds from elastic scattering, and give a better physical interpretation of the relationship between the UV and the SMEFT. Many typical 2-to-2 scatterings involving SM particles calculated in previous work have been updated here by the J-Basis method and the UV selection. However, obtaining the full set of analytical bounds for all the SMEFT operators seems impossible, because the degeneracy of two states with the same quantum numbers turns the derivation of bounds into the solution of complex multivariate quadratic inequalities; numerical bounds for all the SMEFT operators, however, should be attainable.
A.2.2 Massive Spin-2 Couplings
As we have already discussed, there are only W_L^2 W_R^2 terms.
Figure 1. Diagram of the analytic structure of the forward amplitude in the complex s plane in the case m_1 = m_2 = m. The simple poles at s = m^2 and 3m^2 and the branch cuts starting at s = 4m^2 and 0 correspond to resonances and multi-particle thresholds in the s- and u-channels, respectively.
Figure 3. Flow chart of the J-Basis method to obtain the UV states corresponding to the possible external rays.
Table 4. J-Basis analysis results for the 4W scattering. The column O^(m)_j gives the m-Basis results, and the column O^(p)_j gives the P-Basis results. The combination of groups is defined as (Spin, SU(3)_c, SU(2)_w, Y).
Figure 6. The 2D cone of the 4 lepton scattering amplitude in the one-generation case.
Figure 7. The cone of the 2-to-2 scattering involving the W and B bosons.
Table 5. Matching results for the 4W scattering.
Table 9. Matching results for the full flavor case of the 4 lepton scattering under the MFV assumption. Here y = g_22/g_11.
Table 10. Matching results for the three-generation 4L scattering in the MFV case.
Table 12. The amplitude decompositions of the 2-to-2 scattering in the channel W H → W H in the P-Basis of Eq. 4.32.
"Physics"
] |
The Roles of Two Type VI Secretion Systems in Cronobacter sakazakii ATCC 12868
The type VI secretion system (T6SS), which has been found in 25% of gram-negative bacteria, is a crucial virulence factor in several pathogens. Although T6SS gene loci have been discovered in Cronobacter species, one of the major opportunistic foodborne pathogens, their function has not been elucidated. In this study, the roles of two phylogenetically distinct T6SS gene clusters in Cronobacter sakazakii ATCC 12868 were investigated. By analyzing 138 genome sequences of C. sakazakii strains, we found that one T6SS gene cluster (T6SS-1) was ubiquitous in all examined strains, whereas another (T6SS-2) was absent or degenerated in a large proportion of the strains (n = 97). In addition, we confirmed the antibacterial function of T6SS-1 through in-frame deletions of the vasK and hcp genes. Compared with the wild-type strain, the T6SS-2-deficient mutant exhibited much stronger colonization of organs when infecting neonatal rats. Thus, we propose that T6SS-2 plays a role in pathogenic processes. This is the first study to investigate the functions of T6SS in C. sakazakii, and the results will extend our understanding of the pathogenic and phylogenetic characteristics of C. sakazakii.
INTRODUCTION
Cronobacter spp. are emerging opportunistic foodborne gram-negative pathogens known to cause severe clinical infections in neonates, including necrotizing enterocolitis (NEC), sepsis, and meningitis (Biering et al., 1989; Gallagher and Ball, 1991; Caubilla-Barron et al., 2007). Infections by Cronobacter sakazakii have been reported only in infants, the elderly, and immunocompromised adults (Healy et al., 2010; Hunter and Bean, 2013). Neonates with poor immunity or low birth weight are the most susceptible population, often acquiring the infection by consuming contaminated powdered infant formula (Muytjens et al., 1983; Tall et al., 2014). Cronobacter spp. have caused several outbreaks of neonatal meningitis and necrotizing enterocolitis, resulting in a high mortality rate (approximately 33-80%) (Lai, 2001; Healy et al., 2010) and serious sequelae such as brain abscesses and impaired sight and hearing (Kleiman et al., 1981; Muytjens et al., 1983). The type VI secretion system (T6SS) has been found in over 25% of sequenced gram-negative bacterial strains (Bingle et al., 2008). Structurally, the organelle is analogous to a contractile phage tail and comprises 12 to more than 20 proteins, of which 13 conserved proteins form the T6SS core components. Among them, VasK is a membrane-associated protein with ATPase activity and is essential for a functional T6SS apparatus (Ma et al., 2009). Hcp is one of the components of the T6SS phage tail and can also be delivered as an effector (Bingle et al., 2008; Russell et al., 2014). The T6SS is a versatile protein secretion apparatus that can directly deliver toxins into eukaryotic cells as well as other bacteria; its functions are associated with virulence, resistance to host immunity, and interbacterial interactions. The T6SS of Pseudomonas aeruginosa can secrete three kinds of effectors (Tse1-3), which can destroy peptidoglycans, cell membranes, and cytoplasmic components in target cells (Russell et al., 2011; Russell et al., 2013). T6SS genes are required for the virulence of Vibrio cholerae toward Dictyostelium amoebae and macrophages (Pukatzki et al., 2006). Yersinia pseudotuberculosis resists host immunity through the transport of Zn2+ in a T6SS-dependent mechanism (Wang et al., 2015). In addition, T6SS exhibits anti-virulence characteristics in some species (Chow and Mazmanian, 2010; Bendor et al., 2015). In Bordetella bronchiseptica, a T6SS-deficient mutant exhibits a hypervirulent phenotype when infecting immunodeficient mice (Bendor et al., 2015).
Through whole-genome analysis, several putative T6SS loci have been discovered in Cronobacter spp. (Joseph et al., 2012); however, their functions are not yet understood. By genomic analysis of 138 C. sakazakii strains, we found two intact T6SS loci, named T6SS-1 and T6SS-2. T6SS-1 was ubiquitous among the examined strains, whereas the T6SS-2 gene cluster was absent or degenerated in approximately 70% (97/138) of the strains; moreover, approximately 80% (23/29) of clinical strains were T6SS-2-negative. We therefore sought to determine whether T6SS-1 plays an essential role during growth and infection, whereas T6SS-2 is redundant in the strains' routine niches. To answer this question, T6SS-deficient strains were constructed by deletion of the vasK and hcp genes. Interbacterial competition, human intestinal epithelial cell invasion, intracellular survival in human macrophages, and neonatal rat infection were compared between the wild-type and mutant strains. The findings of this study shed light on the role of T6SS in the pathogenesis of C. sakazakii and may enable the future development of therapeutic strategies to combat C. sakazakii infections.
Bacterial Strains and Cell Lines
The bacterial strains and plasmids used in this study are listed in Supplementary Table S1. C. sakazakii (ATCC 12868), Caco-2 cells, and U937 cells were obtained from ATCC. Caco-2 cells and U937 cells were cultured in minimal essential medium (MEM) and RPMI 1640 (Gibco), respectively, supplemented with 10% fetal bovine serum (Life Technologies), in a 5% CO2 atmosphere at 37°C. For competition experiments, the strains were grown in Luria Bertani (LB) broth at 37°C with shaking; when required, antibiotics were added at the following concentrations: ampicillin (100 µg/mL), chloramphenicol (20 µg/mL), and streptomycin (100 µg/mL) (Sigma).
Distribution of the Two T6SS Gene Loci in C. sakazakii Strains
A total of 138 genome sequences of C. sakazakii were examined. All genomes were compared against the genome of ATCC 12868, which contains the two intact T6SS loci. The presence of the two T6SS loci in each individual genome was assessed using the Artemis Comparison Tool (ACT) (Carver et al., 2005).
T6SS Gene Mutation and Complementation
Mutant strains containing deletions of vasK1, hcp1, vasK2, and hcp2 were constructed using the λ-red recombinase system (Datsenko and Wanner, 2000). Briefly, PCR primers (Supplementary Table S2) contained sequences corresponding to the ends of the desired deletion, whereas the 20 nucleotides at the 3′ end contained the sequence of the chloramphenicol (cam) resistance cassette from the plasmid pKD3. Plasmid pKD46 was used to express the recombinase. Complementation experiments were performed by cloning the respective genes into a pTrc99A vector, with IPTG-induced expression on LB-agar plates containing 1 mM IPTG.
Growth Curve
Strains were incubated overnight and then transferred into 100 ml of fresh LB at a ratio of 1:100. The strains were then grown at 37°C with shaking at 175 rpm. OD600 was measured every 30 min for each strain.
Bacterial Competition Assay
Streptomycin-resistant derivatives of prey strains were generated by spontaneous mutation as previously described (Johnson et al., 2005). Competition experiments were then performed as previously described (MacIntyre et al., 2010). In brief, streptomycin-sensitive predator and streptomycin-resistant prey bacteria were mixed at a 1:1 or 10:1 ratio. Approximately 10^8 bacteria were then spotted on dry LB-agar plates and incubated at 37°C for 1-4 h. The bacteria were harvested, diluted, and plated on LB plates containing 100 µg/ml of streptomycin. Each experiment was performed in duplicate and repeated thrice.
Caco-2 Cell Invasion Assay
We used a gentamicin protection assay to determine the number of intracellular bacteria, performed as described previously (Kim and Loessner, 2008) with some modifications. Briefly, Caco-2 cells were seeded in 6-well plates. After 24 h, the monolayer of cells was infected with mid-exponential-phase bacteria at a multiplicity of infection (MOI) of 10 for 90 min in an incubator at 37°C with 5% CO2. The infected cells were washed three times with sterile phosphate-buffered saline (PBS), and then fresh medium containing gentamicin (100 µg/ml) was added. The plate was incubated for 1 h at 37°C with 5% CO2 and then washed three times with PBS. The infected cells were lysed with 0.1% Triton X-100 for 10 min. The bacteria were then collected and plated onto LB agar using 10-fold serial dilutions. Each experiment was performed in duplicate and repeated thrice.
Human Macrophage Invasion and Intracellular Survival Assay
The gentamicin protection assay and intracellular survival assay were performed as previously described. In brief, U937 cells were seeded in 24-well plates with phorbol 12-myristate 13-acetate (PMA). After 48 h, cells were gently washed with RPMI to remove residual PMA. Monolayer cells were infected with mid-exponential-phase bacteria at a MOI of 10 for 1 h at 37°C with 5% CO2. The infected cells were washed twice with PBS. Fresh medium containing gentamicin (100 µg/ml) was added, and the plate was incubated for 1 h at 37°C with 5% CO2 and then washed three times with PBS. Fresh medium containing gentamicin (10 µg/ml) was then added, and the cells were incubated continually at 37°C with 5% CO2. The infected cells were lysed with 0.1% Triton X-100 at time points 0, 12, 24, 36, and 48 h. The bacteria were collected and plated onto LB agar using 10-fold serial dilutions.
Neonatal Rat Experiments
All animal experiments were performed according to the standards of the Guide for the Care and Use of Laboratory Animals (Council, 2011). Experimental protocols were approved by the Institutional Animal Care Committee at Nankai University. Animal experiments were conducted as previously described (Mittal et al., 2009). Briefly, 4-day-old Sprague-Dawley rat pups from one mother were randomly divided into several groups and infected orally with 10^4 CFU of wild-type C. sakazakii, T6SS-deficient strains, or complemented strains in 30 µl of PBS. The control group was fed PBS. The rats were euthanized 48 h after infection. Brain, liver, and spleen were aseptically removed and homogenized in sterile PBS. Bacterial counts in the tissue homogenates were determined by plating 10-fold serial dilutions on chloramphenicol-, ampicillin-, or streptomycin-LB agar plates.
Ethics Statement
All animal experiments were carried out according to the standards set forth in the Guide for the Care and Use of Laboratory Animals published by the Institute of Laboratory Animal Resources of the National Research Council (United States). The experimental protocols were approved by the Institutional Animal Care Committee at Nankai University. We made efforts to minimize animal suffering and to reduce the number of animals used.
Distribution and Genetic Structure of T6SS Gene Loci
The annotations of the available T6SS clusters and their components were based on the SecReT6 database (http://db-mml.sjtu.edu.cn/SecReT6/), combined with manual checking. In the ATCC 12868 strain, two intact T6SS loci were found and named T6SS-1 and T6SS-2 (Supplementary Table S3). T6SS-1 contained 21 contiguous genes, including 18 conserved core components and 3 accessory genes (Figure 1). The ptc1 gene has been shown to play a regulatory role in P. aeruginosa (Mougous et al., 2007). The putative peptidoglycan amidase toxin-antitoxin combination and phospholipase genes are also located inside the cluster; these are antibacterial effectors in E. cloacae, S. typhimurium, and P. aeruginosa (Russell et al., 2013; Zhang et al., 2013). This suggests that T6SS-1 may have an antibacterial function. A total of 15 core component genes were found in T6SS-2, with no regulatory or effector genes (Figure 1); the function of T6SS-2 is therefore far from clear.
A total of 138 C. sakazakii genome sequences were used to investigate the distribution of the two T6SS clusters: 96 genomes were sequenced by our lab, and the remaining 42 were obtained from the NCBI database. The results showed that an intact T6SS-1 gene locus was present in all 138 strains, whereas an intact T6SS-2 was found in only 41 (29.7%) strains. Approximately 25% (35/138) of the strains had lost the entire T6SS-2 locus, and 44.9% (62/138) of the strains had a degenerated T6SS-2, with the vasK, hcp, and/or tssH genes absent or present as pseudogenes. In addition, approximately 80% (23/29) of the clinical strains contained a deficient or degenerated T6SS-2 cluster (Figure 2). All strains responsible for fatal clinical disease, such as 701, 767, 695, and NM1240, contained truncated tssH and vasK genes, implying that their T6SS-2 was non-functional. tssH is predicted to encode a type VI secretion system ATPase, which plays an important role in sheath recycling (Brodmann et al., 2017). These data suggest that loss of T6SS-2 may be beneficial to C. sakazakii during infection in neonates.
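The three-way classification used here (intact, degenerated, deficient) reduces to a simple rule over per-gene presence calls. The sketch below is illustrative: the input format, a mapping from T6SS-2 gene names to "intact", "pseudo", or "absent" calls obtained, for example, from an ACT comparison against ATCC 12868, is an assumption, while the rule itself follows the definitions in the text and the legend of Figure 2.

```python
KEY_GENES = {"vasK", "tssH", "hcp"}   # key T6SS-2 genes named in the text

def classify_t6ss2(calls: dict) -> str:
    """Classify a strain's T6SS-2 locus from per-gene presence calls.

    `calls` maps every gene of the T6SS-2 locus to one of
    "intact", "pseudo" (pseudogene), or "absent".
    """
    if all(state == "absent" for state in calls.values()):
        return "deficient"            # entire locus lost
    if any(calls.get(g, "absent") != "intact" for g in KEY_GENES):
        return "degenerated"          # key gene absent or pseudogenized
    return "intact"

# Hypothetical example strain: hcp present but pseudogenized.
example = {"vasK": "intact", "tssH": "intact", "hcp": "pseudo"}
print(classify_t6ss2(example))        # -> degenerated
```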
Wild-Type and Two T6SS-Deficient Strains Exhibit Similar Growth Rates
In other pathogens, it has been shown that deletion of the vasK and hcp genes can inactivate T6SS (Mougous et al., 2006; Pukatzki et al., 2006). Therefore, T6SS-1- and T6SS-2-deficient strains were created by deleting the vasK or hcp genes. The deletion of T6SS genes had no effect on the bacteria's growth rate (Supplementary Figure S1), indicating that any differences between the wild-type and the two T6SS-deficient strains were not a result of differences in growth rate.
ATCC12868 Kills Other Gram-Negative Bacteria in a T6SS-1-Dependent Manner
We first performed an experiment to determine whether the T6SS of C. sakazakii has antibacterial functions similar to those of the T6SS in other pathogens such as V. cholerae and P. aeruginosa (Hood et al., 2010; MacIntyre et al., 2010). Escherichia coli K-12, E. coli O157:H7 (EHEC), Salmonella typhimurium, and Citrobacter rodentium were selected as the gram-negative prey strains, and Enterococcus faecalis, Staphylococcus aureus, and Streptococcus pneumoniae were selected as the gram-positive prey strains. Wild-type, T6SS-1-deficient, and T6SS-2-deficient strains were co-cultured with these prey strains for 1-4 h. The wild-type strain and the T6SS-2-deficient strain were highly virulent toward the gram-negative bacteria; however, this virulence was abrogated when either the vasK1 or hcp1 gene was deleted (Figure 3A). Unsurprisingly, gram-positive bacteria were resistant to killing by C. sakazakii (Figure 3B). To confirm that the T6SS-1-mediated virulence toward gram-negative strains is T6SS-dependent, E. coli O157:H7 was selected as the prey for further experiments; survival of E. coli was restored in the complemented strains H6561 and H6563 (Supplementary Figure S2). The survival curves showed that C. sakazakii had the highest killing efficiency during the second hour of infection (Figure 4). These results suggest that T6SS-1 of C. sakazakii has antibacterial functions similar to its counterparts in V. cholerae and P. aeruginosa, whereas T6SS-2 does not exhibit an antibacterial function.

FIGURE 1 | Schematic representation of the two T6SS gene loci in Cronobacter sakazakii. Core components of the T6SSs are shown in blue. Uncharacterized genes are shown in gray. In the T6SS-2 cluster, fimA4, fimB, fimD, and fimA5 are pilus-associated genes. The genes mutated in the experiments are shown in red.

FIGURE 2 | The distribution of T6SS-2 in 138 C. sakazakii strains, including clinical and non-clinical strains. The genome of each strain was compared to that of ATCC12868. Strains that lack the 13 essential core components of T6SS-2 are classified into the deficient group, while strains with partial or entire loss of vasK, tssH, hcp, or other important genes are defined as degenerated.
Wild-Type and T6SS-Deficient Strains Exhibit Similar Caco-2 Invasive Efficiencies
In some pathogens, T6SS is involved in host cell invasion (Zhou et al., 2012). Additionally, it has previously been shown that traversal of intestinal epithelial cells is required for C. sakazakii to cause sepsis and meningitis. Therefore, we assessed whether C. sakazakii invades Caco-2 cells in a T6SS-dependent manner. After 90 min of incubation of Caco-2 monolayer cells with the wild-type, ΔvasK1, and ΔvasK2 strains, respectively, and 1 h of gentamicin treatment, intracellular survival was assessed. No significant difference in invasive efficiency between the wild-type and the two T6SS deletion mutant strains was observed (Figure 5A). These results demonstrate that neither T6SS-1 nor T6SS-2 is involved in the invasion of intestinal epithelial cells.
Wild-Type and T6SS-Deficient Strains Have the Same Competence in Macrophage Invasion and Intracellular Survival
In pathogens such as V. cholerae, P. aeruginosa, and S. enterica, T6SS is involved in the invasion of and intracellular survival within macrophages (Pukatzki et al., 2007; Blondel et al., 2013). C. sakazakii can survive and multiply in macrophages for a relatively long time. Therefore, we tested whether the two T6SSs play a role in the invasion and survival of C. sakazakii in macrophages. Bacteria were recovered at 1, 12, 24, 36, and 48 h post-infection. The two T6SS-deficient strains (ΔvasK1 and ΔvasK2) exhibited invasion abilities (T0) and intracellular reproduction tendencies (T12, T24, T36, and T48) similar to those of the wild type (Figure 5B). These results suggest that neither T6SS-1 nor T6SS-2 affects C. sakazakii survival in human macrophages.
T6SS-2-Deficient Strain Exhibits a Hypervirulent Phenotype in Neonatal Rats
The ATCC 12868 strain has been documented to cause meningitis. We used a neonatal rat model to investigate the virulence of the wild-type and T6SS-deficient strains in animals. The 4-day-old rats were orally fed 10^4 CFU in 30 µl of either the wild-type or a T6SS-deficient strain. Bacteria were recovered from the brain, liver, and spleen at 48 h post-infection. Our results showed that similar numbers of T6SS-1-deficient and wild-type bacteria were recovered from the different organs. However, T6SS-2-deficient bacteria were recovered from brains at an approximately 10-fold higher number than the wild-type; the numbers were also higher in the liver and spleen, but to a slightly lesser extent than in the brain (Figure 6). Complemented strains also showed a significant difference from the deficient strains (Supplementary Figure S3). These results suggest that T6SS-2 might limit the ability of C. sakazakii to invade or grow in host organs.

FIGURE 3 | C. sakazakii targets gram-negative species in a T6SS-1-dependent manner. Survival of streptomycin-resistant prey is shown. (A) Streptomycin-sensitive predators and streptomycin-resistant gram-negative prey were mixed at a 10:1 ratio and incubated for 4 h. Surviving prey were counted by plating on agar containing 100 µg/ml streptomycin and are presented as Log10 CFU. (B) Gram-positive prey were counted using the same method as in (A). The data represent three independent experiments.
DISCUSSION
The T6SS-1 cluster is ubiquitous among C. sakazakii strains, and its GC content (59.64%) is similar to that of the whole genome (57.02%), which suggests that the gene cluster is part of the inherent genetic material of the species. The antibacterial function of T6SS-1 may be important for the species to gain survival advantages in both environmental and host niches, as it is in several other pathogens such as V. cholerae, P. aeruginosa, and Serratia marcescens (MacIntyre et al., 2010; Russell et al., 2011; Alcoforado Diniz and Coulthurst, 2015). In addition, we propose that the antibacterial function has a more profound significance for C. sakazakii during host infection, especially as a cause of NEC in newborns, as the most vulnerable targets of C. sakazakii are neonatal infants, whose fluid gut microbiota has low complexity and diversity (Grishin et al., 2013). We hypothesize that Cronobacter kills gram-negative species after infection, further reducing the complexity and diversity of an already frail gut microbiota. The relation between NEC and the gut microbiota is still unclear (Grishin et al., 2013), and further research is needed to determine whether the antibacterial function of T6SS-1 enhances the ability of C. sakazakii to cause neonatal NEC.
In this study, we found that the deletion of T6SS-2-associated genes is beneficial to C. sakazakii during its infection of neonatal rats, implying that T6SS-2 has an anti-virulence function. Similar anti-virulence functions have been found in other pathogens (Parsons and Heffron, 2005; Kinkel and McIver, 2008; Chow and Mazmanian, 2010; Li et al., 2014; Bendor et al., 2015). For example, T6SS is required for B. bronchiseptica to infect wild-type mice, yet a T6SS-deficient mutant exhibits a hypervirulent phenotype when infecting immunodeficient mice (Bendor et al., 2015). In S. typhimurium, SciS (a vasK homolog) reduces intracellular bacterial numbers at later stages of infection and attenuates virulence to achieve a balance within the host environment (Parsons and Heffron, 2005). The T6SS-2 cluster has a much lower GC content (51.02%) than the whole genome and contains pseudogenes in a large proportion of C. sakazakii strains (97/138), which suggests that the species is losing this gene cluster under an unknown selective pressure.
FIGURE 5 | Contribution of the two T6SSs to C. sakazakii invasion of human intestinal epithelial cells (Caco-2) and intracellular survival in human macrophages (U937). (A) Caco-2 cells were infected at a MOI of 10 for 90 min. Bacteria were recovered after a 1 h gentamicin protection assay. The results are presented as relative percentages. The error bars indicate standard deviations of the means of three separate experiments performed in triplicate. (B) U937 cells were infected at a MOI of 10 for 60 min. The intracellular bacterial numbers at this point are denoted T0. After a 1 h gentamicin protection assay, intracellular bacteria were recovered at time points of 12, 24, 36, and 48 h. The results are presented as the percentage of the inoculum that was intracellular. Data are the means ± standard error of two independent experiments performed in triplicate.
FIGURE 6 | Bacterial colonization in tissues of neonatal rats infected with the wild type or the two T6SS gene mutants. Groups of 4-day-old neonatal rats (n = 6) were orally infected with 10^4 CFU in 30 µl. Brains, livers, and spleens were harvested at 48 h post-infection. Equal weights of tissues were homogenized and plated on LB agar containing 20 µg/ml chloramphenicol or 100 µg/ml streptomycin. Bacterial numbers were counted and expressed as Log10 CFU/g tissue ± SD. All P-values were determined using the Mann-Whitney test. *P ≤ 0.05; **P ≤ 0.01.

This work is the first to describe the function of T6SS in Cronobacter spp. We found that the two T6SSs differ in both function and distribution among C. sakazakii strains. To our knowledge, this is the first report that T6SS-1 contributes specifically to interbacterial competition, which might be crucial for C. sakazakii to compete with other species in its various niches. The T6SS-2 cluster might be important for C. sakazakii during host interaction, as the deletion of T6SS-2 genes led to a much higher level of organ infection. T6SS-2 was shown not to be involved in human intestinal epithelial cell invasion or intracellular survival in macrophages; therefore, additional mechanisms by which T6SS-2 interacts with the host need to be investigated in the future. Since the T6SS-2 gene cluster contains 4 pilus-associated genes, the expression of these four genes (fimA4, fimB, fimD, and fimA5) was compared between the wild-type and the T6SS-2-deficient mutant (ΔvasK2). Quantitative real-time PCR showed that pilus gene expression was decreased in the mutant, suggesting that the expression of T6SS-2 and of these four pilus genes is coordinated (Supplementary Figure S4). FimA is a potent inducer of pro-inflammatory cytokines involved in tissue destruction (Choi et al., 2016). In T6SS-2 of C. sakazakii ATCC 12868, two of the four pilus genes encode the FimA protein. We propose that low expression levels of FimA protein in T6SS-2-deficient mutant strains may help the bacterium evade the host immune response, resulting in the higher pathogenicity observed (Viscount et al., 1997). The exact molecular mechanism will be explored in the future.
AUTHOR CONTRIBUTIONS
BL and MW conceived and designed the experiments. MW and HC performed the experiments and analyzed the data. QW, TX, and XG prepared the strain samples. MW, HC, and BL prepared the manuscript. All authors read and approved the final manuscript.
"Biology"
] |
Using Mobile Phone Sensor Technology for Mental Health Research: Integrated Analysis to Identify Hidden Challenges and Potential Solutions
Background Mobile phone sensor technology has great potential in providing behavioral markers of mental health. However, this promise has not yet been brought to fruition. Objective The objective of our study was to examine challenges involved in developing an app to extract behavioral markers of mental health from passive sensor data. Methods Both the technical challenges and the acceptability of passive data collection for mental health research were assessed based on a literature review and the results of a feasibility study. Socialise, a mobile phone app developed at the Black Dog Institute, was used to collect sensor data (Bluetooth, location, and battery status) and to investigate the views and experiences of a group of people with lived experience of mental health challenges (N=32). Results On average, sensor data were obtained for 55% (Android) and 45% (iOS) of scheduled scans. Battery life was reduced from 21.3 hours to 18.8 hours when scanning every 5 minutes, a reduction of 2.5 hours or 12%. Despite this relatively small reduction, most participants reported that the app had a noticeable effect on their battery life. In addition to battery life, the purpose of data collection, trust in the organization that collects the data, and perceived impact on privacy were identified as the main factors influencing acceptability. Conclusions Based on the findings of the feasibility study and literature review, we recommend a commitment to open science and transparent reporting as well as stronger partnerships and communication with users. Sensing technology has the potential to greatly enhance the delivery and impact of mental health care. Realizing this requires all aspects of mobile phone sensor technology to be rigorously assessed.
Introduction

Background
Mobile phone sensor technology has great potential in mental health research, providing the capability to collect objective data on behavioral indicators independent of user input [1-3]. With the plethora of sensors built into mobile phones, passive collection of a wide range of behavioral data is now possible using the device most people carry in their pockets [4]. Passive data collection operates in the background (requiring no input from users) and allows variables to be measured longitudinally, with detailed moment-to-moment information, including temporal information on dynamic variables such as users' feelings and activity levels. Given that these digital records reflect the lived experiences of people in their natural environments, this technology may enable the development of precise and temporally dynamic behavioral phenotypes and markers to diagnose and treat mental illnesses [5].
children, showing that adolescent girls with more depressive symptoms have smaller social networks.
Depression is also associated with decreased activity and motivation and increased sedentary behavior [17]. Cross-sectional data indicates that people with depression are less likely to be active than people without depression [18]. Furthermore, longitudinal studies have shown that baseline depression is associated with increased sedentary behavior over time [18] and that low physical activity at baseline is associated with increased depression [19]. Again, mobile phone sensors, particularly GPS, are well placed to monitor an individual's location, physical activity, and movement. Initial research in a small sample (N=18) has indicated potential features of GPS data, such as a lower diversity of visited places (location variance), more time spent in fewer locations, and a weaker 24-hour, or circadian, rhythm in location changes, that are associated with more severe depression symptoms [7].
Challenges of Mobile Phone Sensor Technology
Despite the potential of mobile phone sensor technology in mental health research, this promise has not yet been brought to fruition. The use of mobile phone sensor technology for mental health research poses several key challenges, both technical and specific to mental health apps. A primary technical challenge is the reliable collection of sensor data across mobile platforms and devices; for example, location data may be missing due to sensor failure to obtain GPS coordinates [20,21], participants not charging or turning off their phones, or unavailability of any network connection for a long period of time, hampering data transfer to servers [7,10]. The mode of data collection also influences data completeness, which can differ between operating systems. Passive collection of sensor data is easier to support on Android than on iOS; about twice as many sensing apps are available for Android as for iOS [22]. This likely reflects the greater restrictions that iOS places on accessing system data and background activity, making personal sensing on iOS devices challenging.
Another technical issue is battery life. Frequent sampling of sensor data can consume a significant proportion of a mobile phone's battery [23]. Ultimately, if an app collecting sensor data is too resource-intensive, users' motivation to continue using it decreases [24], which may lead to the app being uninstalled, ceasing the flow of data to researchers. Optimizing passive data collection to obtain the most detailed information possible should therefore be balanced against users' expectations regarding battery consumption. This is a significant practical challenge faced by mobile sensing apps.
In addition, there are specific challenges for using mobile phone sensor technology for mental health purposes, such as the engagement and retention of users [25]. Increasingly, a user-centered design approach is considered an integral part of any mental health app development [26][27][28][29]. Individuals with the target disorder can provide important information about the direction and focus of the app as well as how they engage with an app given their symptom profile. For example, focus groups of individuals with Post-Traumatic Stress Disorder (PTSD) indicated that PTSD Coach was particularly useful for managing acute PTSD symptoms and helping with sleep [30]. Clinicians, on the other hand, can provide input into the design and functionality of an app from a therapeutic perspective. For example, clinicians indicated that an app for individuals with bipolar disorder to self-manage their symptoms should focus on medication adherence, maintaining a stable sleep pattern, and staying physically and socially active [31]. Codesign of mental health apps with end users and other stakeholders increases the likelihood that the app will be perceived as attractive, usable, and helpful by the target population [24]. Although design and usability issues are often discussed for apps that require active user engagement, it is also important for passive data collection apps to increase user engagement and retention because this will ensure lower rates of missing data and dropouts. Furthermore, many apps have an ecological momentary assessment (EMA) component to complement passive sensor data collection.
User perceptions of an app's confidential handling and use of data, as well as privacy and anonymity, are additional challenges of passive data collection [9,32,33]. Mental health data are highly sensitive because of the potential negative implications of unwanted disclosure [34]; therefore, uncertainty about whether a service is confidential can be a barrier to care [35]. Indeed, data privacy and confidentiality are major concerns for the users of mental health apps [36,37], but no consensus has yet been reached on the ethical considerations that need to be addressed for the collection of passive sensor data. Moreover, user perceptions of security and privacy may differ; for example, Android and iOS users differ in characteristics such as age and gender [38] and also in their awareness of the security and privacy risks of apps [39]. Deidentification may be used to protect the privacy of individuals [40] but may also remove information that is important to maintain the usefulness of the data, depending on the context and purpose of use [41]. Systems making use of predictive analysis techniques not only collect data but also create information about personal mental health status, for example through the identification of markers of risk [42]. Therefore, social impact needs to be considered beyond individual privacy concerns.
Outline
In this study, we examined challenges of using mobile phone sensor technology for mental health research by analyzing results of a feasibility study that was conducted to test an app collecting passive sensor data. We analyzed the amount of sensor data that was collected, assessed the ability to quantify behavioral markers from Bluetooth and GPS data collected in a real-world setting, quantified battery consumption of the app, and examined user feedback on usability. No mental health questionnaires were administered as part of the feasibility study, although demographic and diagnostic data were available from the volunteer research register from which participants were drawn. We also investigated views of participants about acceptability of passive data collection for mental health research. The purpose of collecting this information was to build greater understanding of how social norms and perceptions around technology and data collection impact the feasibility, ethics, and acceptability of these technologies. We related results from our feasibility study to existing literature in these areas to identify common challenges of using mobile phone sensor technology in mental health research. We also drew some distinctions between available apps and made brief recommendations for the field going forward.
Methods
Mobile Phone App

Socialise, a mobile phone app developed at the Black Dog Institute, was used to assess the feasibility and challenges of passive data collection in a group of volunteers. We developed Socialise as a native app, in Java for Android and Objective-C for iOS, to collect passive data (Bluetooth and GPS) and EMA. Building on the results of a previous validation and feasibility study [43,44], we implemented several changes to improve scanning rates on iOS, and here we tested Socialise version 0.2. We used silent push notifications to trigger Bluetooth and GPS scans and to upload data to the server. Silent push notifications, along with the "content-available" background update parameter, were used to deliver a payload containing an operation code corresponding to either a Bluetooth or GPS scan or one of a number of data uploads. The allowable background time for processing a push notification is sufficient to perform these scans and record data; we hence used silent push notifications to overcome some of the limitations imposed by iOS on apps running in the background. In addition, we used the significant-change location service to improve data collection rates. Unlike on Android devices, no mechanism exists on iOS to allow the app to relaunch when a device restarts. By subscribing to the significant-change location service, the app is notified when the device restarts and triggers a local notification reminding participants to resume data collection.
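For concreteness, a silent push of the kind described above carries only the "content-available" flag plus custom keys; the sketch below renders such a payload as a Python dict. The custom "op" key and its values are hypothetical stand-ins, since the app's actual operation codes are not given in the text.

```python
import json

# Silent push: "content-available": 1 with no alert/sound/badge keys wakes
# the app in the background without notifying the user (standard APNs
# behavior). The "op" field is a hypothetical custom operation code.
payload = {
    "aps": {"content-available": 1},
    "op": "BLUETOOTH_SCAN",   # e.g. also "GPS_SCAN", "UPLOAD_DATA"
}
print(json.dumps(payload))
```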
Participants and Procedure
This study was approved by the University of New South Wales Human Research Ethics Committee (HC17203). Participants were recruited through advertisements disseminated through the Black Dog Institute volunteer research register. Individuals sign up on this register to volunteer for research. As part of the sign-up process, individuals provide demographics and diagnostic information (ie, mental disorders they have experienced in their lifetimes). To be able to participate in this study, individuals had to be 18 years or older, reside in Australia, speak English, and have a mobile phone running Android version 4.4 or newer or running iOS8 or newer. Interested individuals received a link to the study website where they could read participant information and provide consent. Of the 32 participants who provided consent to participate in the study, 31 also agreed to have their data made available on a public repository. Once they gave consent, participants received a link to install the Socialise app and a unique participant code. When participants opened the app, they were asked to give permission for the app to receive push notifications and collect location and Bluetooth data. Participants then had to fill in the unique participant code. Once the app opened, participants were asked to complete an entry survey, which included questions about the age of their mobile phone, the amount of time spent on their phone each day, and evaluation of their satisfaction with the onboarding process.
Participants were instructed to use the Socialise app for 4 weeks. Bluetooth and GPS data were collected during scans that were conducted at intervals of 8, 5, 4, or 3 minutes (equivalent to 7.5, 12, 15, and 20 scans per hour, respectively). Each scanning rate was tested for 1 week, and participants were instructed to use their phones normally for the duration of the study.
Data Collection
We used the BluetoothManager private API on iOS devices to collect Bluetooth data, because the public CoreBluetooth API contains only functions for interacting with low-energy devices; it is currently not feasible to use Bluetooth Low Energy to map social networks on iOS [45]. To collect GPS data, the CoreLocation framework was utilized on iOS. The Android implementation leveraged the built-in Bluetooth APIs and LocationManager to collect Bluetooth and GPS data. Data acquisition settings were identical on iOS and Android, and both were set to collect Bluetooth, GPS, and battery data every 3, 4, 5, and 8 minutes.
Because the Bluetooth media access control (MAC) address of a device is potentially personally identifiable information, these data were cryptographically hashed on the handset to ensure the privacy of participants. Hashing generates a consistent "signature" for each data item that cannot be reversed to reveal the original data value. To record only other mobile phones, detected devices were filtered according to the Bluetooth Core Specification; this involved removing any devices not matching the Class of Device 0x200 during the Bluetooth scan.
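As a concrete illustration of this hashing step, the sketch below applies a salted SHA-256 digest to a MAC address. The text does not state which hash function was used or whether a salt was added, so both are assumptions; a fixed, secret, per-study salt is shown because the 48-bit MAC address space is otherwise small enough to attack by brute force.

```python
import hashlib

STUDY_SALT = b"per-study-secret"   # hypothetical fixed, secret salt

def hash_mac(mac: str) -> str:
    """One-way signature of a Bluetooth MAC address.

    The same MAC always maps to the same digest, so repeat encounters
    with a device can still be linked, but the digest cannot be
    reversed to recover the address.
    """
    return hashlib.sha256(STUDY_SALT + mac.encode("ascii")).hexdigest()

print(hash_mac("AB:CD:EF:12:34:56"))
```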
Participants were asked to complete a short questionnaire at the end of each week to document any problems they encountered using the app. It included questions about whether they had changed phone settings (eg, turned off GPS or mobile data or turned on airplane mode), whether they used Bluetooth on their phone, and whether they thought the Socialise app impacted battery life; responses were recorded on a 7-point Likert scale. In addition, a set of questions about the acceptability of sensor data collection, with some contextual information about that acceptability, was administered at the end of the study.
Data Analysis
Data completeness was assessed by comparing the number of Bluetooth and GPS scans that were scheduled for the duration of the study (9156 samples per participant) with the number of data samples that were uploaded by the app; that is, we scheduled scans every 3, 4, 5, and 8 minutes, each for a week (4 weeks), which comes to 20×24×7 + 15×24×7 + 12×24×7 + 7.5×24×7 = 9156 total scans.
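The scheduled-scan arithmetic can be verified directly; the sketch below reproduces the 9156 figure and shows the corresponding completeness calculation (the uploaded count in the example is a hypothetical value chosen to match the reported Android average).

```python
# Four one-week blocks at 3-, 4-, 5-, and 8-minute intervals, i.e.
# 20, 15, 12, and 7.5 scans per hour, over 24 h x 7 days each.
SCANS_PER_HOUR = (20, 15, 12, 7.5)
SCHEDULED = sum(rate * 24 * 7 for rate in SCANS_PER_HOUR)
print(SCHEDULED)                      # 9156.0 scans per participant

def completeness(uploaded: int) -> float:
    """Fraction of scheduled scans for which data were uploaded."""
    return uploaded / SCHEDULED

print(f"{completeness(5036):.0%}")    # hypothetical count -> ~55%
```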
Most research using mobile phone Bluetooth to track social interactions has been performed in closed social networks [10,15,43,46]. In contrast, in this study, sensor data were collected from participants living in Australia who were unlikely to have social connections with each other. We therefore followed procedures described by Do et al [47] for analyzing Bluetooth data in a real-world setting. Instead of using Bluetooth to assess social connection between participants, Bluetooth was used to make a coarse estimate of human density around the user, which provides a rough proxy for social context. We first distinguished between known and unknown devices. Known devices were defined as devices that had been observed on at least 3 different days during the duration of the study. We then computed the average number of known and unknown devices that were detected at each hour of the day to obtain a social context profile for each participant.
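A sketch of this known/unknown split and hourly profile is given below, re-implemented in Python with pandas as an illustration (the published analysis used Matlab scripts, linked at the end of this section). The assumed input is a scan log with one row per device detection and columns `device` (a hashed identifier) and `timestamp`; these column names are assumptions, not the app's actual schema.

```python
import pandas as pd

def social_context_profile(scans: pd.DataFrame, min_days: int = 3) -> pd.DataFrame:
    """Average number of known/unknown devices per scan, by hour of day.

    `scans` has one row per device detection, with a datetime column
    `timestamp` and a hashed-identifier column `device`.
    """
    scans = scans.copy()
    scans["day"] = scans["timestamp"].dt.date
    # "Known" devices: observed on at least `min_days` distinct days.
    days_seen = scans.groupby("device")["day"].nunique()
    scans["known"] = scans["device"].map(days_seen >= min_days)
    # Count devices detected in each individual scan, split known/unknown.
    per_scan = (scans.groupby(["timestamp", "known"])["device"]
                     .nunique()
                     .unstack(fill_value=0))
    # Average over scans within each hour of the day.
    profile = per_scan.groupby(per_scan.index.hour).mean()
    return profile.rename(columns={True: "known", False: "unknown"})
```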
We followed procedures outlined by Saeb et al [7] for analyzing GPS data. To identify location clusters, we first determined whether each GPS location data sample came from a stationary or a transition state. We calculated the time derivative to estimate movement speed for each sample and used a threshold of 1 km/h to define the boundary between the two states. We then used K-means clustering to partition data samples in the stationary state into K clusters such that the overall distances of data points to the centers of their clusters were minimized. We increased the number of estimated clusters from 1 until the distance of the farthest point in each cluster to its cluster center fell below 500 m. We also estimated circadian movement, a feature that strongly correlated with self-reported depressive symptom severity [7]. Circadian movement measures to what extent participants' sequence of locations follows a 24-hour rhythm. To calculate circadian movement, we used least squares spectral analysis [48] to obtain the spectrum of GPS location data and estimate the amount of energy that fell within the 24-hour frequency bin. Circadian movement was then defined as the logarithm of the sum of energy for longitude and latitude [7].
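The three GPS features could be computed roughly as follows. This sketch assumes positions already projected to meters, timestamps in seconds, and a ±0.5-hour band around the 24-hour period; the paper's exact estimators and bin width are in the cited references, not reproduced here.

```python
import numpy as np
from scipy.signal import lombscargle
from sklearn.cluster import KMeans

def stationary_mask(t_sec, xy_m, speed_thresh_kmh=1.0):
    """Label samples as stationary when estimated speed falls below 1 km/h."""
    dt = np.diff(t_sec)
    speed = np.linalg.norm(np.diff(xy_m, axis=0), axis=1) / dt * 3.6  # m/s -> km/h
    return np.r_[speed, 0.0] < speed_thresh_kmh  # last sample treated as stationary

def location_clusters(xy_m, max_radius_m=500.0):
    """Increase K until the farthest point in every cluster is within 500 m
    of its cluster center."""
    for k in range(1, len(xy_m) + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(xy_m)
        dists = np.linalg.norm(xy_m - km.cluster_centers_[km.labels_], axis=1)
        if dists.max() <= max_radius_m:
            return k, km.labels_

def circadian_movement(t_sec, lat, lon, band_h=(23.5, 24.5)):
    """Log spectral energy of latitude/longitude near the 24-hour period,
    via least squares spectral analysis (Lomb-Scargle periodogram)."""
    omegas = 2 * np.pi / (np.linspace(*band_h, 50) * 3600.0)  # angular freqs, rad/s
    energy = lambda y: lombscargle(np.asarray(t_sec, float),
                                   y - np.mean(y), omegas).sum()
    return np.log(energy(np.asarray(lat)) + energy(np.asarray(lon)))
```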
The battery consumption of the Socialise app was estimated by varying the scanning rate each week, which enabled us to differentiate the app's battery consumption from that of other apps running on the participants' mobile phones. We estimated battery consumption using linear regression, assuming that it scaled linearly with the number of scans performed per hour. To do so, we first extracted data samples when the battery was discharging and then computed the change in battery charge between scans. We next estimated the length of time for the battery to be exhausted, separately for each scanning rate and device. Finally, we used a robust fitting algorithm, that is, iteratively reweighted least squares with the bisquare weighting function [49], to estimate the average battery consumption across devices and how it changed with scanning rate.
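The authors' Matlab scripts are linked below; the following Python sketch is an independent illustration of the robust fit, assuming per-device observations of scan rate and discharge rate (variable names are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def battery_drain_model(scans_per_hour, drain_pct_per_hour):
    """Robust linear fit (iteratively reweighted least squares with Tukey's
    bisquare weights) of battery drain rate against scanning rate."""
    X = sm.add_constant(np.asarray(scans_per_hour, dtype=float))
    fit = sm.RLM(np.asarray(drain_pct_per_hour, dtype=float), X,
                 M=sm.robust.norms.TukeyBiweight()).fit()
    intercept, slope = fit.params  # baseline drain and extra drain per scan/hour
    # Hours until a full battery is exhausted at a given scanning rate.
    return lambda rate: 100.0 / (intercept + slope * rate)
```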
All analyses were performed using Matlab version R2018a (The MathWorks Inc, Natick, MA, USA) and the Matlab scripts used to analyze data are available at Zenodo: http://doi.org/10.5281/zenodo.1238408.
To evaluate user perceptions of battery consumption of the app, we compared responses on perceived impact on battery life across the 4 weeks of the study to assess whether perceived impact was affected by the actual scanning rate. To examine the views of participants about the acceptability of passive data collection for mental health research, we compared their responses for different data types and contexts using a one-way repeated-measures analysis of variance (ANOVA). Statistical analyses were performed using JASP version 0.8.3.1 (University of Amsterdam, the Netherlands). We also collected open responses to these questions, allowing for qualitative analysis. However, owing to the small number of responses, coding to saturation was not possible and we conducted a thematic analysis instead, dividing responses into categories to determine their approximate range.
Participant Characteristics
Overall, 53 people expressed interest in participating in the study. Of these, 41 completed registration and gave informed consent. Of the 41, 1 participant was not eligible because the person did not live in Australia, 1 participant withdrew, 2 participants were unable to install the app on their mobile phones, and 5 participants did not respond to the follow-up email. The remaining 32 participants successfully installed the app on their mobile phones.
The age of participants was broadly distributed, with the largest group aged 55 to 64 years (see Table 1). Most were female (23/30, 77%) and reported that they had been diagnosed with a mental disorder (23/32, 72%); depression and anxiety disorders were most commonly reported (Table 1). Participants reported using their mobile phones regularly, and half of the devices were less than a year old (15/30, 50%).
Data Completeness
Over the course of the study, 1 participant withdrew and another stopped participating. We therefore obtained sensor data from 28 of the 41 who consented to participate, a retention rate of 68%. Survey data were collected from 23 participants (participants who provided at least one response on the short questionnaire at the end of each week) and 13 participants completed the exit survey, as seen in Figure 1. Over the 4 weeks, a total of 9156 data points were scheduled for each participant. We also recorded the model of each device, but device model did not appear to have a clear relationship with the scanning rate, as seen in Figure 2.
Passive Data Collection
In this study, we collected two types of sensor data (Bluetooth and GPS) using the Socialise app. Both types of data may provide behavioral indicators of mental health.
Bluetooth Connectivity
When assessing the number of mobile phone devices that were detected using Bluetooth, we observed large variability between participants, both in the total number of devices that were detected and the ratio of known and unknown devices, as seen in the top panel of Figure 3. When considering the average number of nearby mobile phones at different times of the day, few nearby devices were detected during sleeping time (0-6 am), and they were mostly known devices, as seen in the bottom panel of Figure 3. In contrast, office hours had the most device detections and also showed the highest percentage of unknown devices. In the evening, the number of known devices stabilized, whereas the number of unknown devices gradually decreased.
Battery Consumption
We assumed that users typically charge their phones once per day and are awake from 6 am to 10 pm (16 hours). With the app running, battery life should therefore ideally last at least 16 hours after a full recharge. After systematically varying the time interval between GPS and Bluetooth scans, we used a robust fitting algorithm to estimate the average battery consumption of the Socialise app across devices and scanning rates. Based on the fitted blue regression line seen in Figure 6, we estimated that the average battery life was 21.3 hours when the app did not scan at all and was reduced to 18.8 hours when the app scanned every 5 minutes, a reduction of 2.5 hours.
Usability
As part of an iterative design and development process, we asked participants to report any problems they experienced in using the Socialise app. Overall, 30 participants (30/32, 94%) answered questions about problems associated with installing and opening the app, with half (15/30, 50%) indicating they experienced problems. The most common problem was difficulty logging into the app with the unique participant code (7 participants; Table 2). Many reported problems were technical, which are difficult to address in a preemptive manner because they often depend on user-dependent factors, such as the type, brand, and age of the mobile phone and user behavior (eg, skimming instructions). Fewer participants (23/32, 72%) answered the questions about problems they experienced while running the app; these questions were administered at the end of each week, four times during the study, and the 23 unique respondents yielded 56 responses in total. Just under half (11/23, 48%) of the respondents reported problems running the app, and a problem was identified 32% (18/56) of the time (Table 3). The most common problem was that the app provided a notification to participants stating that they had restarted their phone when they, in fact, had not (7 times). Again, a number of the encountered problems were technical and, as before, they may be due to mobile phone and user behavior-related factors.
Ethics
To explore ethics and privacy considerations of passive mobile phone sensor data collection, we included a set of survey questions about the acceptability of sensor data collection and some contextual information about that acceptability. Survey questions were administered at the end of the feasibility study (n=13) using a 5-point Likert scale. The top panel of Figure 8 shows that most participants expressed comfort with all aspects of data collection; 77% (10/13) of the participants were either comfortable or very comfortable with GPS, 53% (7/13) with Bluetooth, and 100% (9/9) with questionnaires. A repeated-measures ANOVA showed no main effect of data type (F(2,24)=2.09, P=.15, n=13). We also asked participants how comfortable they were with data collection in different contexts, as seen in the bottom panel of Figure 8. Repeated-measures ANOVA showed a main effect of context (F(2.4,29.2)=7.48, P=.01). Post hoc t tests showed that participants were more comfortable with data collection for research than for advertising (t(12)=−3.99, P=.002) and for medical intervention than for advertising (t(12)=3.89, P=.003). Participant 11 (henceforth P11), who said they were "Neither comfortable nor uncomfortable" with GPS data collection, explained that "[I was] ok; however, as I was not fully aware of the intentions of the collection of the GPS data and my battery life declining, I started to then get uncomfortable." Another participant, who also said "neither" for both Bluetooth and GPS tracking said, "I wasn't sure what the purpose was," and "[I] don't understand the implications of this at all" [P12]. P13 said, "Why collect this data?" and "[I] cannot see what value it would be other than to satisfy arbitrary research goals" and felt it to be "an invasion of my privacy." These responses imply that although the level of discomfort was low overall, a degree of uncertainty existed around the purpose of data collection, and this uncertainty increased discomfort.
Another theme related to the motivation of being helpful to the research or the Institute by providing data. Overall, 4 of the 13 respondents mentioned being helpful as a motivation. P3 was "very comfortable" with GPS tracking and said, "[I] wanted to help in some way." P2 was quite comfortable with the app running in the background "because I realize that information will be used for the betterment of [the] community." P7 said, "[I] would like to do anything I can that might help more study," and P8 would continue using the app or "anything that could help." This theme is unsurprising given that these users are on a volunteer research register. A second and related theme was around trust. One user explained, "[I] trust the Black Dog [Institute]" (P3) and was therefore comfortable with passive data collection.
Many participants framed their level of comfort with data collection in terms of its perceived effect or impact on them. One participant was "very comfortable" with GPS tracking because "it didn't affect me" (P4). Others said, "[it] does not bother me" (P2), "[it] did not bother me" (P10), or "[I] did not think much about it" (P9). However, another user who said, "[I was] comfortable" with GPS data collection, explained: "I actually forgot most of the time that it was collecting it. Which slightly made me uncomfortable just in regard to how easily it can happen" (P5). P11, who answered "neither," explained that their comfort with GPS tracking was affected by the sense that it was draining their battery. P2 also said, "Bluetooth drains battery" and was "uncomfortable" with Bluetooth being on, but added that it was "not a huge problem." Finally, one user was "uncomfortable" with GPS tracking, explaining, "I believe it is an invasion of my privacy" (P13). However, the same user believed there were "no privacy issues" with Bluetooth data collection.
Another aspect of impact on users was the idea of perceived benefit, or lack thereof, for them. When responding to a question about whether they would continue to use the app, one participant said: "If the app were to be modified showing people you meet and giving information about what it means, I probably would [continue using it]" (P1). However, others said they "don't see a use for it" (P5) and "[were] not sure how useful it would be for me" (P9). This is not surprising considering that the app is solely for data collection. However, it shows that participants would expect to receive information that they can interpret themselves.
Principal Findings
A feasibility study was conducted to test the Socialise app and examine challenges of using mobile phone sensor technology for mental health research. Sensor data (Bluetooth, GPS, and battery status) were collected for 4 weeks, and the views of participants about the acceptability of passive sensor technology were investigated. We were able to collect sensor data for about half of the scheduled scans. Social context, location clusters, and circadian movement were extracted from the sensor data to illustrate the behavioral markers that can be obtained using the app. Battery life was reduced by 2.5 hours when scanning every 5 minutes. Despite this limited impact on battery life, most participants reported that running the app noticeably affected their battery life. Participants reported the purpose of data collection, trust in the organization that collects data, and perceived impact on privacy as important considerations for the acceptability of passive data collection.
Behavioral Markers
Instead of assessing social connections between participants, Bluetooth data were used to make a coarse estimate of human density around the participant, which provides a rough proxy for social context. The number and familiarity of devices detected were used to differentiate social contexts. Specifically, more unfamiliar devices were detected during work hours, and fewer familiar devices were detected in the evening. This pattern largely matched that observed by Do et al [47], although the number of overall devices that were detected in our study was lower. This may be partly because we recorded only Bluetooth data from mobile phone devices while filtering out other Bluetooth devices.
We extracted two features from GPS data previously shown to have a strong association with self-reported mental health data [7]: circadian movement and location clusters. Circadian movement measures to what extent participants' sequence of locations follows a 24-hour rhythm. Circadian movement estimated separately for each week showed good reliability across weeks (Cronbach alpha=.79), indicating acceptable consistency in circadian movement estimated in different weeks at different scanning rates. Circadian movement was estimated over 1 week of GPS data, and consistency may be further improved by estimating circadian movement over longer time intervals. We also used a clustering algorithm to identify the number of location clusters that each participant visited. The number of clusters ranged from 4 to 30 with a median of 8 clusters, which was higher than the number of location clusters reported by Saeb et al [7], ranging from 1 to 9 with an average of 4.1 clusters. This may be partly due to geographical differences between the studies (Australia vs United States). Human mobility patterns are strongly shaped by demographic parameters and geographical contexts, such as age and population density, and it should therefore be determined whether behavioral markers extracted from GPS data are universal or context-dependent [50,51].
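For reference, Cronbach alpha over weekly estimates reduces to a few lines; this is the standard formula, not the authors' script:

```python
import numpy as np

def cronbach_alpha(weekly):
    """weekly: array of shape (participants, weeks); each column is one
    week's circadian-movement estimate treated as an 'item'."""
    weekly = np.asarray(weekly, dtype=float)
    k = weekly.shape[1]
    item_vars = weekly.var(axis=0, ddof=1).sum()   # sum of per-week variances
    total_var = weekly.sum(axis=1).var(ddof=1)     # variance of row sums
    return k / (k - 1) * (1 - item_vars / total_var)
```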
Technical Challenges
We were able to collect sensor data for about half of the scheduled scans (Android 55%, iOS 45%). The Socialise app (v0.2) incorporated two technical modifications (ie, using push notifications to trigger scans and using the significant-change location service to alert participants when their phone restarted and remind them to resume data collection) to improve data completeness on iOS devices compared with our previous studies, which revealed a significant disparity between Android and iOS data acquisition rates using previous versions of the app [43,44]. The 50% data rate in this study is similar to the rate reported in a study using Purple Robot, in which 28 of 40 participants (70%) had data available for more than 50% of the time [7]. However, GPS data of only 18 participants (45%) were used for location analysis in that study, suggesting that the GPS data rate may have been lower. Likewise, in a study using Beiwe in a cohort with schizophrenia, the mean coverage of GPS and accelerometer data was 50% and 47%, respectively [52]. Missing data may limit the number of participants for whom features can be reliably estimated and may also introduce bias in outcome measures extracted from sensor data; for example, participants with fewer data points will appear to have fewer social connections [53]. Interestingly, a recent pilot study (N=16) found that the total coverage of sensor data is itself associated with self-reported clinical symptoms [52].
We found that the Socialise app, when scanning every 5 minutes, reduced battery life from 21.3 hours to 18.8 hours, a 12% reduction. We used silent push notifications to trigger scans intermittently because continuously sampling sensor data would drain the phone's battery in a few hours. Pendão et al [54] estimated that GPS consumed 7% and Bluetooth consumed 4% of total battery power per hour when sampling continuously, or 1% and 3%, respectively, when sampling periodically. Therefore, a straightforward solution to conserve battery life is to adjust the intervals between data collection points. Longer time intervals between scans and shorter scanning durations can reduce battery consumption, but scanning durations that are too short may not yield meaningful sensor information [23]. Although we used silent push notifications to schedule intermittent scans, other apps use an alternating on-cycle and off-cycle schedule, in which GPS is scheduled to collect data at 1 Hz for a 60-second on-cycle followed by a 600-second off-cycle [52]. Another approach to conserve battery is to use conditional sensor activation, for example, adaptive energy allocation [55] and hierarchical sensor management [23]. These solutions reduce the activation of specific sensors at times when they are not needed.
Ethical Considerations
The collection of sensor data involves large quantities of individualized social and behavioral data, and security and privacy have been recognized as a high priority [9,10]. Our participants reported that the purpose of data collection was an important consideration to weigh against any perceived privacy risks, which relates to the theme of uncertainty around purposes of data collection. The consent process for mental health data collection is therefore of importance with regard to both articulating this purpose and outlining confidentiality and risk of harm to patients [35]. Patient safety should be built into the design of data collection apps. Although this study did not collect mental health data, we intend to use the Socialise app in future studies to assess the mental health symptoms of participants. As such, we have built into the Socialise app a safety alert system, by which participants who indicate high scores on mental health questionnaires will be immediately given contact information about support services and be contacted by a mental health professional to provide additional support. This is consistent with the views of practitioners who have emphasized the importance of including contacts for medical professionals or other services in case of emergency or the need for immediate help [9]. Patients should be made aware of the standard turnaround time for a response to requests for help [2] and administering organizations should ensure that these expectations are clearly defined and consistently met [2].
Our results revealed a degree of uncertainty about the purpose of the study, suggesting that many participants took part without necessarily feeling informed about the reasons for it. The communication of purpose should therefore be improved in future studies. Hogle [56] emphasized the need to distinguish clearly whether health-related data are collected for population-level research or for individual, personal treatment or identification of issues. In addition, data processing techniques are often opaque to users, and informed consent may thus be difficult to achieve [42]. Respondents also emphasized their willingness to help the organization with its research and their trust in the organization as a stand-in for certainty about how data would be used. We believe that researchers should not rely on organizational trust as a stand-in for true understanding and informed consent because there is a risk of a breach of trust if data are not used as expected.
Other issues included data ownership and the direction of any benefits created, considering that the data are from users [40]. Pentland et al [57] argued that participants should have ownership over their own data, by which they mean that app users should maintain the rights of possession, use, and disposal with some limitations on the right to disclose data about others in one's network. This can be achieved by holding users' data much as a bank would, with informed consent, or by storing data locally on a user's device and requiring upload for analysis [57]. However, when it comes to data, it is those with the capacity to store, analyze, and transfer data who have meaningful power over it; therefore, the concept of data ownership is limited [58].
Passive sensor data may be used for predictive analytics to identify those at risk of mental health issues. However, there is a possibility that predictive models may increase inequalities for vulnerable groups [40], particularly when commercial interests are at play. Psychiatric profiling will identify some as being at high risk, which may shape self-perception [59] and beliefs about an individual. This is particularly significant if the individual is a minor [2]. Hence, nonmedical and commercial use of this data to estimate mental state and behavior is an area of concern [2].
Recommendations
Based on these findings and the literature on passive sensing, usability, and ethics, we make the following recommendations for future research on passive sensing in mental health.
Reporting of Data Completeness and Battery Consumption to Benchmark Different Technical Solutions
Standard reporting of meta-data will enable benchmarking of apps and identification of technical obstacles and solutions for sensor data collection across devices and operating systems. For example, we estimated that the Socialise app reduced battery life by 2.5 hours when scanning every 5 minutes. Although the app had a small effect on battery consumption (81% of devices had an average battery life of more than 16 hours), users were very sensitive to battery performance. Standard reporting of data rates and battery consumption will allow quantitative comparisons between approaches and help develop technical solutions that meet user expectations of battery life.
Releasing Source Code of Data Acquisition Platforms and Feature Extraction Methods
The number of mobile phone apps for passive sensing is still increasing, but differences in methodology and feature extraction methods can impede the reproducibility of findings. This can be overcome with a commitment to open science because a number of elements of passive data research could be shared. Currently, several sensing platforms are open source, such as Purple Robot [6] and recently, Beiwe [52]. Following this lead, methods for feature extraction could be made open source, such that scripts are available for use on different data sources, providing consistency in feature extraction. Finally, the data itself should be made available on open data repositories to enable data aggregation across studies to test potential markers in larger samples, resulting in more reproducible results [60]. However, data sharing not only has great potential but also involves concerns about privacy, confidentiality, and control of data on individuals [61]. These concerns particularly apply to sensor data such as GPS that can be reidentified [62]. Databases that allow analysis to be conducted without access to raw data may be one potential solution.
Identifying a Limited Number of Key Markers for Mental Health
Although the use of passive data in mental health is still exploratory, researchers need to move toward agreement on best practice methods and features. The current unrestricted number of features has the danger of inflating degrees of freedom and may endanger the replicability of findings [63]. Practices such as preregistration of study hypotheses and proposed methods to quantify features could help reduce spurious correlations and will be key in identifying reliable markers of mental health [64]. However, work with different sensor modalities is at different stages of development. For example, a number of GPS features have been identified and replicated [6], whereas potential markers of social connectedness using Bluetooth data still require research to assess predictive value in open network settings. This development of new methods of data analysis is indeed one of the most immediate challenges [5]. Once candidate methods have been identified, it will be important to test these markers in larger longitudinal studies to see whether they predict the development of mental health problems and can be used to support prevention and early intervention programs [65].
Providing Meaningful Feedback to Users
User engagement is also a key requirement for the successful implementation of sensor technology in mental health research. Investigating user experience can help us understand user expectations and improve user engagement and retention [66]. Although passive data collection is designed to be unobtrusive, perceived benefit is an important consideration for continued use of mental health apps. A user-centric design process [27] and the American Psychiatric Association's app evaluation model [67] should be followed to provide users with meaningful feedback from their sensor data. We also recommend using more robust measures for informed consent, considering the opacity of data analysis techniques and purposes [47], and engaging users with informative feedback derived from their data.
Transparency in the Purpose of Data Collection
Evidence from the literature and participant responses suggests that the purpose of data collection, and users' awareness of that purpose, are important. The use of data was found to be the most important factor in a person's willingness to share their electronic personal health data [10], and participants cared most about the specific purpose for using their health information [68]. Rothstein argued that there is too much emphasis on privacy when the concern should be about autonomy [69]. This refers to the informed consent process, during which researchers should ensure understanding and enable autonomous and active consent on that basis [69]. It is therefore recommended that researchers take care to ensure that the consent process allows participants to really understand the purpose of the research. This, in turn, is likely to increase the level of comfort with data collection.
Conclusion
The use of passive data in mental health research has the potential to change the nature of the identification and treatment of mental health disorders. Early identification of behavioral markers of mental health problems will allow us to preempt rather than respond, and understanding idiosyncratic patterns will enable personalized dynamic treatment delivered in the moment. Although a number of significant technological and broader challenges exist, we believe that open science, user involvement, collaborative partnerships, and transparency in our attempts, successes, and failures will bring us closer to this goal.
"Psychology",
"Medicine",
"Computer Science"
] |
Evolution of longevity improves immunity in Drosophila
Abstract Much has been learned about the genetics of aging from studies in model organisms, but still little is known about naturally occurring alleles that contribute to variation in longevity. For example, analysis of mutants and transgenes has identified insulin signaling as a major regulator of longevity, yet whether standing variation in this pathway underlies microevolutionary changes in lifespan and correlated fitness traits remains largely unclear. Here, we have analyzed the genomes of a set of Drosophila melanogaster lines that have been maintained under direct selection for postponed reproduction and indirect selection for longevity, relative to unselected control lines, for over 35 years. We identified many candidate loci shaped by selection for longevity and late‐life fertility, but – contrary to expectation – we did not find overrepresentation of canonical longevity genes. Instead, we found an enrichment of immunity genes, particularly in the Toll pathway, suggesting that evolutionary changes in immune function might underpin – in part – the evolution of late‐life fertility and longevity. To test whether this genomic signature is causative, we performed functional experiments. In contrast to control flies, long‐lived flies tended to downregulate the expression of antimicrobial peptides upon infection with age yet survived fungal, bacterial, and viral infections significantly better, consistent with alleviated immunosenescence. To examine whether genes of the Toll pathway directly affect longevity, we employed conditional knockdown using in vivo RNAi. In adults, RNAi against the Toll receptor extended lifespan, whereas silencing the pathway antagonist cactus – causing immune hyperactivation – dramatically shortened lifespan. Together, our results suggest that genetic changes in the age‐dependent regulation of immune homeostasis might contribute to the evolution of longer life.
Impact Summary
Despite much progress in our understanding of the genetic basis of aging, mainly from studying large-effect mutants, little is known about natural variants that contribute to the evolution of lifespan and related fitness traits. To identify the mechanisms by which longevity evolves, we sequenced a set of D. melanogaster populations that have been undergoing selection for late-life reproduction and postponed senescence, relative to unselected controls, for over 35 years. Instead of an enrichment of evolutionary changes in previously identified "canonical" longevity genes, we found an enrichment of genetically diverged immunity genes, suggesting that variation in immune function contributes to the evolution of lifespan and late-life fertility. To test this hypothesis, we employed immunity assays: long-lived flies survived infections better and showed altered age-dependent immune gene expression as compared to control flies. Using in vivo RNAi we confirmed that reduced expression of immune genes extends lifespan while immune overactivation is strongly detrimental.
Despite major progress in our understanding of the genetic basis of aging and life history, especially in model organisms such as yeast, C. elegans, Drosophila, and mice (Guarente and Kenyon 2000;Partridge and Gems 2002;Tatar et al. 2003;Guarente et al. 2008;Kenyon 2010;Flatt and Heyland 2011), the identity and effects of naturally segregating polymorphisms that affect variation in lifespan and correlated fitness traits and which might thus underpin the evolution of longevity and life history remain poorly understood to date (De Luca et al. 2003;Pasyukova et al. 2004;Carbone et al. 2006;Flatt and Schmidt 2009;Paaby et al. 2014;Carnes et al. 2015;Flatt and Partridge 2018).
Several major evolutionarily conserved pathways that regulate lifespan and correlated fitness traits, including insulin/insulin-like growth factor 1 signaling (IIS), have been identified using analyses of large-effect mutants and transgenes in the laboratory (Partridge and Gems 2002; Tatar et al. 2003; Kenyon 2010), but to what extent genes in these "canonical" pathways harbor segregating alleles that affect lifespan is mostly unknown (Flatt and Schmidt 2009; Paaby et al. 2014; Carnes et al. 2015; Flatt and Partridge 2018). For instance, only a few studies to date have identified functional effects of segregating IIS polymorphisms on lifespan and correlated life-history traits in populations of Drosophila (Paaby et al. 2010, 2014; Remolina et al. 2012), or polymorphisms that contribute to longevity in human centenarians (Suh et al. 2008; Willcox et al. 2008; Flachsbart et al. 2017; Joshi et al. 2017).
Here, we take advantage of a >35-year-long laboratory selection experiment for late-life fertility and increased lifespan in Drosophila melanogaster, first published by Luckinbill and colleagues in 1984 (Luckinbill et al. 1984; see also Luckinbill and Clare 1985; Arking 1987), to analyze the genomic footprints underlying the evolution of delayed reproduction and postponed aging. In this long-term selection experiment, replicate lines derived from an outbred base population have been selected for late-life fertility and, indirectly, for increased lifespan by breeding only from flies that survived and were fertile at a relatively old age. In contrast, unselected replicate control lines have been propagated across generations by breeding from flies with a random age at reproduction (for details see Supplementary methods). Selected flies in this experiment have evolved late-life fertility and live 40-50% longer than unselected control flies, yet exhibit reduced early fecundity relative to the controls (see Supplementary methods). Thus, these selection lines are subject to a genetic trade-off between late-life performance (long life, late-life fertility) and early fecundity, as is commonly observed in laboratory evolution experiments that directly or indirectly select for changes in Drosophila lifespan (Luckinbill et al. 1984; Rose 1984; Zwaan et al. 1995; Partridge et al. 1999; Stearns et al. 2000; Remolina et al. 2012).
The central finding from our genomic analysis of this selection experiment is that evolutionary changes in innate immunity contribute to the evolution of late-life performance in fruit flies, probably by improving age-dependent immune homeostasis. Although still little is understood about the mechanistic interplay between immunity and aging (Garschall and Flatt 2018), our analyses suggest that immune function is a major longevity assurance mechanism that can be targeted by selection on standing genetic variation.
THE GENOMIC SIGNATURE OF LONGEVITY
To characterize the genomic signature of longevity we used next-generation pool-sequencing (Pool-seq) (Schlötterer et al. 2014) to obtain genome-wide allele frequency estimates from four long-lived selection lines and two unselected control lines after ≥144 generations of selection (see Supplementary methods for details). We identified candidate SNPs by comparing allele frequency differentiation between the selection and control regimes with a stringent F_ST outlier approach (Lewontin and Krakauer 1973; Akey 2009) (Fig. 1A,B). The majority of SNPs (62.2%) showed no or less differentiation between the selection versus control regime as compared to differentiation within these regimes (selection signal-to-noise ratio ≤ 0; Fig. 1B,C). We defined SNPs as candidates if they showed very strong, consistent, and significant differentiation in all eight pairwise comparisons between the four selection and two control lines (signal-to-noise ratio > 0.9; F_ST(selection vs. control) > 0.9; Bonferroni-corrected Fisher's exact test: P < 10^-9) (Fig. 1A,B,C). Using this approach, we identified 8205 candidate SNPs in 868 genes distributed across the entire genome (Fig. 1B; Table S1; genes were defined as the sequence between the ends of the 5' and 3' UTRs plus 1 kb up- and downstream; also see Supplementary methods). Candidate loci appeared to cluster non-randomly in specific genomic regions, suggesting pervasive polygenic selection and/or indirect selection due to "hitchhiking" ("genetic draft") (Fig. 1B; Table S1). To further validate our set of longevity candidate SNPs and to exclude false positives due to randomness, for example because of genetic drift, we used a combinatorial approach (see Supplementary methods): we applied our candidate criteria to all 6435 possible sets of eight pairwise comparisons, of which only one, the set of all eight pairwise control versus selection comparisons, is biologically informative in terms of inferring selection. No combination of eight pairwise comparisons yielded as many candidate SNPs as this "true" set (red bar in Fig. 1D), so it is highly unlikely (P = 1.6 × 10^-4) that this large number of candidate SNPs arose by chance (Fig. 1D).
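The excerpt does not give the exact Pool-seq F_ST estimator or the signal-to-noise definition, so the following sketch makes both explicit as assumptions: a simple Hudson-style per-SNP estimator, and a signal defined as the weakest between-regime differentiation minus the strongest within-regime differentiation (consistent with values ≤ 0 being possible for most SNPs). Note that the 6435 sets correspond to choosing 8 of the 15 pairwise comparisons among the six lines, C(15,8) = 6435.

```python
from itertools import combinations
from math import comb

def fst(p1, p2):
    """Simple Hudson-style F_ST for one biallelic SNP from two allele
    frequencies (real Pool-seq estimators also correct for pool size
    and read depth, which this sketch omits)."""
    denom = p1 * (1 - p2) + p2 * (1 - p1)
    return (p1 - p2) ** 2 / denom if denom > 0 else 0.0

def signal_to_noise(sel, ctrl):
    """Assumed definition: weakest between-regime differentiation minus
    the strongest within-regime differentiation; <= 0 means no excess
    between-regime signal."""
    between = [fst(a, b) for a in sel for b in ctrl]
    within = [fst(a, b) for grp in (sel, ctrl) for a, b in combinations(grp, 2)]
    return min(between) - max(within)

# Six lines give C(6,2) = 15 pairwise comparisons; choosing any 8 of them
# yields the 6435 sets used in the randomization check.
assert comb(15, 8) == 6435
```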
PARALLELISM
While some mechanisms of longevity are evolutionarily conserved ("shared") among species and thus "public," for example insulin/insulin-like growth factor 1 signaling (IIS), most others are likely to be lineage-specific and thus 'private' (Martin et al. 1996;Partridge and Gems 2002;McElwee et al. 2007). Similarly, at the intraspecific level, parallel and convergent evolution in independent populations might result in the repeated use of the same genes underlying a given trait ("gene reuse") (Conte et al. 2012), but to what extent this might be the case for longevity remains unclear. Addressing this question might give insights into the predictability of the evolution of lifespan at the genetic level (Stern and Orgogozo 2008;Conte et al. 2012).
To examine how frequently the same genes are used by different populations during the evolution of late-life fertility and longevity, we compared our list of candidate genes to those from two other "Evolve and Resequence" studies of Drosophila longevity and correlated life-history traits (Remolina et al. 2012; Carnes et al. 2015). The study by Carnes et al. (2015) provides a genomic analysis of an independent long-term selection experiment by Rose (Rose 1984) similar in duration to ours (Luckinbill et al. 1984), with both selection experiments first published back-to-back in 1984. The other study, by Remolina et al. (2012), performed whole-genome sequencing of a shorter, 50-generation-long selection experiment for longevity. Importantly, both Rose (1984) and Remolina et al. (2012) selected for increased lifespan by postponing reproduction, using a design that is qualitatively identical to ours.
We discovered statistically significant sharing of candidate loci across all possible overlaps among the three datasets (Fig. 2, Table S2), indicating genetic parallelism underlying the evolution of late-life performance. Our dataset contained 147 (11.7%) of the candidate genes of Carnes et al. (2015) and 102 (10.9%) of those of Remolina et al. (2012). Twenty candidate genes (≈2%) were shared across all three studies, representing clear cases of gene reuse during the evolution of longevity and late-life fertility (Fig. 2; see Table S2 for functional annotations of the shared longevity candidate genes and Table S5 for statistical details). Thus, as might be expected from a highly complex and polygenic trait such as lifespan (McElwee et al. 2007), most candidate loci tend to be population-specific. However, a small but significant proportion of candidate loci is shared among independent populations, perhaps suggesting the existence of "preferred" loci of evolutionary change (Stern and Orgogozo 2008) for longevity. Several of these "high confidence" genes represent promising candidate loci for future functional experiments.
Notably, although each study identified several loci that belong to "canonical" longevity pathways (Guarente and Kenyon 2000; Partridge and Gems 2002; Tatar et al. 2003; Guarente et al. 2008; Kenyon 2010), for example the IIS pathway, the candidate lists and overlaps contain few "classical" lifespan genes that have previously been identified in studies of large-effect mutants and transgenes. This might be due to a lack of standing variation at these "canonical" longevity loci: perhaps these conserved-effect loci have been optimized by selection but are now subject to strong purifying selection (see Remolina et al. 2012; Flatt and Partridge 2018). Thus, while segregating IIS polymorphisms with major effects on life-history traits including lifespan have been identified (Geiger-Thornsberry and Mackay 2004; Paaby et al. 2010, 2014; Flachsbart et al. 2017; Joshi et al. 2017), our results are consistent with the hypothesis that loci in these canonical pathways might be under selective constraints (see Remolina et al. 2012; Flatt and Partridge 2018).
Even though "canonical" longevity loci seem to be underrepresented, many of the overlapping candidate genes that we have identified have strong empirical support from functional genetics, GWAS, QTL, or gene expression studies, with known roles in lifespan determination, somatic maintenance (e.g., resistance against starvation or oxidative stress, immunity, metabolism), and age-specific fecundity (see functional annotations in Table S2). The fact that several candidate loci are known to affect age-specific fecundity is consistent with the age-at-reproduction selection regime used by all three studies and possibly also with genetic trade-offs between early fecundity and lifespan (and/or late-life fecundity) seen in these selection experiments.
IMMUNE FUNCTION
We next sought to characterize the functions of our candidate loci with gene ontology (GO) analysis (Kofler and Schlötterer 2012) (Table S3; considering the ontologies "Biological Process," "Molecular Function," and "Cellular Component"). Interestingly, we found an enrichment of candidate genes associated with "antifungal peptides" at a false discovery rate of ≈9% (FDR = 0.085), whereas the term "determination of adult lifespan" had no support (FDR = 1) (Table S3). Immunity against fungi (and gram-positive bacteria) is regulated by Toll signaling (Belvin and Anderson 1996; Lemaitre et al. 1996; De Gregorio et al. 2002; Valanne et al. 2011), and among our candidates we identified several prominent members of this pathway, including the Toll ligand spätzle (spz), the receptor Toll (Tl), the Toll inhibitor cactus (cact), the NFκB transcription factors Dorsal-related immunity factor (Dif) and dorsal (dl), the upstream serine proteases persephone (psh) and sphinx2, and two regulators of cactus, scalloped (sd) and cactin (Fig. 3, Table S4). The other major immune pathway, the Imd pathway (De Gregorio et al. 2002; Kleino and Silverman 2014; Myllymäki et al. 2014), also harbored several but fewer candidates, including peptidoglycan recognition protein LE (PGRP-LE) and the antimicrobial peptide Cecropin A1 (CecA1) (Fig. 3, Table S4).
The enrichment of immunity genes prompted us to hypothesize that genetic changes in immune function might contribute to the evolution of longevity and correlated fitness traits (DeVeale et al. 2004;Finch 2007). Importantly, Remolina et al. (2012) also found enrichment of genes involved in "defense response to fungus," and Carnes et al. (2015) observed divergence in immune gene expression between long-lived selection and control lines, suggesting that the relation between immunity and lifespan might be general (DeVeale et al. 2004;Finch 2007). While we found a larger number of genes in the Toll pathway, Carnes et al. (2015) and Remolina et al. (2012) found more candidates in the Imd pathway. However, several immune genes are shared across the three studies, despite a relatively small overlap at the individual gene level (Table S4). Immunity might thus represent a general mechanism underlying longevity, with immune genes having pleiotropic effects on lifespan and correlated fitness components.
Despite this compelling commonality across independent experiments, still little is known about how immunity proximately affects longevity and correlated fitness traits; similarly, whether genetic changes in immunity might contribute to the evolution of longer life remains unknown (Garsin et al. 2003;DeVeale et al. 2004;Kurz and Tan 2004;Libert et al. 2006;Troemel et al. 2006;Libert et al. 2008;Fernando et al. 2014;Guo et al. 2014;McCormack et al. 2016;Kounatidis et al. 2017;Loch et al. 2017;Yunger et al. 2017). We therefore aimed to test whether the evolved genomic signature of immune gene enrichment observed in our study -and similarly by Carnes et al. (2015) and Remolina et al. (2012)-might represent a physiological mechanism underlying evolutionary changes in lifespan and late-life fertility.
AMP INDUCTION WITH AGE
We first examined whether the selection and control lines differ in the expression of antimicrobial peptides (AMPs), the major effectors of the innate immune response. We used three AMPs as readouts of Toll and Imd signaling activity: Drosomycin (Drs), Attacin A (AttA), and Diptericin (Dpt). Drs and AttA are regulated by both Toll and Imd signaling, whereas Dpt is mainly regulated by the Imd pathway (De Gregorio et al. 2002). Using quantitative real-time PCR, we determined mRNA levels in young (5-6-day-old) and aged (25-26-day-old) female flies, either without pricking, upon aseptic pricking (mock control), or upon prick infection with Erwinia carotovora carotovora 15 (Ecc15). Systemic infections with this bacterium induce the expression of all three AMPs assayed here (Lemaitre et al. 1997; Basset et al. 2000; De Gregorio et al. 2002).
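Relative AMP expression from qRT-PCR data of this kind is conventionally computed with the 2^-ddCt method; here is a minimal sketch (the Ct values and the implied reference gene are illustrative, not the study's data):

```python
def fold_change(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^-ddCt method: Ct values for the AMP
    (e.g., Drs) and a reference gene, in treated vs. control samples."""
    d_ct = ct_target - ct_reference                # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_reference_ctrl
    return 2.0 ** -(d_ct - d_ct_ctrl)

# e.g., infection lowering the AMP's Ct by ~3 cycles relative to control:
print(fold_change(22.0, 18.0, 25.0, 18.0))  # -> 8.0 (eight-fold induction)
```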
Without pricking, control flies upregulated AMP baseline expression with age (Fig. 4A), a pattern that is commonly observed in wild-type flies and attributed to persistent chronic infection and a prolonged immune response at old age (Seroude et al. 2002; DeVeale et al. 2004; Zerofsky et al. 2005; Ren et al. 2007; Ramsden et al. 2008). In marked contrast to control flies, baseline AMP levels remained constant as a function of age in selected flies (Fig. 4A).
AMP expression also differed substantially between control and selected flies upon infection: at young age, the AMP response was slightly stronger in long-lived flies than in control flies, whereas at old age long-lived flies tended to downregulate AMP induction (Fig. 4B). Thus, unlike aged wild-type flies, which upregulate AMPs but suffer from immunosenescence and show signs consistent with chronic inflammation (i.e., reduced infection survival, increased bacterial load, more persistent AMP induction upon infection; see Zerofsky et al. 2005; Ren et al. 2007; Ramsden et al. 2008; Myllymäki et al. 2014), aged long-lived flies dampened AMP induction.

Figure 3. Genes of the Toll and Imd pathways represent longevity candidates. Overview of the Toll and Imd pathways, the two major pathways regulating the humoral innate immune response against fungi and gram-positive bacteria (Toll) and gram-negative bacteria (Imd). Among our longevity candidates we found an enrichment of immunity-related genes (enrichment of GO terms associated with "antifungal peptides"). Longevity candidate genes identified in the Toll and Imd pathways are shown in red. For additional immunity-related candidate genes see Table S4.
Our results therefore suggest that long-lived flies might have evolved improved age-dependent immune homeostasis and alleviated immunosenescence (DeVeale et al. 2004). These evolutionary changes in immune gene induction might also be linked to the late-life fertility of the long-lived lines. Since in our selection experiment lifespan was selected for by postponing reproduction, the observed differences in immune gene induction between the regimes might be a byproduct of selection for increased late-life fertility in the long-lived selection lines. This would be consistent with the observation that infection reduces fecundity: infection-induced synthesis of AMPs incurs a cost of reproduction in wild-type flies but this cost is abolished in Imd pathway mutants (Zerofsky et al. 2005).
SURVIVAL UPON INFECTION
To investigate whether selected and control flies differ in realized immune function, we measured their survival after infection with four different pathogens (Fig. 5). Long-lived flies survived infections with a fungus (Beauveria bassiana, Bb), with the Gram-negative bacterium Ecc15, and with the Gram-positive bacterium Enterococcus faecalis (Ef) overall markedly better than control flies (Fig. 5A,B,C,E). Improved survival of long-lived flies was observed for both young and aged flies after infection with Bb and Ecc15, whereas for Ef infection only aged long-lived flies showed increased survival relative to controls (Fig. 5A,B,C,E). Because one of our candidate genes, the JAK/STAT-activating cytokine unpaired3 (upd3; Table S4), is involved in antiviral immunity (Zhu et al. 2013), we also measured the survival of flies upon infection with Drosophila C virus (DCV). This assay was carried out only with young, not aged, flies, but we again found that long-lived flies survived infection with DCV much better than control flies (Fig. 5D,E). The evolution of prolonged lifespan might thus be accompanied, or partly caused, by selection for improved realized immunity.
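Survival curves of this kind are typically compared with Kaplan-Meier estimates and log-rank tests; below is a hedged sketch assuming the lifelines package and a hypothetical per-fly table (the study's actual statistics are reported in Table S5, not reproduced here):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# df: one row per fly, with columns 'days' (time to death or censoring),
# 'dead' (1 = died, 0 = censored), and 'regime' ('selected' or 'control').
def compare_infection_survival(df: pd.DataFrame):
    sel = df[df["regime"] == "selected"]
    ctl = df[df["regime"] == "control"]
    km = KaplanMeierFitter()
    km.fit(sel["days"], event_observed=sel["dead"], label="selected")
    result = logrank_test(sel["days"], ctl["days"],
                          event_observed_A=sel["dead"],
                          event_observed_B=ctl["dead"])
    return km.median_survival_time_, result.p_value
```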
Next, we examined the ability of selection and control flies to successfully clear bacterial (Ecc15) infections over a 6-day period postinfection. The ability of control flies to clear an infection was higher than that of long-lived flies at young age but declined at old age; in contrast, clearance was overall lower in long-lived flies yet did not change with age (Fig. 5F). The lower clearance ability of long-lived selected flies, independent of their age, together with their improved survival upon infection, possibly indicates that they have evolved to be more tolerant to infections than unselected control flies (Best et al. 2008; Schneider 2008, 2012; Felix et al. 2012).
TOLL OVERACTIVATION IS DETRIMENTAL
Our results above support the idea that improved age-dependent regulation of immunity contributes to longevity and late-life fertility, but how immune genes affect lifespan is not well studied, especially in Drosophila (DeVeale et al. 2004;Libert et al. 2006;Fernando et al. 2014;Guo et al. 2014;Kounatidis et al. 2017;Loch et al. 2017). For example, previous work has shown that constitutive upregulation of the peptidoglycan recognition proteins PGRP-LE and PGRP-LC causes hyperactivation of Imd signaling and reduces lifespan (DeVeale et al. 2004;Libert et al. 2006). Similarly, several mutants of negative regulators of Imd signaling display shortened lifespan (Fernando et al. 2014;Kounatidis et al. 2017). While we also identified PGRP-LE as a lifespan candidate gene, most immunity genes in our analysis belong to the Toll pathway (Fig. 3, Table S4).
To examine whether Toll signaling affects lifespan, we used transgenic RNAi to silence four longevity candidate genes of the Toll pathway: the ligand spz, the receptor Tl, the inhibitor cact, and the NFκB transcription factor Dif. To prevent deleterious side effects of knocking down these developmentally critical genes (Nüsslein-Volhard and Wieschaus 1980; Belvin and Anderson 1996) we used a mifepristone-inducible daughterless (da)-GeneSwitch(GS)-GAL4 driver (Tricoire et al. 2009) to direct expression of UAS-RNAi constructs against these genes specifically during adulthood and throughout the fly body. Downregulation of the Tl receptor, but not of its ligand spz, mildly but significantly extended lifespan (Fig. 6A,B,C,D), while silencing the antagonist cact, which leads to Toll pathway hyperactivation (Lemaitre et al. 1996; Aggarwal and Silverman 2008), drastically reduced lifespan (Fig. 6E,F), similar to the effects of overactivation or derepression of Imd signaling (DeVeale et al. 2004; Libert et al. 2006; Guo et al. 2014; Kounatidis et al. 2017). Interestingly, we found opposite lifespan effects of Dif-RNAi for females (Fig. 6G) and males (Fig. 6H). In agreement with our findings for females, two studies have previously found that a loss-of-function mutant of Dif lives longer than wild-type (Le Bourg et al. 2012; Petersen et al. 2013), but why silencing Dif reduces male lifespan remains unclear. Our results thus establish that downregulation of Toll signaling increases lifespan (albeit weakly so), whereas overactivation of this pathway strongly shortens life.
Our findings for the Toll pathway are also consistent with recent studies of Imd signaling showing that lifespan is extended under conditions of reduced lifetime Imd activity (Loch et al. 2017) or when the Imd-regulated AMPs Attacin C (AttC) and Diptericin B (DiptB) are downregulated in the fat body (Lin et al. 2018). The evidence available to date therefore suggests that decreased activity of the immune system can promote lifespan (DeVeale et al. 2004), possibly by reducing the costs of immune deployment (McKean and Lazzaro 2011). Moreover, as we show here, longer lifespan can evolve, at least partly, via evolutionary changes in immunity.
Conclusion
Explaining the genetic basis of variation in longevity is a longstanding problem in evolutionary genetics and the biology of aging (Finch 1990;Rose 1991;Zwaan 1999;Partridge and Gems 2006;Flatt and Schmidt 2009;Flatt and Partridge 2018). Here we have performed a whole-genome sequencing analysis of an over 35-year-long selection experiment for postponed aging and late-life fertility in Drosophila (Luckinbill et al. 1984).
Notably, among the longevity candidate genes identified in our genomic screen, we found an enrichment of immune genes, especially in the Toll pathway. By comparing our data to those from two previous genomic studies of longevity selection in Drosophila (Remolina et al. 2012; Carnes et al. 2015) we infer that, while different studies might identify different immune genes as longevity candidates, immune function likely represents a general process-level mechanism underlying the evolution of longevity assurance and of late-life performance (Martin et al. 1996; Partridge and Gems 2002; McElwee et al. 2007). This is particularly noteworthy in view of the growing evidence that aging, inflammation, and immunity are intricately linked at the molecular level (DeVeale et al. 2004; Kurz and Tan 2004; Finch 2007; Salminen et al. 2008; Eleftherianos and Castillo 2012). However, how immunity contributes to longevity and correlated fitness traits is largely unclear.
While aged wild-type flies upregulate immune gene expression (Pletcher et al. 2002;Seroude et al. 2002;Landis et al. 2004), they typically have a reduced capacity to fight off and survive infections, suggesting that they suffer from immune overactivation and immunopathology (Zerofsky et al. 2005;Ren et al. 2007;Ramsden et al. 2008). Here, we show that long-lived flies, by contrast, tend to downregulate the induction of immune effector genes (AMPs) with age yet have substantially improved survivorship upon infection. This seems to confirm that elevated immune gene expression at old age might either be ineffective or even detrimental, perhaps as a consequence of senescent dysregulation of gene expression (Zerofsky et al. 2005;Khan et al. 2017). The downregulation of AMPs seen in the long-lived selection lines might also be a byproduct of selection for late-life fertility in these lines since elevated AMP expression upon infection is known to reduce fecundity (Zerofsky et al. 2005).
Since optimal immunity depends on the balance between efficient clearance of pathogens and limiting immunity-induced damage (Casadevall and Pirofski 1999; Read et al. 2008; Råberg et al. 2009; Medzhitov et al. 2012), we propose that selection for longevity and late-life fertility leads to improved age-dependent immune homeostasis and alleviates the trade-off between immunity and immunopathology. This trade-off can be decoupled to some degree by tolerance mechanisms (Medzhitov et al. 2012), suggesting that the improved immunity of long-lived flies might, at least in part, be due to increased tolerance. In line with the notion of a trade-off between immunity and immunity-induced damage, work in the mealworm beetle shows that deployment of the immune effector phenoloxidase (PO) causes early-life inflammation, faster aging, and immunopathology at old age, whereas RNAi silencing of PO extends lifespan and improves survival after infection (Khan et al. 2017). This is consistent with the fact that hyperactivation or derepression of Imd signaling (DeVeale et al. 2004; Libert et al. 2006; Fernando et al. 2014; Kounatidis et al. 2017) and, as we observe here, of Toll signaling reduces lifespan. Conversely, we find that adult downregulation of Toll signaling mildly promotes lifespan, similar to recent findings for the Imd pathway (Kounatidis et al. 2017; Lin et al. 2018).
Together, our work reveals the existence of a causal, but mechanistically still poorly understood, link between improved age-dependent immunity and the evolution of longevity and late-life fertility (Garschall and Flatt 2018). This relationship clearly warrants further mechanistic and evolutionary study.
Methods
All methods are given in the Supplementary methods file (see Supporting Information section below), including details of selection and control lines, next-generation sequencing, bioinformatic and statistical analyses, gene expression analyses, immunity assays, transgenic RNAi, and lifespan assays.

ACKNOWLEDGMENTS

We thank the Bloomington Drosophila Stock Center (BDSC) and the Vienna Drosophila RNAi Center (VDRC) for fly stocks; Véronique Monnier for the da-GS-GAL4 strain; and Luis Teixeira for the DCV strain. Our work was supported by grants from the Austrian Science Foundation (FWF P21498-B11 and W1225) and the Swiss National Science Foundation (SNSF PP00P3 133641 and PP00P3 165836) to T.F. G.S.-M. was supported by a NOS Alive - IGC fellowship.
DATA AVAILABILITY
Sequencing data used for genomic analyses are available from the European Nucleotide Archive (ENA) under accession PRJEB28048 / ERP110212. Raw data for experimental assays are available from Dryad under accession https://doi.org/10.5061/dryad.cp38vj4.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
Supporting Information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Supplementary methods (pdf). Description of all methods, including details of selection and control lines, next-generation sequencing, bioinformatic and statistical analyses, gene expression analyses, immunity assays, transgenic RNAi and lifespan assays.
Table S1. (xls). Longevity candidate SNPs and candidate genes.
Table S2. (xls). Shared candidate genes across three independent studies.
Table S3. (xls). Gene ontology (GO) analysis of longevity candidate genes.
Table S4. (xls). Immunity genes implicated in lifespan and aging.
Table S5. (xls). Full statistical details of data analyses shown in the main text.
"Biology"
] |
Multitrace AdS/CFT and Master Field Dynamics
We consider gauge theories with multitrace deformations in the context of certain AdS/CFT models with explicit breaking of conformal symmetry and supersymmetry. In particular, we study the standard four-dimensional confining model based on the D4-brane metric at finite temperature. We work in the self-consistent Hartree approximation, which becomes exact in the large-N limit and is equivalent to the AdS/CFT multitrace prescription that has been proposed in the literature. We show that generic multitrace perturbations have important effects on the phase structure of these models. Most notably they can induce new types of large-N first-order phase transitions.
Introduction
In 't Hooft's large-N limit of gauge theories [1], the scaling of the bare gauge coupling g² ∼ 1/N is tuned so that the vacuum energy is proportional to N². This scaling generalizes to an arbitrary action according to the rule S = N² W(O), where W is a general functional of operators of the symbolic form O ∼ (1/N) Tr[. . .], the set of single-trace gauge-invariant operators with expectation values of O(1) in the large-N limit 2 . For more general theories, including scalar fields and fermions in the adjoint representation, we extend the basic family of gauge-invariant operators to include these fields as well. These operators become quasi-classical in the large-N limit, in the sense that their connected correlators are suppressed, lim_{N→∞} (⟨O_i O_j⟩ − ⟨O_i⟩⟨O_j⟩) = 0. This means that there is a notion of saddle-point configuration, a "master field" defined up to gauge transformations, which makes the 1/N expansion into a semiclassical expansion [2].
Known or conjectured master fields are usually established for single-trace actions, i.e.
for linear W in (1.1), such as the Yang-Mills action. However, the behaviour of master fields under perturbations by multitrace operators is of primary interest, especially in the context of the AdS/CFT correspondence [3]. In the holographic mapping, multitrace operators are associated to multiparticle states in the bulk theory. Hence they correspond to exotic deformations of the string background [4]. Moreover, truly non-perturbative effects in the bulk theory manifest themselves as finite-N multitrace effects on the CFT. This is simply the translation of the fact that only O(N) elementary powers of the form Tr F^n are algebraically independent: for n ≫ N the single-trace operator decomposes as a sum of products of lower-order single-trace operators. Hence, the spectrum of the bulk theory must deviate significantly from a Fock space for states with O(N) "particles".
It is then very interesting to study the effect of multitrace deformations on the AdS/CFT saddle point, particularly the effect of deformations that are non-polynomial in the traces. Recently, the AdS/CFT algorithm was modified to incorporate multitrace operators [5,6] (see also [7]). Here we elaborate on some points made in [6] to argue that this modification can be understood in rather general terms, as an application of the mean-field approximation.
In analysing the large-N master field, we could attempt a saddle-point approximation once we have managed to exactly integrate out O(N²) degrees of freedom. If we remain with O(N) degrees of freedom, this sets the order of magnitude of the fluctuations. Since the action is of O(N²), we have a sharp saddle point. In practice, such a program only works in very restricted models in low dimensions, where we can integrate out explicitly the O(N²) angular variables (for a discussion of multitrace operators in these models, see [8]). Still, one can argue in great generality that in the leading large-N approximation W can be taken essentially linear.
Let us suppose that we have managed to change variables in the path integral from the gauge field A_µ to the set of gauge-invariant monomials O_n with n < O(N). In the process we generate a complicated (non-local) effective action Γ. At the large-N saddle point, the solution of the saddle-point equations is nothing but the set of planar expectation values O_cl = ⟨O⟩. In view of (1.3), it is clear that these equations are exactly the same as those that follow from a model with a single-trace action (1.4), where the effective single-trace couplings ζ_n are given by (1.5). Therefore, provided we only consider the planar N → ∞ limit, any quantity of the original theory (1.1) can be computed in the single-trace theory (1.4), with the expectation values ⟨O⟩ being determined self-consistently. 3 Thus, in the AdS/CFT set-up, the combination ∂W(O_cl)/∂O_n plays the role of the source for the single-trace operator O_n, and this precisely determines the boundary conditions proposed in [5,6].
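Schematically, and with normalizations that are our own rather than copied from the original equations (1.2)-(1.5), the self-consistency structure is:

```latex
% Mean-field structure of the planar limit (sketch; normalization assumed):
W_{\mathrm{eff}}(O) \;=\; \sum_n \zeta_n\, O_n ,
\qquad
\zeta_n \;=\; \left.\frac{\partial W(O)}{\partial O_n}\right|_{O\,=\,O_{\mathrm{cl}}} ,
\qquad
O_{\mathrm{cl},n} \;=\; \langle O_n \rangle_{W_{\mathrm{eff}}} .
```

The couplings of the auxiliary single-trace model thus depend on its own expectation values, which is precisely the self-consistency loop solved by the Hartree approximation discussed below.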
Our discussion shows that the basic phenomenon is more general than the particular AdS/CFT set-up. Namely, it is a general consequence of the fact that the Hartree (or Thomas-Fermi) approximation becomes exact in the large-N limit (see for example [9]). In this limit, the interactions between the gauge-invariant variables O_n can be substituted by the interaction of each variable with a collective mean field that must be determined self-consistently.
We should emphasize that these rules are only valid in the strict N = ∞ limit. The 1/N corrections will alter the master equation (1.5), since the Hartree approximation itself receives corrections. Equivalently, the AdS/CFT boundary conditions of [5,6] will receive 1/N corrections, in addition to the usual loop corrections in the bulk of AdS.
Master Field Dynamics
To be more specific, let us suppose that deformations by a certain single-trace operator O are under control, in the sense that we are able to compute the planar one-point function ⟨O⟩_ζ as a function of the source ζ and the other couplings of the Lagrangian. Then we can compute any planar expectation value of the more general theory with a perturbation of the multitrace form (2.2), where F is a general function, µ is a mass scale and d_O is the scaling dimension of the operator O in the single-trace model. We simply do our calculations in the single-trace theory (2.1) with the effective perturbation (2.3), where ζ is given self-consistently by the solution of the "master equation" (2.4), in which the prime denotes differentiation. In principle, we can give ζ a space-time dependence so that (2.4) becomes a functional equation for an effective source. Such a generalization is appropriate to compute correlation functions in the multitrace-deformed theory. However, for the purposes of this paper we are only interested in vacuum properties of the master field, i.e. we consider only condensates and effective couplings that are constant in space-time. The "master equation" (2.4) implies that multitrace deformations whose single-trace "elementary" operator has a vanishing one-point function are equivalent (in the large-N limit) to single-trace deformations, i.e. a constant shift of the coupling dual to the single-trace operator O. Hence, in order to have specifically new phenomena associated to multitrace deformations we need non-vanishing condensates.
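A plausible reconstruction of equations (2.1)-(2.4), with the powers of µ chosen by us purely for dimensional consistency rather than copied from the original:

```latex
% (2.1) controlled deformation:    \delta W = \zeta\, O
% (2.2) multitrace deformation:    \delta W = F\!\left( O/\mu^{d_O} \right)
% (2.3) effective perturbation:    \delta W_{\mathrm{eff}} = \zeta\, O
% (2.4) master equation for the effective source:
\zeta \;=\; \mu^{-d_O}\, F'\!\left( \langle O \rangle_\zeta \,/\, \mu^{d_O} \right) ,
\qquad
G(\zeta) \;\equiv\; \zeta \;-\; \mu^{-d_O}\, F'\!\left( \langle O \rangle_\zeta / \mu^{d_O} \right) .
```

The zeros of G(ζ) are the candidate master fields referred to in the next subsection.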
This means that the auxiliary single-trace model with perturbation (2.3) must break conformal invariance either explicitly or spontaneously. Since the one-point function depends on the particular state that we are considering, it is plain that the physical properties of multiple-trace deformations have a strong dependence on the full physics of condensates of the associated single-trace model.
We may take F as a non-polynomial function of single-trace operators. However, we implicitly treat the non-linear terms as a perturbation since the scaling dimensions d O are defined with respect to the single-trace theory. At any rate, it is interesting to evaluate (2.4) when the function F becomes non-polynomial.
Our main observation in this paper is the following. The function G(ζ) may have a complicated structure, being non-linear in both ζ and the couplings of the bare Lagrangian W. In particular, if G(ζ) has various nodes, we have a set of solutions {ζ_α} for a given fixed value of the microscopic couplings in W. In this case we must select the master field that dominates the large-N dynamics among the various solutions ζ_α.
By analogy with similar situations in large-N physics we characterize the dominating master field by requiring that the partition function be maximized: Z(ζ*) = max_α Z(ζ_α) (2.5). Large-N phase transitions induced by the multitrace couplings will arise when the dominating zero of G(ζ) changes discontinuously as a function of the microscopic couplings in W. These phase transitions will be characterized by a "latent heat" release of O(N²). Typically, Z(ζ) will be a monotonic function of ζ at fixed W, so that a change of branch in (2.5) will require that the cardinality of the solution set {ζ_α} changes as a function of W.
Although the phenomena described so far are expected to be rather general, we will illustrate them in a specific example in the context of the AdS/CFT correspondence.
Multitraces in Deformed QCD
As a concrete example along the previous lines, we consider a regularized version of four-dimensional non-supersymmetric Yang-Mills theory that has been introduced in [10]. In its most straightforward definition, the model is given by the low-energy theory on the world-volume of a stack of N parallel D4-branes at finite temperature T. Equivalently, we can view it as a Scherk-Schwarz compactification of the D4-branes on S¹ × R⁴, the compact circle having size 1/T. At large distances on R⁴ the effective theory is a four-dimensional Yang-Mills theory modified at energies of O(T) by remnants of the five-dimensional N = 4 super Yang-Mills theory. The gauge coupling of the action is set by g_s, the string coupling, and α′, the string's Regge slope. For λ ≪ 1 we have the standard planar perturbation theory of the four-dimensional Yang-Mills theory. On the other hand, for λ ≫ 1 we have a good description in terms of the low-curvature expansion of the black D4-brane metric. In this case, the expansion parameter is controlled by the curvature of the near-horizon metric in string units, with R_c the curvature radius. Defining x as this expansion parameter, of order 1/λ, the supergravity description is good for 0 < x ≪ 1. At x ∼ 1 we have the standard "correspondence point" in the sense of [11], which represents the matching to the perturbative regime. As long as we only look at energy scales of O(1) in the large-N limit, we can neglect non-perturbative thresholds associated to large values of the dilaton, since these involve explicit powers of N.
The simplest multitrace perturbation in these models is a non-linear function of the Lagrangian density, According to (2.4), all physical quantities in this model, such as thermodynamic functions, condensates, Wilson loops, etc. , can be computed in the large-N limit in the auxiliary with effective 't Hooft coupling λ given by the solution of the equation where the gluon condensate L λ is determined by the expectation value of the action: The partition function in the planar supergravity approximation is defined in terms of the thermal free energy of the D4-brane (see, for example [12]): where C is a positive numerical constant. This expression for the partition function has been normalized to the Euclidean action of the wrapped D4-brane metric with supersymmetric boundary conditions, i.e. we define the five-dimensional thermal free energies with respect to the T = 0 vacuum.
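A scaling consistent with the statements surrounding (3.6)-(3.7) and (3.17) below (the condensate diverges as C/x̄² and the smallest x̄ maximizes Z); temperature and volume factors are suppressed, and the overall form is inferred by us rather than copied from the original:

```latex
% Schematic reconstruction of (3.6)-(3.7), with \bar x \equiv 1/\bar\lambda:
\ln Z(\bar\lambda) \;\simeq\; C\, N^2\, \bar\lambda \;=\; \frac{C\,N^2}{\bar x} ,
\qquad
\langle L \rangle_{\bar\lambda}
\;=\; -\frac{1}{N^2}\,\frac{\partial \ln Z}{\partial \bar x}
\;=\; \frac{C}{\bar x^{\,2}} .
```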
Notice that, even if the general multitrace deformation of the T = 0 D4-brane theory may break supersymmetry, the N = ∞ effective theory (3.3) does not. Hence, the D4-brane theory reduced on a supersymmetric circle will be supersymmetric at N = ∞ and no condensates will be induced. 4 This implies that the condensates are entirely due to thermal effects of the D4-brane theory and our normalization of (3.6) is the physically correct one.
Combining (3.5) and (3.6) we find the value of the gluon condensate (cf. [13]), given in (3.7). This expectation value has the crucial property of diverging as λ̄ → ∞. Since this is precisely the supergravity regime of the effective single-trace theory, we learn that multitrace deformations are potentially stronger in the region where AdS/CFT is under quantitative control and they may be reliably studied.
In terms of the dimensionless expansion parameters x ≡ 1/λ and x̄ ≡ 1/λ̄, the master equation takes the form (3.8). Equation (3.8) was derived within the supergravity approximation to the near-horizon black D4-brane solution. In terms of the supergravity expansion parameter x̄, this is the regime 0 < x̄ ≪ 1. As before, these limits ignore other thresholds that are related to large dilaton corrections and are of subleading order in the 1/N expansion.
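Given the condensate ⟨L⟩ = C/x̄² and the submanifold condition x − x̄ + F′ = 0 quoted in the next paragraph, equation (3.8) presumably reads:

```latex
% Reconstructed master equation (3.8):
G(\bar x) \;\equiv\; x \;-\; \bar x \;+\; F'\!\left[\, C/\bar x^{\,2} \,\right] \;=\; 0 ,
\qquad 0 < \bar x \ll 1 .
```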
One important property of (3.8) is the redundancy of the description in terms of the original variables in the microscopic Lagrangian, i.e. the coupling x and the multitrace couplings that define the function F. For a fixed value of x̄, all models in the codimension-1 submanifold M_x̄ : x − x̄ + F′ = 0 have the same large-N properties. The region of the microscopic coupling space where supergravity is a good approximation is the union of these submanifolds for 0 < x̄ < 1: S = ∪_{0<x̄<1} M_x̄ (3.10). One component of the boundary is M_{x̄=0}, defined by the x̄ → 0 limit of this condition (3.11); it yields the strong-coupling (low-curvature) limit of the AdS/CFT background. On the other hand, the correspondence line (the matching to perturbative variables) occurs at x̄ = 1 (3.12). Although M₀ ∪ M₁ are components of the boundary of S, they do not exhaust it in general.
Multicritical Behaviour
For 0 < x ≪ 1 there is always a standard solution of (3.8) that is valid for very small multitrace couplings. This solution has x̄ ≈ x and can be obtained iteratively as the limit of the sequence {x̄^(k)}, with x̄^(0) = x and x̄^(k+1) = x + F′[C/(x̄^(k))²]. However, it is clear that there will be other solutions if F′[C/x̄²] shows "bumps" in the supergravity interval 0 < x̄ < 1.
Let us assume that F′ admits a finite Laurent expansion around the origin, so that the master equation takes the form (3.14). The j = 0 term is equivalent to a constant shift of x and has been removed from (3.14).
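Expanding F′ in its argument C/x̄², and absorbing powers of C into the coefficients f_j, equation (3.14) presumably takes the form (our reconstruction):

```latex
% Reconstructed form of (3.14), with the j = 0 term removed:
G(\bar x) \;=\; x \;-\; \bar x \;+\; \sum_{j \neq 0} f_j\, \bar x^{\,-2j} \;=\; 0 .
```

Poles of F (j < 0) then contribute positive powers of x̄, which are tame as x̄ → 0, while polynomial multitrace terms (j > 0) contribute the divergent negative powers discussed next.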
Our first result is a simple consequence of the divergence of (3.7). The pole part of F, corresponding to j < 0 in (3.14), has no dramatic effects in the supergravity interval 0 < x̄ ≪ 1. Thus, multitrace deformations that are completely singular in perturbation theory become rather tame in the supergravity approximation. This looks surprising at first sight, but it fits naturally with the character of AdS/CFT as a strong/weak coupling duality with respect to the 't Hooft coupling.
Conversely, perturbations that are polynomial in multitraces translate into non-analytic contributions to G(x̄) and therefore dominate the supergravity regime at x̄ → 0.
In this limit G(x̄) diverges with a sign that is correlated with that of f_J, J being the largest value of the index j. In particular, for f_J < 0 and small there is always a solution x̄ ≈ (|f_J|/x)^{1/(2J)}. This solution disappears for f_J > 0, unless one also dials the microscopic 't Hooft coupling to negative values: x < 0.
We have found that for 0 < x < 1 and small |f_J|, we have a discrete jump in the number of solutions of the master equation as f_J crosses zero. This is a source of possible phase transitions.
A more general multicritical behaviour in the vicinity of x̄ ≈ 0 will depend on the higher multitrace powers. Let us consider a simple example of a deformation (3.16) with two multitrace couplings g₁ and g₂, for which the master equation (3.14) reads x − x̄ + f₁ x̄⁻² + f₂ x̄⁻⁴ = 0, with f_i ∼ g_i up to numerical constants. Besides the standard solution x̄₊ ≈ x for very small f_j there are other interesting solutions. Consider x > 0, f₂ > 0 and f₁ < 0, with f₂ ≪ |f₁| ≪ x and furthermore x f₂ ≪ f₁². Then, the master equation has two small solutions in the vicinity of x̄² ≈ (−f₁ ± √(f₁² − 4x f₂))/(2x). These solutions coincide for f₁² ∼ x f₂ and disappear for larger values of f₂.
Phase Transitions
The previous discontinuities in the solution set of the master equation translate into large-N phase transitions. Since the partition function scales as in (3.17), ln Z ∝ C N²/x̄, we find that the dominant solutions in the supergravity approximation are those with the smallest value of x̄ within the unit interval. The jump of the effective action across the transition from x̄_α to x̄_β is given by (3.18). Coming back to the examples in the previous subsection, we see that there is always a phase transition when f_J crosses zero from negative to positive values. In this case x̄_α = 0 and x̄_β ≈ x > 0. The density of "latent heat" in (3.18) is infinite. This phase transition is not hard to interpret. Since f_J is the coupling of the multitrace interaction of highest order, it dominates the limit of large field-strengths. Hence, the very strong singularity for f_J → 0⁻ reflects the fact that the microscopic action is not bounded below for f_J < 0.
A more physical phase transition with finite "latent heat" takes place in the two-coupling model (3.16) with x > 0, f₂ > 0 and f₁ < 0, when the two solutions around x̄²_− ∼ −f₁/x coalesce as we decrease the magnitude of |f₁|/f₂. For small values of this ratio the only solution is x̄ ≈ x.
This example illustrates the general pattern of phase transitions in this class of models. When the minimal solution x̄_− of the master equation is separated from the first subleading one, x̄′_−, by a local maximum of G(x̄), a variation of the parameters can bring the maximum to zero and make the two solutions coalesce, x̄_− = x̄′_−. A further variation of the parameters can bring the maximum to negative values and make the double solution disappear. This generic situation is depicted in Fig. 1 below, and is illustrated numerically in the sketch that follows.
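To make the multiplicity structure concrete, the sketch below scans the reconstructed two-coupling master equation G(x̄) = x − x̄ + f₁x̄⁻² + f₂x̄⁻⁴ for sign changes on the supergravity interval; the functional form is our reading of (3.16), and the sample couplings are illustrative, not taken from the paper.

```python
import numpy as np

def G(xbar, x, f1, f2):
    # Reconstructed two-coupling master equation (our reading of (3.16)):
    # G(xbar) = x - xbar + f1 * xbar**-2 + f2 * xbar**-4
    return x - xbar + f1 / xbar**2 + f2 / xbar**4

def count_roots(x, f1, f2, n=200000, lo=1e-4, hi=1.0):
    # Count sign changes of G on the supergravity interval 0 < xbar < 1.
    xb = np.linspace(lo, hi, n)
    g = G(xb, x, f1, f2)
    return int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))

# Illustrative couplings: x > 0, f2 > 0, f1 < 0, with f2 << |f1| << x.
# Increasing f2 past f1**2 / x makes the pair of small roots disappear.
x = 0.5
for f1, f2 in [(-1e-2, 1e-5), (-1e-2, 1e-3), (-1e-2, 1e-1)]:
    print(f"f1={f1}, f2={f2}: {count_roots(x, f1, f2)} solution(s)")
```

Running this shows three solutions for x f₂ ≪ f₁² and a single solution once the discriminant changes sign, the coalescence mechanism described above.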
Generalization to Other Dimensions
This set-up can be generalized to the regularized Yang-Mills model on R^p, with p < 5, in terms of a hot Dp-brane model and the corresponding generalization of the AdS/CFT correspondence [14]. In this case, the effective dimensionless 't Hooft coupling normalized at the cutoff scale µ = T is given by (4.1); this is the expansion parameter of the planar perturbative expansion. The expansion parameter of the supergravity approximation, which arises at λ_p(µ) ≫ 1, is given by (4.2). The large-N solution of these models perturbed by multitrace interactions of the form (4.3) can be studied along lines similar to the p = 4 case above. Here, L denotes the Yang-Mills Lagrangian operator, corrected by regularization artefacts at the scale µ = T. As before, one reduces the problem to the study of an effective single-trace model with supergravity expansion parameter x̄ that is determined self-consistently. The supergravity regime of the N = ∞ problem is then given by 0 < x̄ ≪ 1. The partition function in the single-trace model with effective coupling x̄ is given by (4.4), where C_p is a positive numerical constant, and the gluon condensate by (4.5). These expressions show that the p = 3 case, based on the hot D3-brane, yields trivial multitrace deformations in this approximation. This is a consequence of the free energy of D3-branes being very smooth for large 't Hooft coupling. Of course, this situation changes when considering subleading terms in the α′ expansion of the supergravity background. It is interesting to study these corrections in more detail, although we will not attempt to do this here.
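For orientation, a form of (4.1) and (4.4) consistent with the surrounding statements (the p = 3 free energy is independent of the coupling, and the exponent changes sign for p < 3); the exponents follow standard hot Dp-brane thermodynamics of the type developed in [14] and are inferred by us rather than copied:

```latex
% Schematic reconstruction of (4.1) and (4.4):
\lambda_p(\mu = T) \;=\; g^2 N\, T^{\,p-3} ,
\qquad
\ln Z \;\simeq\; C_p\, N^2\, \bar\lambda_p^{\,\frac{p-3}{5-p}} .
```

For p = 4 the exponent is 1, recovering ln Z ∝ 1/x̄; for p = 3 it vanishes, making the deformation trivial; and for p < 3 it is negative, so Z grows as λ̄_p decreases, which is why the largest root dominates below.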
For p < 3 one finds a situation somewhat similar to that discussed before in the p = 4 case. The master equation for x̄ reads as in (4.6). Hence, the same qualitative properties follow, regarding the multiplicity of solutions at small x̄. In particular, the crucial singularity at x̄ = 0 of the gluon condensate (4.5) still holds.
The main difference with p = 4 is that, according to (4.4), for p < 3 it is the largest solution x̄₊ that dominates the partition function. As a result, we expect that the standard solution x̄ ≈ x will dominate and that sharp phase transitions will be more difficult to produce than in the p = 4 case.
Conclusions
In this paper we have studied some simple multitrace deformations of the basic non-supersymmetric QCD model in [10], as well as its generalizations to less than four dimensions. In particular we have considered deformations by a non-linear function of the Lagrangian operator.
Our main result is the emergence of new types of "multicritical" behaviour, similar in many ways to those studied in the context of matrix models [8]. There appear various competing master fields whose dynamics yields new examples of large-N phase transitions. It turns out that the dynamical effect of multitrace deformations is particularly strong in the supergravity approximation to the AdS/CFT master field.
These results suggest various avenues for further research. It would be interesting to study more examples of large-N phase transitions induced by multitrace deformations. Eventually, these phase transitions should be related to the breakdown of string perturbation theory in the geometrical description of the large-N master field. Another interesting question is the effect of multitrace deformations on other large-N phase transitions that have been identified in single-trace models, in particular, the phase transitions associated to theta-dependence in [15] or those related to finite-size effects, as in [16,10,12].
seqgra: principled selection of neural network architectures for genomics prediction tasks
Abstract
Motivation: Sequence models based on deep neural networks have achieved state-of-the-art performance on regulatory genomics prediction tasks, such as chromatin accessibility and transcription factor binding. But despite their high accuracy, their contributions to a mechanistic understanding of the biology of regulatory elements are often hindered by the complexity of the predictive model and thus poor interpretability of its decision boundaries. To address this, we introduce seqgra, a deep learning pipeline that incorporates the rule-based simulation of biological sequence data and the training and evaluation of models, whose decision boundaries mirror the rules from the simulation process.
Results: We show that seqgra can be used to (i) generate data under the assumption of a hypothesized model of genome regulation, (ii) identify neural network architectures capable of recovering the rules of said model and (iii) analyze a model's predictive performance as a function of training set size and the complexity of the rules behind the simulated data.
Availability and implementation: The source code of the seqgra package is hosted on GitHub (https://github.com/gifford-lab/seqgra). seqgra is a pip-installable Python package. Extensive documentation can be found at https://kkrismer.github.io/seqgra.
Supplementary information: Supplementary data are available at Bioinformatics online.
Introduction
Over the last 5-10 years, neural networks were successfully applied to make large gains on a wide range of tasks in such diverse fields as computer vision, computer audition, natural language processing and robotics. While the structure and the semantics of the data used to train and evaluate neural networks can be vastly different, the core learning algorithms are almost always the same and the neural network architectures are often composed of similar building blocks. This is also true for the field of genomics, and computational biology as a whole, where deep neural networks are trained on data that are obtained experimentally using functional genomics assays such as DNase-seq (Boyle et al., 2008), ATAC-seq (Buenrostro et al., 2013) and ChIP-seq. Motivated by their success, architectural building blocks commonly seen in these networks, such as convolutional layers, recurrent layers, batch normalization, dropout and skip connections (Kelley et al., 2016; Nair et al., 2019; Quang and Xie, 2016; Zhou and Troyanskaya, 2015), have been imported from computer vision and other fields. This cross-fertilization between fields and the general applicability of the building blocks of deep learning has more recently been seen in the adoption of transformer-based architectures for image classification tasks in computer vision and protein prediction tasks in biology. However, most datasets used to train supervised deep learning models in biology are different from datasets in computer vision and natural language processing in two ways. (i) Biological problems contain noisy input and noisy labels in that not only is there substantial intraclass variability and noise in the input, e.g. images labeled as cat contain cats that vary in terms of breed, color, position, pose, etc., but also a significant fraction of examples are mislabeled, i.e. images labeled as cat are empty or contain dogs. This is rare in computer vision datasets, but common in datasets derived from functional genomics assays. (ii) Feature attribution or other model explanation methods are not human-interpretable. We understand images of cats in the sense that we know which parts of the image contain information that is relevant for the classification (because they belong to the cat) and which parts are irrelevant (because they belong to the background). This intuitive understanding is necessary when attribution methods such as saliency maps are applied to assess a model's ability to base predictions on relevant parts of the input. In biology, examples often include DNA sequence windows of various widths, most commonly 1000 base pairs (bp), which, unlike images of cats, are not human-readable. This biology-specific issue of inherently opaque examples exacerbates the general interpretability issue of deep neural networks, whereas the lack of high-quality datasets contributes to the reproducibility crisis and makes it more difficult to compare architectures, as they are often only evaluated on a custom dataset.
The method introduced here, seqgra, attempts to improve the process by which neural network architectures are chosen for specific genomics prediction tasks and provides a framework to evaluate model interpretation methods. Its fully reproducible pipeline provides a means to (i) simulate data based on a predefined set of probabilistic rules, (ii) create and train models based on a precise description of their architecture, loss, optimizer and training process and (iii) evaluate the trained models using conventional test set metrics as well as an array of feature attribution methods. These feature attribution methods in combination with simulated data and thus perfect ground truth enable an analysis of the model's decision boundaries and how well they capture the underlying rules of the data generation process from step 1. Utilizing this framework, models are not only evaluated based on their predictive performance, but also on the ability to recover the vocabulary (e.g. specific transcription factor binding site motifs) and grammar (e.g. spacing constraints between interacting transcription factors) of the dataset, while assigning little weight to confounding factors and idiosyncratic noise.
Efforts in this area include Kipoi (Avsec et al., 2019), a repository for trained genomics models, and Selene (Chen et al., 2019), a framework for biological sequence-based deep learning models that supports training of PyTorch models, model evaluation with conventional test set metrics (ROC and precision-recall curves), and variant effect prediction and in silico mutagenesis of trained models. To our knowledge none of the existing methods offer functionality for simulating data using a general framework of probabilistic rules, nor do they incorporate feature attribution methods.
Furthermore, this simulation-based framework can also serve as a means to investigate the strengths and weaknesses of various feature attribution methods across different neural network architectures that are trained on datasets with varying degrees of complexity. With simulated and thus perfect data, the idiosyncrasies of attribution methods can more easily be exposed.
Motif database
We used HOMER motifs for all grammar sequence elements that were based on transcription factor binding site motifs. These motifs were obtained by analyzing data from publicly available ChIP-seq experiments (Heinz et al., 2010).
Feature importance evaluators
While conventional test set metrics, such as ROC curves and precision-recall curves, assess model performance based on a set of examples (e.g. the test set), feature importance evaluators (FIEs) quantify the contribution of each input feature to the model's prediction. In the context of seqgra, FIEs are used to assess what we call grammar or vocabulary recovery, the degree to which a model was able to align its decision boundaries with the rules of the grammar that was used to simulate the data it was trained on. This is possible because for simulated data we not only know the ground truth label for each example, but also which positions are part of the background and thus contain no information about the class label, and which positions were altered by a grammar rule and thus do contain information about the class label. These position-level annotations (background positions, grammar positions) are provided for all simulated examples.
More formally, FIEs take a model f(x), a target y and an example x_i of width n, and return z, an n-dimensional vector that contains the attribution value (also known as importance, relevance, contribution) of each input position to model f(x) predicting target y. Please note that n is the sequence length of the example, not the number of features. For instance, if the input to the model is a 150 nt DNA sequence, x_i is a 150 by 4 matrix (one-hot encoded), containing 600 features, but its width n = 150.
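As a concrete illustration of this convention, here is a minimal sketch (ours, not part of the seqgra API) of a one-hot encoded DNA example and the shape contract of an FIE:

```python
import numpy as np

ALPHABET = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    # Encode a DNA sequence of width n as an n-by-4 one-hot matrix.
    x = np.zeros((len(seq), len(ALPHABET)))
    for pos, nt in enumerate(seq):
        x[pos, ALPHABET.index(nt)] = 1.0
    return x

x_i = one_hot("ACGT" * 37 + "AC")  # a 150 nt window
print(x_i.shape)  # (150, 4): 600 features, but width n = 150

# An FIE maps (model f, target y, example x_i) to an n-dimensional
# attribution vector z: one value per sequence position, not per feature.
```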
Gradient-based feature importance evaluators
This large class of FIEs uses backpropagation to calculate the partial derivatives of the output, f_y(x), with respect to the input, x_i. seqgra includes seven gradient-based FIEs off-the-shelf, whose implementations are based on code by Wang (2018).
The most basic FIE, raw gradient (Simonyan et al., 2014), just returns the gradient with respect to the input example x_i,

z_RG = ∂f_y(x)/∂x_i,  (1)

or ∇f_y(x_i) for short, where f_j(·) is the activation of the target neuron in the output layer, e.g. class j for multiclass classification tasks. The absolute gradient method or saliency is defined as z_AG = |∇f_y(x_i)|, where |x| applies the element-wise absolute value operation to vector x.
Gradient-x-input (Baehrens et al., 2010; gradient times input) is defined as z_GI = x_i ⊙ ∇f_y(x_i), the element-wise product of the input and its gradient. Integrated Gradients (Sundararajan et al., 2017) take the average of multiple (here, K = 100) gradients evaluated along the linear path from the baseline x_0 (which in seqgra is the zero vector) to the input example x_i. The method is defined as z_IG = (x_i − x_0) ⊙ (1/K) Σ_{k=1..K} ∇f_y(x_0 + (k/K)(x_i − x_0)). seqgra also supports gradient-based methods that alter the way the gradient is obtained using backpropagation, namely Guided Backpropagation (Springenberg et al., 2015), Deconvolution (Zeiler and Fergus, 2014) and DeepLIFT (Shrikumar et al., 2017). The details of these methods are beyond the scope of this work.
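A minimal PyTorch sketch of the Integrated Gradients computation as defined above (zero baseline, K = 100 steps); this is our own illustration, not seqgra's internal implementation, which is based on code by Wang (2018):

```python
import torch

def integrated_gradients(model, x_i, target_y, K=100):
    # x_i: one-hot input of shape (n, 4), a plain tensor without gradients;
    # the baseline x0 is the zero vector, as in seqgra.
    x0 = torch.zeros_like(x_i)
    total_grad = torch.zeros_like(x_i)
    for k in range(1, K + 1):
        # Point on the linear path from baseline to input.
        x_k = (x0 + (k / K) * (x_i - x0)).requires_grad_(True)
        out = model(x_k.unsqueeze(0))[0, target_y]  # activation f_y(x_k)
        grad = torch.autograd.grad(out, x_k)[0]
        total_grad += grad
    # Average path gradient times (input - baseline), summed over the
    # 4 nucleotide channels to give one attribution value per position.
    z = ((x_i - x0) * total_grad / K).sum(dim=-1)
    return z  # shape (n,)
```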
Model-agnostic feature importance evaluators
Model-agnostic FIEs do not require access to the gradients and make no assumptions about the structure of the model, hence the name. They rely solely on the ability to evaluate f_y(x) for various altered versions of x. Sufficient input subsets (SIS) (Carter et al., 2019) is a perturbation-based method that identifies subsets of input features that are sufficient to keep f_y(x) > s, i.e. if all other features are masked, the class prediction does not change (is still above some threshold s). Unlike gradient-based FIEs, which return a real-valued vector of feature attributions, SIS returns a binary vector, indicating for each feature whether it is part of an SIS or not.
Hardware infrastructure
Models presented in this paper were trained on three compute nodes with a total of six CPUs (2× Intel Xeon E5-2630 v4, 2× Intel Xeon Gold 6138, 2× Intel Xeon Gold 6240), 26 GPUs (8× NVIDIA GeForce GTX 1080 Ti with 11 GB GDDR5X, 10× NVIDIA GeForce RTX 2080 Ti with 11 GB GDDR6 and 8× NVIDIA Titan RTX with 24 GB GDDR6), and a total of 833 GB of main memory. The total GPU time (for training and evaluation) was roughly 12 GPU months.
seqgra provides a reproducible, simulation-based framework for neural network architecture evaluation
The method we describe in this paper (seqgra) generates synthetic biological sequence data according to predefined probabilistic rules in order to either (i) evaluate neural network architectures trained on these datasets or (ii) compare feature attribution methods in a setting with perfect dense (position-specific) labels. In the former scenario, the result would be a neural network architecture that, when trained on datasets generated from a similar set of rules, has high predictive performance and decision boundaries that closely reflect that set of generative rules. The goal of the latter approach is to investigate the interplay between grammar complexity and model complexity and how they influence feature attribution methods.
A dataset in the context of seqgra, whether obtained by simulation or experiment, is always divided into three subsets, training set, validation set and test set. Each of the subsets comprises a number of supervised examples, which are (x, y, a)-triplets. Here, the input variable x is a biological sequence (DNA, RNA, protein) of fixed or variable length, also referred to as sequence window or features; y is the target variable, the condition this example belongs to (e.g. cell type), which is either a mutually exclusive class or a non-mutually exclusive label, for multiclass classification tasks or multilabel classification tasks, respectively; and a is the positional annotation of the example, denoting for each position in x whether it is part of the grammar or part of the background. Grammar positions contain information related to y and are therefore important for classification, whereas background positions do not and are thus irrelevant for classification.
The core functionality of seqgra can be broken down into three components: (i) simulator, (ii) learner and (iii) evaluator. Each component corresponds to a distinct step in the pipeline depicted in Figure 1A.
In step 1, the simulator generates a synthetic dataset according to the specifications laid out in the data definition, a document that contains a precise description of the generated data, from the background nucleotide distribution to the set of probabilistic rules that determines how information about the condition y (label, class) is encoded in the sequence window x. This set of probabilistic rules is also referred to as grammar or sequence grammar throughout this manuscript (hence the name seqgra), and although related to formal grammars, seqgra's probabilistic rules are not expressed as and not equivalent to production rules in the context of formal language theory.
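To give a feel for the kind of information a data definition carries, here is a hypothetical sketch written as a Python dictionary; the actual seqgra data definition is a structured document whose exact schema is described in the package documentation, so all field names below are illustrative only:

```python
# Hypothetical data definition sketch (illustrative field names,
# not the actual seqgra schema).
data_definition = {
    "task": "multiclass classification",
    "sequence_space": "DNA",
    "background": {
        "distribution": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
        "window_width": 150,
    },
    "conditions": [
        {   # class label y and the probabilistic rule encoding it in x
            "label": "cell type 1",
            "rules": [{"element": "PWM:TF1", "position": "random",
                       "probability": 1.0}],
        },
        {
            "label": "cell type 2",
            "rules": [{"element": "PWM:TF2", "position": "random",
                       "probability": 1.0}],
        },
    ],
}
```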
Schematic depictions of six toy datasets, generated from probabilistic rules of varying complexity, are shown in Figure 1B. In each case, the dataset contains examples belonging to one of four classes and the probabilistic rules determine how information about the class y (in this case, the cell type) is encoded in the sequence window x. The ability to recover this relationship during training is imperative for the model's predictive performance. The sequence windows of the examples are shown as gray bars with colored spots, where background positions are shown in gray and grammar positions are shown in color. In the first example, each of the four cell types can easily be identified by the presence of a class-specific k-mer at the center of the sequence window, a relationship that, unsurprisingly, can be learned perfectly (i.e. close to an ROC AUC of 1.0) and efficiently (i.e. with few training examples) by most neural network architectures. Since a set of rules as simple as the one used in example 1 will almost always be an inadequate description of any biological process, seqgra allows for various ways to increase the complexity. Example 2 represents a small step up in complexity by replacing the fixed, class-specific k-mer with a class-specific position weight matrix (PWM), which is a common representation of naturally occurring sequence elements, such as binding sites for a transcription factor. Another small step up in complexity is example 3, where the PWM is placed randomly within the sequence window. In example 4, none of the PWMs is class-specific, only a combination of PWMs. Rules like these could be used to model cell type specific chromatin accessibility that is dependent on the interaction between transcription factors. Examples 5 and 6 encode class information in the relative position of PWMs instead of their presence or absence, with example dataset 5 using class-specific order constraints and example dataset 6 using class-specific spacing constraints.
Once the synthetic dataset is generated, it is used by the learner component in step 2 to train a neural network model. It is important to note that the learner only has access to x and y of the (x, y, a) example triplets, and the positional annotations a are only utilized in step 3. Analogous to the role of the data definition for the simulator in step 1, the model definition serves as a blueprint for the learner by providing a precise description of the neural network architecture, the loss function, the optimizer and hyperparameters of the training process, and thus ensuring a reproducible model creation, training and serving process for both PyTorch and TensorFlow models.
In step 3, the fully trained model from step 2 is then evaluated with the help of an array of conventional test set metrics and FIEs, such as Integrated Gradients (Sundararajan et al., 2017) and SIS (Carter et al., 2019).
As a means to illustrate the various inputs and outputs of this pipeline, we prepared the results of a single seqgra analysis in Supplementary Figure S2 (using DNA sequences as input) and Supplementary Figure S3 (using protein sequences as input) and describe the process in Supplementary Section S1.7.
seqgra-enabled ablation analysis reveals most efficient neural network architecture
Ablation, a technique widely used in neuroscience to determine the functions of brain regions by removing them one by one, has been used similarly to identify the relevant components of an artificial neural network (Lillian et al., 2018; Meyes et al., 2019). We performed ablation analysis to determine the effects of dropout (Srivastava et al., 2014) and batch normalization (Ioffe and Szegedy, 2015) on the predictive performance and grammar recovery of a basic neural network architecture with two hidden layers, a convolutional layer with 10 21-nt wide filters, followed by a dense layer with 5 hidden units, and dropout or batch normalization operations after each layer (a sketch of this base architecture is shown below). Models were trained on binary classification datasets generated by grammars using class-specific HOMER motifs (see schematic in Fig. 2A), class-specific order of HOMER motifs (Fig. 2B) and class-specific spacing constraints between HOMER motifs (Fig. 2C). Test set precision-recall curve AUCs are shown for all models across all grammars in Figure 2D. Unsurprisingly, the predictive performance of all architectures increases with dataset size, and all architectures approach a PR AUC of 1.0 for sufficiently large datasets. But this analysis reveals a striking difference between the neural network architectures in terms of their efficiency, i.e. how many training examples are required to reach an AUC of approximately 1.0. On the grammars tested here, batch normalization had a negative effect on efficiency, requiring up to 100 000 examples more to converge than architectures without the operation. The architecture with dropout after each hidden layer was the most efficient and highest performing, both in terms of predictive performance and grammar recovery (i.e. the model's propensity to classify examples based on grammar positions), as shown in Figure 2E.
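For reference, a PyTorch sketch of the basic architecture described above (10 convolutional filters of width 21 nt, a dense layer with 5 hidden units, dropout after each hidden layer); hyperparameters such as the dropout rate are our assumptions where the text does not specify them:

```python
import torch.nn as nn

class BasicNet(nn.Module):
    def __init__(self, window_width=150, num_classes=2, p_dropout=0.5):
        super().__init__()
        self.conv = nn.Conv1d(4, 10, kernel_size=21)  # 10 filters, 21 nt wide
        self.drop1 = nn.Dropout(p_dropout)            # dropout after conv layer
        self.dense = nn.Linear(10 * (window_width - 20), 5)
        self.drop2 = nn.Dropout(p_dropout)            # dropout after dense layer
        self.out = nn.Linear(5, num_classes)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, window_width, 4) one-hot -> (batch, 4, window_width)
        h = self.relu(self.conv(x.transpose(1, 2)))
        h = self.drop1(h).flatten(start_dim=1)
        h = self.drop2(self.relu(self.dense(h)))
        return self.out(h)
```

The batch-normalization variants of the ablation simply replace the dropout operations with nn.BatchNorm1d layers of matching dimensionality.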
DeepSEA dominates comparison of popular genomics deep learning architectures
Furthermore, we compared three popular neural network architectures used in the field of genomics, Basset (Kelley et al., 2016), ChromDragoNN (Nair et al., 2019) and DeepSEA (Zhou and Troyanskaya, 2015). All three architectures were devised with functional genomics datasets in mind and were originally trained on multilabel classification datasets obtained from numerous DNase-seq assays, with ChromDragoNN also utilizing RNA-seq and DeepSEA ChIP-seq data. With over 4 million (Basset), over 6 million (DeepSEA) and over 20 million (ChromDragoNN) trainable parameters, all three can be considered high-capacity models. The three architectures make use of commonly used building blocks such as convolutional, followed by dense layers (all three), max pooling and dropout operations (all three), ReLU activation functions (all three), batch normalization (Basset and ChromDragoNN) and skip connections (ChromDragoNN). Input and output layers were adjusted to fit the prediction task and architectures were trained on simulated datasets from scratch without pretraining on their original datasets.
We used the area under the microaveraged precision-recall curve to evaluate the test set predictive performance on four multiclass classification tasks (with 2, 10, 20 and 50 classes) and three or four grammars each, with a sequence window of 1000 nucleotides. The results are shown in Supplementary Figure S6A for binary classification, and Supplementary Figures S6B, S6C and S6D for multiclass classification with 10, 20 and 50 classes, respectively. The HOMER motifs used by the grammars presented here are listed in Supplementary Tables S2-S5. Each panel contains precision-recall AUCs of models trained on datasets generated by one grammar, using five different random seeds for simulation (error bars) and 19 different dataset sizes. The DeepSEA architecture exhibited an at times substantially higher predictive performance than Basset and ChromDragoNN and was the highest performing architecture on all tested datasets. While DeepSEA is the preferred architecture on datasets derived from the grammars we tested, this is not necessarily true for datasets with other grammars or experimentally obtained data. ChromDragoNN, e.g., is intended to also be trained on RNA-seq data, which we did not provide. Interestingly, we observed that high-capacity architectures such as those tested here perform better on datasets generated by grammars that include interactions, specifically interactions that encode the class label in the order or spacing of the interacting sequence elements. This is not the case for small-scale architectures with less than 100 000 trainable parameters, which, as expected, do better on grammars without interactions, where the class label is encoded in the presence of class-specific sequence elements.

Fig. 1. (A) First, a simulator generates synthetic data according to the rules and specifications defined in the data definition file. Second, a learner creates a neural network model whose architecture and hyperparameters are specified in the model definition file, and trains it on the synthetic data from step 1. And third, the trained model is evaluated in terms of predictive performance and its ability to recover the rules specified in the data definition file. (B) The data definition specifies the basic properties of the synthetic data, including the alphabet (e.g. DNA, RNA, protein) and its distribution, as well as condition-specific rules (the grammar), which determine how information about the label y is encoded in the input x. (C) The model definition contains all information required to create and train the model. (D) A schematic of six simulated toy datasets for multiclass classification, where the classes y correspond to cell types and the input x are sequence windows (depicted as gray bars) that encode information about the class y at certain positions in x (colored areas). The rules that determine how this information is encoded range from basic (cell type specific k-mer at fixed position) to complex (non-specific combinations of PWMs with cell type specific spacing constraints).
High predictive performance of simulation-vetted neural network architecture recapitulated with ChIP-seq data
In this section, we address the question of whether neural network architectures that perform well on simulated data also succeed on data obtained experimentally. We decided to model the well-known hetero-dimeric pair of transcription factors SOX2 and POU5F1, whose spacing constraints were previously characterized (Chew et al., 2005; Guo et al., 2012). To that end, we used the HOMER motifs SOX2_HUMAN.H11MO.0.A and PO5F1_HUMAN.H11MO.1.A as sequence elements in the data definition. We also included spacing constraints (0-3 bp between SOX2 and PO5F1 motifs). Figure 3A shows a schematic depiction of the analysis.
The experimental dataset was based on two ChIP-seq assays, which targeted the two transcription factors. The preprocessed data were obtained from the Cistrome Data Browser (Mei et al., 2017), specifically the data associated with GEO IDs GSM1701825 for SOX2 and GSM1705258 for POU5F1.
We evaluated the same neural network architectures on both the simulated and the experimental datasets. The architecture described in Figure 3B with one fully connected layer (not counting the output layer) is an example of an architecture that does not assume any structure in the input. It is a naive architecture in the sense that it was constructed without any knowledge about the grammar that was used to simulate the data. The architecture described in Figure 3C, on the contrary, makes assumptions about the data that are in agreement with the grammar, such as a 1D spatial structure with information encoded in 11-nt long code words (enough to cover the SOX2-POU5F1 interaction), whose position in the sequence window is irrelevant.
As expected, the test set predictive performance of the naive architecture (Fig. 3D) was significantly lower than that of the grammar-informed architecture (Fig. 3E). Furthermore, the performance on the simulated data proved to be a good predictor for the performance on the experimental data (Fig. 3D and E).
The agreement between feature importance and the grammar positions, a proxy for a model's ability to recover the SOX2 and POU5F1 motifs, is shown in Figure 3F for the naive architecture and in Figure 3G for the grammar-informed architecture. The grammar-informed model's predictions were based almost exclusively on grammar positions (positions that contained SOX2 and POU5F1 motifs), whereas this was not the case for the naive model. Both panels were created with the Integrated Gradients FIE.
Discussion
In this paper, we introduced seqgra, a deep learning infrastructure method for genomics. It is intended to streamline the development of deep learning models for biological sequence-based prediction tasks, by providing a reproducible unified framework for (i) flexible, rule-based synthetic data generation; (ii) model training and (iii) model evaluation with conventional test set metrics and feature attribution methods. This three-step pipeline supports datasets obtained by simulation and experiment, models implemented in PyTorch and TensorFlow, and numerous gradient-based feature attribution methods as well as SIS, a model-agnostic feature attribution method, in addition to conventional ROC and precision-recall curves for model evaluation. Our method greatly simplifies an array of commonly performed diagnostics and performance assessments of deep learning models, such as ablation analysis, estimation of dataset size requirements and tolerated noise thresholds. The simulator and the language of the probabilistic rules are flexible enough to span multiclass and multilabel classification tasks with any number of classes or labels, DNA or amino acid sequence windows of variable or fixed length, class-dependent background distributions, sequence elements defined as PWMs or lists of k-mers with associated probabilities, and interactions between sequence elements with associated order or spacing constraints.
Moreover, the controlled environment of data simulation and reproducible model training, serving and evaluation makes seqgra a suitable testbed for feature attribution and interpretability methods and their interdependencies with neural network architectures and the complexity level of the training data. In addition, the framework can be used to perform extensive comparisons between deep learning libraries, which are rarely done (see Supplementary Figs S10 and S11), or to identify undocumented behavior of the deep learning technology stack, such as an unusual training instability caused by a random seed of zero on some grammar-architecture combinations, which is reproducible and occurs in both PyTorch and TensorFlow (see Supplementary Figs S7 and S8). To avoid confusion, we would like to point out that seqgra is not a neural architecture search technique in the sense that it will not propose suitable neural network architectures for a particular dataset. The model definition is an input, not an output of the seqgra pipeline. However, seqgra can be used in conjunction with neural architecture search, such as AMBER (Zhang et al., 2021), a neural architecture search method for architectures aimed at genomics prediction tasks, or general hyperparameter optimization methods, such as Hyperband (Li et al., 2016). Likewise, seqgra currently does not automatically explore the space of generative rules to find a set of rules that match a particular experimental situation. The rules underlying the generative process of the simulator are an input to seqgra (the data definition) and usually based on domain expertise. Furthermore, if the goal is to find the model with the highest predictive performance on a particular experimental dataset, a general hyperparameter optimization approach such as Hyperband, or neural architecture search exploring a carefully selected hyperparameter subspace, is expected to outperform seqgra. However, these (in all likelihood) very-high-capacity models tend to be less useful when the primary concern is not predictive performance, but a better understanding of the underlying rules. Oftentimes a simpler model with lower predictive performance is better than a complex model with higher predictive performance, especially when dealing with biological data where noise levels are high and often systematic, e.g. biases introduced by the assay that are present in both training and test sets, but are not part of the underlying biological systems. The predictive performance gains obtained by exploiting these biases are often undesired.
One caveat of all simulation-based approaches is the inevitable gap between simulated and real-world datasets, in the sense that the former is always a simplified approximation of the latter. Thus, insights gained from simulated data might not carry over to the experimental world. In fact, to a certain degree, this will always be the case. However, while high-performing neural network architectures on simulated data might not perform as highly on experimental data, the opposite is rarely the case, i.e. low-performing architectures in simulation are unlikely to improve when trained on noisier and/or smaller experimental datasets. Moreover, if a model performs well on both simulated and experimental data, that does not imply that the underlying grammar rules of the simulated data are similar to the rules governing the experiment. The opposite situation, where the model performs well on simulated data and poorly on experimental data, in contrast, is oftentimes more insightful as it suggests that either the underlying rules are different or if the rules are similar, the model fails to learn them because of high noise levels in the experimental data or a paucity of experimental data available for training.
While the intricacies of noisy and biased high-throughput genomics experiments make for highly complex and poorly understood datasets, training highly complex alchemy-like (Hutson, 2018a) deep neural networks on them contributes little to a mechanistic understanding of the biological processes that are at work underneath and might worsen the reproducibility crisis in both machine learning (Hutson, 2018b) and biology (Baker, 2016; Begley and Ellis, 2012). Simulated data, however, are perfectly understood, their noise levels controlled and any biases artificially introduced and accounted for, which makes them an excellent environment for model evaluation. With seqgra, the clean room of simulated data and a precise description of the patterns in the data (i.e. the probabilistic rules in the data definition) on the one end is paired with an array of feature attribution methods on the other, to answer questions that are often impossible to answer with poorly understood genomics data. One such question is whether the predictions of the model are based on those parts of the input that are in fact relevant for the phenomenon that is predicted, or, to put it another way, whether the model was able to recover the underlying rules of the dataset.
"Computer Science"
] |
Bioartificial Polymeric Materials Based on Collagen and Poly(N-isopropylacrylamide)
Films of collagen (CLG) and poly(N-isopropylacrylamide), PNIPAAm, were prepared by casting from water solutions. These bioartificial polymeric materials were studied to examine the influence of PNIPAAm content and glutaraldehyde vapor cross-linking on the thermal and biological stability of CLG. Mixtures, ranging from 20-80 wt% CLG composition, were cross-linked through exposure to glutaraldehyde vapors. Thermal and morphological properties of the films, cross-linked or not, were investigated by differential scanning calorimetry, thermogravimetry, and scanning electron microscopy, with the aim of evaluating miscibility, thermal stability, and interactions among the constituents. The experimental results indicated that the homopolymers are not thermodynamically compatible. However, there is good evidence that effective interactions, probably due to hydrogen bond formation, take place between the constituents. These interactions are more evident in the samples that were not cross-linked. DSC studies revealed that PNIPAAm exerts a thermal stabilizing effect on uncross-linked CLG, while the cross-linking with glutaraldehyde affects only the biological polymer, preventing the interactions with PNIPAAm. SEM micrographs of the uncross-linked mixtures showed that the morphology, in all compositions studied, remains similar to that of pure collagen. In the corresponding cross-linked samples, a more compact aggregation is observed, although no appreciable changes can be seen.
Introduction
"Bioartificial polymeric materials" is a term to designate a new class of materials based on blending synthetic and natural polymers for biomedical applications 1,2 .Originally, these new materials were conceived to overcome the poor biological performance of synthetic polymers and also to enhance the mechanical characteristics of biopolymers, in order to be employed as biomaterials or as low-environmental impact materials [1][2][3][4][5][6][7] .In this context, our group has investigated several systems involving both biological (primarily fibrin, hyaluronic acid, and collagen) and synthetic components (polyurethanes, poly(vinyl alcohol), and poly(acrylic acid)) [8][9][10][11][12][13] .
Reconstituted collagen (CLG) has been used in a wide variety of biomedical applications 14. However, the poor mechanical properties of CLG, due to the chemical treatment used to isolate it, as well as its high biodegradation rate after implantation, are factors that have limited its application 15.
To address the biodegradation problem, we have aimed to cross-link the collagen in order to reduce its degradation rate and to avoid the rapid dissolution of the material when it comes in contact with biological fluids. Previously, collagen-based materials have been cross-linked by chemical treatment with gaseous glutaraldehyde (GTA) 11,12 and by a dehydro-thermal procedure 8,9,13. The thermal cross-linking method has been a good alternative to the chemical method, as there is no release of cytotoxic residues that could affect the biocompatibility 9.
Poly(N-isopropylacrylamide), PNIPAAm, is a thermo-responsive polymer, i.e. a polymer that dissolves in water at room temperature, but undergoes a phase separation when heated to approximately 32 °C, exhibiting a lower critical solution temperature (LCST) 16 .
Many studies have been published utilizing this thermo-sensitive behavior of PNIPAAm in fields such as drug delivery systems 17,18 and thermo-sensitive membranes [19][20][21]. PNIPAAm has been extensively studied in the hydrogel state as a polymeric matrix for use in biotechnology and bioengineering 22, and has been found to be cell compatible 23. However, PNIPAAm, in gel or linear form, shows low mechanical strength, which limits its practical application. To solve this problem, many studies have focused on the preparation of blends and copolymers and on the formation of interpenetrating polymer networks. Following this principle, we have previously reported 24,25 the preparation of blends of PNIPAAm with poly(vinyl alcohol) (PVAl), poly(vinyl pyrrolidone) (PVP), poly(acrylic acid) (PAA), and poly(ethylene-co-vinyl alcohol) (EVAL).
Collagen and PNIPAAm are well known for their interesting biological properties; however, the interactions between these polymers in blends have not been studied previously. PNIPAAm contains a proton-accepting amide group, while collagen contains a carbonyl moiety and an N-H group (amide bonds) as well as hydroxyl side groups, suggesting some possible interactions between these two macromolecules. Blends can be either miscible or immiscible in a thermodynamic sense (i.e., miscible polymer blends behave as a single-phase system down to the segmental level of dispersion and are usually associated with ΔHm < 0). The term "compatible blend" is a utilitarian term indicating a material to be commercially attractive, usually homogeneous to the eye, with enhanced physical properties.
The aim of this work was to study the interactions between collagen and PNIPAAm in the solid state (thin films) and draw conclusions regarding the miscibility of these components. The interactions between these macromolecules were examined from a physicochemical point of view. Thermal behavior and morphological aspects of the CLG/PNIPAAm blends were considered in the solid state, in the form of thin films, cross-linked or not with gaseous glutaraldehyde. Differential scanning calorimetry (DSC), thermogravimetry (TG), and scanning electron microscopy (SEM) were used as effective methods of evaluating stability, miscibility, and compatibility among the components, respectively.
Materials
Soluble collagen, CLG, type III from calf skin, was supplied by Sigma (St. Louis, MO, USA) and used as received.
PNIPAAm was synthesized through a free radical mechanism, under a nitrogen atmosphere, according to the method described by Freitas 26. The monomer (NIPAAm; Aldrich, Milwaukee, WI, USA) and the initiators (ammonium persulfate and sodium metabisulfite; Reagen, Rio de Janeiro, RJ, Brazil) were of analytical grade and used without further purification.
CLG/PNIPAAm films preparation
A 1% (w/v) CLG solution was prepared in 0.5 M acetic acid at 0 °C with mild stirring. A PNIPAAm solution (1% w/v) was prepared in water at room temperature with continuous stirring. Different amounts of the two solutions were mixed together, while stirring for 30 minutes at room temperature, in order to obtain various CLG/PNIPAAm weight ratios. Films were cast on Petri plates by water evaporation at room temperature and were then stored in a desiccator.
Films cross-linking
CLG/PNIPAAm films were cross-linked with gaseous glutaraldehyde. Films of the blends were fixed on the upper side of a desiccator which contained, in the lower part, 5 mL of an 8% aqueous GTA solution. The container was placed in an oven at 37 °C for 18 hours in the dark.
Methods
The comparative studies of the thermal and morphological behaviors of the cross-linked and uncross-linked films were accomplished by differential scanning calorimetry (DSC), thermogravimetry (TG), and scanning electron microscopy (SEM).
Differential scanning calorimetry
DSC curves were obtained using a Perkin-Elmer DSC 7 instrument. Samples of about 5 mg were analyzed under nitrogen flow.
Dried samples. Aluminum pans were used with the following thermal cycles: from ambient temperature to 160 °C, back to ambient temperature, and then to 300 °C, all at 10 °C/min. The first cycle was used to dry the samples. The results given are from the second heating cycle.
Non-dried samples. Samples were heated from ambient temperature to 130 °C at a scan rate of 5 °C/min in sealed stainless-steel pans.
Thermogravimetry
TG was carried out under nitrogen flow (30 mL/min) using a Perkin-Elmer TGA 6 thermogravimetric analyzer. Samples of about 4 mg were heated from 30 to 800 °C at a heating rate of 20 °C/min.
Scanning electron microscopy
SEM micrographs were obtained on a JEOL JSM-5600 (at 12 kV) using samples that were fractured at ambient temperature and sputter-coated with gold.
Results and Discussion
The thermodynamic properties of collagen, PNIPAAm, and their blends were studied using differential scanning calorimetry (DSC) and thermogravimetric analysis (TG) techniques.
As reported previously 27, the amorphous homopolymer PNIPAAm used in these studies was synthesized in a linear form and showed, by light scattering analysis, weight-average molecular weights on the order of 10^5. Thermogravimetric results suggest that PNIPAAm is stable up to 350 °C, losing 74% of its mass in a single stage from 350 to 450 °C. By DSC analysis, PNIPAAm presented a glass transition temperature of 135 °C and a degradation temperature of 433 °C 27.
Thermal analysis -DSC
The thermal and mechanical properties of CLG are strongly dependent on the water content of the starting material 11. To avoid the influence of water, two different experimental conditions were used in the DSC studies: in the first, using aluminum pans, the samples were dried during the first heating cycle; in the second, stainless-steel pans were filled with the non-dried material and then sealed to suppress vaporization and water loss.
Uncross-linked films
Non-dried samples. Results of DSC analysis carried out on uncross-linked and non-dried samples are reported in Figure 1 and Table 1.
Pure CLG showed an endothermic peak at 104 °C related to its denaturation process. The denaturation temperature (Td) increased in the samples up to 50 wt% PNIPAAm and then remained almost constant. The enthalpy of denaturation (ΔHd), on the other hand, did not change with the presence of the synthetic polymer.
These results indicate that PNIPAAm exerts a stabilizing effect on the uncross-linked CLG, overcoming the effect of bulk water that normally lowers its denaturation temperature 28 .
Dried samples. The dried samples showed a substantially similar trend (Figure 2). CLG is thermally stabilized by the presence of PNIPAAm, as verified by the shifting of the denaturation peak towards higher temperatures, with no appreciable changes in the enthalpy of denaturation. As expected, the absence of water led to an increase in the Td of the dried collagen compared to the non-dried one. The glass transition temperature (Tg) of PNIPAAm, in contrast to the Td behavior of CLG, shifted to lower temperatures with increasing CLG content (from 137 to 122 °C for the 0/100 and 80/20 CLG/PNIPAAm blends, respectively). Although it is difficult to observe the Tg at low concentrations of PNIPAAm, the decrease in the Tg values can be explained by the interactions between CLG and PNIPAAm. Collagen, which is a hydrogen donor, should form hydrogen bonds with the carbonyl group of PNIPAAm: the synthetic polymer contains a proton-accepting carbonyl moiety, while collagen presents hydroxyl and amino side groups. Therefore, a hydrogen-bonding interaction may take place between these two chemical moieties in a blend of collagen and PNIPAAm. The formation of hydrogen bonds between the two different macromolecules competes with the interactions between molecules of PNIPAAm, resulting in a decrease of the PNIPAAm Tg values.
The calorimetric results obtained for uncross-linked CLG/PNIPAAm blends, either dried or not, indicate that there are strong interactions between the synthetic and biological polymers. These interactions, probably due to hydrogen bonding, appear to exert a thermal stabilizing effect on collagen and promote the shift of Tg to lower temperatures.
Cross-linked films
The cross-linking of the CLG/PNIPAAm mixtures was carried out as for pure CLG, for which it was previously verified that the best results are obtained by exposing the CLG films to GTA vapors for 18 hours 12.
Non-dried samples. Figure 3 shows typical DSC curves of non-dried and cross-linked samples, and Table 1 lists the corresponding values of denaturation temperature and enthalpy as a function of CLG concentration. As reported in previous works 11,12, the cross-linking method affects the biological polymer, which can be verified by the enhancement of its denaturation temperature and the related enthalpy.
By comparing the values in Table 1, we can observe that the influence of PNIPAAm on CLG denaturation seems smaller in the case of cross-linked samples. The cross-linking prevents an efficient interaction between the two components.
Dried samples. The denaturation of CLG is prevented by the stabilization due to the cross-linking promoted by GTA. This phenomenon can be verified by comparing Figures 2 and 4. The Td of pure CLG increases from 183 °C in the absence of cross-linking to 206 °C with GTA cross-linking. The same behavior is observed at all concentrations. As observed for the non-dried samples, the thermal stabilizing effect on CLG related to the cross-linking overcomes that due to the interactions with PNIPAAm. Although it is difficult to observe the Tg in the films at low concentrations of PNIPAAm in Figure 4, it is possible to observe only insignificant variations in the Tg of PNIPAAm relative to the uncross-linked blends (Figure 2). The cross-linking of CLG macromolecules most likely diminishes the number of possible hydrogen-bonding interactions with PNIPAAm observed in the uncross-linked blends.
Thermal analysis -TG
Figure 5 illustrates typical TG curves of uncross-linked films. The cross-linked samples show a substantially similar trend. The desiccation treatment did not completely remove the water from the films, as evidenced by the presence in all samples of a band at about 70 °C associated with the evolution of freely bound water. Pure collagen is characterized by two additional weight losses: the first, with a maximum at about 200 °C, is related to the loss of structural water (strongly bound to the collagen); the second, with a maximum at 320 °C, is related to the decomposition of the collagen molecule. The TG curve of PNIPAAm, beyond the transition due to the loss of freely bound water, shows another thermal event at around 400 °C, which is attributed to the weight loss caused by the degradation of the synthetic polymer. The curves of the blends show all the phenomena present in the pure components. We can observe that CLG is thermally more stable in the blends than pure CLG, since its degradation temperature is shifted to higher values. These temperatures depend on the blend composition, a behavior that can be explained by the thermal stabilizing effect of the interactions among the polymers, primarily through hydrogen bonds. For the cross-linked blends, the thermal stabilizing effect on CLG is due to the cross-linking, as observed by the DSC results (Figures 3 and 4).

Table 1. DSC results (temperature and enthalpy of the CLG denaturation process) for non-dried CLG/PNIPAAm blends (uncross-linked and cross-linked samples). Columns: CLG (%); uncross-linked: Tg PNIPAAm (°C), Td CLG (°C), ΔHd CLG (J/g); cross-linked: Tg PNIPAAm (°C), Td CLG (°C), ΔHd CLG (J/g).

Morphological studies

The electron microscope images of uncross-linked mixtures show that the appearance of the fractured sections remains quite similar to that of pure collagen, as can be observed in Figure 6. The longitudinal arrangement, due to the fibrous structure of CLG, is evident in all samples, including those with high PNIPAAm concentrations (Figure 6b). In the whole composition range studied, it was impossible to observe the presence of different phases, suggesting that strong interactions between the constituents are present, in agreement with the DSC data. In comparison with the uncross-linked samples, films cross-linked with GTA show a more compact aggregation, which does not considerably change the original structure (Figure 7). It is still possible to observe the typical longitudinal disposition of the untreated collagen, although this kind of morphology is less evident after cross-linking. At all compositions studied, SEM did not distinguish the presence of separated phases at the magnification used.

Conclusions

The experimental results obtained in this work on CLG/PNIPAAm mixtures, either cross-linked with glutaraldehyde or not, indicate that the components are thermodynamically incompatible. However, there is good evidence that effective interactions, probably due to hydrogen bond formation, take place between the constituents. These interactions are more evident in the samples that were not cross-linked. In this respect, the most remarkable results are the PNIPAAm stabilizing effect on the denaturation temperature of CLG and the shift to higher temperatures of the PNIPAAm and CLG thermal degradation temperatures. The cross-linking with glutaraldehyde also promotes a thermal stabilizing effect on the biological polymer: the CLG denaturation temperature is enhanced after exposing the films to glutaraldehyde vapors, as a result of the cross-linking. On the other hand, cross-linking prevents the effective interactions between CLG and PNIPAAm that were observed in the uncross-linked samples. At all compositions studied, for both cross-linked and uncross-linked samples, SEM micrographs showed that the morphology of the blends remains quite similar to that of pure collagen, exhibiting the longitudinal arrangement due to the fibrous structure of collagen.
Figure 1. DSC curves of uncross-linked and non-dried samples. Stainless-steel pans; heating rate of 5 °C/min.
Figure 3. DSC curves of cross-linked and non-dried samples. Stainless-steel pans; heating rate of 5 °C/min.
"Materials Science"
] |
Frequency-Zooming ARMA Modeling for Analysis of Noisy String Instrument Tones
This paper addresses model-based analysis of string instrument sounds. In particular, it reviews the application of autoregressive (AR) modeling to sound analysis/synthesis purposes. Moreover, a frequency-zooming autoregressive moving average (FZ-ARMA) modeling scheme is described. The performance of the FZ-ARMA method on modeling the modal behavior of isolated groups of resonance frequencies is evaluated for both synthetic and real string instrument tones immersed in background noise. We demonstrate that the FZ-ARMA modeling is a robust tool to estimate the decay time and frequency of partials of noisy tones. Finally, we discuss the use of the method in synthesis of string instrument sounds.
INTRODUCTION
It has been known for quite a long time that a freely vibrating body may generate a sound composed of damped sinusoids, provided the hypotheses of small perturbations and linear elasticity hold [1]. This behavior has motivated the use of a set of controllable sinusoidal oscillators to artificially emulate the sound of musical instruments [2,3,4]. As for analysis purposes, tools like the short-time Fourier transform (STFT) [5] and discrete cosine transform (DCT) [6] have been widely employed, since these transformations are based on projecting the input signal onto an orthogonal basis consisting of sine or cosine functions.
An appealing idea, which is also based on the resonant behavior of vibrating structures, consists in letting the resonant behavior be parametrically modeled by means of resonant filters (all-pole or pole-zero) excited by a source signal. For short-duration excitation signals and filters parameterized by a few coefficients, such a source-filter model implies a compact representation for sound sources. Furthermore, parametric modeling of linear and time-invariant systems finds applications in several areas of engineering and digital signal processing, such as system identification [7], equalization [8], and spectrum estimation [9]. The moving-average (MA), the autoregressive (AR), and the autoregressive moving-average (ARMA) models are among the most widely used ones. Indeed, there exists an extensive literature on estimation of these models [9,10,11,12].
There is a long tradition in applying source-filter schemes in sound synthesis. For instance, the linear predictive coding (LPC) [13] used for speech coding and synthesis is one of the most well-known applications of source-filter synthesis. The problems involved in source-filter approaches can be roughly divided into two subproblems: the estimation of the filter parameters and the choice or design of suitable excitation signals. As regards the filter parameter estimation, standard techniques for estimation of AR and ARMA processes can be used. Ways of obtaining adequate excitations for the generator filter have been discussed in [14,15,16].
Model-based spectral analysis of recorded instrument sounds also finds applications in parametric sound synthesis. In this context, it is possible to derive the frequencies and decay times of the partial modes from the parameters of the estimated models (all-pole or pole-zero filters). This information can be used afterward to calibrate a synthesis algorithm, for example, a guitar synthesizer based on the commuted waveguide method [17,18].
However, when dealing with signals exhibiting a large number of mode frequencies, for example, low-pitched harmonic tones, high-order models are needed to properly model the signal resonances. Therefore, difficulties can be expected in either estimating or realizing such high-order models.
A possible way to alleviate the burden of employing highorder models is to split the original frequency band into subbands with reduced bandwidth. Frequency-selective schemes allow signal modeling within a subband of interest with lower-order filters [14,19,20,21]. Naturally, the choices of the subband bandwidth as well as the modeling orders depend on the problem at hand. For instance, in [20], Laroche shows that adequate modeling of beating modes of a single partial of a piano tone can be accomplished by applying a high-resolution spectral analysis method to the signal associated with the sole contribution of the specific partial. In this case, the decimated subband signal associated with the partial contribution was analyzed via the ESPRIT method [22].
In this paper, we review a frequency-zooming ARMA (FZ-ARMA) modeling technique that was presented in [23] and discuss the advantages of applying the method for analysis of string instrument sounds. Our focus, however, is not on the FZ-ARMA modeling formulation, which bears similarities to other subband modeling approaches, such as those proposed in [14,20,24,25], among others. In fact, we are more interested in reliable ways to estimate the frequencies and decay times of partial modes when the tone under study is corrupted with broadband background noise. Within this scenario, our aim is to investigate the performance of the FZ-ARMA modeling as a spectrum analysis tool.
Every measurement setup is prone to noise interference to some extent, even in controlled conditions as in an anechoic environment. For instance, the recording circuitry involving microphones and amplifiers is one of the sources of noise. In [26], the authors highlight the importance of taking into account the level of background noise in the signal when attempting to estimate the decay time of string tone partials, especially for the fast decaying ones.
Another situation in which corrupting noise has to be carefully considered is in the context of audio restoration. In a recent paper [27], the authors proposed a sound source modeling approach to bandwidth extension of guitar tones. The method was applied to recover the high-frequency content of a strongly de-hissed guitar tone. To perform this task, a digital waveguide (DWG) model for the vibrating string has to be designed. In [27], the DWG model was estimated using a clean guitar tone similar to the noisy one. This resource was adopted because the presence of the corrupting noise prevented obtaining reliable estimates for the decay time of high-frequency partials. These estimates were determined via a linear fitting over the time evolution of the partial amplitude (in dB), which was obtained through a procedure similar to the McAulay and Quatieri analysis scheme [2,28].
Through examples which feature noisy versions of both synthetic and real string tones, we demonstrate that the FZ-ARMA modeling offers a reliable means to overcome the limitations of the STFT-based methods regarding estimating the decay time of partials. This paper is organized as follows. Section 2 reviews the basic properties of AR and ARMA modeling and discusses signal modeling strategies in full bandwidth as well as in subbands. In Section 3, we formulate the FZ-ARMA modeling scheme and address issues related to the choice of the processing parameters. In Section 4, we employ the FZ-ARMA modeling to focus the analysis on isolated partials of synthetic and real string tones. Moreover, we assess the FZ-ARMA modeling performance on estimating the decay times of the partial modes under noisy conditions. In addition, we confront the results of spectral analysis of the subband signals using ARMA models against those obtained through the ESPRIT method. Section 5 discusses applications of the FZ-ARMA modeling in sound synthesis. In particular, we show an example in which, from the FZ-ARMA analysis of a noisy guitar tone, a DWG-based guitar tone synthesizer is calibrated. Conclusions are drawn in Section 6.
Basic definitions
An ARMA process of orders p and q, here indicated as ARMA(p, q), can be generated by filtering a white noise sequence e(n) through a causal, linear, shift-invariant, and stable filter with transfer function [12]

H(z) = B(z)/A(z) = (b_0 + b_1 z^(-1) + ... + b_q z^(-q)) / (1 + a_1 z^(-1) + ... + a_p z^(-p)).

For real-valued filter coefficients, the transfer function of an ARMA(p, q) model has p poles and q zeros. Considering a flat power spectrum for the input, that is, P_e(z) = σ_e^2, the resulting output x(n) has power spectrum given by

P_x(z) = σ_e^2 H(z) H*(1/z*),

where the symbol * stands for complex conjugation. An AR process is a particular case of an ARMA process with q = 0. Thus, the generator filter assumes the form

H(z) = 1/A(z) = 1 / (1 + a_1 z^(-1) + ... + a_p z^(-p)),

which is usually referred to as the transfer function of an all-pole filter.
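As a concrete illustration, the following Python sketch (our own, using numpy and scipy rather than the Matlab tools mentioned below) generates an ARMA(2, 2) process by filtering white noise; the coefficient values are illustrative and not taken from any example in this paper.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Illustrative ARMA(2, 2) generator filter: a single resonance at
# 0.1*fs with pole radius 0.995, plus two arbitrary zeros.
r, omega = 0.995, 2 * np.pi * 0.1
a = np.array([1.0, -2 * r * np.cos(omega), r**2])   # A(z), denominator
b = np.array([1.0, 0.5, 0.25])                      # B(z), numerator

e = rng.standard_normal(4096)    # white noise input e(n), sigma_e = 1
x = signal.lfilter(b, a, e)      # ARMA(2, 2) process x(n)
```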
Parameter estimation of AR and ARMA processes
Thorough descriptions of methods for estimation of AR and ARMA models are outside the scope of this paper since this topic is well covered elsewhere [9,12] and computer-aid tools are readily available for this purpose. Here, we briefly summarize the most commonly used methods. Parameter estimation of AR processes can be done by several means, usually through the minimization of a modeling error cost function. Solving for the model coefficients from the so-called autocorrelation and covariance normal equations [9] are perhaps the most common ways.
The stability of the estimated AR models is an important issue in synthesis applications. The autocorrelation method guarantees AR model estimates that are minimum phase. The Matlab function ar.m allows estimating AR models using several approaches [29].
Parameter estimation of ARMA processes is more complicated since the normal equations are no longer linear in the pole-zero filter coefficients. Therefore, the estimation relies on nonlinear optimization procedures that have to be done in an iterative manner. Prony's method and the Steiglitz-McBride iteration [30,31] are examples of such schemes. A drawback of these methods is that the estimated pole-zero filters cannot be guaranteed to be minimum phase. In addition, and especially for high-order models, the estimated filters can be unstable. The functions prony.m and stmcb.m are available in Matlab for estimation of ARMA models using Prony's and Steiglitz-McBride methods, respectively [32].
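For the AR case, the autocorrelation (Yule-Walker) normal equations mentioned above reduce to a symmetric Toeplitz system. The sketch below is a minimal Python counterpart to ar.m; the function name and structure are ours. To our knowledge, scipy ships no direct equivalent of stmcb.m, so an ARMA fit would require porting the Steiglitz-McBride iteration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_autocorrelation(x, p):
    """Fit an AR(p) model to x via the autocorrelation (Yule-Walker)
    normal equations. Returns the coefficients of
    A(z) = 1 + a1 z^-1 + ... + ap z^-p. The autocorrelation method
    yields minimum-phase (stable) estimates, as noted above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # biased autocorrelation estimates r(0), ..., r(p)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # solve the p x p Toeplitz system R a = -[r(1), ..., r(p)]
    a = solve_toeplitz((r[:p], r[:p]), -r[1:p + 1])
    return np.concatenate(([1.0], a))
```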
Full bandwidth modeling
Modeling of string instrument sounds has been approached by either physically motivated or signal modeling methods. Examples of the former can be found in physics-based algorithms for sound synthesis [18,33,34,35]. Examples of the latter include the AR-based modeling of percussive sounds presented in [14,15,16,36,37].
In principle, when approaching the problem from a signal modeling point of view, it seems natural to employ a resonant filter, such as an all-pole or pole-zero filter, to model the mode behavior of a freely vibrating string, which consists of a sum of exponentially decaying sinusoids. However, modeling of broadband signals can be a tricky task. One practical issue related to both AR and ARMA modeling is model order selection. In general, there is no automated way to choose an appropriate order for the model assigned to a signal. For instance, one can deduce that AR modeling of low-pitched tones in full bandwidth is expected to require high-order models. The same is valid for piano tones which are produced by one to three strings sounding together. In this case, considering the detuning among the strings and two polarizations of transversal vibration per string, up to 6 resonance modes should be allocated to each partial of the tone.
In fact, the temporal envelope exhibited by partials of guitar and piano tones can be far from being exponentially decaying. On the contrary, the usually observed temporal envelopes contain frequency beating and two-stage decay [38].
This indicates that the partials are composed of two or more modes that are tightly clustered in frequency. The need for high-resolution frequency analysis tools is evident in these cases.
If frequency analysis is to be performed by means of AR/ARMA modeling, higher spectral resolutions can be attained by increasing the model orders. However, parameter estimation of high-order AR/ARMA models may be problematic if the poles of the system are very close to the unit circle and if there are poles located close to each other. Realizing a filter with these features is very demanding, as the required dynamic range for the filter coefficients tends to be huge. In addition, computation of the roots of the corresponding polynomial in z, if necessary, can also be demanding and prone to numerical errors [39].
Frequency-selective modeling
The aforementioned problems have motivated the use of alternative modeling or analysis strategies based on subband decomposition [40]. In such schemes, the original signal is first split in several spectral subbands. Then, modeling or analysis of the resulting subband signals can be performed separately in each subband. Examples of subband modeling approaches can be found in [14,16,20,24,25].
A prompt advantage of subband decomposition of an AR/ARMA process is the possibility to focus the analysis on thinner portions of the spectrum. Thus, a small number of resonances can be analyzed at a time, which allows subband signals to be analyzed with lower-order models. Moreover, the subband signals can be downsampled, as their bandwidth is reduced compared to that of the original signal. As a consequence, the implied decrease in temporal resolution due to downsampling is rewarded by an increase in frequency resolution. This fact favors the problem of resolving resonant modes that are very close to each other in frequency. The effects of decimating AR and ARMA processes have been discussed in [21,41,42].
FREQUENCY-ZOOMING ARMA METHOD
As presented in [23], the FZ-ARMA analysis consists of the following steps.
(i) Define a frequency range of interest (for instance, to select a certain frequency region around the spectral peaks one wants to analyze).
(ii) Modulate the target signal (shift in frequency by multiplying with a complex exponential) to place the center of the previously defined frequency band at the origin of the frequency axis.
(iii) Lowpass filter the modulated signal and downsample it by the zoom factor, yielding a complex-valued subband signal.
(iv) Estimate an ARMA model for the subband signal. The Steiglitz-McBride iteration [12,30,31] is employed to perform this task; more specifically, we used the stmcb.m function available in the signal processing toolbox of Matlab [32].
(v) Remap the estimated poles to the original fullband domain, as detailed below.
In mathematical terms, and starting with a target sound signal h(n), the first two steps of the FZ-ARMA method imply defining a modulation frequency f_m (in Hz) and multiplying h(n) by a complex exponential, so as to obtain the modulated response

h_m(n) = h(n) e^(-jΩ_m n),

where Ω_m = 2π f_m / f_s, with f_s being the sample rate. This modulation implies only a clockwise rotation of the poles of a hypothetical transfer function H(z) associated with the AR process h(n). Thus, if z_i is a pole of H(z) with phase arg(z_i) = Ω_i, its resulting phase after rotation becomes

arg(z_i e^(-jΩ_m)) = Ω_i - Ω_m.

The lowpass filtering is supposed to retain without distortion those poles located inside its passband. Downsampling the resulting lowpass-filtered response then yields the modified poles

z_i,zoom = (z_i e^(-jΩ_m))^K_zoom,

where K_zoom is the zooming factor, which relates the new sampling rate to the original one as f_s,zoom = f_s / K_zoom. Now we know what the zooming procedure does to the poles z_i of the original transfer function. As a result, the poles ẑ_i,zoom estimated in subbands via ARMA modeling need to be remapped to the original fullband domain. This can be accomplished by inverse scaling of the poles and counter-rotating them, that is,

ẑ_i = (ẑ_i,zoom)^(1/K_zoom) e^(jΩ_m).

The frequency and decay time of the resonances present within the analyzed subband can then be drawn from the angle and magnitude of ẑ_i, respectively. Note that the original target response is supposed to be real valued and, therefore, its transfer function must have complex-conjugate pole pairs. However, due to the one-sided modulation performed above, the subband model returns pure complex poles. Thus, if the goal is to devise a real-valued all-pole filter in fullband for synthesizing the contribution of resonances within the analyzed subband, its transfer function must include not only the remapped poles, but also their corresponding complex conjugates.
Hereafter, when referring to the models of the complexvalued subband signals, we will adopt the convention FZ-ARMA(p, q), where p and q stand for the orders of the denominator (AR part) and numerator (MA part), respectively.
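To make the zooming chain concrete, here is a minimal Python sketch of steps (i)-(v). It is our own illustration under simplifying assumptions: the anti-alias filtering/decimation is done in stages with a generic FIR lowpass, and the subband model is a pure all-pole fit obtained by complex least squares instead of the Steiglitz-McBride ARMA fit used in the paper; the function name fz_poles and all numeric choices are ours.

```python
import numpy as np
from scipy import signal

def fz_poles(h, fs, f_m, k_zoom, p):
    """Frequency-zooming pole analysis of a real signal h (sketch).

    Heterodyne the band centered at f_m down to DC, lowpass filter and
    downsample by k_zoom in stages, fit a complex all-pole model of
    order p, and remap the estimated poles to the fullband domain."""
    n = np.arange(len(h))
    hm = h * np.exp(-2j * np.pi * f_m / fs * n)      # one-sided modulation

    # staged decimation; k_zoom is assumed to factor into small integers
    stages, q = [], k_zoom
    for f in (11, 10, 7, 5, 3, 2):
        while q % f == 0:
            stages.append(f)
            q //= f
    assert q == 1, "k_zoom should be a product of small factors"
    z = hm
    for f in stages:
        taps = signal.firwin(65, 0.8 / f)            # anti-alias lowpass
        z = signal.lfilter(taps, 1.0, z)[::f]
    z = z[32:]                                       # drop filter transients

    # complex linear prediction (covariance method): predict z[n] from
    # its p previous samples; the prediction polynomial gives the poles
    N = len(z)
    X = np.column_stack([z[p - k:N - k] for k in range(1, p + 1)])
    a = np.linalg.lstsq(X, -z[p:], rcond=None)[0]
    poles_zoom = np.roots(np.concatenate(([1.0], a)))

    # remap: inverse radius/angle scaling, then counter-rotation by f_m
    poles = poles_zoom ** (1.0 / k_zoom) * np.exp(2j * np.pi * f_m / fs)
    freqs = np.angle(poles) * fs / (2 * np.pi)       # mode frequencies (Hz)
    taus = -1.0 / (fs * np.log(np.abs(poles)))       # decay times (s)
    return freqs, taus
```

For example, freqs, taus = fz_poles(x, 44100.0, f_m=430.0, k_zoom=600, p=3) would analyze the subband around 430 Hz with the zoom factor adopted later in Section 4.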
Choice of parameters for the FZ-ARMA method
The choice of the FZ-ARMA parameters, that is, f m , K zoom , and the model orders, depends on several factors. We will now discuss these issues.
Zoom factor
Considering first the zoom factor, it can be said that the greater K zoom , the higher the frequency resolution attainable in a subband. This favors cases in which the frequencies of the modes are densely clustered. However, large values of K zoom imply a more demanding signal decimation procedure and shorter decimated signals.
The values of K_zoom and f_s,zoom are tied together, and the latter defines the bandwidth of the subband on which the analysis will be focused. For instance, if the aim is to analyze the behavior of isolated partials of a tone, f_s,zoom should be chosen to be less than twice the minimum frequency difference between adjacent partials. On the other hand, f_s,zoom should be large enough to guarantee that the modes belonging to a given partial do not fall into different subbands.
While the model estimation may be unnecessarily overloaded if based on long signals, it may yield poor results if based on few signal samples only. Therefore, the criterion upon which the value of f s,zoom is chosen should also take into account the number of samples that remains in the decimated signal.
Modulation frequency
Suppose that we are interested in analyzing a set of resonances concentrated around a frequency f_r. Having defined the bandwidth of the zoomed subband, f_s,zoom, a straightforward choice is to set the modulation frequency to f_m = f_r. Note that this option places the resonance peaks inside the subband around Ω_r = 0. As pole estimation around Ω_r = 0 may be more sensitive to numerical errors, we decided to adopt f_m = f_r − f_s,zoom/8, which concentrates the peaks around Ω_r = π/4. This frequency shift is not harmful, since the resonance peaks are still well inside the subband; thus, their characteristics are not severely distorted by the nonideal lowpass filtering employed during the decimation procedure. However, to afford this choice of f_m and still ensure the isolation of a tone partial, f_s,zoom should be at most one and a half times the minimum frequency difference between adjacent partials.
The frequency of the partials can be predicted from that of the fundamental if the tone is harmonic or quasiharmonic. However, as some level of dispersion is always present, errors at the frequencies of the higher partials are expected to occur. Alternatively, the frequencies of the partials can be determined by performing spectral analysis on the attack part of the tone and running a peak-picking algorithm over the resulting magnitude spectrum, as employed in [16,25]. This approach is more general since it can deal with highly inharmonic tones.
In our experiments, we first estimate the fundamental frequency of the tone, a task that was performed through the multipitch estimator described in [43]. Then, after modeling the first partial, which allows obtaining a precise value of this partial frequency, the frequency of the following partial to be analyzed is set as the sum of the estimated frequency of the current partial with the value of the fundamental frequency. This procedure is repeated until one reaches the desired number of partials to be analyzed. This approach minimizes the problems related to multiplicative errors when predicting the frequencies of higher partials based on integer multiples of the fundamental frequency.
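The following sketch spells out this partial-by-partial loop, reusing the fz_poles function from the earlier sketch; the fundamental frequency estimate f0 is assumed to be supplied by an external pitch estimator (the multipitch method of [43] is not reproduced here), and the function name is ours.

```python
import numpy as np

def track_partials(x, fs, f0, n_partials, k_zoom=600, p=3):
    """Analyze partials one by one; the estimated frequency of each
    partial, plus f0, predicts the center frequency of the next one."""
    fs_zoom = fs / k_zoom
    results = []
    f_nu = f0
    for nu in range(1, n_partials + 1):
        f_m = f_nu - fs_zoom / 8          # place the peak near pi/4
        freqs, taus = fz_poles(x, fs, f_m, k_zoom, p)
        # keep the slowest-decaying mode; discard unstable estimates
        good = np.where(taus > 0, taus, -np.inf)
        i = int(np.argmax(good))
        results.append((freqs[i], taus[i]))
        f_nu = freqs[i] + f0              # predict the next partial
    return results
```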
Model order
Regarding the orders of the ARMA models, they should be chosen as to allow the modeling of the most prominent resonant modes of the signal. Depending on the case, a priori information on the characteristics of the signal at hand can be used to guide suitable model-order choices. For string instrument sounds, the estimation of the number of modes per partial can be based on the number of strings per note and the number of polarizations per string.
Moreover, it is known that if a real-valued signal has p resonant modes, one has to allocate at least two poles per resonant mode, that is, an ARMA(2p, 0), to properly model it. However, due to the one-sided modulation used in the FZ-ARMA scheme, the resulting subband signals are complex valued, thus composed of pure complex poles. Therefore, only one single complex pole per mode suffices. As a consequence, at the expense of working with a complex arithmetic, the FZ-ARMA scheme optimizes the resources spent on modeling of the subband signals. This represents one advantage over, for instance, the modulation scheme proposed in [20], which yields real-valued decimated signals.
FZ-ARMA MODELING OF STRING INSTRUMENT TONES
In this section, we apply the FZ-ARMA modeling to analyze the resonant modes of isolated partials of string instrument sounds. We start by analyzing synthetic signals as a way to objectively evaluate the results. This allows knowing beforehand the mode frequencies and decay rates of the artificial tone. Thus, we can compare them with the estimates obtained via the FZ-ARMA modeling. In this context, the choice of the model orders is investigated as well as the modeling performance under noisy conditions. Then, following a similar analysis procedure, we evaluate the modeling performance of the FZ-ARMA method on recorded tones of realworld string instruments.
Guitar tone synthesis
In this case study, the synthetic guitar tone is generated by means of a dual-polarization DWG model [18]. Thus, each of its partials has two modes with known parameters, that is, resonance frequencies and time constants of the exponentially decaying envelopes. The string model for one polarization is depicted in Figure 1 (block diagram of the string model: the excitation e(n) feeds a feedback loop comprising a delay line z^(-L_i), a fractional-delay filter H_FD(z), and a loop filter H_LF(z), producing the output x(n)). Its transfer function is given by

S(z) = 1 / (1 − z^(-L_i) H_FD(z) H_LF(z)).

For the sake of simplicity, we implemented the loop filter via the one-pole lowpass filter with transfer function

H_LF(z) = g(1 + a) / (1 + a z^(-1)).

The magnitude response of H_LF(z) must not exceed unity in order to guarantee the stability of S(z). This constraint imposes that 0 < g < 1 and −1 < a < 0. As regards the fractional-delay filter H_FD(z), we chose to employ the first-order allpass filter proposed in [44], which implies the computation of a single coefficient a_fd. This choice assures that the decay rates of the partials depend mainly on the characteristics of H_LF(z).
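A minimal Python rendering of this single-polarization loop, with the one-pole loss filter written out as a difference equation, is sketched below. It is our own illustration: the fractional-delay allpass is omitted (so the pitch is quantized to an integer delay), the noise-burst excitation is a placeholder, and all parameter values are illustrative. The dual-polarization version described next simply runs two such loops in parallel and sums them.

```python
import numpy as np

def string_tone(f0, fs, dur, g=0.995, a=-0.1, seed=0):
    """Single-polarization waveguide string: a delay line of L samples
    in a feedback loop with the one-pole loss filter
    H_LF(z) = g(1 + a)/(1 + a z^-1), 0 < g < 1, -1 < a < 0."""
    L = int(round(fs / f0))               # integer loop delay (samples)
    n = int(dur * fs)
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    y[:L] = rng.uniform(-1.0, 1.0, L)     # noise-burst excitation
    lf_state = 0.0                        # loss filter output state
    for i in range(L, n):
        # difference equation of H_LF applied to the delayed sample
        lf_state = g * (1 + a) * y[i - L] - a * lf_state
        y[i] = lf_state
    return y
```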
The dual-polarization model consists in placing two string models in parallel as depicted in Figure 2. With this model, amplitude beating can be obtained by setting slightly different delay line lengths for each polarization. In addition, two-stage envelope decay can be accomplished by having loop filters with different magnitude responses for each polarization.
Consider first a string model with only one polarization. The partials of the resulting tone will decay exponentially and form a perfect harmonic series, that is, their frequencies are f_ν = ν f_p, where f_p is the fundamental frequency of the tone and ν = 1, ..., f_s/(2 f_p) are the partial indices. To determine the decay rate associated with each partial, we need to know the gain of the loop filter as well as the group delay of the feedback path (cascade of z^(-L_i), H_FD(z), and H_LF(z)) at the partial frequencies. Defining the partial frequencies in radians as ω_ν = 2π f_ν / f_s, the gain of the loop filter at ω_ν is given by

g(ω_ν) = |H_LF(e^(jω_ν))|.

The group delay of a transfer function F(z) is commonly defined as Γ_F(ω) = −∂ arg{F(e^(jω))}/∂ω. Then, if one defines G(ω_ν) as the group delay (in samples) of the feedback path at ω_ν, that is, G(ω_ν) = L_i + Γ_HLF(ω_ν) + Γ_HFD(ω_ν), the decay time (in seconds) of the partials can be obtained as

τ_ν = −G(ω_ν) / (f_s ln g(ω_ν)).

Now we can generate an artificial guitar tone through the dual-polarization model, analyze it using the FZ-ARMA method, and compare the estimated values of the mode parameters with the theoretical ones. The tone is generated via the model shown in Figure 2 with the parameters given in Table 1.
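These relations translate directly into a few lines of Python; the sketch below (our own, with illustrative parameter values rather than those of Table 1) evaluates the loop-filter gain with scipy's freqz and its group delay with group_delay, then applies the decay-time formula above. The fractional-delay allpass is ignored, which slightly underestimates the loop delay.

```python
import numpy as np
from scipy import signal

def partial_decay_times(f0, fs, L, b_lf, a_lf, n_partials):
    """Theoretical decay times tau_nu of a one-polarization string
    model with loop delay L (samples) and loop filter b_lf/a_lf."""
    nu = np.arange(1, n_partials + 1)
    w = 2 * np.pi * nu * f0 / fs                   # partial freqs (rad/sample)
    _, h = signal.freqz(b_lf, a_lf, worN=w)        # loop filter response
    g_nu = np.abs(h)                               # loop gain at w_nu
    _, gd = signal.group_delay((b_lf, a_lf), w=w)  # filter group delay
    G_nu = L + gd                                  # total loop delay (samples)
    return -G_nu / (fs * np.log(g_nu))             # decay times (s)

# Example with illustrative loop-filter parameters (not Table 1 values):
g, a_c = 0.995, -0.05
tau = partial_decay_times(200.0, 44100.0, L=220,
                          b_lf=[g * (1 + a_c)], a_lf=[1.0, a_c],
                          n_partials=45)
```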
By adopting the parameters shown in Table 1, one guarantees that the modes of each partial will decay with different time constants. Hence, each partial exhibits a two-stage envelope decay behavior. Moreover, the mode frequencies of each partial are also different, thus yielding amplitude modulation in its envelope.
FZ-ARMA analysis
To proceed with the FZ-ARMA analysis of the generated tone, we have to choose appropriate values for the frequency bands of interest and corresponding modulation frequencies.
In this example, equal bandwidth subbands are used to analyze the partials. The subband bandwidth is chosen to be equal to the fundamental frequency of the vertical polarization. This implies a new sampling frequency of f s,zoom = f p,v = 200 Hz for the subband signals and a zoom factor K zoom = 220. For convenience, we only show results of parameter estimation up to the 45th partial. As highlighted in Section 3.1.2, for each partial frequency f ν (of the vertical polarization) to be analyzed, the modulation frequency is chosen to be f m = f ν − f s,zoom /8.
The goal of this experiment is to gain insight into the model orders that are necessary to reasonably estimate the mode parameters of the partials of a guitar tone. The FZ-ARMA procedure was devised in such a way that the subband signals are supposed to contain only two complex modes. Therefore, at least an FZ-ARMA(2, 0) must be employed to model each subband signal.
The results of mode parameter estimation obtained in this example are shown in Figure 3. From Figure 3, it is possible to verify that low-order models suffice to estimate the mode frequencies. On the contrary, to properly estimate the decay time of the partial modes, higher-order models are required. Furthermore, as one could expect, it is more difficult to estimate the time constants of faster decaying modes.
Analysis of noisy tones
We start with the same synthetic tone devised in Section 4.1.1. This tone is then corrupted with zero-mean white Gaussian noise, whose variance is adjusted to produce a certain signal-to-noise ratio (SNR) within the first 10 milliseconds of the tone. We proceed with the FZ-ARMA analysis of four noisy tones with SNR equal to 40, 20, 10, and 0 dB, respectively. The goal now is to investigate the effect of the SNR on the decay time estimates of the partial modes.
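The SNR-setting procedure just described can be sketched in a few lines; the helper below is ours, with the 10-millisecond reference window as a parameter.

```python
import numpy as np

def add_noise_snr(x, fs, snr_db, window_s=0.010, seed=0):
    """Corrupt x with zero-mean white Gaussian noise whose variance is
    chosen so that the given SNR (in dB) holds over the first window_s
    seconds of x."""
    rng = np.random.default_rng(seed)
    n0 = int(window_s * fs)
    p_sig = np.mean(x[:n0] ** 2)               # signal power in the window
    p_noise = p_sig / 10 ** (snr_db / 10)      # target noise power
    return x + rng.normal(0.0, np.sqrt(p_noise), len(x))
```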
As in the previous example, equal-bandwidth subbands are used to analyze the partials of the tone. But, here, the adopted value of the zoom factor was K zoom = 600. As before, the frequency f ν of each partial to be analyzed defined the modulation frequency, which was chosen to be f m = f ν − f s,zoom /8. To model the two-mode partial signals, FZ-ARMA(3, 3) models were used. From the poles of each estimated model, those two with the largest radii were selected to determine the decay times and frequencies of the partial modes. In addition, for the sake of convenience, the estimated mode parameters were sorted by decreasing values of decay time.
The results are depicted in Figure 4, in which the solid and dashed lines describe the reference values of the decay time, associated with the vertical and horizontal polarizations, respectively, as functions of the partial indices. The circle and square markers indicate the corresponding estimated values.
As one could expect, the estimation performance worsens with decreasing SNR. Nevertheless, it is worth noting that, even for the signal with an SNR of 10 dB, the majority of the estimated decay time values are concentrated around the reference values, especially for the low-frequency partials. The occurring outliers can be either discarded, for example, negative values, or removed by means of median filtering. As for the mode frequency estimates (not shown), the maximum relative error encountered for the tone with SNR = 0 dB is on the order of ±0.1%, which is negligible.
Comparison against STFT-based methods
At this stage, one wonders whether an estimation procedure based on short-time Fourier analysis or heterodyne filtering would yield results similar to those of the FZ-ARMA-based scheme when dealing with noisy signals. In these approaches, each prominent partial is isolated somehow and the evolution of its amplitude over time is tracked. Then, a linear slope is fitted to the obtained log-amplitude envelope curve. The decay time of the analyzed partial is determined from the slope of the fitted curve.
To start answering our question, we should remember that, even for clean signals, there are situations in which the just-described slope fitting does not give appropriate results. Perhaps the most striking one is when the envelope curve shows amplitude beating. Returning to the noisy signals, there may be a point in the amplitude envelope curves of the partials after which the noise component dominates the amplitude. The noise floor is not so critical for the decay time estimation of low-frequency partials, since they are usually stronger in amplitude and decay slowly. On the other hand, high-frequency partials are in general weaker in magnitude and decay fast. They are likely to reach and be masked by the noise floor very early in time. Taking into account the noise floor level is essential for the decay time estimation of these partials (see [26, Figure 5]). For the sake of simplicity, we use neither heterodyne filtering nor sinusoidal modeling (SM) analysis in the comparisons shown in this section. Instead, we can resort to the frequency-zooming procedure itself. The amplitude envelope curves of each partial are obtained directly from the evolution of the signal magnitude within each subband. Note that we are dealing with narrow subbands (bandwidth of about 70 Hz) and that each subband isolates a given partial. Therefore, the so-attained envelope curves will approximate well the curves that would result from either the heterodyne filtering or the SM analyses. The latter, however, would provide smoother curves. Yet, they would inevitably be lower-bounded by the average amplitude of the noise floor.
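For reference, the slope-fitting estimate itself can be sketched as follows, operating on the magnitude envelope of a complex subband signal z such as the one produced inside the fz_poles sketch above; the helper name and the fitting window are ours.

```python
import numpy as np

def decay_time_from_envelope(z, fs_zoom, fit_s=0.5):
    """Fit a line to the dB magnitude envelope of a complex subband
    signal and convert the slope to a decay time. Only meaningful while
    the envelope stays above the noise floor."""
    n = min(int(fit_s * fs_zoom), len(z))
    env_db = 20 * np.log10(np.abs(z[:n]) + 1e-12)
    t = np.arange(n) / fs_zoom
    slope = np.polyfit(t, env_db, 1)[0]      # slope in dB per second
    # an e^(-t/tau) envelope decays 20/ln(10) ~ 8.686 dB per tau seconds
    return -8.686 / slope if slope < 0 else np.inf
```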
As an example, we compare the analysis of two highfrequency partials (6th and 13th) of the string tone devised in Section 4.1. These high-order partials are chosen on purpose to illustrate the effect of the corrupting noise on the amplitude envelope curves. Figure 5 compares the envelope curves of the featured partials in 3 conditions: noiseless tone (thinner solid line), noisy signal with SNR = 0 dB (dashdotted line), and modeled signal based on the noisy target (thicker solid line). From Figure 5, it becomes evident that, for the noisy signal, decay time estimation of the partials via slope fitting is impractical. On the contrary, the FZ-ARMA modeling is capable of properly estimating the decay time of the slowest decaying or the most prominent partial mode. Note that we are primarily interested in the slope of the envelope curve. The upward bias, which is observed in the envelopes of the modeled signals, occurs due to the difference in power between the clean and the noisy version of the signal.
The frequency-zooming procedure per se accounts for a significant improvement in the value of the SNR. For instance, if the target signal is a single complex exponential immersed in white noise, the improvement in SNR due to the zooming will be 10 log10(K_zoom) dB, that is, about 28 dB for K_zoom = 600. Of course, an even bigger SNR improvement can be achieved by FFT-based analysis. This comes from the fact that tracking a single frequency bin in the DFT domain (preferably refined by parabolic interpolation) implies analysis within a much narrower bandwidth than the frequency-zooming scheme. However, the improvement in the SNR is not the main issue here: even this larger SNR improvement does not prevent the amplitude envelope from being lower-bounded by the noise floor level after some time.
The key point here is that fitting a parametric model to the partial signals allows capturing their intrinsic temporal structure, even in noisy conditions. Moreover, the resonance features are derived from the model parameters rather than from a simple curve fitting process. As a consequence, a further improvement in the SNR is achieved, culminating in more reliable estimates for the decay time of the partials. Of course, the corrupting noise tends to degrade and bias the estimated models. Thus, any improvement in the SNR before the modeling stage is welcome. The frequency zooming helps in this matter as well.
Comparison against ESPRIT method
One could also think of applying other high-resolution spectral analysis methods to the subband signals. For instance, Laroche has used the ESPRIT method [20,22] to analyze modes of isolated partials of clean piano tones. Just for comparison purposes, we repeat the experiments conducted in Section 4.1.3 using the ESPRIT method [22,45]. More precisely, we employ the frequency-zooming procedure as before, but replace the ARMA modeling with the ESPRIT method as a means to analyze the subband signals.
In the ESPRIT method, we have to set basically three parameters: the length of the signal to be analyzed, N; the a priori estimate of the number of complex exponentials in the signal, M; and the pencil parameter, M ≤ P_pencil ≤ N − M. An analysis of the noise sensitivity of the ESPRIT method has been conducted in [45] for single complex exponentials in noise. It revealed that P_pencil = N/3 or P_pencil = 2N/3 are the best choices for the pencil parameter in order to minimize the effects of the noise on the exponential estimates. Furthermore, as highlighted in [20], overestimating M is harmless and even desirable to avoid biased frequency estimates. The ESPRIT method outputs M complex eigenvalues from which the frequency and decay time of M exponentials can be derived. As M is usually overestimated, a pruning scheme has to be employed to select the most prominent exponentials. In our experiments, we take only the two exponentials with the largest decay times.
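For orientation, a compact least-squares ESPRIT can be written as below. This is our simplified stand-in for the implementations of [22, 45], not the exact algorithm used in the paper: it builds a Hankel data matrix from the (zoomed, complex) subband signal, extracts the M-dimensional signal subspace by SVD, and solves the shift-invariance equation for the poles.

```python
import numpy as np

def esprit(z, M, P_pencil):
    """Least-squares ESPRIT sketch: estimate the poles of M complex
    exponentials in noise from signal z, with pencil parameter P_pencil."""
    N = len(z)
    K = N - P_pencil                     # rows of the data matrix
    # Hankel matrix whose columns are delayed length-K snapshots
    H = np.column_stack([z[i:i + K] for i in range(P_pencil + 1)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :M]                        # signal subspace
    # shift invariance: Us[1:] ~= Us[:-1] @ Phi; eig(Phi) are the poles
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    return np.linalg.eigvals(Phi)
```

The decay times and frequencies then follow from the pole magnitudes and angles at the decimated rate, with the same fullband remapping used for the ARMA poles.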
According to the results of our simulations, the performances of the ESPRIT and ARMA methods are equivalent for estimating the frequencies of the resonant modes. For instance, as regards the frequency estimates, the maximum relative errors measured for the tone with SNR = 0 dB were 0.19 and 0.11, respectively, for the ESPRIT and ARMA methods. In this particular example, FZ-ARMA(3, 3) models were used, whereas the parameter values adopted in the ESPRIT method were N = 295, P_pencil = 98, and M = 20. The situation is different when it comes to the decay time estimates. It seems that the accuracy of these estimates is very dependent on the choice of the pencil parameter. For instance, when dealing with noisy signals, setting P_pencil = M yields underestimated values of decay time. On the contrary, increasing the value of P_pencil tends to produce overestimated values of decay time. According to the results of our experiments, this is also the case if P_pencil = N/3 is chosen. Figure 6 confronts the reference values of the decay time against the estimates obtained through the ESPRIT method with M = 20 and P_pencil = 98. It can be clearly seen that the decay times are substantially overestimated, even for moderate levels of SNR. Interestingly enough, repeating the experiments with P_pencil = M = 20 yields better results, as can be seen in Figure 7. In this case, the estimates are much more accurate than those obtained with P_pencil = 98. Notwithstanding, these estimates are still worse than those drawn from the poles of the FZ-ARMA(3, 3) models fitted to the subband signals, as one can verify from Figure 4. Therefore, we stick to the FZ-ARMA modeling in the following experiments.
Discussion
Carrying out systematic performance comparisons among the addressed methods of decay time estimation is outside the scope of this work. Including such comparisons would demand not only covering a broader range of situations and examples, but also precise description of the algorithms and the calibration of their associated processing parameters. Besides, comparisons between FFT-based schemes of spectral analysis, such as the SM technique, and parametric approaches are not fair. Sticking to comparisons among parametric methods of spectral analysis would necessarily include other techniques than just the ARMA and ESPRIT methods. The comparisons shown in Section 4.1.4 are basically meant to highlight the situations in which STFT-based methods for decay time estimation are prone to failure. A presumed goal is to motivate the need for alternative solutions to decay time estimation in noisy conditions.
As for the performance comparisons between the ARMA and the ESPRIT methods, they were conducted after the frequency-zooming stage in order to keep equal conditions. Yet, the performance results can depend significantly on the choice of the processing parameters. This fact is clearly verified by comparing the results shown in Figures 6 and 7. Moreover, translating the parameters of one method into those of the other may not be straightforward. Due to the aforementioned reasons, we restrict the comparisons to a single case study. Rather than tabulating the attained performances, we believe that visual assessment on Figures 4, 6, and 7 offers more effective means of drawing conclusions on the results.
In summary, the STFT-based schemes are appropriate for decay time estimation of the partials when the partials show a monotonic, exponential decay and when the measurement noise is low. If the noise component is prominent, reliable decay time (and frequency) estimation of the high-order partials will be prevented. For both of the parametric methods tested, and under the setups adopted, reliable frequency estimation for the partials of noisy tones is attained. As regards the decay time estimation in noisy conditions, the ARMA analysis performs better in general than the ESPRIT method. Now, we comment specifically on the analysis results of the noisy tone with SNR = 20 dB. The ESPRIT method seems to overestimate the decay times as the value of the pencil parameter increases. Adopting the minimum value for the pencil parameter yielded the best results. Yet, the ESPRIT analysis underestimates the decay times of the low-order partials. This is critical from the perceptual point of view, especially if one aims at resynthesizing a new tone based on the analyzed data. For the high-order partials, however, the ESPRIT-based decay time estimates seem to converge with low variance to the decay time of the slowest resonance mode. In contrast, there are more outliers in the decay time estimates attained via the ARMA analysis. Nevertheless, the ARMA analysis seems to do a better job in properly segregating the estimates into two distinct resonance modes.
Finally, when it comes to choosing the most appropriate technique, many variables should be considered. Examples of such variables are the characteristics of the problem at hand and the aimed objectives, the effectiveness of the available tools in performing the targeted task, and the available computational resources. The latter issue, although important, does not fit to the profile of this paper. Therefore, discussions on the computational complexity of the tested methods are not included.
Experiments on recorded string instrument tones
In this section, we follow the same methodology used in Sections 4.1.2 and 4.1.3 to analyze recorded tones of real-world string instruments. Here, we do not have a set of reference values for the decay times of the partials. Nevertheless, based on the results obtained for the synthetic tone, we can assume that the FZ-ARMA modeling of an originally clean tone provides correct estimates for the decay time of the partial modes. Then, this set of values can be taken as a reference.
For this experiment, we selected a clean classical guitar tone A2 ( f p = 109.97 Hz, softly plucked open 5th string), which was recorded in anechoic conditions. Three noisy versions of this tone, with SNR = 60, SNR = 40, and SNR = 20 dB, respectively, were generated by adding zero-mean white Gaussian noise to the clean tone. The noise variance was adjusted as to produce the desired SNR during the attack part of the tone (about 20 milliseconds starting from the maximum amplitude).
The first step of the analysis procedure is to obtain an estimate of the fundamental frequency of the noisy tone. This estimate is the starting point for the choices of the bandwidth of the subbands and the modulation frequencies to be used in the FZ-ARMA analysis. The fundamental frequency of the tone with SNR = 20 dB was estimated to be f_p = 110.25 Hz, which is not far from that of the clean tone. Thus, by following the guidelines stated in Section 3.1.2, we can proceed toward analyzing the higher partials of both the clean and the noisy tones. The parameters used in the FZ-ARMA analysis were K_zoom = 600, f_m = f_ν − f_s,zoom/8, and FZ-ARMA(3, 3) models. This time, only the decay time of the slowest decaying mode of each partial was extracted.
The results of this experiment are displayed in Figure 8. The solid line curves correspond to the estimated values of decay time based on the original clean tone. On the other hand, the circles show the corresponding estimated values based on the noisy tones with indicated SNRs. From Figure 8, we observe that, even for the tone with SNR = 20 dB, the FZ-ARMA analysis provides reliable decay time estimates, especially for the low-frequency partials.
Digital waveguide synthesis
We have seen in Section 4 that the FZ-ARMA modeling can be used as an analysis tool, aiming at estimating the parameters associated with the resonances of the tone partials. Thus, based on the set of frequencies and decay times estimated for each partial, one could design a DWG model to resynthesize the tone.
More interestingly, the FZ-ARMA modeling allows estimating more than one frequency and decay time per partial. Thus, one can consider using this information to design the filters of a multipolarization DWG model, such as the dual-polarization DWG model shown in Figure 2. As in source-filter synthesis, in DWG-based synthesis, the excitation signal is in charge of controlling the initial phase and amplitude of the resonance modes. In this work, however, we will not tackle the attainment of suitable excitation signals but we concentrate more on the calibration of the string models.
Calibrating a multipolarization DWG model based on the estimated parameters of the partial modes is a difficult task, especially when dealing with real-world recorded tones immersed in noise. This is mainly due to the high variance exhibited in the estimates of decay time of the partial modes.
In contrast to what is seen in the analysis results of the synthetic tone shown in Section 4.1.2, the decay times of the partial modes estimated from a recorded tone cannot be easily discriminated into two or more distinct classes. Thus, deciding which partial mode belongs to which polarization turns out to be a difficult nonlinear optimization problem. We leave this topic for future research and stick to the calibration of the one-polarization DWG model.
Calibration of one-polarization DWG model from noisy tones
We start with an example in which the target signal is the corrupted version (SNR = 20 dB) of the recorded guitar tone featured in Section 4.2. From the FZ-ARMA analysis of this tone, we obtained estimates for the frequency and decay time of the partial modes. Then, the specification for the magnitude of the loop filter at the partial frequencies can be obtained by

G(f_ν) = e^(-1/(f_1 τ_ν)),    (12)

where ν is the partial index, f_ν are the frequencies of the partials in Hz, τ_ν are the corresponding decay times in seconds, and f_1 is the fundamental frequency (each partial circulates the single delay loop once per fundamental period).
As the sequence of estimated decay times, which was based on the corrupted signal, seems to have a couple of outliers, it was first median filtered using a three-sample window. The values of τ_ν that result from the filtered sequence are then used in (12).
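A sketch of this step, assuming the single-delay-loop relation given above in (12) (function and variable names are ours):

```python
import numpy as np
from scipy.signal import medfilt

def loop_gain_spec(tau, f1):
    """Median-filter the decay-time sequence with a three-sample window
    and return the target loop-filter magnitudes at the partial
    frequencies, G = exp(-1 / (f1 * tau))."""
    tau_smooth = medfilt(np.asarray(tau, dtype=float), kernel_size=3)
    return np.exp(-1.0 / (f1 * tau_smooth))
```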
The specification of the loop filter within the frequency range above the frequency of the 40th partial is devised artificially. We fit a −6 dB per octave slope to the magnitude specification points associated with the highest 10 partials and extrapolate the curve up to the Nyquist frequency. To design a loop filter that approximates this extended specification, we resort to the IIR design method proposed in [46,47]. Figure 9 shows the results obtained by approximating the specified (smoothed) magnitude response of the loss filter via an 8th-order IIR lowpass filter.
We could also think of designing a dispersion filter for the DWG model. In this case, the specification for phase response of the allpass dispersion filter could be based on the estimated frequencies of the partials in a similar manner to what was done in [48,49]. However, for the noisy tone under study, the variance observed in these estimates prevented one from obtaining any meaningful specification for the dispersion filter.
CONCLUSION
In this paper, a spectral analysis technique based on FZ-ARMA modeling was applied to string instrument tones. More specifically, the method was used to analyze the resonant characteristics of isolated partials of the tones. In addition, analyses performed on noisy tones demonstrated that the FZ-ARMA modeling turns out to be a robust tool for estimating the frequencies and decay times of the partial modes, despite the presence of the corrupting noise. Comparisons between the estimates attained by FZ-ARMA modeling and those obtained via the ESPRIT method revealed a superior performance of the former method when dealing with noisy tones. Finally, the paper discussed the use of FZ-ARMA modeling in sound synthesis. In particular, the calibration of a DWG guitar synthesizer was successfully carried out based on FZ-ARMA analysis of a recorded guitar tone, which was artificially corrupted by zero-mean white Gaussian noise.
Combining Unmanned Aerial Vehicle (UAV)-Based Multispectral Imagery and Ground-Based Hyperspectral Data for Plant Nitrogen Concentration Estimation in Rice
Plant nitrogen concentration (PNC) is a critical indicator of N status for crops, and can be used for N nutrition diagnosis and management. This work aims to explore the potential of multispectral imagery from unmanned aerial vehicle (UAV) for PNC estimation and improve the estimation accuracy with hyperspectral data collected in the field with a hyperspectral radiometer. In this study we combined selected vegetation indices (VIs) and texture information to estimate PNC in rice. The VIs were calculated from ground and aerial platforms and the texture information was obtained from UAV-based multispectral imagery. Two consecutive years (2015 & 2016) of experiments were conducted, involving different N rates, planting densities and rice cultivars. Both UAV flights and ground spectral measurements were taken along with destructive samplings at critical growth stages of rice (Oryza sativa L.). After UAV imagery preprocessing, both VIs and texture measurements were calculated. Then the optimal normalized difference texture index (NDTI) from UAV imagery was determined for separated stage groups and the entire season. Results demonstrated that aerial VIs performed well only for pre-heading stages (R2 = 0.52–0.70), and photochemical reflectance index and blue N index from ground (PRIg and BNIg) performed consistently well across all growth stages (R2 = 0.48–0.65 and 0.39–0.68). Most texture measurements were weakly related to PNC, but the optimal NDTIs could explain 61 and 51% variability of PNC for separated stage groups and entire season, respectively. Moreover, stepwise multiple linear regression (SMLR) models combining aerial VIs and NDTIs did not significantly improve the accuracy of PNC estimation, while models composed of BNIg and optimal NDTIs exhibited significant improvement for PNC estimation across all growth stages. Therefore, the integration of ground-based narrow band spectral indices with UAV-based textural information might be a promising technique in crop growth monitoring.
INTRODUCTION
Nitrogen (N) is one of the most important elements for crop growth. In order to ensure high yield, excess N fertilizer is often applied to the field, which results in severe N leaching and environmental pollution (Ju et al., 2006; Li et al., 2007). Therefore, precision N management is urgent and essential, and might bring significant economic and environmental benefits. Precision N status monitoring is a prerequisite for determining the optimal N rate. The traditional method for monitoring crop N status is destructive sampling followed by chemical analysis, which is tedious and time-consuming. As a nondestructive alternative, remote sensing techniques have been applied to monitor N status over the past several decades (Filella et al., 1995; Tarpley et al., 2000; Hansen and Schjoerring, 2003; Zhu et al., 2007; Stroppiana et al., 2009; Inoue et al., 2012; Yao et al., 2015; Sun et al., 2017).
Crop N concentration estimation with remote sensing has been studied widely (Table 1), and the majority of studies used ground-based hyperspectral reflectance. Vegetation indices (VIs) were commonly used to estimate crop leaf/plant N concentration (LNC/PNC), and new VIs were proposed to improve estimation accuracy (Stroppiana et al., 2009; Tian et al., 2011; Wang et al., 2012). One of the early studies found that the red edge and near-infrared ratio performed best in cotton LNC estimation among all the combinations of 20 spectral bands (Tarpley et al., 2000). Matrix plots were commonly used to find the best-performing normalized difference vegetation index (NDVI) or ratio vegetation index (RVI) among thousands of wavelength combinations. For example, Zhu et al. (2007) found that the combination of 1,220 and 610 nm as either a simple ratio (SR) or a normalized difference index (NDI) performed best in LNC estimation of rice and wheat crops. Tian et al. (2013) reported that SR(R_553, R_537) was the optimal combination for rice LNC estimation. Stroppiana et al. (2009) proposed an optimal normalized difference index [NDI_opt = (R_553 - R_483)/(R_553 + R_483)], which was strongly correlated with rice PNC (R² = 0.65), but least correlated with leaf area index (LAI) and aboveground biomass. For PNC estimation in winter wheat, the optimal NDVI or RVI was composed of reflectance at 400 and 370 nm (Li et al., 2010b). Furthermore, Tian et al. (2011) proposed two new three-band spectral indices [R_434/(R_496 + R_401) and R_705/(R_717 + R_491)] to estimate rice LNC with hyperspectral reflectance data, and these two indices significantly outperformed other existing VIs in LNC estimation. Similarly, (R_924 - R_703 + 2×R_423)/(R_924 + R_703 - 2×R_423) was proposed with hyperspectral data and proved to be significantly related to LNC of both rice and wheat crops (Wang et al., 2012).
Of the aforementioned studies, the majority focused on crop LNC and only a limited number addressed plant N concentration (PNC), which has been taken as an effective indicator of crop N status. When actual PNC is compared to the critical N concentration at the corresponding biomass level, the N nutrition index (NNI) can be obtained for determining crop N nutrition status and guiding N applications for a target yield (Lemaire et al., 2008; Zhao et al., 2016; Ata-Ul-Karim et al., 2017). Therefore, precise PNC estimation is critical and useful for in-season site-specific N management.
Ground-based spectral data have been used to estimate crop PNC, but the estimation accuracy is not satisfactory (Stroppiana et al., 2009; Li et al., 2010b), especially with a multispectral sensor (Li et al., 2010a; Cao et al., 2013). Canopy reflectance is dominated by leaves and hardly captures the signal of stems and panicles (after the heading stage), whereas PNC integrates leaf, stem, and panicle N concentrations; canopy reflectance therefore struggles to explain the variation in PNC. Moreover, ground-based platforms are often limited by low spatial coverage and unfavorable weather conditions.
Recently, unmanned aerial vehicles (UAVs) have offered particular advantages over other remote sensing platforms, with a high spatial resolution, a spectral resolution adapted to a specific purpose (here PNC estimation), and an appropriate revisit time. UAVs have been applied in many aspects of crop growth monitoring, as summarized in Yang et al. (2017), but few studies on rice N status monitoring can be found. Canopy structural variables (e.g., LAI and biomass) may greatly influence the interaction between leaves and radiation, masking the signal of N status and thus making it difficult to estimate N concentration (Stroppiana et al., 2009). Furthermore, previous studies have reported that the ultraviolet, violet, and blue regions are consistently important for PNC estimation (Stroppiana et al., 2009; Li et al., 2010b). However, bands from these regions are generally missing from current UAV-based sensors. Hunt et al. (2005) found that crop N nutrition status could not be detected with UAV RGB imagery. Lebourgeois et al. (2012) used two sensors (RGB and NIR-G-B cameras) mounted on a UAV to detect N status in sugarcane and found the best correlation of LNC with a broadband version of the simple ratio pigment index (SRPI_b) (R² = 0.7) among all indices examined. Furthermore, Schirrmann et al. (2016) found that the ratio of the red and green channels from UAV RGB imagery correlated well (R² = 0.68) with PNC in winter wheat, but only for the heading stage. Liu et al. (2017) successfully used UAV imagery to estimate LNC in winter wheat, at the cost of a hyperspectral camera. Whether rice PNC can be estimated with UAV multispectral imagery at multiple stages remains to be addressed.
Texture is an important characteristic used to identify objects or regions of interest in any image (Haralick et al., 1973), and it has been commonly used for image classification (Laliberte and Rango, 2009; Murray et al., 2010). Since the beginning of the twenty-first century, texture from satellite imagery has been used to estimate aboveground biomass, but only for forests (Lu and Batistella, 2005; Sarker and Nichol, 2011; Kelsey and Neff, 2014). UAV imagery takes advantage of ultra-high spatial resolution, which means that texture is also an important source of information (Podest and Saatchi, 2002; Dell'Acqua and Gamba, 2003). However, texture in UAV imagery has rarely been used for crop growth monitoring. In addition, whether combining ground hyperspectral data could compensate for the limited bands of UAV sensors and improve the estimation accuracy of PNC is worth exploring. Therefore, the objectives of this study were (i) to explore the capability of UAV-based multispectral imagery for rice PNC estimation with spectral and textural information, and (ii) to improve PNC estimation accuracy by combining ground hyperspectral data and UAV imagery.
Experimental Designs
Two consecutive years' experiments were conducted at the experimental station of the National Engineering and Technology Center for Information Agriculture (NETCIA) located in Rugao, Jiangsu province, China (120°45′E, 32°16′N). The predominant soil type was loam and the organic carbon concentration in the soil was 12.95 g kg−1. The annual average temperature, number of precipitation days, and precipitation were about 14.6 °C, 121.3, and 1,055.5 mm, respectively. In 2015, two rice (Oryza sativa L.) cultivars were planted with four levels of nitrogen fertilizer (0 (N0), 100 (N1), 200 (N2), and 300 (N3) kg N ha−1 as urea). The treatments with minimum and maximum N rates (N0 and N3) were planted at one density (22 plants m−2) and the treatments with intermediate N rates (N1 and N2) were planted at two densities (13 and 22 plants m−2). The experiment was organized in 36 plots (5 × 6 m for each plot) with a completely randomized block design (Figure 1). In 2016, the experiment was similar to the former, with the same rice cultivars. Two rice cultivars were planted at two densities (13 and 22 plants m−2) with three levels of nitrogen fertilizer (0 (N0), 150 (N1), and 300 (N2) kg N ha−1 as urea). In these two experiments, other field management practices followed the local production standards.
Ground Sampling and N Concentration Determination
Ground destructive samplings were taken along with the UAV campaigns at rice critical growth stages (Table 2). Three hills of rice plants were randomly selected from the sampling region of each plot and separated into different organs (leaf, stem, and panicle). All the samples were oven-dried for 30 min at 105 °C and later at 80 °C to a constant weight, then weighed, ground, and stored in plastic bags for chemical analysis. The total N content in different organs was determined with the micro-Kjeldahl method (Bremner and Mulvaney, 1982). The plant N concentration was calculated as:

PNC = (L_W × L_N + S_W × S_N + P_W × P_N) / (L_W + S_W + P_W)    (1)

where L_W, S_W, and P_W are the dry weights of leaf, stem, and panicle samples, respectively, and L_N, S_N, and P_N are the N concentrations of leaf, stem, and panicle samples, respectively.
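A one-line sketch of Equation (1) (argument names are illustrative):

```python
def plant_n_concentration(l_w, s_w, p_w, l_n, s_n, p_n):
    """PNC as the dry-weight-weighted mean of leaf, stem, and panicle
    N concentrations (Equation (1)); weights in g, concentrations in %."""
    return (l_w * l_n + s_w * s_n + p_w * p_n) / (l_w + s_w + p_w)
```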
UAV Image Acquisition
The UAV used in this study was a multi-rotor Mikrokopter OktoXL (Zhou et al., 2017). It was equipped with a six-band multispectral (MS) camera, a 1.3-megapixel (1,280 × 1,024) Tetracam mini-MCA6 (Tetracam, Chatsworth, CA, USA), with center wavelengths of 490, 550, 680, 720, 800, and 900 nm. The angular field of view is 38.26° × 30.97°, resulting in an individual image footprint of 69 × 55 m and a nominal ground sampling distance of 0.054 m at 100 m above ground level.
Images were captured at one frame per 3 s and saved in a 10-bit RAW format. Camera settings were adjusted to lighting conditions and set to a fixed exposure for each flight. After each flight, only one image (covering all 36 plots) was selected for post analysis due to the small study area. All the flights were executed in stable ambient light conditions between 11:00 a.m. and 1:30 p.m.
Field Spectral Measurements
Rice canopy spectral reflectance was collected with an ASD FieldSpec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA) with a 25° field of view. The spectral range was 350–2,500 nm, with a 1.4 nm sampling interval between 350 and 1,050 nm and a 2 nm sampling interval between 1,000 and 2,500 nm. All the spectral measurements were taken at a height of 1.0 m above the rice canopy from 11:00 a.m. to 1:00 p.m.
UAV Imagery Processing
UAV images were processed in the IDL/ENVI environment (Exelis Visual Information Solutions, Boulder, Colorado, USA) and the image preprocessing workflows followed Zhou et al. (2017). Band registration was then performed with 25 ground control points (GCPs) to obtain an image with six spectral bands. Radiometric correction was conducted with an empirical line correction method (Smith and Milton, 1999; Zhou et al., 2017) using six flat calibration canvases of different reflectance intensities (Figure 1). The reflectance of each plot was represented by the average of reflectance values over the non-sampling area of the plot.
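A minimal sketch of the empirical line method for one band, assuming the mean digital numbers over the six calibration canvases and their known reflectances are available (all names are ours):

```python
import numpy as np

def empirical_line(dn_band, canvas_dn, canvas_reflectance):
    """Fit a least-squares line through the (DN, reflectance) pairs of
    the calibration canvases and apply it to the whole band."""
    gain, offset = np.polyfit(canvas_dn, canvas_reflectance, 1)
    return gain * dn_band + offset
```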
Calculation of Vegetation Indices
In this study, canopy spectral reflectance acquired from the aerial and ground platforms was used to calculate a number of vegetation indices (Table 3), which have been reported to correlate well with N or chlorophyll concentration. Because the multispectral images had only six spectral bands, only NDVI, CI_G, CI_RE, OSAVI, and VI_opt were calculated from the UAV imagery.
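As an illustration, the five UAV-computable VIs could be evaluated as below; the formulas follow their usual published definitions (OSAVI after Rondeaux et al., VI_opt after Reyniers et al.) and should be checked against Table 3 of the source:

```python
def uav_vis(r490, r550, r680, r720, r800):
    """Vegetation indices from per-plot mean reflectances of the
    six-band camera (a sketch; band-to-formula mapping assumed)."""
    ndvi = (r800 - r680) / (r800 + r680)
    ci_g = r800 / r550 - 1.0                           # green chlorophyll index
    ci_re = r800 / r720 - 1.0                          # red-edge chlorophyll index
    osavi = 1.16 * (r800 - r680) / (r800 + r680 + 0.16)
    vi_opt = 1.45 * (r800 ** 2 + 1.0) / (r680 + 0.45)
    return ndvi, ci_g, ci_re, osavi, vi_opt
```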
Texture Analysis
The gray level co-occurrence matrix (GLCM) is the most commonly used texture algorithm (Haralick et al., 1973) and was employed to test the potential of texture analysis of UAV images for PNC estimation. After the radiometric correction, eight GLCM-based texture measurements [mean (MEA), variance (VAR), homogeneity (HOM), contrast (CON), dissimilarity (DIS), entropy (ENT), second moment (SEM), and correlation (COR)] were computed with a 3 × 3 pixel window in the 45° direction using the ENVI software. Texture analysis was performed on five bands, excluding 900 nm, due to the close correlation between the reflectance of the two near-infrared bands (data not shown).
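The paper computed these measurements in ENVI with a 3 × 3 sliding window; the sketch below is a simplified per-region variant using scikit-image, with MEA, VAR, and ENT derived directly from the normalized matrix since not all library versions expose them as named properties:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band, levels=64):
    """GLCM texture measurements for one band of a plot region,
    at a one-pixel offset in the 45-degree direction (a sketch)."""
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[np.pi / 4], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    i = np.arange(levels)[:, None]
    feats = {name: float(graycoprops(glcm, name)[0, 0])
             for name in ('contrast', 'dissimilarity', 'homogeneity',
                          'correlation', 'ASM')}
    feats['mean'] = float((i * p).sum())
    feats['variance'] = float(((i - feats['mean']) ** 2 * p).sum())
    feats['entropy'] = float(-(p[p > 0] * np.log(p[p > 0])).sum())
    return feats
```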
Compared with raw spectral reflectance data, VIs have been shown to reduce the influence of canopy geometry, soil background, illumination angles, and atmospheric conditions when estimating biophysical properties (Tucker, 1979; Huete et al., 1985). We therefore assumed that a texture index based on the ratio or normalization of texture measurements might have the same function. A normalized difference texture index, NDTI = (T_1 − T_2)/(T_1 + T_2), was thus proposed, where T_1 and T_2 are any two texture measurements from the five bands. In order to select an appropriate texture combination, the correlation between PNC and NDTI was tested using all possible combinations of texture measurements.
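A sketch of that exhaustive search (data layout assumed: one texture value per plot for each band/measure pair):

```python
from itertools import combinations
from scipy.stats import linregress

def best_ndti(textures, pnc):
    """Score every unordered pair (T1, T2) of texture series by the R^2
    of a simple linear regression of PNC on NDTI = (T1-T2)/(T1+T2)."""
    best_pair, best_r2 = None, -1.0
    for k1, k2 in combinations(textures, 2):
        t1, t2 = textures[k1], textures[k2]
        ndti = (t1 - t2) / (t1 + t2)
        r2 = linregress(ndti, pnc).rvalue ** 2
        if r2 > best_r2:
            best_pair, best_r2 = (k1, k2), r2
    return best_pair, best_r2
```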
Statistical Analysis
The data collected from the two-year experiment were pooled to examine the relationships of PNC with VIs, NDTIs, and their combinations using simple linear regression (SLR) and stepwise multiple linear regression (SMLR). In order to simplify the estimation model, the number of variables in the multiple linear regression (MLR) models was set to no more than two. The statistical analysis was executed in GraphPad Prism (GraphPad Software Inc., San Diego, CA, USA, 1996) and SPSS 20.0 software (SPSS Inc., Chicago, IL, USA, 2002).
The established models were validated with all the data using a k-fold (k = 10) cross-validation procedure, and evaluated by the differences in the root mean square error (RMSE) and the relative RMSE (RRMSE). The RMSE and RRMSE were calculated using Equations (2) and (3), respectively:

RMSE = sqrt((1/n) × Σ (P_i − O_i)²)    (2)
RRMSE = (RMSE / Ō) × 100    (3)

where O_i, P_i, and Ō are the observed, predicted, and mean values of rice PNC, respectively, and n is the number of samples.
Performance of Spectral Vegetation Indices
Table 4 shows the simple linear relationships between PNC and VIs from the two platforms. For aerial VIs, NDVI_a and OSAVI_a exhibited moderate performance, while CI_G-a and CI_RE-a performed equally well and best among all VIs for pre-heading stages. For post-heading stages and the entire season, all aerial VIs were weakly related to PNC, with the highest R² of 0.28 and 0.14, respectively.
For ground VIs, the majority of VIs performed well only for pre-heading stages. OSAVI_g and VI_opt-g showed no significant difference in PNC estimation before or after the heading stage, while PRI_g and BNI_g exhibited equal performance across all growth stages. Compared with VIs from the ground-based platform, aerial VIs performed better for pre-heading stages and worse for post-heading stages and the entire season (Figure 2).
Performance of Texture Features and Texture Indices
The relationships between PNC and texture measurements of all spectral bands were found to be poor across all growth stages, though stronger correlations were observed with MEA_800 (R² = 0.51), MEA_800 (R² = 0.41), and HOM_720 (R² = 0.42) for pre-heading stages, post-heading stages, and the entire season, respectively (Supplementary Table 1). Compared with individual texture measurements, NDTIs performed significantly better in PNC estimation across all growth stages (Table 5). NDTI1, composed of MEA_800 and MEA_720, performed best in PNC estimation for the pre-heading stages. For post-heading stages, the top eight best-performing NDTIs were mainly composed of texture measurements from the red-edge and near-infrared bands. The top one was NDTI9, with MEA_800 and DIS_720, explaining 61% of the variability of PNC for the post-heading stages. Similar to the result for post-heading stages, the NDTIs showing a close relationship with PNC for the entire season were all composed of texture measurements at 720 and 800 nm. NDTI17 could explain 50% of the variability of PNC, which was superior to other NDTIs.
Performance of VI and NDTI Combinations
Table 6 shows the best performance of SMLR models combining VIs and NDTIs. Combining NDTIs and aerial VIs, SMLR models did not show significant improvement in comparison to the optimal VI or NDTI with SLR models across all growth stages. The optimal model for pre-heading stages was still composed of CI_RE-g with SLR, while the MLR models for post-heading stages and the entire season all consisted of the top two best-performing NDTIs.
However, when combining NDTIs and ground-based VIs, the performance of MLR models improved significantly across all growth stages. Interestingly, all the models consisted of the optimal NDTI and BNI, explaining 72, 73, and 75% of the variability of PNC for pre-heading stages, post-heading stages, and the entire season, respectively. Therefore, the combination of ground-based VIs and NDTIs with MLR models can be taken as an efficient approach to PNC estimation.
Model Validation
All the regression models were cross-validated with all data, and the best-performing VI from each platform, texture index, and MLR models are shown in Figures 3–5 for the different stage groups. For pre-heading stages, all the selected models had similar performance and the MLR models showed minor advantages (Figure 3). The highest estimation accuracy (RMSE = 0.16 and RRMSE = 10.92%) was obtained by model-4, composed of NDTI1 and BNI_g. For post-heading stages, NDTIs exhibited higher estimation accuracy than VIs (Figure 4B). Significant improvements were achieved by MLR models, and model-2 produced the lowest RMSE and RRMSE (Figure 4E), followed by model-5, consisting of NDTI9 and BNI_g (Figure 4F). For the entire season, PRI_g and BNI_g performed equally well and were superior to other VIs and NDTIs (Figures 5C,D). However, compared with these two VIs, Model-6, combining BNI_g and NDTI17, yielded higher estimation accuracy with RMSE and RRMSE of 0.17 and 13.49%, respectively (Figure 5F).
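For reference, a sketch of the validation loop (scikit-learn based; names ours) that pools the out-of-fold predictions and evaluates Equations (2) and (3):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cross_validate(X, y, k=10):
    """k-fold cross-validation of a (multiple) linear regression model,
    returning RMSE and RRMSE computed from pooled fold predictions."""
    preds = np.empty_like(y, dtype=float)
    for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        preds[test] = LinearRegression().fit(X[train], y[train]).predict(X[test])
    rmse = np.sqrt(np.mean((preds - y) ** 2))      # Equation (2)
    rrmse = rmse / np.mean(y) * 100.0              # Equation (3)
    return rmse, rrmse
```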
Different Performance of VIs From Two Platforms
In this study, counterpart VIs from UAV imagery performed better than those from the ground, but only for pre-heading stages (Figure 2). This might be caused by the variation in reflectance extracted from different sampling sizes. For the UAV MS imagery, reflectance was extracted from the non-sampling area (around 12 m²) within each plot, whereas the field of view of the ASD spectrometer placed at 1 m above the canopy was a circle with a diameter of approximately 0.22 m (around 0.15 m²). For post-heading stages, the canopy was more homogeneous and the ground-based VIs outperformed the aerial VIs in PNC estimation.
The best-performing VI before the heading stage was CI_RE-a, which was expected and agrees with the findings of Li et al. (2010b). At the early growth stages, biophysical parameters (e.g., biomass, LAI) varied greatly and masked the contribution of chlorophyll and N to the canopy reflectance (Haboudane et al., 2002); thus, VIs composed of red-edge and NIR bands performed better than other indices (Table 4). However, aerial VIs had weak capability in PNC estimation for post-heading stages and the entire season, because those VIs, which are sensitive to canopy structure, became saturated at high biomass levels and could hardly track N status. Furthermore, the ground-based VIs performing consistently well in PNC estimation across all growth stages were composed of blue and green bands (Stroppiana et al., 2009; Yu et al., 2013). UAV-based multispectral cameras are equipped with a limited number of broad bands and thus cannot provide those N-concentration-specific VIs. Hunt et al. (2005) also found that UAV RGB imagery could not be used to detect crop nutrient status due to the improper bands.
Ground hyperspectral data benefit from abundant spectral bands and narrow bandwidths, and thus offer more options for VI computation. In this study, PRI_g and BNI_g exhibited good performance across all growth stages, because they were computed with blue and green bands that are specifically sensitive to N concentration, which is consistent with the findings of Stroppiana et al. (2009) and Yu et al. (2013). However, the highest correlation between PNC and a ground VI was not satisfactory for post-heading stages, with R² < 0.50. This might be because the presence of panicles changed the plot structure and affected the spectral signature (Gnyp et al., 2014). In a UAV-based grain yield prediction study, Zhou et al. (2017) also found that the estimation accuracy of grain yield decreased as rice panicles emerged from the sheath at the heading stage. Therefore, it is essential to improve PNC estimation for post-heading stages and the entire season with a new data source.
Difference in Texture Features Between Stage Groups
Texture can be used as a description of the spectral feature distribution in spectral image space (Ning, 1998), which might be given a biological interpretation, as for the spectral features.
In this study we found that most texture measurements were weakly related to PNC across all growth stages (Supplementary Table 1), which corresponds well to the findings of Lu and Batistella (2005). Besides, Jin et al. (2015) found that only the MEA texture feature was useful for residue cover estimation in maize. MEA_490 and MEA_680 performed well only for pre-heading stages, because reflectance in the visible bands varies slightly at low chlorophyll content levels and saturates at high levels (Hatfield et al., 2008). As a result, the texture features from the visible bands fluctuated slightly and it was difficult to use them for detecting the variation in PNC. HOM_720 and MEA_800 performed well at late growth stages, and the majority of texture features at 720 nm were superior to other texture features for the entire season. That might be due to the fact that the reflectance at red-edge and NIR bands had a broader variation through the growing season, so the texture features from these bands could explain more variation in PNC. However, texture indices performed significantly better than individual texture measurements, which might reflect advantages similar to those of vegetation indices, which reduce the influence of canopy geometry and soil background relative to raw reflectance data (Tucker, 1979; Huete et al., 1985). Sarker and Nichol (2011) also reported that the ratio of texture parameters could improve the estimation accuracy of forest biomass. Given different stage groups, the optimal NDTI was different, because canopy structure varies as rice plants grow, and leaves dominate the canopy before the heading stage. After that, panicles emerge from the sheath, which makes the canopy reflectance more complicated due to the difference between leaf and panicle reflectance (Tang et al., 2007). Interestingly, the optimal NDTIs across all growth stages consisted of texture parameters from red-edge and NIR bands (Table 5). Since these bands are good indicators of canopy chlorophyll (Gitelson et al., 2003b, 2005), LAI, and biomass (Gitelson et al., 2003a), the NDTIs from those bands performed well in PNC estimation.
In fact, it is still complicated to select an appropriate texture configuration, involving window sizes and image bands, for a specific research topic. Although numerous studies have reported that texture features are useful in biomass (Lu, 2005; Sarker and Nichol, 2011), LAI (Wulder et al., 1998), and residue cover (Jin et al., 2015) estimation, the underlying mechanism of the selected texture measurements remains to be better understood. These questions need to be clarified in future studies.
Advantages of Combining Ground-Based Spectral Data and UAV Imagery
The combination of spectral data and texture measurements has been proposed to improve biomass (Lu, 2005; Eckert, 2012), LAI (Wulder et al., 1998), and residue cover (Jin et al., 2015) estimation with satellite data. In the present study, we found that the improvement from combining aerial VIs and NDTIs was not pronounced in PNC estimation, due to the limited bands of UAV sensors. However, the combination of ground-based VIs and NDTIs improved PNC estimation significantly across all growth stages, especially for the post-heading stages (Table 6). This is because ground-based hyperspectral data provide those VIs that are highly sensitive to N concentration. In addition, texture analysis can efficiently address the saturation problems associated with vegetation indices in dense canopies (Kelsey and Neff, 2014) and can detect variable canopy structural characteristics well (Eckert, 2012). MLR models integrating both techniques could explain 75% of the variability of PNC for the entire season with a general model, which was superior to the findings of Li et al. (2010b) and Stroppiana et al. (2009). Therefore, a combination of UAV imagery and ground hyperspectral data can be taken as an effective hybrid method for N status monitoring in rice. Future work will focus on transferring the integrative methodology presented here to the estimation of other agronomic parameters.
Implications for Future Applications
Most previous studies estimated crop PNC with ground-based hyperspectral data, but the estimation accuracy was moderate (Stroppiana et al., 2009; Li et al., 2010b). Although high accuracy of PNC estimation in rice was obtained by Yu et al. (2013), the optimal estimation model was established with six bands, which is difficult for practical application. In this study we found that CI_RE from UAV multispectral imagery could be used to estimate PNC for pre-heading stages. This indicates that UAV imagery might have potential for PNC-based N diagnosis and management before the heading stage (Ding et al., 2003; Cao et al., 2016). Texture information from UAV imagery could be useful for PNC estimation at post-heading stages, which suggests that grain yield and quality could be predicted with PNC at late growth stages. Therefore, UAV multispectral imagery could be used to estimate rice PNC with independent models for different stage groups.
Furthermore, the hybrid method combining ground-based hyperspectral data and UAV imagery could accurately estimate PNC across all growth stages. As crop growth monitoring techniques develop, multiple sensors from different platforms have been integrated to collect data (Bendig et al., 2015; Tilly et al., 2015). Additionally, UAV-based hyperspectral imaging might make this method easy to execute. Therefore, this method is feasible and offers technical support for N diagnosis and management, and for grain yield and quality prediction in the future.
CONCLUSIONS
This work showed that UAV-based multispectral imagery could be used to estimate rice PNC with spectral data only for pre-heading stages, whereas texture information from UAV imagery could be used to estimate PNC across all growth stages with moderate accuracy. PRI and BNI computed with ground-based hyperspectral data performed consistently well across all growth stages. Furthermore, the combination of ground VIs and NDTIs improved PNC estimation significantly, while the improvement with aerial VIs and NDTIs was not pronounced. Therefore, this hybrid method combining ground spectral data and UAV imagery texture information is promising for rice N status monitoring.
Future work should focus on determining optimal texture parameters involving different texture algorithms, window sizes, and spectral bands. Moreover, multi-year datasets are needed to evaluate this new hybrid method to improve its robustness and applicability. Most importantly, realizing N diagnosis and N management based on PNC with the presented method is more essential and anticipated.
FIGURE 5 | Cross-validation scatter plots of measured PNC vs. estimated PNC derived from selected models for the entire season: CI_RE-a (A), NDTI17 (B), PRI_g (C), BNI_g (D), Model-3 (E), and Model-6 (F).
AUTHOR CONTRIBUTIONS
YZ and TC designed and directed the rice trials at the experimental station of NETCIA in Rugao, China. HZ and DL conducted the field measurements and the collection of samples. HZ processed the images, analyzed the samples and wrote the paper. YZ, TC, XY, and WC gave valuable comments to the manuscript and carried out critical revisions. All authors gave final approval for publication.
ACKNOWLEDGMENTS
The authors gratefully thank Kai Zhou, Xiaoqing Xu, and Jiaoyang He for their great assistance in rice planting, data collection, and harvesting. The authors also want to thank Dr. Xiaojun Liu for selecting rice seeds and Ms. Juan Shen for rice field management.
Performance Indicators of Printed Construction Materials: A Durability-Based Approach
Studying the durability of materials and structures, including 3D-printed structures, is now a key step in better meeting the challenges of sustainable development and integrating technical and economic aspects from the design phase into the execution phase. While digital and robotics technologies have been well developed for construction 3D printing, the material aspect still faces critical issues in meeting the evolving requirements for buildings. This research aims to develop performance indicators for 3D-printed materials used in construction, regardless of the nature of the material. A general guideline is to be established as a result of this research. Thus, the literature review analyzes traditional durability approaches to construction materials, and challenges are identified for potential applications in construction. The results suggest that performance indicators for 3D-printed materials should be verified for printability through an experimental case study. This research could be of interest to researchers, professionals, and start-ups in the construction and materials research fields.
Introduction
Durability is integrated into a sustainable development approach in terms of environmental preservation, technical and economic optimization of structures, and control of construction and maintenance costs [1,2]. Understanding the durability of materials is crucial in construction projects. It guides the design of the structure, the formulation of the material, and its implementation. Durability has been defined by Eurocode 2 as follows: "A durable structure must meet the requirements of serviceability, strength, and stability throughout the life of the project, without significant loss of functionality or excessive unexpected maintenance." The main objective of analyzing the durability of a material is to precisely select the desired characteristics in order to optimize its composition according to the environmental constraints to which it will be subjected during its expected service life.
The use of cementitious materials in construction is vast and diversified [3]. Indeed, the properties and characteristics of cementitious materials have allowed them to evolve in line with scientific discoveries and industrial progress [4]. Nowadays, several researchers are interested in printing cementitious material [5], and several factors come into play and should be taken into account to ensure printability [6]. Material printability is still one of the major challenges [7]. There is no relevant guideline for 3D concrete printing (3DCP) in terms of material formulation, and no standard criteria are set to limit the specifications [8]. Thus, various efforts must be made to standardize the technology in the construction industry.
In this light, the durability of cementitious materials derived from additive manufacturing technology is studied in this research. The key challenges for printability were outlined by Wangler et al. [9]. The focus was on concrete extrusion and intermixing with the previously deposited layer. Each layer must support its own weight and the weight of the material to be subsequently deposited.
Normative and Regulatory Contexts
Concrete structures have comprehensive normative support in the form of European and French standards, which makes it possible to better understand and control the durability of structures.
Standard NF EN 206 (Concrete-specifications, performance, production, and conformity) states the requirements for concrete components, its properties in fresh and hardened state, limitations imposed on its composition, specification, delivery of fresh concrete, production control procedures, conformity criteria, and evaluation.
Two different approaches are presented in this standard: a prescriptive approach and a performance-based approach. They specify, in terms of composition and performance, concrete formulas adapted to each exposure class. The obligation of means and the obligation of results are the two alternatives used to study the durability of the material in its environment. The requirements for each approach are different.
Prescriptive Approach
This is the traditional approach to studying the durability of concrete. It defines specified limit values to meet the durability requirements for a proposed use under certain environmental conditions.
The NF EN 206 standard defines specified limit values for the composition and properties of concrete according to each exposure class in two tables (NA.F.1 and NA.F.2) [10]. These values are based on a 50-year service life of the structure.
Depending on each exposure class, the standard specifies limit values for:
• the maximum effective-water-to-equivalent-binder ratio,
• the minimum strength class of the concrete, the minimum content of equivalent binder, and the minimum air content.
It also contains requirements regarding additions and the maximum quantities allowed for the calculation of the Liantéq with each addition (fly ash, silica fume, ground slag, limestone, or siliceous addition), and for each type of cement to be used.
The quantity of equivalent binder (Liantéq) corresponds to the quantity of cement (C) increased by the quantity of addition (A) weighted by a coefficient (k) that depends on the type of addition (Liantéq = C + k × A).
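A sketch of the computation; the k values below are placeholders for illustration only, since the normative coefficients depend on the addition and cement type in NF EN 206:

```python
# Placeholder k coefficients (assumptions, not the normative values)
K_COEFF = {"fly_ash": 0.6, "silica_fume": 2.0, "ground_slag": 0.9}

def equivalent_binder(cement_kg_m3, addition_kg_m3, addition_type):
    """Equivalent binder content: Lianteq = C + k * A (kg/m3)."""
    return cement_kg_m3 + K_COEFF[addition_type] * addition_kg_m3
```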
This prescriptive approach has shown its limitations in assessing the potential durability of concretes. Indeed, several studies have shown that compositional limit values do not make it possible to estimate the durability of a structure [4,11].
In addition, this approach limits the use of non-standardized components and the number of additives. This may also be an obstacle to the development of new materials for 3D printing technology, given the properties required for this type of application.
In addition to this prescriptive approach, a performance-based approach is authorized by the NF EN 206 standard; it offers the only alternative for qualifying different or innovative systems. It is necessary to guide or optimize composition choices according to the desired durability by taking into account technical, economic, and environmental aspects, as shown by Hooton and Bickley [12]. The durability of concretes is also understood by considering certain characteristics or properties of the material that are known to be of interest in predicting its evolution when exposed to specific environmental conditions [13].
Performance Approach
The performance approach is a powerful and necessary lever for innovation and sustainable development. A performance-based approach is necessary for the development and improved durability of new materials, as they can only be qualified based on their compositions and their performance and behavior in a given environment. It also makes it possible to consider the use of local materials, mineral additions, and additives as new degrees of freedom to address technical, environmental, and economic issues. Various studies have examined the impact of alternative constituents to cement and natural aggregates on the performance and durability of concrete. For example, Jimenez et al. and Paine and Dhir studied the effect of recycled aggregates by using a performance-based approach [14,15]. The national RECYBETON project addressed the issue of the use of recycled aggregates from deconstructed concrete [16].
A great deal of interest has been shown in the performance-based approach in the context of the national PERFDUB research project [17]. The objective is to define a methodology at the national level to justify the durability of concretes. The aim is to aggregate knowledge and feedback and to fill gaps in a framework that brings together all the actors concerned, so that an effective approach becomes operational and widely used.
Building on previous and ongoing projects (e.g., [18][19][20]), different concepts have been used to implement a successful approach to durability. The two main concepts correspond, on the one hand, to the method based on durability indicators and, on the other hand, to the system based on the use of performance tests.
Durability Indicators
Durability indicators are parameters that appear to be fundamental in assessing and predicting the durability of the material and structure with respect to the degradation process under consideration [21]. These are essential tools to anticipate damage and optimize maintenance.
These indicators make it possible to determine the properties of materials in relation to the environment and to feed predictive models for aging.
The AFGC guide summarizes the different methods available for determining durability indicators, as shown in Table 1. Examples of research projects on the tensile strength of 3D-printed materials for construction can be found in [22][23][24]. The concept of printability has also been investigated in the literature and linked to indicators such as shear rate, viscosity, and thixotropy [6,25]. The guide distinguishes:
• general durability indicators, common to several degradation processes, and
• durability indicators specific to a given degradation process, such as alkali reaction or freezing.
In addition, the direct determination of some general durability indicators may be replaced by the direct determination of alternative indicators.
General durability indicators are key parameters for the durability of concrete with respect to corrosion of reinforcement, alkali reaction, or any other degradation; they are defined in [26]. The determination of all these parameters is not systematically necessary, as it depends on the foreseeable environmental damage and the practical case studied. The values of the durability indicators vary greatly with the age of the material before three months, especially when the concrete formula contains a high proportion of hydraulic or slow-reaction pozzolanic mineral additions (fly ash, slag). Another important parameter in determining durability indicators is the water status of the samples, which is essential for the development of chemical reactions and their macroscopic consequences [27][28][29]. The saturation rate and the water vapor sorption and desorption isotherms are complementary parameters necessary to determine and interpret various durability indicators [30].
Several studies have shown the impact of water status on concrete properties. For example, the compressive strength of dry concrete increases by 40-70% compared with saturated concrete [31]. Various studies have also shown the influence of water status on transfer parameters:
• Permeability to liquid water varies with the saturation rate. Figure 4 shows the evolution of water permeability as a function of saturation rate: for S ≤ 40%, transport in the liquid phase is negligible, whereas for S > 80% the increase in relative permeability is very significant [32].
• For the penetration of chloride ions, the diffusion coefficient decreases with the water content [33,34].
It is therefore useful to determine general durability indicators under conditions that approximate the in situ water conditions of a structure (RH > 60%).
3D Printing for Construction
3D printing in construction is a new field of study that has attracted many researchers [3,35,36]. The current challenges of 3D printing concern the robotics systems, the software part [9], and the material part [3,37]. The principle of 3D printing in construction has many similarities with ordinary printing: a nozzle deposits layers of viscous concrete at each pass, rising one notch each time [22]. One of the major scientific obstacles of construction 3D printing is the material. The latter passes through the mixer, pump, and robot, and is then extruded through the nozzle. The printed material should be designed to withstand horizontal and vertical constraints throughout the printing process [38]. The main material currently used for 3D printing is cement. One of the main advantages of printing a cementitious material is avoiding the laborious formwork phase, which represents 35-60% of the total cost of concrete structures.
The material is initially in a viscous state and solidifies once printed. The 3D-printed material stands in place and reaches a strength that allows it to resist the weight of the layers that will come on top.
Research Vision
The objective of this research is to define a methodology for assessing the durability of 3D-printed materials.
The performance-based approach is an appropriate method because the materials developed are of a specific composition that should meet the specified requirements for fresh and cured concrete, while considering the production process and the 3D printing process (Figure 1) chosen for executing the work. The prescriptive approach, however, does not meet all the criteria for such a goal.
In addition, the performance indicators for the 3D printing process in construction aim to promote sustainable concrete formulas with low environmental impacts and to increase the use of additives and admixtures to improve certain properties, whether fresh, during setting and curing, or in a cured state. Indeed, these additives mainly impact the rheology, hydration kinetics, and mechanical performance [22,39].
Studying the durability of materials and structures, including 3D-printed structures, is now a key step in better meeting the challenges of sustainable development and in integrating technical and economic aspects from the design phase into the execution phase. This work makes it possible to provide answers and recommendations to prevent possible damage to the structures and to estimate their lifespan using existing predictive models.
Proposed Theory-Based Approach for 3D Printing Process in Construction
The proposed approach comprises four steps:
• Step 1: Selection of durability indicators adapted to additive manufacturing technology (layer-by-layer deposition), such as porosity and sorption and desorption isotherms.
• Step 2: Experimental campaign according to the procedures defined in this report, in the various standards, and in previous projects, e.g., the characterization of samples taken during the various printing tests.
• Step 3: Determination of the different durability indicators using existing correlations and empirical models. The objective is to study the impact of composition and implementation parameters on durability.
• Step 4: Prediction of the lifespan of the materials studied and presentation of recommendations for improving formulation and implementation.
This paper draws on the results of three printing campaigns that used printed cementitious matrix formulations. The first campaign used a conventional formulation. The second campaign used a slow-hardening formulation for the cementitious printed material, and the third campaign used a rapid-hardening formulation. These formulations were compared using a selection of durability indicators. A printing framework is then proposed, and some recommendations for future studies are presented.
Research Methodology
Tests were conducted to evaluate the properties that influence the durability of cementitious matrix materials. These tests are essential tools for comparing the performance of different formulations and for studying the impact of composition and implementation. The choice of these tests was performance-based.
The tests were performed on molded specimens or cores taken from samples of an unknown formulation. The durability tests were blinded, which limited the extent of the analysis.
Particular attention was paid in this work to the characterization of the porous structure, which plays a key role in the durability of cementitious materials. As part of a performance-based approach, durability indicators were studied to assess and predict the durability of the materials. These parameters make it possible to determine the properties of materials in relation to the environment and to feed predictive models for aging.
The research methodology consisted of assessing the durability of cementitious printed materials. Samples were recovered after each printing campaign to perform durability tests, and the results were compared across the successive printing campaigns.
Materials
Three printing campaigns using cementitious matrix formulations were carried out. The first campaign used a conventional formulation (Figure 2). The second campaign used a slow-hardening formulation for the cementitious printed material, and the third campaign used a rapid-hardening formulation (Figure 3). All specimens were made under the same printing conditions (same pumping rate, without vibration or shock). The three campaigns are designated as follows (French Project MATRICE [40]):
• Campaign 1: MC14-10-16,
• Campaign 2: MCR19-01-17,
• Campaign 3: MCR20-01-17.
Accessible Porosity to Water
Porosity is an indicator of concrete quality and one of the most important durability indicators. It has a direct impact on mechanical strength and durability. The porous network is responsible for the penetration and infiltration of aggressive substances into the concrete. Porosity is determined by hydrostatic weighing.
This test also determines water absorption and the wet and dry densities (Figure 4).
Water Absorption
Water absorption is a measure of the amount of water absorbed through the concrete's pores open to the surrounding environment. It is determined by immersing a dry, weighed test piece in water and measuring the mass increase (see Figure 5). It is expressed as a percentage of the dry mass of the specimen and is calculated using Equation (1):

Ab (%) = (M_humide − M_sèche) / M_sèche × 100, (1)

where M_humide is the constant wet mass of the specimen after immersion and M_sèche is the constant dry mass of the specimen after drying in the oven.
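As a minimal computational sketch of Equation (1), assuming the standard form Ab = (M_humide − M_sèche)/M_sèche × 100 (the masses below are illustrative, not campaign data):

```python
def water_absorption(m_wet: float, m_dry: float) -> float:
    """Water absorption Ab (%) per Equation (1): mass gain over dry mass."""
    return (m_wet - m_dry) / m_dry * 100.0

# Illustrative masses in grams: 500 g dry, 560 g after immersion.
print(f"Ab = {water_absorption(560.0, 500.0):.1f} %")  # -> Ab = 12.0 %
```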
Wet and Dry Volumetric Masses
The wet (MVH) and dry (MVS) volumetric masses are calculated using the following expressions:

MVH = M_humide / V,  MVS = M_sèche / V,

where the volume V of the specimen is determined by hydrostatic weighing (Figure 5):

V = (M_humide − M_eau) / ρ_w,

with M_eau the underwater mass of the sample determined by hydrostatic weighing and ρ_w the density of water, 1000 kg/m³.
Porosity
Porosity is determined by the following formula:

ε (%) = (M_humide − M_sèche) / (M_humide − M_eau) × 100.
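A small sketch combining the hydrostatic-weighing relations above (volume, wet and dry densities, and water-accessible porosity); the three input masses are illustrative assumptions:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3

def hydrostatic_properties(m_wet: float, m_dry: float, m_under: float):
    """Return (V, MVH, MVS, porosity %) from the three weighings (masses in kg)."""
    volume = (m_wet - m_under) / RHO_WATER                  # V = (Mhumide - Meau) / rho_w
    mvh = m_wet / volume                                    # wet volumetric mass
    mvs = m_dry / volume                                    # dry volumetric mass
    porosity = (m_wet - m_dry) / (m_wet - m_under) * 100.0  # accessible porosity
    return volume, mvh, mvs, porosity

v, mvh, mvs, eps = hydrostatic_properties(2.33, 2.20, 1.33)
print(f"V = {v*1e3:.2f} L, MVH = {mvh:.0f} kg/m3, MVS = {mvs:.0f} kg/m3, porosity = {eps:.1f} %")
```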
Compressive Strength
Compressive strength is determined in accordance with NF EN 206 (2014) and measured on 28-day cubic samples (Figure 11). Tests were performed on five specimens for each formulation. The samples are 8 × 8 cm cubes.
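As a brief sketch of the strength data reduction (failure load divided by the 8 × 8 cm loaded area, averaged over the five specimens per formulation; the loads below are illustrative):

```python
AREA_M2 = 0.08 * 0.08  # loaded face of the 8 x 8 cm cube, m^2

def strength_mpa(failure_load_kn: float) -> float:
    """Compressive strength = failure load / loaded area, in MPa."""
    return failure_load_kn * 1e3 / AREA_M2 / 1e6

loads_kn = [250.0, 256.0, 248.0, 261.0, 254.0]  # five illustrative failure loads
mean_fc = sum(strength_mpa(f) for f in loads_kn) / len(loads_kn)
print(f"mean 28-day strength = {mean_fc:.2f} MPa")
```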
Sorption-Desorption Isotherms
The purpose of this test is to determine the sorption-desorption isotherms of samples from the different printing campaigns for cementitious and earth-based materials. The apparatus shown in Figure 6 was used.
The samples studied in this report are the cement paste samples recovered on October 14, 2016, as well as samples from the printing campaigns of January 19 and 20, 2017. The specimens were cut using a diamond disc saw to adapt them to this test.
The principle of the test is to determine the mass moisture content of the samples at different levels of relative humidity. At the beginning, the specimens were dried until a constant mass was obtained. Then, the specimens were placed in cups and left in a climatic chamber (Figure 13), programmable in temperature and humidity. This allowed the specimens to be weighed automatically without being removed from the chamber, by pre-defining the measurement conditions (e.g., temperature and air velocity) and programming the desired relative humidity cycle. This reduced the time required to achieve equilibrium at the relative humidity (RH) under consideration, because the environment and the samples are not disturbed during weighing.
The temperature was kept constant (23 °C). RH levels of 30%, 50%, 75%, and 95% were tested. The maximum relative humidity was limited to 95% because the full range of humidity is difficult to achieve in practice.
The equilibrium mass obtained for each relative humidity considered was used to determine the mass water content of the sample in percent.
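A sketch of the corresponding data reduction for one sorption branch (the dry and equilibrium masses are illustrative values, not campaign measurements):

```python
rh_levels = [30, 50, 75, 95]  # programmed relative-humidity steps, %

def mass_moisture_content(m_eq: float, m_dry: float) -> float:
    """Evaporable-water mass over dry-material mass, in percent."""
    return (m_eq - m_dry) / m_dry * 100.0

m_dry = 50.00                        # g, constant mass after drying
m_eq = [50.12, 50.25, 50.45, 50.80]  # g, equilibrium mass at each RH step
isotherm = {rh: round(mass_moisture_content(m, m_dry), 2)
            for rh, m in zip(rh_levels, m_eq)}
print(isotherm)  # one (RH, w %) point of the sorption curve per humidity step
```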
Results and Discussion
Table 2 summarizes the results of the experiments. The water absorption, accessible porosity to water, and compressive strength of the three campaigns' printed samples are plotted as a radar graph that describes the durability performance of the printed materials.
Accessible Porosity to Water
The MC14-10-16 specimens from the first printing campaign of October 14, 2016 have good average potential durability, because the average porosity value is 12.8%. For the MCR19-01-17 formulation, however, the average porosity is 14.6%, so the potential durability of these specimens is low. For the rapid-hardening formulation (MCR20-01-17), the potential durability is very low, with an average value of 17.5% (greater than 16%).
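A sketch of how such a porosity-based classification can be scripted; the 14% and 16% class boundaries are assumptions consistent with the bounds quoted above (16% is the only threshold stated explicitly):

```python
def potential_durability(porosity_pct: float) -> str:
    """Classify potential durability from water-accessible porosity (%).
    Boundary values (14, 16) are illustrative assumptions."""
    if porosity_pct < 14.0:
        return "good"
    if porosity_pct <= 16.0:
        return "low"
    return "very low"

for name, p in [("MC14-10-16", 12.8), ("MCR19-01-17", 14.6), ("MCR20-01-17", 17.5)]:
    print(f"{name}: {p} % -> {potential_durability(p)}")
```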
Compressive Strength
The compressive strength is highest for the specimens of the first printing campaign, with an average value of 39.67 MPa. However, for the rapid-setting formulation of January 20, 2017, a significant deterioration in compressive strength was observed, with the average value falling to 11.35 MPa.
Sorption and Desorption Curves
Experimental measurements are used to plot the sorption and desorption curves, or isotherms, of the specimens studied, as shown in Figure 7. Knowledge of the moisture content for each relative humidity provides a point on the curve.
For the desorption curve, the samples are placed successively in a series of test environments where the relative humidity decreases in stages. The starting point of this curve corresponds to RH = 95%. Figure 7 shows the sorption/desorption curves obtained at 23 °C. The mass moisture content is defined as the ratio of the evaporable water mass to the dry material mass.
The results show that the MC14-10-16 molded specimens have a low hygroscopicity, with a mass moisture content that does not exceed 1%. However, the specimens recovered from the test bodies printed during the campaigns of January 19 and 20, 2017 have a higher hygroscopicity, especially the fast-setting samples, with a mass moisture content of 1.6% at RH = 95%. This makes these test bodies more accessible to aggressive agents than the molded samples. The sorption and desorption isotherms show the existence of hysteresis, which reflects the fact that it is easier for water to enter the porous network than to leave it. This is frequently explained by the geometric shape of the pores, with larger voids connected by narrower passages. Indeed, the results show that, during the desorption phase, the MCR20-01-17 samples keep a mass moisture content of 1.1% at RH = 30%. This will lead to a deterioration in the durability of this formulation compared with the others.
The results also provide insights into the specific surface area according to the BET theory [41], as well as information on the pore size distribution by applying the BJH theory [42].
Analysis and the Proposed Durability Approach for 3D-Printed Materials
While all three printing campaigns resulted in printed structures, the results reveal different characteristics. This led us to rethink the indicators for 3D-printed construction materials within the performance-based approach. Those indicators are associated with the fresh, pre-hardened, and hardened states of the material.
Central Role of Rheology as an Indicator
A material's constituents have a direct influence on its rheology. Knowing the rheological properties of concrete allows good control of the pre-mixing flow rate, which is decisive during printing. In other words, rheology defines the flow velocity for a given shear stress. Different categories of fluids exist, the best known being Newtonian fluids: regardless of the applied stress, these fluids have no shear threshold, their flow is proportional to the stress, and their viscosity remains constant (Figure 9). Construction materials, such as concrete, are generally non-Newtonian fluids before hardening; the literature gives a holistic view of how conventional construction materials behave, and the challenge for the scientific community regarding 3D-printed materials is to understand how printed materials behave before and after the printing process. Figure 10 shows the shear behavior of different types of fluids. In its fresh state, concrete is considered a non-Newtonian shear-thinning (rheofluidifying) fluid, i.e., its viscosity will decrease under stress, and it will harden slowly if not subjected to stress.
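To make the distinction in Figure 10 concrete, the sketch below compares illustrative constitutive laws; all coefficients are assumed values chosen only to show the qualitative shapes:

```python
import numpy as np

gamma_dot = np.linspace(0.0, 100.0, 201)          # shear rate, 1/s

newtonian = 2.0 * gamma_dot                        # tau = mu * gamma_dot, no threshold
bingham = 50.0 + 2.0 * gamma_dot                   # yield stress + plastic viscosity
herschel_bulkley = 50.0 + 6.0 * gamma_dot ** 0.5   # shear-thinning (n < 1), like fresh concrete

# Apparent viscosity tau/gamma_dot is constant only for the Newtonian curve.
print(newtonian[1] / gamma_dot[1], herschel_bulkley[-1] / gamma_dot[-1])
```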
Printed concrete is not like poured concrete in that it tends to have a low E/C (water-to-cement) ratio, with small-diameter grains and a narrow, spindle-shaped grain-size distribution. In his article on the properties of printable cementitious materials, Suvash Chandra Paul used a rheometer called the "Schleibinger Viscomat NT" to measure the rheology of printable mortar formulations [37]. However, the critical limitation of this test is that it cannot directly determine viscosity and shear stress. Following this, Wangler et al. determined calibration coefficients for the values obtained from the Viscomat and transformed them into shear stress and plastic viscosity [9].
Pumpability Indicator
Concrete is said to be pumpable if, under pressure, the flow is sufficient to make the printing process smooth. The E/C ratio plays a role in determining pumpability. A pumpable material is not necessarily printable: the material can be pumpable yet inadequate for printing in terms of durability and shape.
Shape Stability
Shape stability is a synonym for what researchers call "buildability." It quantifies the number of filament layers that can be built without significant deformation of the lower layers. It should make it possible to indicate whether the layered structure allows the failure time, at which the structure collapses, to be predicted. Indeed, to learn more about this parameter, both the shape stability and the resistance of the layers come into play. For shape stability, a contour-crafting-type test has been adopted (e.g., the cylinder stability test), which replaces full printing tests with the layer settlement test.
Anisotropy of 3D-Printed Construction Materials
Anisotropy is the property of materials that exhibit different characteristics depending on their orientation. Wood, for instance, is anisotropic because its compressive strength changes according to the orientation of the load (i.e., relative to the wood grain).
The printing process of construction materials introduces anisotropy. The materials are deposited layer-by-layer, which creates potential weaknesses between the layers that should be studied in depth in future research. Figure 11 shows how vertical and horizontal constraints can affect the printed layers.
The anisotropic behavior of printed construction materials is introduced during the layer-by-layer deposition process, as opposed to other placement methods such as casting [43]. Thus, our conclusion is that durability indicators should be assessed for both horizontal and vertical variations.
Methods to Mitigate the Shear Stress Generated by the Printing Process
The first method is the use of fibers to link the layers, as shown in Figure 12. The shear stress could thus be significantly reduced, and the 3D-printed structure stands a better chance of offsetting the constraints. The second method is the use of a shear consolidation tube as a linkage (Figure 13). This method is to be further developed in future research.
Conclusions
The objective of this research was to evaluate the properties that influence the durability of printed construction materials. The tests involved durability indicators for comparing the performance of the three tested formulations, and the choice of these indicators was based on a performance-based approach rather than a prescriptive one. Three printing campaigns were realized, and the results compared the performance of the samples. Indicators such as porosity and water absorption were identified for the performance-based approach. The results led to the proposal of a set of performance indicators to consider when evaluating printed construction materials: rheology, pumpability, and workability for the input (printing process); and compressive strength, water vapor permeability, flexural strength, water permeability, tensile strength, and internal cracking for the output (hardened materials). These indicators should be assessed on both the vertical and the horizontal axes because of the anisotropy of the printed materials.
Future research should focus on testing the framework and the durability indicators that help assess the pre-printability of construction materials. Indeed, researchers need a macro-level way to evaluate printability; these tests are a pre-assessment, not the final assessment, of printability.
Figure 1. 3D printing process: input and output.
Figure 3. Images from the third 3D printing campaign, which resulted in the MCR20-01-17 samples.
Figure 10. Shear behavior for different types of fluids.
Figure 11. Vertical and horizontal constraints applied to printed layers.
Figure 13. Shear consolidation tube as a linkage mechanism.
Table 1. Methods for measuring durability indicators.
Table 2. Water absorption, accessible porosity to water, and compressive strength of the three campaigns' printed samples.
"Engineering",
"Materials Science"
] |
Application of Optimal Control to the Epidemiology of Dengue Fever Transmission
In this paper, we build an epidemiological model to investigate the dynamics of the spread of dengue fever in a human population. We apply optimal control theory, via Pontryagin's Minimum Principle together with a Runge-Kutta solution technique, to a "simple" SEIRS disease model. Controls representing education and drug therapy treatment are incorporated to reduce the latently infected and actively infected populations. The overall thrust is the minimization of the spread of the disease in a population by adopting an optimization technique as a guideline.
Introduction
Dengue fever is a painful, debilitating mosquito-borne disease caused by one of four closely related dengue viruses (Noorani [1]). It is transmitted by the bite of an infected Aedes mosquito. More than 100 million cases of dengue fever occur worldwide each year, in the Indian subcontinent, Southeast Asia, Southern China, the Taiwan area, the Pacific Islands, the Caribbean, Mexico, Africa, Central and South America, the Southern United States, and Southern Australia. In Indonesia, dengue cases increase yearly in almost all regions (Rodriguez and Monteiro [2]). The spread of the virus is driven partly by increasing urbanization and partly by climate change. Since considerable damage can result from the effects of dengue fever infection, effective control strategies are of vital importance.
A very important aspect of the strategy related to the spread of dengue fever is quick and effective action (Rodriguez and Monteiro [2]). The dengue virus occurs in four strains, and immunity to one seems to make infection by a second strain more dangerous (Laurencia et al. [4]). Experiments for producing and testing control measures, such as education and antiviral drugs, are costly and time consuming, so any tool, such as a mathematical model (Lasalle [5]), that enables us to predict the outcome is highly valuable. Mathematical models provide useful predictions about the potential transmission of a disease and the effectiveness of possible control measures. In addition, epidemiology has emerged as an effective tool in disease control. The relationship between mathematics and epidemiology has been growing: for the mathematician, epidemiology provides new and exciting branches, while for the epidemiologist, mathematical modeling offers an important research tool in the study of the spread of diseases.
Bernoulli [6] proposed an epidemiological model that is considered by many authors to be the first mathematical model in epidemiology. Further work between 1927 and 1933, including that of Kermack and McKendrick [7], largely influenced the development of mathematical epidemiology (James [8]). These attempts provided the fundamental framework for the compartmentalization of epidemiological models. Understanding them is vital for gaining important knowledge of the underlying aspects of dengue fever spread (Thome et al. [9]). It is also important in assessing the impact of control measures for reducing mortality.
The discovery of antibiotics and vaccines heralded a new hope in disease control. Despite this, new challenges resulting from factors such as drug resistance have also emerged, sometimes leading to the emergence of more virulent forms of previously eradicated diseases. For example, resistance in diseases such as malaria, tuberculosis, dengue, and yellow fever has emerged and, as a result of climate change, these diseases have been spreading into new regions (Helena & Teresa [10]). Efforts to cope with this challenge have given rise to an increasing trend in the application of mathematical models and interdisciplinary approaches in disease study. Their use has contributed immensely to decision making and planning in the health sector. In the work reported herein, an SEIRS compartmentalized model is introduced, followed by an optimal control treatment of the problem.
Dengue Fever Transmission Model with Education
Quantitative methods are often applied to achieve optimization of investments in the control of a disease. This is necessary in order to obtain maximum benefits from a fixed amount of financial resources. In this case, our efforts are directed towards the dynamics of the Aedes mosquito vector as well as management protocols aimed at controlling or alleviating the spread of the disease. Such management principles, involving the termination of the reproduction cycle of mosquitoes by avoiding the accumulation of still water in pot-holes and ditches, especially after a heavy downpour, are of vital importance, as is educating the local population on issues related to basic hygiene through television (TV) and radio.
Model Assumptions and Mathematical Formulation
1) The population is uniform and mixes homogeneously. The total population size at any time t > 0 is N(t) = S(t) + E(t) + I(t) + R(t), where N stands for the total population, S for susceptible, E for exposed, I for infected, and R for recovered.
2) The natural birth rate b and the natural death rate μ_n are assumed to be different.
3) Each individual in the population is considered to have an equal probability of contracting the disease, with contact rate β.
4) An infected individual makes contact and is able to transmit the disease to βN others per unit time; that is, the contact rate is proportional to the total population size.
5) The fraction of contacts by an infective with a susceptible is S/N. Therefore, the number of new infections per unit time per infective is (βN)(S/N), called the infection rate. This gives the rate of new infections, i.e., of those leaving the susceptible class, as (βN)(S/N)I = βSI, which is called the incidence of the disease. This type of incidence, proportional to the product of the numbers of infective and susceptible individuals, is called bilinear incidence.
6) The number of individuals moving out of the exposed compartment per unit time is δE at time t.
7) The exposed E move from their compartment to the I-compartment at a constant rate δ, so that 1/δ is the mean latent period.
8) The infectious I move from their compartment to the R-compartment at a constant rate γ, so that 1/γ is the mean infectious period.
9) The rates at which susceptible, exposed, infected, and recovered individuals are removed from their compartments through natural death are μ_n S, μ_n E, μ_n I, and μ_n R, respectively, with an additional disease-induced death rate μ_d I for the infected.
10) Recovered individuals R move back to the susceptible compartment S at a constant rate α.
11) These assumptions can be represented by the following system of ordinary differential equations (Equation (1)):

dS/dt = bN − βSI + αR − μ_n S,
dE/dt = βSI − (δ + μ_n) E,
dI/dt = δE − (γ + μ_n + μ_d) I,
dR/dt = γI − (α + μ_n) R.

An optimal control strategy aimed at minimizing the objective (cost) functional J of the cost of education for a susceptible population is described by Equation (2):

J(u) = ∫_0^T [A I(t) + B u^2(t)] dt,

where A is a balancing cost factor due to the infective population and B is the weight on the cost of education. Figure 1 is a compartmentalized representation of the mathematical formulation and optimization strategy for education.
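As a quick numerical illustration of the uncontrolled SEIRS system (1), the sketch below integrates it with SciPy; all parameter values and initial fractions are illustrative assumptions, not the values of Table 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

b, mu_n, mu_d = 0.00011, 0.00011, 0.001               # birth, natural and disease death rates
beta, delta, gam, alpha = 0.75, 0.1, 1 / 14, 1 / 365  # contact, latency, recovery, immunity loss

def seirs(t, y):
    S, E, I, R = y
    N = S + E + I + R
    return [b*N - beta*S*I + alpha*R - mu_n*S,
            beta*S*I - (delta + mu_n)*E,
            delta*E - (gam + mu_n + mu_d)*I,
            gam*I - (alpha + mu_n)*R]

sol = solve_ivp(seirs, (0.0, 300.0), [0.9, 0.05, 0.05, 0.0])
print(sol.y[:, -1])  # S, E, I, R fractions after 300 days
```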
Based on the above assumptions, an optimal control problem is formulated by incorporating one of the intervention strategies into our basic mathematical model (see Equations (1) and (2)).
• u(t) is the control, representing the fraction of susceptible individuals being educated per unit time, with bounds between 0 and 1.
• The inflow of population to the susceptible class is obtained by combining assumptions 2, 5, 9, and 10 with the control (education).
• A number of individuals leave S and enter E; at the same time, a fraction of the exposed E moves to the infectious group I at the latent rate δ, so δE represents the flow from exposed to infectious. Some of the exposed die at the natural death rate μ_n, so μ_n E represents the outflow from the exposed compartment through death.
• Some individuals leave E and enter the infected group I at the latent rate δ.
• A part of the population leaves I and enters the recovered group at the recovery rate γ. Combining assumptions 2, 5, 9, and 10 with the control u gives the rate of recovery.
Combination of Education and Treatment by Drug Therapy
Antiviral drugs are known to be very helpful in decreasing or preventing disease symptoms at the first sign of a dengue outbreak, even when there is no evidence of fever. Before incorporating drug therapy as part of our treatment protocol and control measures, we consider how the application of drug therapy affects some of the model compartments.
• Consider the control variable u_1(t); u_1 E represents individuals moving from exposed to recovered, so the rate of change of the exposed population per unit time becomes dE/dt = βSI − (δ + μ_n) E − u_1 E.
• In addition, a number of individuals leave the infected group I and enter the recovered group at the recovery rate γ, and individuals also leave the susceptible and exposed groups S and E for the recovered group under the controls u and u_1, respectively. This gives the rate of recovery as dR/dt = γI + uS + u_1 E − (α + μ_n) R.
For t ≥ 0, introducing the controls representing education and drug therapy treatment, the model of Equation (1) becomes

dS/dt = bN − βSI + αR − μ_n S − uS,
dE/dt = βSI − (δ + μ_n) E − u_1 E,
dI/dt = δE − (γ + μ_n + μ_d) I,
dR/dt = γI + uS + u_1 E − (α + μ_n) R,

where S(0), E(0), I(0), and R(0) are the initial conditions. The definitions of the model parameters are listed in Table 1. The control functions u(t) and u_1(t) are assumed to be bounded, Lebesgue-integrable functions [11].
The control u_1(t) represents the effort of drug therapy treatment of latently infected individuals, to reduce the number of individuals that may become infectious, while the control u(t) is the effort of education of susceptible individuals, to increase the number of recovered individuals. A is a balancing cost factor due to the infective population; B and B_1 are the weights on the cost of education and drug therapy, respectively. Figure 2 is the overall representation of the model formulation.
The control problem involves a number of individuals with latent and active dengue fever infections. The costs of applying the education and drug therapy treatment controls u(t) and u_1(t) are minimized subject to the differential equations (5). The performance specification involves the numbers of individuals in the latent and susceptible compartments, as well as the cost of applying the education control u and the drug therapy treatment control u_1. The objective functional is defined as

J(u, u_1) = ∫_0^T [A I(t) + B u^2(t) + B_1 u_1^2(t)] dt,

where T is the final time and the coefficients A, B, and B_1 are balancing cost factors reflecting the importance of the three parts of the objective function. We seek an optimal control pair u* and u_1* such that J(u*, u_1*) = min {J(u, u_1) : (u, u_1) ∈ U}, where U = {(u, u_1) measurable : 0 ≤ u(t), u_1(t) ≤ 1 for t ∈ [0, T]} is the control set.
Analysis of Optimal Control
The necessary conditions that an optimal pair must satisfy come from Pontryagin's Maximum Principle (Helena [12]). This principle converts (5) and (6) into the problem of minimizing, pointwise, the Hamiltonian H with respect to (u, u_1).
First, we formulate the Hamiltonian from the cost functional (6) and the governing dynamics (5) to obtain the optimality conditions. Pontryagin introduced the adjoint functions to relate the differential equations to the objective functional.
The necessary conditions needed to solve this optimal control problem can be followed stepwise. Step 1: Formulate the Hamiltonian for the problem; applying Pontryagin's principle to the Hamiltonian, find the optimal controls u* and u_1* with the corresponding solutions S*, E*, I*, and R* of Equation (5).
Step 2: Write the adjoint differential equations, the optimality condition, and the transversality boundary condition (if necessary). Using the Hamiltonian, find the differential equations of the adjoint λ, and obtain the adjoint variables λ_1, λ_2, λ_3, and λ_4 that satisfy the adjoint condition

λ_i'(t) = −∂H/∂x_i, where i = 1, 2, 3, 4.

Step 3: Use the optimality condition, ∂H/∂u = ∂H/∂u_1 = 0 at (u*, u_1*), together with the bounds on the controls, to characterize the optimal controls.
Step 4: Solve the four differential equations for S*, E*, I*, R* and λ with boundary conditions, substituting u* and u_1* into the differential equations using the expressions for the optimal controls from the previous step.
Step 5: After finding the optimal state and adjoint, solve for the optimal control; that is, solve the resulting system of differential equations for the optimal state and adjoint.
The solution of the optimal control problem in terms of S*, E*, I*, R*, and λ represents the characterization of the optimal controls (u*, u_1*). The state equations and the adjoint equations, together with the characterization of the optimal controls and the boundary conditions, constitute the optimality system (Lenhart & John [14]).
Forward-Backward Sweep Method
From the model, the optimal control problem becomes the optimality system (11)-(12): the state equations (11) with initial values, and the adjoint equations (12) with transversality conditions, where i = 1, 2, 3, 4 and x_1 = S, x_2 = E, x_3 = I, x_4 = R. The optimality condition can usually be manipulated to find a representation of the optimal controls u* and u_1* in terms of t, the state variables, and λ. If this representation is substituted back into the ODEs for the state variables and λ, then Equations (11) and (12) form a two-point boundary value problem. The Runge-Kutta method is then applied to solve the initial value problems and resolve the optimality system of the optimal control problem. This approach is generally referred to as the Forward-Backward Sweep method; information about its convergence and stability can be found in Lenhart & John [14]. The process begins with an initial guess of the control variable. Then the state equations are solved forward in time, the adjoint equations are solved backward in time, the controls are updated from the optimality condition, and the sweep is repeated until successive iterates converge.
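A minimal sketch of the sweep for the education-only problem is given below. The parameter values, the initial state, and the interior characterization u* = (λ_1 − λ_4)S/(2B), obtained from ∂H/∂u = 0 under the quadratic cost B u^2 and then clipped to [0, 1], are illustrative assumptions; the paper's MATLAB implementation and Table 1 values are not reproduced here.

```python
import numpy as np

T, n = 100.0, 2000
h = T / n
A, B = 100.0, 0.04                               # weights in the objective
b, mu_n, mu_d = 0.00011, 0.00011, 0.001
beta, delta, gam, alpha = 0.75, 0.1, 1/14, 1/365

def f(y, u):                                     # state dynamics (S, E, I, R)
    S, E, I, R = y
    N = S + E + I + R
    return np.array([b*N - beta*S*I + alpha*R - mu_n*S - u*S,
                     beta*S*I - (delta + mu_n)*E,
                     delta*E - (gam + mu_n + mu_d)*I,
                     gam*I + u*S - (alpha + mu_n)*R])

def g(lam, y, u):                                # adjoint dynamics, lam_i' = -dH/dx_i
    S, E, I, R = y
    l1, l2, l3, l4 = lam
    return np.array([-(l1*(b - beta*I - mu_n - u) + l2*beta*I + l4*u),
                     -(l1*b - l2*(delta + mu_n) + l3*delta),
                     -(A + l1*(b - beta*S) + l2*beta*S
                       - l3*(gam + mu_n + mu_d) + l4*gam),
                     -(l1*(b + alpha) - l4*(alpha + mu_n))])

def rk4(rhs, z, *args):                          # one fixed-step RK4 update
    k1 = rhs(z, *args); k2 = rhs(z + 0.5*h*k1, *args)
    k3 = rhs(z + 0.5*h*k2, *args); k4 = rhs(z + h*k3, *args)
    return z + h/6.0*(k1 + 2*k2 + 2*k3 + k4)

u = np.zeros(n + 1)                              # initial guess for the control
x = np.zeros((n + 1, 4)); lam = np.zeros((n + 1, 4))
x[0] = [0.9, 0.05, 0.05, 0.0]                    # illustrative initial fractions

for _ in range(50):                              # sweep until practical convergence
    for i in range(n):                           # states: forward in time
        x[i + 1] = rk4(f, x[i], 0.5*(u[i] + u[i + 1]))
    lam[n] = 0.0                                 # transversality: lambda(T) = 0
    for i in range(n, 0, -1):                    # adjoints: backward in time
        lam[i - 1] = rk4(lambda z, y, uu: -g(z, y, uu), lam[i], x[i], u[i])
    u_new = np.clip((lam[:, 0] - lam[:, 3]) * x[:, 0] / (2*B), 0.0, 1.0)
    u = 0.5*(u + u_new)                          # relaxation stabilizes the sweep

print(u[:3], x[n])                               # control near t = 0, final state
```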
Numerical Illustrations and Conclusions
Numerical solutions to the optimality system, comprising the state equations (5) and the adjoint equations, are carried out using MATLAB with the parameters in Table 1.
Results for Optimal Education Only
With this strategy, education (u) is utilized in the disease control while the control on drug therapy treatment (u_1) is set to zero, with weight factors B_1 = 0, A = 100, B = 0.04. For this strategy, we observed that the number of susceptible individuals is higher when education and drug therapy treatment are absent (Figure 3). For the latently exposed (E) individuals in Figure 4, it can be seen that, with education, the percentage of exposed individuals is lower than without education. The same trend is followed in Figure 5, where the percentage of the infected group (I) is lower with education. Conversely, the percentage of recovered individuals (R) is higher with education than without it.
Optimal Drug Therapy Treatment Only
With this strategy, the control (u_1) on drug therapy treatment is utilized while the control on education (u) is set to zero, with weight factors A = 100, B = 0.04, B_1 = 0.06. It can be observed in Figure 7 that control with education and drug therapy treatment lowers the percentage of susceptible individuals (hardly perceptible in the diagram) relative to education alone. This is because recovered individuals return to the susceptible group, increasing it at a higher rate. For the latently infected individuals in Figure 8, with an initially exposed population of 4.5%, there is hardly any change in the percentage of exposed individuals during roughly the first ten weeks under either strategy; it is obvious that the impact of education takes time to be felt or manifested in the dynamics. However, there is a dramatic change after this period, as the percentage of the exposed with education and treatment becomes significantly lower than with education alone. For the infected individuals in Figure 9, with an initially infected population of 9.04%, it can be seen that using both intervention mechanisms is better than using education as the only control mechanism. As observed earlier, there is a time lag of about ten weeks before the impact of education is reflected in the dynamics. The same trend is observed in Figure 10 for the percentage of the recovered, where the time lag is about five weeks before education with treatment shows a higher percentage than education alone.
Optimal Education and Drug Therapy Treatment
With this strategy, the controls on education (u) and drug therapy treatment (u_1) are both utilized, with weight factors A = 100, B = 0.04, B_1 = 0.06. Figure 11 shows that the percentage of susceptible individuals with education and treatment is lower than in the absence of education and drug therapy treatment. Figure 12 shows that, without control, the percentage of exposed individuals is higher than with the education and treatment options. The positive effect of treatment and education is further confirmed in Figure 13, where a higher percentage of infected individuals is recorded without any control measures. Figure 14 shows that, as more people are educated and treated, more individuals move into the recovered class.
Concluding Remarks
The results displayed herein not only confirm the validity of the mathematical formulation derived but also illustrate how to optimally apply control measures involving treatment and education for the control of dengue fever. Utilizing education and drug therapy treatment together leads to better disease control in the population than utilizing drug therapy treatment only. In addition, the application of only one form of control measure, though it results in a delayed peak in the percentages of exposed and infected individuals, is not as effective as using both controls. Thus, control programs that apply an optimal combination of control measures can effectively reduce or alleviate the effects of dengue fever spread.
Further work should include other control variables, such as the effect of bio-immunology on the spread of dengue fever, the use of medicated mosquito nets, the development and application of vaccines, and the creation of sterile mosquito males for the control of the mosquito population.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
"Medicine",
"Environmental Science",
"Mathematics"
] |
Semileptonic decays of the B_c meson to S-wave charmonium states in the perturbative QCD approach
Inspired by the recent measurement of the ratio of B_c branching fractions to the J/ψ π⁺ and J/ψ μ⁺ ν_μ final states at the LHCb detector, we study the semileptonic decays of the B_c meson to the S-wave ground and radially excited 2S and 3S charmonium states with the perturbative QCD approach. After evaluating the form factors for the transitions B_c → P, V, where P and V denote pseudoscalar and vector S-wave charmonia, respectively, we calculate the branching ratios for all these semileptonic decays. The theoretical uncertainty from the hadronic input parameters is reduced by utilizing the light-cone wave function for the B_c meson. It is found that the predicted branching ratios range from 10⁻⁷ up to 10⁻² and could be measured by the future LHCb experiment. Our prediction for the ratio of branching fractions BR(B_c⁺ → J/ψ π⁺)/BR(B_c⁺ → J/ψ μ⁺ ν_μ) is in good agreement with the data.
For B_c → V l ν_l decays, the relative contributions of the longitudinal and transverse polarizations are discussed in different regions of the momentum transfer squared. These predictions will be tested in ongoing and forthcoming experiments.
Introduction
Recently, the LHCb Collaboration has measured the semileptonic and hadronic decay rates of the B_c meson and obtained BR(B_c⁺ → J/ψ π⁺)/BR(B_c⁺ → J/ψ μ⁺ ν_μ) = 0.0469 ± 0.0028 (stat) ± 0.0046 (syst) [1]. This motivates an investigation of the semileptonic decays of the B_c meson to charmonium, which are easier to identify in experiment. Indeed, both the CDF and the D0 Collaborations have measured the lifetime of the B_c meson through its semileptonic decays [2-4]. More recently, the LHCb Collaboration gave a more precise measurement of its lifetime using semileptonic B_c → J/ψ μ ν_μ X decays [5], where X denotes any possible additional particles in the final state. At the quark level, the semileptonic decays of the B_c meson are driven by the b → c transition, where the effects of the strong interaction can be separated from the effects of the weak interaction into a set of Lorentz-invariant form factors. They may provide us with information on the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_cb and on the weak B_c to charmonium transition form factors.
There are many theoretical approaches to the calculation of the semileptonic decays of the B_c meson to charmonium. Some of them are: nonrelativistic QCD [6,7], the Bethe-Salpeter relativistic quark model [8], the relativistic quark model [9-11], the light-cone QCD sum rules approach [12,13], the covariant light-front model [14], the nonrelativistic quark model [15], the QCD potential model [16-18], and the light-front quark model [19]. The perturbative QCD (pQCD) approach [20,21] is one of the recently developed theoretical tools based on QCD for dealing with nonleptonic and semileptonic B decays. So far, the semileptonic B_{u,d,s,c} decays have been studied systematically in the pQCD approach [22-25]; one may refer to the review paper [26] and the references therein.
In our previous work [27,28], we analyzed the two-body nonleptonic decays of the B_c meson with final states involving one S-wave charmonium, using perturbative QCD based on k_T factorization. By using harmonic-oscillator wave functions for the charmonium states, the obtained ratios of branching fractions are consistent with the data and with other studies. In particular, some of our predictions were well tested by recent experiments at ATLAS [29] and LHCb [30], which may indicate that the harmonic-oscillator wave functions for S-wave charmonia work well.
In this paper, we extend our previous pQCD analysis to the semileptonic B_c decays B_c → (η_c(nS), ψ(nS)) l ν (here l stands for the leptons e, μ, and τ) with radial quantum number n = 1, 2, 3; the higher 4S charmonia are not included, since their properties are still not well understood. The semileptonic decays B_c → (J/ψ, η_c) l ν have been studied in pQCD [31]; compared with that work, the new ingredients of this paper are the following.
(1) Instead of the traditional zero-point wave function for the B_c meson, the light-cone wave function developed in Ref. [32] is employed, in order to reduce the uncertainties caused by the hadronic parameters. In addition, the charmonium distribution amplitudes are extracted from the corresponding Schrödinger states for a harmonic-oscillator potential. (2) Here, the momentum of the spectator charm quark is proportional to the corresponding meson momentum. In Ref. [31], the charm quark in the B_c meson carries a momentum with only the minus component, i.e., its invariant mass vanishes, while the charm quark in the final state carries a momentum proportional to the charmonium momentum, so its invariant mass does not vanish. This substantial revision renders our analysis more consistent. (3) We updated some input hadronic parameters according to the Particle Data Group 2014 [33]. (4) Besides the B_c → (J/ψ, η_c) l ν decays, the B_c → P/V(2S, 3S) l ν decays are also investigated; these are theoretically easier to treat than the corresponding nonleptonic decays. Our goal is to provide a ready reference for existing and forthcoming experiments to compare their data with the predictions of the pQCD approach.
The paper is organized as follows. In Sect. 2 we define the kinematics and describe the wave functions of the initial and final states. The analytic expressions for the transition form factors and the differential decay rates of the considered decay modes are given in Sect. 3. The numerical results and relevant discussions are given in Sect. 4, and the final section contains the conclusion. The evaluation of the 3S charmonium distribution amplitudes is relegated to the appendix.
Kinematics and the wave functions
It is convenient to work in the B_c meson rest frame and in light-cone coordinates. The B_c meson momentum P_1 and the charmonium momentum P_2 are chosen as in [34], with the ratio r = m/M, where m (M) is the mass of the charmonium (B_c) meson. The factors η_± = η ± √(η² − 1) come with the definition of η given in [34], which involves the momentum transfer q = P_1 − P_2. When the final state is a vector meson, the longitudinal and transverse polarization vectors ε_L and ε_T can be written down accordingly. The momenta of the valence quarks k_{1,2}, whose notation is displayed in Fig. 1, are parametrized such that k_{1T,2T} and x_{1,2} represent the transverse momentum and the longitudinal momentum fraction of the charm quark inside the meson, respectively. One should note that there is no endpoint singularity in B_c meson decays, and the integral is still convergent without the parton transverse momentum k_{1T} of the B_c meson in collinear factorization. However, we still keep it in order to suppress some non-physical contributions near the singularity (for example, the singularity at x_1 = 0.1923 for the B_c → J/ψ decay).
Fig. 1. The leading-order Feynman diagrams for the semileptonic decays B_c⁺ → P/V l⁺ ν_l with l = (e, μ, τ).
There are three typical scales in B_c to charmonium decays: M, m, and the heavy-meson and heavy-quark mass difference Λ̄. These three scales allow for a consistent power expansion in m/M and in Λ̄/m under the hierarchy M ≫ m ≫ Λ̄. In the heavy-quark and large-recoil limits, based on the k_T factorization theorem, the corresponding form factors can be expressed as the convolution of the hard amplitude with the B_c and charmonium meson wave functions. The hard amplitude can be treated by perturbative QCD at leading order in an α_s expansion (single gluon exchange, as depicted in Fig. 1). The higher-order radiative corrections generate logarithmic divergences, which can be absorbed into the meson wave functions. One also encounters double logarithmic divergences when collinear and soft divergences overlap; these can be summed to all orders to give a Sudakov factor. After absorbing all the soft dynamics, the initial- and final-state meson wave functions can be treated as nonperturbative inputs, which are not calculable but universal. Similar to the situation of the B meson [35], under the above hierarchy and at leading order in 1/M, the B_c meson light-cone matrix element can be decomposed as in [36-40], with the unit vector v = (0, 1, 0_T) on the light cone. Here we only consider the contribution of Ψ_{B_c}, while the contribution of Ψ̄_{B_c}, which starts at the next-to-leading power Λ̄/M, is numerically neglected [41,42]. In coordinate space, Ψ_{B_c} can be expressed through the distribution amplitude φ_{B_c}, which is adopted in the form of [32], with shape parameter ω = 0.5 ± 0.1 GeV and normalization constant N fixed by the normalization condition. For the charmonium meson, because of its large mass, the higher-twist contributions are important. The light-cone wave functions are obtained in powers of m/E or Λ̄/E, where E (≈ M) is the energy of the charmonium. Following the notation of Ref. [43], we decompose the nonlocal matrix elements for the longitudinally and transversely polarized vector mesons (V = J/ψ, ψ(2S), ψ(3S)) and for the pseudoscalar mesons (P = η_c, η_c(2S), η_c(3S)), respectively. For the distribution amplitudes of the 1S and 2S states, the same forms and parameters are adopted as in [27,28]. The distribution amplitudes of the 3S states are derived in the appendix.
Form factors and semileptonic differential decay rates
The two factorizable emission Feynman diagrams for the semileptonic B_c decays are given in Fig. 1. The transition form factors F_{0,+}(q²), V(q²), and A_{0,1,2}(q²) are defined via the matrix elements of [44,45], with the convention ε^{0123} = +1. In the large-recoil limit (q² = 0), the relations F_+(0) = F_0(0) and A_0(0) = [(M + m) A_1(0) − (M − m) A_2(0)]/(2m) should hold to cancel the poles. In the pQCD framework, it is convenient to compute the equivalent auxiliary form factors f_1(q²) and f_2(q²), which are related to F_+(q²) and F_0(q²) as in [31]. Following the derivation of the factorization formulas for the B → P and B → V transitions [46], we obtain these form factors, where α_e and β_{a,b} are the virtualities of the internal gluon and quarks, respectively. The explicit expressions of the functions E_{ab}, the scales t_{a,b}, and the hard functions h can be found in [27]. In fact, taking q² → 0, these expressions agree with the results in Ref. [28]. At the quark level, the charged-current B_c → P(V) l ν decays occur via the b → c l ν_l transition, whose effective Hamiltonian is written as [47]

H_eff = (G_F/√2) V_cb [c̄ γ_μ (1 − γ₅) b] [l̄ γ^μ (1 − γ₅) ν_l],

where G_F = 1.16637 × 10⁻⁵ GeV⁻² is the Fermi coupling constant and V_cb is the relevant CKM matrix element. The differential decay rate dΓ/dq² of B_c → P l ν is given in [14], where m_l is the mass of the lepton and λ(q²) = (M² + m² − q²)² − 4M²m². Since the electron and muon are very light compared with the charm quark, we can safely neglect the masses of these two leptons in the analysis. For the channel B_c → V l ν, the decay rates for the transverse and longitudinal polarizations of the vector charmonium can be formulated as in [14], and the combined transverse and total differential decay widths are defined accordingly.
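For orientation, the phase-space factor λ(q²) given above and the physical range m_l² ≤ q² ≤ (M − m)² can be tabulated directly; the masses are the Table 1 values relevant for B_c → J/ψ τ ν_τ:

```python
import numpy as np

M, m, m_l = 6.277, 3.097, 1.777          # GeV: B_c, J/psi, tau masses (Table 1)

def lam(q2):
    """lambda(q2) = (M^2 + m^2 - q2)^2 - 4 M^2 m^2, entering dGamma/dq2."""
    return (M**2 + m**2 - q2)**2 - 4.0 * M**2 * m**2

q2 = np.linspace(m_l**2, (M - m)**2, 5)  # physical momentum-transfer range
print(q2)
print(np.sqrt(lam(q2)))                  # lambda^(1/2); vanishes at q2 = (M - m)^2
```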
Numerical results and discussions
In our calculations, some parameters are used as inputs, which are listed in Table 1.
As is known, the pQCD results for these form factors are reliable only in the small-$q^2$ region; the fast rise of the pQCD results at large $q^2$ indicates that the perturbative calculation gradually becomes unreliable there. In order to extend our results to the whole physical region, we first perform the pQCD calculations of these form factors in the lower-$q^2$ region, $q^2 \in (0, \xi(M-m)^2)$ with $\xi = 0.2\,(0.5)$ for the $B_c \to 1S\,(2S/3S)$ transitions, and then extrapolate them to the larger-$q^2$ region, $q^2 \in (\xi(M-m)^2, (M-m)^2)$. Several different approaches exist in the literature for extrapolating the form factors from the small-$q^2$ to the large-$q^2$ region. The three-parameter form is one of the most widely used models (a sketch is given below), where $F_i$ denotes any of the form factors and $a$, $b$ are the fitted parameters.
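A common realization of this three-parameter form in the pQCD literature (assumed here; the exact scaling variable in the source may differ) is

$$F_{i}(q^{2}) = F_{i}(0)\, \exp\!\left[a\,\frac{q^{2}}{M^{2}} + b\left(\frac{q^{2}}{M^{2}}\right)^{2}\right],$$

so that $F_i(0)$, $a$, and $b$ are the three fitted quantities.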
Our results for the transition form factors at $q^2 = 0$, together with the fitted parameters $a$ and $b$, are collected in Table 2, where the theoretical uncertainties are estimated from three sources.
The first kind of uncertainty comes from the shape parameters $\omega$ in the initial- and final-state wave functions and from the charm-quark mass $m_c$. In the evaluation, we vary the values of $\omega$ within a 20% range and $m_c = 1.275$ GeV by $\pm 0.025$ GeV. We find that, in this work, the form factors are less sensitive to these hadronic parameters than in our previous studies [27,28]. For example, the error induced by $m_c$ is just a few percent here, while in Ref. [28] it can reach 10-20%. This can be understood from the $B_c$ meson wave function: in Ref. [28], the $\delta$ function depends strongly on the charm-quark mass, which results in a relatively large uncertainty. The second error comes from the decay constants of the final charmonium mesons, which are shown in Table 1. Due to the low-accuracy measurement of the two-photon decay widths of the pseudoscalar charmonia, the corresponding uncertainty of $F_{0,+}$ is large. The last one is caused by varying the hard scale from $0.75t$ to $1.25t$. Most of this uncertainty is less than 10%, which means the next-to-leading-order contributions can be safely neglected. The errors from the uncertainties of the CKM matrix elements are very small and have been neglected.
Table 2 shows that the $B_c \to P/V(1S, 2S)$ transition form factors are a bit larger than our previous calculations [27,28]. This is because, here, instead of the traditional zero-point wave function, we have used the light-cone wave function for the $B_c$ meson [32]. The shapes of the leading-twist distribution amplitudes of the $B_c$ meson and of the final S-wave charmonium states are displayed in Fig. 2.

Table 1. The values of the input parameters for the numerical analysis. The tensor decay constants $f_V^T$ are determined through the assumption $f_V^T m_V = 2 f_V m_c$, which has been used in [48].
Masses (GeV): $M_{B_c} = 6.277$ [28]; $m_b = 4.18$ [28]; $m_c = 1.275$ [28]; $m_\tau = 1.777$ [12]; $m_{J/\psi} = 3.097$ [27]; $m_{\eta_c} = 2.981$ [27]; $m_{\psi(2S)} = 3.686$ [28]; $m_{\eta_c(2S)} = 3.639$ [28]; $m_{\eta_c(3S)} = 3.940$ [12]; $m_{\psi(3S)} = 4.040$ [12].
CKM: $V_{cb} = 40.9 \times 10^{-3}$ [28]; $V_{ud} = 0.97425$ [28].
Decay constants (MeV): $f_{B_c} = 489$ [27]; $f_\pi = 131$ [27]; $f_{J/\psi} = 405 \pm 14$ [27]; $f_{\eta_c} = 420 \pm 50$ [27]; $f_{\psi(2S)} = 296^{+3}_{-2}$ [28]; $f_{\eta_c(2S)} = 243^{+79}_{-111}$ [28]; $f_{\psi(3S)} = 187 \pm 8$ [12]; $f_{\eta_c(3S)} = 180^{+27}_{-32}$ [12].

Table 2. The fit parameters $a$, $b$ and the pQCD predictions of $F_{0,+}(0)$, $A_{0,1,2}(0)$, and $V(0)$ for $B_c \to nS$ ($n = 1, 2, 3$) decays, where the uncertainties come from the hadronic parameters (the shape parameters $\omega$ in the initial- and final-state wave functions and the charm-quark mass $m_c$), the decay constants, and the hard scale $t$, respectively.

Since one of the peaks of the 2S charmonium wave functions lies close to the peak of the $B_c$ meson wave function, the overlaps between them are large, which enhances the values of the $B_c \to P/V(2S)$ form factors. However, due to the presence of the nodes in the 3S-state wave functions and the smaller decay constants, the corresponding form factors of the $B_c$ decays to the 3S states are slightly suppressed. We plot the $q^2$ dependence of the weak form factors (central values, without theoretical uncertainties) in Fig. 3 for the six decay processes in their physical kinematic ranges. The different $q^2$ dependence of the form factors among the $B_c$ decays to the different S-wave charmonia can be seen clearly. For example, the form factors for the $B_c \to P/V(1S)$ transitions have a relatively strong $q^2$ dependence, while those of the $B_c \to P/V(2S/3S)$ transitions show a somewhat weaker $q^2$ dependence. In addition, most of these form factors become larger with increasing $q^2$. However, this behavior is not universal; for instance, Fig. 3 shows that some of the form factors for the $B_c \to P/V(2S/3S)$ decays decrease with increasing $q^2$ in the large-$q^2$ region. A similar situation also exists in the light-front quark model [19] and in the ISGW2 quark model [49]. This difference in behavior for the corresponding final states is a consequence of the different nodal structures of their wave functions.
Integrating the expressions in Eqs. (23) and (24) over the variable $q^2$ in the physical kinematical region, one obtains the relevant decay widths, from which it is straightforward to calculate the branching ratios. The results of our evaluation of the branching ratios for all the considered decays appear in Table 3, in comparison with the predictions of other approaches. For the $B_c \to P/V(1S)$ decays, our results are comparable to those of [6,7] within the error bars, but larger than the results from other models, owing to the values of the weak form factors.
For the $B_c \to P/V(2S, 3S)$ decays, our predictions are generally close to the light-cone QCD sum rules results of [12]. However, the relativistic quark model predictions for the $B_c \to P/V(3S)$ decays in Refs. [9,10] are typically smaller, a difference that can be discriminated by future LHC experiments.
From Table 3, we can see that the former four processes have relatively large branching ratios ($10^{-2}$), while the branching ratios of the latter four processes are comparatively small ($10^{-7} \sim 10^{-3}$), following a clear hierarchy in which the rates decrease for the higher excited final states. This is due to the tighter phase space, the smaller decay constants, and the weaker dependence of the form factors on the momentum transfer $q^2$ for the higher excited states, as can be seen in Fig. 3. The combination of these effects suppresses the branching ratios of the semileptonic $B_c$ decays to radially excited charmonia; for decays to even higher charmonium excitations, such a suppression should be more pronounced. In order to reduce the theoretical uncertainties from the hadronic parameters and the decay constants, we defined six ratios between the electron and tau branching ratios (a hedged reading of their definition is given below).

Table 3. Branching ratios (in units of %) of the $B_c \to P/V l\nu_l$ decays evaluated by pQCD and by other methods in the literature; the errors are induced by the same sources as in Table 2.

From Table 3 we obtain these ratios, where the errors correspond to the combined uncertainty in the hadronic parameters, the decay constants, and the hard scale. Since these parameter dependences cancel out in Eq. (28), the total theoretical errors of the ratios are only a few percent, much smaller than those of the branching ratios. In general, these ratios are of the same order of magnitude in the different approaches, except for the light-cone QCD sum rules [12], which obtain the smallest value, $R(\eta_c(3S)) = 33.3$. For a more direct comparison with the available experimental data [1], we recalculated some of the nonleptonic $B_c$ decays using the same wave functions and input parameters as in this paper; the errors of those results are induced by the same sources as in Table 2.
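The explicit definition of the six ratios does not survive in the text; inferred from the quoted value $R(\eta_c(3S)) = 33.3$ and the phase-space hierarchy above, they are presumably of the form

$$R(X) \equiv \frac{\mathrm{BR}(B_c^{+} \to X\, e^{+}\nu_{e})}{\mathrm{BR}(B_c^{+} \to X\, \tau^{+}\nu_{\tau})}, \qquad X = \eta_{c}(nS),\ \psi(nS),\ n = 1, 2, 3.$$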
The ratios among the branching fractions are shown explicitly in Table 4, from which we can see that the ratios $\mathrm{BR}(B_c^+ \to J/\psi\pi^+)/\mathrm{BR}(B_c^+ \to J/\psi l^+\nu_l)$ and $\mathrm{BR}(B_c^+ \to \psi(2S)\pi^+)/\mathrm{BR}(B_c^+ \to J/\psi\pi^+)$ are well consistent with the recent data [1,30] and also comparable with the NRQCD predictions [6,7]. Furthermore, the latter ratio still agrees with the previous pQCD value of 0.29 [28], although both $\mathrm{BR}(B_c^+ \to \psi(2S)\pi^+)$ and $\mathrm{BR}(B_c^+ \to J/\psi\pi^+)$ themselves differ from the earlier calculations [27,28]. We now investigate the relative importance of the longitudinal ($\Gamma_L$) and transverse ($\Gamma_T$) polarization contributions to the branching ratios of the $B_c \to V l\nu_l$ decays within Region (1), Region (2), and the whole physical region; the results and the ratios $\Gamma_L/\Gamma_T$ are displayed separately in Table 5. For the light electron and muon, the two regions are defined by splitting the physical $q^2$ range, with Region (1) covering the lower part and Region (2) the upper part. As shown in Table 5, all of the $\Gamma_L/\Gamma_T$ values are $< 1$ in Region (2), which means that the transverse polarization dominates the branching ratios in this region. This can be understood as follows. For the $B_c \to 1S, 2S$ decays, the form factor $V$, as shown in Fig. 3, increases as $q^2$ increases, which enhances the transverse polarization contribution in the large-$q^2$ region. For the $B_c \to 3S$ decays, although the value of $V$ decreases gradually with increasing $q^2$, the form factor $A_1$, which gives the dominant contribution to $\Gamma_L$, is significantly suppressed in the large-$q^2$ region, and as a result the dominant contributions to the branching ratios of the $B_c \to \psi(2S)$ decays come from Region (1).
For the $B_c \to \psi(2S, 3S) e\nu_e$ decays, $\Gamma_L$ is comparable with $\Gamma_T$ over the whole physical region. These results can be tested by LHCb and the forthcoming Super-B experiments.
Conclusion
We have calculated the transition form factors and the branching ratios of the semileptonic decays of the $B_c$ meson to S-wave charmonium states by employing the pQCD factorization approach. By using the light-cone wave function for the $B_c$ meson, the theoretical uncertainties from the nonperturbative hadronic parameters are largely reduced. It is found that the decays of $B_c$ to the ground-state charmonia have comparatively large branching ratios ($10^{-2}$), while the branching ratios of the other processes are relatively small, owing to phase-space suppression, smaller decay constants, and the weaker $q^2$ dependence of the form factors. The theoretically evaluated ratio $\mathrm{BR}(B_c^+ \to J/\psi\,\pi^+)/\mathrm{BR}(B_c^+ \to J/\psi\,\mu^+\nu_\mu) = 0.046^{+0.003}_{-0.002}$ is consistent with the recent data from LHCb. In addition, some interesting ratios among these branching fractions were discussed and compared with other studies; in general, these ratios are of the same order of magnitude across the different approaches, although there are also large discrepancies for specific decay modes. The partial branching ratios for the transverse and longitudinal polarizations were investigated separately for the $B_c \to V l\nu_l$ decays. We found that the transverse polarization gives a large contribution in the large-$q^2$ region, while for the semileptonic $B_c \to \psi(2S, 3S) e\nu_e$ decays the longitudinal contribution is comparable with the transverse one over the whole physical region. These theoretical predictions can be tested at the ongoing and forthcoming experiments.
"Physics"
] |
On the aesthetic significance of imprecision in computational design: Exploring expressive features of imprecision in four digital fabrication approaches
Precision of materialized designs is the conventional goal of digital fabrication in architecture. Recently, however, an alternative concept has emerged which refashions the imprecisions of digital processes into creative opportunities. While the computational design community has embraced this idea, its novelty results in a yet incomplete understanding. Prompted by the challenge of the still missing knowledge, this study explored imprecision in four digital fabrication approaches to establish how it influences the aesthetic attributes of materialized designs. Imprecision occurrences for four different digitally aided materialization processes were characterized. The aesthetic features emerging from these imprecisions were also identified and the possibilities of tampering with them for design exploration purposes were discussed. By considering the aesthetic potentials of deliberate imprecision, the study has sought to challenge the canon of high fidelity in contemporary computational design and to argue for imprecision in computation that shapes a new generation of designs featuring the new aesthetic of computational imperfection.
Introduction
The issue of imprecision, reflected in the discrepancy between the design and its physical embodiment, is inherent to any design-to-construction process in architecture. Numerous architectural projects from the past and the present are appreciated for tectonic qualities that result from the architect having taken advantage of this imprecision. A recent example of such work is the Armadillo Vault at the Venice Biennale, in which the geometric differences between the digital model and the physical building blocks are negotiated through adaptive assembly strategies based on computing and traditional stonemasonry techniques, yielding a highly expressive design. 1 In the history of architecture, similar examples of such negotiations can also be found, albeit involving not digital techniques but rather the negotiation between the hand drawing, the scaled physical model, and the realized full-scale structure. Among these, Antoni Gaudí's efforts to realize hyperboloid vaults, Frei Otto's experiments with tensile and membrane structures, and Félix Candela's work on ultra-thin parabolic shells are notable examples.
Today, with the fast-paced incorporation of advanced digital modeling, computation, and fabrication techniques in architecture, a closer look at the imprecision occurring between the digital model and its physical embodiment is important because of its generative potential, both at the level of material performance and at the level of design aesthetics. The new media, viewed not only as means for representing and handling complexity but also as media for explorative inquiry, have the capacity to expand the current repertoire of imprecision-handling strategies onto a new territory, in which imprecision becomes the central driver of the design process. Elevated interest in this new direction is already visible in current digital design research, taking the form of a paradigm shift coined the second digital turn. [2][3][4][5] This study proceeds along similar lines of thought by considering an artistic practice in which the inconsistencies between digital models and their material representations are sustained and amplified. A closer look is taken at four different digital fabrication techniques recently employed in architecture, to understand, capture, and characterize how imprecision and manufacturing errors occur in these techniques. The overarching purpose is to reveal how imprecision in digital design and production can be embraced and turned from a weak point into a creative opportunity.
One potential of the positive approach to manufacturing errors argued for in this study is its capacity to guide digital design and fabrication beyond the realm of high fidelity and ultimate precision, toward an aesthetic practice marked by novel expressive forms and new types of tectonic design features. Such an establishment and acknowledgment of a new aesthetic repertoire in architectural practice can lead to yet another important benefit, namely enhanced sustainability in building component production. If certain geometric, material, aesthetic, and textural imprecisions start being classified as aesthetically valuable instead of flawed, this can lead to a substantial reevaluation of current paradigms and quality assessment criteria in building component manufacturing. This can eliminate the need to discard some components from the production chain and thereby bring the economic benefits of decreasing construction material waste, saving manufacturing time, and making the most of the intellectual and technological effort invested in customized digital fabrication processes.
Modern craft discourse and its stance on imprecision
To explain the relevance of linking imprecision and errors in digital fabrication with design aesthetics, a broader discourse on modern craft needs to be considered. This discourse provides arguments for the value of imprecision in design production but also creates a foundation for exploring alternative takes and territories within digital fabrication research in architecture. In this context, Richard Sennett, in his seminal book 'The Craftsman', defines the traits of good craft and offers hints as to how, under certain conditions, even the use of machines could be regarded as good craft. 6 According to Sennett, craft is much more than a skillful, well-trained use of a tool. Rather, it entails a broader capacity, marked by the simultaneous use of the hand and the head, of skill and imagination, of combined problem solving and problem finding. A good work of craft is open-ended. It shows signs of improvisation and is underdetermined rather than strictly following a plan. Tactility, relativity, and incompleteness are the markers of good craftwork. Interestingly, Sennett chooses to critically discuss digital architectural design in this context. He urges the modern digital architect to avoid the risk of hands-off design automation caused by digital tooling and encourages adopting a craftsperson's way of thinking while using technology. He suggests that this could be achieved by the architect actively participating in the process carried out by a machine and critically learning from interaction with it. Sennett claims that open-ended, hands-on experimentation with machine workflows creates a condition for creative discovery and that making these workflows intentionally incomplete and indeterminate promises novel features to emerge in a design.
In a similar manner, David Pye, in his well-known piece 'The Nature and Art of Workmanship', supports the value of imprecision in design production. 7 He introduces the concept of the workmanship of risk to place a clear aesthetic value on the imperfections of crafted work, including work crafted with the aid of machines. In the workmanship of risk, the artisan operating a machine deliberately takes a creative risk by tweaking the process in a way that departs from the industrial manufacturing standard, to achieve unanticipated aesthetic results. The imperfections of the materialized design arising from this approach have high aesthetic quality because they feature the uniqueness and diversity so valued in handmade objects. This uniqueness and diversity counter the uniform and repeatable aesthetic of designs materialized through carefully planned and precisely executed mass production. Pye also argues that the imperfections of material processing at the level of surface detail are as valid determinants of high quality as the expression of an object's shape.
A kindred perspective is also presented in a more recent craft theory by Howard Risatti, who acknowledges the aesthetic value of imprecision, discussing it from the standpoint of material imperfections. 8 According to Risatti, craft is characterized by the material being worked in harmony with its properties. If those properties also embrace irregularities of the material, the craftsperson negotiates around these by skillfully and artistically incorporating them into the finished piece. In this way, the material irregularities turn into valuable aesthetic design features.
The discourse on craft presented above provides a good argumentative foundation for perceiving the unavoidable material imperfections as a creative advantage, by stressing their role in enhancing the uniqueness of the object and expressing the human effort and manual intervention behind it. However, it also makes it possible to take a step further and adopt an alternative perspective, in which one does not imitate the processes of handicraft in digital production but instead attempts to identify the imprecisions occurring in those processes and employ them for artistic exploration purposes. Exactly this way of thinking is argued for in this study. The suggestion is to validate the positive value of error and imprecision in the digitally supported materialization process by looking at the inherent weaknesses of various digital fabrication techniques and exploiting them as a qualitative advantage.
A similar alternative approach to the value of craft, albeit from a different angle, has already been hinted at in the digital architecture discourse. It has been argued that the digital manufacturing process can reconcile the architect with craft in a new way. That is to say, the interaction of the designer with the material through the medium of a digital machine, instead of having a figurative or ornamental purpose as in traditional craft framings, can be much more open-ended and based on material negotiation. 9 In this way, it can unleash the morphogenetic properties of the material and thereby yield unanticipated tectonic design features with high aesthetic value. 10
Existing research contexts and new knowledge contributions
To position this study within the existing contexts of architectural digital fabrication research, three themes of relevance are discussed below: material and tool agencies, intentional imprecision in computation, and the aesthetics of imperfection. Within those contexts, the knowledge contributions of the research presented in this article are discussed.
Material and tool agencies
The interest of digital architectural design in material agency was sparked by the writings of Manuel DeLanda, who claimed that matter has morphogenetic powers and should be employed for the purpose of giving form. 11
While working on her doctorate at the City University of New York, Paxson was hired by a smaller firm with offices across the country. There, she heard that SOM was looking for someone to join their Computer Group. When she called to find out about the job, they told her she wasn't eligible because she did not have a computer science degree. Soon after, the head of the design department called Paxson and then offered her International Journal of Architectural Computing 00(0) form. 11 Such a generative framing of matter is significant for the study presented in this article, as it confirms the prominence of material behaviors in shaping the aesthetics of a design. Architectural research efforts inspired by these generative potentials of matter successfully demonstrated that material behaviors do not need to be controlled to achieve meaningful aesthetic effects. One of the precedent projects is the P-Wall by Andrew Kudless, in which the dynamic behaviors of a flowing plaster mass cast into a flexible textile formwork were explored as a novel way of shaping the expression of an architectural partition. 12 Several studies extended this idea, by coupling it with the formative agency of tools. Flexible formworks partly capitalizing on spontaneous material behaviors and partly steered by means of customized programming of digital machines were introduced. 13,14 In other studies, it was additionally proposed to combine the digitally-aided crafting of matter with its manual fashioning. 15 It was also demonstrated through physical prototypes and examples that customized digital fabrication setups triggering formative material behaviors enable an interesting approach to design materialization that extends beyond the straightforward manufacturing of a 3D model. [16][17][18][19] From the above work, it can be concluded that enlightening but isolated perspectives on material and tool agencies have been presented so far. Namely, the previous studies focused on a single material combined with a single fabrication technique. What seems to be still missing is a comparative analysis of agencies in various materials, fabrication techniques and material crafting methods. The intention of the investigation presented in this article is to provide such a comparison by probing four combinations of different digital fabrication approaches, material typologies and manual material crafting strategies.
Imprecision in computation
Imprecision was explicitly addressed as a positive element of the computational design process at the 2018 conference of the Association for Computer Aided Design in Architecture (ACADIA), where it became the central theme of discussion. The introductions to the conference proceedings challenged the preconception of computer-aided design as ultimately precise by boldly stating that it requires neither precision nor full dependency on computers. 20 Although the research papers in the proceedings presented a wide array of approaches pertaining to imprecision in digital and material processes, the primary focus of the majority of the studies was on computational strategies for bridging the gap between the digital model and the physical piece. One study, however, stands out by explicitly stating that digital imprecisions could be explored for aesthetic and creative purposes. 21 The authors demonstrate how intentional digital imprecision enables designers to step outside the disciplinary conventions and how it can become a generative design parameter rather than a factor to exclude for the sake of ultra-precise, literal materialization. Because the mentioned study only discussed imprecisions in the context of architectural drawing and representation, however, it can be concluded that the phenomenon of imprecision at the level of materials, digital techniques, and handicraft still remains undisclosed.
In light of this, the presented study seeks to contribute with an analysis of imprecisions inherent to a particular set of digital techniques and manifested throughout the entire design process-from initial digital concept creation to its final materialization. This analysis is done by following the computational translations and geometrical transformations that a conceptual digital blueprint model undergoes as it is materialized into a physical design instance. Additionally, examples of how the identified imprecision occurrences can be further explored to shape novel aesthetic features of the design are provided.
Aesthetics of imperfection
Last but not least, a theme of significance for this study is that of the aesthetics of imperfection. In design research outside of architecture, such as product design research, the mistakes of the manufacturing process are highly valued. They are claimed to make the experience of the design object richer and more enduring. 22 If deliberately made visible in the design, they have a positive symbolic value, with an imperfect material object reflecting the imperfections of its human user and therefore contributing to her/his aesthetic enjoyment. 23 Through this positive approach to imperfection, the mainstream ideal of perfection is challenged by a novel concept of imperfect beauty. 24 In contrast to product design, in architecture only a few studies have so far embraced imperfection as a viable design feature. One study contributed an important discussion on a theoretical level by arguing that the prevalent focus on digital precision eliminates the great potential of material agency. 25 Through this argumentation, the authors attempted to invite the architectural community to overcome the prevalent fear of imperfection in favor of creatively exploring it.
A similar line of reasoning was present in the prefaces of the proceedings of the aforementioned ACADIA 2018 conference. The texts conveyed provocative claims that deliberate imprecision in computation could give rise to novel typologies of architectural form, surface, and texture. 26 Despite this, the majority of the research papers in the ACADIA 2018 proceedings focused their discussions on computational strategies for handling material imprecisions rather than on their new aesthetic implications.
An outline of the aesthetic benefits of imprecision can, however, be found in architectural research on 3D printing. One of the first studies on the subject discussed how the errors of 3D printing using the layered material deposition technique can be turned into aesthetic opportunities. 27 This work was continued in studies that explored intentional errors in 3D printing, generated by deliberately tweaking a 3D printer's G-code and hardware setup. 28,29 Owing to these studies, knowledge of the aesthetics of imprecision in the context of 3D printing is now quite broad. At the same time, however, the aesthetics of imprecision for other fabrication techniques remain unexplored.
Consequently, in this context the contribution of the presented research resides in the characterization of the inherent aesthetic features arising from imprecision in four different fabrication approaches: CNC milling, vacuum thermoforming, 3D printing using the binder jet technique and robotic single-point incremental forming.
Research method and the design experimentation process
The described study was carried out within the methodological tradition of research by design, following its established criteria for research quality and validity. 30,31 Accordingly, the central medium for accessing and producing knowledge was the process of designing and creating digital and physical artifacts. 32 The analyses of the qualities of the created designs, the analyses of the phenomena accompanying their creation, and reflections upon them formed the basis for drawing research conclusions in a way considered valid by the architectural research community. 33,34 In its entirety, the research process had a pragmatic purpose of deepening current knowledge, as well as a strongly experimental and speculative attitude geared toward provoking the status quo through a discussion of alternative scenarios of making at the border of digital design and handicraft. 35 Consequently, a research framework consisting of four design experiments was devised to understand the phenomenon of imprecision and characterize its fundamental influence on aesthetic design features. Each experiment involved the materialization of an object whose geometrical form originated from a common digital blueprint representing an abstract, double-curved 3D form. The form was intentionally kept abstract to limit and focus the experiments on issues of core concern to the study, therewith eliminating the need to handle the large number of complex design parameters that would have arisen if the materialization of an architectural element with a particular function had been considered. The blueprint also served as a stable frame of reference, making it possible to trace and comparatively analyze the geometric and surficial discrepancies between the original geometry and its materialized instances.
The conducted experiments probed four different digital fabrication techniques, materials, and crafting methods ( Figure 1). Each experiment included three stages of transition from digital blueprint to physical artifact: digital processing, molding, and casting.
Each experiment began with the digital processing of the original blueprint model. The processing involved geometric alterations necessary for materializing the blueprint using the chosen fabrication method. Then, the processed geometry was used as input to fabricate the mold. In each experiment, one of the four fabrication methods (CNC milling, vacuum forming, 3D printing, and robotic incremental forming) was employed with one of the four different mold materials (polystyrene foam, gypsum, alginate, and a thermoplastic polymer PET-G). Each experiment ended with the casting of a silicone solution, in liquid and/or paste-like state, into the mold. The casting constituted the crafting part of the process, with silicone applied and manipulated by hand using various instruments and manual techniques.
Experiment 1: Imprecisions in materialization involving CNC milling
This first experiment began with the digital processing stage featuring a conversion of the original blueprint geometry into a representation required by the fabrication technique. Conventionally, CNC milling requires a mesh surface model to generate the machine toolpaths, while the blueprint was created as a NURBS (non-uniform rational basis-spline) model consisting of curves. To convert that model into the required representation, standard tools available in the 3D modeler Rhinoceros ® 5 were used. Firstly, a NURBS patch was generated from the curves using a built-in Patch command, which fits a surface through selected curves. The patch surface is built by first finding the best-fit plane through the points sampled along the NURBS curves and then by deforming this plane into a double-curved surface that attempts to match the sampled points to the extent determined by a dedicated Stiffness parameter. The resultant NURBS patch surface was then converted to a triangular mesh representation, using a built-in STL (stereolithography) conversion function in Rhinoceros®. This triangular STL mesh representation was employed to generate the machine code in the firmware of the CNC machine. Therein, the model was sliced automatically into horizontal sections of equal height. The points marking the section borders constituted the final 3D data included in the machine code.
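To make the slicing step concrete, the following is a minimal, self-contained Python sketch (not the actual machine firmware; the mesh data, slice increment, and tolerances are hypothetical) that intersects a triangulated mesh with horizontal planes and collects the section points that a CNC controller would typically consume:

```python
# Minimal sketch: slice a triangulated mesh with horizontal planes.
# Each triangle is a tuple of three (x, y, z) vertices; the result maps
# each slicing height to the list of edge-plane intersection points.
# Points are not chained into ordered contours here.

def slice_mesh(triangles, z_min, z_max, increment):
    sections = {}
    z = z_min
    while z <= z_max:
        points = []
        for tri in triangles:
            # Examine the three edges of the triangle.
            for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                za, zb = a[2], b[2]
                # The edge crosses the plane z = const only if its
                # endpoints lie on opposite sides of that plane.
                if (za - z) * (zb - z) < 0:
                    t = (z - za) / (zb - za)  # linear interpolation factor
                    points.append((a[0] + t * (b[0] - a[0]),
                                   a[1] + t * (b[1] - a[1]),
                                   z))
        sections[round(z, 6)] = points
        z += increment
    return sections

# Hypothetical usage: one triangle and a 2 mm slicing increment.
mesh = [((0.0, 0.0, 0.0), (10.0, 0.0, 8.0), (0.0, 10.0, 12.0))]
for height, pts in slice_mesh(mesh, 0.0, 12.0, 2.0).items():
    print(height, pts)
```

The slicing increment in such a sketch plays the same role as the machine's sectioning parameter: coarsening it directly coarsens the stepped surface of the milled mold.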
For mold fabrication, high-density polystyrene foam was chosen. Polystyrene foam was gradually milled away in horizontal planes, resulting in a negative mold. The parameters of the milling process were set within the machine code, with the slicing increment, step size, drill diameter, and tooltip shape directly affecting the precision, roughness, and quality of the mold surface.
The manual casting process involved pouring a liquid silicone solution into the mold and then carefully distributing it using a paintbrush. The painting was followed by multidirectional tilting of the mold. Tilting was employed to counter the gravitational flow of the liquid silicone toward the bottom of the mold and, through this, to equally distribute it on the mold surface. The tilting continued until the silicone began to coagulate.
Observed phenomena of imprecision. The first imprecisions occurred in the digital processing stage. The earliest ones arose during the computational conversions of the original NURBS curves into a NURBS patch and then into a mesh representation. In such conversions, some geometric data describing the precedent geometries is lost, as each converted instance is only an approximation of the geometry from which it was generated. 36 In this experiment, such computational loss of precision was not discernible by eye in the produced physical model. Therefore, for this particular experiment setup, it did not significantly affect the aesthetic expression of the final design.
The second and more aesthetically profound occurrence of imprecision accompanied the mold fabrication. Because the mesh model had been sectioned to produce the machine code, the final geometrical data for the milling machine consisted of selected points lying on the section curves. These points contained only a portion of the geometrical data from the original model, causing a significant departure from the initial geometrical description. This data loss left its marks on the fabricated mold. The mold acquired a stepped surface, with the traces of the CNC mill toolpath forming new features that were not present in the original design (Figure 2). These traces are themselves unevenly distributed: they are visually evident in the flatter, less curved zones of the model and less visible in its steepest, most curved zones.
During mold fabrication itself, further occurrences of imprecision were captured, related to material processing. Because in CNC milling the mold is produced by gradually milling away the material, imprecision also emerges from the contact between tool and material. The chosen mold material, polystyrene foam, is not ideally compact and hard. Therefore, its processing with a drill produced artifacts of roughness on the surface of the mold.
The final, although very subtle, occurrences of imprecision were discovered during the manual crafting of silicone into the mold. Because the silicone solution was continuously worked into the rough surface of the mold using a paintbrush, the silicone cast not only inherited an intensely rugged surface texture and the toolpath traces from the mold but also acquired an opaque, translucent appearance (Figure 2), features that were not present in the ideally smooth and transparent digital blueprint representing the initial design concept.
Examples of further design explorations of the discovered imprecisions. In the stage of geometry processing, the imprecision occurring due to geometric data loss in the NURBS-to-mesh conversions could be further emphasized by exaggerating the parameters of the computational conversion processes, to produce a final mesh that departs from the outline of its NURBS precedent in a more dramatic, visually discernible way.
Further, in the step of mesh translation into machine code, the CNC milling parameters, such as the sectioning increment and the roughing and finishing passes, could be manipulated further to explore other toolpath curve distributions, which would then affect the surface expression and roughness.
In the stage of mold fabrication, other mold materials, such as various kinds of wood, polymers, or lower-density foams, could be employed to explore the effects of their hardness and density on the milled surface expression. This could potentially cause other textural qualities and surface artifacts to emerge, extending beyond the discoveries made in this experiment. In the stage of material crafting, a broader repertoire of silicone application tools could be used to leave inherent traces within the material, for example, paintbrushes of varying shape and size, clay modeling spatulas, and palette knives. In addition, extra manual treatment of the silicone mass with these different tools could be explored when the mass begins to coagulate, to leave visually evident traces within the cast.
Experiment 2: Imprecisions in materialization involving vacuum thermoforming with CNC-milled dies
In this experiment, the digital processing stage involved the creation of the digital geometries needed for the production of positive thermoforming dies. These dies were fabricated through CNC milling of polystyrene foam. Therefore, the original digital blueprint was processed using the same workflow as in the first experiment. The differences were that a positive geometry was now created and that it was additionally duplicated and scaled down to produce a larger and a smaller die.
In the stage of mold fabrication, the positive dies CNC-milled earlier in the process were used as underlays for vacuum thermoforming. Two thermoplastic polymer sheets were heated and lowered onto the polystyrene dies. Vacuum was activated, causing the air between the dies and the sheets to escape and making the softened polymer fold itself over the dies. The process resulted in two molds that could be nested into one another.
In the stage of casting, the silicone was poured into the larger mold. As soon as the solution had partly coagulated, the smaller mold was placed inside, lifted upwards after a while, and left in this position until the solution had fully cured.
Observed phenomena of imprecision. Exactly as in the first experiment, the first occurrences of imprecision were related to the digital model processing for CNC die milling and then to the milling of the polystyrene foam dies. As the nature of these imprecisions was similar to that in the previous experiment, they are not discussed here.
The next occurrence of imprecisions was captured upon vacuum thermoforming of the thermoplastic polymer molds. The first discovered feature of imprecision was the blurred border of the mold in relation to the outline of the die. Another feature was a partial texture inheritance from the polystyrene foam die by the polymer mold. This inheritance occurred imprecisely as well, becoming most defined only at the roughest points of the polystyrene underlay, with the rest of the polymer mold surface acquiring a smoother texture (Figure 3).
The final imprecisions occurred during material crafting, that is, manual mold parting. The parting caused air to enter the still-wet silicone solution, creating air cavities within the mass that were unplanned in the original design (Figure 3). The resulting cast was imprecise also because it acquired variable thicknesses that did not exactly follow the distance between the molds.
Examples of further design explorations of the discovered imprecisions. In the stage of digital processing, the inaccuracies arising from the approximation of the mesh model into a stepped toolpath representation could be harnessed to customize the milling process by creating a continuous feedback loop between the digital model, the toolpath representation, and the features of the physical model. In particular, the phase of toolpath creation, instead of being done in the slicer firmware of the CNC machine, could be performed directly within the 3D modeling environment, using parametric aids such as Grasshopper® in Rhinoceros® 3D. This would allow the toolpath to be defined parametrically from the bottom up instead of relying on its automated generation. Such a custom toolpath design could form a basis for outputting the machine code directly from the parametric environment, using dedicated plugins or custom programming in, for example, Python, as sketched below.
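A minimal sketch of what such a bottom-up machine-code export could look like is given below. The feed rate, safe height, and file name are hypothetical, and a real post-processor would add machine-specific headers; only generic G-code words (G21, G90, G0, G1) are used:

```python
# Minimal sketch: write a polyline toolpath as G-code.
# 'toolpath' is an ordered list of (x, y, z) points, e.g. exported
# from a parametric definition in Grasshopper/Rhino.

def write_gcode(toolpath, path="toolpath.nc", feed=800, safe_z=25.0):
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    x0, y0, _ = toolpath[0]
    # Rapid move above the first point before cutting begins.
    lines.append(f"G0 Z{safe_z:.3f}")
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f}")
    for x, y, z in toolpath:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed}")
    lines.append(f"G0 Z{safe_z:.3f} ; retract")
    with open(path, "w") as f:
        f.write("\n".join(lines))

# Hypothetical usage: a shallow zigzag pass.
write_gcode([(0, 0, -1.0), (50, 0, -1.0), (50, 5, -1.2), (0, 5, -1.2)])
```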
In addition, if offline or real-time sensing of the milled surface features, for example, roughness or local curvature, were employed, then the numerical data from such surface inspection could be incorporated into the parametric script to define the shape of the toolpath curve and even adapt it in real time during milling (a schematic sketch follows). This could open up a vast space for explorations that could directly affect the outline, positioning, and other features of the toolpath curve. If a 6-axis industrial robot were to replace the 3-axis milling machine, this exploration could become very sophisticated in terms of the 3D articulation of the toolpath marks on the milled surface, which could now be processed from many directions in space. Secondary and tertiary milling rounds could be designed, based on 3D scanning data representing the surface milled in the first round. The surface features of this 3D scanned representation could then become an additional source of numerical data, employed to define the parameters for the second and third rounds of milling. In this way, the final die produced in such a process could gain new surface features generated from its own local surface imperfections translated back into the toolpath model.
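A schematic illustration of how sensed surface data could feed back into the toolpath follows; the roughness samples and the modulation rule are entirely hypothetical and stand in for whatever inspection data a scanner would supply:

```python
# Schematic sketch: modulate toolpath depth from sensed roughness data.
# 'roughness' maps sampled (x, y) positions to a roughness value; each
# toolpath point is lowered in proportion to the nearest sample, so that
# rougher zones receive a deeper secondary milling pass.

import math

def adapt_depths(toolpath, roughness, gain=0.5):
    adapted = []
    for x, y, z in toolpath:
        nearest = min(roughness, key=lambda p: math.hypot(p[0] - x, p[1] - y))
        adapted.append((x, y, z - gain * roughness[nearest]))
    return adapted

# Hypothetical usage: two roughness samples steering a three-point path.
samples = {(0.0, 0.0): 0.4, (50.0, 0.0): 0.1}
print(adapt_depths([(0, 0, -1.0), (25, 0, -1.0), (50, 0, -1.0)], samples))
```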
In the stage of mold fabrication, one could tamper with the vacuum thermoforming process parameters, such as the heating time, the temperature, and the moment of vacuum application. Changing these parameters would affect the level of material agency of the thermoformed polymer, making the border of the geometry either more or less defined. Tampering with these parameters could also cause new artifacts to emerge on the formed plastic surface, such as wrinkles and webbing, which could form interesting additions to the surface expression. Moreover, the surface of the CNC-milled die could also be manually crafted by sanding it, closing its pores with paint, or coating it with putty, to explore how this affects the appearance of the vacuum mold's surface and texture.
In the casting stage, the material agency of the silicone solution in its transition stages proceeding from liquid through partly coagulated to fully cured states could be explored more extensively. Through the diversification of the mold parting and demolding times for each of the cast sublayers one could affect the visual and in-mass properties of the final cast.
Another possibility for extending the material crafting explorations at the level of casting could involve the introduction of other materials left permanently within the silicone mass, such as polystyrene foam pellets (Figure 4), to manipulate the precision of the cast thickness in relation to the molds. Other potential crafting interventions introducing higher levels of imprecision could embrace the use of removable, dispersed mold elements that would locally affect the cast thickness and its aesthetic appearance (Figure 4).
Experiment 3: Imprecisions in materialization involving molding using a 3D printed die
In this experiment, imprecision occurring at the magnified level of detailed surface design was investigated. Therefore, a fragment of the digital blueprint was worked on and populated with new surface features: 3D corrugations.
The digital processing stage involved modeling a cross-section profile representing one corrugation. A fragment of the patch surface created in experiment 1 was sliced out and populated with this corrugation. The resultant NURBS model was then converted into a triangulated mesh representation required by the software generating the 3D printer's machine code. That model was sectioned horizontally, with the section curves demarcating the approximate outlines of layers that would later on comprise the physical model.
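As a schematic illustration of the corrugation-population step described above, the following sketch superimposes a periodic profile onto a sampled surface. The base surface, amplitude, and frequency are hypothetical; in the actual experiment this was done on a NURBS patch in Rhinoceros, where points would be displaced along surface normals rather than the z axis:

```python
# Schematic sketch: superimpose a periodic corrugation profile onto a
# sampled height-field surface.

import math

def corrugate(base_height, nx=40, ny=40, size=100.0, amp=1.5, freq=6):
    grid = []
    for i in range(nx + 1):
        row = []
        for j in range(ny + 1):
            x, y = size * i / nx, size * j / ny
            # Sinusoidal cross-section profile repeated along x.
            z = base_height(x, y) + amp * math.sin(2 * math.pi * freq * x / size)
            row.append((x, y, z))
        grid.append(row)
    return grid

# Hypothetical base surface: a gentle double-curved bump.
dome = lambda x, y: 10.0 * math.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 1500.0)
points = corrugate(dome)
print(points[0][:2])
```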
The mold fabrication embraced the creation of a gypsum die, which was imprinted to produce the final mold for silicone casting. The die was 3D printed layer by layer in the binder jet technique, with layers of gypsum powder fused together using a liquid bonding agent. The finished model was imprinted in an alginate mass (Figure 5).
In this experiment, the casting was done using pigmented silicone, to explore how the overlay of colors and transparencies affects the aesthetic surface qualities. To better control the respective sublayers of the cast and prevent their excessive blending, the liquidity of the silicone was reduced by using a hardening agent that turned it into a silicone paste. The paste was then applied with a brush, layer by layer, gradually filling in the mold corrugations. While the paste was still wet, the excess was removed to create a smooth underlay for each subsequent layer, also applied in paste form.
Observed phenomena of imprecision. The first imprecisions occurred during the digital conversion of the corrugated NURBS model into a mesh model and then into the machine code required by the 3D printer. As in the previous experiments, the geometric data employed to fabricate the physical model was reduced in relation to the original 3D data describing the blueprint. The model for the 3D printing process was an approximated, stepped version of its originally smooth NURBS precedent.
A second round of imprecisions occurred during mold fabrication. Firstly, the 3D printed die acquired visible layers on its surface and gained subtle surface porosities resulting from the roughness of the bonded gypsum layers. Further imperfections emerged during the preparation of the alginate mass from a mixture of alginate powder and water: air bubbles, randomly distributed within the mass, arose from mixing. The imprinting of the 3D printed die into the alginate also yielded imprecisions. The layered traces of 3D printing became erased within the alginate, while the rough texture of the gypsum was transferred into its surface. New features also emerged due to the rupture of the air bubbles in the alginate mass, causing the imprint to acquire small indents in its texture. Finally, a slight change of dimensions and proportions in relation to the gypsum die occurred in the alginate mold due to the shrinkage of the alginate upon drying. This affected the distances between the corrugations, which became smaller compared with those in the 3D printed gypsum die.
The imprecisions of the casting stage embraced partial losses of surface continuity upon silicone demolding, in the form of randomly distributed surface chippings. In some places, bits of silicone became so strongly embedded within the alginate that they were left in it upon demolding, making the finish of the cast incomplete at those points. The silicone cast was also endowed with a new matte finish and a white tint, arising from the delicate roughness of the alginate (Figure 5).
Examples of further design explorations of the discovered imprecisions.
In the digital processing stage, the surface corrugation pattern could be made more irregular, to make the proportion change due to alginate shrinkage more dramatic. This would affect the global appearance of the silicone cast and create an opportunity for further exploring both the surficial and the cross-sectional properties of the cast: varying its thickness, generating changes in surface roughness, and producing gradual color transitions, all applied within the mass of one cast.
In the stage of molding, the air bubble rupture effect causing surface indents could also be amplified, for example, by intentionally introducing a larger number of air bubbles within the alginate mass through either mixing it mechanically or using an air pump.
In the stage of casting, the instruments for silicone application, in combination with the material properties of the liquid silicone, could be explored further. For example, ultra-precise application of liquid silicone could take place within the corrugations using a syringe or a high-precision pipette. This could also be accompanied by the partial blending of colors that would occur if silicone with varying color hues and translucency levels were applied and combinations of silicone paste and liquid silicone were used in the process. 37
Experiment 4: Imprecisions in materialization involving robotic incremental forming
In this experiment, the intention was to explore imprecision in bigger models, closer to the architectural scale. Therefore, the digital blueprint was enlarged to three times its size, and robotic fabrication was chosen as a medium for larger-scale materialization.
The digital processing stage involved the creation of a toolpath for the robotic process. The workflow from the first experiment was reused to generate a NURBS patch surface that would form the basis for generating the toolpath.
However, before toolpath generation, this patch needed to be altered to comply with the requirements of the single-point incremental forming (SPIF) process for the material chosen to be formed, a PET-G polymer. That is, the steepness of the original geometry at its border needed to be reduced from 90 degrees to less than 75 degrees. The straight top-edge cut-off also needed to be eliminated, by creating a curved geometry outline at the top. Finally, a secondary apron surface, inclined at 45 degrees, was added along the outer geometry border, to aid the forming of the steep geometry border.
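A minimal sketch of how such a steepness constraint could be checked on a sampled surface is given below. The 75-degree limit follows the text; the test surface, the sampling grid, and the finite-difference step are hypothetical:

```python
# Minimal sketch: flag surface samples whose wall angle exceeds a limit.
# The wall angle is measured from the horizontal plane; SPIF typically
# cannot form near-vertical walls in a single pass.

import math

def steep_points(height, xs, ys, limit_deg=75.0, h=0.5):
    flagged = []
    for x in xs:
        for y in ys:
            # Finite-difference surface gradient.
            dzdx = (height(x + h, y) - height(x - h, y)) / (2 * h)
            dzdy = (height(x, y + h) - height(x, y - h)) / (2 * h)
            angle = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
            if angle > limit_deg:
                flagged.append((x, y, angle))
    return flagged

# Hypothetical usage on a steep-sided dome.
dome = lambda x, y: 80.0 * math.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 200.0)
grid = [i * 5.0 for i in range(21)]
print(len(steep_points(dome, grid, grid)))
```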
After these alterations, the resultant patch surface was sliced with horizontal, equally distributed planes and smooth section curves were extracted. A dense network of points lying on those curves was then generated and used as a basis for the final polyline toolpath generation.
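The per-layer point network can then be chained into a single continuous polyline for the robot. A schematic sketch follows; the alternating travel direction is one plausible linking strategy, and the contour data is hypothetical:

```python
# Schematic sketch: link per-layer contour points into one continuous
# polyline toolpath for incremental forming. Alternating the travel
# direction on every layer avoids long jumps across the part.

def link_contours(layers):
    toolpath = []
    for index, contour in enumerate(layers):
        ordered = contour if index % 2 == 0 else list(reversed(contour))
        toolpath.extend(ordered)
    return toolpath

# Hypothetical usage: two square contours at increasing depth.
layer0 = [(0, 0, -1), (50, 0, -1), (50, 50, -1), (0, 50, -1)]
layer1 = [(5, 5, -3), (45, 5, -3), (45, 45, -3), (5, 45, -3)]
print(link_contours([layer0, layer1]))
```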
In the stage of mold fabrication, the robotic SPIF process was carried out. A large PET-G polymer sheet was deformed incrementally, with the robot arm moving along the given polyline toolpath.
The casting process involved a pigmented silicone solution that was applied in liquid state so that it could flow and follow the outline of the robotic mold. The silicone was applied in several layers. Each time it was first poured roughly and then more accurately distributed with a flexible spatula and a paintbrush. Before entering the coagulation phase, the material was continuously moved from the lowest parts toward the most inclined top edges to avoid excessive accumulation toward the bottom of the mold.
Observed phenomena of imprecision. The imprecisions of the digital processing stage concerned, again, the translations between the different geometry representations, in particular the conversion from the originally smooth blueprint curves to the jagged polyline curve defining the robot toolpath. Similar geometrical data losses took place as in the previous experiments, with the polyline curve containing only a portion of the original geometrical data from the blueprint.
An inherently new typology of inaccuracies also arose, due to the additional model processing required by the polymer material. The blueprint geometry underwent changes of steepness at its border, together with the acquisition of a closure at the top and a secondary surface along the border. This caused the final input for the robotic process to differ considerably from the original blueprint, with the most dramatic discrepancies concerning the geometry's perimeter zone.
During the robotic forming process, the agency of the mold material also came into play quite significantly. The polymer underwent unanticipated deformations due to the internal strains induced by its forming. The geometric border was distorted in relation to the original blueprint: it became less sharply defined and spatially deformed, with the middle parts sinking down and the corners pushed upwards. In addition, the lowermost and least formed extremities of the geometry were pushed inwards, causing the emergence of local concavities in the globally convex form (Figure 6).
The imprecisions of the casting stage were tightly related to the deformations of the polymer mold. Due to the presence of the unanticipated concavities of the mold, the thickness of the silicone layer became highly varied. The silicone accumulated in a thicker layer around the convex regions and was thinnest in the steepest and concave zones. This created an intriguing aesthetic effect of increased color intensity and decreased surface translucency, and therefore a gradation of color and translucency across the surface of the cast (Figure 6), an effect that was not intended in the stage of digital blueprint creation. Finally, because the forming process was toolpath-based, the traces of that toolpath were embedded in the mold, consequently causing deflections on the surface of the cast.
Examples of further design explorations of the discovered imprecisions. The unexpected mold deformations and their consequences visible in the silicone cast can be explored further at the level of digital geometry processing, for example, by altering the course of the toolpaths of the robotic forming process.
The toolpaths can be fine-tuned locally to amplify the geometrical deformations of the mold in the stage of molding. For instance, the observed local concavity formation effect, occurring due to the inward relocation of the least formed mold parts, can be strengthened by intentionally enlarging, adding, subtracting or shifting the areas in which it occurs.
In this way, in the stage of casting, varying silicone accumulation compositions can be produced using the same model as a point of departure (Figure 7). The aesthetic expressions of the silicone casts can be explored even further, for example, by applying silicone with varying color gradients within one mass to affect the optical perception of its depth. A detailed example of such design explorations is described in a separate publication. 37
Types of imprecision
At the beginning of this inquiry, imprecision was defined as inexactness expressed as an observable discrepancy between the features of the initial design and its particular digital or physical representation. The conducted experiment series helped to refine this general definition by distinguishing and characterizing two kinds of imprecision accompanying the processes of digital fabrication: computational imprecision and material imprecision.
The identified computational imprecision relates to the level of digital data fidelity. In this study, it is defined as the variance between the data describing the original digital model and the data defining the digital representation of that model created for the purpose of digital fabrication. Such a representation is generated either by translating data from one level of resolution to another or by translating from one geometry representation, such as a NURBS model, to another one, such as a mesh model. In these translations, digital data describing the initial model is often lost and replaced by other data approximating the geometrical or topological description of the original model. While the algorithms that drive the processes of model data translation are predictable and computationally precise, their calculations produce a geometrical result that is an approximation or a lower-resolution representation of the original model. These approximations and lower-resolution models depart from the original dataset and, in this sense, can be qualified as being imprecise.
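The mechanism of this data loss can be made concrete with a toy one-dimensional stand-in. The sketch below is hedged: real NURBS-to-mesh conversions are three-dimensional and algorithm-specific, but the resolution-translation effect it demonstrates is the same.

```python
import numpy as np

# "Original model": a smooth curve evaluated densely.
t_fine = np.linspace(0.0, 1.0, 10_000)
original = np.sin(2 * np.pi * t_fine) * np.exp(-t_fine)

# "Fabrication representation": the same curve re-sampled at low
# resolution and linearly interpolated back -- the analogue of a NURBS
# surface replaced by a coarse mesh or polyline.
t_coarse = np.linspace(0.0, 1.0, 24)
coarse = np.sin(2 * np.pi * t_coarse) * np.exp(-t_coarse)
approx = np.interp(t_fine, t_coarse, coarse)

# The computational imprecision, expressed as departure from the original.
deviation = np.abs(approx - original)
print(f"max deviation:  {deviation.max():.4f}")
print(f"mean deviation: {deviation.mean():.4f}")
```

The translation itself is deterministic and exactly reproducible, yet its output measurably departs from the source dataset, which is precisely the sense in which it is imprecise.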
The material imprecision, on the other hand, relates to the level of fidelity of the physical model's features in relation to the features of the digital geometry that was used to fabricate it. This imprecision is influenced by the properties of the materials of the physical models, but also by the ways and means of processing these materials (e.g. deformation, subtraction, addition of material; tool shape and diameter). The material imprecision is characterized by the occurrence of features and artifacts in geometry and texture that were not present in the original model. These features and artifacts can arise from the dynamic behaviors of materials undergoing processing or from material properties such as density, porosity, brittleness, hardness, creep, and elasticity. In this way, the notion of imprecision in computation and digital fabrication can be regarded as an umbrella term describing both computational and material discrepancies: between the digital datasets occurring during the translation of the original model to a representation for fabrication purposes, but also between the geometric and surficial features of the input model and its materialized version, represented by the mold and the cast made from that mold.
Imprecision characteristics in digital model processing, molding, and casting
The conducted experiments revealed that imprecision accompanies all three stages of the transition process from digital blueprint to physical instantiation: geometry processing, molding, and casting.
The imprecisions of digital geometry processing are represented by the geometrical differences between the initial digital blueprint geometry and the processed models enabling fabrication. Each fabrication method requires specific data inputs. Therefore, the original 3D data of the blueprint requires conversions by partial removal, replacement or completion with new data. This creates geometrical discrepancies and in certain cases can leave visible traces in the final manufactured design.
The imprecisions of the mold fabrication stage arise from the difference between the geometrical data used as input for the fabrication machine and the physical result in the mold. The mold geometry and/or the details of its surface diverge from the perfect shapes of the meshes or toolpaths driving the machine tools. The physical result differs due to the agency of the mold material-its inherent properties and behaviors unfolding upon its processing. The exact instantiation of imprecisions depends on the method of material processing accompanying a particular fabrication method.
The imprecisions of the casting stage are indicated by further geometric and surficial discrepancies between the mold and the cast and between the cast and the digital blueprint geometry. The nature of these discrepancies depends on the global geometry characteristics of the mold and the textural qualities of its surface, as well as on the physical properties of the cast material itself, such as liquidity and viscosity levels. In addition, the particular instruments and manual techniques employed to apply the cast material onto the surface of the mold affect the character of imprecision.
Esthetic attributes accompanying imprecision
The occurrence of imprecisions at the above three levels causes the emergence of four categories of new expressive attributes that transform the aesthetic appearance of the original design (Figure 8).
Further reflections on imprecision in computation and digital fabrication
The awareness of these imprecision levels and their inherent aesthetic traces opens up a vast space for extended design explorations. By tampering with imprecision phenomena on those levels, and by exploring them with different combinations of fabrication methods, mold materials, and manual casting techniques, the designer can generate a highly diverse pool of design alternatives, all produced from a single design blueprint. An additional finding, interesting from the standpoint of how computational design is commonly perceived, is that the precision of digital modeling is not as evident as one may assume. The conducted experiments revealed that the produced digital models, such as NURBS surfaces, meshes, section curves, and toolpaths, are in themselves precise mathematically and geometrically. However, as soon as they are compared with the original digital blueprint from which they are derived, they paradoxically lose this precision, becoming approximations of the original shape.
A similar paradox can be identified during the digital fabrication of the molds. Here, mathematical precision is embedded in the digital input for each fabrication method, such as a mesh model or a polyline toolpath from which the final machine code is generated. The fabrication machine movements are also very precise, following numerical prescriptions defined in the code. However, once the machine tool makes contact with the molded material, that precision no longer holds. The often-unexpected properties and behaviors of that material come into play. Therefore, even if the machine movement perfectly follows the shape of its digital blueprint model, the mold material turns the outcome into an imprecise representation of that model.
Possible approaches to handling imprecision in architectural design and production
Although the focus of this study has been on the characterization of imprecisions in selected digital fabrication approaches, it is interesting to take a step further and consider how the discovered imprecisions could be handled in a digital design environment. One approach is to quantify, predict, and control imprecisions computationally. In this context, a study done at the Center for Information Technology and Architecture (CITA) introduced two rigorous approaches to monitoring and handling imprecisions in robotic incremental sheet metal forming of architectural elements. The first approach focused on in-process toolpath correction, based on online, distance-based monitoring of deviations between the input digital model and the physical geometry, resulting in the correction of toolpaths for the subsequent forming rounds. 38 The second approach involved the prediction of deviations using a neural network algorithm that applies forming-process data from the 3D-scanned physical geometry to create a predictive model of inaccuracies, which can then be used for the correction of the original design. 39 An interesting feature of these two approaches is that they create a rigorous feedback loop between ideation, design, simulation, fabrication, and evaluation, enabling material imprecision to be expressed numerically and handled using computational means.
If a similar logic were to be applied to deploy and creatively steer the emergence of material imprecisions in the fabrication approaches in this study, these imprecisions could be quantified as follows. For vacuum thermoforming and robotic incremental forming, the imprecisions represented by geometrical deviations of the mold could be expressed as an orthogonal distance between the ideal geometry profile and the fabricated one. 40 Another way would be to calculate the changes in principal curvature for local deviation quantification and aggregate normal vectors for global deviation quantification. 41,42 For the case of CNC milling and 3D printing, the most evident imprecisions, concerning surface quality, could be quantified using numerical surface roughness parameters obtained through measurement using optical instruments. Most simply, surface roughness would then be expressed as a deviation in the direction of a normal vector of a real surface from its ideal representation. 43 For small-scale features on surfaces, such as porosities experienced in the produced CNC-milled polystyrene molds and dyes, roughness could be quantified as the high frequency, short wavelength component of the measured surface profile curve. 44 The numerical data derived using these approaches could then be used for the prediction and creative steering of inaccuracy emergence, perhaps using as points of departure the approaches developed at CITA or other relevant strategies developed in manufacturing engineering. [45][46][47] Which workflows and computational strategies would be best for executing an imprecision handling approach based on this logic is an interesting subject for future research.
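Two of the quantities named above translate directly into short computations. The Python sketch below is a hedged illustration, not a validated metrology routine: the nearest-neighbour distance stands in for true orthogonal distance, and a moving average stands in for the Gaussian profile filter normally used to separate roughness from waviness.

```python
import numpy as np

def mean_deviation(ideal_pts, scanned_pts):
    """Mean nearest-neighbour distance from each scanned point to a dense
    sampling of the ideal profile -- a crude stand-in for orthogonal
    point-to-profile distance."""
    d = np.linalg.norm(scanned_pts[:, None, :] - ideal_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def roughness_ra(profile, window=51):
    """Ra of a measured 1-D height profile: the mean absolute value of the
    high-frequency component left after subtracting a moving-average
    'waviness' baseline."""
    kernel = np.ones(window) / window
    baseline = np.convolve(profile, kernel, mode="same")
    return np.abs(profile - baseline).mean()

# Toy data: an ideal arc, a "sagging mold" scan of it, and a noisy trace.
t = np.linspace(0.0, np.pi, 500)
ideal = np.c_[np.cos(t), np.sin(t)]
scanned = ideal + 0.02 * np.c_[np.zeros_like(t), np.sin(3 * t)]
print(f"mean deviation: {mean_deviation(ideal, scanned):.4f}")

rng = np.random.default_rng(0)
trace = 0.05 * np.sin(np.linspace(0, 4 * np.pi, 2000)) \
        + 0.003 * rng.standard_normal(2000)
print(f"Ra: {roughness_ra(trace):.5f}")
```

Numbers like these could feed the prediction and steering loop sketched in the CITA approaches, but the choice of filter, sampling density and distance metric would need to match the instrument actually used.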
A slightly different approach to imprecision handling would be a qualitative one, executed in a computational setting. Such an approach was proposed in a recent study done by one of the article's authors. A method of artistic handling of fabrication errors was developed in the context of robotic single-point incremental forming, in which the fabrication imprecisions were only intermediately expressed numerically, to support a qualitative visual analysis of the geometrical errors. In this case, imprecision emergence and amplification was meant to be triggered by the designer in the computational environment, but in a way based on qualitative judgement of the physical fabrication result. 48 Both the quantitative and the qualitative approach have their strengths and limitations. For example, the quantitative approach allows the result to be controlled toward high precision, while also requiring the development of custom algorithms and instrumentation that may not be readily available and easily graspable. The qualitative approach, on the other hand, offers an opportunity for artistic and intuitive design in the computational setting, but it may be challenging to implement in the production of final building components, which are conventionally assessed from the standpoint of accuracy. Therefore, it seems that yet another solution would be to develop a hybrid approach that combines the rigorous quantitative methodology with the qualitative one. In such a case, the precision-oriented approach could be applied to design and fabricate portions of architectural elements requiring high fidelity, such as the meeting edges of cladding panels. The less precise, qualitative approach could then be used to shape parts of the design that do not require high accuracy, such as decorative features located in the inner zones of cladding elements.
Conclusion
This study has sought to demonstrate that imprecision in the computational design process can play a generative role in shaping novel expressive attributes of the design. The contribution comprised a fundamental analysis of how imprecision occurs in the transition process from digital blueprint to physical object for four different combinations of fabrication techniques, materials, and crafting methods. Examples and descriptions of emergent aesthetic features arising from imprecision were given, together with hints for how the discovered imprecisions could be explored further.
An important inference from this study is that precision in computational design cannot be taken for granted. Digital processes commonly considered highly precise can prove not to be such upon closer examination. This observation implies that contemporary computational architectural design is in many ways already operating in the realm of imprecision and approximation. By expressing such an observation, the authors have sought to prompt the computational design community to develop further strategies employing imprecision as a creative design driver, and to emphasize the currently emerging design exploration practice based on the joint artistic agency of designers, machines, and materials. Within this new practice, the elements of craft and uncertainty more legitimately enter the computational process. Digitally conceived and fabricated designs no longer need to literally mirror their digital blueprints, and designers are no longer required to fully control the behaviors of architectural materials. The value of intentional imprecision lies in the new interactions between the digital tools, the crafting means and unpredictable material behaviors. These new interactions have the capacity to uncover new ways of architectural thinking and creation. Although geometric perfection and accuracy will remain valid design goals, a positive approach to imprecision expands them by reemphasizing the fundamental aspects of the design practice: exploring novel aesthetic expressions, exercising artistry and spontaneously engaging with the tools and the materials of making.
Thereby, the stance of the article's authors toward imprecision is positive. The authors have directly witnessed the potential of employing imprecision as a design driver in a recently completed architectural research project, in which this approach allowed them to broaden the solution space when exploring different design expressions for tactile interactive architectural interfaces. 49 From this experience, the authors infer that imprecision plays an important role in extending the foci and expressive repertoires of today's computational design.
At the scale of a building, the application of guided imprecision implies the emergence of increasingly articulated ornamental features of building materials, which, if located at eye and hand level, encourage visual and haptic examination by pedestrians or building users. A higher resolution of detail promises to yield a new generation of micro-scale architectural detailing that provokes much closer than usual examination of the architectural fabric, and a possibility to shape a new kind of personal, immediate experience of architectural materials. Moreover, certain imprecisions, such as the geometrical ones observed in the process of incremental forming, could be applied at a larger scale, that of the building. This could be achieved, for example, by generating unique, varying geometrical designs for each panel in the façade cladding. In such ways, imprecision can contribute to shaping new expressions of buildings, both at the magnified level and at the urban scale.
Finally, perhaps the most important impact of imprecision studies in architecture pertains to their possible economic and ecological implications. For example, the positive outlook on imprecision could entail that some industrially produced building elements exhibiting imperfect features, which traditionally would be eliminated from the production chain, could be kept within the production loop. 50 This could have a profound impact on the amount of waste generated by the building sector, but also on the consumption of raw materials and energy. In addition, if those imperfection-generating industrial processes could be considered for mass-customization, with their errors harnessed creatively using computational techniques and custom-designed fabrication workflows, the positive effects of imprecision could become even more articulated. Moreover, many new sustainable but currently niche materials with an imperfect appearance, such as cellulose-based composites or biomaterials fabricated by biological agents and containing living matter, for example, mycelium and bacterial cellulose, could more strongly enter the market of aesthetically valued building materials.
Such benefits of favorable treatment of imprecision are already widely acknowledged in industrial product design, in which production defects, such as uneven texturing, non-homogenous coloration, incomplete mold filling and flow lines, are employed to personalize consumer products and create unique commercial branding strategies. 51 Influential industrial designers of the 20th and 21st century, such as Gaetano Pesce, have used imperfections to design products recognized worldwide. A well-known example is Pesce's "Broadway Chair," in which an imperfect blending of translucent, colored resin during the process of injection molding forms unique in-mass colorations that make each chair design one of a kind.
The above arguments reveal the importance of studies on imprecision in the context of architecture. With the current powerful toolkits of the architect, containing computational and digital fabrication media, the discipline can contribute new insights to the current state of the art. It can broaden the current knowledge on imprecision in manufacturing onto the phases of digital design, material experimentation and large-scale construction.
"Art",
"Computer Science",
"Engineering"
] |
Image potential states of germanene
We have measured the two-dimensional image potential states (IPS) of a germanene layer synthesized on a Ge2Pt crystal using scanning tunnelling microscopy and spectroscopy. The IPS spectrum of germanene exhibits several differences as compared to the IPS spectrum of pristine Ge(001). First, the n = 1 peak of the Rydberg series of the IPS spectrum of germanene has two contributions, labelled n = 1− and n = 1+, respectively. The peak at the lower energy side is weaker and is associated with the mirror-symmetric state of opposite parity. The appearance of this peak indicates that the interaction between the germanene layer and the substrate is very weak. Second, the work function of germanene is about 0.75 eV lower than the work function of Ge(001). This large difference between the work functions of germanene and pristine Ge(001) is in agreement with first-principles calculations.
Introduction
Two-dimensional image potential states (IPS) are unoccupied electronic states that are trapped in a potential well in front of a surface. The potential well is formed by the surface projected bulk bandgap and the image potential barrier arising from the interaction between an electron near the surface and its positive image charge. The electrons in these 2D states have a free-electron like dispersion parallel to the surface and are confined in the perpendicular direction. The confinement in the direction normal to the surface results in a Rydberg-like series of peaks below the vacuum level [1][2][3]. The IPS can be measured by several techniques including inverse photoemission and two-photon photoemission [4], low energy electron diffraction [5] and scanning tunnelling microscopy (STM) [6][7][8][9]. In STM the electric field in the tunnel junction modifies the electrostatic potential and the IPS peaks are shifted to higher energies. If the IPS peaks exceed the vacuum level, we enter the field emission resonance regime. Therefore, the series of peaks are sometimes referred to as field emission resonances or Gundlach oscillations [10]. The investigation of these electronic states provides important information concerning the charge injection and the dynamics of charges on surfaces, the dissipation behaviour on topological insulators [11], induced light emission [12,13], influence of the electric field on electron dynamics [14,15], electronic effects of the surface potential corrugation [16], variations of the work function [17][18][19], quantum bit interactions [20] and quantum dot behaviour of graphene nanoislands [21]. In addition, using STM IPS spectroscopy it is also possible to obtain atomic resolution on diamond [22].
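For an ideal metal surface, the binding energies of this series take the standard hydrogen-like form, quoted here as the textbook result (the quantum defect a, with 0 ≤ a ≤ 0.5, absorbs the details of the actual surface barrier and is not specific to the systems studied below):

```latex
E_n = E_{\mathrm{vac}} - \frac{0.85\ \mathrm{eV}}{(n + a)^{2}}, \qquad n = 1, 2, 3, \ldots
```

The 0.85 eV prefactor is one sixteenth of the Rydberg energy, reflecting the e/4 effective charge of the image interaction.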
For a free-standing 2D material, theory predicts the occurrence of two mirror-symmetric Rydberg series of IPS exhibiting opposite parity with respect to the reflection plane of the 2D material [23]. The hybridization of these double series of states between the successive graphene layers in graphite produces the so-called interlayer states [24], which play a key role in the superconducting properties of alkali intercalated graphite [25]. When single and bilayer graphene are grown on SiC, the first pair of the mirror-symmetric double Rydberg series persists, indicating a weak coupling of graphene with the underlying substrate [26]. When graphene is grown on metallic substrates, the double-parity Rydberg-like series of IPS of the free-standing graphene evolves towards a single series. This is due to the repulsion between both materials and the reduction of the mirror symmetry of the free-standing graphene layer. The latter even holds for the weakly interacting regime, where the separations between graphene and metal are relatively large [27]. However, for strong interactions and thus small graphene-metal separations [28], IPS spectra reveal the formation of a graphene-metal interfacial state [29]. Moreover, IPS spectra provide very accurate information on the work function of a material. For instance, IPS measurements using STM of a graphene/metal system show a lateral modulation of the work function due to the modulated graphene/substrate interaction [29]. Furthermore, IPS studies of graphene and h-BN grown on Ag [30] reveal a difference in their work functions [31].
While several STM measurements of IPS and work function variations have been performed on different graphene systems, to our knowledge no such studies have been reported on 2D-Xenes [32] of the group 14 elements of the periodic table. The silicon, germanium and tin analogues of graphene are referred to as silicene, germanene and tinene or stanene, respectively. These 2D materials share many properties with their carbon counterpart [33][34][35][36][37]. There are, however, also a few differences between the 2D-Xenes and graphene. The honeycomb lattice of the 2D-Xenes is buckled, whereas graphene's honeycomb lattice is planar. Furthermore, 2D-Xenes have a larger spin-orbit coupling than graphene because the atomic numbers of Si, Ge and Sn are larger than that of C. Unfortunately, silicene, germanene and stanene do not occur in nature and therefore they have to be synthesized.
Here we will measure the two-dimensional IPS of germanene synthesized on Ge2Pt crystals. We will compare the IPS spectrum and work function of germanene with those of pristine Ge(001). In order to validate our experimental observations, we will also perform density functional theory calculations. The analysis of the n = 1 state of the germanene IPS spectrum, as well as the spectroscopic features at energies below the n = 1 IPS peak, will provide important information on the coupling of the germanene to the underlying substrate.
Experimental details
The image potential states have been measured with a low-temperature scanning tunneling microscope under ultra-high vacuum (UHV) conditions at 77 K. The base pressure of the UHV system was in the range of 1 × 10⁻¹¹ mbar. The germanium Ge(001) substrates were cut from nominally flat, single-side polished, slightly doped n-type samples. In order to avoid contamination of the Ge(001) substrates, only sample holders composed of molybdenum, tantalum or aluminium oxide have been used. The samples were cleaned by cycles of 500-800 eV argon ion bombardment and annealing at 1100 K [38]. After several cycles the Ge(001) substrates exhibited a well-ordered c(4×2) dimer-row reconstruction and monolayer-high atomic steps at 77 K. Subsequently, we have deposited a few monolayers of platinum (Pt) on some of our Ge(001) samples by heating a 99.997% purity Pt wire wrapped around a tungsten filament. After Pt deposition the Ge(001) sample was annealed at a temperature above the eutectic temperature (1040 K) of the Pt-Ge alloy. This resulted in the formation of Pt0.22Ge0.78 eutectic droplets. Upon cooling the Ge(001)/Pt system down to temperatures below the eutectic temperature, these droplets undergo spinodal decomposition into a pure Ge phase and a Ge2Pt alloy. The clusters found at room temperature are composed of a Ge2Pt core decorated with a germanene shell [39,40]. A detailed description of the growth studies of germanene on Ge2Pt prepared on Ge(001) is presented in the Supporting Information (available at stacks.iop.org/TDM/7/035021/mmedia). The z(V) spectroscopy experiments were performed in the constant current mode, recording the tip-surface distance while varying the bias voltage, and averaging multiple curves acquired at different positions on the surface. The dz/dV curves are the numerical derivatives of the z(V) traces. The I(V) measurements were performed using a lock-in amplifier with a modulation voltage of 20 mV and a frequency of 1.7 kHz.
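A minimal numpy sketch of the z(V) post-processing just described might look as follows; the smoothing window and all variable names are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def dz_dv(bias, z_traces, smooth=5):
    """Average several z(V) traces taken at different surface positions
    and return the numerical derivative dz/dV.

    bias     : (n,) shared bias-voltage axis
    z_traces : (m, n) tip-height traces recorded in constant-current mode
    smooth   : moving-average window applied before differentiating
    """
    z_mean = z_traces.mean(axis=0)
    if smooth > 1:  # light smoothing tames point-to-point noise
        kernel = np.ones(smooth) / smooth
        z_mean = np.convolve(z_mean, kernel, mode="same")
    return np.gradient(z_mean, bias)  # dz/dV on the same bias axis

# Example with synthetic data: 10 traces over a 1 V to 10 V sweep.
bias = np.linspace(1.0, 10.0, 900)
rng = np.random.default_rng(1)
traces = np.sin(bias)[None, :] + 0.01 * rng.standard_normal((10, 900))
spectrum = dz_dv(bias, traces)
```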
Computational details
The density functional theory (DFT) calculations were performed using the projector augmented wave (PAW) formalism [41] as implemented in the Vienna ab initio simulation package (VASP) [42,43]. The exchange-correlation effects were taken into account by using the generalized gradient approximation [44]. A 600 eV energy cutoff for the plane waves and a convergence threshold of 10⁻⁷ eV were used. In order to avoid interactions between the cells, a 30 Å thick vacuum slab was added in the direction normal to the germanene sheet. The Brillouin zone was sampled by a (32×32) k-point mesh. The work function was estimated as the difference between the vacuum potential and the Fermi energy.
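For orientation, the stated settings map onto VASP input files roughly as below. This is a hedged sketch: only the tags named in the text are shown, the PBE flavour of the GGA and the Γ-centred 32×32×1 mesh layout are assumptions, and any further tags the authors used are omitted.

```
# INCAR (fragment)
ENCUT = 600       ! plane-wave cutoff in eV
EDIFF = 1e-7      ! electronic convergence threshold in eV
GGA   = PE        ! PBE exchange-correlation (assumed GGA flavour)

# KPOINTS
k-mesh for the germanene cell (assumed Gamma-centred, 1 point normal to sheet)
0
Gamma
32 32 1
```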
Germanene/Ge2Pt and Ge(001)−c(4×2)
In figures 1(a) and (b), scanning tunneling microscopy images, dI/dV and dz/dV spectra of pristine Ge(001)−c(4×2) and of germanene grown on a Ge2Pt crystal are shown. At cryogenic temperatures the dimer rows of Ge(001) are buckled. Adjacent dimers within a dimer row buckle in opposite directions, resulting in zigzag rows (see figure 1(a)). Adjacent dimer rows can buckle in-phase or out-of-phase, resulting in a p(2×2) or c(4×2) reconstruction, respectively. Here we have focussed on regions with a c(4×2) buckling registry, as these regions are more abundant than the p(2×2) regions. In addition, at sample biases exceeding the edge of the conduction band, p(2×2) domains are often converted to c(4×2) domains [38,45]. The nearest-neighbor distances between dimers and between dimer rows are 4 Å and 8 Å, respectively. A structural model of the c(4×2) reconstructed phase is shown in the left panel of figure 1(a). In the middle panel a dI/dV spectrum of Ge(001) is shown. The spectrum shows a bandgap, which is somewhat smaller than the bulk bandgap owing to the presence of surface states in the forbidden zone.
Germanene has a buckled honeycomb structure with a lattice constant slightly larger than 4 Å. The buckled honeycomb lattice is composed of two triangular sub-lattices that are slightly displaced with respect to each other in a direction normal to the germanene sheet (see the structural model in the left panel of figure 1(b)) [39]. Owing to this buckling, one of the triangular sub-lattices shows up more prominently than the other. We should emphasize here that the electric field of the STM can result in a charge transfer from one triangular sub-lattice to the other. The latter leads to a decrease of the spin-orbit bandgap and, for sufficiently large electric fields, even to full closure followed by a reopening of the bandgap [40,46]. The differential conductivity (dI/dV), which is proportional to the density of states, displays a characteristic V-shape for germanene (see figure 1(b), middle panel). This V-shaped density of states is one of the hallmarks of a two-dimensional Dirac material [34][35][36][37]. The germanene is lightly n-doped, as the charge neutrality point is located at negative energy with respect to the Fermi level. In principle one would expect to see a difference between the dI/dV spectra of the two sub-lattices of germanene. Our experiments, however, do not reveal this spatial variation, which we ascribe to the relatively small buckling of only 0.2 Å [39].
In the right panels of figures 1(a) and (b), dz/dV curves are shown for Ge(001) and germanene, respectively. For both surfaces several well-defined oscillations are observed when the applied bias voltage exceeds the work function of the substrate. These oscillations are the IPS resonances in the triangular potential well formed between the substrate and the STM tip. The IPS spectra of Ge(001)−c(4×2) and germanene show some differences: (i) the oscillations of the pristine Ge(001)−c(4×2) substrate occur at different energies than the oscillations of the germanene layer; (ii) a larger number of oscillations is observed for Ge(001)−c(4×2) than for germanene; (iii) the exact shape of the IPS n = 1 peak, observed at around 5 V, has a symmetric and well-defined appearance for Ge(001)−c(4×2), whereas for germanene the IPS n = 1 peak is asymmetric and can be decomposed into two peaks; (iv) at energies smaller than 5 eV, several well-defined spectroscopic peaks occur for germanene, which are absent for Ge(001)−c(4×2). In order to explain these differences, a detailed analysis and comparison of the IPS spectra of germanene and Ge(001)−c(4×2) is presented in the following section.
2D-IPS and work function of Germanene/Ge2Pt and Ge(001)−c(4×2)
The local work function and the IPS oscillations of Ge(001)−c(4×2) and of germanene grown on Ge2Pt crystals were measured using z(V) spectroscopy. When the sample bias coincides with one of the IPS, the z-piezo retracts in order to maintain a constant tunnel current. This is because the IPS correspond to standing electron waves in the triangular-shaped potential well between substrate and tip.
In the dz/dV curves, the IPS appear as successive peaks at energies above the work function (see figure 2). The spectra were measured for different set-point values of the tunnel current (figures 2(a) and (b), left panels). A shift of the IPS peak positions to higher energies with increasing set-point current is a typical feature, caused by the higher electric field in the tunnel junction. It should be noted that the spectral features, i.e. the energies and the shapes of the peaks, are different for Ge(001)−c(4×2) and germanene. This difference can be explained partly by a difference in the work functions [17][18][19][29][30][31] of the two materials.
Moreover, the number of observed IPS peaks is also different for the two materials. This can be ascribed to a different shape of the potential well for the two systems. The energy values E_n of the IPS peaks were obtained by fitting the dz/dV spectra to a series of Lorentzian profiles and are plotted in the middle panels of figures 2(a) and (b). The IPS peaks were fitted with a (n − 1/4)^(2/3) dependence [10],

E_n = ϕ + (ℏ²/2m)^(1/3) (3πeF/2)^(2/3) (n − 1/4)^(2/3),

where ϕ is the work function and F the electric field. In the fit we considered the higher order peaks (n > 1), since these peaks are less influenced by the interaction with the substrate [19]. Accordingly, we obtained an average work function of 4.48 ± 0.20 eV for Ge(001)−c(4×2) and 3.73 ± 0.55 eV for germanene. The work function of Ge(001)−c(4×2) agrees well with available experimental data [47,48]. Since no experimental data is available for free-standing germanene, we have performed density functional theory calculations for germanene. Using a lattice constant of 3.82 Å and a buckling of 0.86 Å [37], a work function of 4.1 eV was found, slightly larger than the value of 3.73 eV extracted from the IPS analysis. The work function values of both materials are plotted in the right panel of figure 2 as a function of the tunnel current set point. The apparent shift of the work function of germanene towards lower values may be related to the strong influence of the electric field on the germanene layer [15,49-51]. The electric field will also affect the shape of the potential well between the germanene layer and the STM tip. Furthermore, striking differences between the IPS spectra of germanene and Ge(001)−c(4×2) are observed at energies below the n = 1 IPS peak (figures 2(a) and (b), left panels). While the weak wrinkles in the curve of Ge(001)−c(4×2) are related to electronic variations in the conduction band, the interpretation of the multiple peak-like features in the spectra of germanene can be explained by two opposing scenarios.

[Figure 3. (a) Schematic of the tunnel junction: the germanene layer is strongly affected by the electric field (dashed arrows), which may slightly lift it from the substrate; this results in a decrease in the number of IPS (horizontal lines between substrate and tip). The spectroscopic features observed at energies below the n = 1 peak (gray horizontal lines below the top germanene layer) refer to the mirror-symmetric IPS with opposite parity (−). (b) dz/dV curve in the range 1 eV to 6 eV measured at 100 pA and fitted to 5 Lorentzian peaks, the last one corresponding to the first n = 1+ IPS. Inset: linear fits of the negative-parity IPS peak energies, measured at different set points, versus (n − 1/4)^(2/3).]
1) The interaction of the germanene layer with the substrate is strong, which can result in the formation of interfacial states [27,29,49,50]. As an example of such a system we refer to graphene/Ru(0001) [29,50]. Moreover, these interfacial states could also affect the first IPS peak: the decomposition of the n = 1 peak into two peaks might be induced by the interfacial states (figure 2(b), left panel).
2) The interaction of the germanene layer with the substrate is very weak. We anticipate that our germanene system is, at least to some extent, similar to graphene on SiC. For graphene on SiC, Bose et al [26] observed the two mirror-symmetric n = 1 peaks for both single layer and bilayer graphene. The energy separation of these two n = 1 peaks is larger for single layer graphene than for bilayer graphene. In fact, the bilayer graphene spectrum is more similar to our germanene spectra than the single layer graphene spectrum. Therefore, it is possible that we are not dealing with a single layer of germanene, but rather with two or more layers of germanene on Ge2Pt. The latter probably also explains why the STS dI/dV spectra reveal a V-shaped density of states. It is very likely that the metallic character of Ge2Pt will destroy the Dirac nature of the first germanene layer, as the important electronic states near the Fermi level of germanene can hybridize with the electronic states of the Ge2Pt substrate. A second germanene layer will be decoupled from the underlying Ge2Pt substrate via an electronically dead germanene buffer layer. In addition, a decoupled layer of germanene will be strongly affected by the electric field in the tunnel junction. The electric field may lift the germanene layer from the substrate, as depicted in figure 3(a), and may result in the observation of the higher order IPS peaks with negative parity. In figure 3(b) a Lorentzian fit of a dz/dV spectrum measured on germanene with 5 peaks is shown. The energy values of the peaks (extracted from the dz/dV curves recorded at different current set points) are plotted in the inset of figure 3(b). Interestingly, all fits obey the Gundlach relation, suggesting that we are dealing with a decoupled germanene layer, i.e. scenario 2.
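A hedged sketch of the work-function extraction used throughout this section is given below: the peak energies are fitted linearly against (n − 1/4)^(2/3), the intercept giving the work function ϕ and the slope the electric field F via the Gundlach relation. The numeric example values are purely illustrative.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E  = 9.1093837015e-31  # kg
E_CH = 1.602176634e-19   # C

def fit_gundlach(n, peak_energies_eV):
    """Fit E_n = phi + (hbar^2/2m)^(1/3) (3*pi*e*F/2)^(2/3) (n - 1/4)^(2/3)
    and return the work function phi (eV) and the electric field F (V/m)."""
    x = (np.asarray(n, dtype=float) - 0.25) ** (2.0 / 3.0)
    slope, phi = np.polyfit(x, np.asarray(peak_energies_eV, dtype=float), 1)
    # slope [J] = (hbar^2/2m)^(1/3) * (3*pi*e*F/2)^(2/3)  ->  invert for F
    slope_J = slope * E_CH
    prefac = (HBAR**2 / (2.0 * M_E)) ** (1.0 / 3.0)
    F = (slope_J / prefac) ** 1.5 * 2.0 / (3.0 * np.pi * E_CH)
    return phi, F

# Illustrative peak energies for the higher-order (n > 1) resonances.
phi, F = fit_gundlach([2, 3, 4, 5], [6.1, 6.9, 7.6, 8.2])
print(f"phi = {phi:.2f} eV, F = {F:.2e} V/m")
```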
Conclusions
We have measured the two-dimensional image potential states of a germanene-coated Ge2Pt crystal and of pristine Ge(001). We have found a work function of about 4.5 eV for Ge(001) and 3.75 eV for germanene, which agree well with available experimental data and density functional theory calculations. A detailed analysis of the peaks at energies below the n = 1 IPS peak provides strong evidence that the germanene layer is decoupled from Ge2Pt, suggesting that the Ge2Pt crystals are coated by more than one germanene layer.
"Physics",
"Materials Science"
] |
Tissue-specific developmental regulation and isoform usage underlie the role of doublesex in sex differentiation and mimicry in Papilio swallowtails
Adaptive phenotypes often arise by rewiring existing developmental networks. Co-option of transcription factors in novel contexts has facilitated the evolution of ecologically important adaptations. doublesex (dsx) governs fundamental sex differentiation during embryonic stages and has been co-opted to regulate diverse secondary sexual dimorphisms during pupal development of holometabolous insects. In Papilio polytes, dsx regulates female-limited mimetic polymorphism, resulting in mimetic and non-mimetic forms. To understand how a critical gene such as dsx regulates novel wing patterns while maintaining its basic function in sex differentiation, we traced its expression through metamorphosis in P. polytes using developmental transcriptome data. We found three key dsx expression peaks: (i) eggs in pre- and post-oviposition stages; (ii) developing wing discs and body in the final larval instar; and (iii) 3-day pupae. We identified potential dsx targets using co-expression and differential expression analysis, and found distinct, non-overlapping sets of genes—containing putative dsx-binding sites—in developing wings versus abdominal tissue and in mimetic versus non-mimetic individuals. This suggests that dsx regulates distinct downstream targets in different tissues and wing colour morphs and has perhaps acquired new, previously unknown targets for regulating mimetic polymorphism. Additionally, we observed that the three female isoforms of dsx were differentially expressed across stages (from eggs to adults) and tissues and differed in their protein structure. This may promote differential protein–protein interactions for each isoform and facilitate sub-functionalization of dsx activity across its isoforms. Our findings suggest that dsx employs tissue-specific downstream effectors and partitions its functions across multiple isoforms to regulate primary and secondary sexual dimorphism through insect development.
Comments to the Author(s)
This manuscript presents a descriptive analysis of gene expression during the development of a mimetic butterfly, Papilio polytes, with a focus on the dsx gene. Dsx is a known regulator of sex-specific development in insects, and has been previously shown to contribute to sex-limited mimicry in Papilio spp. In this report, the authors describe the developmental transcriptome of P. polytes; identify several alternative isoforms of dsx and quantify their expression in different tissues and at different stages of development using RNA-seq and quantitative PCR; use the developmental transcriptome and motif searches to suggest potential downstream targets of dsx; and predict the secondary structure of alternative dsx protein isoforms. Although this analysis lacks an experimental or hypothesis-testing component, it provides resources and sets the stage for future experimental analyses, so it is a useful contribution to the field.
In general, the paper is straightforward, and the data are well described. However, I think several issues require clarification or improvement. Some of these relate to potential overinterpretation of the data.
Issues related to correlation network analysis (throughout the paper). This is a major concern of mine. With so few samples, the module structure inferred by this type of analysis is notoriously sensitive to parameter settings. First, the authors need to report their parameters, and show that the modules they infer are at least somewhat robust to parameter settings. Otherwise, the notion of "dsx-containing module" has little meaning. Second, I strongly suspect that most of the modular structure comes from the use of very different tissues and widely separated developmental stages. In this sense, the different "dsx-containing modules" that the authors report for different tissues/stages may simply reflect differential gene expression between tissues and stages, which mostly has nothing to do with dsx. You have different modules for different tissues/stages, and dsx has to fall out "somewhere" so of course it comes out in different modules in different samples. So perhaps the fact that dsx is associated with different "modules" at different stages/tissues tells you very little about the regulatory relationship between dsx and other genes in these "modules". The Dsx binding motif is fairly simple, so the fact that many putative "target" genes have that motif may also not mean very much. I urge the authors to reexamine their network analysis more carefully, to understand where the network structure is really coming from, and whether the allocation of dsx to particular modules is robust.
Lines 56-57 "It appears as if this critical gene has evolved multiple mechanisms to maintain and govern different morphs even in closely related species" -This statement, while potentially true, is not directly supported by the data presented in this paper. Frankly, this hypothesis seems to be no more likely after this study than it was before.

dsx expression in unfertilized eggs: can the authors please confirm that the eggs were dissected from ovaries in a way that excluded somatic gonad cells? The detection of dsx transcripts by PCR in eggs from unmated females is surprising. This is PCR -even a low amount of contamination from somatic tissues could potentially account for this result.
Lines 134-150. The description of putative dsx targets seemed quite confusing to me. First, what is the evidence for describing dvl-3 or lin as "known dsx targets"? I don't know of any direct evidence for that. Second, Abd-B is a target of dsx in Drosophila; that does not necessarily mean that it's also a dsx target in Lepidopterans. I noticed that Abd-B did not show up in the abdominal "module" (Figure 2). Please be more careful in distinguishing confirmed facts from hypotheses.
Lines 176-228. The model at the end of the paper is highly speculative and rests on very little data. Some critical parts of this model are still only conjectures that remain to be tested by experiments. There's nothing wrong with a light dose of interesting speculation at the end of a paper, but please make clear which parts of the model are more solid, and which are speculative.
Lines 222-223 "Besides Lepidoptera, Coleoptera is the only other order that has two female-specific exons of dsx". Actually, the same is true for cockroaches (Blattella).
Recommendation?
Major revision is needed (please make suggestions in comments)
Comments to the Author(s)
Manuscript #: RSOS-200792 Title: "Tissue-specific developmental regulation and isoform usage underlie the role of doublesex in sex-limited polymorphic mimicry in Papilio swallowtails" Authors: Riddhi Deshmukh et al.
Comments
doublesex (dsx) encodes a transcription factor that generally acts as a master regulator of sexual differentiation in insects. Recent studies have revealed that dsx has diverse pleiotropic effects that govern unique sexually dimorphic traits such as horns in beetles, wings in insects, and mimicry in butterflies.
In this study, the authors focused on the mimetic phenotype observed especially in female wings of Papilio polytes. In this species, dsx regulates female-limited Batesian mimicry. In an attempt to understand how a critical master regulator such as dsx controls a novel adaptive phenotype like Batesian mimicry while maintaining its inherent function in sexual differentiation, the authors performed a developmental transcriptome analysis to identify potential targets of dsx through metamorphosis.
Overall, this is a nicely done study, and several interesting observations are presented and discussed. But I think that this study lacks several pieces of data essential for conclusions such as "Isoforms F2 and F3 contributed to most of the dsx expression in mimetic wings" and "elevated expression of dsx isoforms F2 and F3 might be sufficient to give rise to mimetic phenotype". Also, it is unclear which of the findings in this study are novel as compared with previously reported findings. Therefore, this manuscript is not suitable for publication in its current form.
Major requirements for revision 1. Figure 1A. If the authors want to argue that the higher expression of dsx in the forewings and the hindwings is closely related to the mimetic features, then they should compare the expression profile of dsx between mimetic females and non-mimetic females. Why did the authors compare it between mimetic females and non-mimetic "males"? Such a comparison will simply reveal the difference in dsx expression between females and males, which is not directly involved in the mimetic features observed in wings.
2. Figure 2. As pointed out above, if the purpose of this study is to identify the putative targets of dsx that may be related to the mimetic phenotype, then the authors should compare the transcriptome data between mimetic females and non-mimetic females. The data presented in Figure 2 do not rule out the possibility that they may simply reflect sexual differences, because the data were based on a transcriptomic comparison between females and males.
3. Lines 203-207. The authors state that isoforms F2 and F3 contribute to most of the dsx expression in mimetic wings and that elevated expression of dsx isoforms F2 and F3 might be sufficient to give rise to the mimetic phenotype. However, again, if the authors want to make this claim, then they should perform a comparative analysis between mimetic females and non-mimetic females.
Decision letter (RSOS-200792.R0)
We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Dr Kunte,
The editors assigned to your paper ("Tissue-specific developmental regulation and isoform usage underlie the role of doublesex in sex-limited polymorphic mimicry in Papilio swallowtails") have now received comments from reviewers.
Both reviewers raise significant concerns and a number of points that will require careful consideration. We would like you to revise your paper in accordance with the referee and Associate Editor suggestions, which can be found below (not including confidential reports to the Editor). Please note this decision does not guarantee eventual acceptance.
Please submit a copy of your revised paper before 23-Jul-2020. Please note that the revision deadline will expire at 00.00am on this date. If we do not hear from you within this time then it will be assumed that the paper has been withdrawn. In exceptional circumstances, extensions may be possible if agreed with the Editorial Office in advance. We do not allow multiple rounds of revision so we urge you to make every effort to fully address all of the comments at this stage. If deemed necessary by the Editors, your manuscript will be sent back to one or more of the original reviewers for assessment. If the original reviewers are not available, we may invite new reviewers.
To revise your manuscript, log into http://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision. Revise your manuscript and upload a new version through your Author Centre.
When submitting your revised manuscript, you must respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". Please use this to document how you have responded to the comments, and the adjustments you have made. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response.
In addition to addressing all of the reviewers' and editor's comments please also ensure that your revised manuscript contains the following sections as appropriate before the reference list: • Ethics statement (if applicable) If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.
• Data accessibility It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible of where to access other relevant research materials such as statistical tools, protocols, software etc can be accessed. If the data have been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that have been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.
If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-200792 • Competing interests Please declare any financial or non-financial competing interests, or state that you have no competing interests.
• Authors' contributions All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of Authors should meet all of the following criteria; 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published.
All contributors who do not meet all of these criteria should be included in the acknowledgements.
We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
• Acknowledgements Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.
• Funding statement Please list the source of funding for each author.
Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.

Your manuscript has now received two expert reviews. You will see that they both have concerns, some of which are major. In particular, you should address, if possible, the following points: i) ensure that the expression profile comparisons are appropriate; ii) re-examine your network analysis more carefully; iii) make sure there is no over-interpretation of the data and clarify its novelty. If you think you are able to address these concerns, your manuscript will be reconsidered by the external reviewers. This is not a provisional guarantee that your manuscript will be found acceptable for publication.
Comments to Author:
Reviewers' Comments to Author: Reviewer: 1 Comments to the Author(s) This manuscript presents a descriptive analysis of gene expression during the development of a mimetic butterfly, Papilio polytes, with a focus on the dsx gene. Dsx is a known regulator of sexspecific development in insects, and has been previously shown to contribute to sex-limited mimicry in Papilio spp. In this report, the authors describe the developmental transcriptome of P. polytes; identify several alternative isoforms of dsx and quantify their expression in different tissues and at different stages of development using RNA-seq and quantitative PCR; use the developmental transcriptome and motif searches to suggest potential downstream targets of dsx; and predict the secondary structure of alternative dsx protein isoforms. Although this analysis lacks an experimental or hypothesis-testing component, it provides resources and sets the stage for future experimental analyses, so it is a useful contribution to the field.
In general, the paper is straightforward, and the data are well described. However, I think several issues require clarification or improvement. Some of these relate to potential overinterpretation of the data.
Issues related to correlation network analysis (throughout the paper). This is a major concern of mine. With so few samples, the module structure inferred by this type of analysis is notoriously sensitive to parameter settings. First, the authors need to report their parameters, and show that the modules they infer are at least somewhat robust to parameter settings. Otherwise, the notion of "dsx-containing module" has little meaning. Second, I strongly suspect that most of the modular structure comes from the use of very different tissues and widely separated developmental stages. In this sense, the different "dsx-containing modules" that the authors report for different tissues/stages may simply reflect differential gene expression between tissues and stages, which mostly has nothing to do with dsx. You have different modules for different tissues/stages, and dsx has to fall out "somewhere" so of course it comes out in different modules in different samples. So perhaps the fact that dsx is associated with different "modules" at different stages/tissues tells you very little about the regulatory relationship between dsx and other genes in these "modules". The Dsx binding motif is fairly simple, so the fact that many putative "target" genes have that motif may also not mean very much. I urge the authors to reexamine their network analysis more carefully, to understand where the network structure is really coming from, and whether the allocation of dsx to particular modules is robust.
Lines 56-57 "It appears as if this critical gene has evolved multiple mechanisms to maintain and govern different morphs even in closely related species" -This statement, while potentially true, is not directly supported by the data presented in this paper. Frankly, this hypothesis seems to be no more likely after this study than it was before.
dsx expression in unfertilized eggs: can the authors please confirm that the eggs were dissected from ovaries in a way that excluded somatic gonad cells? The detection of dsx transcripts by PCR in eggs from unmated females is surprising. This is PCR; even a low amount of contamination from somatic tissues could potentially account for this result.
Lines 134-150. The description of putative dsx targets seemed quite confusing to me. First, what is the evidence for describing dvl-3 or lin as "known dsx targets"? I don't know of any direct evidence for that. Second, Abd-B is a target of dsx in Drosophila; that does not necessarily mean that it's also a dsx target in Lepidopterans. I noticed that Abd-B did not show up in the abdominal "module" (Figure 2). Please be more careful in distinguishing confirmed facts from hypotheses.
Lines 176-228. The model at the end of the paper is highly speculative and rests on very little data. Some critical parts of this model are still only conjectures that remain to be tested by experiments. There's nothing wrong with a light dose of interesting speculation at the end of a paper, but please make clear which parts of the model are more solid, and which are speculative.
Lines 222-223 "Besides Lepidoptera, Coleoptera is the only other order that has two female-specific exons of dsx". Actually, the same is true for cockroaches (Blattella).
Reviewer: 2 Comments to the Author(s)
doublesex (dsx) encodes a transcription factor that generally acts as a master regulator for sexual differentiation in insects. Recent studies revealed that dsx has diverse pleiotropic effects to govern unique sexually dimorphic traits such as horns in beetles, wings in insects, and mimicry in butterflies.
In this study, the authors focused on the mimetic phenotype especially observed in female wings in Papilio polytes. In this species, dsx regulates female-limited Batesian mimicry. In an attempt to understand how a critical master regulator, dsx, controls a novel adaptive phenotype such as Batesian mimicry while maintaining its inherent function of sexual differentiation, the authors performed a developmental transcriptome analysis to identify potential targets of dsx through metamorphosis.
Overall, this is a nicely done study, and several interesting observations are presented and discussed. But I think that this study lacks several data essential for their conclusions, like "Isoforms F2 and F3 contributed to most of the dsx expression in mimetic wings" and "elevated expression of dsx isoforms F2 and F3 might be sufficient to give rise to mimetic phenotype". Also, it is unclear which of the findings in this study show novelty as compared with the previously reported findings. Therefore, this manuscript is not suitable for publication in this current form.
Major requirements for revision

1. Figure 1A. If the authors want to argue that the higher expression of dsx in the forewings and the hindwings is closely related to the mimetic features, then they should compare the expression profile of dsx between mimetic females and non-mimetic females. Why did the authors compare it between mimetic females and non-mimetic "males"? Such a comparison will simply reveal the difference in dsx expression between females and males, which is not directly involved in the mimetic features observed in wings.
2. Figure 2. As pointed out above, if the purpose of this study is to identify the putative targets of dsx that may be related to the mimetic phenotype, then the authors should compare the transcriptome data between mimetic females and non-mimetic females. The data presented in Figure 2 do not rule out the possibility that they may simply reflect sexual difference, because the data were based on the transcriptomic comparison between females and males.
3. Lines 203-207. The authors said that isoforms F2 and F3 contribute to most of dsx expression in mimetic wings and that elevated expression of dsx isoforms F2 and F3 might be sufficient to give rise to mimetic phenotype. However, again, if the authors want to say so, then they should perform comparative analysis between mimetic females and non-mimetic females.
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? No
Recommendation?
Accept as is
Comments to the Author(s)
The revised manuscript is now suitable for publication. I am satisfied with the authors' responses and plausible explanations.
Decision letter (RSOS-200792.R1)
We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Dr Kunte,
It is a pleasure to accept your manuscript entitled "Tissue-specific developmental regulation and isoform usage underlie the role of doublesex in sex differentiation and mimicry in Papilio swallowtails" in its current form for publication in Royal Society Open Science. The comments of the reviewer(s) who reviewed your manuscript are included at the foot of this letter.
You can expect to receive a proof of your article in the near future. Please contact the editorial office <EMAIL_ADDRESS> and the production office <EMAIL_ADDRESS> to let us know if you are likely to be away from e-mail contact -- if you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal.
Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication.
Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/.

Thank you for considering our manuscript. The comments provided by the Associate Editor and the reviewers have helped in improving the manuscript significantly. We have addressed nearly all the concerns in the revised manuscript (changes marked using Track Changes), and our responses to the reviewer comments are below.
ASSOCIATE EDITOR'S COMMENTS:
"In particular, you should address, if possible, the following points: i) ensure that the expression profile comparisons are appropriate; ii) re-examine your network analysis more carefully; iii) make sure there is no over-interpretation of the data and clarify its novelty. If you think you are able to address these concerns, your manuscript will be re-considered by the external reviewers. This is not a provisional guarantee that your manuscript will be found acceptable for publication."
Response:
We have now clarified the original contribution of our work and the advances that we offer to the field in the last two paragraphs of Introduction, and in the last paragraph of Results and Discussion. We have addressed the remaining points below in our responses to reviewer comments.
REVIEWER: 1
"Issues related to correlation network analysis (throughout the paper). This is a major concern of mine. With so few samples, the module structure inferred by this type of analysis is notoriously sensitive to parameter settings. First, the authors need to report their parameters, and show that the modules they infer are at least somewhat robust to parameter settings. Otherwise, the notion of "dsx-containing module" has little meaning. Second, I strongly suspect that most of the modular structure comes from the use of very different tissues and widely separated developmental stages. In this sense, the different "dsx-containing modules" that the authors report for different tissues/stages may simply reflect differential gene expression between tissues and stages, which mostly has nothing to do with dsx. You have different modules for different tissues/stages, and dsx has to fall out "somewhere" so of course it comes out in different modules in different samples. So perhaps the fact that dsx is associated with different "modules" at different stages/tissues tells you very little about the regulatory relationship between dsx and other genes in these "modules". The Dsx binding motif is fairly simple, so the fact that many putative "target" genes have that motif may also not mean very much. I urge the authors to reexamine their network analysis more carefully, to understand where the network structure is really coming from, and whether the allocation of dsx to particular modules is robust." Response: Our co-expression analysis with WGCNA followed the standard protocol (also provided as a tutorial with the R package), mostly using default parameters in addition to others that were calculated as a part of this analysis. In systems where we cannot predict how the data might behave, the authors of the WGCNA package recommend using default parameters as they work well across a wide range of experiments. We have explained some of these steps in the methods section (lines 101-105). We separated data for the wing and abdominal tissues across stages prior to the co-expression analysis and performed a WGCNA run for each combination of phenotype and tissue. We compared the coexpressed genes after completing the four runs. While we agree there would be some effect of the stage on the expression of genes, the tissues were the same in each case. We re-checked expression patterns of all the genes reported in Fig. 2 and their correlation with dsx expression in the respective tissues (the correlation coefficients for co-expressed genes in each comparison have been added to supplementary Table S4), and modified the table in Fig. 2 as well. Most of the genes represented in that table now show strong correlations with dsx expression (>0.75, Pearson correlation coefficient). The genes relevant in mimetic wings show a peak in 3-day pupal wings compared to other stages. Non-mimetic wings and abdomen had low expression of dsx to begin with and the co-expression profiles here may not mean much, as we discuss in the paper (lines 171-173). However, some genes that came up as relevant in the non-mimetic wings showed wing-specific expression irrespective of sex, which we have retained, as they may perform a more generic function related to wing development, we have also mentioned this in the main text (lines 173-175). 
"Lines 56-57 "It appears as if this critical gene has evolved multiple mechanisms to maintain and govern different morphs even in closely related species" -This statement, while potentially true, is not directly supported by the data presented in this paper. Frankly, this hypothesis seems to be no more likely after this study than it was before." Response: This statement mainly referred to the genetic basis of mimicry in Papilio memnon and Papilio polytes. While dsx regulates mimicry in both these species, its genetic architecture differs in the closely related species and this may reflect in its molecular mechanism of developmental regulation as well. Our intention was to imply that close examination of dsx activity at the molecular level might help us understand how this gene regulates such diverse phenotypes in different species. We have clarified this statement in the main text (lines 57-66). dsx expression in unfertilized eggs: can the authors please confirm that the eggs were dissected from ovaries in a way that excluded somatic gonad cells? The detection of dsx transcripts by PCR in eggs from unmated females is surprising. This is PCReven a low amount of contamination from somatic tissues could potentially account for this result.
Response: While sampling the unfertilized eggs from unmated females, we tried our best to remove all the tissue from around the eggs as closely as possible. We have now mentioned this in the methods (lines 81-82).
Lines 134-150. The description of putative dsx targets seemed quite confusing to me. First, what is the evidence for describing dvl-3 or lin as "known dsx targets"? I don't know of any direct evidence for that. Second, Abd-B is a target of dsx in Drosophila; that does not necessarily mean that it's also a dsx target in Lepidopterans. I noticed that Abd-B did not show up in the abdominal "module" (Figure 2). Please be more careful in distinguishing confirmed facts from hypotheses.
Response:
We apologize for the lack of clarity in that section. We were referring to the relevance of Abd-B and the Wnt pathway to dsx activity, and that our results showed links to these pathways. We rigorously scrutinized all our WGCNA hits and their correlations with dsx expression. Our re-examination of WGCNA results cast some doubt on lin as a suitable candidate because, despite its wing-specific expression, it did not show a peak at the 3-day pupal stage similar to dsx. We have modified the text accordingly. At the same time, we now highlight osa-like as an important candidate that showed female-biased expression in 3-day pupae and high correlation with dsx expression in wings, and which governs genes involved in wing patterning. This gene was earlier included in Fig. 2 as a key candidate, but we had not highlighted it in the text. We have modified lines 160-166 to accommodate this. We also acknowledge that ChIP-Seq and co-IP would be the best way forward to identify physical targets of dsx. The absence of Abd-B and Abd-A in the abdomen samples indicates that the expression of these two genes was not correlated with that of dsx in the abdomen. They might have clustered with other abdomen-specific genes in a separate module.
"Lines 176-228. The model at the end of the paper is highly speculative and rests on very little data. Some critical parts of this model are still only conjectures that remain to be tested by experiments. There's nothing wrong with a light does of interesting speculation at the end of a paper, but please make clear which parts of the model are more solid, and which are speculative."
Response:
We have added lines 217-222, and modified other parts of this section along with the figure legend of Fig 4 to clarify which aspects of panels 4A and 4B are based on our results and which aspects of the model need to be tested further.
Lines 22-223 "Besides Lepidoptera, Coleoptera is the only other order that has two female-specific exons of dsx". Actually, the same is true for cockroaches (Blatella).
Response:
Thank you for bringing this to our notice. We have modified that statement in lines 277-279 and added a citation for the same.

Response:

Normalized counts refer to the number of reads aligning to a gene after accounting for library size and composition for all the samples in the dataset. Due to this, normalized counts do not have units. We believe that this is a common practice.

"Figure 3 title: you are really describing isoform expression here, not "activity""

Response:

Thank you for bringing this to our notice. We have modified the title. We have also added an updated version of Fig. 3 in the revision.
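The definition of normalized counts in the response above can be made concrete with a median-of-ratios calculation of the kind used by DESeq2-style tools; we cannot tell from the letter exactly which normalization was used, so this Python sketch, with hypothetical names, is illustrative only.

```python
import numpy as np

def median_of_ratios_size_factors(counts):
    """Per-sample size factors from a raw count matrix (genes x samples).

    Each sample's factor is the median ratio of its counts to the per-gene
    geometric mean across samples; genes with any zero count are excluded.
    Dividing raw counts by these factors accounts for library size and
    composition, which is why normalized counts carry no units.
    """
    log_counts = np.log(counts.astype(float))          # zeros become -inf
    log_geo_means = log_counts.mean(axis=1)            # per-gene log geometric mean
    finite = np.isfinite(log_geo_means)                # drop genes with zeros
    log_ratios = log_counts[finite] - log_geo_means[finite, None]
    return np.exp(np.median(log_ratios, axis=0))       # one factor per sample

def normalize_counts(counts):
    return counts / median_of_ratios_size_factors(counts)
```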
REVIEWER: 2
"… But I think that this study lacks several data essential for their conclusion like "Isoforms F2 and F3 contributed to most of the dsx expression in mimetic wings" and "elevated expression of dsx isoforms F2 and F3 might be sufficient to give rise to mimetic phenotype. Also, it is unclear what of the findings in this study show novelty as compared with the previously reported findings. Therefore, this manuscript is not suitable for publication in this current form.
"1. Figure 1A. If the authors want to argue that the higher expression of dsx in the forewings and the hindwings is closely related to the mimetic features, then they should compare the expression profile of dsx between mimetic females and non-mimetic females. Why did the authors compare it between mimetic females and non-mimetic "males"? Such a comparison will simply reveal the difference in dsx expression between females and males, which is not directly involved in the mimetic features observed in wings."

Response:

Males and non-mimetic females have similar wing pattern and phenotype; therefore, male- or female-specific dsx isoforms are not altering the non-mimetic wing pattern in a sex-specific manner. While previous work (Kunte et al., 2014 and Nishikawa et al., 2015) has compared dsx expression in mimetic and non-mimetic females, we do not fully understand the role of dsx, if any, in wing patterning in non-mimetic individuals. Our motivation behind the use of males (instead of non-mimetic females) was that it might help us understand the role of dsx in wing development in the absence of mimicry, and that we could compare genes between males and mimetic females to screen mimicry-specific candidates. At the same time, we were unable to obtain samples for non-mimetic females across developmental stages, despite several attempts to establish pure-breeding non-mimetic lines in the lab during sampling for this work (this form is relatively uncommon in India).
2. Figure 2. As pointed out above, if the purpose of this study is to identify the putative targets of dsx that may be related to the mimetic phenotype, then the authors should compare the transcriptome data between mimetic females and non-mimetic females. The data presented in Figure 2 do not rule out the possibility that they may simply reflect sexual difference, because the data were based on the transcriptomic comparison between females and males.
Response:
We agree that the data in Figure 2 might represent some sex-specific candidates; however, since males and non-mimetic females share wing phenotypes, and possibly the underlying genetic network responsible for those phenotypes, it is still a useful comparison to make to find phenotype-specific wing development candidates irrespective of sex. We have modified the relevant text to reflect this limitation.
Response (to point 3):
We were able to compare isoforms between mimetic and non-mimetic females at the 3-day pupal stage and observed the same results, with very little expression of F1 and mostly F2 and F3 contributing to dsx expression. However, since we were unable to obtain samples for non-mimetic females for other stages, we did not show this data previously. We have now modified Fig. 3 and added a panel showing this comparison. We have also modified that statement to avoid drawing firm conclusions solely based on qPCR and expression data.
A mid-latitude stratosphere dynamical index for attribution of stratospheric variability and improved ozone and temperature trend analysis
We find that wintertime temperature anomalies near 4 hPa and 50°N/S are related, through dynamics, to anomalies in ozone and temperature, particularly in the tropical stratosphere, but also throughout the upper stratosphere and mesosphere. These mid-latitude anomalies occur on timescales of up to a month, and are related to changes in wave-forcing. A change in the meridional circulation extends from the middle stratosphere into the mesosphere and forms a temperature-change quadrupole from equator to pole. We develop a dynamical index based on detrended, deseasonalised mid-latitude temperature. When employed in multiple linear regression, this index can account for up to 40% of the total variability of temperature and ozone and a doubling of the total coefficient of determination in the equatorial stratosphere above 20 hPa. Further, the uncertainty on all multiple-linear regression coefficients can be reduced by up to 45% and 25% in temperature and ozone, respectively, and so this index is an important tool for quantifying current and future ozone recovery.
We thank both referees for very helpful guidance and suggestions on the background, justification and statistical analysis performed. This has led to significant improvements in the quality of the manuscript (which are already apparent in our current revisions and in our responses below). Major changes in the revised manuscript have been highlighted in bold text.
We begin by highlighting major updates that both reviewers should be aware of:

i) Figures 10, 11, 12 and 13 have been updated to reflect the use of AR1 instead of AR0. Figure 13 in particular has been completely changed, and numbers added, to clarify the error improvement and a change in the mean values. Figure 12 has been updated to reflect a similar style and information as provided in Figure 11.

ii) The point made in the manuscript about no aliasing between regressors being shown by the relative-importance plots has been modified. Due to the use of AR1, for temperature, there is a redistribution of the relative importance from the original regressors to the new index, in addition to the increase in total variance accounted for. However, the fact that it does not change the mean value of the regression coefficients in the trend still supports the claim that it does not alias the derived signal. Modifications to the text in the manuscript have been made to reflect this change, and discussion on this point has been added.

iii) A significant amount of background has been added to the introduction.

iv) We have renamed the MLSD index to the Upper-branch Brewer Dobson Circulation (UBDC) index, to reflect a more direct interpretation of what it represents, and so that it is more easily understood; this changes the title of the manuscript.
The paper's main finding is coherence in the variability of stratospheric temperature and ozone in the tropics and extratropics, and in the upper stratosphere and lower mesosphere. The authors attribute this coherence to dynamics, specifically to the stratospheric meridional (Brewer-Dobson) circulation, and propose that an index accounting for dynamical effects could be used in multiple regression analysis as an additional regressor. They further build such an index using extratropical upper-stratospheric temperatures and demonstrate that the index explains a considerable fraction of variability in stratospheric ozone and temperatures. Although the authors present an interesting analysis, they still have to show how their analysis is related to previous research and highlight novel results. The use of regressors accounting for dynamical effects has been discussed in previous WMO Ozone Assessments, including their pros and cons. I believe that a more thorough discussion of issues associated with the use of dynamical proxies, as well as the relation of the current analysis to previous studies, is needed before possible publication in ACP. Please see my specific comments below.
Major comments

1. Various dynamical proxies have been used in the past to explain stratospheric variability related to dynamics; see examples in Weiss et al. 2001; Brunner et al. 2006; Mäder et al. 2007; Wohltmann et al. 2005; 2007 and references therein. While a considerable fraction of variability in both ozone and temperatures can indeed be explained by these proxies, this benefit comes at the cost of attributing variability to processes which are themselves dependent on the variables to be explained (wave propagation depends on the mean state of the stratosphere), i.e. one mixes cause and effect. I suggest that these issues should be discussed in the manuscript. Relevant discussion regarding the use of dynamical proxies for attributing ozone variability can be found in Chapter 2 of the WMO Ozone Assessment 2011 (Sections 2.1.2 and 2.4).
We agree, and we appreciate the useful set of references, which has led to an expansion of the discussion in the manuscript. Further, the new background material further highlights the need for such a dynamical proxy, especially in the equatorial region, since previous studies have focused on BDC proxies that operate on inter-annual timescales and longer, while the monthly and shorter timescales, although often briefly mentioned, are usually ignored (except, e.g., Chandra et al., 1986, as mentioned by the second reviewer). Previous studies have also focused on total ozone column, mid-to-high latitudes, and the mid-to-lower stratosphere. This leads to the clear conclusion that, while not a new concept, the development of a proxy accounting for noise-like dynamical events in the upper stratosphere and mesosphere is necessary, and that its application and focus on the equatorial region is new.

References:
1. Brunner, D., J. Staehelin, J. A. Maeder, I. Wohltmann, and G. E. Bodeker, Variability and trends in total and vertically resolved stratospheric ozone based on the CATO ozone data set, Atmos. Chem. Phys., 6(12), 4985-5008, doi:10.5194/acp-6-4985-2006, 2006.
2. Mäder, J. A., J. Staehelin, D. Brunner, W. A. Stahel, I. Wohltmann, and T. Peter, Statistical modeling of total ozone: Selection of appropriate explanatory variables, J. Geophys. Res., 112, D11108, doi:10.1029/2006JD007694, 2007.
3. Wohltmann, I., M. Rex, D. Brunner, and J. Mäder (2005), Integrated equivalent latitude as a proxy for dynamical changes in ozone column, Geophys. Res. Lett., 32, L09811, doi:10.1029/2005GL022497.
4. Wohltmann, I., R. Lehmann, M. Rex, D. Brunner, and J. Mäder, A process-oriented regression model for column ozone, J. Geophys. Res., 112, D12304, doi:10.1029/2006JD007573, 2007.
5. Weiss, A. K., J. Staehelin, C. Appenzeller, and N. R. P. Harris (2001), Chemical and dynamical contributions to ozone profile trends of the Payerne (Switzerland) balloon soundings, J. Geophys. Res., 106(D19), 22685-22694, doi:10.1029/2000JD000106.
6. WMO: Scientific Assessment of Ozone Depletion: 2010, Global Ozone Research and Monitoring Project, 52, 516, 2011.

2. There are also problems with using temperature as a proxy representing extratropical wave dynamics. Stratospheric temperature is controlled by a number of processes, such as horizontal and vertical advection and diabatic heating, and not all variability is necessarily directly attributable to extratropical wave forcing. Constructing an index by maximizing correlation, as is done in this study, also maximizes the risk of mixing statistical noise with physical processes. That is why using proxies more directly related to wave activity could be a better choice. While I agree that wave activity proxies such as EP-flux divergence are difficult to calculate, one can try, for example, heat flux evaluated at 100 hPa (e.g. Newman et al. 2001), which is quite easy to calculate.
Again, we agree with this assessment, and indeed we will make it clear in the introduction that we are mixing in physical processes from different specific sources (see previous response). Further, it would make sense that if wave-driving from the troposphere at mid-latitudes is one of the main drivers of variance in the equatorial upper stratosphere, then the use of EP-flux divergence (EPFD), or the heat flux at 100 hPa, would represent a more physical proxy. We note, however, that while Newman et al. (2001) were successful in representing short-term dynamical fluctuations in the stratosphere with EPFD, they did not investigate effects above 10 hPa. During the analysis of our original manuscript, we investigated the relationship between 100 hPa heat flux and equatorial ozone and temperature in the upper stratosphere, but were unable to find any clear agreement, even if we considered a lag to account for the time waves take to propagate into the stratosphere and force a response. We revisited this following the reviewer's suggestion (above) and considered the correlation between indices of the NAO, AAO, ENSO, QBO, and 100 hPa heat flux (v'T') averaged between 60-90 S, 60-90 N, 45-75 N and 45-75 S. We further divided months out to consider Dec-Feb and Nov-Apr for the northern hemisphere, and Jun-Aug and May-Oct for the southern hemisphere, using either the original timeseries or detrended and deseasonalised versions. We compared all of these cases with the MLSD index, which has high agreement locally in the respective hemisphere and with the equatorial upper stratosphere and mesosphere. Considering just the R^2 values (i.e. correlation coefficient squared), we found agreement exceeding 0.15 in only three cases: for DJF 60-90 N after deseasonalising and detrending the data, and 0.18 and 0.19 for DJF 45-75 N with and without deseasonalising and detrending (the third value shown is also similar for a 1-month lag); there was nothing clear in the southern hemisphere, with values all close to zero. This indeed suggests that there is possibly some relationship between heat flux at 100 hPa and the MLSD index, though we concede this was a simplistic set of checks. However, the three results showing some coherence with the MLSD index account for very little of the variance we see in temperature above 10 hPa. The source of the variance warrants further investigation beyond this manuscript, as our analysis clearly shows a relationship between changes in temperature in the upper stratosphere and mesosphere and what appears to be a wave-forcing-like response in the EPFD and stream functions (i.e. Figs 5 and 6).
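The checks described in this response amount to a lagged R^2 screen between candidate proxies and the index. A simplified Python sketch is below (month-subset selection and area-weighted averaging of v'T' are omitted; array names are illustrative).

```python
import numpy as np

def lagged_r2(proxy, index, max_lag=3):
    """R^2 between a candidate proxy (e.g. 100 hPa heat flux averaged over
    a latitude band) and the dynamical index, with the proxy leading the
    index by 0..max_lag months. Both are monthly anomaly series of equal
    length.
    """
    assert len(proxy) == len(index)
    n = len(proxy)
    out = {}
    for lag in range(max_lag + 1):
        r = np.corrcoef(proxy[: n - lag], index[lag:])[0, 1]
        out[lag] = r ** 2
    return out
```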
We have expanded this section into a separate discussion of equatorial and higher-latitude MLR analysis, which, e.g., includes other proxies such as the AO and NAO; but also see our response to Major Comment 1.
2. P2L118: I believe there are older references which show the influence of dynamics on stratospheric ozone, e.g. Fusco and Salby 1999 and references therein.
We have included this reference in addition to others, and those already mentioned above.
Reference: Fusco, A. C. and Salby, M. L.: Interannual variations of total ozone and their relationship to variations of planetary wave activity, J. Clim., 12, 1619-1629, 1999.

3. P2L22-23: Please note that acceleration of the BD circulation leads not only to an increase of ozone in the extratropics but also to a decrease in the tropics; thus it is more correct to say that ozone is redistributed, not just increased.
We suggest this can be made clear with the addition of the following bold text: "The increase in ozone at mid-latitudes comes partly from ODSs reductions, but also because the BDC is expected to accelerate (Garcia and Randel, 2008; Butchart, 2014), which will reduce the time for ozone depletion to occur and lead to faster transport of ozone from the equatorial region to higher latitudes. This in turn leads to a reduction of ozone over the equator and a prevention of a full recovery over the tropics. Thus, the recovery of ozone at mid-to-high latitudes can be understood as being partly due to less ozone destruction by lower ODS concentrations, and partly due to a faster redistribution of ozone-rich air from the tropics."

4. P3L27-28: I think smoothing removes short-term variability, not long-term. Please rewrite.
The sentence was incorrectly formulated; it should make clear that we remove the smoothed time series from the original. The following formulation should be clearer:
"…we remove all long-term variability by subtracting a timeseries that has been smoothed, with a 13-month running mean, and then deseasonalised, with monthly values, at each latitude and pressure." 5. P8L4-6: Please see Major Comment 2. I think some caution is needed when using stratospheric temperature as proxy for dynamics.
See response to major comments 1 and 2 and revisions of the text.
6. P9L6: 'Verses' -> 'versus'

Done.

7. Figure 10: The difference in Fig. 10b between regression results from GOZCARDS and SWOOSH on the one hand and SBUV on the other hand is interesting. It appears that dynamical variability in GOZCARDS and SWOOSH is represented by the other proxies because, after addition of the dynamical proxy, the explained variability changes only a little in these data sets, and the total explained variability is quite similar in all four data sets. Do you think this is a purely statistical effect or may it be related to the way these data sets are compiled? (Sorry, I am not familiar with these data sets.)

This is an entirely valid, and interesting, question. As you suggest, we believe (and have evidence to support) that the answer resides in the way the datasets are compiled. Indeed, looking at individual time series, it becomes clear that the earlier periods in GOZCARDS and SWOOSH at these altitudes contain high-variance fluctuations that look more related to the datasets used themselves than to real variability; GOZCARDS and SWOOSH use a similar source of data (SAGE II) for this period. Given the variance is of the order of that in the MLSD index, it is likely (and this is a postulation) that the reduced improvement during this period is due to these high-variance artefacts. We are tackling this problem and are due to submit an article relating to this soon.
8. Figure 11: I am puzzled by why the annual R2 for the w/ MLSD regression in the middle panel is larger than any seasonal one. The result from the w/o regression, where the annual R2 looks like the mean of the seasonal results, looks more logical, does it not?
As you correctly identified, there was a mistake in how anomalies were dealt with in the regression routine. This has been corrected, and indeed the results now appear more logical.

9. Captions to Figure 11: What is the distribution peak? Is it the mode?
The peak is, more precisely, the median and we have added this to the description.
L. Hood (Referee) Received and published: 2 August 2016
Overall, this is a useful effort to improve statistical estimation of stratospheric ozone and temperature trends and interannual variability by accounting for a source of short-term (month-to-month) dynamical variability in tropical stratospheric data sets. The presentation is excellent and the figures are state-of-the-art. However, the value of the adopted technique for trend estimation and its ability to "explain" a larger fraction of the variance in the observations is somewhat overstated, in my opinion. Some important revisions are needed prior to publication.
Main comments:
(1) A major claim of the paper is that inclusion of the mid-latitude stratosphere dynamical (MLSD) index can reduce the uncertainty "on all multiple linear regression coefficients ... up to 45% and 25% in temperature and ozone, respectively." First of all, the accuracy of these reduction estimates is questionable because, as mentioned on p. 11, line 12, "we do not consider use of any autoregressive modeling." In other words, serial correlation (autocorrelation) of the residuals of the MLR analysis is not accounted for. It is possible that serial correlation of the monthly residuals is increased when the MLSD index is used because the month-to-month variability is reduced. Have the authors tested whether this is the case? Accounting for any increased serial correlation would increase the uncertainty estimates. For example, application of a "pre-whitening" technique (e.g., Tiao et al. [1990]; Garny et al. [2007]) would ensure that the residuals are approximately white noise, thereby yielding more reliable uncertainty estimates. Please re-do the analysis in this manner to provide such a test and yield more accurate (larger) uncertainty estimates.
These are indeed important points. Serial correlation is important, and you are correct that accounting for it does indeed inflate the error bars. However, it does not change the main result or the usefulness of the index. To make this point clear we have produced a new plot, which we include in the paper, to emphasize that auto-regression will have an effect and should be considered; this new plot replaces the original Figure 13. The plot is shown below: AR0 (blue) and AR1 (yellow) are shown for cases with (thick lines) and without (thin) the MLSD index for SWOOSH (ozone, left) and SSU (temperature, right). We show the percentage change (in the respective colours) at heights where we see the largest changes. In both cases, we still have a maximum improvement of up to 30% in the errors. Not shown here, but to be included in the final manuscript, is that the index now increases R^2 from a maximum increase of 40% (Fig. 10) to nearly 60% in temperature, and between 30 and 55% for ozone (depending on the dataset used). In the 1998-2012 period, ozone error improvements are essentially unaffected at a 25% reduction in uncertainty, but the earlier 1983-1997 period is affected, on average reducing uncertainties by around 10% to a maximum of around 15%; some regions show a small increase in error, but this likely reflects the fact that the datasets show different variability on all timescales (see our response to the question from the other reviewer on this point). We will update the manuscript to reflect this.
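To make the AR(1) treatment concrete, a Cochrane-Orcutt-style pre-whitening of the MLR fit, in the spirit of the technique the referee cites (Tiao et al., 1990), can be sketched as below. This is an illustrative Python fragment, not our exact implementation; the names and the fixed iteration count are ours.

```python
import numpy as np

def ar1_prewhitened_ols(X, y, n_iter=5):
    """OLS with Cochrane-Orcutt AR(1) pre-whitening.

    X: design matrix (months x regressors, constant and trend included);
    y: monthly anomaly series. The lag-1 autocorrelation phi of the
    residuals is estimated and both sides are quasi-differenced, so the
    returned standard errors are not deflated by serial correlation.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(n_iter):
        resid = y - X @ beta
        phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
        Xs = X[1:] - phi * X[:-1]                        # quasi-difference
        ys = y[1:] - phi * y[:-1]
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    dof = Xs.shape[0] - Xs.shape[1]
    sigma2 = np.sum((ys - Xs @ beta) ** 2) / dof
    stderr = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xs.T @ Xs)))
    return beta, stderr, phi
```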
[Additional note: we also considered AR2, but AR1 was sufficient to account for partial correlation at 1 month.]

Second, even without accounting for serial correlation, the difference in the ozone and temperature trend results with and without the MLSD term shown in Figure 13 is not very impressive. For the sake of clarity, consider only the yellow curves in the figure. The error bars for the with (thick curves) and without (thin curves) MLSD cases overlap. These are presumably 2σ error bars, right? If not, then the overlap is even larger. The error bars are roughly the same size at most levels. At 2.5 hPa, the ozone error bar appears to be about 25% smaller for the with-MLSD case, which is consistent with the authors' statement. But it is not a very significant difference considering the sizes of the error bars and the large variation in the trend estimates from one pressure level to the next. For most of the other levels, the difference in size of the error bars is hard to discern.
We agree with these comments (the error bars are 2-sigma). In fact, we tried to make this clear with the grey shading in the old version of Fig. 13a to highlight the altitudes where we see the largest improvement. In hindsight, the plot has such a large absolute range of profiles that seeing this improvement is difficult. Figure 10 already shows similar results, that is, the reduction in uncertainty as a function of altitude (right panels of each sub-plot); the idea of Figure 13 was to show how it appeared in practice. By accounting for an attributable source of variability (or at least being able to show that it is not simply noise, but a clear dynamical factor) we make a step closer to better understanding those variables we are trying to determine (e.g. trend and solar cycle); see the point below. The new figure (above) reduces the absolute range and focuses in on one of the datasets. We consider this a more useful plot, and discuss and refer to other articles that do show the profiles.
(2) The other major claim of the paper is that use of the MLSD index in a regression analysis can "explain much larger fractions of the total variability." I am not sure that the word "explain" is appropriate. The dynamically induced variability is being accounted for in the MLR analysis but it is not really being explained. For example, the see-saw temperature and ozone variations between the tropics and extratropics are in many cases associated with minor and major polar stratospheric warmings in the winter hemispheres. The latter are modulated by a number of external forcings including the QBO and the solar cycle. A true explanation of the variability would therefore need to account for the external forcings that are controlling the rate of wave absorption events, which in turn produce the ozone and temperature fluctuations. I also disagree with the terminology "total coefficient of determination", which is used in place of explained variance (R2) in the text. The words "determination", "explained", and "attribution" are all misleading if the sources of the dynamical fluctuations are not identified. Please revise the introduction and conclusions section to make this clear.
We are happy to clear up the terminology. As the other reviewer also pointed out, the use of temperature mixes potential sources of the variance that correlate with temperature but are actually the underlying drivers, and we have added additional text to the introduction to account for this. The point we are trying to make is that we can 'account' for variability that is physical, and not simply noise that, unconsidered, would lead to higher uncertainty in the quantities we wish to determine. It is true that the index itself doesn't necessarily represent the underlying driver of the changes in the meridional flow, but it does act as a proxy and is related to a real variance in the system (which we relate, through the EPFD, to wave driving, as shown in the manuscript). We disagree about the use of the coefficient of determination, R^2, and would argue it is a useful quantity with which to test how much the additional index improves the amount of variability our regression model can account for. By applying the bootstrapping (examples in Figs 11 and 12), we can also account for further statistical uncertainties to ensure that the improvement from the additional dynamical index is robust.
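The bootstrapping mentioned here can be illustrated as follows: refit the regression with and without the index on resampled months and inspect the distribution of the R^2 gain. This Python sketch resamples individual months; a moving-block bootstrap would better respect serial correlation, and all names are illustrative.

```python
import numpy as np

def bootstrap_r2_gain(X_base, X_ext, y, n_boot=1000, seed=0):
    """Distribution of R^2(with index) - R^2(without index).

    X_base: design matrix without the dynamical index; X_ext: the same
    matrix with the index appended as an extra column.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    gains = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample months with replacement
        yb = y[idx]
        ss_tot = np.sum((yb - yb.mean()) ** 2)
        r2 = []
        for X in (X_base[idx], X_ext[idx]):
            beta, *_ = np.linalg.lstsq(X, yb, rcond=None)
            r2.append(1.0 - np.sum((yb - X @ beta) ** 2) / ss_tot)
        gains[b] = r2[1] - r2[0]
    return gains  # e.g. np.percentile(gains, [2.5, 50, 97.5])
```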
Minor comments:
(3) I agree with the other referee that the history of the ozone and temperature variations that are discussed in the paper, and their application to trend analyses, is not adequately summarized in the paper. The first report of the existence of such global stratospheric temperature oscillations, with a change in phase between low and middle-to-high latitudes, was by Fritz and Soules [1970]. Some stratospheric dynamicists still refer to these oscillations as the "Fritz-Soules effect". See also, e.g., Andrews et al. [1987] for general discussions of their dynamical origin. Another observational study by Chandra [1986] could also be referenced.
We have added additional discussion and references as suggested by both referees (see the response above to the first referee on this point). The reference by Chandra [1986] was particularly enlightening; our findings also confirm, and expand upon, the results from that study.
(4) In Figure 1 (and maybe other figures), the definitions of the diamonds in the upper right corner seem to be incorrect and are opposite to those given in the caption.
You are correct: the legend in the figure was wrong; this has been fixed; we also checked the other figures, which did not have this problem.
Changes of note to "A mid-latitude stratosphere dynamical index for attribution of stratospheric variability and improved ozone and temperature trend analysis" by William T. Ball et al
All relevant changes to the document have been highlighted in bold in the attached new version of the manuscript.
Additionally, the following major changes should be considered:
- The index, which is the focus of the paper, has been renamed to a more explanatory name: the Upper-branch Brewer Dobson Circulation (UBDC) index.
- The title has been changed to reflect the change in the index name: "An Upper-branch Brewer Dobson Circulation index for attribution of stratospheric variability and improved ozone and temperature trend analysis".
- Figure 10 has been updated to reflect the change of statistical analysis to include AR1 autoregressive processes.
- Figures 11 and 12 have been updated.
-Figure 13 from the initial manuscript has been replaced with a new one following, and as discussed in the response to, reviewers' comments.
Introduction
Trend analysis, typically using multiple linear regression (MLR), is a key approach to understand drivers of long-term changes in the stratosphere (e.g. WMO (1994), Soukharev and Hood (2006), Chiodo et al. (2014), Kuchar et al. (2015), Harris et al. (2015)). Ozone and temperature have received most attention, partly because they have the longest observational records. Temperature is important for understanding climate change, while quantifying changes in the ozone layer is necessary to estimate the impacts of elevated, or reduced, ultraviolet (UV) radiation reaching the surface, especially following the implementation of the Montreal Protocol to reduce halogen-containing ozone depleting substances (ODSs).
Ozone and temperature in the stratosphere are strongly influenced by transport in the Brewer-Dobson circulation (BDC), whereby air rises in the tropics, advects polewards on either a lower, shallow branch (below ∼50 hPa) or an upper, deep branch, and descends at mid-latitudes (less than ∼60°) or over the poles, respectively (Birner and Bönisch, 2011). The BDC is mainly driven by mid-latitude upward-propagating planetary and gravity waves that break and impart momentum, acting like a paddle to drive the circulation (Haynes et al., 1991; Holton et al., 1995; Butchart, 2014). Wave forcing depends on the mean state of the flow, and vice versa (Charney and Drazin, 1961; Holton and Mass, 1976); changes in either affect ozone transport by a change in the speed of the BDC that leads to adiabatic heating, or cooling, and directly affects chemistry through temperature-dependent reaction rates (Chen et al., 2003; García-Herrera et al., 2006; Shepherd et al., 2007; Lima et al., 2012). As such, ozone and temperature have an inverse relationship in the equatorial stratosphere above 10 hPa, which in turn has a dependence on dynamics (Fusco and Salby, 1999; Mäder et al., 2007; Stolarski et al., 2012), although this is not always the case in the lower stratosphere (Zubov et al., 2013). Ultimately, then, dynamical perturbations at mid-to-high latitudes can directly influence the variability of ozone and temperature (Sridharan et al., 2012; Nath and Sridharan, 2015).
The stratospheric ozone layer has been damaged by the use of ODSs and, following a ban through the 1987 Montreal Protocol (Solomon, 1999), levels of ODSs have declined since their peak in 1998 (Egorova et al., 2013; Chipperfield et al., 2015), although the peak may be earlier or later depending on the location of interest. However, the rate of ozone recovery is latitude dependent, with southern mid-to-high latitudes expected to recover from elevated ODSs (WMO, 2011). The increase in ozone at mid-latitudes comes partly from ODSs reductions, but also because the BDC is expected to accelerate (Garcia and Randel, 2008; Butchart and Scaife, 2001; Butchart, 2014), which will reduce the time for ozone depletion to occur and lead to faster transport of ozone from the equatorial region to higher latitudes. This in turn leads to a reduction of ozone over the equator and a prevention of a full recovery over the tropics. Thus, the recovery of ozone at mid-to-high latitudes can be understood as being partly due to less ozone destruction by lower ODS concentrations, and partly due to a faster redistribution of ozone-rich air from the tropics. Additionally, the cooling stratosphere will slow ozone depletion and further support the increase in ozone at mid-latitudes (WMO, 2014). However, estimates of decadal trends in ozone since 1998 have a high level of uncertainty (Harris et al., 2015) because various long-term datasets provide different pictures (Tummon et al., 2015), and we do not understand much of the stratospheric variability on short timescales. Anomalous, monthly variability, like that at the equator shown in Figs. 7 and 8 of Shapiro et al. (2013), which could be related to high-latitude variability (e.g. Kuroda and Kodera (2001) and Hitchcock et al. (2013)), may simply be considered as noise in MLR trend estimates (and other regressors) where it is not accounted for, which increases the uncertainty.
In MLR analysis of the equatorial stratosphere, variability is usually described with at least six regressors. In summary, our aim here is to provide an index (section 4) to account for sporadic, noise-like stratospheric variability in monthly timeseries that represents rapid adjustments in the BDC and, therefore, to better account for residual variance, improve estimates of trends and regressor variability, and reduce their uncertainties (section 5). We do this using model, reanalysis and observational data (section 2) to identify a source for the short-term variability (section 3).
2 Data and models
Chemistry climate model in specified dynamics mode
To investigate temperature and ozone variability in the stratosphere and mesosphere at all latitudes, without data gaps, we simulate historical ozone and temperature variations using the Chemistry Climate Model (CCM) SOlar Climate Ozone Links (SOCOL; version 3 (Stenke et al., 2013)) in specified dynamics mode, whereby the vorticity and divergence of the wind fields, temperature and the logarithm of surface pressure are 'nudged' using the ERA-Interim reanalysis (Dee et al., 2011) between 1983-2012 and up to 0.01 hPa; see Ball et al. (2016) for full nudging details. Note that we use the Stratospheric Processes and their Role in Climate (SPARC)/International Global Atmospheric Chemistry (IGAC) Chemistry Climate Model Intercomparison (CCMI) boundary conditions and external forcings (Revell et al., 2015), except for the solar irradiance input, for which we use the SATIRE-S model (Krivova et al., 2003; Yeo et al., 2014). In the following we focus on temperature and ozone variables; the former is nudged, while the latter is simulated by the CCM SOCOL.
Observations
We verify that the nudged-model output fields of ozone (not nudged) and temperature (nudged) agree with observations. For ozone we use the Stratospheric Water and OzOne Satellite Homogenized (SWOOSH) ozone composite (Davis et al., 2016) for 215-0.2 hPa (∼10-55 km) at all latitudes. For temperature, we compare the nudged-model output with independent measurements from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument (Russell et al., 1999) on the Thermosphere-Ionosphere-Mesosphere-Energetics and Dynamics (TIMED) satellite, spanning 2002-2015, for 100 to 0.00001 hPa (∼10-140 km) and latitudes out to 52°, as well as with the JRA-55 (Ebita et al., 2011) and MERRA (Rienecker et al., 2011) reanalyses. All observations are re-gridded onto the SOCOL model pressure levels and latitudes. We consider monthly mean zonally-averaged data.
Equatorial ozone and temperature variability
We define short-term, 'anomalous', variability here to be that occurring on monthly, or shorter, timescales. To identify this rapid variability, distinct from behaviour on seasonal and longer timescales, we remove all long-term variability by subtracting a timeseries that has been smoothed, with a 13-month running mean, and then deseasonalised, with monthly values, at each latitude and pressure. We apply this pre-processing to all variables described in sections 3 and 4. An example equatorial (20°S-20°N) ozone and temperature anomaly timeseries from the CCM SOCOL at 2.5 hPa is shown in Fig. 1.
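One reading of this pre-processing, for a single latitude/pressure grid cell, is sketched below in Python: subtract a 13-month running mean, then remove the mean annual cycle from the residual. The exact ordering and edge handling in our processing may differ slightly, and the helper name is ours.

```python
import numpy as np

def monthly_anomalies(series):
    """Short-term anomalies from a monthly zonal-mean series.

    Assumes the series starts in January; the running mean is computed
    with 'same'-mode convolution, so the first and last six months are
    edge-affected and could be masked in practice.
    """
    series = np.asarray(series, dtype=float)
    smooth = np.convolve(series, np.ones(13) / 13.0, mode="same")
    resid = series - smooth                     # remove long-term variability
    months = np.arange(len(series)) % 12
    clim = np.array([resid[months == m].mean() for m in range(12)])
    return resid - clim[months]                 # deseasonalise the residual
```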
SWOOSH ozone from 1985 to 2012, and SABER temperature from 2002 to 2012 (Fig. 1), show similar anomalies to the model and have correlation coefficients (r_c) of 0.72 and 0.83 with the nudged-model results, respectively; the model, therefore, reproduces observations well. The monthly temperature and ozone anomalies have a very strong relationship, especially between 0.1 and 6.3 hPa, with negative r_c reaching -0.96 (Fig. 2) between 0.1 and 10 hPa, while being positive elsewhere.
To establish the coherency of the ozone-temperature relationship in the tropics, we identify 'extreme' anomalies (or 'events') as those at least at the 90th percentile from the mean in temperature and at less than the 10th percentile for ozone (and vice versa). We call 'low-T' events those that have low equatorial temperature at the same time as a high ozone concentration (blue lines at 2.5 hPa, Fig. 2), and 'high-T' the opposite situation (red lines); the low-T thresholds are -1.3 K for temperature and +2.4% for ozone, while the high-T thresholds are +1.1 K and -2.2% (these are also given in the upper plot of Fig. 1).
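Written out, the event definition is a pair of joint percentile conditions. A minimal Python sketch (names ours), applied to the anomaly series at the 2.5 hPa reference level:

```python
import numpy as np

def find_events(temp_anom, ozone_anom, q=90):
    """Boolean masks for 'low-T' and 'high-T' months.

    low-T: temperature at or below its (100-q)th percentile while ozone is
    at or above its qth percentile; high-T is the reverse.
    """
    t_lo, t_hi = np.percentile(temp_anom, [100 - q, q])
    o_lo, o_hi = np.percentile(ozone_anom, [100 - q, q])
    low_T = (temp_anom <= t_lo) & (ozone_anom >= o_hi)
    high_T = (temp_anom >= t_hi) & (ozone_anom <= o_lo)
    return low_T, high_T
```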
We note that the ozone mixing ratio maximum in parts per million (ppm) is at ∼10 hPa. We use 2.5 hPa as a reference here, but other pressure levels at altitudes between 0.1 and 6 hPa give similar results. The majority of the events (45/60) occur in December-January-February (DJF; red/blue in Fig. 2) and June-July-August (JJA; yellow/turquoise in Fig. 2). High-T and low-T months remain grouped above 10 hPa, but mix and lose coherence at altitudes below 10 hPa, implying that the events have a similar source at all altitudes above 10 hPa, but a different one below (i.e. r_c is high at 25 and 40 hPa, but the events at 25 hPa are well-mixed). This indicates a likely transition between BDC branches and that the driver of variability is dynamical, which we confirm in the following.

[Figure caption fragment: December-January-February (DJF) anomalies are identified by red (high-T) and blue (low-T) diamonds.]
Mid-latitude temperature variability
To identify and locate the source of the driver behind the ozone and temperature anomalies shown and described in the previous section, in Fig. 3 we correlate the 2.5 hPa equatorial temperature low-T and high-T events with detrended and deseasonalised temperature at all latitudes and pressure levels, for DJF and JJA months (Figs. 3a and b, respectively). A quadrupole-like structure emerges, with positive correlations centred around 2.5 hPa at the equator and in the winter-polar mesosphere (<0.8 hPa), and negative correlations in the winter stratosphere at mid-to-high latitudes and in the equatorial mesosphere. The inverse correlation in the stratosphere for DJF extreme months peaks at ∼52°N (r_c = -0.92), while JJA events peak at ∼43°S (r_c = -0.93).
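The maps in Fig. 3 amount to correlating the reference event series with the anomaly field at every latitude and pressure level; a vectorised Python sketch (array shapes and names are ours) is:

```python
import numpy as np

def correlation_field(ref_series, field):
    """Pearson correlation of a reference series with a gridded field.

    ref_series: (n_months,) anomalies, e.g. 2.5 hPa equatorial temperature
    restricted to event months; field: (n_months, n_lat, n_plev) anomalies
    on the same months. Returns an (n_lat, n_plev) map of r_c.
    """
    ref = (ref_series - ref_series.mean()) / ref_series.std()
    f = (field - field.mean(axis=0)) / field.std(axis=0)
    return np.tensordot(ref, f, axes=(0, 0)) / len(ref)
```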
We find similar results when using other equatorial pressure levels near 2.5 hPa as a reference to calculate correlations. Equatorial temperature anomalies (∼2 K) are smaller than at high latitudes (∼5 K or more). The maximum temperature response at mid-to-high latitudes does not always reside at the same location as the peak correlation. Although the statistics are less robust, since the period is shorter, the quadrupole structure is also evident in SABER observations (Fig. 4). Thus, we can be confident that the nudged model is giving a good representation of observations.

[Figure caption fragments: signals at two standard deviations from zero are given as yellow and blue contours, respectively. Crosses mark events within regions defined by the red (high-T) and blue (low-T) lines; red crosses are high-T events in December, January and February (DJF), yellow high-T events in June, July and August (JJA), and green 'other' high-T events; dark blue, turquoise and blue represent DJF, JJA and 'other' low-T months (see also legends in the 1.0 and 1.6 hPa plots). Correlation coefficients are given for all crosses together. The y-scale has been decreased by a factor of 30 and 5 at 0.01 and 0.05 hPa, respectively, as indicated in the plots.]
The quadrupole structure is likely the result of (i) an acceleration of the BDC that adiabatically cools the equator during Low-T events as more air arrives at high-latitudes, and adiabatically heats there, and (ii) a deceleration of the BDC that adiabatically heats the equator during High-T events as less air arrives at high-latitudes leading to cooling there; both processes are associated with changes in wave activity.
We show that the mid-latitude temperature, as well as the equatorial temperature and ozone anomalies, are related to variations in wave activity using the Transformed Eulerian Mean streamfunction (TEMS) and the EP-flux divergence (EPFD). For the events identified in Figs. 1 and 2, and used in Fig. 3, we find clear EPFD and TEMS anomalies centred near 55°, slightly poleward of the mid-to-high-latitude peak correlations (Fig. 3a-b). As anomalies, they do not represent a reversal of meridional air flow, but a slowing or acceleration. When high-T anomalies occur the EPFD is positive, which implies zonal-mean westerly winds have accelerated and the BDC has slowed; this is confirmed by the TEMS, indicating increased equatorward flow. This will have the exact effect found, of adiabatically heating the equatorial region and cooling the mid-to-high latitudes relative to the mean state.
The opposite is the case for low-T anomalies. These results confirm that equatorial anomalies are dynamically driven, and we suggest that this is mainly related, at the equator, to an upward shift of the ozone maximum during low-T events, which then produces the anti-correlation seen in Fig. 2 above 10 hPa, and vice versa during high-T events. A further consequence of the circulation changes for ozone is that a temperature increase should lead to faster catalytic destruction, and therefore a decrease of ozone, and vice versa for temperature decreases, though these effects seem to be less important than the rapid profile adjustment itself. It seems surprising that these dynamical signals survive other processes, such as chemistry and radiative effects, on a monthly timescale, and we suggest this warrants further investigation.

[Figure caption fragment: shading colours and black contours are the same as in Fig. 3 (dashed, negative; solid, positive; thick, zero); signals at two standard deviations from zero are given as yellow contours in panels b and c.]
Upper-branch Brewer Dobson Circulation (UBDC) index
The link between anomalous mid-latitude temperature changes and equatorial temperature and ozone provides a way to account for sporadic variability. When performing, e.g., an MLR analysis to understand variability in the stratosphere, such an index of monthly anomalies can account for a large proportion of variability previously unaccounted for, and drive down uncertainties on regressor estimates. We focus here on the equatorial region, but our results imply this index could be applied to other locations in the stratosphere and mesosphere.
Below, we describe how we construct an upper-branch Brewer Dobson circulation (UBDC) index based on detrended and deseasonalised temperature averaged over 43-49°S and 2.5-6.3 hPa for June-October, and averaged over 52-57°N and 4-10 hPa for November-May. Our index utilizes the output from the CCM SOCOL in specified dynamics mode, similar to ERA-Interim and observations, but such an index could be constructed in a similar way for any specific model.
Construction
To construct a useful upper-branch Brewer Dobson circulation (UBDC) index requires the identification of maximum correlation between the equator and each hemisphere separately, followed by a combination of information from these two regions.
We have previously considered just the extreme events, but we now consider all monthly anomalies between 1983 and 2012.
While wave activity drives the temperature changes, it is not an easily observable quantity. Thus, temperature is a natural and simple quantity with which to build the index. Additionally, we have found that the CCM SOCOL in free-running mode (i.e. without nudging) shows the same anomalous temperature-quadrupole structure as in Fig. 3 (not shown). Therefore, one can easily construct an index using model data to represent anomalous behaviour in the equatorial regions, and elsewhere where there is a quadrupole response. We identify the maximum inverse temperature correlations at mid-latitudes in both DJF and JJA by varying the reference equatorial pressure level. We find that averaging over the nine grid cells centred on the mid-latitude peak improves the relationship with the equatorial region. Therefore, we construct the index with anomalous temperatures averaged over 43-49°S and 2.5-6.3 hPa in the southern hemisphere (SH) for JJA months, and 52-57°N and 4-10 hPa in the northern hemisphere (NH) for DJF months.
For March-May and September-November months, we complete the UBDC index by combining November-April NH anomalies with May-October SH anomalies; this combination maximises the relationship with equatorial temperature. We plot the index derived from the CCM SOCOL in specified dynamics mode using ERA-Interim in Fig. 7. Fig. 8 shows the SH and NH mid-latitude temperature anomalies versus the 4 hPa 20°S-20°N equatorial average (a and b, respectively; grey crosses represent November-April, and black May-October). The SH May-October temperature anomalies are inversely correlated with equatorial temperatures (r_c = -0.70), while November-April anomalies are not (r_c = 0.05); the opposite is true for the NH (r_c = 0.02 and -0.78, respectively). The ozone-temperature events identified in Fig. 1 are highlighted with coloured circles, showing that the equatorial anomalies are related to mid-latitude wave-driving. Fig. 8c shows the UBDC index plotted against all equatorial temperature anomalies at 4 hPa (r_c = -0.74). The lower panels (d-f) show the equatorial ozone relationship with respect to mid-latitude temperature and the UBDC index; the absolute correlation coefficient is lower (r_c = 0.65) than for equatorial temperature in the upper panels, but there is still a strong relationship.
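As a rough sketch of how such an index can be assembled from monthly model output, the following Python fragment walks through the steps described above (deseasonalise and detrend, average over the two mid-latitude boxes, and combine by season). It is a minimal sketch under our own assumptions: the xarray data layout (dimensions time/lat/plev with ascending coordinates) and the helper names are illustrative and are not the authors' code.

import xarray as xr

def detrend_deseasonalise(t):
    # Remove the monthly climatology, then a linear trend in time.
    anom = t.groupby("time.month") - t.groupby("time.month").mean("time")
    coeffs = anom.polyfit(dim="time", deg=1).polyfit_coefficients
    return anom - xr.polyval(anom["time"], coeffs)

def ubdc_index(temp):
    # temp: monthly zonal-mean temperature with dims (time, lat, plev).
    anom = detrend_deseasonalise(temp)
    # SH box (43-49S, 2.5-6.3 hPa) used for May-October months,
    # NH box (52-57N, 4-10 hPa) used for November-April months.
    sh = anom.sel(lat=slice(-49, -43), plev=slice(2.5, 6.3)).mean(["lat", "plev"])
    nh = anom.sel(lat=slice(52, 57), plev=slice(4, 10)).mean(["lat", "plev"])
    month = anom["time"].dt.month
    return xr.where((month >= 5) & (month <= 10), sh, nh)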
In Fig. 9 we show the amount of variability the UBDC index can account for in nudged-model temperature anomalies everywhere, using the coefficient of determination r_c², or R². It ranges from 0 to 1; a value of 1 is synonymous with the index accounting for 100% of the variability. In Fig. 9a the UBDC index can account for >50% of variability between 10 and 1 hPa, and above 0.05 hPa. The variability accounted for at mid-latitudes is less (up to ∼30%), even at the index source locations (white circles), because the UBDC index has almost zero agreement half of the time there (see Fig. 8). In Fig. 9b and c, the UBDC index accounts for much of the DJF/JJA variability: above 20 hPa it can account for over 70% of equatorial variability, more than 60% of polar mesospheric variability (80% in the SH), and much of the polar stratospheric variability.
Improvement in MLR analysis using the UBDC index
The UBDC index leads to a large uncertainty reduction in MLR analysis. To show this, we consider MLR with and without the index, focused on the equatorial region (20°S-20°N). In both cases we use the two QBO indices, SAOD, ENSO, and a linear trend. Focusing on the yellow bars in Fig. 13, representing results using AR1 processes, we show the equatorial decadal trend profiles of the datasets considered in Fig. 10 and the 2σ uncertainties derived from multiple linear regression with (thick lines) and without (thin lines) the UBDC index, between 25 and 0.2 hPa. A full discussion of the differences between the profiles is undertaken by Tummon et al. (2015) and Harris et al. (2015), so we do not repeat it here. We simply note that the mean decadal equatorial trends in temperature are almost unaffected by the UBDC index (right panel of Fig. 13). However, we see that the influence of the UBDC index on the mean profile of ozone in SWOOSH leads to a decrease in the ozone trend of ∼0.5% per decade at the altitudes where the index also performs best at reducing uncertainties (Fig. 13a). This decrease may be a result of the largest anomalies after 1998 being positive (see upper plot in Fig. 1), which might introduce a slight upward bias in the trend analysis; once accounted for with the UBDC index, this bias is removed and the trend is reduced slightly.
Nevertheless, this result suggests that ozone trend estimates that do not take the short, anomalous variability into account will overestimate the decadal trends, though it is clear that the biggest uncertainties remain in the underlying datasets themselves (Harris et al., 2015).
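To make the size of this effect concrete, the sketch below fits the same anomaly series with and without the UBDC index and compares the 2σ uncertainty of the per-decade trend term. It is a plain-OLS simplification of the analysis described above (the yellow bars in Fig. 13 use AR1 noise models, for which something like statsmodels' GLSAR would be closer); all variable names are placeholders, not the paper's data.

import numpy as np
import statsmodels.api as sm

def decadal_trend(y, proxies, ubdc=None):
    # y: monthly anomalies; proxies: list of 1-D regressors (two QBO
    # modes, SAOD, ENSO); ubdc: optional UBDC index series.
    decades = np.arange(len(y)) / 120.0          # time in decades
    cols = [decades] + list(proxies)
    if ubdc is not None:
        cols.append(ubdc)
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.OLS(y, X).fit()
    return fit.params[1], 2.0 * fit.bse[1]       # trend and its 2-sigma error

# trend, err = decadal_trend(ozone_anom, [qbo1, qbo2, saod, enso])
# trend_i, err_i = decadal_trend(ozone_anom, [qbo1, qbo2, saod, enso], ubdc=index)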
Conclusions
We have shown that detrended and deseasonalised ozone and temperature anomalies in the tropics are strongly influenced by mid-latitude dynamical perturbations that influence temperature throughout the upper stratosphere and mesosphere of the perturbed hemisphere. The strongest correlations with these anomalies occur at latitudes around 50° in the winter of both hemispheres, and are linked to changes in wave-forcing.
We develop a new upper-branch Brewer Dobson circulation (UBDC) index, which has the power to considerably improve the statistical significance of ozone and temperature trends, and to account for much larger fractions of the total variability. Our results suggest that the index is able to reduce the uncertainty of temperature and ozone estimates by up to 30 and 25%, respectively, between 0.3 and 40 hPa, and to account for up to 60% of the total variance. While we focus on improvements in equatorial temperature and ozone, we suggest it could also be used in the analysis of other stratospheric variables, in other regions, and in the mesosphere. The UBDC index should be employed in future investigations of trends in the upper stratosphere and mesosphere. For modelling studies, this index can be extracted from pressure levels and latitudes similar to those put forward here, though the exact peak is likely to be model dependent; for future trends it may be necessary to determine the peak again, since the regions of wave propagation and breaking may change.
In all cases considered here, the UBDC index improves our ability to reduce uncertainties, to better account for equatorial stratospheric ozone and temperature variability, and, by extension, to attain better estimates of trends in stratospheric and mesospheric mid-to-high latitude variability.
the solar cycle UV flux changes (e.g., with the F10.7 cm radio flux), volcanic eruptions (stratospheric aerosol optical depth; SAOD), the El Niño-Southern Oscillation (ENSO) surface temperature variations, two orthogonal modes of the dynamical quasi-biennial oscillation (QBO), and the equivalent effective stratospheric chlorine (EESC), which describes the long-term chlorine loading; alternatively to applying both GHG and EESC proxies, a linear (or piecewise-linear) trend is considered. At higher latitudes, other proxies have been used to represent dynamical indices, e.g., in the northern hemisphere, the North Atlantic Oscillation (NAO) and Arctic Oscillation (AO), which are related to surface pressure changes, though their relation to stratospheric variability is weaker than that of, e.g., the tropopause pressure (Weiss et al., 2001). Trends in dynamically related quantities, such as horizontal advection and mass divergence, contribute to long-term changes in ozone (Wohltmann et al., 2007). For short timescales, Wohltmann et al. (2007) note that tropospheric pressure is a physical quantity directly responsible for changes in lower stratospheric temperatures, but that it is nevertheless inferior to stratospheric temperature when accounting for variance in column ozone and temperature; this can also depend on the location of tropospheric blocking events (WMO, 2014). However, longer timescales render these proxies unreliable due to additional radiative effects. Ziemke et al. (1997) identified that the use of high-latitude temperature at 10 hPa in winter-spring months, together with 200 hPa temperature at mid-latitudes all year round, was most effective at reducing residuals, though Ziemke et al. (1997) limited their study to total column ozone. In fact, several studies have identified proxies, such as temperature in the stratosphere, that can help improve MLR analysis (e.g., Ziemke et al. (1997); Appenzeller et al. (2000); Weiss et al. (2001); Mäder et al. (2007)). However, these have tended to be focused on total column ozone, the mid-to-lower stratosphere, or mid-latitude and polar regions, and thus most attention on dynamical variability remains associated with the lower branch of the BDC (e.g. Newman et al. (2001); Wohltmann et al. (2005); Brunner et al. (2006)). Further, studies of dynamical variability have also tended to focus on seasonal and inter-annual timescales, and thus any fluctuations on monthly or shorter timescales may be missed, underestimated, or driven by processes operating on different timescales. There are differing conceptual approaches to improving regression models (see WMO (2011)): the use of a statistical approach (e.g. Mäder et al. (2007)), or the use of proxies that can be (at least partly) physically understood (e.g. Wohltmann et al. (2007)). Both approaches have their limitations, and the physical mechanisms may not be fully understood in either case.
As Wohltmann et al. (2007) point out, the use of unphysical or too many regressors (or the lack of needed ones) could lead to systematic errors, through the attribution of correlated variables, that go unnoticed because the error statistics do not change or, indeed, decrease. A third approach simply considers dynamical variability as noise that leads to enhanced uncertainties in trend analysis. The identification of a correlation between two variables can be considered a first step in identifying the physical mechanism that underlies a causal link. A relationship between the proxies and the processes that drive their variability needs to be shown through additional information; it cannot be established through statistical means alone, as it requires physical understanding, either a priori or following further investigation. Here, our aim is to find an index, or proxy, that represents rapid changes, on timescales of a month or less, in the upper branch of the BDC, by investigating an identified association between temperature variation in the mid-latitude upper stratosphere and planetary wave-breaking. While temperature alone does not represent a complete picture of the physical driver of rapid processes, especially where the exact physical drivers remain unresolved, the use of standard dynamical proxies is, as we shall show, not enough to capture this variance. Chandra (1986) identified short-term dynamical variability similar to that which we identify in monthly data here, but applied it to understand dynamical influences on upper stratospheric variability relevant to identifying 27-day solar irradiance variability (indeed showing that 27-day solar modulation was very difficult to identify due to large, rapid dynamical fluctuations) and did not extrapolate this information to improving MLR analysis, as we aim to do here. The drawback to using temperature is that it mixes processes that might have different influences on ozone (Wohltmann et al., 2005). However, it is a simple and direct measure of dynamical changes, at least on monthly or shorter timescales, that relate to rapid dynamical adjustments within the stratosphere.
Figure 1. Monthly anomalies of equatorial (20°S-20°N) ozone (upper; %) and temperature (lower; K) at 2.5 hPa, following the subtraction of 13-month box-car smoothing and monthly deseasonalising from the CCM SOCOL model in specified dynamics mode. SWOOSH ozone composite timeseries and SABER temperature measurements are shown in light blue in the upper and lower plots, respectively. The dashed blue and red horizontal lines are the thresholds shown in Fig. 2; thresholds for each coloured diamond are given on the right of the upper panel. June-July-August (JJA) anomalies exceeding the thresholds have orange (high-T) and turquoise (low-T) diamonds;
Figures
Figures 3c-f show temperature composites for each event type: (c) DJF low-T, (d) JJA low-T, (e) DJF high-T and (f) JJA high-T; all show the same temperature-quadrupole structure as in Fig. 3a-b (signals at two and three standard deviations from zero are given as yellow and blue contours, respectively).
Figure 2. Regression of equatorial (20°N-20°S) ozone and temperature anomalies (following 13-month smoothing and monthly deseasonalising) from the CCM SOCOL model in specified dynamics mode for pressure levels 0.01 to 40 hPa (∼80-22 km). Grey crosses are for all other months in 1983/01-2012/10. Coloured crosses in each plot are determined at 2.5 hPa (lower-left, and plotted as diamonds in Fig. 1) by those within regions defined by the red (high-T events) and blue lines (low-T events); red crosses are for high-T events in December, January, and February (DJF), yellow for high-T events in June, July and August (JJA), and green for 'other' high-T events. Dark blue, turquoise and blue represent DJF, JJA and 'other' low-T months (see also legends in the 1.0 and 1.6 hPa plots). Correlation coefficients are given for all crosses together. The y-scale has been decreased by a factor of 30 and 5 at 0.01 and 0.05 hPa, respectively, as indicated in the plots.
Figure 3. Correlation coefficient maps of zonal mean 20°N-20°S 2.5 hPa temperature anomalies from the SOCOL model with respect to latitude and altitude for all identified low- and high-T (a) DJF and (b) JJA events, as defined in Fig. 2. (c-f) Composite temperatures for (c) DJF low-T, (d) JJA low-T, (e) DJF high-T and (f) JJA high-T events. Dashed (solid) contours are negative (positive), with the bold line representing zero. Signals at 2 and 3 standard deviations from zero are given as yellow and blue contours, respectively, in panels c-f.
Figure 4. (a) SABER temperature data correlation coefficient map of zonal mean 20°N-20°S 2.5 hPa anomalies with all latitudes and altitudes for all low- and high-T DJF events. Composite temperatures for DJF (b) low-T and (c) high-T events, as defined in Fig. 2. Shading colours and black contours are the same as in Fig. 3 (dashed, negative; solid, positive; thick, zero). Signals at two standard deviations from zero are given as yellow contours in panels b and c.
Figure 5. The median of the Transformed Eulerian Mean streamfunction (TEMS) anomalies for (a) DJF low-T events, (b) JJA low-T events, (c) DJF high-T events, (d) JJA high-T events, for the same months as in Fig. 3c-f. Contour lines (solid, positive; dashed, negative) and colours are given in the legend. Positive values indicate clockwise acceleration along the contour lines; negative are anticlockwise. Data are from the SOCOL model in specified dynamics mode.
Figure 6. As for Fig. 5, but for EP-flux divergence. Positive values indicate increased wave activity; negative, decreased activity.
Figure 7. The UBDC index from the CCM SOCOL model in specified dynamics mode using ERA-Interim from 1983 to 2012.
Figure 8. (a-b) SOCOL equatorial temperature anomalies (4 hPa, 20°N-20°S) plotted against temperature means from (a) 2.5-6.3 hPa and 43-49°S and (b) 4-10 hPa and 52-57°N. (d-e) As for the upper panels, but equatorial ozone anomalies (4 hPa, 20°N-20°S) are instead plotted against high-latitude temperature anomalies. Grey crosses are for November-April months; black crosses for May-October; correlations for both periods are given in each panel. Red and blue circles identify the DJF high-T and low-T events in Figs. 1 and 2, respectively; orange and light-blue circles similarly identify JJA events. (c,f) May-October 43-49°S temperatures and November-April 52-57°N temperatures are combined in the right panel (UBDC index) and plotted against equatorial (c) temperature and (f) ozone.
Figure 9. Coefficient of determination (R²) maps of the upper-branch Brewer Dobson circulation (UBDC) index with SOCOL model temperature at all latitudes and altitudes for (a) all months, (b) DJF and (c) JJA months. White circles represent the approximate region from which the UBDC index is derived.
Figure 10. Coefficient of determination summed over all regressors (R²) and the reduction in the Student's t-test-based error on regressor coefficients (%) for equatorial profiles (positive values) for (a) temperature from 1983-2005, and for ozone between (b) 1985 and 1997 and (c) 1998 and 2012 for various datasets (see legends). For R², dotted lines represent estimates without the UBDC index, solid lines with, and the difference (without-UBDC minus with-UBDC) is given as negative and dashed lines.
Factors Influencing E-Filing Usage Among Indonesian Taxpayers: A Technology Acceptance Model (TAM) Theory Approach
The research is designed to investigate the impact of perceived usefulness, tax understanding, social factors, and information technology readiness on the utilization of e-filing among individual taxpayers.
INTRODUCTION
The effective functioning of the government necessitates substantial financial resources for national development, particularly in developing countries like Indonesia. These funds are derived from contributions made by the public in the form of taxes. Taxes serve as essential financial instruments for the state, contributing to the state treasury in accordance with established tax laws. They function as tools to regulate and implement government policies, particularly in the realms of social and economic affairs (Sulistyorini, 2019).
Every citizen, whether an individual or corporate taxpayer, is obligated to comply with tax payments. In a bid to enhance taxpayer compliance, the government has adopted the self-assessment system. This system entrusts taxpayers with the responsibility to voluntarily register, compute, deposit, and report their tax obligations. In cases where disparities arise from the examination results, the authorized officer has the discretion to issue either a Tax Collection Letter (STP) or a Tax Assessment Letter (SKP) (Sulistyorini, 2019).

In accordance with tax legislation, specifically Law No. 28 of 2007 Article 2 paragraph (1), Indonesia currently operates under the Self-Assessment System. This system instills confidence in taxpayers to independently calculate, deposit, and report their taxes. To facilitate taxpayers in their tax return reporting, the Directorate General of Taxes has issued a regulation allowing for the electronic submission of tax returns or extension notifications through e-filing. The specific procedures for submitting annual tax returns for individual taxpayers using Form 1770S or 1770SS through e-filing are outlined in Directorate General of Taxes Regulation Number 01/PJ/2014 (Dewi, 2019).
With the rapid advancement of technology, there is a need for a more efficient and effective tax reporting system to facilitate taxpayers. In line with this, the Director General of Taxes (DGT) has introduced the e-filing system through Director General of Taxes Regulation Number PER-01/PJ/2017. This online and real-time system aims to simplify the tax reporting process, potentially increasing the participation of registered taxpayers. The introduction of e-filing eliminates the need for taxpayers to wait in queues at tax service offices, enhancing effectiveness and efficiency. Despite the benefits of computerized tax return reporting, the socialization of e-filing for taxpayers has not been fully optimized and sustained. Consequently, the awareness and utilization of e-filing among taxpayers remain limited (Natalia et al., 2019). According to Widyadinata & Toly (2014), the purpose of e-filing is to improve the level of service for the public by providing facilities for reporting tax returns (SPT) electronically over the internet. This can also help reduce the costs and time needed by each taxpayer to process, prepare, and report the SPT to the tax office correctly and on time.
According to information from the Directorate General of Taxes (DGT) information system, the e-filing submission of annual individual income tax returns for the fiscal years 2015-2018 has shown an increase compared to the corresponding periods in previous years. This indicates growth in the submission of individual income tax returns through e-filing. Concurrently, there has been a decrease in manual submissions of individual income tax returns during the same period when compared to previous years. The new regulation stipulates that taxpayers must utilize recognized e-filing channels for submission, eliminating the direct submission of electronic document formats to the Tax Office. Dody Herawan, Head of the LTO IV Primary Tax Office of the Directorate General of Taxes of the Ministry of Finance, clarified that the obligation to report via e-filing applies to income tax returns under Article 21/26 or periodic tax returns such as employee payroll deductions. However, for the reporting of the Annual Income Tax Return under Article 25, due not later than March 31, e-filing is recommended but not mandatory. Failure to use e-filing for income tax returns under Article 21/26 and VAT is considered non-submission to the state.
Despite the mandatory regulation, there are still challenges in the widespread adoption of e-filing. Some taxpayers find the computerized system confusing and difficult, citing a lack of understanding in operating e-filing. Additionally, the overall ability of taxpayers to use e-filing remains limited, and there are varying perceptions among taxpayers regarding its utility. Perception, being the process of interpreting sensory impressions to give meaning to the environment, influences decisions based on individual information and viewpoints (Lizkayundari & Kwarto, 2018).

Taxpayer perspectives on e-filing are influenced by advancements in current technology. Therefore, it is crucial to understand the perceptions of utility, tax comprehension, social factors, and technological readiness concerning the adoption of e-filing. This study focuses on taxpayers registered at KPP Cikarang Selatan.
Perceived usefulness is a measure of the extent to which the use of technology is believed to bring benefits to each individual who uses it (Wahyuni et al., 2015). If taxpayers feel that the use of e-filing can improve tax reporting performance, increase the effectiveness of tax reporting, simplify tax reporting, and increase productivity in carrying out their tax obligations, they will willingly continue to use e-filing in the future because it has features that help taxpayers report taxes. Research conducted by Devina & Waluyo (2016) shows that there is a positive and significant influence of the perceived usefulness variable on the use of e-filing. This is supported by Lizkayundari & Kwarto (2018), who show that perceived usefulness has a positive and significant effect on interest in using e-filing. However, research conducted by Wahyuni et al. (2015) shows a different result, namely that perceived usefulness does not affect behavioral intensity when using e-filing.
Tax understanding is a way or process of studying carefully in order to understand and gain as much knowledge as possible. Tax understanding can also be interpreted as meaning that a taxpayer must understand taxes and how to calculate, fill out, and report tax returns. The perception of tax understanding can therefore influence taxpayers to use e-filing when reporting annual tax returns. Research conducted by Pradnyana (2019) and Zahrani & Mildawati (2019) shows that tax understanding has a positive effect on taxpayer compliance. These results differ from research conducted by Wiratan & Harjanto (2018), which shows that tax understanding does not affect taxpayer compliance.
Information technology readiness means that individuals are ready to accept existing technological developments, including the emergence of an e-filing system. Information technology readiness can be seen in various aspects, namely the availability of internet connections, good software and hardware facilities as means of using e-filing, and the ability of human resources to use information technology. Research conducted by Anam et al. (2020) and Daryatno (2017) shows that there is a significant influence of perceived information technology readiness on the use of e-filing. In contrast, research conducted by Wiratan & Harjanto (2018) shows that information technology readiness has no effect on the use of e-filing.
LITERATURE REVIEW
Technology Acceptance Model (TAM)
The theory pertinent to information technology usage is known as the Technology Acceptance Model (TAM), initially formulated by Davis (1989). TAM is widely employed to elucidate and forecast the behavior of information technology users. Derived from the Theory of Reasoned Action (TRA) by Fishbein and Ajzen (1975), TAM is centered on an individual's perception influencing their attitude and behavior. This model predicts user acceptance of technology based on perceived usefulness and perceived ease of use. Perceived usefulness is the user's confidence that system usage will enhance performance, while perceived ease of use pertains to the user's confidence in the system's ease of use and learnability. Therefore, TAM's two variables explain user behavior as users acknowledge the benefits and ease of use, influencing their acceptance of information technology (Natalia et al., 2019).
So it can be said that TAM is an analysis model to determine user behavior regarding technology acceptance. According to Natalia et al. (2019), TAM represents an information system theory outlining how users adopt and utilize technology. Within TAM, two pivotal factors influencing users when adopting a new information system are identified:
a. Perceived Ease of Use: As articulated by Davis (1989), "ease" signifies "freedom from difficulty or great effort." Additionally, "perceived ease of use" is defined as "the degree to which a person believes that using a particular system would be free of effort." When applied to information systems, this implies that users perceive the system as user-friendly, requiring minimal effort and devoid of challenges. This encompasses the ease of aligning the information system with users' preferences. Davis's (1989) research findings indicate that perceived ease effectively elucidates users' reasons for system adoption and acceptance.
b. Perceived Usefulness: In Davis (1989), "the degree to which a person believes that using a particular system would enhance his or her job performance" encapsulates perceived usefulness. This pertains to users' belief that employing the information system will enhance their performance, outlining the system's benefits across various aspects. Therefore, in the perception of usefulness, a belief is formed to guide decision-making on whether to adopt an information system. The underlying assumption is that if users believe the system is beneficial, they will likely adopt it; conversely, disbelief may lead to non-adoption.
Hypotheses
The Effect of Perceived Usefulness on E-Filing Usage
A person's initial familiarity and enjoyment in using e-filing contribute to their perception of its usefulness. As individuals become more accustomed to e-filing, they start to recognize its benefits. Hence, it can be inferred that as an individual taxpayer's perceived usefulness of the e-filing system strengthens, their willingness to utilize the e-filing facility for reporting tax obligations also increases (Devina & Waluyo, 2016). Perceived usefulness, in the context of users, is associated with the system's productivity, effectiveness, and its utility for comprehensive tasks. Therefore, continuous enhancement of the system's usefulness by the Directorate General of Taxes (DGT) is crucial, as it fosters increased adoption of e-filing and encourages taxpayers who have not used e-filing to adopt the system. The more taxpayers perceive e-filing as beneficial for boosting productivity, the more likely they are to persist in its usage. This aligns with the findings of Devina & Waluyo's (2016) research, affirming that perceived usefulness positively influences the use of e-filing. In summary, the hypothesis derived from this understanding is as follows: H1: Perceived usefulness has a positive and significant impact on the use of e-filing.
The Effect of Tax Understanding on E-Filing Usage
The presence of tax understanding significantly impacts taxpayer compliance, emphasizing the importance of taxpayers possessing a comprehensive grasp of tax regulations. A robust comprehension of taxation is pivotal for fostering compliance, as taxpayers are expected to navigate and adhere to tax regulations effectively. Hardinigsih (2017) underscores the significance of understanding, portraying it as a crucial step taken by taxpayers to familiarize themselves with existing tax regulations. It involves gaining insight into tax laws, calculation methods, and the procedures for completing and submitting tax returns. The perception of tax understanding plays a pivotal role in motivating taxpayers to opt for e-filing when fulfilling their annual tax obligations. As taxpayers' understanding of taxation elevates, so does their likelihood of complying with the use of e-filing.

This correlation finds support in the research conducted by Zahrani & Mildawati (2019) and Pradnyana (2019), both asserting that tax understanding positively influences taxpayer compliance. In light of this discussion, the formulated hypothesis is as follows: H2: Tax understanding has a positive and significant effect on the use of e-filing.
The Effect of Social Factors on E-filing Usage
Social factors encompass the extent to which an individual perceives the influence of others in encouraging them to adopt a new system. Within an organizational context, the success of information systems usage is often determined by social factors. Lie and Sadjiarto (2013) elucidate that social factors encompass various elements such as friendships, family ties, superiors, coworkers, and more, which collectively contribute to motivating taxpayers to embrace e-filing. This implies that taxpayers' inclination to use e-filing is influenced by the encouragement received from friends, colleagues, and family, thereby shaping their interest in adopting e-filing.

The greater the impact of encouragement in favor of e-filing use, the stronger the taxpayer's inclination toward e-filing. This viewpoint finds validation in the studies conducted by Syaninditha & Setiawan (2017) and Hardika and Ernawati (2018), both asserting that social factors exert a positive influence on e-filing adoption. Summarily, the formulated hypothesis is stated as follows: H3: Social factors have a positive and significant effect on the use of e-filing.
The Effect of Information Technology Readiness on E-Filing Usage
Taxpayer information technology readiness denotes an individual's preparedness to embrace contemporary technological advancements, particularly the introduction of an e-filing system. This readiness is intertwined with the progression of an individual's mindset, signifying that a more tech-savvy individual, who readily adapts to technological developments, possesses a more advanced mindset (Pradnyana, 2019).

The utilization of e-filing in the future is contingent on several factors, including a robust internet connection, well-equipped software and hardware, and a technologically literate workforce. With these components in place, taxpayers are more likely to consistently and willingly employ e-filing due to its features that facilitate tax reporting (Devina & Waluyo, 2016). Therefore, the higher the level of information technology readiness, the greater the propensity for taxpayers to adopt e-filing.
This assertion finds affirmation in the research conducted by Anam et al. (2020) and Daryatno (2017), both asserting that information technology readiness exerts a positive impact on e-filing adoption. To synthesize, the formulated hypothesis is as follows: H4: Information technology readiness has a positive and significant effect on the use of e-filing.

The Effect of Perceived Usefulness, Tax Understanding, Social Factors, and Information Technology Readiness on E-Filing Usage
Devina & Waluyo's (2016) research demonstrates a positive correlation between perceived usefulness and the adoption of e-filing. Similarly, Daryatno (2017) asserts that perceived usefulness contributes positively to the utilization of e-filing. Zahrani & Mildawati's (2019) study reveals a positive impact of tax understanding on taxpayer compliance. Syaninditha & Setiawan's (2017) research suggests that social factors positively influence the adoption of e-filing. Anam et al.'s (2020) research argues that information technology readiness positively influences e-filing adoption. Drawing from these findings, it is apparent that there are commonalities between the outcomes of previous studies and Daryatno's (2017) research, indicating that perceived usefulness, tax understanding, social factors, and information technology readiness collectively exert a significant impact on e-filing adoption. Consequently, it can be inferred that these four factors together play a significant role in the adoption of e-filing. Subsequently, the following hypothesis can be posited: H5: Perceived usefulness, tax understanding, social factors, and technology readiness together have a significant effect on the use of e-filing.
Coefficient X1 = 0.732: The perceived usefulness regression coefficient is positive (0.732), implying that for each 1-point increase in perceived usefulness, the utilization of e-filing is expected to increase by 0.732.
Coefficient X2 = 0.472: The tax understanding regression coefficient is positive (0.472), indicating that with every 1-point increase in tax understanding, the usage of e-filing is projected to increase by 0.472.
Coefficient X3 = 0.152: The regression coefficient of social factors is positive (0.152), suggesting that for every 1-point increase in social factors, the use of e-filing is anticipated to increase by 0.152.
Coefficient X4 = 0.191: The regression coefficient of information technology readiness is positive (0.191), signifying that for every 1-point increase in information technology readiness, the utilization of e-filing is predicted to increase by 0.191.
Hypothesis Tests
Statistical Test t (Partial)
The t statistical test, or partial test, is employed to indicate the extent of the influence of each independent variable separately in elucidating the variance in the dependent variable (Ghozali, 2018). The criteria for the t test are assessed through the t and sig columns. If the sig value is <0.05, the association between the independent variable and the dependent variable is considered significant.
The F significance test, or simultaneous test, is utilized to ascertain whether the independent variables, taken as a whole or simultaneously, impact the dependent variable. The criteria for the F (simultaneous) test are determined by the sig column, where a sig value ≤ 0.05 signifies that the joint effect of the independent variables is significant (refer to Table 4) (Sarjono, 2013). The inference drawn from the outcomes of the F statistical test indicates that the significance value of 0.000 is less than 0.05. This signifies that the independent variables, namely perceived usefulness, tax understanding, social factors, and information technology readiness, collectively exert a simultaneous influence on the utilization of e-filing.
R² Determination Coefficient Test
The R² coefficient of determination test is employed to assess the extent to which the independent variables contribute to the dependent variable. The determination coefficient (R²) is visible in the adjusted R square column (Ghozali, 2018). Based on this examination, the adjusted R square stands at 0.232, signifying that 23.2% of the variation in e-filing utilization can be explained by the independent variables, namely perceived usefulness, tax understanding, social factors, and information technology readiness. The remaining 76.8% (100% - 23.2%) is influenced by other variables or factors beyond the scope of this study.
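For readers who wish to reproduce this style of analysis, the sketch below shows how the partial (t) tests, the simultaneous (F) test, and the adjusted R² reported above can be obtained with ordinary least squares in Python. The file name and column names are hypothetical placeholders for the questionnaire data, not the study's actual instrument.

import pandas as pd
import statsmodels.api as sm

# Hypothetical Likert-scale survey data; column names are placeholders.
df = pd.read_csv("efiling_survey.csv")
X = sm.add_constant(df[["perceived_usefulness", "tax_understanding",
                        "social_factors", "it_readiness"]])
model = sm.OLS(df["efiling_usage"], X).fit()

print(model.summary())               # per-coefficient t values and sig. (partial test)
print(model.fvalue, model.f_pvalue)  # simultaneous (F) test
print(model.rsquared_adj)            # adjusted R square (e.g., 0.232 in this study)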
DISCUSSION
This research analyzes the effect of perceived usefulness, tax understanding, social factors, and information technology readiness on the use of e-filing by individual taxpayers registered at KPP Pratama Cikarang Selatan.
The Effect of Perceived Usefulness on E-Filing Usage
The results of the hypothesis 1 test, as presented in Table 3, reveal that the t value for the perceived usefulness variable (X1) is 4.122 with a significance level of 0.000. Since the calculated t value of 4.122 exceeds the t table value of 1.98447, and the significance level of 0.000 is lower than the significance probability α = 0.05, Ho is rejected and Ha1 is accepted. Consequently, it can be concluded that perceived usefulness exerts a positive and significant influence on the utilization of e-filing by individual taxpayers registered at KPP Pratama Cikarang Selatan. An increase in the perceived usefulness level corresponds to a higher rate of e-filing adoption among taxpayers. This finding is in line with research conducted by Devina & Waluyo (2016) and Dewi (2019), which proves that perceived usefulness has a positive and significant effect on the use of e-filing.
The Effect of Tax Understanding on E-Filing Use
The outcomes of hypothesis 2 testing, as displayed in Table 3, indicate that the t value for the tax understanding variable (X2) is 2.434 with a significance level of 0.017. Given that the calculated t value of 2.434 surpasses the t table value of 1.98447, and the significance value of 0.017 is less than the significance probability α = 0.05, Ho is rejected and Ha2 is accepted. Hence, it can be concluded that tax understanding yields a positive and significant impact on the utilization of e-filing among individual taxpayers registered at KPP Pratama Cikarang Selatan. A higher level of tax understanding corresponds to an increased adoption of e-filing among taxpayers.
This research is in line with research conducted by (Zahrani & Mildawati, 2019) and (Pradnyana, 2019) which proves that tax understanding has a positive and significant effect on the use of e-filing.
The Effect of Social Factors on the Use of E-Filing
Based on the results of hypothesis 3 testing, as presented in Table 3, the t value for the social factors variable (X3) is 1.987 with a significance level of 0.049. Given that the calculated t value of 1.987 exceeds the t table value of 1.98447, and the significance value of 0.049 is less than the significance probability α = 0.05, Ho is rejected and Ha3 is accepted. Therefore, it can be asserted that social factors exhibit a positive and significant impact on the utilization of e-filing among individual taxpayers registered at KPP Pratama Cikarang Selatan. A heightened level of social factors correlates with an increased adoption of e-filing among taxpayers.
This research is in line with research conducted by (Syaninditha & Setiawan, 2017) and (Natalia dkk., 2019) which proves that social factors have a positive and significant effect on the use of e-filing.
The Effect of Information Technology Readiness on the Use of E-Filing
The outcomes of hypothesis 4 testing are delineated in Table 3, indicating a t value of 2.158 for the information technology readiness variable (X4) with a significance level of 0.033. Given that the computed t value of 2.158 exceeds the t table value of 1.98447, and the significance value of 0.033 is lower than the significance probability α = 0.05, Ho is rejected and Ha4 is accepted. Consequently, it can be concluded that information technology readiness has a positive and significant influence on the utilization of e-filing among individual taxpayers registered at KPP Pratama Cikarang Selatan. An elevated level of information technology readiness corresponds to an increased use of e-filing among taxpayers.
This research is in line with research conducted by (Anam dkk., 2020) and (Daryatno, 2017) which proves that information technology readiness has a positive and significant effect on the use of e-filing.
Perceived Usefulness, Tax Understanding, Social Factors, and Information Technology Readiness
The outcomes of hypothesis 5 testing, as presented in Table 4, reveal a test statistic of 3.161 with a significance level of 0.002. Given that the calculated value of 3.161 exceeds the critical value of 1.98447, and the significance value of 0.002 is less than the significance probability α = 0.05, H0 is rejected and Ha5 is accepted. Consequently, it can be asserted that the variables of perceived usefulness, tax understanding, social factors, and information technology readiness collectively exert a substantial impact on the utilization of e-filing. Heightened levels of perceived usefulness, tax understanding, social factors, and information technology readiness correlate with increased utilization of e-filing.
CONCLUSIONS AND RECOMMENDATIONS
Based on the findings and discussion of the empirical study on perceived usefulness, tax understanding, social factors, and information technology readiness influencing the utilization of e-filing at KPP Pratama Cikarang Selatan, the following recommendations are proposed:
1. For the Tax Service Office, Pratama Cikarang Selatan:
a. The Directorate General of Taxes and ASP are encouraged to simplify the features of the e-filing reporting process, making it more user-friendly. This would address the perception among individual taxpayers at KPP Pratama Cikarang Selatan that e-filing is currently complicated.
b. Streamlining the e-filing reporting method further is recommended, eliminating the need for individual taxpayers to visit the KPP directly. This adjustment aims to address the perception of some taxpayers who view e-filing as less effective.
2. For Taxpayers: Taxpayers are encouraged to proactively seek information regarding the taxation system, particularly the procedures involved in fulfilling tax obligations through the e-filing system. This increased awareness would contribute to a better understanding and utilization of e-filing for tax reporting purposes.
ADVANCED RESEARCH
a. Introducing additional variables beyond perceived usefulness, tax understanding, social factors, and information technology readiness that could influence the adoption of e-filing among taxpayers. These may include factors such as system quality, information quality, and other relevant aspects.
b. Broadening the research scope by increasing the sample size or extending the study to include individual entrepreneur taxpayers and/or corporate taxpayers. This expansion aims to provide a more authentic, precise, and insightful portrayal of the subject matter, contributing to a more comprehensive understanding of the factors influencing e-filing usage.
Table 1. Annual Income Tax Return Receipts for Individuals, Tax Years 2015-2018
The table, based on data from the Directorate General of Taxes of the Ministry of Finance (DGT Kemenkeu), indicates that in 2018 there were 42 million registered taxpayers, with 18 million utilizing e-filing for tax reporting. This means that 42.85 percent of taxpayers opted for e-filing in 2018, marking an increase from 35 percent in 2017. The percentage of SPT reporting through e-filing witnessed continuous growth from 2015 to 2018, with increments of 28.57 percent from 2015-2016, 50 percent from 2016-2017, and 77.77 percent from 2017-2018. However, the total number of e-filing users up to 2018 remained below 50 percent of registered taxpayers. The Directorate General of Taxes (DGT) of the Ministry of Finance mandates online tax reporting through e-filing for income tax (PPh) and value-added tax (VAT) returns, effective April 1, 2018, as per Regulation of the Minister of Finance (PMK) No. 9/PMK.03/2018. This amendment, released on January 26, 2018, modifies PMK Number 243/PMK.03/2014 on Tax Returns.
Table 5. Test of Coefficient of Determination (R²)
Log-concavity and log-convexity of moments of averages of i.i.d. random variables
We show that the sequence of moments of order less than 1 of averages of i.i.d. positive random variables is log-concave. For moments of order at least 1, we conjecture that the sequence is log-convex and show that this holds eventually for integer moments (after neglecting the first $p^2$ terms of the sequence).
Introduction and results
Suppose $X_1, X_2, \ldots$ are i.i.d. copies of a positive random variable and $f$ is a nonnegative function. This article is concerned with certain combinatorial properties of the sequence
$$a_n = \mathbb{E} f\!\left(\frac{X_1 + \cdots + X_n}{n}\right), \qquad n = 1, 2, \ldots$$
For instance, $f(x) = x^p$ is a fairly natural choice, leading to the sequence of moments of averages of the $X_i$. Since we have the identity
$$\frac{X_1 + \cdots + X_{n+1}}{n+1} = \frac{1}{n+1} \sum_{i=1}^{n+1} \frac{1}{n} \sum_{j \le n+1,\, j \ne i} X_j,$$
we conclude (by convexity and exchangeability) that the sequence $(a_n)_{n=1}^{\infty}$ is nonincreasing when $f$ is convex. What about inequalities involving more than two terms?
Recall that a nonnegative sequence $(x_n)_{n=1}^{\infty}$ supported on a set of contiguous integers is called log-convex (resp. log-concave) if $x_n^2 \le x_{n-1} x_{n+1}$ (resp. $x_n^2 \ge x_{n-1} x_{n+1}$) for all $n \ge 2$ (for background on log-convex/concave sequences, see for instance [8,12]). One of the crucial properties of log-convex sequences is that log-convexity is preserved by taking sums (which follows from the Cauchy-Schwarz inequality, see for instance [8]). Recall that an infinitely differentiable function $f\colon(0,\infty)\to(0,\infty)$ is called completely monotone if we have $(-1)^n f^{(n)}(x) \ge 0$ for all positive $x$ and $n = 1, 2, \ldots$; equivalently, by Bernstein's theorem (see for instance [7]), the function $f$ is the Laplace transform of a nonnegative Borel measure $\mu$ on $[0,+\infty)$, that is, $f(x) = \int_0^\infty e^{-tx}\,\mathrm{d}\mu(t)$. For example, when $p < 0$, the function $f(x) = x^p$ is completely monotone. Such integral representations are at the heart of our first two results.
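Since this closure property is used repeatedly below, it may help to record the standard verification (our own rendering, not taken from the paper): if $(x_n)$ and $(y_n)$ are log-convex, then
\begin{align*}
(x_n + y_n)^2 &= x_n^2 + 2 x_n y_n + y_n^2\\
&\le x_{n-1} x_{n+1} + 2\sqrt{x_{n-1} x_{n+1}}\,\sqrt{y_{n-1} y_{n+1}} + y_{n-1} y_{n+1}\\
&\le x_{n-1} x_{n+1} + \bigl(x_{n-1} y_{n+1} + x_{n+1} y_{n-1}\bigr) + y_{n-1} y_{n+1}\\
&= (x_{n-1} + y_{n-1})(x_{n+1} + y_{n+1}),
\end{align*}
where the middle step uses $2\sqrt{ab} \le a + b$ with $a = x_{n-1} y_{n+1}$ and $b = x_{n+1} y_{n-1}$; this is the Cauchy-Schwarz step alluded to above.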
In particular, applying these to the functions $f(x) = x^p$ with $p < 0$ and $0 < p < 1$, respectively, we obtain the following corollary.
For p > 1, we pose the following conjecture.
We offer a partial result supporting this conjecture.
Theorem 4. Let $X_1, X_2, \ldots$ be i.i.d. nonnegative random variables, let $p$ be a positive integer and let $b_n$ be defined by (4). Then for every $n \ge p^2$, we have $b_n^2 \le b_{n-1} b_{n+1}$.

Normalising so that $\mu_1 = \mathbb{E}X_1 = 1$, for $p = 2$ we have $b_n = 1 + (\mu_2 - 1)n^{-1}$, which is clearly a log-convex sequence (as a sum of two log-convex sequences). The following argument for $p = 3$ was kindly communicated to us by Krzysztof Oleszkiewicz: writing $b_n = 1 + (\mu_2 - 1)(3n^{-1} - n^{-2}) + (\mu_3 - 2\mu_2 + 1)n^{-2}$, the sequences $(n^{-2})$ and $(3n^{-1} - n^{-2})$ are log-convex, and by the Cauchy-Schwarz inequality the factor at $n^{-2}$ is nonnegative, so again $(b_n)$ is log-convex as a sum of three log-convex sequences. It remains elusive how to group terms and proceed along these lines in general. Our proof of Theorem 4 relies on this idea, but uses a straightforward way of rearranging terms.
Remark 6. It would be tempting to use the aforementioned result of Boland et al. with $\varphi(x,y) = (xy)^p$ to resolve Conjecture 1. However, this function is neither convex nor concave on $(0,+\infty)^2$ for $p > \frac{1}{2}$. For $0 < p < \frac{1}{2}$, the function is concave and gives (2); Corollary 3 improves on this by removing the factor $n^2 - 1$.

Concluding this introduction, it is of significant interest to study the log-behaviour of various sequences, particularly those emerging from algebraic, combinatorial, or geometric structures, which has involved and prompted the development of many deep and interesting methods, often useful beyond the original problems (see, e.g., [3,4,5,6,10,11,12,13]). We propose to consider sequences of moments of averages of i.i.d. random variables arising naturally in probabilistic limit theorems. For moments of order less than 1, we employ an analytical approach exploiting integral representations for power functions. For moments of order higher than 1, our Conjecture 1, besides refining the monotonicity property of the sequence $(b_n)$ (resulting from convexity), would furnish new examples of log-convex sequences. For instance, it seems neither trivial nor handled by known techniques to determine whether the sequence obtained by taking the Bernoulli distribution with parameter $\theta \in (0,1)$,
$$b_n = \sum_{k=0}^{n} \binom{n}{k} \left(\frac{k}{n}\right)^{p} \theta^k (1-\theta)^{n-k},$$
is log-convex. In the case of integral $p$, we have
$$b_n = \sum_{k=0}^{p} S(p,k)\, \frac{n!}{(n-k)!\, n^{p}}\, \theta^k,$$
where $S(p,k)$ is the Stirling number of the second kind.
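A quick numerical sanity check of this Bernoulli example is easy to run; the short Python script below (our own, not part of the paper) evaluates $b_n$ directly from the formula above and tests the log-convexity inequality for a few values of $p$ and $\theta$.

from math import comb

def b(n, p, theta):
    # b_n = E[(S_n / n)^p] for S_n ~ Binomial(n, theta).
    return sum(comb(n, k) * (k / n) ** p * theta ** k * (1 - theta) ** (n - k)
               for k in range(n + 1))

# Test b_n^2 <= b_{n-1} b_{n+1}; no counterexample is expected if Conjecture 1 holds.
for p in (1.5, 2.0, 3.0):
    for theta in (0.2, 0.5, 0.8):
        ok = all(b(n, p, theta) ** 2 <= b(n - 1, p, theta) * b(n + 1, p, theta)
                 for n in range(2, 60))
        print(p, theta, ok)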
The rest of this paper is occupied with the proofs of Theorems 1, 2, 4 (in their order of statement) and then we conclude with additional remarks and conjectures.
$(u_n(t))$ is log-convex (because sums/integrals of log-convex sequences are log-convex: the Cauchy-Schwarz inequality applied to the measure $\mu$ yields
$$\int \sqrt{u_{n-1}(t)\,u_{n+1}(t)}\,\mathrm{d}\mu(t) \le \left(\int u_{n-1}(t)\,\mathrm{d}\mu(t)\right)^{1/2} \left(\int u_{n+1}(t)\,\mathrm{d}\mu(t)\right)^{1/2},$$
which, combined with $u_n(t)^2 \le u_{n-1}(t)\,u_{n+1}(t)$, gives $a_n^2 \le a_{n-1} a_{n+1}$). The log-convexity of $(u_n(t))$ follows from Hölder's inequality, which finishes the proof.
Proof of Theorem 2
Suppose now that $f(0) = 0$ and $f'$ is completely monotone, say $f'(x) = \int_0^\infty e^{-tx}\,\mathrm{d}\mu(t)$ for some nonnegative Borel measure $\mu$ on $(0,\infty)$ (by (3)). Introducing a new measure and integrating against $\mathrm{d}x$ gives an integral representation of $f$ itself. Let $F$ be the Laplace transform of $X_1$ and, to shorten the notation, introduce a nonnegative function $G$ built from $F$. To show the desired inequality, it suffices to show that pointwise $G(n,s)\,G(n,t) \ge \frac{1}{2}\,G(n-1,s)\,G(n+1,t)$ for all $s, t > 0$. This follows from two properties of the function $G$: 1) for every fixed $t > 0$ the function $\alpha \mapsto G(\alpha,t)$ is nondecreasing; 2) the function $G(\alpha,t)$ is concave on $(0,\infty) \times (0,\infty)$.
Indeed, by 2) we have a pointwise estimate (in fact we only use concavity in the first argument). It thus suffices to prove that the resulting remainder is nonnegative, which follows from 1).
We have obtained an expansion of $b_n$ with coefficients $\beta(q) = \frac{p!}{\alpha(q)\, q_1! \cdots q_m!}$ and $\mu(q) = \mu_{q_1} \cdots \mu_{q_m}$. By homogeneity, we can assume that $\mu_1 = \mathbb{E}X_1 = 1$. Note the corresponding identity obtained from (5) when $X_1$ is constant. Since $Q_p$ has only one element, namely $\{1, \ldots, 1\}$, and $\mu(\{1, \ldots, 1\}) = 1$, when we subtract the two equations, the terms corresponding to $m = p$ cancel. By the monotonicity of moments, $\mu(q) \ge 1$ for every $q$, so $(b_n)$ is a sum of the constant sequence $(1, 1, \ldots)$ and the remaining sequences.

Proof. The statement is clear for $m = 1$. Let $2 \le m \le p-1$ and $p \ge 3$. To see that the resulting expression is positive for every $x \ge p^2 - 1$ and $2 \le m \le p-1$, it suffices to consider $m = p-1$ and $x = p^2 - 1$ (writing $\frac{x}{x-k} = 1 + \frac{k}{x-k}$, we see that the right-hand side is increasing in $x$), and the resulting quantity is clearly positive.
Final remarks
Remark 8. Using majorization-type arguments (see, e.g., [9]), Conjecture 1 can be verified in a rather standard but lengthy way for every $p > 1$ and $n = 2$. The idea is to establish a pointwise inequality: we conjecture that for nonnegative numbers $x_1, \ldots, x_{2n}$ and a convex function $\varphi\colon[0,\infty)\to[0,\infty)$ we have a pointwise inequality, where for a subset $I$ of the set $\{1, \ldots, 2n\}$ we denote $x_I = \sum_{i \in I} x_i$. We checked that this holds for $n = 2$. Taking the expectation on both sides for $\varphi(x) = x^p$ gives the desired result that $b_n^2 \le b_{n-1} b_{n+1}$.

Remark 9. It is tempting to ask for generalisations of Conjecture 1 beyond the power functions, say to ask whether the sequence $(a_n)$ defined in (1) is log-convex for every convex function $f$. This is false, as can be seen by taking the function $f$ of the form $f(x) = \max\{x - a, 0\}$ and the $X_i$ to be i.i.d. Bernoulli random variables.
Possible Use of an Agricultural Service with Artificial Intelligence to Monitor Crops
Introduction
Automation is, in any industry, one of the preferred ways to raise a system as a whole to a new level. New approaches and technologies are transforming our major sectors almost to the point where human interaction becomes negligible. The autonomy given to machinery is immense. Therefore, the question arises as to how much autonomy is acceptable to us and whether we can fully rely on it. Right now, things are not so vague, and the application of autonomous technologies still remains within human control. Modern technologies may claim to be completely autonomous, but what this really means is that the technology can operate with only limited human help; within agriculture, this part of Industry 4.0 is called smart agriculture [1][2].
The need for complete autonomy comes from the human desire to make life easier. It can be noted that the demand for comfort increases every year, as do the standards. Only new approaches and technologies such as big data, the Internet of Things (IoT), and artificial intelligence (AI) can give the desired results if used correctly for smart agriculture [3]. There is, however, another side, as with any medal. The same technology used for good can be misused or lead to more serious consequences. These consequences can contribute to the well-known problem of climate change, which can have catastrophic consequences for humanity now and in the future if proper and timely measures are not taken. Some manifestations of climate change appear in different forms and most often not in a positive spectrum. Hence, if autonomy is desired, the judicious use of technology must be considered. A clear example of the necessity of smart agriculture in coffee growing is the application of IoT sensors for climate change forecasting [4].
These technologies, in the presence of autonomy, show their usefulness in different areas. Most of the time they are controlled using AI. In some machines, machine vision is the main component, allowing them to function for a certain period of time without human interaction. Of course, there must be other components as well, such as digital sensors, clever mechanics, a well-written overall algorithm, and others.
This work will look into the possibilities of using an almost completely autonomous complex for monitoring and identifying crop diseases and taking appropriate measures. Some of the methods proposed by the project are widely used, but not at the level of autonomy presented in this work. The project is intended for use in the agricultural sector, as this best matches the general idea. The service is not actually used anywhere in the agricultural sector or elsewhere; this is a theoretical study of how the proposed service would work if applied in the agricultural sector.
The Agricultural Sector and the Use of AI
The agricultural sector is essential for humanity's survival, and hence the prosperity of the sector positively affects everyone. Although it functions to fulfil the needs of the planet's population, there are some issues that have yet to be addressed. The first and most important driver is population growth, which pushes the agricultural sector to the verge of change. The second is the by-products of agriculture: agriculture plays an important role in climate change and accounts for almost a third of the total impact on the planet. Therefore, modern methods and technologies should become an integral part of the agro-industrial complex to reduce environmental damage and increase crop quality. Automation of the agricultural industry through such technologies, in this case mainly the use of artificial intelligence for the timely detection of diseases in crops, will positively affect the sector as a whole [5][6]. Timely detection of diseases and their prevention can save the crop from the spread of disease. The technology can also be used to identify other parameters, for example a lack of water, the presence of pests, or a lack of fertilizers. Today, AI is a promising field for the development and automation of many processes in different areas. Consequently, the use of AI in the agro-industry will help with many problems that previously took a lot of time and effort, for both individuals and large companies. There are other elements that enhance AI and maximize its potential, and these will be discussed later in this work. The suggested approach should therefore ease the impact of agriculture on global warming and increase production rates.
Aim of the Work
To consider the benefits of an autonomous service for the early detection and monitoring of crop diseases and their subsequent treatment.
Agroservice
This section describes the basics of the proposed project and the steps to be taken to achieve autonomy. These steps summarize the general idea of the project, but not the full picture.
Area setup
The first thing to do is to select and equip the area for the analysis and monitoring of crop diseases. For high-quality analysis and data collection, one needs to determine an area where different types of crop disease can be observed; in most cases, this will be an area where crops are grown. The selected area should be equipped with technical devices (cameras, drones, sensors, etc.) for data collection. IoT should be used to improve the organization of crop value cycles, machine-to-machine communication, and data acquisition [7]. For small areas (for example, a greenhouse), stationary cameras, or a mechanism with an attached camera that can view all plants around the perimeter, can be used. For large areas (sown fields), drones with cameras can be used.
Algorithms
The next step is creating an algorithm (neural networks) to analyze the data collected from the selected area described above. The core of the project is an AI capable of detecting a wide range of diseases. The neural network should identify and classify different diseases by analyzing the information passed through it in the form of photos or videos (along with location data). To do this, the system must be trained on a database [5][6]; a minimal sketch of such a classifier is given below.
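A minimal sketch of the classification step, assuming a PyTorch environment and a hypothetical folder of labelled leaf images (data/train/<disease_name>/*.jpg); the path and the choice of a pretrained ResNet backbone are illustrative assumptions, not the project's actual stack:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # input size expected by the ResNet backbone
    transforms.ToTensor(),
])

# Hypothetical dataset layout: one subfolder per disease class.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer with one
# output per disease class found in the training folder.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))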
Replenishing the dataset library (datasets of various plant diseases) and training the neural networks. Initially, the data can be collected manually from the selected sites, and already-available datasets can be used. The collected data are uploaded to cloud storage, and the neural networks are then trained on the selected data, with the data preprocessed in software where necessary. Consequently, training continues until the error decreases to an acceptable value, as sketched below.
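A sketch of that stopping rule, reusing the model and data loaders from the previous step; the 5% error threshold and the epoch cap are illustrative assumptions:

import torch

def train_until_acceptable(model, train_loader, val_loader,
                           max_error=0.05, max_epochs=100, lr=1e-3):
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
        # Validation error = fraction of misclassified images.
        model.eval()
        wrong = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images).argmax(dim=1)
                wrong += (preds != labels).sum().item()
                total += labels.numel()
        if wrong / total <= max_error:
            break  # error has reached an acceptable value
    return model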
Mini station with UAVs
The next step is creating a mini station where unmanned aerial vehicles (UAVs) will be based. For vegetation grown in small areas (greenhouses) there is no need for drones, since stationary cameras and various sensors can work with the created algorithm. To automate drone departures, mini stations are needed where drones with the desired modules are based. The station must be designed so that drones can depart remotely (automatically), survey the area, and return with the collected data. First, an observer drone flies out to collect information; such drones can be used across all agricultural sectors [8]. Then, depending on the results, drones fly out with the required modules (sprayers against diseases or pests). The station's functions are therefore to host the drones (and charge them for further flights) and to forward the collected information to the general service; one cycle of this logic is sketched below.
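A possible shape for one survey-then-treat cycle run by the station; all drone and module interfaces here are hypothetical, and analyze() stands in for the neural-network service described above:

from dataclasses import dataclass

@dataclass
class Finding:
    disease: str
    location: tuple  # (latitude, longitude) of the affected plant

def station_cycle(observer_drone, treatment_drones, analyze):
    """One survey-then-treat cycle run by the mini station."""
    images = observer_drone.survey()   # fly out, record the area, return
    findings = analyze(images)         # classification service (see above)
    for f in findings:
        # treatment_drones maps a disease name to a drone carrying the
        # matching sprayer module (hypothetical interface).
        drone = treatment_drones.get(f.disease)
        if drone is not None:
            drone.treat(f.location)
    observer_drone.recharge()          # the station also charges the fleet
    return findings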
Output results
Obtaining the output data and validating them. The output is produced after the collected data pass through the neural network, namely text with the names of the detected diseases and their locations. Depending on the output, the service can respond appropriately; one possible output format is sketched below.
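One possible serialization of that output for downstream components; the field names are assumptions, not a fixed schema:

import json

# Hypothetical output record: disease name plus plant location.
output = [
    {"disease": "leaf_rust", "confidence": 0.93,
     "location": {"lat": 50.4501, "lon": 30.5234}},
]
print(json.dumps(output, indent=2))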
Service
Creating a full-fledged automated service is the end result of the project. All of the above components must be combined into one service that can operate automatically without human participation, apart from maintenance. This service should work as a complex that can identify problem areas and respond correctly to any problem that appears.
Advantages of the Proposed Approach
At the final stage of the project, a fully automated system, or a set of services, is expected to prevent various kinds of problems associated with crop well-being. The first advantage is the ability to remotely monitor the selected areas (both greenhouses and fields) using IoT sensors to understand the environmental and geographical conditions of the crops [9]; to identify plant diseases at an early stage (by examining external features with cross-cutting technologies); and to determine the location of the problem area (the affected plant) and so eliminate the problem found. This approach allows a problem to be identified and eliminated quickly and accurately. Its benefits include process automation, coverage of hard-to-reach places, efficient allocation of resources, and targeted solutions: usually, when a disease appears, the response is to spray the entire area, which is neither effective nor economical, and is environmentally harmful both to the soil and to the atmosphere. The second advantage is data collection. Data are now considered a valuable resource, since powerful computers have appeared that can process large amounts of unclassified information in a short time. Within this project, the neural networks will improve with each round of data collection, which in turn benefits the field in which they are applied. The collected data can serve as a preventive tool for identifying cause-and-effect relationships, and can be studied scientifically to find the most suitable ways to automate processes so that costs, waste, staffing, and climate impact are minimized while quality, speed, and accuracy increase [10]. The proposed work shows the use of an agroservice for detecting and monitoring crop diseases at an early stage and treating them. It is not yet deployed in any area, but offers many advantages when applied in agriculture. Today's agriculture lacks the new approaches needed to keep up with needs and threats; the proposed method can solve some of these problems and bring the industry to a new level.
"Computer Science"
] |
Recovery of Materials and Fresh Water Supply using Renewable Energy
Introduction
World population is constantly growing and at the same time the lifestyle is increasing progressively. The growth process requires intense exploitation of known resources as well as more and more new resources to be made available. Such growing demand creates pressure on the huge but finite available global resources. Water, food and energy demand is growing proportionally and are interrelated in a complex way, the so-called water-energy-food nexus [1]. Similarly the demand for materials, inevitably related to water, energy and food supply is steadily increasing as well and they are implicitly part of the nexus. Adequate supply is required in order to secure our present and to sustain the growth satisfying the increasing needs. The pressure on resources is often directly translated on pressure on the land use. Land is massively utilized for food production, fresh water supply, fuels supply, energy production as well as extraction of materials. Already at this stage there are discussions on possible competition between the various activities with regard to land use; the case of bio-masses just to mention the most evident case [2][3][4]. New technologies will make our life even easier and more comfortable but inevitably they will put more pressure on different resources. For example the large deployment of batteries has already an impact on the supply of lithium worldwide and the utilization of magnesium ion batteries in the near future may pose the same issue [4]. Another example is the supply of rare earth elements critical for the development of renewable energy and other smart technologies [6][7][8]. From the EU perspective, restricting to energy, both the supply of fuels and raw materials are mainly depending on imports from outside the EU. This is weakening somehow the future security of supply and it is posing question for the long-term strategy to follow [9].
Besides land-use constraints, and coming back to food production, even if the EU has a strong agricultural infrastructure, it also depends heavily on imports of fertilizers, again a sort of raw material. It is important to develop more renewable resources, both energy and materials, coping with future growing demands while possibly reducing pressure on land use. One possibility that has always been considered is to exploit more effectively the seas, salt lakes, underground salt waters and oceans. Sea water is in fact known to contain many different elements which can be employed for the food chain as well as for the raw material supply. Besides sodium, elements such as calcium, potassium, magnesium, lithium, titanium and molybdenum, not to mention uranium and thorium, can be obtained from sea water. Important natural resources of this type are the so-called brine deposits, where generally sea and ground water has been evaporated by contact with hot geothermal or volcanic rocks and the salts consequently concentrated [10]. Brines may have significantly different compositions depending on the origin of the water, the nature of the rocks and other factors. Element concentrations can be significantly enriched when compared to sea water. Huge brine deposits exist worldwide and they will be further exploited in the coming decades, but they are again finite resources outside the EU.
The issue is that the elements mentioned above are very diluted, and conventional processes often fail to be competitive. The mere energy required to evaporate away part of the water to concentrate the salt is often a killing factor. Historically, however, salt has been extracted from sea water in an acceptable way by evaporation in the so-called salinas. The process relies on huge surfaces on land, ponds, into which sea water is pumped and left to evaporate naturally, by sun and wind, given time and temperature. The process is rather slow and requires extensive surfaces. However, 'mining' different elements from sea water could be done at a larger scale directly on deserted islands not suitable for agricultural or other purposes, or even offshore (with floating structures).
What exactly does sea water contain?
It is a well-known fact that sea water contains sodium, its most abundant dissolved element. In addition, there are considerable amounts of other valuable elements. A list of the most common elements contained in sea water, together with their relevant applications, is given in Table 1 below. The materials' major suppliers and the EU net import dependency are also presented in the table.
a. Refers to lime production.
b. USGS 2010 data are used to calculate the supplier shares, since the latest years' USA data were not reported and the USA is one of the major suppliers [12].
c. Refers to titanium and titanium dioxide production.
f. Refers to phosphate rock: the import reliance of phosphate rock is currently 100%. No production is taking place within the EU (expert's opinion).
g. Exact shares not found.
As can be seen from Table 1, the EU is strongly dependent on third countries for the supply of materials found in sea water, which are used in a broad range of applications. The import dependency varies between 75% and 100%, which may create a problem of securing the supply of these materials in the future. Therefore, the EU may have a particular interest in developing the technology required to extract raw materials from the sea. In particular, the security of raw-material supply is strongly influenced by the fact that often only a few third-party countries supply the EU, and in some cases, e.g. magnesium, a single country has full control.
Sea water versus brines
As mentioned before, the so-called brines occurring in different locations on earth are typical natural resources for materials. Such deposits may have significantly different compositions depending on the origin and other factors (e.g. marine, non-marine, hybrid, alkaline brines, etc.). Salt water isolated in salt lakes has evaporated over long geological eras, mainly due to volcanic/geothermal sources or surface evaporation, leaving concentrated brines or even solid deposits. Non-marine brines originate, for example, from rivers depositing, again over geological eras, elements washed away from rocks. Contact with particular rocks and the quality of the inflowing rain water also play a role in brine formation.
Combinations of factors are also widely observed in nature, making the composition of brines very diverse and suitable for specific exploitation. For example, the lithium-rich brines of Salar de Uyuni in the Bolivian Altiplano, one of the largest salt pans on Earth, contain lithium at concentrations in the order of 500 ppm or 500 g/t (concentrations range from 80 ppm up to 1500 ppm depending on location). Even higher concentrations are found at specific locations, while the lowest level still commercially interesting for exploitation is about 20 ppm (USA). By comparison, the enrichment in lithium relative to sea water is approximately a factor of 120, which in turn means that a lot of sea water needs to be evaporated to obtain highly concentrated brines economically viable for further exploitation.
The situation is almost reversed when we compare the figures for magnesium; its concentration in sea water, 1290 ppm, is far higher than that found in most brines. Whatever the target element is, the concentration process would generate huge amounts of valuable by-products, as happens with brines. In fact, coming back to brine exploitation, lithium is often a by-product while potassium is one of the main products. In any case, as mentioned before, huge amounts of fresh water could be co-produced in parallel, which could be utilized in dry areas for many purposes, e.g. human settlement, agriculture, tourism, etc.
Energy and power requirements
There are plenty of valuable elements in sea water, but one basic question is of course how to separate, or at least concentrate, the salt from the huge amount of water. The energy required to concentrate sea water by evaporation can be estimated, as can the power requirements, depending on the processing time. Renewable energy is evaluated in particular for its potential to recover valuable elements from sea water. Renewable energies, especially solar and wind, are often available in regions with limited or no access to energy transmission and distribution grids. This is particularly true for the offshore case or for remote uninhabited islands. In many cases, harvesting this renewable potential would require such huge grid investments that the projects become impossible; often the connection to a grid is not technically feasible at all. Utilizing renewable energy directly at those locations for the production of raw materials may be more economically attractive in such cases. The produced materials could then be piled and carried onshore by sea vessels.
Harvesting and utilizing RES at the source location, avoiding power transmission infrastructure and reducing energy conversion steps, will enhance the overall efficiency of the whole process. This has to be kept in mind, since the energy requirement remains one critical element which often puts the process out of business. For example, a few years ago the possibility of extracting fissile fuels, like uranium and thorium, from sea water to feed nuclear reactors was in fact discussed [15]. The process was based on pumping sea water through a membrane. The required energy turned out to be too high even in comparison with the energy obtainable from the recovered fissile fuel, making the whole process unsustainable. Other technologies are at the development stage, like absorption mats, which may be less energy demanding [16], or the so-called metal-organic frameworks [17]. In the following, with a view to RES utilization, we will focus on water evaporation as the main process to concentrate salts. We assume here the use of direct solar thermal energy for the main evaporation process and other renewables for the required pumping of the sea water and brine. The concentration of the salt by evaporation of the water requires a lot of energy. Assuming a salt water temperature of approximately 15 °C, the minimum energy required to warm up and then evaporate 1 ton of water can be estimated as 0.73 MWh/ton.
As we know, solar radiation at a good location in the summer months may yield at least 4 kWh/day/m² of heat, considering approximately 4 hours equivalent of peak radiation. This means that, if we consider 1 km², a total energy of 4000 MWh/day in the form of heat is available. In turn, slightly more than 5000 tons of water per day could potentially be evaporated. Assuming good radiation conditions and operation for 50% of the year (like the summer seasons in Mediterranean areas), we could ideally evaporate approximately 1 Mton/year of water utilizing the reference 1 km² of solar thermal energy. As a result, we would concentrate solutions containing more than 1000 tons of magnesium, 400 tons each of potassium and calcium, and other elements (Table 1); a quick numerical check of these figures is sketched below.
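A back-of-the-envelope check of the figures above, a minimal sketch using standard handbook values for water and assuming ideal, loss-free evaporation:

C_P = 4.186e3   # specific heat of water, J/(kg*K)
L_V = 2.257e6   # latent heat of vaporization, J/kg

e_per_kg = C_P * (100 - 15) + L_V        # J to heat 15 C water and boil it off
mwh_per_ton = e_per_kg * 1000 / 3.6e9    # 1 MWh = 3.6e9 J
print(f"{mwh_per_ton:.2f} MWh/ton")      # ~0.73 MWh/ton, as stated

solar = 4.0     # kWh of heat per m^2 per day at a good location
area = 1e6      # 1 km^2 in m^2
tons_per_day = solar * area / (mwh_per_ton * 1000)  # available kWh / kWh per ton
print(f"{tons_per_day:.0f} tons/day")               # ~5500 tons/day
print(f"{tons_per_day * 365 * 0.5 / 1e6:.2f} Mton/year")  # ~1 Mton at 50% uptime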
(Table 1 excerpts: applications including agriculture, glass manufacturing, explosives for mining and civil works, metal treatment, fireworks, and the food and pharma sectors. Calcium: 400 tonnes; major suppliers China (66%), others (34%) [1]; EU import dependency 100% [4]; applications: alloying agent in the production of aluminium, beryllium, copper, lead, and magnesium alloys; deoxidizer, desulfurizer, or decarbonizer for various ferrous and non-ferrous alloys; reducing agent in the extraction of uranium, zirconium, and thorium; cement and mortar production for construction; food industry.)

There are also other power requirements to be evaluated, mainly the power required to pump the huge amount of water from the intake through the whole process. The power required depends mainly on the elevations to which the water needs to be pumped, as well as other factors such as water treatment, depleted water separation requirements, etc. Power in the range of tens of MWe is required as a minimum to feed 1 km² of installation. To obtain this power in remote areas, photovoltaic or wind installations could be employed, requiring additional surface area; such surface is anyhow far smaller than the overall surface required. Other options to generate power include the deployment of geothermal sources, heat pumps, etc.
Discussion
In any case, to exploit seas and oceans, sea water needs to be segregated and evaporated; different strategies to recover the various elements can then be followed. The concentration could be obtained by evaporation of the water, as it was and still is done in many salinas. Renewable energy is the most promising option for driving this evaporation; it only needs to be optimized to compensate for otherwise long evaporation times. As calculated in the previous sections, utilizing 1 km² of surface in a Mediterranean region of the EU, we could potentially evaporate approximately 1 Mton of sea water per year using solar thermal energy. In order to minimize the impact on land use, floating installations on a salt lake, or the use of land unsuitable for agriculture such as lake and sea shores, desert islands or desert areas, are desirable. The result of a year's operation would then be the production of concentrated brines offering a real possibility to extract valuable materials. If we conservatively assume a recovery percentage of approximately 50% for the various elements, we can estimate the annual recovery of the amounts given in Table 2.
Deploying more units, or better, utilizing a larger field, the results become interesting for large-scale supply. A single desert island or floating structure in the lower Mediterranean region of the EU with a surface of 1000 km² (30 x 33 km) would make available 500,000 tons of magnesium, 200,000 tons of potassium and 85 tons of lithium as a by-product. The mentioned amount of magnesium is huge, considering that the world's production is in the range of one Mton [18]. The lithium amount is rather low; by comparison, Australia and Chile were the biggest producers of lithium in 2015, with approximately 12-13 thousand tons each [11]. The advent of magnesium-ion batteries in the near future will be a key development [19]. The possibility of extracting magnesium from sea water also attracts the attention of big countries such as the USA [20]. Research is currently ongoing on developing energy-effective techniques for magnesium recovery from sea water. Additional RES-driven or chemical exchange processes may be developed for purification and metal reduction wherever required. The proposed installation would be fully renewable and, besides raw materials, would make available a huge supply of fresh water, which may be of high value for human use and for further exploitation of agriculture, especially in remote dry areas.
It is worth noting that the idea could be exploited by all countries with direct or indirect access to the sea. Of course, when relying on solar energy for the evaporation, the latitude may be critical for the productivity of the installation. Utilization of renewable wind energy offshore might be more appropriate at higher latitudes.
It should be recognized that RES can be deployed in a way that does not compete with energy production. On the contrary, it can be better deployed in areas near the coastline, or inland but with easy connection to the sea coastline, where access to a suitable energy infrastructure is difficult and, of course, where RES are abundant. It is in fact unlikely that RES will be harvested for energy purposes there, due to the high costs of laying additional energy grid infrastructure. A rough estimation of the huge potential available can then be obtained by evaluating the surface areas of such regions.
A simple approximation can be done for the EU, just considering the case of solar RES. It will be a conservative estimation, but it indicates the order of magnitude of a potential which, as mentioned before, will hardly ever be exploited for energy production anyway. As a proxy for the energy infrastructure we can consider the existing transmission and distribution electrical grid. Areas far from the available energy infrastructure, often scarcely inhabited as well, are taken into account for the evaluation. Focusing on solar RES, we can then further limit the evaluation by restricting the latitude range to between 42°N and 35°N, to account for the higher and longer solar exposure across the year.
A fast estimation indicates that it is easy to point at a suitable surface in the range of at least 2000 km². This means a potential in the range of approximately 1 Mton per year for magnesium, approaching the world's global supply figure and far exceeding the EU's consumption. In fact, the EU supply, which is approximately 170,000 tons per year [21], can be obtained by exploiting less than about 340 km²; this figure is checked below. These estimations should be considered rather conservative. Just for comparison, the small islands of Kythira and Karpathos in Greece, in a suitable climatic region, are together larger than 700 km², while inhabited by just 11,300 people (with population declining, as in many of the smaller Aegean islands). For a more precise estimation a more sophisticated analysis is necessary, for example applying geographic information system (GIS) software. This will be the topic of a dedicated follow-up study. The same can be done considering other RES sources and different climatic areas.
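A quick check of the 340 km² figure, assuming the per-km² yield derived earlier (about 1000 t of magnesium concentrated per km² per year) and the conservative 50% recovery assumed in the Discussion:

mg_concentrated_per_km2 = 1000.0  # tons/year, from the 1 Mton/year evaporation case
recovery = 0.5
mg_recovered_per_km2 = mg_concentrated_per_km2 * recovery  # ~500 t/year

eu_demand = 170_000               # tons/year, EU magnesium supply [21]
print(f"{eu_demand / mg_recovered_per_km2:.0f} km^2")      # ~340 km^2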
As an additional benefit, more than 60% of the calcium and 30% of the potassium which the EU imports (in the form of calcium carbonate and potash) from non-EU countries can be harvested from the same surface of 2000 km² [22]; to compensate for the whole EU import of these two raw materials, surfaces of roughly 3500 km² and 6500 km² respectively would be required. Although non-critical for the EU economy, these two materials are feedstock for important sectors, such as agriculture (fertilizers), chemicals, construction, and the production of glass, paper, plastics, etc.
A simplified sketch of the system layout is presented in Figure 1. One more obvious advantage of such a concept is obtaining a significant amount of salt, predominantly used as a feedstock for the production of industrial chemicals. The demand for salt shows an upward trend and new possibilities are currently being explored in the EU; e.g. Akzo Nobel's Specialty Chemicals business is opting to extend its production of high-purity salt in the Netherlands by 25%, following the extension of its facility in Denmark, and a new joint venture has recently been established in Spain [23]. Currently, the EU holds only an 18% share of global production, with Germany being the major EU producer, while China and the United States are the largest salt producers, together accounting for around 37% of global salt production [24,25]. The EU is also one of the biggest salt importers globally: in 2016 the EU imported $920.3 million worth of salt [26].
Conclusion
Sea water contains valuable materials which are important for different industrial sectors such as energy, transport, agriculture, metals production, etc. The EU's reliance on imports of these materials is very high, in most cases above 75%. The concentration levels of these materials in sea water are rather low but almost constant worldwide. A concentration process to obtain brines via evaporation, employing renewable energies, can be the starting point for exploitation. Salt production could of course be just one more benefit of such projects. Benefits would also include a reduced need for power transmission infrastructure, by harvesting and utilizing RES at the source location and exploiting thermal energy directly as much as possible, thus assuring increased overall efficiency.
Just as an indication, a significant amount of magnesium, potassium and calcium, plus several other elements in smaller quantities, can be retrieved from an area of only 1 km². For the production of the magnesium necessary to satisfy current EU demand, a surface area of approximately 340 km² would be sufficient. An area of approximately 2000 km², estimated to be easily available just within the EU, could deliver the current global magnesium production in the range of 1 Mton. It should be noted that magnesium, for which we depend 100% on non-EU suppliers, is also an attractive candidate for the development of future rechargeable magnesium-ion battery technology. As an additional benefit, the import reliance on other raw materials such as potassium, calcium and salt can be substantially reduced.
Potentially the impact on land could be negligible, since dry uninhabited islands could be utilized in the first place. Floating concepts could be developed as well, for example on salt lakes. Additionally, the process could create a mass supply of fresh water in remote dry areas for further exploitation of agriculture, positively impacting the water-energy-food nexus. It should be noted that the concept could easily be exploited by countries with access to the sea. It can be of particular interest for the EU due to its huge import dependence on raw materials, which may create a problem of security of supply in the future.
"Engineering"
] |
Enhancement of β-Phase Crystal Content of Poly(vinylidene fluoride) Nanofiber Web by Graphene and Electrospinning Parameters
Electrospun poly(vinylidene fluoride) (PVDF) nanofiber web has been widely utilized as a functional material in various flexible sensors and generators due to its high piezoelectricity, easy processability, and low cost. Among all the crystalline phases of PVDF, the β-phase is a key property for PVDF nanofiber web, because the content of β-phase is directly proportional to the piezoelectric performance of the web. Herein, the impact of graphene content (GC), tip-to-collector distance (TCD), and rotational speed of collector (RSC), as well as their interactions, on the β-phase formation of PVDF nanofiber web is systematically investigated via a design-of-experiment method. The fraction of each crystalline phase of the PVDF nanofiber web is calculated from FTIR spectra, and the crystallinity is determined from XRD patterns. The influences of GC, TCD, and RSC on both the β-phase fraction and the crystallinity of the PVDF nanofiber are analyzed using the Minitab program. The results show that GC, TCD, and RSC all have a significant effect on the β-phase content of the PVDF nanofiber web, and GC is the most significant one. In addition, an optimal electrospinning condition (GC = 1 wt%, TCD = 4 cm, and RSC = 2000 r·min−1) to fabricate PVDF nanofiber web with high β-phase crystallinity is derived, under which the crystallinity can reach 41.7%. The contributions of this study could provide guidance for future research on fabricating high-performance PVDF nanofiber web based sensors or generators.
Electrospinning is a straightforward, scalable, and cost-effective versatile technique to fabricate a high β-phase content piezoelectric PVDF material, namely PVDF nanofiber web. [17][18][19][20] However, a huge number of parameters influence β-phase formation during PVDF electrospinning, such as additives, tip-to-collector distance (TCD), rotational speed of collector (RSC), PVDF concentration, solvent mixture, applied voltage, injection flow rate, needle tip gauge, and so forth. Among them, doping additives, for instance silver nanoparticles, [14] nanoclay, [21] and graphene oxide, [3,22] into the PVDF electrospinning solution is one of the most effective methods for improving the β-phase content, for they can serve as nucleation agents that promote β-phase formation. Compared with these additives, graphene is a two-dimensional sheet of π-bonded carbon atoms packed in a honeycomb crystal lattice, presenting extraordinary electrical and mechanical properties such as superb electrical conductivity, excellent flexibility, large specific surface area, and high mechanical strength. [23,24] It has also been reported that adding graphene into the PVDF electrospinning solution can improve the β-phase content. [15,25] However, increasing the graphene content is not always beneficial: some research has revealed that a high graphene content can have a negative effect on β-phase formation. [7,26,27] Therefore, it is necessary to investigate the relationship between β-phase formation and graphene content, as well as its underlying mechanism. In addition, another efficient avenue to facilitate β-phase formation is to increase the electric field to promote the dipolar polarization of the PVDF nanofiber web. [15] There are two feasible methods for elevating the electric field during electrospinning: one is to increase the applied voltage at a fixed TCD, and the other is to reduce the TCD. It is rather difficult to raise the applied voltage very high because every electrospinning machine has its own maximum applied voltage to protect its electronics, so the latter approach is more effective and practical than the former. Besides, the β-phase is conventionally obtained from the α-phase through mechanical stretching. [28] It was hypothesized that the draw ratio of the PVDF nanofiber could increase at a high RSC during collection, promoting the phase transformation from the α-phase to the β-phase and further increasing the β-phase content of the PVDF nanofiber.
Herein, the effects of graphene content (GC), TCD, and RSC, as well as their interactions, on the β-phase formation of PVDF nanofiber web were systematically investigated through design of experiment (DOE). The results showed that GC, TCD, and RSC all have a significant impact on the β-phase content of the PVDF nanofiber web, and GC is the most significant one (P < 0.05). In addition, an optimal electrospinning condition (GC = 1 wt%, TCD = 4 cm, and RSC = 2000 r·min−1) to prepare PVDF nanofiber web with high β-phase crystallinity was derived, and the crystallinity can reach 41.7%. More importantly, the underlying mechanism of GC, TCD, and RSC on β-phase formation is also fully discussed.
Materials
PVDF pellets (Mw ≈ 2.75 × 10⁵) were supplied by Sigma-Aldrich (UK). Graphene nanoplatelets with an average thickness of 5-7 atomic layers and a sheet size of about 25 μm were purchased from Sigma-Aldrich (UK) as well. Silver nitrate (Mw ≈ 169), N,N-dimethylformamide (DMF), and acetone were purchased from Fisher Scientific (UK).
Electrospinning Process
GC and two electrospinning parameters, i.e. TCD and RSC, were selected as research objects for DOE and two levels (low and high levels) were set in each parameter as shown in Table 1. The center level of each parameter was set to repeat four times to calculate the variation within the experiment.
The electrospinning protocol was as follows. PVDF solution (12 wt%) was prepared by dissolving PVDF in a binary solvent system of DMF/acetone (6/4, V/V). Then a small amount (1 wt%) of silver nitrate was added into the PVDF solution by stirring for 4 h to form silver nanoparticles, which acted as the phase stabilizer. The required amount of graphene (Table 1) was added and stirring was continued for 6 h at 60 °C. Next, the prepared solution was placed into an ultrasonic bath for 30 min to obtain a homogeneous dispersion just before electrospinning. The prepared solution was loaded into a 10 mL syringe with a 23-gauge needle. Lastly, the PVDF nanofiber web was fabricated according to the TCD and RSC conditions presented in Table 1, with an applied voltage of 18 kV and an injection flow rate of 0.6 mL·h−1. The electrospinning of each sample was conducted for 6 h to obtain a uniform thickness of ~100 μm of PVDF nanofiber web. Table 2 lists the sample specifications for the PVDF nanofiber webs obtained under different conditions. Twelve samples (S01-S12), called the treated samples, were prepared for DOE analysis; S09-S12 were repeated samples for the variation calculation within the experiment; and an extra untreated sample (S13), to which silver nanoparticles and graphene were not added, was prepared as a reference.
The crystalline structure of the PVDF nanofiber web was analyzed by X-ray diffraction (XRD, Bruker, D8 Advance) in the 2θ range of 10° to 40° with a CuKα (λ = 1.54 Å) radiation source operated at 40 kV and 40 mA. The area corresponding to each crystalline peak was obtained through curve deconvolution of each XRD pattern using the PeakFit program. Based on these data, the total crystallinity (C_t), the β-phase crystallinity (C_β), the γ-phase crystallinity (C_γ), and the α-phase crystallinity (C_α) were calculated using the following Eqs. (5)-(8): [15]

C_t = \frac{\sum A_{cr}}{\sum A_{cr} + \sum A_{amr}} \times 100\% \quad (5)

C_\beta = \frac{\sum A_{\beta}}{\sum A_{cr} + \sum A_{amr}} \times 100\% \quad (6)

C_\gamma = \frac{\sum A_{\gamma}}{\sum A_{cr} + \sum A_{amr}} \times 100\% \quad (7)

C_\alpha = \frac{\sum A_{\alpha}}{\sum A_{cr} + \sum A_{amr}} \times 100\% \quad (8)

where ΣA_cr and ΣA_amr are the summations of the integral areas of the crystalline peaks and of the amorphous halo from PVDF, respectively, and ΣA_α, ΣA_β, and ΣA_γ indicate the total integral areas of the α, β, and γ crystalline phase peaks, respectively. The morphology of the PVDF nanofiber web was observed with a field emission scanning electron microscope (FE-SEM, Hitachi S4800) operated at an acceleration voltage of 1.5 kV.
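A small sketch of this calculation, assuming the equation forms given above (each crystallinity normalized by the total crystalline-plus-amorphous area); the peak-area lists are the outputs of the deconvolution step:

def crystallinities(a_alpha, a_beta, a_gamma, a_amorphous):
    """Each argument is a list of integrated peak areas from the deconvolution."""
    a_cr = sum(a_alpha) + sum(a_beta) + sum(a_gamma)  # total crystalline area
    total = a_cr + sum(a_amorphous)
    c_t = 100 * a_cr / total                          # Eq. (5)
    c_beta = 100 * sum(a_beta) / total                # Eq. (6)
    c_gamma = 100 * sum(a_gamma) / total              # Eq. (7)
    c_alpha = 100 * sum(a_alpha) / total              # Eq. (8)
    return c_t, c_beta, c_gamma, c_alpha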
Statistical Analysis
The effects of GC, TCD, and RSC, as well as their interactions, on the β-phase fraction, the total crystallinity, and the β-phase crystallinity of the PVDF nanofiber web were all analyzed with the Minitab program, using a two-level factorial design with three factors. The statistical distributions of the fiber diameters were calculated using ImageJ software.
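As a sketch of the arithmetic behind such a two-level factorial analysis, for readers without Minitab, assuming the run table is a pandas DataFrame with coded factor levels (-1/+1 for GC, TCD, RSC) and a response column; all column names are illustrative:

import pandas as pd

def main_effect(df, factor, response="C_beta"):
    """Effect = mean response at the high level minus mean at the low level."""
    high = df.loc[df[factor] == 1, response].mean()
    low = df.loc[df[factor] == -1, response].mean()
    return high - low

def interaction_effect(df, f1, f2, response="C_beta"):
    """Two-factor interaction: contrast on the product of the coded levels."""
    prod = df[f1] * df[f2]
    return df.loc[prod == 1, response].mean() - df.loc[prod == -1, response].mean()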
Piezoelectric Response Testing
The piezoelectric pressure sensor was fabricated by attaching Ni-Cu plated polyester tapes as electrodes on both sides of the PVDF nanofiber web. The open-circuit output voltage of pressure sensor was measured using a Biopac system (MP160 and HLT1000), a piezo film lab amplifier with voltage mode (Measurement Specialties), and an electromechanical universal testing system (Instron, 3400).
FTIR Analysis of PVDF Nanofiber Web
The formation of the three crystalline phases, α, β, and γ, was analyzed by FTIR spectra of the prepared PVDF nanofiber webs. Fig. 1(a) compares the FTIR spectra of the treated PVDF nanofiber webs (S01-S12) and the untreated reference PVDF nanofiber web (S13) in the wavenumber region from 400 cm−1 to 1600 cm−1. The two characteristic bands at 489 and 764 cm−1 correspond to the nonpolar α-phase, and the two vibrational bands at 1234 and 1275 cm−1 are attributed to the γ-phase and β-phase, respectively. [29,30] Observably, the treated samples have no obvious band at 489 (α-phase), 764 (α-phase), or 1234 (γ-phase) cm−1, indicating they are mainly composed of β-phase. By contrast, the untreated sample, S13, contains a relatively large amount of α-phase, as shown by the two clear bands at 489 and 764 cm−1.
Figs. 1(b)-1(e) show the calculated fraction of each phase. As shown in Fig. 1(b), S13 possesses the lowest F_EA (85.4%), while S08 exhibits the highest F_EA (98.3%). Moreover, the F_EA values of all treated samples are significantly higher than that of S13 (P < 0.01). There is no remarkable difference between the silver-doped samples (S01-S04) and the silver/graphene-doped samples (S05-S08), illustrating that graphene cannot significantly change the F_EA of the PVDF nanofiber web. Similarly, in Fig. 1(c), S13 shows the lowest F_β, 83.1%, which is notably lower than those of the treated samples (P < 0.01). Moreover, the four repeated samples (S09-S12) show higher F_β among the treated samples, whereas the silver/graphene-doped samples (S05-S08) exhibit lower F_β compared with the silver-doped samples (S01-S04). This means adding 0.5 wt% of graphene into the PVDF nanofiber web elevates F_β, but graphene above 0.5 wt% in turn reduces F_β, even below that of the silver-only doped samples. In addition, the effect of TCD on F_β can be determined by comparing S01 with S03, S02 with S04, S05 with S07, and S06 with S08. However, there is no significant difference between each pair of samples, implying the impact of TCD on F_β is not significant. Similarly, the impact of RSC on F_β can be evaluated by comparing neighboring samples, i.e., S01 with S02, S03 with S04, S05 with S06, and S07 with S08. The results show that F_β increases with increasing RSC, which means RSC influences the F_β of the PVDF nanofiber web. As shown in Fig. 1(d), S13 exhibits the highest F_α (14.6%), which is almost three times higher than those of the treated samples (P < 0.01), and S08 exhibits the lowest F_α (1.7%). This result can be ascribed to the additives, i.e., silver nanoparticles and graphene, which serve as nucleation agents to promote β-phase formation. However, there is no notable difference between the silver-doped samples and the silver/graphene-doped samples, which means graphene does not influence F_α. In Fig. 1(e), the four repeated samples (S09-S12) and the reference sample S13 show a lower F_γ, ~2%, than the other samples, but the difference is not significant (P > 0.05).
Overall, although the electrospinning process promotes β-phase formation, doping with additives, i.e., graphene and/or silver nanoparticles, can further increase F_β and reduce F_α. However, adding too much graphene into the PVDF nanofiber web can reduce F_β. In addition, TCD has no significant influence on F_β, whereas a higher RSC yields a higher F_β.
XRD Analysis of PVDF Nanofiber Web
The crystalline structures of the PVDF nanofibers were identified from XRD patterns as well. On the basis of the FTIR results, two typical samples, the untreated reference sample S13 and the treated representative sample S08, were compared and discussed for XRD analysis. Figs. 2(a) and 2(b) present the XRD patterns of S13 and S08, respectively. The characteristic peaks at 18.3° and 19.2° are α-characteristic diffractions, the two peaks at 20.6° and 36.5° correspond to β-characteristic diffractions, and the peak at 20.2° refers to the γ-characteristic diffraction. [9,15] It can be seen that S13 (Fig. 2a) has a clear peak at 18.3°, attributable to α-phase diffraction, while this peak does not appear for sample S08 (Fig. 2b). This result is in agreement with the FTIR analysis. On the other hand, S08 has two distinct peaks at 26.7° and 38.2°, which are attributed to graphene and silver nanoparticles, respectively. It should be noted that the peak at 26.7° is the graphite-characteristic diffraction rather than a graphene peak. This is because the graphene used in this experiment has a sheet size of about 25 μm with an average thickness of 5-7 atomic layers; as expected, the thin and flexible graphene sheets could be folded within each PVDF nanofiber, whose diameter is several hundred nanometers, during electrospinning. [9,15] Figs. 2(c) and 2(d) display the curve deconvolution of S13 and S08, respectively, in the range from 15° to 25°. The crystallinity of the PVDF nanofiber web, including C_t, C_β, C_γ, and C_α, was calculated from the curve deconvolution of the XRD patterns. Table 3 lists the calculated crystallinity of each phase of the prepared PVDF nanofiber webs. S13 shows the lowest C_t (29.4%), and the treated samples exhibit significantly higher C_t than S13. Remarkably, S08 shows the highest C_t, which reaches 45.5%. Although the F_β of S08 is lower than those of the other treated samples, its C_β is the highest (41.7%) among the treated samples in view of its high C_t. The four samples (S05-S08) with 1 wt% graphene exhibit higher C_β than the four repeated samples (S09-S12) with 0.5 wt% graphene, illustrating that the C_β of the PVDF nanofiber web increases with rising graphene content. In addition, the influence of TCD on C_β, determined by comparing S01 with S03, S02 with S04, S05 with S07, and S06 with S08, shows that the lower TCD gives the higher C_β. Similarly, the impact of RSC on C_β, evaluated by comparing neighboring samples, i.e., S01 with S02, S03 with S04, S05 with S06, and S07 with S08, shows that the higher RSC gives the larger C_β. S08 shows the highest C_γ (~2.7%) whereas S13 exhibits the lowest C_γ (~0.9%). In contrast, S13 shows the highest C_α (~4.3%), which is consistent with the FTIR analysis, implying that doping additives could reduce C_α. Overall, S08 exhibits the highest C_β while S13 shows the lowest C_β, in agreement with the FTIR results. Nevertheless, some results obtained from the XRD analysis differ from those of the FTIR analysis. One conflicting result is that C_β increases continuously as GC rises, while the maximum F_β is reached at a GC of 0.5 wt%. Another is that both TCD and RSC affect C_β, whereas only RSC has an impact on F_β. These results clearly imply that F_β is different from C_β.
Design of Experiment Analysis via Minitab
The Minitab program was employed to evaluate the effects of GC, TCD, and RSC on F_β, C_t, and C_β of the PVDF nanofiber web; their interactions were also analyzed. Fig. 3(a) displays the effects of GC, TCD, and RSC on F_β of the PVDF nanofiber web. It shows that only GC has a significant influence on F_β, since only GC exceeds the dashed line (standardized effect of 3.182); the other two parameters (TCD and RSC) do not significantly affect F_β. On the other hand, in Fig. 3(b), all three parameters have a significant impact on C_t, and the most contributive parameter is again GC. It is noteworthy that the interaction between GC and TCD is a significant parameter for C_t. Similarly, in Fig. 3(c), GC, TCD, and RSC all have a significant influence on C_β, and the largest contribution to C_β also comes from GC, but there is no significant interaction among them. According to Fig. 3(d), the main-effects plot for C_β, the relationship between each parameter and C_β can be determined: GC and RSC have a positive relationship with C_β, while TCD has a negative one. Based on these relationships, the optimal electrospinning conditions to prepare PVDF nanofiber with the maximum C_β can be derived. The maximum C_β (41.7%) was obtained when GC = 1 wt%, TCD = 4 cm, and RSC = 2000 r·min−1.

Fig. 2 XRD patterns of (a) the untreated reference PVDF nanofiber web (S13) and (b) the typical treated PVDF nanofiber web (S08) ranging from 10° to 45°, and curve deconvolution of (c) the untreated reference sample (S13) and (d) the typical treated sample (S08) ranging from 15° to 25°.
Role of GC, TCD, and RSC in β-Phase Formation
The statistical analysis shows that GC is the most significant parameter for F_β, C_t, and C_β of the PVDF nanofiber web. Incorporating graphene in the electrospinning solution notably increased the C_t and C_β of the PVDF nanofiber. This is because graphene serves as a nucleation agent that assists β-phase formation, owing to the specific interaction of the surface charge of graphene with the CH2 dipoles of the β-phase. [14] To be specific, graphene has a huge number of free electrons from the π-bonded carbon atoms packed in its honeycomb crystal lattice, making its surface electronegative (upper inset of Fig. 4). Therefore, the graphene surface can attract the electropositive CH2 dipoles to orient on one side of the PVDF chain, inducing β-phase formation. This electrostatic interaction mechanism has been supported by several reported studies. Lou et al. [5] added electronegative silver nanowires into PVDF nanofiber to increase the β-phase content. Mi et al. [16] reported that hydrogen bonds at the PVDF/poly(methyl methacrylate) interface could facilitate β-phase formation by aligning the dipoles. Zhu et al. [3] obtained enhanced β-phase content of PVDF nanofiber web by doping graphene oxide. Furthermore, graphene can effectively confine and direct the arrangement of PVDF chains to promote crystalline formation, an effect defined as "molecule movement restriction" due to graphene's ultra-strong mechanical properties. [7,15] In addition, adding graphene into PVDF can enhance its electrical properties, i.e., permittivity; [7] consequently, the PVDF nanofibers are more efficiently polarized by the electric field during electrospinning, promoting β-phase formation. [21] TCD is the second most significant parameter for C_t and C_β. Changing TCD has a two-sided effect: although reducing TCD increases the electric field, promoting the dipolar polarization of the PVDF nanofiber web and thus enhancing the β-phase content, it may hinder crystal development because of the shortened formation time. However, the DOE analysis shows that a lower TCD is more beneficial than a higher TCD for improving the C_t and C_β of the PVDF nanofiber. Fig. 4 illustrates the PVDF electrospinning process under the optimal condition (TCD = 4 cm and RSC = 2000 r·min−1) derived from the DOE analysis. The route of PVDF nanofiber formation by electrospinning can be divided into two zones, the liquid flow zone and the transition zone. [31] The PVDF solution jet coming out of the Taylor cone first passes through the liquid flow zone. In this zone, the directions of the electric field and the PVDF chains are parallel (Fig. 4), and the PVDF chains cannot be polarized. The jet then flows into the transition zone. As the solution jet is in flight, it gradually transforms from the liquid phase to the solid phase. Simultaneously, the jet is stretched and bent by electrostatic repulsion, forming a nanofiber with a helical structure. Because of this shape deformation, the directions of the electric field and the PVDF chains become perpendicular, which polarizes the PVDF chains to promote β-phase formation. Compared with the traditional TCD of 10-16 cm for PVDF electrospinning, [6,14,32] the lowest TCD in this study is about 4 cm, which is almost the minimum spinning distance for fabricating PVDF nanofiber. This lowest TCD significantly multiplies the electric field, which is about three times higher than that at a TCD of 16 cm. As a result, the elevated electric field polarizes the PVDF nanofiber to facilitate β-phase formation more efficiently. [21,22]
The PVDF nanofiber contains folded graphene of high permittivity; consequently, the polarization effect can be magnified by doping with graphene due to the increased permittivity of the PVDF nanofiber web. This makes the interaction between GC and TCD a significant parameter for C_t. Moreover, the electric force induced by the strong electric field stretches the PVDF nanofiber more intensively, promoting the transformation from α-phase into β-phase, as shown in the bottom inset of Fig. 4. [25] RSC has a significant influence on C_t and C_β of the PVDF nanofiber web as well. This is because the PVDF nanofiber is drawn again during collection at high RSC, resulting in further increased crystallinity. This can be observed in the SEM images of S13 and S08. As shown in Fig. 5, the PVDF nanofibers of S13, fabricated at a low RSC of 5 r·min−1, are entangled with each other, whereas most individual PVDF nanofibers of S08, fabricated at a high RSC of 2000 r·min−1, are oriented, indicating they were stretched when collected. In addition, the average nanofiber diameter of S13 is around 150 nm (Fig. 5a, inset), while that of S08 is below 100 nm (Fig. 5b, inset), also implying the PVDF nanofibers of S08 were drawn at the high RSC.
Piezoelectric Performance Evaluation
To investigate the effect of C_β on the piezoelectric response, the piezoelectricity of two typical samples, S13 and S08, was measured. The Instron machine was employed to apply periodic pressure (10 kPa) to the fabricated sensors, and the open-circuit output voltage of the sensor was measured using the Biopac system, as shown in Fig. 6(a). To simplify the piezoelectric performance evaluation, the peak-to-peak output voltage (V_p-p) was measured and compared in this work. Figs. 6(b) and 6(c) compare the V_p-p of S13 and S08. As expected, S08 exhibits a V_p-p of 0.14 V whereas S13 presents a relatively low V_p-p of 0.06 V, which means the piezoelectric response of S08 is more than twice that of S13. The increased piezoelectric performance is mainly attributed to the higher C_β of S08 (41.7%) in comparison with that of S13 (24.1%), strongly demonstrating the impact of C_β on the piezoelectric performance.
CONCLUSIONS
A series of PVDF nanofiber webs has been fabricated with different GC and diverse electrospinning parameters, i.e., TCD and RSC. Their crystalline phases (α, β, and γ) were then thoroughly analyzed by FTIR spectra and XRD patterns. The effects of GC, TCD, and RSC, as well as their interactions, on the β-phase formation of the PVDF nanofiber were investigated with the Minitab program. The results showed that GC, TCD, and RSC all have a significant effect on the C_β of the PVDF nanofiber web, with GC the most significant. An optimal electrospinning condition (GC = 1 wt%, TCD = 4 cm, and RSC = 2000 r·min−1) to prepare PVDF nanofiber web with high C_β (S08, 41.7%) has been derived, and this optimal sample (S08) exhibited a better piezoelectric response than the untreated reference sample (S13). It should be noted that a large β-phase fraction from FTIR spectra does not imply a high β-phase crystallinity from XRD patterns; additionally, the effects of the three parameters on the β-phase fraction are not the same as those on the β-phase crystallinity, indicating that the crystalline fraction is a different quantity from the crystallinity. Significantly, both GC and the electrospinning parameters TCD and RSC were systematically analyzed via DOE, which can have positive impacts on developing high β-phase content PVDF nanofiber webs for high-performance flexible sensors and generators.

Fig. 6 (a) Schematic of the piezoelectric sensor fabricated using PVDF nanofiber web and the piezoelectric response measurement conditions (applied pressure: 10 kPa, frequency: 3.5 Hz, R_in = 1 GΩ, impact area: 1 cm²), and comparison of the piezoelectric responses of (b) S13 (V_p-p ≈ 0.06 V) and (c) S08 (V_p-p ≈ 0.14 V).
ACKNOWLEDGMENTS
Respiratory Protective Equipment against Byssinosis for Cotton Workers" and The University of Manchester through project AA14512 (UMRI project "Graphene-Smart Textiles E-Healthcare Network"). L.J. and Z.L. were funded by the China Scholarship Council (CSC).
"Materials Science",
"Engineering"
] |
A novel morbillivirus pneumonia of horses and its transmission to humans.
contributed to the studies outlined in this paper. Richard Lawrence, Clinical Superintendent of Medicine, Westmead Hospital, provided valuable discussions on clinical aspects and case presentation. Our investigations were supported by the National Health and Medical Research Council and the Ramaciotti Foundations.
A Novel Morbillivirus Pneumonia of Horses and its Transmission to Humans
To the editor: On September 22 and 23, 1994, veterinary authorities in Queensland and at the CSIRO Australian Animal Health Laboratory were advised of an outbreak of acute respiratory disease in horses at a stable in the Brisbane suburb of Hendra. The trainer of the horses had been hospitalized for a respiratory disease and was in critical condition. At that time, the cause of the horses' illness was unclear and any link between equine and human disease was thought improbable. Poisoning, bacterial, viral, and exotic disease causes were investigated. The history of the horses on this property was considered important (Figure 1). Two weeks before the trainer's illness, on September 7, two horses had been moved to the Hendra stable from a spelling paddock in Cannon Hill (6 km). One of these, a pregnant mare, was sick and died within 2 days. The other horse was subsequently moved on and never became sick. By September 26, 13 horses had died: the mare; 10 other horses in the Hendra stable; one horse, which had very close contact with horses in the Hendra stable, on a neighboring property; and one which had been transported from the stable to another site (150 km). Four Hendra horses and three others (one in an adjacent stable, one moved to Kenilworth, and one to Samford) were later considered to have been exposed and recovered from the illness. Some of these horses were asymptomatic. Nine Hendra horses have remained unaffected. The sick horses were anorexic, depressed, usually febrile (temperature up to 41°C), showed elevated respiratory rates, and became ataxic. Head pressing was occasionally seen, and commonly, a frothy nasal discharge occurred before death.
On September 14, a stablehand at the Hendra stable developed an influenza-like illness characterized by fever and myalgia. The next day, the horse trainer also became ill with similar symptoms. Both had close contact with the dying mare, particularly the trainer who was exposed to nasal discharge while trying to feed her; he had abrasions on his hands and arms. The stablehand, a 40-year-old man, remained ill for 6 weeks and gradually recovered. Besides myalgia, he also had headaches, lethargy, and vertigo. The trainer, a 49-year-old man, was a heavy smoker and showed signs consistent with Legionella infection. He ultimately required ventilation for respiratory distress and died after 6 days (Selvey L, et al. A novel morbillivirus infection causing severe respiratory illness in humans and horses, submitted).
At the beginning of the diagnostic investigation in horses, African horse sickness, equine influenza, and hyperacute equine herpes virus were excluded as possible causes by antigen-trapping enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), or electron microscopy. Tests for Pasteurella, Bacillus anthracis, Yersinia, Legionella, Pseudomonas, and Streptobacillus moniliformis were negative, and poisons consistent with the clinical and gross pathology, such as paraquat, were excluded by specific testing.
However, within 3 days, a syncytial forming virus was detected in Vero cell cultures inoculated with diseased horse tissues and shortly thereafter was seen to grow in a wide range of cells. These included MDBK, BHK, and RK13 cells. Subsequently, a syncytial forming virus also was isolated in LLC-MK2 cells that had been inoculated with tissue from the deceased trainer's kidney. The isolation of these viruses and their preliminary characterization by electron microscopy, immunoelectron microscopy, serology, and genetic analyses are described elsewhere (Murray PK, et al. A new morbillivirus which caused fatal disease in horses and man, submitted).
In summary, ultrastructural analysis showed that the virus is a member of the Paramyxoviridae family. It is enveloped, pleomorphic (varies in size from 38 nm to more than 600 nm), and is covered with 10 nm and 18 nm surface projections. It contains herringbone nucleocapsids that are 18 nm wide with a 5 nm periodicity. The presence of 'double-fringed' surface projections on this virus is considered unique. Immunoelectron microscopy showed that both the horse and the human virus react with convalescent-phase horse sera and with sera from the two human cases.
PCR primers were synthesized from consensus Paramyxoviridae matrix protein sequences and tested against the horse virus. Those specific for paramyxoviruses and pneumoviruses did not bind, but one pair of morbillivirus primers gave a 400 bp product. Determination of the sequence of this product enabled the synthesis of horse virus-specific primers. Phylogenetic analyses of the matrix protein sequence indicate that the virus is unique and distantly related to other known members of the group. A comparison of the translated M protein sequence shows that it has 50% homology with the morbillivirus group (80% if conservative amino acid substitutions are included). This distant relatedness is emphasized by our observations that neutralizing antisera to measles virus, canine distemper virus, and rinderpest virus failed to neutralize the virus.
The viruses isolated from the horses and the trainer are ultrastructurally identical. Serum from the horses and the two human cases specifically cross-neutralize the virus, and the horse virus-specific PCR primers provide a positive reaction with the human virus isolate. Therefore, the horses and the trainer were infected with the same virus.
At the beginning of the diagnostic investigation, tissues from the lungs and spleens of diseased horses were injected into two recipient horses. After 6 and 10 days, the recipient horses became ill with high fever and severe respiratory signs, demonstrating that the disease was transmissible. Two days later the horses were destroyed. The equine morbillivirus
was isolated from tissues from both of these horses. To document that the isolated horse virus was pathogenic, experimental transmission tests were also conducted. Two additional horses received a total of 2 × 10⁷ TCID50 of tissue culture virus by intravenous inoculation and by intranasal aerosol. Both horses became seriously ill and, after a short, severe clinical episode, were destroyed 4 and 5 days after exposure. At necropsy, they showed gross and histopathologic lesions that were primarily respiratory and consistent with the natural disease. Virus was reisolated from their lungs, liver, spleen, kidney, lymph nodes, and blood. The pathology of this infection is interesting. In horses, the dominant gross pathology is lesions in the lungs. These are congested and edematous with prominent lymphatic dilation in the ventral margins. In natural cases, the airways were usually filled with thick, fine, stable foam which was occasionally blood-tinged; this was not seen in the experimental cases. Histologically, in horses, there is interstitial pneumonia and proteinaceous edema with pneumocyte and capillary degeneration. Virus can be located in endothelial cells by immunofluorescence, and syncytial cells also could be seen in blood vessel walls, confirming the vascular tropism of this virus (Murray PK, et al, submitted). The trainer's post-mortem findings showed similarities to those of the horses (Selvey L, et al, submitted).
No further clinical cases of disease have been seen in horses or humans since this outbreak. Serologic surveillance of people who had close contact with the sick horses, mostly stable workers, veterinary pathologists, animal health field staff, or people who lived in the vicinity of the affected stables, was negative (Selvey L, et al, submitted).
Serologic testing of all horses on quarantined properties and within 1 km of the Hendra stable, and a sample of horses from the rest of Queensland was undertaken (Table 1). A total of 1,964 horses were tested from more than 630 premises. The negative results from this testing also indicate that the infection has not spread. In the entire horse survey, only seven horses, all from the Hendra property and the adjoining stables, were positive. Four of these animals had been clinically affected, but three were asymptomatic. Because of the potential risk and the difficulty in establishing freedom from infection, these seven recovered horses were later destroyed.
Although persistent virus excretion or carrier states are not known to occur in other morbillivirus infections, this equine virus is unique and it cannot be presumed to behave similarly. Australian veterinary authorities are now satisfied that the incident is over.
We have described a newly recorded disease of horses with an obvious zoonotic potential; moreover, the causative agent was previously unrecorded and is significantly different from other members of its genus, morbillivirus. Infection seems to have spread from the mare that first showed the now characteristic clinical signs, to other horses in the same stables, to a horse in close contact from an adjacent stable, and also to two human attendants. Clearly, this outbreak was not highly contagious and it rapidly resolved. However, the virus is highly pathogenic with 65% of naturally infected horses and all four experimental horses dying.
Further investigations of the virus and the disease are now warranted since it could reemerge in Australia or elsewhere. Investigations of its origin, its replication, its pathogenesis, and its possible occurrence elsewhere in connection with equine respiratory disease are merited.
Graph SLAM-Based 2.5D LIDAR Mapping Module for Autonomous Vehicles
This paper proposes a unique Graph SLAM framework to generate precise 2.5D LIDAR maps in an XYZ plane. A node strategy was invented to divide the road into a set of nodes. The LIDAR point clouds are smoothly accumulated in intensity and elevation images in each node. The optimization process is decomposed into applying Graph SLAM on nodes’ intensity images for eliminating the ghosting effects of the road surface in the XY plane. This step ensures true loop-closure events between nodes and precise common area estimations in the real world. Accordingly, another Graph SLAM framework was designed to bring the nodes’ elevation images into the same Z-level by making the altitudinal errors in the common areas as small as possible. A robust cost function is detailed to properly constitute the relationships between nodes and generate the map in the Absolute Coordinate System. The framework is tested against an accurate GNSS/INS-RTK system in a very challenging environment of high buildings, dense trees and longitudinal railway bridges. The experimental results verified the robustness, reliability and efficiency of the proposed framework to generate accurate 2.5D maps, eliminating the relative and global position errors in the XY and Z planes. Therefore, the generated maps significantly contribute to increasing the safety of autonomous driving regardless of the road structures and environmental factors.
Introduction
Maps are a very important pillar to enable autonomous driving by encoding the real world under good weather factors and environmental conditions. Maps are mainly used to localize vehicles in the XY plane for safely conducting autonomous maneuvers with respect to other road users [1,2]. In the Z plane, the maps are utilized to estimate pitch and roll angles as well as to measure distances to other vehicles [3]. Therefore, the mapping module must generate precise maps in terms of accurately positioning roads in the XY and Z planes and integrating the environmental representations at a high definition level.
Tunnels, dense trees, high buildings and railways are considered challenging road structures that may deflect and obstruct satellite signals even with the use of accurate GNSS/INS-RTK (GIR) systems. This leads to ghosting effects in the XY map domain because of the relative-position errors. The ghosting effects decrease the localization accuracy by changing the road pattern compared to that encoded in the observation data during autonomous driving, as illustrated in Figure 1a. In the Z plane, the altitudinal relative-position errors change the road slope and create virtual/unreal bumps in the road consistency. Both of these phenomena indicate global-position errors of the map in the Absolute Coordinate System (ACS). Thus, increasing the robustness of the mapping module against challenging environments is a critical demand to globalize autonomous driving safely. Graph SLAM (GS) is a dominant approach to increase map accuracy in a probabilistic framework. Thrun applied Graph SLAM, showing promising results to enhance the map and trajectory positions [4]. Grisetti then explained the implementation steps in a simpler manner, illustrating experimental results on a small robotics platform [5]. Olson detailed some technical aspects to magnify the utilization level, such as strategizing loop-closure detection, covariance estimation and switchable constraints of GPS modeling [6]. Roh proposed a framework called ISAM to generate maps based on LIDAR 3D point clouds and camera images [7]. The altitudinal features, such as walls and building fronts, are segmented in the point clouds to create a set of polygons. Loop-closure events are detected according to the similarity score between the segmented walls to compensate the localization errors. The walls are decomposed into lines in the Z-direction, and a full 3D map is then constructed by incorporating the elevation measurements of the lines. All of these components are integrated into a pose-SLAM platform to optimize the relationships between the vehicle positions and generate precise maps. Triebel suggested a Graph SLAM approach to generate 3D maps based on dividing, clustering and classifying the point clouds into multilayers [8]. A cloud is divided into a set of fixed-size cells with respect to the height interval. The divided cells are then determined to represent vertical and horizontal structures in the real world. This tactic facilitates the loop-closure detection and improves the matching process between point clouds using iterative closest point (ICP) [9]. As each point cloud is assigned to a robot position, a set of constraints is designed to constitute the relationships between positions in ACS. Finally, an optimization process is applied to optimize the robot trajectory as well as the point cloud distributions in the map domain. Recently, an impressive effort has been demonstrated to apply SLAM based on ground feature extraction in the LIDAR point cloud [10]. The extracted features are classified into edge and planar groups and used by the Levenberg-Marquardt optimization technique to estimate the vehicle poses in consecutive scans. Another method called LIO-SAM has been proposed to incorporate the estimated motion from an inertial measurement unit with the results of LIDAR scan matching for optimizing the vehicle trajectory [11].
Most SLAM approaches proposed to date operate in the point cloud domain [12] and rely on point distribution pattern-based iterative matching strategies to process xyz positions at once. This may reduce the utilization of environmental features to compensate relative position errors due to the LIDAR sparsity. Moreover, wrong matching results might be produced because of changes in the distribution patterns, especially in the Z-direction at the revisited areas. In featureless areas and wide road segments, the features in the Z-direction do not play a significant role in enhancing the matching quality, as illustrated in Figure 2a. Furthermore, the altitudinal features in urban environments may also negatively affect the matching process. For example, stopped cars at a traffic signal might be registered in a point cloud and prevent the encoding of the real stationary environmental features, as shown in Figure 2b. The vehicle may revisit the same traffic signal during the map-data collection, encoding either different patterns of stopped cars or the real stationary environmental features. Consequently, the matching results in both cases against the point cloud registered in the first visit at the traffic signal will be wrong.
The 2.5D maps are very promising components to power autonomous vehicles because they reduce the storage size, provide dense details, decrease the data representation and sufficiently enable real-time processes compared to 3D point cloud maps. In addition, the 2.5D maps provide elevation values that can be used to enable many applications, such as localization [13], pitch and roll angle calculations [3,14] and obstacle distance estimation [15]. Furthermore, they allow the use of map matching models in the image domain [16] instead of point cloud-based registration methods [17,18]. This significantly decreases the mismatching events because LIDAR reflectivity is used instead of the 3D point distribution patterns [19]. This enables a continuous dense representation of environments, reduces the processing time of compensating relative position errors and simplifies the implementation process. However, applying SLAM technologies to generate accurate 2.5D maps using LIDAR elevation and intensity data is still challenging and rarely addressed [20][21][22]. We previously proposed a GS framework to generate precise LIDAR elevation maps in the Z-direction without processing the map in the XY plane [21]. However, the need for 2.5D maps (elevation and intensity) has significantly emerged to enable many applications, such as 3D localization, pitch and roll online calibration, and building 3D perception models based on fusing camera and LIDAR data. Hence, we present in this paper a unique GS framework to generate precise 2.5D LIDAR maps, emphasizing robustness and reliability in very challenging environments and road structures.
Key Solution and Proposed Strategy
The most important pillars to obtain reliable results from SLAM are the utilization strategy of the environmental features to compensate the relative position errors and the mechanism of the optimization process. Therefore, we invented a new tactic to fully apply GS at the node level and in the image domain instead of the vehicle position level and the point cloud domain. In addition, the optimization process is decomposed into two phases, intensity in the XY plane (GS-XY) and elevation in the Z plane (GS-Z), and the integration strategy of both maps in the Absolute Coordinate System (ACS) is referred to as GS-XYZ.
Node Domain
The intensity and elevation maps are encoded by dividing the road into a set of nodes, and each node represents an environment area in the real world. A LIDAR point cloud is cut at 0.3 m in the Z-direction with fixed width w and height h, called a LIDAR frame. This cutting threshold is simply designated to encode curbs, road edges, painted landmarks and the lower parts of poles, barriers, trees, traffic lights, etc. In addition, it prevents the moving road users from being presented in the map. The cut point cloud is converted into a grayscale image to represent the road surface, as shown in Figure 3a. The frames are accumulated in an intensity image (road surface) based on the dead reckoning (DR) position estimation X_t^DR in Equation (1) [23],
where ve_t is the vehicle velocity and Δt is the time interval with respect to the previous position in the XY plane. DR is used to preserve smooth measurements of the vehicle trajectory inside the nodes and avoid local jumps of GPS signals. The elevation values of the road pixels in the intensity image are simultaneously assigned to a floating-point matrix called the elevation image, as illustrated in Figure 3b, applying an equation similar to (1) in the Z-direction. The accumulation process is terminated to produce a node when the area (i.e., width W and height H) of the corresponding intensity image exceeds 1 M pixels. The top-left corner of each node is used to identify the XY position in ACS. The xy coordinates of the top-left corner are obtained from the minimum/maximum vehicle positions in the x/y directions inside the node, as demonstrated in Figure 3a. For the node identification in the Z plane, the average pixel value in the elevation image is calculated. These arrangement, accumulation and identification procedures are collectively referred to as the node strategy.
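As a rough illustration of this node strategy, the sketch below accumulates poses with a dead-reckoning update and closes a node at the 1 M pixel area threshold. Equation (1) is described only in terms of the velocity ve_t and the interval Δt, so the yaw-rate term and all identifiers here are our own assumptions, not the paper's code.

```cpp
#include <cmath>

struct Pose2D { double x, y, yaw; };

// Assumed form of the DR update in Equation (1): integrate velocity over
// the frame interval (the yaw-rate term is our assumption, not the paper's).
Pose2D deadReckon(const Pose2D& prev, double velocity, double yawRate, double dt) {
    Pose2D p;
    p.yaw = prev.yaw + yawRate * dt;
    p.x   = prev.x + velocity * std::cos(p.yaw) * dt;
    p.y   = prev.y + velocity * std::sin(p.yaw) * dt;
    return p;
}

// A node is closed once its intensity image covers more than 1 M pixels,
// matching the W x H area threshold described above.
bool nodeComplete(int widthPx, int heightPx) {
    return static_cast<long long>(widthPx) * heightPx > 1000000LL;
}
```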
GS Optimization Strategy in Node Domain
Figure 4 illustrates our vision of decomposing maps into elevation and intensity components using the node strategy. Graph SLAM is applied twice, to optimize the intensity map and then the corresponding elevation map. The intuition behind this tactic is that the most dominant stationary pattern to compensate the relative-position errors in the XY plane is the road surface. This is because the road surfaces are less subject to change compared to the higher features. Accordingly, the altitudinal positions of the road surface can then be easily optimized by forcing the relative position errors in the Z plane to be zero at the loop-closure areas in the XY plane. Figure 4a shows two nodes at a loop-closure event representing the same road surface in ACS. The relative error in the XY plane is illustrated in Figure 4b by the ghosting effects, whereas the elevation drifting occurs in the Z plane. The coordinates of the top-left corners are first optimized by applying GS in the XY plane (GS-XY) based on the intensity images, as shown in Figure 4c. Accordingly, the xy-correspondences between the nodes' road surfaces become accurate, and the altitudinal relative-position errors can be precisely calculated using the elevation images. Thus, GS is applied a second time (GS-Z) to make these errors as small as possible and bring the nodes to the same Z-level, as indicated in Figure 4d. Consequently, the 2.5D map can then be generated in ACS, as detailed in the next section.
The Proposed Graph SLAM Framework (GS-XYZ)
3.1. Edge Selection and Calculation
The compensation of the relative-position errors is an essential step to apply GS. However, these errors represent local relationships between two nodes. The map generation in ACS can be achieved by optimizing the entire set of relative positions between nodes collectively and globally. Therefore, the coherency, consistency and accuracy between all nodes in the map must be maintained and improved. This demand is met by properly designing the GS cost function. Figure 5a demonstrates the relationships/edges between nodes in the XY plane. Each node possesses three types of edges: sequential, anchoring and potential loop-closure. A sequential edge E_DR expresses the relationship between two consecutive nodes N_i and N_{i-1} based on the top-left corners X and can be calculated using Equation (2),
where f(·,·) is a simple function to calculate the relative position, X_DR is the edge constraint representing the dead reckoning relative position, and Σ is the standard deviation of the dead reckoning measurements.
An anchoring edge E_GPS is used to place a node in the real world according to the GIR system, as in Equation (3),
where X_GPS is the edge constraint representing the node position based on the GIR system and Γ is the covariance error. These edges determine the position in ACS when merging many participating nodes in the same area based on the smallest covariance error.
An image edge E_img is mainly issued to compensate the XY relative position between two nodes in a revisited road segment and to estimate the common area, as in Equation (4),
where X_img is the edge constraint based on matching the environmental features. The matching calculation is triggered when a loop-closure event between two nodes is detected. The identification strategy by top-left corners facilitates the detection process. The coordinates of the corners do not rely on the driving direction in the scanned environment. In other words, if the vehicle trajectory were in the upper lane (opposite driving direction) in Figure 3a, the xy-coordinates of the corner would be exactly the same. This makes the node distribution in ACS very coherent and homogeneous. Therefore, the loop-closure events can be detected using a set of distance thresholds, such as the driving distance between two node candidates, the Euclidean distance between two top-left corners and the driving distance between two consecutive loop-closure events. Accordingly, a set of node pairs that potentially share a considerable area in the real world is obtained.
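The bodies of Equations (2)-(4) did not survive extraction. Assuming the standard quadratic edge costs of Graph SLAM and the weights Σ, Γ and Ω named above, a plausible reconstruction is the following; the exact scaling used in the paper may differ.

```latex
% Plausible reconstruction of Equations (2)-(4); the precise forms are assumptions.
\begin{aligned}
E_{DR}  &= \big(f(X_i, X_{i-1}) - X_{DR}\big)^{\top} \Sigma^{-1} \big(f(X_i, X_{i-1}) - X_{DR}\big) && (2)\\
E_{GPS} &= \big(X_i - X_{GPS}\big)^{\top} \Gamma^{-1} \big(X_i - X_{GPS}\big) && (3)\\
E_{img} &= \big(f(X_i, X_j) - X_{img}\big)^{\top} \Omega^{-1} \big(f(X_i, X_j) - X_{img}\big) && (4)
\end{aligned}
```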
As each node encodes a wide segment of the road surface in the intensity image, phase correlation (PhC) is applied to the detected loop-closure events to estimate X_img in Equation (4) and locally bring the common areas between every two nodes to the same xy-coordinates in ACS. PhC is widely utilized in the image processing and computer vision fields to solve various issues [24][25][26]. Technically, PhC relies on the shared pattern between two images to estimate xy-translation offsets using the Fourier frequency transform (FFT) [27]. The robustness of PhC was the main motivation to invent the node strategy and design a new Graph SLAM framework. Moreover, PhC provides a correlation matrix that can be employed to estimate the covariance error Ω in Equation (4). Accordingly, PhC has been modified to improve the performance on the intensity images, increase the matching accuracy and estimate the common area in each node of a loop-closure event. Figure 5b shows two nodes of a detected loop-closure event with the results of applying the enhanced PhC. The nodes are perfectly matched, and the common area specifications in each node are provided without any prior information about the top-left corner coordinates.
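The enhancements the authors made to PhC are not spelled out here, but the baseline operation is available off the shelf in OpenCV, which the paper integrates. A minimal sketch, treating the returned response value as a crude matching confidence (our assumption, not the paper's covariance model):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Baseline phase correlation between the intensity images of two nodes.
// Both images must have the same size. Returns the xy-translation offset;
// 'response' can serve as a rough matching-confidence score.
cv::Point2d matchNodes(const cv::Mat& nodeA, const cv::Mat& nodeB, double& response) {
    cv::Mat a, b;
    nodeA.convertTo(a, CV_32FC1);   // phaseCorrelate requires float images
    nodeB.convertTo(b, CV_32FC1);

    cv::Mat window;
    cv::createHanningWindow(window, a.size(), CV_32FC1);  // suppress edge effects

    return cv::phaseCorrelate(a, b, window, &response);
}
```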
Cost Function Concept (Example: GS-XY)
The cost function is designed to optimize the edges and minimize the relative and global errors in the XY plane as in Equation (5).
The optimization process (GS-XY) can be expressed mathematically by Equation (6), which encodes the relationships of each node with the other nodes in the map through the matrix H.
The diagonal elements in H imply the summing up of the weighted confidences of the sequential and anchoring edges, whereas the off-diagonal elements indicate the loop-closure events weighted by the PhC matching scores. The vector b accumulates the weighted errors of all the edges connected to each node. A set of translation offsets ΔX is obtained by solving Equation (6) and added to the top-left corners of the map nodes. The offsets move the nodes in the XY plane to the optimal positions in ACS, eliminating the ghosting effects and maintaining smooth road contexts.
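This description of H, b and ΔX matches the usual linear Graph SLAM system H·ΔX = −b over node translations. A minimal sketch with Eigen (which the paper integrates) might look as follows; the scalar information weights, sign conventions and names are our assumptions:

```cpp
#include <Eigen/Dense>
#include <vector>

struct Edge {
    int i, j;       // node indices (j < 0 marks an anchoring edge)
    double ex, ey;  // current error of the constraint, e.g. (X_i - X_j) - measurement
    double w;       // scalar information weight (assumed; e.g. 1/sigma^2 or PhC score)
};

// Build and solve H * dX = -b for the 2D translation offsets of n nodes.
Eigen::VectorXd solveGsXy(int n, const std::vector<Edge>& edges) {
    Eigen::MatrixXd H = Eigen::MatrixXd::Zero(2 * n, 2 * n);
    Eigen::VectorXd b = Eigen::VectorXd::Zero(2 * n);

    for (const Edge& e : edges) {
        for (int d = 0; d < 2; ++d) {
            double r = (d == 0) ? e.ex : e.ey;
            int ii = 2 * e.i + d;
            H(ii, ii) += e.w;                  // diagonal: summed edge confidences
            b(ii)     += e.w * r;
            if (e.j >= 0) {                    // pairwise (sequential/loop-closure)
                int jj = 2 * e.j + d;
                H(jj, jj) += e.w;
                H(ii, jj) -= e.w;              // off-diagonal: loop-closure coupling
                H(jj, ii) -= e.w;
                b(jj)     -= e.w * r;
            }
        }
    }
    return H.ldlt().solve(-b);                 // translation offsets dX
}
```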
Transforming GS-XY to GS-Z
The style of applying GS-Z on the elevation images is similar to GS-XY in terms of the edge concept and cost-function design. On the other hand, calculating the elevation relative position errors must be strategized. In our previous work [21], the common areas detected by PhC at each loop-closure event were used to calculate the elevation errors. PhC may provide inaccurate lateral matching results on wide roads, where the shared landmarks are not sufficient (Figure 6a), as well as wrong correspondences in the longitudinal direction between nodes of critical environments, such as tunnels, where the road pattern is identical. This leads to wrong calculations of the altitudinal errors in the common areas, increases the excluded edges in the optimization process and may negatively influence the optimization in the Z plane by producing unsmooth local road context in the elevation maps.
The inaccurate PhC matching results are overcome by adding ΔX of GS-XY to the nodes' top-left corners, as demonstrated in Figure 6b. Accordingly, a common area in node_i (U × V) can be projected accurately onto node_j regardless of the area previously estimated by PhC in node_j. These two areas are guaranteed to represent the same environment in the real world, and they must be at the same Z-level. Therefore, a loop-closure elevation edge Z_img is precisely estimated by calculating the average altitudinal error in Equation (7) between the two areas based on the true pixel correspondences.
The cost function of GS-Z is designed in Equation (8) using edge concepts similar to those of GS-XY, as illustrated in Figure 6c, and a set of Z-offsets is obtained accordingly. The z-offset of the i-th node is added to all pixels in the corresponding elevation image, i.e., updating the altitudinal average value identifying the node in the Z plane. The elevation map is then generated accurately by rearranging the nodes' elevation images in ACS.
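A minimal sketch of the altitudinal edge around Equation (7): average the per-pixel elevation difference over the projected common areas. Treating unfilled pixels as zero-valued and masking them out is our assumption:

```cpp
#include <opencv2/core.hpp>

// Average altitudinal error between the common areas of two elevation images
// (a plausible reading of Equation (7)); roiI and roiJ must have equal size.
double altitudinalEdge(const cv::Mat& elevI, const cv::Mat& elevJ,
                       const cv::Rect& roiI, const cv::Rect& roiJ) {
    cv::Mat a = elevI(roiI), b = elevJ(roiJ);  // CV_32FC1 elevation patches
    cv::Mat diff = a - b;

    // Ignore pixels never filled during accumulation (assumed marked as 0).
    cv::Mat valid = (a != 0) & (b != 0);
    return cv::mean(diff, valid)[0];           // Z_img edge constraint
}
```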
Experimental Platform and Test Course
4.1. Platform Configuration and Framework Setups
Figure 7 shows the robotics platform used to collect the map data. The vehicle is equipped with a Velodyne LIDAR 64 for 360° scanning of the environment and generating 3D point clouds. A coupled GIR system, POSLV 220, is deployed in the trunk to receive satellite signals and estimate the vehicle position, acceleration, velocity and angular parameters. After collecting the map data, these measurements are post-processed to produce very accurate vehicle trajectories. The LIDAR point clouds are then assigned to the trajectories to generate the maps [28].
According to the proposed framework, the size of a LIDAR frame w × h is 512 × 512. The area threshold for a node to be generated is set to W × H = 1 M pixels. The pixel resolution is 0.125 m in the intensity image and 0.01 m in the elevation image (direct storage of the altitudinal information in a floating-point matrix). A LIDAR frame is generated every 100 ms, whereas the GIR system provides measurements every 10 ms. Therefore, a synchronization process is applied based on the timestamps to estimate the vehicle position as soon as a LIDAR frame is created. Accordingly, DR is used to estimate the vehicle position in the XY and Z planes as explained in the node strategy. The processing unit has an Intel Core™ i7-6700 CPU working at 3.40 GHz with 64 GB of RAM. The operating system is Windows 10 (64-bit), and the system was coded in the VS-2010 C++ environment, integrating the OpenCV 2.3.1 and Eigen libraries. The FFTW library was integrated into the programming environment, and the processing time to estimate the relative position between two nodes by PhC is around 20 ms [29].
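The synchronization step is not detailed beyond the two rates; one simple realization, assuming sorted GIR timestamps and nearest-neighbor selection in time (our assumption), is:

```cpp
#include <algorithm>
#include <vector>

// Return the index of the GIR measurement closest in time to a LIDAR frame.
// girStamps must be sorted ascending; at ~10 ms spacing the nearest sample
// is at most ~5 ms away from the 100 ms frame timestamp.
std::size_t nearestGirIndex(const std::vector<double>& girStamps, double frameStamp) {
    auto it = std::lower_bound(girStamps.begin(), girStamps.end(), frameStamp);
    if (it == girStamps.begin()) return 0;
    if (it == girStamps.end())   return girStamps.size() - 1;
    std::size_t hi = static_cast<std::size_t>(it - girStamps.begin());
    std::size_t lo = hi - 1;
    return (frameStamp - girStamps[lo] <= girStamps[hi] - frameStamp) ? lo : hi;
}
```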
4.2. Test Course
The proposed framework has been tested in a critical environment to emphasize its robustness and reliability against the GIR system. Tunnels are very efficient environments to test any proposed SLAM method because they have no surrounding environmental features that may differ according to the vehicle positions or the driving lane, i.e., the road surface is identical across trials, and deviations can easily be observed on the painted landmarks. The Yamate Tunnel is an 18.2-km highway road in Tokyo that ends at Ohashii Junction at a 30-m underground depth. It is the world's longest tunnel and consists of two tubes. Each tube enables a single driving direction and contains two lanes. In order to extend the course length and increase the size of the map data, we started driving the vehicle from Yono Junction, as shown in Figure 8a. The course length therefore becomes 34 km, including a considerable open-sky area before entering the tunnel. As driving inside a tube is in one direction, we scanned the first tube twice, driving at different velocities for each scan. This increases the number of nodes and makes the optimization of the relationships between nodes very challenging, as detailed in the next section.
Graph SLAM in the XY Plane
The node strategy has led to a unique design of the GS framework that reduces the map data size and decreases the processing time of the optimization process. This allows researchers to maintain a simple and easy arrangement of the intensity and elevation values of the road surface in the real world. The decomposition of the map into these two components facilitates the constitution of the relationships between nodes by the cost function. The test course was scanned by 283 nodes (Figure 8f). On the other hand, the two scans are considerably deviated by GIR from the entrance of the Yamate Tunnel until its end. The relative position X_img is precisely estimated using PhC by extracting the coordinates of the common areas in the corresponding nodes' images, as shown in Figure 8g. In order to provide a holistic assessment of PhC against the GIR results, Figure 9a illustrates the loop-closure edges along the two scans. Each edge represents the difference in the x and y directions of the top-left coordinates of the detected common areas between a pair of nodes. The difference should be small because a common area represents the same road segment in the real world, as can be observed in Figure 8. GIR is massively affected inside the tunnel, and the two scans have large deviations of up to 4 m. This indicates the low map quality and the risky ghosting effects in representing the road surface. Moreover, it refers to different locations of the road segments in ACS.
PhC has sufficiently compensated the relative position errors based on matching the static environmental features in the node images. The cost function significantly assigns these compensations to the graph together with the sequential and anchoring edges. The graph is then optimized, and a set of translation offsets of the top-left corners is obtained. Figure 9b,c show the GS offsets of the nodes in the Y and X directions. The offsets are small in the open-sky areas, i.e., the nodes have the same positions as GIR in ACS. This proves the robustness of the proposed framework to generate the same map quality as GIR in such environments, because GIR maps can there be considered ground truth. The offset profiles demonstrate a continuous change of the nodes' positions inside the tunnel. This implies the reliability of the proposed framework to detect the low-accuracy areas and fix the relative-position errors accordingly. In addition, the offsets differ between the two maps in the same area.
This implicitly illustrates the influence of the anchoring edges in determining the global position of the combined map based on the correct integration of the relevant covariance errors in ACS. Figure 10 shows images of the GIR and GS-XY maps at different places inside the Yamate Tunnel. The GIR map images demonstrate different ghosting patterns, whereas the GS map provides accurate road-surface representations. This indicates the robustness of the framework to constitute and optimize the node positions significantly regardless of the road structure and environment types.
Figure 11 shows the wrongly matched edge of Figure 9a (open-sky) with the combined results using PhC and the GIR system, respectively. PhC provides an inaccurate estimation of the longitudinal relative position, whereas a true combination is obtained in the GIR combined map image. Figure 11e shows the corresponding GS map image with the same quality as the GIR map. Thus, GS-XY can be considered a filtration process for the wrong results of PhC before applying GS-Z, as observed when applying our previous work [21]. This increases the number of correct altitudinal edges between nodes. Moreover, decomposing GS-XYZ into two phases, in the XY plane and then in the Z plane, makes the calculation of the altitudinal edges very simple and easy. This is because the covariance estimation of Z_img can be set to a constant scalar for all the altitudinal edges in the map.
Graph SLAM in the Z Plane
Figure 12a shows the accuracy profiles of the GIR system in the Z-direction for the two scans. Obviously, the profiles demonstrate higher satellite signal quality in the open-sky area, with different accuracies in some segments because of changes in the traffic flow and driving scenarios along the two scans. The profiles gradually become considerably inaccurate inside the Yamate Tunnel. The red profile in Figure 12b illustrates the altitudinal relative errors in the common areas between nodes of the two scans according to the GIR elevation map. The profile indicates huge altitudinal errors inside the tunnel for areas representing the same road segments in the real world. The green profile in Figure 12b shows the altitudinal error after applying GS-Z and distributing the obtained z-offsets over the nodes of the two scans. GS-Z has perfectly reduced the altitudinal error, significantly recovering the damaged and critical areas compared to the GIR elevation map.
[Figure 12: (a) GIR accuracy profiles in the Z-direction; (b) altitudinal relative errors before (red) and after (green) applying GS-Z; the dotted lines refer to the edges in Figure 13.]
As an edge represents a loop-closure between two nodes, and in order to highlight the reliability of GS-XYZ in publishing the elevation map in ACS, two particular edges/loop-closures are shown in Figure 13a,b. The first edge represents the maximum altitudinal error inside the Yamate Tunnel (1.2 m in Figure 12a), whereas the second edge is closer to the end of the tunnel. For a more precise evaluation, the trajectory of the vehicle in node i (reference) is projected onto node j (target) based on the common areas obtained by GS-XY, as indicated by the dotted lines in Figure 13c,d. The altitudinal error is then calculated at each vehicle position by subtracting the corresponding values in the elevation images of GIR and GS-XYZ. It can be observed that the GIR profiles show considerable and differing Z-direction distances in both edges. This indicates the altitudinal relative-position errors between the two scans as well as the global position errors of the elevation map in these two local road segments. The GS profiles are aligned to make the altitudinal relative errors as small as possible and bring the common areas in the loop-closure events to the same Z-level. One can observe that GS-XYZ has aligned the two scans at the middle distance in the first edge and closer to the first scan in the second edge. The two scans in the first edge (at 1.2 m error) have almost the same accuracy in the real world, as indicated in Figure 12a (nodes 225 and 227), whereas the first scan possesses better accuracy at the end of the tunnel. This indicates the flexibility and robustness of the GS-Z cost function in publishing the elevation map at the most accurate global positions in ACS. To show that the global position of each road segment is determined with respect to the entire set of nodes in the map while preserving a smooth road context in the Z-direction, Figure 13e shows the altitudinal errors of the GS-XYZ elevation map along the entire vehicle trajectory of the first scan compared to the GIR elevation map. This figure indicates that the GS-XYZ map is very accurate and can be used to localize autonomous vehicles precisely, and it is sufficient for other applications such as object distance estimation and roll/pitch angle calibration. This demonstrates the robustness of the designed cost function in compensating the altitudinal error locally between the two nodes of each loop-closure event and globally with respect to the other events in the entire map in ACS.
Conclusions
We proposed a relatively simple Graph SLAM framework to generate accurate LIDAR intensity and elevation maps. The framework operates at the node level and in the image domain instead of the conventional strategies of operating at the vehicle-position level and in the point-cloud domain. This reduces the data size in the optimization process, allows phase correlation to be used to efficiently compensate the xy relative-position errors based on road-surface representations, and facilitates constituting the relationships between nodes regardless of the vehicle trajectory. Moreover, the optimization process is decomposed into two phases: Graph SLAM is applied in the XY plane, and the elevation positions are then optimized in the Z plane. This tactic has been shown to significantly reduce the influence of changes in tall environmental features caused by traffic flow or driving scenarios, compared to the conventional strategy of applying SLAM in the XYZ plane at once, and to guarantee precise generation of the intensity maps in the XY plane. In addition, it enables accurate determination of the elevation errors and facilitates the edge calculation and covariance estimation of the relationships between nodes in the Z plane. The experimental results have verified the robustness and reliability of the proposed framework in generating very accurate and coherent 2.5D maps in the world's longest tunnel, at 30 m depth underground, compared to an accurate and expensive GNSS/INS-RTK system. Therefore, the proposed framework increases the scalability of the mapping module to represent the real world precisely and enable safe autonomous driving.
Patents
This work was patented in Japan in 2020 with the global number: 2020-090099.
Figure 1 .
Figure 1. (a) Ghosting effects (duplication of road landmarks because of scanning the segment twice with different positioning accuracies by GNSS/INS-RTK) in the map and multiple matching patterns with the observation point cloud. (b) Identical map without ghosting and the corresponding matching result.
Figure 2 .
Figure 2. (a) Featureless wide road in the Z-direction. (b) Static features prevented from being encoded at a traffic signal by stopped cars.
Figure 3 .
Figure 3. Accurate node strategy. (a) Intensity image representing the road surface at 0.3 m in the Z-direction with surrounding environments and the top-left corner identification tactic. (b) Elevation image.
Figure 4 .
Figure 4. Graph SLAM at the node level. (a) Two nodes at a loop-closure event with intensity and elevation images. (b) Deviations (relative position errors) between nodes in the XY and Z planes by the GIR system. (c) Applying GS-XY to intensity images, eliminating ghosting and aligning the road perfectly. (d) Applying GS-Z to elevation images using the correct altitudinal-error calculations from GS-XY, making the altitudinal errors as small as possible.
where f(·,·) is a simple function to calculate the relative position, X DR is the edge constraint representing the dead-reckoning relative position, and Σ is the standard deviation of the vehicle velocity inside the driving area between N i and N i−1. These edges are necessary to preserve the smoothness of the road context and prevent deviations in a local area because of false loop-closure detection.
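From these definitions, the dead-reckoning term of the GS-XY cost function plausibly takes the standard pose-graph form; the following is a reconstruction from the quantities named above, not the paper's verbatim equation:

E_DR = sum over i of e_iᵀ Σ_i⁻¹ e_i, with e_i = f(N_i, N_i−1) − X_DR

Deviations of the estimated relative node positions from the dead-reckoning constraints are thus penalized with a weight inversely proportional to the velocity-derived covariance, so that low-confidence segments bend more easily during optimization.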
Figure 5 .
Figure 5. (a) Relationship between nodes in the XY plane and edge types of the GS-XY cost function. (b) Applying PhC to merge two nodes (N3 and Nt) based on matching road landmarks, with estimation of common areas for loop-closure edges.
Figure 6 .
Figure 6. (a) Two nodes merged wrongly by PhC. (b) Correct merging by GS-XY, showing the correct position of the common area in each node. (c) GS framework in the Z plane applied to elevation images based on the areas projected by GS-XY.
Figure 8 .
Figure 8. Yamate Tunnel in Tokyo. (a) Twice-scanned course starting at Yono Junction and ending with the Yamate Tunnel. (b) Top-left corners of course nodes showing loop-closure lines (green) and the first node in the tunnel. (c-g) Three loop-closure events along the course demonstrating, in rows: open-sky area, Yamate Tunnel entrance, and end of the tunnel. (c) Camera image. (d) Patch of a node image in the first scan. (e) Same area in the second scan. (f) Merging scans based on GNSS/INS-RTK. (g) Merging scans based on phase correlation.
The test course was scanned with 283 nodes (16,041 point clouds) in the first scan and 285 nodes (15,473 point clouds) in the second scan. The difference in node numbers was intentionally created by a slight change of the starting point, and the difference in point-cloud numbers reflects the different velocities of the vehicle during the two scans. Loop-closure events between the two scans occur continuously along the course, as illustrated in Figure 8b. Figure 8c-g show three loop-closure edges with the merged images based on PhC and GIR. The nodes of the two maps in open-sky areas are accurately combined in ACS using GIR in Figure 8f.
Figure 9 .
Figure 9. (a) Loop-closure edges (coordinate differences) of the common areas between the two scans in the x and y directions, obtained by the GNSS/INS-RTK system and by phase correlation on the node images. (b) The proposed GS-XY's Y-offsets for the top-left corners of nodes in the first and second scans. (c) X-offsets for the two scans.
Figure 10 .
Figure 10. Wrongly combined node images inside the Yamate Tunnel by GIR (first row) and accurate combination by the GS-XY framework (bottom).
Figure 11 .
Figure 11. Robustness of GS-XY against wrong loop-closure edges. (a) First scan. (b) Second scan. (c) Wrong matching of the two images using phase correlation. (d) Accurate combination of the two maps by the GNSS/INS-RTK system. (e) Accurate combination of the two images using GS-XY, recovering the wrong result in (c) by optimizing the entire set of relationships between the scans' nodes with the designed cost function. The map images in (c,d) are denser than those in (a,b), indicating the importance of safely combining and updating maps in such highway environments.
Figure 12 .
Figure 12. (a) Standard deviation of the GIR system in the Z plane, showing unstable accuracy along elevation images inside the Yamate Tunnel. (b) Large altitudinal differences between common areas in GIR elevation images (red profile) and the significantly smaller differences after applying GS-Z (green profile). The dotted lines refer to the edges in Figure 13.
Figure 13 .
Figure 13. (a) Two nodes of the loop-closure edge with ID 340 in Figure 12a. (b) Two nodes of the loop-closure edge with ID 433 in Figure 12a. The common area in the reference node (first scan) is projected onto the target node (second scan) based on GS-XY. The vehicle trajectory of the first scan in the common area is then used to calculate the altitudinal error in the elevation images. (c,d) GS-Z and GIR elevation trajectories of the two scans in the common areas. A large elevation error was produced by GIR between the two scans, whereas GS-Z minimized the error and brought the nodes to the same Z-level, as they represent the same road segment in the real world. (e) The map elevation error between the two scans along the entire map (the first scan's trajectory was used as reference).
| 15,295 | 2021-12-14T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Elucidation of the origin of chiral amplification in discrete molecular polyhedra
Chiral amplification in molecular self-assembly has profound impact on the recognition and separation of chiroptical materials, biomolecules, and pharmaceuticals. An understanding of how to control this phenomenon is nonetheless restricted by the structural complexity in multicomponent self-assembling systems. Here, we create chiral octahedra incorporating a combination of chiral and achiral vertices and show that their discrete nature makes these octahedra an ideal platform for in-depth investigation of chiral transfer. Through the construction of dynamic combinatorial libraries, the unique possibility to separate and characterise each individual assembly type, density functional theory calculations, and a theoretical equilibrium model, we elucidate that a single chiral unit suffices to control all other units in an octahedron and how this local amplification combined with the distribution of distinct assembly types culminates in the observed overall chiral amplification in the system. Our combined experimental and theoretical strategy can be applied generally to quantify discrete multi-component self-assembling systems.
Since Pasteur 1 discovered the spontaneous resolution in ammonium sodium tartrate and stated that life was intimately related to the asymmetry of the universe, the phenomenon of chirality and how it transfers has intrigued scientists [2][3][4][5] . Because of its profound impact on life science 6,7 , molecular motors 8 , and practical applications such as the asymmetric synthesis 9 and enantioseparation 10 of pharmaceuticals, the chiral amplification occurring in molecular reactions and self-assembling systems is of particular importance. Though systematic combined experimental and theoretical investigations have been well established for chiral amplification in molecular reactions in asymmetric catalysis [11][12][13] and autocatalysis 14 , the elucidation of chiral amplification in self-assembling systems [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29] by explicit structural information and rational theoretical modelling remains a major challenge 30 . Amplification of chirality in self-assembling systems is usually denoted the "sergeants-and-soldiers" effect 30,31 , referring to the ability of a few chiral units (the "sergeants") to control a large number of achiral units (the "soldiers"). Since the pioneering work of Green et al. 31 , many studies have reported on the amplification of chirality and the sergeants-and-soldiers effect in "infinite" systems such as helical polymers [32][33][34][35][36] , one-dimensional supramolecular polymers [37][38][39][40][41][42][43][44] , and two-dimensional (2D) supramolecular networks [45][46][47][48] . Although these studies provided important prototypes to mimic the chirality transfer in biopolymers and substantially advanced the fabrication of functional soft materials 18,[49][50][51] , the product in such infinite systems usually comprises a mixture of polymers (or assemblies) with highly diverse numbers of repeating units; in other words, a "company" containing various kinds of "squads" consisting of distinct numbers of sergeants and soldiers. In contrast, to obtain an in-depth understanding of the amplification of chirality, it is crucial to design systems that provide products with explicit compositions on the molecular level.
An elegant approach to incorporate both chiral and achiral units into discrete assemblies has been described by Reinhoudt and colleagues 52,53 , forming hydrogen-bonded double rosettes (squads) that each contain precisely six units of sergeants and/or soldiers. The product (company) includes only limited kinds of discrete assemblies (squads), allowing for the development of kinetic models to fit the experimental data and to simulate chiral amplification in dynamic systems. Another advantage of discrete assemblies over polymers is that they can be well characterised by nuclear magnetic resonance (NMR) and single-crystal X-ray diffraction analyses. Taking these advantages, Nitschke and colleagues [54][55][56][57] systematically studied the amplification of chirality and long-range stereochemical communication in discrete metal-organic cages. However, the thorough investigation of a company consisting of different kinds of squads was hindered by the difficulty of separating these discrete assemblies based on non-covalent interactions.
As a result of the experimental difficulty of explicitly characterising multi-component self-assembling systems, the corresponding theoretical studies are limited 58 . Despite a few theoretical models 42,52,[59][60][61][62][63] , theoretical simulation that takes molecular information into account, as well as synergy between experimental and theoretical studies, is still to be established.
Herein, we report a strategy to investigate the sergeants-and-soldiers effect not only within a mixture product (company), but also within each kind of discrete assembly (squad) individually. Pure organic octahedra incorporating hexapropyl-truxene faces and both chiral and achiral vertices are constructed through dynamic covalent chemistry. The octahedra containing different numbers of chiral vertices can be separated by chiral high-performance liquid chromatography (HPLC), and the isolated octahedra are sufficiently stable for subsequent NMR and spectroscopic investigations. Such analysis of separated assemblies combined with structural analyses and theoretical simulations allows us to reveal the origin of the strong amplification of chirality in discrete assemblies. Moreover, with a theoretical model for discrete assemblies based on a mass-balance approach, we rationalise the product distributions as a function of the fraction of chiral units, thus unveiling the fundamental mechanisms of the sergeants-and-soldiers effect. The model results perfectly fit our experimental observations and quantitatively reveal the relationship between the observed sergeants-and-soldiers effect and the relative free energies of the various octahedron types.
Results
Chiral octahedra with facial rotational patterns. As previously reported 64 , chiral organic octahedra with facial rotational patterns can be constructed from four equivalents of a truxene building block and six diamines through dynamic covalent chemistry. In this study, we modified the truxene building block by replacing the butyl groups with propyl groups to better separate the mixture of octahedra products.
Sergeants-and-soldiers effect in a company of mixed squads. We further employed mixtures of achiral EDA and chiral CHDA to form new octahedra with mixed vertices. EDA and CHDA in various ratios (10:0, 9:1, ..., 0:10) were mixed with TR and catalytic trifluoroacetic acid (TFA) to form dynamic libraries 65,66 in toluene (see Supplementary Tab. 2 for details). The dynamic libraries were immersed in a thermostated bath at 60°C for 48 h, leading to equilibrium distributions of mixed products incorporating both EDA-linked and CHDA-linked vertices in a single octahedron (Fig. 2a). These octahedra are designated 1 n 2 m , where n and m represent the numbers of EDA-linked and CHDA-linked vertices, respectively. Circular dichroism (CD) spectra of the product mixtures were measured after thermodynamic equilibrium was reached (Fig. 2b).
The mixtures with different EDA/CHDA ratios exhibit similar CD spectra with increasing intensities upon increasing fraction of CHDA. The plot of the relative CD intensity (measured at 340 nm) as a function of the molar percentage of chiral CHDA-linked vertices clearly shows a nonlinear chiral amplification upon the increase of the fraction of CHDA (Fig. 2c).
The observed amplification in chiroptical response in truxene octahedra suggests the chiral CHDA can regulate the achiral EDA. This phenomenon is similar to the chiral amplification in some other discrete assemblies formed by hydrogen bonds 52,53 or metal-organic coordinations [54][55][56][57] .
Regarding the achiral components (EDA) as soldiers and chiral components (CHDA) as sergeants, each discrete assembly, i.e., individual octahedron, can be viewed as a squad and the equilibrium distributions of mixed products can be considered as a company containing various types of squads. All studies on discrete assemblies to date have only revealed the average sergeants-and-soldiers effect in a company, which incorporates various types of squads. To understand the sergeants-and-soldiers effect in discrete assemblies in depth, it is necessary to scrutinise the distinct squads rather than the integrated company.
Sergeants-and-soldiers effect in isolated octahedra squads. Due to the rigidity of the octahedra and the relative stability of imine bonds, we are able to separate the octahedra based on their composition as well as their configuration by chiral HPLC 65 . Eight fractions were found in the mixed equilibrium product containing 50% CHDA (Fig. 3a). These fractions were isolated and individually characterised by mass, CD, and NMR spectroscopy. The first six fractions matched the compositions of octahedra 1 n 2 m (0 ≤ n ≤ 5; m = 6 − n), whereas both of the remaining two fractions matched the composition of octahedron 1 6 . Although there are possible stereoisomers for octahedra 1 2 2 4 , 1 3 2 3 , and 1 4 2 2 , as shown in Fig. 2a, we did not observe any sign of corresponding peak splitting in the HPLC spectra. The CD spectra of the first six octahedra 1 n 2 m (0 ≤ n ≤ 5; m = 6 − n) and of the seventh fraction, with the composition of 1 6 , are almost identical (Fig. 3b, c). According to our previous study 64 and ZINDO/S simulation (Supplementary Figs. 21 and 22), the CD spectra depend strongly on the facial configuration rather than on the vertex components, and octahedra with different facial configurations (i.e., AAAA, AAAC, AACC, ACCC, and CCCC) exhibit considerably different CD spectra.
Therefore, all six octahedra 1 n 2 m (0 ≤ n ≤ 5; m = 6 − n) and the octahedra 1 6 in the seventh fraction share the same facial configuration, i.e., AAAA as in (AAAA)-2 6 . The octahedra in the eighth fraction can accordingly be assigned as (CCCC)-1 6 , since they exhibit a mirror-image CD spectrum to that of (AAAA)-1 6 in the seventh fraction.
Every octahedron containing a CHDA-linked vertex has the same AAAA configuration; hence, all EDA-linked vertices in these octahedra are in the gauche conformation with a dihedral angle of ca. −60°, as shown in Fig. 2a. This indicates a strong geometrical control by the CHDA sergeants: just a single CHDA-linked vertex (sergeant) suffices to control the remaining EDA-linked vertices (soldiers) in any octahedron (squad), as illustrated in Fig. 4a for a 1 5 2 1 octahedron.
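This single-sergeant control suggests a simple statistical picture of the nonlinear amplification in Fig. 2c. As an illustrative idealisation (assuming purely statistical, binomial incorporation of vertices; this is not the model developed below), an octahedron contains no sergeant with probability (1 − x)^6, where x is the molar fraction of CHDA-linked vertices; such 1 6 octahedra form as a racemic (AAAA)/(CCCC) pair and contribute no net CD, whereas every octahedron with at least one sergeant is fully homochiral. The relative CD intensity would then be approximately

CD(x)/CD(1) ≈ 1 − (1 − x)^6

which already reaches about 0.74 at x = 0.2 and is consistent with the steep nonlinear rise observed experimentally; deviations from this curve reflect the non-statistical free-energy differences quantified by the mass-balance model described later.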
Structural basis of chiral amplification in single octahedron. Further understanding of the strong leadership of the CHDA sergeant was revealed by NMR investigation. As a representative example, consider the 1 H NMR spectrum of 1 5 2 1 (Fig 4b, c). Except for the influence of the CHDA-linked vertex, the overall spectrum reveals only a single set of peaks for protons on the truxene backbone, suggesting the truxene faces of 1 5 2 1 adopt T-symmetry with the facial configuration AAAA or CCCC 64,67 . Otherwise, the resonances would split further into three sets (for C 3 -symmetric CCCA and CAAA) or six sets (for C 2 -symmetric CCAA) owing to the different facial configurations 64 . Considering the CD analysis, the configuration of 1 5 2 1 is assigned as AAAA. The 1 H NMR spectra of the octahedra 1 n 2 m (0 ≤ n ≤ 5; m = 6 − n) are rather similar (Supplementary Fig. 23), corroborating that all six octahedra with CHDA-linked vertices have the same AAAA facial configuration.
The nuclear Overhauser effect (NOE) crosspeak between H c and H b (instead of H d ) shown in the NOE spectrum of 1 5 2 1 (Fig. 4b, d) indicates that all imine bonds rotate in the same anticlockwise direction as the sp 3 carbons of the truxene core. The NOE crosspeak between H c and H e1 (instead of H e2 ) indicates that all five EDA-linked vertices are in the same gauche conformation like the CHDA-linked vertex. The structural rigidity of truxene octahedra and the consistency of vertex conformation are also confirmed by the NOE spectra (Fig. 4d) and the single-crystal analysis of 1 6 ( Supplementary Fig. 2). We presume the structural rigidity and the conformational consistency are crucial to the efficient chiral amplification inside the octahedra.
To shed light on the conformational consistency of the EDA-linked vertices, we calculated the free energies of different conformers of (AAAA)-1 5 2 1 using DFT calculations. The (AAAA)-1 5 2 1 conformer with all EDA-linked vertices in the ca. −60° gauche conformation has a much lower energy than any other conformer. For illustration, the difference in energy between the conformer with all EDA-linked vertices in the ca. −60° gauche conformation and the conformer with three EDA-linked vertices in the ca. 60° gauche conformation is approximately 108 kJ mol −1 (Supplementary Fig. 24). To our knowledge, the resulting consistency in vertex conformation is a unique property of truxene octahedra that is not possessed by other similar organic octahedra. For example, in the TFB 4 EDA 6 octahedra formed from 1,3,5-triformylbenzene (TFB) and EDA 68,69 , the EDA-linked vertices have been proven to change dynamically between the ca. 60° gauche conformer and the ca. −60° gauche conformer 68 , and both T-symmetric and C 3 -symmetric TFB 4 EDA 6 exist in the crystal products 69 .
Fig. 3 Analysis of octahedra types formed at a 1:1 ratio of EDA to CHDA. a HPLC spectrum of the equilibrium product. b CD spectra of the isolated octahedra types in toluene (offset). c CD intensities of the isolated octahedra types at 340 nm. Error bars indicate the CD intensity differences obtained from two parallel experiments.
Equilibrium distribution analysis and theoretical model. To elucidate the distribution of sergeants over the squads, we subsequently analysed the products formed at the different molar fractions of CHDA by HPLC, as shown in Fig. 5a and Supplementary Figs. 25-35. As is evident from Fig. 5b, the 1 n 2 m product distribution is strongly controlled by the molar fraction of CHDA. High CHDA ratios in the mixtures result in predominant formation of the octahedra with high values of m, and vice versa. Further investigation of the product distributions at different ratios of chiral units confirmed that the general sergeants-and-soldiers effect in a company is a weighted average of the effects in each squad (Supplementary Fig. 36).
To rationalise the equilibrium distributions of the 1 n 2 m products as a function of the molar fraction of CHDA, we devised a mass-balance model as illustrated in Fig. 5c. This model is built on the same principles as earlier thermodynamic models for mixed discrete assemblies 52,60-62 , but differs essentially in that it explicitly takes into account possible differences between the equilibrium constants due to cooperative effects. In this model, seven types of octahedra are considered, i.e., a single type for each ratio of EDA-linked and CHDA-linked vertices. 1 6 represents both (AAAA)-1 6 and (CCCC)-1 6 , which are assumed to be equally abundant, while the other 1 n 2 m represent all possible conformers with an AAAA facial configuration only. The dynamic exchange of CHDA and EDA between the octahedral vertices is described using six independent equilibrium constants K i (1 ≤ i ≤ 6), which are related to free energy differences via K = exp(ΔG/RT). Whereas possible differences between equilibrium constants were previously ignored in the absence of data on individual species, our physical separation of the various octahedron types allows them to be determined individually. The equilibrium constants allow us to express the equilibrium concentrations of all distinct octahedron types in terms of the equilibrium concentration of 2 6 (Eq. 1). Three mass balances can be derived for the model: (i) the overall TR concentration should equal the equilibrium concentration of free TR plus four times the summed concentrations of all octahedra types; (ii) the overall concentration of EDA should equal the equilibrium concentration of free EDA plus the sum of the concentrations of each octahedron type multiplied by its respective number of EDA-linked vertices; and (iii) the analogous mass balance holds for CHDA. As detailed in the supporting information (Supplementary Eqs. 12-14), these three mass balances in combination with Eq. 1 allow us to calculate the equilibrium concentrations of all octahedra types for a given set of equilibrium constants K i and overall concentrations of TR, CHDA, and EDA. Octahedron distributions calculated as a function of the molar fraction of CHDA can subsequently be compared to the experimental data (summarised in Supplementary Tab. 3). Best fits of the octahedron distributions and CD intensity are shown in Fig. 5b and Supplementary Fig. 37, respectively. These fits were obtained with the equilibrium constants corresponding to the free energy differences shown in Fig. 5d and Supplementary Fig. 38. This shows that 2 6 has a lower free energy than 1 6 . In addition, it shows that the free energy gains upon insertion of the second, third, fourth, fifth, and sixth CHDA-linked vertex are all rather similar, whereas the free energy gain upon insertion of the first CHDA-linked vertex is approximately 2 kJ mol −1 smaller. That 2 6 should have a lower free energy than 1 6 is also corroborated by the experimental HPLC results (Supplementary Tab. 3 and Supplementary Fig. 39); these indicate that for the same excess of the major diamine vertex, the fraction of 2 6 is always higher than that of 1 6 , and the free CHDA concentration is always lower than the free EDA concentration. The relative free energies upon the exchange between CHDA and EDA vertices as predicted by the mass-balance model are also in accordance with DFT calculations of the various types of octahedra (Supplementary Fig. 40).
Partial density of states analyses (Supplementary Figs. 42 and 43) showed that free CHDA has some states closer to the Fermi level than free EDA has, indicating that CHDA is slightly more reactive. In addition, both integrated crystal orbital Hamiltonian population and integrated crystal orbital overlap population analyses suggest that the N-C bond is slightly stronger in the CHDA-TR case than in the EDA-TR case (Supplementary Figs. 44 and 45 and Supplementary Tab. 7). Together, these findings explain the slight preference for CHDA vertices over EDA vertices observed in the experimental and modelling results.
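The structure of such a mass-balance calculation can be illustrated with a short numerical sketch. The code below is a minimal reconstruction under stated simplifying assumptions (TR fully assembled into octahedra, a single free-energy increment ΔG per CHDA insertion on top of binomial statistical degeneracies, and illustrative concentrations); it is not the authors' implementation, and all names and values are placeholders.

import numpy as np
from scipy.optimize import fsolve
from math import comb

RT = 8.314e-3 * 333.0                  # kJ/mol at 60 C
dG = 2.0                               # assumed stability gain per CHDA vertex, kJ/mol
m = np.arange(7)                       # number of CHDA-linked vertices per octahedron
g = np.array([comb(6, k) for k in m])  # statistical degeneracy of each 1_(6-m) 2_m type

def distribution(eda_free, chda_free):
    # Relative abundance of each octahedron type at given free diamine levels.
    w = g * np.exp(m * dG / RT) * (chda_free / eda_free) ** m
    return w / w.sum()

def residuals(free, o_tot, eda_tot, chda_tot):
    eda, chda = free
    p = distribution(eda, chda)
    # Diamine mass balances: free plus vertex-bound amounts must match the totals.
    return [eda + o_tot * ((6 - m) * p).sum() - eda_tot,
            chda + o_tot * (m * p).sum() - chda_tot]

o_tot = 1e-4                           # octahedron concentration (TR_tot / 4), illustrative
eda_free, chda_free = fsolve(residuals, [1e-4, 1e-4], args=(o_tot, 3.5e-4, 3.5e-4))
print(distribution(eda_free, chda_free))   # fractions of 1_6, 1_5 2_1, ..., 2_6

With dG = 0 this reduces to the purely statistical binomial distribution; a positive dG skews the distribution toward CHDA-rich squads, which is the deviation the fitted model quantifies.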
Discussion
We have developed a strategy that permits in-depth investigation of the amplification of chirality in discrete molecular assemblies, from both an experimental and a theoretical perspective. Chiral octahedra incorporating a combination of chiral and achiral vertices were constructed through dynamic covalent chemistry as an experimental model. The product mixtures were first investigated by CD spectroscopy, showing a nonlinear amplification of CD intensities upon increasing the fraction of chiral vertices, i.e., a notable sergeants-and-soldiers effect in an integrated company. Subsequently, the sergeants-and-soldiers effects within the individual kinds of octahedra (squads) were investigated by separating all octahedron types by chiral HPLC, providing much more explicit information on chirality amplification than the conventional investigation of mixtures. All octahedra containing one or more chiral vertices exhibit the same CD spectrum as the octahedron containing only chiral vertices, indicating that one chiral vertex (sergeant) suffices to control the conformation of all achiral vertices (soldiers) in an octahedron (squad). NMR analyses and DFT calculations attribute this strong chiral amplification within octahedra to the structural rigidity of the truxene faces and the interactions between the propyl arms on truxene. Furthermore, a newly developed mass-balance model for mixed octahedra perfectly fitted the observed sergeants-and-soldiers effects. With this model, the equilibrium distribution of the various octahedra, i.e., the distribution of the sergeants over the squads, could be rationalised as a deviation from the statistical distribution due to small free energy differences between the octahedra. DFT calculations attributed these differences in free energy to minor conformational differences between the octahedra and a slightly stronger binding of CHDA over EDA. As such, we presented a combined experimental and theoretical strategy that can be applied more generally to quantify small differences in association energy in discrete multicomponent systems. Through the design of a suitable experimental system and a complementary theoretical equilibrium model, we thus revealed the origin of chiral amplification in discrete molecular polyhedra, which may provide fundamental insights into the transfer of chirality in supramolecular systems as well.
Methods
Synthesis. TR building block was readily synthesised from truxene in three steps with high yields (experimental and characterisation details can be found in the Supplementary Methods). Stock solutions of TR (3.2 mM), EDA (9.6 mM), (R, R)-CHDA (9.6 mM), and TFA (19.2 mM) in toluene were mixed at certain volume ratios to give the samples A to K, with concentrations of the various species as detailed in Supplementary Tab. 2. The mixtures were then immersed in a thermostated bath at 60°C for 48 h to reach equilibrium.
NMR and MS characterisation. 1 H and 13 C NMR spectra were recorded on a Bruker AVIII-500 spectrometer (500 MHz) in deuterated dichloromethane and are reported relative to residual solvent signals. Matrix-assisted laser desorption ionisation time-of-flight mass spectra were collected on a Bruker microflex LT-MS with 2,4,6-trihydroxyacetophenone (0.05 M in methanol) as the matrix. High-resolution mass spectra were collected on a Bruker Apex Ultra 7.0T FT-MS.
Single-crystal X-ray diffraction. Single-crystal X-ray diffraction data were collected on a Rigaku SuperNova X-Ray single crystal diffractometer using Cu Kα (λ = 1.54184 Å) micro-focus X-ray sources at 100 K. The raw data were collected and reduced using the CrysAlisPro software package, while the structures were solved by direct methods using the SHELXS program and refined with the SHELXL program. Solution and refinement procedures are presented in the Supplementary Methods and specific details are compiled in Supplementary Tab. 1.
HPLC and CD characterisation. HPLC analyses were performed on a Shimadzu LC-16A instrument at 298 K using a Daicel Chiralcel IE column. A linear gradient elution was employed within 40 min from 5% ethyl acetate to 30% ethyl acetate in n-hexane with 4% ethanol and 0.1% diethylamine of total volume at a flow rate of 1 mL min −1 . The sample concentration was 400 μM in toluene, and the injection volume was 3 μL. Absorbance of octahedra was monitored at 325 nm. HPLC spectra of the equilibrium products containing different ratios of CHDA are presented in the Supplementary Figs. 25-35. CD spectra were measured in toluene solutions with a JASCO J-810 circular dichroism spectrometer.
Computational methods. All structures were first optimised by the molecular mechanics method (using COMPASS II force field) and further optimised by the DFT method (using Vienna ab initio Simulation Package (VASP)). The electronic structure and bonding analyses were performed based on the partial density of states, crystal orbital Hamiltonian population, crystal orbital overlap population functions and Bader topological analysis. The CD spectra were calculated at ZINDO semi-empirical level with Gaussian 09. Details on the methods are provided in the Supplementary Methods. | 5,042.8 | 2018-02-05T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
De novo-based transcriptome profiling of male-sterile and fertile watermelon lines
The whole-genome sequence of watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai), a valuable horticultural crop worldwide, was released in 2013. Here, we compared a de novo-based approach (DBA) to a reference-based approach (RBA) using RNA-seq data, to aid in efforts to improve the annotation of the watermelon reference genome and to obtain biological insight into male-sterility in watermelon. We applied these techniques to available data from two watermelon lines: the male-sterile line DAH3615-MS and the male-fertile line DAH3615. Using DBA, we newly annotated 855 watermelon transcripts, and found gene functional clusters predicted to be related to stimulus responses, nucleic acid binding, transmembrane transport, homeostasis, and Golgi/vesicles. Among the DBA-annotated transcripts, 138 de novo-exclusive differentially-expressed genes (DEDEGs) related to male sterility were detected. Out of 33 randomly selected newly annotated transcripts and DEDEGs, 32 were validated by RT-qPCR. This study demonstrates the usefulness and reliability of the de novo transcriptome assembly in watermelon, and provides new insights for researchers exploring transcriptional blueprints with regard to the male sterility.
Introduction
Watermelon [Citrullus lanatus (Thunb.) Matsum. & Nakai], a member of the Cucurbitaceae family, is an important crop worldwide, with annual production of approximately 110 million tons in 2013 (FAO, http://faostat.fao.org/). The first reference genome sequence of the East Asian watermelon was released in 2013 [1], based on next-generation sequencing (NGS) techniques. According to the genome announcement, watermelon has a diploid genome (2n = 2x = 22) of ~425 Mb, with 11 chromosomes and 23 440 transcripts. Completion of the reference genome has allowed members of the Cucurbitaceae to be analyzed using RNA-seq. Two common RNA-seq assembly methods are widely used: the de novo-based approach (DBA) and the reference-based approach (RBA) [2][3][4], and both approaches can be applied to transcriptome analysis. The male-sterile line used in this study was derived from the ms-1 Chinese male sterile line [23]. The plant materials and total RNA isolation used to produce the raw RNA-seq data in this study are described in our previous report [13]. Raw RNA-seq data used in this article are available in the GEO database under accession number GSE69073.
DBA and RBA data processing
For DBA, prior to assembly, Illumina adapter sequences were removed using Trimmomatic [27]. Clean reads were assembled using Trinity (r20140717) [28]. After generating transcript contigs, RNA-seq reads were mapped to the constructed transcriptome reference using Bowtie 2 [29], and RSEM [30] was used to align and quantify reads. Isoform and gene count matrices were generated using abundance_estimates_to_matrix.pl implemented in Trinity. Finally, contigs were annotated using Trinotate (r20140708) (https://trinotate.github.io/), and only plant-originated transcripts (Viridiplantae) were used for downstream analyses. Among isoform transcripts, the longest was selected as the representative sequence for each gene.
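A minimal sketch of this DBA pipeline, driven from Python, is given below. File names, thread counts, and memory limits are illustrative assumptions, and exact flags vary between tool versions; this is not the authors' script.

import subprocess

def run(cmd):
    # Run one pipeline stage, failing loudly if the tool errors out.
    subprocess.run(cmd, shell=True, check=True)

# 1) Adapter/quality trimming with Trimmomatic (paired-end; illustrative settings).
run("java -jar trimmomatic.jar PE reads_1.fq reads_2.fq "
    "trim_1P.fq trim_1U.fq trim_2P.fq trim_2U.fq "
    "ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 LEADING:3 TRAILING:3 MINLEN:36")

# 2) De novo assembly with Trinity.
run("Trinity --seqType fq --left trim_1P.fq --right trim_2P.fq "
    "--CPU 8 --max_memory 20G --output trinity_out")

# 3) Alignment and quantification with Bowtie 2 + RSEM via Trinity's wrapper.
run("align_and_estimate_abundance.pl --transcripts trinity_out/Trinity.fasta "
    "--seqType fq --left trim_1P.fq --right trim_2P.fq "
    "--est_method RSEM --aln_method bowtie2 --prep_reference --output_dir rsem_out")

# 4) Build the isoform/gene count matrices for downstream DE analysis.
run("abundance_estimates_to_matrix.pl --est_method RSEM rsem_out/RSEM.isoforms.results")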
To compare DBA and RBA, RBA results from a previous study [13] were used. The watermelon reference genome (cv. 97103) version 1 from the Cucurbit Genomics Database [1] was employed. In addition, Trimmomatic [27], Tophat2 [31], and HTSeq-count [32] were used to quantify the abundance of mapped reads and to annotate watermelon genes. UniProtKB gene identification was used to compare gene lists between DBA and RBA.
Statistical analysis to identify DEGs in MF and MS
Considering our 2 x 2 factorial experimental design, analysis of variance (ANOVA) was used for the RNA-seq and qPCR analyses. First, a negative binomial-assumed two-way analysis of deviance (ANODEV) model was employed for the RNA-seq analysis, with breeding line (i = MF, MS) and tissue (j = floral bud, flower) as factors. The effect of breeding line and tissue on the detection of male-sterility-related genes was tested statistically using edgeR implemented in R [33]. The significance cutoff was an FDR-adjusted P-value ≤ 0.01. Likewise, a two-way ANOVA model was employed for the qPCR experiment, because the Δct value commonly used to derive relative gene expression usually satisfies the assumption of a normal distribution. The Δct value was calculated as the difference in the number of cycles required to reach the threshold between the control and target genes, which is negatively correlated with RNA-seq gene expression. To make the direction of Δct represent gene expression, the −Δct value was employed in the analysis.
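The two elided model statements can plausibly be reconstructed in the standard two-way form (a reconstruction from the surrounding factor definitions, not the paper's verbatim equations):

log(μ_ij) = β0 + line_i + tissue_j, with the counts y_ij assumed to follow a negative binomial distribution around μ_ij (RNA-seq, one library per cell), and

−Δct_ijk = μ + line_i + tissue_j + (line × tissue)_ij + ε_ijk, k = 1, 2, 3 (qPCR, three biological replicates).

On the qPCR side, a minimal two-way ANOVA sketch in Python using statsmodels is shown below; the column names and −Δct values are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per reaction: breeding line, tissue, and the -dCt response.
df = pd.DataFrame({
    "line":    ["MF"] * 6 + ["MS"] * 6,
    "tissue":  (["bud"] * 3 + ["flower"] * 3) * 2,
    "neg_dct": [5.1, 5.3, 5.0, 6.2, 6.0, 6.1,    # illustrative values only
                1.2, 1.0, 1.3, 1.1, 0.9, 1.2],
})

model = smf.ols("neg_dct ~ C(line) * C(tissue)", data=df).fit()
print(anova_lm(model, typ=2))  # F-tests for line, tissue, and their interaction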
Functional terms and network analysis of significantly enriched terms
The Database for Annotation, Visualization, and Integrated Discovery (DAVID) was used to characterize specific gene lists [34,35]. Three categories of functional terms from the GO database were employed: BP, MF, and CC. In addition, since most already-annotated genes derive from Arabidopsis thaliana, the Arabidopsis gene annotation was used as the background. Significance was considered at a P-value ≤ 0.001 for newly annotated transcripts and a P-value ≤ 0.01 for DEDEGs. Generally, transcripts are described by diverse biological terms; therefore, functional terms can be classified based on M:N relationships. Significantly enriched terms were first combined, and then a gene association matrix was generated (terms × genes). Binary values were used in this matrix, i.e., 0 means that a gene does not have a specific function, and 1 means that it does. Using this matrix, correlation-based network analysis was conducted, and an FDR-adjusted P-value < 0.01 was considered to indicate a significant relationship. Finally, the identified relationships were visualized in network format using the qgraph package implemented in R [36]. A spring layout was used to group similar terms based on the strength of their connections.
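A minimal sketch of this correlation-based term network is shown below, assuming a binary terms × genes matrix M as input (a hypothetical object; the paper used R's qgraph, so this Python version is only illustrative and assumes each term annotates at least one but not all genes, so that correlations are defined).

import numpy as np
from itertools import combinations
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests
import networkx as nx

def term_network(M, term_names, fdr=0.01):
    # M: binary (terms x genes) association matrix; rows are GO terms.
    pairs = list(combinations(range(M.shape[0]), 2))
    r_p = [pearsonr(M[i], M[j]) for i, j in pairs]
    reject, _, _, _ = multipletests([p for _, p in r_p], alpha=fdr, method="fdr_bh")
    G = nx.Graph()
    G.add_nodes_from(term_names)
    for (i, j), (r, _), keep in zip(pairs, r_p, reject):
        if keep:  # connect terms whose gene memberships are significantly correlated
            G.add_edge(term_names[i], term_names[j], weight=r)
    return G  # lay out with nx.spring_layout(G) to group related terms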
RT-qPCR for technical validation
Primers for a total of 33 randomly selected candidates [14 newly annotated transcripts (Tables G and H in S1 File) and 19 DEDEGs (Tables I and J in S1 File)] were designed using Primer3 [37]. Three biologically replicated samples of floral buds and mature flowers were collected from individual MF and MS plants, respectively. cDNA was synthesized with SuperScript III reverse transcriptase (Thermo Fisher Scientific, USA) and oligo(dT)15 primers using 1 μg of total RNA isolated from each sample. Watermelon 18S rRNA was used as an internal control to normalize mRNA levels. Reagents used for qPCR were 10 μl PCR pre-mix, 1 μl EverGreen fluorescence dye (SolGent, Korea), 1 μl cDNA, and 500 nM of each primer (except for the 18S control experiment, in which 250 nM of each primer was used). PCR conditions were as follows: 95˚C for 12 min, then 40 cycles of 95˚C for 10 s and 60˚C for 30 s.
Summary of sequencing and de novo transcriptome assembly statistics
We previously produced raw RNA-seq data for four watermelon samples: flower and floral bud tissue samples of male-sterile and male-fertile lines (2 breeding lines × 2 tissues) [13]. The four samples contained read numbers ranging from 25 299 088 to 29 490 814. Approximately 80% of the total reads in all samples met Q30 quality control criteria (Table A in S1 File). Here, de novo assembly was performed after removing poor-quality reads and adapter sequences. A total of 50 581 312 reads were used to define transcripts, resulting in 138 811 transcripts ( Table B in S1 File). The average length of assembled transcripts was 1100 bp and the N50 was 2032 bp. For transcripts containing multiple isoforms that differed because of splicing events, the longest isoform was chosen to represent each gene. In all, 94 496 candidate genes were assembled; the average length of assembled genes was 773 bp and the N50 was 1327 bp ( Table B in S1 File).
Gene annotation and discovery of novel genes
The 94 496 assembled transcripts were queried against the Swiss-Prot database using BLASTP and BLASTX [38]. Prior to further analyses, we selected only plant-originated watermelon transcripts by filtering annotated genes for species belonging to the kingdom Viridiplantae; 11 072 and 14 398 plant-originated transcripts were annotated in DBA by BLASTP and BLASTX, respectively (E-value ≤ 10^-5). The BLASTP search originally annotated fewer transcripts (11 072) than the BLASTX annotation (14 398), but removal of duplicated transcripts based on UniProtKB ID reduced the difference between the two BLAST annotations (BLASTP, 7135; BLASTX, 8045).
To detect novel transcripts, we compared the annotations derived from DBA and RBA after removing duplicated annotations using UniProtKB IDs [39]. BLASTP analysis suggested that 6280 of 7135 nonduplicated transcripts (88.0%) were commonly identified between DBA and RBA, and detected 855 putative novel genes (1132 transcripts before removal of duplicated UniProtKB IDs) (Fig 1A). A similar pattern was observed in the BLASTX annotations (S1 Fig): a large proportion of transcripts (6673, 82.9%) were common between DBA and RBA, and 1372 genes were newly identified by DBA.
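This DBA/RBA comparison reduces to set operations on deduplicated UniProtKB IDs; a minimal sketch (with hypothetical ID lists) is:

def compare_annotations(dba_ids, rba_ids):
    # Deduplicate transcripts down to one entry per UniProtKB gene ID.
    dba, rba = set(dba_ids), set(rba_ids)
    common = dba & rba
    novel = dba - rba   # DBA-only candidates, i.e., putative novel genes
    return {"common": len(common),
            "novel": len(novel),
            "pct_common": 100.0 * len(common) / len(dba)}

# With the paper's BLASTP numbers one would expect roughly
# {"common": 6280, "novel": 855, "pct_common": ~88.0}.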
Despite the difference in the query sequences employed (BLASTP, predicted protein-coding sequences; BLASTX, conceptual translations of sequences), the concordance between the BLASTP and BLASTX analyses demonstrates robust annotation. Although the number of transcripts was larger in the BLASTX results (14 398) than in the BLASTP results (11 072), the proportion of unique gene annotations was higher with BLASTP (64.4%) than with BLASTX (55.9%), as was the proportion of DBA-RBA common gene annotations (88.0% in BLASTP versus 82.9% in BLASTX). The majority (10 888 of 11 072) of de novo-assembled contigs annotated by BLASTP were also annotated by BLASTX. Additionally, the BLASTP annotation appeared to be relatively conservative in terms of its skewness towards lower E-values (Fig 1C). For this reason, we chose the BLASTP annotation for the downstream analyses. In all, 11 072 transcripts (7135 UniProtKB-identified genes) were annotated based on BLASTP from DBA using the DAH3615 watermelon lines. Investigation of the annotation sources of those transcripts (Fig 1B) revealed that the majority of transcripts (74.6%) were most closely related to genes from plants, especially from Arabidopsis thaliana (59.2%).
Functional network analysis of novel transcripts
To investigate the functional features of newly annotated transcripts, we performed enrichment analysis of functional terms based on three gene ontology (GO) sub-categories: 'biological process' (BP), 'molecular function' (MF), and 'cellular component' (CC). Eighteen, 28, and 8 functional terms were significantly enriched in BP, MF, and CC, respectively (enrichment test P-value ≤ 0.001, Tables C-E in S1 File). As similar biological terms were repeatedly detected across the three GO categories, we conducted network analysis of the functional terms to classify analogous terms. This revealed five large clusters grouped as transmembrane transporter, homeostasis, stimulus, nucleic acid binding, and Golgi and vesicles (Fig 2). Three of the five clusters, the nucleic acid binding-related, transmembrane transporter-related, and homeostasis-related terms, were significantly related (in a correlation test based on a gene association matrix, FDR-adjusted P-value ≤ 0.01), whereas two clusters, the stimulus-related and Golgi and vesicle-related clusters, were independently observed. The three highly correlated clusters jointly contained diverse terms for biological functions that control internal homeostasis against external stimuli through transmembrane ion-transport signalling. As these enriched functional terms are fundamental in plants and other organisms, the newly annotated transcripts appear to be important for plant viability, especially for stimulus response and the regulation of homeostasis.
Identification of differentially-expressed genes (DEGs) by DBA
After removing non-expressed transcripts, we statistically analyzed 10 829 BLASTP-annotated transcripts, not only to identify DEGs associated with male sterility but also to detect any de novo-exclusive DEGs (DEDEGs) that might be identified as DEGs only by DBA. Two-way analysis of deviance (ANODEV) was conducted for each transcript, taking into account both sterility and tissue-type variation. After removing duplicated UniProtKB IDs, 443 DEGs (508 transcripts with duplicated UniProtKB IDs) were detected between the male-fertile DAH3615 (MF) and the male-sterile DAH3615-MS (MS) lines (FDR-adjusted P-value < 0.01).
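Analysis of deviance for one transcript can be sketched as a comparison of nested generalized linear models, with Benjamini-Hochberg correction across transcripts. The family, formula, and data layout below are assumptions, since the paper does not specify its ANODEV implementation here.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2
from statsmodels.stats.multitest import multipletests

def sterility_pvalue(df):
    """Deviance test for the sterility effect on one transcript's counts.
    df needs columns: count, sterility (MF/MS), tissue."""
    full = smf.glm("count ~ sterility + tissue", data=df,
                   family=sm.families.NegativeBinomial()).fit()
    reduced = smf.glm("count ~ tissue", data=df,
                      family=sm.families.NegativeBinomial()).fit()
    dev_drop = reduced.deviance - full.deviance
    return chi2.sf(dev_drop, reduced.df_resid - full.df_resid)

# pvals = [sterility_pvalue(per_transcript_df) for per_transcript_df in tables]
# reject, fdr_pvals, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
```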
Comparing the lists of DEGs identified via DBA and RBA, 138 nonduplicated DEGs (representing 140 transcripts) were identified as DEDEGs (Fig 3A). The gene expression pattern of these transcripts was visualized as a heatmap (Fig 3B), which revealed drastic differential expression of DEDEGs between the MS and MF groups. Of these genes, 20 were up-regulated in MF and showed no expression in MS (Fig 3C).
Functional enrichment analysis to identify the functional characteristics of these genes revealed significantly enriched biological terms (enrichment test P-value < 0.01) across the three GO categories (Fig 3D and Table F in S1 File). Network analysis revealed four functional clusters: terms related to homeostasis/transmembrane ion transport, wax, nucleic acid binding, and galactosidase. Homeostasis-, transmembrane ion transporter activity-, and nucleic acid binding-related functional clusters were particularly common among the newly annotated transcripts (Fig 2) and DEDEGs (Fig 3D).
Technical validation of novel transcripts and DEDEGs
Since no biological replications were used in our RNA-seq experiment, replicates were needed to validate the findings. Three biological replicates were subjected to real-time quantitative PCR (RT-qPCR). First, RT-qPCR was performed on 14 randomly selected candidates among the newly annotated transcripts to investigate reliability. We presumed that the existence of the target transcripts could be demonstrated based on their relative gene expression measured against the control gene, regardless of condition. In this way, gene expression for all 14 newly annotated transcript candidates was successfully detected (S2 Fig), demonstrating the reliability of the discovery of novel transcripts and supporting the use of DBA as complementary to RBA in watermelon. Next, to verify DEDEGs, 19 of the 138 DEDEG candidates (MF vs. MS, FDR-adjusted P-value < 0.01) were randomly selected for RT-qPCR validation. Two-way ANOVA was used to determine the statistical significance of differential expression of the 19 DEDEG candidates. The relative gene expression of each transcript was also compared based on RT-qPCR, although one gene (RLF9) failed to reach the threshold. The other 18 transcripts were all significantly detected as DEGs when comparing MF and MS (Fig 4) (Bonferroni-adjusted P-value < 0.01). Log2-fold change (log2FC) values indicated that all of these transcripts were highly down-regulated in MS samples, providing evidence for their fertility-biased expression and association with male sterility.
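Relative expression against a control gene, as used for the RT-qPCR checks, is conventionally computed with the 2^−ΔΔCt method; the paper does not spell out its formula, so the following is a standard sketch with made-up Ct values.

```python
def rel_expression(ct_target, ct_control, ct_target_cal, ct_control_cal):
    """2^-ddCt relative expression: target normalized to a control gene,
    then calibrated against a reference sample (e.g., the MF line)."""
    ddct = (ct_target - ct_control) - (ct_target_cal - ct_control_cal)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: a transcript appearing ~8 cycles later in MS than MF
# corresponds to a log2FC of about -8, as seen for the down-regulated DEDEGs.
print(rel_expression(ct_target=30.0, ct_control=18.0,
                     ct_target_cal=22.0, ct_control_cal=18.0))  # -> 2**-8
```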
Discussion
RNA sequencing (RNA-seq) is more cost- and time-effective than expressed sequence tag (EST), qPCR, or microarray analysis and can be used to directly construct de novo transcriptomes of non-model organisms [40]. Transcriptome/genome analysis through RNA-seq can be effectively accomplished through RBA when a reference genome exists, but RBA is completely dependent on the degree of completion of the reference genome. DBA, an alternative approach using de novo transcriptome assembly, can produce distinct results irrespective of the presence or quality of a reference genome, but it requires substantial computational capacity. RBA and DBA can be used independently or as a combined method.
Fig 2. Functional GO term analysis for newly annotated transcripts.
A total of 855 newly annotated transcripts were used in functional enrichment analysis. Significantly enriched GO terms (enrichment test P-value < 0.001) were visualized in network format to cluster similar terms. Each node represents a significantly enriched GO term from one of the three subcategories: biological process (BP), molecular function (MF), and cellular component (CC). The strength of edges depends on the correlation (only significantly correlated relationships are represented; correlation test FDR-adjusted P-value < 0.01). Node locations were assigned according to centrality and number of related nodes. Five representative clusters are highlighted as colored circles, and the GO terms of each numbered cluster are shown in the included tables. https://doi.org/10.1371/journal.pone.0187147.g002

Reference genomes of major model organisms have been updated continuously based on follow-up research [41]. Since its release, the reference genome of watermelon has served a crucial role in watermelon genome analyses; however, the current watermelon reference genome is only a draft version, published in 2013, and has not been updated since. For these reasons, we applied RBA and DBA simultaneously to uncover the watermelon transcriptome and to contribute to watermelon genome analysis.
In watermelon RNA-seq studies using RBA, different numbers of annotated genes were observed, indicating differences in transcriptome profiling, likely attributable to factors such as experimental conditions or tissue/breed specificity [11,12,14]. Thus, gene annotations in diverse tissues and breeding lineages are helpful for exploring such specificities via RNA-seq analysis. Based on our previous RBA study, and considering the relative infancy of the watermelon reference genome, we speculated that complementary annotation was needed to elucidate the parts missed by the previous RBA. The low mapping rates to the watermelon reference genome (51.0-54.7%), compared to those typical for Arabidopsis thaliana, also indicate the insufficiency of RBA using the watermelon reference with the DAH3615 lines, implying that DBA could help complement the reference-based watermelon transcriptome in terms of providing genomic information [42,43].
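A mapping rate like the 51.0-54.7% quoted above can be recomputed from the alignment files. The sketch below uses pysam with a hypothetical file name; the study's actual aligner and QC commands are not given here.

```python
import pysam

def primary_mapping_rate(bam_path):
    """Fraction of primary reads that aligned to the reference."""
    mapped = total = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_secondary or read.is_supplementary:
                continue  # count each read once
            total += 1
            mapped += not read.is_unmapped
    return mapped / total

print(f"{primary_mapping_rate('DAH3615_vs_reference.bam'):.1%}")
```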
Here, we compared the DBA and RBA approaches on the same RNA-seq data to determine whether DBA would improve upon RBA-based annotation and provide distinct results. Given the low mapping rates to the reference genome, we reasoned that the individual application and comparison of DBA and RBA would minimize the loss of information and enable more straightforward observation of the watermelon transcriptome than a combined strategy such as align-then-assemble. To minimize false positives and conservatively compare DBA with RBA, we collected only plant-derived transcripts for BLASTP annotation of the de novo assembled transcriptome, and discovered 855 new transcripts that represent parts of the transcriptome thus far undiscovered by RBA (Fig 1A). Since these novel findings may provide valuable information that RBA could not detect, it was necessary to validate their reliability: because all background information is generated ab initio, DBA is particularly prone to false positives.
Four notable pieces of evidence support the reliability of DBA in this study. First, while 855 genes (1132 transcripts) were newly detected, 6280 genes (88.0%) from DBA were commonly identified in RBA (Fig 1A). Second, a large proportion (74.6%) of annotated transcripts were most closely related to those from plants, including Arabidopsis thaliana (59.2%) (Fig 1B), despite the fact that functional annotation in the Swiss-Prot database is generally skewed towards representative mammalian organisms. Third, the E-value distribution in the annotation step revealed that many annotated transcripts matched uniquely between the database and query sequences, indicating that the annotated contigs were well assembled (Fig 1C). Finally, when 14 of the 855 newly identified transcripts were selected for RT-qPCR with biological replicates, gene expression could be confirmed for all of them (S2 Fig). This result supports the idea that the newly identified transcripts derived from DBA are bona fide. Taken together, these results show that DBA provides distinct benefits for watermelon transcriptome research; the identification of 855 novel transcripts is valuable for complementing the available RBA genomic information of watermelon.
For an overview of the possible functional properties of the 855 newly annotated watermelon transcripts, we conducted functional term network analysis. We detected five large clusters related to transmembrane transporter, homeostasis, stimulus, nucleic acid binding, and Golgi and vesicles (Fig 2 and Tables C-E in S1 File). Genes assigned to these terms serve basic but crucial roles in the cellular activities of plants. Three of the five clusters (nucleic acid binding-, transmembrane transporter-, and homeostasis-related terms) were especially highly correlated. These serve a role in one of the most important cellular processes: cell survival through regulation of homeostasis. The fact that the functional terms of these newly annotated transcripts are relevant to basic processes occurring in the plant provides evidence of the necessity for DBA in watermelon. Thus, we conclude that the use of DBA contributes useful insights and enables diverse interpretations in further watermelon research.
Of our 855 newly annotated transcripts, DBA uniquely revealed 138 transcripts that were differentially expressed between DAH3615 and DAH3615-MS; we termed these DEDEGs. A previous transcriptome study using RBA suggested the existence of biased expression between MF and MS groups [13], and indeed a similarly biased pattern was observed among our DEDEGs (Fig 3B and 3C). Considering the phenotypic differences between male sterility and fertility and the fertility-biased expression patterns observed in both RBA and DBA analyses, this observation suggests that the DEDEGs are candidates for roles in male sterility, along with the DEGs previously discovered by RBA.
The 138 DEDEGs we observed formed four functional clusters comprising homeostasis/ion transporter-, wax-, nucleic acid binding-, and galactosidase-related terms (Fig 3D and Table F in S1 File). These clusters are frequently observed to function in plant sterility. Ion transporters are involved in signal transduction, cell wall metabolism, and rearrangement of cytoskeletons [44]. Such transporters are enriched in pollen and involved in pollen maturation and pollen tube elongation. Notably, an Ms-cd1 mutant cabbage producing collapsed pollen showed repression of various ion transporters in floral buds [45]. Galactosidase is a cell wall-modifying enzyme that is involved in microspore development [46]. The anther surface and pollen exine are composed of cutin and intra- and epi-cuticular waxes [47]. Similar to our results, wax-related genes and galactosidase-related genes have been reported in transcriptome comparisons of male-sterile and fertile lines [45-48]. Consistent with these results, we anticipate that DAH3615-MS, which lacks pollen and exhibits small-sized stamens, is deficient in these structural proteins.
Our RNA-seq experiment was based on data from single biological samples; thus, technical validation was required to authenticate the findings. The importance of biological replicates in conducting accurate experiments that take biological variation into account cannot be ignored; however, RNA-seq has frequently been used to pre-screen and narrow the focus of transcriptomic studies, which may then be followed by RT-qPCR. We used the RNA-seq analysis to select the most probable candidates for technical validation, then performed RT-qPCR with three biological replicates and three technical replicates for 33 randomly selected transcripts (14 newly annotated transcripts and 19 DEDEGs). Thirty-two of 33 transcripts were successfully validated (Fig 4 and S2 Fig), and functional annotations of these transcripts were produced. The remaining newly annotated transcripts discovered by DBA are also strong candidates for breeding line-specific genes, and further experiments with replication are needed to verify their differential expression.
Among the 18 DEDEGs successfully validated by RT-qPCR, we identified EXPA9, which encodes expansin, a cell wall-loosening enzyme located in pollen grains that participates in pollen germination by loosening the cell wall of the stigma and the style, thus facilitating pollen tube penetration [49,50]. Additionally, among our validated transcripts were two TBB1 and one TBB2 tubulin genes (Fig 4). Alpha-tubulin and beta-tubulin are major components of microtubules, which play roles in pollen development and pollen tube germination [51]. Pyruvate decarboxylase (PDC) transforms pyruvate into acetaldehyde and carbon dioxide [49]. It is abundant in pollen grains and is related to pollen tube germination and growth. PDC2 is the only functional PDC gene in pollen; the pdc2 knockout mutant had significantly reduced pollen tube growth compared to the wild type. PDC2 has been suggested as a strong candidate for a role in male sterility in petunia [52].
Our study also revealed that the expression of transcripts related to flowering time and organogenesis was biased towards the MF line. A transcript for the phosphatidylinositol/phosphatidylcholine transfer protein SFH3 was highly enriched in the MF line (Fig 4; log2FC: −8.3); this protein is reportedly associated with early bolting and early flower formation, giving rise to variation in flower and petal size in Brassica napus [53].
Orthologs of some of the genes we identified have been reported to be responsible for inducing male sterility. A transgenic fasciclin-like arabinogalactan protein (FLA3)-overexpressing line had reduced stamen filament elongation that was both directly and indirectly associated with male sterility [54]. Based on this report, we anticipate that FLA5 also has the potential to be involved in male sterility.
Polyubiquitination (catalyzed via UBQ14) regulates various physiological functions, including sexual reproduction. Ubiquitin (Ub) and Ub-conjugated proteins are involved in early anther development in Nicotiana alata [55]. The E3 ligase-like protein and the F-box protein are related to male sterility in hybrid rice [56]. Another DEDEG showed similarity to BAG1, which encodes a protein with a ubiquitin-like domain and a BAG domain that, like heat shock-induced gene 1, a putative grape BAG protein, promotes the meristematic transition from vegetative to reproductive growth and early flowering [57,58]. ATPase (encoded by PMA2) plays a crucial role in energy release by dephosphorylating ATP to ADP. SPLAYED (SYD), a novel SWI/SNF ATPase homolog, interacts with LEAFY, a well-known regulator of floral transition in Arabidopsis. As shown by a study of a syd-2 line, which exhibits reduced male fertility and anther dehiscence, SYD is necessary for reproductive and meristem development [59].
Another notable DEDEG was a transcript encoding stellacyanin (STEL), a blue copper protein, which was predominantly expressed in the male-fertile watermelon line. Although there are no previous reports of a relationship between STEL and male sterility or reproductive organ development, our previous RBA results identified another blue copper protein as the most significantly differentially expressed gene in watermelon male-fertile lines compared to male-sterile lines [13]. We therefore conclude that STEL could be a novel gene involved in male sterility in watermelon.
The potential links to reproductive development of the candidate genes described above serve to further validate the reliability of DBA, especially in identifying genes that might be helpful for future studies of male sterility in watermelon. Although we have technically validated only 18 of the candidate DEDEGs, the others are also likely to be strong candidates related to male sterility of watermelon; further studies should seek to validate these genes.
To sum up, we carried out DBA to complement RBA on watermelon RNA-seq data. This simultaneous application and comparison of both approaches improved upon RBA alone, as shown by the following results. A total of 855 transcripts were newly discovered using DBA, and 138 DEDEGs were identified as DBA-derived candidate male-sterility genes. The DEDEGs and their technical validation corresponded with the RBA results in terms of male fertility-biased expression and genes with analogous functions. Through the functional annotation of our newly annotated transcripts, essential gene functions related to transmembrane transport, homeostasis, stimulus, nucleic acid binding, and Golgi and vesicles were established for watermelon. Furthermore, our set of 138 putative male sterility-related genes should prove valuable for further watermelon studies. Overall, we conclude that DBA provided a distinct result that could not be obtained using RBA with the current watermelon reference genome. Within the limits of the reference genome of Citrullus lanatus, individual application of DBA and RBA can be a valuable tool to complete the transcriptome. The reliable results obtained in this watermelon genome study can be useful for further watermelon transcriptome studies, showing the value of DBA in non-model plant organisms and providing clues to male sterility in watermelon. This integration of DBA and RBA thus contributes to the genome study of watermelon as well as to plant male-sterility research.
"Biology",
"Agricultural And Food Sciences"
] |
Optimized RNA Extraction and Northern Hybridization in Streptomycetes
Northern blot hybridization is a useful tool for analyzing transcript patterns. To get a picture of what really occurs in vivo, it is necessary to use a protocol allowing full protection of RNA integrity and unbiased recovery and transfer of the entire transcript population. Many protocols suffer from severe limitations, including only partial protection of RNA integrity and/or loss of small-sized molecules. Moreover, some of them do not allow an efficient and even transfer across the entire size range. These difficulties become more prominent in streptomycetes, where an initial quick lysis step is difficult to obtain. We present here an optimized northern hybridization protocol to purify, fractionate, blot, and hybridize Streptomyces RNA. It is based on grinding by a high-performance laboratory ball mill, followed by prompt lysis with acid phenol-guanidinium, alkaline transfer, and hybridization to riboprobes. Use of this protocol resulted in sharp and intense hybridization signals for long mRNAs that were previously difficult to detect.
Introduction
Northern hybridization, although less sensitive than other methods for RNA analysis, is the only technique providing information about the concentration of specific transcripts among a complex, overlapping RNA population. This information is required in the study of important events of gene expression regulation, such as RNA transcription termination, processing, and degradation. The quality of a northern hybridization protocol depends on three main points: protection of RNA integrity, unbiased recovery of the entire transcript population, and its efficient and even transfer to the filter.
Streptomycetes are widely studied antibiotic-producing soil bacteria that undergo a complex life cycle characterized by the differentiation of a vegetative and an aerial, spore-producing mycelium. Lysis of these and other gram-positive organisms is difficult to achieve. Most Streptomyces RNA purification protocols include an initial incubation step during which some RNA degradation pathways active in vivo may continue in vitro, often resulting in RNA degradation. To avoid these artifacts, an efficient lysis must be promptly achieved and quickly followed by addition of a fully denaturing agent that protects RNA integrity.
A second point that must be considered is the loss of low M.W. transcripts occurring when the RNA is purified by column chromatography.
Finally, blotting in the presence of a neutral buffer results in a non-efficient transfer of high M.W. transcripts to the filter.
Thus, we set up an optimized protocol for studying mRNA processing and decay in streptomycetes that could overcome these limits. This procedure allowed us to gain information on the expression of the Streptomyces coelicolor dnaK operon, indicating that earlier published data (see Fig. 2 in [1]) probably reflected a selective loss of specific transcripts, leading to a misinterpretation of what really occurs in vivo.
Materials and Equipment
RNeasy Kit (Qiagen)
DNase I (Invitrogen)
Chemicals were from Sigma where not otherwise specified.
Methods
Bacterial cultures. A dense spore suspension (100 μl; ca. 10⁸ spores ml⁻¹) of S. coelicolor M145 was plated directly onto sterile cellophane-covered R5 plates [2] and incubated at 30°C. After 20 h of growth, half of the culture plates were heat shocked at 42°C and half were left at 30°C. Fifteen minutes later the mycelium was promptly scraped from the cellophane, immediately frozen in liquid N2, and ground in a Mikrodismembrator II (Sartorius) at 2,000 rpm for 45 s (alternatively, mycelium can be stored at −80°C).
RNA extraction
The mycelium powder was treated according to [3] with some modifications. It was suspended in prewarmed (about 50°C) SolD (approximately 1.5 ml/100 mg mycelium); the solution was then transferred into a Corex glass tube, and 0.01 vol of 2 M sodium acetate, pH 4, and 1 vol of acid phenol were added, mixing thoroughly after each addition. After a few minutes, 0.2 vol of chloroform was added; after vortexing and a 15-min incubation, the sample was centrifuged at 15,000×g for 30 min. To the aqueous phase, 1 vol of isopropanol was added, and RNA was recovered by centrifugation after at least 1 h at −20°C, suspended in 0.33 vol of SolD, and precipitated with 1 vol of isopropanol at −20°C for 45 min. After centrifugation, RNA was suspended in 0.5× BTPE, treated with DNase I (Invitrogen), phenol/chloroform extracted, precipitated with 0.33 vol of 10 M ammonium acetate and 2.5 vol of ethanol at −20°C, and, after centrifugation, suspended in 0.5× BTPE.
Extraction using the RNeasy Kit (Qiagen) was carried out according to the manufacturer's instructions.
Northern
RNA was glyoxylated using 10 μl glyoxal mixture per 2 μl RNA [4]. The sample was incubated at 55°C for 1 h, chilled on ice, and loaded onto a 1.8% agarose gel in 1× BTPE. Fractionation was carried out at 5 V/cm.
After electrophoresis the gel was soaked in 30 mM NaOH for 5 min with gentle agitation, then put onto a downward transfer apparatus, using 20 mM NaOH as transfer solution. After 2 h, the gel was removed; the filter was air dried for 1 min, neutralized in BTPE/50 mM Tris-acetate pH 7, air dried, UV (254 nm) fixed for 15 s, and hybridized to a 32P-labeled riboprobe (spec. act. 7×10⁸ cpm/µg) in PerfectHyb Plus Hybridization Buffer (Sigma) at 73°C overnight. After hybridization, the membrane was washed twice in 0.4× SSPE at 73°C and twice in 0.1× SSPE at 67°C.

Northern RNA protocol:
1. Grow bacteria on cellophane-covered plates.
2. Harvest mycelium by scraping and promptly freeze it by immersion in liquid N2.
3. Grind the frozen sample in the Mikrodismembrator for 45 s at 2,000 rpm (alternatively, it can be stored at −80°C).
4. Extract RNA according to [3]. Prewarm SolD and use larger volumes (1.5 ml/100 mg mycelium).
5. Further purify the RNA by ethanol precipitation in the presence of 2.5 M ammonium acetate.
6. Glyoxylate and fractionate the RNA according to [5].
7. Soak the gel in 20 mM NaOH for 5 min with agitation.
8. Transfer the RNA to a nylon membrane by alkaline downward blotting according to [6].
9. Hybridize the filter to an antisense riboprobe.
Discussion of Key Steps in the Protocol
The first crucial step in the present protocol is the grinding of the frozen mycelium. Grinding the mycelium, previously frozen in liquid N2, by means of the Mikrodismembrator II (MD) reduces it, while maintained in a frozen state, to a fine and homogeneous powder in a very short time (45 s), thus preventing all degradative processes. The MD is a small apparatus in which a small steel ball is moved bilaterally inside a small grinding chamber; it has been successfully used for nucleic acid extraction from various biological samples ranging from tumoral tissues [7] to Bacillus subtilis cells [8]. To maintain the mycelium in a frozen state during grinding, the plastic chamber must be prechilled with liquid N2. This precaution allows fast, efficient, and highly reproducible grinding of Streptomyces mycelium. The RNA is then purified according to [3]. The use of prewarmed SolD to solubilize the mycelium powder prevents the freezing of this buffer and/or the precipitation of some of its components upon contact with the frozen sample, thus allowing prompt denaturation of cellular RNases. Our purification protocol does not make use of columns, thus preventing loss of low M.W. RNA, and the A260/280 of the purified RNA appears to be equivalent to that obtained by column procedures. Moreover, the use of an alkaline medium for transfer of the RNA to the filter [5] improves, by partial RNA hydrolysis, the transfer of high M.W. transcripts. We optimized this step by a preliminary soaking of the gel in NaOH, which allows the gel pH to quickly rise from 6.5 to at least 9.0. Moreover, the use of a NaOH concentration higher than that suggested by [5] improves glyoxal removal and hydrolysis of long RNA molecules, resulting in their efficient transfer to the filter and stronger hybridization signals.
We applied our protocol to the analysis of RNA complementary to the dnaK operon of S. coelicolor. Figure 1 shows the pattern of ethidium bromide-stained bands seen on the gel and on the filter after the RNA transfer. It clearly appears that the bands corresponding to the 4S and 5S RNA species are almost completely absent in the samples purified by the column procedure (C). However, the intensity of the bands corresponding to larger species is equivalent in the two sets of samples. Figure 2 shows the bands seen after hybridization of the filter to an antisense riboprobe complementary to the 3′ distal region of the dnaK operon. This probe hybridizes with transcripts ranging from 4.3 kb (corresponding to the entire operon [1]) to about 300 nt [6]. Their expression level is greatly enhanced when the temperature of growth is raised from 30°C to 42°C (heat shock treatment [1,6]). Comparing our results with those already published on the RNA population complementary to the dnaK operon during the heat shock response [1], it clearly appears that the use of our protocol results not only in a highly enhanced intensity of the bands associated with the largest transcripts but also in an inversion of the relative intensity of large and small transcripts. Figure 2 shows that the relative intensity of the 4.3 kb transcript is almost equivalent in samples purified according to either of the two protocols. However, the hybridization signal of the low M.W. transcripts in the RNA purified with our procedure is clearly much more intense than that seen in the homologous C sample. The latter, although purified by a protocol defined for total RNA extraction (claimed cut-off at 200 nt), results in loss of medium-low M.W. transcripts.

Fig. 1 Streptomyces RNA in agarose gel and on nylon membrane. Glyoxylated RNA (10 μg), purified by means of the Mikrodismembrator (MD) or column (C) protocol, was fractionated on an agarose gel and transferred to a nylon membrane as described in the "Materials and Equipment" and "Methods" sections. The two panels show, respectively, the bands visualized in the gel after fractionation and on the nylon filter after the RNA transfer. Pictures were taken under UV (260 nm) light.

Fig. 2 Hybridization to a probe complementary to the distal region of the dnaK operon. The nylon filter was hybridized to a riboprobe as described in the "Materials and Equipment" and "Methods" sections. The dnaK operon map shown in the upper part of the figure is that of [9], modified according to [6].
All this underscores the quality of our protocol for studying dynamic events of RNA metabolism in which the relative concentrations of high and low M.W. transcripts must be compared.
"Biology"
] |
Exosomal circLPAR1 Promoted Osteogenic Differentiation of Homotypic Dental Pulp Stem Cells by Competitively Binding to hsa-miR-31
Human dental pulp stem cells (DPSCs) hold great promise in bone regeneration. However, the exact mechanism of osteogenic differentiation of DPSCs remains unknown, especially the role played by exosomes. DPSCs were cultured and received osteogenic induction; then, exosomes from osteogenic-induced DPSCs (OI-DPSC-Ex) at different time intervals were isolated and sequenced for circular RNA (circRNA) expression profiles. Gradually increased circular lysophosphatidic acid receptor 1 (circLPAR1) expression was found in the OI-DPSC-Ex, coinciding with the degree of osteogenic differentiation. Meanwhile, results from osteogenic differentiation examinations showed that the OI-DPSC-Ex had an osteogenic effect on recipient homotypic DPSCs. To investigate the mechanism of exosomal circLPAR1 in osteogenic differentiation, we verified that circLPAR1 could competitively bind to hsa-miR-31, eliminating the inhibitory effect of hsa-miR-31 on osteogenesis and therefore promoting osteogenic differentiation of the recipient homotypic DPSCs. Our study showed that exosomal circRNA plays an important role in osteogenic differentiation of DPSCs and provides a novel way of utilizing exosomes for the treatment of bone deficiencies.
Introduction
Dental pulp stem cells (DPSCs) can differentiate into odontoblasts, chondrocytes, adipocytes, and osteoblasts [1,2]. Besides those properties, DPSCs have unique characteristics compared with other sources of mesenchymal stem cells (MSCs): they are more accessible with minimal trauma and have shown great potential in regenerative medicine for the treatment of various human diseases such as bone deficiencies [3]. The primary goal in regenerating new bone is to activate osteogenic differentiation of certain somatic stem cells such as bone marrow mesenchymal stem cells (BMSCs) [4]. Unfortunately, the exact mechanism of osteogenic differentiation is unclear, which might be one significant hurdle that has to be overcome in order to achieve optimal clinical outcomes of bone augmentation. With DPSCs as high-potential candidates for bone regeneration, the mechanism of their osteogenic differentiation has to be further studied for better utilization of DPSCs in bone regenerative medicine.
Discovered more than three decades ago, exosomes were initially regarded as a way of releasing waste products via tiny vesicles composed of plasma membrane [5]. In recent years, a dramatic increase in exosome studies has shed light on the profound functions of exosomes. The different types of RNAs in exosomes are key factors contributing to those functions, such as exosomal microRNA (miRNA), which has been shown to play important roles in osteogenic differentiation [6]. Early in 2014, variations in the expression of exosomal miRNAs derived from human BMSCs during osteogenic differentiation were found [7]. Later, changes in miRNA expression in exosomes from mineralizing osteoblasts were also found, and the exosomal miRNAs showed osteogenic promotion effects [8]. Recently, the MSC-derived exosomal miRNA let-7 was found to have a positive role in osteogenesis [9]. However, miRNA is not the only RNA inside the exosome [10], and the roles of other types of exosomal RNAs in osteogenic differentiation need to be further studied.
Circular RNA (circRNA) was a serendipitous discovery in a study of the human tumor suppressor gene DCC initially investigating exon connectivity by reverse transcription polymerase chain reaction (RT-PCR) [11]. However, the importance of circRNA did not gain attention in the biological field until a number of studies showed that circRNAs exist widely in different human cells [12], are specifically expressed in certain cell types [13], and are notably stable [12]. Recently, similar to the changed exosomal miRNA expression during osteogenic differentiation of certain cells, circRNA expression was also shown to change during osteogenic differentiation [14]. Circular RNA CDR1 had an miR-7 sponge effect that positively facilitated osteogenic differentiation of periodontal ligament stem cells [15]. Moreover, circRNA links to miRNA and messenger RNA (mRNA) as an axis [16]. However, owing to the intricate expression of circRNAs in osteogenic differentiation, the role of circRNAs in osteogenic differentiation has to be further investigated.
In this study, we examined the altered expression of exosomal circRNAs derived from DPSCs under osteogenic induction and further demonstrated that a circRNA affected osteogenic differentiation of DPSCs, aiming to uncover the mechanism of osteogenic differentiation of DPSCs for future clinical treatment in bone regeneration.
Materials and Methods
2.1. Isolation and Culture of DPSCs. The tooth removal surgeries were performed at the Affiliated Stomatology Hospital of Kunming Medical University, Kunming, Yunnan, China. The pulp tissues were harvested from one healthy patient, aged 20 years, with impacted third molars; the removed teeth were free of periodontal or endodontic problems. The removed teeth were stored in precooled PBS immediately, and the subsequent procedures were performed within 4 hours. Under aseptic conditions, each tooth was split and the pulp tissue was removed. Briefly, the pulp tissue was digested for 1 h at 37°C in a solution containing 3 mg/mL collagenase type I and 4 mg/mL dispase. After filtration through 70 μm cell strainers (Falcon; BD Labware), the cells were cultured at 37°C under 5% CO2 in Dulbecco's modified Eagle's medium (DMEM; Gibco) containing 20% mesenchymal cell growth supplement (Lonza, Inc.) and antibiotics (100 U/mL penicillin, 100 μg/mL streptomycin, and 0.25 μg/mL amphotericin B; Gibco). After 3 days of culture, floating cells were removed and the culture medium was replaced with fresh medium [17].
2.2. Identification of DPSCs. Second-passage DPSCs were cultured in a laser confocal dish at 5 × 10⁴ cells. When the cells reached 50% confluence, the culture was terminated and the cells were fixed with 4% paraformaldehyde for 15 minutes, permeabilized with PBS containing 0.5% Triton for 15 minutes, and blocked with 5% BSA for 1 hour. Then, rabbit anti-vimentin and mouse anti-cytokeratin antibodies (Abcam, US) were added and incubated overnight at 4°C, followed by three PBS washes. FITC-labeled sheep anti-mouse and Cy3-labeled sheep anti-rabbit antibodies were added and incubated at room temperature for 1 hour, followed by three PBS washes. Cell nuclei were stained with DAPI, and laser confocal microscopy was used for observation and photographing.
Second-passage DPSCs were digested with trypsin, washed three times with PBS, suspended, and counted. The cells were divided into flow tubes at 1 × 10⁵ cells per antibody. The cells were incubated with antibodies against CD34, CD44, CD45, CD90, and STRO-1 at room temperature for 45 minutes. Cell surface markers were identified by flow cytometry after PBS washing and resuspension.
Isolation of Osteogenic-Induced DPSC-Derived Exosomes (OI-DPSC-Ex). Exosomes secreted by DPSCs during 48 h of starvation without FBS were marked as EX0. Other groups of DPSCs were cultured in osteogenic induction medium (100 nM dexamethasone, 10 mM β-glycerophosphate, and 200 mM ascorbate phosphate in DMEM) with 15% exosome-free FBS (VivaCell, China); exosomes secreted by these osteogenic-induced DPSCs at days 5 and 7 were extracted and marked as EX5 and EX7. The EX0, EX5, and EX7 exosomes were sent for high-throughput transcriptome sequencing (Guangzhou RiboBio Co., Ltd., China) and applied in coculture with homotypic DPSCs. Exosome isolation followed the ultracentrifugation method [18]. Briefly, the collected culture medium was centrifuged at 3000g for 20 minutes and the supernatant was collected. After further centrifugation at 16,500g for 20 minutes, the supernatant was collected again and filtered through a 0.2-micron filter. The filtrate was then centrifuged at 100,000g for 70 minutes (CP 100WX, Hitachi, Japan), the supernatant was discarded, and the pellet was resuspended in 200 μL PBS.
BCA Test for Exosomes. For quantification and normalization of the exosome-containing PBS solutions, the protein in the solutions was quantified with a BCA Protein Assay Kit (Solarbio, China) following the manufacturer's instructions. The final exosome-containing PBS solutions used for subsequent experiments contained 1 mg/mL protein.
2.6. Nanoparticle Tracking Analysis (NTA) and Flow Cytometry Assay for Exosomes. The particle sizes of EX0, EX5, and EX7 were verified with a ZETASIZER Nano series Nano-ZS (Malvern, UK) according to the operating manual. For the flow cytometry assay, EX0, EX5, and EX7 were stained with CD63-FITC and CD81-FITC flow cytometry antibodies (BD Biosciences, San Jose, USA). Unstained EX0, EX5, and EX7 samples served as the negative control (NC). The flow cytometry assay was performed according to the operating instructions of the instrument (BD Accuri C6 flow cytometer).
DPSCs Cocultured with EX0 and EX7. In this study, the exosomes EX0, EX5, and EX7 were collected and sequenced. EX7 showed higher expression of circLPAR1 than EX5; therefore, we chose EX7 instead of EX5 for the subsequent experiments. Third-passage DPSCs were inoculated into 6-well plates at a density of 2 × 10⁵ cells per well after cell counting. Wells were randomly divided into an EX0-treated group, an EX7-treated group, an osteogenic induction medium group as a positive control (PC), and a normally cultured group as a negative control (NC). For the EX0- and EX7-treated groups, DPSCs were cocultured with exosome-containing medium (20 μL EX0 or EX7 in 1 mL DMEM+10% exosome-free FBS) without osteogenic induction medium for 14 days. For the PC group, osteogenic differentiation of DPSCs was induced by the osteogenic induction medium (described above) with 15% exosome-free FBS for 14 days. The NC group consisted of normally cultured DPSCs (DMEM+15% exosome-free FBS). The culture medium was replaced, and morphological changes were observed under the microscope, every 3 days.
Exosome Phagocytosis. Phagocytosis of exosomes was detected as follows. DPSCs were inoculated into 12-well plates at 3 × 10⁴ cells per well (cultured in 15% exosome-free FBS+DMEM). A 20 μL exosome solution containing 1 μg/μL protein was mixed with 4 μL PKH67 and 200 μL diluent and incubated at room temperature for 5 min. Next, 200 μL exosome-free FBS was added to terminate the reaction, and the exosomes were extracted. DPSCs were inoculated into 12-well plates with 10% exosome-free FBS+DMEM and divided into a control group and an experimental group. The control group received exosomes without PKH67 labeling in 15% exosome-free FBS+DMEM. The experimental group received PKH67-labeled exosomes in 15% exosome-free FBS, incubated at 37°C and 5% CO2 for 24 hours. Then, the original culture medium was removed and the cells were washed twice with PBS, fixed with 4% paraformaldehyde at room temperature for 30 min, washed twice with PBS, and stained with DAPI. Phagocytosis of exosomes was observed with a fluorescence microscope (TE2000U, Nikon, Japan).

Luciferase Reporter Assay. For luciferase assays, wild-type circLPAR1 (hsa_circ_0003611) and mutant binding-site genomic region fragments were synthesized by Sangon Biotech (Shanghai, China) and inserted into the pmirGLO vector. hsa-miR-31 mimics were supplied by Sangon Biotech. The activities of firefly luciferase and Renilla luciferase were detected with the Dual-Luciferase® Reporter Assay System (Promega). Every analysis was performed three times.
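Dual-luciferase readouts are conventionally reported as the firefly signal normalized to the Renilla transfection control, then compared across conditions; the paper does not detail its calculation, so this is a generic sketch with invented numbers.

```python
def dual_luciferase_ratio(firefly, renilla):
    """Firefly signal normalized to the Renilla transfection control."""
    return firefly / renilla

# Hypothetical wells: miR-31 mimics suppress the wild-type circLPAR1 reporter.
wt = dual_luciferase_ratio(firefly=1200.0, renilla=4000.0)
wt_with_mimic = dual_luciferase_ratio(firefly=450.0, renilla=4100.0)
print(f"relative activity with mimics: {wt_with_mimic / wt:.2f}")
```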
Alkaline Phosphatase (ALP) Staining and Alizarin Red Staining (ARS)
2.14. Statistical Analysis. All experiments were performed at least three times. The data are represented as the mean ± standard deviation (mean ± SD). Data were analyzed using Student's two-tailed t-test to compare the means of two groups or one-way ANOVA to compare the means of more than two groups, using SPSS 17.0 (IBM, Chicago, IL, USA). P < 0.05 was considered statistically significant.

Identification of DPSCs. Immunofluorescence staining results for the cultured cells are shown in Figure 1(a). Flow cytometry assay showed that the mesenchymal-specific markers CD44, CD90, and STRO-1 were positively expressed, with a cell surface antigen expression rate close to 85%. The hematopoietic and endothelial specific antigens CD34 and CD45 were negatively expressed (Figure 1(b)).
Identification of Exosomes. Exosomes isolated from the culture medium of starved DPSCs were marked as EX0, and those from the osteogenic-induced DPSC culture medium at days 5 and 7 were marked as EX5 and EX7. All groups of exosomes showed circular structures with a size range of 20-120 nm under TEM (Figure 2(a)). Nanoparticle tracking analysis revealed that the average particle size and main particle-size peak were within the expected range for exosomes (Figure 2(b)). The detected polydispersity index (PDI) was between 0.08 and 0.7, indicating moderate dispersion of the system and high confidence in the results. The expression of CD63 and CD81 in EX0, EX5, and EX7 was detected by flow cytometry. Compared with the NC group, both tested markers showed strong signals (Figure 2(c)).
Osteogenic-Induced DPSC-Derived Exosomes (OI-DPSC-Ex) Induced Osteogenic Differentiation of Recipient Homotypic DPSCs. Fluorescence microscopy demonstrated that the exosomes were phagocytized by DPSCs (Figure 3(a)). Subsequently, we tested the osteogenic effects of EX0 and EX7 on homotypic DPSCs. ALP activity in the PC group at days 7 (D7) and 14 (D14) was significantly higher than at the initial time point, day 0 (D0). The EX7 exosome induction group showed similar staining results (Figure 3(b)). ARS confirmed that the EX7 group produced calcium deposits in recipient DPSCs (Figure 3(d)). The highly elevated expression of the osteogenic induction genes RUNX2, col-1, and OCN further confirmed that EX7 promoted osteogenic differentiation (Figure 3(c)). In contrast to the EX7 group, the EX0 group showed results similar to the NC group in ALP staining, ARS, and osteogenic gene expression, and was therefore considered to have no osteogenic effect on DPSCs.
CircLPAR1 Was Obviously Upregulated in Osteogenic-Induced DPSC-Derived Exosomes. To investigate the mechanism by which OI-DPSC-Ex promote osteogenic differentiation of DPSCs, we performed high-throughput sequencing of circRNAs in EX0, EX5, and EX7. Through bioinformatic analysis of the sequencing results, we found 11 circRNAs that rose steadily from EX5 to EX7 (Figure 4(a)). By fluorescence quantitative PCR, we confirmed that the expression level of circLPAR1 (hsa_circ_0003611) in exosomes increased gradually with the extension of induction time (Figure 4(b)). Therefore, we selected circLPAR1 as the research object, hoping to explore the physiological role of exosomal circLPAR1.
hsa-miR-31 Was the Target of circLPAR1. Online prediction tools (https://circinteractome.nia.nih.gov/) indicated that circLPAR1 (hsa_circ_0003611) binds to hsa-miR-31, a miRNA with a significant inhibitory effect on osteogenic differentiation [19,20] (Figure 5(a)). To confirm the accuracy of this prediction, verification experiments were conducted. First, we detected the expression levels of hsa-miR-31 and circLPAR1 in EX7- and EX0-treated DPSCs. The results showed that the circLPAR1 expression level was upregulated in EX7-treated DPSCs, while hsa-miR-31 was decreased significantly (Figure 5(b)); there was a negative correlation between the two RNAs. Furthermore, we verified whether circLPAR1 was a direct hsa-miR-31 target using luciferase reporter assays. In DPSCs, cotransfection of the hsa-miR-31 mimics and the circLPAR1 plasmid suppressed the activity of the luciferase reporter but did not affect the mutant circLPAR1 group. This result demonstrated that circLPAR1 is a direct target of hsa-miR-31 (Figure 5(c)).
3.6. Exosomal circLPAR1 Induced Osteogenic Differentiation via Downregulation of hsa-miR-31. Based on the above results, we hypothesized that circLPAR1 induced osteogenic differentiation of DPSCs by competitively binding to hsa-miR-31. To verify this hypothesis, we transfected an hsa-miR-31 inhibitor and a circLPAR1 overexpression vector into DPSCs. ALP activity detection and alizarin red staining were performed on the 14th and 21st days after transfection. We found that both downregulation of hsa-miR-31 and upregulation of circLPAR1 promoted osteogenic differentiation of DPSCs, as shown by the ALP assay and ARS (Figures 6(a) and 6(b)). Western blot (WB) assays were used to detect the expression of SATB2, RUNX2, col-1, and OCN in the si-miR-31, circLPAR1 overexpression, and EX7-treated groups at 14 days. Compared with the NC group, the expression of osteogenic differentiation-related proteins was elevated in all three experimental groups (Figure 6(c)).
Discussion
Decoding the mystery of osteogenic differentiation is a cardinal step toward achieving predictable bone regeneration outcomes. However, owing to the variety of trigger factors, the complexity of signaling pathways, and other issues, many questions about osteogenic differentiation remain unanswered. In recent decades, DPSCs have been chosen as a promising cell source for regenerative medicine [3]; however, the clinical application of DPSCs is still far from ideal [21]. A major reason might be the unclear mechanism of osteogenic differentiation of DPSCs.
The discovery of the exosome has opened a new direction in cell research, particularly cell-cell communication [22]. Moreover, exosomes play important roles in cellular differentiation [23]. On the one hand, the biomolecular messages inside exosomes alter synchronously with the stages of osteogenic differentiation: Xu et al. reported alteration of exosomal miRNA expression during osteogenic differentiation, with the differential expression correlating with the degree of osteogenic differentiation [7]. On the other hand, the altered biomolecular messages loaded in exosomes have specific biological effects closely related to the ongoing osteogenic differentiation: Wang et al. not only showed the change in exosomal miRNA expression during osteogenic differentiation but also demonstrated that exosomes from various stages of osteogenic differentiation had different osteogenic effects on homotypic recipient cells [6]. In addition, it was found that exosomes from osteogenic-induced stem cells from human exfoliated deciduous teeth contained mRNA and proteins of Wnt3a and BMP2, which showed osteogenic effects on periodontal ligament stem cells (PDLSCs) [24]. However, those studies concerned exosomal miRNA, mRNA, and proteins related to osteogenic differentiation.
Recently, circRNAs have been shown to affect osteogenic differentiation. Lloret-Llinares et al. found differential expression of certain circRNAs during osteogenic differentiation of MC3T3-E1 cells by RNA sequencing (RNA-seq), and they demonstrated a circ19142/circ5846-targeted miRNA-mRNA axis [25]. A thorough analysis of circRNA expression profiles during osteogenic differentiation of PDLSCs revealed that more than one hundred circRNAs changed expression significantly, in a stage-specific manner [26]. Forty-three circRNAs were found to change expression during osteogenic differentiation of mouse adipose-derived stromal cells, of which two (mmu_circRNA_013422 and mmu_circRNA_22566) were upregulated and showed an miRNA-sponge effect on miR-338-3p [27]. Zhang et al. verified a link between circIGSF11 and miR-199b-5p: the downregulation of circIGSF11 led to enhanced osteogenesis through upregulation of miR-199b-5p expression [14]. Those studies were important in showing the role of circRNAs in osteogenesis. However, to date, very few studies have considered exosomal circRNA, the special cargo of exosomes that affects recipient cells during osteogenesis. Hence, our study is the first to report the altered expression of exosomal circRNAs of DPSCs undergoing osteogenic differentiation and to identify circLPAR1 as playing a promotive role in osteogenic differentiation of DPSCs.
We thoroughly examined all exosomal circRNAs by RNA-seq at days 5 (EX5) and 7 (EX7) after osteogenic induction and at day 0 (EX0, the exosomes from starved DPSCs without osteogenic induction). Among the upregulated exosomal circRNAs, circLPAR1 was continuously upregulated with induction time in the comparisons of EX5 and EX7 against EX0, and EX7 had higher circLPAR1 expression than EX5. Therefore, EX7 was selected for testing its osteogenic effects. EX7 showed effects similar to the positive control group, whereas the EX0 group showed no effect on osteogenesis. The high level of circLPAR1 expression in EX7 is likely the key to its promotion of osteogenesis in DPSCs. LPAR1, the host gene of circLPAR1, encodes a G-protein-coupled receptor commonly expressed in normal human tissues [28]; it has been studied in the cancer field recently [29-31], but its effect on osteogenesis remains mostly unknown. The results from bioinformatic prediction and the luciferase reporter assay showed that circLPAR1 has a strong binding capacity for hsa-miR-31, a proven miRNA inhibitor of osteogenic differentiation [32-34]. Moreover, hsa-miR-31 inhibits the osteogenic differentiation of mesenchymal stem cells by targeting SATB2 [20,34,35], a protein with a significant role in bone biology that is positively linked to the expression levels of RUNX2, OPN, OSX, OCN, etc. [12,19,20,35-37]. In this study, the results confirmed that the expression levels of SATB2 and other osteogenic differentiation-related genes were upregulated in the EX7-treated, si-miR-31-transfected, and circLPAR1-overexpressing groups.
Our study identified that circLPAR1 was highly expressed in the exosomes derived from osteogenic-induced DPSCs. Large amounts of circLPAR1 then entered the recipient homotypic DPSCs and bound to hsa-miR-31, the miRNA targeting the gene SATB2. Therefore, circLPAR1 eliminated the negative effect of hsa-miR-31 on osteogenic differentiation of DPSCs. Subsequently, the expression level of SATB2 increased, leading to the upregulation of its downstream genes related to osteogenic differentiation, such as RUNX2. The increased RUNX2 promoted the occurrence and development of osteogenic differentiation (Figure 7).
The highlight of this study was the investigation of the effects of OI-DPSC-Ex on homotypic DPSCs, which revealed one possible role played by exosomes during osteogenic differentiation. We tentatively consider the effect of exosomes derived from induced cells further inducing homotypic cells to be a self-reinforcing loop, a chain reaction, or a "turbocharger effect": upon successful completion of differentiation, these exosomes further amplify the induction effect on the secreting cells themselves and on surrounding cells.
Conclusion
This study demonstrated increasing circLPAR1 expression in the exosomes derived from DPSCs during osteogenic differentiation. These exosomes had an osteogenic effect on recipient homotypic DPSCs via exosomal circLPAR1, which upregulated SATB2 expression by competitively binding to hsa-miR-31. Our findings uncovered the exosomal circRNA expression profile during osteogenic differentiation of DPSCs and revealed a new way of understanding the role exosomes play in osteogenic differentiation, providing a novel way of utilizing exosomes for the treatment of bone deficiencies.
Data Availability
The data are available upon reasonable request.
Conflicts of Interest
There are no conflicts of interest from any author.
"Medicine",
"Biology"
] |
A brief bout of exercise alters gene expression and distinct gene pathways in peripheral blood mononuclear cells of early- and late-pubertal females
Studies show that brief exercise alters circulating neutrophil and peripheral blood mononuclear cell (PBMC) gene expression, ranging from cell growth to both pro- and anti-inflammatory processes. These initial observations were made solely in males, but whether PBMC gene expression is altered by exercise in females is not known. Ten early-pubertal girls (8-11 yr old) and 10 late-pubertal girls (15-17 yr old) performed ten 2-min bouts of cycle ergometry (∼90% peak heart rate) interspersed with 1-min rest intervals. Blood was obtained at rest and after exercise, and microarrays were performed for each individual subject. RNA was hybridized to Affymetrix U133 Plus 2.0 Arrays. Exercise induced significant changes in PBMC gene expression in early- (1,320 genes) and late (877 genes)-pubertal girls. The expression of 622 genes changed similarly in both groups. Exercise influenced a variety of established gene pathways (EASE ≤ 0.04) in both older (6 pathways) and younger girls (11 pathways). Five pathways were the same in both groups and were functionally related to inflammation, stress, and apoptosis, such as natural killer cell-mediated cytotoxicity, antigen processing and presentation, B cell receptor signaling, and apoptosis. In summary, brief exercise alters PBMC gene expression in early- and late-pubertal girls. The pattern of change involves diverse genetic pathways, consistent with a global danger-type response, perhaps readying PBMCs for a range of physiological functions from inflammation to tissue repair that would be useful following a bout of physical activity.

Comparison of Microarray Results to RT-PCR
We used a paired t-test to determine the effect of exercise on the six genes selected in the 10 early-pubertal and 10 late-pubertal girls.
IN THIS STUDY we examined for the first time the effects of exercise on peripheral blood mononuclear cell (PBMC) gene expression in early-and late-pubertal girls. There are mounting data that leukocytes are not only involved in eradicating pathogens but also play a role in wound repair, muscle growth, other key developmental processes, and childhood diseases like asthma and arthritis (12,13,27,31). Moreover, brief exercise can trigger bronchoconstriction and anaphylaxis, both serious conditions in which the inflammatory response appears to be dysregulated (7). Our knowledge of the acute effects of exercise on leukocyte gene expression in humans is still rudimentary.
In both adults and children, a leukocytosis occurs following brief, heavy exercise (29). In addition, we now know from recent studies performed in this and other laboratories that there are remarkable changes in the gene expression profile pattern of the circulating leukocytes rapidly accompanying exercise (2,4,9,25). The impact of exercise on the immune response is increasingly seen as one of the fundamental mechanisms through which levels of physical activity modulate health, growth, and disease risk in both children and adults.
We hypothesized, first, that exercise would stimulate gene expression in PBMCs in girls and, second, that the genomic response would include alterations in pro- and anti-inflammatory cytokines, stress factors, and growth mediators. The analysis of individual gene responses through techniques like microarray is becoming a powerful tool for understanding the mechanisms that control the physiological response of immune cells to a variety of perturbations. However, the enormity of the data generated by such analyses can be perplexing, and it is increasingly recognized that the functional, physiological significance of changes in gene expression may be better understood by examining the coordinated modulation of groups of genes acting in discrete pathways. Since very little is yet known about individual leukocyte gene expression in response to exercise, and even less about specific gene profiles in particular, one major objective of this study was to analyze PBMC gene responses to exercise in terms of pathways.
In general, there are fewer studies focused on immune responses to perturbations like exercise in females compared with males. However, it is well recognized that both gender and puberty can influence immune function (11,17,28), and there is a small but growing body of literature suggesting that leukocyte gene expression may also be influenced by these factors (14). The primary focus of the present study was to continue to test the hypothesis that exercise alters circulating leukocyte gene expression in children as well as in adults. We also began to explore whether puberty altered leukocyte gene expression in response to exercise by recruiting females specifically in either the early or late pubertal stages of childhood and adolescence.
Subjects
Twenty healthy females participated in this study (Table 1). Ten early-pubertal participants (age range 8–11 yr) and 10 late-pubertal participants (age range 15–17 yr) comprised the study sample. We used a validated self-administered questionnaire that has been widely used to assess pubertal status (23,26). Using this tool, only girls who were at Tanner stage 1 were included in the early-pubertal group, and only those at Tanner stage 5 were included in the late-pubertal group. In the late-pubertal girls, we made no attempt to perform exercise selectively in either the luteal or follicular phase of the menstrual cycle. Individuals participating in competitive sports and those with a history of any chronic medical condition or use of any medication were excluded from participation. The Institutional Review Board at the University of California, Irvine approved the study, and written informed assent and consent were obtained from all participants and their parents upon enrollment.
Anthropometric Measurements
Standard calibrated scales and stadiometers were used to determine height and body mass. Dual-energy X-ray absorptiometry (DXA) was used to measure body fat, expressed as a percentage.
Measurement of Fitness
Each subject performed a ramp-type progressive cycle ergometer test using the SensorMedics metabolic system (Ergoline 800S, Yorba Linda, CA). Subjects were vigorously encouraged during the high-intensity phases of the exercise protocol. Gas exchange was measured breath-by-breath, and the anaerobic (lactate) threshold and peak VO2 were calculated using standard methods (6).
Exercise Protocol
At least 48 h but not more than 7 days following completion of the ramp test, each subject performed exercise consisting of ten 2-min bouts of constant-work-rate cycle ergometry, with a 1-min rest interval between bouts. The work rate was individualized for each girl and was calculated to correspond roughly to 50% of the difference between the work rate at the anaerobic threshold and that at peak oxygen uptake (as determined noninvasively from the ramp-type test), as expressed in the formula below. This resulted in a relative work rate that was equivalent across study subjects. We have used this protocol in the past, first to more closely mimic the "stop-start" nature of spontaneous physical activity (1), and second to ensure that the exercise input was standardized to physiological indicators of each subject's exercise capacity (32).
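In other words, the target work rate for each bout followed the standard "delta 50" prescription (our rendering of the sentence above; the notation is ours, not the authors'):

\[
\mathrm{WR}_{\text{bout}} = \mathrm{WR}_{\mathrm{AT}} + 0.5\,\bigl(\mathrm{WR}_{\text{peak}} - \mathrm{WR}_{\mathrm{AT}}\bigr)
\]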
Blood Sampling and Analysis
An indwelling catheter was inserted into the antecubital vein. A baseline sample was taken 30 min after the placement of the catheter and before the onset of exercise. We waited 30 min to ensure that measurable physiological parameters of stress (e.g., heart rate and blood pressure) were at baseline levels. Subjects then completed the ten 2-min bouts of constant-work-rate exercise, and additional blood samples were obtained immediately after exercise (a total of 40 samples). The plasma was separated and stored at −80°C and thawed only once for analysis. Complete blood counts (CBC) for white blood cell analysis were obtained by standard methods from the clinical hematology laboratory at the University of California, Irvine.
PBMC Separation
PBMCs were isolated using OptiPrep density gradient medium (Sigma). Standard and consistent practices were employed in an effort to minimize any potential changes in mRNA expression levels due to manipulation of PBMCs. The duration from blood draw to stabilization of RNA never exceeded 60 min.
RNA Extraction
Total RNA was extracted using TRIzol reagent (Gibco BRL Life Technologies, Rockville, MD) and purified using the RNeasy Midi column method (Qiagen, Valencia, CA). RNA pellets were resuspended in diethyl pyrocarbonate-treated water. RNA integrity was assessed (before beginning target processing) by running a small amount of each sample (typically 25–250 ng/well) on an RNA Lab-On-A-Chip (Caliper Technologies, Mountain View, CA) evaluated on an Agilent Bioanalyzer 2100 (Agilent Technologies, Palo Alto, CA).
Preparation of Labeled cRNA
The detailed protocol for preparation and microarray processing was performed as recommended by the manufacturer and is available in the Affymetrix GeneChip Expression Analysis Technical Manual (Affymetrix, Santa Clara, CA). Briefly, 4 μg of total RNA was used as a template for double-stranded cDNA synthesis. Single-stranded and then double-stranded cDNA was synthesized from the poly(A) spike-in controls and the mRNA present in the isolated total RNA using the SuperScript Double-Stranded cDNA Synthesis Kit (Invitrogen, Carlsbad, CA) and a T7-oligo(dT) primer (Integrated DNA Technologies, Coralville, IA) that contains a T7 RNA polymerase promoter site added to its 3′-end. A portion of the resulting double-stranded cDNA was used as a template to generate biotin-tagged cRNA in an in vitro transcription (IVT) reaction, using the Affymetrix GeneChip IVT Labeling Kit.
Hybridization to Microarray
A total of 15 μg of the resulting biotin-tagged cRNA was fragmented to an average strand length of 100 bases (range 35–200 bases) following prescribed protocols (Affymetrix GeneChip Expression Analysis Technical Manual). Subsequently, 10 μg of this fragmented target cRNA was hybridized at 45°C with rotation for 16 h (Affymetrix GeneChip Hybridization Oven 640) to the probe sets present on Affymetrix U133 Plus 2.0 arrays. The GeneChip arrays were washed and then stained with streptavidin-phycoerythrin (SAPE) on an Affymetrix Fluidics Station 450, followed by scanning on a GeneChip Scanner 3000. Microarrays were performed for each individual subject, not on pooled samples.
Real-time PCR (RT-PCR). For confirmation of microarray gene expression findings, RT-PCR was carried out on six genes selected from the natural killer cell-mediated cytotoxicity pathway (FASLG, GZMB, PRF1, CASP3, CD247, and KLRD1). This pathway was significantly altered by exercise in both the early-and late-pubertal girls. One microgram of RNA was reverse transcribed using the High Capacity cDNA Reverse Transcription Reagents kit (Applied Biosystems) according to the manufacturer's instructions, using random primers in a 20-l reaction. The RT-PCR analysis was performed with the ABI PRISM 7000 Sequence Detection System (Applied Biosystems) by using TaqMan Universal PCR Master Mix and Assays-on-Demand Gene Expression probes (Applied Biosystems) (FASLG: assay ID, Hs00181226_g1; GZMB: assay ID, Hs00188051_m1; PRF1: assay ID, Hs00169473_m1; CASP3: assay ID, Hs00263337_m1; CD247: assay ID, Hs00609515_m1; KLRD1: assay ID, Hs00233844_m1). Actin beta was used as an endogenous control.
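With an endogenous control such as beta-actin, relative expression from TaqMan assays is conventionally computed with the comparative-Ct method. The paper does not state its quantification formula, so the following is the standard convention rather than a detail taken from the methods:

\[
\Delta C_t = C_t^{\text{target}} - C_t^{\text{ACTB}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{post}} - \Delta C_t^{\text{pre}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta C_t}
\]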
Data Analysis
Microarray analysis. The results were quantified and analyzed using GCOS 1.4 software (Affymetrix) with default values (Scaling Target Signal Intensity = 500). The microarray data were analyzed using ArrayAssist version 5.2.2 (Stratagene). We normalized the data using GC-RMA. Only probe sets that reached a signal value ≥20 in at least one array and a present call by MAS5 criteria in at least 30 arrays were selected for further analysis. Overall, 24,089 of the 54,675 probe sets represented on the array met these criteria. The microarray CEL files and GC-RMA normalized data have been deposited in the GEO database (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE14642). We further applied BRB-ArrayTools software version 3.6.0 (http://linus.nci.nih.gov/brb/tool.htm) to determine significantly changed probe sets from before to after exercise for early-pubertal and late-pubertal girls separately. Student's paired t-test was first applied to each probe set, and significantly changed probe sets were then identified with permutation tests (30). With 95% confidence, the final list of significantly changed probe sets in each group has less than a 5% false discovery rate (FDR). The change in gene expression from preexercise to postexercise was additionally compared between early- and late-pubertal girls using a two-sample t-test with FDR adjustment.

[Table 1 legend: Values are means ± SE. Peak VO2, peak oxygen uptake. *Significant difference between early- and late-pubertal girls (P ≤ 0.0002).]
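The per-gene testing step can be sketched as follows. This is a minimal illustration of a paired test with Benjamini-Hochberg FDR control run on synthetic data, not the BRB-ArrayTools permutation procedure actually used in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical expression matrices: rows = probe sets, columns = subjects.
rng = np.random.default_rng(0)
n_genes, n_subjects = 24089, 10
pre = rng.normal(8.0, 1.0, size=(n_genes, n_subjects))   # e.g., log2 GC-RMA values
post = pre + rng.normal(0.0, 0.3, size=(n_genes, n_subjects))

# Paired t-test applied to each probe set across subjects.
t_stat, p_val = stats.ttest_rel(post, pre, axis=1)

# Benjamini-Hochberg step-up procedure at FDR < 0.05.
order = np.argsort(p_val)
scaled = p_val[order] * n_genes / (np.arange(n_genes) + 1)
significant = np.zeros(n_genes, dtype=bool)
below = np.nonzero(scaled <= 0.05)[0]
if below.size:
    significant[order[: below.max() + 1]] = True
print(f"{significant.sum()} probe sets pass FDR < 0.05")
```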
Gene annotation. The final list of significantly changed probe sets was additionally analyzed using the functional annotation tools provided by DAVID, the Database for Annotation, Visualization and Integrated Discovery (http://david.abcc.ncifcrf.gov), to classify the genes into pathways using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. Only pathways with an Expression Analysis Systematic Explorer (EASE) score < 0.044 are presented in this analysis. The EASE score is a modified Fisher exact P value used in the DAVID system for gene-enrichment analysis. An EASE score P value = 0 represents perfect enrichment, and a P value ≤ 0.05 is considered gene enrichment in a specific annotation category (http://david.abcc.ncifcrf.gov/helps/functional_annotation.html#summary).
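DAVID's "modified" Fisher test makes the enrichment estimate conservative by removing one gene from the list-versus-category overlap before computing the exact test. A minimal sketch follows; the counts are hypothetical, chosen only to resemble the pathway sizes reported later in the paper:

```python
from scipy.stats import fisher_exact

def ease_score(list_hits, list_total, pop_hits, pop_total):
    """EASE score: Fisher exact p-value with one gene removed from the
    list hits, which penalizes categories supported by very few genes."""
    table = [[max(list_hits - 1, 0), list_total - list_hits],
             [pop_hits - list_hits,
              pop_total - pop_hits - (list_total - list_hits)]]
    return fisher_exact(table, alternative="greater")[1]

# Hypothetical example: 22 of 1,320 exercise-responsive genes fall in a
# pathway containing 130 of the ~24,000 analyzed genes.
print(ease_score(22, 1320, 130, 24089))
```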
Comparison of Microarray Results to RT-PCR
We used a paired t-test to determine the effect of exercise on the six genes selected in the 10 early-pubertal and 10 late-pubertal girls.
Physiological Data
The physiological data are presented as means and SE. A two-sided paired t-test was applied to test changes from before to after exercise within each group, and a two-sample t-test was applied to examine group differences. All analyses were done using SAS 9 (Cary, NC), and the significance level was set at 0.05.
Anthropometric and Physiological Characteristics
The anthropometric and physiological characteristics of the 20 participants appear in Table 1. The subjects were of normal fitness (early-pubertal girls: 93.5 ± 3.7% of predicted VO2max; late-pubertal girls: 98.8 ± 6.7% of predicted VO2max) (5).
PBMC Response to Exercise
As shown in Fig. 2, the number of total WBCs, lymphocytes, and monocytes was significantly elevated at peak exercise in both early-pubertal girls [WBC increase 2,760 ± 466 (51%, P = 0.0002), lymphocytes 1,262 ± 280 (60%, P = 0.0015), and monocytes 212 ± 52 (92%, P = 0.0026)] and late-pubertal girls [WBC 2,800 ± 414 (52%, P = 0.0001), lymphocytes 1,174 ± 321 (72%, P = 0.0064), monocytes 287 ± 86 (94%, P = 0.012)]. There was no significant difference in the magnitude of the cell-count increase between late- and early-pubertal girls.

Fig. 1. Assessment of exercise intensity during the ten 2-min bouts used to gauge the peripheral blood mononuclear cell (PBMC) genomic response in early- (white bars) and late-pubertal (gray bars) girls. The values represent the means ± SE of heart rate, work rate, and oxygen uptake (VO2) expressed as percentages of the individual participant's peak values obtained in an earlier session from the progressive exercise protocol. As can be seen, the relative exercise intensity was virtually the same in the 2 groups. Lactate increased in both groups (*P < 0.05).

Fig. 2. Effect of exercise on PBMCs in the early- and late-pubertal girls. Data represent the means ± SE before (white bars) and immediately after exercise (gray bars) for all leukocytes, lymphocytes, and monocytes. Hatched bars represent the late-pubertal girls. *Significant (P < 0.05) increase from before to after exercise.
Effect of Exercise on Gene Expression
Exercise induced significant changes in PBMC gene expression in both the early- and late-pubertal girls. Exercise altered 877 genes (611 up, 266 down; FDR < 0.05) in the late-pubertal girls and 1,320 genes (829 up, 491 down) in the early-pubertal girls. Detailed lists of the genes altered by exercise in early- and late-pubertal girls are shown in Supplemental Tables S1 and S2, respectively, which are available with the online version of this article. Figure 3 schematically compares the magnitude of the PBMC gene change in the two groups and shows the 622 genes that were significantly affected in the same direction (420 up, 202 down) in both groups (Supplemental Table S3).
We characterized the genes affected by exercise in each group using KEGG gene pathway analysis. We found 11 pathways in the early-pubertal girls and 6 pathways in the late-pubertal girls that were significantly affected by exercise (EASE score < 0.044) (Table 2). Five pathways were the same in both groups. A complete detailed list of the individual genes in the five common pathways is shown in Table 3. Many of the pathways identified are involved in innate or early immune responses.
Comparison between Early- and Late-Pubertal Females
There were 255 genes affected by exercise in the late-pubertal girls that were not affected in the early-pubertal girls, and conversely, there were 698 genes affected by exercise in the early-pubertal girls that were not affected in the late-pubertal girls (Fig. 3). However, when we applied the two-sample t-test analysis described above, we found no significant differences (10% FDR) in individual PBMC gene expression in response to exercise between early- and late-pubertal girls.
RT-PCR Corroboration of Specific Genes
The six genes selected for RT-PCR were all found to be significantly upregulated by microarray analysis. Similarly, RT-PCR analysis in all 20 participants (early- and late-pubertal girls) revealed significant changes in these same genes (P < 0.004, by paired t-test). As in our previous studies (4,24,25), the RT-PCR changes paralleled those found by microarray analysis.
DISCUSSION
We found that a relatively brief bout of exercise, designed to mimic more natural patterns of physical activity in children, induced a remarkable change in PBMC gene expression in healthy females. These observations were made using a stringent statistical analysis of the microarray data to limit the possibility of false-positive results. In the early-pubertal girls, the relatively brief but heavy exercise protocol altered 1,320 genes, whereas in the late-pubertal girls the expression of 877 genes was changed. The expression of 622 genes changed similarly in both groups (Fig. 3, Supplemental Table S3). We also found that exercise influenced a variety of established gene pathways (EASE < 0.044) in both older (6 pathways) and younger girls (11 pathways, Table 2). The five pathways that were the same in both groups, and their individual genes, are shown in Table 3. The majority of the significant pathways were related to inflammation, stress, and apoptosis, consistent with previous work from this and other laboratories on gene expression following exercise in circulating leukocytes in humans.

Fig. 3. Comparison of the effect of exercise on PBMC genes in early- and late-pubertal girls, showing the relative magnitude of the effect (circles) and the size of the overlap (shaded area). There were 622 PBMC genes that were significantly altered by exercise in both groups. Tables S1-S3 are available with the online version of this article.
Two alternative mechanisms might explain the robust effect of exercise on PBMC gene pathways: first, a direct effect of exercise on gene expression within the population of circulating PBMCs; and second, an indirect effect, namely the mobilization into the circulation of PBMCs that were expressing genes differently in their marginal pools (e.g., lung, lymphatics), or that differed in maturational status from the PBMCs already in the circulating blood. In human studies, it would be unfeasible to sample gene expression in the marginal pools of PBMCs. Nonetheless, our data do permit us to draw some inferences concerning possible mechanisms.
Consider first, for example, the gene with the largest increase in expression: fas ligand (FASLG). The 4.4-fold increase that we observed for this gene could occur through mobilization alone only if the marginal PBMCs that entered the circulation during exercise (N.B., we observed about a 65% increase in cell number) were expressing FASLG at levels ~10-fold greater than the circulating PBMCs. The decrease in expression of another gene, fusion (FUS), which had a fold change of 0.4, is possibly even more definitive: if, in the extreme case, the marginal PBMCs had no detectable expression of this gene, the lowest possible postexercise fold change would be 0.6. Thus it is reasonable to infer from the current data that exercise has at least some direct effect on gene expression even in the circulating population of PBMCs.
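The arithmetic behind both statements is a simple two-pool mixing estimate (our reconstruction of the authors' back-of-envelope reasoning; E_c and E_m are our symbols for mean expression in the circulating and marginal pools, with a 65% increase in cell number):

\[
\frac{E_c + 0.65\,E_m}{1.65\,E_c} = 4.4
\;\Longrightarrow\;
E_m \approx 9.6\,E_c ,
\qquad
\text{and for FUS:}\quad
\min \frac{E_c + 0.65 \cdot 0}{1.65\,E_c} = \frac{1}{1.65} \approx 0.6 .
\]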
Particularly intriguing was our finding of what appeared to be activation of the natural killer (NK) cell-mediated cytotoxicity pathway. When upregulated, the genes in this pathway enable circulating leukocytes to identify, attach to, and ultimately kill cells perceived as foreign or dangerous (NK cells express an array of activating cell surface receptors that can trigger cytolytic programs, as well as cytokine or chemokine secretion). Although this particular pathway was so named because of work done in NK cells, many of the same genes are expressed in cytotoxic lymphocytes as well (16). Consequently, the changes in gene expression that we identified most likely did not result from alterations of gene expression solely in the relatively small number of NK cells but also in CD8+ lymphocytes.
In the early-pubertal girls, 22 genes in the NK cell-mediated cytotoxicity pathway were altered following exercise (21 had higher expression), and in the late-pubertal girls 18 genes in this pathway had higher expression after exercise (Table 3). Many of the genes that had higher expression after exercise involved cell receptors (10 receptors in the early-pubertal girls and 7 receptors in the late-pubertal girls) (Table 3). NK surface receptors can trigger cytolytic programs, as well as cytokine or chemokine secretion associated with inducing apoptosis in target cells. Indeed, there is evidence suggesting that exercise does induce physiological activation of innate immune cells like NK cells (18,19), providing a possible link between the exercise-induced changes in gene expression and functional outcomes.
The NK cell-mediated cytotoxicity pathway includes processes in which potential target cells must be recognized as "foreign"; consequently, it is not surprising that the antigen processing and presentation pathway, too, was altered by exercise (early-pubertal girls, 16 genes; late-pubertal girls, 13 genes; Table 3). In the early-pubertal group, 10 genes within this pathway had higher expression after exercise, all of them within the class I major histocompatibility complex (MHC I) pathway that inhibits the cytotoxic activity of NK cells. Similarly, six genes in the antigen processing and presentation pathway within the class II major histocompatibility complex had lower expression after exercise, consistent with a muting of cytotoxic activity in CD4 T cells. We speculate that the seeming paradox, namely that activation by exercise of genes that attenuate the killing function of NK and cytotoxic T cells occurs at the same time the numbers of these cells increase in the circulation, is another manifestation of the ability of exercise to simultaneously stimulate both pro- and anti-inflammatory leukocyte function. The regulation of leukocyte function in response to exercise might parallel that of circulating cytokines, as noted earlier by Ostrowski et al. (22): ". . . cytokine inhibitors and anti-inflammatory cytokines restrict the magnitude and duration of the inflammatory response to exercise."

[Table 3 legend: LG, late-pubertal girls. *Fold change is defined as the geometric mean of expression levels of After/Before. A fold change of 0.7 indicates that the expression level after exercise is about 70% of the level before exercise; a fold change of 1.6 indicates that it is about 160% of the level before exercise.]

In the present study, heat shock 70-kDa protein 1a (HSPA1B), one of the genes in the antigen processing and presentation pathway, had higher expression immediately after exercise in both early- (2.90-fold change) and late-pubertal girls (3.37-fold change). These data corroborate the growing body of evidence implicating the heat shock protein family of chaperone genes and proteins in the cellular response to exercise. Most studies have focused on muscle tissue, in which exercise is now known to elicit a robust Hsp gene and protein response (20). Intriguingly, there are also data from this and other laboratories pointing toward an Hsp gene and protein response in circulating leukocytes following exercise as well (4,8). For example, in our own earlier studies in PBMCs in young men and in early- and late-pubertal boys (25), we also found that members of the HSP gene regulatory family were altered by exercise. HSPs are increasingly seen as early responders in the signaling that occurs in the systemic immunological response to "danger" (21). Hsp70, one of the HSPs found in the circulation, is emerging as a pluripotent cytokine and mediator (3). The consistent increase in Hsp70 gene expression in PBMCs in early- and late-pubertal girls suggests that its role is likely to be in modulating activation of leukocytes early in the danger or stress response paradigm. The functional consequences of this exercise-altered gene expression in Hsp70 have yet to be elucidated.
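The fold-change convention in the Table 3 legend can be written out explicitly (our rendering; x_k denotes a gene's expression level in subject k):

\[
\mathrm{FC} = \left(\prod_{k=1}^{n}\frac{x_k^{\text{after}}}{x_k^{\text{before}}}\right)^{1/n}
= \exp\!\left(\frac{1}{n}\sum_{k=1}^{n}\ln\frac{x_k^{\text{after}}}{x_k^{\text{before}}}\right)
\]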
We found a general suppression of the B cell receptor signaling pathway. Circulating B cells are activated through stimulation of their receptors (by interaction with APCs) and then differentiate into either circulating memory B cells or immunoglobulin-producing plasmacytes. The impact of exercise on B cell immunoglobulin production is not fully elucidated; however, several studies suggest that even moderate exercise (e.g., ~35% peak VO2) leads to suppression of immunoglobulin production (10). Our finding that exercise attenuated the B cell receptor signaling pathway is consistent with these previous observations and, again, suggests the possibility of regulation of leukocyte function at the gene expression level following exercise.
We attempted to examine possible differences and similarities in the gene expression response to exercise between the early- and late-pubertal girls. One remarkable finding was the consistency of the fold change in the group of genes that were significantly influenced by exercise in both groups (Fig. 4). Clearly, there is a well-controlled pattern of gene expression in leukocytes in response to exercise that is manifest in both early- and late-pubertal girls.
In examining Fig. 3, we wondered whether, within the set of genes that did not overlap (i.e., genes whose expression significantly changed in one group following exercise but not the other), there were examples of genes that differed significantly between the early- and late-pubertal girls. Applying rigorous statistical modeling, however, revealed no genes with statistically significant differences, despite the fact that there were 698 genes affected by exercise in the early-pubertal girls only and 255 genes affected by exercise in the late-pubertal girls only. It is possible that, given the large number of genes examined in the microarray and the relatively small sample size, we were limited in our ability to discover moderate maturational differences in gene expression evoked by exercise. Other techniques, such as RT-PCR on targeted gene candidates, may prove better suited to finding smaller but consequential differences related to maturation.
We also compared the results in females with those recently published for early- and late-pubertal boys (24). A number of hypotheses suggesting sex differences emerge from the data. Indeed, there were many more genes that changed similarly in the late-pubertal boys and girls (453 genes in common) than in the early-pubertal subjects (80 genes). These data are included in Data Supplements 4 and 5, available with the online version of this article. Among the early-pubertal subjects, the younger boys had far fewer genes influenced by exercise (109 genes) than any of the other groups (early- and late-pubertal girls: 1,320 and 877, respectively; late-pubertal boys: 1,246). These initial observations raise intriguing questions regarding the roles of sex hormones, maturation of physiological responses to exercise, and sex-associated changes in body composition in the PBMC response to exercise that await further investigation.
Exercise is a complex and profound physiological perturbation in which, not surprisingly, the sudden insult to cellular homeostasis leads to a systemic "danger-type" response (15), one reflecting the increased vulnerability of an organism required to flee a predator or pursue prey. Our data indicate that in early- and late-pubertal girls, brief exercise is associated not only with increased numbers of immune cells in the circulation but may also serve as a "wake-up" call in which key gene pathways are activated, preparing PBMCs for fighting infection and wound repair and, at the same time, setting the stage for apoptosis should these functions not be needed and the activated cells be eliminated. We could not, in these experiments using minimally invasive procedures in healthy children, determine whether the changes in gene expression resulted from direct effects of exercise on PBMCs or, alternatively, from a shift of PBMC-type cells in marginal pools that were expressing genes differently than those in the circulation. By whatever mechanism, however, the exercise-associated changes in gene expression in the circulating pool of PBMCs were substantial. The extent to which changes in gene expression in PBMCs are accompanied by functional or physiological changes in these cells, and either ameliorate or, if abnormal, stimulate disease, has yet to be determined.

Fig. 4. Comparison of fold change in 622 PBMC genes that were significantly affected by exercise in both the early- and late-pubertal girls. Data points represent the mean fold change for each of the exercise-sensitive genes in both groups. The fold change in the 2 groups was highly correlated (r = 0.98).
GRANTS
This work was supported in part by National Institutes of Health Grants RO1-HL-080947 and P01-HD-048721 and the University of California-Irvine Satellite Grant GCRC-MO1-RR00827.
"Biology"
] |
Effects of Cold-Rolling/Aging Treatments on the Shape Memory Properties of Ti49.3Ni50.7 Shape Memory Alloy
In this study, the combined effects of strengthening, precipitates, and textures on the shape recovery ability and superelasticity of thermomechanically treated Ti49.3Ni50.7 shape memory alloy (SMA) in both the rolling and transverse directions were studied by experimental measurements and theoretical calculations. Experimental results and theoretical calculations showed that the 300 °C × 100 h aged specimen exhibited the best shape memory effect because it possessed the most favorable textures, the highest matrix strength, and beneficial coherent stress induced by Ti3Ni4 precipitates. The 30% cold-rolled and then 300 °C × 100 h aged specimen exhibited the highest strength and superelasticity; however, its shape recovery ability was not as good as expected because the less favorable textures and the high strength inhibited the movements of dislocations and martensite boundaries. Therefore, to achieve optimal shape memory characteristics in Ni-rich TiNi SMAs, the effects of textures, matrix strength, and internal defects, such as Ti3Ni4 precipitates and dislocations, should all be carefully considered and controlled during thermomechanical treatments.
Introduction
Near-equiatomic TiNi shape memory alloys (SMAs) are widely used in a variety of applications because of their excellent shape memory effect, superelasticity, high strength and ductility, and good damping capacity [1,2]. It has been reported that thermomechanical treatments, including work hardening, solid-solution strengthening, precipitation hardening, and grain refinement, normally strengthen TiNi SMAs by increasing the critical shear stress for slip [3][4][5]. In addition, the shape memory effect and superelasticity of thermomechanically treated TiNi SMAs can be improved by suppressing irreversible slip deformation during martensite reorientation and stress-induced martensitic transformation. Nevertheless, thermomechanical treatments may simultaneously change the textures and microstructures of TiNi SMAs, thereby influencing their mechanical properties. Numerous studies have reported the relationship between the shape memory behaviors and crystallographic properties of TiNi SMAs [6][7][8][9][10][11]. Miyazaki et al. [6] calculated the theoretical recoverable strains of single-crystal TiNi SMA along different directions using the lattice deformation matrix and compared these values to experimental results. Shu et al. [7] and Inoue et al. [8] calculated the theoretical recoverable strains of polycrystal TiNi SMAs possessing different textures and compared these values to experimental results. Ye et al. [9] observed the texture evolution of TiNi SMAs during thermal cycling under load and calculated the strain-texture map in B19' martensite. Laplanche et al. [10] studied the evolution of microstructure and texture during the processing of Ti49Ni51 shape memory sheets using electron backscatter diffraction. The effects of temperature and texture on the reorientation of martensite variants in TiNi SMAs have also been reported [11]. Even though these researchers thoroughly investigated the relationship between the texture and theoretical recoverable strain of SMAs, few have considered the effects of microstructure, such as precipitates, dislocations, and the types of martensite twin variants, on the shape memory characteristics of SMAs. Therefore, these theoretically calculated values may deviate from experimental results and lack practical applicability.
To address this issue, Sehitoglu et al. [12][13][14][15][16][17] conducted a series of investigations on the mechanical properties and theoretical recoverable strain of single-crystal Ni-rich TiNi SMAs. They also studied the effects of precipitates on the shape memory properties of peak-aged and over-aged Ni-rich TiNi SMAs. Ni-rich TiNi SMAs were chosen for their studies because Ti3Ni4 normally precipitates when the alloys are aged at appropriate temperatures and time intervals. In the early stage of aging, the boundaries of the Ti3Ni4 precipitates are coherent with the matrix, and the lattices around these precipitates are distorted by the coherent stress field. The extent of the lattice distortion is larger in the longitudinal direction of the precipitates than in the transverse direction. As the aging time increases, the Ti3Ni4 precipitates grow larger and lose their coherent interfacial relation with the matrix. Meanwhile, dislocations can be observed in the matrix around the precipitates. In addition, Ti3Ni4 precipitates can increase the matrix strength [18]. Nishida et al. [19] investigated the transformation behaviors of Ti49Ni51 SMA after aging and discovered that the morphologies of the martensites in solution-treated and aged specimens are quite different. In solution-treated specimens, martensite variants are self-accommodated to each other. However, in aged specimens, the plate-like martensite grows along specific directions, and single-oriented martensite forms in the grains of the parent phase. The microstructures of most of the martensites in solution-treated specimens are <011>M type II twins or (111)M type I twins, whereas those in aged specimens, which have fine lenticular Ti3Ni4 precipitates, are (001)M type I twins, with <011>M type II twins appearing as the Ti3Ni4 precipitates grow larger.
However, because Sehitoglu's research focused only on single crystals [12][13][14][15][16][17], no discussion was presented of texture effects in polycrystals. In practical applications, on the other hand, the transformation behaviors and mechanical properties of TiNi SMAs are often controlled by thermomechanical treatments. Although these treatments have several effects on TiNi SMAs and lead to complexity in comprehending the corresponding evolution of shape memory characteristics, these crucial effects have not yet been discussed systematically. In the present study, we aimed to understand the strengthening effects and the changes in microstructure and texture generated by thermomechanical treatments, and to elucidate their combined effects on the shape memory characteristics of Ti49.3Ni50.7 SMA. In addition, the theoretical recoverable strains of polycrystalline Ti49.3Ni50.7 SMA were calculated to elucidate the effects of textures on the shape memory characteristics of this alloy.

DSC and Microhardness Results

Figure 1a shows the differential scanning calorimetry (DSC) results of the solution-treated and selected 300 °C aged Ti49.3Ni50.7 specimens for time intervals of 0 to 400 h. As shown in Figure 1a, the solution-treated specimen possessed a one-stage B2↔B19' martensitic transformation, and all 300 °C aged specimens exhibited a two-stage B2↔R↔B19' transformation. The presence of R-phase in aged Ti49.3Ni50.7 SMA was induced by the formation of Ti3Ni4 precipitates. Figure 1b shows the variation of the Ms, Mf, As, Af, Rs, and Rf transformation temperatures (Ms and Mf are the start and finish temperatures of forward martensitic transformation, respectively; As and Af are those of reverse martensitic transformation; Rs and Rf represent the start and finish temperatures of R-phase transformation during cooling) for the 300 °C aged specimens, determined from the DSC curves.

Figure 2 reveals that the microhardness of Ti49.3Ni50.7 SMA initially increased with increasing aging time due to the increasing amount of Ti3Ni4 precipitates in the aged specimens. However, the microhardness exhibited a drop during the aging time from 5 to 25 h because, in this aging period, the 300 °C aged specimens were in the soft R-phase state at room temperature, rather than in the hard B2 parent phase state, as demonstrated in the DSC results shown in Figure 1. After 25 h of aging, the microhardness of the aged Ti49.3Ni50.7 specimens increased again with increasing aging time, approaching a maximum value of 347 Hv at 100 h. This suggests that aging for 100 h is the most beneficial aging treatment to improve the strength of Ti49.3Ni50.7 SMA, owing to enhancement of the matrix strength by the Ti3Ni4 precipitates when the boundaries between the precipitates and matrix are coherent [18]. However, the microhardness of the Ti49.3Ni50.7 specimens decreased again after 100 h of aging because the boundaries between the precipitates and matrix became semi-coherent or incoherent when the Ti3Ni4 precipitates grew too large, leading to deterioration of the strengthening effect.
In order to determine the thermomechanical effect on the shape memory and superelasticity properties of Ti49.3Ni50.7 SMA, a specimen was 30% cold-rolled and then aged at 300 °C for 100 h for the following experiments. Figure 3 shows the DSC results for the 30% cold-rolled and then 300 °C × 100 h aged specimen; the DSC curves of the 300 °C × 100 h aged specimen shown in Figure 1a are also plotted for comparison. As shown in Figure 3, the 30% cold-rolled and then 300 °C × 100 h aged specimen exhibited only a rather broadened one-stage B2↔B19' martensitic transformation, with transformation peaks appearing at approximately 50 °C.
Shape Memory Effect Measurements
Figure 4a,c shows the tensile test results for the solution-treated, 300 °C × 100 h aged, and 30% cold-rolled and then 300 °C × 100 h aged rolling direction (RD) and transverse direction (TD) specimens, respectively. All specimens were tensile tested at −80 °C (below the Mf temperature), and the curved lines at the bottoms of the figures represent the recoverable strain (ε_re) of the specimens after being heated to 100 °C (above the Af temperature). In Figure 4, σ_M^re represents the stress for martensite reorientation, and ε_M^total (the total recoverable strain) equals 6.9% minus ε_p for the solution-treated specimen and the 30% cold-rolled and then 300 °C × 100 h aged specimen, and 10.6% minus ε_p for the 300 °C × 100 h aged specimen. Here, ε_p is the permanent (unrecoverable) strain after the specimen was heated to 100 °C, presented as an example in Figure 4b. The ε_M^total values of the specimens determined from Figure 4 are listed in Table 1, together with the theoretical recoverable strain values obtained from the calculations described in the discussion section. As shown in Figure 4, both the 300 °C × 100 h aged specimen and the 30% cold-rolled and then 300 °C × 100 h aged specimen were strengthened relative to the solution-treated one, and the specimen damaged by cold-rolling was significantly improved by the 300 °C × 100 h aging treatment. This phenomenon can be explained by the fact that low-temperature aging (<600 K) typically increases the density of Ti3Ni4 precipitates in Ni-rich TiNi SMAs, which provides more pinning points to hinder dislocation movement [5]. In addition, it has been demonstrated that annealing cold-rolled TiNi SMA at 200 °C to 600 °C is sufficient to nullify the martensite stabilization, but the dislocations induced by cold-rolling still remain inside the alloy [3]. These dislocations not only raise the required critical stress for slip, but also serve as pinning points for the moving twin boundaries. Therefore, the thermomechanical treatment of 30% cold-rolling and then 300 °C × 100 h aging was the most effective method to strengthen Ti49.3Ni50.7 SMA in this study. However, as shown in Table 1, despite the fact that the 30% cold-rolled and then 300 °C × 100 h aged specimen possessed the highest strength, it was the 300 °C × 100 h aged specimen that achieved the largest ε_M^total value of approximately 8.8%. This unexpected result suggests that the strengthening effect is not the only factor that determines the shape memory ability of SMAs.

Superelasticity Measurements

Figure 5a,b shows the results of the superelasticity measurements of the RD and TD specimens, respectively, for the solution-treated, 300 °C × 100 h aged, and 30% cold-rolled and then 300 °C × 100 h aged specimens. In Figure 5, each specimen was measured at a temperature 15 °C above its Af temperature. Figure 5 shows that the 30% cold-rolled and then 300 °C × 100 h aged specimen had a higher stress for inducing the stress-induced martensite (SIM), σ_SIM^f (approximately 710 MPa), than did the solution-treated specimen (σ_SIM^f approximately 510 MPa) and the 300 °C × 100 h aged specimen (σ_SIM^f approximately 435 MPa). This feature demonstrates that the 30% cold-rolled and then 300 °C × 100 h aged Ti49.3Ni50.7 SMA had the highest strength of all the specimens in terms of superelasticity measurement. Figure 5 also shows that the transformation stress of the 30% cold-rolled and then 300 °C × 100 h aged specimen increased as the transformation strain increased in the SIM region. This increase occurred because the Ti3Ni4 precipitates and dislocations that formed during the cold-rolling and aging treatment caused inhomogeneous martensitic transformation during the loading/unloading processes. The dual-phase structure in the SIM region also caused hardening of the alloy because of the interaction of martensite correspondence variants and the blockage of existing martensite boundaries [16,17].

Figure 6a,b shows the φ2 = 45° ODF results of the solution-treated specimen and the 30% cold-rolled specimen, respectively. Only the φ2 = 45° sections are presented in Figure 6 because most of the preferred orientations are conveniently observed in this section [8]. According to Figure 6a, the solution-treated specimen exhibited a (110)[1̄10] B2 texture and a {111}<uvw> B2 texture spreading from the line of φ = 55°. The ODF result of the 300 °C × 100 h aged specimen is not presented here because both the B2 phase and the R-phase coexisted in this alloy at room temperature. Fortunately, previous studies [8,20,21] have demonstrated that the types of textures generated by hot/cold-rolling are hardly affected by subsequent heat treatment. Therefore, it is reasonable to assume that the textures of the 300 °C × 100 h aged specimen are similar to those of the solution-treated one. Figure 6b shows that the 30% cold-rolled Ti49.3Ni50.7 …
Discussion
In order to elucidate the effects of thermomechanical treatment and its derivative textures on the shape recovery ability of the SMAs, the theoretical recoverable strains of Ti49.3Ni50.7 SMA were calculated in this study. Previous studies reported that the transformation strain calculated from the lattice deformation matrix can be taken as the maximum recoverable strain [6,7,9,16]. This method has been modified to calculate the recoverable strain for polycrystals and thin films with given textures [7,[22][23][24][25]]. According to the lattice deformation theory, E^(i) is defined as the transformation strain matrix of a material when it transforms from austenite to the i-th variant of martensite, and this strain is assumed to be recoverable. For a polycrystal, the orientation of the grain at point x is given by a rotation R(x) relative to a reference coordinate. The transformation strain matrix associated with the i-th variant of martensite in the grain at x is E^(i)(x) = R(x) E^(i) R^T(x), where R^T is the transpose of R. Consider a polycrystal in its self-accommodated martensite state, subjected to a uniaxial tensile load σ in the direction ê; the real recoverable strain is ε_R = max ê·Ē ê, where Ē is the average transformation strain matrix of the specimen. However, ε_R cannot be directly calculated because Ē is unknown. Therefore, an inner bound ε_R^i and an outer bound ε_R^o are applied to estimate the real recoverable strain. In the calculation of the inner bound ε_R^i, each pair of martensite variants is supposed to be twin-related, while in the calculation of the outer bound ε_R^o, the twin relations among martensite variants are not considered; the bounds satisfy ε_R^i ≤ ε_R ≤ ε_R^o. In the present study, the outer bound ε_R^o of the specimens is calculated, since it provides a good comparison of the relative values among textures [7]. Thus, the transformation strain matrix for a polycrystal is

\bar{E} = \sum_{i} \mu_i \sum_{j} \lambda_i^{j}\, E^{(j)}(x_i),    (1)

where μ_i is the volume fraction of grain i and λ_i^j is the portion of material in grain i with the transformation strain matrix E^(j). Accordingly, the maximum theoretical recoverable strain of the specimen extended along the direction ê can be calculated from the lattice deformation matrix by Equation (2):

\varepsilon_R^o = \sum_{i} \mu_i \,\max_{j} \left( \hat{v}_i \cdot E^{(j)} \hat{v}_i \right),    (2)

where v̂_i is the tensile direction for grain i (i.e., ê expressed in the crystal frame of grain i). E^(1) for Ti49.4Ni50.6 SMA from B2 to B19' martensite [26,27] is

E^{(1)} = \begin{pmatrix} \beta & \varepsilon & \varepsilon \\ \varepsilon & \alpha & \delta \\ \varepsilon & \delta & \alpha \end{pmatrix},    (3)

in which α = 0.0243, β = −0.0437, δ = 0.0580, and ε = 0.0427 (the component placement follows the standard symmetric form of the B2→B19' transformation strain matrix; cf. [26,27]). This matrix is applied to our calculations as the transformation strain matrix for Ti49.3Ni50.7 SMA. In the case of TiNi SMAs, the number of martensite lattice correspondence variants (N) is 12; therefore, E^(2)–E^(12) can be calculated from E^(1) by the relations indicated in Reference [28]. From Equation (2) and the determined textures listed in Table 1, the theoretical recoverable strains of both the solution-treated Ti49.3Ni50.7 specimen and the 300 °C × 100 h aged specimen are calculated as 9.10% along RD and 8.09% along TD, since they possess identical textures. In the same manner, the theoretical recoverable strains of the 30% cold-rolled and then 300 °C × 100 h aged specimen are calculated as 8.23% along RD and 7.34% along TD. The calculated theoretical recoverable strains for each specimen are summarized in Table 1. According to Table 1, the theoretical recoverable strains of the solution-treated specimen and the 300 °C × 100 h aged one are higher than those of the 30% cold-rolled and then 300 °C × 100 h aged specimen, indicating that the textures of the former two are more favorable to the shape recovery ability of this SMA.
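These relations are easy to check numerically. The sketch below generates the 12 variant strain matrices by conjugating E^(1) with the proper cubic rotations and evaluates the single-grain outer-bound strain along a few representative directions. It is an illustration only; the symmetric arrangement of E^(1) is the assumption noted above, and along <111> the sketch returns ≈9.7%, of the same order as the RD/TD values reported here.

```python
import numpy as np
from itertools import product

# Transformation strain matrix for variant 1 (component values from the paper;
# the symmetric arrangement is an assumption based on the standard B2 -> B19' form).
alpha, beta, delta, eps = 0.0243, -0.0437, 0.0580, 0.0427
E1 = np.array([[beta, eps,   eps],
               [eps,  alpha, delta],
               [eps,  delta, alpha]])

def cubic_rotations():
    """All 24 proper rotation matrices of the cubic point group."""
    rots = []
    for perm in [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]:
        for signs in product([1, -1], repeat=3):
            R = np.zeros((3, 3))
            for row, (p, s) in enumerate(zip(perm, signs)):
                R[row, p] = s
            if np.isclose(np.linalg.det(R), 1.0):
                rots.append(R)
    return rots

# The 12 correspondence variants are the distinct conjugates R E1 R^T.
variants = []
for R in cubic_rotations():
    E = R @ E1 @ R.T
    if not any(np.allclose(E, V) for V in variants):
        variants.append(E)
print(len(variants))  # expect 12

def recoverable_strain(direction, variants):
    """Single-grain outer-bound strain: best variant along the tensile direction."""
    v = np.asarray(direction, float)
    v /= np.linalg.norm(v)
    return max(v @ E @ v for E in variants)

for d in [(1, 0, 0), (0, 1, 1), (1, 1, 1)]:
    print(d, f"{recoverable_strain(d, variants):.4f}")
```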
However, as shown in Table 1, the experimentally determined recoverable strains (ε_M^total) of the solution-treated specimen were much lower than expected. This could be attributed to the fact that the solution-treated specimen was the least strengthened of the three, meaning that slip deformation occurred more easily in it. Table 1 also shows that the calculated theoretical recoverable strains of the 300 °C × 100 h aged specimen were identical to those of the solution-treated one; however, the experimental ε_M^total values of the 300 °C × 100 h aged specimen were much higher than those of the solution-treated one. They were higher because the formation of Ti3Ni4 precipitates in the 300 °C × 100 h aged specimen hindered slip deformation during the tensile test. Furthermore, the coherent stress fields around the Ti3Ni4 precipitates could also facilitate the shape recovery of the alloy.
According to Table 1, the experimental ε_M^total values of the 30% cold-rolled and then 300 °C × 100 h aged specimen were also lower than those of the 300 °C × 100 h aged one, even though its σ_M^re and σ_SIM^f values were higher than those of the other two specimens. The recoverable strains were lower because the strong strengthening effect in the 30% cold-rolled and then 300 °C × 100 h aged specimen not only obstructed the movement of dislocations, but also hindered the movement of martensite boundaries. Moreover, as demonstrated in Figure 5, the slope of the stress-strain curve for the 30% cold-rolled and then 300 °C × 100 h aged specimen exhibits an obvious change before reaching the SIM region. This change indicates that the movements of martensite boundaries may be obstructed, causing plastic deformation before the formation of SIM to compensate for the external deformation.
Experimental Procedures
The Ti49.3Ni50.7 SMA used in this study was prepared from raw materials of titanium and nickel (both 99.99 wt % purity) with six cycles of remelting in a vacuum arc remelter (Series 5 Bell Jar, Centorr Vacuum Ind., Nashua, NH, USA), with pure titanium used as a getter in ultrahigh-purity argon gas. The relative weight loss during remelting was less than 1 × 10⁻⁵. The as-melted ingot was hot-rolled at 900 °C into a plate with a thickness of about 1.5 mm and then solution-treated at 900 °C for 1 h. The oxidation layer of the plates was chemically etched with a solution of HF:HNO3:H2O = 1:5:20 (volume ratio), and the plates were then polished with sandpaper. The solution-treated Ti49.3Ni50.7 specimen was cut with a diamond saw into small specimens with dimensions of 20 mm × 15 mm × 1.2 mm. The specimens were sealed into evacuated quartz tubes and aged at 300 °C in a furnace (BLUE M 894, Lindberg/MPH, Riverside, MI, USA) for different time periods before being quenched in water. The martensitic transformation behavior and transformation temperatures of the specimens were determined by differential scanning calorimetry (DSC; Q-10, TA Instruments, New Castle, DE, USA) at a constant heating/cooling rate of 10 °C/min.
The microhardness of the specimens was determined using an Akashi MVK-E Vickers tester (Model HM-112, Mitutoyo Corp., Kanagawa, Japan) with a load of 4.9 N applied for 15 s. Seven tests were performed on each specimen, and the distance between any two indentations was at least five times the indentation size to avoid the stress field generated by neighboring indentations. The average microhardness of each specimen was calculated from the seven tests, with the largest and smallest values excluded. Specimens for tensile tests were cut along the RD and TD before being ground into dog-bone shapes with a gauge section of 10 mm × 3.5 mm × 1.2 mm. The tensile tests were performed using a Shimadzu AG-IS 50 kN tensile test machine (Shimadzu Corp., Kyoto, Japan) equipped with a thermostatic chamber. During the tensile tests, the strain rate was set at 1.3 × 10⁻³ s⁻¹, and the specimens were loaded to strains of 6.9% or 10.6% before being unloaded to 0.5 kN. For superelasticity measurements, each specimen was tensile tested at a temperature 15 °C above its Af temperature. For shape memory effect determinations, each specimen was tensile tested at −80 °C and then heated to 100 °C. The orientation distribution functions (ODF) of the specimens were calculated from the (200)B2, (110)B2, and (211)B2 pole figures, which were measured using a Rigaku TTR-AX3 X-ray diffractometer (Rigaku Corp., Tokyo, Japan). The ODF, including odd terms and ghost correction, was calculated up to an order of l_max = 22 by the series expansion method.
Conclusions
In this study, for polycrystal Ti49.3Ni50.7 SMA subjected to various thermomechanical treatments, the transformation temperatures, microhardness, shape memory effect, superelasticity, and orientation distribution functions (ODF) of the specimens were measured, and their theoretical recoverable strains were calculated by the lattice deformation theory. Experimental results and theoretical calculations demonstrated that the thermomechanically treated specimens had varied superelasticity and recoverable strains corresponding to their different microstructures, textures, and strengths. The 300 °C × 100 h aged specimen had the highest ε_M^total value because it had the most favorable textures and beneficial internal defects, such as the Ti3Ni4 precipitates that formed and induced coherent stress. The 30% cold-rolled and then 300 °C × 100 h aged specimen had the highest strength and superelasticity of all the samples; however, its shape recovery ability was not as good as expected because the high strength of the alloy inhibited the movements of dislocations and martensite boundaries. This study reveals that the strength of Ni-rich TiNi SMAs can be significantly improved with appropriate thermomechanical treatments; however, such treatments also have side effects on the shape memory characteristics of the SMAs. The combination of hardness measurements and the SMAs' tensile response suggests that the suppression of slip by Ti3Ni4 precipitates is critical and perhaps more important than texture.
"Materials Science"
] |
A new bio-inspired metaheuristic algorithm for solving optimization problems based on walruses behavior
This paper introduces a new bio-inspired metaheuristic algorithm called the Walrus Optimization Algorithm (WaOA), which mimics walrus behaviors in nature. The fundamental inspirations employed in the WaOA design are the processes of feeding, migrating, escaping, and fighting predators. The WaOA implementation steps are mathematically modeled in three phases: exploration, migration, and exploitation. Sixty-eight standard benchmark functions, consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, the CEC 2015 test suite, and the CEC 2017 test suite, are employed to evaluate WaOA's performance in optimization applications. The optimization results of the unimodal functions indicate the exploitation ability of WaOA, the results of the multimodal functions indicate its exploration ability, and the results of the CEC 2015 and CEC 2017 test suites indicate its high ability to balance exploration and exploitation during the search process. The performance of WaOA is compared with that of ten well-known metaheuristic algorithms. The simulation results demonstrate that WaOA, owing to its excellent balance of exploration and exploitation and its capacity to deliver superior results for most of the benchmark functions, exhibits remarkably competitive and superior performance in comparison with the other algorithms. In addition, the use of WaOA to address four engineering design issues and twenty-two real-world optimization problems from the CEC 2011 test suite demonstrates its apparent effectiveness in real-world applications. The MATLAB codes of WaOA are available at https://uk.mathworks.com/matlabcentral/profile/authors/13903104.
these initial solutions are improved. Finally, the best solution found during the implementation of the algorithm is introduced as the solution to the problem 4. However, none of the metaheuristic algorithms guarantee that they will be able to provide the optimal global solution. This insufficiency is due to the nature of random search in these types of optimization approaches. Hence, the solutions derived from metaheuristic algorithms are known as quasi-optimal solutions 5.
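The initialize-improve-return loop described here can be sketched generically. The snippet below is a minimal population-based skeleton for illustration only; the greedy update it uses is a placeholder, not the WaOA rule developed later in the paper, and the function name `minimize` is ours:

```python
import numpy as np

def minimize(objective, bounds, pop_size=30, iterations=200, seed=0):
    """Generic population-based metaheuristic skeleton (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    # Random initial candidate solutions in the search space.
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fitness = np.apply_along_axis(objective, 1, pop)
    best = pop[fitness.argmin()].copy()

    for _ in range(iterations):
        for i in range(pop_size):
            # Placeholder exploration step: move toward the current best member.
            step = rng.random(lo.size) * (best - rng.integers(1, 3) * pop[i])
            cand = np.clip(pop[i] + step, lo, hi)
            # Greedy selection: keep the candidate only if it improves.
            f = objective(cand)
            if f < fitness[i]:
                pop[i], fitness[i] = cand, f
        best = pop[fitness.argmin()].copy()
    return best, float(fitness.min())

# Usage: minimize the sphere function in 5 dimensions.
best, val = minimize(lambda x: float(np.sum(x * x)), ([-100] * 5, [100] * 5))
print(best, val)
```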
Exploration and exploitation capabilities enable metaheuristic algorithms to provide better quasi-optimal solutions. Exploration refers to the ability to search globally in different areas of the problem-solving space to discover the most promising region. In contrast, exploitation refers to the ability to search locally around the available solutions and promising areas in order to converge to the global optimum. Balancing exploration and exploitation is the key to the success of metaheuristic algorithms in achieving effective solutions 6. Achieving better quasi-optimal solutions has been the main challenge and the reason for researchers' development of various metaheuristic algorithms 7,8.
The main research question is whether, despite the numerous metaheuristic algorithms introduced so far, there is still a need to develop new algorithms. The No Free Lunch (NFL) theorem 9 answers this question: the optimal performance of an algorithm in solving one set of optimization problems gives no guarantee of similar performance on other optimization problems. The NFL theorem thus rejects the hypothesis that a particular metaheuristic algorithm is the best optimizer for all optimization applications over all different algorithms. Instead, it encourages researchers to continue to design newer metaheuristic algorithms to achieve better quasi-optimal solutions for optimization problems. This theorem has also motivated the authors of this paper to develop a new metaheuristic algorithm to address optimization challenges. This paper's novelty and contribution lie in designing a new metaheuristic algorithm called the Walrus Optimization Algorithm (WaOA), which is based on the simulation of walrus behaviors in nature. The main contributions of this article are as follows:

• The natural behaviors of walruses in feeding, migrating, fleeing, and fighting predators inspire WaOA's design.
• WaOA is mathematically modeled in three phases: exploration, exploitation, and migration.
• The efficiency of WaOA in handling optimization problems is tested on sixty-eight standard objective functions of various types: unimodal, multimodal, the CEC 2015 test suite, and the CEC 2017 test suite.
• WaOA performance is compared with the performance of ten well-known metaheuristic algorithms.
• The success of WaOA in real-world applications is challenged in addressing four engineering design issues and twenty-two real-world optimization problems from the CEC 2011 test suite.
The rest of the paper is as follows. The literature review is presented in the "Literature review" section. The proposed WaOA approach is introduced and modeled in the "Walrus Optimization Algorithm" section. Simulation studies are presented in the "Simulation studies and results" section. The efficiency of WaOA in solving engineering design problems is evaluated in the "WaOA for real world-application" section. Conclusions and future research directions are included in the "Conclusions and future works" section.
Literature review
Metaheuristic algorithms are based on the inspiration and simulation of various natural phenomena, animal strategies and behaviors, concepts of biological sciences, genetics, physics sciences, human activities, rules of games, and any evolution-based process. Accordingly, from the point of view of the main inspiration used in the design, metaheuristic algorithms fall into five groups: evolutionary-based, swarm-based, physics-based, human-based, and game-based.
Evolutionary-based metaheuristic algorithms have been developed using the concepts of biology, natural selection theory, and random operators such as selection, crossover, and mutation. Genetic Algorithm (GA) is one of the most famous metaheuristic algorithms, which is inspired by the process of reproduction, Darwin's theory of evolution, natural selection, and biological concepts 10 . Differential Evolution (DE) is another evolutionary computation that, in addition to using the concepts of biology, random operators, and natural selection, uses a differential operator to generate new solutions 11 .
Swarm-based metaheuristic algorithms have been developed based on modeling natural phenomena, swarming phenomena, and the behaviors of animals, birds, insects, and other living things. Particle Swarm Optimization (PSO) is one of the first metaheuristic methods introduced and is widely used in optimization fields. The main inspiration in designing PSO is the search behavior of birds and fish in discovering food sources 12,13 . Ant Colony Optimization (ACO) is a swarm-based method inspired by the ability and strategy of an ant colony to identify the shortest path between the colony and food sources 14 . Grey Wolf Optimization (GWO) is a metaheuristic algorithm inspired by grey wolves' hierarchical structure and social behavior while hunting 15 . The Marine Predator Algorithm (MPA) has been developed inspired by the strategies of ocean and sea predators and their Levy flight movements to trap prey 16 . The strategy of the tunicates and their search mechanism in finding food sources and foraging have been the main inspirations in the design of the Tunicate Swarm Algorithm (TSA) 17 . Some other swarm-based methods are White Shark Optimizer (WSO) 18 , Reptile Search Algorithm (RSA) 19 , Raccoon Optimization Algorithm (ROA) 20 , African Vultures Optimization Algorithm (AVOA) 21 , Farmland Fertility Algorithm (FFA) 22 , Slime Mould Algorithm (SMA) 23 , Mountain Gazelle Optimizer (MGO) 24 , Sparrow Search Algorithm (SSA) 25 , Whale Optimization Algorithm (WOA) 26 , Artificial Gorilla Troops Optimizer (GTO) 27 , and Pelican Optimization Algorithm (POA) 28 .
Physics-based metaheuristic algorithms have been inspired by physics' theories, concepts, laws, forces, and phenomena. Simulated Annealing (SA) is one of the most famous physics-based methods, the main inspiration of which is the annealing of metals, in which a material is heated and then slowly cooled to reach a low-energy crystalline state.
Walrus Optimization Algorithm
In this section, the fundamental inspiration and the theory of the proposed Walrus Optimization Algorithm (WaOA) are stated, and then its various steps are modeled mathematically.
Inspiration of WaOA.
The walrus is a large flippered marine mammal with a discontinuous distribution in the Arctic Ocean and subarctic waters of the Northern Hemisphere around the North Pole 52 . Adult walruses are easily identifiable by their large whiskers and tusks. Walruses are social animals that spend most of their time on the sea ice, seeking benthic bivalve mollusks to eat. The most prominent feature of walruses is their long tusks. These are elongated canines, present in both males and females, that may weigh up to 5.4 kg and measure up to 1 m in length. Males' tusks are slightly thicker and longer and are used for dominance, fighting, and display. The most muscular male with the longest tusks dominates the other group members and leads them 53 . An image of a walrus is presented in Fig. 1. As the weather warms and the ice melts in late summer, walruses prefer to migrate to outcrops or rocky beaches. These migrations are very dramatic and involve massive aggregations of walruses 54 . The walrus has just two natural predators due to its large size and tusks: the polar bear and the killer whale (orca). Observations show that the battle between a walrus and a polar bear is very long and exhausting, and usually, polar bears withdraw from the fight after injuring the walrus. However, walruses harm polar bears with their tusks during this battle. In fights against walruses, killer whales can hunt them successfully, with minimal or even no injuries 55 .
The social life and natural behaviors of walruses represent an intelligent process. Of these intelligent behaviors, three are the most obvious: (i) Feeding under the guidance of the member with the longest tusks.
Tracking the best population member in the search process directs the algorithm toward promising areas. In the social life of walruses, the most potent walrus, recognizable by having the longest tusks, is responsible for guiding the other walruses. Moving walruses in this process leads to significant changes in their positions. Simulating these large displacements increases the algorithm's ability in global search and exploration.
(ii) Migration of walruses to rocky beaches. One of the natural behaviors of walruses is their migration due to warming weather in summer. In this process, walruses make big changes in their positions by moving towards outcrops or rocky beaches. In the WaOA simulation, for each walrus the positions of the other walruses are assumed to be migration destinations; one of these positions is randomly selected, and the walrus moves towards it. By imitating this strategy in the design of WaOA, global search and discovery capabilities are improved. The difference between the migration strategy and the foraging process under the guidance of the strongest walrus is that the migration update does not rely on a particular member, such as the best member of the population. This updating process prevents early convergence and keeps the algorithm from getting stuck in local optima.
(iii) Fight or escape from predators. The fighting strategy of walruses in the face of their predators, such as the polar bear and the killer whale, is a long chase process. This chasing process takes place in a small area around the walrus position and causes small changes in the walrus position. Therefore, simulating the small displacements of the walrus by aiming at better positions during the fight leads to an increase in WaOA's ability to search locally and exploit to converge to better solutions.
Mathematical modeling of these behaviors is the primary inspiration for developing the proposed WaOA approach.
Algorithm initialization.
WaOA is a population-based metaheuristic algorithm in which the searcher members of the population are walruses. In WaOA, each walrus represents a candidate solution to the optimization problem. Thus, the position of each walrus in the search space determines the candidate values for the problem variables. Therefore, each walrus is a vector, and the population of walruses can be mathematically modeled using the so-called population matrix. At the beginning of WaOA implementation, the population of walruses is randomly initialized. This WaOA population matrix is determined using (1).
where X is the walruses' population, X_i is the ith walrus (candidate solution), x_{i,j} is the value of the jth decision variable suggested by the ith walrus, N is the number of walruses, and m is the number of decision variables.
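As an illustration, the random initialization of the population matrix in Eq. (1) can be sketched as follows; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng=None):
    """Randomly initialize the N x m walrus population matrix of Eq. (1).

    N      : number of walruses (candidate solutions)
    m      : number of decision variables
    lb, ub : per-variable lower and upper bounds
    """
    rng = rng or np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # x_{i,j} = lb_j + rand * (ub_j - lb_j), with rand drawn uniformly in [0, 1]
    return lb + rng.random((N, m)) * (ub - lb)
```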
As mentioned, each walrus is a candidate solution to the problem, and based on its suggested values for the decision variables, the objective function of the problem can be evaluated. The estimated values for the objective function obtained from walruses are specified in (2).
where F is the objective function vector and F_i is the value of the objective function evaluated based on the ith walrus.
Objective function values are the best measure of the quality of candidate solutions. The candidate solution that results in the evaluation of the best value for the objective function is known as the best member. On the other hand, the candidate solution that results in the worst value for the objective function is called the worst member. According to the update of the values of the objective function in each iteration, the best and worst members are also updated.
Mathematical modelling of WaOA. The process of updating the position of walruses in the WaOA is modeled in three different phases based on the natural behaviors of this animal.
Phase 1: feeding strategy (exploration). Walruses have a varied diet, feeding on more than sixty species of marine organisms, such as sea cucumbers, tunicates, soft corals, tube worms, shrimp, and various mollusks 57 . However, the walrus prefers benthic bivalve mollusks, particularly clams, for which it forages by grazing along the sea floor, seeking and detecting food with its energetic flipper movements and sensitive vibrissae 58 . In this search process, the strongest walrus, the one with the longest tusks, guides the other walruses in the group to find food. The length of the tusks in the walruses is analogous to the quality of the objective function values of the candidate solutions. Therefore, the candidate solution with the best value of the objective function is considered the strongest walrus in the group. This search behavior of the walruses leads to scanning different areas of the search space, which improves the exploration power of the WaOA in global search. The process of updating the positions of walruses is mathematically modeled based on the feeding mechanism under the guidance of the strongest member of the group, using (3) and (4). In this process, a new position for a walrus is first generated according to (3). This new position replaces the previous position if it improves the objective function's value; this concept is modeled in (4).
where X_i^{P1} is the newly generated position for the ith walrus in the first phase, x_{i,j}^{P1} is its jth dimension, F_i^{P1} is its objective function value, rand_{i,j} are random numbers from the interval [0, 1], SW is the best candidate solution, considered the strongest walrus, and I_{i,j} are integers selected randomly from {1, 2}. I_{i,j} is used to increase the algorithm's exploration ability: when it equals 2, it creates larger and broader changes in the position of the walrus than the value of 1, which is the normal state of this displacement. These conditions help the algorithm's global search escape local optima and discover the region of the global optimum in the problem-solving space.
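Equations (3) and (4) are not reproduced in this excerpt, so the following minimal sketch implements only the textual description above: each walrus moves toward the strongest walrus SW with random coefficients rand in [0, 1] and I in {1, 2}, and keeps the new position only if it improves the objective. All names are illustrative, and the exact update rule is an assumption consistent with the text:

```python
def phase1_feeding(X, F, objective, SW, rng):
    """Exploration phase guided by the strongest walrus SW (Eqs. (3)-(4))."""
    N, m = X.shape
    for i in range(N):
        r = rng.random(m)                    # rand_{i,j} in [0, 1]
        I = rng.integers(1, 3, size=m)       # I_{i,j} in {1, 2}
        x_new = X[i] + r * (SW - I * X[i])   # move toward the strongest walrus
        f_new = objective(x_new)
        if f_new < F[i]:                     # greedy replacement, as in Eq. (4)
            X[i], F[i] = x_new, f_new
    return X, F
```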
Phase 2: migration. One of the natural behaviors of walruses is their migration to outcrops or rocky beaches due to the warming of the air in late summer. This migration process is employed in WaOA to guide the walruses toward suitable areas of the search space. This behavioral mechanism is mathematically modeled using (5) and (6). The modeling assumes that each walrus migrates to the position of another, randomly selected, walrus in another area of the search space. Therefore, the proposed new position is first generated based on (5); then, according to (6), if this new position improves the value of the objective function, it replaces the previous position of the walrus.
where X_i^{P2} is the newly generated position for the ith walrus in the second phase, x_{i,j}^{P2} is its jth dimension, F_i^{P2} is its objective function value, X_k, k ∈ {1, 2, ..., N} and k ≠ i, is the position of the walrus selected as the migration destination of the ith walrus, x_{k,j} is its jth dimension, and F_k is its objective function value.
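Since Eq. (5) is not shown in this excerpt, the displacement rule below (move toward the selected walrus when it is better, away from it when it is worse) is a plausible reading of the migration phase, not the paper's exact formula; names are illustrative:

```python
def phase2_migration(X, F, objective, rng):
    """Migration toward a randomly selected walrus X_k (Eqs. (5)-(6))."""
    N, m = X.shape
    for i in range(N):
        k = rng.choice([j for j in range(N) if j != i])  # destination, k != i
        r = rng.random(m)
        if F[k] < F[i]:                  # destination is better: move toward it
            x_new = X[i] + r * (X[k] - X[i])
        else:                            # destination is worse: move away from it
            x_new = X[i] + r * (X[i] - X[k])
        f_new = objective(x_new)
        if f_new < F[i]:                 # greedy replacement, as in Eq. (6)
            X[i], F[i] = x_new, f_new
    return X, F
```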
Phase 3: escaping and fighting against predators (exploitation). Walruses are always exposed to attacks by the polar bear and the killer whale. The strategy of escaping from and fighting these predators leads to changes in the positions of the walruses in the vicinity of their current locations. Simulating this natural behavior improves the WaOA exploitation power in local search around the candidate solutions in the problem-solving space. Since this process occurs near the position of each walrus, it is assumed in the WaOA design that this range of position change occurs in a walrus-centered neighborhood with a certain radius. Considering that in the initial iterations of the algorithm priority is given to global search in order to discover the optimal region of the search space, the radius of this neighborhood is made variable: it is first set at its highest value and then becomes smaller over the iterations of the algorithm. For this reason, local lower and upper bounds are used in this phase of WaOA to create a radius that varies with the iterations. To simulate this phenomenon in WaOA, a neighborhood is assumed around each walrus; a new position is first generated randomly in this neighborhood using (7) and (8); then, if the value of the objective function is improved, this new position replaces the previous one according to (9), where X_i^{P3} is the newly generated position for the ith walrus in the third phase, x_{i,j}^{P3} is its jth dimension, F_i^{P3} is its objective function value, t is the iteration counter, lb_j and ub_j are the lower and upper bounds of the jth variable, respectively, and lb_{local,j}^t and ub_{local,j}^t are the local lower and upper bounds allowed for the jth variable, respectively, which simulate local search in the neighborhood of the candidate solutions.
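Equations (7)-(9) are not reproduced here; the sketch below assumes the common shrinking schedule lb_local = lb/t and ub_local = ub/t, which is consistent with the statement that the radius starts at its highest value and decreases over the iterations, but is an assumption rather than the paper's exact formula:

```python
import numpy as np

def phase3_local_search(X, F, objective, lb, ub, t, rng):
    """Exploitation in a shrinking neighborhood (Eqs. (7)-(9))."""
    N, m = X.shape
    lb_local, ub_local = lb / t, ub / t        # neighborhood shrinks with iteration t
    for i in range(N):
        step = lb_local + rng.random(m) * (ub_local - lb_local)
        x_new = np.clip(X[i] + step, lb, ub)   # stay inside the global bounds
        f_new = objective(x_new)
        if f_new < F[i]:                       # keep the move only if it improves F
            X[i], F[i] = x_new, f_new
    return X, F
```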
Repetition process, pseudocode, and flowchart of WaOA. After updating the walruses' positions based on the implementation of the first, second, and third phases, the first WaOA iteration is completed, and new values are calculated for the positions of the walruses and the objective functions. Updating and improving the candidate solutions is repeated based on the WaOA steps according to Eqs. (3)-(9) until the final iteration. Upon completion of the algorithm execution, WaOA introduces the best candidate solution found during execution as the solution to the given problem. The WaOA implementation flowchart is presented in Fig. 2, and its pseudocode is specified in Algorithm 1.
Input all information of the optimization problem.
Create the initial population; set t = 1.
For t = 1 to T:
  Update the strongest walrus SW (the best candidate solution).
  For i = 1 to N:
    Phase 1: Calculate a new position for the ith walrus using Eq. (3) and update it using Eq. (4).
    Phase 2: Randomly select X_k as the migration destination for the ith walrus, calculate a new position using Eq. (5), and update it using Eq. (6).
    Phase 3: Calculate a new position in the local neighborhood using Eqs. (7) and (8) and update it using Eq. (9).
  End
  Save the best candidate solution found so far.
End
Output the best quasi-optimal solution of the objective function found by WaOA.

In terms of computational complexity, TLBO has a complexity equal to O(Nm(1 + 2T)). Therefore, it is clear that the proposed WaOA approach has a higher computational complexity than the algorithms used for comparison. However, to make a fair comparison, we chose the population size of each metaheuristic algorithm in the simulation analysis so that the total number of function evaluations is the same for all employed algorithms.
Simulation studies and results
In this section, WaOA simulation studies on optimization applications are presented. The efficiency of WaOA in providing the optimal solution has been tested on sixty-eight standard objective functions, including unimodal, high-dimensional multimodal, fixed-dimensional multimodal, the CEC 2015 test suite, and the CEC 2017 test suite. The information on these test functions is specified in the Appendix and Tables A1 to A5.
The reasons for choosing these benchmark functions are as follows. Unimodal functions F1 to F7 are suitable for evaluating the exploitation ability of metaheuristic algorithms in converging towards the global optimum, as they have no local optima. Multimodal functions F8 to F23 are suitable options for evaluating the exploration ability of metaheuristic algorithms because they have multiple local optima. The CEC 2015 and CEC 2017 test suites have complex benchmark functions that are suitable for evaluating the ability of metaheuristic algorithms to balance exploration and exploitation during the search process. WaOA performance is compared with that of ten well-known algorithms, GA, PSO, GSA, TLBO, GWO, MVO, MPA, TSA, RSA, and WSO, to determine the quality of the WaOA results. The values set for the control parameters of the employed algorithms are specified in Table 1. WaOA and the mentioned competitor algorithms were implemented on F1 to F23, each in twenty independent runs of a thousand iterations (i.e., T = 1000). In this study, the parameter N is set to 20 for WaOA, 30 for TLBO, and 60 for the other competitor algorithms to equalize the number of function evaluations. In this case, considering the computational complexity of each algorithm, the number of function evaluations for each metaheuristic algorithm equals 60,000.
Optimization results are reported using four statistical indicators: mean, best, standard deviation, and median. In addition, each algorithm's rank in handling each objective function is determined based on the average criterion.
Evaluation of unimodal objective functions. Unimodal functions F1 to F7 were employed to evaluate the exploitation ability of WaOA in local search.
Evaluation of high-dimensional multimodal objective functions. High-dimensional multimodal functions with several local and global optima have been selected to evaluate WaOA's exploration capability in global search. The optimization results for functions F8 to F13 using WaOA and the competitor algorithms are reported in Table 3. What can be deduced from this table is that WaOA has converged to the global optimum in optimizing F9 and F11. WaOA is also the best optimizer for F10, F12, and F13. TSA is the best optimizer for the F8 objective function, while WaOA is the second-best optimizer for this function. Analysis of the simulation results shows that WaOA has an acceptable performance in optimizing high-dimensional multimodal objective functions and provides a superior outcome compared to the ten competitor algorithms.
Evaluation of fixed-dimensional multimodal objective functions. The fixed-dimensional multimodal functions, which have fewer local optima than functions F8 to F13, have been selected to evaluate WaOA's ability to balance exploration and exploitation. The optimization results for functions F14 to F23 are reported in Table 4. The results show that WaOA ranks first as the best optimizer for all of F14 to F23. Furthermore, analysis of the simulation results shows the superiority of WaOA over the ten compared algorithms due to its high power in balancing exploration and exploitation. The performances of WaOA and the competitor algorithms in solving functions F1 to F23 are presented as boxplot diagrams in Fig. 3. Visual analysis of these boxplots shows that the proposed WaOA approach provides superior and more effective performance than the competitor algorithms, with better statistical indicators on most of the benchmark functions.
Statistical analysis. In this subsection, the superiority of WaOA over the competitor algorithms is analyzed statistically to determine whether this superiority is significant. The Wilcoxon signed-rank test 59 is utilized for this purpose; it is a non-parametric test used to detect significant differences between two data samples. The results of the statistical analysis using this test are presented in Table 5. The study of the simulation results shows that WaOA has a statistically significant superiority over a competitor algorithm in cases where the p-value is less than 0.05.
Sensitivity analysis. WaOA is a population-based optimizer that performs the optimization process through iterative computation. Accordingly, the parameters N (the number of members of the population) and T (the total number of iterations of the algorithm) are expected to affect WaOA's optimization performance. Therefore, WaOA's sensitivity analysis to the parameters T and N is presented in this subsection. To analyze the sensitivity to the parameter N, the proposed algorithm is used to optimize functions F1 to F23 for values of N equal to 20, 30, 50, and 100. The optimization results are given in Table 6, and WaOA's convergence curves under this analysis are presented in Fig. 4. What is evident from this analysis is that increasing the number of searcher agents improves WaOA's ability to scan the search space, which enhances the performance of the proposed algorithm and reduces the values of the objective function. To analyze the sensitivity to the parameter T, WaOA is used to optimize functions F1 to F23 for values of T equal to 200, 500, 800, and 1000. The optimization results are given in Table 7, and the corresponding convergence curves are presented in Fig. 5. Based on the obtained results, increasing T gives the algorithm more opportunity to converge to better solutions through its exploitation ability. Therefore, with increasing values of T, the optimization process becomes more efficient, and as a result, the values of the objective function decrease.
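Returning to the statistical analysis above, the Wilcoxon signed-rank comparison between two algorithms' runs can be reproduced with SciPy. The run values below are placeholders standing in for the per-run results of Tables 2-4, not data from the paper:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical best-so-far values from 20 independent runs of two algorithms.
waoa_runs = np.random.default_rng(0).normal(0.10, 0.02, 20)
rival_runs = np.random.default_rng(1).normal(0.15, 0.03, 20)

stat, p = wilcoxon(waoa_runs, rival_runs)  # paired, non-parametric test
print(f"W = {stat:.1f}, p-value = {p:.4g}")
print("significant at 0.05" if p < 0.05 else "not significant")
```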
Evaluation of the CEC 2015 test suite. The optimization results of the CEC 2015 test suite, including
C15-F1 to C15-F15, using WaOA and the competitor algorithms are reported in Table 8. The simulation results show that WaOA is the best optimizer for the C15-F1 to C15-F8, C15-F10, C15-F13, and C15-F14 functions. In addition, the proposed WaOA is the second-best optimizer for C15-F9 (after MVO), C15-F11 (after WSO), and C15-F12 and C15-F15 (after GSA). Analysis of the simulation results shows that WaOA provides better results on most functions of the CEC 2015 test suite and, ranking first overall, has provided superior performance compared to the competitor algorithms.
Evaluation of the CEC 2017 test suite. The optimization results of the CEC 2017 test suite using WaOA and the competitor algorithms are reported in Table 9. The analysis of the simulation results shows that WaOA is the best optimizer for the C17-F1 to C17-F6 and C17-F8 to C17-F30 functions. In solving C17-F7, the proposed WaOA is the second-best optimizer after GSA. Comparison of the simulation results shows that WaOA provides better results on most functions of the CEC 2017 test suite and superior performance in solving this suite compared to the competing algorithms.
Informed consent. Informed consent was not required as no humans or animals were involved.
Ethical approval. This article does not contain any studies with human participants or animals performed by any of the authors.
WaOA's application to real-world problems
Metaheuristic algorithms are one of the most widely used techniques for dealing with real-world applications. This section tests WaOA's performance in optimizing four engineering design challenges and twenty-two constrained optimization problems from the CEC 2011 test suite. It should be noted that the penalty function method has been used to model the constraints of the optimization problems. Thus, if a solution does not meet a constraint of the problem, a penalty coefficient is added to the value of its objective function for each violated constraint, and as a result, the solution is treated as infeasible.
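A minimal sketch of such a penalty wrapper is shown below; the names and the static penalty form are illustrative assumptions, not the paper's exact implementation:

```python
def penalized(objective, constraints, rho=1e6):
    """Wrap a constrained problem as an unconstrained one via a static penalty.

    constraints: list of callables g(x) with the convention g(x) <= 0 when feasible.
    rho        : penalty coefficient applied per unit of constraint violation.
    """
    def f(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + rho * violation  # penalize each violated constraint
    return f
```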
Tension/compression spring design optimization problem. Tension/compression spring design is a real-world challenge with the aim of minimizing the weight of a tension/compression spring. A schematic of this design is shown in Fig. 6 59 . The problem formulation follows the standard statement in the literature. The results of using WaOA and the competing algorithms to optimize the tension/compression spring design variables are presented in Table 10. The simulation results show that WaOA provides the optimal solution to this problem with variable values equal to (0.0519693, 0.363467, 10.9084) and a corresponding objective function value of 0.012672. The statistical results obtained from the performance of WaOA and the competitor algorithms are reported in Table 11, which shows the superiority of WaOA in providing better values for the statistical indicators. The WaOA convergence curve for the tension/compression spring problem is shown in Fig. 7.
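The formulation itself is not reproduced in this excerpt; the sketch below encodes the standard literature statement of the problem (wire diameter d, coil diameter D, number of active coils N). Plugging in the solution reported above reproduces the reported cost of about 0.012672, which supports this reading:

```python
def spring_cost(x):
    d, D, N = x                    # wire diameter, coil diameter, active coils
    return (N + 2) * D * d**2      # spring weight to be minimized

def spring_constraints(x):
    d, D, N = x
    return [                       # standard g_i(x) <= 0 form from the literature
        1 - (D**3 * N) / (71785 * d**4),
        (4*D**2 - d*D) / (12566 * (D*d**3 - d**4)) + 1/(5108 * d**2) - 1,
        1 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1,
    ]

x_best = (0.0519693, 0.363467, 10.9084)   # solution reported in Table 10
print(round(spring_cost(x_best), 6))      # ~0.012672
assert all(g <= 1e-6 for g in spring_constraints(x_best))
```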
Welded beam design.
Welded beam design is a real-world engineering challenge whose main design goal is to reduce the fabrication cost of the welded beam. A schematic of this design is shown in Fig. 8 60 .
WaOA and the competing algorithms are implemented on the welded beam design problem, and the results are presented in Table 12. Based on these results, WaOA provides the optimal solution to this problem with variable values equal to (0.20573, 3.470489, 9.036624, 0.20573). The statistical results obtained from the performance of WaOA and the competitor algorithms are reported in Table 13, which shows that WaOA performs better in terms of the statistical indicators. The convergence curve of the WaOA implementation on the welded beam design is shown in Fig. 9.
Pressure vessel design.
Pressure vessel design is a real-world optimization challenge that aims to reduce design costs. A schematic of this design is shown in Fig. 12 63 . The results of using WaOA and the competing algorithms to optimize the pressure vessel design are presented in Table 17. The statistical results indicate that WaOA has effectively optimized the pressure vessel design challenge by providing more favorable values for the statistical indicators. The WaOA convergence curve in achieving the optimal solution is shown in Fig. 13.
Evaluation of twenty-two real-world optimization problems from the CEC 2011 test suite.
In this subsection, the performance of WaOA in handling real-world applications is challenged on twenty-two constrained optimization problems from the CEC 2011 test suite. This test suite comprises the following problems: parameter estimation for frequency-modulated (FM) sound waves, the Lennard-Jones potential problem, the bifunctional catalyst blend optimal control problem, optimal control of a nonlinear stirred tank reactor, the Tersoff potential for model Si (B), the Tersoff potential for model Si (C), spread spectrum radar polyphase code design, the transmission network expansion planning (TNEP) problem, the large-scale transmission pricing problem, the circular antenna array design problem, the ELD problems (which consist of DED instance 1, DED instance 2, ELD instance 1, ELD instance 2, ELD instance 3, ELD instance 4, ELD instance 5, hydrothermal scheduling instance 1, hydrothermal scheduling instance 2, and hydrothermal scheduling instance 3), the Messenger spacecraft trajectory optimization problem, and the Cassini 2 spacecraft trajectory optimization problem. Full details and a description of the CEC 2011 test suite are available in 64 . The results of employing WaOA and the competitor algorithms on these real-world optimization problems are presented in Table 18, and the boxplot diagrams obtained from the performance of the metaheuristic algorithms on this suite are drawn in Fig. 14. Based on the simulation results, WaOA is the best optimizer for all of the C11-F1 to C11-F22 optimization problems and thus provides superior performance in handling the CEC 2011 test suite in competition with the other algorithms. Also, the statistical analysis of the p-values shows that WaOA has a statistically significant superiority over the competitor algorithms.
Conclusions and future works
In this study, a new bio-inspired metaheuristic algorithm called the Walrus Optimization Algorithm (WaOA) was developed based on the natural behaviors of walruses. Feeding, escaping, fighting predators, and migrating are the primary sources of inspiration used in the design of WaOA. The WaOA theory was explained, and its mathematical modeling was presented in three phases: (i) feeding strategy, (ii) migration, and (iii) escaping and fighting against predators. Sixty-eight standard benchmark functions of various types, including unimodal and multimodal functions and the CEC 2015 and CEC 2017 test suites, were employed to analyze WaOA's performance in providing solutions. The optimization results of the unimodal functions showed the high exploitation ability of WaOA in local search to converge towards the global optimum. The optimization results of the multimodal functions indicated the high exploration ability of WaOA in global search and its capacity to avoid being trapped in locally optimal solutions. WaOA's performance results were compared with those of ten well-known metaheuristic algorithms. The simulation and comparison results showed that the proposed WaOA approach has a high ability to balance exploration and exploitation and is much superior to and more competitive than the ten competitor metaheuristic algorithms. In addition, the results of the WaOA implementation on the four design problems and the twenty-two real-world optimization problems from the CEC 2011 test suite demonstrate the effectiveness of the proposed approach in real-world applications.
Although it was observed that WaOA provided superior results on most of the benchmark functions, the proposed approach has some limitations. The first limitation, which faces all metaheuristic algorithms, is that it is always possible to design newer algorithms that provide better results than existing ones. The second limitation of WaOA is that the proposed method may fail in some optimization applications. The third limitation of WaOA is that the nature of random search in this algorithm means that there is no guarantee of achieving the global optimum. Moreover, the authors do not claim that the proposed WaOA approach is the best optimizer for all possible optimization applications.
Data availability
All data generated or analyzed during this study are included directly in the text of this submitted manuscript. There are no additional external files with datasets. | 7,763 | 2023-05-31T00:00:00.000 | [ "Computer Science" ] |
Research on Extended Kalman Filter and Particle Filter Combinational Algorithm in UWB and Foot-Mounted IMU Fusion Positioning
As UWB high-precision positioning in NLOS environments has become one of the hot topics in indoor positioning research, this paper first presents a method for smoothing the original range data based on the Kalman filter, built on an analysis of the range error of UWB signals in LOS and NLOS environments. Then, it studies a UWB and foot-mounted IMU fusion positioning method that integrates the particle filter with the extended Kalman filter. This method adopts the EKF algorithm within the kinematic equation of the particle filter algorithm to calculate the position of each particle, which is akin to running N (number of particles) extended Kalman filters, and overcomes the disadvantage of the mismatch between the kinematic equation and the observation equation as well as the problem of sample degeneration under the nonlinear conditions of the standard particle filter algorithm. The comparison with the foot-mounted IMU positioning algorithm, the optimization-based UWB positioning algorithm, the particle filter-based UWB positioning algorithm, and the particle filter-based IMU/UWB fusion positioning algorithm shows that our algorithm works very well in LOS and NLOS environments. Especially in an NLOS environment, our algorithm can better use the foot-mounted IMU positioning trajectory maintained by every particle to weaken the influence of range error caused by signal blockage. It outperforms the other four algorithms described above in terms of the average and maximum positioning error.
Introduction
With the wide application of indoor positioning technologies in areas such as supermarket shopping, fire emergency navigation, and hospital patient tracking, indoor positioning can be implemented through the following two approaches. One is based on various wireless network technologies, such as WiFi (wireless fidelity) [1,2], RFID (radio frequency identification) [3], and UWB (ultra-wideband) [4,5], which can be used to realize indoor positioning according to the intensity of received signals, the TOA (time of arrival), or the TDOA (time difference of arrival). Among these technologies, UWB can achieve decimeter-level positioning precision. However, in some special cases, such as emergency rescue, UWB signals might be blocked by people, walls, or other barriers in a complex indoor environment. As this might result in signal multipath effects or intensity attenuation, high-precision positioning can hardly be achieved in an NLOS (non-line-of-sight) environment through the UWB positioning approach.
The other approach is based on the IMU (inertial measurement unit), comprising sensors such as the accelerometer, gyroscope, and magnetometer [6], which can be used for positioning according to integration or the PDR (pedestrian dead reckoning) method. However, this approach has a deficiency, which is accumulative error. In order to overcome the problem of error accumulation, the authors of [7] proposed ZUPT (zero velocity update) in 2005 to correct the system error and applied it in NavShoe. In 2012, the authors of [8] implemented a shoe-mounted ZUPT-aided open-source INS (inertial navigation system) for real-time positioning. At a cost of around USD (United States dollar) 800, this sample system was able to keep the navigation error within the range of 0.2%-1% over a short distance (within 100 meters). Moreover, through analysis of the limitations of ZUPT and the error model [9], they managed to eliminate the drift error with an optimization algorithm to enhance the algorithm's efficiency [10]. In 2013, a locally distributed system framework based on the shoe-mounted INS was proposed [11], which could significantly increase the autonomous positioning precision by constraining the course angle deviation of the INS according to the distance between the feet. In 2014, Nilsson and his team [12] developed a positioning approach based on IMU arrays to further increase the reliability and precision of autonomous positioning and, at the same time, open-sourced the experimental positioning platform. In 2017, Wagstaff et al. [13] presented a method to improve the accuracy of a foot-mounted, zero-velocity-aided inertial navigation system (INS) by varying the estimator parameters based on real-time classification of the motion type. By combining the motion classifier with a set of optimal detection parameters, they showed how to reduce the INS position error during mixed walking and running motion. In [14], the authors presented an experimental study on the noise performance and operating-clock-based power consumption of multi-IMU platforms, observing that the four-IMU system is best optimized for cost, area, and power.
Although the ZUPT technology can to some extent realize error correction, it still cannot overcome the problem of error accumulation arising in long-distance, long-duration positioning with an IMU. Therefore, the integration of the IMU with UWB is a trend for achieving high-precision, real-time indoor positioning. With the integration of the IMU, not only can additional observations such as velocity and direction be obtained, but the multipath and NLOS effects can also be mitigated [15,16]. In addition, based on the EKF (extended Kalman filter), a loose combination can be adopted to track the pedestrian's movement. In [17], the authors realized a UWB/IMU tightly coupled algorithm based on the EKF and compared it with an optical tracking system to demonstrate its higher precision. Similarly, in [18], the authors implemented a UWB and inertial data fusion algorithm based on a steady-state KF with a fixed gain. The main advantage of this method is that it can be implemented efficiently in low-performance WSN nodes with low power consumption. With the introduction of a tightly coupled algorithm based on UWB/INS, the authors of [19] analyzed the influence of integrity monitoring algorithms on positioning performance. In [20], the authors put forward an adaptive fuzzy Kalman filter method. Their experiment showed that this algorithm outperformed the basic KF algorithm in terms of the positioning result. In [21], the authors designed a tightly coupled GPS (global positioning system)/UWB/INS integrated system based on the adaptive robust Kalman filter; yet, it is only for outdoor use. In [22], for the first time, the positioning of a flying drone with the integration of vision, IMU, and UWB was proposed, realizing a two-dimensional positioning accuracy of 10 cm. Meanwhile, in [23], visual-inertial SLAM (simultaneous localization and mapping) technology was used for the positioning of a flying drone, and the adoption of UWB technology for error correction yielded a full six-DoF pose of the drone. In [24], the authors studied EKF loosely/tightly coupled UWB/INS integration based on the PDF algorithm, but they utilized ray-launching simulations to generate the UWB data. In [25], the authors presented an improved tightly coupled navigation model for indoor pedestrian navigation. In the proposed model, a channel filter is used for the estimation of the distance between the reference node (RN) and the blind node (BN) measured by the UWB, and then a 15-element error state vector is used in the filter for fusing the foot-mounted IMU and UWB measurements. The real test results show that the proposed model is effective in reducing the error compared with the conventional model; its mean position error is reduced by about 14.81% compared with the UWB-only model. In [26], the authors fused an ultra-wideband (UWB) sensor-based positioning solution with an inertial measurement unit (IMU) sensor-based positioning solution to obtain a robust yet optimal positioning performance. Sensor fusion is accomplished via an extended Kalman filter (EKF) design that simultaneously estimates the IMU sensors' systematic errors and corrects the positioning errors. Fault detection, identification, and isolation are built into the EKF design to prevent UWB sensor measurement data corrupted by obstructions, multipath, and other interference from degrading the positioning performance. Computer simulation results indicate that more than 100% positioning performance improvement over the UWB sensor-based positioning solution alone can be obtained through the proposed sensor fusion solution. In [27], the authors proposed an approach to combine IMU inertial and UWB ranging measurements for relative positioning between multiple mobile users without knowledge of the infrastructure.
They incorporate the UWB and IMU measurements into a probabilistic framework, which allows cooperatively positioning a group of mobile users and recovering from positioning failures. Most of the above methods adopt the EKF for UWB/INS fusion positioning and optimization. However, the premise for the use of the EKF is the assumption that both the system errors and the observation errors conform to a Gaussian distribution. But in the NLOS condition, signal transmission might be affected by barriers through blockage or reflection, which increases the time delay of signal transmission. Under such circumstances, if one still assumes that the UWB ranging errors conform to a Gaussian distribution, great error will result. In this paper, UWB and IMU fusion positioning is studied based on the PF (particle filter). This is because the particle filter can handle multimodal error distributions: as long as sufficient particles are available, an approximately globally optimal solution can be obtained effectively. This paper introduces two UWB and IMU fusion algorithms based on the PF and compares them with three other UWB- or IMU-based positioning algorithms for analysis. The remainder of the paper is organized as follows: Section 2 presents the analysis and pretreatment of UWB data errors, and Section 3 introduces the two PF-based UWB and IMU fusion algorithms. To facilitate comparison and analysis, it also provides the positioning results based on the foot-mounted IMU, the optimization-based algorithm, and the particle filter algorithm using pure UWB data. Several experiments are then analyzed in Section 4, and Section 5 concludes the paper.
Analysis and Pretreatment of UWB Data Error
In order to verify the UWB data error in LOS and NLOS environments, we carried out a correlation experiment in the entrance hall of the School of Computer Science and Technology at China University of Mining and Technology. As shown in Figure 1, the hall is paved with 0.8 × 0.8 m marble tiles so that real positions can be calibrated. As indicated in Figure 2, the core chip used in the UWB tag/beacon is the DWM1000 chip from DecaWave.
Analysis of UWB Data Errors in LOS Condition.
As shown in Figure 3, the test starts from a distance of 2.4 m. Based on a progressive increase of 0.8 m in range, the distances between the UWB tag and beacon are recorded and the relevant errors calculated in Table 1, which shows that the mean error grows with the increase in distance. For example, when the distance is 1.6 m, the mean error is 0.37 m; when the distance is increased to 8 m, the mean error reaches 0.56 m. But generally, as indicated in Figure 4, there is a small standard deviation of errors, which also proves that the ranging result is quite stable.
Analysis of UWB Data Errors in NLOS Condition.
As shown in Figure 5, we first test the influence of a marble column on the UWB signals in the experimental area, where the column is located 1.13 m away from the UWB beacon. The test starts from a distance of 3.39 m between the UWB tag and the beacon. The distances between the UWB tag and the beacon are recorded based on a progressive increase of 0.8 m, and the related errors are calculated in Table 2. As shown in Figure 6, there is a significant increase in the mean error with the increase in distance. When the range is 3.39 m, the mean error is 0.62 m; however, the mean error grows to 3.45 m when the distance is increased to 10.17 m. Please note that when the distance is increased to more than 9.04 m, the laptop connected to the UWB beacon can hardly receive any signal after a limited number of groups of range data have been acquired. It reveals that, with the increase in distance, column blockage leads to an increase in signal attenuation. Actually, when the distance between the tag and the beacon is within 4.52 m, there is a steady change in the ranging result with a standard deviation of 0.04. However, when the distance is greater than 4.52 m, the ranging result becomes extremely unstable. There is a large standard deviation of ranges in the presence of column blockage, with the maximum error increasing from 2.01 m to 4.59 m. After that, an experiment is performed on the influence of pedestrian blockage on the UWB signals. As shown in Figure 7, it starts from a distance of 1.6 m while the pedestrian moves freely between the beacon and the tag. Then, the distances between the UWB tag and the beacon are recorded based on a progressive increase of 0.8 m in range, and the relevant errors are calculated in Table 3. With an increase in distance, the standard deviation of ranges shows a trend of first increasing and then decreasing. For example, when the distance is below 4.8 m, the standard deviation is around 0.2 m, showing a stable ranging result with the maximum error within 1.5 m. However, when the distance is between 5.6 m and 7.2 m, the standard deviation rises quickly. As indicated in Figure 8, large amplitudes arise on the corresponding three distance curves, revealing that the ranging results are extremely unstable, with the maximum error up to 7.78 m. However, when the distance is over 8.0 m, the ranging result becomes stable again with the standard deviation within 0.3 m. In this experiment, the sudden increase in ranging error always occurs at the moment
when the pedestrian moves close to the tag or the beacon. This is because there is a sudden interference from the pedestrian on the UWB signals. But when the distance goes up to a certain degree, signal diffraction occurs, freeing the signals from the influence of pedestrian blockage.
UWB Data Pretreatment.
There is an abnormality in the ranging result when the UWB signals are shielded by an obstruction or the positioning tag is beyond the detection limit of the UWB beacon. Hence, it is necessary to use the KF (Kalman filter) for the smoothing of range data. Algorithm 1 provides the Kalman filtering process based on the data acquired from four beacons while the pedestrian moves around the experimental area. The KF algorithm is used to filter every column of UWB data. As indicated in Figures 9 and 10, the filtered UWB range data becomes relatively smoother. In this experiment, the pedestrian walks at a speed of 1.5 m/s, and the UWB data have been collected at a fixed frequency. Theoretically, the range difference between two adjacent sampling times must be lower than 0.5 m. However, Figure 11 shows that lots of data are above this threshold due to the multipath effect of the UWB data and the column blockage. In addition, the range difference becomes smaller after the Kalman filtering, as shown in Figure 12. As indicated in Table 4, there is a reduction in the range difference between two adjacent sampling times for all four beacons regarding the mean value, the maximum difference, and the variance.
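Since Algorithm 1 is not reproduced in this excerpt, the following is a minimal sketch of such per-beacon range smoothing with a scalar Kalman filter; the random-walk range model and the noise variances are illustrative assumptions to be tuned against the data:

```python
import numpy as np

def kf_smooth_ranges(z, q=0.01, r=0.25):
    """Smooth one beacon's raw UWB ranges with a scalar Kalman filter.

    z : sequence of raw range measurements (m)
    q : process noise variance (how fast the true range may change)
    r : measurement noise variance
    """
    x, p = z[0], 1.0                 # state estimate and its variance
    out = []
    for zk in z:
        p = p + q                    # predict (random-walk range model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)
```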
IMU/UWB Fusion Positioning and Analysis
This paper presents a UWB and foot-mounted IMU fusion positioning method through the integration of the PF with the EKF. In order to verify the algorithm's performance, this paper provides the experimental results obtained with the foot-mounted IMU-based positioning algorithm, the optimization-based UWB positioning algorithm, the particle filter-based UWB algorithm, and the particle filter-based IMU/UWB fusion positioning algorithm for contrast and analysis.
The Foot-Mounted IMU-Based Positioning Algorithm.
Fischer et al. [28] put forward a simple but comparatively precise positioning algorithm based on the foot-mounted IMU. Their ideas are summarized in Algorithm 2.
After Line 1 acquires acc_s and gyro_s, pitch and roll are obtained separately from the accelerometer components, while the value of yaw can be obtained with a magnetometer or through manual setting: roll = arctan(acc_s(2,1)/acc_s(3,1)) and yaw = init_heading, with pitch obtained analogously from the accelerometer components. The IMU is kept horizontally still for 30-60 seconds to obtain the mean value of the angular velocity noise, which is taken as the zero bias of the gyro, gyro_bias. An angular velocity skew-symmetric matrix S_w is also set up, and the initial coordinate transformation matrix C_pre is constructed. In Line 9, a new coordinate transformation matrix C is generated from C_pre and S_w as soon as new data arrive. Then, Lines 10-12 calculate the current acceleration, velocity, and position vectors in the navigation coordinates. The calculation of these three variables is always based on the mean value of the states at the current and previous moments, because the movement process always occurs between two adjacent data points. After that, an observation matrix H and an observation noise matrix R are constructed,
where the noise values of zero-speed detection are used as the observations. The acceleration skew-symmetric matrix S_a is calculated through the following equation:

$$S_a = \begin{bmatrix} 0 & -\mathrm{acc\_n}(3) & \mathrm{acc\_n}(2) \\ \mathrm{acc\_n}(3) & 0 & -\mathrm{acc\_n}(1) \\ -\mathrm{acc\_n}(2) & \mathrm{acc\_n}(1) & 0 \end{bmatrix}.$$

The state transfer matrix F and the system error covariance matrix Q are then calculated. With the direction error, position error, and velocity error adopted as the state values, the error propagation is calculated according to the formula provided in Line 14. In Line 15, if a static state is detected, the Kalman gain K is calculated first; then, the velocity vector of the current state is used to calculate the error vector delta_x, which includes the direction error, the position error, and the velocity error. After that, an angular error skew-symmetric matrix S_e is constructed from the direction error:

$$S_e = \begin{bmatrix} 0 & -\mathrm{attitude\_error}(3,1) & \mathrm{attitude\_error}(2,1) \\ \mathrm{attitude\_error}(3,1) & 0 & -\mathrm{attitude\_error}(1,1) \\ -\mathrm{attitude\_error}(2,1) & \mathrm{attitude\_error}(1,1) & 0 \end{bmatrix}.$$

In Line 21, the coordinate transformation matrix C is corrected, while Line 22 corrects the current velocity and position values. Line 24 records the directional values in the walking process. Finally, Line 26 returns the positioning result obtained from the foot-mounted IMU.
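The stance-phase (zero-velocity) detection that triggers the Kalman correction above can be sketched as follows; this is a simple magnitude-threshold detector with illustrative thresholds, not the paper's exact detector:

```python
import numpy as np

def detect_zero_velocity(acc, gyro, acc_thr=0.3, gyro_thr=0.5, win=5):
    """Flag stance-phase samples of a foot-mounted IMU.

    acc, gyro : (T, 3) arrays of specific force (m/s^2) and angular rate (rad/s)
    Flags samples whose acceleration magnitude stays near gravity and whose
    angular rate is small over a whole window of `win` samples.
    """
    g = 9.81
    acc_mag = np.linalg.norm(acc, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    still = (np.abs(acc_mag - g) < acc_thr) & (gyro_mag < gyro_thr)
    # require the whole window to be still to suppress spurious detections
    kernel = np.ones(win)
    return np.convolve(still.astype(float), kernel, mode="same") >= win
```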
UWB Positioning Based on the Optimization Algorithm.
The UWB data-based positioning result can be calculated through an optimization algorithm with constraints, such as L-BFGS-B. Algorithm 3 gives the computation flow under the constraint that the horizontal position coordinates lie within the range of -40 to 40 m. Every time the range data from the four beacons to the tag are obtained, the L-BFGS-B algorithm calculates the positioning result by minimizing the cost function. In the cost function algorithm (Algorithm 4), the range from each beacon to the current pose is first calculated, and the absolute differences between these ranges and the corresponding observed ranges are summed. The result is taken as the return value of the cost function.
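A minimal sketch of this constrained least-absolute-residual fix with SciPy follows; the function names are illustrative, and only the cost and the ±40 m bounds come from the description above:

```python
import numpy as np
from scipy.optimize import minimize

def locate(beacons, ranges, x0=(0.0, 0.0)):
    """Estimate the tag position from beacon ranges with L-BFGS-B,
    mirroring the cost described in the text (sum of absolute residuals)."""
    beacons = np.asarray(beacons, float)
    ranges = np.asarray(ranges, float)

    def cost(p):
        d = np.linalg.norm(beacons - p, axis=1)  # range from pose to each beacon
        return np.sum(np.abs(d - ranges))

    bounds = [(-40.0, 40.0), (-40.0, 40.0)]      # constraint from Algorithm 3
    res = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)
    return res.x
```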
UWB Positioning Based on the PF Algorithm.
In order to track the pedestrian's moving status, this paper adopts the particle filter algorithm for positioning. The computation flow is provided in Algorithm 5, where the first line initializes the following variables: the particle number, the initial position, the state noise variance, the evaluation noise variance, the dynamic array of particle states, the array of particle scores, the weight array, the numerical fusion positioning result, and the array of static beacon coordinates. The state of every particle consists of (x, y), the current position of the particle. Line 2 initializes the particle state; in other words, the particles are dispersed around the initial position based on the variance sigma1. Lines 3-9 perform position tracking based on the particle filter. In Line 4, Gaussian noise is added to the position of every particle in P_state. Line 5 calculates the weight of every particle based on the range values acquired from the collected UWB data. As described in Algorithm 6, assume that there are n beacons available. The likelihood function of the kth UWB observation at the current moment is calculated with variance sigma2, taking the range from the current particle to the corresponding beacon as the mean value. The product of the n likelihood functions gives the weight of the current particle. Line 7 resamples by particle weight and updates the particle state based on the resampling result.
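One predict/weight/resample cycle of this UWB-only particle filter can be sketched as follows; the parameter values are illustrative, and the names mirror but are not taken verbatim from Algorithms 5-6:

```python
import numpy as np

def pf_step(particles, ranges, beacons, sigma1=0.1, sigma2=0.3, rng=None):
    """One cycle of the UWB-only particle filter (Algorithms 5-6 sketch)."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    particles = particles + rng.normal(0.0, sigma1, particles.shape)  # predict

    # weight: product of Gaussian range likelihoods over all beacons
    d = np.linalg.norm(particles[:, None, :] - beacons[None, :, :], axis=2)
    w = np.exp(-0.5 * ((d - ranges) / sigma2) ** 2).prod(axis=1)
    w = w + 1e-300                               # guard against total underflow
    w = w / w.sum()

    estimate = (particles * w[:, None]).sum(axis=0)  # weighted position estimate
    idx = rng.choice(n, size=n, p=w)                 # multinomial resampling
    return particles[idx], estimate
```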
IMU/UWB Fusion Positioning Based on the PF Algorithm.
As indicated in Algorithm 7, this paper adopts IMU/UWB fusion positioning to reduce the positioning error caused by the deviation of UWB ranges. Lines 1-2 initialize the particle number, the initial position, the variance of sampled noise, the evaluation noise variance, the array of particle states, the array of particle scores, the weight array, the numerical fusion positioning result, and the array of static beacon coordinates. Line 3 utilizes the first 5 groups of UWB data to estimate the pedestrian's initial position through the triangulation method.
Centered on the initial position, particles are dispersed randomly based on the variance sigma1 to initialize the array of particle states. Lines 5-12 give the particle filter-based fusion process using the ImuPath and UWB data. Line 6 acquires the increment of the trajectory between two adjacent UWB data points based on the ZuptImu algorithm. Line 7 takes this trajectory increment as the mean value to update the P_state array according to the variance sigma1. Line 8 evaluates the particle state based on the currently acquired UWB data, following a principle similar to that used in Algorithm 6. Lines 9-10 perform the resampling of particles and the weight updating, with the particle state updated based on the resampling result. Line 11 provides the final fused positioning result through a calculation weighted by the current particle weights.
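A minimal sketch of one fusion cycle follows: every particle is shifted by the IMU trajectory increment accumulated between two UWB fixes, then weighted and resampled against the UWB ranges. The names and parameter values are illustrative, not the paper's exact implementation:

```python
import numpy as np

def pf_fusion_step(particles, delta, ranges, beacons,
                   sigma1=0.05, sigma2=0.3, rng=None):
    """One cycle of the PF-based IMU/UWB fusion (Algorithm 7 sketch).

    delta : (2,) trajectory increment from the ZuptImu path between UWB fixes.
    """
    rng = rng or np.random.default_rng()
    n = len(particles)
    # predict: IMU increment plus sampling noise
    particles = particles + delta + rng.normal(0.0, sigma1, particles.shape)
    # weight against the current UWB ranges (as in Algorithm 6)
    d = np.linalg.norm(particles[:, None, :] - beacons[None, :, :], axis=2)
    w = np.exp(-0.5 * ((d - ranges) / sigma2) ** 2).prod(axis=1) + 1e-300
    w /= w.sum()
    estimate = (particles * w[:, None]).sum(axis=0)
    return particles[rng.choice(n, size=n, p=w)], estimate
```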
IMU/UWB Positioning Based on the Integration of PF and EKF Algorithm.
As indicated in Algorithm 8, this paper presents a positioning method through the integration of the particle filter with the extended Kalman filter. Lines 1-2 initialize the following variables: the IMU data count, the UWB data count, the number of particles, the initial position, the initial direction, the variance of state noise, the evaluation noise variance, the pose of each particle, the score array, the weight array, the dynamic array of particle states, and the array of static beacon coordinates. The key steps of the main loop are: if time(UwbData(uwb_index)) > time(ImuData(imu_index)) (Line 10), then for every particle i, pose[i] = Ekf_List[i].get_Position(add_Noise(ImuData, ZuptData, sigma1)) and imu_index is incremented (Lines 11-14); otherwise (Lines 15 onward), each particle's score is evaluated against the current UWB ranges, and the fused pose pose_fusing is computed from the weighted particle poses (Line 22). Every particle consists of the computational nodes on the path of the ZUPT-based foot-mounted IMU; that is to say, every particle performs a real-time calculation of the movement path according to the EKF algorithm. Lines 3-5 initialize the particle state, in other words, estimate the pose based on the first 20 groups of IMU data at the initial moment. Lines 7-9 indicate that the program quits once all of the IMU data or UWB data have been read. The algorithm flow chart is shown in Figure 13.
Experiment
The experimental field was established in the entrance hall of the School of Computer Science and Technology at China University of Mining and Technology, where the tester wore an X-IMU device, produced by the UK company X-IO, mounted on the foot, with an output frequency of 128 Hz, as shown in Figure 14. The data communication between the UWB positioning system and the IMU is shown in Figure 15, in which there are four positioning beacons (Beacons 0-3) and one positioning tag connected via wireless links; their ranging data are transmitted to the UWB server by Beacon 0. After the data are preprocessed by the UWB server, they are transmitted to the laptop via WiFi. Meanwhile, the IMU data from the foot are also transmitted to the laptop via Bluetooth, so the UWB data and IMU data can be time-synchronized on the laptop. In Figure 15, the experimental facilities in the solid box are all carried by the tester: the tag is installed on a helmet, the IMU is mounted on a foot, and the laptop is held by the tester, as indicated in Figure 16, who walks at a constant velocity. Three routes have been designed in the experiment. As indicated in Figure 17, Route 1 is a rectangular route with fewer turns, and Route 2 is a polygonal route with more turns. On Route 3, shown in Figure 18, some UWB data may be lost during the walking process.
There are four beacons in the experimental area. The third experiment is performed under the situation of losing some UWB data. The performance and reliability of the algorithms can be assessed through the calculation of the positioning errors in the various schemes.
Analysis on the Positioning Paths without Pedestrian Blockage. In the entrance hall of the School of Computer Science and Technology, the tester walks along the route marked in red for three circuits and along the route marked in blue for two circuits, as indicated in Figure 17, where the start and end of each route are marked. Figures 19 and 20 demonstrate the positioning paths based on the various algorithms, with the calculated positioning errors provided in Table 5.
Based on the original IMU data, Scheme I, denoted ZuptImu, is indicated by the pink path in Figures 19 and 20. Due to the accumulative error in the IMU data, the positioning result obtained with this scheme deviates from the real path, with a mean error of up to 0.987 m and a maximum error of 3.405 m.
Based on the optimization algorithm, Scheme II, denoted OptUwb, is indicated by the red path in Figures 19 and 20. Although most of the path agrees well with the real trajectory under this algorithm, a large error arises in the positioning result when signals are blocked; in other words, the optimization algorithm fails to converge to the correct result. For example, in Route 2, the maximum positioning error reaches 6.678 m, which indicates that the trajectory has deviated from the real path.
Based on the UWB signals, Scheme III, denoted PfUwb, uses the particle filter for positioning. With this scheme, the current positioning result is corrected according to the range values acquired from the four beacons. With this algorithm, the mean errors in Route 1 and Route 2 are 0.624 m and 0.527 m respectively, which shows that the positioning result is quite stable.
As indicated in Figures 19 and 20, Schemes IV and V, denoted PfImuUwb and PfEkfImuUwb, are indicated by the blue and black paths respectively. Both schemes guarantee a stable positioning result in Route 1 and Route 2, with mean positioning errors close to that of Scheme III.
Analysis on the Positioning Path in NLOS Condition.
Section 4.1 reveals that all three schemes, PfUwb, PfImuUwb, and PfEkfImuUwb, can steadily implement the path calculation when there is no interference from pedestrians. In order to test the stability of these three algorithms under pedestrian interference, we had three pedestrians move about in the experimental area. Figures 21 and 22 show the UWB ranges in Route 1 and Route 2 with pedestrian interference. Figures 23 and 24 demonstrate the differences in adjacent ranges from the four beacons, revealing that pedestrian blockage leads to many errors in the range data. Figures 25 and 26 show the positioning trajectories obtained with the various algorithms, with the positioning errors provided in Table 6.
In the context of signal interference, great deviations arise with the PfUwb algorithm. Especially when the pedestrian causing the interference is close to a beacon, the sudden change in signal transmission leads to a big jump in the positioning result. As indicated by the green path in Figures 25 and 26, the mean error is 0.696 m in Route 1 with a maximum error of 2.981 m; in Route 2, the mean error reaches 0.587 m and the maximum error is 2.299 m.
As indicated by the blue path in Figures 25 and 26, the PfImuUwb algorithm can alleviate to some extent the positioning error arising in the PfUwb algorithm, with the aid of the IMU positioning result. However, this algorithm has a very limited deviation-correction ability. Therefore, in most cases the positioning result obtained with this algorithm is dominated by the UWB signals, making it similar to that of the PfUwb algorithm. For example, the maximum positioning error in Route 1 reaches 2.896 m with this method, and significant distortion can be found on part of the path.
As the PfEkfImuUwb algorithm uses every particle to maintain IMU-based EKF positioning and tracking, the positioning result is equivalent to the integration of results from multiple positioning paths, which weakens the influence of abnormal UWB signals. In this regard, the positioning result of this method is comparatively smoother. Meanwhile, the mean errors in Routes 1 and 2 are 0.624 m and 0.527 m respectively, which also proves that this algorithm guarantees a stable positioning result.
Analysis on the Positioning Path with UWB Data Loss.
Route 3 starts from Zone A, passes Point C, and reaches Zone B; with a clockwise walk, it finally returns to the starting point. As shown in Figure 27, there are two areas where UWB data are lost during the walking process. One is at Point C, where the data from Beacon 3 become abnormal due to the occlusion of walls; these abnormal data can be masked by the filtering algorithm. The other is in Zone B, where the ranging signals of Beacons 2 and 3 cannot be received due to the occlusion of walls and the increased distance. The positioning results of the ZuptImu and PfEkfImuUwb algorithms are shown in Figure 28. The main disadvantage of the ZuptImu algorithm is a large deviation in the direction computation: after several corners the deviation grows larger and larger, and the final closing error reaches 2.3 m, with an obvious mismatch between the positioning trajectory and the actual trajectory. In the PfEkfImuUwb algorithm, by contrast, each particle maintains EKF positioning and tracking based on the IMU, which keeps the motion deviation of each particle small. When some UWB data are lost, the motion angle of erroneous particles is corrected through the constraint of the single visible UWB beacon in Zone B and the ranging constraints of two UWB beacons along the straight line through Point C, yielding an almost complete overlap between the positioning trajectory and the actual trajectory.
Conclusions
This paper presents a UWB and foot-mounted IMU fusion positioning method through the integration of a PF with an EKF. Although this algorithm achieves good positioning results in the context of pedestrian blockage, it can be further improved in the following respects. On the one hand, since the EKF-based path calculation is maintained by every particle, the computational burden increases, which could be addressed through a parallel algorithm. On the other hand, variables derived from the IMU algorithm, such as velocity, acceleration, angle, and angular velocity, could be added to the particle state; since they are equivalent to adding a constraint of uniform change, a better effect could be achieved in theory.
This positioning is based on the detection method of ArUCO beacons, and its accuracy can reach about 7 cm after optimization of the backend system [29]. At present, this algorithm is still under study.
Figure 1: The experimental area in the entrance hall.
Figure 3: Ranging test in LOS condition.
Figure 4: Ranging results in LOS condition.
Figure 8: Ranging results with column blockage.
Figure 16: Experiment with a laptop in the hand.
Figure 21: The UWB ranges in Route 1 under the interference.
Table 1: Ranging errors in LOS condition based on a progressive increase of 0.8 m in distance.
Table 3: Ranging results based on a progressive increase of 0.8 m in distance with pedestrian blockage.
Table 5: Error comparison and analysis based on the five algorithms.
Table 6: Error comparison and analysis based on the three approaches.
"Computer Science"
] |
The economic costs of a multisectoral nutrition programme implemented through a credit platform in Bangladesh
Abstract Bangladesh struggles with undernutrition in women and young children. Nutrition-sensitive agriculture programmes can help address rural undernutrition. However, questions remain on the costs of multisectoral programmes. This study estimates the economic costs of the Targeting and Re-aligning Agriculture to Improve Nutrition (TRAIN) programme, which integrated nutrition behaviour change and agricultural extension with a credit platform to support women's income generation. We used the Strengthening Economic Evaluation for Multisectoral Strategies for Nutrition (SEEMS-Nutrition) approach. The approach aligns costs with a multisectoral nutrition typology, identifying inputs and costs along programme impact pathways. We measure and allocate costs for activities and inputs, combining expenditures and micro-costing. Quantitative and qualitative data were collected retrospectively from implementers and beneficiaries. Expenditure data and economic costs were combined to calculate incremental economic costs. The intervention was designed around a randomised control trial, and incremental costs are presented by treatment arm. The total incremental cost was USD$795,040.34 for a 3.5-year period. The annual incremental costs per household were USD$65.37 (Arm 2), USD$114.15 (Arm 3) and USD$157.11 (Arm 4). Total costs were led by nutrition counselling (37%), agriculture extension (12%), supervision (12%), training (12%), monitoring and evaluation (9%) and community events (5%). Total input costs were led by personnel (68%), travel (12%) and supplies (7%). This study presents the total incremental costs of an agriculture-nutrition intervention implemented through a microcredit platform. Costs per household compare favourably with similar interventions. Our results illustrate the value of a standardised costing approach for comparison with other multisectoral nutrition interventions.
BACKGROUND
Child and maternal malnutrition is a persistent problem in Bangladesh. The prevalence of stunting among children under 5 is 31%, with 9% severely stunted and 2% severely wasted. Further, children in rural areas of the country are more likely to be stunted than their counterparts in urban areas. In Bangladesh, 24% of ever-married women aged 15-19 years are undernourished, and rural women are more likely to suffer from undernourishment than urban women. At the same time, the proportion of overweight women has increased to 32%, highlighting the problem of both under- and overnutrition in the country (NIPORT & ICF, 2020).
Nutrition-specific interventions that address the immediate causes of malnutrition have long been used to target undernutrition.
Nutrition-sensitive interventions, which address intermediate or underlying causes of malnutrition and target multiple outcomes, have also shown promise. Nutrition-sensitive agriculture (NSA) programmes, in particular, have the potential to accelerate progress in addressing child malnutrition (Ruel & Alderman, 2013). Effective NSA programmes often combine behaviour change communication (BCC) with the production of nutrient-rich foods, increasing dietary diversity and improving caregiver knowledge and infant and young child feeding practices in poor households (Keats et al., 2021).
Engaging women in agriculture and nutrition can increase their decision-making power and control over assets, albeit with potential trade-offs in work burdens. Women's empowerment in Bangladesh has also been linked to improvements in diet diversity and reductions in child stunting (Holland & Rammohan, 2019). Growing evidence shows that well-implemented nutrition-sensitive agricultural programmes can improve maternal and child diets and household access to nutritious foods (Ruel et al., 2018). However, a key challenge for policymakers and programme implementers prioritising investment decisions hinges on the gap in the evidence on the costs of implementing multisectoral agriculture-nutrition programmes (Ruel & Alderman, 2013; Ruel et al., 2018). This is further limited by a lack of standardised methods that makes comparisons difficult to interpret (Gyles et al., 2012; Njuguna et al., 2020; Ramponi et al., 2021).
The Targeting and Realigning Agriculture to Improve Nutrition (TRAIN) project was a multisectoral nutrition-sensitive intervention. This study presents the incremental costs of implementing this integrated agriculture-nutrition intervention through an existing BRAC microcredit platform.¹ We estimate the total incremental financial and economic costs of the TRAIN programme by implementation arm, including costs per beneficiary. We also examine cost shares by programme inputs and activities.
Financial costs represent the implementing partner's actual expenditures on goods and services purchased to deploy the intervention.
Economic costs, on the other hand, are defined as the opportunity cost of all of the resources used to produce something and can include the value of resources that may not have been paid for, such as volunteer frontline worker time or programme participant time. Once data become available from an ongoing impact evaluation, these costs will be combined with programme benefits for a full economic evaluation, including cost-benefit and cost-effectiveness analyses.
Our findings serve many purposes. A robust understanding of the costs of multisectoral nutrition strategies is critical for priority-setting and for motivating donors. This study is useful for governments and development partners to target investments in multisectoral nutrition programmes. Standardised unit cost data provides a cost benchmark for governments, donors and non-profit organisations on how to design, budget and measure the resource requirements of interventions. Lastly, this analysis contributes to an effort to build an evidence base on the costs of multisectoral nutrition programmes across different settings and platforms. This study was conducted by the Strengthening Economic Evaluation for Multisectoral Strategies for Nutrition (SEEMS-Nutrition) consortium, led by the University of Washington in partnership with IFPRI. SEEMS-Nutrition develops standardised approaches and tools to assess the costs and benefits of multisectoral nutrition programmes.
Key messages
• Nutrition-sensitive agriculture programmes can help reduce rural undernutrition, but information on their costs is lacking.
• We use a standardised approach to estimate the total incremental costs of an integrated nutrition intervention in Bangladesh to address maternal and child undernutrition.
• Costs per household compare favourably with similar interventions.
• This study provides evidence on the costs of integration to support the design and implementation of multisectoral nutrition programmes.
• Our results illustrate the value of standardising costing to facilitate comparisons with other multisectoral nutrition interventions.

¹ Microcredit is a financial service offered by microfinance programmes, which target the poor and others unable to access traditional banks.
Study sites were selected from 144 unions of 36 subdistricts in 10 districts.
TRAIN incorporated a BCC strategy for maternal and child health and nutrition into a female-focused microcredit programme promoting production diversity and income generation.
BRAC has led microcredit programmes in Bangladesh since 1974.
Dabi, a credit platform lending only to women, has coverage through 2146 local branches with more than 3.5 million borrowers. Dabi provides loans to women to increase income and production in agriculture and to promote empowerment. Dabi disburses $1.8 billion in loans annually, the majority (60%) for agriculture.² The TRAIN programme was built upon the existing Dabi microcredit platform linked to agriculture. Programme criteria dictate that one member of the household be a married female Dabi beneficiary (henceforth referred to as the 'index female') of childbearing age, between 15 and 49 years old. If there was more than one woman meeting the criteria in the household, one woman was selected randomly. Given that the last 3 months of implementation were disrupted by COVID-19 and extended until October 2020, our study covers only the pre-COVID period.

² BRAC. http://www.brac.net/sites/default/files/microfinance.pdf
METHODS
This study used a novel standardised costing approach that addresses gaps in the literature on the costs of multisectoral nutrition programmes (Margolies et al., 2021). The methodology was developed by SEEMS-Nutrition, led by the University of Washington in collaboration with IFPRI and funded by the Bill and Melinda Gates Foundation. The SEEMS-Nutrition approach provides standardised research protocols, data collection tools and guidance on allocating costs. This approach defines a set of input and activity cost category codes that are specific to multisectoral intervention components for agriculture, nutrition and gender empowerment.
Our analysis also adheres to principles outlined in the Global Health Cost Consortium Reference Case for Estimating the Costs of Global Health Services and Interventions (Vassall et al., 2017).
The cost analysis was conducted from the payer and societal perspectives. The analysis included costs incurred by BRAC, frontline workers and programme beneficiaries. Costs related to third-party external research were excluded. The SEEMS-Nutrition framework uses a four-step approach. Costs are aligned with a multisectoral nutrition typology that identifies resource use and outputs along the programme impact pathways to achieve standardised unit costs and the basis for benchmarking and economic evaluation.
Step 1 aligns the TRAIN programme to a typology of nutrition-sensitive value (NSV) chain interventions that (1) increase the supply of nutrient-rich foods, (2) increase the demand for nutrient-rich foods, and (3) promote the enabling environment for nutrition.
Step 2 maps the programme impact pathways to clearly articulate the linkages from activities to outputs and outcomes.
Step 3 identifies all activities, inputs and costs along the impact pathway.
Step 4 identifies outputs and outcomes for each activity to define the components of total and unit costs. The four-step framework estimates the direct intervention costs and opportunity costs associated with all programme activities.
Multisectoral nutrition approaches may include one or more of the intervention typologies with different components, services and outputs. Therefore, it is important to clearly define the unit cost for one or more outputs. Supporting Information: Appendix Table 1 shows the standardised process to derive the unit cost per beneficiary. Supporting Information: Appendix Tables 2 and 3 describe the standardised SEEMS-Nutrition activity and input categories and definitions. Supporting Information: Appendix Table 4 illustrates how costs were mapped to the NSV chain typology.
Data collection
The cost analysis captures the total costs of the TRAIN intervention incremental to the existing microcredit programme. Primary and secondary cost data were collected for the period October 2016 to January 2020, which included 6 months of start-up and 3 years of full implementation. The SEEMS mixed-methods approach combines financial expenditure data with micro-costing methods to identify and value resources and to allocate costs to activities and inputs. The Activity-Based-Costing-Ingredients (ABC-I) method (Kaplan & Anderson, 2004; Tan-Torres Edejer et al., 2003) for micro-costing has previously been applied to nutrition programmes to assess cost-efficiency and cost-effectiveness (Fiedler et al., 2008; Heckert et al., 2019; Margolies & Hoddinott, 2015). Costs were obtained from existing records, surveys and primary data collection with BRAC.
We leveraged secondary data from programme monitoring and reports. When available, planning and progress reports were reviewed retrospectively for randomly selected programme staff and frontline workers. Costs were disaggregated into start-up and recurrent categories. Start-up costs occurred in the first 6 months, such as planning, materials development and staff training. Recurrent costs included ongoing activities like household visits, community events and monitoring.
Primary cost data were collected in two rounds (May 2019 and February 2020) through semistructured in-depth interviews (IDIs) and focus group discussions (FGDs). IDIs and FGDs gathered data on interviewees' opportunity costs and out-of-pocket (OOP) expenses. In the first round of data collection, we organised IDIs and FGDs at a centralised location. FGDs were conducted with FOs (n = 1) and DMs (n = 1), and IDIs with BRAC staff (n = 3) and PKs (n = 3). In the second round, additional FGDs were conducted at BRAC regional offices in the Rangpur (n = 4), Dhaka (n = 2) and Khulna (n = 4) divisions with PKs, for a total of 24 participants. One FGD was conducted with FOs (n = 2) and one with DMs (n = 5). Additional IDIs were conducted with BRAC head office staff (n = 4). A local research collaborator facilitated and translated the interviews in the local language. Supporting Information: Appendix Table 7 provides further primary data collection details.
We estimated the opportunity cost of beneficiary participation with data from the process evaluation. The process evaluation included information on beneficiary time use for the index female respondent and the index husband in each household. 3 The process evaluation was conducted by IFPRI in April 2019. Programme output data were collected from the RCT baseline survey and from BRAC monitoring surveys. These included the number of beneficiaries reached by each activity. These data provided the denominator for the unit cost calculations.
Data analysis
We analysed secondary expenditure and process evaluation data and combined these with primary data on economic costs. First, we analysed process evaluation data on beneficiary time allocation and OOP expenditures for participating households using Stata 16 statistical software. Second, we obtained financial expenditure data from BRAC; these data were entered into a SEEMS-Nutrition expenditure analysis template in Microsoft Excel (Version 16), and we then mapped line-item expenditures to standardised input and activity codes using the template. Third, we used Excel to summarise and analyse micro-costing data from the qualitative interviews and focus group discussions.
Most line-item expenditures were easily mapped to standardised SEEMS activity and input codes. However, there were some exceptions: (1) BRAC personnel who contributed to multiple activities; and (2) shared capital and supply costs. For the first, we developed allocation rules for each activity using data from KIIs and FGDs. For example, using qualitative interviews we found PKs spent 80% of their time on nutrition counselling, 10% on training, 5% on planning and 5% on coordination meetings. PK salaries were allocated accordingly to those activities. Shared inputs or capital costs were allocated proportionally across the related activities.
Shared costs are described in greater detail below.
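A minimal sketch of the time-share allocation rule described above; the shares are those elicited from the PK interviews, while the function and variable names are illustrative, not part of the SEEMS-Nutrition tooling.

```python
# Activity shares for PK frontline workers, from the qualitative time-use interviews.
TIME_SHARES = {
    "nutrition_counselling": 0.80,
    "training": 0.10,
    "planning": 0.05,
    "coordination_meetings": 0.05,
}

def allocate(line_item_cost: float, shares: dict[str, float]) -> dict[str, float]:
    """Split one personnel line item across activities in proportion to time spent."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {activity: line_item_cost * s for activity, s in shares.items()}

# e.g. allocate(total_pk_salaries, TIME_SHARES)
```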
Personnel costs
First, all nonshared financial and economic personnel costs were allocated across programme activities. We combined expenditure and time allocation data from BRAC staff and frontline workers (IDIs and FGDs). This information was used to allocate personnel costs to programme activities (Supporting Information: Appendix Table 2). Since the intervention was built upon ongoing activities, only one national staff member, a Senior Sector Specialist, was assigned to the programme full-time; we interviewed this person to obtain information on their time allocation to programme activities. At the subnational level, BRAC staff were assigned part-time.

³ The index husband is the spouse of the primary female respondent.
Economic costs of frontline workers
Estimates of frontline worker (PK) costs include personnel costs from BRAC expenditures, combined with estimates of OOP costs and the valuation of time above the contracted 36-hour week. Economic costs such as OOP expenses and overtime hours/travel were not reimbursed by BRAC. PK personnel costs, gleaned from both the financial expenditure and the economic cost data, were allocated based on time spent on programme activities.
Beneficiary opportunity costs
Beneficiaries participated in programme activities such as household counselling and community events. We estimated the opportunity costs of participation in TRAIN activities from process evaluation data, based on information on beneficiaries' OOP expenses and the average time per year spent on programme activities. To value beneficiaries' time, we used mean daily wage rates for male and female agricultural workers from a 2017 IFPRI survey in the implementation villages.
Once we obtained all personnel and beneficiary costs aligned to programme activities, we mapped them onto the standardised SEEMS activity categories (Supporting Information: Appendix Table 2).
Start-up and capital costs
One-time start-up costs, together with capital and equipment costs for durable goods valued over USD$100 and lasting more than 1 year, were annuitized over the implementation period using a discount rate of 3% and an expected useful life of 10 years.
Annuitization ensures that an equivalent annual cost is estimated, reflecting the value-in-use of capital items rather than the financial cost at the time of purchase (Brooker et al., 2008). Taxes for durable goods, and value-added taxes for small goods where tax was included as part of the commodity cost, were included in the financial costs; taxes were excluded from the economic costs except in the case of small supplies. Costs were adjusted for inflation and are presented in 2019 USD, using an exchange rate of USD$1 = 84.77 Bangladeshi taka (BDT).
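For reference, a minimal sketch of this annuitization, computing the equivalent annual cost of a capital item at a 3% discount rate over a 10-year useful life; the example price is hypothetical.

```python
def equivalent_annual_cost(price: float, rate: float = 0.03, life_years: int = 10) -> float:
    """Annuitize a capital purchase: equal annual payments whose present value equals the price."""
    annuity_factor = (1 - (1 + rate) ** -life_years) / rate
    return price / annuity_factor

# A hypothetical $500 durable good annuitized at 3% over 10 years costs ~$58.6 per year of use.
print(equivalent_annual_cost(500.0))
```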
Unit costs
Total incremental costs were broken down into their financial and economic components. The total incremental cost per beneficiary is defined as the total cost divided by the total number of beneficiaries.
We also present cost breakdowns by intervention typology, programme activity, input and timing (start-up and recurrent), and we estimate the annual cost per beneficiary and the annual cost per household by treatment arm. The cost profile is the share of each disaggregated cost in the total programme costs for the 3.5-year period.
For the sensitivity analysis, the parameters listed in Table 3 were varied. We used a gamma distribution for all parameters, since hours worked and costs are all nonnegative (Dodd et al., 2006). For each scenario, 5,000 simulations were run, with samples drawn from the respective parameter distributions. Supporting Information: Appendix Table 6 provides the rationale and details for each parameter varied in the sensitivity analysis.
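A minimal sketch of this simulation step, matching gamma distributions to a mean and standard deviation and drawing 5,000 joint samples; the parameter values below are illustrative, not those of Appendix Table 6.

```python
import numpy as np

rng = np.random.default_rng(42)

def gamma_draws(mean: float, sd: float, n: int = 5000) -> np.ndarray:
    """Gamma samples matched to a given mean/SD (suitable for nonnegative costs and hours)."""
    shape = (mean / sd) ** 2          # mean = shape * scale
    scale = sd ** 2 / mean            # var  = shape * scale**2
    return rng.gamma(shape, scale, size=n)

# Illustrative only: vary, e.g., frontline-worker weekly hours and an hourly cost,
# then recompute the output of interest for each of the 5,000 joint draws.
hours = gamma_draws(mean=40.0, sd=5.0)
cost_per_hour = gamma_draws(mean=1.2, sd=0.2)
annual_worker_cost = hours * 52 * cost_per_hour
print(np.percentile(annual_worker_cost, [2.5, 50, 97.5]))
```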
RESULTS
The total incremental cost of the TRAIN programme, including economic and financial costs over 3.5 years, was USD$795,040.34.
Sensitivity analyses
The tornado diagram in Figure 3 shows the impact of varying the inputs on the results.
DISCUSSION
Integrated agriculture and nutrition interventions can provide effective platforms to reach vulnerable populations (Ruel et al., 2018). Despite progress in generating evidence on the effectiveness of these programmes, important gaps remain on intervention costs.
This study offers evidence of the total incremental costs and costs per beneficiary of delivering a multisectoral agriculture-nutrition programme through a microcredit platform in Bangladesh.
Microfinance programmes have been lauded for expanding access to financial services for the poor, but have also generated mixed results (Amin et al., 2003; Banerjee et al., 2015). That said, the reach of microfinance programmes (BRAC alone reaches 126 million people) makes them a promising delivery platform for other services targeted at the vulnerable, such as agricultural extension and nutrition behaviour change communication. This study provides insights into the potential costs of such integrated programmes using a microfinance platform, including granular details of disaggregated costs: the total incremental financial and economic costs by implementation arm and the programme cost drivers by input and activity type.
For intervention unit costs, the average incremental cost per household was USD$63.10 across the treatment arms, similar to the unit costs reported for comparable nutrition-sensitive programmes (de Brauw et al., 2018).
Table 2: Summary of unit costs for the TRAIN intervention in Bangladesh (USD, 2019).
We also examine the costs of incorporating women's empowerment activities. These activities accounted for 30% of programme costs, as part of facilitating the enabling environment for nutrition. As noted, the unit cost per household increased with programme complexity: Arm 4, which had the most activities, including gender forums and men's sensitisation, had the highest unit cost per beneficiary. Women spent almost twice as much time as men on programme activities, and our sensitivity analysis emphasises the correspondingly higher range of opportunity costs for women (Table 3). This raises the concern of the time burden of programmes targeted at women.
A study in Cambodia estimated the costs over ten years of a homestead production intervention at USD$929 per household (Dragojlovic et al., 2020). In Zimbabwe, the costs per household of a programme providing community gardens for people with HIV were USD$1890 (Puett et al., 2014). A cross-country study in Ethiopia, Nigeria and India modelled costs per child reached for 12 agriculture-nutrition interventions; modelled costs ranged widely, from USD$0.58 for a media and education campaign to USD$2650 for a livestock programme (Masters et al., 2018). While these studies assess costs for different types of interventions and outputs (i.e., individual beneficiaries or households reached), the unit costs generated are critical for decision-makers to assess the affordability of multisectoral nutrition for a given intervention and country context. They also demonstrate the need for improved guidance to generate standardised cost estimates and so increase comparability and generalisability.
Incorporating frontline worker opportunity costs in the TRAIN cost analysis highlights concerns about the programme's sustainability.
Despite the travel stipend provided in the second year, frontline workers shouldered additional OOP costs, given the intensive nature of the household-level interventions. SEEMS-Nutrition is pursuing such questions in building evidence on the costs and cost-effectiveness of nutrition-sensitive interventions. This study presents the full programme costs in the pre-COVID period and is an important step toward a comprehensive economic evaluation. As noted above, a recent systematic review found that nutrition-sensitive agricultural interventions have a significant positive impact on dietary diversity among children 6-60 months old (Margolies et al., 2022).
This paper illustrates the costs of these complex interventions, providing a benchmark that can be used to assess the costs and affordability of this and similar programmes. Importantly, the methods outlined here provide a template for future cost analyses, and the results provide the groundwork for meaningful comparisons among multisectoral programmes. Lastly, the economic costs, which capture the resources of all implementing partners, government and participants, whether paid for or not, can provide insights into the sustainability of the programme; they will be combined with a forthcoming study on the programme's impact on nutrition outcomes to generate evidence on cost-effectiveness.
CONCLUSIONS
This study presents the financial and economic incremental costs of implementing an integrated agriculture-nutrition intervention through a microcredit platform in Bangladesh. Cost-per-beneficiary estimates compare favourably with those of multisectoral nutrition-sensitive interventions implemented through other platforms. These results demonstrate that a standardised approach for measuring the costs of multisectoral nutrition strategies enhances comparability and transparency, increasing the usefulness of cost data for assessing affordability and for use in evaluation, planning and policymaking.
AUTHOR CONTRIBUTIONS
Giang Thai led data collection and cost analyses and contributed to the manuscript. Amy Margolies contributed to the cost study methodology, conducted cost analyses and led the drafting of the manuscript. Aulo Gelli contributed to the cost study methodology and to the conception and design, provided technical advice, and drafted and reviewed manuscripts. Nasrin Sultana co-led data collection and reviewed the manuscript. Esther Choo contributed to the data analysis and reviewed the manuscript. Neha Kumar reviewed the manuscript. Carol Levin contributed to the cost study methodology and to the conception and design, provided technical advice, participated in data collection, and edited and reviewed the manuscript.
"Medicine",
"Economics"
] |
Average shape of longitudinal shower profiles measured at the Pierre Auger Observatory
The average profiles of cosmic ray shower development as a function of atmospheric depth are measured for the first time with the Fluorescence Detectors at the Pierre Auger Observatory. The profile shapes are well reproduced by the Gaisser-Hillas parametrization at the 1% level in a 500 g/cm² interval around the shower maximum, for cosmic rays with log(E/eV) > 17.8. The results are quantified with two shape parameters, measured as a function of energy. The average profiles carry information on the primary cosmic ray and its high-energy hadronic interactions. The shape parameters predicted by the commonly used models are compatible with the measured ones within experimental uncertainties. Those uncertainties are dominated by systematics which, at present, prevent a detailed composition analysis.
Introduction
The Fluorescence Detector (FD) of the Pierre Auger Observatory [1] has collected unprecedented statistics of high-quality data, imaging the longitudinal profile of the electromagnetic component of showers induced by ultra-high-energy cosmic rays. The integral of the shower profiles gives direct calorimetric measurements of the primary particle energy, with small corrections due to the energy carried away by muons and neutrinos [2]. The depth at which the profile maximum occurs, X_max, is the main observable for the analysis of the mass composition of primary cosmic rays.
This work follows the hybrid data reconstruction procedure and event selection developed earlier for a composition-unbiased X_max measurement [3]. The determination of each shower geometry uses the timing of the triggered FD pixels and of one station of the Surface Detector, which should be at less than 1.5 km from the shower core and have an expected trigger probability greater than 95% for proton and iron showers. The energy deposit profile is obtained from the photons detected at the telescope, taking into account the different emission mechanisms and the light absorption and scattering in the atmosphere; only events with viewing angles above 20° from the shower axis are accepted, to avoid direct Cherenkov light affecting the determination of X_max. In addition, a fiducial field of view is defined as a function of energy to ensure a uniform acceptance of most of the X_max distribution observed in the data; finally, for each shower track, at least 300 g/cm² must be observed, including the X_max position, for which the expected resolution, given the geometry, must be under 40 g/cm².
The main goal of this contribution is to present a precise measurement of the shape of the longitudinal electromagnetic profile of cosmic ray showers [4], and of the remaining information that it can carry about the interactions of primary cosmic rays.
Measuring the profile shape
Because individual events are subject to large statistical uncertainties, we construct a high-accuracy average profile that can be analysed in detail and provide information about the sample. The profile of each event is normalized to its maximum energy deposit (dE/dX)_max and centred at X_max. For each 10 g/cm² bin of X′ = X − X_max, the normalized energy deposit is averaged, taking into account the respective uncertainties of each contributing event.
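A minimal sketch of this averaging, assuming each event is supplied as sampled (X, dE/dX, σ) points; here the profile maximum is taken directly from the sampled points, whereas in the actual analysis X_max and (dE/dX)_max come from the per-event Gaisser-Hillas fit.

```python
import numpy as np

def average_profile(events, bin_width=10.0, window=(-300.0, 200.0)):
    """events: list of (X, dEdX, sigma) arrays per shower.
    Normalize each profile to its maximum, centre it on X_max, and take the
    inverse-variance weighted mean in bins of X' = X - X_max."""
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    num = np.zeros(len(edges) - 1)
    den = np.zeros(len(edges) - 1)
    for X, dEdX, sigma in events:
        imax = np.argmax(dEdX)
        Xp = X - X[imax]                 # centre on X_max
        y = dEdX / dEdX[imax]            # normalize to (dE/dX)_max
        s = sigma / dEdX[imax]
        idx = np.digitize(Xp, edges) - 1
        ok = (idx >= 0) & (idx < len(num)) & (s > 0)
        np.add.at(num, idx[ok], y[ok] / s[ok] ** 2)
        np.add.at(den, idx[ok], 1.0 / s[ok] ** 2)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, np.where(den > 0, num / np.maximum(den, 1e-300), np.nan)
```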
Figure 1 (top) shows a reconstructed average profile in data, and the fractions of light components contributing at each X′. After the geometry of each event has been obtained, the light arriving at the FD at a given time is converted to its emission point on the shower axis, taking into account the attenuation by Rayleigh scattering in air and Mie scattering on aerosols. Most of the detected photons come from isotropic fluorescence emission, but a large component of the forward Cherenkov beam, integrated along the shower axis, can reach the telescope through scattering. The later part of the profile has a lower contribution of fluorescence light (the component directly proportional to the energy deposition) and is therefore more subject to corrections for atmospheric effects and to assumptions used in the reconstruction. Figure 1 (bottom) compares the average profiles of the simulated energy deposition to those obtained after a full detector simulation and application of the same reconstruction procedure as used for data. The full chain is applied separately to proton and iron showers to show that the early part of the profile is the one that keeps information on the interactions of the different primary particles, and that this information is preserved by the reconstruction of the high-accuracy average profile.
Figure 2 shows again the reconstructed average profile in data, in the region of [−300, +200] g/cm² around the maximum, for which the detailed quantitative analysis is done. Also shown are the bin-by-bin residuals with respect to the fitted Gaisser-Hillas function [5], as an experimental test that it is a good representation of the longitudinal energy deposit profiles of showers. The Gaisser-Hillas shape is confirmed at the 1% level in each of six energy bins with log(E/eV) > 17.8. In terms of the shape parameters, the Gaisser-Hillas function can be written as [6]

\[ \frac{dE}{dX} = \left(\frac{dE}{dX}\right)_{\mathrm{max}} \left(1 + \frac{R\,X'}{L}\right)^{1/R^2} \exp\left(-\frac{X'}{LR}\right), \]

where R = √(λ/|X′₀|), L = √(|X′₀| λ) and X′₀ ≡ X₀ − X_max. L can be seen as a Gaussian width and R as an asymmetry parameter, with smaller correlations between them than for the more commonly used λ and X₀ parameters. These parameters keep their information even when applied to average profiles [7].
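A short numerical sketch of this parametrization; the L and R values are merely of the order of those measured here.

```python
import numpy as np

def gaisser_hillas_LR(Xp, L, R):
    """Normalized Gaisser-Hillas profile in the (L, R) parametrization,
    as a function of X' = X - X_max; it peaks at 1 when X' = 0."""
    z = 1.0 + R * Xp / L
    return np.where(z > 0, z ** (1.0 / R ** 2) * np.exp(-Xp / (L * R)), 0.0)

# Shape check with illustrative values (L ~ 225 g/cm^2, R ~ 0.25):
Xp = np.linspace(-300.0, 200.0, 51)
y = gaisser_hillas_LR(Xp, L=225.0, R=0.25)
assert abs(y[np.searchsorted(Xp, 0.0)] - 1.0) < 1e-12  # maximum sits at X' = 0
```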
Systematic uncertainties
Table 1 shows the maximum deviation caused by several categories of systematic uncertainties, compared to the much higher statistical accuracy. Most of the effects on the shape parameters are asymmetric and energy dependent; only the largest values are presented in the table. The small differences found between the simulated and reconstructed profiles (Fig. 1) do not change significantly the values of R and L obtained in the chosen fit region, except for the first energy bin (up to 1 EeV). A small bias correction is determined by averaging the effect over proton and iron, so as to recover the values obtained with the generated dE/dX profiles. Half of this correction is added as a systematic uncertainty, presented together with the effect of the energy scale uncertainty [8].
The FD consists of 24 telescopes viewing elevations from 2° to 32°, overlooking the 3000 km² Surface Detector (SD), which samples the particles arriving at ground with an array of stations separated by 1.5 km. One of the telescopes was excluded from the start of the analysis, since it showed large time residuals in the geometry fit. The stability of the average profile across the remaining 23 telescopes was used to check for detector effects, and the differences were accounted for as a small detector systematic uncertainty. The effect of the hybrid geometry reconstruction is studied directly by varying within their uncertainties five independent parameters representing the shower-detector plane and the shower axis. Possible variations of the average shower profiles obtained with different selections in shower zenith angle or distance to the telescope were searched for, but found to be contained within the previously derived geometric systematic uncertainties in L and R.
The determination of single-event profiles, and of their X_max and (dE/dX)_max, is clearly a fundamental piece of the analysis. In particular, the Gaisser-Hillas fit of single-event profiles uses constraints to guarantee convergence when short tracks are observed, affecting mainly low-energy events. These were derived earlier from a low-statistics sample of events with long tracks, and are expressed as λ = L·R = (61 ± 13) g/cm² and X₀ = X_max − L/R = (−121 ± 172) g/cm²; an additional constraint on the normalized integral was derived from simulations, compatible with all available models and compositions. Changing the constraints coherently by one standard deviation in each parameter results in systematic deviations of up to 3 g/cm² in L and 0.01 in R.
The fluorescence and Cherenkov yields were also varied within their uncertainties, and the reconstruction was repeated with and without multiple atmospheric scattering corrections; a larger effect comes from separating events according to the percentage of light associated with direct fluorescence: when selecting only events with more than 10% of non-direct fluorescence light (the mean value in the analysed sample), the resulting average profile is wider, with L increased by up to 2 g/cm² (while the change in R is negligible).
The atmospheric effects play an important role in the profile reconstruction, and the Pierre Auger Observatory has a large set of instruments dedicated to monitoring the atmospheric conditions [1]. The data were separated according to the seasons of the year, and the impact of cloud patches in the sky was also studied. The main effect comes, however, from the overall aerosol content and its height profile. The atmospheric aerosol attenuation, τ_A(h), is obtained hourly by firing a laser from the centre of the SD array [9,10] and comparing the observed number of photons in the FD with those observed in a clear reference night. The correlated uncertainties in the definition of the clear night and the uncorrelated uncertainties from the hourly measurements were propagated by reconstructing the full sample again; they lead to the largest systematic effect, changing L by ±5 g/cm² and R by ±0.02.
Two new observables
Figure 3 shows the measured values of L and R, compared to the predictions of three hadronic interaction models for proton and iron primaries. We conclude that the measured shower shapes are compatible with the model predictions within uncertainties, and thus that the models provide a fair description of the electromagnetic shower component measurable by the Fluorescence Detector. The predictions of the models for the L variable differ, with Sibyll 2.3c [12] predicting higher values than either QGSJetII-04 [13] or EPOS-LHC [14]. The data are compatible with both the proton and iron values of Sibyll 2.3c, while they lie above the prediction for pure iron of the other two models. On the other hand, the three models predict similar values for R, higher for iron than for proton showers, and decreasing with energy for each pure composition. In this case, the data indicate a slower energy evolution, but the present systematic uncertainty prevents an analysis in terms of primary mass composition.
Figure 4 puts together the two observables in two energy bins, highlighting the correlations of the statistical and systematic uncertainties. It also shows the predictions for all possible mass compositions (obtained by combining different fractions of H, He, N and Fe), according to each of the high-energy hadronic interaction models. The combination of the two variables allows a clearer separation of the different predictions. For example, in the energy bin log(E/eV) ∈ [18, 18.2], the profile shape parameters predicted by QGSJetII-04 for any composition lie more than one standard deviation away from the values allowed by the data, while the shape predicted by Sibyll 2.3c is fully compatible with the measurements at both energies. A measurement of the average shape with lower systematic uncertainties would provide constraints on the hadronic models, especially when combined with other composition-sensitive variables.
Summary
We present the first measurement of the shape of the average shower profiles as a function of atmospheric depth, with high statistical precision, and a check at the 1% level that these follow the Gaisser-Hillas parametrization in the region of [−300, +200] g/cm² around the shower maximum.
The analysis provides a systematic check of the shower reconstruction procedures used in the Pierre Auger Observatory, and it is concluded that the present measurement of the profile shape parameters is mostly affected by the uncertainties in the description of the height profiles of atmospheric aerosols.
The predictions of the hadronic interaction models for the longitudinal profile of the electromagnetic shower component are compatible with the measurements within the present experimental uncertainties. However, it is also shown that the two new observables (L and R) that describe the average profile shape have the potential to further discriminate between models and mass composition scenarios.
Figure 1: Average longitudinal shower profile in data (top) and simulation (bottom) for events with energies between 10^18.8 and 10^19.2 eV. Top: the measured normalized energy deposit is shown in black, and the coloured regions (detailed in the legend) represent the average fraction of direct and scattered Cherenkov light in the photons measured at the telescope aperture, computed in the individual shower reconstruction. Bottom: both generated and reconstructed profiles are shown (and their ratio in the inset plot), in blue for proton and red for iron simulated showers.
Figure 2: Measured average longitudinal shower profiles for energies between 10^18.8 and 10^19.2 eV. Data are shown together with the Gaisser-Hillas fit to the profile. The residuals of the fit are shown in the bottom inset.
Figure 3: R (left) and L (right) as a function of energy. The data are shown in black, with the vertical line representing the statistical error and the brackets the systematic uncertainty. Air showers were simulated with CORSIKA [11] using different interaction models (see legend). The predictions from simulations are shown with full lines for proton and dashed lines for iron primaries.
Figure 4: L vs R for the energy bins 10^18 to 10^18.2 eV (left) and 10^18.8 to 10^19.2 eV (right). The inner dark grey ellipse shows the fitted value for data and its statistical uncertainty, and the outer light grey area the systematic uncertainty. For each hadronic model, proton, helium, nitrogen and iron showers were simulated and average profiles were built making all possible combinations. Each point represents the value of L and R for a given model and composition combination, so the phase space spanned by each model is contained in its respective coloured area. Pure proton is, for each model, on the upper left side (high L and low R), and the transition to iron goes gradually to the lower right side.
Table 1: Breakdown of systematic uncertainties for R and L. Uncertainties are energy dependent and asymmetric, so only the largest value is reported.
"Physics"
] |
Noninvasive bunch length measurements exploiting Cherenkov diffraction radiation
We present the observation and the detailed investigation of coherent Cherenkov diffraction radiation (CChDR) in terms of its spectral-angular characteristics. Electromagnetic simulations have been performed to optimize the design of a prismatic dielectric radiator and the performance of a detection system, with the aim of providing longitudinal beam diagnostics. Successful experimental validations have been carried out at the CLEAR and CLARA facilities, based at CERN and the Daresbury Laboratory respectively. With ps to sub-ps long electron bunches, the emitted radiation spectra extend up to the THz frequency range. Bunch length measurements based on CChDR have been compared to longitudinal bunch profiles obtained using a radio-frequency deflecting cavity or coherent transition radiation (CTR). The retrieval of the temporal profile of both Gaussian and non-Gaussian bunches has also been demonstrated. The proposed detection scheme paves the way to a new kind of beam instrumentation, simple and compact, for monitoring short bunches of charged particles, particularly well-adapted to novel
accelerator technologies, such as dielectric and plasma accelerators. Finally, CChDR could be used for generating intense THz radiation pulses at the MW level in existing radiation facilities, providing broader opportunities for the user community. DOI: 10.1103/PhysRevAccelBeams.23.022802
I. INTRODUCTION
The emission of Cherenkov radiation by charged particles travelling through matter was discovered in 1934 [1,2] and, due to its fascinating properties (i.e., the emission of a large number of photons in a narrow and well-defined solid angle), has found numerous applications in many fields, including astrophysics [3] and particle detection and identification [4,5]. Recently, a first experimental study was performed to demonstrate the potential of noninvasive beam diagnostic techniques based on the detection of incoherent Cherenkov diffraction radiation (ChDR) [6]. The latter refers to the emission of Cherenkov radiation by charged particles traveling not inside, but in the vicinity of, a dielectric material. This combines the already well-known advantages of Cherenkov radiation with a noninvasive technique, allowing, as such, a breakthrough in beam instrumentation. The coherent emission of Cherenkov radiation by a bunch of charged particles was studied theoretically by Danos [7] in 1955, who anticipated the emission of high-power radiation at wavelengths similar to or longer than the bunch length. Experimental validations came later with the development of dielectric Cherenkov masers [8] that produced high output powers in the microwave regime [9][10][11] using low-energy (<1 MeV), very-high-current (>1 kA) electron beams with pulse durations of tens of nanoseconds. Similarly, using the backward emission of Cherenkov radiation in a metamaterial structure, high output powers were achieved by Hummelt [12] using a lower electron current (<100 A), proving an interesting path toward compact high-power THz sources. Coherent Cherenkov radiation from short electron bunches was also observed experimentally in 1991 by Ciocci [13] and Ohkuma [14]. Its capability to produce high output powers at millimetre or sub-millimetre wavelengths has inspired several groups to develop dielectric wakefield acceleration (DWA) [15], where recent demonstrations have shown ultrahigh accelerating gradients (i.e., >GV/m) in capillary tubes [16].
To our knowledge, and despite its great potential, coherent Cherenkov diffraction radiation (CChDR) has not yet been exploited for beam diagnostic purposes [17]; even though many groups working with short bunches have used coherent radiation in the past, the detection systems were mainly based on coherent transition radiation (CTR) [18][19][20][21][22], coherent diffraction radiation (CDR) [23], coherent Smith-Purcell radiation [24] or electro-optical sampling (EOS) [25].
The paper presents detailed experimental and theoretical investigations of CChDR spectral-angular properties.
A good consistency among electromagnetic simulations, analytic theory and experimental data obtained at two unique facilities has been achieved. The study generalizes the use of ChDR for beam diagnostics of short bunches emitting coherent radiation in a broad frequency range. The first section presents the background theory and the simulation results involved in the design of a bunch length monitor; the frequency response of the radiator and its sensitivity to beam position have been worked out. The second and third sections present the results of the experimental validation performed at two electron beam facilities, namely CLEAR at CERN and CLARA at the Daresbury Laboratory. Measurements of bunch length and bunch profile have been obtained and successfully compared to simulations. We finally conclude by highlighting the potential of such devices in existing and novel accelerators.
II. THEORETICAL BACKGROUND
Electrodynamics is ruled by Maxwell's equations, which describe the interaction between electromagnetic fields and matter. An electromagnetic field propagating inside or in the proximity of a given material will polarize the atoms located on its surface, giving rise to so-called polarization radiation (PR) [26]. Cherenkov diffraction radiation is a particular case of PR for a charged particle propagating in the vicinity of a dielectric material, as shown in Fig. 1. The charged particle is characterized by its normalized velocity β = v/c and the associated Lorentz factor γ = 1/√(1 − β²), c being the speed of light in vacuum. The radiation is emitted at the well-known Cherenkov angle θ_c = arccos(1/(β√ε)), which, for relativistic particles, mainly depends on the relative permittivity of the medium, ε. As the radiation leaves the material, the photons are refracted, following Snell's law, at an angle Θ that depends on the permittivity of the dielectric and on the orientation of the exit face of the material (see Fig. 1).
FIG. 1. Emission of Cherenkov diffraction radiation by an electron propagating at a distance ρ from the surface of a conical dielectric material.
A general expression of the exact solution of Maxwell's equations for the total radiated magnetic field H can be written as in Eq. (1), where E₀ and H₀ are the electric and magnetic fields of the virtual photons associated with the ultrarelativistic particles, ε₀ is the vacuum dielectric constant and ω the angular frequency of the radiation; a corresponding relation fixes H₀ in terms of E₀. The propagator Φ in Eq. (1) acts on the virtual photon's field at the position r₀, radiating a photon at the position r. Considering the far-field condition, the ChDR magnetic field emitted within the radiator volume V_R by a single electron can be calculated as in Eq. (4), where R is the distance of observation and k is the wave vector. The observation vector is R = R(cos ϕ sin θ, sin ϕ sin θ, cos θ), where ϕ and θ are the azimuthal and polar observation angles respectively in spherical coordinates. In such a geometry, the electric field of a virtual photon, E₀, is expressed with the radial vector defined as ρ = ρ(cos φ, sin φ, 0), and its radial and longitudinal field components involve K₀ and K₁, the modified Bessel functions of the second kind of zero and first order respectively. For the far-field ChDR radiation emitted within the radiator volume V_R by a single electron, it is possible to show [26], processing Eq. (4), that for radiator lengths L such that L ≫ L_f, where L_f is the formation length of the ChDR inside the volume V_R, the angular-spectral distribution of the radiated energy can be expressed in a closed form in which f is a Fraunhofer far-field diffraction term, depending on the radiator shape and on the emitted photon energy, dΩ = sin θ dθ dϕ is the unit solid angle of radiation collection, and the electron impact parameter, i.e., the distance between the electron and the dielectric surface, is denoted by a.
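As a concrete illustration of this geometry (a sketch; the β and ε values are illustrative):

```python
import numpy as np

def cherenkov_angle(beta: float, eps: float) -> float:
    """Cherenkov emission angle theta_c = arccos(1/(beta*sqrt(eps))) inside the dielectric."""
    return np.arccos(1.0 / (beta * np.sqrt(eps)))

def snell_exit_angle(theta_in: float, eps: float) -> float:
    """Refraction at the dielectric-vacuum exit face; angles measured from the surface normal."""
    s = np.sqrt(eps) * np.sin(theta_in)
    if s > 1.0:
        raise ValueError("total internal reflection")
    return np.arcsin(s)

# PTFE (eps ~ 2.1 in the sub-THz range) and an ultrarelativistic electron:
theta_c = cherenkov_angle(beta=0.99999, eps=2.1)
print(np.degrees(theta_c))  # ~46.4 deg, close to the 45 deg prism face used below

# Incidence on a 45 deg exit face is nearly normal, so the photons leave almost undeviated:
theta_out = snell_exit_angle(theta_c - np.pi / 4, eps=2.1)
print(np.degrees(theta_out))  # ~2 deg from the face normal
```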
III. ELECTROMAGNETIC SIMULATIONS
A detailed simulation study has been initiated in order to investigate the expected performance and limitations of a detection system based on CChDR for short bunch length monitoring.
A. Simulations with MAGIC
2D axisymmetric simulations were first performed using the MAGIC code [27], studying the emission of CChDR from a hollow conical dielectric radiator with an opening angle of 45°. For this purpose, simulations were run using a hypothetical 1.2 ns long electron bunch with a time-dependent sinusoidal current modulation at a given frequency. Figure 2 shows examples of CChDR emitted by a hollow cone with an internal radius of 0.5 mm, for an electron beam with an average current of 1 A modulated at a frequency of 25 GHz. The two cases displayed assume dielectrics with different relative permittivities: the photons leave the radiator at a different angle, 90° for ε = 5 and 45° for ε = 2.1, as shown in Figs. 2(a) and 2(b) respectively. In the second case, the Cherenkov emission angle inside the dielectric is already at 45° and the radiation leaves the material straight through. The amplitude of the magnetic field recorded at the point P, located 10 cm away from the exit face of the dielectric, is also depicted in Fig. 3 for completeness. In order to study the frequency response and the position sensitivity of such a monitor, a series of simulations was performed using a radiator made of polytetrafluoroethylene (PTFE), which has ε = 2.1 in the sub-THz region. A first series was done varying the frequency of the electron beam modulation from 6.25 to 100 GHz. For each modulation frequency, simulations were also run using hollow dielectric cones with different internal radii ranging from 0.5 to 5 mm, while considering an electron beam passing through the center of the hollow channel. For each simulation case, the output power recorded at point P was extracted. The results were then normalized to the maximum output power and plotted as a relative transfer function of the radiator, which corresponds to the spectrum ideally emitted by a single particle. Figure 4(a) shows the transfer function for different internal radii ϱ of the hollow conical dielectric, while the frequency response of the radiator is presented in Fig. 4(b). Each point in Fig. 4(b) is the average of the transfer functions obtained from the simulation of hollow cones with different internal radii. Both plots show excellent agreement between the MAGIC simulations and calculations using the analytic model of Ref. [28], based on the formulas given in the previous section. The photon intensity decreases at lower frequencies, since electromagnetic oscillations at wavelengths large compared to the physical size of the radiator cannot be generated and propagated inside the medium. The size of the radiator can thus be optimized to provide a detection system with a frequency response adapted to specific needs. For all frequencies, the emitted power decreases with increasing internal radius; however, as expected from diffraction radiation theory, the output power decreases significantly faster at higher frequencies. One can observe a reduction by a factor of 8 and 2 for frequencies of 100 GHz and 25 GHz respectively. For a given detector geometry, one can then expect a very different sensitivity to beam position offsets depending on the measured frequency. This is an interesting and crucial aspect of CChDR that should be taken into account when designing and using detectors based on it.
B. Simulations with VSim
In accelerators, charged particles are typically grouped in short bunches that emit coherent radiation in a broad frequency range. Simulations with such a short electron bunch have been made using the VSim particle-in-cell solver [29], with the beam parameters presented in Table I, similar to those available in the facilities for which we report experimental results later in the paper. The radiator, made of PTFE, has been considered to have an internal radius of 5 mm. Instead of simulating the beam as a bunch of macroparticles, the electron beam has been modeled longitudinally and transversely as a Gaussian distributed charge. This helps minimize numerical noise caused by the mesh size and its limited resolution at high frequencies [30]. Two examples are shown in Fig. 5. As shown in the previous section, the CChDR wavefront generated inside the dielectric propagates at 45° and exits the dielectric straight through. In order to calculate the coherent power spectrum produced in the radiator, the transverse electric field generated in VSim has been recorded as a function of time at one of the detector locations (see Fig. 5) and then Fourier-transformed. This procedure has been repeated for five different rms bunch lengths, from 1 to 5 ps. The reconstructed CChDR spectra are depicted in Fig. 6 for an electron bunch traveling on axis within the hollow channel of the dielectric prism, compared to the CChDR spectra calculated via the model of Ref. [28] under the same conditions.
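A minimal sketch of this record-and-Fourier-transform step (not the authors' analysis code; the sampling step and the toy field record are assumptions made for illustration):

```python
# Reconstructing a power spectrum from a time-domain field record.
import numpy as np

def power_spectrum(e_field, dt):
    """Fourier-transform a recorded transverse E-field sampled every dt seconds."""
    spectrum = np.fft.rfft(e_field)
    freqs = np.fft.rfftfreq(len(e_field), d=dt)
    return freqs, np.abs(spectrum) ** 2   # power spectral density, arbitrary units

# Toy field record mimicking a short coherent pulse (hypothetical numbers)
dt = 1e-13                                 # 0.1 ps sampling, assumed
t = np.arange(0.0, 50e-12, dt)
e_field = np.exp(-((t - 25e-12) / 2e-12) ** 2) * np.cos(2 * np.pi * 100e9 * t)
freqs, psd = power_spectrum(e_field, dt)
```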
IV. EXPERIMENTS AT CLEAR
The CERN Linear Electron Accelerator for Research (CLEAR) [32] exploits ≲1 ps, ∼200 MeV electron bunches for several applications, including the development of novel concepts for THz generation and beam diagnostics [33]. A layout of the accelerator facility and the experimental setup is presented in Fig. 7, while the beam parameters are summarized in Table II. The electron beam is generated on a Cs2Te photocathode (QE > 0.3%, lifetime 1 year) using a UV (converted from IR) laser. The rf gun is followed by three 4.5 m long, 3 GHz accelerating structures. The first structure can be used as a buncher to tune the bunch length from 0.5 ps to 10 ps rms by means of velocity bunching. An S-band rf deflector is installed just downstream of the accelerating structures for bunch length diagnostics [32]. The transverse mode propagating inside this cavity provides a time-dependent beam deflection, mapping the longitudinal distribution of the electron bunches into a transverse profile monitored by a scintillator screen positioned downstream (top-right inset of Fig. 7). When the rf deflector is off, the beam can be transported straight to the end of the beam line, where a ∼1.5 m long in-air experimental test stand is located. For the present studies, a dielectric radiator has been installed together with the associated detectors, as shown in Fig. 7 (bottom-right inset). The radiator is a 2.5 cm long hollow prism made out of PTFE with a 5 mm internal radius and a 5 cm wide base. The geometry of the detection system has been chosen in order to be sensitive to bunch length variations. Three of the four output faces of the prism are instrumented with horns and waveguides in the V, W, and G bands to collect and measure ChDR in three independent frequency bands. Each detection line is composed of a rectangular waveguide horn, band-pass filters and Schottky diodes from Millitech. The two detectors installed on the left and right sides of the dielectric measure at frequencies of 84 ± 1 GHz and 113.5 ± 9 GHz respectively. The third detector, placed in a plane 45° upward, detects radiation emitted at a frequency of 60 ± 1 GHz. The gain horns have apertures of 4 × 4 cm² and 3 × 2 cm² for the 60 and 84 GHz detection lines respectively, and 2 × 1.5 cm² for the 113.5 GHz one. The relative sensitivity of the three detection lines is taken into account when analyzing the experimental data, including the response curves of the Schottky diodes, the bandwidth of the band-pass filters and the gain of the horns.
A. Bunch length monitoring with coherent Cherenkov diffraction radiation
The temporal profile of the photocathode laser beam is typically Gaussian; nevertheless, by adjusting the amplitude and phase of the rf signal injected in the rf gun, one can modify the longitudinal shape of the electron bunch and produce Gaussian and skew-Gaussian bunch shapes. Bunch length measurements performed using CChDR have been compared to the longitudinal beam profile obtained using the rf deflecting cavity, which can resolve bunch profiles as short as ∼100 fs rms. A typical measurement with the rf deflector is based on the average of ten consecutive shots in order to minimize statistical fluctuations.
Alignment procedure of the beam inside the prism
Both the ChDR prism and the Schottky diode detectors are mounted on a remotely controlled, adjustable optical table driven by stepping motors that allow the experimental setup to be displaced independently in the horizontal and vertical planes. Initially, the center of the hollow prism was precisely positioned on the beam axis using laser alignment. A 1.5 ps long electron bunch has been sent through the center of the radiator, focused down to ≲1 mm rms transverse beam size, and kept stable in the same configuration during the measurements. In order to align the beam in the center of the ChDR detector, the whole experimental setup has then been moved horizontally in steps of ∼200 μm, the corresponding radiation power being recorded using the two horizontal detectors. The experimental results are presented in Fig. 8, which shows the evolution (as a function of the stepper motor position) of the reconstructed beam position signal, calculated as the absolute value of the difference over the sum, Δ/Σ, of the signals coming from the two horizontal detectors. With a transverse beam size of 1 mm, the scan in position cannot be performed over a range larger than ±2.5 mm, which is half of the whole aperture. For larger position offsets, the direct emission of Cherenkov radiation, produced by transverse beam tails passing through the dielectric itself, distorts the measurements. Simulation results using VSim are also presented on the same plot for comparison. The electric field obtained by simulations is recorded at the two locations, left and right of the prism output surface, as presented in Fig. 5. For each beam position offset, the time-dependent fields are first squared and integrated in order to obtain the expected CChDR power from both sides of the radiator. The simulation results are also plotted as the absolute value of the difference over the sum of the two corresponding output powers, similarly to what is done experimentally. The comparison between experiments and simulations clearly shows a very similar trend. The present configuration has provided a practical way to center the beam inside the prism, which has been a prerequisite for the reliable bunch length measurements presented in the following.
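As a minimal illustration of this observable (the detector powers below are made up, not the measured data):

```python
# Normalized beam-position observable |Delta/Sigma| from two CChDR powers.
import numpy as np

def delta_over_sigma(p_left, p_right):
    """Absolute difference-over-sum of the left/right detector powers."""
    return np.abs(p_left - p_right) / (p_left + p_right)

# Example powers recorded during a horizontal scan (arbitrary units)
p_left = np.array([0.80, 0.70, 0.55, 0.42, 0.30])
p_right = np.array([0.30, 0.42, 0.55, 0.70, 0.80])
print(delta_over_sigma(p_left, p_right))   # largest at the scan edges, zero on axis
```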
According to the transfer function in Fig. 4(b) and to the frequencies adopted for the experiment, the relative error made with the statement f(V_R, ω_1) = f(V_R, ω_2) = f(0) is ∼10%. The intensity ratio S_1/S_2, corresponding to the signals S_1 = S_1(ω_1) and S_2 = S_2(ω_2) measured at ω_1 and ω_2, each of them proportional to the quantity defined above as d²I/dΩdω, can be written via Eq. (10); the rms bunch length σ_τ can thus be obtained as a function of S_1 and S_2. The difference between the detection bandwidths Δω_1 and Δω_2 is not contained within Eq. (12); therefore it has been taken into account as a systematic error on the bunch length measurement, dσ_τ = (dσ_τ/dω_1)Δω_1 + (dσ_τ/dω_2)Δω_2. Figure 9 presents a comparison between rf-deflector and CChDR measurements as a function of the rf phase of the electron gun. In this case, Gaussian bunches with a 100 pC charge have been used. The agreement is excellent for short bunches, and the larger error visible on the CChDR measurements for longer bunches is due to the weakness of the corresponding power emitted in the detection frequency bands (i.e., 84 GHz and 113.5 GHz) for such bunch lengths. This could be overcome by choosing detectors working at lower frequencies. Figure 9(a) also includes a comparison with electromagnetic simulations performed with VSim. From the simulated bunch power spectrum shown in Fig. 6, we computed and plotted the simulated power ratio S(84 GHz)/S(113.5 GHz). The agreement is within 5%, and it clearly highlights the validity of our approach. The comparison between the longitudinal profile measured with the rf deflector and the one reconstructed from CChDR is shown in Fig. 9(b) for completeness, compared to ASTRA simulations [34].
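For a Gaussian bunch with form factor |F(ω)|² = exp(−ω²σ_τ²), and under the stated approximation on the single-particle spectra, the ratio and the extracted bunch length would read as follows (a reconstruction consistent with the surrounding text, not necessarily the paper's verbatim Eqs. (11) and (12)):

\[ \frac{S_1}{S_2} = \exp\!\left[\sigma_\tau^2\left(\omega_2^2-\omega_1^2\right)\right], \qquad \sigma_\tau = \sqrt{\frac{\ln(S_1/S_2)}{\omega_2^2-\omega_1^2}}. \]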
Measuring skew-Gaussian electron bunches
Skew-Gaussian bunch profiles have also been produced at CLEAR by exploiting an energy-time correlation along the temporal profile of the electron bunch, adjusting the phase of the rf gun. This effect depends strongly on the bunch charge and has been studied in depth using ASTRA simulations [34]. Negligible for bunch charges below 200-300 pC, it becomes important for charges around 400 pC, as the one used for the experiment shown in this section. At even higher charges, space-charge effects would produce super-Gaussian longitudinal profiles. In order to detect skew-Gaussian bunch profiles, a different model, reported in the Appendix, must be used. It extracts both the bunch length and the skewness parameter, σ_τ and α respectively. An additional measurement point at a different frequency is however necessary in order to extract both parameters. Experimentally, data from the third diode, S_3, working at 60 ± 1 GHz have been taken into account by numerically solving the corresponding system of equations. The expression of the bunch form factor for the skew-Gaussian case is calculated explicitly in the Appendix. A comparison between measurements and simulations performed with ASTRA is presented in Fig. 10, showing a good agreement among the methods.
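A minimal numerical sketch of this three-signal extraction, assuming the skew-Gaussian form factor derived in the Appendix (the signal values and starting guesses below are made up; they correspond to σ_τ ≈ 1.5 ps and α ≈ 2 under this model):

```python
# Solving for (sigma_tau, alpha) from three CChDR diode signals.
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erfi

def form_factor_sq(omega, sigma, alpha):
    """|j(omega)|^2 (up to a constant) for a skew-Gaussian bunch."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    return np.exp(-(omega * sigma)**2) * (1.0 + erfi(delta * omega * sigma / np.sqrt(2.0))**2)

# Angular detection frequencies in rad/ps (84, 113.5 and 60 GHz)
w1, w2, w3 = [2.0 * np.pi * f * 1e-3 for f in (84.0, 113.5, 60.0)]
S1, S2, S3 = 0.74, 0.57, 0.86            # example diode signals, arbitrary units

def equations(p):
    sigma, alpha = p                      # sigma in ps
    return (S1 / S3 - form_factor_sq(w1, sigma, alpha) / form_factor_sq(w3, sigma, alpha),
            S2 / S3 - form_factor_sq(w2, sigma, alpha) / form_factor_sq(w3, sigma, alpha))

sigma_tau, alpha = fsolve(equations, x0=(1.0, 1.0))
print(f"sigma_tau ~ {sigma_tau:.2f} ps, alpha ~ {alpha:.2f}")
```

Working with ratios to the third signal removes the unknown common normalization, so only the two shape parameters remain to be solved for.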
V. EXPERIMENTS AT CLARA
The experimental work presented in this section has been conducted at the CLARA/VELA facility at Daresbury Laboratory (STFC) [35]. The CLARA photoinjector generates electron beams with an energy of 35 MeV at a 10 Hz bunch repetition rate, with charges ranging from 70 to 100 pC. The beams are then injected into the VELA beam line, where the bunches can be compressed from 2 ps to 0.2 ps rms by varying the phase of an accelerating cavity, similarly to what is done at CLEAR. The beams are then delivered to the so-called BA1 experimental zone, where a vacuum chamber can host devices under test.
A. Experimental setup
For the CChDR measurements this chamber has been equipped with a remotely controlled manipulation system and a suite of beam diagnostics, including an energy spectrometer, YAG screens, and beam position monitors. In the final setup, shown in Fig. 11, a CChDR and a CTR radiator have been installed in the experimental vacuum chamber with the aim of comparing their radiation spectra and their capability to measure short bunch lengths. The CChDR radiator was a triangular prism made of PTFE with a 5 cm base and a 4 cm width, while the CTR screen was a 2 cm radius metallic (polished steel) screen, 2 mm thick. Both radiators have been installed on individual holders, each of them mounted on a movable platform with three degrees of freedom. When using CTR, the beam has been steered through the center of the screen, while in the case of CChDR, the distance between the beam and the surface of the dielectric has typically been adjusted between 0.1 and 2 mm. The generated radiation has been reflected by an in-vacuum concave mirror with a focal length of 101.6 mm, installed on a rotating platform. The distance between the mirror and the radiators has been set equal to the focal length in such a way that the radiation was parallel over long distances (see Fig. 11). The rotating platform has been mounted on a translational stage, enabling adjustment of the position of the concave mirror when moving the CChDR radiator. The radiation has then been extracted through a quartz window and analyzed using a Martin-Puplett interferometer [36] by means of a system of motorized mirrors. The Martin-Puplett interferometer is one of the most efficient instruments to measure radiation spectra in the (sub-)THz frequency range. Due to its design with a polarizing beam splitter, it provides the ability to eliminate the influence of internal interference and the effect of charge fluctuations in the accelerator, using two detectors (PD1 and PD2 in Fig. 11). The signals obtained with these two detectors are anticorrelated relative to each other. Therefore, dividing the signal difference by the sum yields a normalized interferogram δ(x), where the variable x is the difference in length between the two arms of the interferometer. This procedure has required a fine alignment of all the elements of the interferometer. The alignment has been performed using a test THz source and a Terasense camera operating from 10 GHz to 1 THz [37].
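Denoting the two anticorrelated detector signals by U_1(x) and U_2(x), the normalized interferogram presumably takes the standard difference-over-sum form:

\[ \delta(x) = \frac{U_1(x) - U_2(x)}{U_1(x) + U_2(x)}. \]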
B. Bunch length reconstruction with coherent Cherenkov diffraction radiation
Interferograms of CChDR and CTR registered at a beam energy of 35 MeV, a beam charge of 70 pC, and an impact parameter of 1 mm are shown in Figs. 12(a) and 12(b). The data acquisition system has been synchronized with the accelerator rf system. Fifty samples from each pyroelectric detector have been recorded for each step of the scanning interferometer mirror to minimize the statistical uncertainty. Interferograms are shown with a triangular apodization window [38]. No other postprocessing techniques, including zero-padding, have been used. Figures 12(c) and 12(d) show the result of the Fourier transform of δ(x) and the corresponding single electron spectra. For the single electron spectra, the models described in [28] for CChDR and [31] for CTR have been used. In particular, the CChDR radiation model takes into account the beam energy, radiator geometry, impact parameter, relative position between collecting optics and radiator, angle between radiator and beam axis, and radiator refractive index. At lower frequencies both spectra decrease in intensity due to the finite radiator dimensions and diffraction effects in the radiation delivery system. At higher frequencies the spectra are dominated by the electron bunch length and longitudinal profile modulations. The longitudinal bunch form factor has been derived by normalizing the radiation spectra by the single electron spectra. Figure 12(e) illustrates the comparison between the bunch form factors extracted from the CTR and CChDR spectra. For the data analysis we have chosen the central part of the form factor, from 300 to 800 GHz. The low frequency part (<300 GHz) is distorted by the diffraction effects caused by the concave mirror, the output quartz viewport, the finite apertures of the interferometer, the motorized mirrors and the detector. These effects are not taken into account in the single electron spectrum and therefore cannot be used for the analysis. At higher frequencies the coherent radiation is significantly suppressed by the bunch length. In this case the apparatus noise generates artifacts in the analysis which are significantly different for CChDR and CTR. Therefore both curves have been extrapolated to lower and higher frequencies from the same points by a Gaussian and an exponential curve, following the prescriptions in [39]. At the connection points the curve value and its first derivative were set to be equal. The extrapolated curves are shown in Fig. 12(e), outside the region limited by the black lines placed at the connection points. The reconstruction of the bunch temporal profile by just an inverse Fourier transform is not possible because only the amplitude of j(ω) can be measured in an experiment, but not its phase. According to [40], the amplitude |j(ω)| and the phase ψ(ω) = arg(j(ω)) are related by the Kramers-Kronig relation in such a way that, if the function |j(ω)| is measured at all frequencies, the phase ψ(ω) and then the longitudinal bunch profile can be derived. For the analysis the integrals have been replaced by fast Fourier transform (FFT) algorithms. The frequency range has been chosen from zero to the point beyond which the phase does not make any difference. Figure 12(f) shows the bunch profile reconstruction taking into account the phase.
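In the minimal-phase formulation commonly used for this reconstruction (written here for reference in a form consistent with Ref. [40], not verbatim from the paper), the phase and the longitudinal profile read:

\[ \psi(\omega) = -\frac{2\omega}{\pi}\,\mathrm{P}\!\int_0^{\infty} \frac{\ln\!\left(|j(x)|/|j(\omega)|\right)}{x^2-\omega^2}\,dx, \qquad S(t) = \frac{1}{\pi}\int_0^{\infty} |j(\omega)|\cos\!\left(\psi(\omega)-\omega t\right) d\omega. \]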
The full width at half maximum (FWHM) has been measured to be 686 ± 14 fs with CChDR and 640 ± 12 fs with CTR. This result is in agreement with the expected value of the bunch length for this machine. Despite the marked difference between the CChDR and CTR spectra, due to the differences in the single electron spectra, the longitudinal form factors look very similar after normalization. The reconstructed bunch profiles shown in Fig. 12(f) demonstrate a good consistency of the cores of the distributions. However, the tails are different; they are formed by instabilities in the bunch compression configuration and might vary shot-by-shot. To measure the tails precisely, a single-shot spectrometer system is needed. The current setup has enabled us only to diagnose the stable part of the beam. Typical bunch length measurements, including interferometry, FFT, single electron spectrum normalization, and Kramers-Kronig reconstruction, have taken approximately fifteen minutes. Nevertheless, this time can be reduced down to one minute by using detectors with a higher signal-to-noise ratio and an automatic software-based procedure.
VI. CONCLUSIONS
We have presented a detailed theoretical and experimental investigation of the spectral-angular characteristics of coherent Cherenkov diffraction radiation (CChDR). We have demonstrated how electromagnetic simulations match the analytical theory. The experimental verification performed at two independent electron beam accelerator test facilities, namely CLEAR at CERN and CLARA at Daresbury Laboratory, is in excellent agreement with theoretical expectations, confirming its key unique properties, e.g., high directionality, intense photon yield and noninvasive nature. The feasibility of the CChDR phenomenon as a tool for non-invasive short bunch diagnostics has been investigated. Supported by electromagnetic simulations, we have demonstrated that the bunch length can be retrieved using a simple and cost-effective technical solution. Bunches in the sub-ps to ps range have been measured, successfully benchmarking this new detection scheme against more standard but invasive techniques, e.g., rf deflecting cavities or coherent transition radiation. The simulations made in VSim have shown that the wakefield pattern is such that there is no spatial or temporal overlap with the trailing bunches, considering the time structure of the electron pulses at both CLEAR and CLARA. Future tests will be conducted for radiators with larger apertures, more interesting for usual beam instrumentation, for which the production of and the interaction with the wakefields is even more negligible. To the best knowledge of the authors, this is the first detailed demonstration of the use of CChDR as a noninvasive beam diagnostic. Due to the lack of a fundamental resolution limit in bunch length measurements with coherent radiation techniques, CChDR could be extended to extremely short bunch length facilities, e.g., free electron lasers and plasma or dielectric accelerators, where coherent radiation is emitted in the near infrared or visible range. In this case suitable materials should be selected to avoid possible dispersion of the ChDR propagating within the radiator. The detection system could then be directly based on optical fibers, reaching very small dimensions. Typical peak powers reach values around the MW level, providing large and easily detectable output signals; this is also interesting for THz generation in existing and future large-scale radiation facilities, offering additional research opportunities for users. This opens the possibility of further optimization, reducing the sensor size and thus providing compact, non-invasive instruments. Undoubtedly, the use of CChDR for generating high peak power at THz frequencies requires additional investigation and optimization, in particular of the radiator design. This, as well as dedicated studies on the use of CChDR for beam position monitoring with a design more interesting in terms of accelerator technology, will be the topic of upcoming papers.
ACKNOWLEDGMENTS
K. L.'s contribution was supported by the Competitiveness Programme of National Research Nuclear University "MEPhI (Moscow Engineering Physics Institute)."
APPENDIX: FOURIER TRANSFORM OF A SKEW-GAUSSIAN DISTRIBUTION
In this work we define the skew-Gaussian distribution with the parameter α giving a measure of the skewness of the distribution (Fig. 13). It is also worth noting that the sign of α determines whether the skewness is along the rising or the falling edge of the distribution. From now on we consider a skew-Gaussian electron current density j(t), where j_0 is the peak current density. We finally recall that the spectrum of coherent radiation dI/dω emitted by this electron bunch is ∝ |j(ω; σ_τ, α)|², where j(ω; σ_τ, α) is the Fourier transform of the electron current density. In order to get an explicit expression for j(ω; σ_τ, α), we solve the corresponding integral. First we solve the trivial part of the integral; then we move to the second part, involving the erf function. The first step is to combine the terms in the arguments of the exponential functions in Eq. (A3) in such a way as to obtain a single expression. We then differentiate the integral expression in Eq. (A5) with respect to ω, and the resulting expression (A6) can be integrated by parts.
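For reference, the standard skew-Gaussian (skew-normal) density and the closed form its Fourier transform is known to take are (a reconstruction from standard results, not the paper's verbatim equations):

\[ f(t) = \frac{1}{\sqrt{2\pi}\,\sigma_\tau}\exp\!\left(-\frac{t^2}{2\sigma_\tau^2}\right)\left[1 + \mathrm{erf}\!\left(\frac{\alpha t}{\sqrt{2}\,\sigma_\tau}\right)\right], \]
\[ j(\omega;\sigma_\tau,\alpha) \propto e^{-\omega^2\sigma_\tau^2/2}\left[1 + i\,\mathrm{erfi}\!\left(\frac{\delta\,\omega\sigma_\tau}{\sqrt{2}}\right)\right], \qquad \delta = \frac{\alpha}{\sqrt{1+\alpha^2}}, \]

so that \( |j|^2 \propto e^{-\omega^2\sigma_\tau^2}\left[1 + \mathrm{erfi}^2\!\left(\delta\,\omega\sigma_\tau/\sqrt{2}\right)\right] \), which reduces to the Gaussian case for α = 0.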
"Physics"
] |
Crystallization of FCC and BCC Liquid Metals Studied by Molecular Dynamics Simulation
The atomic structure variations on cooling, vitrification and crystallization processes in liquid face-centered cubic (FCC) Cu are simulated in the present work in comparison with body-centered cubic (BCC) Fe. The process is studied on continuous cooling and isothermal annealing using a classical molecular-dynamics computer simulation procedure with an embedded-atom method potential at constant pressure. The structural changes are monitored with direct structure observation in simulation cells containing from about 100 k to 1 M atoms. The crystallization process is analyzed under isothermal conditions by monitoring density and energy variation as a function of time. A common-neighbor cluster analysis is performed. The results of thermodynamic calculations estimating the energy barrier for crystal nucleation and the critical nucleus size are compared with those obtained from the simulation. The differences in crystallization of an FCC and a BCC metal are discussed.
Introduction
Crystallization of metals and alloys from the melt (liquid state) takes place by nucleation and growth [1]. This process has occupied the minds of scientists for quite some time owing to still unresolved issues, especially related to the high nucleation and growth rates of pure substances. Pure metals are known to exhibit a high crystal nucleation rate at a low enough temperature [1] and high growth rates in a wide temperature range [2]. Crystallization of different substances [3,4], including liquid metals [5] and alloys [6], has been studied by molecular-dynamics (MD) simulation with various potentials. Furthermore, MD simulation of the glass transition of pure metals, such as Fe [7,8], Ni [9,10], Cu [11], Al [12] and other metals [13], has been performed. In a recent work possible heterogeneous nucleation of Fe was studied at 0.67T_m and 0.58T_m (T_m is the melting point, which in pure metals is equal to the liquidus temperature T_l) [14]. However, one should mention that the T_m of 2400 K given by the potential used is significantly overestimated compared to the experimental value. Recent embedded-atom potentials provide a better correspondence to the experimental values. The crystallization behavior of other metals was also studied via MD simulation [15] and compared with the experimental data [16]. The energy barriers for crystallization of pure metals were also compared with those obtained from computer simulation [17]. In the present work the atomic structure variation, vitrification and crystallization processes in liquid metals (FCC Cu and BCC Fe) are simulated on continuous cooling and isothermal annealing using a classical MD computer simulation procedure with an embedded-atom method potential at constant pressure.
Computational Procedure
Molecular dynamics (MD) computer simulation using a software package for classical molecular dynamics (LAMMPS) [18] was used to model the crystallization process of Fe and Cu at periodic boundary conditions. The simulation was performed with a 1 fs time step using the embedded-atom potentials for Cu [19] and Fe [20] at nearly constant temperature and pressure. In order to let crystallization occur, the cubic cell size was typically chosen to be about 10 nm. A typical crystalline cell containing 108,000 atoms for the FCC and 128,000 atoms for the BCC structure was heated to 2500 K to melt and then cooled down to a certain temperature. In some cases, larger cells were used, as indicated. Melting is confirmed by the radial distribution function and stabilization of the density variation with time. Cooling was performed to different temperatures at about 5 × 10¹³ K/s. The temperature during simulation was maintained to within ±6 K (Figure 1) while the pressure was kept around zero. Neither Fe nor Cu evaporated in the molten state, as should have happened at zero pressure, likely because there was not enough time for a gaseous phase to nucleate inside the atomic cell (a gaseous phase, like a crystalline one, has to overcome an energy barrier to form) and because of the absence of surfaces owing to the periodic boundary conditions. A thermostat was used to control the temperature [21,22] while the pressure was maintained by a barostat [23]. The software package OVITO [24] was used to visualize and analyze the simulation results. Adaptive Common Neighbor Analysis (CNA) [25] was used to analyze structural changes in Fe and Cu.
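As an illustration of this post-processing chain, a minimal sketch using OVITO's scriptable Python interface is given below; the dump file name and the printed BCC fraction are assumptions for illustration, not the paper's actual workflow:

```python
# Adaptive CNA over a trajectory of LAMMPS dump files (hypothetical file names).
from ovito.io import import_file
from ovito.modifiers import CommonNeighborAnalysisModifier

pipeline = import_file("dump.fe_quench.*.lammpstrj")       # assumed dump series
cna = CommonNeighborAnalysisModifier(
    mode=CommonNeighborAnalysisModifier.Mode.AdaptiveCutoff)  # adaptive CNA, as in [25]
pipeline.modifiers.append(cna)

for frame in range(pipeline.source.num_frames):
    data = pipeline.compute(frame)
    n_bcc = data.attributes["CommonNeighborAnalysis.counts.BCC"]
    print(frame, n_bcc / data.particles.count)  # crystalline (BCC) fraction vs. time
```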
Results and Discussion
The density changes on heating and cooling as a function of temperature are illustrated in Figure 2a. The equilibrium liquid state was obtained at 2500 K in less than 1 ps. The holding time was also varied from 1 to 100 ps without visible changes in the liquid structure and density. Thus, the atomic cells have been kept for at least 10 ps at 2500 K prior to cooling.
Vitrification of both Fe and Cu liquids was attained at a cooling rate of about 5 × 10¹³ K/s (Figure 2). At cooling rates of 5 × 10¹² K/s and lower, the studied metals start to crystallize directly on cooling. Similarly high critical cooling rate values, indicating the high instability of the supercooled liquid, were obtained in other works [13,26,27]. However, theoretical calculations [28] and experimental results [29] suggest a lower critical cooling rate (of about 10⁹ K/s) for the glass transition in pure metals, especially those with a BCC lattice. The difference could be connected with the type of potentials used.
The values of the glass-transition temperature (T_g), defined by the change in slope of the density as a function of temperature, are about 1100 K for Fe and 800 K for Cu (Figure 2a). Of course, these values correspond to the very high cooling rate used, when the viscosity is still low. However, the value for Fe corresponds quite well to the isochoric conditions when the volume of the liquid becomes similar to that of the crystalline phase [30]. Two typical pair distribution functions (PDF) for Fe are shown in Figure 2b. The values of T_g derived from PDF_min [31] and from the ratio PDF_min/PDF_max [32] are shown in Figure 2c. These values are about 100 K lower than those obtained from Figure 2a. As one can see, Fe, having a BCC lattice at room temperature, has a much lower PDF_min value compared to Cu, which has an FCC lattice. This resembles the crystalline structure: there is a deeper gap between the coordination shells in a BCC lattice. Splitting of the first and second pair distribution function (PDF(R)) maxima into two peaks is clearly observed in Figure 2b. It indicates short- and medium-range ordering of the liquid and glassy phases on cooling and that glassy Cu and Fe inherit the short-range order of the corresponding crystals.
Phase transformation kinetics for Fe and Cu, monitored via the density changes, are shown in Figures 3 and 4, respectively. The energy of the system changes in the opposite way to the density. Please note that the final product density is different from case to case owing to different fractions of internal defects. Full system equilibration upon crystallization requires a much longer time. While at 1100 K and above the density, volume and energy values are visibly stable within the incubation period (relaxation proceeds in 10 ps for liquid Fe at 1100 K) (Figure 3a), these values gradually change at lower temperatures owing to structural relaxation at low atomic mobility. The structure relaxation time becomes longer than the simulation time below 1100 K for Fe, which corresponds quite well to the equivolume/isochoric glass-transition temperature determined before [30]. It also takes place at about 800 K for Cu, when the density and energy of the system gradually change with time prior to crystallization. Both BCC and FCC crystals nucleate directly from the melt and no two-stage nucleation [33] is found.
As can be found for pure Fe in Figure 3, the incubation time (defined as the intersection of two tangents to the plot before and after the onset time) changes irregularly with the atomic cell size owing to the stochastic character of nucleation. In a cell containing 432,000 atoms, two overcritical nuclei nucleated at about 120 ps. In a cell containing 1,024,000 atoms, the first nucleus was formed at 150 ps, the second at 170 ps, the third at 210 ps, the fourth at 230 ps and the fifth at 240 ps. If one assumes a total nucleation time of 90 ps for four nuclei per cell of 23 nm size, then the average homogeneous nucleation rate is 3.7 × 10³³ m⁻³·s⁻¹. It is a high value but quite close to that reported in an earlier work [8]. The growing nanocrystals have an irregular but nearly spherical shape. In a cell containing 128,000 atoms, the first overcritical, stably growing nucleus was formed at 280 ps and consumed the entire volume at about 410 ps. Thus, at these temperatures (1100 and 900 K, which are 0.61 T_l and 0.50 T_l, respectively, with the liquidus temperature T_l = 1811 K) crystallization of Fe is hardly statistically predictable, and it is not so sensitive to the cell size if its length exceeds about 10 nm. Cu at 750 K (which is 0.55 T_l) has an even higher homogeneous nucleation rate of 8 × 10³⁴ m⁻³·s⁻¹. At homological temperatures higher than 0.65 T_l, crystallization of Cu and Fe is not detectable within the maximum simulation time of 3 ns.
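The quoted rate follows directly from counting nuclei per cell volume and nucleation time; as a worked check of the numbers above:

\[ J = \frac{N}{V\,\Delta t} = \frac{4}{\left(23\times 10^{-9}\,\mathrm{m}\right)^3 \times 90\times 10^{-12}\,\mathrm{s}} \approx 3.7\times 10^{33}\ \mathrm{m^{-3}\,s^{-1}}. \]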
Owing to the rather high nucleation rates, the isothermal crystallization curves can be statistically well reproduced below 0.55 T_l for Cu (Figure 4b) and below 0.4 T_l for Fe for the chosen atomic cells. The average incubation time is about 2 times longer than that of pure Ni studied earlier [34]. From this viewpoint Ni is a special liquid which is the least stable against crystallization among these metals. Furthermore, in spite of the fact that the nucleation barrier is quite high in pure Ni, the nucleation rate is also high, attributed to the fast atom attachment kinetics [35].
From the isothermal crystallization plots, the beginning (also called the incubation time) and the finish time of transformation can be calculated using two corresponding tangents to the plot before and after the inflection point. Note that the actual transformation time is longer because the two tails of the S-curve are not taken into consideration when two tangents are applied. However, it becomes more difficult to detect the incubation period for each plot below 700 K owing to the long relaxation time manifested in a constant variation of the baseline. Nevertheless, a time-temperature-transformation (TTT) diagram was constructed, and it is shown in Figure 5. The simulation results are well reproduced (owing to the high nucleation rate) below 0.55 T_l for Cu and below 0.4 T_l for Fe. The nose of the TTT diagram is at about 0.55 T_l for Fe and about 0.6 T_l for Cu.
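A minimal sketch of this tangent-intersection construction on a synthetic density trace (all numbers below are made up for illustration):

```python
# Incubation time as the crossing of tangents fitted before/after the onset.
import numpy as np

def tangent_intersection(t, rho, base_sl, rise_sl):
    """Fit lines to the baseline and steep-rise windows; return their crossing time."""
    m1, c1 = np.polyfit(t[base_sl], rho[base_sl], 1)
    m2, c2 = np.polyfit(t[rise_sl], rho[rise_sl], 1)
    return (c2 - c1) / (m1 - m2)

# Toy density trace (g/cm^3) with a sigmoidal crystallization step near 200 ps
t = np.linspace(0.0, 400.0, 401)                       # time, ps
rho = 7.0 + 0.15 / (1.0 + np.exp(-(t - 200.0) / 15.0))
t_inc = tangent_intersection(t, rho, slice(0, 100), slice(190, 210))
print(f"incubation time ~ {t_inc:.0f} ps")
```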
According to adaptive common neighbor analysis (Figure 6), crystallization becomes detectable by volume and energy changes when the fraction of atoms in crystalline clusters attains about 2%. The increase in the number of atoms with an icosahedral-like configuration prior to the nucleation of crystalline phases, found in earlier works [36,37], is not seen in the present study (see Figure 6). Although only BCC crystals form and grow in Fe, FCC, BCC and hexagonal close packed (HCP) atomic arrangements are found in Cu owing to the low stacking fault energy of this metal (well estimated experimentally), as seen in Figure 7. Typical growth rates (crystal radius change as a function of time) are shown in Table 1. The growth rates are lower than those found earlier in simulations [15]. The growth rate of FCC Cu crystals is less temperature dependent than that of BCC Fe. Furthermore, one should note that some growing particles have a much lower growth rate owing to interaction with other nuclei (even undercritical ones) [38]. At the same time, the observed growth rate values for Fe are quite close to the experimental ones of 40 m/s [39]. The observed growth rates of the order of tens of meters per second are lower than those obtained for flat interfaces. A liquid/solid interfacial energy (σ) value of 0.37 J/m² is obtained in the present work for Fe at 1100 K as an excess energy per surface area of a single particle of 1.8 nm radius.
It is calculated taking into account the potential energy difference of 23.7 eV, at 170 ps of simulation time, between its value for the entire atomic cell and the sum of the energy values for the atoms belonging to a crystalline particle of about 3.6 nm diameter formed at that stage (1657 atoms) plus the potential energy of the other atoms in the cell belonging to the liquid phase (128,000 − 1657 = 126,343 atoms). The potential energy for the liquid part of the atomic cell was obtained at the 50 ps stage, when all atoms belonged to the supercooled liquid phase, while that of a crystal was taken for the fully crystalline state on heating the initial perfect crystal to 1100 K. The value of 0.33 J/m² for the Cu crystal at 850 K is found in a similar way. These values are not too far from the experimental values of 0.28 and 0.18 J/m² obtained for Fe and Cu, respectively [1]. Please note that although the calculated values are somewhat overestimated owing to the internal defects in the growing crystals, the value for Fe is still higher than that for Cu. Earlier simulation results indicated that the crystal-melt interfacial energy calculated from a planar interface needs to be adjusted to a smaller value in order to predict the correct nucleation rate at deep supercooling [40].
Calculation of the critical nucleus size (R_c) and the work for the formation of a critical nucleus (W*) was done according to classical nucleation theory, using the Gibbs free energy difference (ΔG_v). σ values of 0.277 J/m² for Fe and 0.178 J/m² for Cu were taken from Ref. [1]. The temperature dependences of the enthalpies of the liquid and crystalline phases were used to estimate the enthalpy of melting at about 1809 K. The enthalpy difference of 0.164148 eV/at, or 15.837 kJ/mol, found for Fe is not far from the experimental value of 15.2 kJ/mol [1]. The value of 13 kJ/mol was used for Cu [1]. ΔG_v can be estimated from the liquid crystallization enthalpy (ΔH_c) and the liquidus temperature T_l using the equation of Ref. [41]. According to the thermodynamic calculations, the critical nucleus radius in the studied temperature range is about 0.4-0.6 nm for Cu and 0.5-0.6 nm for Fe. Steadily growing particles found in this simulation were larger than 0.8 nm in size. At such a small size, the ratio of the energy barrier W* to the thermal energy RT is about 20 for Cu and about 30 for Fe. The higher energy barrier found for Fe is likely responsible for the lower number of nuclei found for Fe at the same homological temperature.
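The classical nucleation theory expressions presumably intended here, together with the ΔG_v estimate of Ref. [41] (assumed to be of the Thompson-Spaepen form, with ΔH_c taken per unit volume), are:

\[ R_c = \frac{2\sigma}{\Delta G_v}, \qquad W^{*} = \frac{16\pi\sigma^{3}}{3\,\Delta G_v^{2}}, \qquad \Delta G_v = \frac{\Delta H_c\,(T_l - T)}{T_l}\cdot\frac{2T}{T_l + T}. \]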
It is interesting to note that even at such high energy barriers homogeneous nucleation takes place after hundreds of picoseconds.
The number of crystal nuclei formed within the typical cells of about 10 nm length in all metals ranges from 1 to 10, leading to a number density of precipitates of the order of 10²⁴ m⁻³. The highest number density of nuclei is observed in Cu, which leads to better reproducibility of the crystallization traces in different simulation runs. Such high number densities of precipitates are more or less typical for crystallization of pure metals at extremely low homological temperatures of about 0.55 T_l or lower, which are unattainable in undercooling experiments but quite common in various metallic glasses, which nanocrystallize in a similar temperature range [42,43].
Conclusions
Liquid structure variation on cooling and glass transition indicated that glassy Cu and Fe inherit the short-range order of the corresponding crystals. Nevertheless, a nucleation-and-growth type phase transformation (with an energy barrier) is observed. The crystallization kinetics of FCC Cu and BCC Fe were analyzed under isothermal conditions by monitoring the density and energy variation as a function of time, and a TTT diagram was constructed. BCC Fe shows a lower number density of precipitates than FCC Cu. The higher energy barrier for nucleation found for Fe is likely responsible for the lower number of nuclei found for Fe at the same homological temperature. On the other hand, BCC Fe, in general, shows a shorter incubation period than FCC Cu and generally a higher growth rate, though Cu crystals nucleate at a higher rate. The nose of the TTT diagram is at about 0.55 T_l for Fe and about 0.6 T_l for Cu. Homogeneous nucleation of both metals becomes difficult to detect above about 0.65 T_l within a reasonable simulation time scale, while below ~0.6 T_l a high population density of crystalline precipitates is obtained. The growing nanocrystals have an irregular but nearly spherical shape. The observed growth rates of the order of tens of meters per second are lower than those obtained for flat interfaces. Some of the crystalline particles displayed a lower growth rate than the others owing to the lattice defects within the growing crystals and the existing order in the supercooled liquid. The crystal growth rate was scattered from crystal to crystal, likely owing to the local order in the surrounding environment and the lattice defects (mostly HCP stacking faults) within the growing crystals in Cu. According to the thermodynamic calculations, the critical nucleus radius in the studied temperature range is about 0.4-0.6 nm for Cu and 0.5-0.6 nm for Fe.
"Materials Science",
"Physics"
] |
Broadband Access for All: Strategies and Tactics of Wireless Traffic Sharing
Network engineers have designed an array of protocols that enable shared access in various wired and wireless contexts at different layers of the protocol stack [1]. One approach to managing unlicensed spectrum is to rely on a technical protocol to allocate and manage shared access (Lehr [2]). This paper addresses the benefits of carrying unlicensed wireless traffic within licensed traffic (anticipated as cognitive router-based networking). A focus on shared access to non-exclusive use of the spectrum, with a holistic view of technical and institutional features, is suggested for effective management of the 'spectrum commons'. Using the adapted cognitive radio architectural model and its associated multi-hop ad-hoc networking strategies to implement the spectrum commons, mobility is enhanced with each node acting as a router and packet forwarder. We formulate management frameworks that can integrate well with liquid protocols for mobile nodes. These frameworks also incorporate new strategies for intelligently adapting the nodes to dynamically participate in setting bandwidth capacity stochastically. The projected use of a dynamic bandwidth shaping algorithm for the cognitive radio-based network (CRN), when implemented, will make broadband access more economical for users and ensure the spectrum is used effectively. Keywords—ad-hoc; spectrum commons; etiquettes; software defined radio (SDR); MIMO
INTRODUCTION
The significant progress in wireless technology and the growth of wireless services have provided the principal impetus for reforming spectrum management and hence the transition toward increased reliance on market forces. While many wireless technologies contribute to both the viability and desirability of managing spectrum via unlicensed platforms (smart wireless system technology, including software-defined or cognitive radios, smart antennas and multiple-input multiple-output (MIMO) systems), the benefits of unlicensed wireless are best anticipated in the context of ad-hoc networks [3].
Ad-hoc networks are mobile, dynamic wireless networks that require no fixed infrastructure [4]. As continuous end-to-end connectivity between their mobile nodes is not guaranteed, Reference [5] pointed out that the ability to self-form and self-manage remains a major challenge. Due to these partially and intermittently connected wireless frameworks, mobile ad-hoc network (MANET) hosts induce link disruptions, which may result in degraded service unless assisted by derived technologies, including intelligent etiquettes and strategized mobility management.
Today, spectrum licenses to provide mobile services constitute an entry barrier that gives incumbent licensees a strategic advantage. However, with robust competition and the threat of increased allocation for competing wireless technologies on the one hand, and the prospect of having to pay for additional spectrum to support new (3G wireless broadband) services on the other, mobile operators are more inclined to share spectrum [1], [2].
A. Motivation
As policy-makers are committing to a dual regime of flexible licensed and unlicensed spectrum to provide for the evolution from centralized approaches to more decentralized management regimes, the elements of a protocol for managing the spectrum commons must be defined. These new protocols are required both at the level of running code (as protocols and standards) and at the level of institutional frameworks. Also, as wireless traffic becomes more like Internet traffic (heterogeneous, bursty or fat-tailed, with long hold times for connectivity but variable link status due to ad-hoc networking), there is a need to deploy new strategies to manage wireless resources [6]. The proposed rules were examined for expected performance and support for ad-hoc communications with reduced overheads but increased quality of service (QoS) in [1].
B. Objectives
The objectives of this research are to: define a suitable framework and CR-based infrastructure for a spectrum commons; incorporate learning strategies to make the defined protocols liquid; and suggest approaches for incorporating the defined etiquettes into existing management protocols to achieve sharing goals.
A. Regulatory Models
Reference [2] identified three models of spectrum management: command and control (C&C), property rights (licensed) and open access (unlicensed). As discussed, C&C is a scheme whereby the government acts as the regulatory agency, such as the Federal Communications Commission (FCC) in the US or Ofcom in the UK. Here, the government controls the choice of technology, spectrum uses and users. According to [7], this system is vulnerable to influence costs. As government regulators lack the expertise to make informed decisions, the regulation is often slow and expensive, and it is therefore criticized as a non-market-based approach [8].
In contrast, the licensed (property rights or exclusive-flexible use) and unlicensed (open access or "commons") models are approaches stylized as market-based because the decision-making power is decentralized to the market. In these schemes, the service providers, equipment makers and end-users interact and compete in the marketplace to determine spectrum usage.
Reference [7] further explained that even as the licensed scheme confers a property right on the licensee to use the spectrum exclusively, there are rules which limit its tradability, and licenses are subject to term limits. In the same vein, the assumption of relative spectrum abundance is provisioned by the unlicensed scheme. Reference [9] also corroborated the unlicensed model as an open access scheme operated as a "commons" approach, where the right to access or use the spectrum is shared among users. By contrast, under the licensed approach, an assigned exclusive-use license may be traded in secondary markets, and licensees have flexibility only in the choice of technology and services offered. In addition, licensees are only allowed to trade the usage rights conferred by the license.
As the commons approach provides the right to access the spectrum in a shared manner (among the users) subject to protocols, the decision-making authority is decentralized to those who share access to the commons, and as the protocol embodies the mechanism for managing the spectrum, decision-making is governed by the protocol put in place. Moreover, much flexibility is offered in the commons, as the choice of the protocol may be made by the government or by the market via industry standardization, unlike in the license regime, where decision-making resides only with the central planner (the government). However, the "commons" approach does not suggest that the spectrum will be free; rather, access is open only to those who conform to the unlicensed protocol. Furthermore, unlicensed does not mean unregulated, as the costs incurred will be borne by users, either directly through access payments or indirectly through taxes, protocol implementation costs or congestion-related quality-of-service effects. These costs include the costs of setting up and operating the management procedures, such as the processing costs of implementing the sharing protocol, enforcing it and controlling congestion. Similar costs are also borne under the license regime.
Several additional distinctions between the licensed and commons models are noted in [2]. They are both "shared" in the sense that multiple devices and end-users simultaneously access and use the spectrum. For example, mobile operators share spectrum over multiple users, and competition among operators offers competition across technologies and markets. Also, both are market-based, and as these models offer dynamic spectrum access and movement by end-users via roaming and switching among operators, mobile customers are secondary licensees who get to use the spectrum on the basis of rules established by the licensed operators.
B. Communication Standards
The standard for modern telecommunication networks is to offer 99.999% availability. References [10], [11], [12] and [13] all discussed the role of unlicensed (commons) regimes as a sure step towards providing a solution to spectrum scarcity and a promoter of innovation in telecom services.
The rules for managing a spectrum commons as stipulated in [12] and [14] showed that centralized resource allocation mechanisms (ATM, token ring) provide more assurance of bounded access delays, while distributed protocols (TCP, Ethernet) provide similar delays when networks are lightly loaded. Centralized approaches are less robust in the dynamic state of ad-hoc networks, which characterise future wireless environments.
Similarly, VoIP co-exists perfectly with FTP, email and other data traffic when the network is not congested. With TCP and IP segmentation of packets in transport, hop-by-hop IP and end-to-end TCP provide the special controls that allow packets of variable length. As remarked in [2], much of the unlicensed spectrum (the ISM band) used by Wi-Fi, wireless LAN or Bluetooth is managed in a decentralized way analogous to the Internet, and the applications are adaptive, making resource isolation less strictly managed.
For these and many other standards to be effectively upheld, providing broadband access for all with BGP providing inter-domain routing support, a more decentralized approach may be the only feasible way to manage resources. This also includes decoupling spectrum frequencies from infrastructure investment and applications.
III. DESIGN FRAMEWORK FOR "SPECTRUM-COMMONS"
The design of an appropriate framework for managing unlicensed spectrum is conceived to be minimally constraining but very consistent with orderly management of shared access spectrum.
Developing a framework or rules structured for operating unlicensed devices to co-exist with licensed devices as primary users in dedicated unlicensed spectrum is crucial to the sharing.
A. Spectrum Sharing Platform
The environment of mixed regimes (Fig. 1) provides for the bulk of spectrum to be allocated via licensed and market-based unlicensed use. With cognitive radio network architectures and the dynamism exhibited by ad-hoc networking, the framework model is evolving, promoting innovations and minimizing regulatory distortions. The design supports marginal adjustments between licensed and unlicensed users and, within the unlicensed regime, supports all changing protocols as the need arises [7]. The prototypical design includes licensed and unlicensed bands running BGP, with the radio systems made smarter. This architecture enables dynamic spectrum sharing, and the framework favours distributed/decentralised management characterised by maximal "commons" benefits. Reference [9] posited a set of etiquettes as the rules and mechanism to instantiate a commons regime. It includes a "protocol" of running code for a software radio and technical standards guiding the protocol design for a closed commons. Fig. 1 depicts a "closed spectrum-commons" platform for licensed and qualified operators (spectrum users) to implement the management regime for spectrum usage efficiency. A collective ownership of 3G spectrum and its management regime is prototyped as a "closed spectrum commons" in this paper.
B. Design Rules
In agreement with [15] and [16], an infrastructural framework proposed to support traffic sharing under secured Internet routing defined by BGP is characterised by: technology and associated capabilities to counter communication problems (such as interception, interference, eavesdropping, spoofing, jamming and data falsification); frequency agility, expanded capacity for sharing, no transmit-only devices, spread-spectrum capability and a transition to a broadband platform; network provisioning for bursty traffic, multimedia services and other profiles; heterogeneous network technology provisions (3G, Wi-Fi, infrared, satellite roaming and seamless mobility); and spectrum reform policies, transitioning to expanded flexible licensed and unlicensed spectrum management regimes instituted and sustained by defined etiquettes.
IV. IMPLEMENTATION OF A LIQUID PROTOCOL
Wireless traffic control schemes for broadband services include the constant bit-rate (CBR), variable bit-rate (VBR), unspecified bit-rate (UBR), guaranteed frame-rate (GFR) and available bit-rate (ABR) service categories [17].
For liquidity, the available bit-rate (ABR) scheme is envisioned to work in the spectrum commons. The ABR scheme is capable of dynamically adjusting to the varying bandwidth capacity. The bandwidth made available to an ABR connection on any link varies between the minimum cell rate (MCR) and the peak cell rate (PCR). Combining equations (1) and (2), an algorithmic description of T is given in Fig. 2.
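To make the rate-adjustment idea concrete, the sketch below allocates rates to ABR connections from the bandwidth left unreserved by CBR/VBR traffic, clamping each flow between its MCR and PCR. The equal-share policy and all names are illustrative assumptions, not the paper's exact algorithm (whose equations are referenced but not reproduced here):

```python
def abr_rates(link_capacity, reserved_cbr_vbr, connections):
    """Allocate ABR rates on one link.

    connections: list of (mcr, pcr) tuples for the active ABR flows.
    Each flow gets a naive equal share of the free capacity, clamped
    into [MCR, PCR] so the minimum cell rate is honoured even under
    congestion.
    """
    free = max(link_capacity - reserved_cbr_vbr, 0.0)
    if not connections:
        return []
    share = free / len(connections)
    return [min(max(share, mcr), pcr) for (mcr, pcr) in connections]

# Example: a 100 Mb/s link with 60 Mb/s reserved for CBR/VBR traffic
# and three ABR flows with differing (MCR, PCR) contracts.
print(abr_rates(100.0, 60.0, [(5.0, 30.0), (5.0, 10.0), (2.0, 50.0)]))
```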
A. Discussion on CR-based model
The physical architecture of cognitive radio (CR) in ad-hoc setups makes it feasible to receive wideband signals. As a software-defined radio (SDR) with a radio frequency (RF) front-end, it is equipped with the capability to detect any weak signal over a large dynamic range. This communication model can tune to any frequency band and receive any modulation.
As the estimator is updated by the ABR connection source parameters, bandwidth resources are reserved for CBR and VBR connections as they are set up, and the bandwidth becomes free again when the CBR and VBR connections are released. This non-reserved bandwidth, made available to ABR connections, makes all traffic sharable.
B. Modalities for defined etiquettes
Using BGP routing protocol, the cognitive-based service network and its special feature integrates well with other routing protocols.BGP also enables routing across all Internet service and other network providers.Combining with other technologies (WLAN, spread signal, infrared, WiMAX etc), temporarily unused band is used by any of the opportunistic radio, based on defined etiquettes to improve overall spectrum utilization [18].
To evaluate the "commons" management regime, the following application specifications are supported in unlicensed spectrum under well-defined protocols: the Wi-Fi model of unlicensed devices promotes innovation in wireless devices and the IT business; mobile operators' sharing of 3G spectrum minimizes the transaction costs of accessing spectrum individually; the realization of community mesh networks provides mechanisms for managing congestion, emphasizing co-ordination in co-existence;
and reliance on the industry standardization process fosters spectrum-specific etiquettes of management, since the "commons" regime also requires specialized mechanisms.
V. CONCLUSION
The capability of cognitive radio (CR) within wireless traffic provides many current wireless systems with adaptability to the existing spectrum allocation and improves overall spectrum utilization. CR supports common-channel signalling, enabled with the consistent security and privacy envisioned in secured BGP [16]. Also, the commons spectrum will be more attractive to applications that are adaptive and reasonably tolerant of congestion [19]. The system, having a mechanism for allocating resources among users/uses, is therefore equipped with established procedures to verify that the protocol conforms with the agreed etiquettes.
In the licensed wireless environment there is increasing demand for, and use of, heterogeneous devices and uses, leading to relatively insufficient spectrum. Spectral usage will be more efficient and spectral scarcity alleviated for broadcast and communication networks if the suggested model is adopted. Users will benefit more significantly. Strategies to enhance wireless mobility management for qualitative seamless roaming and service continuity are suggested for future research. | 3,147.2 | 2014-01-01T00:00:00.000 | ["Computer Science", "Engineering"] |
Arbitrarily tunable orbital angular momentum of photons
Orbital angular momentum (OAM) of photons, as a new fundamental degree of freedom, has excited a great diversity of interest because of a variety of emerging applications. Arbitrarily tunable OAM has gained much attention, but its creation remains a tremendous challenge. We demonstrate the realization of well-controlled, arbitrarily tunable OAM in both theory and experiment. We present the concept of general OAM, which extends the OAM carried by the scalar vortex field to the OAM carried by the azimuthally varying polarized vector field. The arbitrarily tunable OAM we present has the same characteristics as the well-defined integer OAM: intrinsic OAM, uniform local OAM and intensity ring, and propagation stability. It also has unique features: it can be flexibly tailored, and the radius of the focusing ring can take various values for a desired OAM, both of which are of great significance for the emerging applications of arbitrary OAM.
Results
Theory. We have predicted that a vector field with a vector potential of the form A = u exp(jψ)(α v_α + β v_β) is able to carry two parts of OAM flux associated with the azimuthal gradient [29], where A = u exp(jψ) is the complex amplitude of A, written in terms of its modulus u and phase ψ. Here α v_α + β v_β is a unit vector describing the distribution of polarization states of A, with |α|² + |β|² ≡ 1. The unit vectors v_α and v_β indicate a pair of orthogonal polarization states and can be represented by a pair of antipodal points on the Poincaré sphere [28,30]. If α and β are functions of the transverse coordinates (r, φ), A is a vector field; otherwise A degenerates into a scalar field. For the light field A, the OAM per photon can be identified as the sum of two contributions, L_z = L′_z + L″_z. In fact, L′_z is the well-defined OAM carried by the scalar vortex field with the helical phase exp(jmφ), an intrinsic and eigen OAM of mħ per photon [1]. We call L′_z the photon OAM of the first kind. L″_z is associated with the vector field. We have demonstrated the OAM from the curl of polarization, called the photon OAM of the second kind, which can be carried only by radially varying, hybridly polarized vector fields [29]. Since L″_z is always zero for a scalar field, because α and β are then independent of φ, the vector field offers a unique opportunity for tailoring L″_z. With Eq. (2), a nonzero L″_z requires the polarization states to be azimuthally varying. Although the locally linearly polarized vector fields [27] and the hybridly polarized vector fields [28] both exhibit azimuthally varying polarization states, L″_z is still null (Supplementary S1). Let us refocus on these two azimuthally variant polarized vector fields [27,28], in which a pair of orthogonally polarized components with completely opposite helical phases exp(±jmφ) have equal intensity. This brings the inspiration that the most probable route to a nonzero L″_z is to break the balance in intensity between the two orthogonal components. In such a situation, the unit vector representing the distribution of polarization states should be rewritten in terms of T, the relative intensity fraction between the two orthogonal components, with T ∈ [0, 1] (Eq. (3)). With Eq. (2), we easily obtain an effective topological charge m_eff, and the OAM per photon is m_eff ħ. Clearly, T as a degree of freedom can be used to continuously tailor the OAM within a range of [0, mħ], even though m can take only integer values (Fig. 1). It is very interesting and surprising that a desired OAM, or m_eff, can be achieved by a variety of combinations of m and T, shown as the series of intersections of the coloured curves with the thin horizontal line (Fig. 1). In the extreme case T = 1, it has been confirmed that L″_z = 0 (Supplementary S1). In the extreme case T = 0, the vector field described by Eq. (3) degenerates into a scalar vortex field with the helical phase exp(jmφ), carrying an OAM of mħ per photon [1]. Obviously, L′_z is a special case of L″_z with T = 0. In particular, the phase factor exp(jψ) can in fact be incorporated into α, and the result still holds. Therefore, L″_z is a general form of the OAM associated with the azimuthal gradient, and can be called the general OAM of the first kind.
Experiment.
To confirm the feasibility of the arbitrarily tunable OAM we present, the focused vector field acting as optical tweezers is a useful tool (Fig. 2a). The generation unit for the vector fields is very similar to that used in refs 24 and 25, but with one unique difference: the ±1st orders, carrying the completely opposite helical phases exp(±jmφ), can have different intensities (Methods). Thus the demanded vector field can be written as in Eq. (4), where u(r) has a top-hat profile with u(r) = U₀ circ(r/R₀) (Fig. 2b). U₀ is a constant amplitude, and circ(r/R₀) is the well-known circular function defined as circ(r/R₀) = 1 for r < R₀ and circ(r/R₀) = 0 for r > R₀, where R₀ is the field radius (Methods). As examples, Fig. 2b shows schematic sketches of the polarization states of the azimuthally variant polarized vector fields with m = 1 and 3 as well as T = 1 and 1/3. For the vector fields created by a pair of orthogonal bases, we define a circle σ on the Poincaré sphere Π, lying in a plane at a distance d from the centre of Π. We further define a great circle Σ, which is the intersection of Π with a plane passing through the centre of Π and parallel to the plane of σ. In fact, the Poincaré sphere can also be used to characterize the arbitrarily tunable OAM, which is equal to the distance d of the plane of σ from the centre of Π, in units of mħ. Of course, the OAM can also be characterized as m(Ω/2π)ħ, via the solid angle Ω subtended by the spherical zone sandwiched between the two circles σ and Σ on Π. In particular, we should emphasize that the arbitrarily tunable OAM is independent of the choice of spinors.
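The two characterisations above (distance d versus solid angle Ω) agree, which a one-line computation can verify. A minimal sketch, assuming a unit-radius Poincaré sphere with Σ the great circle through the centre and σ the parallel circle at height d:

```python
import numpy as np

def oam_from_zone(m, d):
    """Consistency check: on a unit Poincare sphere, the spherical zone
    between the great circle Sigma (height 0) and the circle sigma at
    height d has area Omega = 2*pi*d (Archimedes' hat-box theorem), so
    the solid-angle characterisation m*Omega/(2*pi) reduces to m*d,
    i.e. the OAM equals the distance d in units of m*hbar."""
    omega = 2.0 * np.pi * d
    return m * omega / (2.0 * np.pi)

print(oam_from_zone(20, 0.5))   # -> 10.0, i.e. m*d
```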
It is of great importance to explore the propagation stability of the vector fields carrying the arbitrarily tunable OAM. The measured intensity pattern of the scalar vortex top-hat field with the helical phase exp(+j20φ) undergoes an evolution from the top-hat profile at z = 0 to a multi-ring structure at z = 1.2 m (top row in Fig. 2e). For the vector field created by a pair of orthogonally polarized bases with the opposite helical phases exp(±j20φ) and T = 0.32, the propagation behaviour is no different from the scalar vortex top-hat field, implying that the vector field is propagation stable (bottom row in Fig. 2e) and that the arbitrarily tunable OAM we present is retained at any plane during propagation. For the vector field created by a pair of orthogonally polarized bases with the helical phases exp(+j20φ) and exp(−j5φ) and T = 0.32, the field is unstable during propagation (Supplementary S3 and Fig. S3), resulting in the spatial separation of different OAM states. Therefore, that fractional OAM cannot be retained at every plane during propagation.
The created azimuthally variant polarized vector field is introduced into the optical tweezers system (Methods), which is a very useful tool for exploring photon OAM by observing and recording the orbital motion of trapped particles (Video). We intercept time-lapse photos of the orbital motion of the trapped particles (Fig. 3a). For the vector field with m = 16 and T = 0 (where the vector field degenerates into a scalar vortex field with m = 16), the trapped particles move around the principal ring focus with an orbital period of τ ~ 2.47 s (first row). When T is changed to T = 0.1, the orbital period of the trapped particles increases to τ ~ 2.94 s (second row). When T is further increased to T = 0.3, the orbital period further increases to τ ~ 4.75 s (third row). When m is switched from m = 16 to m = −16 while keeping T = 0.3, the motion direction of the trapped particles is synchronously reversed, with an orbital period of τ ~ 4.99 s (fourth row) (Video). The slight difference between the periods is due to the slight difference in the intensity and shape of the two bases; of course, the activity of the particle and the water also has an influence. However, for the vector fields with T = 1 (the hybridly polarized vector fields reported in ref. 28), no orbital motion of the trapped particles is observed, implying that this kind of vector field carries no OAM. The dependences of the orbital period τ of the trapped particles on T for different m (Fig. 3b) and on m for different effective topological charge m_eff (Fig. 3c) are also measured. To understand the realization of fractional OAM using vector fields, we now provide a very simple and intuitive physical picture, a damping model, for understanding the fractional OAM (Fig. 4). For the orbital motion of a classical particle, if damping is introduced, its motion speed will become slow and its OAM will synchronously become small. This classical damping model prompts the question: can introducing suitable damping flexibly realize control of the photon OAM? A scalar vortex field A′ with the helical phase exp(+jmφ) and the polarization state v_α is able to drive the orbital motion of a trapped particle, owing to the presence of angular momentum flux. Of course, another scalar vortex field A″ with the opposite helical phase exp(−jmφ) will provide an opposite-sense angular momentum flux, acting as damping for A′. If A″ has the same polarization state as A′, the total field A = A′ + A″ is still a scalar field with the polarization state v_α. The angular momentum flux provided by A′ can be completely or partially cancelled by the damping field A″, which realizes a continuously tunable net angular momentum flux and hence arbitrary OAM (Fig. 4a). However, this is not what we ideally expect, because the interference between A′ and A″ gives rise to nonuniformity in both the ring intensity and the local OAM (Fig. 4a). Fortunately, the vector nature of photons may provide a solution. If the polarization state of A″, described by the unit vector v_β, is orthogonal to the v_α of A′, the total field A = A′ + A″ becomes a vector field [26-28]. Its net OAM should also be continuously tunable (Fig. 4b). Moreover, it is of extreme importance that the intensity ring and the local OAM are both uniform in the azimuthal dimension (Fig. 4b). There remains the question of why the topological charges of A′ and A″ are selected to be completely opposite in the above.
Based on the damping model, it seems in principle allowed that A″ has a helical phase of exp(−jm′φ) with m′ ≠ m. However, such a choice is in fact unsuitable, because the total field A is then unstable during its propagation (Fig. 4c): two fields carrying helical phases with different topological charges can never always overlap in space.
Quantum understanding. We can also attempt to understand the fractional OAM from the quantum point of view. As is well known, the Hamiltonian Ĥ = jħ ∂/∂t and the z component of the OAM operator L̂_z = −jħ ∂/∂φ are two commuting operators, i.e. [Ĥ, L̂_z] = 0, so they have common eigen wave functions. The light field described by Eq. (4) is composed of two orthogonally polarized components and can be rewritten as A = A′ + A″. We can easily confirm that these are indeed common eigen wave functions of Ĥ and L̂_z, with the respective eigenvalues ħω and mħ (ħω and −mħ). Thus the photon states described by A′ and A″ are a pair of orthogonally polarized eigen wave functions of OAM, with eigen OAMs of mħ and −mħ per photon, respectively. Equation (4) in the manuscript in fact also describes a mixing wave function composed of two eigen OAM states (with eigen OAMs of mħ and −mħ per photon). In other words, the photon is in a mixing OAM state composed of two eigen OAM states, and we can easily obtain the expectation value of OAM per photon in this mixing state. We have presented a solution to the long-standing challenge of the photon OAM. The photon OAM we have demonstrated has novel and unique natures: (i) it is continuously tunable within a range of [−mħ, mħ]; (ii) it is uniform; and (iii) the light field carrying the arbitrarily tunable OAM has a uniform intensity ring and propagation stability. We presented the general OAM of the first kind, associated with the azimuthal gradient, which extends the OAM carried by scalar vortex fields to the OAM carried by azimuthally varying polarized vector fields. We have also extended the Poincaré sphere to represent the arbitrarily tunable OAM. Our idea may spur further independent insights into the generation of natural waves carrying arbitrarily tunable OAM. The current technology trend has been perceived to move from fundamental investigations towards probing viability for surprising applications. The fast-moving exploitation in such diverse areas has pushed for further development of OAM generation technology.
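The elided expectation value can be reconstructed from the mixing-state picture. Assuming T is the intensity ratio of the −m component to the +m component (an interpretation consistent with the text: T = 0 gives mħ and T = 1, equal intensities, gives zero OAM), the expectation is ⟨L_z⟩ = (I₊ − I₋)mħ/(I₊ + I₋) = m(1 − T)ħ/(1 + T). A minimal numerical sketch under that assumption:

```python
def mean_oam(m, T):
    """Expectation OAM per photon (in units of hbar) of a mixing state
    composed of eigen states +m and -m, with T = I_minus / I_plus,
    T in [0, 1]. Assumed form; not the paper's printed equation."""
    return m * (1.0 - T) / (1.0 + T)

# The extremes reproduce the text: T = 0 gives m, T = 1 gives 0.
for T in (0.0, 0.1, 0.3, 1.0):
    print(T, mean_oam(16, T))
```

The measured orbital periods are consistent with this form: for m = 16, the ratio of periods across T = 0, 0.1 and 0.3 (2.47 s, 2.94 s, 4.75 s) tracks the inverse of m_eff reasonably well.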
Methods
Creation of azimuthally varying polarized vector fields. We follow a method similar to those used in refs 27 and 28 for creating the demanded azimuthally varying polarized vector fields (the part enclosed by the dashed-line box in Fig. 2a). The light source is a continuous-wave laser operating at a wavelength of 532 nm (Verdi-5, Coherent Inc.), which outputs a near-fundamental Gaussian mode. The laser beam is expanded, and the collimated beam then illuminates the computer-generated holographic grating displayed on the spatial light modulator (SLM), located at the input plane of a 4f system composed of a pair of lenses (L1 and L2). The incoming beam is diffracted by the computer-generated holographic grating with an amplitude transmittance of t(x, y) = [1 + γ cos(2πf₀x + δ)]/2, with an additional azimuthally varying phase δ = mφ, where φ is the azimuthal angle and m is the topological charge. The diffracted ±1st orders are selected by a spatial filter (SF) located at the spatial frequency plane of the 4f system. The ±1st orders are transformed by a pair of 1/4 (or 1/2) wave plates into a pair of orthogonal circularly (or linearly) polarized bases. In particular, an intensity controller (IC) composed of a linear polarizer and a 1/2 wave plate is inserted into the −1st order path to achieve continuous control of the relative intensity fraction T between the two paths. Finally, the orthogonal circularly (or linearly) polarized ±1st orders are recombined by a Ronchi grating (RG) placed at the output plane of the 4f system to create the demanded azimuthally varying polarized vector fields, as given in Eq. (4). Thus we can select different topological charges m, different relative intensity fractions T and different orthogonally polarized bases.
Optical tweezers and the indirect measurement of OAM. The direct method of measuring the topological charge is mainly associated with directly detecting the phase distribution of the light field, for example by detecting interference patterns. The arbitrarily tunable OAM we propose here is not directly associated with vortex phases of fractional topological charge, so we cannot measure it directly through measurement of a fractional topological charge. We instead use an indirect method to measure the arbitrarily tunable OAM and confirm our idea with optical tweezers.
The created azimuthally varying polarized vector field is introduced into an optical tweezers system built around an inverted microscope with a 60× objective of NA = 0.75 (Fig. 2a). Neutral, isotropic colloidal microspheres of almost identical diameter (2.8 μm) are dispersed in a layer of sodium dodecyl sulfate solution between a glass coverslip and a microscope slide. The azimuthally varying vector field with the top-hat profile is focused into a multi-ring structure composed of a principal ring and secondary rings (Fig. 2d), and the laser power in the focal region is kept at ~15 mW. The neutral microparticles can be trapped in the principal ring. The motion behaviour of the trapped particles indirectly characterizes the photon OAM carried by the light field. If the trapped isotropic particles in the ring optical tweezers move around the ring orbit, the azimuthally variant vector field has the capability to exert torque on the trapped isotropic particles; no doubt this verifies the presence of photon OAM. The motion direction and speed of the trapped particles indicate the sense and magnitude of the photon OAM carried by the azimuthally variant vector field. If no motion of the trapped isotropic particles is observed around the ring, the field carries no photon OAM. By taking video of the motion of the trapped particles, their orbital period or motion speed can be measured, which indirectly characterizes the OAM. | 4,233.6 | 2016-07-05T00:00:00.000 | ["Physics"] |
Decision-Theoretical Navigation of Service Robots Using POMDPs with Human-Robot Co-Occurrence Prediction
To improve the natural human-avoidance skills of service robots, a human motion predictive navigation method is proposed, namely PN-POMDP. A human-robot motion co-occurrence estimation algorithm is proposed which incorporates long-term and short-term human motion prediction. To improve the reliability of probabilistic and predictive navigation, the POMDP model is utilized to generate navigation control policies through theoretically optimal decisions. A layered motion control structure is proposed that combines global path planning and reactive avoidance. Multiple comity policies are integrated with a decision-making module that generates efficient and human-compliant navigational behaviours for robots. Experimental results illustrate the effectiveness and reliability of the predictive navigation method.
Introduction
As service robots have been designed to provide interactive tasks in domestic and office environments, they must reliably navigate around a populated room. When robots and people encounter each other, human-aware motion planners [1] help robots treat people as social entities and aim to endow robots with safe and human-friendly navigational behaviours [2][3].
Predicting the motion of moving people is an effective way to achieve compliant robot navigation in dynamic environments [4]. Many researchers [5][6][7][8] have developed efficient replanning algorithms to cope with environmental dynamics and satisfy real-time requirements by feeding updated information into a grid map and optimizing the robot's path to minimize the expected time to its destination. Although reactive motion planners [9][10][11][12] are able to rapidly query the next appropriate action, they are prone to getting robots blocked in complex environments because of their greedy property and the uncertainty of human motion. According to research on human indoor motion modelling and understanding, a human's daily motions in a specific room environment present certain long-term patterns. Nevertheless, uncertainties are pervasive in the velocity and heading direction of people's movement. Several studies have exploited the spatial-temporal nature of human motion using a chain of Gaussian distributions [13], clustering trajectories with K-means [14], and learning human motion patterns from tracking data using an EM algorithm [6]. However, most past research ignores the combination of human motion uncertainty prediction with motion pattern prediction.
Another key factor in predictive navigation is inherent uncertainty [15][16], which typically necessitates reacting intelligently to unknown moving people. Due to the non-linear nature of human and robot motion, as well as sensor noise, the popular robot localization and people-tracking algorithms based on Bayes filtering can only estimate position distributions. The probabilistic human motion prediction algorithm is also likely to produce larger errors when making motion predictions.
Many researchers have already pointed out that probabilistic representation and reasoning is appropriate and very effective for navigating in noisy real-world environments. For probabilistic decision-making, Partially Observable Markov Decision Processes (POMDPs) [17][18] have already been widely used in robot navigation and in interacting with people. Robots such as Flo [19] or Pearl [20] use POMDPs at all levels of decision-making, not only in low-level navigation routines. But since finding optimal control strategies in the POMDP case is computationally intractable due to the continuous and high-dimensional belief space, POMDPs have usually been applied to topological navigation. For example, Foka [16] proposes a method for combining the prediction of the destination of a moving obstacle with its one-step-ahead positions. However, that method relies on a complex hierarchical decomposition of the environment and has only been implemented and successfully tested in a simulation application.
In this paper, a novel approach called POMDPs for Predictive Navigation (PN-POMDP) is proposed. The idea of predictive navigation is largely inspired by Sisbot's [1] human-aware planner, which focuses on providing human-friendly robot behaviours that imitate human motion habits. However, the human-aware planner does not take uncertainties into account when robots work in unstructured environments. This paper addresses the robustness and reliability requirements of a navigation system using the probabilistic reasoning (POMDP) method. We compute the uncertainties of human motion in two parts: the ambiguity in path selection and the motion uncertainty along each path. Then, the human-robot co-occurrence probability is estimated by analysing two situations: conflict and obstruction. Our major contribution is the PN-POMDP framework, which coordinates the global path planner, the motion reactor and the speed controller in the context of probabilistic decision-making under multiple uncertainties. The control framework combines the objectives of goal guidance and of reducing the probability of human-robot conflict. More importantly, by considering high perceptual aliasing and other uncertainty factors, it combines probabilistic robot localization, people-tracking and human motion prediction in a natural probabilistic decision-making framework, generating robot control policies that result in efficient and polite navigation behaviours. This paper is organized as follows. After an overview of the navigation system framework in Section 2, Section 3 describes the human motion prediction method and Section 4 introduces the human-robot co-occurrence estimation in its spatial-temporal aspect. Section 5 describes the decision-making mechanism of predictive navigation and the POMDP. Finally, experimental results are reported in Section 6, followed by a discussion and conclusion that summarize the paper.
Uncertainties in predictive navigation
Firstly, the trajectories of human motion are uncertain; velocity and direction usually vary within a range when people are engaged in specific motion patterns. Secondly, localization errors are pervasive. A Simultaneous robot Localization And People-tracking (SLAP) system using global cameras and an onboard laser range finder was developed in our previous work [21]. This jointly estimates a robot's pose r_t and a person's ground-plane position (x_t, y_t, θ_t) in the global coordinate frame using two sets of particles, as shown in Figure 1. But in cluttered environments with table and chair legs, localization errors tend to deteriorate the human motion prediction. Thirdly, the control uncertainty [14] caused by wheel slip, time delay and other unexpected factors is commonly reported in the robot navigation domain. Finding optimal policies in the POMDP case is computationally intractable because the belief space is continuous and high-dimensional. The solution adopted in this work is a hybrid control structure that combines reactive motion control and probabilistic strategy selection for generating optimal navigational behaviours, as shown in Figure 2. In the system, sensory data obtained from the laser, global cameras and other sensors are processed by the SLAP module and fed to the Perception module. Human motion patterns learned by the Modelling module are also input to the Perception module, in which the future motion tendencies of people are predicted in both their long-term and short-term aspects. Then, the human and robot motion states are abstracted and three types of abstracted observations are formed, namely People's Action Observation (PAO), People-robot Relation Observation (PRO) and Robot State Observation (RSO). These abstracted observations are input to the Control module. In the Control module, the POMDP-based decision-making sub-module generates a suitable predictive navigation (PN) policy that minimizes the risk of conflict with humans. We have designed four types of action: detour, slow-down, speed-up and halt, which are explained in Section 5. In order to ensure goal-directed and predictive navigation performance, the motion controller is constructed with a two-layered architecture augmented by policies generated from the POMDP controller. The wavefront-based global path planner calculates the optimal path and the reference points along the path based on mapped obstacles. The Nearness Diagram [12] based local reactive obstacle-avoidance controller computes the actual translational and rotational velocities based on the reference points and real-time sensory data.
A more detailed illustration of the POMDP controller structure is depicted in the right part of Figure 2. The POMDP controller contains a state estimator (SE) and a policy generator. The state estimator computes the probability distribution over the belief b_t according to the latest observation o, action a and previous belief b_{t−1}. Meanwhile, the policy generator maps the belief onto an optimal behaviour of the robot, i.e., a = π(b).
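The state estimator is a standard Bayes filter over the abstracted state space. The sketch below shows the generic update b′(s′) ∝ O(s′, a, o) Σ_s T(s, a, s′) b(s); the paper's PAO/PRO/RSO observation abstractions are replaced by plain indices, and the random models are placeholders:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One step of the POMDP state estimator (SE).

    b: current belief over |S| states; a: action index; o: observation
    index; T: |S|x|A|x|S| transition model; O: |S|x|A|x|Obs| observation
    model. Returns the normalised posterior belief."""
    pred = b @ T[:, a, :]          # prediction: sum_s T(s, a, s') * b(s)
    new_b = O[:, a, o] * pred      # correction by the observation likelihood
    return new_b / new_b.sum()     # normalise

# Tiny example with 3 states, 2 actions, 2 observations.
rng = np.random.default_rng(0)
T = rng.random((3, 2, 3)); T /= T.sum(axis=2, keepdims=True)
O = rng.random((3, 2, 2)); O /= O.sum(axis=2, keepdims=True)
b = np.array([1/3, 1/3, 1/3])
print(belief_update(b, a=0, o=1, T=T, O=O))
```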
Long-term modelling of motion pattern
Based on the collection of tracking trajectories, a set of motion patterns of people is clustered hierarchically using a fuzzy K-means algorithm on the spatial and temporal information. This algorithm results in a set of M motion patterns {Ψ_1, ..., Ψ_M}. The spatial probability of the person being located at h_t, given step k of the motion pattern Ψ_m, is computed according to a Gaussian distribution and denoted p(h_t | Ψ_m^k). This evaluates the probability that the person covers the point h_t at time t, given a sequence of observations z_{1:t}; the posterior combines a normalizer, the observation likelihood of Ψ_m and two prior probability distributions.
Examples of learned human motion patterns are shown in Figure 3, which indicates that people typically move between places with important objects to manipulate: a fridge, a printer, a washing machine, etc.
Short-term motion prediction
To account for the short-term uncertainty of movement along the path of pattern Ψ_m, the variation in the velocity and heading orientation of the person is modelled.
We make the following assumptions about the movement of a person [7]: T is the time step over which a person keeps a certain velocity and heading direction; the possible ranges of his/her motion velocity and orientation are represented as bounded intervals; and he/she changes velocity and orientation only at every time step T. In this sense, the velocity (orientation) within a time step is constant, and is randomly and independently selected within the above range. Under these assumptions, the sequence of velocities (orientations) over time is a list of independently distributed random variables. This describes a common indoor motion style in which people move smoothly between two places. Firstly, the orientation variance is modelled by a fan-shaped area called the field of view, as shown in Figure 4. The field of view defines a coordinate system whose origin is the goal of movement in the next time step. Eq. 4 indicates that the larger the angular deviation, the less likely it is that the person will head in that direction.
Secondly, the velocity variance is modelled by a distribution over h. Moreover, according to the assumption that the v_i (i = 0, ..., t) follow the same but independent uniform distribution, the variables v_0, ..., v_t have the same mean and variance, and the relevant equations can be rewritten accordingly. To combine the long-term and short-term prediction, the heading orientation probability p_orien(h | h_0) is used as an exponential discount factor on the velocity probability, and the probability of the motion pattern the person is engaged in at the current position h_0 is normalized by a normalization factor over all M motion patterns. Finally, the probability of reaching h_t at time t is computed as the product of these terms.
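Since the combining equations are garbled in extraction, the following is only a hedged sketch of how the long-term (Gaussian waypoint) and short-term (orientation agreement) terms could be fused and normalised over the M patterns; the pattern structure, and the use of a simple product in place of the paper's exponential discount, are illustrative assumptions:

```python
import numpy as np

def pattern_posterior(h0, ht, t, patterns):
    """Posterior over learned motion patterns for a person observed at
    h0 and hypothesised at ht after t steps.

    patterns: list of {"path": [waypoints...], "sigma": float}. Each
    pattern contributes a Gaussian spatial term around its step-t
    waypoint (long-term part) times an orientation-agreement term
    between observed displacement and pattern direction (short-term
    stand-in for the field-of-view model)."""
    scores = []
    for p in patterns:
        mu = p["path"][min(t, len(p["path"]) - 1)]
        spatial = np.exp(-np.sum((ht - mu) ** 2) / (2.0 * p["sigma"] ** 2))
        v_obs, v_pat = ht - h0, mu - h0
        denom = np.linalg.norm(v_obs) * np.linalg.norm(v_pat) + 1e-9
        p_orien = max(float(np.dot(v_obs, v_pat)) / denom, 0.0)
        scores.append(spatial * p_orien)
    scores = np.asarray(scores)
    return scores / (scores.sum() + 1e-12)   # normalise over the M patterns

patterns = [{"path": [np.array([0.0, 0.0]), np.array([1.0, 1.0])], "sigma": 0.5},
            {"path": [np.array([0.0, 0.0]), np.array([1.0, -1.0])], "sigma": 0.5}]
print(pattern_posterior(np.array([0.0, 0.0]), np.array([0.8, 0.9]), 1, patterns))
```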
Human-robot Co-occurrence Estimation
The probability of human and robot co-occurrence is estimated in the spatial-temporal aspect according to the robot's travelling route obtained from the global path planner and the human motion prediction. In the PN-POMDP system, situations of human-robot co-occurrence are classified into two types.
The first type is human-robot motion conflict. If the person and the robot move along their respective paths at constant speeds v_h and v_r, the robot will arrive at the place P_c at time τ, and its distance to the person will then be less than L_safe. If the motion uncertainty of the person is taken into account, the probability that he/she arrives at place P_c at a future time t is computed according to Eq. 18. The second situation is human-robot motion obstruction (as shown in Figure 7). This represents a situation where the robot's path will block a human's intended trajectory, which happens to traverse a narrow passage.
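The first co-occurrence test reduces to simple kinematics, which the hedged sketch below makes explicit. The variable names (P_c, L_safe, v_h, v_r) follow the text, while the straight-line geometry is an illustrative simplification of the planned routes:

```python
import numpy as np

def conflict_at(p_h, v_h, p_r, v_r, P_c, L_safe):
    """Deterministic version of the first co-occurrence check: the
    robot reaches the crossing point P_c at time tau; a conflict is
    flagged if the person, moving at constant speed v_h along their
    path towards P_c, is then within L_safe of P_c."""
    tau = np.linalg.norm(P_c - p_r) / v_r     # robot's arrival time at P_c
    d_h = np.linalg.norm(P_c - p_h)           # person's distance to P_c
    gap = abs(d_h - v_h * tau)                # person's offset from P_c at tau
    return tau, gap < L_safe

p_h, p_r, P_c = np.array([0.0, 4.0]), np.array([0.0, 0.0]), np.array([2.0, 2.0])
print(conflict_at(p_h, 1.0, p_r, 0.8, P_c, L_safe=0.8))   # -> (3.54, True)
```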
The Elements: States, Actions and Observations
To automatically compile the POMDP model tuple (states, actions, transition model, reward, observations and observation model), it is necessary to define the action and observation uncertainties. Actions (A) are human-compliant avoidance behaviours that the robot can execute to give way to humans in a polite manner:
normal path following a_n; accelerating along the path a_u; decelerating along the path a_s; dynamic replanning for a detour a_r.
The first three actions indicate that the robot follows the planned path with only velocity changes. These actions are usually more efficient for avoiding conflict with, or obstruction of, humans. The fourth action indicates that the robot replans a new path according to the updated environmental map, incorporating the probability density function (PDF) p(h_t; t) into the occupancy grid map.
The reward R(s, a) defines the reward function that determines the immediate utility of executing action a in state s. In our system, the reward matrix is manually specified based on the criterion that behaviours ensuring more safety and politeness receive higher rewards. Nevertheless, the optimum choice of the settable parameters can be adjusted through a user-supervised learning system when a robot is installed in a new environment and performs a daily room-exploration task, as suggested by Lopez [22].
POMDP Compilation
The transition model T(s, a, s′) specifies the conditional probability distribution of transiting from state s to s′ by executing action a. O(s′, a, o) is the observation model that computes the probability of obtaining observation o (Figure 8), and the EM algorithm is employed for learning from collected data until the output parameters converge. The Randomized Point-based Value Iteration algorithm [18] is utilized to solve the above-defined POMDP model. In our system, 32 iterations and 63.578 seconds are required for offline model compilation. Figure 9 shows the errors during the iteration. The error between two successive iterations is plotted on the y-axis, with a higher value indicating a faster convergence rate. The proposed approach was validated in a real office environment of size 12 m × 7 m. An ActivMedia Peoplebot was used in the experiments. We assumed that participants in the experiment walked at a smooth speed and intended to follow certain motion patterns. The sensory system for robot localization and people-tracking consists of five stationary CCD cameras mounted on each side of the room above head level and the robot's onboard laser range finder. The environmental grid map was previously built by a SLAM algorithm with a resolution of 0.1 m. Based on the collection of tracking trajectories, typical indoor human motion patterns are learned, as presented in our previous work [23][24].
Predictive Navigation
During online predictive navigation, the robot collected laser scan data with a time period of 200 ms and updated the local grid map with a time period of 80 ms, according to the positions of detected human legs. In the PN-POMDP algorithm, th1, th2, th3 and th4 are threshold values that can be set and adjusted in the experiment.
In the first three testing scenarios, the robot and the human were initially located in the same room area within a short distance of each other (less than 5 m). In the first case, where the robot was moving towards the human, the system predicted human motion 5 seconds ahead of time. Figure 10(a) shows that the robot began to avoid possible human-robot conflict using the detour policy when it was still about 3 m away from the human. In the second scenario (Figure 10(b)), when the robot predicted that its path would intersect the predicted human trajectory from one side, it selected the slow-down action a_s. This policy was efficient because frequent replanning was avoided, and the robot could continue to move along the path at normal speed once the human had passed the predicted intersection point. To test the reliability of the algorithm, in the third scenario (Figure 10(c)) the robot followed a person through a narrow corridor. In this case, the robot predicted that it would not interfere with the human's motion as long as it did not overtake him/her. Thus the robot followed its planned path at regular speed. The fourth testing scenario involved predictive navigation between rooms. The robot and the human were initially positioned in two different rooms, and global cameras were utilized for people-tracking. In the experiment, the robot was initially positioned at place A as in Figure 11(a), and it planned to navigate through a narrow passage (passage II) to place B. In the meantime, a person intended to walk through the same passage in the opposite direction.
Before the robot approached the passage entrance, the probability of human-robot motion conflict at the entrance of the passage (place C in Figure 11(b)) was estimated. More specifically, the likelihood that the person's current motion was engaged in the motion pattern ending at place C was estimated to be as high as 97.5%. However, since the predicted occupancy probability of the grid cells within the passage was volatile, the traditional replanning method caused the robot to switch paths between two candidate routes (plan 1, a detour via passage I, and plan 2, continuing via passage II). This method made the robot move back and forth unnecessarily at the passage entrance before finally reaching the goal, requiring as many as 633 time periods. In comparison, the PN-POMDP method, supporting multiple comity policies, generated highly efficient and human-friendly behaviour. When a human-robot conflict probability within passage II was predicted, the robot drove to a free space outside the entrance of the passage and waited.
After the person had passed through the passage, the robot proceeded to cross the doorway and continued on its route. The resulting behaviour of the robot improved the navigation efficiency (only 342 time periods to reach the goal) by avoiding unnecessary repeated zigzagging and wandering before entering the passage. The pose and translational velocity of the robot during the navigation test are shown in Figure 12. As shown in Figure 12(a), the replanning method caused the robot to switch between the two candidate paths during the interval (2) to (5). In contrast, Figure 12(b) shows that the PN-POMDP method ensures smooth and efficient robot navigation. More importantly, the polite navigation behaviour is comprehensible to humans and shows full respect for the human. Figure 13 shows the experimental result in a crowded environment with three participants walking around the robot. Since the PN-POMDP method supports multiple predictive navigation policies, the robot frequently adjusted its policy according to the predicted human-robot co-occurrence situations. In fact, after raising the reward of the deceleration action a_s, the robot tended to slow down to let the human pass first. This indicates that the PN-POMDP method can feasibly be applied to service robots that work in crowded environments such as exhibition halls and museums.
Trial study
A statistical trial study was also conducted to verify the success rate of the PN-POMDP method. We invited 12 participants (eight male and four female), ranging in age between 21 and 34. 33% of them were from non-technological fields, while 67% worked in technology-related areas. The trial tests involved the different types of situation described above. In the trials, the following situations were treated as "failure": (i) the robot blocked the human's intended route of movement (subjective scoring); (ii) the robot failed to reach the goal because it got trapped or localization failed; (iii) the robot reached the goal with a time consumption as much as four times that needed in situations without humans moving around. Figure 14 shows that the PN-POMDP method achieved a higher success rate than the traditional real-time replanning method.
Conclusion
In this paper, we have presented a predictive navigation method for service robots in the POMDP framework. By learning human motion patterns and combining longterm and short-term human motion prediction, spacetime estimation of human-robot co-occurrence is achieved. In order to execute tasks in typical partially observable environments, POMDP-based probabilistic decision-making is incorporated to generate a theoretically optimal policy that allows the robot to behave in an efficient and polite manner. Thus the risk of conflict with human motion is minimized. The feasibility of the proposed methodology is validated by navigation experiments as well as user trials, in which the robot's navigational behaviour is interpreted by humans as safe, comprehensible and polite.
Although the system makes use of external cameras for human tracking, the proposed methodological framework does not rely on specific means of acquiring human motion. In situations where robots are not close to people, we suggest the utilization of global cameras to ensure seamless and reliable human tracking, which improves the performance of predictive navigation. | 4,862.2 | 2013-02-01T00:00:00.000 | ["Computer Science", "Engineering"] |
Light-Responsive and Antibacterial Graphenic Materials as a Holistic Approach to Tissue Engineering
While the continuous development of advanced bioprinting technologies is under fervent study, enhancing the regenerative potential of hydrogel-based constructs for wound dressing using external stimuli has yet to be tackled. Fibroblasts play a significant role in wound healing and tissue implants at different stages, including extracellular matrix production, collagen synthesis, and wound and tissue remodelling. This study explores the synergistic interplay between photothermal activity and nanomaterial-mediated cell proliferation. The use of different graphene-based materials (GBM) in the development of photoactive bioinks is investigated. In particular, we report the creation of a skin-inspired dressing for wound healing and regenerative medicine. Three distinct GBM, namely graphene oxide (GO), reduced graphene oxide (rGO) and graphene platelets (GP), were rigorously characterized, and their photothermal capabilities were elucidated. Our investigations revealed that rGO exhibited the highest photothermal efficiency and antibacterial properties when irradiated, even at a concentration as low as 0.05 mg/mL, without compromising human fibroblast viability. Alginate-based bioinks loaded with human fibroblasts were employed for bioprinting with rGO. The scaffold did not affect fibroblast survival for 3 days after bioprinting, as cell viability was unimpaired. Remarkably, the inclusion of rGO did not compromise the printability of the hydrogel, ensuring the successful fabrication of complex constructs. Furthermore, the presence of rGO in the final scaffold continued to provide the benefits of photothermal antimicrobial therapy without detrimentally affecting fibroblast growth. This outcome underscores the potential of rGO-enhanced hydrogels in tissue engineering and regenerative medicine applications. Our findings hold promise for developing game-changing strategies in 4D bioprinting to create smart and functional tissue constructs with high fibroblast proliferation and promising therapeutic capabilities in drug delivery and bactericidal skin-inspired dressings.
Determination of photothermal efficiency values
The efficiency values were calculated by following the protocol described by Feng et al. [28]:

η = [hS(T_max − T_surr) − Q_dis] / [I(1 − 10^(−A_808))]   (1)

where η is the photothermal conversion efficiency, h the heat transfer coefficient, S the surface area of the sample cuvette, T_max the steady-state temperature, T_surr the temperature of the surroundings, Q_dis the heat associated with the light absorbance of the solvent alone, I the incident laser power, and A_808 the absorbance of the nanomaterials at a wavelength of 808 nm. Q_dis is defined through the following equation (2):

Q_dis = m c ΔT / t   (2)

where m is the mass of the water solution, c the heat capacity of water, ΔT the increase in water temperature, and t the duration of the irradiation. The product hS is defined through the sample system time constant τ_s (Eqs. (3)-(4)):

hS = m c / τ_s

Also, as reported in the literature [28], the following relation can be established (5):

t = −τ_s ln θ,  with θ = (T − T_surr) / (T_max − T_surr)

Therefore, the time constant is obtained from the slope of the linear fit when plotting the cooling-time data versus −ln θ; hS then follows from the obtained τ_s, the mass of the solution and the heat capacity of water.
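Numerically, the protocol amounts to one linear fit plus Eq. (1). The sketch below, with variable names mirroring Eqs. (1)-(5) and a synthetic cooling curve as input, is a plausible implementation rather than the authors' exact script:

```python
import numpy as np

def photothermal_efficiency(t_cool, T_cool, T_max, T_surr, m, c, I, A808, Q_dis):
    """Fit the system time constant tau_s from the cooling stage via
    t = -tau_s * ln(theta), theta = (T - T_surr)/(T_max - T_surr);
    then hS = m*c/tau_s and
    eta = [hS*(T_max - T_surr) - Q_dis] / [I*(1 - 10**(-A808))].
    Units assumed: seconds, kelvin (or deg C differences), grams,
    J/(g K) and watts."""
    theta = (T_cool - T_surr) / (T_max - T_surr)
    tau_s = np.polyfit(-np.log(theta), t_cool, 1)[0]   # slope = tau_s
    hS = m * c / tau_s
    return (hS * (T_max - T_surr) - Q_dis) / (I * (1 - 10 ** (-A808)))

# Synthetic cooling curve: 45 C steady state, 25 C surroundings, tau ~120 s.
t = np.linspace(1, 300, 30)
T = 25 + (45 - 25) * np.exp(-t / 120)
print(photothermal_efficiency(t, T, 45.0, 25.0, 1.0, 4.18, 2.0, 0.9, 0.05))
```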
Semi-quantification of printability
The printability values were calculated by following the protocol described by L. Ouyang et al. [29]. When the bioink gels ideally, the extruded filament shows a smooth, consistently sized morphology, forming regular grids and square holes in the constructs. In contrast, under-gelation leads to a more liquid-like state, causing the upper layer to merge with the lower layer and creating roughly circular holes. The circularity (C) of an enclosed area is defined as

C = 4πA / L²

where L is the perimeter and A the area. Circles have the highest circularity (C = 1); the closer C is to 1, the more circular the shape. The circularity of a square is π/4. We define the bioink printability (Pr) for a square shape using the following function:

Pr = π / (4C) = L² / (16A)

Under ideal gelation, i.e. perfect printability, the interconnected channels in the constructs exhibit a square shape, with a Pr value of 1. A higher Pr value indicates a greater degree of bioink gelation, while a lower Pr value suggests a smaller one. The Pr value for each bioprinted scaffold was determined by analysing optical images in ImageJ to calculate the perimeter and area of the interconnected channels (n = 3).

Table S1. Elemental analyses of rGO and GP.
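A minimal implementation of the Pr computation, matching the formulas above (the perimeter L and area A would come from the ImageJ measurements):

```python
import math

def printability(perimeter, area):
    """Pr for one interconnected channel of the printed grid, following
    Ouyang et al.: C = 4*pi*A/L**2, Pr = pi/(4*C) = L**2/(16*A).
    Pr = 1 for a perfect square pore; Pr < 1 indicates under-gelation
    (rounder pores), Pr > 1 over-gelation."""
    return perimeter ** 2 / (16.0 * area)

# A unit square pore gives Pr = 1; a circle of radius 1 gives
# Pr = (2*pi)**2 / (16*pi) = pi/4 ~ 0.785.
print(printability(4.0, 1.0), printability(2 * math.pi, math.pi))
```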
Figure S1. Representative TEM images of (a) GO, (b) rGO and (c) GP nanomaterials.
Figure S5. Representative images of bioprinted Alg (left panel) and Alg_rGO (right panel) scaffolds before irradiation (top panel) and after irradiation with 808 nm light for 10 min at a power density of 0.5 W/cm² (bottom panel).
Figure S6. Representative pictures from the Live/Dead experiments of hFBs embedded into non-irradiated Alg and Alg_rGO hydrogels, at incubation time points of 0 h, 24 h and 72 h.
Figure S7. Representative pictures from the Live/Dead experiments of hFBs embedded into irradiated Alg and Alg_rGO hydrogels, at incubation time points of 0 h, 24 h and 72 h. | 1,101.8 | 2024-06-07T00:00:00.000 | ["Materials Science", "Engineering", "Medicine"] |
The effect of underlying inflammation on iron metabolism, cardiovascular risk and renal function in patients with type 2 diabetes
Abstract Aim To investigate the impact of inflammation on iron metabolism, cardiovascular risk and renal function in type 2 diabetes (T2D). Methods A total of 50 patients with T2D were included in this study. The patients were stratified into two groups based on their levels of C‐reactive protein (CRP), namely normal and high levels (n = 25/group). All laboratory tests were measured using standardised methods. Results Fasting plasma glucose levels were elevated in patients with high CRP when compared to those with normal levels (p = 0.0413). Total serum iron levels were lower in patients with high CRP levels (12.78 ± 3.50) when compared to those with normal levels (15.26 ± 4.64), p = 0.0381. However, ferritin and transferrin levels were comparable between the groups (p > 0.05). The mean cell volume (MCV) in the high CRP group was lower (87.66 ± 3.62) than the normal level group (90.79 ± 4.52), p = 0.0096, whilst the lipograms were similar (p > 0.05). The estimated glomerular filtration rate (eGFR) was lower in the high CRP group (98.06 ± 11.64) than the normal level group (104.7 ± 11.11), p = 0.046. Notably, CRP levels were negatively associated with serum iron levels (r = –0.38, p = 0.0061), MCV (r = –0.41, p = 0.0031), potassium (r = –0.37, p = 0.0086) and sodium levels (r = –0.28, p = 0.0471). Regression analyses showed that only CRP (β = –0.16, standard error [SE]: 0.06, p = 0.0125) and sodium (β = 0.51, SE: 0.25, p = 0.0434) levels contributed significantly to the prediction of serum iron levels. Conclusion Underlying inflammation in T2D is associated with increased incidence of hypertension and reduced levels of serum iron, MCV and renal function. Although there was no apparent clinical anaemia or renal dysfunction in these patients, mitigating inflammation may be effective in circumventing the ultimate development of iron deficiency anaemia and chronic kidney disease in T2D.
INTRODUCTION
Type 2 diabetes (T2D) is amongst the leading non-communicable diseases currently placing the most significant burden on the healthcare sector worldwide [1]. The recent International Diabetes Federation report estimated the global prevalence of T2D at 9.3% [2], with approximately two-thirds of cases being from low- to middle-income countries. This high incidence of diabetes is ascribed to increasingly sedentary lifestyles and the consumption of unhealthy diets, which both promote obesity and insulin resistance [3]. Notably, poor dietary choices are associated with iron deficiency, dyslipidaemia and cardiovascular disease (CVD) [4,5].
T2D is a chronic inflammatory disorder that is linked with altered iron metabolism, with the iron stores strongly associated with glucose control [6,7]. However, the exact pathological mechanisms that could explain altered iron metabolism in T2D remain elusive and are poorly understood. Nonetheless, obesity, a major risk factor for T2D and the use of glucose-lowering drugs such as metformin have been implicated in dysmetabolic iron syndromes [8,9]. In fact, anaemia caused by both iron deficiency and iron overload has been reported in patients living with T2D [10,11]. A reciprocal relationship between low-grade inflammation, obesity and iron disorders exists [12]. Low-grade inflammation modulates the synthesis and action of hepcidin and erythropoietin, which both regulate iron metabolism denoted by the serum iron profiles [13]. For instance, aberrant levels of total serum iron, iron stores and transferrin, an iron transporter protein modulated by these regulators have been described in patients with T2D [14][15][16]. This alteration in iron metabolism usually leads to anaemia, a symptom that is closely associated with dyslipidaemia [17,18], one of the major hallmarks of T2D and CVD [19,20]. Although the exact inferences on the relationship between iron metabolism and poor glucose control are controversial and influenced by various factors, it is evident that they are closely connected and need further exploration.
Type 2 diabetes is an independent risk factor for chronic kidney disease [21]. In fact, over 40% of patients with T2D develop diabetic nephropathy (DN), a condition characterised by proteinuria or reduced renal function, during their lifetime [22]. The high incidence of DN in T2D has resulted in excessive mortality in these patients, despite intensive therapies that alleviate its risk factors such as hyperglycaemia, hypercholesterolaemia and hypertension [23]. Notably, a recent body of evidence has suggested the involvement of inflammation in the pathogenesis of DN and has sparked interest in the exploration of anti-inflammatory drugs as a therapeutic strategy to prevent the manifestation of DN in these patients [24][25][26]. We therefore questioned whether the degree of inflammation can be used to stratify DN risk in patients with T2D. Thus, this study first aimed to investigate the effect of inflammation on iron profiles and their associated haematological indices in patients with T2D. Second, we intended to assess cardiovascular risk and renal function in T2D, as well as to determine any associations between inflammatory indices, renal function tests, iron and lipid profiles in patients with T2D.
Laboratory measurements
Blood samples for analysis were collected by a trained nurse. We also assessed white cell and platelet counts and determined the levels of total protein, globulins and albumin, a negative acute-phase protein [28], as surrogate markers of inflammation, using the Alinity c analyser (Abbott). To assess the impact of inflammation on renal function, we measured the levels of potassium, sodium, urea and creatinine using the Alinity c analyser and further calculated the estimated glomerular filtration rate (eGFR) using the Modification of Diet in Renal Disease (MDRD) study equation [29]. Finally, to stratify the cardiovascular risk in the included patients, we performed lipid measurements (cholesterol levels and triglycerides) using the Alinity c analyser.
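For concreteness, the eGFR step can be sketched as below. The paper does not reproduce the equation or state which MDRD variant was used, so the four-variable IDMS-traceable constants, the unit conversion and the example values are assumptions.

```python
def egfr_mdrd(creatinine_umol_l: float, age_years: float,
              female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) via the four-variable MDRD study
    equation (IDMS-traceable constants assumed; not stated in the paper)."""
    scr_mg_dl = creatinine_umol_l / 88.4  # serum creatinine: umol/L -> mg/dL
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Illustrative example: 50-year-old Black female, creatinine 70 umol/L
print(round(egfr_mdrd(70.0, 50.0, female=True, black=True), 1))
```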
Statistical methods and data analysis
The sample size was calculated using G*Power software (Version 3.1.9.2) based on the effect size of the primary outcome reported in a previous study [30]. The following assumptions were used in determining the minimum number of required participants: a medium effect size (d = 0.898071), α error probability = 0.05, power (1 − β) = 0.80 and an allocation ratio of 1:1. The D'Agostino & Pearson test was performed for normality testing. A chi-square test was used to test for relationships between categorical variables. Data were reported as mean ± SD or as median and interquartile range, depending on the data distribution.
For parametric data, the unpaired two-tailed Student's t-test was used; in cases of unequal variance, Welch's correction was applied.
The Mann-Whitney U test was used to compare non-parametric data.
Bivariate correlations were performed using Spearman's coefficient, and multivariate regression analysis was conducted to explain the relationship between total serum iron and CRP, FPG, eGFR, potassium and sodium levels.
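A minimal sketch of the group comparisons and associations described above, using SciPy; the arrays are synthetic stand-ins loosely matched to the reported means and standard deviations, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
iron_normal_crp = rng.normal(15.26, 4.64, 25)  # synthetic serum iron, normal-CRP group
iron_high_crp = rng.normal(12.78, 3.50, 25)    # synthetic serum iron, high-CRP group

# Normality testing (D'Agostino & Pearson)
print(stats.normaltest(iron_normal_crp))

# Unpaired two-tailed t-test; equal_var=False applies Welch's correction
print(stats.ttest_ind(iron_normal_crp, iron_high_crp, equal_var=False))

# Non-parametric alternative: Mann-Whitney U test
print(stats.mannwhitneyu(iron_normal_crp, iron_high_crp, alternative="two-sided"))

# Bivariate association via Spearman's coefficient
crp = rng.normal(8.0, 3.0, 50)
iron = 20.0 - 0.4 * crp + rng.normal(0.0, 2.0, 50)  # synthetic negative association
print(stats.spearmanr(crp, iron))
```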
RESULTS
A total of 50 adult patients with T2D were included in this study, 25 with normal and 25 with high CRP levels. The demographic and clinical characteristics of the included participants are shown in Table 1. The groups had a similar age distribution, and the patients were from a similar socio-economic and ethnic background, as they were recruited from the same community. All included patients were Black (African), with a mean age of 50.16 ± 12.72 years and a male-to-female ratio of 0.43.
Clinical parameters and glucose parameters
There were no significant differences in body mass index, systolic blood pressure, diastolic blood pressure and disease duration between the two groups (p > 0.05) (Table 1).
Inflammatory profiles
The levels of CRP were used as the stratification factor to group the patients based on their inflammatory status. The ESR (p = 0.0114), total protein (p = 0.0160) and globulin levels (p = 0.0003) were elevated in the patients with T2D and high CRP levels when compared to those with normal CRP levels (Table 1). However, there were no significant differences in the white cell and platelet counts, or in albumin, the negative acute-phase reactant, between the two groups (p > 0.05) (Table 1).
Lipid profile levels
Dyslipidaemia is closely associated with increased cardiovascular risk in patients with T2D [31]. Therefore, we measured lipograms in patients with T2D. The levels of triglycerides, total cholesterol (Tc), low-density lipoprotein (LDL)-c, high-density lipoprotein (HDL)-c and the HDL/cholesterol ratio were comparable between the two groups (p > 0.05) (Table 1).
Iron profile levels and red blood cell indices
In order to assess the impact of inflammation on iron metabolism, we measured the iron profiles of the included patients. Total serum iron levels were lower in patients with high CRP levels, whereas ferritin and transferrin levels were comparable between the two groups (Figure 1A-C; Table 2).
We further measured haematological indices that are closely associated with iron metabolism. Although the red blood cell count and haemoglobin levels were comparable between the two groups (p > 0.05) (Figure 1D-E), the mean cell volume (MCV) in patients with T2D and high CRP levels was lower (87.66 ± 3.62) than in patients with normal CRP levels (90.79 ± 4.52) (p = 0.0096; Figure 1F). All other red blood cell indices were comparable between the two groups (p > 0.05; Table 2).
Renal function tests
DN is one of the most prevalent T2D-associated complications that promote the pathogenesis of chronic kidney disease [32,33].
Therefore, we assessed kidney function tests in the included patients and further assessed whether renal impairment is exacerbated by underlying inflammation. Notably, patients with T2D and high CRP levels showed reduced levels of potassium (p = 0.0042), sodium (p = 0.0017) and eGFR MDRD (p = 0.0461) when compared to those with normal CRP levels (Table 3). Despite these differences, the levels were within the normal range. The levels of creatinine (p = 0.1066) and urea were comparable between the two groups.
Correlation and regression analysis of glucose levels, CRP and iron profiles
To determine whether there are any associations between glucose, inflammation, iron profiles and renal function in patients with T2D, we performed correlation and multivariate regression analyses. Notably, the CRP levels were negatively associated with total serum iron levels (r = −0.38, p = 0.0061), MCV (r = −0.41, p = 0.0031), potassium (r = −0.37, p = 0.0086) and sodium levels (r = −0.28, p = 0.0471). In the regression analysis, only CRP (β = −0.16, SE: 0.06, p = 0.0125) and sodium (β = 0.51, SE: 0.25, p = 0.0434) levels significantly influenced total iron levels, whilst eGFR levels could not be predicted by CRP, total iron levels, FPG or sodium levels (p > 0.05; Table 3).

Figure 1: A comparison of iron profiles between patients with normal and high C-reactive protein levels. The levels of serum iron (A) and red cell mean volume (F) were significantly lower in patients with underlying inflammation when compared to those without. However, comparable levels of ferritin (B) and transferrin (C), as well as red cell count (D) and haemoglobin (E), were observed between the two groups. All results are expressed as mean ± standard deviation.
DISCUSSION
This study aimed to investigate the impact of inflammation on iron metabolism, renal function and cardiovascular risk in patients with T2D. Our results showed reduced total serum iron levels and mean cell volume in patients with underlying inflammation.
Notably, treatment with metformin, insulin or a combination of both failed to normalise glucose levels in all patients, and the observed hyperglycaemia was more pronounced in patients with T2D and high CRP levels. Moreover, all included patients had high blood pressure, and overt cases of hypertension were more prevalent in T2D with high CRP levels. Thus, these findings highlight the impact of inflammation in impairing glucose control and increasing cardiovascular risk. In the context of the latter, our findings did not show any cardio-protective effects of metformin, a first-line oral anti-hyperglycaemic drug. Although a negative association between iron levels and lipid profiles (triglycerides, Tc and LDL-c) has been previously described [34], the current study found no associations between these parameters. Finally, the levels of sodium, potassium and eGFR were reduced in patients with underlying inflammation.

Table 2: Iron profiles and associated indices in patients with type 2 diabetes (T2D; n = 50).

Low-grade inflammation in T2D modulates the synthesis of hepcidin, one of the important regulators of iron metabolism. Hepcidin maintains iron homeostasis through the inhibition of iron absorption and release from the intestines and macrophages, as well as its subsequent transportation to the bone marrow for erythropoiesis [35]. The synthesis of hepcidin is dependent on the inflammatory status and the oxygen-carrying capacity of the body [12,36]. In that context, the increased release of interleukin (IL)-6 during inflammation induces the activation of Janus kinase (JAK)/signal transducer and activator of transcription 3 (STAT3) signalling, which activates the transcription of the HAMP gene [37]. In turn, an increase in body iron stores activates the bone morphogenetic protein/SMAD (mothers against decapentaplegic) transduction pathway, which induces the downstream activation of the HAMP gene and thereby the translation of hepcidin [38]. The synthesis and action of hepcidin are also modulated by erythropoietin, a hormone that is released by the kidney during anaemic hypoxia to initiate red cell synthesis [13,39]. The dysregulation of these important hormones alters iron metabolism, leading to the manifestation of anaemia in T2D [13]. Although the patients included in this cohort were not anaemic, the reduced total serum iron levels and MCV suggest the early onset of microcytic iron deficiency anaemia in patients with T2D and underlying inflammation. Therefore, alleviating inflammation in T2D could aid in improving iron metabolism in patients with significant underlying inflammation, since CRP and ESR levels were negatively associated with decreased total serum iron and MCV. Apart from the aberrant expression of iron regulator proteins, the reduction in total serum iron levels may be due to side effects of metformin treatment, since it is widely accepted to cause vitamin B12 malabsorption and its subsequent deficiency in circulation, resulting in impaired erythropoiesis [40].
It is widely acknowledged that obesity alters glucose metabolism and predisposes patients with T2D to develop CVD [41]. In fact, about a third of patients with T2D have CVD [42]. Notably, in our study, patients with T2D and high CRP levels presented with class I obesity and poor glucose control, as denoted by a body mass index > 30 kg/m² and elevated FPG levels, respectively. These findings further highlight the effect of inflammation in aggravating poor glucose control in T2D via the activation of various pathways and the impairment of insulin signalling, as previously reviewed [43]. This association has been ascribed to exacerbated inflammation and altered lipid metabolism, which cause atherosclerosis and hypertension [44].
Although blood pressure was comparable between the two groups, it is evident that hypertension was more prevalent in patients with T2D and high CRP levels, further highlighting the impact of obesity-associated inflammation on cardiovascular risk. Therefore, the use of anti-inflammatory drugs such as low-dose aspirin in patients with T2D is important in reducing cardiovascular risk, as previously described [45].
In addition to other factors associated with poor glucose control and obesity, dyslipidaemia promotes the initiation of atherosclerosis and arterial thrombosis, which are both major risk factors for the pathogenesis of CVD in patients with T2D [46]. Although triglyceride and cholesterol levels were comparable between the groups, elevated triglycerides, Tc and LDL-c coupled with reduced HDL-c levels in patients with T2D are associated with increased cardiovascular risk [47]. Therefore, cholesterol-lowering drugs such as statins are recommended for the primary prevention of CVD in patients with T2D [48].
DN is closely associated with systemic inflammation mediated by exacerbated activation of the JAK/STAT signalling pathway, the transcription factor nuclear factor-κB and inflammatory cytokines [24,49,50]. The resulting pro-inflammatory milieu promotes the migration and infiltration of immune cells, particularly macrophages, into renal tissue [49]. Once activated, macrophages release pro-inflammatory cytokines such as IL-1, tumour necrosis factor-α and IL-6, which together induce renal hypertrophy [25]. In addition, IL-6 alters the permeability of the glomerular endothelium and thickens the glomerular basement membrane, resulting in reduced GFR [25,50].
CONCLUSION
Iron metabolism is significantly influenced by the inflammatory status in patients with T2D. As such, reduced total serum iron levels and MCV are features of underlying inflammation in T2D. Moreover, the presence of underlying inflammation in these patients is closely associated with an increased incidence of hypertension and reduced renal function. Therefore, the amelioration of inflammation in patients with T2D may be an effective intervention to lower cardiovascular risk and circumvent the ultimate development of iron deficiency anaemia and chronic kidney disease in these patients.
ACKNOWLEDGEMENTS
We would like to thank Sr Helen Natanael and the clinical staff at
CONFLICT OF INTEREST
The authors declare no conflict of interest.
| 3,763.2 | 2021-07-06T00:00:00.000 | ["Medicine", "Biology"] |
Impact of green finance on carbon intensity: empirical research based on a dynamic spatial Durbin model
Abstract
Green finance is of great significance in improving the ecological environment and achieving the goals of energy conservation and emission reduction. In order to explore the influence of green finance on carbon intensity, four indicators of green credit, green securities, green insurance and green investment are adopted to construct the green finance development index in this paper. Based on the panel data of 30 provinces in China from 2009 to 2019, a dynamic spatial Durbin model is constructed and the method of partial differential matrix is selected to analyze the influence of green finance on carbon intensity in the short and long terms. The empirical results show that (1) the development of green finance in a local area has a positive effect on the reduction of carbon intensity; (2) with a significant spatial spillover effect on carbon intensity, green finance can reduce the carbon intensity of adjacent areas and promote the development of a low-carbon economy; (3) dynamic test results prove that, in terms of both the direct effect and the spatial spillover effect, green finance has a greater long-term effect on carbon intensity.
Introduction
Human survival has been seriously threatened by global warming. China, as the world's largest CO2 producer, has been actively promoting the development of a low-carbon economy to reduce carbon emissions. China has proposed to reach its CO2 emission peak by 2030 and to achieve carbon neutrality by 2060, indicating that China's ecological civilization construction will focus on carbon reduction to promote a comprehensive green transformation of economic and social development (Yang 2016; Tang 2021). As a policy framework system for environmental protection, green finance has been examined in a growing body of studies. Results showed that the improvement of the green finance development index and the increase in non-fossil energy utilization contributed to the reduction of carbon emission intensity. Gianfrate and Peri (2019) believed that the issuance of green bonds by governments was important to mobilize financial resources for the achievement of carbon reduction targets, and Glomsrød and Wei (2018) offered related evidence. Therefore, in this paper, green finance development at the regional level is researched.
Based on four dimensions and six indicators covering green credit, green securities, green insurance and green investment, data from 30 provinces in China are selected to construct a comprehensive evaluation system of green finance. The concept of carbon intensity is adopted to represent the carbon emissions per unit of GDP. In addition, given the time-lag and spatial-lag characteristics of carbon emissions, the dynamic spatial Durbin model is adopted to measure the impact of green finance development on the carbon intensity of local and adjacent areas from both short-term and long-term perspectives. At lower carbon intensities, industry has higher productivity efficiency, indicating that it has entered a mode of low-carbon economic development.
Carbon intensity is defined as follows:

C_it = CE_it / GDP_it

where CE_it denotes the carbon emissions of province i in year t and GDP_it denotes its gross domestic product, so that C_it measures the carbon emissions per unit of GDP. The entropy weighting method is adopted to select six indicators in the four dimensions of green credit, green securities, green insurance and green investment for the construction of a comprehensive green finance evaluation system in this paper.
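The entropy weighting step can be sketched as follows; the random indicator matrix, the min-max normalisation and the weighted aggregation into an index are illustrative assumptions, since the paper's exact formulas are not reproduced in this excerpt.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weights for an (observations x indicators) matrix,
    assuming min-max normalisation of positive-direction indicators."""
    n, _ = X.shape
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    P = (Z + 1e-12) / (Z + 1e-12).sum(axis=0)     # proportion of each observation
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)  # information entropy per indicator
    d = 1.0 - e                                   # degree of divergence
    return d / d.sum()                            # normalised weights

rng = np.random.default_rng(1)
X = rng.random((330, 6))   # 30 provinces x 11 years, 6 indicators (illustrative)
w = entropy_weights(X)
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
green_finance_index = Z @ w   # weighted aggregation into a composite index
print(w.round(3), green_finance_index[:3].round(3))
```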
(1) Regional economic development level (Redl). People's demand for environmental quality and their awareness of environmental protection increase with the improvement of living standards, resulting in a reduction of local carbon intensity (Luo et al. 2017). In this paper, per capita GDP is selected as the measure of the regional economic development level.
(2) Urbanization level (Url). As urbanization rates increase, natural gas gradually replaces coal as the main energy consumed, thus reducing carbon intensity. In this paper, the ratio of urban population to total population is adopted to measure the urbanization level; the variables are summarized in Table 1.
The following dynamic spatial Durbin model is constructed based on the above variables:

C_it = τ C_{i,t-1} + η (WC)_{i,t-1} + ρ (WC)_{it} + β GF_{it} + θ (W GF)_{it} + φ X_{it} + μ_i + ν_t + ε_{it}

where C_{i,t-1} is the time-lagged term of carbon intensity; (WC)_{i,t-1} is the time-and-space-lagged term of carbon intensity; (WC)_{it} is the spatially lagged term of carbon intensity; and ρ is the spatial autoregressive coefficient. The regression results show that the spatial autoregressive coefficient ρ is 0.101, which is significant at the 10% level. This suggests that an increase in local carbon intensity can lead to an increase in the carbon intensity of adjacent areas. In addition, given the characteristics of time lag and spatial lag, local carbon intensity is influenced by that of the local and adjacent regions in the previous period. As a result, carbon intensity has a certain "cumulative" effect. The regression results also show that the estimated coefficient for the impact of green finance on carbon intensity is -0.396, which is significant at the 1% level, indicating that the development of green finance can reduce carbon intensity. The results of the spatial spillover effect are crucial to the spatial Durbin model. Therefore, the spatial spillover effect is analyzed and divided into a long-term effect and a short-term effect in this paper.
According to Elhorst (2014), the basic form of the dynamic spatial Durbin model given above can be translated into its reduced form, from which the direct effect and spatial spillover effect of X on Y can be solved by partial differential matrix operations. Compared with the static spatial Durbin model, which has only a long-term effect, the dynamic spatial Durbin model has both short-term and long-term effects. As a result, at a particular time t, the matrix of partial derivatives of the expected value of Y with respect to the values of X from spatial units 1 to N can be written as:

[∂E(Y)/∂x_{1k}, ..., ∂E(Y)/∂x_{Nk}] = (I_N - ρW)^{-1} (β_k I_N + θ_k W)

in the short term, and as:

[∂E(Y)/∂x_{1k}, ..., ∂E(Y)/∂x_{Nk}] = [(1 - τ)I_N - (ρ + η)W]^{-1} (β_k I_N + θ_k W)

in the long term, where the average value of the diagonal elements is the direct effect, while the average value of the row sums (or column sums) of the non-diagonal elements is the spatial spillover effect, representing the influence of X in a specific spatial unit on Y in other spatial units (LeSage 2009). According to the above theory of partial differential matrix operations, the direct effect and spatial spillover effect of green finance on carbon intensity in the short and long term can be calculated, and the results are provided in Table 6. It can be seen from Table 6 that:
(1) The decomposition results show that the direct effect, spatial spillover effect and total effect of green finance have a significant negative correlation with carbon intensity in both the short term and the long term, indicating that the development of green finance can reduce carbon intensity in both local and adjacent areas.
(2) The direct effect of per capita GDP is negative in the short term, and the spatial spillover effect is not significant. However, in the long term, both the direct and indirect
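As a numerical illustration of the decomposition above, the sketch below evaluates the short-term and long-term partial-derivative matrices using the reported ρ = 0.101 and β = -0.396; the ring-shaped weight matrix and the θ, τ and η values are illustrative assumptions.

```python
import numpy as np

def sdm_effects(W, beta, theta, rho, tau=0.0, eta=0.0, long_run=False):
    """Direct and spillover effects of one regressor in a dynamic SDM.
    Short run: (I - rho*W)^(-1) (beta*I + theta*W)
    Long run:  ((1 - tau)*I - (rho + eta)*W)^(-1) (beta*I + theta*W)"""
    N = W.shape[0]
    I = np.eye(N)
    A = ((1.0 - tau) * I - (rho + eta) * W) if long_run else (I - rho * W)
    S = np.linalg.solve(A, beta * I + theta * W)
    direct = np.mean(np.diag(S))                     # average diagonal element
    spillover = np.mean(S.sum(axis=1) - np.diag(S))  # average off-diagonal row sum
    return direct, spillover

# Illustrative row-standardised "ring" contiguity matrix for N = 30 provinces
N = 30
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

print(sdm_effects(W, beta=-0.396, theta=-0.10, rho=0.101))  # short term
print(sdm_effects(W, beta=-0.396, theta=-0.10, rho=0.101,
                  tau=0.2, eta=0.05, long_run=True))        # long term
```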
A geographical distance spatial weight matrix is adopted in this paper to test the robustness of the results. The regression results show that the dynamic SDM has the best fitting effect with the utilization of the geographical distance spatial weight matrix.
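A geographical-distance weight matrix of this kind is conventionally built from inverse pairwise distances and then row-standardised; a minimal sketch follows, with random stand-in coordinates in place of the provincial capitals.

```python
import numpy as np

rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 1000.0, size=(30, 2))  # stand-in capital coordinates (km)

diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))         # pairwise Euclidean distances

W = np.zeros_like(dist)
off_diag = ~np.eye(30, dtype=bool)
W[off_diag] = 1.0 / dist[off_diag]               # inverse-distance weights
W /= W.sum(axis=1, keepdims=True)                # row-standardisation
print(W.sum(axis=1)[:5])                         # each row sums to 1
```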
There is a significant negative correlation between green finance and carbon intensity, as shown in Table 7.

Conclusions

Based on the panel data of 30 provinces in China, a dynamic spatial Durbin model is constructed and the method of partial differential matrix is adopted to analyze the influence of green finance on carbon intensity. The main conclusions are as follows:
(1) The development of green finance has a spatial spillover effect and can reduce carbon intensity in both local and adjacent areas.
(2) Green finance has a more significant influence on carbon intensity, in terms of both the direct effect and the spatial spillover effect, in the long term.
(3) Economic development, industrial structure and foreign investment all have a negative influence on carbon intensity. However, the spatial spillover effect of these factors is not obvious.

Competing interests
The authors declare that they have no conflict of interest.
| 1,863.4 | 2022-02-18T00:00:00.000 | ["Economics"] |
Durability and Performance of Encapsulant Films for Bifacial Heterojunction Photovoltaic Modules
Energy recovery from renewable sources is a very attractive and sometimes challenging issue. To recover solar energy, the production of photovoltaic (PV) modules has become a prosperous industrial reality. An important material for the production and correct functioning of PV modules is the encapsulant, which must offer good performance and durability. In this work, accurate characterizations of the performance and durability, in terms of photo- and thermo-oxidation resistance, of encapsulants based on PolyEthylene Vinyl Acetate (EVA) and PolyOlefin Elastomer (POE), containing appropriate additives, before (pre-) and after (post-) the lamination process have been carried out. To simulate industrial lamination processing conditions, both EVApre-lam and POEpre-lam sheets have been subjected to prolonged thermal treatment under high pressure. For an accurate characterization, differential scanning calorimetry, rheological and mechanical analysis, and FTIR and UV-visible spectroscopy analyses have been performed on pre- and post-laminated EVA and POE. The durability, in terms of photo- and thermo-oxidation resistance, of pre-laminated and post-laminated EVA and POE sheets has been evaluated upon UVB exposure and prolonged thermal treatment, and the progress of degradation has been monitored by spectroscopic analysis. All obtained results agree that the lamination process has a beneficial effect on the 3D-structuration of both EVA and POE sheets, and after lamination the POE shows enhanced rigidity and appropriate ductility. Finally, although both EVA and POE can be considered good candidates as encapsulants for bifacial PV modules, the POE sheets seem to show a better resistance to oxidation than the EVA sheets.
Introduction
Today, energy recovery from renewable sources and processes with lower environmental and human-health impacts, together with the gradual phase-out of traditional fossil fuel sources due to their high carbon dioxide and pollutant production, are necessary challenges. Energy recovery from sunlight, wind and tides is therefore extremely attractive, and specifically, the development of solar photovoltaic (PV) devices for efficient energy recovery is one of the most important research fields. As documented, the energy demand increases continuously and is expected to reach around 778 exajoules (EJ) by 2035 [1].
Currently, 3SUN-ENEL Green Power (Catania, Italy) develops a new innovative device for efficient energy recovery; in particular, a highly reliable bifacial glass-glass heterojunction PV module is shown in Figure 1a,b. This innovative heterojunction technology, combining amorphous and crystalline silicon, offers high performance and efficiency in energy recovery, even in extreme climatic conditions [2]. An important issue in PV module construction and assembly is the use of appropriate encapsulant materials that can efficiently protect the active PV elements, ensuring high device performance and durability [3,4]. The encapsulant polymer-based materials must efficiently protect PV modules against humidity, oxygen and other gases, must be transparent and flexible, and must have good adhesion to glass and solar cells [5][6][7][8]. Different encapsulant materials, such as polydimethylsiloxane (PDMS), poly-ethylene vinyl acetate (EVA), polyvinyl butyral (PVB), thermoplastic polyolefins (TPO) and polyolefin elastomers (POE), have been considered suitable for industrial purposes [9][10][11][12][13]. Considering the balance between costs and performance, the best polymer material as a PV encapsulant is EVA, and to further improve its environmental resistance, EVA is added with crosslinking agents and appropriate stabilizers [13]. Nevertheless, EVA degrades upon solar exposure even when crosslinking agents and stabilizers are used [14][15][16]. As documented, EVA degradation proceeds with acetic acid development, and the latter leads to encapsulant yellowing, compromising the PV module function [17][18][19].
In a compatibility assessment of commercial EVA, POE, TPO and ionomer films as encapsulants for bifacial heterojunction PV modules, 3SUN researchers and partners highlight that polyolefin elastomers are more compatible with heterojunction technology than the other commercial materials considered. Assembling full-size (72-cell) modules, no failures induced by the POE encapsulant were observed after 3000 h in damp-heat conditions, 600 thermal cycles and a sequential test using 60 kWh/m² exposure [2]. Moreover, the paper published by Baiamonte et al. [20] proposes the formulation of encapsulants for bifacial heterojunction PV modules based on blends containing poly-ethylene vinyl acetate and polyolefin, i.e., EVA/PO, with a crosslinking agent and stabilizers such as a UV absorber, an antioxidant and a metal deactivator. All obtained results suggest that the EVA/PO = 75/25 wt/wt% blend, containing the crosslinking agent and stabilizers, shows better mechanical behavior, optical properties and durability than neat EVA, suggesting a beneficial effect of the polyolefin presence at low amounts. Besides, the photoxidation resistance of the EVA/PO = 75/25 wt/wt% blend containing the crosslinking agent and stabilizers is very similar to that experienced by neat EVA, highlighting that this blend is a good candidate as an encapsulant material for bifacial PV modules.
In this work, the properties and performance of commercial EVA and POE sheets, before (pre-) and after (post-) lamination, as appropriate encapsulant materials for bifacial heterojunction PV modules are investigated. Accurate calorimetric, rheological and durability analyses, in terms of photo- and thermo-oxidation resistance, of both pre- and post-laminated EVA and POE are carried out, also considering the suitability of these materials for heterojunction PV module assembly. The commercial EVA and POE films are subjected to accelerated UVB exposure and prolonged thermal treatment, and their oxidation resistance is monitored by spectroscopic analysis over time.
Therefore, this comparative study encourages further research on the formulation of PV encapsulants with good performance, in terms of durability and oxidation resistance, at relatively low cost.
Materials
Commercial PolyEthylene Vinyl Acetate (EVA) and PolyOlefin Elastomer (POE) sheets, suitable for low UV cutoff, were purchased from Specialized Technology Resources Inc. Both EVA and POE contain an appropriate crosslinking agent and stabilizing additives, such as antioxidants and hindered amine light stabilizers, as supplied by the manufacturer. All additives were added to EVA and POE by the producer during sheet formulation. Four different sheets are considered: EVA pre-laminated (EVApre-lam), EVA post-laminated (EVApost-lam), POE pre-laminated (POEpre-lam) and POE post-laminated (POEpost-lam). The thickness of the pre-laminated sheets is about 450 µm; to simulate an industrially viable lamination process, the pre-laminated EVA and POE sheets were subjected to a pressure of 1 atm and a temperature of 150 °C for up to 20 min.
Characterizations
• Differential Scanning Calorimetry: calorimetric data were evaluated by differential scanning calorimetry (DSC) using a DSC60-Shimadzu calorimeter. All experiments were performed under dry nitrogen on samples of about 10 mg in 40 µL sealed aluminum pans. For both EVA and POE, the calorimetric scans (heating: from −80 to 120 °C; cooling: from 120 to −80 °C) were performed for each sample at a heating/cooling rate of 10 °C/min. The heat-flow values were normalized to the sample mass.
• Rheological analysis: rheological tests were performed using a stress-controlled rheometer (Rheometric Scientific, SR5, mod. ARES G2 by TA Instruments, New Castle, DE, USA) in parallel-plate geometry (plate diameter 25 mm). The complex viscosity (η*) and the storage (G′) and loss (G″) moduli were measured under frequency scans from ω = 10^−1 to 10^2 rad/s at T = 140 °C and T = 170 °C for EVA and POE, respectively. The strain amplitude was γ = 5%, which preliminary strain sweep experiments proved to be low enough to remain in the linear viscoelastic regime.
• FTIR Spectroscopy: a Fourier transform infrared spectrometer (Spectrum One, Perkin Elmer) was used to record IR spectra using 16 scans at a resolution of 1 cm−1. ATR-FTIR for surface analysis was also carried out, using 16 scans at a resolution of 1 cm−1. The progress of both photo- and thermo-oxidative degradation of EVA and POE was followed by running FTIR analysis over time and monitoring the variations in the hydroxyl range (3200-3600 cm−1) and carbonyl range (1800-1500 cm−1), using Spectrum One software.
Accelerated Weathering and Thermo-Oxidation
Photoxidation was carried out using a Q-UV/basic weatherometer (from Q-LAB, Westlake, OH, USA) equipped with UVB lamps (313 nm). The weathering condition was continuous light irradiation at T = 70 °C.
Thermo-oxidation was carried out in a ventilated oven at 70 °C for times up to ca. 3500 h for both EVA and POE post-laminated sheets.
The progress of both photo- and thermo-oxidative degradation was followed by FTIR spectroscopy.
Differential Scanning Calorimetry (DSC) Characterization
The identification of the transition temperatures for both commercial EVA and POE sheets was performed through differential scanning calorimetry. In Figure 2a,b, the thermograms from −80 °C up to 120 °C of both pre-laminated and post-laminated EVA and POE materials are plotted, and in Table 1 the main identified transition temperatures are reported. In Figure 2a, the glass transition between −40 °C and −20 °C, i.e., Tg around −36 °C, is detectable for both EVApre-lam and EVApost-lam samples, and this transition is particularly noticeable for the post-laminated sample. It can be observed that EVApre-lam shows three endothermic peaks in the range from +30 °C up to +90 °C (see the blue curve in Figure 2a). The first small peak, at about +30 °C, can probably be attributed to the presence of low-molecular-weight additives with a low-temperature fusion transition. The other two fusion peaks, at about +55 °C and +85 °C, respectively, can be attributed to the fusion transition of two different crystalline structures of the EVA sample. After lamination, the thermogram of EVApost-lam appears slightly different (see the red curve in Figure 2a), and there are two noticeable small exothermic peaks in the range from +10 °C up to +35 °C, probably due to the occurrence of crosslinking and additive dispersion during lamination upon prolonged thermal treatment at high pressure. Interestingly, the peak at about +30 °C is no longer well distinguished, and a very broad shoulder in the range between 30 and 50 °C can be observed, highlighting a structural change in the organization of the low-molecular-weight additives and their interaction with the EVA macromolecules. Besides, both fusion peaks at about +55 °C and +85 °C become well pronounced, pointing out the presence of two different polymer crystalline structures. Surprisingly, the fusion enthalpy for EVA increases ca. 1.6 times upon the lamination process, suggesting the formation of better-ordered 3D structures (see the last column of Table 1).
Note: (*) this exothermic peak appears as a complex peak and its temperature identification is difficult.

In Figure 2b, the thermograms of the POEpre-lam and POEpost-lam samples are plotted. In this case, both POEpre-lam and POEpost-lam samples show a glass transition at around −25 °C, and no significant difference in the glass transition is observed before and after the lamination process. The POEpre-lam sample shows two clearly visible fusion peaks in the temperature range from +50 up to +100 °C (see the blue curve in Figure 2b). It can be observed that after the lamination process both peaks, at about +60 °C and +95 °C, become well pronounced, pointing out again the presence of two different polymer crystalline structures. Interestingly, the increase of the fusion enthalpy for POE upon lamination is ca. 2.7 times, suggesting the formation of a better-ordered 3D structure also for POE (see the last column of Table 1).
To sum up, it is worth noting that the glass transition and the exothermic phenomena for both EVA and POE are almost unaffected by the lamination process, while the fusion behavior reveals that the lamination process can be considered responsible for the formation of a larger amount of 3D-ordered crystalline structures. Specifically, the total peak areas of the EVApost-lam (from +25 °C up to +95 °C) and POEpost-lam (from +30 up to +110 °C) samples are ca. 1.6 times and 2.7 times higher than the peak areas of the EVApre-lam and POEpre-lam samples, respectively. Based on these results, it can be supposed that the lamination process has a beneficial effect on the formation of 3D-ordered crystalline structures, and it seems that the final POE structure is better organized than the EVA one.
Rheological Characterization
In Figure 3, the trends of the storage and loss moduli, G′ and G″, and of the complex viscosity, η*, as a function of frequency are plotted for both pre-laminated and post-laminated EVA and POE materials. The rheological behaviors of EVApre-lam and EVApost-lam are slightly different, and it is worth noting that for both EVApre-lam and EVApost-lam no Newtonian plateau is observed, while a well-pronounced shear thinning is visible, suggesting the presence of a crosslinked 3D structure. Unexpectedly, the values of both moduli G′ and G″ and of the complex viscosity after lamination are lower than the values before the lamination process; the latter can be understood considering that during the prolonged lamination process, i.e., up to 20 min at high temperature and pressure, the EVA underwent thermal degradation, which leads to the formation of volatile acetic acid.
Contrarily, the viscosity of POEpost-lam is significantly higher than the viscosity of POEpre-lam, i.e., the difference is more than one decade, and additionally, the slopes of trends are different, highlighting a beneficial effect of the lamination process on POE crosslinking. The change from solid-like to liquid-like behavior for pre-laminated and post-laminated POE occurs at different frequencies, i.e., the cross-over point changes from 1.58 rad/s for POEpre-lam to 25.11 rad/s for POEpost-lam. Therefore, the rheological behavior of pre-laminated POE sample reveals the existence of no well-3D-structured sample, and in this case also, no Newtonian plateau is noticed. The rheological behavior of POEpost-lam is significantly changed upon lamination process and there is a well-3D-structured crosslinked sample.
Based on the rheological behavior, it can be surmised that the lamination process has a more pronounced beneficial effect on the 3D structuration of POE than on that of EVA. After lamination, the EVA sample exhibits a refinement of the existing 3D structure, without a significant change in its melt-state behavior. POEpost-lam shows a solid- to liquid-like transition at high frequency, a significant viscosity enhancement and well-pronounced shear thinning in comparison to POEpre-lam, highlighting a very good 3D structuration.
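The cross-over frequencies quoted above (1.58 and 25.11 rad/s for POE) correspond to the point where G′ = G″ in the frequency sweep. A minimal sketch for locating such a point by interpolation on logarithmic axes is shown below; the power-law moduli are synthetic placeholders, not the measured data.

```python
import numpy as np

def crossover_frequency(omega, g_storage, g_loss):
    """Frequency where G' = G'', found by linearly interpolating
    log10(G'/G'') versus log10(omega); assumes one sign change."""
    r = np.log10(np.asarray(g_storage) / np.asarray(g_loss))
    lw = np.log10(np.asarray(omega))
    i = np.where(np.sign(r[:-1]) != np.sign(r[1:]))[0][0]
    x = lw[i] - r[i] * (lw[i + 1] - lw[i]) / (r[i + 1] - r[i])
    return 10.0 ** x

omega = np.logspace(-1, 2, 30)      # 0.1 to 100 rad/s, as in the sweeps above
g_loss = 1e4 * omega ** 0.5         # synthetic loss modulus (Pa)
g_storage = 4e3 * omega ** 0.8      # synthetic storage modulus (Pa)
print(round(crossover_frequency(omega, g_storage, g_loss), 2))  # ~21 rad/s
```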
Mechanical Characterization
In Figure 4, typical stress-strain curves of pre-laminated and post-laminated EVA and POE samples are plotted, and in Table 2 the obtained values of the main mechanical properties, i.e., elastic modulus (E), tensile strength (TS) and elongation at break (EB), are reported. It is clearly noticeable that the lamination process has a positive effect on the rigidity of both EVA and POE, i.e., the values of the elastic modulus after lamination increase by about 45% with respect to the values before lamination. As expected, for the EVA sample, upon lamination the tensile strength increases by about 70%, while the elongation at break is reduced by about 40%. Interestingly, for the POE sample, upon lamination the tensile strength increases by about 48%, while the elongation at break remains almost unchanged. These results are understandable considering that crosslinking occurs during lamination, and this leads to an increase of rigidity, in accordance with the calorimetric and rheological results commented on above.
UV-Visible Characterization
In Figure 5a,b, the linear attenuation coefficient (K) of both pre- and post-laminated EVA and POE is plotted, respectively. The values of the linear attenuation coefficient for all samples are calculated using the formula reported in the experimental part, i.e., considering the absorption values (A) and the sample thicknesses (D). As is known, a material is almost transparent when the K value is close to zero. It is clearly noticeable that the EVApost-lam and POEpost-lam samples show lower K values than the EVApre-lam and POEpre-lam samples, especially in the visible range, although the thicknesses of both post-laminated samples are two times higher than those of the pre-laminated counterparts. This behavior is due to the lamination process having a beneficial effect both on the 3D structuration of EVA and POE and on additive dispersion and distribution. Additionally, the small shoulders at about 290 nm in all K trends can be attributed to the presence of stabilizing molecules, whose presence is clearly noticeable before and after lamination.
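The K calculation itself is simple once A and D are known. Since the exact formula is given in an experimental section not reproduced here, the sketch below assumes the common Beer-Lambert form K = ln(10)·A/D, with synthetic spectra and the ~450 µm pre-lamination thickness (doubled after lamination) mentioned in the text.

```python
import numpy as np

def attenuation_coefficient(absorbance, thickness_cm):
    """Linear attenuation coefficient (cm^-1) from UV-visible absorbance;
    the Beer-Lambert form K = ln(10) * A / D is assumed here."""
    return np.log(10.0) * np.asarray(absorbance) / thickness_cm

wavelength_nm = np.array([290, 400, 550, 700, 800])
A_pre = np.array([2.10, 0.90, 0.30, 0.20, 0.15])   # synthetic pre-lam spectrum
A_post = np.array([2.00, 0.80, 0.20, 0.12, 0.08])  # synthetic post-lam spectrum

K_pre = attenuation_coefficient(A_pre, 0.045)      # ~450 um pre-laminated sheet
K_post = attenuation_coefficient(A_post, 0.090)    # ~2x thicker after lamination
print(np.round(K_pre - K_post, 1))                 # positive where post-lam is clearer
```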
FTIR Characterization
In Figure 6a,b, the FTIR spectra of both pre- and post-laminated EVA and POE are plotted, respectively. It is worth noting that the main absorption bands (ca. 2800-2900 cm−1, due to CH stretching; ca. 1700 cm−1, due to carbonyl stretching; and other bands at 1400-800 cm−1, due to different chemical natures and structures) in the FTIR spectra are saturated because the original commercial samples are thick. According to the literature, the main representative FTIR ranges for polyolefins and polyolefin derivatives are the carbonyl (ca. 1600-1800 cm−1) and hydroxyl (3200-3600 cm−1) ranges, and the oxidative degradation of these polymers can be profitably followed by monitoring the changes in these two main ranges. It is worth noting that in the spectrum of EVApre-lam a small shoulder at ca. 1650 cm−1 is noticeable, which could be attributed to the presence of some unsaturation in this material. In the spectrum of EVApost-lam, the shoulder at ca. 1650 cm−1 is not visible, also because the carbonyl bands appear larger due to the higher sample thickness, while a small shoulder at ca. 1780 cm−1 appears, probably due to the formation of some esters during the prolonged lamination process. Besides, the hydroxyl bands in the EVApost-lam spectrum appear more pronounced than in the spectrum of EVApre-lam. Similar considerations can be made for the spectra of the POEpre-lam and POEpost-lam samples; upon the lamination process, a small shoulder at ca. 1650 cm−1 appears in the spectrum of POEpost-lam and the hydroxyl bands become more pronounced in comparison to POEpre-lam.
Photoxidation Resistance
To investigate the photoxidation resistance of EVA and POE, the original sheets were subjected to UVB exposure and the degradation was monitored by FTIR analysis over time. In Figure 7a-d, the FTIR spectra of the EVApre-lam, EVApost-lam, POEpre-lam and POEpost-lam commercial samples at different exposure times are plotted.
According to the literature, EVA photodegradation proceeds with the accumulation of oxidation products, leading to the formation of new absorption bands in the carbonyl domain (shoulders at 1780 cm−1 and 1715 cm−1 in the IR spectra) and in the hydroxyl domain (3200-3600 cm−1), and with acetic acid formation, which lowers the pH and increases corrosivity. Moreover, EVA shows very fast yellowing due to the formation of oxidation products, and to avoid unwanted effects the addition of stabilizers is imperative, especially for manufactured articles in service under sunlight [21,22]. As is well known, the photodegradation of polyolefins and polyolefin-based polymers proceeds mainly with the accumulation of groups in the carbonyl domain (1600-1800 cm−1) and hydroxyl domain (3200-3600 cm−1), and subsequently with the worsening of their macroscopic properties [20,[23][24][25]. Considering all these issues and the FTIR analysis of these commercial EVA and POE samples reported above, the progress of photoxidation for both EVA and POE can be profitably followed by accounting for the changes in the carbonyl and hydroxyl domains. Besides, as commented above, the main absorption bands in the FTIR spectra are saturated because the original commercial samples are thick. In Figure 7a,b, the changes in the hydroxyl domain for the EVA sheets are significant and well appreciable, while in the carbonyl domain only a small shoulder at ca. 1780 cm−1 can be observed. Similarly, for the POE sheets, the changes in the hydroxyl domain are noticeable, while in the carbonyl domain a small shoulder at ca. 1640 cm−1, due to the presence of unsaturations, is barely noticeable.

In Figure 8a,b, the variations of the total band areas in the hydroxyl domain for EVA and POE are plotted, respectively. It is worth noting that the EVApre-lam and POEpre-lam samples show larger hydroxyl accumulations than the EVApost-lam and POEpost-lam samples, especially at long exposure times. Moreover, in Figure 8c,d, the pre-laminated samples show more pronounced increases of the shoulders at 1780 cm−1 for EVA and at 1640 cm−1 for POE than the post-laminated samples. All these results highlight that the lamination process has a beneficial effect also on the photoxidation resistance and, again, it seems that POE shows better photoxidation resistance than EVA.
carbonyl domain a small shoulder at ca. 1640 cm −1 , due to presence of insaturations, is barely noticeable. In Figure 8a,b, the variations of the total band areas in hydroxyl domains for EVA and POE are plotted, respectively. Worth noting that the EVApre-lam and POEpre-lam samples show larger hydroxyl accumulations than the EVApost-lam and POEpost-lam samples, especially at long exposure time. Moreover, in Figure 8c,d, the pre-laminated samples show more pronounced increases for the shoulders at 1780 cm −1 for EVA and at 1640 cm −1 for POE, rather than the post-laminated samples. All these results highlight that the lamination process has a beneficial effect also on photoxidation resistance and again, it seems that the POE show better photoxidation resistance than the EVA one. Further confirmation comes also by ATR-FTIR analysis of the investigated samples, in Figure 9a-d, the ATR-FTIR spectra of EVApre-lam, EVApost-lam, POEpre-lam and POEpost-lam commercial samples, before exposure and at maximum UVB exposure time are plotted. Therefore, to confirm the presence of some chemical species, the ATR-FTIR analysis can be considered suitable for qualitative surface analysis. The bands in both hydroxyl and carbonyl domains in the spectra of the four investigated samples at maximum exposure time appear larger than the same bands before exposure, and the latter is clearly exacerbated for the pre-laminated samples, confirming again the beneficial effect of the lamination process on the photoxidation resistance. Further confirmation comes also by ATR-FTIR analysis of the investigated samples, in Figure 9a-d, the ATR-FTIR spectra of EVApre-lam, EVApost-lam, POEpre-lam and POEpost-lam commercial samples, before exposure and at maximum UVB exposure time are plotted. Therefore, to confirm the presence of some chemical species, the ATR-FTIR analysis can be considered suitable for qualitative surface analysis. The bands in both hydroxyl and carbonyl domains in the spectra of the four investigated samples at maximum exposure time appear larger than the same bands before exposure, and the latter is clearly exacerbated for the pre-laminated samples, confirming again the beneficial effect of the lamination process on the photoxidation resistance. Further confirmation comes also by ATR-FTIR analysis of the investigated samples, in Figure 9a-d, the ATR-FTIR spectra of EVApre-lam, EVApost-lam, POEpre-lam and POEpost-lam commercial samples, before exposure and at maximum UVB exposure time are plotted. Therefore, to confirm the presence of some chemical species, the ATR-FTIR analysis can be considered suitable for qualitative surface analysis. The bands in both hydroxyl and carbonyl domains in the spectra of the four investigated samples at maximum exposure time appear larger than the same bands before exposure, and the latter is clearly exacerbated for the pre-laminated samples, confirming again the beneficial effect of the lamination process on the photoxidation resistance.
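As an illustration of how such band-area tracking could be implemented on exported spectra, a minimal sketch follows; it assumes spectra are available as wavenumber/absorbance arrays, and the function names and integration approach are ours, not the paper's:

```python
import numpy as np

def band_area(wavenumbers, absorbance, lo, hi):
    """Integrate absorbance over [lo, hi] cm^-1 with the trapezoidal rule."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    # abs() handles spectra stored with a descending wavenumber axis
    return abs(np.trapz(absorbance[mask], wavenumbers[mask]))

# Domains used in the text (cm^-1)
HYDROXYL = (3200, 3600)
CARBONYL = (1600, 1800)

def photoxidation_index(wn, spectra_by_time):
    """Band-area growth vs exposure time for both domains.

    spectra_by_time: dict mapping exposure time -> absorbance array.
    """
    return {
        t: {
            "hydroxyl": band_area(wn, a, *HYDROXYL),
            "carbonyl": band_area(wn, a, *CARBONYL),
        }
        for t, a in spectra_by_time.items()
    }
```

Plotting these areas against exposure time would reproduce the kind of trend shown in Figure 8a-d.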
Thermo-Oxidation Resistance
In Figure 10a,b, the FTIR spectra of EVApost-lam and POEpost-lam as a function of thermo-oxidation time are plotted, respectively. The monitoring of the thermo-oxidation process was extended up to ca. 3500 h because no significant variations were noticeable before then. It is worth noting that the EVApost-lam sample shows a slight increase of the absorption band in the hydroxyl range and the appearance of a small shoulder at 1780 cm−1, suggesting the occurrence of oxidation (see Figure 10a). Interestingly, the POEpost-lam sample is extremely stable up to ca. 3500 h of thermo-oxidation, i.e., no significant variations in the carbonyl and hydroxyl ranges are noticeable, indicating no noteworthy oxidation even under this prolonged thermal treatment (see Figure 10b). From this qualitative analysis, it can be concluded that the POEpost-lam sample is more stable and resistant to thermo-oxidation than EVApost-lam.
Conclusions
Accurate characterization of pre- and post-laminated EVA and POE industrial sheets was carried out by calorimetric, rheological and mechanical analysis. The obtained results suggest that the lamination process has a beneficial effect on the 3D-structuration of both polymers, though better results seem to be obtained for the POE sheets. Upon lamination, in the melt state, the viscosity of POE increased while that of EVA decreased. The latter can be understood considering that EVA degrades at high temperature with the formation of volatile acetic acid.
The durability of the EVA and POE sheets, in terms of photo- and thermo-oxidation resistance, was evaluated by monitoring the formation of new oxygen-containing species with absorption bands in the hydroxyl and carbonyl domains. The lamination process leads to more oxidation-resistant sheets, an effect that is more pronounced for the POE sample.
Finally, although both EVA and POE sheets can be considered suitable as encapsulants for bifacial heterojunction PV modules, the POEpost-lam sheet is better structured in the melt, has good rigidity and ductility, and is more stable, in terms of photo- and thermo-oxidation, than EVApost-lam.
Delivering quality health care to people in low-income countries such as Pakistan: the clear link between AMR and UHC
Health care is an essential sector that not only reflects national development but is also a necessary component of the long-term growth of economies and communities. People's well-being improves when they are in good health. Unfortunately, health care remains one of Pakistan's most neglected sectors, with little progress made in the 75 years since the country's independence. The United Nations Sustainable Development Goals (SDGs) include universal health coverage (UHC) as a target, and in recent years numerous nations have adopted UHC as a national aim in order to offer equitably accessible health care [1].
A health card for the whole population of Punjab, Pakistan's most populous province, was announced in January 2022 following a successful trial in Khyber Pakhtunkhwa province. The health card entitles every family to yearly care worth around 1 million Pakistani rupees ($5650) in public and private facilities alike [2]. Even so, a diagnosis of a communicable or non-communicable disease in Pakistan can amount to a death sentence because of the significant out-of-pocket expenditures (OOPs) connected with treatment.
When bacteria can withstand the effects of medications that would typically kill them, they are referred to as drug-resistant or antimicrobial-resistant (AMR). AMR reduces the efficacy of antimicrobials (AMs) against the bacteria causing an infection, adds health care costs, and ultimately causes therapy to fail. Antimicrobial resistance can be kept in check if the primary health care system is well-functioning; conversely, because people want and require high-quality medical treatment, AMR hits a broken health care system directly. A malfunctioning health care system not only increases the number of people affected by AMR but also raises the burden on the system and the cost of UHC. Both communicable and non-communicable diseases are already taking a heavy toll on Pakistan [3].
Dr Arif Alvi, Pakistan's president, has called for sweeping reforms to the way antibiotics are administered to people, animals, and the environment, describing the overuse of antibiotics as a severe threat to the health of the whole population. According to data from the Global Antibiotic Resistance Partnership, there are too many registered antimicrobial products in Pakistan; self-medication is common; and doctors prescribe too many antibiotics, all of which contribute to drug resistance. Up to ten million people might die each year as a result of drug resistance, and the economic toll could reach £85 trillion by 2050 [4].
On average, more than three medications are prescribed per patient [5]. General practitioners (GPs) and public sector institutions, with their preference for expensive broad-spectrum antibiotics, are particularly prone to inappropriate and indiscriminate use. Over-the-counter (OTC) sale of antibiotics without a prescription is a frequent practice throughout the nation, fuelling the rise in antimicrobial resistance.
Several investigations in Pakistan have shown evidence of resistance in gram-negative organisms, and third-generation cephalosporin-resistant Enterobacteriaceae have also been described. Typhoid continues to be a major public health problem across the country because of antibiotic resistance and treatment failure. Multi-resistant Staphylococcus aureus strains have also been shown to be prevalent in studies throughout Pakistan. Similarly, MDR TB and chloroquine-resistant falciparum malaria are major roadblocks to the goals of the respective national and provincial programs and have serious consequences for the general public [6].
It is already difficult for the post-COVID nation to get back on its feet. The additional costs of AMR will have a direct influence on the country's economy and on future decisions about health outcomes, as well as on the country's future more broadly.
The government should therefore create, or encourage private investment in, health facilities to guarantee access to care. Improved teaching platforms should also be made available for clinicians and patients to learn about antimicrobial resistance and when and how to use antimicrobials appropriately. Prescription requirements should be strictly enforced so that antibiotics can be purchased only by those who need them. When it comes to health care, UHC and AMR are inseparable: they are mutually dependent and essential to the proper functioning of a healthy system.
Sources of funding
None.
Declaration of competing interest
None declared.
"Medicine",
"Economics"
] |
A robust flow cytometry-based biomass monitoring tool enables rapid at-line characterization of S. cerevisiae physiology during continuous bioprocessing of spent sulfite liquor
Assessment of viable biomass is challenging in bioprocesses involving complex media with distinct biomass and media particle populations. Biomass monitoring in these circumstances usually requires elaborate offline methods or sophisticated inline sensors. Reliable monitoring tools in an at-line capacity represent a promising alternative but are still scarce to date. In this study, a flow cytometry-based method for biomass monitoring in spent sulfite liquor medium, as feedstock for second-generation bioethanol production with yeast, was developed. The method is capable of (i) yeast cell quantification against medium background, (ii) determination of yeast viability, and (iii) assessment of yeast physiology through morphological analysis of the budding division process. Thus, enhanced insight into physiology and morphology is provided which is not accessible through common online and offline biomass monitoring methods. To demonstrate the capabilities of this method, firstly, a continuous ethanol fermentation process of Saccharomyces cerevisiae with filtered and unfiltered spent sulfite liquor media was analyzed. Subsequently, at-line process monitoring of viability in a retentostat cultivation was conducted. The obtained information was used for a simple control based on addition of essential nutrients in relation to viability. Thereby, inter-dependencies between nutrient supply, physiology, and specific ethanol productivity that are essential for process design could be illuminated.
Introduction
In recent years, spent sulfite liquor (SSL) has attracted attention as a feedstock for second-generation bioethanol production using genetically engineered baker's yeast [1]. As an abundant and cheap by-product of the sulfite cooking process of wood for pulp and paper production, spent sulfite liquor contains high amounts of a variety of different hexose and pentose sugars [2-5]. During pulping, lignocellulosic material is hydrolyzed into a solid cellulose fraction used for paper and viscose production and a liquid fraction containing mainly sugar monomers from hemicellulose [1,2,6]. Sugars are directly available and costly pretreatment can be avoided, which makes the biorefinery of spent sulfite liquor to ethanol economically feasible [4]. Nevertheless, hydrolysis also leads to accumulation of lignosulfonates, sulfate, and a variety of inhibitory breakdown products such as acetic acid, furfural, and hydroxymethylfurfural (HMF). Lignosulfonates mainly contribute to the high solid particle content of spent sulfite liquor [7,8]. HMF and furfural have an inhibitory effect, still not fully explored, on yeast growth and ethanol productivity [9]. While acetic acid can be co-utilized as an additional carbon source alongside sugars by the commonly used biotechnological production hosts Saccharomyces cerevisiae and Escherichia coli [10-12], it has a strong influence on the cytosolic pH and can negatively influence viability [2,13]. Besides these challenges, the biorefinery of spent sulfite liquor provides the opportunity to produce sustainable biofuels by valorization of a waste stream. Unlike first-generation feedstocks, it does not compete with food production, and it applies zero-waste conversion technologies, a key component of a future circular economy [4,14,15].
For an economic and ecological bioprocessing of the continuously generated large quantities of spent sulfite liquor, continuous fermentation is essential, as it leads to increased productivity and high space-time yields in ethanol production. The inhibiting conditions in spent sulfite liquor processes lead to deteriorating growth rate, viability, and fermentation performance [16]. Consequently, maintaining steady cell viability and a high biomass concentration is the main challenge in establishing a stable and productive process. A promising strategy to meet these demands is to uncouple growth from product formation by cell retention in a retentostat. Previous retentostat experiments showed an accumulation of solid particles in the cell retention process despite pre-filtration of the spent sulfite liquor (data not shown). The increased particle content leads to inaccurate biomass measurements, which impedes determination of variables essential for process understanding such as growth rates, substrate uptake rates, and biomass yield [17]. Consequently, in situ measurement of viable biomass and cell count is essential for systematic optimization of cultivation parameters in the continuous cell retention process.
So far, determination of cell viability in spent sulfite liquor has mainly been achieved through the alkaline methylene blue method and by counting colony-forming units on agar plates [1,16,18], which is time consuming, negatively affected by high particle backgrounds, and cannot depict the physiology of different biomass populations. In an industrial setting, physical techniques capable of real-time measurement are preferred [19]. Common methods for in situ measurement of viable biomass include dielectric spectroscopy, infrared and fluorescence spectroscopy, NIR spectroscopy, and Raman spectroscopy, as well as microscopy combined with image analysis [19-23]. However, inline sensors are prone to high measurement noise and require chemometric knowledge to establish meaningful measurement techniques, or display limitations in other respects [20]. For instance, turbidity probes are not feasible in combination with a high particle background in complex media [24]. Commercial dielectric spectroscopy probes can differentiate between viable cells and other solid particles, but cannot quantify the amount of dead cells and particle background; these techniques have also exhibited polarization problems when medium conductivity is high [20,21]. In particle-free medium, near-infrared (NIR) spectroscopy and Raman spectroscopy are powerful tools for fast and non-invasive determination of substrate concentration, product formation, and viable biomass concentration [23,25-27]. In complex medium containing a high particle load, like lignocellulose hydrolysate or spent sulfite liquor, it is not possible to differentiate between viable cells and solid medium particles with NIR and Raman [25,28]. According to Ewanick et al. [29], even lignocellulose hydrolysate medium pretreated by filtration requires extensive modeling to reduce baseline shifts and a fluctuating spectral background.
Flow cytometry in combination with fluorescent viability staining [30,31] is a promising alternative when dealing with complex media containing particles and emulsified liquids. Thereby, the entire particle population is depicted in a quantitative way [32], including viable and non-viable biomass against media background. Furthermore, morphological assessment of biomass or analysis of media particles is possible [33]. In recent years, efforts to use flow cytometry in online mode have been successfully undertaken [34][35][36][37]. In this context, automated sample treatment involving dilution, fluorescent staining, and incubation is still a considerable bottleneck; however, for this purpose, automated sampling and sample processing systems have been developed recently [38].
In this study, a flow cytometry-based method to analyze yeast cells in complex media containing spent sulfite liquor with high particle background was developed. The method is capable of (i) yeast cell quantification against medium background, (ii) determination of yeast viability, and (iii) assessment of yeast physiology through morphological analysis of the budding division process. The method was successfully employed as a monitoring tool in fermentation processes of S. cerevisiae in spent sulfite liquor: first, the method was verified in chemostat processes at different biomass concentrations, and subsequently physiology and morphology under cell retention conditions were assessed.
Spent sulfite liquor medium
Spent sulfite liquor with a dry matter content of 30-32% (w/v) from an industrial source was used for all experiments in this study. Spent sulfite liquor served as the carbon source, containing approximately 12% (w/v) hexose and pentose sugars. In addition, per liter of medium, 15 mL of a phosphate stock solution (21.7 g L−1 K2HPO4, 182.3 g L−1 KH2PO4) and 5 mL of a urea stock solution (400 g L−1 urea) were aseptically added to unfiltered or filtered spent sulfite liquor, and the pH was adjusted to 5.0 or 5.5 with Mg(OH)2.
For continuous cultivations using cell retention, filtration of the medium was required to avoid blocking of the cell retention membrane by solid spent sulfite liquor particles. To remove major impurities, a pre-filtration step through a commercial cloth strainer and fine filtration via continuous crossflow filtration using a Pall PSP-113 polyolefin hollow-fiber membrane (Pall Corporation, New York, USA) were carried out.
Cultivations in bioreactors
The chemostat process was carried out in four parallel 3-L DASGIP® Benchtop Bioreactors (Eppendorf AG, Hamburg, Germany), while the fermentation with cell retention was carried out in a 1.5-L stirred tank glass bioreactor (Applikon Biotechnology BV, Delft, Netherlands). All reactors had a working volume of 1 L.
Cultivations were started at an OD of 0.5 (chemostats) or 1.0 (retentostat) by adding an appropriate volume of preculture to the reactor. For the batch phase, the yeast was cultivated in YPD medium (chemostat) or SSL medium (retentostat). Upon depletion of the carbon source, cultivations were transferred into continuous mode by feeding minimal SSL medium at a constant dilution rate of 0.02 h−1 in the chemostat and 0.07 h−1 in the retentostat, corresponding to feed rates of 20 mL h−1 and 70 mL h−1, respectively. The feeding of medium with either unfiltered or filtered spent sulfite liquor in the chemostat was carried out in duplicate.
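As a consistency check on these settings (our arithmetic, with dilution rate D, feed rate F, and working volume V = 1 L):

```latex
D = \frac{F}{V}, \qquad
D_{\text{chemostat}} = \frac{20\ \text{mL h}^{-1}}{1000\ \text{mL}} = 0.02\ \text{h}^{-1}, \qquad
D_{\text{retentostat}} = \frac{70\ \text{mL h}^{-1}}{1000\ \text{mL}} = 0.07\ \text{h}^{-1}
```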
During the batch phases, aerobic conditions were maintained via agitation at 500 rpm (chemostat) or 800 rpm (retentostat) and aeration with air at 1 vvm, adjusted by a mass flow controller (Brooks Instrument, Dresden, Germany). Dissolved oxygen was monitored by a VisiFerm DO 225 probe or a VisiFerm DO 120 probe (both Hamilton, Reno, NV, USA) in the chemostat and retentostat, respectively. At the transition to the chemostat phase and the retentostat phase, agitation was set to 350 rpm (chemostat) or held at 800 rpm (retentostat). For anaerobic conditions throughout the chemostat phase and in the respective anaerobic retentostat phases, the gas supply was switched to 0.07 vvm nitrogen. Reactor off-gas was analyzed using a DASGIP GA4 gas sensor module (Eppendorf AG, Hamburg, Germany) in the chemostat reactors and a CO2 gas sensor module (BlueSens gas sensor GmbH, Hamburg, Germany) in the retentostat. pH was monitored by a pH electrode (Mettler-Toledo GmbH, Giessen, Germany) and controlled at 5.5 during batch and 5.0 during continuous cultivation phases by addition of 2 M KOH. In both processes, the temperature was kept constant at 32 °C.
The full cell retention in the retentostat process was realized by continuously pumping the whole reactor content through a loop including a Pall PSP-113 polyolefin hollow fiber membrane (Pall Corporation, New York, USA). The harvesting was conducted by removing cell-free permeate through the hollow fiber membrane, while the retained cell broth was fed back into the reactor. The harvest rate was adjusted to maintain a constant filling volume of the reactor, realized by a dip tube in the chemostat, and monitored by a DASGIP® level sensor (Eppendorf AG, Hamburg, Germany) in the retentostat.
For the supplementation of a nutrient-pulse into the reactor, a solution of 10 g peptone and 5 g yeast extract in 50 mL demineralized water was prepared.
Flow cytometry
Samples from cultivations were diluted 1:10 into phosphate-buffered saline (50 g L−1 of a 2.65 g L−1 CaCl2 solution, …).
For calibration of the method, a yeast pre-culture was centrifuged (4000 rpm, 10 min, 20 °C) and resuspended in PBS buffer to an optical density of 1. To study various viability stages, one half of the suspension was subjected to microwave treatment for 30 s at 940 W in a microwave oven. Subsequently, mixtures of viable and dead cells were prepared in several ratios to identify viable and non-viable populations. For identification of background noise in the medium, either raw or filtered SSL medium was added to the cell mixtures in pure buffer. Table 1 shows the mixtures of viable and dead cell suspensions measured either in PBS, unfiltered, or filtered SSL medium. In addition, cell and spent sulfite liquor concentrations were varied to test their effect on measurement accuracy.
Calculation of expected ratio of viable and dead cells
The expected ratio of viable and dead cells was calculated via Eqs. 1-4, which derive the expected percentages of viable (V) and dead (D) cells in suspension from the expected cell counts. This calculation approach is discussed in the "Development of flow cytometry-based method" section.
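The equations themselves did not survive extraction. A plausible reconstruction, under the assumption that the expected counts are the cell counts of the pure viable and dead stock suspensions (c_V, c_D) weighted by their volumetric fractions (f_V, f_D), would be:

```latex
N_V = c_V \, f_V \quad (1) \qquad\qquad
N_D = c_D \, f_D \quad (2)
```
```latex
V\,(\%) = \frac{N_V}{N_V + N_D} \times 100 \quad (3) \qquad\qquad
D\,(\%) = \frac{N_D}{N_V + N_D} \times 100 \quad (4)
```

Because microwave-killed cells partially disintegrate, c_D is lower than the count of the untreated stock, which is why the expected ratio differs from the volumetric mixing ratio.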
Optical density and biomass determination
The optical density was measured in triplicate at a wavelength of 600 nm with a Spectronic 20 Genesys spectrophotometer (Thermo Scientific, Waltham, MA, USA).
The biomass was determined gravimetrically in triplicate. For this purpose, 2 mL of culture broth was centrifuged (4500 rpm, 10 min, 4 °C), washed with 4 mL of deionized water, and dried in pre-weighed glass tubes for at least 24 h at 105 °C.
Results
Initial method development was focused on identifying viable and non-viable biomass in various complex media backgrounds featuring spent sulfite liquor. Subsequently, the applicability of the method was tested (i) in a chemostat process with different biomass concentrations and particle backgrounds and (ii) as a process monitoring tool for cell physiology in a retentostat ethanol production process.
Development of flow cytometry-based method
Method calibration was performed using various biomass concentrations in different viability stages and media compositions. By using flow cytometry in combination with fluorescent staining, false-positive detection of media particles as biomass could be avoided. For this purpose, we employed two types of fluorescent dyes: (a) fluorescein diacetate (FDA), which yields green fluorescence through esterase activity [41] and thus detects the metabolic activity of viable biomass, and (b) propidium iodide (PI), which yields red fluorescence as a result of DNA intercalation in cells with compromised membranes [42]. For method development, defined volumetric mixtures of viable and dead cells in different media backgrounds were measured. Figure 1 provides an overview of the identified clusters against different media backgrounds. Based on initial measurements of medium (i) with or without spent sulfite liquor and (ii) with and without cells, a distinction of yeast cells from media background was possible (see Fig. 1, middle column). Scatter plots of red and green total fluorescence signals clearly display three clusters: viable cells, dead cells, and media background (see Fig. 1b, center row). At high SSL particle concentrations (see Fig. 1b, c), deviations in red fluorescence caused by particle interaction with PI were observed. Consequently, biomass identification was based not only on fluorescence but also on size (FSC length signals) and shape (SSC signals) to eliminate false-positive results (data not shown). Subsequently, gates were fixed around these three clusters for classification.
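A minimal sketch of this gating logic follows; the threshold values are illustrative placeholders (instrument- and settings-specific), not the study's actual gates:

```python
# Illustrative gate thresholds -- placeholders, not the study's values.
GREEN_MIN = 5e3               # FDA signal: esterase activity -> viable
RED_MIN = 5e3                 # PI signal: compromised membrane -> dead
FSC_MIN, FSC_MAX = 2e4, 5e5   # size window excluding small SSL particles
SSC_MIN = 1e3                 # minimal shape signal for a yeast cell

def classify_event(green, red, fsc, ssc):
    """Assign a single event to 'viable', 'dead', or 'background'.

    Fluorescence alone is unreliable at high SSL particle loads
    (PI interacts with particles), so a size/shape window based on
    FSC/SSC is applied before the fluorescence gates.
    """
    if not (FSC_MIN <= fsc <= FSC_MAX) or ssc < SSC_MIN:
        return "background"   # outside the yeast size/shape window
    if red >= RED_MIN:
        return "dead"         # PI-positive: membrane compromised
    if green >= GREEN_MIN:
        return "viable"       # FDA-positive: metabolically active
    return "background"       # unstained debris within the window

def classify_sample(events):
    """events: iterable of (green, red, fsc, ssc) tuples."""
    labels = [classify_event(*e) for e in events]
    n = len(labels) or 1
    return {k: labels.count(k) / n for k in ("viable", "dead", "background")}
```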
As dead cells were obtained through harsh microwave treatment, partial cell disintegration was observed. Consequently, the measured cell count in a suspension depends on the viability of the biomass and differs from the volumetric mixing ratio. This was considered in the target ratio of viable and dead cells, given in Table 2 as the "expected ratio" and calculated via Eqs. 1-4; results of each measurement series are given, comprising PBS buffer containing few particles, unfiltered SSL medium, and filtered SSL medium as backgrounds containing large amounts of particles. Figure 2 shows the impact of biomass concentration on the measurement at viable/dead mixtures of V40/D60 according to Table 1. At higher biomass concentrations, deviations between measured and expected values are clearly visible, dependent on the presence of high particle concentrations. This is further underlined by additional measurements at V50/D50, where the amount of spent sulfite liquor background was increased considerably. In such circumstances, the measurement capabilities of the flow cell and detectors reach their limits. The instrument software reduces data acquisition at high particle concentrations to avoid data overload, which in turn leads to inaccurate results, as illustrated in Fig. 3. This stresses the absolute necessity of proper sample dilution before measurement, as the method cannot cope with particle concentrations above 1 × 10^6 particles mL−1. Subsequent measurements of regular samples were diluted accordingly.
In order to further characterize biomass, signal curve properties of the various detector signals can be used to differentiate morphological aspects. As explained by Dubelaar et al. [43], forward scatter (FSC) and sideward scatter (SSC) signals represent size, shape, and overall morphology of the measured elements. Using the flow cytometer, it was possible to distinguish between single cells and agglomerates featuring budding cells, thereby illuminating further physiological aspects. The morphological classification of single and budding cells is summarized in Fig. 4. Based on previously established morphological classes for yeast analysis [39], firstly, all viable yeast cells were detected and, secondly, discrimination between single and budding yeast cells was possible (Fig. 4a). Signal shape profiles of single and budding cells, with corresponding images taken by the camera of the flow cytometer, are shown in Fig. 4b. Due to the high fluorescence stemming from budding cell agglomerations, saturation of green fluorescence signals can be observed. This depends on detector sensitivity settings and cannot be wholly avoided if a wide range of particle sizes needs to be covered in a single measurement.
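In the same spirit, the single-versus-budding discrimination of Fig. 4a could be sketched as a simple two-threshold gate; the cutoff values below are hypothetical stand-ins:

```python
# Hypothetical cutoffs standing in for the gates of Fig. 4a,
# which classify by sample length and total SSC signal.
LENGTH_CUTOFF_UM = 8.0   # single yeast cells are typically smaller
SSC_TOTAL_CUTOFF = 1e5   # budding aggregates scatter more light overall

def classify_morphology(sample_length_um, ssc_total):
    """Label a viable-cell event as a single cell or a budding aggregate."""
    if sample_length_um > LENGTH_CUTOFF_UM or ssc_total > SSC_TOTAL_CUTOFF:
        return "budding"
    return "single"

def budding_ratio(events):
    """Fraction of budding events among viable cells -- the physiological
    indicator used later in the retentostat monitoring (cf. Fig. 6c)."""
    labels = [classify_morphology(*e) for e in events]
    return labels.count("budding") / (len(labels) or 1)
```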
Verification of biomass quantification against low and high particle backgrounds in a chemostat process
Upon successful establishment, the method was tested for its applicability in continuous cultivation. For this purpose, a chemostat experiment with unfiltered and filtered spent sulfite liquor with minimal nutrient supplementation was performed (see the "Spent sulfite liquor medium" section). Using this approach, different biomass concentrations could be studied by flow cytometry under continuous conditions, as the insufficient supply of media components resulted in a gradual washout of cells after the initial YPD medium batch phase. Figure 5 displays the concentration of viable cells and dead cells as well as the particle content of unfiltered (Fig. 5a) or filtered (Fig. 5b) SSL medium in continuous chemostat cultivations. The gravimetrically determined dry weight declined after the batch phase, reaching a steady state proportional to the total count of viable and dead cells measured by flow cytometry (Fig. 5). The decline reflects the washout of cells at the constant feed and harvest rate; washout also led to a consistently low count of dead cells. The spent sulfite liquor particle concentration for unfiltered (Fig. 5a) or filtered (Fig. 5b) SSL medium also reached a steady value. Due to the constant feeding of SSL medium, the relatively particle-free YPD batch medium was replaced, and the count of spent sulfite liquor particles eventually reached the value present in the respective SSL feed medium.
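The washout kinetics follow from the standard chemostat biomass balance (not spelled out in the paper; μ is the specific growth rate and D the dilution rate):

```latex
\frac{dX}{dt} = (\mu - D)\,X
\;\;\Longrightarrow\;\;
X(t) = X_0\, e^{(\mu - D)\,t}
```

For μ < D, as enforced here by the insufficient nutrient supply, the biomass decays exponentially toward washout, consistent with the observed decline after the batch phase.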
The concentration of particles in unfiltered SSL medium is up to 20 times higher than in filtered SSL medium, which nicely illustrates the effect of the pre-filtration procedure. Regarding viability in the unfiltered and filtered chemostat cultivations, no effect of the different spent sulfite liquor particle content could be found.
To summarize, the method enables quantification of viable and non-viable cell populations against high particle backgrounds in chemostat experiments. Additional information is obtained through quantification of said particle backgrounds. Thereby, the process can be assessed in ways that are not possible through common monitoring of total dry weight.
Monitoring of a retentostat process with accumulation of particle background
For the process design of a continuous cultivation with cell retention, the physiology of the cells is essential. For that reason, the flow cytometry method was used as a monitoring tool targeting physiological assessment over time during spent sulfite liquor fermentation in a retentostat process. Employing cell retention has the advantage of uncoupling yeast growth from product formation; that way, higher feed rates can be used and less substrate is needed for continuous formation of biomass [44,45]. On the other hand, the use of membrane systems for bioprocessing of spent sulfite liquor represents a significant challenge in terms of biomass monitoring: while the particle background of a chemostat in steady state equals the particle background of the feed medium, a cell retention process accumulates solid particles over time.
[Fig. 1: rows from top to bottom — (a) cells in particle-free buffer, (b) SSL medium, (c) filtered SSL medium.]
This process was specifically designed for ethanol production and therefore divided into biomass accumulation phases under aerobic conditions (I batch and II retentostat) followed by an anaerobic, catalytically active phase (III retentostat) for production of ethanol from minimal spent sulfite liquor medium.
The assessment of viability provided valuable insight: phases I and II displayed a steady decrease of viability, and during phase III a massive drop of viability was registered by the flow cytometry method (see Fig. 6a). Additionally, the residual sugar concentration remained consistently high, with a correspondingly low ethanol titer (see Fig. 6b). To promote cell growth and increase viability, essential nutrients were pulsed into the reactor and conditions were switched to aerobic batch mode (IV, 266 h). Using the flow cytometry-based method, an increase in viability could successfully be detected during this second batch phase (IV) (Fig. 6a). Moreover, a higher amount of biocatalyst, i.e., viable biomass, in the reactor led to increased sugar uptake and ethanol titers.
Furthermore, the morphological assessments shown in Fig. 6c demonstrate an increasing ratio of budding cells to single cells at higher viability values. Consequently, when an improvement of overall viability and depletion of the major sugars could be observed (see phase IV; Fig. 6b), the process was switched back to ethanol production in an anaerobic retentostat (phase V).
Quantification of viable biomass also enables a more thorough assessment of productivity. The specific ethanol productivity q_ethanol calculated from viable cells clearly shows a massive increase in phase III (see Fig. 6d), as opposed to q_ethanol calculated from total dry weight. This indicates that although overall viability declined, the population of viable cells displayed enhanced productivity.
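The distinction can be written out explicitly (our notation, with volumetric ethanol production rate r_EtOH in g L−1 h−1 and biomass concentrations X):

```latex
q_{\mathrm{EtOH}}^{\mathrm{total}} = \frac{r_{\mathrm{EtOH}}}{X_{\mathrm{total}}},
\qquad
q_{\mathrm{EtOH}}^{\mathrm{viable}} = \frac{r_{\mathrm{EtOH}}}{X_{\mathrm{viable}}},
\qquad
X_{\mathrm{viable}} \le X_{\mathrm{total}}
\;\Rightarrow\;
q_{\mathrm{EtOH}}^{\mathrm{viable}} \ge q_{\mathrm{EtOH}}^{\mathrm{total}}
```

When viability drops, X_viable shrinks relative to X_total, so the viable-cell-based productivity can rise even while overall process performance declines, as observed in phase III.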
In addition, the flow cytometry data demonstrate the characteristic problem of cell retention: holding back dead cells and SSL particles. This is illustrated in a twofold way: Fig. 6a displays a high presence of dead cells in phase V, at the end of the cell retention process, while Fig. 6e shows that between 280 and 360 h the percentage of SSL particle background decreased relative to the increasing cell concentration in the reactor.
[Table 2: Overview of the calibration procedure for flow cytometry-based method development. Each volumetric mixing ratio of viable (V) and dead (D) cell suspensions is listed with expected target ratios, compared to measured values for cells in PBS buffer, filtered SSL medium, and unfiltered SSL medium. All measurements were carried out in technical triplicates; the PBS buffer measurements additionally in duplicate.]
Discussion
A novel method capable of identifying and quantifying the following particle populations in complex spent sulfite liquor medium was implemented: viable yeast cells, dead yeast cells, and media background containing solid particles. In addition, yeast cell morphology and physiology can be assessed.
Advantages, disadvantages, and comparability of the method
In this study, flow cytometry was used to combine viability assessment and morphological analysis. Potential online use is possible but challenging, as will be discussed in the "Applicability of the method" section. The method was specifically tailored to measurements in complex medium with a high particle background. This makes it a fast and potent alternative to conventional offline measurements like dry cell weight and optical density, which cannot distinguish between viable cells and media background. In addition, enhanced insight into yeast physiology is generated through the simultaneous use of fluorescent viability staining and morphological assessment: information on overall viability and on the size distribution of media background and/or yeast cells can be obtained in a single measurement. Thousands of particles can be measured in a matter of minutes. Additionally, morphological cell features can be determined, down to individual particles. This is especially useful for assessing yeast physiology in distinct process stages through analysis of the budding division process. Other methods generally only provide an overview of viability and are time consuming [16,46,47]. The method presented here could also be used with non-particle-containing media; in such circumstances simpler biomass monitoring techniques would also be applicable, but information on non-viable biomass populations would be lost.
To establish the method, a comprehensive calibration procedure was used: mixtures of viable and dead cells in media containing different particle populations were measured. Table 2 (see the "Development of flow cytometry-based method" section) provides an overview of measurement errors dependent on biomass and media particle content. Naturally, samples containing high particle concentrations are challenging; nevertheless, standard deviations between actual and expected values were consistently below 10%. A diverse particle population in the medium remains demanding: to guarantee high information content across all process phases, adequate fluorescence detector sensitivity settings must be found for each biomass and media combination. In early process phases, detectors must be sensitive enough to detect viable biomass; in later process stages, signal saturation should be avoided, as it signifies a loss of information [33]. Furthermore, it should be noted that fluorescence spectral overlap might result in misleading signals. Depending on fluorescence intensity, green fluorescence can also be registered by the red fluorescence detector as a misleading artifact [48]. The flow cytometer analysis software CytoClus used in this study did not feature any fluorescence spectral overlap compensation. Also, deviations in the red fluorescence originating from particle interaction with PI were observed. Consequently, biomass identification is based not only on fluorescence but also on size and form to eliminate false-positive results. The method cannot cope with unlimited particle concentrations.
[Fig. 4: Morphological classification of viable yeast cells. (a) Classification according to sample length and total SSC signals, distinguishing single cells (blue) from budding cells (orange). (b) Signal shape profiles of single and budding cells: FSC signal (black), SSC signal (blue), and green fluorescence signal (green), with corresponding image-in-flow pictures taken by the flow cytometer's camera; the white line signifies 10 μm.]
As a result, the particle concentration in samples must be kept below 1 × 10^6 particles mL−1 and verified in a preliminary measurement to avoid data overload and inaccurate results.
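A small helper of the kind one might use for that preliminary check; only the 10^6 mL−1 limit comes from the text, while the safety factor is our own assumption:

```python
import math

MAX_PARTICLES_PER_ML = 1e6  # method limit stated in the text

def required_dilution(measured_particles_per_ml, safety_factor=2.0):
    """Smallest integer dilution factor that brings a sample below the
    method's particle limit; safety_factor is an illustrative margin,
    not a value from the study."""
    target = MAX_PARTICLES_PER_ML / safety_factor
    return max(1, math.ceil(measured_particles_per_ml / target))

# Example: a late retentostat sample at 2.4e7 particles/mL
print(required_dilution(2.4e7))  # -> 48
```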
Disadvantages also include size-exclusion effects: small elements are generally over-represented due to the characteristics of the sampling tube (diameter 5 mm). However, such effects are hardly relevant when dealing with yeast, given its small size compared to organisms like filamentous fungi.
Applicability of the method
This method is a potent tool for at-line characterization of bioprocesses featuring complex media. If online applicability is implemented, the method can also be used for routine monitoring tasks. The use of commercial live/dead cell viability assays is possible as well; however, their application depends on the emission wavelengths of the viability dyes and the corresponding fluorescence detector specifications.
Online applicability would be possible in combination with automated sampling and sample processing. For this purpose, sampling, dilution, and addition of fluorescent dyes need to be performed in a modular process analytical technology (PAT) system with a connected flow cytometer [33]. However, the method is currently used at-line; for robust use in process control, online applicability would have to be implemented first.
The developed method sheds light on the complex bioprocessing of spent sulfite liquor, whose high solid particle content would interfere with measurement using techniques other than flow cytometry. The application of the method in a simple chemostat process and in a complex cell retention process gave significantly deeper insight into the physiology of the yeast cells and into the accumulation of solid particles. This additional information can be used for process design, targeting physiological optimization and thus the productivity and performance of the process. For instance, the assessment of specific productivity is much more accurate when the actual value of viable biomass is known.
[Fig. 5: Application of the flow cytometry method in a chemostat process. Particle populations across process time, including total dry weight (g L−1), viable cells, dead cells, and SSL particle background for unfiltered (a) and filtered (b) SSL medium; particle and cell concentrations in particles (N) per milliliter.]
The main distinguishing feature of the flow cytometry method presented here is its robustness and high information gain despite complex media backgrounds. In addition, viable and dead cell populations can be clearly distinguished.
[Fig. 6: Monitoring of cell physiology and particle background of spent sulfite liquor fermentation in a retentostat process. Dotted lines distinguish process phases (I)-(V): I 0-120 h, II 120-166 h, III 166-261 h, IV 261-312 h, V 312 h-end. (a) Ratio between viable and dead cells relative to total particle count (%), yeast cell concentration in cells (N) per milliliter. (b) Ethanol titer and total residual sugar concentration (normalized values). (c) Viable cells in N per microliter, ratio of single to budding cells as an indicator of physiological budding activity. (d) Specific ethanol productivity q_ethanol calculated via total dry weight and via viable cell dry weight (normalized values). (e) Ratio of spent sulfite liquor particles to total particle count (%), total dry weight in grams per liter.]
"Biology",
"Engineering"
] |
CloudBank for Europe
The vast amounts of data generated by scientific research pose enormous challenges for capturing, managing and processing them. Several trials have been conducted in different projects (such as HNSciCloud and OCRE), but today commercial cloud services still do not play a major role in the production computing environments of the publicly funded research sector in Europe. Funded by the Next Generation Internet programme (NGI Atlantic) of the EC, and in partnership with the University of California San Diego (UCSD), CERN is piloting the use of CloudBank in Europe. CloudBank was developed by UCSD, the University of Washington and the University of California, Berkeley, with NSF grant support, to provide a set of managed services simplifying access to public cloud for research and education, via a cloud procurement partnership with Strategic Blue, a financial broker SME specialised in cost management and optimisation. The European NGI experiment is provisioning cloud services from multiple vendors and deploying a series of use cases in the domain of Machine Learning and HPCaaS, contributing to the scientific programme of the Large Hadron Collider. The main objective is to address technical, financial and legal challenges to determine whether CloudBank can be successfully used by Europe's research community as part of its global research activity.
Introduction
The growing use of new methods such as those based on Machine Learning (ML), the Internet of Things (IoT), HPCaaS and Quantum Computing, together with the commoditization of specialised cloud-based services and frameworks, has introduced the need to assess new computing models for research environments. The growing variety of hardware infrastructure platforms at scale, performance-portability frameworks and open cloud-based orchestration systems has increased the range of scenarios to be explored, moving from basic elastic provisioning of virtual resources towards a transparent and adaptive cloud continuum available for data-intensive applications, from simulation to analysis. This evolution highlights the potential to introduce technologies that are newer, more performant and more cost-effective than those currently available on-premise at many research laboratories, and better adapted to modern research data processing requirements.
The bulk of the cloud services provided by CERN to support its scientific programme is currently provisioned as in-house resources managed by the IT department and hosted in the on-site computer centre. Unforeseen increases in demand linked to the scientific programme, as well as unexpected events such as the COVID-19 pandemic, can impact the ability to meet new demands. Similarly, enterprise-level risks (including major infrastructure incidents, cyber-attacks and insufficient or delayed delivery of hardware resources) can impact CERN's ability to deliver services. Much as High-Energy Physics computing made a strategic move from mainframes to personal computers 25 years ago [1], public commercial cloud services have rapidly become commodity offerings and can be strategically considered in a hybrid model, integrated into current research computing environments. Previous experience with public clouds has shown that it is possible to integrate commercial cloud services from different providers into the CERN cloud provisioning model [2], but a number of challenges need further investigation, such as:
• the ability to rapidly increase heterogeneous capacity without the need to re-tender and the associated delays;
• avoiding vendor lock-in by having the possibility to rapidly select an alternative cloud service supplier;
• being able to monitor and control the consumption of the procured services assigned to multiple, independent administrative units, tracked and billed separately.
The University of California San Diego (UCSD), the University of Washington and the University of California, Berkeley, with the support of an NSF grant, have developed CloudBank [3], a set of managed services to simplify access to public clouds for research and education. CloudBank has established a cloud procurement partnership with Strategic Blue [4], a financial services SME specialising exclusively in cloud procurement, cost management and optimisation. CERN, in partnership with UCSD, has been funded by the NGI Atlantic programme [5] of the EC to pilot the use of CloudBank in Europe. The objective of the CloudBank EU NGI experiment is to leverage the aforementioned work and pilot the use of the service under European legislation to provision services from multiple cloud providers. These services will initially support the deployment, on Amazon Web Services (AWS) and Google Cloud Platform (GCP), of a series of challenging use cases in the domain of Machine Learning (ML) and HPCaaS that contribute to the scientific programme of the Large Hadron Collider. The experiment will determine whether CloudBank can be successfully used by Europe's research community as part of a global research activity, addressing not only technical but also financial and legal challenges, in order to establish whether such an approach can be applied in a European setting. Ultimately, the results of this analysis will lay the foundation for exploring mechanisms to extend CloudBank to European cloud providers, aligning it with the rise of European digital sovereignty, strengthening data self-determination for public European research and, overall, leading to a simplification of public cloud service procurement processes for the public research sector, integrating these into a hybrid research computing model in view of a smooth transition to a heterogeneous cloud infrastructure.
CloudBank EU Experiment
The procurement of commercial cloud-based services has increased at CERN in recent years, and the IT department has gained experience in procuring cloud services to support its scientific programme. Procurement activities by CERN have also been supported by the EC through projects including Helix Nebula [6], PICSE [7], HNSciCloud [8] and, more recently, OCRE [9] and ARCHIVER [10]. In spite of these very advanced initiatives, challenges remain when onboarding public commercial cloud services, across a combination of technical, financial (cost-effectiveness) and regulatory (data privacy) dimensions.
Such challenges span from delivering network traffic from cloud provider data centers to National Research and Education Networks (NRENs), cloud-to-cloud provider traffic, or traffic from public clouds to research organisations, to aspects related to cost optimization, considering that cloud environments offer a substantial number of optimisation options. Depending on the workload and the real-life cloud scenario, one needs to consider the balance of performance vs. cost. Several options can be explored at different performance/cost ratios depending on the workload in question: type of resources (e.g. CPU, GPU, TPU), availability of cloud vendor regions, type of instances (on-demand vs. spot/preemptible) and network egress charges. Most of these elements are driven by the application. In the case of ML on cloud, for example, modern workloads run in what is usually called burst/auto-scaling mode, paying only for what is used, which is very different from workloads using a more traditional batch submission that relies on predefined reserved capacity. Compared with on-premise capacity, cloud is a commodity product with increased flexibility. To use it effectively, it is not a simple case of "lift and shift", i.e., creating VMs and running software on top: one often needs a degree of platform optimization to reach a similar level of throughput, where the performance of the underlying hardware infrastructure can be efficiently pushed to its limits. Therefore, the primary objective when using cloud is to minimise and optimise the resources used for each individual application, whereas when running an on-premise service the goal is to keep the resources as busy as possible with a continuous flow of applications in order to maximise throughput and the Return On Investment (ROI) of the hardware.
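To make the on-demand versus spot/preemptible trade-off concrete, here is a deliberately crude, hypothetical cost model; the prices, preemption probability and restart penalty are invented for illustration and do not reflect any vendor's actual offer:

```python
def effective_cost_per_job(price_per_hour, job_hours,
                           preempt_prob_per_hour=0.0,
                           restart_overhead_hours=0.0):
    """Expected cost of one job under a crude preemption penalty model.

    Assumes a geometric number of restarts and that each preemption
    costs `restart_overhead_hours` of repeated work. Illustrative only.
    """
    p_survive = (1 - preempt_prob_per_hour) ** job_hours
    expected_restarts = (1 - p_survive) / max(p_survive, 1e-9)
    expected_hours = job_hours + expected_restarts * restart_overhead_hours
    return price_per_hour * expected_hours

# Hypothetical comparison for a 10 h GPU training job:
on_demand = effective_cost_per_job(price_per_hour=3.00, job_hours=10)
spot = effective_cost_per_job(price_per_hour=0.90, job_hours=10,
                              preempt_prob_per_hour=0.05,
                              restart_overhead_hours=1.0)
print(f"on-demand: ${on_demand:.2f}, spot: ${spot:.2f}")
```

Even with a restart penalty, checkpoint-friendly burst workloads often favour preemptible capacity, while long uninterruptible jobs may not; this is exactly the kind of application-driven analysis the experiment aims to support.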
For successful onboarding of the potential offered by public clouds in current research computing programmes, an analysis of models of data governance relationships with the cloud provider(s) must be performed, taking into account applicable legislation. This includes assigning responsibilities for data processing and identifying financial and legal risks. In addition, the planning and practical validation of exit strategies (cloud-to-cloud or cloud-to-on-premise) is also required to mitigate potential vendor lock-in risks.
The CloudBank experimental service intends to address many of these combined challenges: for example, it will enable data-intensive research use cases to accommodate cloud resources at scale in their data management and distribution architectures. The use cases deployed under the experiment can potentially assess the capability to transfer an increasing volume of research data, helping to better understand how research use cases use cloud services and the underlying network under sustained traffic, and exploring further, for example, the possibilities of using the transcontinental networks of large public cloud providers. Concerning the financial aspects, the CloudBank financial operations allow research performing organisations (RPOs) to understand in detail where cost is incurred, eliminate unnecessary consumption and optimise procurement of what is necessary. To maximise effectiveness, cost management must include elements of cost transparency and optimisation, without losing focus on value creation. For continuous improvement, technical and financial factors need to be aligned and a framework created to evaluate the efficiency of the end-to-end cost management and optimisation process.
The CloudBank EU experiment will also contribute to the establishment of a regulatory-safe, sovereign, multi-cloud commercial infrastructure for research users, based on interoperability, open-source software containerisation and open standards as valid technical and organisational measures under European legislation.
Use Cases
The initial set of use cases is primarily composed of ML and HPCaaS workloads derived or evolved from deployments in earlier initiatives, such as HNSciCloud, and expanded for the NGI Experiment by the use case proposers themselves (Figure 1). Ten use cases have been submitted and deployed so far, from different administrative structures within CERN, including the Experiments, the Accelerator sector, and the Theory and IT Departments, addressing most phases of the research data processing workflow [11]. Examples are depicted in Figures 2 and 3. Priority has been given to those use cases that allow the CloudBank experiment to cover a wide range of applications, experiments and departments while limiting the procurement investment and minimising any additional support effort required from IT department personnel. Another important criterion was to select use cases that would not expose personal data at this stage, as the legal analysis for personal data processing is taking place in parallel.
The deployment of ML use cases using cloud-based services is particularly relevant. While ML becomes increasingly important for the LHC computing models, the ability to scale up remains a major issue yet to be addressed. This is particularly pertinent for use cases where a large number of hardware accelerators (GPUs, FPGAs and, more recently, Graphcore IPUs) are required or can greatly improve the performance of training and inference algorithms.
A core objective of the experiment is to demonstrate a hybrid ML service model where the IT department can easily complement/extend in-house accelerator capacity (i.e. GPUs) to hundreds of hardware accelerator instances. One example is the 3DGAN use case, a Generative Adversarial Network prototype designed to simulate the output of electromagnetic calorimeters. This use case explores highly parallel solutions to speed up the GAN training process. In particular, it compares TPU-based solutions to multi-GPU setups, analysing performance in terms of scaling efficiency and physics accuracy. The technical objectives, when combined with the financial aspects, establish a clear sustainable model for the CloudBank experiment outcomes. The results of the use case deployment will provide an assessment of the capabilities a financial broker may provide in terms of billing controls, and of its use as an additional source of data for cost verification and prediction, without interfering with the technical offers of each of the cloud providers made available.
Brokering & Cost Tracking
In the market of cloud services, a financial broker functions in the same manner as it does in more traditional markets. The broker matches the demands of users with the suppliers, aiming to settle the best financial agreement between the two sides of the market [12]. In addition, it delivers billing services as well as finance and risk management capabilities, so that clients are able to trade on their preferred terms, even when those terms do not match the sellers' preferred terms. A financial brokerage model for cloud, profitable for the broker, offers reduced costs for cloud users and generates a more predictable demand flow for cloud providers. The model offers access to multiple cloud providers, making it contractually simpler to switch between providers without intervening in the technical delivery of cloud services.
In this context, CloudBank provides innovative financial engineering options that give researchers more flexible cloud terms tailored to their needs and contributes to the sustainability of its operations. The current US instance of CloudBank helps NSF by bundling multiple small requests from individual NSF grantees into a bulk request to cloud providers, disincentivizing more costly direct connections. Through this aggregation and innovative financial contract types, CloudBank passes along savings to NSF and researchers that would otherwise be unavailable to them.
The financial broker currently associated with CloudBank is Strategic Blue. Strategic Blue is a UK SME that offers three broad types of services, each of which is relevant and useful to CERN with regard to the use case deployments during the CloudBank EU experiment activities:
• Consulting / Training - access to Strategic Blue's insight on cloud procurement and pricing best practice, based on its background in commodities trading.
• Cloud Options Pricing Insights - access to proprietary analysis of published historical and current public cloud pricing from vendors such as Amazon Web Services and Google Cloud Platform.
• Cloud Options - Strategic Blue's core business involves helping clients who find a difference between the terms on which they would ideally like to purchase cloud computing and the terms on which the cloud provider(s) are willing to sell at their lowest prices. Strategic Blue steps into the billing chain between cloud customer and cloud provider(s) and injects the required combination of a) billing services, b) finance and c) risk appetite in order to achieve the optimal financial result for all concerned.
CERN already has experience with Strategic Blue. In the past, it successfully used the Cloud Options Pricing Insights services of Strategic Blue during the now completed and award-winning HNSciCloud project as a means of benchmarking the bids received against market prices. Thanks to this previous experience, Strategic Blue understands CERN's administrative and technical processes, making the collaboration smoother and of greater value.
During the preparation phase of the experiment, an initial financial analysis was performed to estimate the total cost of the deployment of the use cases. This work was performed by the CERN IT CloudBank Office team in partnership with the lead researcher of each use case.
The process followed to estimate the use case costs is as follows:
1. The use case lead researcher provides, through a questionnaire, a realistic description of the expected usage and types of resources needed. The description also identifies the public commercial cloud provider (AWS or GCP) preferred by the lead researcher and the estimated timeline for deployment.
2. The CERN IT CloudBank Office calculates cost estimates for each use case according to the data provided by each lead researcher. The calculations are done based on a financial formula provided by the financial broker Strategic Blue, responsible for cost management.
The financial formula takes the following elements into account in the cost calculation of an instance:
• Quantity of compute instance (Qc)
• On-demand price for the compute instance (ODQc)
The equation used to determine the price (P) of an instance is the following (1):

P = (Qc × ODQc × h × d + Qr × ODQr × h × d) × 1.2 (1)

where h and d denote the hours per day and the number of days of usage, and Qr and ODQr the quantity and on-demand price of any additional resource type attached to the instance (e.g. storage). The instance cost is multiplied by 1.2 to cover additional services and contingency. The estimated cost of a use case consists of the sum of the estimated costs of all its instances. The pricing data used for the calculation are estimated values extracted from the service catalogues of each cloud provider. The formula is therefore generic by nature, and its purpose is to provide a ballpark figure for the cost estimation. For the same reason, the egress term has been excluded from the formula, since egress costs depend on contractual agreements between the cloud provider and the institution.
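As an illustration only, the following sketch applies the cost formula above; all resource quantities, prices and durations are hypothetical placeholders rather than broker or catalogue figures.

```python
# Minimal sketch of the instance cost estimate following formula (1);
# every number below is an illustrative placeholder, not a real price.

CONTINGENCY = 1.2  # multiplier covering additional services and contingency

def line_cost(qty, od_price, hours_per_day, days):
    """On-demand cost of one resource line (compute or an additional resource)."""
    return qty * od_price * hours_per_day * days

def instance_price(compute, extra):
    """Price P of an instance: both resource lines, times the 1.2 multiplier."""
    return (line_cost(*compute) + line_cost(*extra)) * CONTINGENCY

# Hypothetical example: 4 GPU VMs at $2.50/h, 8 h/day for 90 days, plus an
# associated storage line expressed in the same (qty, price, h/day, days) form.
compute = (4, 2.50, 8, 90)
storage = (1000, 0.0001, 8, 90)
# A use-case estimate would sum instance_price() over all its instances.
print(f"estimated instance cost: ${instance_price(compute, storage):,.2f}")
```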
These cost estimates serve as the basis for the IT CloudBank Office to allocate funds to the approved use cases during the CloudBank EU experiment; these allocations correspond to the Award Amount column in Figure 1 and are managed and tracked by CloudBank, as described below.
The second part of cost management implies realistic consumption tracking across organisational units, as a major input to improve IT service forecasting, often relying on visualisation and analytics tools. Tracking is therefore crucial in order to achieve transparency and trust, across both users and IT services. In general terms, a cloud service brokerage system is described as providing "a single and common interface through which consumers can provision and manage their services on multiple clouds" [13]. As the brokering is scoped at the financial rather than the technical level, this creates a path to expand cloud access using transparent billing, without interference in the technical delivery. Such a model could be expanded with funds being allocated transparently and directly by multiple different funding sources to each of the corresponding organisational units.
The US instance of CloudBank uses a commercial product based on Nutanix Beam [14], compatible with AWS and Azure, to provide cost governance capabilities. Strategic Blue also provides a service for cost tracking based on AWS Quicksight [15] (Figure 4).
As part of the activities of the CloudBank EU NGI experiment, a generic mechanism is being developed to collect consumption and usage data from the use cases deployed over the cloud providers that are part of the experiment. The motivation behind this generic mechanism is to build a single dashboard displaying aggregated data from multiple cloud providers, allowing consumption control, metrics discovery, usage pattern definition, etc., based on open-source tools with a modular architecture not locked to any particular cloud vendor. As the backend for this mechanism, the Prometheus [16] tool was evaluated and eventually selected. Prometheus is an open-source service monitoring system and time-series database with wide community support, part of the Cloud Native Computing Foundation [17]. It allows collecting metrics of all kinds from configured targets at given intervals, exporting them, and displaying and visualising the results. A Proof of Concept (PoC) deployment determined a) how data from a cloud provider could be made available to Prometheus as metrics and b) how to export and display the cost metrics.
The first step was to build a Prometheus exporter in Golang [18]. An exporter is essentially an application that exposes existing metrics from third-party systems, in our case cloud providers, as Prometheus metrics. Exporters serve the target's metrics via a network protocol, so the PoC had to create a client that connects to the cloud provider via the programmatic APIs made available, requests the billing data and finally exposes it over the network so that it behaves as a scrape target. At this point, Prometheus can then retrieve the required metrics. The PoC has been documented [19], and initial metrics such as hours of CPU and GPU usage, number of Virtual Machines, and total consumption costs per project are being collected. The following step was to draft a mockup display based on Grafana [20] for the visualisation of multiple metrics (Figure 5). The overall goal is to enhance dashboards to complement the billing and data governance capabilities of CloudBank. Grafana is an open-source tool tightly integrated in the Prometheus architecture, providing a multi-platform analytics and interactive visualisation web application. It provides charts, graphs, and other supports for the visualisation of metrics. Grafana also has built-in support for Prometheus, which makes configuration faster and allows a certain level of automation when running the solution.
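The PoC exporter itself is written in Golang; purely to illustrate the pattern described above, the following minimal sketch reproduces the same structure in Python with the prometheus_client library. The fetch_billing() helper is a hypothetical stand-in for a cloud provider's billing API, not part of the PoC.

```python
# Illustrative skeleton of a billing exporter (the actual PoC is written in Go);
# fetch_billing() is a hypothetical stand-in for a provider billing API.
import time
from prometheus_client import Gauge, start_http_server

COST = Gauge("cloud_project_cost_total",
             "Accumulated cost per project (USD)", ["project", "provider"])
CPU_HOURS = Gauge("cloud_project_cpu_hours",
                  "CPU hours consumed per project", ["project", "provider"])

def fetch_billing():
    """Placeholder for a call to the cloud provider's billing API."""
    return [{"project": "3dgan", "provider": "gcp",
             "cost": 1234.5, "cpu_hours": 857.0}]

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at http://localhost:8000/metrics
    while True:
        for record in fetch_billing():
            labels = (record["project"], record["provider"])
            COST.labels(*labels).set(record["cost"])
            CPU_HOURS.labels(*labels).set(record["cpu_hours"])
        time.sleep(24 * 3600)  # the PoC requests billing data once per day
```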
Finally, in terms of infrastructure, the PoC (Figure 6) is being deployed using the central Openshift service running on-premise at CERN [21]. Openshift [22] is a platform that enables the deployment of applications using containers. The PoC is deployed as a Docker [23] container in an Openshift cluster, providing full automation to build, deploy and scale out. Having the PoC on CERN premises will allow CERN to retain control over the billing information, ensuring the confidentiality of the data collected, as best practice and an obligation for contractual management with commercial partners such as financial brokers and public cloud vendors. It also allows CERN to benefit from "in-house" deployments based on Prometheus, a widely supported technology both at CERN and across the HEP communities in general.
Data Processing
The ML and HPCaaS use cases being deployed contain no associated personal data. At the same time, one of the determining objectives of the experiment is to validate CloudBank under EU-based regulatory frameworks. As part of the NGI experiment, CERN is engaging with a legal partner to carry out a data protection contractual analysis with respect to European legislation (such as the GDPR). The analysis will compare possible models of contractual relationships with major cloud provider(s). The goal is to obtain a clear mapping and breakdown of responsibilities for the processing and sub-processing of personal data on the cloud, with contractual solutions that will generally lead to a simplification of in-house public cloud service procurement processes.
In addition, the CloudBank EU experiment will also contribute to the definition of a data security model for research organisations using public commercial cloud providers. Aspects to be covered will include confidentiality of data, mitigation plans for DDoS attacks, antivirus and antimalware, security audits, and encryption of data at rest and in transit, as a set of technical measures demonstrating alignment with ISO 27001 certification and implementation of the guidelines of the EU Agency for Cybersecurity (ENISA [24]) in aspects such as information security risk assessment, monitoring, data breaches and internal audits.
Initial Results & Future Work
The CloudBank NGI Experiment started in November 2020, with a few preliminary results available at the time this paper was produced. Concerning the use cases, nine have started deployment over both AWS and GCP, with others planned to start by the end of Q1 2021. The current CloudBank instance in the US is fully set up and can be used to pilot access to each of the billing accounts using the CERN Single Sign-On [25].
Concerning the billing and tracking PoC, data was successfully retrieved from Google's BigQuery service [26] using a generic, vendor-agnostic approach, allowing the developed exporter module to receive metrics that are made available to the Prometheus backend. Metrics breakdowns per project, including costs, can already be observed in Prometheus and displayed in Grafana. As the PoC is based on Docker, there is no need to install Prometheus and Grafana, as official containers for both are available on Docker Hub [27][28][29]. Besides allowing the use of containers, this also makes possible the provisioning of data sources and dashboards in Grafana [30]: if the configuration files are added to the exporter, the metrics gathered by Prometheus are automatically displayed in a preconfigured dashboard, eliminating the need to reinstall or reconfigure the Prometheus and Grafana components. After an initial analysis of the billing data displayed, the PoC now requests billing data once per day, and differences in consumption profiles can already be spotted: in some projects the number of CPU hours increases very rapidly, as does the use of Virtual Machines, whilst in others the number of resources is steady with no significant variation.
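As a sketch of the retrieval step only: GCP billing data can be exported to a BigQuery table and queried with the official Python client. The project, dataset and table names below are placeholders, not those used in the PoC, and application credentials are assumed to be configured.

```python
# Sketch of pulling billing data from a GCP billing export in BigQuery.
# Requires the google-cloud-bigquery package and configured credentials;
# the table name is a placeholder for a standard billing export table.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT project.id AS project, SUM(cost) AS total_cost
FROM `my-billing-project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY project
ORDER BY total_cost DESC
"""

for row in client.query(QUERY).result():
    print(f"{row.project}: {row.total_cost:.2f} USD in the last 24 h")
```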
Some of the next development steps involve enhancing the Grafana dashboard display and adding role-based authentication, so that the dashboard shows only the billing data related to a given researcher or organisational unit. This will be followed by adding configuration options correlating the types of services used, time periods and costs. Deployment on Openshift will be consolidated to make it independent of the underlying operating system, accelerating both development and deployment. Moving towards a pre-production scenario, high availability will also be a factor to consider.
Conclusions
This NGI experiment will accelerate the adoption of public cloud services in Europe's publicly funded research sector. The transatlantic nature of the experiment will increase cooperation between US and EU research communities in their uptake of public cloud services.
The legal and contractual assessment of the CloudBank model with respect to European legislation will build trust among the procurement offices of public sector research organisations such as CERN and lead to a simplification of their in-house cloud service procurement processes. The CloudBank EU NGI experiment will publicly report the progress made against its objectives and the lessons learned, with a recommendation on whether there is a case for expanding the model to a wider audience and over a longer timeframe.
"Computer Science",
"Environmental Science",
"Engineering"
] |
Thermal Impact Assessment of Groundwater Heat Pumps (GWHPs): Rigorous vs. Simplified Models
Groundwater Heat Pumps (GWHPs) are increasingly adopted for air conditioning in urban areas, thus reducing CO2 emissions, and this growth needs to be managed to ensure the sustainability of the thermal alteration of aquifers. However, few studies have addressed the propagation of thermal plumes from open-loop geothermal systems from a long-term perspective. We provide a comprehensive sensitivity analysis, performed with numerical finite-element simulations, to assess how the size of the thermally affected zone is driven by hydrodynamic and thermal subsurface properties, the vadose zone and aquifer thickness, and plant setup. In particular, we focus the analysis on the length and width of thermal plumes, and on their time evolution. Numerical simulations are compared with two simplified methods, namely (i) replacing the time-varying thermal load with its yearly average and (ii) analytical formulae for advective heat transport in the aquifer. The former proves acceptable for the assessment of plume length, while the latter can be used to estimate the width of the thermally affected zone. The results highlight the strong influence of groundwater velocity on the plume size and, especially for its long-term evolution, of ground thermal properties and of subsurface geometrical parameters.
Introduction
Ground Source Heat Pumps (GSHPs) utilize the ground as a heat source or sink, respectively, for the heating or cooling of buildings. At moderate depths (5 to 20 m), the ground has an almost constant temperature, with a value close to the yearly average air temperature [1]; hence, GSHPs have a higher energy efficiency (coefficient of performance, COP) compared to Air Source Heat Pumps (ASHPs) [2], and their potential for the reduction of the carbon intensity of building air conditioning is widely acknowledged [3].
The energy efficiency of a GSHP strongly depends on site-specific properties of the ground, for both Borehole Heat Exchangers (BHEs) [4][5][6][7][8] and Ground Water Heat Pumps (GWHPs) [9][10][11][12]. GWHPs exchange heat with groundwater extracted through one or multiple water wells, and are increasingly employed as an air conditioning system, especially for large commercial and public buildings [13][14][15][16]. Groundwater is usually reinjected through wells into the same aquifer to avoid its depletion [17], but this leads to the formation of thermally altered zones, called thermal plumes. Thermal plumes pose two sustainability issues: the upstream propagation of the plume, which can reach the abstraction well (thermal short-circuit) [9,18,19], thus reducing the COP of the heat pump, and the downstream propagation of the plume, which can impair drinking water wells or other GWHPs. Assessing the subsurface thermal impact of open-loop geothermal systems is therefore essential for their design.
Flow and Heat-Transport in Porous Media
Heat transport in saturated porous media occurs by conduction (driven by temperature gradient), advection (due to the flow of the fluid phase), and dispersion (induced by the heterogeneity of the groundwater velocity field).
These mechanisms are described by the heat conservation equation in a two-phase (solid and liquid) medium [34], with the assumption of a groundwater flow aligned with the x-axis:

ρc·∂T/∂t + ρw·cw·vD·∂T/∂x = ∂/∂x[(λ + ρw·cw·αL·vD)·∂T/∂x] + ∂/∂y[(λ + ρw·cw·αT·vD)·∂T/∂y] + ∂/∂z[(λ + ρw·cw·αT·vD)·∂T/∂z] + H (1)

where ρc and ρw·cw are the thermal capacities (J·m−3·K−1) of the porous medium and of the liquid phase, respectively; T is the temperature (K), considered equal for the solid and the liquid phases (i.e., the thermal equilibrium is considered instantaneous); vD is the Darcy velocity (m·s−1); αL and αT are, respectively, the longitudinal and transverse dispersivities (m) relative to the groundwater flow direction; λ is the thermal conductivity of the porous medium (W·m−1·K−1); and H is the heat source/sink (W·m−3). The first term of Equation (1) is the temperature variation over time, which depends on the volume-averaged (bulk) heat capacity of the porous medium (ρc):

ρc = ne·ρw·cw + (1 − ne)·ρs·cs (2)

where ρs·cs is the thermal capacity (J·m−3·K−1) of the solid phase. The second term describes the advection, which is a function of the Darcy velocity (vD):

vD = −K·∂h/∂x = K·i (3)

where K is the hydraulic conductivity (m·s−1), h is the hydraulic head (m), and i is the hydraulic gradient along the x direction (dimensionless). Conduction depends on the bulk thermal conductivity λ in the third term of Equation (1):

λ = ne·λw + (1 − ne)·λs (4)

where λs and λw are the thermal conductivities (W·m−1·K−1) of, respectively, the solid matrix and the groundwater, and ne is the effective porosity (dimensionless). Thermal dispersion is described in Equation (1) by the corresponding coefficients αL and αT. Dividing Equation (1) by ρc, we get:

∂T/∂t + (ve/Rth)·∂T/∂x = ∂/∂x(Dx·∂T/∂x) + ∂/∂y(Dy·∂T/∂y) + ∂/∂z(Dz·∂T/∂z) + H/ρc (5)

where ve is the effective velocity of groundwater through the pores, calculated as:

ve = vD/ne (6)

Since heat is exchanged between the fluid and the solid phase, the advective velocity of the thermal plume is lower than the groundwater flow velocity [19]. This phenomenon is similar to solute sorption in porous media with linear kinetics. By analogy, it is therefore possible to define the thermal retardation coefficient of Equation (5) as the ratio between the effective velocity of groundwater (ve) and that of the heat front (vth):

Rth = ve/vth = [(1 − ne)·ρs·cs + ne·ρw·cw]/(ne·ρw·cw) = ρc/(ne·ρw·cw) (7)

Dx, Dy, and Dz are the longitudinal and transverse thermal dispersion coefficients (m2/s):

Dx = λ/ρc + αL·vth, Dy = Dz = λ/ρc + αT·vth (8)
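To make Equations (2)-(8) concrete, the following minimal numerical sketch evaluates the bulk properties and the resulting heat-front velocity for a sandy aquifer; all parameter values are illustrative examples, not the defaults of this study.

```python
# Illustrative evaluation of Equations (2)-(8) for a sandy aquifer.
# All parameter values are examples, not this study's exact defaults.

n_e   = 0.2      # effective porosity (-)
K     = 1e-3     # hydraulic conductivity (m/s)
i     = 2e-3     # hydraulic gradient (-)
rwcw  = 4.2e6    # thermal capacity of water (J/m3/K)
rscs  = 2.0e6    # thermal capacity of the solid phase (J/m3/K)
lam_w = 0.6      # thermal conductivity of water (W/m/K)
lam_s = 3.0      # thermal conductivity of the solid matrix (W/m/K)
a_L, a_T = 2.0, 0.2  # longitudinal/transverse dispersivity (m)

rc   = n_e * rwcw + (1 - n_e) * rscs    # Eq. (2): bulk heat capacity
v_D  = K * i                            # Eq. (3): Darcy velocity
lam  = n_e * lam_w + (1 - n_e) * lam_s  # Eq. (4): bulk thermal conductivity
v_e  = v_D / n_e                        # Eq. (6): effective velocity
R_th = rc / (n_e * rwcw)                # Eq. (7): thermal retardation
v_th = v_e / R_th                       # velocity of the heat front
D_x  = lam / rc + a_L * v_th            # Eq. (8): longitudinal dispersion
D_y  = lam / rc + a_T * v_th            # Eq. (8): transverse dispersion

print(f"v_D={v_D:.2e} m/s  v_th={v_th:.2e} m/s  R_th={R_th:.2f}")
print(f"D_x={D_x:.2e} m2/s  D_y={D_y:.2e} m2/s")
```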
Model Setup
The objective of the first part of this work is to evaluate and compare the influence of thermal and hydrogeological subsurface properties (i.e., hydraulic conductivity of the aquifer, K; hydraulic gradient, i; effective porosity, n e ; aquifer and vadose zone thickness, b and d, respectively; thermal capacity ρc), as well as plant parameters (i.e., well spacing L), on the time evolution and spatial distribution of the thermal plume originated by a GWHP.
A simple but representative plant configuration was therefore adopted for the simulations, i.e., a well doublet in an unconfined aquifer, where the wells are aligned with the groundwater flow direction and the reinjection well is downstream of the abstraction well. The open-loop geothermal system was modelled with the finite-element code FEFLOW ® 6.2 (DHI-WASY, Berlin, Germany) [34], performing transient coupled flow and heat-transport simulations over an operating lifetime of 50 years. The default dimensions of the 3D modelling domain are 6000 m × 3000 m × 75 m, and variations were introduced only in the mesh thickness. The default vertical discretization is of 15 horizontal layers (16 slices), yielding a triangular mesh of 461,850 elements and 252,144 nodes for the default well configuration (Figure S1 in the Supplementary Materials), with slight variations for the others. A mesh density convergence study was performed (Figure S2 in the Supplementary Materials). The layers crossing the vadose zone, the aquifer, and the upper part of the underlying aquitard (layers 1 to 13) have a constant thickness of 2.5 m. The bottom two layers are thicker (10 m and 30 m, respectively), since this portion is deemed to be negligibly affected by the GWHP.
A simplified 2D cross-section of the model is shown in Figure 1.
The regional groundwater flow was reproduced by assigning constant hydraulic head values (hydraulic boundary condition (BC) of the first kind) on the western (upgradient) and eastern (downgradient) domain borders, based on both the aquifer thickness and the hydraulic gradient. Hydraulic head initial conditions were then assigned by running the model in steady-state, flow-only mode, without pumping wells. Two water wells, one for abstraction and one for reinjection, were implemented in the model as Multi-layer Wells (flow BC of the fourth kind, according to [34]) screened over the total aquifer thickness. The abstraction well was placed at coordinates (x, y) = (1000, 1500) m, in order to allow the thermal plume to propagate entirely inside the domain, while the reinjection well was located downgradient at a selected distance from the abstraction well. The well radius was set equal to 0.25 m, a typical value for wells operating at the flow rates considered in this work.
The thermal balance of the aquifer was reproduced by imposing the undisturbed aquifer temperature (assumed equal to 10 °C) on the upstream side of the domain (first kind BC). On the top slice, the heat exchange with the external air was reproduced imposing a Cauchy boundary condition of the third kind [35], i.e., a heat flux proportional to the difference relative to the above-mentioned reference temperature (10 °C). The geothermal temperature gradient was not taken into account, since its effect is usually negligible at depths between 30 m and 50 m [36][37][38][39]; hence, a null heat flux (second kind BC) was imposed on the lowest slice. The same BC was applied to the remaining domain borders. Consistently with the heat boundary conditions, an initial temperature of 10 °C was assigned to the whole domain.
The operation of the GWHP was simulated by imposing a constant temperature difference (∆T = T_inj − T_abs) of ±4 °C between the injection and abstraction wells (negative when heating and positive when cooling). The flow rate Q(t) varies with the thermal load P(t) exchanged with the groundwater according to the following equation:

Q(t) = P(t)/(ρw·cw·∆T)

The yearly evolution of P(t) follows a sinusoidal trend, as shown in Figure 2. The thermal load is negative when heat is extracted from groundwater, and positive when heat is injected. Details on the model validation are available in the Supplementary Materials.
Using this model setup, a parametric sweep study was performed starting from a default setup and varying one parameter value per simulation. Table 1 reports the complete parameter set, including default values and the ranges considered in this analysis. These parameters are representative of aquifers in which the application of the geothermal system hypothesised in this work could be considered [9][10][11].
The relative weight of each parameter was assessed by calculating the "sensitivity index" (SI), as suggested by Heiselberg et al. [40], defined as:

SI = (Ymax − Ymin)/Ymax

where Ymax and Ymin are the maximum and the minimum values, respectively, of the output variable considered in the analysis, obtained by varying the parameter over the defined range. The output variables considered in our analysis are the length and the width of the thermal plume, as explained later in this section. We assigned a homogeneous horizontal hydraulic conductivity value: the values adopted for the aquifer are shown in Table 1, while the conductivity of the aquitard was set to Kxx = Kyy = 10−7 m·s−1. The vertical hydraulic conductivity is usually lower than the horizontal one, and Kzz = 0.1 Kxx was used, as suggested in [12].
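As a minimal sketch, assuming the SI is normalised by the maximum output value as written above, the index can be obtained directly from the outputs of a parameter sweep; the plume lengths below are invented for illustration.

```python
# Sensitivity index over one parameter sweep; output values are invented.

def sensitivity_index(outputs):
    """SI = (Y_max - Y_min) / Y_max for the swept output variable."""
    return (max(outputs) - min(outputs)) / max(outputs)

plume_lengths = [310.0, 290.0, 255.0, 198.0]  # m, one value per swept setting
print(f"SI = {sensitivity_index(plume_lengths):.2f}")  # -> SI = 0.36
```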
The thermal conductivity of groundwater was kept equal to λ w = 0.6 W·m −1 ·K −1 , while four different values were considered for the solid phase conductivity λ s . Such values were calculated with Equation (4), in order to obtain bulk values of thermal conductivity that are consistent with those reported in the VDI 4640 guidelines for saturated sedimentary lithologies [41].
A similar approach was adopted for the volumetric heat capacity (ρc) of the aquifer, which was assigned using Equation (2) and literature reference values for water-saturated sand and/or gravel [11]. As for the thermal conductivity, a fixed value was set for the fluid phase (ρw·cw = 4.2 MJ·m−3·K−1), while three different values were adopted for the solid phase.
Thermal dispersivity in aquifers has scarcely been studied in the literature. According to De Marsily [41], its value depends on the spatial scale of the observed phenomenon, and several empirical formulae are available in the literature [42]. Therefore, a wide range of longitudinal (α L ) and transverse (α T ) dispersivities were employed in the analysis, assuming α T = 0.1α L .
Besides thermal and hydrogeological parameters, the plant setup also significantly influences the propagation of thermal plumes. The effects of the distance between the two wells (L) and of the maximum injected flow rate (Qmax) were investigated, considering three values of well spacing and four values of the maximum annual water flow rate. The analysed range of total thermal power exchanged through the heat pump was 80-800 kW, thus covering most building heating and cooling applications for GWHPs. Subsequently, further simulations were run to understand how the propagation of thermal plumes differs if time-varying thermal loads (P(t)) are considered, compared to their yearly average Pavg, calculated as follows:

Pavg = (1/ty)·∫0..ty P(t)·dt

where ty is the time span considered (365 days). Different levels of unbalance between the heating and cooling demand were studied, ranging from heating-only to almost perfectly balanced, with the default demand being heating-dominated (Table 1). The ratio of the yearly heating need EH over the total heat exchanged with the aquifer each year (ETOT = 1820 MWh·y−1) ranges between 0.6 and 1, with a default value of 0.75, as depicted in Figure 2. This means that, in the default configuration, the heat extracted in the heating season is 75% of the total heat exchanged, hence three times the heat injected in the cooling season, which is the remaining 25%. For each of these combinations, a simulation imposing the constant yearly average of the thermal load (i.e., a net heat extraction) was run for comparison. This approximation, which is adopted by software such as Groundwater Energy Designer (GED) [33], allows one to drastically reduce the computational time required for the simulations. A perfectly balanced thermal demand (EH = 0.5 ETOT) was not considered because the equivalent yearly average load is null, and would therefore lead to a null thermal impact on the ground.
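As an illustration of how the sinusoidal load translates into the injected flow rate and the yearly average, the following sketch implements Q(t) = P(t)/(ρw·cw·∆T) and Pavg; the amplitude and offset of the sinusoid are assumed example values, not the exact inputs of this study.

```python
# Sinusoidal thermal load, resulting well flow rate, and yearly average load.
# Amplitude and offset are illustrative; sign conventions are simplified by
# taking |P| and |dT| when computing the flow rate.
import numpy as np

rwcw = 4.2e6   # volumetric heat capacity of water (J/m3/K)
dT   = 4.0     # |T_inj - T_abs| imposed between the wells (K)

t_days = np.linspace(0.0, 365.0, 365, endpoint=False)
amplitude, offset = 400e3, 100e3   # W; the offset makes the load
                                   # heating-dominated (net extraction)
P = -(amplitude * np.cos(2 * np.pi * t_days / 365) + offset)

Q = np.abs(P) / (rwcw * dT)   # flow rate from Q(t) = P(t)/(rw*cw*dT), m3/s
P_avg = P.mean()              # yearly average thermal load (W)

print(f"peak flow: {Q.max()*1e3:.1f} L/s; P_avg = {P_avg/1e3:.0f} kW (net extraction)")
```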
Finally, the previous results of plume size for different configurations were compared with those obtained using the analytical formulae for 2D plume evolution problems provided by Banks [22]. Plume length for long distances can be approximated as:

x ≈ vth·t = (ve/Rth)·t

where x is the downstream distance (m) from the reinjection well; vth is the thermal advective velocity, as defined in Equation (7); ve is the groundwater effective flow velocity (m·s−1); and t is the simulation time (s). The width of the plume can be calculated with [22]:

y = Qpl/(b·vD) (14)

where y is the asymptotic plume width (m) perpendicular to the groundwater flow direction, i.e., the maximum width of the well release front, which depends on the maximum flow rate escaping down-gradient Qpl (m3·s−1), the aquifer thickness b (m), and the Darcy velocity vD (m·s−1). Qpl is the fraction of the maximum annual injected flow rate (Qmax) which is not recirculated to the abstraction well and which, therefore, travels down-gradient; it depends on the intensity of thermal recycling between the wells:

Qpl = Qmax·(1 − r(X)) (15)

where r(X) is the recycled fraction of the injected flow, whose expression is given in [22]. The parameter X of Equation (15) allows one to assess whether thermal recycling occurs (if X > 1) and, in this case, how strong it is:

X = 2·Qmax/(π·b·vD·L) (16)

If X ≤ 1, no thermal recycling occurs and Equation (15) can be simplified to:

Qpl = Qmax

In our simulations, the plume dimensions were determined considering the isotherm of alteration equal to 1 °C compared to the initial temperature (T0 = 10 °C), i.e., the isotherm at 9 °C, since heating-dominated thermal loads are used. The plume length was calculated as the downstream distance (m) from the reinjection well, while the width was defined as the maximum extension of the 9 °C isotherm perpendicular to the groundwater flow direction (Figure 3).
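A compact sketch of this simplified screening workflow, under the reconstructed forms of the formulae above, is given below; the parameter values are illustrative, and the recycled-fraction expression r(X) for X > 1 is not reproduced here (see Banks [22]), so only the no-recycling branch is evaluated.

```python
# Screening of plume size with the analytical formulae above; illustrative
# parameter values, no-recycling branch only (r(X) for X > 1 not reproduced).
import math

v_D   = 2e-6       # Darcy velocity (m/s)
n_e   = 0.2        # effective porosity (-)
R_th  = 2.9        # thermal retardation (-), cf. Eq. (7)
b     = 20.0       # saturated aquifer thickness (m)
L     = 100.0      # well spacing (m)
Q_max = 0.005      # maximum injected flow rate (m3/s)
t     = 10 * 365 * 86400   # 10 years of operation (s)

v_e  = v_D / n_e
v_th = v_e / R_th

X = 2 * Q_max / (math.pi * b * v_D * L)  # Eq. (16): recycling occurs if X > 1
if X <= 1:
    Q_pl   = Q_max            # simplified Eq. (15): no recycling
    width  = Q_pl / (b * v_D) # Eq. (14): asymptotic plume width
    length = v_th * t         # long-distance plume length estimate
    print(f"X = {X:.2f}: length ~ {length:.0f} m, width ~ {width:.0f} m")
else:
    print(f"X = {X:.2f} > 1: thermal recycling; see Banks [22] for Q_pl")
```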
Both the length and the width of the thermal plume were calculated at half the depth of the aquifer, where the strongest thermal alterations are observed, as verified for each simulation. Since the flow rate of injected water varies on an annual basis, and the plume size is subjected to oscillations throughout the simulation time, the maximum annual values of plume width and length are plotted to show the time evolution of the plume dimensions.
Sensitivity Analysis
The results of the sensitivity analysis aim at identifying the parameters that have the strongest influence on the dimensions of a thermal plume. Starting from the hydrodynamic parameters, hydraulic conductivity (K) and hydraulic gradient (i), we observe that plume propagation only depends on the thermal velocity (vth = K·i/(ne·Rth), from Equations (3), (6) and (7)). Conversely, the plume size is almost insensitive to the single values of K and i (Figure 4a). Indeed, the hydraulic conductivity only influences the spatial distribution of the hydraulic head and, consequently, of the flow field close to the abstraction and injection wells. This effect is negligible far downstream of the well doublet.
Based on this result, the successive parametric sweep focused on variations of the Darcy velocity vD. As expected, higher values of vD lead to a considerable increase in the plume length and a dramatic decrease in its width (Figure 4b). A one-order-of-magnitude increase in the regional groundwater velocity reduces the width of the thermal plume by more than 60%.
The plume extension also reaches a stationary value in a shorter time when the Darcy velocity increases (Figure 5a), with a trend similar to the Moving Infinite Line Source analytical solution [43]. It is also interesting to note that, as the regional groundwater flow velocity increases beyond a certain value (vD = 1 × 10−5 m/s), the plume length suddenly drops (Figures 4b and 5a). If the Darcy velocity increases further, the plume length starts to increase again. Indeed, if the groundwater velocity is high enough, the thermal plumes produced in each heating and cooling season are separated into two or more smaller plumes which travel downgradient, while cold and warm plumes are merged at lower velocities (Figure 6). In this case, the farthest plume was considered for length determination.
The drop of the plume length at high groundwater velocities can be explained by the fact that the separation of cold and hot plumes results in a more efficient heat exchange with the neighbouring layers (the vadose zone and aquitard); hence, the total length of the plume is reduced. Our results confirm the conclusion drawn in previous studies that the hydrodynamic parameters of the aquifer have a very strong influence on thermal plume size [44][45][46][47]. Hydraulic conductivity is the parameter most subject to uncertainties, depending on the method of determination [12,48]; therefore, it strongly affects the thermal impact quantification of GWHPs.
Unlike the Darcy velocity, the effective porosity has little effect on the plume length, exerted only in the long term (Figure 5b), and does not affect the plume width. Understanding how effective porosity affects the plume dimensions is not straightforward, since different variations of heat transport parameters occur as n e varies. When the effective porosity increases, the thermal capacity of the porous medium ρc generally increases, since the thermal capacity is usually higher for groundwater than for the solid phase (ρ w c w > ρ s c s ).
An increment of n e results in a decrease in the effective velocity of groundwater (v e ); on the other hand, the thermal retardation factor (R th ) also decreases, according to Equation (7). This eventually results in a slight decrease in the thermal velocity v th = v e /R th . Therefore, if we consider only advection, a slight decrease in the plume length would be expected. However, an increase in the effective porosity also causes a reduction of the total thermal conductivity of the porous medium (Equation (4)), since groundwater is usually less conductive than the solid phase (λ w < λ s ), and this makes the plume length increase, as explained later in this section. These contrasting effects are reflected in plume propagation. As long as the plume is expanding in the aquifer and the propagation towards the unsaturated layers is negligible (i.e., at the onset of GWHP operation), the length of the thermally affected zone diminishes as the effective porosity increases. On the other hand, as the plume also starts to expand in the unsaturated layers, the reduction of the thermal conductivity of the porous medium above and below the saturated zone due to a higher porosity value results in a slight long-term increase in the thermal plume length (Figure 5b).
Thermal dispersivity is one of the most influential aquifer properties for the propagation of thermal plumes; however, the scarcity of information and reliable experimental data available on this topic causes significant design issues [49][50][51]. As is evident from Figure 7a, the thermal dispersivity has a considerable effect on plume length, while its effect on plume width is negligible, especially for low dispersivity values. When the value of dispersivity increases, energy dissipation in the aquifer increases, so the plume propagates more slowly along the groundwater flow direction. Furthermore, longitudinal thermal dispersivities below 2 m do not sensibly influence the extension of the plume: we observe a negligible difference between αL = 0.5 m and αL = 2 m. This is an interesting finding for addressing the numerical modelling issue of oscillatory results, which arise when null or very low dispersivity values are adopted in the presence of strong advection [34]. Usually, in these cases, upwinding schemes are used to introduce a numerical dispersion, thus avoiding the oscillation of model results, but also altering them [34]. It is therefore possible to avoid using upwinding schemes by assigning thermal dispersivities of αL < 2 m and αT < 0.2 m; this prevents both oscillatory results and the underestimation of the thermal impact on the aquifer. Values of dispersivity higher than 2 m should be set carefully, since they could lead to a strong reduction of the estimated plume length (e.g., −45% for αL = 50 m relative to αL = 2 m), as shown in Figure 7a. Notably, variations in the longitudinal and transverse dispersivity do not affect the maximum plume width, which mainly depends on the hydrodynamic aquifer properties.
A few numerical simulation studies of open-loop plants in the long-term (in the order of tens of years) have been published. In previous studies, limited to the saturated layer [44] and usually considering a time span up to one year [32,44], the heat conduction mechanism is negligible compared to advection and dispersion. However, when a sufficiently long time is considered, the conductive propagation of heat in the ground also involves the vadose zone above the aquifer and the underlying layers [52]. Figure 7d depicts the effect of such a mechanism on the longitudinal temperature distribution, comparing the assumptions of a no-flux boundary condition (second kind) applied to the ground surface and of the aforementioned reference temperature (third kind BC). The effect of heat exchange with neighbouring layers gets stronger over time (Figure 7b,d) and is negligible in the short term; this explains why previous studies, which considered short operating periods (less than one year [32,44,46]), concluded that heat conduction is negligible in GWHPs.
Conversely, the plume width is mainly affected by the hydrodynamic parameters of the aquifer, while the heat exchange has a negligible effect.
The volumetric heat capacity of the solid matrix has a minor influence on the plume size in the long-term, as shown in Figure 7c. Both the plume width and length slightly decrease as the thermal capacity of the porous medium (ρc) increases. This is due to the capability of a portion of the aquifer to store a greater amount of heat, given the same thermal alteration.
Long-term propagation of thermal plumes is also influenced by the thickness of the saturated and vadose zones, which both play a substantial role in the expansion of the thermal plume. As the saturated thickness increases (Figure 8a), the plume takes longer to reach the top of the aquifer and exchange heat with the unsaturated zone. Hence, the conductive exchange with neighbouring layers occurs later and heat remains confined for a longer period of time in the aquifer. This implies that the plume propagates considerably further along the groundwater flow direction, as confirmed by the analytical solutions of a 2D plane-symmetric transient heat transport problem reported by Tan et al. [53].
Similarly, an increase in the unsaturated thickness (d) above the aquifer causes a significant increase in the plume length, while the plume width is insensitive to the depth of the saturated zone (Figure 8b). Indeed, heat transport across the unsaturated layers towards the atmosphere is conductive, and a thicker vadose zone has a higher thermal resistance, which limits this heat transfer mechanism. Figure 8b shows that the effect of vadose zone thickness is visible as the thermal alteration approaches the ground surface. For example, after five years, the plume length curve for d = 10 m starts to diverge from the others, and the same occurs after 10 years for the curve with d = 20 m. This study, therefore, confirms the importance of depth to water table and aquifer thickness on the migration of the thermal plume, as was previously reported [11,19].
To conclude the sensitivity analysis, some plant settings also have an appraisable influence on the propagation of thermal plumes in GWHPs.
The thermal alteration at the injection well depends on the temperature of the re-injected water, which in turn strongly depends on the distance L between the injection and abstraction wells. As shown in Figure 9a, an increase in the distance L results in a decrease in the plume length and an increase in its width. This is due to a reduction of the recycled flow rate (see Equations (15) and (16)), which results in a reduction of the thermal alteration at the reinjection well (which influences the length of the plume), and an increase in the flow rate released downstream of the well doublet (Q_pl), thus increasing the plume width (see Equation (14)). For the considered plant configuration, no thermal recycling occurs when the injection well is located 100 m downstream of the abstraction well: hence, no variations in the plume dimensions are expected over this value of L.
The injected-water flow rate and, therefore, the thermal power exchanged with the ground have a very strong influence on thermal plume propagation (Figure 9d). The plume length has a roughly logarithmic correlation with the flow rate, and an increase in Q_max by a factor of ten results in a fourfold increase in the plume length. On the other hand, the width of the thermally affected area increases almost linearly with the injected flow rate (Figure 9c), as expected from Equation (14). The thermal alteration induced downstream of the injection well during each heating season interacts with the plume generated during the previous cooling season, determining the temperature distribution in the ground. For this reason, keeping the same total heat exchanged (E_TOT = E_H + E_C) and varying the proportion between the heating (E_H) and cooling (E_C) demands results in different thermal plume dimensions. As expected, the higher the ratio E_H/E_TOT, the longer and the wider the cold thermal plume originated by the well doublet (Figure 9d).
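Equation (14) is not reproduced in this excerpt. As a minimal sketch, assuming the standard advective mass-balance estimate for the width of an injection release front in a uniform flow field, W ≈ Q/(b·v_D), which is consistent with the near-linear dependence on the injected flow rate and the inverse dependence on transmissivity described above, the scaling can be illustrated as follows (all values are illustrative):

```python
# Hedged sketch: steady-state advective plume width downstream of an
# injection well in a uniform flow field. The assumed form W = Q / (b * vD)
# matches the linear Q-scaling and inverse-transmissivity trend described
# in the text; the paper's Equation (14) may differ in detail.

def plume_width(q_inj, b, v_darcy):
    """Asymptotic width [m] of the release front.

    q_inj    : injected flow rate [m^3/s]
    b        : saturated aquifer thickness [m]
    v_darcy  : Darcy velocity [m/s]
    """
    return q_inj / (b * v_darcy)

# Illustrative values (not from the paper): 10 L/s injected into a 20 m
# thick aquifer with vD = 1e-6 m/s.
print(f"W = {plume_width(0.010, 20.0, 1e-6):.0f} m")   # -> W = 500 m
```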
Finally, the relative importance of the analysed parameters for thermal plume size after 50 years of plant operation was assessed by means of the Sensitivity Index described by Equation (11). Among the hydrogeological and thermal parameters of the ground, the Darcy velocity v_D is the most influential for plume propagation, both in the longitudinal and the transversal direction (Figure 10). A strong impact on the plume length is also exerted by the dispersivity α_L and the thermal conductivity of the solid matrix λ_s, while the porosity n_e and the volumetric heat capacity of the solid matrix ρ_s c_s have a negligible effect on thermal plume size. Strong variations in the plume length and width were observed while varying geometry parameters, such as the aquifer thickness b and the well spacing L, whereas the depth to the water table d is only relevant for plume length. The most important parameter for plume propagation in the long term is the injected water flow rate Q_max, followed by the ratio of the yearly heating demand over the total demand of the heat pump E_H/E_TOT.
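Equation (11) is likewise not reproduced here; a common definition of a normalized sensitivity index, assumed in the sketch below, is the ratio of the relative output variation to the relative input variation in a one-at-a-time perturbation:

```python
def sensitivity_index(y_plus, y_minus, y_ref, x_plus, x_minus, x_ref):
    """Normalized sensitivity index (assumed form; the paper's Equation (11)
    is not reproduced in this excerpt): relative output variation divided by
    relative input variation, from a one-at-a-time parameter perturbation."""
    return ((y_plus - y_minus) / y_ref) / ((x_plus - x_minus) / x_ref)

# Illustrative numbers (not from the paper): plume length rises from 480 m
# to 560 m (reference 520 m) when the Darcy velocity is varied from 5e-7
# to 2e-6 m/s (reference 1e-6 m/s).
si = sensitivity_index(560.0, 480.0, 520.0, 2e-6, 5e-7, 1e-6)
print(f"SI = {si:.2f}")
```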
Detailed results of this parametric study for plume length and width are reported in Tables S1 and S2 of the Supplementary Materials.
Comparison with Simplified Models
Transient numerical models are the most precise and rigorous tool for the simulation of the subsurface thermal alteration induced by a GWHP. However, the expense and the time required to perform such simulations are usually very large. Two alternative simplified methods were therefore assessed: numerical simulation with a constant thermal load (equal to the yearly average value, as from Equation (12)) and analytical formulae available for thermal plume length and width evaluation (Equations (13) and (14)) [22]. By imposing a constant thermal load, the computational time for the simulations can be reduced from several hours to a few minutes. Analytical formulae allow fast and inexpensive evaluations of the thermal impact of GWHPs, although they neglect thermal dispersion and conduction.
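As a minimal illustration of the constant-load simplification (the precise content of Equation (12) is not shown in this excerpt; a plain time average of the signed monthly loads is assumed), the following sketch collapses a seasonal heating/cooling profile into a single equivalent load and also extracts the ratio E_H/E_TOT used below:

```python
# Monthly thermal loads in kW: positive = heat injected into the aquifer
# (cooling season), negative = heat extracted (heating season).
# Values are illustrative, not taken from the paper.
monthly_load_kw = [-120, -110, -60, -10, 30, 90, 130, 120, 60, 0, -50, -100]

# Constant-load simplification: replace the transient series by its yearly
# average (assumed to be the content of Equation (12)).
constant_load_kw = sum(monthly_load_kw) / len(monthly_load_kw)
print(f"equivalent constant load: {constant_load_kw:+.1f} kW")

# The heating/cooling balance ratio E_H / E_TOT, from the same series:
e_heat = -sum(q for q in monthly_load_kw if q < 0)
e_cool = sum(q for q in monthly_load_kw if q > 0)
print(f"E_H/E_TOT = {e_heat / (e_heat + e_cool):.2f}")
```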
We used these two approaches to calculate plume length and width at time t = 50 years, comparing the results with those of the numerical model reported in the previous paragraph in order to assess whether the approximations introduced by these methods are acceptable or not. As shown in Figure 11, simulations performed with a constant thermal load introduce an error in the values of both the plume length and width.
A remarkable result is that, for heating-only systems (i.e., E_H/E_TOT = 1), the plume length is practically the same but, as the heating-cooling imbalance of the thermal load diminishes, the discrepancy increases (Figure 12) by up to about 30% for E_H/E_TOT = 70% (which is a fairly acceptable error). As the thermal load approaches the perfect balance between heating and cooling, i.e., E_H/E_TOT = 0.5, the error introduced by this approximation rapidly increases to unacceptable levels.
Figure 11. Relative error on plume length (a) and width (b), evaluated with a constant thermal load simulation and analytical formulae, compared to a variable thermal load simulation, for the considered range of parameters; isotherm ∆T = 1 °C, t = 50 years.
The yearly averaged load usually leads to an overestimation of the plume length, thus proving a conservative approach, but also to an underestimation of the plume width (Figure 11b). This is due to the fact that the effect of peak flow rates on the width of the injection well release front is not considered. The tested analytical formula largely overestimates the long-term plume length (Figure 11a), since the advective model described by Equation (13) neglects thermal dispersion and conduction. On the other hand, the plume width can conservatively be estimated with Equation (14), with an acceptable overestimation of 30-60%. Underestimation of the plume width occurs only for the highest considered value of v_D (2 × 10⁻⁵ m/s), because of the plume separation effect and the increased heat dispersion. Hence, the equations proposed by Banks ([22], Equations (13) to (16)) provide a good and conservative estimation of the plume width, while they cannot be used to assess the longitudinal extension of the thermally affected area.
For detailed results of the methods comparison study, for simulations at 10 years and 50 years, we refer the reader to Tables S1 and S2 of the Supplementary Materials.
Conclusions
Assessing the long-term subsurface thermal impact of open-loop geothermal systems is of crucial importance for their design and their proper spatial planning. Various numerical models and analytical solutions can be used to estimate the thermal plume produced by a GWHP, each characterized by different costs, degrees of complexity, and precision.
This work presents a parametric study through numerical flow and heat transport simulations carried out to identify key parameters and compare their relative influence within their typical ranges of variation. Moreover, simplified numerical modelling with a constant thermal load and analytical solutions for 2D plume propagation in an aquifer were investigated to quantify the discrepancy compared to transient numerical modelling.
Among the hydraulic and thermal subsurface parameters, the most influential for both plume length and width is the Darcy velocity (v D ), i.e., the product of hydraulic conductivity (K) and gradient (i). In this light, it is very important to determine the hydraulic conductivity with reliable in-situ tests, in order to perform realistic simulations and avoid major errors in plume size estimations.
The thermal properties of the ground affect the plume length but have a negligible influence on its width. In particular, thermal conductivity (λ) and dispersivity (α) of the ground play an important role in the long-term, while variations of heat capacity (ρc) have minor effects on plume size.
Depth to water table (d) and aquifer thickness (b) are also key factors in the propagation of thermal plumes, thus confirming the necessity of a 3D modelling geometry. Plume length dramatically increases with the thickness of the saturated and unsaturated zones, as thermal exchange with air is reduced, and propagation of the thermal plume is confined to the saturated zone. On the other hand, an increase in the saturated thickness results in higher transmissivity and, consequently, in a reduction of the plume width with an almost linear inverse trend.
The distance between abstraction and injection wells (L) noticeably influences the width of the thermal plume, while its impact on its longitudinal extension is lower, though not negligible.
Regarding thermal loads, a logarithmic increase in the plume length with the abstracted/injected flow rate (and consequently with the thermal power) is observed, while the correlation is almost linear for the plume width. The balancing of the heating and cooling demand of the building appreciably affects the size of the thermal plume, which reaches its maximum for a fully heating-dominated (or cooling-dominated) demand and its minimum for a perfectly balanced load.
The assessment of the thermal plume adopting a constant thermal load (equal to its yearly average) is not reliable at high groundwater velocities, due to the separation of plumes originated during heating and cooling seasons. In such cases, a simulation with a time-varying thermal load is required.
On the other hand, for heating or cooling-dominated loads, constant load simulations provide a good approximation of the thermal plume lengths. In this way, the computational time is drastically reduced, and simpler and cheaper software can be used. For balanced or slightly unbalanced thermal loads, however, the length of the thermal plume is largely overestimated using the constant load approach.
Finally, analytical formulae for 2D advective propagation provided by Banks [22] can be used to quickly assess the width of thermal plumes, while they provide unrealistically high estimates of the plume length.
We provide useful insights for the evaluation of the thermal impact of GWHPs on aquifers, and hence for their planning in densely inhabited areas. The key parameters are identified, along with the error margins deriving from a poor characterisation of the site.
The effects of simplifying assumptions, such as using the yearly average of the thermal load or adopting available analytical solutions, are also analysed, thus defining under what conditions such methods can be used with acceptable accuracy.
Supplementary Materials: The following are available online at www.mdpi.com/1996-1073/10/9/1385/s1, Figure S1: Mesh adopted in the analysis, abstraction and injection wells in red (252k nodes), Figure S2: Mesh convergence study results: thermal plume length (a) and width (b) after 50 years, for different levels of mesh refinement, Table S1: Length of the thermal plume after 10 years and 50 years of operation, calculated with different methods-parametric study, Table S2: Width of the thermal plume after 10 years and 50 years of operation, calculated with different methods-parametric study. | 13,287.4 | 2017-09-12T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Resummation of non-global logarithms and the BFKL equation
We consider a `color density matrix' in gauge theory. We argue that it systematically resums large logarithms originating from wide-angle soft radiation, sometimes referred to as non-global logarithms, to all logarithmic orders. We calculate its anomalous dimension at leading- and next-to-leading order. Combined with a conformal transformation known to relate this problem to shockwave scattering in the Regge limit, this is used to rederive the next-to-leading order Balitsky-Fadin-Kuraev-Lipatov equation (including its nonlinear generalization, the so-called Balitsky-JIMWLK equation), finding perfect agreement with the literature. Exponentiation of divergences to all logarithmic orders is demonstrated. The possibility of obtaining the evolution equation (and BFKL) to three-loop is discussed.
Introduction
Collimated sprays of particles, or jets, figure prominently in high-energy collider physics. This has led to a growing interest in the characterization of jet shapes and event shapes, with the goal to extract as much information as possible about underlying hard scattering events. The pencil-like nature of jets implies that one often encounters disparate angular and energy scales. These lead to large logarithms in theoretical calculations, whose resummation is necessary to obtain controlled, precise predictions. Theoretically, in analytic studies these large logarithms are often the only terms which one may hope to predict in an amplitude or cross section at higher orders in perturbation theory, and thus could potentially help reveal new structures. Both of these reasons make them especially important.
Thanks to developments spanning many years, resummation for most observables of interest is now possible. In the case of so-called global observables, which involve complete ('global') integrals over final state phase spaces, a critical ingredient is the exponentiation of infrared and collinear divergences [1][2][3][4][5][6]. This predicts in a quantitative way the logarithms left after the cancellation of infrared and collinear divergences, guaranteed on general grounds by the Kinoshita-Lee-Nauenberg (KLN) theorem [7,8]. There exist, however, non-global observables, for which phase space cuts lead to soft radiation not being integrated over all angles ('not globally'), and which are considerably more difficult to resum [9,10].
The aim of this paper is to propose a comprehensive theory of non-global logarithms, valid to all logarithmic orders and at finite N_c. This theory will be closely related to the BFKL theory, which controls another class of difficult logarithms in the Regge limit (high-energy scattering at fixed momentum transfer) [11,12].
To set the stage we consider a generic weighted cross-section of the form
σ = Σ_n ∫ dΠ_n |A_{Q→n}(p_1, ..., p_n)|² u({p_i})    (1.1)
where dΠ_n is the phase space measure for n partons. The measurement function u({p_i}) specifies the details of the measurement, including various vetoes etc. For suitable infrared- and collinear-safe choices the cross-section will be finite order by order in perturbation theory. As a preliminary simplification (to avoid initial state radiation), in this paper we will assume that the initial state is a color-singlet state of mass Q, and assume massless final states. A time-tested strategy to resum large logarithms is to introduce intermediate, divergent, matrix elements, which are to be renormalized at a suitable factorization scale. The template is Wilson's operator product expansion, which expresses the short-distance limit of a correlator in terms of short-distance OPE coefficients, anomalous dimensions, and long-distance matrix elements. The factorization scale µ, whose dependence is controlled by anomalous dimensions, cancels between the OPE coefficients and matrix elements and provides a handle on large logarithms. Our main proposal is that the pertinent operator for resumming non-global logarithms is the color density matrix:
σ[U] = Σ_n ∫ dΠ_n A_{Q→n}^{a_1...a_n}({p_i}) U_{a_1 b_1}(θ_1) ··· U_{a_n b_n}(θ_n) [A_{Q→n}^{b_1...b_n}({p_i})]*    (1.2)
We call it a density matrix because it is linear in both the amplitude and its complex conjugate. It is a functional of a continuous field of unitary matrices U_ab(θ), which depend on a two-dimensional angle and live in the adjoint representation of the gauge group. Pictorially, U, shown in fig. 1, is a (local) color rotation between the matrix element and its conjugate. A closely related construction has been used to describe parton showers at finite N_c [13]. The physical motivation for eq. (1.2) is that the information carried by σ[U] is clearly necessary to fully characterize the distribution of soft gluons. Since soft radiation can be triggered by any other colored parton with a higher energy, keeping track of color charges in every direction, as σ[U] does, indeed seems unavoidable. The information in σ[U] is also intuitively sufficient: due to coherence effects, soft gluons are affected by the color charge carried by harder partons but generally not by other details.
Contrary to the original weighted cross-section, the density matrix σ[U] is infrared divergent. We propose, and will demonstrate, that these infrared divergences exponentiate in terms of a well-defined anomalous dimension. This supports our claim that the information in σ[U] is sufficient. After cancelling these divergences (see eq. (2.7)), the 'renormalized' density matrix then depends on the factorization scale through a renormalization-group equation, eq. (1.3). The anomalous dimension or Hamiltonian K assumes the form of a functional differential operator. Its one-loop expression, computed in eq. (2.14), reproduces earlier formulas derived in the literature to deal with non-global logarithms [10,14].
Structure of the resummation
For concreteness, we briefly sketch how the formalism would be applied in a complete calculation of a specific non-global event shape, the (cumulative) hemisphere mass function, at finite N_c. The hemisphere mass function is the distribution of the invariant masses in the left- and right-hemispheres of a decaying color-singlet state of mass Q (for example, a Z-boson). In the di-jet limit m_L, m_R ≪ Q, the observable simplifies to the form given in eq. (1.4), where θ_{L/R}(i) = θ(∓p^z_i) projects onto the respective hemisphere. Logarithms of X/Q and Y/Q are global and can be resummed using standard techniques (see ref. [15] and references therein). When X ≪ Y ≪ Q, R receives additional logarithms log(Y/X) due to the phase space cut between the two hemispheres. These are the non-global logarithms.
Figure 1. Color density matrix. For each colored final state, an independent color rotation is applied between the amplitude and its complex conjugate.
To resum them we take Y as the hard scale and X as the soft scale. At the hard scale we drop the X-dependent step function, and define a density matrix involving only Y, eq. (1.5). This is of the form of eq. (1.2). Infrared divergences introduced by the U matrices exponentiate, and renormalizing at the scale µ = Y (see eq. (2.7)) removes large infrared logarithms. Concretely, in perturbation theory, σ_ren is polynomial in U's and can be viewed as a bookkeeping device encoding the orientations of radiated particles. It starts with the dijet term, σ[U] = U_{nn̄} (1.6), with n, n̄ null vectors along the positive and negative z directions. To resum logarithms log(Y/X) we now use the RG equation (1.3) to run σ_ren down to the scale X, where we deal with the infrared part of the measurement. In the leading-log approximation, the θ(X − ...) factor in the observable effectively removes left-hemisphere radiation between the scales X and Y (except within a cone of small radius R ∼ X/Q centered around the left jet). The IR measurement can thus be phrased in terms of an averaging, eq. (1.8). The step function θ̄_R(i) appearing there equals one inside the right hemisphere and inside the left cone of opening angle R, thus allowing real radiation in those regions. It is important that the averaging depends only on angles, since at this stage information about energies is lost.
In principle both the UV and IR measurements (1.5), (1.8) are to be computed order by order in perturbation theory. Resummation is considered successful if these series contain no large logarithms. Since the Hamiltonian K is only meant to resum soft logarithms between X and Y, the UV and IR measurement functions actually do contain further logarithms of Q and R, but these should be resummed using other, conventional, means. Comparison of this procedure with the leading-log prescriptions of refs. [9,10,16] is discussed in section 2. In keeping with the logic of factorization, in this paper we will concentrate on the (universal) evolution K and leave calculations of (process-dependent) measurement functions to future work.
A remarkable fact about K is that it is also (essentially) the Balitsky-Fadin-Kuraev-Lipatov (BFKL) Hamiltonian, that is, the boost operator of the theory in the high-energy limit. The same Hamiltonian K thus simultaneously governs non-global logarithms and the Regge limit. This was observed mathematically from the one-loop expressions in refs. [10,14]. A general explanation is available, however, using a conformal transformation, which extends to higher loop orders [17]. One thus anticipates the difference in QCD to be at most proportional to the β-function.
Since this correspondence will play a prominent role, it is helpful to include a rough explanation here. High-energy forward scattering (for example the elastic pp → pp amplitude) amounts to taking an instantaneous snapshot of a hadron's wavefunction, so pictorially it measures the amplitude for a virtual shower to form inside the hadron and then recombine. This is illustrated in fig. 2(a). This is also roughly what the density matrix σ_{Q→(···)}[U] of a decaying virtual hadron measures. Importantly, however, one measurement is instantaneous while the other takes place at infinity. To relate them requires the nontrivial conformal transformation of ref. [17].
In this correspondence with the Regge limit, the color rotations in σ[U ] implement the shockwave of the Balitsky-JIMWLK framework [18][19][20][21]. Here the 'shockwave' is inserted at infinity between the matrix element and its complex conjugate. This was our original motivation for defining eq. (1.2). (Mathematically similar considerations were used in refs. [22,23] to exploit the conformal symmetry of the BFKL equation.) The UV and IR measurements represent, respectively, the projectile and target impact factors.
The aim of this paper is to analyze the properties of the Hamiltonian K and to calculate it explicitly to the next-to-leading order. The lessons learned from this calculation will then lead to an immediate proof of exponentiation. As a cross-check of the calculation we will compare against results obtained in the context of Regge limit scattering. This paper is organized as follows. In section 2 we review known facts regarding the exponentiation of infrared divergences and factorization of soft emissions. We illustrate the formulas by giving the leading terms in perturbation theory for the various ingredients. We also verify that the procedure around eq. (1.8) reproduces the established resummation of non-global logarithms at leading-logarithm order. In section 3 we perform the two-loop calculation. A key finding will be the possibility to express all terms in K as finite integrals over well-defined, finite and gauge-invariant building blocks. The final result is recorded in subsection 3.6. In section 4 we compare our result for K against the two-loop BFKL equation. We will find perfect agreement in conformal theories, with, as expected, a relatively compact correction term proportional to the β-function in QCD. In section 5, using the lessons learned from the two-loop calculation, we derive formal expressions for K at three-loop and beyond, and demonstrate exponentiation in general. Conclusions are in section 6. A technical appendix reports complete details of our evaluation of the real-virtual contributions at two-loop.
Conventions and review
To set our conventions we now review the exponentiation of infrared divergences and give explicit formulas for the relevant objects at one loop. We also discuss the resummation of non-global logarithms at leading-log.
Soft factorization
As described in refs. [4][5][6], the exponentiation of infrared and collinear divergences is controlled by a soft anomalous dimension:
M_n = P exp[ −∫_0^µ (dλ/λ) Γ_n(α_s(λ²)) ] H_n({p_i}; µ).    (2.1)
For a gentle introduction we refer to ref. [24]. The infrared-renormalized amplitude H_n is also called the hard function and is finite (in this paper we use only ultraviolet-renormalized amplitudes); its dependence on the factorization scale µ is specified in eq. (2.2). It is important to note that since Γ_n acts as a matrix in the space of color structures, the path-ordering symbol cannot be omitted.
We work in D = 4−2ε dimensions, where the coupling constant depends on the scale through the running equation
µ dα_s/dµ = −2ε α_s − (b_0/2π) α_s² + O(α_s³),    b_0 = (11 C_A − 4 T_F n_F − T_F n_S)/3,
in a theory with n_F flavors of Dirac fermions and n_S complex scalars (in QCD C_A = 3 and T_F = 1/2). Its solution resums the running logarithms; the integral in (2.1) converges and produces the desired 1/ε poles provided that ε is negative enough that the coupling vanishes in the infrared.
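To make this statement concrete, the following numerical sketch integrates the one-loop running quoted above and shows that for ε < 0 the coupling indeed flows to zero in the infrared, so that the λ-integral in (2.1) converges at its lower end (the normalization conventions are standard ones and not necessarily the paper's):

```python
# One-loop running of alpha_s in D = 4 - 2*eps dimensions (assumed
# normalization: d(alpha)/dln(mu) = -2*eps*alpha - b0/(2*pi)*alpha^2).
import math

def b0(CA=3.0, TF=0.5, nF=5, nS=0):
    # One-loop beta coefficient for nF Dirac fermions, nS complex scalars.
    return (11.0 * CA - 4.0 * TF * nF - TF * nS) / 3.0

def run_alpha(alpha0, mu0, mu, eps, steps=10000):
    """Integrate the running from mu0 to mu with a simple Euler stepper."""
    t, t_end = math.log(mu0), math.log(mu)
    dt = (t_end - t) / steps
    a = alpha0
    for _ in range(steps):
        a += dt * (-2.0 * eps * a - b0() / (2.0 * math.pi) * a * a)
    return a

# With eps < 0 the coupling vanishes in the infrared, hence the scale
# integral over d(lambda)/lambda in (2.1) converges at its lower end.
for mu in (1.0, 1e-2, 1e-4):
    print(f"mu = {mu:8.0e}  alpha_s = {run_alpha(0.2, 1.0, mu, eps=-0.1):.4f}")
```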
In the literature Γ_n is often written as being ε-independent, which defines minimal subtraction schemes. We keep the more general notation since it will be useful to consider non-minimal schemes. As long as Γ_n remains finite as ε → 0, different schemes are related by finite renormalizations of H_n.
Amplitudes with m soft gluons are also known to factorize in a simple way [6,25], up to power corrections in the soft momenta, provided that all of {k_1, ..., k_m} are softer than the other scales in M. We have only indicated the color and Lorentz indices of the soft gluons (to be contracted with polarization vectors). Since H_n is the same as in eq. (2.1), this formula states that soft gluons can be 'tacked onto' an amplitude without recomputing it.
Similarly to Γ_n, the soft currents S_m are matrices in the space of color structures. According to eq. (2.2) they are finite, and their factorization scale dependence is given in eq. (2.6). Our main proposal is that the color density matrix admits a similar factorization,
σ[U] = P exp[ −∫_0^µ (dλ/λ) K(α_s(λ²)) ] σ_ren[U; µ].    (2.7)
Leading-order expressions
The tree-level emission of one soft gluon is controlled by Weinberg's well-known soft current,
S^{aµ}(k) = g Σ_i (β_i^µ / β_i·k) R_i^a,    (2.8)
where R^a_i is the operator which inserts a color generator on leg i and β^µ_i = (1, v_i)^µ is a null vector proportional to p_i. These obey [R^a_j, R^b_k] = i f^{abc} δ_{jk} R^c_j, and our normalizations are such that Tr[T^a T^b] = ½ δ^{ab} and Tr[1] = N_c. The soft anomalous dimension at one loop, Γ^{(1)}, is the standard dipole formula of eq. (2.9) [4], where the collinear anomalous dimensions are γ^{(1)}_q = −3C_F for quarks. We loop-expand all quantities using a uniform notation, with one power of the (suitably normalized) coupling per loop order. To find K at one loop it suffices to compute σ[U] to that accuracy and compare the divergence with eq. (2.7). The real emission contribution to σ[U] has an infrared divergence when an additional gluon is emitted at a wide angle, as shown in fig. 3(a). It is given as the square of the soft current (2.8), eq. (2.12), where the pole accompanying each dipole kernel is the (infrared) pole from the energy integral. Here the sums run over the U-matrices present in σ[U] (which at finite order in perturbation theory is a polynomial), and we use the abbreviation U^{ab}_k = U^{ab}(β_k). The notation R^a_i here is slightly different from, but compatible with, that used in eq. (2.8): R^a_i is the operator which replaces U_i with U_i T^a. Similarly L^a_i, representing the color charge in the complex conjugate amplitude, replaces U_i with T^a U_i. These obey the commutation relations of eq. (2.13). The virtual corrections (fig. 3(b)) generate products of the type LL and RR with no extra U. An important constraint is that σ_ren[U^{ab} = δ^{ab}] must be evolution-invariant, since this corresponds to the total cross-section, which is finite by the KLN theorem. Equivalently, K must vanish when U is the identity field. This unambiguously determines the LL and RR terms. Using identities such as L^a_i = U^{ab}_i R^b_i, which in particular yield L_i = R_i when U^{ab}_i = δ^{ab}, the (unique) solution is easily seen to be
K^{(1)} = ∫ (d²Ω_0/4π) Σ_{ij} [α_ij/(α_{i0} α_{0j})] ( L_i^a U_0^{ab} R_j^b − ½ L_i^a L_j^a − ½ R_i^a R_j^a ),    (2.14)
with α_ij = 1 − cos θ_ij and an overall normalization fixed by the loop-expansion convention. This gives the complete scale dependence of the density matrix σ_ren[U], including non-planar effects (and therefore, by the expected factorization, any non-global logarithm at leading-log).
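The color-insertion algebra quoted above is easy to verify numerically; the following sketch checks, for SU(2) in the adjoint representation, that the left insertions L^a close on the algebra and commute with the right insertions R^a (the relative ordering convention for products of insertions is a choice and may differ from the paper's):

```python
# Numerical check of the L/R color-insertion algebra for SU(2) adjoint.
# L^a(U) = T^a U and R^a(U) = U T^a act on vec(U); here (T^a)_{bc} = -i eps_{abc}.
import numpy as np

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

T = [-1j * eps[a] for a in range(3)]          # adjoint generators
I3 = np.eye(3)

# vec(T^a U) = (I kron T^a) vec(U),  vec(U T^a) = ((T^a)^T kron I) vec(U)
L = [np.kron(I3, T[a]) for a in range(3)]
R = [np.kron(T[a].T, I3) for a in range(3)]

def comm(x, y):
    return x @ y - y @ x

for a in range(3):
    for b in range(3):
        rhs = 1j * sum(eps[a, b, c] * L[c] for c in range(3))
        assert np.allclose(comm(L[a], L[b]), rhs)   # [L^a, L^b] = i f^{abc} L^c
        assert np.allclose(comm(L[a], R[b]), 0.0)   # left/right insertions commute
print("L-algebra and [L, R] = 0 verified")
```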
We review a few known facts about this equation.
• Taking the 't Hooft planar limit N_c → ∞ with λ = g²N_c fixed, eq. (2.14) becomes a closed equation for the dipole U_ij = (1/N_c) Tr U_i U†_j, eq. (2.15). Using simple color identities this reduces to a closed nonlinear equation of the form
µ (d/dµ) U_ij ∝ ∫ (d²Ω_k/4π) [α_ij/(α_{ik} α_{kj})] ( U_ik U_kj − U_ij ).    (2.16)
This is the Banfi-Marchesini-Smye (BMS) equation governing non-global logarithms in the planar limit [10] (a toy Monte-Carlo illustration of this evolution is sketched after this list of remarks). Let us be more precise. As stated in the introduction, the functional RG equation (1.3) is to be integrated from the UV to the IR, starting from e.g. the dijet initial condition σ[U] = U_{nn̄} (1.6). In the IR one performs the average (1.8). In the planar limit, the averaging reduces to evaluating the functional at one point, σ[U_ij = θ̄_R(i)θ̄_R(j)], so the procedure is equivalent to evolving the argument of the functional from the IR to the UV, e.g. the function U_ij. This is how the relevance of eq. (2.16) arises.
(A term of this type would also satisfy the KLN theorem and preserve the reality of σ provided that its coefficient is imaginary; the imaginary part of the explicit expression (2.9), however, shows that f_ij ∝ iπ is constant and thus cancels out using color conservation.) In addition, compared with ref. [26], which deals with the hemisphere function, the two forms differ only by step-function factors multiplying the U matrices here; the step-function factors are stable under evolution. At leading-log, collinear divergences exponentiate independently, so the R term in θ̄_R in eq. (1.8) does not interfere with the non-global part.
• Away from the planar limit, eq. (2.14) coincides with the generalization of the BMS formula derived in ref. [14]. Again, as in the BMS case, the two forms differ only by multiplication of the U matrices by step functions, which commute with the evolution. The averaging procedure (1.8), done in the infrared, is as in ref. [14].
• The double-sum notation in eq. (2.14) is most natural in a perturbative context where σ[U] is a polynomial in the U's. Since the evolution increases the number of U's, for solving the equation it can be better to view K as a functional differential operator acting on σ[U]. This is achieved by the simple substitutions of eq. (2.17) (done after normal-ordering all L, R's to the right of U's) [27,28]. These L^a_i and R^a_i obey the same commutation relations as those defined previously, and in fact after substitution into eq. (2.14) one finds the same action on any polynomial σ[U]. This reveals eq. (2.14) as a functional second-order differential equation of the Fokker-Planck type, whose solution can be importance-sampled via lattice Monte-Carlo techniques [27,28]. For studies of 1/N_c effects in the context of non-global logarithms, see [16,29].
• Also well-studied is the weak-field regime where all matrices are close to the identity.
Following ref. [30] and references therein, we write U_j = e^{igT^a W^a_j} and expand (2.17) in powers of W. This can be streamlined using the Baker-Campbell-Hausdorff formula, which gives the expansion in eq. (2.18). Plugging this into the one-loop Hamiltonian yields, after a small bit of algebra (only the first two terms contribute), the linearized kernel of eq. (2.19), up to nonlinear terms of the form δK ∼ g⁴ W⁴ δ²/δW². This is one form of the one-loop BFKL equation and its ('BJKP') multi-Reggeon generalization [11,12], valid for color-singlet states (see ref. [30] and references therein). It acts on functionals σ[W] where W^a is identified as the Reggeized gluon field. This identification will play a useful role later in this paper. • Finally, we did not prove in this subsection that divergences do exponentiate according to eq. (2.7). We simply read off the exponent from a one-loop fixed-order calculation. Proofs to leading-logarithm accuracy are in refs. [10,14] and an all-order demonstration is given in section 5.
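The toy Monte-Carlo announced above: at large N_c the evolution (2.16) can be sampled with the familiar dipole-cascade picture, in which each dipole emits with the eikonal angular kernel and an emission into the vetoed hemisphere terminates the event. The sketch below is a deliberately crude illustration of that structure, with fixed coupling absorbed into the evolution time, a hard collinear cutoff, and no attempt to disentangle the cutoff-dependent global piece from the non-global one; it is not a quantitative implementation of eq. (2.16).

```python
# Toy large-Nc dipole cascade with a left-hemisphere veto. All choices
# (collinear cutoff eps_c, evolution horizon t_max, uniform trial
# directions) are illustrative.
import math, random

def rand_dir():
    z = random.uniform(-1.0, 1.0); phi = random.uniform(0.0, 2 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def dot(u, v): return sum(a * b for a, b in zip(u, v))

def w_dipole(a, b, k):
    # Eikonal angular kernel alpha_ab/(alpha_ak alpha_kb), alpha_ij = 1 - cos(theta_ij)
    return (1.0 - dot(a, b)) / ((1.0 - dot(a, k)) * (1.0 - dot(k, b)))

def veto_time(eps_c=0.1, t_max=0.2):
    """Evolution 'time' of the first emission into the left hemisphere
    (k_z < 0); returns t_max if none occurs before then."""
    dipoles = [((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))]   # initial n-nbar dipole
    w_max = 2.0 / eps_c ** 2        # kernel bound outside the cutoff cones
    t = 0.0
    while True:
        # Veto algorithm: trial emissions at overestimated rate len(dipoles)*w_max
        t += random.expovariate(len(dipoles) * w_max)
        if t >= t_max:
            return t_max
        a, b = random.choice(dipoles)
        k = rand_dir()
        if 1.0 - dot(a, k) < eps_c or 1.0 - dot(k, b) < eps_c:
            continue                 # trial inside collinear cutoff: rejected
        if random.random() > w_dipole(a, b, k) / w_max:
            continue                 # trial rejected (kernel below overestimate)
        if k[2] < 0.0:
            return t                 # real emission into the vetoed hemisphere
        dipoles.remove((a, b)); dipoles += [(a, k), (k, b)]

random.seed(1)
times = [veto_time() for _ in range(500)]
print("survival fraction at t_max:", sum(t >= 0.2 for t in times) / len(times))
```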
Evolution equation to next-to-leading order
We now present a calculation of K to the next-to-leading order, by matching two-loop infrared divergences in σ[U ] against eq. (2.7). The computation will be phrased exclusively in terms of convergent integrals over building blocks with a clear physical interpretation (renormalized soft currents), which will shed light on the exponentiation mechanism. We perform the computation in a general gauge theory, although at intermediate steps we only write formulas for color-adjoint matter. The reader not interested in the technical details can skip directly to the final result in subsection 3.6.
Building blocks: soft currents
A natural building block is the tree-level amplitude for emitting two soft gluons. It can be written naturally as a sum of disconnected and connected contributions, eq. (3.1), which follows directly from the Feynman graphs shown in fig. 4(a) [25]. To optimize the notation, all color generators are implicitly symmetrized, eq. (3.2), which is relevant when i = j. This notational convention (borrowed from ref. [31]) is why the connected part is proportional to f^{abc}.
Figure 5. Second building block: one-loop soft current.
To familiarize ourselves with the notation we review the transverseness check, eq. (3.3). We need to use color conservation in the form of the identity Σ_i R^a_i = 0. Since this holds when Σ_i R_i is inserted to the right of an operator product, the implicit symmetrization in eq. (3.1) produces commutators; for example, the divergence of the first sum gives eq. (3.4). This is easily seen to cancel the second term in the parenthesis in (3.3), up to a β_i-independent term which itself cancels due to Σ_i R^c_i = 0, proving transverseness. Pairs of soft fermions or soft scalars can also be emitted (fig. 4(b)). For notational simplicity we carry out all intermediate steps in a theory with n_adj color-adjoint Weyl fermions and n_s^adj real adjoint scalars (the final result will be trivial to generalize); the corresponding currents are given in eq. (3.5). The second natural building block is the next-to-leading order soft gluon amplitude S_1. We take it from the literature and quote the result in (A.1); representative graphs are shown in fig. 5.
Equation (2.7) gives the next-to-leading order kernel as the divergent part of the combination displayed in eq. (3.6). We will now see that this can be expressed in terms of soft currents.
Double-real emission
We begin with the terms in the NLO kernel which involve two wide-angle soft partons, and thus generate two additional U factors. The double-real contribution to σ^{(2)} is written down in eq. (3.7) (suppressing color indices); the integrals have compact support due to the momentum-conserving δ-function in A, and we do not show a factor dΠ_n u({p_i}) associated with the underlying hard event. The trick to evaluate (3.6) is to find compatible integral representations for K^{(1)} and σ^{(1)ren}. For K^{(1)} we already have eq. (2.12), and subtracting it from σ^{(1)} leaves simply eq. (3.8). The essential point here is that the matrix element factorizes in the soft region, |A_{n+1}|²(aβ_0) → |S A_n|², so that subtracting K^{(1)} σ^{(0)} is equivalent to removing the integration region a < µ (to all orders in ε). Invoking factorization similarly, eq. (3.6) can be re-written as eq. (3.9), where F(a, b) denotes the integrand in (3.7). This formula is also exact in ε. One can see that the second integral is finite and the first integral has no subdivergences (except from collinear regions, which are dealt with in the next subsection). After scaling a → ab in the first integral to extract the pole, one thus just gets eq. (3.10). This is the desired formula, which expresses the double-real contribution to K^{(2)} as a convergent integral over tree-level soft currents. The integrand measures the extent to which two soft emissions are not independent from each other. Using the explicit expressions (3.1), the formula yields two nontrivial color structures (fig. 6), multiplying the angular functions of eq. (3.11). We note the absence of a fully disconnected (Abelian) color structure: since its squared amplitude is proportional to 1/a², it disappeared before integration. The matter contribution in the last parenthesis is given in full in eq. (3.12). The a-integrals are elementary (and convergent!) and, after some straightforward algebra starting from (3.2), yield the quoted expressions. For the two-gluon kernel we have used symmetry in (i ↔ j) to simplify. The expression is especially compact in N = 4 SYM (the first term). The second and third terms represent, respectively, an (adjoint) N = 1 chiral multiplet and a scalar. The rational structures are such that all potential divergences associated with the β_i, β_j and β_k regions cancel. There remains a divergence as β_{0'} → β_0, proportional to the gluon collinear anomalous dimension γ^{(1)}_g ∝ b_0, which will be canceled shortly.
At this point we could stop: using simple and not-so-simple physical considerations one could determine the full result using only what we have so far. We find it instructive, however, to continue with the explicit computation.
Single-real emission
We now turn to terms with one radiated parton; since these contain only one U field at a wide angle, they will combine with and cancel the collinear divergences in eq. (3.12).
A good starting point is to express the single-real contribution in terms of the hard function (2.1), eq. (3.13). In the soft region we can replace the amplitude by a soft current times H_n. It is useful to run the soft current to its natural scale, µ = a, using eq. (2.6). The energy integral then becomes, formally to all orders in perturbation theory, eq. (3.14). Comparing against the factorization formula (2.7), and recalling that Γ_n agrees with the virtual part of K^{(1)} up to finite terms, we see that the second integral represents a finite correction to the finite coefficient σ^{ren}. The a-integrand on the first line, on the other hand, is nicely identified as a shift to the exponent, eq. (3.15). This gives the single-real contribution to K, formally to all loop orders. This generalizes in the simplest conceivable way the leading-order result: one simply evaluates the loop-corrected soft current with energy set equal to the renormalization scale µ. The 'bar' on S̄ accounts for a collinear subtraction, to be defined shortly (as well as for an O(ε) mismatch between Γ_n and K^{(1)}_{virtual}). Because the soft current is used at its natural scale, the series for (3.15) contains no large logarithms, and the b_0 term in eq. (3.6) is automatically accounted for.
We now deal with collinear divergences. We do so by subtracting suitable splitting functions. First, in the preceding subsection, we replace eq. (3.8) by eq. (3.16), whose next-to-last term removes the a → 0 limit of everything to its left. The formula differs from eq. (3.8) only by the splitting functions, which are added in such a way as to introduce no new soft divergences. The function Split_i(p_1; k_i) (representing the amplitude for particle i to split into two with momenta aβ_0 and k_i − aβ_0, symmetrized between the two) is required to have the same integrand-level collinear singularity as |A_{n+1}|² when β_0 is collinear to k_i, ensuring convergence. This is guaranteed to exist by the factorization of amplitudes in collinear limits, and an explicit expression is given in (A.3).
We stress that the subtraction is not written as an integral over the phase space of (n + 1) particles with the original total momentum, since a and β 0 do not enter A n . This would probably be very inconvenient for numerical integration, which is a limitation compared to more general methods, for example, dipole subtraction [32]. However this is simpler and will suffice for our purposes.
To avoid changing σ^{(1)ren} one needs to make a compensating subtraction in the virtual contribution, eq. (3.17). This differs from the naive subtraction of the virtual part of K^{(1)} (the R^a_i R^a_j term), which is essentially Γ_n up to O(ε) terms, by terms involving the splitting amplitude, compensating those in eq. (3.16). The iπ term is such that, as verified in the appendix, H̄_n is finite. The barred hard functions can thus be viewed as simply the hard functions in a different scheme.
What is important is that the same subtractions can be applied to all expressions in the preceding subsection. For example, in the double-real term in eq. (3.7) one simply subtracts |Split_g(aβ_0, bβ_{0'})|² + Σ_i |Split_i(aβ_0, k_i)|² |A_{n+1}(bβ_{0'})|² from the integrand. In the Γ^{(1)} terms one subtracts only the soft limit. In this way all two-particle-collinear divergences are removed from the preceding subsection, at the only cost of adding a piece, eq. (3.18), to eq. (3.10). This makes it collinear-safe when added to the integrand. The argument of the splitting function is such that the most energetic particle of the pair has energy µ (which we scaled out). Regarding the virtual corrections, the only effect of (3.17) is to put the bar on S in (3.15).
Both eqs. (3.15), (3.18) are explicitly finite and to be evaluated directly in four dimensions. Their evaluation is conceptually straightforward and detailed in appendix A. It turns out that the result could have been anticipated using (not so trivial) physical considerations, so here we concentrate on explaining these considerations.
The most critical consideration is gluon Reggeization or, more broadly, the connection with BFKL. As mentioned in the introduction, K is the BFKL Hamiltonian in disguise (up to β-function terms). Interactions between Reggeized gluons are constrained by physical principles such as Hermiticity of the boost operator and signature conservation U^{ab} → U^{ba}, which are not self-evident from the perspective of non-global logarithms.
We consider the weak-field regime where U = e^{igT^a W^a} with the 'Reggeized gluon' field W small. Linearizing the Hamiltonian yields Reggeon-number conserving terms given in eq. (2.19), as well as 2 → 4 transitions (between states with different powers of W) at order g⁴, and so on. Hermiticity of the boost operator (with respect to the specific inner product given by the scattering amplitude of left- and right-Wilson lines) then predicts 4 → 2 transitions at the same order, whose existence is indeed well-known [33]. These are the terms which close the so-called Pomeron loop. Now when reverting to the current power-counting, which treats (U − 1) as O(1) instead of O(g), these 4 → 2 transitions become a three-loop effect (see ref. [30] and references therein). Signature forbids 3 → 2 transitions. Hence the remarkable statement that K must be triangular at one- and two-loop [30], eq. (3.19). Mathematically, this formula (just with the L = 1 case) can be seen as equivalent to gluon Reggeization, since it ensures that sectors with different powers of W's can be diagonalized independently at one loop. One then expects the Reggeized gluon (W field) to provide a good degree of freedom upon which to organize the perturbative spectrum of K to any order (as usually happens in degenerate perturbation theory after degeneracies are lifted at one loop). For our immediate purposes, eq. (3.19) constrains two-loop color structures. One easily sees that no double-real color structure satisfies it by itself: for example, using (2.18), the first line of eq. (3.11) linearizes to give terms which replace three Reggeons W_i W_j W_k by a single one W_0. Cancelling this term uniquely fixes the range-three part of the single-real contribution (to the form in eq. (A.10)). From other terms one constrains the double-virtual and range-two kernels. In this way, using in addition that double-real terms are signature-even, we find that the two-loop Hamiltonian can be parametrized by at most three angular functions, eq. (3.20). The first one is shown in fig. 7. Since the range-three and range-two double-real kernels have already been determined from double-real emissions, eq. (3.19) effectively predicts all virtual corrections, up to a term proportional to the leading-order structure (the last line). The physical interpretation is that gluon Reggeization entails nontrivial real-virtual connections, which was indeed the original observation [11,12].
In appendix A the prediction (3.20) is compared with the direct evaluation of single-real terms, eqs. (3.15), (3.18). It turns out that there is a subtle loophole in the above argument: in the non-global log context, U^{ab} → U^{ba} is not a symmetry but only needs to send the Hamiltonian to its complex conjugate. Thus signature is not conserved. The ansatz fails by a single signature-odd term, eq. (3.21). The origin of this term is simple: the imaginary part of the one-loop soft current (A.1). Its physical significance will be discussed shortly. The explicit computation in the appendix also yields the yet-undetermined signature-even function, eq. (3.22). This contains only the two-loop cusp anomalous dimension [34] (normalized to γ^{(1)}_K = 1) and the one-loop β-function.⁵ Physically, the b_0 term is fully dictated by the collinear anomaly discussed in subsection 3.5, while the cusp anomalous dimension term can be checked to provide the correct Sudakov double logarithms in the limit of a narrow jet cone.
How to explain the real-virtual connections (3.20) from the perspective of non-global logarithms? Perhaps one could use the Feynman tree theorem [36]. This is a way of computing loops by putting one (or more) propagator per loop on-shell. Indeed, putting a gluon on-shell in fig. 5 one can recognize diagrams of fig. 4, so at least schematically this seems to work. The tree theorem was streamlined and generalized to higher loops, with at least partial success, in refs. [37,38]; it would be interesting to see its implications in detail.
Double-virtual terms
Let us now make sure that the ansatz (3.20) is not missing any virtual corrections. A priori these could involve two color structures, eq. (3.23). The coefficients are constrained by the KLN theorem (vanishing for U^{ab} = δ^{ab}) and by Lorentz invariance, but without signature, unfortunately, this does not fix them uniquely (below we give an example of a signature-odd function satisfying both constraints). We resort to explicit computation. The two-loop soft anomalous dimension is known to take the 'dipole' form of eq. (3.24) [5,39,40]. This gives the divergence of the amplitude after subtracting the square of Γ^{(1)}_n. Since we are instead subtracting K^{(1)}, we need again to switch to the collinear-subtracted barred scheme (A.5), Γ̄. We omit 'collinear terms' which depend on only one leg at a time, since these are trivial to fix using the KLN theorem. Concentrating on the terms which have nontrivial color structures and which are not so easily fixed, the calculation of eq. (3.25) is rather straightforward and detailed in appendix A. The outcome confirms that no additional terms besides (3.21) need to be added to the ansatz (3.20).
⁵ The appendix uses the so-called dimensional reduction scheme. In conventional dimensional regularization (CDR), more commonly used in QCD, a simple coupling redefinition [35] gives: 64C_A/9 → 67C_A/9.
We can now interpret this term. First we observe that it can be mostly removed by a finite scheme transformation. Namely, if we perform the redefinition of eq. (3.26), where the MS density matrix is the minimally subtracted one we have been working with so far, then the two-loop Hamiltonian in MS gets shifted by a commutator with K^{(1)} and a β-function term. It is easy to check that the commutator term precisely cancels (3.21). The β-function term then replaces it by eq. (3.27). This combination is Lorentz-invariant in an interesting way: under a rescaling of β_i, the j-sum becomes telescopic and simplifies to (C_i − C_i) = 0. This also satisfies the KLN theorem, being zero when L = R. The existence of this structure is the only reason we needed to use the explicit formula (3.24) to get the virtual corrections; otherwise the KLN theorem and Lorentz invariance would have sufficed. It violates the triangular structure (3.19), but since it is proportional to the β-function this does not contradict the BFKL-based argument leading to it. It also has a simple and suggestive physical interpretation: effectively it replaces the spacelike couplings in the one-loop evolution by timelike counterparts, eq. (3.28). With hindsight, had we used timelike couplings in the one-loop evolution, we would never have had to write down eqs. (3.21), (3.26) nor (3.27). We will nonetheless continue to use the (more conventional) spacelike coupling.
Lorentz invariance and (lack of) collinear anomaly
We have assembled all ingredients of the kernel, but we notice that the angular functions are not Lorentz-covariant: the arguments of the logarithms (3.12) are not homogeneous in β_0, β_{0'} (and thus depend on the frame choice implicit in the normalization β^µ_i = (1, v_i)). This may be surprising given that dimensional regularization preserves Lorentz invariance.
The simple explanation is that we did not write the one-loop evolution in a D-dimensional covariant form. What would constitute a Lorentz-invariant version is instead eq. (3.29), which differs at order ε by the amount δ^{(1)} given there. The integrand is homogeneous in all of β_i, β_j, β_0, and one may check that under a Lorentz transformation the Jacobian factor precisely cancels the change in the parenthesis. (The factor 4 is for future convenience.) An O(ε) shift to an anomalous dimension, as usual, is equivalent to a finite renormalization of σ^{ren}, e.g. a scheme transformation. The density matrix in the 'Lorentz' scheme (3.29) is related to the MS one used so far (or better, the scheme just defined in eq. (3.26)) by eq. (3.30). This shifts K^{(2)} by a commutator [K^{(1)}, δ^{(1)}] as well as a β-function term.
This transformation is only well-defined because it contains both real and virtual terms: the middle integral in eq. (3.29) would otherwise be un-regulated even for $\epsilon = 0$. This clash between Lorentz covariance and collinear divergences reflects the (now called) collinear anomaly of refs. [39,40]. Here, the anomaly cancels between real and virtual terms.
To achieve manifest Lorentz covariance we must still manipulate the expressions by using color conservation to add terms independent of some of the $\beta_i$, being careful with commutators as below eq. (3.3). Collecting these commutators is tedious but fortunately the task can be easily automated on a computer. We find (as it should) that the color structures in eq. (3.20) are preserved under these operations (see also ref. [41]). Only the angular functions change. The functions $E_{ij;00'}$ and $F_{ik;00'}$ are arbitrary, with $E_{ij;00'} = -E_{ji;0'0}$. The formula (3.33) below arises for $E_{ij;00'} = \frac{\alpha_{ij}}{\alpha_{0i}\,\alpha_{00'}\,\alpha_{0'j}} \log \frac{\alpha_{0'i}\,\alpha_{0j}}{\alpha_{0i}\,\alpha_{0'j}}$ and $F_{ik;00'} = \frac{\alpha_{ik}}{2\,\alpha_{0i}\,\alpha_{00'}\,\alpha_{0'k}} \log \frac{\alpha_{ik}\,\alpha_{00'}}{\alpha_{0k}\,\alpha_{0'i}}$. (With these choices the integrand on the last line vanishes.)
Final result for the evolution equation
We record our final result for the two-loop Hamiltonian in the 'Lorentz' scheme (marked by a superscript), which combines eqs. (3.20)–(3.22) with the finite renormalizations (3.26) and (3.30). For convenience we repeat the color structures, switching to the integro-differential notation (2.17). Here $\int_i \equiv \int \frac{d^2\Omega_i}{4\pi}$ (and similarly for the emission labels), the color rotations L and R being differential operators defined in eq. (2.17). All products of $L^a_i$'s and $R^a_i$'s are implicitly symmetrized and normal-ordered to the right of $U_0$, $U_{0'}$. The third term is simply the one-loop result (2.14) times the cusp anomalous dimension (3.22). The angular functions are given in eq. (3.33). This is the complete result in N = 4 SYM. In a general gauge theory with $n_F$ flavors of Dirac fermions and $n_S$ complex scalars in the representation R, there are additional contributions from matter loops, also obtained from eq. (3.12). Upon restoring group theory factors corresponding to representation R, in accordance with the square of fig. 4(b), these can be written as in eq. (3.34).
Comparison with BFKL and conformal transformation
As mentioned in the introduction, the same Hamiltonian K governs the Regge limit. Hence the reader familiar with the literature on the Regge limit, in particular the Balitsky-JIMWLK equation, will have recognized several equations by this point. Let us now discuss the connection in detail. Physically, as sketched in the introduction, the connection originates from the existence of a conformal transformation which interchanges the $x^+ = 0$ light-sheet and future (null) infinity. This interchanges the target residing at $x^+ = 0$ with the color rotations in the definition (1.2) of σ[U]. It is given explicitly in eq. (4.1) [17,42,43], where µ is a reference scale. This maps the Minkowski metric $ds^2 = -2dx^+dx^- + dx_\perp^2$ to a multiple of itself, as one may verify. Points approaching the BFKL target, $x^+ \to 0$, are mapped to infinity along a null direction $y^\mu \propto (\beta^0, \beta_\perp, \beta^z)$. In this way the transverse plane of the BFKL problem is mapped stereographically onto the two-sphere at infinity of the non-global log problem. Provided that the conformal transformation (4.1) preserves the Lagrangian, this map predicts that K should go into the BFKL Hamiltonian upon an appropriate substitution of variables [17].⁶ We now verify this equivalence directly, beginning with the case of N = 4 SYM where conformal symmetry is unbroken. In the general case we will find a discrepancy proportional to the β-function, as anticipated.
Comparison in N = 4 SYM
It is instructive to consider a special case: we act with $K^{(2)}$ on a dipole $U_{12} = \mathrm{Tr}[U_1 U_2^\dagger]$. The form (3.33) is particularly convenient for this since $K^{(2)}_{ijk;00'}$ vanishes when i = k or j = k. The only terms in the first line are thus $K^{(2)}_{112;00'}$ and $K^{(2)}_{221;00'}$. Furthermore the remaining lines simplify, and in this way all two-loop color structures in the dipole case are expressed in terms of a single angular function. To evaluate the color factors we recall that while $L^a_1 U_1 = T^a U_1$, in the antifundamental one has that $L^a_2 U_2^\dagger = -U_2^\dagger T^a$ (this ensures that $(L_1 + L_2)U_{12} = 0$). Writing $i f^{abc} T^c = [T^a, T^b]$ and collecting terms one easily finds eq. (4.4). This formula is identical to the conformal form of the two-loop evolution obtained by Balitsky and Chirilli, eq. (6) of ref. [44], with $\frac{\alpha_s}{4\pi}K^{(1)} + \frac{\alpha_s^2}{16\pi^2}K^{(2)}\big|_{\rm here} = -\frac{d}{d\eta}\big|_{\rm there}$, as expected.⁷ In the planar limit eq. (4.4) can be reduced to a closed nonlinear equation for a dipole function; we refer to the literature for further details.
⁶ Here we use a normalization $\beta^0 = 1$ which differs from that adopted elsewhere in the present paper and in ref. [17]. This has no effect in Lorentz-covariant expressions such as the ones above.
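As a consistency check of these conventions, the color-singlet condition quoted parenthetically above can be verified in one line, using only cyclicity of the trace:

$$(L^a_1 + L^a_2)\,\mathrm{Tr}[U_1 U_2^\dagger] \;=\; \mathrm{Tr}[T^a U_1 U_2^\dagger] \;-\; \mathrm{Tr}[U_1 U_2^\dagger T^a] \;=\; 0,$$

since the two traces coincide under a cyclic rotation of their arguments.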
Rapidity evolution for general products of Wilson lines in the Balitsky-JIMWLK framework has been obtained recently [41, 45–47], extending earlier results for two [44,48] and three Wilson lines [49,50]. Given the mutual agreement between these works, here we only compare directly against the conformal form of ref. [41]. Since the stereographic projection identifies the SL(2,C) conformal symmetry of the transverse plane with the Lorentz symmetry of the two-sphere, this should match with the Lorentz scheme here.
The comparison is in fact straightforward: the range-three kernel $K_{3,2}$ shown in eq. (5.12) of ref. [41] is literally the first four terms of our $K^{(2)}_{ijk;00'}$. The remaining two terms in $K^{(2)}_{ijk;00'}$ arise from the telescopic term F in eq. (3.31) and hence do not affect the range-three part. (These terms are helpful to manifest the convergence at $\beta_{0'} \to \beta_0$.) Furthermore, the integral representations for $K_{3,1}$ and $K_{3,0}$ in ref. [41] reproduce the real-virtual pattern embodied in the first line of eq. (3.32). This demonstrates the agreement of range-three interactions. Combined with the agreement in the dipole case, this establishes the complete equivalence of eq. (3.32) with ref. [41] (and thus, by extension, refs. [45,46,49]).
In principle, upon linearizing around U = 1, one also expects complete agreement with the interactions between Reggeized gluons obtained in the BFKL approach. For two reggeons the agreement was demonstrated at the level of eigenvalues [44,48,51,52]. For three reggeons, it was noted in ref. [49] that a scheme transformation appeared to be lacking in order to match with ref. [53]. This issue should be clarified further. Here we simply note that there is a natural candidate: the next-to-leading order inner product (correlator of Wilson lines) [54,55]. In the BFKL approach the inner product does not receive loop corrections (the transverse part of the Reggeon propagator remains $1/p^2$), so only after this effect is removed by a scheme transformation should agreement be expected.
⁷ Since the subtraction terms are not written in exactly the same way, there is an apparent discrepancy between the two forms of $K^{(2)}_{12;00'}$. However, the integral of the difference vanishes, proving the agreement. This is easily shown by noting that, being absolutely convergent, the integral defines a Lorentz-covariant function with the same homogeneity in $\beta_0$, $\beta_1$, $\beta_2$ as the integrand, hence must be a constant times $\alpha_{12}/(\alpha_{01}\alpha_{02})$. The constant vanishes by antisymmetry under $\beta_1 \leftrightarrow \beta_2$.
It is interesting to compare technical aspects of the calculations. The tree-level soft current (3.1) is reminiscent of the light-cone gauge amplitudes in eq. (43) of ref. [48]. The subtraction of subdivergences in eq. (3.10) is similar to the + prescription derived in refs. [48,56]. The transformation to the 'Lorentz scheme' (3.30) is identical to that leading to the 'conformal basis' in refs. [41,44]. As a significant technical simplification, however, we were spared the Fourier transform step. Also, the reliance on standard building blocks made it possible to benefit from results in the literature, namely the soft currents and collinear splitting functions.
Comparison including running coupling
Having demonstrated the agreement in N = 4 SYM, let us now compare the fermion and scalar loop contributions to the Balitsky-JIMWLK and non-global logarithm Hamiltonians, i.e. the terms proportional to $n_F$ and $n_S$ in eq. (3.34). Performing the comparison with refs. [44,57] we find that the two Hamiltonians agree for the most part, except for the discrepancy recorded in eq. (4.5) (setting $z_{ij} = z_i - z_j$), where as before µ is the MS renormalization scale. In particular, the difference is proportional to the first β-function coefficient, as predicted [17]. The origin of the discrepancy (4.5) is clear: the inversion $y^+ \to 1/(\mu^2 y^+)$ in (4.1), which relates the BFKL and non-global log Hamiltonians, is only an isometry up to the Weyl rescaling $ds^2_y \to (\mu y^+)^{-2}\, ds^2_y$. This is not a symmetry in a non-conformal theory. Physically, BFKL and non-global logarithms describe infinitely fast and infinitely slow measurements of an object's wavefunction, which would not normally be expected to be connected without conformal symmetry.
For future reference, we note that there exists a general theory for dealing with Weyl transformations in non-conformal theories (see for example [58]). The essential feature is that, starting from the BFKL side and performing the conformal transformation (4.1), one ends up with a coordinate-dependent coupling constant, eq. (4.6). In other words, the BFKL Hamiltonian in QCD in principle controls non-global logs in QCD, but in an imagined setup with a coordinate-dependent coupling. Contrary to real QCD, in this setup a narrow jet never hadronizes: the increasing coupling due to the growing size of a jet is compensated by its falloff at large $y^+$. Thus effectively the coupling is set by the angular size. This reflects that angles map to distances in the BFKL problem. We will not pursue eq. (4.6) further here, but in any case it is clear that to all orders in perturbation theory the difference between the BFKL and non-global Hamiltonians must be proportional to the β-function (up to finite scheme transformations).
Higher loops and exponentiation
It is instructive to extend the general analysis of section 3 to higher loops. We will (mostly) ignore collinear subdivergences here, concentrating on the soft divergences. We can organize terms according to the number m of wide-angle partons (U matrices) added to an underlying n-jet event. Our starting point is the known exponentiation of virtual corrections (2.1), which gives the m = 0 case, eq. (5.1). For the next case of one wide-angle gluon, a formula was derived in eq. (3.14). We reproduce it here in abbreviated notation, omitting the U matrices, the angular integration, the $da\,a^{1-2\epsilon}$ energy measure, and the absolute value squared on the matrix elements. The colons instruct us to normal-order terms according to their renormalization scale (largest argument to the right). As in subsection 3.3, the first integral is identified as a shift $K_1(\mu) = -S_1(\mu;\mu)$ to the exponent. The remaining (finite) terms then define the hard coefficient $\sigma^{\rm ren}_{\le 1}(\mu)$. Moving on to two real emissions, we follow eq. (3.9) and write the cross-section as independent emissions plus an additional piece, introducing the 'connected' squared soft current defined in eq. (5.4), with all factors evaluated at the same renormalization scale. (In the present abbreviated notation we recall that each factor is a squared soft amplitude, $S_i \equiv |S_i|^2$.) Again the first integral in eq. (5.3) is identified as a shift to the exponent, which generalizes eq. (3.10) to include virtual loop effects to all orders. The (finite) remainder then defines $\sigma^{\rm ren}_{\le 2}$. Using this method it is straightforward to extend the calculation to more radiated particles. For three radiated particles, for example, after pulling out $P e^{-\int_0^{\mu}(K_0+K_1+K_2)}$ we find again that particles with energy > µ decouple from divergences, that there are no subdivergences, and that there is a corresponding shift to the anomalous dimension. The absence of subdivergences (finiteness of $K_3$ as $\epsilon \to 0$) is manifest from the fact that the expression involves only connected squared amplitudes, which vanish near the boundaries $a \to 0$ or $a, b \to 0$. This itself is a consequence of factorization, or more precisely eq. (2.5) in the form
$$\lim_{a_1,\dots,a_k \,\gg\, a_{k+1},\dots,a_n} S(a_1, \dots, a_n; \mu) = S(a_1, \dots, a_k; \mu)\, S(a_{k+1}, \dots, a_n; \mu)\,.$$
(5.7)
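To see concretely how eq. (5.7) guarantees convergence, consider the lowest nontrivial connected combination, written schematically in the abbreviated notation above:

$$S_2^{\rm conn}(a,b;\mu) \;\equiv\; S_2(a,b;\mu) - S_1(a;\mu)\,S_1(b;\mu) \;\xrightarrow{\;a\ll b\;}\; 0,$$

the limit following directly from eq. (5.7) with n = 2, k = 1; the energy integral of the connected block is therefore free of soft subdivergences.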
It is now clear how to generalize the pattern to higher orders. In fact a simple guess appears to give the anomalous dimension K to all orders, eq. (5.8). The exponential factor has a simple physical interpretation as an 'exclusion time' effect, and we recall that the a's are the energies of the real radiated particles. We have verified explicitly (with the help of a computer) that exponentiating K using eq. (2.7) reproduces all contributions where up to at least 9 real particles have energy below µ, so we believe that the formula is correct to all orders. Equation (5.8) is one of the main results of this paper. It expresses, to all loop orders, the Hamiltonian governing non-global logarithms as a convergent integral over finite, well-defined building blocks, generalizing eqs. (3.10) and (3.15) used in the two-loop computation. The building blocks are the squares of the infrared-renormalized soft currents (which include virtual loops to all orders), defined in eq. (2.5). Only the $\epsilon^0$ part of the renormalized currents is needed, in agreement with ref. [59].
Since the exponent K is manifestly finite as $\epsilon \to 0$ (being expressed in terms of connected squared soft currents), the formula also demonstrates to all loops that infrared divergences exponentiate according to eq. (2.7). The physical inputs were the known exponentiation (2.1) of virtual corrections, and the factorization of successive real emissions (5.7); eq. (2.7) is seen to be a purely combinatorial output.
To fully prove eq. (2.7) one should address the issue of collinear subdivergences, omitted in the present discussion. Physically we expect these to cancel, since σ[U ] is collinear-safe. In subsection 3.3 this was made manifest by defining collinear-subtracted real and virtual contributions, such that their sum was unaffected by the subtraction. We have no reason to think that this couldn't be achieved at higher orders as well, following the method of ref. [25].
We should mention that eq. (5.6) gives the evolution equation in a non-minimal scheme: the two-loop exponent $K^{(2)}_2$ in eq. (5.5) depends on $\epsilon$ through a factor $a^{-2\epsilon}$ and thus differs from the MS result of this paper by terms proportional to $\epsilon$. These are interpretable as a renormalization which shifts $K^{(3)}$ by a finite commutator, giving a correspondingly modified version of eq. (5.6). Discrepancies in $K_1$ at $O(\epsilon)$ add other commutator terms. All of these are unrelated to a further finite renormalization needed to make Lorentz covariance manifest. As in eq. (3.29) it can be fully predicted by upgrading the two-loop result (3.33) to a D-dimensional covariant form. Although these finite renormalizations become combinatorially very complicated at higher loop orders, being finite they cannot interfere with the statement (2.7) of exponentiation.
Finally, we list the ingredients which enter eq. (5.8) at three loops:
• The tree-level soft current for three soft gluons $S_3$, the one-loop current for two gluons $S_2$, and the two-loop current for one gluon $S_1$.
• The next-to-leading order 1 → 2 and tree-level 1 → 3 collinear splitting amplitudes [60].
The two-loop soft current and three-loop soft anomalous dimension are presently known for two hard partons [61–63]. Unfortunately this will not suffice for non-global logarithms nor for BFKL in general, since each radiated gluon counts like a hard one from the point of view of softer radiation. However, for dipole evolution in the planar limit, everything needed is known.
Conclusion
In this paper we considered a 'color density matrix' which aims to characterize soft radiation in gauge theory. We argued that it should resum large logarithms arising in the presence of wide-angle phase space cutoffs, so-called non-global logarithms, to all orders in logarithms and $1/N_c$. We proved the all-order exponentiation of infrared divergences for this object in terms of an anomalous dimension K (see eq. (2.7)), constructed formally in eq. (5.8), modulo one technical assumption regarding collinear subdivergences. We explicitly computed K to two loops (eqs. (3.32)–(3.34)) and performed a number of checks on this result. We also stressed the equality between K and the BFKL Hamiltonian, which allows our results to be viewed as an independent derivation of the next-to-leading order BFKL Hamiltonian, obtained here directly in a novel, compact form.
The procedure to calculate a cross-section receiving non-global logarithms was sketched in the introduction. One distinguishes infrared and ultraviolet scales, which are to be connected by evolving using K. At both ends lie finite quantities: an 'IR measurement' which contains details of the experimental definition of a 'soft' particle, and corresponding vetoes; a 'UV measurement' which depends on the initial state and possible vetoes imposing hard jets in the final state. The logic of factorization being that their calculations are independent of each other, we focused in this paper on the (universal) evolution K. Study of the (expectedly) finite, but process-dependent, measurement functions is left to future work, as well as phenomenological studies.
Mathematically, K is an integro-differential operator acting on functionals σ[U] of a two-dimensional field of unitary matrices U(θ) (e.g. SU(3) matrices in QCD), with θ an angle in the detector. This means that K cannot be diagonalized explicitly. Although it is a quite complicated object, it is a useful starting point for further approximations. These include, as reviewed in section 2, numerical Monte-Carlo techniques at finite $N_c$, reduction to an ordinary integro-differential equation at large $N_c$, or linearization à la BFKL around U = 1. We hope that one of the compact forms obtained in this paper will prove convenient for a next-to-leading order numerical implementation.
For application to hadron colliders it will be important to go beyond the limitation of an initial color-singlet object, as done in this paper, and allow for initial state radiation. This could lead to additional (super-leading? [64,65]) effects related to subtle color-dependent phases in collinear limits [66,67].
The formalism does not distinguish between global and non-global logarithms, but it is easy to see how it simplifies in the case of global observables. For example, when radiation is excluded everywhere but inside narrow cones, the IR averaging procedure sets U = 0 outside these cones, which effectively shuts down the real terms in the evolution. It is then dominated by virtual effects, as is usual for global observables. It is only for observables sensitive to details of wide-angle radiation that the complications of the formalism kick in. It would be interesting to connect the present approach with that of ref. [68], which deals with recursive infrared and collinear safe event shapes ('rIRC').
There has been recent activity regarding formal aspects of measurements at infinity, in connection for example with the Bondi, van der Burg, Metzner and Sachs (BMS) symmetry [69,70]. The density matrix construction could be useful in this context. From a theoretical perspective, the Hamiltonian K connects, in a unified way, the following gauge-theory concepts: the cusp anomalous dimension (governing global logarithms); the KLN theorem (cancelation of collinear and infrared divergences); the factorization of soft radiation; the BFKL equation.
The equivalence with BFKL, verified explicitly in section 4, is a consequence of conformal symmetry [17] and is an equality up to β-function terms (fixed by comparatively simpler matter loops (4.5)). The basic physical intuition is summarized in fig. 2. Remarkably, properties manifest in one context are not necessarily so in the other.
For example, one fundamental assumption in both the BFKL and Balitsky-JIMWLK frameworks is that transverse integrals should be saturated by transverse scales of order the momentum transfer, $-t \ll s$, ensuring that rapidity logarithms (log s) arise only from longitudinal integrations [18,71]. While reasonable, it is unclear how one would prove this directly beyond the current state of the art, e.g. next-to-leading log. The correspondence with non-global logarithms immediately implies it to all orders, since it amounts to the amply understood cancelation of collinear divergences. The non-global logarithm formulation may also be computationally advantageous, as discussed in sections 4.1 and 5.
In the other direction, the phenomenon of gluon Reggeization suggested a compact way to write the evolution equation (see eq. (3.20)), which manifests a connection between real and virtual effects. Intriguingly, we found that these relations could perhaps also be explained by the Feynman tree theorem. It would be very interesting to see if either of these approaches generalizes to higher loop orders.
Finally, we mention that the simplest non-global logarithms to resum in this framework (beyond the planar limit) involve situations close to the linear regime U ≈ 1, where the linearized equation has as its lowest eigenvalue the well-known Pomeron intercept $-\frac{4\alpha_s C_A \log 2}{\pi}$. Naively this regime might correspond to multiplicity-type measurements, e.g. counting away-jet charged tracks as a function of angle and energy cutoff. Perhaps this or some other observables could provide an indirect experimental handle on the BFKL Pomeron.
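To give a sense of scale (the value of $\alpha_s$ here is purely illustrative, not taken from the text), with $C_A = 3$ and $\alpha_s \approx 0.2$ the magnitude of this intercept is

$$\left|\omega_{\mathbb P}\right| \;=\; \frac{4\,\alpha_s\,C_A \log 2}{\pi} \;\approx\; \frac{4 \times 0.2 \times 3 \times 0.693}{3.142} \;\approx\; 0.53,$$

an O(1) number, which is why the linear regime can produce sizable effects even at moderate coupling.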
where $\bar H$ implements the subtraction in eq. (3.17) of collinear splitting functions. Since the splitting functions for all but the radiated gluon cancel in the commutator, we will only need the gluon splitting function $|\mathrm{Split}_g(a\beta_0, b\beta_{0'})|^2$, eq. (A.3), which involves a kinematic prefactor of the type $C_A (b-a)^{-2\epsilon}/(ab)$ and terms carrying the matter-counting factors $(n^{\rm adj}_{\rm Weyl}-4)$ and $(n^{\rm adj}_s - 2n^{\rm adj}_{\rm Weyl}+2)$, written in terms of x = a/b. The prefactor has a kinematical origin and accounts for the change in the measure $b^{1-2\epsilon}\,db$. The computation of such functions is standard [32]. In the x-dependence one can recognize various DGLAP kernels $P_{g\to(\cdots)}(x)$, as expected. We use the dimensional reduction scheme, so the parenthesis does not depend on $\epsilon$. (Regarding color factors we recall that we show intermediate formulas only in a theory with color-adjoint matter.) The scalar contribution to the splitting function is polarization-dependent, and for us the most useful information will be its dot product against $\beta^\mu_i \beta^\nu_j$, divided by $\beta_i\cdot\beta_j$: this is what enters eq. (3.15). This is given by eq. (A.4), involving $2\alpha_{ij}\,\alpha^2_{00'}/(\alpha_{0'i}\,\alpha_{0'j})$ plus terms which are convergent or telescopic. The first form is obtained directly from the Feynman rules and makes manifest that the dependence on $\beta_i$, $\beta_j$ is consistent with factorization. We will prefer the second form, which provides a closer match with eq. (3.12) and also yields a simpler integrated expression; it differs by terms which are either convergent or vanish using color conservation. Computing the integral in (3.17) we then obtain terms involving $2\log^2(x) - \frac{\pi^2}{6}$. The sum runs over gluons to stress that we haven't computed the other cases, and the cusp anomalous dimension is given in eq. (3.22).
The commutator then easily yields eq. (A.6). We stress that only the $O(\epsilon^0)$ terms of $S_1$ were needed to obtain this. It is noteworthy that the $-\pi^2 C_A/6$ from the original soft function, the $-\pi^2 C_A/3$ from the scheme transformation, and the $-C_A \log^2(e^{-i\pi})/2$ from the phase of the logarithm have nicely canceled to leave the cusp anomalous dimension.
Substituting into eq. (3.15), the soft factor produces two color structures of the type $U^{aa'}_0 (L^a_i R^{a'}_j + R^a_i L^{a'}_j)$ (A.7), which multiply angular functions built from $-4S_1$-type factors. These can be evaluated explicitly using (A.6). The remaining linear-in-U contribution, the subtraction (3.18), is simply $(n^{\rm adj}_{\rm Weyl}-4)\,\alpha_{00'} + \tfrac{1}{6}(n^{\rm adj}_s - 2n^{\rm adj}_{\rm Weyl}+2)\,f$.
(A.10) This agrees precisely with eq. (A.6a), up to the iπ term recorded in eq. (3.21). For the other structure, $G^{(2)}_{ij;0}\big|_{\rm actual-ansatz} = \frac{\alpha_{ij}}{\alpha_{0i}\alpha_{0j}}\,2\gamma^{(2)}_{ij;0} + \text{eq. (A.9)} + C_A \int \frac{d^2\Omega_{0'}}{4\pi}\, K^{(2)}_{ij;00'} = \frac{\alpha_{ij}}{\alpha_{0i}\alpha_{0j}}\,\gamma_{ij;0}$ (A.11), which fixes $K_{ij;0}$ as recorded in the main text. Finally we check the double-virtual terms. To get the prediction from the ansatz (3.20) we need to integrate (A.10). The $L_2(\alpha_{0k})$ terms look scary, but they cancel out trivially because one needs only the total antisymmetrization of $G^{(2)}_{ijk;00'}$, modulo terms which do not depend on all three labels simultaneously. The integral is still a bit nontrivial, but we could simplify its antisymmetric part using integration by parts. We omit the details and quote only the rather simple result for the $G^{(2)}_{ijk;0}$ contribution: $G^{(2)}_{ijk;0} = 8i f^{abc} \sum_{i,j,k} R^a_i R^b_j R^c_k \log(\alpha_{ij})\, L_2(\alpha_{jk})$ (A.12). Finally, the other term in the ansatz is (dropping terms depending on one particle at a time) proportional to $\log(\alpha_{ij}) - b_0\big(L_2(\alpha_{ij}) + \log 4\,\log\alpha_{ij}\big)$.
The preceding two equations are easily verified to be in perfect agreement with the commutator (3.25), proving that the ansatz does not miss any double-virtual term. As a final comment, we note that the $L_2$ function and most $\log 2$'s have a simple origin: the scheme change (3.30). For example, one of the angular functions evaluates to $-\frac{1}{2} + \frac{\pi^2}{3} + L_2(\alpha_{ij}) + \log 4\,\log(2\alpha_{ij}) + O(\epsilon)$. With hindsight, we could have saved ourselves much algebra by switching from MS to the Lorentz-covariant scheme in the first step, which would have prevented $L_2$ from ever appearing.
| 14,957.6 | 2015-01-15T00:00:00.000 | [ "Physics" ] |
The Measurement Method of the Impact Force of Shoulder Tackle and the Influence of the Lower-Extremity Strength on the Impact Force of Shoulder Tackle in Rugby Players
With the rapid development of Internet of Things engineering, intelligent sports products are gradually becoming known to people, providing help for the health and performance of athletes. In rugby training, coaches record and observe force data based only on their own experience; the impact force (IF) of the shoulder tackle (ST) is a key component in evaluating the defensive ability of rugby players. However, information related to female rugby players is limited. Purpose. To understand the strength characteristics of the lower extremity of rugby players and to develop theoretical references for ST training. Methods. The force-sensing device is made with a FlexiForce™ A502 pressure sensor; its data acquisition adopts LabVIEW and USB. The strength of the lower extremity was tested with an IsoMed 2000, and the IF of ST was measured by a testing system, among eighteen Chinese female rugby players, respectively. Results. (1) The reliability and validity of the impact force tester were tested by comparing the actual load with the calibration value, and through the difference and correlation between the actual load values of different loads and the corresponding calibration values. (2) At 60°/s and 180°/s, the peak torque (PT) and relative peak torque (PT/BW) of the bilateral lower-extremity extensors were greater than those of the flexors. (3) The flexor/extensor PT ratio of the left knee at 60°/s was higher than that of the right knee. (4) A linear regression equation was established between the PT of the dominant-side knee extensors and the IF of ST. The coefficient β of the linear regression equation was 0.866, 0.862, 0.892, 0.722, 0.788, and 0.737, respectively. Conclusions. (1) The design uses LabVIEW, USB, and the FlexiForce™ A502 pressure sensor to complete the overall construction of the data acquisition system and impact force-sensing device. (2) It is feasible to use the extensor strength of the dominant-side knee joint as a reference index to evaluate the IF of ST. (3) The balanced development of the front/reverse ST techniques can enhance defensive capacity.
Introduction
At present, the analysis of sports performance and training effects is carried out manually or with the help of software and applications. By applying Internet of Things engineering in sports, the effect of athletes' training can be analyzed with the help of sensors in wearable devices. In 2004, the Australian company Catapult launched the first GPS 5 Hz wearable exercise-training monitoring device, which deeply integrated sports training with wearable devices, and sports training monitoring entered the digital "black vest" generation [1]. In 2019, the Clear-Sky sports performance intelligent evaluation system appeared, which uses ultra-optical frequency data connection technology and gets rid of the limitations of the GPS usage environment, and wearable devices in sports training entered an unprecedented era of rapid development [2].
Tracking athletes' health and performance through sensors and collecting movement data for analysis is the direction of intelligent development in modern sports. Research shows that the current technical shortcoming of the Chinese women's national team is the defensive ability of ST [3]. In order to change the impoverished and enfeebled situation of rugby in China, the assistance of science and technology, as well as the cross-boundary and cross-type selection of athletes, will also accelerate the expansion of the sport.
ST is a basic defensive action that can occur more than 200 times per game [4]. There have been many research studies on the influence of lower-extremity strength on the ST technique. Speranza et al. found that the ST technique of players was correlated with lower-extremity maximal and explosive strength [5]. On the other hand, Jenkins and Gabbett measured the influence factors of the ST technique and found that lower-limb strength was one of the factors affecting the impact force of the shoulder tackle in their multiple regression model [6]. Lower-extremity strength had different defensive effects on ST from distinct body positions and was correlated with game results [7]. To be specific, ST was primarily affected by the technical actions and lower-extremity strength [8]. ST is highly correlated with squat strength [5]. Moreover, the correlation between ST ability and vertical squat jump performance was tight. Squat performance was also related to the PT, the relative PT, and the flexor/extensor ratio of the lower-extremity joints [9]. The high-impact and physical nature of the tackle during a rugby match places the tackler(s) and ball-carrier at risk of injury, and such injuries account for up to 61% of those that occur during a rugby match [10]. The collision tackle tends to elicit large impact forces due to rapid acceleration before a collision followed by high impact forces, transferring momentum between opposing players. Partly due to such extreme mechanical variables, the most frequent cause of injury is the direct impact of tackling, which is why tackling technique drills and conditioning exercises are emphasized [11,12]. The balance of ST is stabilized by the joint structure and muscle strength of the lower extremities. Comfort et al., at a fixed angular velocity of 60°/s, tested the PTs and torque ratios of the hamstrings and quadriceps during concentric and eccentric movements of the knee joint. The hamstring/quadriceps ratios were identical between the dominant and nondominant legs, as well as between forwards and backs. In addition, the flexor/extensor PT ratios were correlated with sports injuries [8]. In 2016, Brown showed that the hip and knee PT and the flexor/extensor PT ratios were different between the two legs of college rugby players. To be specific, the flexor/extensor PT ratios in college players were lower than the ratios in professional athletes, and the hip and knee PT of the forwards were greater than those of the backs [13]. The 2014 research of Brown et al. found that rugby league and rugby union players had different hip and knee strength, where the hip extensors of rugby union players and the knee flexors of rugby league players were weaker.
This imbalance would influence the performance of rugby players in sprinting, changing direction, and kicking [14]. The research by Dobbs et al. indicated that the ratios of the nondominant legs were greater than those of the dominant legs, and encouraged the use of the flexor/extensor PT ratio of the knee joint as an indicator to assess the preseason competitive state of players and to predict the potential for sports injuries [15].
However, the research on the relation between the technique and tactics of rugby and the biomechanical characteristics of female rugby players is insufficient. In particular, the evidence for the correlation between the defensive ability of ST and lower-extremity strength needs further study. In this paper, the IF of ST was tested by a testing system which consisted of hardware and software parts. The hardware part mainly included a force sensor, data acquisition card, and computer to collect, convert, transmit, store, and display. The software part involved an application program to implement the design and control of the data acquisition program, as well as a hardware driver to complete the working-mode setting of the data acquisition card. We performed objective measurements and characteristic analysis on Chinese elite female rugby players' isokinetic hip, knee, and ankle strength. Meanwhile, the impacts of lower-extremity strength on ST techniques are explored, providing a theoretical reference for specialized strength training, technical learning, and injury prevention of rugby players, and evidence for relevant research on rugby in the process.
Participants.
Eighteen voluntary players participated, each without any injury within three months, or with more than three months of training after recovery (based on the time recorded in the medical rehabilitation certificate). The test duration was one week, and corresponding adjustments and arrangements were made for special situations. The basic characteristics of the subjects are shown in Table 1.
Methods.
The idea of the shoulder tackle impact force tester is as follows. (1) According to the characteristics of the shape (length and width of the shoulder) and the impact force of the shoulder of women rugby players, the FlexiForce™ A502 piezoresistive pressure sensor is selected to make the impact force-sensing device, a pressure test vest. Three sensor pieces are placed in parallel at the position of the shoulder guard to sense the impact force. (2) The impact force data acquisition system is designed using a LabVIEW data acquisition program and a USB multichannel data acquisition card. (3) The scientific validity and practicality of the force tester were verified through practical application, as shown in Figure 1.
Working Principle of Pressure Sensor.
The force of the impact was tested by a piezoresistive pressure sensor, which uses a silicon wafer as the elastic sensitive element; four equivalent conductor resistances forming a Wheatstone bridge are made on the diaphragm by an integrated-circuit diffusion process. When the diaphragm is stressed, the semiconductor piezoresistive effect changes the resistance values, so that the bridge output measures the change in pressure; the pressure sensor is made using this method, as shown in Figure 2. The piezoresistive pressure sensor mainly exploits the change in resistivity, Δρ/ρ, produced by the piezoresistive effect.
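As a concrete illustration of the bridge readout, the sketch below converts a measured bridge output voltage into a force through a linear calibration; the excitation voltage, sensitivity, and offset are invented placeholder values, not the calibration constants of the actual tester.

```python
import numpy as np

# Placeholder calibration constants (illustrative only, not the tester's values)
V_EXCITATION = 5.0    # bridge excitation voltage [V]
SENSITIVITY = 2.0e-3  # bridge output ratio per kN of load [V/V/kN], assumed linear
OFFSET_V = 0.001      # zero-load bridge offset [V]

def bridge_voltage_to_force(v_out: np.ndarray) -> np.ndarray:
    """Convert a Wheatstone-bridge output voltage to force in newtons,
    assuming the piezoresistive response is linear over the working range."""
    ratio = (v_out - OFFSET_V) / V_EXCITATION  # dimensionless bridge ratio
    return ratio / SENSITIVITY * 1000.0        # kN -> N

# Example: a 12.8 mV reading maps to about 1.18 kN with these constants
print(bridge_voltage_to_force(np.array([0.0128])))
```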
USB Multichannel Data Acquisition Card.
The USB multichannel data acquisition card runs on the LabVIEW general acquisition platform of the Windows operating system and is powered by the computer. The card is mainly composed of an isolation circuit, an A/D conversion circuit, a digital input circuit, a digital output circuit, an isolated communication interface, and an MCU. The microcontroller adopts a 16-bit ARM chip with strong data-processing capacity and a watchdog circuit, which can restart the system in case of accidents, making the system more stable and reliable and suitable for high-performance, high-speed application environments. Photoelectric isolation is adopted between the input/output units and the control unit, and filtering measures are taken on the input signal, which greatly reduces the influence of field interference on the operation of the acquisition card and gives the module high reliability. The data acquisition system in this study adopts a USB 2.0 multichannel data acquisition card with 10 terminals, as shown in Figures 3 and 4.
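A minimal acquisition loop for such a card could be organized as below. The actual system is programmed in LabVIEW; the `read_block` function here is a hypothetical stand-in for the card driver's read call, shown only to illustrate the block-wise sample-and-buffer pipeline.

```python
import numpy as np

SAMPLE_RATE_HZ = 1000  # assumed sampling rate (illustrative)
N_CHANNELS = 3         # three shoulder sensors, per the vest design

def read_block(n_samples: int) -> np.ndarray:
    """Hypothetical driver call returning an (n_samples, N_CHANNELS) array of
    voltages; a real program would call the DAQ vendor's API here instead."""
    return np.zeros((n_samples, N_CHANNELS))  # placeholder data

def acquire(duration_s: float) -> np.ndarray:
    """Poll the card in 100 ms blocks and concatenate them into one record."""
    block_len = SAMPLE_RATE_HZ // 10
    n_blocks = int(duration_s * SAMPLE_RATE_HZ) // block_len
    return np.vstack([read_block(block_len) for _ in range(n_blocks)])

record = acquire(2.0)                  # two seconds of three-channel data
peak_per_channel = record.max(axis=0)  # peak reading on each sensor
```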
Setting of Range of Force Sensing Device.
The kinematic indices of the shoulder tackle were measured in the laboratory, and the maximum impact force was predicted. The experimental subject was a 55 kg Chinese women's rugby player. Experimental kinematic data were filtered by a Butterworth second-order bidirectional low-pass filter with a cutoff frequency of 10 Hz. The Vicon system was used to mark up points and capture the complete shoulder movement; then, the C3D file was exported and imported into Visual 3D™ (C-Motion, USA, version 4.00.20) for kinematic data processing. We used the Visual 3D™ software for static modeling of the kinematic data: the athlete's static calibration C3D file was imported into Visual 3D, the athlete's height and body mass were set individually, a static model (.mdh file) was built and applied to the dynamic-data C3D files, and a dynamic model was established. After the model was applied, the kinematic data were calculated using the Pipeline data-processing program built into Visual 3D. The direction of the impact force is the flexion and extension movement in the sagittal plane of the body (defined as motion along the Z and Y axes). After the impact, the feet leave the ground, and the change of velocity ΔV in the sagittal plane along the Y and Z axes is inserted into the momentum theorem. The theoretical maximum value of the shoulder tackle impact force obtained from the kinematic data is about 1280 N, roughly 230% of body weight. Combined with existing studies, the impact force of the shoulder tackle for elite male athletes is 175% and 223% of body weight in running and jumping and 128%–157% of body weight in walking. The maximum impact force of the shoulder tackle, 1274 N (equivalent to 130 kg), is used as the reference and basis for designing the range of the impact force-sensing device.
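The two processing steps described above, zero-phase low-pass filtering of the kinematic traces and the momentum-theorem force estimate, can be sketched as follows. Only the filter settings (second-order Butterworth, 10 Hz cutoff, bidirectional) and the 55 kg body mass come from the text; the velocity traces and the contact time are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0  # assumed capture rate of the motion-capture system [Hz]
b, a = butter(2, 10.0, btype="low", fs=FS)  # 2nd-order Butterworth, 10 Hz cutoff

def smooth(trace: np.ndarray) -> np.ndarray:
    """Bidirectional (zero-phase) filtering, as used for the Vicon data."""
    return filtfilt(b, a, trace)

# Invented sagittal-plane velocity traces around the impact [m/s]
vy = smooth(np.linspace(2.0, 0.2, 40))
vz = smooth(np.linspace(0.5, -0.3, 40))

mass = 55.0  # body mass of the tested player [kg], from the text
dt = 0.10    # assumed impact duration [s], not given in the text
dv = np.hypot(vy[0] - vy[-1], vz[0] - vz[-1])  # |ΔV| in the sagittal plane
print(f"momentum-theorem estimate: F = m*dV/dt = {mass * dv / dt:.0f} N")
```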
Experimental Design.
The isokinetic strength testing instrument was an IsoMed 2000 (F&D, Germany), and the manufacturer's computer software IsoMed analyze V.3.1 was used. The testing contents included PT, relative PT, average power, relative average power, and the flexor/extensor PT ratio of the bilateral hip, knee, and ankle joints.
There was a unified arrangement of the tests: the specific time was arranged in the middle of training, and testing was completed within a week. Participants were given appropriate rest between trials (>2 min) to prevent the effects of fatigue. The testing scheme included the flexion/extension actions of the hip, knee, and ankle joints. In the preparation phase, the test requirements and cautions were explained to the athletes, and a fifteen-minute warm-up was taken, including ten minutes of slow treadmill running (ICON SFTL27808 treadmill, running speed 9 km/h) and five minutes of dynamic stretching. Furthermore, the athletes were given at least three familiarization trials at each joint and each speed until they performed the movement properly. Since the angular velocity of the ankle-joint sagittal movement in the ST action is less than 180°/s (the result of experimental testing in the Ph.D. project), combined with existing relevant research [16], 60°/s and 180°/s were selected as the fixed angular velocities for the testing. In the formal testing, each joint was tested at 60°/s and 180°/s through concentric actions during seated knee extension/flexion and supine hip (ankle) extension/flexion actions. At each velocity, 5 flexion and 5 extension actions were tested, and two to four test results were taken. The rest between the tests of contralateral homonymous joints was three minutes, and forty-minute intervals were required for heteronymous joints.
The methodology was strictly implemented by professional operators in accordance with the operating instructions, and the testing reports were printed automatically by the system. Moreover, the average of four repetitions was used as the final value for the parameters of the isokinetic strength test. The impact force of ST was tested by the self-developed testing system (Invention Patent Number: ZL201910594285.9), which consisted of hardware and software parts. The hardware part mainly included a force sensor, data acquisition card, and computer to collect, convert, transmit, store, and display. The software part involved an application program to implement the design and control of the data acquisition program, as well as the hardware driver to complete the working-mode setting of the data acquisition card, as shown in Figure 5.
Statistical Analysis.
Parametric tests were performed in SPSS (Version 25.0 for Windows, SPSS Inc., Chicago, IL, USA), and a one-sample Kolmogorov-Smirnov test was used to check whether the lower-extremity isokinetic strength data and the impact force of ST followed a normal distribution. The impact force of ST and the isokinetic strength were expressed as mean ± SD and were examined through an independent-sample t-test, with P < 0.05 considered statistically significant and P < 0.01 extremely significant; Pearson correlation and linear regression analysis were applied between the impact force of ST and the isokinetic strength of the lower-extremity joint movements.
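The same statistical pipeline can be reproduced outside SPSS with standard scipy.stats routines; the data below are synthetic (drawn around the PT mean and SD reported later in the paper), so the printed numbers are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pt = rng.normal(199.21, 29.9, 18)          # synthetic knee-extensor PT [Nm]
impact = 4.0 * pt + rng.normal(0, 60, 18)  # synthetic IF of ST [N]

# One-sample Kolmogorov-Smirnov test of standardized data against N(0, 1)
z = (pt - pt.mean()) / pt.std(ddof=1)
print(stats.kstest(z, "norm"))

# Independent-sample t-test between two (synthetic) groups
print(stats.ttest_ind(pt[:9], pt[9:]))

# Pearson correlation and simple linear regression between PT and IF of ST
print(stats.pearsonr(pt, impact))
print(stats.linregress(pt, impact))
```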
The Reliability and Validity of the Impact Force Tester Were Tested by Comparing the Actual Load with the Calibrated Value.
There is no significant difference between the external load and the scale value of the tester, and the two values have a highly positive correlation (Pearson correlation coefficient of 1, P ≤ 0.001), which fully shows that the tester has reliable performance and high reliability; the detailed results are shown in Tables 2 and 3.
The reliability and validity of the impact force tester were also tested using the difference and correlation between the actual load values of different loads and the calibrated values of the tester.
PT and Relative PT.
The PT and PT/BW of the ipsilateral extensors were significantly higher than those of the flexors, and the differences were extremely significant (P < 0.01) when the hip, knee, and ankle joints of the lower extremities moved at 60°/s and 180°/s [15]; the detailed results are shown in Tables 4 and 5.
Average Power and Relative Average Power.
With the hip, knee, and ankle joints moving at fixed angular velocities of 60°/s and 180°/s, the relative average power of the ipsilateral extensors was significantly larger than that of the flexors, and there were significant differences between these muscle groups. At an angular velocity of 180°/s, the average power and the relative average power of the contralateral homonymous knee muscles showed an extremely significant difference (P < 0.01). The detailed results can be seen in the corresponding statistics in Figures 6 and 7.
Flexor/Extensor PT Ratio.
When the knee joint of the lower limbs moved at a fixed angular velocity of 60°/s, the flexor/extensor PT ratios were significantly different between the left and right knees, and the ratios of the right knee were higher than those of the left knee. The detailed results are shown in Figure 8.
Correlation between Lower-Extremity Isokinetic Strength and IF of ST.
The results shown in Table 6 give the impact force of ST.
According to the analysis of the statistical data, the correlation between the relative PT of the right knee extensors and the low/middle positions of the front ST was significant at the 99% confidence level; the correlations between PT and the high position of the front ST, the high position of the reverse ST, and the middle position of the reverse ST were significant at the 95% confidence level [17], as shown in Table 7.
There was a linear regression between the PT of the right knee extensors at 60°/s and the IF of ST in different positions. The analysis of variance was significant (P < 0.05), and the regression results were statistically significant, as shown in Table 8.
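A useful reading aid for Table 8: in a simple one-predictor regression the standardized coefficient β equals the Pearson correlation between predictor and outcome, so the β values quoted in the abstract (0.722-0.892) double as correlation strengths. A short verification on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pt = rng.normal(200, 30, 18)               # predictor: knee-extensor PT [Nm]
impact = 3.5 * pt + rng.normal(0, 80, 18)  # outcome: IF of ST [N]

slope = stats.linregress(pt, impact).slope
beta = slope * pt.std(ddof=1) / impact.std(ddof=1)  # standardize the slope
r = stats.pearsonr(pt, impact)[0]
assert np.isclose(beta, r)  # beta coincides with Pearson r in simple regression
```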
Isokinetic Hip Strength Analysis.
The muscles at the hip joint are the strongest part of the lower extremity and provide strength in the stretching process of ST. Through them, the body can obtain a certain velocity in the forward and upward directions to promote the stretching of the knee and ankle joints [18].
During the movements of the hip joint at fixed angular velocities of 60°/s and 180°/s, the differences between the contralateral homonymous flexors and extensors were within 10%, which indicated that the movements of the hip flexors and extensors were in a normal and safe range. The distinction between the ipsilateral hip flexors and extensors could be partly explained as an adaptation to rugby. Mauling and the formation and propulsion of scrummaging, as well as tackling, which is the most commonly used defensive action, are mainly comprised of hip flexion and extension movements in the sagittal plane. Generally, catching and pick-and-goes begin from an upright position, and the sprint usually starts from change-of-direction movements. Most importantly, rugby players generally stand upright for ST defense in training and matches; thus, hip extension in the sagittal plane is the major action [19]. The hip PT and relative PT of the flexors and extensors at an angular velocity of 180°/s were obviously lower than at 60°/s, which is consistent with many relevant studies [12,14]. The main reason is that the excitement and tension of the muscle fibers take time to generate, which means that a faster movement speed reduces the muscle contraction time and the number of muscle fibers recruited, resulting in a decrease in the strength generated. In slow movement at 60°/s, slow-twitch muscle fibers are primarily recruited and a certain proportion of oxidative energy supply is involved. It is worth mentioning that fast movement at 180°/s triggers the recruitment of fast-twitch muscle fibers. Moreover, the intensity of rugby is between high and extreme, while the mode of energy supply is dominated by anaerobic metabolism [20]. In general, the low-speed hip strength training of the athletes is satisfactory; nevertheless, the speed strength should be enhanced. Note that the female rugby players in the current study had flexor/extensor ratios of 0.54-0.7 at 60°/s and 180°/s, respectively, which is lower than the 0.7-0.9 reported for male football players at 50°/s and 200°/s [21]. Scott R. Brown's studies indicated that there was a great difference between the dominant and nondominant legs in isokinetic strength testing at 60°/s for male rugby players; the difference may be partially explained by the technique demands of rugby: tackles made primarily with the dominant leg, reception and running with the ball generally from upright positions, and sprint efforts commonly preceded by backward running. The net effect is that players who train and play in a more upright position use their dominant hip extensors as the main producers of force, work, and power [14]. Unfortunately, no profiling of female rugby players' hip strength has been reported. There is a need for hip strength profiling of elite players, as this helps to understand the requirements and characteristics of rugby players, as well as guiding specific conditioning practices to better effect.
With the movements of the hip, knee, and ankle joints at fixed angular velocities of 60°/s and 180°/s, the average power and the relative average power of the flexors and extensors
With the movements of the hip, knee, and ankle joints at fixed angular velocities of 60°/s and 180°/s, the average power and the relative average power of the flexors and extensors Note. e independent variable is the PT of the right knee extensor muscle, the dependent variable is IF of ST, and β is the standardized regression coefficient.
were different, which reflected that the maximal and explosive strengths of the extensors were higher than the flexors. At a fixed angular velocity of 180°/s, the average power and the relative average power of the flexors and extensors were improved, and the power increased with the rise of movement velocity is in a certain range. However, when the movement velocity of muscles reached the threshold, the power started to decrease with faster movement, and therefore, the average power was adopted as the standard.
At the same velocity, the average power and the relative average power of the extensors were higher than those of the flexors, indicating that the function of the extensors was superior under the same conditions. The reason is that the offense and defense in rugby are mainly based on stretching, supplemented by the buffering technique after landing. The test results were consistent with previous studies and reflected that rugby players enhance the ability to recruit flexor muscle fibers at higher movement velocities. Balancing the development of the hip flexors and extensors is a requirement for rugby players to adapt to this specialized sport.
Although the hip joint is not crucial in the close-range movements of rugby, it is a major joint for lower-limb stretching and for maintaining movement balance [22].
Isokinetic Knee Strength Analysis.
At the beginning of ST, the hip joint drives the knee joint to stretch and generates appropriate contraction lengths for the extensors at the best exertion angle, making the knee extensors contract concentrically. The muscles of the knee joint make the greatest contribution to the impact force of ST. In international studies, the flexor and extensor strength of the knee joint is one of the focuses of lower-limb strength research in rugby players. Most studies claimed that a reasonable hamstring-to-quadriceps ratio (H/Q ratio) affects the learning of sports techniques, such as balancing and coordinating capacities, and is correlated with sports injuries [12,23,24].
The H/Q ratio, calculated by dividing the peak flexion torque by the peak extension torque of the same knee, is optimal between 60% and 65% [16,25]. It is also a major indicator in knee joint injury rehabilitation when its range is between 0.5 and 0.8. A lower ratio could influence the stability of the knee joint and cause joint injury. With a higher ratio, the knee joint lacks stretching strength, which could affect running speed as well as the coordination of passing and catching actions, the impact force of tackling, and other specialized technical movements in rugby. The research results showed that the H/Q ratios in Chinese elite female rugby players were between 0.6 and 0.8, consistent with international amateur female rugby players [15] as well as with excellent basketball and football players [26,27]. The H/Q ratios of the dominant legs were much higher than those of the nondominant legs. Moreover, the knee extensors had relatively greater strength, which was the result of habitually using ST techniques in general rugby play [7]. The phenomenon that the relative PT changes with rising movement velocity arises because the breaking and re-formation of cross-bridges in myofibers loses muscle strength. Besides, the fluid viscosity of the myofibers and the connective tissues requires internal forces to overcome the viscous resistance, which causes a decline in muscle tension. The contraction of the agonist muscles and the extension of the antagonist muscles could also cause a loss of muscle strength due to muscle viscosity, and the viscous resistance increases with higher contraction speed. The average PT of the knee extensors in Chinese female rugby players was 199.21 ± 29.9 Nm, while the PT in international elite female handball athletes was 162.77 ± 26.54 Nm [28]. In addition, the average relative PT of the knee in Chinese female rugby players was 1.78 ± 0.32, compared to 1.7 ± 0.4 in excellent international female football players [21]. The flexor/extensor PT ratios of the dominant and nondominant legs in Chinese female rugby players differed at a fixed angular velocity of 60°/s. In addition, the average power and the relative average power of the extensors of the left and right legs also differed at 180°/s. The explosive strength to mobilize the muscles quickly and the maximal extensor strength of the dominant leg were greater than those of the nondominant leg. The main reason is that the offense and defense of rugby are composed of frequent sudden stops and starts, as well as highly accelerated movements in changing directions. ST is the most frequently used tackling action in defense, accounting for 61% of the total tackling actions used [29]. In each game, the forwards of the team are exposed to an average of 55 physical collisions, while the backs are exposed to 29 physical collisions on average [18]. Most athletes habitually choose the right shoulder as the dominant shoulder and frequently perform ST actions using it in training and matches, leading to different muscle strength between the left and right sides of the knee joint.
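The H/Q computation and the range checks described above are simple enough to state directly; the flexion torque below is invented, while the extension torque reuses the group mean quoted in this paragraph.

```python
def hq_ratio(flexion_pt_nm: float, extension_pt_nm: float) -> float:
    """Hamstring-to-quadriceps ratio: peak flexion torque divided by peak
    extension torque of the same knee."""
    return flexion_pt_nm / extension_pt_nm

ratio = hq_ratio(125.0, 199.21)  # invented flexion PT; mean extension PT from the text
optimal = 0.60 <= ratio <= 0.65  # optimum quoted above
rehab_ok = 0.50 <= ratio <= 0.80 # rehabilitation indicator range
print(f"H/Q = {ratio:.2f}, optimal: {optimal}, within rehab range: {rehab_ok}")
```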
In conclusion, the knee extensor strength of Chinese elite female rugby players is well developed, while the extensor strength of the left and right knees is imbalanced. It is recommended to focus on the balanced development of left- and right-side extensor and flexor strength of the lower extremity in strength training, increasing the hamstring-to-quadriceps ratio of the knee joint and enhancing strength and stretching exercises for the hamstrings.
Isokinetic Ankle Strength Analysis.
The ankle joint is the terminal joint of the lower extremity and the point of strength conversion.
Through the transmission of the ankle joint, the hip and knee strength can be applied to the ground. Particularly in the stretching phase of sprinting, ankle strength determines the stability of the supporting actions, as well as the functional efficiency of the upper joints and the time sequence of their participation in the movement [30]. The ankle joint is also the most easily injured joint, accounting for 25% of the total injuries in running and jumping sports. In the latest research from Noronha, the injury rate of the ankle joint was 1.63 per 1000 competition hours based on the duration of rugby union games, and 0.063 per game based on the number of series [31]. The research of Sman reported that the injury rate of the ankle ligament in rugby union games was 0.89 per 1000 competition hours, and 0.46 per 1000 competition hours in rugby league games [32]. Relevant research results indicated that the average PT ratio of the dorsiflexors to the plantar flexors was between 0.26 and 0.35. In addition, other studies showed that when the lower-limb extensors were well developed, the flexors were correspondingly well developed. The PT of the plantar flexor muscle groups of the ankle joint was about three times that of the dorsiflexor muscle groups. The main reason is that the plantar flexor muscle groups play the leading role in the contraction of the ankle muscle groups. Accordingly, the maintenance of standing posture and general activities in daily life require substantial participation of the plantar flexor muscle groups, and these extensors usually carry a heavy burden in normal sports. The difference between the ankle flexor/extensor PT ratios of Chinese elite rugby players and the reasonable value was within a variation range of 10% to 15%, and the PT of the plantar flexor muscle groups was about four times that of the dorsiflexor muscle groups [14]. The latest studies found that the strength of the dorsiflexor muscle groups affects balance capacity (Y), with a linear relationship Y = 71.08 − 0.81 × CS_ML + 5.47 × PT_HABD + 0.24 × ADF_KF, where CS_ML is the displacement of the barycenter, PT_HABD is the maximal strength of the hip abductor, and ADF_KF is the ankle dorsiflexor muscle strength [22]. Therefore, Chinese female rugby players should enhance the strength training of the ankle dorsiflexor muscle groups and evenly develop the flexor and extensor strength of the ankle joint.
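The balance-capacity regression quoted above can be evaluated directly; the coefficients are those given in the text, while the example inputs are invented placeholders.

```python
def balance_capacity(cs_ml: float, pt_habd: float, adf_kf: float) -> float:
    """Y = 71.08 - 0.81*CS_ML + 5.47*PT_HABD + 0.24*ADF_KF, where CS_ML is the
    barycenter displacement, PT_HABD the maximal hip-abductor strength, and
    ADF_KF the ankle dorsiflexor strength (coefficients from the text)."""
    return 71.08 - 0.81 * cs_ml + 5.47 * pt_habd + 0.24 * adf_kf

print(balance_capacity(cs_ml=10.0, pt_habd=1.5, adf_kf=20.0))  # invented inputs
```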
Impact of Lower-Extremity Strength on ST.
The lower-extremity movement of ST is a stretch in an asymmetrical, nonvertical jump. In the stretching stage, the extensors contract concentrically to generate torque, so that the vertical and horizontal momentum is determined by the muscle strength and the body movement speed, and these in turn determine the impact force [12]. The linear relationship between the extensors of the dominant knee and the IF of ST reflects this. Weak links in the lower extremity also reduce the buffering ability of the foot after landing and ultimately affect the force and speed of the ST action. On the other hand, an imbalance between the flexors and extensors of the knee and ankle joints could lead to instability of the ST action and influence the balance, stability, and technical learning of the action. Even worse, this problem could also easily cause joint injuries. The effective measures to improve ST ability are, first, developing lower-extremity strength in a balanced manner, especially the flexors and extensors of the same side. Next, the left- and right-side ST techniques need to be developed evenly. Novice athletes should adopt left or right ST based on different distances. The habitual adoption of the dominant-side ST defense could cause an imbalance of strength between the left and right sides of the lower extremity. In a word, the reverse ST reduces the defensive effect and easily causes injuries.
A key part of this study is that the measuring instrument for ST impact force was designed according to the physiological characteristics of female athletes; the force-sensing device can accurately measure the ST impact force of female athletes, which can help rugby coaches improve the effect of tackle technique training.
Conclusion
In this paper, the design uses LabVIEW, a USB multichannel data acquisition card, and a FlexiForce™ A502 pressure sensor to complete the overall construction of the data acquisition system and the impact-force sensing device.
Through its application, the accuracy, reliability, and practicability of the data processing function of the tester system were fully verified. The tester can easily and effectively measure the force of a woman's shoulder tackle and is therefore worth promoting.
A variety of sensors can be applied to athletes' wearable devices to achieve intelligent monitoring. The unilateral lower-extremity extensor strength of Chinese elite female rugby players was greater than the flexor strength. Meanwhile, the imbalance between the bilateral knee extensors and flexors limits the improvement of the defensive ability of ST. It is feasible to use the extensor strength of the dominant-side knee joint as a reference index to evaluate the defensive ability of ST. The balanced development of lower-extremity strength and of the front/reverse ST techniques can help athletes enhance the stability of ST actions and their defensive capacity, and prevent injuries effectively. Based on the experimental verification in this paper, the force tester will be further verified, improved, and popularized in sports, and the proposed network should be validated in theoretical analysis and practical applications in future work.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
"Engineering",
"Medicine"
] |
Sarcosine is a prostate epigenetic modifier that elicits aberrant methylation patterns through the SAMe‐Dnmts axis
DNA hypermethylation is one of the most common epigenetic modifications in prostate cancer (PCa). Several studies have delineated sarcosine as a PCa oncometabolite that increases the migration of malignant prostate cells while decreasing their doubling time. Here, we show that incubation of prostate cells with sarcosine elicited the upregulation of sarcosine N-demethylation enzymes, sarcosine dehydrogenase and pipecolic acid oxidase. This process was accompanied by a considerable increase in the production of the major methyl-donor S-adenosylmethionine (SAMe), together with an elevation of cellular methylation potential. Global DNA methylation analyses revealed increases in methylated CpG islands in distinct prostate cell lines incubated with sarcosine, but not in cells of nonprostate origin. This phenomenon was further associated with marked upregulation of DNA methyltransferases (Dnmts). Epigenetic changes were recapitulated through blunting of Dnmts using the hypomethylating agent 5-azacytidine, which was able to inhibit sarcosine-induced migration of prostate cells. Moreover, spatial mapping revealed concomitant increases in sarcosine, SAMe and Dnmt1 in histologically confirmed malignant prostate tissue, but not in adjacent or nonmalignant tissue, which is in line with the obtained in vitro data. In summary, we show here for the first time that sarcosine acts as an epigenetic modifier of prostate cells and that this may contribute to its oncometabolic role.
Introduction
Chromatin structure defines the state in which genetic information in the form of DNA is organized (Toh et al., 2017). Recent advances in cancer research have shown that global changes in the epigenetic landscape are a hallmark of various types of malignant diseases, including prostate cancer (PCa) (Jeronimo et al., 2011;Baylin, 2002, 2007). Unlike DNA mutations, epigenetic changes induce conformational changes in the DNA double helix and modify transcription factor access to promoter regions upstream of coding sequences (Hanahan and Weinberg, 2011). Among the epigenetic modifications, DNA hypermethylation is one of the most common in PCa (Dobosy et al., 2007;Geybels et al., 2015;Hoque et al., 2005). Methylation is a result of the enzymatic transfer of a methyl group from the methyl-donor S-adenosylmethionine (SAMe) to the C5-position of cytosine, while cytosines are mostly methylated when bound to guanines (Jeronimo et al., 2011). The hypermethylation of regions with a high G/C content (palindromic CpG islands) of tumour suppressor genes leads to their inactivation, which may represent an early event in PCa development (Valdes-Mora and Clark, 2015).
Sarcosine (N-methyl glycine) is a widely discussed noninvasive biomarker of the early stages of PCa (Khan et al., 2013;Sreekumar et al., 2009). Several studies have delineated sarcosine as a PCa oncometabolite that increases the migration of malignant prostate cells while decreasing their doubling time (Heger et al., 2016a). Sarcosine also induces PCa cell invasion and intravasation in vivo (Khan et al., 2013) and affects expression of genes involved in cell cycle regulation and apoptosis (Heger et al., 2016a). Despite the obvious importance of sarcosine in PCa progression, the molecular mechanisms of its action have not yet been fully elucidated.
In its biochemical pathway, sarcosine is produced from glycine by glycine N-methyltransferase (GNMT) or alternatively from dimethylglycine by dimethylglycine dehydrogenase (DMGDH) (Heger et al., 2016a). Conversely, sarcosine oxidative N-demethylation yielding glycine is promoted by sarcosine dehydrogenase (SARDH) and pipecolic acid oxidase (PIPOX) (Cha et al., 2014). This process provides a methyl group that can be utilized for the methylation of homocysteine that completes a cycle in which the monocysteinyl moiety is converted sequentially from methionine to SAMe (Wilson et al., 2009). Consequently, SAMe can be demethylated to S-adenosylhomocysteine (SAH) (Sibani et al., 2002). During this process, methyl groups are supplied for numerous transmethylation reactions, including the methylation of nucleic acids (the pathway is schematized in Fig. 1). It is a general fact that epigenetic and metabolic alterations in cancer cells are highly intertwined; however, despite the obvious linkage between sarcosine and methionine pathways, whether sarcosine metabolism can contribute to the hypermethylation of DNA remains unknown.
Therefore, the aim of this study was to investigate the relationship between sarcosine metabolism and molecules pivotal for methylation processes. For the first time, we showed that the incubation of prostate cells with sarcosine resulted in a complex response that involved the elevated formation of the methyl-donor SAMe and increased global DNA methylation and stimulation of the expression of DNA methyltransferases (Dnmts), enzymes responsible for the catalytic transfer of methyl groups from SAMe to DNA (van der Wijst et al., 2015). Moreover, we demonstrated that sarcosine was partially able to avert the inhibition of Dnmts induced by the hypomethylating agent 5-azacytidine (5-Aza) and that 5-Aza inhibited sarcosine-induced migration of prostate cells. These data showed that sarcosine metabolism is coupled to aberrant DNA methylation, which is considered one of the major hallmarks of cancer.
Chemical compounds
All standards and other chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA) with ACS purity, unless noted otherwise.
Cell lines and treatment rationale
Three human prostate cell lines were used in this study: (a) the PNT1A human cell line, established by immortalization of normal adult prostatic epithelial cells; the primary culture was obtained postmortem from the prostate of a 35-year-old male; (b) the 22Rv1 human PCa epithelial cell line, derived from a xenograft that was serially propagated in mice after castration-induced regression and relapse of the parental, androgen-dependent CWR22 xenograft; and (c) the LNCaP human cell line, established from an androgen-sensitive metastasis located in the left supraclavicular lymph node of a 50-year-old Caucasian male. To evaluate the specificity of DNA methylation, we employed the following nonprostate cell lines: A2780 (ovarian cancer); MDA-MB-231 (triple-negative breast cancer); and SH-SY5Y and UKF-NB-4 (neuroblastoma). Except for UKF-NB-4, which was a kind gift from Eckschlager, the cell lines used in this study were purchased from the Health Protection Agency Culture Collections (Salisbury, UK). The culture media used were as follows: Roswell Park Memorial Institute-1640 (RPMI-1640) for culturing PNT1A, 22Rv1, LNCaP and A2780 cells; Dulbecco's Modified Eagle's Medium for culturing MDA-MB-231 and SH-SY5Y cells; and Iscove's Modified Dulbecco's Medium for culturing UKF-NB-4 cells. For culturing, the media were supplemented with 10% fetal bovine serum, penicillin (100 U·mL⁻¹) and streptomycin (0.1 mg·mL⁻¹). The cells were maintained at 37°C in a humidified Galaxy 170R incubator (Eppendorf, Hamburg, Germany) with 5% CO₂. The treatment with sarcosine was initiated after the cells reached ~70-80% confluence. For administration, 1 µM sarcosine (sterile-filtered, BioXtra product line) was used throughout the study. The rationale behind this decision was that this concentration was shown to stimulate proliferation of prostate cells in vitro and in vivo (Heger et al., 2016b). As controls, we analysed cells exposed to a relevant volume of vehicle [sterile Milli-Q water (Merck Millipore, Burlington, MA, USA) w/o sarcosine].
Quantification of sarcosine intracellular uptake
Intracellular sarcosine was quantified using an HPLC HP 1100 Series (Palo Alto, CA, USA) coupled with a fluorescence detector (FLD operating with λexc = 350 nm, λem = 450 nm). Upon incubation (due to the expectation of fast intracellular accumulation, the longest incubation time was 3 h), the cells were washed with PBS (5×) and extracted in a mixture of MeOH and 1 M acetic acid (80 : 20 v/v). Chromatographic separation and detection were performed after precolumn derivatization with o-phthalaldehyde and fluorenylmethyloxycarbonyl chloride. The compounds were eluted with a linear upward gradient of mobile phases composed of acetonitrile/water (90 : 10 v/v) and 0.1 M ammonium formate in water.
Immunocytochemistry
For immunocytochemistry (ICC), cells were grown on eight-well chamber slides and incubated with sarcosine (1 µM, 24 h). Then, cells were fixed with 4% paraformaldehyde for 15 min. After permeabilization with 0.3% Triton X-100 in PBS for 3 min, the cells were blocked with 10% BSA in PBS and incubated with the indicated primary antibodies at 4°C overnight. ICC was analysed with an EVOS FL Auto Cell Imaging System (Thermo Fisher Scientific, Waltham, MA, USA). Rabbit anti-GNMT (1 : 500) and rabbit anti-SARDH (1 : 200) were from Abcam (Cambridge, UK); mouse anti-DMGDH (1 : 500) and mouse anti-PIPOX (1 : 200) were from Santa Cruz Biotechnology (Dallas, TX, USA); and mouse anti-Dnmt1 (1 : 1000) was from Thermo Fisher Scientific. ICC was quantified using ImageJ (National Institutes of Health, Bethesda, MD, USA) by analysing the intensity of 80-120 cells per group with subsequent background subtraction (analysis without primary antibodies).
Extraction and quantification of SAMe and SAH
S-adenosylmethionine and SAH were extracted in a mixture of MeOH and acetic acid [1 M, 80 : 20 (v/v)]. Briefly, 300 µL of solvent was added to the frozen cells followed by slow thawing on ice. The quantification of SAMe and SAH upon 2-, 6-, 12- and 24-h incubation with sarcosine (1 µM) was performed using HPLC with an electrospray ionization quadrupole-quadrupole time-of-flight mass spectrometer (HPLC-ESI-QqTOF MS) Maxis Impact (Bruker Daltonik GmbH, Bremen, Germany). The separation was performed on the C18 reverse phase column Phenomenex Kinetex EVO (Phenomenex, Torrance, CA, USA). As mobile phases, water with 0.1% (v/v) formic acid and MeOH with 0.1% (v/v) formic acid were used.
Global analysis of DNA methylation
DNA was extracted from cells incubated with sarcosine (1 µM; 24, 48 and 72 h) using the ExtractNow™ DNA Mini Kit (Minerva Biolabs, Berlin, Germany) according to the manufacturer's instructions and quantified at λ = 260 nm using an Infinite 200 PRO spectrophotometer (Tecan, Mannedorf, Switzerland). The global DNA methylation analysis was performed using the commercial methylated DNA quantification kit (MDQ1 Imprint kit) in 96-well plates according to the manufacturer's instructions. The readout was performed at 450 nm. The results are expressed upon subtracting background.
Bisulphite treatment of genomic DNA, bisulphite polymerase chain reaction (BSP) and sequencing
Two micrograms of genomic DNA was treated with sodium bisulphite using the Epitect Bisulphite Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. After bisulphite conversion, DNA was amplified with specific primers designed using the MethPrimer (Li and Dahiya, 2002) and Bisulphite Primer Seeker 12S (Zymo Research, Irvine, CA, USA) with emphasis on the amplification of CpG islands in the promoters or CpG-rich regions. The list of genes and sequences of primers used for BSP are provided in Table S1. All primers were confirmed to not amplify any nonbisulphited DNA. PCR products were analysed on 1% agarose gels, and the bands were purified with QIAEX II Gel Extraction Kit (Qiagen). The purified DNA was analysed by Sanger sequencing by SEQme company (Dobris, Czech Republic).
Quantification of the mobile pool of intracellular zinc
Intracellular free zinc was analysed according to a slightly modified protocol (Haase et al., 2006). Briefly, cells were grown in 96-well plates up to 70-75% confluency and pretreated with sarcosine or zinc sulphate for 24 or 48 h. Then, the medium was removed, and the cells were loaded with 2.5 µM Zinpyr-1 in loading buffer [10 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, pH 7.35, 120 mM sodium chloride, 5.4 mM KCl, 5 mM glucose, 1.3 mM calcium chloride, 1 mM magnesium chloride, 1 mM sodium dihydrogen phosphate and 0.3% BSA] and analysed on a Tecan Infinite M200 reader (Tecan) using λexc = 492 nm, λem = 527 nm.
Quantification of total intracellular zinc using atomic absorption spectrometry (AAS)
Prior to total zinc quantification, cells were washed with 10 µM ethylenediaminetetraacetic acid to remove extracellularly bound zinc and then digested in nitric acid (65% v/v) and hydrogen peroxide (30% v/v). Total zinc was analysed using the graphite furnace AAS 280Z (Agilent Technologies, Santa Clara, CA, USA) with Zeeman background correction.
Isolation of RNA and qRT-PCR
A High Pure total-RNA isolation kit (Roche Life Science, Indianapolis, IN, USA) was used for isolation of cellular RNA from cells incubated with sarcosine (24 h). The medium was removed, and samples were washed twice with 5 mL of ice-cold PBS. Cells were scraped off, transferred to clean tubes and centrifuged at 20 800 g for 5 min at 4°C. After that, lysis buffer was added and RNA isolation was carried out according to the manufacturer's instructions. RNA (500 ng) was transcribed using the Transcriptor First Strand cDNA Synthesis Kit (Roche Life Sciences) according to the manufacturer's instructions. Prepared cDNA (20 µL) was diluted with RNase-free water to a total volume of 100 µL, and 5 µL of this solution was employed for qRT-PCR using the SYBR Green Quantitative Kit (Sigma-Aldrich). The specificity of the qRT-PCR was checked by melting curve analysis, and the relative levels of transcription were calculated using the 2^(−ΔΔCT) method. The list of primers is provided in Table S2.
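Since the 2^(−ΔΔCT) calculation is compact, a minimal sketch is given below, assuming a single reference (housekeeping) gene per sample; the function name and the Ct values are hypothetical.

```python
# Hedged sketch of the 2^(-ΔΔCT) relative-quantification method.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene, treated vs. control, via 2^(-ΔΔCT)."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: the target Ct drops by one cycle relative to the reference gene,
# corresponding to an approximately 2-fold upregulation.
print(relative_expression(24.0, 18.0, 25.0, 18.0))  # -> 2.0
```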
5-Aza treatment and wound-healing assay
Prior to 5-Aza treatment, its dose-response curves were obtained using the MTT assay performed in accordance with our previous study (Heger et al., 2016a). We chose the optimal concentration that allowed the cells to tolerate the treatment without affecting proliferation (Fig. S1). For the wound-healing assay, the cells were cultured in 6-well plates until they reached ~80% confluency. Then, a pin was used to scratch and remove cells from a discrete area of the confluent monolayer to form a cell-free zone. The cells were then treated with sarcosine (1 µM), the DNA-hypomethylating agent 5-Aza (1 µM) or their combination (1 µM sarcosine and 1 µM 5-Aza) and incubated up to 24 h. At 6, 12 and 24 h, micrographs of the cells were taken using the EVOS FL Auto Cell Imaging System (Thermo Fisher Scientific) and compared with micrographs obtained at 0 h using the TSCRATCH software (CSElab, Zurich, Switzerland).
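For illustration, the closure metric that scratch-assay tools such as TSCRATCH report can be sketched as follows; the wound_closure helper and the pixel areas are hypothetical.

```python
# Hedged sketch: percentage wound closure from open-wound areas, relative
# to the 0 h micrograph. Area values (in pixels) are hypothetical.

def wound_closure(area_t0: float, area_t: float) -> float:
    """Percentage of the initial cell-free zone closed at time t."""
    return (1.0 - area_t / area_t0) * 100.0

areas = {0: 52_000, 6: 41_500, 12: 30_800, 24: 14_300}  # hours -> open area
for t, area in areas.items():
    print(f"{t:>2} h: {wound_closure(areas[0], area):5.1f} % closed")
```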
Clonogenic assay
Cells were seeded in a 6-well plate at a density of 1 × 10⁴ cells per well in growth medium and incubated for 6 h. Then, the cells were treated with 1 µM 5-Aza, 1 µM sarcosine or their combination (1 µM sarcosine and 1 µM 5-Aza, 24 h) as indicated. After medium renewal, the cells were incubated for 8 days in adequate treatment-free culture media. Finally, the cells were washed with PBS and fixed using 500 µL of 3 : 1 MeOH : acetic acid for 5 min. To visualize the colonies, the cells were stained using 500 µL of 0.5% crystal violet in MeOH for 15 min.
Preparation of prostate tissue specimens
Following Institutional Review Board approval, we accessed prostate tissue from the Department of Pathology, University Hospital Brno. The samples included four prostate tissue specimens (1 benign only, 3 malignant mixed with benign tissue) that had been obtained from prostatectomies and transurethral resections of subjects with signed informed consent. Tissues were fixed in 10% buffered formalin and embedded in paraffin. A 1-4 µm section of each tissue sample was mounted on glass slides and stained with haematoxylin-eosin (H&E). A pathologist (E.T.) reviewed and annotated normal and cancerous areas on the H&E-stained adjacent tissue sections. Then, 10-µm-thick adjacent sections were obtained using a Leica SM2010 R (Leica, Wetzlar, Germany). These tissue sections were mounted on glass microscope slides for immunohistochemistry (IHC) and desorption electrospray ionization (DESI) mass spectrometry imaging (MSI) or on indium tin oxide-coated slides for matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MSI. The study conformed to the standards set by the Declaration of Helsinki.
IHC of prostate tissue sections
Tissue sections (10 µm) were deparaffinized with xylene and rehydrated in a graded series of decreasing ethanol concentrations. Heat-induced epitope retrieval was performed using sodium citrate buffer (10 mM sodium citrate, 0.05% Tween-20, pH 6.0) for 20 min. Sections were blocked with 5% BSA (1 h) and incubated with anti-Dnmt1 (1 : 200, in PBS-T with 1% BSA) overnight at 4°C in a humidified chamber. To control IHC specificity, primary antibodies were replaced by nonbinding immunoglobulins. After incubation in 0.3% H₂O₂ in PBS for 15 min, the samples were incubated with a secondary HRP-conjugated rabbit anti-mouse antibody (P0260; Dako) in PBS-T with 1% BSA for 1 h at 25°C. Finally, sections were developed with the Vector VIP Peroxidase Substrate Kit (Vector Laboratories, Burlingame, CA, USA) at 25°C for 10 min, mounted with DPX Mountant for histology and examined using the EVOS FL Auto Cell Imaging System (Thermo Fisher Scientific).
DESI MSI
Desorption electrospray ionization mass spectrometry imaging was performed using an OrbiTrap Elite mass spectrometer (Thermo Fisher Scientific) with a DESI-2D ion source (Prosolia, Indianapolis, IN, USA). Imaging experiments were performed by continuous scanning of the tissue surface with spraying liquid (8 : 2, MeOH/water, v/v) at a flow rate of 2 µL·min⁻¹, a scanning velocity of 65 µm·s⁻¹ and a 55° spray impact angle in the positive ion mode (for sarcosine, m/z 90.0549). The obtained data were processed using the BIOMAP 3.8.0.3 software (Novartis Institutes for Biomedical Research, Cambridge, MA, USA) to create 2D ion images.
MALDI-TOF MSI
The MSI was performed on a MALDI-TOF/TOF mass spectrometer Bruker UltrafleXtreme (Bruker Daltonik GmbH). As the MALDI matrix, we used 2,5-dihydroxybenzoic acid (30 mg·mL⁻¹) in MeOH/water (50 : 50, v/v) with 0.2% trifluoroacetic acid. Prior to analyses, tissue sections were scanned and loaded into FLEXIMAGING 3.0 software (Bruker Daltonik GmbH). The m/z images were generated and visualized using SCILS LAB 2014b software (Bruker Daltonik GmbH). The detection of SAMe and SAH was confirmed by MALDI-TOF/TOF analysis using a LIFT cell by detecting typical fragments of the SAMe precursor ion at m/z 399.145 and the SAH precursor ion at m/z 385.130. Typical fragment ions of SAMe were identified at m/z 250.095, 136.063 and 102.055, whereas typical fragment ions of SAH were at m/z 250.044, 136.074 and 134.009.
Descriptive statistics
For the statistical evaluation of the results, the mean was taken as the measure of central tendency, and the standard deviation as the measure of dispersion. Differences between groups were analysed using a paired t-test. For the analyses, the software STATISTICA 12 (StatSoft, Tulsa, OK, USA) was employed.
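A minimal sketch of this workflow in Python, using SciPy in place of STATISTICA, might look as follows; the paired measurement arrays are hypothetical.

```python
# Hedged sketch: descriptive statistics and a paired t-test with SciPy.
import numpy as np
from scipy import stats

control   = np.array([1.02, 0.95, 1.10, 0.99, 1.05])  # hypothetical values
sarcosine = np.array([1.25, 1.18, 1.31, 1.12, 1.27])

print(f"control:   mean={control.mean():.3f}, sd={control.std(ddof=1):.3f}")
print(f"sarcosine: mean={sarcosine.mean():.3f}, sd={sarcosine.std(ddof=1):.3f}")

t_stat, p_value = stats.ttest_rel(sarcosine, control)  # paired t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```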
Incubation of prostate cells with sarcosine elicits the upregulation of the sarcosine N-demethylation enzymes
We first confirmed our earlier finding that sarcosine stimulates the expression of SARDH but not GNMT (Heger et al., 2016a). Additionally, we examined the expression of DMGDH, which is involved in the subpathway that synthesizes sarcosine from dimethylglycine, and PIPOX, which catalyses the oxidative demethylation of sarcosine to yield glycine. Figure 2A, B illustrates that sarcosine incubation had significant (P < 0.05) stimulatory effects on the cytoplasmic expression of SARDH and PIPOX. In contrast, no effect on GNMT and DMGDH, which participate in sarcosine synthesis, was identified. Together with HPLC-FLD that revealed a rapid intracellular accumulation of sarcosine (Table S3), the obtained data indicate that the addition of sarcosine increases its intracellular pool, which leads to the consequent upregulation of SARDH and PIPOX, enzymes mediating sarcosine N-demethylation.
Sarcosine alters the global methylation status of prostate cells
As we identified stimulatory effects of sarcosine on the expression of sarcosine-converting enzymes, we continued with the quantification of SAMe and SAH. Figure 3A illustrates that in all prostate cell lines, sarcosine induced a significant elevation of SAMe. On the other hand, the amount of SAH was either decreased (metastatic LNCaP) or negligibly affected (nonmalignant PNT1A and malignant 22Rv1). This finding was also reflected by a significant increase in cellular methylation potentials (CMP, also SAMe : SAH ratio) (Ulanovskaya et al., 2013), particularly in metastatic LNCaP cells. In mammals, > 90% of SAMe is used for methylation reactions; therefore, in the next step, we focused on evaluating the extent of DNA methylation (schematized in Fig. 3B).
To understand the role of sarcosine in the DNA methylation, we first analysed time-course changes in global methylation of genomic DNA isolated from prostate cells (Fig. 3C). The highest increase in global methylation was identified for PNT1A cells, which also had the lowest relative baseline methylation. A similar trend was found in LNCaP and 22Rv1 cells, for which a faster onset of DNA methylation (upon 48-h incubation with sarcosine) was identified. We further examined the specificity of sarcosine-induced methylation in prostate cells. Figure 3D demonstrates that sarcosine failed to enhance DNA methylation in all tested nonprostate cells.
As SAMe is also involved in the biosynthesis of polyamines, we examined the effect of sarcosine on the intracellular pool of spermine (Spm) and spermidine (Spd). Figure S2 illustrates no significant contribution of the sarcosine-to-SAMe axis towards Spm and Spd levels. Overall, our results indicate a certain connection between sarcosine metabolism and the DNA methylation processes of prostate cells.
Sarcosine elicits the upregulation of Dnmts and promotes aberrant promoter methylation patterns
Significant stimulation of global DNA methylation prompted us to investigate the methylation patterns of CpG-rich islands in promoter regions of selected genes crucial for PCa development and progression. BSP sequencing (BSP product validation is shown in Fig. 4A) revealed that sarcosine caused a denser promoter methylation, particularly in CCND2 (6% vs. 37% postsarcosine treatment), CDKN2B (6% vs. 29%), CD44 (10% vs. 33%) and androgen receptor (AR) (18% vs. 36%). In contrast, no significant contribution of sarcosine towards the methylation of promoters of two analysed proto-oncogenes (JUN and FOS) was found (Fig. 4B). Notably, a sarcosine-induced decrease in transcriptional activity of CCND2 and CDKN2B was identified in our previously published microarray-based study (Merlos Rodrigo et al., 2017).
Methyl groups are catalytically transferred from SAMe to DNA by enzymes from the Dnmt family. Therefore, we investigated the influence of sarcosine on the expression of the three major Dnmts (Fig. 4C). In all prostate cells, sarcosine caused a pronounced stimulation of Dnmt1 (Fig. 4D). Interestingly, after 24 h, Dnmt1 expression decreased. On the other hand, the expression of Dnmt3a and Dnmt3b was stimulated upon longer sarcosine incubation. This result could be connected to divergent roles of Dnmts and their distinct methylation activities.
As Dnmts contain zinc finger structural motifs that coordinate zinc ions to stabilize the protein fold (Frauer et al., 2011;Otani et al., 2009), we further estimated the influence of sarcosine on levels of mobile vs. total intracellular zinc. Notably, sarcosine did not affect the intracellular mobile zinc pool (Fig. 4E). In contrast, total intracellular zinc status was markedly increased in all prostate cell lines upon incubation with sarcosine (Fig. 4F). This finding also corresponds to Dnmt1 expression. However, due to the abundance of zinc finger motifs within the human proteome, the stimulatory effects of sarcosine on zinc homeostasis should be further studied in detail.
Blunting of Dnmts by 5-Aza inhibits sarcosine-induced migration and proliferation
To verify that DNA methylation plays a role in the sarcosine-induced stimulation of prostate cells, we first assessed the effects of 5-Aza, a pharmacological DNA methylation inhibitor, on sarcosine-induced expression of Dnmts. We showed that in all prostate cells, 5-Aza treatment (10 µM) could diminish Dnmt1 expression (Fig. 5A) without noticeable toxicity (Fig. S1). Identification of the inhibitory effects of 5-Aza on Dnmt3a and Dnmt3b was biased by their low basal expression (Fig. 5B). Hence, we validated the experiment using qRT-PCR. Figure 5C confirms the pronounced inhibitory activity of 5-Aza. Interestingly, sarcosine partially reverted 5-Aza activity, particularly for Dnmt1. This finding was confirmed by ICC demonstrating nuclear localization of Dnmt1 and its slightly reverted expression upon 5-Aza/sarcosine treatment (Fig. 5D). To further address whether 5-Aza might regulate the previously described sarcosine-induced migration (Heger et al., 2016a; Khan et al., 2013; Sreekumar et al., 2009), we next analysed the effects of the above-discussed treatments using the monolayer wound-healing assay. As shown in Fig. 5E, sarcosine caused a considerable increase in the migration of prostate cells, with the lowest stimulatory effects identified for PNT1A cells. Moreover, the depletion of Dnmt by 5-Aza followed by sarcosine treatment resulted in pronounced inhibition of sarcosine-induced migration (quantified in Fig. 5F). Likewise, 5-Aza demonstrated inhibitory activity towards sarcosine-induced clonogenic growth (Fig. 5G). Overall, our data confirmed that the phenotypic properties of prostate cells can be markedly influenced by cross talk between epigenetics and sarcosine metabolism.
Spatial mapping reveals concurrence in content of sarcosine, SAMe and Dnmt1 in histologically confirmed malignant zones
To investigate the importance of sarcosine and the SAMe-Dnmt1 axis in prostate tissues, we performed a spatial mapping of paraffin-embedded specimens using MSI and IHC. For each case, a 10-µm section taken immediately adjacent to the H&E section was used. As shown in Fig. 6A, analyses were performed on tissue samples with histologically demarcated areas of normal and malignant tissue (yellow dashed line).
We found that malignant zones had considerably higher nuclear expression of Dnmt1 compared with benign tissue. This finding agrees with studies that identified increasing Dnmt1 levels during PCa progression and development of a castration-resistant phenotype (Chen et al., 2010;Patra et al., 2002;Valdez et al., 2013). Catalytic transfer of methyl groups by Dnmts is closely dependent on the methyl-donor SAMe. Therefore, using pixel-to-pixel MSI spectral data (representative spectra and total ion chromatogram are shown in Fig. S3 and Fig. S4A,B), 2D molecular images were constructed to visualize the spatial distribution of sarcosine, SAMe and SAH in prostate tissues. We identified substantial differences in sarcosine and SAMe levels between benign and malignant tissues. As an upregulation of Dnmt1 and an increased production of SAMe were partially observed in adjacent benign tissues (Fig. 6B), spatial analyses outline a new hypothesis that SAMe could possibly induce changes in the prostate tumour microenvironment. We are eager to further investigate this aspect.
Discussion
In the present study, we demonstrated that the incubation of prostate cells with sarcosine, a PCa oncometabolite, leads to concurrent increase in formation of SAMe, which caused increased prostate cellspecific methylation of CpG islands as a result of upregulation of Dnmts, particularly Dnmt1. Our results represent the first evidence that the sarcosine pathway is involved in regulating the prostate epigenetic landscape, highlighting that sarcosine is an important factor underlying PCa pathogenesis.
In cancer, metabolic rewiring modifies the epigenetic landscape via modulating the activities of a wide spectrum of biomolecules, including DNA-modifying enzymes, miRNA and lncRNA. A number of metabolic alterations have been previously linked to epigenetic aberrations. These aberrations include inactivating mutations of fumarate hydratase, succinate dehydrogenase, isocitrate dehydrogenase and others (Janeway et al., 2011;Sciacovelli et al., 2016). These events have a common feature, the incremental accumulation of respective oncometabolites, which drives tumorigenesis via epigenetic reprogramming.
Metabolism of sarcosine is vital for PCa. It has been previously reported that the sarcosine-forming enzyme GNMT is upregulated in localized and metastatic PCa relative to benign tissue and that high GNMT cytoplasmic expression is associated with lower disease-free survival rates of patients with PCa (Khan et al., 2013; Song et al., 2011). In contrast, the sarcosine N-demethylating enzymes SARDH and PIPOX were downregulated in PCa tissues (Khan et al., 2013). These events lead to a mechanistic accumulation of sarcosine (Sreekumar et al., 2009), which has been found to potentiate PCa progression by stimulating proliferation, invasion and intravasation.
Sarcosine is a known ligand of the proton-coupled amino acid transporters, by which it can be efficiently taken up (Piert et al., 2017). This finding agrees with our data showing fast intracellular accumulation of sarcosine being the highest for metastatic (LNCaP) cells and the lowest for nonmalignant (PNT1A) cells. Although we have not examined the route of uptake, a significant increase in intracellular sarcosine provides clear evidence that prostate cells are capable of taking up sarcosine from the external environment.
The sarcosine N-demethylation pathway is in close proximity to the methionine cycle, in which SAMe is converted from methionine (Wilson et al., 2009). Notably, the intracellular accumulation of sarcosine was accompanied by the stimulation of SAMe production. As discussed vide supra, metastatic cells had the highest rate of sarcosine uptake. Interestingly, this phenomenon was associated with a significantly higher increase in SAMe and CMP compared with the rest of the tested cells. Accordingly, it has been previously demonstrated that sarcosine stimulatory activity towards a more invasive phenotype is not only the highest for metastatic cells but also acts in cells derived from benign and malignant prostate tissues through upregulation of genes involved in cell cycle and mitosis, while downregulating genes driving apoptosis (Heger et al., 2016a;Khan et al., 2013;Merlos Rodrigo et al., 2017).
A direct stimulatory effect of sarcosine on SAMe can explain the increase in the aggressiveness of prostate cells. An excess supply of SAMe might contribute to DNA hypermethylation and inappropriate gene silencing. Likewise, SAMe is required for the de novo biosynthesis of polyamines, which are essential for cancer cell growth (Nowotarski et al., 2013). Surprisingly, in all tested prostate cells, we identified a marked stimulatory effect of sarcosine towards DNA methylation without any significant effect on intracellular Spd and Spm.
It is worth noting that sarcosine did not cause an increase in the DNA methylation of the tested cells of nonprostate origin; thus, the described phenomenon is most likely predominant in prostate cells. Notably, it has been found that human epidermal growth factor-2 (HER-2)-positive breast tumours display upregulation of sarcosine metabolism-related enzymes (GNMT, SARDH, PIPOX) (Yoon et al., 2014). Thus, an investigation of the linkage between sarcosine and DNA methylation status in these cells might be performed to elucidate the importance of sarcosine for HER-2-positive breast tumours.
We anticipate that sarcosine-induced prostate-specific regulation of DNA methylation could be attributed to the high intracellular zinc levels unique to prostate cells. In 1985, Wallwork and Duerre discovered that zinc deficiency in vivo is reflected in depressed rates of global DNA methylation (Wallwork and Duerre, 1985). Interestingly, they also identified that zinc deficiency does not impair the synthesis of SAMe and SAH, but the methyl group of SAMe turned over substantially more slowly in zinc-deficient rats. Despite this finding, it should also be noted that PCa cells can revert the zinc accumulation phenotype to reach higher levels of citric acid cycle activity (Costello et al., 2004). This can presumably explain the substantially lower increments of DNA methylation in malignant and metastatic cells compared with their nonmalignant counterparts, as demonstrated in Fig. 3C.
In subsequent experiments, we analysed the promoter methylation of several genes pivotal for prostate tumorigenesis. Importantly, sarcosine induced a higher density of promoter methylation of CCND2 and CDKN2B, which can both inhibit the cell cycle G₁/S transition. These findings are in line with our previous study showing depressed transcriptional activity of CCND2 and CDKN2B in prostate cells due to sarcosine exposure (Merlos Rodrigo et al., 2017). In addition, we also identified significantly denser methylation of the AR promoter. Although we did not analyse whether this can result in an AR transcriptional block, this phenomenon could be responsible for the loss of AR expression and the consequent development of tumour hormonal independence (Kinoshita et al., 2000). Based on our pilot data, it is indisputable that genome-wide methylation analyses of prostate tissues varying in sarcosine levels by cutting-edge techniques, such as pyrosequencing, could contribute substantially to understanding the role of sarcosine in PCa pathogenesis.
The aberrant expression of Dnmts and disruption of DNA methylation patterns associated with various types of cancers is well known. For example, Valdez et al. (2013) revealed that in PCa, Dnmt1 nuclear staining significantly increased from normal to metastatic cancer. As the only way of methyl group transfer from SAMe to DNA is through catalytic activity of Dnmts, it is not surprising that sarcosine upregulates expression of these enzymes. Unexpectedly, the highest upregulation was found for Dnmt1, whose main role is to maintain the original pattern of DNA methylation in a cell lineage. However, Dnmt1 also shows capabilities for de novo methylation of DNA through functional cooperation with Dnmt3a (Fatemi et al., 2002). The upregulation of Dnmt1 has been found to cause a more aggressive phenotype of PCa and a transition to hormone resistance (Chen et al., 2010;Valdez et al., 2013). Unfortunately, none of these studies attempted to examine the levels of sarcosine in PCa tissues. Therefore, it is difficult to compare our data with published reports. Nevertheless, due to pronounced stimulatory effects of sarcosine on the expression of Dnmts and on the migration and clonal efficiency of prostate cells, we suggest the intracellular accumulation of sarcosine as a presumable factor influencing PCa aggressiveness. This suggestion is in line with studies delineating sarcosine as a metabolite that is highly increased during PCa progression to metastasis and elevated in invasive PCa cell lines (Khan et al., 2013;Sreekumar et al., 2009).
Interestingly, sarcosine can partially revert the inhibitory activity of 5-Aza, as evidenced by both the mRNA and protein expression of Dnmts. While we cannot offer a suitable explanation for this phenomenon, our data clearly demonstrated that the partial reversal of Dnmt expression does not enable sarcosine to stimulate prostate cell migration and clonogenicity, as found for sarcosine treatments without 5-Aza. Since 5-Aza is a known DNA-damaging agent inducing the formation of double-strand breaks and G₁/S arrest (Kiziltepe et al., 2007), our results suggest that sarcosine is protecting the exposed cells by inducing Dnmt1, a methyltransferase that is crucial for genome protection and epigenetic control. As sarcosine can trigger the development of chemoresistance to various anticancer agents (Merlos Rodrigo et al., 2017), it can be expected that a combination therapy utilizing Dnmt inhibitors and cytostatic drugs could be beneficial for the treatment of aggressive PCa. Indeed, a number of studies describe the improvement of the anticancer effects of distinct agents by Dnmt inhibitors in castration-resistant PCa (Festuccia et al., 2009; Sonpavde et al., 2008).
Using two in situ MSI approaches, we attempted to verify our findings in prostate tissue specimens. Our data provided clear evidence that malignant tissue contains unprecedentedly higher levels of sarcosine, SAMe and Dnmt1. Furthermore, we identified quite strict colocalization. Despite a limited number of samples, the results highlight the presumable importance of these molecules for PCa pathogenesis. Future work should focus on analysing a larger and more heterogeneous cohort of prostate tissue specimens. Beyond a deeper understanding of PCa pathogenesis, the validation of our pioneer findings could lead to the development of a novel class of PCa therapeutic agents that selectively target sarcosine N-demethylating enzymes to decrease their impact on the SAMe-Dnmt epigenetic axis.
Conclusions
In conclusion, as several metabolic alterations have already been connected with an altered epigenetic landscape, the results of the present study suggest a novel link between the sarcosine metabolic pathway and SAMe-Dnmt-mediated epigenetic modifications. Our study provides a solid base for further investigation focused on finding the specific gene promoters affected by the sarcosine pathway and the consequent effect on their transcriptional activity. Moreover, our study suggests that the sarcosine-SAMe-Dnmt1 axis could be involved in the transition of PCa to hormone resistance. We are eager to continue to study this aspect. Despite analysing a limited number of clinical specimens, our pilot data underpin a role for the studied axis in PCa and the importance of large cohort-based studies. Unfortunately, due to the retrospective nature of our study, we were not able to collect urinary specimens. However, we anticipate that a comparative analysis of sarcosine in fresh urine specimens and tissues, together with an examination of the SAMe-Dnmt1 axis, could be helpful in resolving the uncertain diagnostic potential of urinary sarcosine.
Supporting information
Fig. S1. Dose-response curve of 5-Aza for all tested prostate cell lines. The values are expressed as the mean of five independent replicates (n = 5).
Fig. S2. Intracellular amount of spermine (Spm) and spermidine (Spd) in prostate cells incubated with sarcosine.
Fig. S3. Representative average positive ion mode MALDI-TOF mass spectrum derived from pixel-to-pixel construction of 2D molecular images showing both SAMe and SAH.
Table S1. Sequences of the primers used for BSP.
Table S2. Sequences of the primers used for qRT-PCR and primer validation in LNCaP cells.
Table S3. HPLC-FLD quantitation of intracellular sarcosine in prostate cells.
"Biology"
] |
Double Tropopauses and the Tropical Belt Connected to ENSO
Abstract A detailed analysis of double tropopause (DT) occurrences requires vertically well resolved, accurate, and globally distributed information on the troposphere‐stratosphere transition zone. Here, we use radio occultation observations from 2001 to 2018 with such properties. We establish a connection between El Niño‐Southern Oscillation (ENSO) phases and the distribution of DTs by analyzing the global and seasonal DT characteristics. The seasonal distribution of DTs reveals several hotspot locations, such as near the subtropical jet stream and over high mountain ranges, where DTs occur particularly often. In this study, we detect a higher number of DTs during the cold La Niña state while warmer El Niño events result in lower DT rates, affecting the structure of the tropopause region. Close to the Niño 3 region, this relates to a much lower first lapse rate tropopause altitude during La Niña and corresponds to an apparent narrowing of the tropical belt there.
Introduction
The tropopause, the transition zone between the troposphere and the stratosphere, is of substantial importance to the exchange between these two atmospheric regimes (Holton et al., 1995). Depending on season and latitude, the tropopause is typically found at around 16 km in the tropics and at around 9 km at high latitudes (e.g., Schmidt et al., 2005;Seidel & Randel, 2006). At midlatitudes, the higher tropical tropopause domain may overlap the lower high-latitude tropopause domain and form double tropopauses (DTs), either because the high-latitude tropopause domain extends equatorward (Peevey et al., 2014;Wang & Polvani, 2011) or because the tropical tropopause domain extends poleward (Homeyer et al., 2010;Castanheira et al., 2012;Liu & Barnes, 2018;Pan et al., 2009;Randel et al., 2007). The more complex structure and variability of the upper troposphere and lower stratosphere (UTLS) region at midlatitudes, related to this overlap and the existence of DTs there, is key to understand the stratosphere-troposphere exchange (e.g., Boothe & Homeyer, 2017).
DT events are found especially frequently at midlatitudes in both hemispheres, in storm track regions, on the poleward side of the subtropical jet stream (STJ), more frequently during winter (Bischoff et al., 2007; Schmidt et al., 2006; Seidel & Randel, 2006), and over land (Schmidt et al., 2006). The STJ is of special interest not only because it marks a region where midlatitudinal and tropical air meet but also because the STJs are linked to Rossby wave breaking events, which in turn are associated with DTs and stratosphere-troposphere exchange.
DTs have also been found in association with mountain gravity waves (Schmidt et al., 2006), cyclogenesis (Añel et al., 2008) and the upward vertical motion in warm conveyor belts in strong cyclonic circulation systems (Peevey et al., 2014;Wang & Polvani, 2011). DTs are linked to the strength of the upward branch of the Brewer-Dobson circulation (Castanheira et al., 2012) and are detected in cloud-top inversion layers (Biondi et al., 2012).
Previous studies (e.g., Reid & Gage, 1985;Rieckh et al., 2014) have shown a strong correlation between the altitude of the first lapse rate tropopause (LRT) and the El Niño-Southern Oscillation (ENSO). Castanheira et al. (2012) showed a clear signal of both the quasi-biennial oscillation (QBO) and the ENSO in DT frequencies from reanalyzed data (ERA-Interim).
However, a systematic analysis of globally distributed DTs from observations and their link to ENSO phases has not yet been conducted. Previous studies on multiple tropopauses have mainly used radiosonde or reanalysis data. Due to the sparse sampling of radiosondes, a global view on the characteristics of DTs is hardly possible. Data with lower vertical resolution or smoother temperature profiles tend to smear out DT features and underestimate DT frequencies (Biondi et al., 2012;Manney et al., 2017;Vergados et al., 2014;Xian et al., 2019). Since the early 2000s, however, globally distributed measurements from radio occultation (RO) are available. This limb sounding technique provides temperature profiles with high accuracy and vertical resolution in the UTLS for applications in atmospheric research and climate (Anthes, 2011;Steiner et al., 2011). Some studies have used a subset of available RO data to study DTs (e.g. Lakkis & Canziani, 2009;Randel et al., 2007;Schmidt et al., 2006;Xu et al., 2014).
In this study, we take advantage of the precise DT detection with RO to present the global and seasonal characteristics of DTs from observations. For the first time, we analyze the relation between DTs and ENSO events and its implication on the tropopause structure based on the recent full multiyear RO record, which now covers 11 ENSO events. This larger sample substantially improves our ability to investigate these relations.
Data
Due to their high vertical resolution and global distribution, RO satellite measurements are well suited for investigating the thermal tropopause. RO measurements from different missions can be combined and used in continuation of each other (Schreiner et al., 2007). We used the Wegener Center OPSv5.6 data set (Angerer et al., 2017), a compilation of most RO satellite missions to date, enabling us to study DTs over a longer time period than previous studies. In this study, we use temperature profiles from September 2001 to December 2018, interpolated to an evenly spaced, fixed vertical grid with 100 m between the grid points.
El Niño and La Niña events are identified using the Oceanic Niño Index (ONI), a 3-month (±1 month) running mean of monthly mean sea surface temperature anomalies in the Niño 3.4 region (5°S to 5°N and 170°W to 120°W). An event is called "El Niño" or "La Niña" when five consecutive months of the ONI are all above 0.5 K or all below −0.5 K, respectively.
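As an illustration, the event criterion can be sketched in a few lines; the classify_enso helper and the ONI series below are hypothetical, assuming monthly ONI values are already at hand.

```python
# Hedged sketch of the ONI-based classification: an El Niño (La Niña) is
# declared when five consecutive monthly ONI values all exceed +0.5 K
# (fall below -0.5 K). The ONI series below is hypothetical.
import numpy as np

def classify_enso(oni: np.ndarray, threshold: float = 0.5, run: int = 5):
    """Label each month 'El Nino', 'La Nina', or 'neutral'."""
    labels = np.array(["neutral"] * len(oni), dtype=object)
    for i in range(len(oni) - run + 1):
        window = oni[i:i + run]
        if np.all(window > threshold):
            labels[i:i + run] = "El Nino"
        elif np.all(window < -threshold):
            labels[i:i + run] = "La Nina"
    return labels

oni = np.array([0.1, 0.6, 0.7, 0.8, 0.9, 0.6, 0.2,
                -0.6, -0.7, -0.8, -0.9, -0.6])
print(classify_enso(oni))
```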
In addition, we indicate the location of the STJ using the maximum mean horizontal wind speed between 200 and 300 hPa within 5° latitude × 5° longitude grid cells from European Centre for Medium-Range Weather Forecasts (ECMWF) 6-hourly analysis wind fields.
Methods
The standard definition of the World Meteorological Organization (WMO, 1957) was used to calculate the thermal LRTs: "(a) The first tropopause is defined as the lowest level at which the lapse rate decreases to 2°C/km or less, provided also the average lapse rate between this level and all higher levels within 2 km does not exceed 2°C/km. (b) If above the first tropopause the average lapse rate between any level and all higher levels within 1 km exceeds 3°C/km, then a second tropopause is defined by the same criterion as under (a). This tropopause may be either within or above the 1 km layer." This algorithm was applied to each RO temperature profile on an evenly spaced 100-m grid. The search started from below, using a lower altitude limit, z_start (in meters), that depends on latitude φ:

z_start(φ) = 6,250 m + 1,250 m × cos φ. (1)

This limit was adapted from Son et al. (2011) and adjusted downwards to include more LRTs at midlatitudes. The search was terminated at 25 km.
A candidate LRT altitude was first found on the evenly spaced 100-m vertical grid, according to the thresholds in the WMO (1957) definition. Due to the implementation, this point was always found at an altitude above the threshold. To avoid a positive bias, the altitude was fine tuned, by linear interpolation, down to the altitude where the lapse rate equals Γ = 2°C km −1 . This altitude was selected to represent the LRT.
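A condensed sketch of this search is given below, assuming temperature profiles on an even 100-m grid; it implements only criterion (a) together with Equation (1), omitting the second-tropopause test. Function and variable names are illustrative, not taken from the processing code used in the study.

```python
# Hedged sketch of the first lapse-rate-tropopause (LRT) search.
import numpy as np

def first_lrt(z: np.ndarray, T: np.ndarray, lat_deg: float,
              dz: float = 100.0, z_top: float = 25_000.0) -> float:
    """z, T: altitude (m) and temperature (°C) on an evenly spaced dz grid."""
    z_start = 6_250.0 + 1_250.0 * np.cos(np.radians(lat_deg))  # Equation (1)
    gamma = -(np.diff(T) / dz) * 1_000.0  # local lapse rate in °C/km
    n2km = int(2_000.0 / dz)              # number of levels within 2 km
    for i in range(len(gamma)):
        if z[i] < z_start or z[i] > z_top or gamma[i] > 2.0:
            continue
        # WMO (1957) criterion (a): the average lapse rate between this
        # level and every higher level within 2 km must not exceed 2 °C/km.
        if all(
            -(T[i + k] - T[i]) / (z[i + k] - z[i]) * 1_000.0 <= 2.0
            for k in range(1, n2km + 1) if i + k < len(z)
        ):
            return float(z[i])            # candidate first-LRT altitude
    return float("nan")                   # no tropopause found below z_top
```

The altitude returned here would then be fine-tuned by linear interpolation to the level where the lapse rate equals exactly 2°C/km, as described above.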
The number of first LRTs, N1, and the number of second LRTs, N2, from all the available temperature profiles were counted, and the corresponding DT percentage was calculated using

DT [%] = (N2 / N1) × 100. (2)

We calculated these percentages within 5° × 5° grid cells and within each 5° zonal band or each 5° meridional band. Finally, monthly mean DT anomalies were created relative to the mean seasonal cycle from 2007 to 2018.
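The bookkeeping in Equation (2) and the anomaly construction reduce to a few array operations; the following sketch uses hypothetical counts and a hypothetical helper name.

```python
# Hedged sketch of the DT-percentage bookkeeping and monthly anomalies.
import numpy as np

# Counts of first (N1) and second (N2) LRTs in three hypothetical cells:
n1 = np.array([420, 455, 431])
n2 = np.array([63, 137, 52])
dt_percent = n2 / n1 * 100.0          # Equation (2)
print(np.round(dt_percent, 1))        # -> [15.  30.1 12.1]

def monthly_dt_anomalies(dt_monthly: np.ndarray) -> np.ndarray:
    """dt_monthly: shape (n_years, 12) of monthly DT percentages; returns
    anomalies relative to the mean seasonal cycle over the given years."""
    seasonal_cycle = dt_monthly.mean(axis=0)  # mean per calendar month
    return dt_monthly - seasonal_cycle        # broadcast over the years
```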
Global DT Occurrences
The main characteristics of the global DT distribution can be deduced from Figure 1. The figure reaffirms previous studies (e.g., Peevey et al., 2012, 2014; Randel et al., 2007; Schmidt et al., 2006; Wang & Polvani, 2011; Xu et al., 2014) and shows that the features are consistently revealed in the multiyear RO record. The seasonal features have been described in greater detail in Peevey et al. (2012). DTs are mainly found at midlatitudes along the STJ and are more frequent during winter and over land. The strong belt of DTs in the STJ regions gets weaker during summer, in both hemispheres, due to the weaker eddy activity as the summer STJ slows down. The STJs are primarily radiatively driven and move equatorward during winter (e.g., Maher et al., 2020; Manney et al., 2014), which is also the case for the location of the DT belt during December-January-February (DJF) (Figure 1a) and June-July-August (JJA) (Figure 1c).
The enhanced DT percentages in the tropical regions for all seasons may be explained by cloud tops (Biondi et al., 2012) or gravity waves in the stratosphere (e.g., Hoffmann et al., 2013 and references therein), both caused by deep convection in the tropics. Additionally, some of the DTs detected in the tropics are related to changing QBO phases (e.g., Kedzierski et al., 2016). We also detect these QBO-related DTs because we use a rather high upper altitude limit of 25 km when finding the LRTs, which is several kilometers above the mean LRT altitude, well into the QBO region in the stratosphere.
Furthermore, Figure 1 reveals locations where DTs are found particularly often. In the Northern Hemisphere (NH), such hotspots are located east of the Rocky Mountains, over the Himalayas, and over Japan. In the Southern Hemisphere (SH), they are found over and east of the southern Andes and over southeast Australia. All these DT hotspots are found on the STJ band, on the lee side of high mountains. They get weaker during summer, supporting the source to be mountain gravity waves related to the STJ (Schmidt et al., 2006). The enhancement east of Japan more or less disappears during JJA but is globally the strongest hotspot during DJF. The hotspot leeward of the Southern Andes is, as an exception, prominent for all seasons and is globally the strongest hotspot during JJA. The area is known for its high number of occurrences of gravity waves (Ern et al., 2018;Hoffmann et al., 2013;Sato et al., 2012).
To the west of the Andes, there is a DT tail, pointing toward the tropics, that only shows up during DJF ( Figure 1a). As the main STJ flow is eastward, the tail is on the windward side of the Andes and therefore requires an alternative explanation. There may also be a similar, somewhat weaker, feature during JJA (Figure 1c) in the northeastern Pacific, west of the Rocky Mountains, also pointing toward the tropics.
Annual Patterns of DTs
The seasonal DT development is shown in Figure 2.
The prominent straight line in Figure 2b, around 75°W, is attributed to the hotspot leeward of the southern Andes that shows up in every season in Figure 1. In contrast, the dip just west of 75°W reveals a meridional band with rarely any DTs, just to the west of the Andes, from the beginning to the end of the time series.
The irregularly recurring patterns further west in Figure 2b, between 150°W and 90°W, resemble ENSO time patterns. The peaks mainly show up between 30°S and 10°S and again between 10°N and 30°N, which is just north and south of the Niño 3 region (5°N to 5°S, 150°W to 90°W). Both the locations and the pattern suggest a link to the ENSO. In the following, we unravel more details and possible explanations.
ENSO and DT Structure
The relation between ENSO and DT occurrences becomes evident in Figure 2c. First, it reveals that the warmer El Niño events result in lower DT rates. The indication of an ENSO connection is especially prominent for the El Niño events in 2009/2010 and 2015/2016. The ONI peak in 2006/2007 is not called an El Niño event because the values did not last long enough above the 0.5 K threshold, but nevertheless, the rate of DTs is also reduced. Second, the colder La Niña events lead to more DTs, especially between 150°W and 90°W.
For further investigation of the DT and ENSO relation, we limit the considered DTs to the DJF seasons only and examine El Niño and La Niña periods separately. This minimizes the seasonal influence on the difference between the two ENSO states. ENSO events occur more frequently during DJF, with 16 La Niña and 16 El Niño months detected in the considered time period. Figure 3 shows the global spatial distribution of DTs split into DJF La Niña events ( Figure 3a) and DJF El Niño events (Figure 3b). It appears that the DT tail to the west of the Andes (cf. Figure 1a) mainly originates from the DJF La Niña time periods, as it is much weaker during DJF El Niño and the other time periods (not shown). Figure 3c shows the difference between the DT percentages during DJF La Niña and DJF El Niño, that is, the difference between Figures 3a and 3b. It uncovers that the tail is not only a SH feature but also shows up on the NH, disguised (in Figure 3a) by the high frequency of DTs in the latitudinal band around the NH STJ during DJF.
The increased DT occurrences in the eastern Pacific region during La Niña compared to El Niño periods are caused by distinct atmospheric circulation regimes. During La Niña, the upwelling part of the Walker circulation is located over the Maritime continent, while during El Niño the main upwelling moves to the central Pacific (see, e.g., Gettelman et al., 2001;Lau & Yang, 2015). An analysis of atmospheric parameters related to cyclonic activity (divergence and vorticity; not shown here) confirmed that during La Niña (El Niño), cyclonic (anticyclonic) activity is dominant in the tropopause region above the eastern Pacific. According to Randel et al. (2007), upper tropospheric level cyclonic vorticity is related to an enhancement of DT occurrence and lower LRT heights compared to anticyclonic vorticity at the same altitude level. This is in good agreement with the observed differences in Figure 3c.
Figure 3 additionally suggests that during DJF La Niña, more DTs occur at their usual locations than during DJF El Niño. The regions just west of the Andes hotspot and east of the Rocky Mountains hotspot especially stand out. The enhancement east of Japan, on the other hand, seems to be unaffected by ENSO events. There is a southward shift around the Himalayan hotspot during DJF El Niño, also seen between the seasons (Figure 1), related to jet stream shifts (Maher et al., 2020).
Narrowing the Tropical Belt During La Niña
We investigate the impact of ENSO conditions on the tropopause structure in Figure 4. Figures 4a and 4b show the mean of all the first LRT altitudes during DJF La Niña and DJF El Niño, respectively. For most longitudes, the mean first LRTs in the tropics are found at a slightly lower altitude during DJF La Niña (Figure 4a) than during DJF El Niño (Figure 4b). This is in agreement with Randel et al. (2007) (see above). Poleward of the Niño 3 region, they are exceptionally low, corresponding to a remarkable narrowing of the tropical belt around those longitudes. Figure 4i shows the altitude of all the first (blue) and second (orange) LRTs within a 10° meridional band at the Niño 3 region (leftmost dashed meridional band in Figure 4a), at their corresponding latitudes during La Niña. Additional tropopauses appear below the tropical tropopause domain (approximately 20°S to 20°N) from what seems to be an equatorward expansion of the high-latitude tropopause domain. The tropical LRT domain is still present, as a mix of first and second LRTs, but the mean first LRT appears much lower than usual in the specified region for DJF La Niña. The corresponding temperature profile cluster plots in Figures 4c-4h (DJF La Niña) and Figures 4o-4t (DJF El Niño) support these observations. This becomes especially clear in Figure 4d, where the temperature profiles start off with a steady lapse rate until they split into two "branches" at the first LRT and break into typical, although less sharply defined, tropical temperature profiles (cf. Figure 4g). For comparison, the regular tropopause characteristics are depicted for two additional meridional bands in Figure 4.
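The LRT statistics discussed here rest on lapse-rate tropopause detection. The sketch below implements the essence of the WMO (1957) criterion, with a simplified re-arming rule for identifying a second tropopause; the grid, thresholds, and toy profile are assumptions of the example, and the study's actual detection settings may differ.

```python
import numpy as np

def lapse_rate_tropopauses(z_km, T_K, thresh=2.0, depth_km=2.0, reset=3.0):
    """WMO-style lapse-rate tropopauses: the lowest level where the lapse
    rate -dT/dz falls to <= 2 K/km and stays <= 2 K/km on average over the
    2 km above. A second tropopause (i.e., a DT) is only searched for after
    the lapse rate has exceeded 3 K/km again above the first one."""
    z_km, T_K = np.asarray(z_km, float), np.asarray(T_K, float)
    lr = -np.diff(T_K) / np.diff(z_km)           # lapse rate per layer
    zm = 0.5 * (z_km[:-1] + z_km[1:])            # layer midpoints
    tropopauses, armed = [], True
    for i in range(len(lr)):
        if not armed:
            armed = lr[i] > reset                # re-arm above a steep layer
            continue
        if lr[i] <= thresh:
            above = (zm > zm[i]) & (zm <= zm[i] + depth_km)
            if above.any() and lr[above].mean() <= thresh:
                tropopauses.append(zm[i])
                armed = False
    return tropopauses

# Toy profile: tropopause near 11 km, a steep layer at 13-16 km, then a DT.
z = np.linspace(0, 25, 501)
T = 288.0 - 6.5 * np.clip(z, 0, 11) - 4.0 * np.clip(z - 13, 0, 3)
print(lapse_rate_tropopauses(z, T))   # ~[11, 16]: a double tropopause
```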
Similar to the DT frequency as described in the previous section, the altitude of the first LRT is also influenced by the circulation patterns. As cyclonic vorticity leads to lower first LRT altitudes (Randel et al., 2007), the shifting cyclonic circulation patterns during La Niña, induced by the change in the Walker circulation, are therefore considered to be connected to the observed tropical belt narrowing.
Conclusions
To summarize, high vertical resolution and global coverage make RO satellite measurements well suited for studying DTs. The measurements are especially accurate in the altitude range of the tropopause. We exploited these characteristics to investigate for the first time the relation between ENSO and the occurrence of DTs in global multiyear RO observations.
We demonstrated that findings from previous studies are consistently reproduced and presented a seasonally and regionally resolved picture of various DT hotspots. DTs are mainly triggered along the STJ in winter, especially prominently over the Rocky Mountains, the Himalayas, and the Andes.
Temporally resolved DT patterns revealed a connection between ENSO and DT occurrences, including an increase in DTs to the west of the Andes, poleward of the Niño 3 region (5°N to 5°S, 150°W to 90°W). We found that the strength and location of this increase are evidently connected to the ENSO. Colder La Niña events lead to a higher number of DTs, while warmer El Niño events result in lower DT rates. This difference in DT occurrences is considered to be caused by the changing atmospheric circulation regimes of the Walker circulation between the ENSO phases.
During La Niña, the higher number of DTs detected at the Niño 3 region seems to stem from an equatorward expansion of the high-latitude tropopause domain. This leads to a mix of first and second LRTs at the edge of the tropical belt and a mean first LRT altitude that is much lower than usual, corresponding to an apparent narrowing of the tropical belt there.
Knowledge of the detailed structure of the UTLS region is of great relevance for the analysis of the stratosphere-troposphere exchange. Identifying regions of increased DT occurrences points to a possible enhanced exchange. This has implications for the composition of the atmosphere, influencing, for example, the radiative balance and the dynamics of the atmosphere (see, e.g., Stohl et al., 2003 and references therein). The enhanced DT frequencies and lower first LRTs suggest that these processes are of increased relevance in the tropical Eastern Pacific during La Niña.
Recent studies have discussed the widening of the tropical belt (e.g., Staten et al., 2018 and references therein). This widening is difficult to determine due to large internal variability. The dependence of the tropical belt width on ENSO presented in this study might be of relevance to future studies on this topic.
Compared to neutral ENSO phases, ENSO events substantially alter the UTLS structure at midlatitudes and the tropics. For a detailed analysis, vertically high resolved information with global coverage is needed. Our results show that RO observations are able to provide such analysis and contribute to gaining improved knowledge of the transition between troposphere and stratosphere and the variability of the tropical belt.
"Environmental Science",
"Physics"
] |
Approximate Relational Hoare Logic for Continuous Random Samplings
Approximate relational Hoare logic (apRHL) is a logic for formal verification of the differential privacy of databases written in the programming language pWHILE. Strictly speaking, however, this logic deals only with discrete random samplings. In this paper, we define the graded relational lifting of the subprobabilistic variant of the Giry monad, which describes differential privacy. We extend the logic apRHL with this graded lifting to deal with continuous random samplings, and we give a generic method for deriving apRHL proof rules for continuous random samplings.
Introduction
Differential privacy is a definition of privacy for randomized databases proposed by Dwork, McSherry, Nissim and Smith [7]. A randomized database satisfies ε-differential privacy (written ε-differentially private) if, for any two adjacent databases, the difference between their output probability distributions is bounded by the privacy strength ε. Differential privacy guarantees high secrecy against database attacks regardless of the attackers' background knowledge, and it has composition laws with which we can calculate the privacy strength of a composite database from the privacy strengths of its components.
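As a concrete illustration of the definition, the following Python sketch brute-forces the (ε, δ)-DP inequality P₁(S) ≤ e^ε·P₂(S) + δ over all events of a small finite outcome space. The dictionary representation of distributions and the toy mechanism are assumptions of the example only.

```python
import math
from itertools import chain, combinations

def is_dp(p1, p2, eps, delta=0.0):
    """Brute-force check of the (eps, delta)-DP inequality
    P1(S) <= exp(eps) * P2(S) + delta over every event S of a small finite
    outcome space. For the symmetric condition, call it again with the
    arguments swapped."""
    outcomes = sorted(set(p1) | set(p2))
    events = chain.from_iterable(combinations(outcomes, r)
                                 for r in range(len(outcomes) + 1))
    for S in events:
        m1 = sum(p1.get(o, 0.0) for o in S)
        m2 = sum(p2.get(o, 0.0) for o in S)
        if m1 > math.exp(eps) * m2 + delta + 1e-12:
            return False
    return True

# Output distributions of a toy mechanism on two adjacent databases:
p_a = {0: 0.6, 1: 0.4}
p_b = {0: 0.4, 1: 0.6}
print(is_dp(p_a, p_b, 0.5) and is_dp(p_b, p_a, 0.5))  # True: 0.6/0.4 <= e^0.5
```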
Approximate relational Hoare logic (apRHL) [2,16] is a probabilistic variant of the relational Hoare logic [4] for formal verification of the differential privacy of databases written in the programming language pWHILE. In the logic apRHL, a parametric relational lifting, which relates probability distributions, plays a central role in describing differential privacy in the framework of verification. This parametric lifting is an extension of the relational lifting [10, Section 3] that captures probabilistic bisimilarity of Markov chains [13] (see also [6, Lemma 4]). The concept of differential privacy is described in the category of binary relations and mappings between them, and verified by the logic apRHL.
Strictly speaking, however, apRHL deals only with random samplings from discrete distributions, while the algorithms in many actual studies of differential privacy are modelled with continuous distributions, such as the Laplacian distribution over the real line. Therefore it is desirable to extend apRHL to deal with continuous random samplings.
Contributions
The main contributions of this paper are the following two points: • We define a graded relational lifting of the sub-Giry monad describing differential privacy for continuous random samplings.
This graded relational lifting is developed without witness distributions of probabilistic coupling, and hence is constructed in a different way from the coupling-based parametric lifting of relations given in the studies of apRHL [1,2,16].
In the continuous apRHL, we mainly extend the proof rules for relation compositions and the frame rule. We also develop a generic method to construct proof rules for random samplings. By importing the new rules added to apRHL+ in [1], we give a formal proof of the differential privacy of the above-threshold algorithm for real-valued queries [8, Section 3.6].
Preliminaries
We denote by Meas the category of measurable spaces and measurable functions between them, and by Set the category of all sets and functions. The category Meas is complete and cocomplete, and the forgetful functor U : Meas → Set preserves products and coproducts. We also denote by ωCPO⊥ the category of ω-complete partial orders with a least element and continuous functions.
A Category of Relations between Measurable Spaces
We introduce the category BRel(Meas) of binary relations between measurable spaces as follows: • An object is a triple (X, Y, Φ) consisting of measurable spaces X and Y and a relation Φ between X and Y (i.e., Φ ⊆ UX × UY). We remark that Φ does not need to be a measurable subset of the product space X × Y.
When we write an object (X, Y, Φ) in BRel(Meas), we omit the underlying spaces X and Y if they are obvious from the context. We write p for the forgetful functor p : BRel(Meas) → Meas × Meas which extracts the underlying spaces: (X, Y, Φ) → (X, Y). We call an endofunctor F on BRel(Meas) a relational lifting of an endofunctor E on Meas if (E × E)p = pF.
The Sub-Giry Monad The Giry monad on Meas is introduced in [9] to give a categorical approach to probability theory; each arrow X → Y in the Kleisli category of the Giry monad bijectively corresponds to a probabilistic transition from X to Y , and the Chapman-Kolmogorov equation corresponds to the associativity law of the Giry monad.
We recall the sub-probabilistic variant of the Giry monad, which we call the sub-Giry monad (see also [17,Section 4]): • For any measurable space (X, Σ X ), the measurable space (GX, Σ GX ) is defined as follows: the underlying set GX is the set of subprobability measures over X, and the σ-algebra Σ GX is the coarsest one that makes the evaluation function ev A : GX → [0, 1] (mapping ν to ν(A)) measurable for each A ∈ Σ X .
• For each measurable function f : X → Y, the map Gf : GX → GY is given by the pushforward of measures, (Gf)(ν) = ν ∘ f⁻¹. The monad G is commutative strong with respect to the cartesian product in Meas.
The Kleisli category Meas_G is often called the category SRel of stochastic relations [17, Section 3]. The category SRel is ωCPO⊥-enriched (with respect to the cartesian monoidal structure) with the pointwise order. The least upper bound sup_n f_n of any ω-chain f₀ ⊑ f₁ ⊑ · · · ⊑ f_n ⊑ · · · is given by (sup_n f_n)(x)(B) = sup_n (f_n(x)(B)). The least element of each SRel(X, Y) (written ⊥_{X,Y}) is the constant function of the null measure over Y. The continuity of composition is obtained from the following two facts: • From the definition of the Lebesgue integral, for any ω-chain {ν_n} of subprobability measures over X, ∫_X f d(sup_n ν_n) = sup_n ∫_X f dν_n holds.
• From the monotone convergence theorem, we have ∫_X sup_n f_n dν = sup_n ∫_X f_n dν.
This enrichment is equivalent to the partially additive structure on SRel [17, Section 5]: for any ω-chain {f_n}_{n∈N} of f_n : X → Y in SRel, we have the summable sequence {g_n}_n where g₀ = f₀ and g_{n+1} = f_{n+1} − f_n. Conversely, for any summable sequence {g_n}_{n∈N}, the functions f_n = Σ_{k=0}^{n} g_k form an ω-chain.
Differential privacy
Throughout this paper, we define approximate differential privacy following the original definition [8, Definition 2.4], with two modifications to the domain and codomain of c: we replace the domain N by R, and the codomain by G(R^n) instead of a discrete probability space. We apply this definition to the interpretation of pWHILE programs. The input and output spaces can be other spaces: in Section 4 we consider the above-threshold algorithm Above, whose output space is Z. The above modification is essential in describing and verifying the differential privacy of this algorithm because it takes a sample from the Laplace distribution over the real line.
A Graded Monad for Differential Privacy
The composition law of differential privacy plays a crucial role in the compositional verification of the differential privacy of database programs. Barthe, Köpf, Olmedo, and Zanella-Béguelin constructed a parametric relational lifting describing differential privacy, and developed a framework for the compositional verification of differential privacy [2].
Following this relational approach, we construct a parametric relational lifting of the Giry monad to describe differential privacy for continuous random samplings. This lifting forms a graded monad on the category BRel(Meas) in the sense of [11]. The axioms of a graded monad correspond to the (sequential) composition law of differential privacy. An M-graded monad ({T_e}_{e∈M}, η, µ^{e₁,e₂}, ⊑^{e₁,e₂}) on C is called an M-graded lifting of a monad (T, η, µ).
A Graded Relational Lifting of Giry Monad for Differential Privacy
Let M be the cartesian product of the monoids ([1, ∞), ×, 1) and ([0, ∞), +, 0), equipped with the product of the numerical orders. For each (γ, δ) ∈ M, we define a mapping of BRel(Meas)-objects. Proof. Since the functor p is faithful, it suffices to show the equation below, where ≤ is the numerical order relation on G1 ≃ [0, 1]. We omit the proof of this equation; it can be shown in the same way as [12, Theorem 12].
The M-graded lifting {G^(γ,δ)}_{(γ,δ)∈M} describes only one side of the inequalities in the definition of differential privacy. By symmetrising it, we obtain an M-graded lifting exactly describing differential privacy for continuous probabilities. In the original works [2,3] on apRHL, a relational lifting (−)^♯(γ,δ) is introduced to describe differential privacy. This lifting relates two distributions if there are intermediate distributions d_L and d_R, called witnesses, whose γ-skew distance Δ_γ(d, d′) = sup_S (d(S) − γ · d′(S)) from them is less than or equal to δ. We denote by D the subdistribution monad over Set. Let Ψ be a relation between sets X and Y, and let d₁ ∈ DX and d₂ ∈ DY be two subdistributions; the relation Ψ^♯(γ,δ) ⊆ DX × DY is defined through such witnesses. Proposition 2.5 For any countable discrete spaces X and Y and any relation Ψ between them, the two liftings coincide. We remark that GX = DX for a countable discrete space X. When X is not countable, we obtain the above result by embedding each d ∈ DX into the set DX′ of subprobability distributions over the countable subspace X′ = X ∩ supp(d).
The proposition is proved by explicitly constructed witnesses.
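To make the witness-based lifting concrete, the sketch below computes the γ-skew distance between two discrete subdistributions. The event maximising d₁(S) − γ·d₂(S) is exactly the set of outcomes with positive excess, so a pointwise sum realises the supremum; the dictionary representation and function name are illustrative, not from the paper.

```python
def skew_distance(d1, d2, gamma):
    """gamma-skew distance for discrete subdistributions (dicts mapping
    outcomes to probabilities):
        Delta_gamma(d1, d2) = sum_x max(d1(x) - gamma * d2(x), 0),
    which equals the supremum of d1(S) - gamma*d2(S) over all events S."""
    support = set(d1) | set(d2)
    return sum(max(d1.get(x, 0.0) - gamma * d2.get(x, 0.0), 0.0)
               for x in support)

d1 = {"a": 0.5, "b": 0.5}
d2 = {"a": 0.3, "b": 0.7}
print(skew_distance(d1, d2, gamma=1.2))  # 0.5 - 1.2*0.3 = 0.14
```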
The Continuous apRHL
We introduce a variant of the approximate probabilistic relational Hoare logic (apRHL) to deal with continuous random samplings. We name it the continuous apRHL.
The Language pWHILE
We recall the language pWHILE [2] and reformulate it categorically. In this paper, we mainly refer to the categorical semantics of a probabilistic language given in [5, Section 2]. The language pWHILE is constructed in the standard way, hence we sometimes omit the details of its construction.
Syntax
We introduce the syntax of pWHILE by the following BNF: Here, τ is a value type; x is a variable; p is an operation; d is a probabilistic operation; e is an expression; ν is a probabilistic expression; i is an imperative; c is a command (or program). We remark that constants are 0-ary operations.
We introduce the following syntactic sugar for simplicity:
Typing Rules
We introduce typing rules for the language pWHILE. A typing context is a finite set Γ = {x₁ : τ₁, x₂ : τ₂, . . . , xₙ : τₙ} of pairs of a variable and a value type such that each variable occurs at most once in the context. The typing rule for operations is: from Γ ⊢ e₁ : τ₁, . . . , Γ ⊢ eₙ : τₙ and p : (τ₁, . . . , τₙ) → τ, infer Γ ⊢ p(e₁, . . . , eₙ) : τ. Here, the type (τ₁, . . . , τₙ) → τ of each operation p and each probabilistic operation d is assumed to be given in advance.
The sets of free variables of commands, expressions, and probabilistic expressions (denoted by FV(c), FV(e), and FV(ν)) are defined inductively in the usual way.
Denotational Semantics
We introduce a denotational semantics of pWHILE in Meas. Each value type τ is given an interpretation [[τ]], and a typing context Γ = {x₁ : τ₁, x₂ : τ₂, . . . , xₙ : τₙ} is interpreted as the product space [[τ₁]] × · · · × [[τₙ]]. The interpretations of expressions and of commands are defined inductively; in the clause for assignment, f_k = π₂ and f_l = π_l ∘ π₂ (l ≠ k).
This interpretation uses the distributivity of the category Meas.
We remark that, from the commutativity of the monad G, if Γ ⊢ x : τ and
Judgements of apRHL
A judgement of apRHL is c₁ ∼_{γ,δ} c₂ : Ψ ⇒ Φ, where c₁ and c₂ are commands, and Ψ and Φ are objects in BRel(Meas). We call the relations Ψ and Φ the precondition and postcondition of the judgement, respectively. Inspired by the validity of asymmetric apRHL [2], we introduce the validity of judgements of apRHL.
Proof Rules
We mainly adopt the proof rules of apRHL from [2,16], but we modify the [comp] and [frame] rules to verify differential privacy for continuous random samplings.
The relational lifting G^(γ,δ) does not preserve every relation composition. However, it preserves the composition of relations if the relations are measurable, that is, if the images and inverse images of measurable sets along them are also measurable (see also [12, Section 3.3]). Generally speaking, it is difficult to check the measurability of relations, hence the continuous apRHL is weak at dealing with relation compositions. However, we have the following two special cases: • The equality/diagonal relation on any space is a measurable relation.
• Any relation between discrete spaces is automatically a measurable relation.
Hence, the following [comp] rule is an extension of the original [comp] rule in [2]:
Here, Φ and Φ′ are required to be measurable relations. We define the [frame] rule with the construction Range(−). If the underlying space is countable discrete, then the condition (ν₁, ν₂) ∈ Range(Θ) is equivalent to supp(ν₁) × supp(ν₂) ⊆ Θ, and hence the above [frame] rule is an extension of the original [frame] rule in [2].
Soundness
Mechanisms
In this part, we give a generic method to construct the rules for random samplings, and by instantiating the method we show the soundness of the proof rules in prior research: [Lap] for the Laplacian mechanism [7], [Exp] for the exponential mechanism [14], [Gauss] for the Gaussian mechanism [8, Theorem 3.22, Theorem A.1], and [Cauchy] for the mechanism based on Cauchy distributions [15].
Let f : X × Y → R be a positive measurable function, and ν be a measure over Y. We define the function f^a : Σ_Y → [0, 1] by f^a(B) = (∫_B f(a, y) dν(y)) / (∫_Y f(a, y) dν(y)).
We remark that the function f(a, −) : Y → R is measurable. If this function is not almost everywhere zero and is Lebesgue integrable, that is, 0 < ∫_Y f(a, −) dν < ∞, then f^a(−) is a probability measure.
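The normalisation step can be illustrated numerically: the sketch below approximates f^a(B) on a discretised grid, with the grid points and quadrature weights standing in for the base measure ν (all names here are illustrative, and this is only a crude numerical stand-in for the measure-theoretic definition).

```python
import numpy as np

def normalized_measure(f, a, ys, weights):
    """Approximate f^a(B) = (int_B f(a,y) dnu) / (int_Y f(a,y) dnu):
    ys are sample points and weights the quadrature weights representing
    the base measure nu."""
    dens = np.array([f(a, y) for y in ys]) * weights
    total = dens.sum()
    assert 0 < total < np.inf, "f(a,-) must be integrable, not a.e. zero"
    # B is given as an indicator function on Y
    return lambda indicator: dens[[indicator(y) for y in ys]].sum() / total

ys = np.linspace(-10, 10, 2001)
w = np.full_like(ys, ys[1] - ys[0])
lap = lambda a, y: np.exp(-abs(y - a))       # unnormalised Laplace kernel
mu = normalized_measure(lap, 0.0, ys, w)
print(mu(lambda y: y >= 0))                  # ~0.5 by symmetry
```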
The following proposition, which is an extension of [2, Lemma 7], plays the central role in the construction of sound proof rules for random samplings.
Proposition 3.2 Let f : X × Y → R be a positive measurable function, and ν be a measure over Y. For all a, a′ ∈ X, γ, γ′ ≥ 1, δ ≥ 0, and Z ∈ Σ_Y (window set), if the following three conditions hold, then for any B ∈ Σ_Y we have f^a(B) ≤ γγ′ f^{a′}(B) + δ.
From the [rand] rule, the following rule is proved. We take the function f(a, y) = exp(−(y − a)²/(2σ²)), where σ² is the variance of the Gaussian mechanism (σ > 0). We introduce the probabilistic operation Gauss_σ : real → real with [[Gauss_σ]] = f^(−), whose continuity is easily proved.
From the [rand] rule, we obtain the corresponding [Gauss] rule.
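For concreteness, here is a hedged sketch of the Laplace and Gaussian mechanisms that the rules [Lap] and [Gauss] govern, using the classical calibration σ = Δf·√(2 ln(1.25/δ))/ε for the Gaussian case [8, Theorem A.1]. The function names and the use of NumPy are assumptions of the example, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mech(value, sensitivity, eps):
    """Laplace mechanism: adds Lap(sensitivity/eps) noise, which by the
    standard result gives (eps, 0)-differential privacy."""
    return value + rng.laplace(scale=sensitivity / eps)

def gaussian_mech(value, sensitivity, eps, delta):
    """Gaussian mechanism with the classical calibration
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps, which yields
    (eps, delta)-DP for eps < 1."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return value + rng.normal(scale=sigma)

print(laplace_mech(42.0, 1.0, eps=0.5))
print(gaussian_mech(42.0, 1.0, eps=0.5, delta=1e-5))
```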
An Example: The Above Threshold Algorithm
Barthe, Gaboardi, Grégoire, Hsu, and Strub extended the logic apRHL to the logic apRHL+ with new proof rules to describe the sparse vector technique (see also [8, Section 3.6]). They gave a formal proof of the differential privacy of the above-threshold algorithm in the arXiv preprint [1].
In this section, we demonstrate that the above-threshold algorithm with real-valued queries can be proved differentially private with almost the same proof as in [1]. The new proof rules of apRHL+ are still sound in the framework of the continuous apRHL.
We consider the following algorithm AboveT, whose loop body contains the steps: 5: if T ≤ S ∧ r = |Q| + 1 then 6: r ← j; 7: j ← j + 1. We recall the setting of this algorithm. It has two fixed parameters: the threshold t : real and the set Q : queries of queries, where |Q| : int is the number of queries in Q. The input variable is d : int, and the output variable is r : int. We prepare the new value types queries and data with [[data]] = R^N and queries = int (alias), and the typings j : int, T : real, and S : real. We assume that an operation eval : (queries, int, data) → real is given for evaluating the i-th query in Q for the input d. We require [[eval]] to be 1-sensitive with respect to the data d. The differential privacy of AboveT is characterised accordingly. The following rules of apRHL+ are sound in the framework of the continuous apRHL; hence we extend the continuous apRHL by adding these rules, and thereby construct a formal proof that is almost the same as the proof in [1].
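To fix intuition, the following Python sketch implements the above-threshold (sparse vector) algorithm for 1-sensitive real-valued queries, with the textbook noise split Lap(2/ε) on the threshold and Lap(4/ε) on the answers [8, Section 3.6]. The concrete budget split and toy query set are illustrative, and the paper's AboveT may differ in details.

```python
import numpy as np

rng = np.random.default_rng(1)

def above_threshold(queries, data, threshold, eps):
    """Above-threshold for 1-sensitive real-valued queries: perturb the
    threshold once with Lap(2/eps), perturb each answer with Lap(4/eps),
    and return the index of the first noisy answer exceeding the noisy
    threshold (|Q|+1 if none does). This split gives (eps, 0)-DP."""
    T_hat = threshold + rng.laplace(scale=2.0 / eps)
    for j, q in enumerate(queries, start=1):
        if q(data) + rng.laplace(scale=4.0 / eps) >= T_hat:
            return j
    return len(queries) + 1

data = np.array([1.0, 2.0, 3.0, 4.0])
# Toy queries (assumed 1-sensitive for the sake of the example):
queries = [lambda d, i=i: d[:i].sum() / 10 for i in range(1, 5)]
print(above_threshold(queries, data, threshold=0.4, eps=1.0))
```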
The soundness of the rule [Forall-Eq] is proved from the following lemma:
Formal Proof
We now demonstrate that the (ε, 0)-differential privacy of algorithm AboveT is proved with almost the same proof as in [1].
From the [Forall-Eq] rule with variable r, it suffices to prove the judgement for every integer i. We denote by c₀ the sub-command consisting of the initialization line 2 of AboveT. From the rules [assn], [LapGen] (with r = r′ = 1 and σ = 2/ε), [seq], and [frame], we obtain the first judgement. We denote by c₁ and c₂ the main loop and its body respectively (i.e., c₁ = while (j < |Q|) do c₂). We aim to prove the corresponding judgement using the [while] rule. To prove this, it suffices to show three cases for the loop body c₂, for which we provide a loop invariant. The judgement in case (i) is proved from the implication (‖d₁ − d₂‖₁ ≤ 1 ∧ T₁ + 1 = T₂) ⇒ (S₁ + 1 = S₂ ∧ T₁ + 1 = T₂).
Case (iii) is proved in a similar way to case (i).
This appendix will be deleted from the final version of this paper.
A Appendix
We show some omitted proofs in this paper.
A.1 Proofs in Section 1.2
Proposition A.1 The composition of the category SRel = Meas_G is continuous with respect to the ordering ⊑.
Proof. Consider a measurable function h : Y → GZ and an ω-chain {f_n : X → GY}_n with respect to ⊑. Fix x ∈ X. Since the ω-chain of measures f_n(x) is bounded, it converges strongly to (sup_n f_n)(x). From the definition of the Lebesgue integral, this implies the required equality for any C ∈ Σ_Z and x ∈ X, and that sup_n f_n is a measurable function X → GY.
Proof. For each x ∈ X, the finiteness of the measures f₁(x) and f₂(x) implies the countable additivity of (f₁ − f₂)(x), where ⋃_n B_n is the union of a countable disjoint collection B₀, B₁, . . . . Therefore f₁ − f₂ is at least a function of the form X → GY. The σ-algebra of GY is generated by a countable collection, and the relevant evaluation maps are measurable for all A ∈ Σ_Y and α ∈ [0, 1] ∩ Q (i = 1, 2). A direct calculation then shows that the function f₁ − f₂ is measurable. ✷
A.2 Proofs in Section 2.2
We recall the definition of the indicator function χ_A : X → [0, 1] of a subset A ⊆ X: χ_A(x) = 1 if x ∈ A, and χ_A(x) = 0 otherwise.
"Computer Science",
"Mathematics"
] |
Distributed Economic Dispatch Control Method with Frequency Regulator for Smart Grid under Time-Varying Directed Topology
The paper studies a new distributed control method to solve the economic dispatch problem (EDP) under directed topology based on a consensus protocol. Electrical equipment is closely related to frequency, and the frequency of each generator varies independently during operation, which hinders the realization of economic dispatch. To solve this problem, we combine a frequency regulator with a consensus protocol, which eliminates the effect of frequency variation on the designed consensus algorithm. Meanwhile, considering the problem of excessive communication cost and low computational efficiency in large-scale power systems, an event-triggered mechanism is introduced into the designed algorithm. Furthermore, in order to overcome the unexpected loss of communication links, a time-varying topology mechanism is employed to develop the distributed economic dispatch (DED) algorithm and improve its robustness. Then, the stability of the above algorithm is proved by graph theory and convergence analysis. Finally, several simulations illustrate that our proposed methods are effective.
Introduction
With the emergence of the smart grid, electricity consumption has become more efficient and better managed. How to generate power at low cost is an important topic of general concern in society. EDP is a constrained optimization problem that studies dispatch methods to ensure a reliable power supply to users at the lowest generation cost [1]. At present, the economic dispatch methods of a smart grid are mainly divided into centralized methods and distributed methods. A traditional centralized method requires a central processing unit to capture global resources for all generators and calculate large amounts of data [2,3], such as the Lagrange multiplier method and the neural network approach [4,5]. However, as power systems continue to expand in scale, they become increasingly dispersed; the centralized method then suffers from rising communication cost and decreasing processing capacity. Hence, a distributed algorithm is very suitable for solving EDP in a distributed power system [6]: it does not need a centralized control center, and each generator only needs to obtain local information from adjacent generators to complete EDP.
In recent years, many DED methods have emerged; distributed algorithms are generally designed based on consensus protocols [7]. In the actual operating environment, frequency affects the normal operation of the generators, yet only some studies in the literature consider frequency variation. Most such algorithms add a frequency control term to the consensus protocol to balance the actual power of generation and load [11]. This is a useful innovation, but several aspects remain insufficiently considered. Inspired by the distributed algorithm in [11], in this paper an event-triggered DED control algorithm with a frequency regulator under time-varying topology is proposed. The main innovations and improvements are summarized below: (1) Ref. [11] considers that the frequency of each generator is always the same. However, in practice, the frequency of each generator may differ due to the uncertainty of frequency variation. Therefore, we introduce a frequency regulator into the consensus protocol to eliminate the impact of frequency changes on the economic dispatching process, which is more suitable for power generation in actual scenarios and ensures the balance of power supply and demand. (2) The existing work based on undirected topology [10,11,14,15] does not consider communication disturbance or asynchronous communication, i.e., one-way communication between two generators. Hence, we use a row-stochastic and a column-stochastic matrix to design an algorithm under directed topology, so that our algorithm supports one-way communication.
(3) The paper adds an event-triggered mechanism to the algorithm, which makes up for the disadvantages of excessive communication resource consumption and low computing efficiency in a large-scale power system; the communication frequency of our algorithm is therefore greatly reduced. (4) The communication fault considered in [11] refers to the situation in which a generator cannot measure the frequency, but it does not consider the change of the actual topology caused by a communication fault. We consider that the topology may change at every moment due to line faults. Therefore, a new time-varying topology mechanism is added to the algorithm to increase the robustness of the entire power system, which makes the designed algorithm more practical. Different from the designs in [17,18], we define a mapping topology function in the consensus algorithm to indicate the topology at any time.
The structure of this paper is as follows: we describe some preliminaries in Section 2. Section 3 gives the DED formulation and the consensus protocol related to economic dispatch. Several DED control methods are proposed in Section 4 to solve EDP. In Section 5, we prove the convergence of the designed algorithm through theoretical analysis. Several simulations are carried out in Section 6. Finally, a brief conclusion is provided in Section 7.
Preliminary
This section describes the graph theory used in the paper, which is necessary to describe the communication network. In a smart grid, a directed topology G = (V, E) describes the relationship between multiple generators, where V is the node set and E ⊆ V × V is the edge set. To simplify the notation, it is assumed that there is no self-loop at any node. The in-neighbors and out-neighbors of each node are defined as usual, and the out-degree and in-degree of each node are given by the corresponding sums of the adjacency weights a_ji. G is strongly connected if for each pair v_i, v_j ∈ V, v_i ≠ v_j, there are paths from v_i to v_j and from v_j to v_i.
The Laplacian matrix of a digraph is defined in the usual way. Considering the time-varying topology, the communication topology G(k) changes with each iteration. Suppose the set of all possible topologies is G̃ = {G₁, . . . , G_m}. E(k) represents the edge set of the topology at the k-th iteration, and A_{δ(k)} is the adjacency matrix at the k-th iteration corresponding to the current topology. The joint graph of G(k) over an interval is the graph whose edge set is the union of the edge sets over that interval. G(k) is said to be uniformly jointly strongly connected if there exists a constant t > 0 such that G([k₀, k₀ + t)) is strongly connected for any k₀ ≥ 0 [18]. The in-neighbors and out-neighbors of each node at the k-th iteration are defined analogously.
Formulation of the EDP
EDP aims to minimize the total generation cost by rationally allocating the power of the generators. EDP is therefore a constrained optimization problem that minimizes the following objective function, where P_i represents the output power of the i-th generator, P_G = [P₁, P₂, . . . , P_N]^T ∈ R^N₊, C_i(P_i) is the cost function, and C(P_G) and P_D represent the total cost and total power demand of the power system, respectively. P_i^min and P_i^max are defined as the minimum and maximum output power of the i-th generator. Constraint (2) guarantees the balance of power supply and demand during the operation of the entire power system, and constraint (3) limits the output of each generator to ensure its normal operation. In most cases, C_i(P_i) is a quadratic convex function, where α_i, β_i, and γ_i denote the coefficients of each generator's cost function. To facilitate the analysis, the following assumptions are made.
Assumption 1.
The communication topology in this paper is a directed connected graph. The corresponding adjacency matrices are the row-stochastic matrix A = [a_ij]_{n×n} and the column-stochastic matrix W = [w_ij]_{n×n}, which satisfy ‖A‖ < 1 and ‖W‖ < 1.
Assumption 2.
For generators with frequency regulators, the frequency of every generator can be monitored; all generators initially run at the rated frequency f_rated. The frequency of each generator can be different and can vary periodically within the normal range.
Centralized Lagrangian Method
In order to solve EDP, traditional centralized methods mainly rely on global information about the power system and even require the parameter information of each generator. At present, the Lagrange multiplier method is the main centralized EDP method. The Lagrangian function is defined with multiplier λ, where λ is both the incremental cost and the Lagrange multiplier. Hence, λ* denotes the optimal incremental cost, which can be obtained by the Lagrange multiplier method.
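For quadratic costs, the Lagrangian method admits a closed form. The sketch below assumes the conventional form C_i(P) = α_i + β_i·P + γ_i·P² and ignores the box constraints (3), so it is an illustration under stated assumptions rather than the paper's full method; the coefficient values are made up for the example.

```python
import numpy as np

def lagrangian_dispatch(beta, gamma, P_demand):
    """Closed-form dispatch for quadratic costs C_i(P)=alpha_i+beta_i*P
    +gamma_i*P**2 without power limits. Setting the incremental cost
    dC_i/dP_i = beta_i + 2*gamma_i*P_i equal to a common lambda and
    enforcing sum(P_i) = P_D gives:
        lambda* = (P_D + sum(beta_i/(2*gamma_i))) / sum(1/(2*gamma_i))
        P_i*    = (lambda* - beta_i) / (2*gamma_i)"""
    beta, gamma = np.asarray(beta), np.asarray(gamma)
    lam = (P_demand + np.sum(beta / (2 * gamma))) / np.sum(1 / (2 * gamma))
    return lam, (lam - beta) / (2 * gamma)

lam, P = lagrangian_dispatch(beta=[2.0, 3.0, 2.5],
                             gamma=[0.01, 0.02, 0.015], P_demand=599.0)
print(lam, P, P.sum())  # equal incremental costs; powers sum to the demand
```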
Remark 1.
The centralized economic dispatch method requires some global resources of the entire power system and some detailed parameters of each generator. However, if there are too many generators in the power system, the method is not feasible. Hence, we introduce a basic distributed consensus protocol to solve distributed EDP.
Distributed Consensus Protocol
The necessary condition to realize economic dispatch in a power system is that all incremental costs have the same value. To overcome the shortcomings of the centralized approach, a discrete-time DED consensus protocol under undirected topology was proposed in [11], as follows. However, due to the differing ability of each generator to obtain the state information of its neighbors, bidirectional communication between distributed generator units may not be realizable. Therefore, we will consider DED under directed communication.
where λ_i(k) denotes the incremental cost of the i-th generator at the k-th iteration, τ is a fixed step size, and a_ij represents the weight coefficient from generator j to generator i. Let λ(k) = [λ₁(k), λ₂(k), . . . , λ_N(k)]^T; then (10) can be rewritten as (11), where L_a is the Laplacian matrix. Using (11), it is more convenient to derive the convergence of the algorithm.
Lemma 1 ([11]
). (11) converges to (1/N)·1·1^T·λ(0) if and only if the communication topology is connected and the step size satisfies τ < 2/ρ(L_a), where λ_i(0) is the initial incremental cost of generator i, 1 = [1, 1, . . . , 1]^T, and ρ(L_a) is the largest absolute value of the eigenvalues of L_a.
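A minimal numerical illustration of the protocol (10)/(11) and of Lemma 1's step-size condition is sketched below; the graph, weights, and step size are assumptions of the example, not taken from the paper.

```python
import numpy as np

def consensus(lmbda0, A, tau, iters=200):
    """Plain consensus iteration lambda(k+1) = (I - tau*L_a) lambda(k):
    with a connected graph and tau < 2/rho(L_a), the incremental costs
    converge to a common value (here the average, since A is symmetric)."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # Laplacian of the weighted graph
    lam = np.asarray(lmbda0, dtype=float)
    for _ in range(iters):
        lam = lam - tau * (L @ lam)
    return lam

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])                # 4-node ring; rho(L_a) = 4
print(consensus([8.0, 9.0, 10.0, 11.0], A, tau=0.2))  # -> all ~9.5
```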
DED Approach with Frequency Regulator
Based on (10), we propose a DED algorithm with a frequency regulator. Different from the algorithm proposed in [11], the frequency of each generator can be different and can fluctuate periodically within the normal range. We add the frequency factor to the consensus protocol by introducing the target attraction function [18] as follows, where T*_i(k) denotes a target attraction function that helps λ(k) achieve consensus. In the power system, if all λ_i(k) reach consensus under the action of the consensus protocol, then (15) yields T*. Therefore, T*_i(k) denotes the error between the optimal value and the current value, and T*_i(k) = 0 when economic dispatch is complete. Our goal is thus to design a target attraction function T*_i(k) so that the algorithm reaches consensus. Next, we design a DED algorithm with a frequency regulator to achieve this goal.
where ∆λ_i(k) is both a frequency regulator and a local frequency error. The frequency regulator can eliminate the effect of each generator's frequency deviation because ∆f_i(k) quickly approaches zero under the action of the algorithm. µ and ε are learning rates, σ is a proportional gain coefficient, and N is the number of generators. The real-time discrete frequency of each generator is denoted f_i(k); it changes periodically and stays the same within each period. From algorithm (16)-(18), λ_i(k) can be calculated for each generator in each iteration. Then, P_i(k) is obtained by (9). After a finite number of iterations, λ(k) converges to the same value. At the same time, the active power of each generator reaches the optimal value P*(k), and the total generation cost is the minimum; that is, economic dispatch is completed. The procedure is summarized in Algorithm 1.
Event-Triggered-Based DED Approach
With the increasing scale of the power system, the amount of resources for interaction between generators has increased dramatically, resulting in low computational efficiency, which may lead to communication congestion and even economic dispatch failure. To solve this problem, we introduce an event-triggered mechanism based on Algorithm 1.
In order to conveniently describe the event-triggered algorithm, a necessary introduction is given first. Event-triggered control means that information transmission between generators is only allowed after the triggering conditions are satisfied, which effectively reduces the amount of computation, reduces the waste of communication bandwidth, and improves the efficiency of economic dispatch. The notation is defined as follows: for each i, x_i(k) denotes the variable transmitted by the i-th node in the communication topology at the k-th iteration.
x̂_i(k) denotes the transmission variable of the i-th node at the event-triggered time instant k^i_{t_i}, where t_i is the number of events triggered so far for the i-th node. E_i(k, x_i(k)) denotes the event-triggered function, and the event-triggered condition defines the new event-triggered instant at which it is met. Moreover, the discrete-time method naturally excludes Zeno behavior [15].
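The following sketch shows one simple instance of such an event-triggered consensus update. The trigger rule here (a fixed threshold θ on the gap between a node's state and its last broadcast value) is an illustrative stand-in for the paper's trigger function E_i, and the parameter values are made up.

```python
import numpy as np

def event_triggered_consensus(lmbda0, A, tau, theta=0.02, iters=300):
    """Consensus where node i rebroadcasts its state only when the gap
    between its true state and its last broadcast value exceeds theta;
    neighbours always use the last broadcast values."""
    A = np.asarray(A, float)
    L = np.diag(A.sum(1)) - A
    lam = np.asarray(lmbda0, float)
    lam_hat = lam.copy()                    # last broadcast values
    events = 0
    for _ in range(iters):
        lam = lam - tau * (L @ lam_hat)     # update uses broadcast states
        trig = np.abs(lam - lam_hat) > theta
        lam_hat[trig] = lam[trig]           # broadcast only when triggered
        events += int(trig.sum())
    return lam, events

lam, events = event_triggered_consensus(
    [8.0, 9.0, 10.0, 11.0],
    [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], tau=0.2)
print(lam, events)  # practical consensus within ~theta, far fewer than
                    # 4*300 broadcasts, illustrating the saved communication
```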
Considering that the event-triggered mechanism is suitable for large power systems, we introduce a leader-follower mode into the algorithm in order to avoid communication failures as much as possible. In this mode, leaders with a frequency regulator can measure the frequency, while followers without a frequency regulator have no frequency measurement ability. Therefore, we introduce a new event-triggered-based DED approach, with separate update laws (20)-(27) for followers and leaders. Here, N^{L+}_i represents the set of neighbor leader generators adjacent to leader generator i, ξ denotes the gain coefficient, y_i(k) is an auxiliary variable, ∆P_i(k) is the difference in active power between two iterations, A = [a_ij]_{n×n} is a row-stochastic matrix, and W = [w_ij]_{n×n} is a column-stochastic matrix. When the event-triggered function satisfies E_i(k, x_i(k)) > 0, the event is triggered.
Remark 2. It can be seen from (20)-(27) that a leader generator can only receive messages from neighbor leaders and can send messages to any neighbor generator it points to, while a follower can communicate with any neighbor generator. Then, under the action of the consensus term, the incremental cost of every generator becomes the same. The procedure is summarized in Algorithm 2, whose flow is shown in Figure 1.
Algorithm 2 Event-triggered-based DED approach (Leader-Follower)
Measure f_i(k): the frequency of each generator at iteration k.
5: for i = 1 : N do
6:   if i is Leader then
7:     the i-th generator receives its neighbor leaders' states and, through (22)-(26), calculates the new value λ_i(k).
Robustness to Time-Varying Topology
At present, most related work generally considers the communication topology to be unchanged. However, in practical applications, the communication links may have uncertain faults, which may lead to changes of the communication topology. Therefore, in order to address imperfect communication, we design a robust DED consensus algorithm for time-varying topology based on the above algorithm, which is very necessary for the future smart power grid. Since the topology is time-varying, the mapping topology function is expressed as G_{δ(k)}, which represents the topology at the k-th iteration; its corresponding adjacency matrices are A_{δ(k)} and W_{δ(k)}.
Assumption 4.
In the case of time-varying topology, ‖A_{δ(k)}‖ < 1 and ‖W_{δ(k)}‖ < 1 can be guaranteed for any A_{δ(k)} and W_{δ(k)}. Communication between leaders is always smooth.
Convergence Analysis
In this part, the convergence of the proposed algorithm is analyzed. First, the convergence of y i (k) and λ i (k) is proved under an event-triggered condition. Second, it is proved that the protocol can converge even considering the time-varying topology. To give the convergence analysis, we need the following lemmas.
Lemma 2 ([37]). If the graph is connected, its Laplacian matrix is L_a, and the spectral radius satisfies ρ(L_a) < 2/τ, then the consensus iteration converges.
Lemma 3. Let M_n be the set of all n-order matrices and A ∈ M_n with spectral radius ρ(A); if ρ(A) < 1, then A^k → 0 as k → ∞.
Theorem 1. In a directed connected graph, let Ψ_a = I − L_a. As k → ∞, the sum Σ_r Ψ_a^r converges, by adjusting L_a so that ‖Ψ_a‖ < 1.
Proof. We can observe that Ψ_a, Ψ_a², Ψ_a³, · · ·, Ψ_a^k is a geometric sequence, so we can use the geometric series summation formula: Σ_{r=1}^{k} Ψ_a^r = (Ψ_a − Ψ_a^{k+1})(I − Ψ_a)^{−1}.
So, as k → ∞ with ‖Ψ_a‖ < 1, we have Ψ_a^k → 0, and we conclude that (37) converges to the limit of this geometric series.
Convergence in Time-Varying Topology
Considering the time-varying topology, the update in (23) depends on the topology index δ(k). Then, (23) can take the following form.
Next, we need to prove that the series weighted by ϕ(r) converges. As mentioned above, there exists a ϕ_max such that ϕ(r) < ϕ_max. If the proposed algorithm can be shown to converge under this extreme condition, then it must converge in general. According to Assumption 4, we have ‖W_max‖ < 1 and ‖W_min‖ < 1; using Lemma 3, the powers (W_min)^r are proved to vanish, so the corresponding series converges. Since (51) reaches convergence under the extreme condition, (51) is convergent; the analogous series can be proved convergent in the same way and is omitted here. To sum up, (50) is proved to converge. According to Assumption 4, communication between leaders is always smooth, so the connection between leader generators is not interrupted while the topology varies. Therefore, the influence of topology changes on the frequency regulator is negligible, and it converges to 0 as under a fixed topology. Combined with the conclusions of (48), we can ignore the value of the frequency regulator when proving the convergence of (30). Let Ξ_{a,δ(k)} = I − τL_{a,δ(k)}. Since (50) converges, (30) takes the following form.
The entries of e(k) share the same upper bound, and every entry of e_max is the same; that is, 0 ≤ e(k) ≤ e_max for a suitable norm. When k → ∞, substituting e(k) = e_max and e(k) = 0 respectively gives Γ_a(e_max) = Γ_a(0) = 0, so Γ_a(η) = 0. Therefore, (54) reduces to (55), whose convergence is proved by mathematical induction.
We deduce that λ(n + 1) converges; therefore (55) converges, and hence (54) converges. Convergence of (54) alone does not guarantee consensus, so we next analyze the consensus of (54) on the basis of (55).
Simulation and Analysis
In this part, we conduct some simulations to verify the effectiveness of our algorithm. Frequency is variable in an electric power system, so we assume that the frequency changes periodically. According to the number of generators, we consider a four-generator system and a ten-generator system. For the four-generator system, we study the economic dispatch algorithm with a frequency regulator under an event-triggered mechanism and directed graph topology. Extending to the ten-generator system, we simulate the economic dispatch algorithm with the event-triggered mechanism under time-varying topology in leader-follower mode. Finally, the PI algorithm in [11] is compared with our designed algorithm to verify its superiority. The parameters of the above algorithm are set as follows: τ = 1, ξ = 0.0001, µ = 0.1, γ = 0.00022, υ = 0.5, ε = 0.2, σ = 1.
Four-Generator System
We simulate a four-generator system, and Figure 2 shows the corresponding directed topology. The cost function coefficients and power limits for each generator in the system are listed in Table 1. In this case, we add the event-triggered mechanism to the algorithm and set the total power demand of the entire power system to 599 kW. Figures 3 and 4 illustrate the simulation results. From Figure 3a, we can see that the output of each generator remains stable once λ_i(k) reaches consensus. P₁ is affected by the active power constraint and adopts the minimum power generation. The results in Figure 3b show the total cost of power generation in each iteration; the total cost varies with the changing output power. Next, the frequency variation process for each generator is shown in Figure 3c; the frequency varies within the normal range. Figure 3d shows the working process of each generator's frequency regulator, which returns to zero within a very short time after each change in frequency. Thus, the frequency regulator eliminates the influence of the frequency changes. Thanks to the event-triggered mechanism, the iterative process of the incremental cost of each generator is shown in Figure 4a. The incremental cost varies with frequency, and each generator transmits new information to its neighbor generators in the next iteration only if the event-triggered condition is met. Finally, the algorithm achieves consensus. The iterative process of the auxiliary variable y_i(k) is shown in Figure 4b; y_i(k) also converges under the change in frequency. Figure 4c shows the event-triggered instants of each generator, i.e., {k^i_{t_i}}, i = 1, 2, 3, 4. Moreover, the event-triggered statistics during the iteration are listed in Table 2, from which we can see that the event-triggered mechanism improves the efficiency of calculation. Figure 4d shows the relationship between the power generation and the total demand: the balance of supply and demand is achieved within the allowable range of error.
Ten-Generator System
In this part, we simulate a ten-generator system (N = 10). From Figure 5a, we obtain the corresponding directed topology G. The blue nodes are the leaders and the green nodes are the followers. The dotted line indicates the communication between leaders and the solid line indicates the communication between any two nodes.
In order to improve the robustness to topology changes, we add the time-varying topology mechanism to the event-triggered algorithm. In this case, the ten-generator system uses leader-follower mode, and only the leader generators have a frequency regulator. From Figure 5b,c, the set of all possible topologies is G̃ = {G₁, G₂}, and the time-varying topology G_{δ(k)} ∈ G̃ switches between the two fixed topologies G₁ and G₂ at each iteration. E₁ and E₂ represent the edge sets of G₁ and G₂, respectively. G_{δ(k)} = (V, E(k)), G_{δ(k)} ∈ G̃, represents the topology at the k-th iteration, and its corresponding adjacency matrices are A_{δ(k)} and W_{δ(k)}. G_{δ(k)} is defined as follows.
In addition, we need to ensure that G(k) = (V, E₁ ∪ E₂) is uniformly jointly strongly connected. Then, we set the total power demand of the entire power system to 4085 kW. The cost function coefficients and power limits for each generator in the system are listed in Table 3. The simulation results are shown in Figures 6 and 7. Figure 6 parallels Figure 3, showing the iteration of the variables under frequency variation. Figure 7a shows that our algorithm still achieves consensus under the time-varying topology and event-triggered mechanism. Figure 7b shows that the auxiliary variable y_i(k) can still converge in a time-varying topology. The event-triggered information in Figure 7c is summarized in Table 4. Figure 7d shows that the algorithm still balances supply and demand within the allowable range of error, reflecting the robustness of the algorithm.
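Joint strong connectivity of the union graph can be checked mechanically. The sketch below verifies strong connectivity of G = (V, E₁ ∪ E₂) with two reachability searches (on the graph and on its reverse); the node labels and edge lists are illustrative.

```python
def strongly_connected(n, edges):
    """A directed graph is strongly connected iff every node is reachable
    from node 0 and node 0 is reachable from every node (i.e., all nodes
    are reachable in the reversed graph)."""
    def reachable(adj):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    fwd = {u: [] for u in range(n)}
    rev = {u: [] for u in range(n)}
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    return reachable(fwd) and reachable(rev)

E1 = [(0, 1), (1, 2), (2, 3)]
E2 = [(3, 0)]
print(strongly_connected(4, E1 + E2))  # True: the union closes the cycle
```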
Comparison Experiment
To confirm the validity, correctness, and superiority of our designed algorithm, we compare it with the PI algorithm of [11] from the perspectives of event-triggered counts, convergence speed, and total power generation cost.
After introducing the event-triggered mechanism, from Figure 8a,b and Table 5, we can see that the number of iterations of our algorithm is significantly reduced, which reduces the consumption of communication resources compared with the PI algorithm. It can be observed from Figure 8c,d that the PI algorithm drives the incremental cost to consensus in 18 iterations, while the event-triggered algorithm only needs 12 iterations. Therefore, the convergence speed of the event-triggered algorithm is higher than that of the PI algorithm. Figure 9 shows that the total power generation costs of the PI algorithm and the event-triggered algorithm are almost the same. Under the same total power generation cost, according to the above analysis, the event-triggered algorithm we designed is clearly better than the PI algorithm.
Conclusions
In this paper, a novel DED control method is proposed to solve EDP under directed topology based on a consensus protocol, which is suitable for a more general environment. The method introduces a frequency regulator into the consensus protocol to eliminate the influence of the frequency differences among the generators. Moreover, considering large-scale power systems, we added an event-triggered mechanism to the method to decrease the communication resources; the method only updates the current state after the triggering conditions are met, so it improves computational efficiency. In addition, the robustness of the algorithm is improved by introducing the time-varying topology mechanism, which uses the mapping topology function to carry out economic dispatch under the topology at any time. Finally, numerical experiments verify the validity of the method. Nevertheless, the algorithm also has limitations. For example, the frequency is assumed to change periodically, with a period of 50 s in our simulations. In addition, the time-varying topology mechanism requires the designed topologies to form a uniformly jointly strongly connected graph for the algorithm to converge. Future work will therefore focus on network attacks, transmission losses, ramp-rate limits for generators, and other practical operational constraints.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
EDP   Economic Dispatch Problem
DED   Distributed Economic Dispatch

Nomenclature
P_i        the output power of the i-th generator
C(P_G)     total cost
P_D        total power demand
P_i^min    the minimum output power of the i-th generator
P_i^max    the maximum output power of the i-th generator
λ_i(k)     the incremental cost of the i-th generator at the k-th iteration
τ          a fixed step size
a_ij       the weight coefficient from generator j to generator i
L_a        a Laplacian matrix
T*_i(k)    a target attraction function
∆λ_i(k)    a frequency regulator and local frequency error
f_i(k)     the real-time discrete frequency of each generator
x_i(k)     the variable transmitted by the i-th node in the communication topology
x̂_i(k)     the transmission variable of the i-th node at event-triggered time instant k^i_{t_i}
t_i        the number of triggered events
"Engineering",
"Economics",
"Computer Science"
] |
Al2O3:Yb3+ integrated microdisk laser label-free biosensor.
Whispering gallery mode resonator lasers hold the promise of an ultralow intrinsic limit of detection. However, the widespread use of these devices for biosensing applications has been hindered by the complexity and lack of robustness of the proposed configurations. In this work, we demonstrate biosensing with an integrated microdisk laser. Al2O3 doped with Yb3+ was utilized because of its low optical losses as well as its emission in the range 1020-1050 nm, outside the absorption band of water. Single-mode laser emission was obtained at a wavelength of 1024 nm with a linewidth of 250 kHz while the microdisk cavity was submerged in water. A limit of detection of 300 pM (3.6 ng/ml) of the protein rhS100A4 in urine was experimentally demonstrated, showing the potential of the proposed devices for biosensing.
Passive microresonator optical sensors have been widely studied in recent decades for their high sensitivity in the label-free detection of biomolecules [1][2][3]. These biosensors hold promise for their integration into portable, sensitive, low-cost, multiplexed, and easy-to-use devices that, within minutes, can detect biomarkers from bodily fluids for medical applications. Label-free detection relies on the sensitivity of the microresonators to changes in their local environment (i.e., evanescent field) due to the attachment of the target biomolecules, which, among other effects, induces shifts in the resonance frequency [4]. Such biosensors exhibit an intrinsic resolution limit given by the linewidth of the resonances [5], and require complex interrogation schemes with either tunable lasers or high-resolution optical spectrum analyzers (OSA) to continuously monitor the location of the resonance wavelength, which hampers their implementation outside the laboratory.
The much narrower resonances of microresonator lasers dramatically decrease the intrinsic limit of detection (LOD). Furthermore, a simple, low-cost interrogation setup can be used to monitor the laser wavelength shift by heterodyning with an external reference laser [13]. However, to date, the benefits attributed to active, laser-based devices have mostly been demonstrated on complicated three-dimensional optical cavities not suitable for on-chip integration, therefore hampering their widespread use as multiplexed biosensing platforms. Advancing sensing functionalities based on active microresonators requires a material platform that can overcome this limitation.
Aluminum oxide (Al2O3) is an emerging photonic material that exhibits a large transparency window covering the visible and near-infrared wavelength ranges [14]. When doped with rare-earth ions, it provides optical gain that has been used to demonstrate on-chip amplifiers [15] and lasers [16,17]. Recent reports have shown its monolithic integration with passive photonic functions [18][19][20]. These features make this material very interesting for the realization of active optical sensors. In particular, doping Al2O3 with Yb3+ permits operation at a wavelength of ∼1000 nm, where water absorption is negligible. To date, there is only one report exploiting the active optical properties of Al2O3 as a sensing material, where glass microspheres (1 to 20 μm in diameter) were detected by using a dual-wavelength distributed feedback laser [21]. The detection was not selective, as the device was sensitive to any microsphere brought into its close proximity.
In this work, an Al2O3:Yb3+ integrated microdisk laser biosensor is developed for the label-free detection of the S100A4 protein (12 kDa), which has been associated with human tumor development [22,23]. Detection of a concentration as low as 300 pM of the S100A4 protein in synthetic urine is experimentally demonstrated. This falls within the concentration range reported as clinically relevant [24], therefore showing the potential of this active platform for label-free biosensing.
Al2O3:Yb3+ films were first deposited by radio frequency (RF) reactive cosputtering (AJA ATC 1500) from Al and Yb targets to form a 550 nm thick dielectric layer on a thermally oxidized silicon wafer [25]. The substrate was heated to 450 °C during the deposition. RF powers of 200 W and 35 W were applied to the Al and Yb targets, respectively. Oxygen and argon flows of 2.5 sccm and 30 sccm, respectively, were utilized with an operating pressure of 3.7 mTorr [25]. Microdisks and bus waveguides were patterned by UV contact lithography followed by reactive ion etching with BCl3:HBr (5:2) using a total power of 25 W [26]. The microdisk has a radius of 100 μm and a coupling gap of 0.6 μm to the bus waveguide, which has a width of 1.4 μm. With these disk dimensions, a 6.5% overlap of the electric field with the environment (i.e., water or synthetic urine upper cladding) was calculated. A 3 μm thick SiO2 cladding was deposited by plasma-enhanced chemical vapor deposition (Oxford Plasmalab 80 Plus) at 300 °C through a shadow mask to cover the input and output bus waveguides, leaving the microdisks fully exposed to the environment. Next, chips of 1.2 × 1.9 cm² were diced (Micro Ace 3). PDMS microfluidic channels with a cross section of 600 by 70 μm² were bonded onto the sample by simply placing them on top of the chip. Finally, the PDMS channels were filled with either deionized (DI) water (bulk refractive index and temperature sensitivity experiments) or synthetic urine (biosensing experiment). Figure 1(a) shows the device under test in the experimental setup. During the measurements, the temperature of the chip is controlled to 21.5 ± 0.0025 °C by means of a temperature-regulated stage. A fiber laser diode (Thorlabs BL976-SAG300) with a wavelength of 976 nm and a linewidth of 0.5 nm is used to optically pump the Al2O3:Yb3+ microdisk lasers. The backward lasing light (TE polarized) is separated from the residual pump with a 980/1060 nm wavelength demultiplexer (Thorlabs WD202G-FC) and analyzed with a Hewlett Packard 70950B OSA, while its power is measured with a Hewlett Packard 81536A power sensor. The emission spectrum of the device subject to a flow of DI water is shown in Fig. 1(b), measured with a resolution of 100 pm. Single-mode operation at 1024 nm with a side-mode suppression ratio (SMSR) of 27 dB is observed. The laser operates in TE polarization. A slope efficiency (i.e., laser power measured at the power meter with respect to launched pump power prior to coupling to the chip) of ∼0.1% and a lasing threshold of 7 mW launched pump power prior to coupling to the chip are measured. Currently, these power characteristics are limited by the low fiber-to-chip coupling efficiency of 20%, which could be enhanced in future work by implementing a vertical tapered end facet for higher fiber-to-chip coupling efficiency, and thus a larger slope efficiency and lower lasing threshold. Furthermore, the passive resonances of the disk resonator were scanned around the lasing wavelength with a tunable laser (10 kHz TOPTICA CTL 1050). It was found that at the lasing wavelength, the device has a cold quality factor of 1.2 × 10^5 and is undercoupled. Increasing the coupling coefficient to achieve critical coupling at the pump wavelength could also be beneficial for the lasing performance due to an increase of the enhancement factor of the pump light circulating in the disk resonator.
The emission spectrum of the single-mode microdisk laser is heterodyned with the same tunable laser emitting at an almost identical wavelength [Fig. 2(a)] to obtain a low-frequency heterodyne beatnote (i.e., below 10 GHz) detectable by an RF spectrum analyzer (Hewlett Packard E4407B) [13]. The frequency of the beatnote is given by f_beat = c·|λ₁ − λ₂| / (λ₁·λ₂), with c being the speed of light in vacuum, and λ₁ and λ₂ being the wavelengths of the Al₂O₃:Yb³⁺ microdisk laser and of the external reference laser, respectively. The beatnote spectrum can be seen in Fig. 2(b); it contains two closely separated RF peaks, which arise from the splitting of the microdisk lasing mode caused by coupling between the clockwise (CW) and counterclockwise (CCW) propagating modes in the disk [27]. Both peaks of the RF beatnote shift when environmental perturbations are applied to the microdisk. The laser linewidth of the Al₂O₃:Yb³⁺ microdisk laser is determined from the linewidth of the heterodyne beatnotes in the RF spectrum to be ∼200-300 kHz [28]. A direct self-beating spectrum between the CW and CCW laser modes exhibits a 3 dB linewidth of ∼500 kHz (RF spectral resolution of 100 kHz, 50 ms measurement time), which confirms a laser linewidth of ∼250 kHz for each of the split laser modes, corresponding to a quality factor of ∼1 × 10⁹. The temperature and bulk refractive index sensitivities of the microdisk laser are characterized (Fig. 3). The beatnote frequency between the lower-frequency laser peak of the microdisk laser and the external laser is monitored while the temperature of the device is varied from room temperature to 25.4°C in steps of 0.5°C. During the measurements, 15 RF spectra per minute are recorded at a resolution of 1 MHz. A temperature sensitivity of 1.72 ± 0.03 GHz/K (6.02 ± 0.11 pm/K) is obtained, which is much smaller than the temperature sensitivity of Si microring resonators [29]. The bulk refractive index sensitivity is characterized by flowing water solutions of different concentrations of NaCl (0.0-0.5 wt. %) through the microfluidic channel. A bulk refractive index sensitivity of 5.74 ± 0.21 THz/RIU (20.1 ± 0.7 nm/RIU) is obtained, similar to the sensitivity reported earlier for a passive Al₂O₃ microring resonator [30]. Both peaks in the RF spectrum have identical sensitivities. The sensitivity is not very high compared with conventional silicon-on-insulator microring resonators [4], because the disk resonator has rather high confinement and a low fraction of optical power circulating in the analyte medium. Using a microring resonator or a thinner microdisk resonator could increase the sensitivity, although this could degrade the lasing performance through inefficient absorption of pump light. The LOD of the sensor is the smallest bulk refractive index variation that can be reliably detected (i.e., three times the standard deviation of the noise [5]). To determine the noise, the RF beatnote is recorded for 3 min while DI water is flowed through the microfluidic channel on top of the sensor [Fig. 4(a)]. Both RF peaks exhibit the same noise, σ = 7 MHz (i.e., 24 fm). The noise arises mainly from fluctuations of the temperature of the chip, of the power of the laser diode used for pumping, and of the microfluidic flow (i.e., mainly refractive index and temperature fluctuations).
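The beatnote relation is simple enough to verify numerically. The following minimal Python sketch (not part of the original experiment; the 10 pm reference detuning is an illustrative assumption) computes f_beat for two nearby wavelengths and converts an RF shift into its equivalent wavelength shift via Δλ = λ²·Δf/c:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def beat_frequency(lam1, lam2):
    """Heterodyne beatnote frequency between two lasers at wavelengths lam1, lam2 (m)."""
    return C * abs(lam1 - lam2) / (lam1 * lam2)

def rf_shift_to_wavelength_shift(df, lam):
    """Convert an RF beatnote shift df (Hz) to the equivalent wavelength shift (m)."""
    return lam**2 * df / C

lam_disk = 1024e-9            # microdisk laser wavelength (m), from the text
lam_ref = lam_disk + 0.01e-9  # hypothetical reference detuning of 10 pm
print(f"beatnote: {beat_frequency(lam_disk, lam_ref) / 1e9:.2f} GHz")   # ~2.9 GHz, below 10 GHz
print(f"7 MHz noise = {rf_shift_to_wavelength_shift(7e6, lam_disk) * 1e15:.0f} fm")
```

With the quoted σ = 7 MHz noise, the conversion reproduces the ∼24 fm figure given above.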
A further source of noise is fluctuation of the frequency of the external laser due to temperature or atmospheric pressure variations within the laser cavity. A LOD of 3.7 × 10⁻⁶ RIU can be extracted from these measurements. This LOD is similar to previous reports on passive microring resonator sensors, because the sensor is currently limited by its noise and therefore does not yet benefit from the much smaller intrinsic LOD of 5 × 10⁻⁸ RIU achieved in this active sensor (i.e., due to the increase in the Q-factor by about three orders of magnitude with respect to a passive microdisk). Fully exploiting the benefits of the narrow linewidth, and the high intrinsic LOD, of the active microdisk would require eliminating the noise sources present in the current system.
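As a worked check of the quoted figures, the LOD follows directly from the measured noise and the bulk sensitivity; a minimal sketch using only values from the text:

```python
# Limit of detection from measured noise and bulk sensitivity: LOD = 3 * sigma / S.
sigma_hz = 7e6                  # beatnote noise, sigma = 7 MHz (from the text)
s_bulk = 5.74e12                # bulk sensitivity, 5.74 THz/RIU (from the text)

lod = 3 * sigma_hz / s_bulk
print(f"LOD = {lod:.1e} RIU")   # ~3.7e-6 RIU, matching the reported value

# One plausible estimate of the intrinsic limit: using the ~250 kHz laser
# linewidth in place of the noise gives the order of the quoted 5e-8 RIU.
print(f"intrinsic ~ {250e3 / s_bulk:.0e} RIU")
```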
Finally, the microdisk laser is used to detect rhS100A4 proteins present in known concentrations, using synthetic urine as a model of a complex body fluid. Molecular recognition based on highly specific protein-antibody reactions is used for the biosensing. To that end, monoclonal antibodies able to bind rhS100A4 are immobilized onto the surface of the microdisks using the approach suitable for Al₂O₃:Yb³⁺ surfaces previously designed by us [30]. In order to determine the noise during biosensing experiments, samples of synthetic urine (Surine™ Negative Urine Control, Sigma Aldrich) are flowed over the microdisk laser at a flow rate of 40 μl/min [Fig. 4(b)]. An increase of the beatnote frequency can be observed during the initial 1 to 4 min after the introduction of the urine, due to temperature fluctuations. An average noise of σ = 30 MHz is determined over the last 10 min; in the protein experiments, this figure was σ = 25 MHz. Synthetic urine spiked with increasing concentrations of the rhS100A4 protein, ranging from 100 pM to 3 μM, is then flowed over the sensor. The evolution of the RF spectra due to the binding of proteins to the immobilized antibodies is recorded for 20 min per concentration. Figure 5 shows the shift of the lowest-frequency RF beatnote peak as a function of time for different protein concentrations. For all of them, a positive shift of the RF frequency occurs. This signal flattens over time, indicating that a dynamic equilibrium between binding and dissociation of the proteins to the antibodies is reached. Furthermore, the total frequency shift after 20 min increases with the protein concentration. The lowest detected rhS100A4 protein concentration in synthetic urine is 300 pM, for which a frequency shift of 162 ± 13 MHz is recorded, exceeding three times the noise in blank synthetic urine samples (90 MHz). This represents one of the lowest concentrations reported in the literature for label-free detection in a complex matrix (i.e., synthetic urine), and it is one order of magnitude lower than the LOD we achieved for the same protein using a passive ring resonator sensor [30]. However, as discussed above, this limit is still far from the intrinsic LOD implied by the laser linewidth. This result shows the possibility of using an active disk resonator to detect a clinically relevant cancer biomarker in a complex liquid, such as urine, at low LOD.
To conclude, in this work, we report the first proof of concept of the label-free biosensing capabilities of active, laser-based sensors based on Al₂O₃:Yb³⁺ microdisk resonators. The microdisk lasers can be integrated on-chip and, combined with microfluidics, exhibit narrow-linewidth single-mode lasing while operating in an aqueous environment. A heterodyne detection scheme using an external reference laser operating at a wavelength very close to the emission wavelength of the microdisk was used. A bulk refractive index sensitivity and LOD comparable with state-of-the-art passive sensors were achieved, with the advantage of using a simple, (potentially) portable, and low-cost readout scheme. Upon the stable binding of antibodies, the specific molecular recognition of rhS100A4 proteins, associated with cancer development, in synthetic urine was demonstrated. Detection of concentrations as low as 300 pM shows the biosensing capabilities of the Al₂O₃:Yb³⁺ microdisk resonators. These results pave the way towards biosensing platforms based on active, laser-based devices that are easy to integrate into point-of-care instruments equipped with portable, simple, and relatively cheap readout schemes. | 3,609.2 | 2019-12-05T00:00:00.000 | [
"Physics"
] |
PREdator: a Python-based GUI for data analysis, evaluation and fitting
The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly Python script for data analysis, evaluation and fitting. PREdator is presented using the example of NMR paramagnetic relaxation enhancement analysis.
Introduction
In nearly all fields of physical, chemical or biological research it is required to convert experimental data into mathematical expressions. In particular, the determination of a "best fit" of a series of data points to a mathematical model is a pivotal and potentially time-consuming step in the extraction of results and data evaluation.
Nuclear magnetic resonance (NMR) spectroscopy not only provides structural information at the atomic scale on biological macromolecules but also on their dynamics, and hence a more complete description of the system under investigation. Furthermore, dynamics parameters may also contribute to the understanding of proteins and their interactions with other proteins, nucleic acids or small ligands. The determination of longitudinal (R₁) or transverse (R₂) relaxation rates of protons in biological macromolecules delivers valuable molecular dynamics information on the system under investigation. For example, this information can be used to determine the interaction interface between individual domains or subunits on the basis of surface accessibility studies in situations where no other NMR parameters, e.g. nuclear Overhauser enhancement (NOE) or chemical shift perturbation data, are observable. In such cases, surface accessibility studies can be performed by using chemically inert paramagnetic probes, e.g. paramagnetic metals, oxygen or nitroxides, as cosolvents [1]. Protein residues located in the interior of proteins or at the interaction interface are shielded from the paramagnetic agent and experience a weak paramagnetic relaxation enhancement (PRE). In contrast, residues located at the solvent-accessible surface experience a strong PRE.
PRE can be experimentally derived from longitudinal (R₁) or transverse (R₂) relaxation rate measurements. A sensitive and reliable measure of transverse PREs can be obtained from cross-peak intensities for the states with and without the paramagnetic cosolvent. Relaxation rates are measured by a series of 2D saturation-recovery spectra (¹H,¹³C-HMBC or ¹H,¹⁵N-CRINEPT [2]), in which the time delay during which relaxation takes place is gradually increased. The experiments are repeated with different concentrations of the paramagnetic agent. To extract the relaxation rates, the signal intensities are fitted to I = I₀(1 − e^(−Rᵢ·t)), where I₀ is the intensity after an infinite recovery delay, Rᵢ is the longitudinal or transverse relaxation rate and t is the time. The PRE is then given by the slope of the relaxation rate as a function of the concentration of the paramagnetic agent [3-5].
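A minimal sketch of this first fitting step, using SciPy's curve_fit with the saturation-recovery model above (the delay and intensity values are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, i0, r):
    """Saturation-recovery model I(t) = I0 * (1 - exp(-R * t))."""
    return i0 * (1.0 - np.exp(-r * t))

t = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])       # recovery delays (s)
i = np.array([0.09, 0.18, 0.32, 0.54, 0.79, 0.95, 1.0])   # peak intensities (a.u.)

popt, pcov = curve_fit(recovery, t, i, p0=[1.0, 2.0])
i0_fit, r_fit = popt
r_err = np.sqrt(np.diag(pcov))[1]   # one-standard-deviation error, as reported by PREdator
print(f"R = {r_fit:.2f} +/- {r_err:.2f} s^-1")
```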
Even though a variety of tools (e.g. MATLAB 8.0 and Statistics Toolbox 8.1 (The MathWorks, Inc., Natick, MA, US), GNU Octave [6] or R [7]) and NMR software suites (NMRView [8], CCPN [9], ROTDIF [10]) are available for the extraction and fitting of relaxation data, here we provide a straightforward Python 3-based application with a graphical user interface not only for the extraction of relaxation data but also for the calculation of PREs. The script should also be useful for fitting and evaluation of virtually any set of data series.
Implementations and results
PREdator was initially conceived for the analysis of PREs. The application was written in a Mac OS X environment, but it can be run under any operating system for which a Python 3 interpreter is available. Python 3 and the packages Matplotlib [11], SciPy/NumPy [12] and dill [13] are required to run PREdator.py. Matplotlib [11] is used for data visualisation. All generated plots can be saved as either raster (PNG) or vector format files (PDF or EPS). PREdator also provides the option to save the current session and restore it later. For data serialization the dill package is used [13].
Figure 1: PREdator interface elements. The four interactive analysis windows in PREdator. A) Graphical representation of the original data and the fitted curve. B) Window for selection or deselection of data points that are considered for fitting, drop-down menu for fitting function selection, fields for entering initial fitting parameters and experimental error, and entry fields for the axis labels of A). C) Graphical summary of one of the selectable fitting parameters over the range of submitted data (e.g. amino acid residues). D) Summary of the resulting fitting parameters, which can be saved as a text file, shown in the upper text field. The fitting parameter (a, b or c) selected here is displayed over the data range in C). Entry fields to adjust the title and the axis labels for C) are also provided.

The input file has to contain comma-separated data (see the example files provided with the download package).
In an initial dialogue the user has the opportunity to choose a predefined fitting function from a list or to enter a self-defined fitting function with up to three fitting parameters. The use of NumPy allows self-defined fitting functions to be built from predefined mathematical expressions (e.g. sin, cos or tan).
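One plausible way to turn such a user-entered expression into a callable, sketched below, is to evaluate it in a restricted namespace that exposes only NumPy and the fit variables; this illustrates the idea rather than PREdator's actual implementation:

```python
import numpy as np

def make_fit_function(expression):
    """Compile a string such as 'a * np.sin(b * x) + c' into f(x, a, b, c)."""
    code = compile(expression, "<user function>", "eval")

    def f(x, a, b, c):
        # Only NumPy and the fit variables are exposed to the expression.
        return eval(code, {"np": np, "__builtins__": {}},
                    {"x": x, "a": a, "b": b, "c": c})
    return f

f = make_fit_function("a * np.exp(-b * x) + c")
print(f(np.array([0.0, 1.0]), 1.0, 2.0, 0.5))   # [1.5, 0.635...]
```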
PREdator provides an initial estimate of the parameters to be fitted. If the user knows the order of magnitude of the fitting parameters and the experimental error, these initial fitting and error parameters can be entered directly. Data fitting is performed with the curve_fit function implemented in the SciPy package (module: scipy.optimize) [12]. For visual inspection, the fitted curve and the original data points are shown as a graph (Figure 1).
An operating window is provided to re-adjust the fitting function and/or the fitting parameters. Obvious data outliers can be deselected so that they are not considered for fitting. The change of the fitting outcome upon selection and deselection of data points gives a qualitative estimate of the fitting robustness. A summary of the fitting results and errors is given in a second window. Fitting errors are provided as one-standard-deviation errors. The user has the option to save the results to a text file.
For the calculation of the residue-specific PRE, the relaxation rate (R₁ or R₂) is obtained for each residue and each concentration of the paramagnetic cosolvent. The cosolvent concentration dependent relaxation rates for individual residues are subsequently correlated by a second fitting. The slope of the fitted function from this second fitting step delivers the PRE for each individual residue.
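A minimal sketch of this second fitting step (the concentrations and rates are invented for illustration): the PRE for a residue is simply the slope of a linear fit of its relaxation rate against cosolvent concentration.

```python
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 4.0])      # paramagnetic cosolvent concentration (mM)
r2 = np.array([12.1, 14.0, 16.2, 19.9])    # fitted R2 rates at each concentration (s^-1)

slope, intercept = np.polyfit(conc, r2, 1)
print(f"PRE = {slope:.2f} s^-1 per mM")    # ~1.96 s^-1 per mM for these toy values
```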
PREdator thus delivers fitting parameters in a first step (e.g. R₁ of an individual residue of a protein for different concentrations of the paramagnetic cosolvent) and, in a second step, allows those fitting parameters, obtained under different conditions, to be correlated. The principle of such an analysis is not restricted to the evaluation of PREs and is applicable to all kinds of experimental data sets where one type of measurement is repeated under different conditions. Examples include the analysis of fluorescence recovery after photobleaching (FRAP) in a living cell as a function of temperature, or the assessment of a DNA-protein interaction under different salt, pH or temperature conditions to compute properly fitted binding curves. The binding curves in turn can be used to derive the condition-dependent affinity parameter K_d (equilibrium dissociation constant).
Conclusions
In summary, PREdator is a time-saving tool for visual inspection, fitting and analysis of series of data points.
The application is freely accessible at http://nmr.flileibniz.de/nmrsoftware.shtml and can be adapted to user requirements. | 1,829.8 | 2014-09-24T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Microarray analysis identifies candidate genes for key roles in coral development
Background: Anthozoan cnidarians are amongst the simplest animals at the tissue level of organization, but are surprisingly complex and vertebrate-like in terms of gene repertoire. As major components of tropical reef ecosystems, the stony corals are anthozoans of particular ecological significance. To better understand the molecular bases of both cnidarian development in general and coral-specific processes such as skeletogenesis and symbiont acquisition, microarray analysis was carried out through the period of early development, when skeletogenesis is initiated and symbionts are first acquired. Results: Of 5081 unique peptide coding genes, 1084 were differentially expressed (P ≤ 0.05) in comparisons between four different stages of coral development, spanning key developmental transitions. Genes of likely relevance to the processes of settlement, metamorphosis, calcification and interaction with symbionts were characterised further and their spatial expression patterns investigated using whole-mount in situ hybridization. Conclusion: This study is the first large-scale investigation of developmental gene expression for any cnidarian, and has provided candidate genes for key roles in many aspects of coral biology, including calcification, metamorphosis and symbiont uptake. One surprising finding is that some of these genes have clear counterparts in higher animals but are not present in the closely related sea anemone Nematostella. Secondly, coral-specific processes (i.e. traits which distinguish corals from their close relatives) may be analogous to similar processes in distantly related organisms. This first large-scale application of microarray analysis demonstrates the potential of this approach for investigating many aspects of coral biology, including the effects of stress and disease.
Background
Cnidarians are the simplest animals at the tissue level of organization, and are of particular importance in terms of understanding the evolution of metazoan genomes and developmental mechanisms. Members of the basal cnidarian Class Anthozoa, which includes the sea anemone Nematostella and the coral Acropora, have proved to be surprisingly complex and vertebrate-like in terms of gene repertoire [1-3], and are therefore of particular interest. Scleractinian corals are also of fundamental ecological significance in tropical and sub-tropical shallow marine environments as the most important components of coral reefs. Surprisingly, both the general molecular principles of cnidarian development and many aspects of the functional biology of corals are only poorly understood. Whole genome sequences are now available for both the textbook cnidarian Hydra magnipapillata and the sea anemone Nematostella vectensis. However, corals are distinguished from Nematostella and other cnidarians by the presence of an extensive skeleton composed of calcium carbonate in the form of aragonite. The ability to carry out calcification on a reef-building scale is enabled by the obligate symbiosis between scleractinians and photosynthetic dinoflagellates in the genus Symbiodinium.
Expressed Sequence Tag (EST) projects carried out on Acropora millepora and Nematostella vectensis have provided insights into the evolution of animal genomes [2,3]. The latter publication, based on ca. 5800 unigenes from the coral Acropora and 10,500 unigenes from the sea anemone Nematostella, revealed the surprisingly rich genetic repertoire of these morphologically simple animals. The genomes of anthozoan cnidarians encode not only homologs of numerous genes known from higher animals (including many that had been assumed to be 'vertebrate-specific'), but also a significant number of genes not known from any other animals ('non-metazoan' genes; [3]). This picture of genetic complexity has been augmented by the recently completed whole genome sequence (WGS) of Nematostella vectensis [1], for which approximately 165,000 ESTs are now available. Similar resources exist for Hydra magnipapillata [4,5], although the much larger genome size of this organism has consequences for the completeness of the assembly. Both of these other cnidarians not only lack a calcified skeleton, but also do not enter symbioses. Entry into a symbiosis can have profound effects on gene expression patterns, with changes to immune function and to many metabolic functions including CO₂ cycling, nutrient cycling, metabolite transfer and reactive oxygen quenching [6,7]. The phylogenetic position of Nematostella makes this a particularly useful comparator because both Nematostella and Acropora are classified into the anthozoan subclass Hexacorallia (Zoantharia).
Information and resources relevant to microarray studies on corals have recently been summarised [8]. Few precedents exist for the approach used here; the most directly relevant previous study is an array experiment comparing symbiotic and aposymbiotic sea anemones [9]. To gain insights into the molecular bases of coral development, including nematocyst formation, metamorphosis, and the processes of symbiont uptake and calcification, developmental microarray experiments were carried out using 12,000-spot cDNA arrays representing 5081 Acropora millepora unigenes which, based on the EST sequence, are predicted to give rise to a bona fide protein. Four stages of coral development were compared, spanning the major transitions of gastrulation and metamorphosis (Figure 1). These comparisons, which constitute the most comprehensive analysis of the development of any cnidarian to date, provide insights into the overall dynamics of the transcriptome during development as well as candidate genes for roles in metamorphosis, calcification and symbiont uptake. Spatial expression patterns were determined for many of the candidate genes identified in the array experiments. Comparisons with Nematostella, Hydra and other animals imply that nominally coral-specific processes are executed by both conserved and novel (taxon-specific) genes, and suggest some intriguing parallels with other systems.

Figure 1: Scanning electron micrographs of developmental stages in the Acropora millepora lifecycle. At spawning, egg-sperm bundles are released by the colony and float to the surface, where they break up into individual eggs and sperm. Upon release and fertilization of the egg, cell division first produces a spherical bundle of cells which then flattens to form a cellular bilayer called the prawnchip (PC). Following gastrulation the spherical gastrula elongates to a pear shape as cilia develop. Further elongation produces a motile presettlement planula larva (PL), possessing a highly differentiated endo- and ectoderm and an oral pore. Upon receipt of an appropriate cue, the larva settles and metamorphoses, forming the primary polyp (PO). Following calcification, symbiont uptake, and growth and branching, the adult colony is formed (A). The stages labelled with yellow letters represent those from which RNA was extracted, labelled and hybridized to the slides. Stages circled in red are those from which ESTs were spotted onto the slides.
The identification and composition of synexpression clusters
Of the 5081 unigenes giving rise to predicted peptides that are represented on the arrays, a total of 1084 unigenes (2462 spots) were found to be up- or down-regulated (P ≤ 0.05) between consecutive stages. The microarray results were validated by virtual northern blots. The results for eight arbitrarily chosen clones are shown in Additional File 1; in each case the observed expression pattern corresponds with the microarray results.
Cluster analysis identified six major synexpression clusters (Figure 2A) which map onto the major stages of coral development (Figure 2B). Three of these clusters (CII, CIII and CIV) are of most interest from the perspective of coral-specific biology. Candidates for roles in nematocyst development, receipt of settlement cues and the implementation of metamorphosis may be represented in cluster II (genes up-regulated in planula) or cluster III (genes up-regulated in planula and primary polyp). Similarly, genes involved in the early stages of calcification are predicted to occur in cluster IV (genes up-regulated in primary polyp) and cluster III (genes up-regulated in planula and primary polyp). These same two clusters (CIII and CIV) may also provide candidates for roles in the establishment of symbiosis. Two other synexpression clusters (CI and CVI) are of more general developmental interest. The largest, cluster I (genes down-regulated after embryogenesis), consists of 567 unigenes whose transcript levels decreased after gastrulation and remained low (Figure 2A). Cluster V (genes up-regulated in adult) consists of only 43 unigenes. The small size of this cluster may be due to the absence of adult material amongst the cDNAs spotted on the array and therefore presumably reflects only a small proportion of the total number of genes that are up-regulated in adult coral. Functional breakdown data for the genes in these clusters are summarised in Table 1. Overall, approximately 15% of the differentially expressed genes are coral-specific (no match to database sequences at < 1 × 10⁻⁵), but the relative proportion of these nominally taxon-specific genes varies widely between the synexpression clusters. Clusters II (genes up-regulated in planula) and IV (genes up-regulated in primary polyp) contained the highest proportions (23.5% and 26%; 26 and 20 unigenes respectively) of unique genes, but unique genes accounted for only 12% of cluster I (Figure 2C). Conversely, cluster VI contained the highest proportion (59%) of 'core' genes, which are defined as genes represented in animals and other kingdoms (Figure 2C). The proportion of Acropora unigenes matching only to other cnidarians was relatively constant across clusters, cluster VI (7%; 14 unigenes) being somewhat below the 9-11.5% range of the other clusters (data not shown).
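For readers unfamiliar with synexpression clustering, the following hedged Python sketch illustrates the general idea on toy data (the actual analysis used the measured array intensities, not these values): genes are grouped by the similarity of their expression profiles across the four stages.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: genes; columns: prawn chip, planula, primary polyp, adult (log-ratios).
profiles = np.array([
    [ 2.1, -0.5, -0.8, -1.0],   # high early, down later (cluster I-like)
    [ 1.9, -0.3, -0.9, -1.1],
    [-1.0,  1.8,  1.5, -0.6],   # up in planula and polyp (cluster III-like)
    [-0.8,  1.6,  1.7, -0.4],
    [-1.2, -0.7,  0.2,  2.0],   # up in adult (cluster V-like)
])

tree = linkage(profiles, method="average", metric="correlation")
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)   # genes with similar stage profiles share a cluster label
```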
Approximately 10% of cluster I (genes down-regulated after embryogenesis) consists of genes in functional category AIII, genes involved in cell replication [10], probably reflecting the extent to which cell proliferation dominates early embryogenesis. 29.1% of cluster VI (genes up-regulated after embryogenesis) were classified into functional category AV: protein synthesis cofactors, tRNA synthetase, and ribosomal proteins, whereas all other clusters contained very few genes in this category. 27.2% of cluster III (genes up-regulated in planula and primary polyp) were classified into AVI: Intermediary synthesis and catabolism enzymes; this is significantly more than in any other cluster.
Planula larvae are primarily dependent upon stored lipid, whereas the energy requirements of adult corals are often largely met by photosynthetic products exported from their dinoflagellate symbionts. These physiological changes are reflected by shifts in the coral transcriptome. For example, lipases are highly represented amongst the planula ESTs, but strongly down-regulated thereafter. Also of note are dramatic differences in representation of genes in category BII (intracellular signalling) between cluster I (10.5%) and cluster II (0.9%), and in genes in category BIII (extracellular matrix and cell adhesion) between cluster I (0.9%) and cluster II (14.4%). These shifts, and the sharp spike in expression of ECM and cell adhesion genes, are associated with the transition from an undifferentiated proliferative stage to the emergence of differentiated cell types.
Lectins related to sea cucumber CEL-III are strongly expressed during metamorphosis in Acropora
Whilst our understanding of metamorphosis in marine invertebrates is very incomplete, in several cases key molecules implicated in the underlying processes have been identified, and these include lectins [11,12]. Studies of coral settlement and metamorphosis have indicated that the inductive morphogenetic cue is exogenous/environmental and, whilst the exact structure of the metamorphosis-inducing morphogen remains elusive, lipopolysaccharides are prime candidates [13], suggesting that cell surface recognition by coral larvae may be mediated by lectins. Lectins are therefore of particular interest as candidates for roles in settlement and metamorphosis as well as in other developmental processes, including the uptake of Symbiodinium (see below). Indeed, a mannose-binding lectin has recently been described from A. millepora which binds both bacteria and Symbiodinium and may therefore have roles in both immunity and symbiosis [14]. A search for genes encoding lectin domains in clusters II, III and IV identified six unigenes, two of which, A036-E7 and A049-E7, have significant overall similarity to a haemolytic lectin from sea cucumber. They lack clear Nematostella (or Hydra) counterparts, but a homologous gene is present in the Caribbean coral Acropora palmata [15]. The two A. millepora proteins are 82.1% identical to one another (Figure 3A), and 50.4% and 48% identical to Cucumaria echinata CEL-III [16], respectively. These were amongst the most highly represented of the differentially expressed unigenes (A036-E7 was represented by 13 ESTs and A049-E7 by 4) and, based on their expression patterns, they are candidates for roles in metamorphosis. In situ hybridization (Figure 3B, C) revealed that both A036-E7 and A049-E7 are expressed in a subpopulation of ectodermal cells in the oral half of the larva (Figure 3B1-2; C1-2). In the post-settlement primary polyp they are exclusively expressed orally, on the side that is exposed to the environment, the other, non-expressing side being against the substratum (Figure 3B3-4; C3-4). C. echinata CEL-III functions as an oligomer, apparently causing osmotic rupture of cell membranes after attachment to membrane-bound sugars [16,17], and the high sequence similarity suggests similar roles for the two Acropora proteins in cell recognition and lysis for tissue remodelling during metamorphosis. Alternatively, expression on the exposed surface of the polyp is also consistent with a role in self-defence, and could indicate a function in lysis of invading microorganisms by a similar mechanism, as suggested by Kouzuma et al. [17].

Figure 2: Summary of microarray results. (A) Graphical representation of the six expression clusters: yellow corresponds to upregulation and blue to downregulation. Each row corresponds to an EST and each column to a developmental stage, as labelled in Figure 1. Clusters I-VI consist of genes with their highest expression in the prawnchip, presettlement, presettlement and postsettlement, postsettlement, adult, and post-gastrulation stages, as diagrammed in (B). Presettlement orientation is oral to the left; postsettlement orientation is oral pointing out of the plane of the page. (C) Pie charts classifying the genes in each cluster into unique genes (blue: unique to Acropora), core genes (purple: matching a database entry in non-Metazoa, Radiata and Bilateria) and other (light yellow: any combination of any two of non-Metazoa, Radiata or Bilateria). Note that whilst 1084 unigenes were differentially expressed, the total number of unigenes in clusters is 1161; this is because 70 unigenes fall into two or more clusters, possibly due to the existence of splice variants for some unigenes. The number of unigenes in each cluster is given in brackets.
Other lectins in nematocyst differentiation
Three of the four remaining lectin domain-containing proteins (A044-C2, A032-H1, and A043-H7) share an unusual structure, as they are each predicted by InterProScan [18] to contain an N-terminal signal peptide (for transport to the ER and secretion or organelle targeting), a central collagen domain, and a C-terminal galactose-binding lectin domain (Figure 4). Blast searching showed that all three were most similar to Nematostella proteins, and structural comparisons indicate that these Nematostella and Acropora proteins, although resembling the mini-collagens known from Hydra [19], are thus far known only from anthozoan cnidarians. Canonical mini-collagens [19,20] are components of the walls of cnidarian nematocysts, and are defined by the presence of approximately fourteen Gly-X-Y repeats flanked by proline-rich and Cys-repeat regions. The Acropora molecules described here, together with Nematostella mini-collagen-like proteins, are distinct in also containing lectin domains; there are no Hydra proteins which contain both of these domains. Both A044-C2 and A032-H1 have uninterrupted mini-collagen repeats, and for these, whole-mount in situ hybridization revealed a common expression pattern, with transcripts first appearing in scattered ectodermal cells which are more abundant toward the oral end of the planula and then becoming limited to the oral side of the post-settlement polyp (Figure 4A, B). Nematocysts are first apparent in the early planula larva (Additional File 2), and sections of embedded whole-mount in situ preparations reveal expression in presumed cnidoblasts (Additional File 3), but also in other cells without the characteristic cnidoblast morphology. Whether these cells are developmental stages of cnidoblasts, or an entirely different class of cell, remains to be established. However, in the third of these related proteins, A043-H7, the mini-collagen repeat is interrupted, and a completely different expression pattern is observed (see below).
Whereas the five proteins discussed above all contain galactose-binding lectin domains, the last of these six differentially expressed proteins (A043-D8) contains a C type lectin domain. Moreover, whilst a signal peptide is present, A043-D8 does not contain a mini-collagen domain. As in the case of A044-C2 and A032-H1, expression of A043-D8 appears in scattered ectodermal cells as the planula is developing ( Figure 4C), although the distribution of these cells appears to differ somewhat from those shown in Figures 4A and 4B. Histological sections fail to reveal any evidence of expression in obvious cnidoblasts.
A potential mediator of symbiont uptake

Acropora species acquire symbionts directly from the environment and, although uptake in the wild has only been observed a few days after settlement [21], larvae of Acropora [22,23] and a number of other coral species [24,25] are competent to take up symbionts. However, the exact time and mode of uptake remain to be established. Lectin/polysaccharide signalling is used in many systems as a mechanism for symbiotic recognition [26], and has been implicated in the establishment of symbiosis in various marine invertebrates (e.g. [27]). In the octocoral Sinularia lochmodes, a lectin is involved in the conversion of Symbiodinium from the motile to the non-motile form required for symbiosis [28,29]. Also, masking cell surface glycoproteins with lectins decreases the rate of Symbiodinium infection of the sea anemone Aiptasia pulchella [30], and enzymatic digestion of cell surface glycans prevents Symbiodinium recognition and the establishment of symbiosis in the coral Fungia scutaria [31]. Although Smith [32] has argued otherwise, these more recent experiments point to a possible role for lectins in symbiont recognition/uptake in corals.
Figure 3: Sequence comparison and whole-mount in situ hybridization of lectin-coding genes A036-E7 and A049-E7. (A) Alignment of A036-E7 and A049-E7 amino acid sequences with C. echinata CEL-III reveals that they are 82.1% identical (90.6% similar) to one another and 50.4% (65.1%) and 48% (64%) identical to CEL-III, respectively. Black boxes represent identities and grey shaded boxes similarities. (B, C) Localisation of A036-E7 (B) and A049-E7 (C) transcripts (dark purple) in presettlement planula larvae (1), metamorphosing larvae (2), postsettlement polyps viewed from the oral side (3), and in cross section with the mouth pointing upward (4). Expression in the oral ectoderm is consistent with a role in metamorphosis or defence against pathogenic microorganisms.
Figure 4: Whole-mount in situ hybridization of lectin-coding genes A044-C2, A032-H1, A043-D8 and A043-H7.

The one differentially regulated coral protein containing a lectin domain and with an expression pattern consistent with a role in symbiont uptake is A043-H7, introduced in the previous section as a mini-collagen-like protein.
Unlike in the proteins with similar domain architecture (A044-C2 and A032-H1), the mini-collagen domain of A043-H7 is interrupted (which may have structural consequences), and the gene's expression pattern is completely different. The expression pattern of A043-H7 immediately prior to settlement (Figure 4D) is consistent with a role in symbiont uptake since, in contrast to many other cnidarians, the endoderm of the Acropora planula is tightly packed with yolk cells and frequently is hollow only immediately adjacent to the oral pore. As the endoderm is the most common route of cnidarian infection (see Discussion), the endodermal region immediately adjacent to the oral pore (i.e. the zone of A043-H7 expression) is a probable site of symbiont infection in Acropora larvae. Confocal microscopy was recently used to demonstrate the binding of an A. millepora mannose-binding lectin, which was not among our ESTs, to Symbiodinium, but its localization within the coral remains unknown [14].
Conserved and novel genes with roles in calcification
The molecular basis of calcification in corals is not well understood; the process involves the deposition of calcium carbonate in an area defined by an organic matrix [33] and is initiated immediately after settlement and prior to metamorphosis [34]. Initially a flattened plate is laid down, upon which are deposited radiating vertical walls corresponding to the septa which give the polyp its six-fold symmetry. Initial calcification can, and in the case of Acropora millepora does, happen in the absence of Symbiodinium, but the massive calcification of larger colonies is dependent on the photosynthetic symbiont through interacting cycles of respiration, photosynthesis and calcification. Although many animal phyla include calcifying representatives, few components of the calcification machinery appear to be conserved between different lineages. For example, in the scleractinian Galaxea fascicularis, one of the most prevalent protein components of the calcifying organic matrix is galaxin [35], which appears to be unique to corals. One exception to this heterogeneity is the alpha-type carbonic anhydrase family, which has been implicated in CaCO₃ deposition from sponges to vertebrates [36]. Most animals have multiple carbonic anhydrases; distinct subfamilies are recognised [37,38], each of which is widely distributed phylogenetically, but in addition some calcifying animals have atypical carbonic anhydrases that may represent lineage-specific adaptations to facilitate CaCO₃ deposition. For example, nacrein, a soluble organic matrix protein in the nacreous layer of pearl oysters, contains a carbonic anhydrase domain that is split by a Gly-X-Asn repeat domain [39] which may have a regulatory role [40]. In a directly relevant example, Tambutte et al. [38] have recently demonstrated that active carbonic anhydrase is present in the organic matrix of Tubastrea aurea and plays a direct role in the calcification process. In another recent paper, Moya et al. [41] have cloned, sequenced and immunolocalized a previously undescribed CA from the coral Stylophora pistillata; it is localized in the calicoblast ectoderm, from which it is secreted, and has a CA catalytic function. In terms of understanding the bases of skeleton deposition, carbonic anhydrases are therefore of particular interest.
Two carbonic anhydrase genes, C007-E7 and A030-E11 (cluster III), are up-regulated in the planula larva and postsettlement stages, and in situ hybridization shows that the expression of each gene is spatially restricted at those stages of development. C007-E7 is expressed most strongly in a restricted area at the aboral end of the metamorphosing larva and primary polyp (Figure 5A1, A2). The expression of this gene in a disc at the aboral end is consistent with a role in calcification, as this is the site where the process is initiated [34,42-44]. In the slightly older polyp the expression in the aboral disc decreases to a circumferential ring (Figure 5A3), and still later (Figure 5A4) this ring is maintained and expression commences in the tentacles. This expression pattern in the basal plate is consistent with involvement of carbonic anhydrase C007-E7 in the onset of calcification, but indicates that this carbonic anhydrase is not involved in the phase of calcification during which the adult structures are formed.
The second carbonic anhydrase, A030-E11, was expressed in the oral half of the metamorphosing larva (Figure 5B1) and the entire ectoderm of the primary polyp, except the aboral disc (Figure 5B2) and the oral pore (data not shown). In older polyps this carbonic anhydrase is expressed in the septa, where calcification is occurring to form adult structures (Figure 5B4).
Expression analysis reveals that some "unique" coral genes have spatial expression patterns strikingly like that of carbonic anhydrase C007-E7, i.e. consistent with roles in the initiation of calcification. Figures 5C1 and 5D1 show genes with expression at the aboral end of the metamorphosing larva and in the basal plate of the metamorphosing larva, respectively. However, differences are apparent slightly later: C012-D9 expression becomes restricted to an aboral ring, and then appears to be switched off (Figure 5C3, C4). Whilst B036-D5 expression also appears to be down-regulated in the basal plate, transcripts can be visualised in the mesenteries (Figure 5D4) at a stage when C012-D9 transcripts are undetectable. Neither of these genes encodes known domains, and neither could be functionally classified (using BlastP, Phi-Blast and InterProScan); however, their expression patterns are consistent with roles in early calcification.

Figure 5: Whole-mount in situ hybridization of two carbonic anhydrases and two genes of unknown function which may be involved in calcification.
A synexpression cluster of coral-specific genes
As indicated above, the proportion of unique genes was highest in synexpression clusters II ('planula') and IV ('primary polyp'). To investigate their possible roles, in situ expression patterns were determined for many of these coral-specific genes. Many gave specific expression patterns, some of which are consistent with roles in processes such as calcification, as previously discussed. In other cases, although groups of "unknown" genes appear to be expressed in the same cells, it is more difficult to interpret the likely biological significance of the patterns. One example of this phenomenon is provided by three 'planula' cluster unigenes (A044-A9, C008-B2 and C014-E10) with no clear hits to genes in other organisms; the corresponding proteins are each predicted to contain a signal peptide, and C014-E10 contains a SEA domain (an extracellular domain involved in carbohydrate binding).
In situ analysis showed that in the planula the three transcripts are co-localised in a subpopulation of ectodermal cells that is concentrated orally. The post-settlement expression patterns of these three genes were also very similar, transcripts in each case being localised in scattered ectodermal cells of the polyp (Figure 6A-C). The apparent co-localisation and co-expression of these unrelated but unique unigenes suggest that they may function in a common process or signalling pathway. The size of the synexpression group to which these three genes belong is unknown, but such gene clusters are of great interest, since they may represent coral-specific pathways or functions. Unfortunately, such genes also present great analytical difficulties: their lack of clear homologs limits the inference of function from structure, and the molecular tools required to test function are not yet available in corals, although progress is being made in that direction with other cnidarians [45-47].
Validation of the approach and methodology
Virtual northern blots for eight genes were consistent with the microarray results, confirming their reliability. In addition, and consistent with the accuracy of the microarray results, several mini-collagen-like proteins were up-regulated in the planula. Mini-collagens have thus far only been described from nematocysts, cnidarian-specific structures which first appear at the planula stage in A. millepora (Additional Files 2, 3).
Taxonomic and functional breakdown of the genes
The composition of the EST set used in these microarray experiments has previously been considered specifically with respect to the complement of developmental signalling pathway components [2,3], but this paper is the first to examine broad-scale changes in gene expression during development for any cnidarian. The use of different criteria and thresholds, and the ever-changing baseline provided by the databases, complicates making direct comparisons with other developmental studies. For example, although a recent paper on developmental gene expression in the ascidian Molgula [48] addressed many of the same questions, it focussed specifically on highly expressed genes (i.e. only those accounting for more than 0.2% of the total number of ESTs), so it is not possible to interpret apparent differences, such as in the percentage of unique genes. In terms of developmental changes, it is particularly noteworthy that the percentage of "core" genes (59%; i.e. those genes shared with members of other kingdoms as well as other animals) is highest in cluster VI and that the percentage of unique genes (12%) is lowest in cluster I. Presumably these figures reflect shifts from common cellular pathways during very early development to greater cellular and molecular diversification later. As in many other animals, the early development of Acropora appears to involve many stored maternal mRNAs. The composition of the maternal mRNA pool is complex, consisting principally of low-abundance transcripts including those involved with cell division, RNA metabolism, and regulation of gene transcription (L McFarlane, unpublished). Among genes of particular interest, H2A.Z and H1, histones with roles in priming chromatin for developmental gene expression [49] in a variety of other systems, are highly represented in the prawn chip ESTs and strongly down-regulated thereafter, as are cyclins A and B3. In Drosophila and Xenopus, maternal cyclin transcript levels are initially very high and then decrease dramatically after the onset of gastrulation [50-52]. Acropora may therefore follow this pattern of abundant maternal cyclin transcripts that drive very rapid cell proliferation early in embryogenesis, followed by lower transcript levels with the onset of slower, developmentally regulated cell cycles. Cell cycle transcripts such as cyclins A and B were also abundant among the cleaving-embryo ESTs of Molgula tectiformis [48] and in pre-gastrulation stages of Xenopus [53] and Drosophila [54].
Lectin domain proteins are potentially involved in diverse processes
There are a number of precedents for the involvement of lectin-containing proteins in metamorphosis. Lectins are differentially expressed at metamorphosis in two ascidians, Herdmania curvata [55] and Boltenia villosa [11,12]. In Boltenia, four lectins and two key lectin pathway genes are up-regulated in the larva or the newly settled adult [11]. The lectin-induced complement pathway, which is initiated by a mannose-binding lectin, is important in Boltenia for the recognition of those bacteria which induce metamorphosis and tissue remodeling [12]. It is possible that the lectins up-regulated at metamorphosis in Acropora have an analogous role in activating tissue remodelling. Consistent with this idea, a possible complement effector, the perforin domain protein apextrin, is expressed in a strikingly similar pattern to those of the CEL-III lectins during metamorphosis in Acropora [56].

Figure 6: Whole-mount in situ hybridization of three genes of unknown function. In addition to the temporal synexpression established by microarray, these three genes share common expression patterns and thus form a temporo-spatial synexpression group. Localisation of (A) A044-A9, (B) C008-B2 and (C) C014-E10 transcripts (dark purple) in (1) prawnchip, (2) presettlement larva, and (3) postsettlement polyp. Orientation in presettlement and postsettlement larvae is oral upward. Lack of expression in the prawnchip is followed by expression in a subset of ectodermal cells concentrated at the oral end of the presettlement larvae and postsettlement polyps. Their synexpression, both temporal and spatial, suggests that they may be a novel group of genes interacting with one another.
Lectin domain-containing proteins also potentially function in the recognition of symbionts by corals. Lectin/polysaccharide signalling is used in many systems as a mechanism for symbiont recognition, the most widely known example being the recognition of sugars on the surface of nitrogen-fixing bacteria by the lectins of their host legume during the establishment of their symbiosis. Symbiodinium in scleractinian corals reside in the endoderm, and two mechanisms of entry have been described in those corals that acquire them from the environment. The first is directly into the endoderm via the oral pore, after it is formed 3-5 days post fertilization, in association with feeding, as was demonstrated in the coral Fungia scutaria [25] and the anemone Anthopleura elegantissima [57]. The second, also demonstrated in Fungia [24], is entry via the epithelium pre- or post-gastrulation; symbionts which have entered by the ectoderm are then shunted to the endoderm, where they are retained [24]. Elegant studies in the latter half of the last century described the cell biology of symbiont uptake and retention (e.g. [58]), and it has recently been established that members of the Rab family of proteins are involved in determining whether symbionts are digested or retained [59-61]. Symbiodinium are not transmitted through the eggs of A. millepora, and while planulae can be infected [23], this may only occur after the oral pore has opened shortly before settlement ([22]; AH Baird, pers. comm.), although the timing and mode of symbiont uptake remain to be firmly established. The limited available field observations indicate that infection normally does not occur until a few days after settlement in A. millepora [21]. These observations point to the endoderm as the likeliest point of Symbiodinium uptake, but do not rule out a possible role for the ectoderm. There is clear evidence from a number of cnidarian species of selective maintenance of the most "appropriate" clade of symbiont, while conclusions on the specificity of uptake and its possible mechanisms are equivocal, perhaps due to interspecific variability. Nevertheless, there is evidence that lectins function in symbiont recognition, as previously summarised, and these molecules therefore remain obvious candidates for roles in symbiont uptake and maintenance by Acropora.
Genes involved in calcification
Two alpha-type carbonic anhydrases are expressed in patterns that are consistent with roles in calcification. However, these genes are not restricted to heavily calcifying cnidarians, as both have probable orthologs in sea anemones and other cnidarians. This is perhaps not surprising, as carbonic anhydrases are involved in pH and CO₂/bicarbonate homeostasis in all organisms, and the ability to deposit some form of calcified exoskeleton is taxonomically widespread among cnidarians. For example, polyps of the hydrozoan Hydractinia symbiolongicarpus secrete a mat of calcium carbonate, in the form of aragonite, on their substrate [62]. Two membrane-associated carbonic anhydrases have been described from planulae of the coral Fungia scutaria, but they are short and missing amino acids thought to be necessary for CA activity, although the authors hypothesize that they could play a role in the onset of calcification at the time of settlement [63]. The first Acropora carbonic anhydrase, C007-E7, matches most strongly to vertebrate IV/XV-type carbonic anhydrases and, consistent with this, is predicted to be GPI-anchored. C007-E7 has likely orthologs in both Nematostella and Hydra. The second carbonic anhydrase, A030-E11, is a I/II-type carbonic anhydrase and is likely to be the Acropora ortholog of a protein identified in the sea anemone Anthopleura elegantissima (29.8% identity and 43.1% similarity) as a "symbiosis gene": it is strongly up-regulated when this facultatively symbiotic anemone takes up endosymbionts [64]. However, clear counterparts of this soluble cytosolic-type carbonic anhydrase are present in both Nematostella and Hydra magnipapillata, neither of which harbours symbionts. Whereas the two carbonic anhydrase genes are not restricted to calcifying cnidarians, a number of other coral genes with similar expression patterns have no apparent sea anemone or Hydra homologs. One possible scenario is that many of the genes involved in calcium processing have a widespread distribution, while some of those involved in secreting the organic matrix may be more specific, as in the case of galaxin. It will be particularly interesting to see whether different gene repertoires play a significant part in determining the dramatic differences in colony morphology that are characteristic of the various corals, or whether this is due mainly to deploying the same genes in different ways.
"Coral-specific" processes as variations on known themes
One conclusion that follows from the work presented above is that many of the molecules involved in "coral-specific" processes such as metamorphosis and calcification are not coral-specific: genes whose expression patterns imply key roles in implementing metamorphosis, such as the lectins A036-E7 and A049-E7 and apextrin [56], have homologs in other animals even though they are not present in Nematostella. Both of the carbonic anhydrases implicated in calcification also have clear counterparts in non-calcifying cnidarians. A second conclusion is that processes central to coral biology, such as symbiont recognition, may have analogous biochemical bases in phylogenetically distant systems. Lectins function in symbiont recognition in the legume-Rhizobium system; this analogy may be useful in understanding how specificity might be achieved in the coral/dinoflagellate symbiosis and in exploring the roles of the candidate molecules identified here. As in ascidians, metamorphosis in Acropora involves activation of an innate immune response, as both lectins and the perforin domain protein apextrin are strongly and specifically expressed at this time. Inevitably, other genes implicated in coral-specific processes appear at this stage to be taxon-restricted, but it is unclear to what extent this simply reflects the limited number and range of animals for which whole-genome data are yet available. Genes that are today considered "coral-specific" may actually be more widely distributed; the number of genes considered vertebrate-specific shrinks with the publication of each additional animal whole-genome sequence. Moreover, genes with no clear homologs may simply be old genes that have evolved beyond recognition.
One promising approach arises from the prediction that genes involved in "coral-specific" processes such as symbiont recognition are under positive selection. With the imminent availability of large EST datasets for several corals, a combination of in silico and in situ approaches should identify these genes and build on the pioneering study reported here.
Microarray description
The microarrays used in this experiment consisted of 13,392 spots derived from 12,240 cDNA clones (1,152 clones are represented more than once) and 432 spots representing positive and negative controls. The cDNA clones spotted onto the array were randomly selected from cDNA libraries that had been constructed in Lambda ZAP (Stratagene), and include 3456 clones from the prawnchip developmental stage, 4608 clones from the planula larva stage [65], and 4128 clones from the primary polyp. All of the material used for making the libraries came from Nelly Bay, Magnetic Island, Queensland, Australia (19°08'S 146°50'E).
All cDNAs spotted onto the slides were derived from cDNA libraries of the appropriate developmental stages. They were isolated by TempliPhi (GE Life Sciences) on excised clones, except for 2,000 postsettlement polyp clones, which were PCR amplified directly from individual phage suspensions, and 3,012 planula larva cDNAs, which were isolated previously [2]. Microarrays were generated by spotting the amplified cDNA onto GAPSII slides using a Biorad Chipwriter Pro, and then fixed by UV light exposure (150 mJ) followed by baking at 80°C for 3 hours. All cDNA clones represented on the arrays were sequenced from the 5' direction using standard Sanger (ABI Big Dye) sequencing technology.
EST analyses
After data filtering, ESTs were clustered using CAP3 [66]. The coding potential of the resulting unigenes was analysed using ESTScan [67]; 5,081 were predicted to give rise to bona fide proteins, using the criterion of a coding potential of 25 or greater. The EST contigs with predicted peptides were used to search the Uniprot database using BlastX [68] with an E-value threshold of 1 × 10⁻⁵ in order to functionally classify the predicted proteins according to the scheme in [10].
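As a rough illustration of this filtering step, the following Python sketch applies the two thresholds to toy inputs; the data structures and all identifiers in them are hypothetical, not the pipeline's actual file formats.

```python
# A minimal sketch of the unigene filtering described above; the inputs
# are hypothetical, not the study's actual files.

CODING_POTENTIAL_CUTOFF = 25      # ESTScan coding-potential criterion
EVALUE_CUTOFF = 1e-5              # BlastX threshold used for classification

def filter_unigenes(estscan_scores, blast_hits):
    """Keep unigenes predicted to encode bona fide proteins, then attach
    their best Uniprot hit (if any) below the E-value threshold.

    estscan_scores: dict mapping unigene id -> coding potential (float)
    blast_hits:     dict mapping unigene id -> (uniprot id, e-value)
    """
    coding = {uid for uid, score in estscan_scores.items()
              if score >= CODING_POTENTIAL_CUTOFF}
    classified = {}
    for uid in coding:
        hit = blast_hits.get(uid)
        if hit is not None and hit[1] <= EVALUE_CUTOFF:
            classified[uid] = hit[0]   # functionally classifiable
        else:
            classified[uid] = None     # protein-coding but unclassified
    return classified

# Example with toy values:
print(filter_unigenes({"u1": 31.2, "u2": 12.0}, {"u1": ("P12345", 3e-8)}))
```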
Experimental design
To assay for changes in gene expression during Acropora development, mRNA was isolated from four different developmental stages: the pre-gastrula "prawn chip" stage (8 hpf), the planula larva stage (83 hpf), the post-settlement primary polyp (130 hpf) and the adult colony. The rationale for selecting these stages is that they span key developmental events, including the establishment of tissue layers and body axes at gastrulation, the transduction of settlement cues, settlement and metamorphosis, and the initiation of calcification and uptake of symbionts. Prawn chips, planula larvae and primary polyps were the offspring of colonies collected from Nelly Bay, Magnetic Island (19°08'S 146°50'E). Adult tissue was obtained from a colony in the same bay. Pools of approximately 1,000 embryos were made to create each biological replicate [69]. Total RNA was extracted from these for each of our stage-specific 'targets'. Tissue from a single colony was used in the case of adult RNA extraction. The entire experiment was replicated on different days using separate collections of material, thus giving two biological replicates. Within each biological replicate, each developmental stage was compared with every other twice, once in each dye orientation. Thus, there are two biological and two technical replicates for each comparison (Figure 7). Since there are six possible comparisons with this design, the entire experiment used 24 slides, 12 for each biological replicate.
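The slide count follows directly from the design; a minimal Python enumeration (illustrative only) reproduces it.

```python
# Sketch of the all-pairs, dye-swap loop design described above; the stage
# names follow the text, the enumeration itself is purely illustrative.
from itertools import combinations

stages = ["prawn chip", "planula", "primary polyp", "adult"]
slides = []
for rep in (1, 2):                          # two biological replicates
    for a, b in combinations(stages, 2):    # six pairwise comparisons
        slides.append((rep, a, b, "Cy3-Cy5"))
        slides.append((rep, a, b, "Cy5-Cy3"))  # dye-swap orientation

print(len(slides))  # 24 slides: 2 replicates x 6 comparisons x 2 orientations
```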
cDNA for probing arrays was produced from unamplified total RNA, which was extracted using TRI Reagent (Ambion) according to the manufacturer's instructions. The quality was assessed using denaturing gel electrophoresis using standard methods [70]. For each hybridized sample, total RNA (80 µg) was reverse transcribed, labelled and hybridised using standard protocols [71].
Data analysis and verification
Slides were scanned using a GenePix 4200A scanner, and data extracted using Spot [72]. All further analyses were carried out using the limma package [73] for the R system [74]. Print-tip loess normalisation [75] was performed on each slide. Quantile normalisation was applied to mean log-intensities in order to make the distributions essentially the same across arrays.
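For concreteness, a small numpy sketch of quantile normalisation follows; the matrix is invented, and the study itself used the limma implementation in R.

```python
# Minimal quantile normalisation across arrays, assuming a genes x arrays
# matrix of mean log-intensities (illustrative data only).
import numpy as np

def quantile_normalise(x):
    """Force each column (array) to share the same intensity distribution."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # per-column ranks
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)    # reference distribution
    return mean_quantiles[ranks]

a = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.5, 6.0],
              [4.0, 2.0, 3.0]])
print(quantile_normalise(a))  # columns now have identical sorted values
```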
The methodology used for statistical analysis is described in Smyth [76]. The prior probability of differential expression, for each pair of comparisons between stages, was taken as 0.1. The Benjamini and Hochberg method [77] was used to adjust the sequence-wise p-values, so that a choice of sequences for which the adjusted p-value is at most 0.05 identifies a set of differentially expressed genes in which 5% may be falsely identified as differentially expressed (see Additional File 4 for more detail). Array data have been deposited in the Gene Expression Omnibus (GEO) database (accession number GSE11251).
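The Benjamini and Hochberg step-up adjustment can be sketched in a few lines of Python (again illustrative; the analysis itself used the R implementation):

```python
# Benjamini-Hochberg adjustment: adjusted p_(i) = min over j >= i of p_(j)*n/j.
import numpy as np

def bh_adjust(pvals):
    """Return BH-adjusted p-values (monotone step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)          # p_(i) * n / i
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

pv = [0.001, 0.008, 0.039, 0.041, 0.20]
print(bh_adjust(pv))  # sequences with adjusted p <= 0.05 are called differential
```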
Results were also verified using M vs A plots, where M is the log ratio of the spot fluorescence intensity values and A is the log of the average spot fluorescence intensity. An example is given in Additional File 5. Spots for which no fluorescence was expected, including salmon sperm DNA, empty vector and primers, plotted near the origin of the MA plot, as expected. Negative controls for differential expression (i.e. spots expected to show hybridization but no differential expression) had an M value at or near zero, but ranged in fluorescence intensity, also in accordance with expectations. Differentially expressed positive controls (i.e. spots expected to show both hybridization and differential expression between presettlement and postsettlement on the basis of virtual northern results) were positioned on either side of an M value of zero with a range of fluorescence intensities.
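The two statistics are simple to compute; the sketch below uses the conventional two-colour definitions (M as the log2 channel ratio, A as the average of the log2 channel intensities) on made-up spot values.

```python
# M and A statistics for two-colour array diagnostics; intensities are invented.
import numpy as np

red = np.array([1200.0, 800.0, 15000.0, 30.0])    # Cy5 spot intensities
green = np.array([600.0, 820.0, 14000.0, 28.0])   # Cy3 spot intensities

m = np.log2(red / green)                          # M: log ratio of channels
a = 0.5 * (np.log2(red) + np.log2(green))         # A: average log intensity

print(np.column_stack([m, a]))
# Non-differential spots sit near M = 0 across the full range of A.
```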
Cluster analysis was used to search for clusters of expression profiles in the data. K-means clustering was used to split the genes into 6 groups of differential expression profiles. Clustering was carried out using Cluster 3.0 [78] and the results viewed with Java TreeView [79]. Unigenes that did not meet the criteria of a protein coding potential > 25 and a p-value < 0.05 in the test for differential expression between temporally sequential developmental stages were removed prior to cluster analysis.
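A comparable clustering step can be sketched with scikit-learn on synthetic profiles (the study used Cluster 3.0; nothing here reproduces its exact settings):

```python
# Illustrative k-means step mirroring the 6-cluster analysis above;
# synthetic expression profiles stand in for the real data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.normal(size=(200, 3))  # 200 genes x 3 stage-to-stage log-ratios

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(profiles)
for k in range(6):
    print(f"cluster {k}: {np.sum(km.labels_ == k)} genes")
```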
Results for the microarray experiments were verified using "virtual northern blots", which were made using the Clontech SMART cDNA Synthesis Kit, according to the manufacturer's instructions, using RNA from the same stages used in the microarray experiment. DNA used to probe the blots was generated by PCR (see section 2.5.4 PCR and spotting of cDNAs), purified using the Qiagen PCR Purification kit according to the manufacturer's instructions, and radiolabelled with ³²P-dATP using the Prime-A-Gene Labeling System (Promega) according to the manufacturer's instructions. Hybridization was conducted according to standard protocols [70] and visualized by exposure to a Phosphorimager (Molecular Dynamics) cassette overnight. Digital images were viewed with Quantity One software.
Low-throughput sequencing
In order to obtain the entire open reading frame, some unigenes selected for in situ hybridization required further sequencing. This was done either as described for EST sequencing or using 300 ng of plasmid as template. Raw data were viewed and edited with Chromas Lite and sequences were aligned with LaserGene (DNASTAR). cDNA sequences for genes characterized by in situ hybridization have been deposited in GenBank under accession numbers EU863776-EU863788.

Figure 7. Microarray experimental design. Each developmental stage used in this experiment was directly compared to all others. Each arrow represents four hybridizations, two in one dye orientation (Cy3-Cy5) and two in the other (Cy5-Cy3); hence 24 slides were used in total. Further details are given in Materials and Methods.
In situ hybridization
Templates for riboprobe production were generated by PCR. Riboprobe synthesis and in situ hybridization were performed as reported by [80]. In order to view further histological detail, embryos stained in whole mount were embedded in LR White Resin, sectioned at various thicknesses and counterstained with Safranin O. | 9,989 | 2008-11-14T00:00:00.000 | [
"Biology"
] |
The cost-effectiveness of a two-step blood pressure screening programme in a dental health-care setting
Background Hypertension is one of the largest contributors to the disease burden and a major economic challenge for health-care systems. Early detection of persons with high blood pressure can be achieved through screening and has the potential to reduce morbidity and mortality. We evaluate the cost-effectiveness of an opportunistic hypertension screening programme in a dental-care facility for individuals aged 40–75 in comparison to care as usual (the no-screening baseline scenario). Methods A cost-effectiveness analysis (CEA) was carried out from the payer and societal perspectives, and the short-term (from screening until diagnosis has been established) cost per identified case of hypertension and the long-term (20 years) cost per quality-adjusted life year (QALY) were reported. Data on the short-term cost were based on a real-world screening programme in which 2025 healthy individuals were screened for hypertension. Data on the long-term cost were based on the short-term outcomes combined with modelling in a Markov cohort model. Deterministic and probabilistic sensitivity analyses were carried out to assess uncertainty. Results The short-term analysis showed an additional cost of 4,800 SEK (€470) per identified case of hypertension from the payer perspective and 12,800 SEK (€1,240) from the societal perspective. The long-term analysis showed a cost per QALY of 2.2 million SEK (€210,000) from the payer perspective and 2.8 million SEK (€270,000) from the societal perspective. Conclusion The long-term model results showed that the screening model is unlikely to be cost-effective in a country with a well-developed health-care system and a relatively low prevalence of hypertension.
Introduction
Hypertension or high blood pressure (BP) is an important worldwide public health problem and the most important risk factor for the total disease burden worldwide [1], with its sequelae including stroke and myocardial infarction [2]. It is estimated that 10% of health-care spending is directly related to hypertension and its complications [3]. The overall prevalence of hypertension in adults is approximately 25-45% in Europe [4]. In Sweden, high BP affects an estimated 1.8 million people, representing 27% of the adult population (the prevalence increases with age from 12% in young adults to 56% in the elderly) [5]. Since there are effective treatments that reduce both high BP and an individual's risk of developing sequelae [6], it is important to identify those individuals who have high BP as early as possible. Early detection of individuals with high BP may be done through BP screening among "healthy" individuals.
One type of screening is opportunistic screening, whereby a patient utilizes a health-care facility for another reason and, in addition to the regular treatment related to the visit, receives BP screening. Since a majority of the population (80% in Sweden) regularly seeks dental-care services in the form of annual check-ups [7], dental-care providers are a possible venue for hypertension screening, as shown in several studies [8][9][10].
There is a knowledge gap on the optimal population screening programme for detecting hypertension [2], and there is a great need to evaluate the long-term cost-effectiveness of such programmes. One such initiative, an opportunistic two-step screening for hypertension, was tested during the dental-care visits of a general population, resulting in a positive predictive value of 0.76 and a reduction of the false positive values by 86% via a second step of home BP measurement [10]. The cost-effectiveness of an opportunistic screening programme for high blood pressure in a general population has not previously been assessed. We have used the results from the opportunistic two-step screening of hypertension [10] to conduct a follow-up cost-effectiveness analysis to address this question. Thus, the aim of this study was to evaluate the cost-effectiveness of the aforementioned opportunistic two-step hypertension screening programme.
The intervention
A two-step BP screening was conducted at four different dental clinics in a region of southern Sweden. The intervention was a single-arm screening programme implemented in an unscreened population, and the no-screening comparator group in this evaluation is assumed to be characterized by the status quo in which blood pressure tests are carried out when individuals visit health-care facilities. In the screening, BP was measured after five minutes of rest by a dental nurse twice in both arms (first step), and those with a mean BP value ≥140 and/or ≥90 mmHg were asked to use a home blood pressure device (Omron M6 Comfort) for one week (twice in the morning and in the evening) (second step). If the home BP resulted in a mean value ≥135 and/or ≥85 mmHg, the individuals were referred to a primary health-care centre (PHC) for further assessment and diagnosis.
Both written and oral consent were obtained, and the study was approved by the ethical review board in Lund (Nos. 2013/553 and 2015/446).
Cost-effectiveness analysis
The analysis evaluates the two-step screening programme compared to the no-screening baseline in terms of short-term (from screening until diagnosis has been established, approximately 1-3 months) and long-term (20 years) outcomes. The short-term analysis uses an intermediate outcome measure, identified hypertension patients, and the long-term analysis uses quality-adjusted life years (QALYs) as the outcome metric. The result is presented in terms of the incremental cost-effectiveness ratio (ICER), which is the difference in costs divided by the difference in health outcomes with the screening programme compared to the no-screening baseline scenario:

ICER = (Cost_screening − Cost_no screening) / (Outcome_screening − Outcome_no screening)
Sub-group analyses of the screening programme based on sex are carried out considering the sex differences in the incidence of hypertension-related conditions, especially acute myocardial infarction (AMI). Moreover, cost-effectiveness is evaluated from a societal as well as a payer perspective. The difference between the two perspectives is that the societal perspective also includes the costs of the programme for the included individuals (primarily time-use and travel-related costs). All costs are expressed in 2019 prices (consumer price index adjusted) [11] in Swedish kronor (SEK), and the main results are also presented in euro (EUR) assuming an exchange rate of 1 EUR = 10.3 SEK (July 2020) [12]. The economic evaluation model was built and analysed in Microsoft Excel [13] and Stata v.16 [14].
Short-term analysis
The short-term analysis includes the time frame up until persons are potentially diagnosed with hypertension and thus estimates the ICER in terms of the cost of identifying one patient with hypertension through the screening programme.
The model relies on the data from the primary screening study [10]. The formal dental and health-care cost data include the blood pressure test costs in dental and primary health-care facilities, ECG costs, and laboratory and diagnostic costs (Table 1). Non-health-care costs include patient time costs and travel costs [15]. Patient time costs refer to the time spent on the blood pressure tests in the dental-care setting and, for patients referred to the PHC, also the time spent in this latter setting. We assume that the visits did not displace working hours for the patients and thus value each hour of patient time based on average net wages [16]. Travel costs (to the PHC) are based on the average distance (3 km) and a cost of 1.85 SEK per km.
The number of newly discovered cases of hypertension in the screening group (170 individuals) is compared with a corresponding number in a hypothetical comparator arm (46 individuals). The parameter value in the comparator arm is based on an assumption that 61 of the 2025 individuals (an expected incidence of 3%) [17,18] would have been identified as having high blood pressure during a visit to a primary care centre for some reason (on the patient's own initiative, at a doctor's suggestion, or for other reasons).
Since the diagnosis of hypertension was based on repeated blood pressure measurements both at home and in clinic (screening arm), it was assumed that no one in the screening arm was a false positive.
Among those diagnosed with high blood pressure based on blood pressure screening in a clinical setting (comparator arm), we estimate that approximately 15 individuals (25%) are false positives (that is, they display white coat hypertension [WCHT]), and the remaining 46 (75%) are expected to be true positives [19].
Long-term analysis: Markov-cohort model
For long-term costs and health outcomes, we developed a Markov cohort model with the structure shown in Fig 1. At the time of the introduction of the screening programme, the entire cohort is in the "Healthy" state, which is also the status quo of the comparator case without the screening programme. From this state, there are annual (one-year-cycle) risks of an AMI or stroke incident based on the risk equations from the Framingham studies, adjusted for age, sex, lipid levels, and diastolic blood pressure [20]. There is also an annual age- and sex-adjusted risk of mortality from other causes (not AMI or stroke) based on Swedish life-table data. From the AMI and stroke health states, there is either death as a direct consequence of the event or a transition to the post-AMI or post-stroke state. We make a simplifying assumption that there are no recurrent strokes or AMIs for the same person. The time perspective of the Markov model is 20 years, with annual discounting of costs and health outcomes of 3%, in line with the recommendations for cost-effectiveness analyses in Swedish health policy settings [21] (see Table 1 for input data on costs and transition probabilities). In this model-based study, the health outcomes are measured in terms of quality-adjusted life years (QALYs), which combine health-related quality of life (QALY weights) and life length [22]. QALY weights as used in the long-term Markov model are indexed such that 0 is interpreted as "equal to being dead" and 1 is interpreted as "the best possible health state". Table 1 lists the QALY-weight decrements, based on published evidence, associated with a stroke and AMI event.
Assessing uncertainty
Parameter uncertainty analyses were carried out using (one-way) deterministic sensitivity analysis (DSA) and probabilistic sensitivity analysis (PSA) based on 5,000 Monte Carlo simulations. The results from the DSA are shown using a Tornado diagram where the ICER intervals are based on varying parameter input values for the time horizon of the Markov cohort model, underlying hypertension prevalence in the cohort, AMI and stroke costs, drug treatment costs, and QALY-weight decrements for AMI and stroke events. The PSA assesses the uncertainty with jointly varying parameter values for hypertension prevalence, costs, transition probabilities, and QALY-weight decrements. The results from the PSA are shown using a cost-effectiveness plane (scatter plot) and a cost-effectiveness acceptability curve (CEAC), where the latter shows the probability that the screening programme (compared to the no-screening baseline) is cost-effective at different levels of the maximum willingness to pay per QALY ("threshold value").
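As a sketch of how a CEAC is read off PSA output: each Monte Carlo draw yields an incremental cost and QALY pair, and the curve plots the share of draws with positive net monetary benefit at each threshold (the numbers below are invented placeholders, not the study's simulation output).

```python
# CEAC from simulated PSA draws; all distributions here are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
delta_cost = rng.normal(4.4e6, 1.0e6, n)   # incremental cost (SEK), placeholder
delta_qaly = rng.normal(1.77, 0.6, n)      # incremental QALYs, placeholder

for wtp in (0.5e6, 1.0e6, 3.0e6):          # willingness to pay per QALY (SEK)
    nmb = wtp * delta_qaly - delta_cost    # net monetary benefit per draw
    print(f"WTP {wtp/1e6:.1f} MSEK/QALY: P(cost-effective) = {np.mean(nmb > 0):.2f}")
```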
The Swedish "threshold value" as stated by the National Board of Health and Welfare is that a cost per QALY is low if below 500,000 SEK, high if between 500,000 SEK and 1 million SEK, and very high if above 1 million SEK [23]. All uncertainty ranges and distributions are listed in Table 1, with the exception of the hypertension prevalence, where the mean value of 170 identified persons with hypertension was associated with a standard error of 17.

Results

Table 2 shows the short-term cost with and without the screening programme from a payer (health and dental care) as well as a societal perspective. The increase in costs with the total screening programme is approximately 0.6 million SEK (€58,000) from a payer perspective and 1.6 million SEK (€154,000) from a societal perspective. The increase in costs in individual terms (total cost divided by the cohort size) with the screening programme is 295 SEK (€29) from a payer perspective and 785 SEK (€76) from a societal perspective.

As previously reported in the main publication on the screening programme [10], from the cohort of 2025 persons (mean age 52.8, SD 8.7), the screening programme identified 170 (8%) persons as having true hypertension, compared to an estimated 46 persons who would have been identified in the absence of the screening programme. The additional 124 persons correctly identified as having hypertension imply an incremental cost per identified hypertension case of approximately 4,800 SEK (payer perspective) and 12,800 SEK (societal perspective) (€470 and €1,240).
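The short-term ICER arithmetic can be reconstructed directly from these figures (the reported totals are approximate, so the quotients round to the reported values):

```python
# Short-term ICER reconstructed from the figures in the text.
extra_cases = 170 - 46                # additional true hypertension cases = 124
payer_incremental_cost = 0.6e6        # SEK, payer perspective (approximate)
societal_incremental_cost = 1.6e6     # SEK, societal perspective (approximate)

print(payer_incremental_cost / extra_cases)     # ~4,800 SEK per identified case
print(societal_incremental_cost / extra_cases)  # ~12,900 SEK; the paper reports
                                                # 12,800, the gap reflecting
                                                # rounding of the totals
```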
Table 3 shows the long-term costs and health outcomes in the full cohort as well as for men and women separately (assuming equal cohort size). The results show that the incremental cost with the screening programme is 3.9-4.9 million SEK (€380,000-€475,000). If we consider an all-male cohort, the incremental cost would be 4.4-5.3 million SEK (€430,000-€515,000), and in an all-female cohort, it would be 3-4 million SEK (€290,000-€390,000). The lower value in the range refers to the payer perspective, and the higher value refers to the societal perspective.
The QALY gain is estimated at 1.77 for the entire cohort but higher (3.18) if we assume an all-male cohort and lower (0.66) if we assume an all-female cohort. The better health outcomes for an all-male cohort are based on the higher hypertension prevalence as well as the higher (untreated) AMI risk among men.
The associated cost per gained QALY is approximately 2.2 million SEK (€210,000) from a payer perspective and 2.8 million SEK (€270,000) from a societal perspective. Considering an all-male cohort, the estimated results are 1.4 million SEK (€135,000) per QALY and, from the societal perspective, 1.7 million SEK (€165,000) per QALY. For an all-female cohort, the cost is estimated at 4.6 million SEK (€445,000) and 6.1 million SEK (€590,000) per QALY.
Deterministic sensitivity analysis
Fig 2 shows the one-way deterministic sensitivity analysis when we vary the input parameter values for the model time horizon, QALY-weight decrements, AMI and stroke costs, drug treatment costs, and prevalence of hypertension. Substantial variations in the QALY-weight decrements, AMI and stroke costs, and drug treatment costs have only a modest impact on the estimated ICER. Instead, the analyses reveal that the major uncertainty comes from varying the prevalence of (undetected) hypertension in the screened population and the model time horizon. Assuming a higher prevalence in the screened population lowers the ICER (since this factor would improve the health gains from the screening programme and subsequent treatment), and a longer time horizon (30 years vs. 10 years) also improves the cost-effectiveness (lower ICER). The dashed vertical line represents a cost-effectiveness of 500,000 SEK per QALY (€48,500), which is often used as an informal threshold value in Swedish health policy, and as seen, the ICER never falls below that threshold value in any of the sensitivity analyses.

Table 3 notes: Incremental cost and QALYs are the additional cost and QALYs with the screening programme compared to without it, for a cohort of 2,025 individuals, based on 3% annual discounting. The incremental cost per QALY is the additional cost for each gained QALY. Costs are rounded to the closest 100,000 SEK. QALY differences between the programmes were driven by differences in AMIs (1.5 fewer with the screening programme) and strokes (0.7 fewer with the screening programme). https://doi.org/10.1371/journal.pone.0252037.t003
None of the ICERs are below the 500,000 SEK per QALY threshold, and very few are below the 1 million SEK per QALY threshold. This can be seen more clearly in Fig 4, which shows the cost-effectiveness acceptability curve (CEAC) from the same data. The probability that the screening programme is cost-effective is approximately 0.02 at a willingness to pay per QALY of 500,000 SEK. At a willingness to pay per QALY of 1 million SEK, the likelihood that the screening programme is cost-effective is approximately 5%.
Discussion
This is one of the few studies on the cost-effectiveness of screening for hypertension. The study model was built to capture the costs and outcomes of a programme for opportunistic screening of a general population in a "real-life" scenario, namely, an existing (dental-care) organization; such a setup has been recommended as a potential method of holding screening costs down. Blood pressure sampling was performed in the dental clinic, in the home environment, and in the PHC for at least 10 different days, which reduced the number of false positives by 85% [10]. The screening was performed on a previously unscreened population, which resulted in a large proportion (8%) of newly diagnosed cases being detected. Despite the above-mentioned good conditions, the model results for cost-effectiveness show a very high cost per gained QALY.
Short-term analysis
The cost of the screening programme from the perspective of the dental- and health-care payer was 0.6 million SEK (€58,000) and with the addition of socio-economic costs rose to 1.6 million SEK (€154,000). The major cause of the difference in the two sets of costs is the inclusion of the patients' time cost of BP testing in the societal perspective.
The results of the short-term analysis show that the additional cost was approximately 4,800 SEK (€470) per newly discovered case in the form of dental- and health-care costs and approximately 12,800 SEK (€1,240) per newly discovered case with the inclusion of all societal costs. A similar study of a less effective opportunistic BP screening resulted in a number needed to screen (NNS) of 18, a positive predictive value (PPV) of 30%, and a direct cost of 5,300 SEK (€515) per newly discovered case [24].
Long-term analysis
The long-term consequences were analysed in a Markov cohort model, with the results showing a cost per QALY of approximately 2.2 million SEK (€210,000). When the patient's time cost was included to yield the societal perspective, the cost per QALY increased to over 2.8 million SEK (€270,000). This is substantially above the standard threshold values for the cost per QALY referenced in the Swedish health policy literature (SEK 500,000) [23]. In the sub-group analyses for men and women, the cost per QALY was lower for men (1.3-1.6 million SEK per QALY) than for women (4.3-5.7 million SEK per QALY). The lower cost per QALY in the cohort of men is primarily explained by the higher prevalence of AMI among men, especially in the relatively younger age groups (and thus a higher potential benefit of screening and drug treatment). However, even in an all-male cohort, the cost per QALY is above the standard threshold levels referenced in the Swedish health policy literature.
The sensitivity analyses show that the prevalence of hypertension and the time horizon have the greatest impact on the model's results. Sweden has a relatively low prevalence of hypertension (27%) and a well-developed health-care system, which means that many people with hypertension are already identified, which reduces the cost-effectiveness of adding screening. The time horizon in the baseline analysis was 20 years, and extending the horizon to 30 years improved the cost-effectiveness results somewhat (i.e., reduced the cost per QALY), though the cost remained above 1 million SEK per QALY.
The treatment of AMI and stroke has improved over time, with improved survival, which also reduces the relative value of screening and preventive treatment.
Further, most of the 170 newly identified persons had mild hypertension (grade 1), which can partly explain the high to very high cost per QALY from this screening programme. However, it should be noted that the health consequences of hypertension considered here were limited to AMI and stroke. Should other consequences such as heart failure, renal failure, atrial fibrillation, cognitive impairment and dementia also be included, it is possible that the cost-effectiveness of the screening programme would be more favourable.
Limitations
The transition probabilities (stroke and AMI risks) were based on risk models from the US Framingham study, and despite being widely used, they may have drawbacks in terms of validity for the health context of this study [20]. As in all modelling-based studies, some simplifications had to be made that may have had some impact on the results. For example, an individual who has had an AMI can later suffer a stroke or vice versa, which our model did not allow for. We have also assumed that there is no difference between the two treatment alternatives in the long-term identification of additional hypertension patients. An additional limitation of Markov cohort models is that average costs per event (e.g., stroke or AMI) are assigned for each case and do not necessarily represent the costs in this particular cohort of patients.
The outcome in the comparator arm is based on the assumption that 61 of the 2025 individuals (expected incidence 3%) would have had high blood pressure detected during a routine consultation in a primary care centre and been diagnosed with hypertension (46 true positives and 15 false positives) [17,18]. This assumption is based on results from previous screening studies indicating that 50% of those with high blood pressure are newly diagnosed, and that white-coat hypertension can account for up to 25-40% of those with hypertension [2,19]. We have chosen 25% for white-coat hypertension so as not to overestimate the result.
In the data set for our health economic analysis, there are no people with diabetes (as the condition was an exclusion criterion) and no information on cholesterol. The calculations refer to a population without diabetes. The mean values of serum cholesterol for men (4.0) and women (3.2) were used [25].
Conclusions
Despite the success of a blood pressure screening programme in identifying a substantial number of true positive hypertension patients in an existing dental-care facility, the cost per QALY was 2.2 million SEK (€210,000), which is considered a high cost. The results thus suggest that adding blood pressure screening in the dental-care setting is not cost-effective. | 5,070.8 | 2021-05-25T00:00:00.000 | [
"Medicine",
"Economics"
] |
Blockchain, consent and prosent for medical research
Recent advances in medical and information technologies, the availability of new types of medical data, the requirement of increasing numbers of study participants, as well as difficulties in recruitment and retention, all present serious problems for traditional models of specific and informed consent to medical research. However, these advances also enable novel ways to securely share and analyse data. This paper introduces one of these advances—blockchain technologies—and argues that they can be used to share medical data in a secure and auditable fashion. In addition, some aspects of consent and data collection, as well as data access management and analysis, can be automated using blockchain-based smart contracts. This paper demonstrates how blockchain technologies can be used to further all three of the bioethical principles underlying consent requirements: the autonomy of patients, by giving them much greater control over their data; beneficence, by greatly facilitating medical research efficiency and by reducing biases and opportunities for errors; and justice, by enabling patients with rare or under-researched conditions to pseudonymously aggregate their data for analysis. Finally, we coin and describe the novel concept of prosent, by which we mean the blockchain-enabled ability of all stakeholders in the research process to pseudonymously and proactively consent to data release or exchange under specific conditions, such as trial completion.
Introduction
The digitalisation of medicine has led to a large increase in the types and volume of health data that could be used for research, as well as the types of analysis that can be conducted [1]. Advances in information and communications technology have expanded the range of tools available for the secure storage, sharing and analysis of data. These trends have important implications for the traditional model of informed consent requirements, which dates back at least half a century [2]. This contribution argues that recent work on blockchain technologies [3] demonstrates many potential benefits of the technology across healthcare settings generally [4-6], and particularly in the context of consent [7, 8]. A set of advances in cryptography and mathematics which allows for a high degree of transparency and integrity in data access management, 'blockchain technologies could be applied in the health industry in a scalable manner with high-impact results, such as improved welfare for the patients and reduced running costs for healthcare systems' [9]. When introduced to one such blockchain-enabled infrastructure, the Massachusetts Institute of Technology's (MIT) Open Algorithms (OPAL) framework, 'the head of big data initiatives at the United Nations said: "This will change everything."… The [Chief Technology Officer] of the United States Health and Human Services Department said: "Holy ***! The implications for healthcare are enormous"' [10]. We further argue that the introduction of blockchain technologies to the healthcare context is ethically significant, because they affect one or more of the foundational bioethical principles: justice, beneficence and autonomy. In many cases, the effects will be obvious and univalent. For example, using a blockchain-based supply chain management program might reduce the circulation of counterfeit and low-quality instruments and devices through improved tracking and auditing capabilities [11]. The effects of such a program would be to increase beneficence and justice.
However, and very importantly, the normative impacts of blockchain depend in part on the way the technology is implemented. As we argue below, a biomedical research infrastructure using blockchain for data access management and distributed computing for analysis of data stored in electronic health records has the potential to reduce the risk of privacy breaches to minimal [3, 10]. Ethics and the law of most nations allow for the requirement of obtaining informed consent to be waived in cases of minimally risky research [12]. A case could therefore be made that such an implementation of blockchain technologies would reduce the risk of all records-based research to minimal, and therefore that the requirement of informed consent should be waived for all such research. To the extent that this eliminates selection bias and speeds up research, it has a significant positive effect on beneficence [12]. However, by removing the option of refusing consent, this implementation would also have significant negative effects on autonomy.
The opposite case, however, could also be made. Using the cryptographic element of blockchain technologies, patients could be given complete control over who may access their medical data. They could be given the power over this access using permissions easily stored on and verifiable by a blockchain. Such an implementation would have a positive effect on patient autonomy but is likely to introduce significant selection bias, and so would likely have a strongly negative effect on beneficence.
The choice between these two implementations is not a scientific but an ethical one. Several other possible implementations of blockchain technologies likewise involve trade-offs between the bioethical principles. In the latter part of this paper, we argue that the pseudonymity and other features of blockchain networks enable new models of cooperation between stakeholders in the biomedical research ecosystem. We coin the term prosent to describe the possibility that, using the blockchain, patients or healthy citizens can participate in the scientific process, either by donating or selling their data to relevant research projects, by buying such data or by participating to varying extents in the conduct of the research. Data exchanges based on the prosent model will likewise have very different impacts on beneficence, justice and autonomy depending on implementation. For example, should data owners be allowed or even encouraged to sell their data for profit? Which kinds of entities should be allowed to buy which kinds of data? These and many other questions are fundamentally ethical.
Historically, consent requirements have been based on the bioethical principles of autonomy, justice and beneficence [2, 13]. Below, we introduce blockchain technologies and argue that their implementation can be used to enhance consent procedures in ways that advance all three of these ethical goals.
Blockchain technologies
Blockchain is a distributed technology enabling interactions between systems that, by design, do not rely on third parties to guarantee the integrity of a transaction. Instead, several features of blockchain technologies act in concert to guarantee data integrity. These are distribution of the blockchain to each member in its network, combined with a consensus mechanism designed to disincentivise fraud, and a hashing mechanism used to prove data integrity. More precisely and for convenience, we can imagine blockchain as a single shared database of which all users get a public copy, called the ledger. Table 1 lists some key principles and corresponding features and affordances of blockchain.
Technically, blockchains are organised in a decentralised fashion and the ledger is stored partially or in full on each of the computers (nodes) that participate in the recording and sharing of the data. Blockchains are distributed to each node in their network, are frequently updated, may be transparent and typically have low bandwidth. For these reasons, it is important to note that in most practical implementations of blockchain technologies in the healthcare context, the actual medical data of interest would not be recorded on the blockchain. Rather, the blockchain would store transactional and metadata such as hashes indicating whether or not a patient had consented; cryptographic keys denoting which healthcare professionals have access to which records; and evidence of database transactions, such as whether and when a healthcare professional has accessed a specific record, and what, if anything, that professional did with the resulting data.
For a new block to be accepted into the chain, a majority of these nodes need to agree on its veracity. This consensus mechanism is backed up by economic mechanisms designed to prevent malicious activity by disincentivising fraud.
A malicious attacker trying to corrupt data would require access to a majority (or a set, depending on the consensus mechanism) of the networked computers. This becomes almost impossible as the network grows.
Timestamping and keeping track of events
Indeed, in a blockchain, records or data are periodically aggregated into 'blocks' which represent every transaction that has happened within that time frame. These blocks are linked ('chained') to each other using a cryptographic hash of the previous block and carry a timestamp.
Let us abstract from our argument for a moment. If we consider any transaction recorded in the blockchain as an event, then we can timestamp said event and order sequential events in time, so that we can ascertain that an event indeed happened and that of a group of events, each event happened in a precise sequence in time. This is done in a near-incorruptible way, enabling us to consistently trace events.
Asserting and proving events
In the early history of Bitcoin technology, developers tweaked the data structure to store small pieces of information inside the blockchain. Known as a 'hash', this short string of characters is the result of putting a document through a hashing function. Any two identical documents will always produce an identical hash; change even a single character, and the outcome of the hashing function is radically affected. Because of these features, a hash has important functional consequences: it can stand for a digital signature of any information. For instance, a document, however long, can be shortened into a hash, which then becomes its one and only 'signature'. Thus, a person receiving a document can hash it and compare the hash to that of the original document (which in the current context is stored on the blockchain, so it cannot be altered). If the hashes match, this guarantees the integrity of the document relative to the state in which it was hashed.
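A short Python demonstration of this hash-as-signature property (the protocol text is made up):

```python
# One document, one signature: any edit changes the hash completely.
import hashlib

doc = b"Study protocol v2.1: inclusion criteria, endpoints, analysis plan."
print(hashlib.sha256(doc).hexdigest())                  # the document's "signature"
print(hashlib.sha256(doc.replace(b"2.1", b"2.2")).hexdigest())
# Changing a single character yields a completely different hash, so a
# recipient can verify integrity by comparing against the on-chain hash.
```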
In addition, each block in a blockchain contains a timestamp. Because the hash of each block includes the timestamp of the previous block, the blocks are chained together sequentially in time. In summary, hashes are digital summaries of data which are calculated on the content of each data block that makes up a blockchain. Importantly, blockchain can store as many of these proofs of data as necessary.
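A minimal sketch of timestamped, hash-chained blocks follows; consensus, networking and real transaction formats are deliberately omitted, and the record strings are invented.

```python
# Blocks carry a timestamp and the hash of the previous block, so editing
# an earlier block breaks every later link.
import hashlib, json, time

def make_block(records, prev_hash):
    block = {"timestamp": time.time(), "records": records, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["consent: patient-7f3a -> trial-42"], prev_hash="0" * 64)
second = make_block(["access: investigator-b91 read record-7f3a"],
                    prev_hash=genesis["hash"])

# Tampering with the first block breaks the link to the second:
genesis["records"][0] = "consent revoked retroactively"
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("timestamp", "records", "prev")},
    sort_keys=True).encode()).hexdigest()
print(recomputed == second["prev"])  # False: the chain exposes the edit
```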
Automating processes
One of the greatest promises of blockchain is related to the development of scripting languages that enable programming on top of the blockchain architecture and thus give the flexibility to automate processes. These pieces of code are called 'smart contracts'. A smart contract is essentially a piece of code which executes on the fulfilment of certain predefined, user-determined criteria. For example, a smart contract might be written to automatically upload trial data to a trials registry, if and only if certain conditions obtain: (1) all patients have consented and (2) each phase of the trial protocol has been registered as successfully completed. This is a potentially very powerful tool, though currently in its infancy.
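A plain-Python simulation of exactly this conditional-release logic (on a real chain the function would be deployed as contract code; the trial identifiers are hypothetical):

```python
# Simulation of a smart contract: the release step runs only when its
# predefined conditions hold.
def release_trial_data(trial):
    all_consented = all(p["consented"] for p in trial["patients"])
    all_phases_done = all(ph["completed"] for ph in trial["phases"])
    if all_consented and all_phases_done:
        return f"uploading {trial['id']} results to the registry"
    return "conditions not met: no release"

trial = {
    "id": "NCT-hypothetical-001",
    "patients": [{"consented": True}, {"consented": True}],
    "phases": [{"completed": True}, {"completed": False}],  # phase 2 pending
}
print(release_trial_data(trial))  # conditions not met: no release
```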
Public and private blockchains
Because blockchain technology relies on several component technologies, there are different variations on blockchains which reflect differences in the component technologies. A major distinction is between blockchains in which anyone can participate, therefore called public, and blockchains which require permissions to enter, therefore called private. The reasons for these variations are many, but it is important to consider which architecture is best suited to the context of healthcare broadly and research and consent in particular.
The first operational blockchain network, which underlies Bitcoin, is a public blockchain which anyone can join. It prevents fraud by forcing each computer (node) in the network to solve hard calculations, which are calibrated to ensure a high energy cost resulting from the computational complexity. By making each computation costly in terms of resources, the proof-of-work system guarantees that tampering with previously calculated blocks becomes prohibitively expensive.
However, this dynamic is not scalable in its current form, as it consumes unacceptably large amounts of electrical energy. Other consensus mechanisms than proof of work are being explored, but they are beyond the scope of this paper.
Data enclaves and homomorphic encryption
Distributed computing architecture may lead to the greater decentralisation of study conduct. The importance of real-world evidence and patient-reported outcomes is in line with the contemporary sense of a need for greater patient centricity in research studies.
Blockchain architecture could help define and concretise such an infrastructure. Moreover, to open up a distributed data-sharing ecosystem with patient-level, fine-grained ownership control, some research teams are designing new ways to process data. The idea is to push the algorithms rather than pull the data: algorithms process the data remotely without breaking privacy. This is especially interesting in an artificial intelligence era, where federated learning techniques would let algorithms jump from one data warehouse to another, each time increasing the spectrum of their machine-learnt knowledge.
Blockchain technologies may play an important role in these future architectures of distributed computing and federated learning. Other systems for data aggregation could be envisioned, such as safe houses or physically secure databases, also relying on public and private key cryptography. However, blockchain technologies offer several advantages, including speed of information transfer, no single target for breaches and various automations. These strong privacy-respecting systems may encourage greater consent and participation in studies. For example, some entities such as the US Food and Drug Administration (FDA), and private companies, are working on a data brokerage system in which a blockchain-based system would help manage consent to share data with dedicated research entities and trace the value created by the data processing, for the purposes of distributing the value created from the analysis of data to those who have made those data available. In general, blockchain technologies could be used to manage data access, ensure transparency, reduce or prevent fraud and tampering, increase efficiency and connect stakeholders in a learning healthcare system.
Practical consequences for the consent process
Let us sum up all these functions. Using blockchain, we can trace if and when consent was given, and we can bind a consent to a document, for instance any version of a study protocol, proof of which is stored in the blockchain through hashes. Automaticity through smart contracts enables endless possibilities, some of which may be: automation of aspects of consent collection processes (eg, identification of potential subjects and contact via email); reconsenting triggered when certain conditions are met, for example, major changes to the protocol; and conditioning consent on feedback of results.
Consent: autonomy
The use of blockchain technologies could give patients control over who may access their data. This would represent an increase in patient autonomy, as the patient would now be empowered to view who has permissions to access that individual's data. The patient would also be empowered to update these permissions at will through a blockchain transaction. In effect, by interacting with a blockchain on which representations of authorisation to access data are stored, patients can easily and effectively revoke consents or grant permissions for data access. Revoking consent is technically easy and would not require special efforts from patients. At its extreme, this situation would put the individual patient in total control over their own data, since they would have the option of removing access authorisations for all or any healthcare provider.
Several such implementations exist already. We briefly introduce two of these and refer to the original papers for details.
In the Enigma model [10], blockchain technologies are used to manage access to data which are themselves stored in a location not on the blockchain (eg, with data originators). When data are collected, they are encrypted using an encryption key shared between the data owner (ie, the consenting subject) and the data acquirer (eg, the trial lead investigator). Only a hash of the original data is kept on-chain. These data can then be queried by the subject and investigator, whose identities are verified by encryption keys, using blockchain transactions.

In the Nebula model [14], a person wishing to access data sends a request, via a blockchain, to all relevant nodes in the network. Data are only shared if the requesting party authenticates themselves and/or permission is given by the data owner. Once authenticated, the subset of data relevant to the researcher's study is sent automatically via smart contract to a data enclave, at which point it is deidentified to a high level of abstraction compatible with research aims and aggregated. The resulting information is then released to the researcher. Throughout this process, the data are not seen by anyone except the receiving entity, whose access to and use of the data are recorded to prevent abuse. Table 2 illustrates the potential uses of blockchain in the context of consent.

Table 2 Potential uses of blockchain in the context of consent
Pseudonymity via public cryptographic identifiers: high degree of control over privacy; the level of privacy can be modulated; concerns about trial or personal conduct can be registered with a high degree of privacy.
Sequential timestamping: proof that consent was obtained before trial inclusion; proof of adherence to protocol; potential for greater patient engagement and consent due to higher trial integrity.
Decentralised storage of data: adapts consent and prosent to the decentralised nature of data generation.
Smart contracts: conditioning of trial progression on consent; automated release of data (prevention of publication bias and knowledge silos); automation of aspects of data analysis; reconsent triggered when the protocol is changed; automatic warning if abnormally high levels of severe side effects are found; automatic financial or other remuneration of data subjects.
Smart contracts for secondary research: data analysis specified in the statistical plan can be carried out automatically; benefit sharing can be automated.
Three uses of this system
At least three novel approaches to data sharing become possible using such a system. The first grants data subjects or originators the sole power to determine who may access their data. The second would include some default permissions on an opt-out model. The third would remove the opt-out option, thus mandating data access.
Data owner access control
The first possibility involves giving data owners complete control over access to their data. Data owners can grant, modify or revoke permissions to access data by means of blockchain transactions. Importantly, data owners could treat different categories of data differently and assign varying levels of access protections to them. For example, access to sensitive medical data might be kept private or granted only to select entities, whereas less sensitive data might be put up for donation, or, from the point of view of some start-ups, possibly even for sale. The blockchain thus provides a practical means of implementing meta-consent [12].
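A sketch of such per-category permission management (in a real deployment, each grant or revocation would be a signed blockchain transaction, and the grantee names here are hypothetical):

```python
# Per-category access permissions controlled by the data owner; only hashes
# of these permissions would be stored on-chain in a real system.
class PermissionRegistry:
    def __init__(self):
        self.grants = {}  # (category, grantee) -> True/False

    def grant(self, category, grantee):
        self.grants[(category, grantee)] = True

    def revoke(self, category, grantee):
        self.grants[(category, grantee)] = False

    def may_access(self, category, grantee):
        return self.grants.get((category, grantee), False)

owner = PermissionRegistry()
owner.grant("genomic", "university-hospital-lab")
owner.grant("activity-tracker", "any-epidemiology-study")
owner.revoke("genomic", "university-hospital-lab")   # consent withdrawn
print(owner.may_access("genomic", "university-hospital-lab"))  # False
```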
Consent, minimal risk and default permissions
Alternatively, default settings could be set to allow certain entities access to some data. In an opt-out model, default settings would be controllable and modifiable by the data subject. It would also be possible to have certain types of data shared by default. Table 3 lists some of the potential benefits of implementation.
The motivation for opt-out or mandatory models stems from the effects of consent requirements on research. Consent requirements can be excessively complex, especially where the data involved are not sensitive and might be put to general, open use. They can also lead to selection bias (the systematic distortion of research results due to statistically irremediable deviations from a normal sample), which can seriously reduce the reliability of research [15][16][17]. Significantly, both ethics and the law allow for consent waivers to avoid these problems if the research in question can be shown to involve only minimal risks [18]. Using a blockchain-based data access management system and multiparty secure computing could reduce the risks of much non-interventional research to minimal, since data would remain at their origin and not be subject to additional breach risks.
Prosent: enabling bidirectional research requests
Consent protects the autonomy of patients and research subjects by allowing them to refuse unwanted treatment or participation in research. However, consent does not enable individuals to go beyond what is offered in terms of participation or interventions. Patients and research subjects might wish to exercise their autonomy by sharing other data or sharing data with other trusted research or healthcare entities. This would be possible through the prosent feature enabled by blockchain.
As manifested by the powerful trend of crowd and citizen science, many people outside the traditional research ecosystem have both the means and the willingness to gather and contribute important data [19, 20]. Indeed, citizens generally hold positive views about data sharing for public benefit research [21, 22]. Prosent could help further this trend by allowing data owners to identify each other and request data and/or participation of other citizens or scientists.
By analogy with the word consent, we propose a novel term that captures this ability: prosent. The prefix 'con-', originally derived from Latin cum ('with'), refers to a joining or togetherness in the present tense. By contrast, the prefix 'pro-' expresses both a positive affirmation (eg, prochoice) and a forward-looking aspect (eg, prospect). Just as consent implies a current acceptance (con) of some feeling or thinking (sentio), so prosent implies a forward affirmation (pro) of an emotion or cognition (sentio).
Prosent leverages several of the affordances of blockchain to enable much greater communication between stakeholders. In a hypothetical health research ecosystem, there might be several groups of distinct stakeholders. These might include data subjects and data owners (whether individual, institutional or commercial); various data generators (including individual patients, patient advocacy groups, grassroots databases and individuals or institutions with skills in data aggregation and/or scraping); and various data acquirers (individual healthcare professionals, third-party information services, charities, both public and private institutions, hospitals and research centres, the pharmaceutical and actuarial industries, various government actors, and interested private individuals, not to mention interested loved ones). These many stakeholders have varying degrees of motivation, insight and expertise. In the current system, few of these are leveraged; typically, the physician or institution owns the data and does not share it, except with close colleagues. However, using blockchain, it becomes possible to open up this data exchange to others who may have relevant expertise. Each entity would be represented by a pseudonymous identifier. This identifier could include information which verifies their status (eg, as healthcare professional, institution or individual) in a privacy-preserving manner, such that, though it cannot be said whom a particular identifier represents, it can be determined what their status is. Thus, a patient suffering from a rare disease might include information on her profile to that effect. This would enable others, whether patients suffering from the same disease or researchers interested in advancing research, to locate a potential patient without compromising the identity of anyone involved.
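A sketch of how such pseudonymous matching might work; the identifiers, attribute tags and sharing preferences are all hypothetical:

```python
# Pseudonymous prosent matching: profiles are keyed by opaque identifiers and
# carry verifiable attribute tags, so a requester can locate entity *types*
# without learning identities.
profiles = {
    "0x9f2a11": {"status": "patient", "condition": "rare-disease-X",
                 "will_share_with": {"academic"}},
    "0x41c7aa": {"status": "patient", "condition": "rare-disease-X",
                 "will_share_with": {"academic", "commercial"}},
    "0x77de03": {"status": "researcher", "affiliation_type": "academic"},
}

def prosent_request(condition, requester_type):
    """Return pseudonyms of patients with `condition` willing to share
    with this type of requester."""
    return [pid for pid, p in profiles.items()
            if p.get("condition") == condition
            and requester_type in p.get("will_share_with", set())]

print(prosent_request("rare-disease-X", "academic"))    # both patients
print(prosent_request("rare-disease-X", "commercial"))  # only the second
```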
This architecture would enable all kinds of interesting interactions between stakeholder groups. A patient with a rare disease but some resources might take it on themselves to request the data from all other patients with that rare disease who can be located pseudonymously through the blockchain. They might then issue a prosent request for the data, and, if obtained in sufficient quantities, they could then release a prosent request to an institution or individual or groups of researchers to carry out analyses on these data.
Also, fascinatingly, individuals could indicate that they wish to acquire, generate or sell data only under certain circumstances. For example, a patient suffering from an orphan disease might add an identifier to their profile, such that they can be located and contacted (still pseudonymously). Another patient might add a different kind of identifier to their profile, perhaps indicating a willingness to share their data, but only with certain entities and not others. We might imagine that many would be happy to share their data for important epidemiological work, but less enthused about doing the same (without remuneration) for commercial research. Alternatively, an individual could indicate that they only wish to share with researchers from their own social, ethnic or cultural background. Finally, monetary or healthcare incentives could be offered for making data available, although this would lead to interesting questions concerning the correct levels of regulation for a healthcare data market, and on what money can and should not buy.
Thus, we use the term 'prosent' to refer to the bidirectional research requests enabled by blockchain technologies. Below, we briefly sketch some of the features enabled by prosent mechanisms. This is not an exhaustive list.
For one, many stakeholders who can benefit from data access but who have previously been left out of the research ecosystem can exercise a greater degree of control over data: not only their own, but also, by pooling or acquiring the data of others, for example to establish a database of rare diseases. Second, these stakeholders can interact with each other pseudonymously, such that they can locate each other as entity types but not as uniquely identifiable entities and can communicate without privacy concerns. Third, various stakeholders respond to various incentives, and using prosent it is possible to offer this variety; data release could be conditioned on the aim of the study involved, the researchers or patients involved, whether or not money or healthcare has been offered and who the likely main recipients of the benefits are.
Although this is a rough description and there have been several other calls for data marketplaces, we believe the concept of prosent has not been adequately captured in the extant scholarship. The possibilities of opening up science to interested, powerful and numerous stakeholders under controllable conditions are vast. Thus, prosent, especially when coupled with smart contracts, has implications both for justice and for beneficence.
Justice
Certain populations are under-represented in research, 23 due to worries ranging from additional susceptibility to health risks to the ability to give voluntary consent. 24 Others may be reluctant to trust in biomedical researchers due to historical factors. 25 26 Still others have very rare medical conditions. 27 28 Finally, many individuals belong to groups that are less able than others to pay for the advancement of their interests, making them less attractive targets for pharmaceutical and other medical companies. 29 30 Prosent mechanisms offer a novel and potentially powerful means of re-engaging individuals from these communities. Using the pseudonymity of a blockchain-based, prosent-enabled data exchange, groups of individuals with similar conditions could find each other via pseudonymous profiles which could contain tags indicating the preferences and interests of that user.
Smart contracts and beneficence
As mentioned above, smart contracts are pieces of code layered on top of a blockchain which execute automatically when certain conditions are met.
Publication bias, resulting from the preferential publication of positive results, is a known problem in biomedicine. 31 Using smart contracts, it would be possible for stakeholders to agree at the beginning to release the data to a public trials registry, which would then happen automatically on study completion. Similarly, publicly minded patients might condition their consent on such data release, either to the public or to themselves; if such a condition were not met, the smart contract would invalidate that person's consent.
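A minimal sketch of such a release-or-invalidate rule is given below, written in plain Python rather than an on-chain contract language; the field names and the simple completion condition are our own assumptions, not a reference implementation.

```python
# Sketch of a consent contract that releases results to a public registry on
# study completion and voids consent if the release condition is not met.
from dataclasses import dataclass

@dataclass
class ConsentContract:
    participant: str              # pseudonymous identifier
    consent_valid: bool = True
    results_released: bool = False

    def finalize(self, study_complete: bool, registry: list) -> None:
        """Would execute automatically once the completion condition is met."""
        if study_complete and not self.results_released:
            registry.append(f"results for participant {self.participant}")
            self.results_released = True
        if not self.results_released:
            self.consent_valid = False   # release condition violated

public_registry: list = []
contract = ConsentContract(participant="0xabc...")
contract.finalize(study_complete=True, registry=public_registry)
print(public_registry, contract.consent_valid)
```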
Smart contracts could also automatically trigger a request for reconsent in cases of major protocol changes. For example, it is crucial that primary and secondary outcomes are specified in the protocol before the conduct of the study and that they are not subsequently changed or manipulated. 32

In addition, the recording of consent on a blockchain could be fully transparent, visible and auditable for relevant stakeholders through dedicated public websites. Finally, blockchain could be used to put consent management in the hands of the patients. Patients who wish to revoke or modify their consent could do so directly via a transaction on the blockchain, without relying on a third party, such as the study administrator, to document the modification. This is not a small opportunity, since failure to obtain or document consent is a known problem in clinical research. One review of FDA records found a failure to protect subjects and/or obtain informed consent in 53% of cases studied. 31 Blockchain could be used to make the documentation of consent both transparent and traceable. 8 Smart contracts could be used to freeze patient data or the progression to the next protocol phase, thus predicating the release of data on unequivocal documentation of consent. 7

Similarly, blockchain technologies can be used to document other key components of the protocol. Any revisions to the protocol would be timestamped and transparent, reducing the incentive for fraudulent changes. These might include the data-sharing plan, the version of the analytical code used at the outset and subsequent modifications to it, and documentation of obtained consent. 7

Looking towards the future, smart contracts could be used to automate several important bottlenecks in medical research. In theory, smart contracts in combination with secure multiparty computing and the OPAL principles could lead to a situation in which trial results are uploaded immediately on trial completion by one smart contract, and then processed and integrated into existing systematic reviews, all fully automated and much quicker than the current laborious process. Again, these suggestions are not exhaustive; smart contracts could be used in the recall and monitoring of defective drugs and medical devices; in the provenance of surgical tools; in the automated transfer of patient information to relevant healthcare providers in cases of emergency or relocation; and many more.
Decentralisation and storage of data
In the MIT/OPAL and Enigma frameworks, as well as many other worthwhile projects, a revolutionary way of preserving privacy for epidemiological research is developing. According to this paradigm, data are decentralised in the sense that they never leave their original location. Rather, an algorithm is 'pushed' to the data and performs calculations on an encrypted version of those data. The aggregate calculations are then summed up to achieve an aggregate answer to a query, in which no individually identifiable information has been used at any stage. If this were combined with automatic statistical integration into the known body of medical knowledge, no human would at any time see the sensitive data, radically reducing any privacy concerns.
This contrasts with a situation in which a researcher 'pulls' data, that is, takes data in raw form from many separate data sources and aggregates a database. This traditional way of doing things has some advantages, but is vulnerable to catastrophic breach risk, since any breach will affect a very large number of records.
Future challenges
We have argued that the use of blockchain technologies can improve autonomy, justice and beneficence in biomedical research. These improvements in the biomedical research process are likely to lead to increased trust, and through trust, we may hope, greater patient engagement in research, benefiting everyone. 6 However, several challenges need to be met before this potential can be realised.
The most salient issue is that of implementation. Blockchain technologies are novel and systems for implementing some of the above recommendations remain at the proof of concept stage. 7 8 At the time of writing, interaction with blockchains still requires some level of cryptographic literacy. This presents a barrier to its adoption by patients and healthcare professionals. So far, there is no user-friendly solution to this problem, at least when enforcing the use of public blockchains, which we consider ought to be the default solution. The use of private blockchains should be restricted to cases of necessity. Table 4 lists some challenges and possible solutions. In addition, the use of smart contracts will require interdisciplinary skill sets. For smart contracts to function properly, both developmental expertise and legal know-how are required. Similarly, to ensure smooth function of blockchain-based solutions, some medical professionals may have to acquire basic knowledge of the technology. To reap the full benefits of blockchain-enabled solutions, attention needs to be paid to the importance of developing such interdisciplinarity.
For ethical and methodological reasons, as well as for scaling up the usage of blockchains, it is absolutely crucial that the principles behind the open source movement be embraced in this context. There is still much knowledge to be gained on how, for instance, to make smart contracts work properly. If the community at large agrees to cooperate and share its code and knowledge openly, progress is likely to happen under the right conditions of transparency and methodological quality, and also more rapidly than if the task fell to private groups of individuals. Thus, advocacy for open source and knowledge sharing is needed for blockchain technologies to be implementable in the near future.
The implementation of blockchain technologies promises many benefits for biomedical research in general and consent procedures in particular. The whole effort now will be to move from these early ideas to actual implementation. Because this is a whole new field, these efforts will have to include significant investment in the development of necessary skills for relevant stakeholders. However, we are convinced that the fruits of such investment will be more than worth the effort.
Non-equilibrium interplay between gas-particle partitioning and multiphase chemical reactions of semi-volatile compounds: mechanistic insights and practical implications for atmospheric modeling of PAHs

Jake Wilson1, Ulrich Pöschl1, Manabu Shiraiwa2,*, and Thomas Berkemeier1,*
1Multiphase Chemistry Department, Max Planck Institute for Chemistry, Mainz, Germany
2Department of Chemistry, University of California, Irvine, CA, USA
Correspondence: Thomas Berkemeier<EMAIL_ADDRESS>and Manabu Shiraiwa<EMAIL_ADDRESS>
Introduction
Polycyclic aromatic hydrocarbons (PAHs) are air pollutants that are structurally characterized by their fused aromatic ring systems (Keyte et al., 2013). Given their carcinogenic properties (Boström et al., 2002), developmental toxicity (Billiard et al., 2008) and abundance in the environment (Ravindra et al., 2008), PAHs pose a risk to human health (Kim et al., 2013). PAHs are semi-volatile compounds that may exist in the gas phase, adsorbed on the surface of aerosol particles, or absorbed into the bulk of aerosol particles. Atmospheric aerosols are suspensions of nano- to micrometer-sized particles in ambient air. Typical atmospheric aerosol particles include sea salt, mineral dust, sulfate, and organic particles (Pöschl, 2005).
Two key types of organic particles include soot, formed during fossil-fuel combustion, and secondary organic aerosols (SOA), formed from condensation of organic vapors in the atmosphere. The mass transfer and distribution of PAHs between the gas phase and particle phase is referred to as gas-particle partitioning. For PAHs, an accurate model description of gas-particle partitioning is needed to interpret monitoring data, determine atmospheric burden and lifetime, and ultimately assess the hazards their emissions pose to human health. Moreover, after inhalation, the distribution of a semi-volatile compound between the gas phase and the particle phase can determine its bioaccessibility (Liu et al., 2017; Wei et al., 2020). While gas-phase PAHs can directly partition into the epithelial lining fluid of the lung, particle-phase PAHs first have to dissolve from a matrix and may hence be less bioaccessible (Lammel et al., 2020).
In equilibrium, the flux of a semi-volatile species from the gas phase to the particle surface is equal to the flux that desorbs back into the gas phase. The state of the system at equilibrium can be described mathematically with thermodynamic entities (Junge, 1977; Yamasaki et al., 1982; Pankow, 1994). The equilibrium gas-particle partitioning of semi-volatile compounds is determined by both adsorption onto the particle surfaces and absorption into the particle bulk. The relative contributions of these processes depend on the concentrations, composition and phase state of particles. Physicochemical properties of a compound, such as the octanol-air partition coefficient K_OA (Finizio et al., 1997; Harner and Bidleman, 1998), the soot-air partition coefficient K_SA (Dachs and Eisenreich, 2000; Lohmann and Lammel, 2004) and Abraham descriptors (Arp et al., 2008; Shahpoury et al., 2016) are typically used to predict the position of equilibrium. In terms of surface adsorption, soot or black carbon particles may be especially relevant for the gas-particle partitioning of PAHs as they exhibit large energies of desorption. In chemical transport models (CTMs), PAHs are partitioned, transformed and transported in discrete time steps, often using the method of operator splitting. With operator splitting, the partitioning equilibrium is restored at each model time step through instantaneous equilibration (Galarneau et al., 2014) rather than treating gas-particle partitioning continuously. PAH concentrations predicted by CTMs have been shown to depend on the employed treatment of gas-particle partitioning. For instance, Lammel et al. (2009) found that using different equilibrium partitioning models influenced the atmospheric cycling, total environmental fate and long-range transport potential of PAHs. Friedman et al. (2014) found that implementing a partitioning scheme in which PAHs slowly evaporate from aerosol particles yielded better agreement between observed and simulated concentration and partitioning data compared to the instantaneous equilibration approach. Overall, CTMs that assume equilibrium partitioning tend to be more common than those accounting for mass-transfer limitations explicitly, as can be seen from a recent review of partitioning methods in regional-scale transport models of SOA (McFiggans et al., 2015).
Equilibration timescales of gas-particle partitioning may be estimated theoretically, using analytical equations or numerical models. By solving analytical transport equations, the equilibration timescales of partitioning for volatile inorganic compounds were found to depend on the size of aerosol particles (Meng and Seinfeld, 1996). More recently, there have been numerical simulations for SOA as a function of temperature and relative humidity (Shiraiwa and Seinfeld, 2012; Li and Shiraiwa, 2019). Alternatively, equilibration timescales may also be obtained experimentally. For example, Saleh et al. (2013) found the equilibration timescale of SOA formed by α-pinene ozonolysis to be less than 30 min following a perturbation in temperature. Furthermore, the interplay between partitioning and multiphase reaction of OH with alkanes was shown to influence the distribution of product isomers (Zhang et al., 2015).
For PAHs, several studies have investigated the timescales of gas-particle partitioning from the perspective of absorptive partitioning. Rounds and Pankow used a radial diffusion model to investigate the kinetic limitations of partitioning resulting from diffusion of a semi-volatile compound absorbed within a particle (Rounds and Pankow, 1990). Odum et al. (1994) additionally included a parameter to account for mass-transfer limitations at the surface. In chamber experiments, Kamens et al. (1995) examined the equilibration timescales of PAHs. However, an in-depth analysis of the important case of PAH adsorption onto the surface of soot remains elusive. In recent years, the desorption rate coefficients of PAHs from soot have been experimentally parameterized over a range of atmospherically-relevant temperatures (Guilloteau et al., 2010). However, a systematic comparison between the equilibration timescales of partitioning and the timescales of loss processes has not been carried out.
In this study, we use a kinetic model to 1) examine the timescales of gas-particle partitioning for six PAHs and 2) investigate the chemical loss of PAHs by explicitly coupling the partitioning and oxidation chemistry of the PAH pyrene. The model uses the conventions of the PRA (Pöschl-Rudich-Ammann) framework (Pöschl et al., 2007) and is based on the kinetic double-layer model for aerosol surface chemistry (K2-SURF; Shiraiwa et al., 2009). We quantify the equilibration timescales of six PAHs on the model surface of solid soot particles for different temperatures and particle number concentrations (Sect. 3.2).
We illustrate how the combination of slow partitioning and chemical loss of PAHs can perturb the particulate fraction from equilibrium (Sect. 3.3.2) and alter chemical lifetime (Sect. 3.3.1) in the example of the PAH pyrene. We detail how a dominant loss of pyrene from the particle phase may decrease the particulate fraction. Likewise, in the case of dominant loss of pyrene from the gas phase, the particulate fraction would increase. Compared to instantaneous partitioning, which would conserve equilibrium particulate fractions, chemical lifetime may be affected through depletion of pyrene in the more reactive phase.
We apply the knowledge gained from the kinetic model calculations to the description of gas-particle partitioning in CTMs by comparing the explicitly-coupled solution to a method mimicking operator splitting with instantaneous equilibration and evaluate the performance of both methods in different scenarios.
Kinetic Model
A modified version of the kinetic double-layer model K2-SURF is used for all simulations (Shiraiwa et al., 2009). The original K2-SURF model consists of a near-surface gas phase and surface layer, with gas diffusion from the far-surface gas phase represented by a correction factor (Fig. A1). In this study, we added explicit treatment of gas diffusion to K2-SURF to track gas-phase PAH concentrations. PAHs reversibly desorb and adsorb between the aerosol particle surface and the near-surface gas phase. The rate coefficient for PAH desorption from the particle surface, k_des = A e^(−E_A/RT) (s^−1), depends on temperature T and two parameters determined from experiment: the Arrhenius factor A and the activation energy of desorption E_A (Guilloteau et al., 2010). R is the gas constant. Aerosol particles are assumed to be monodisperse and to consist of a spherical, impenetrable solid carbon core. The system is closed with respect to aerosol particles and PAH in all simulations. In simulations involving chemical reactions, the system is open with respect to oxidants, i.e. gas-phase OH and O3 concentrations are fixed.
The differential equations in Eqs. 1-3 describe the time evolution of [PAH]_g, [PAH]_gs and [PAH]_s, which are the number concentrations (i.e. the number of molecules per unit volume or unit area) of PAH in the gas phase, near-surface gas phase and on the surface of aerosol particles, respectively. J_des (cm^−2 s^−1), J_ads (cm^−2 s^−1) and J_diff (s^−1) are the desorption flux, adsorption flux and gas diffusion flux. Each flux term is described in detail in section A of the appendix.
The surface area of a single aerosol particle with diameter d_p (cm) is d_p^2 π, V_gs (cm^3) is the volume of gas in the near-surface gas phase for a single aerosol particle and N_p (cm^−3) is the particle number concentration. L_g (cm^−3 s^−1) is the rate of chemical loss in the gas phase and L_s (cm^−2 s^−1) is the rate of chemical loss in the particle phase. Reactions of PAHs within the near-surface gas phase are considered to be negligible due to the small fraction of PAHs in this volume. Sources of PAHs are not considered in this study.
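For illustration, a reduced two-compartment version of this mass balance, which omits the near-surface gas layer so that adsorption acts directly on the gas-phase concentration, can be integrated as follows. All rate coefficients are illustrative placeholders, not the parameters of Table A1.

```python
# Reduced sketch of the coupled balance equations (cf. Eqs. 1-3): gas phase
# and particle surface only; k_ads and k_des are first-order rate
# coefficients (s^-1) and L_g, L_s are first-order chemical losses.
from scipy.integrate import solve_ivp

k_ads = 1.0e-3   # gas -> surface, proportional to available particle surface
k_des = 3.0e-3   # surface -> gas, Arrhenius-type (cf. Eq. A9)
L_g = 0.0        # chemical loss in the gas phase (s^-1)
L_s = 0.0        # chemical loss in the particle phase (s^-1)

def dydt(t, y):
    pah_g, pah_p = y                      # number concentrations (cm^-3)
    j_ads = k_ads * pah_g                 # adsorption flux
    j_des = k_des * pah_p                 # desorption flux
    return [j_des - j_ads - L_g * pah_g,  # d[PAH]_g/dt
            j_ads - j_des - L_s * pah_p]  # d[PAH]_p/dt

sol = solve_ivp(dydt, (0.0, 3600.0), [5e5, 0.0])
phi = sol.y[1, -1] / (sol.y[0, -1] + sol.y[1, -1])
print(f"particulate fraction after 1 h: {phi:.2f}")   # -> k_ads/(k_ads+k_des)
```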
Chemical reactions
The surface reaction between pyrene and O3 is modeled using a Langmuir-Hinshelwood mechanism, including reversible adsorption of O3 onto the surface of aerosol particles and reaction of surface-adsorbed O3 with surface-adsorbed pyrene.
The rate coefficient for reaction of pyrene with O3, k_PAH+O3 = 2.7 × 10^−17 cm^2 s^−1, and the corresponding mass-transport parameters are taken from Shiraiwa et al. (2009) (Table A1). Reaction products are treated as inert and non-volatile. Note that the reaction between O3 and benzo(a)pyrene on the surface of soot has been suggested to involve the formation of long-lived reactive oxygen intermediates (ROI; Shiraiwa et al., 2011). Such a detailed chemical mechanism is beyond the scope of this study, which instead focuses on the interaction of partitioning and chemistry, and has thus been omitted for simplicity. The desorption rate coefficients of both pyrene and O3, which are temperature dependent and explicitly included in the model, are expected to be the main driver of sensitivity in the model with regard to temperature. It should be noted that the accommodation coefficient and surface layer reaction rate coefficient may also exhibit temperature dependence, but without further quantitative parameters these dependences cannot be included in the model. The gas-phase reaction between O3 and pyrene is considered negligible and is therefore not included (Keyte et al., 2013). The reaction between pyrene and OH is accounted for in both the gas phase and on the surface of particles.
The gas-phase reaction between pyrene and OH is modeled with the rate coefficient k_PAH+OH = 6.58 × 10^−11 cm^3 s^−1 (theoretical calculation at 298 K, Zhang et al., 2014). The reaction between pyrene and OH on the surface of particles is treated with an Eley-Rideal-like mechanism using a surface reaction probability of 0.32 (obtained for pyrene, Bertram et al., 2001) and assuming an OH gas diffusion coefficient D_g of 0.21 cm^2 s^−1 (Tang et al., 2014). The temperature dependence of the gas-phase OH reaction of PAHs has been found experimentally to be 'slight to nonexistent' (Brubaker and Hites, 1998). Likewise, the reaction probability of OH on a pyrene surface has been found to exhibit only a slight temperature dependence (Liu et al., 2012). We therefore do not include temperature dependence of chemical rate coefficients in this model. The uptake of OH onto the surface of particles is considered to be irreversible.
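To make these magnitudes concrete, the short sketch below converts an OH mixing ratio to a number density and evaluates the corresponding first-order gas-phase loss rate of pyrene; the choice of 280 K and 1 atm is illustrative only.

```python
# Gas-phase pyrene + OH loss rate from the rate coefficient quoted above.
K_B = 1.380649e-23        # Boltzmann constant (J K^-1)
K_PAH_OH = 6.58e-11       # cm^3 s^-1 (Zhang et al., 2014)

def number_density(mixing_ratio: float, T: float = 280.0,
                   p: float = 101325.0) -> float:
    """Convert a mixing ratio (e.g. 0.1 ppt = 1e-13) to molecules cm^-3."""
    n_air = p / (K_B * T) * 1e-6    # air number density in cm^-3
    return mixing_ratio * n_air

oh = number_density(0.1e-12)        # 0.1 ppt OH, typical daytime value
loss_rate = K_PAH_OH * oh           # first-order loss coefficient (s^-1)
print(f"gas-phase lifetime ~ {1.0 / loss_rate / 3600.0:.1f} h")
```

The full model lifetime additionally reflects partitioning and the surface reaction, so this back-of-the-envelope value differs somewhat from the coupled results reported below.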
Particulate fraction
The measured distribution of PAHs (and other semi-volatiles) between the particle phase and the gas phase is commonly described with the particulate fraction Φ, i.e. the fraction of total PAHs associated with aerosol particles (Eq. 4).
The total concentration of PAH adsorbed on the surface of aerosol particles, [PAH]_p (cm^−3), is the product of the surface area of a single particle d_p^2 π with diameter d_p, the particle number concentration N_p, and the surface concentration of PAH, [PAH]_s (cm^−2; Eq. 5).
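A direct transcription of these relationships into code might look as follows; the standard form Φ = [PAH]_p / ([PAH]_g + [PAH]_p) is assumed for Eq. 4, and the input values are illustrative only.

```python
# Particulate fraction (Eq. 4) and particle-phase concentration (Eq. 5).
import math

def pah_particle_conc(pah_s: float, d_p: float, n_p: float) -> float:
    """[PAH]_p (cm^-3) from surface concentration [PAH]_s (cm^-2),
    particle diameter d_p (cm) and number concentration N_p (cm^-3)."""
    return math.pi * d_p**2 * n_p * pah_s

def particulate_fraction(pah_g: float, pah_p: float) -> float:
    """Phi = [PAH]_p / ([PAH]_g + [PAH]_p)."""
    return pah_p / (pah_g + pah_p)

pah_p = pah_particle_conc(pah_s=1.0e9, d_p=50e-7, n_p=1e4)   # illustrative
print(f"Phi = {particulate_fraction(pah_g=4e5, pah_p=pah_p):.3f}")
```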
Equilibration timescale τ_eq
To quantify the time for PAHs to reach their equilibrium distribution between the gas phase and the particle phase, we use the equilibration timescale (τ_eq), defined as the e-folding time for the relaxation of the system to gas-particle partitioning equilibrium.
Figure 1 shows results from a kinetic box model simulation with a concentration of pyrene in air of 5×10^5 cm^−3, temperature T = 298 K, particle number concentration N_p = 1×10^4 particles cm^−3, and particle diameter d_p = 200 nm. No chemical loss of pyrene is included here. τ_eq is obtained numerically from model outputs by interpolating the time required by the system to achieve 1 − 1/e (i.e. ≈ 63.2 %) of the difference ∆Φ between an initial particulate fraction Φ_0 and the equilibrium particulate fraction Φ_eq.
In this example, pyrene reaches Φ_eq after ≈ 2 minutes and the equilibration timescale is independent of the initial particulate fraction, i.e. τ_eq is the same regardless of whether Φ_0 = 0.1 or Φ_0 = 0.9. In fact, τ_eq is found to be independent of the choice of Φ_0 for most conditions due to the first-order and hence mono-exponential nature of the adsorption and desorption processes.
This allows for consistent intercomparison across different temperatures and particle number concentrations without changing starting distributions. Exceptions may occur in cases where surface adsorption is not strictly a first-order process, either due to surface saturation effects or gas-phase diffusion limitations. These conditions occur at very low particle number concentrations (typically < 1 × 10^3 particles cm^−3) and further details are given in Appendix B.
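The interpolation procedure described above can be implemented in a few lines. In the sketch below, the mono-exponential test trace is synthetic and serves only to verify that the extracted τ_eq matches the imposed relaxation time.

```python
# Numerical extraction of tau_eq from a simulated Phi(t) trace: the time at
# which the system has covered 1 - 1/e (~63.2 %) of the gap between Phi_0
# and Phi_eq. The trace is assumed to have equilibrated by its last point.
import numpy as np

def equilibration_timescale(t: np.ndarray, phi: np.ndarray) -> float:
    phi_0, phi_eq = phi[0], phi[-1]
    target = phi_0 + (1.0 - np.exp(-1.0)) * (phi_eq - phi_0)
    if phi_eq >= phi_0:                                    # rising trace
        return float(np.interp(target, phi, t))
    return float(np.interp(target, phi[::-1], t[::-1]))   # falling trace

# Synthetic relaxation with tau = 120 s, Phi_0 = 0.9, Phi_eq = 0.24.
t = np.linspace(0.0, 1800.0, 2000)
phi = 0.24 + (0.9 - 0.24) * np.exp(-t / 120.0)
print(f"tau_eq = {equilibration_timescale(t, phi):.0f} s")   # ~120 s
```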
Results and discussion
Extreme cases of multiphase chemistry and partitioning interaction
Three extreme cases can be formulated when partitioning and chemical-loss processes of a semi-volatile compound take place at different relative timescales (Fig. 2).

Figure 1. Relaxation of the particulate fraction Φ towards its equilibrium value from initial values Φ_0 = 0.1 and Φ_0 = 0.9 (blue). The equilibration timescale τ_eq is defined as the time required for the system to achieve 63.2 % of ∆Φ, the difference between Φ_0 and Φ_eq.
When the timescale of partitioning is short compared to the timescales of chemical loss, molecules are redistributed quickly between both phases (case A, Fig. 2). In this case, the relative amounts of gas- and particle-phase species will remain very close to their equilibrium values (Φ ≈ Φ_eq). This is independent of whether molecules are lost primarily from the gas phase or from the particle phase.
In contrast, if the timescale of the partitioning process is slow and the chemical loss rates from the gas and particle phase differ substantially, the particulate fraction will be perturbed from its equilibrium value (cases B and C, Fig. 2). When the loss rate in the gas phase exceeds the loss rate in the particle phase, the particulate fraction increases beyond its equilibrium value (Φ > Φ_eq; case B, Fig. 2). However, when the loss rate in the particle phase is greater than that in the gas phase, the particulate fraction decreases (Φ < Φ_eq; case C, Fig. 2).
In real systems, unlike in these idealized scenarios, chemical loss and partitioning timescales may not differ substantially, and chemical losses are likely to take place in both phases simultaneously. Every real system must therefore be seen as a superposition of these cases. The extent to which perturbation occurs depends upon the difference between partitioning and chemical reaction timescales.
An in-depth discussion on the magnitude of perturbation is provided in Sect. 3.3.
Hence, two preconditions are required for the particulate fraction Φ of the system to be perturbed from the equilibrium particulate fraction predicted by equilibrium partitioning theory, Φ_eq: 1) slow partitioning relative to the timescale of chemical loss and 2) an imbalance of chemical loss between the gas and particle phases.
If timescales of chemical loss and partitioning were known for all natural systems, they could be classified and mathematically treated in the respective limiting case. In this manuscript, we: 1) estimate the partitioning timescales of PAH on soot as a function of atmospheric conditions, 2) compare these timescales to typical chemical loss rates in order to investigate whether perturbations from equilibrium exist, and 3) explore the implications of treating partitioning and chemistry separately in chemical transport models.

Figure 2. Schematic on how the gas-particle partitioning equilibrium of a semi-volatile compound may be perturbed from an initial state (center) due to chemical loss, depending on equilibration timescales. If the timescales of partitioning are shorter than the timescales of chemical loss, the system is able to maintain equilibrium (A). However, the combination of rapid gas-phase loss and slow replenishment from the particle phase increases the particulate fraction above the equilibrium value (B). In turn, the combination of rapid particle-phase loss and slow replenishment by condensation decreases particulate fraction below the equilibrium value (C).
Partitioning equilibration timescales for PAHs on soot
τ_eq depends on the molecular structure of the PAH, the particle number concentration and temperature. This is explored in the following section with a series of simulations using a fixed total concentration of PAHs in air of 5×10^5 cm^−3 and particles with a diameter of 50 nm.
Particle number concentration
The effect of varying the particle number concentration N_p on the equilibration timescale shows a distinct behavior (Fig. 3a): τ_eq is particle number-independent at lower N_p, while τ_eq is particle number-dependent at higher N_p. The equilibration timescales of the less strongly adsorbed PAHs, including anthracene, fluoranthene and pyrene, are not significantly affected by particle number concentration until a fairly high threshold particle number concentration is reached (≈ 10^5, 10^4 and 10^4 particles cm^−3, respectively).
Once the threshold particle number concentration is reached, a linear relationship in the double logarithmic dependence of equilibration timescale and particle number concentration emerges. The more strongly adsorbed PAHs, chrysene, benzo(e)pyrene and benzo(a)pyrene, reach this limit at a much lower N_p (≈ 10^2 particles cm^−3). This can be understood when looking at Fig. 3b, in which the equilibration timescale of pyrene is shown together with the individual timescales of desorption τ_des (gray dashed line, calculated with Eq. 6) and adsorption τ_ads (gray dotted line, calculated with Eq. 7).
In the limit of an adsorbate-free surface, adsorption and desorption are first-order processes with respect to the near-surface gas and surface concentrations of PAH, respectively, and can therefore be described with rate coefficients k_ads (s^−1) and k_des (s^−1). The desorption timescale τ_des depends on the Arrhenius factor A and the activation energy for desorption from the aerosol particle surface (E_A), the gas constant R and temperature T. The adsorption timescale τ_ads depends on the surface accommodation coefficient on an adsorbate-free substrate α_s,0, the particle number concentration N_p, the surface area of a single aerosol particle d_p^2 π with diameter d_p and the mean thermal velocity ω. The surface coverage θ_s is very small for typical particle number concentrations and will therefore be neglected in the following. In general, τ_eq can be approximated as a function of both timescales according to Eq. 8 (see appendix for details of the terms and derivation). This approximation holds as long as gas diffusion is sufficiently fast and does not limit equilibration.
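The following sketch collects these timescale estimates. The harmonic combination used for τ_eq is our reading of Eq. 8 (the standard relaxation time of a reversible first-order system) and should be treated as an assumption, as should the numerical inputs, which are not the fitted parameters of Table A1.

```python
# Timescale estimates in the spirit of Eqs. 6-8 for an adsorbate-free
# surface (theta_s ~ 0). Units: A (s^-1), E_A (J mol^-1), omega (cm s^-1),
# n_p (cm^-3), d_p (cm).
import math

R = 8.314   # gas constant (J mol^-1 K^-1)

def tau_des(A: float, E_A: float, T: float) -> float:
    """Desorption timescale 1/k_des with k_des = A exp(-E_A/RT) (Eq. 6)."""
    return 1.0 / (A * math.exp(-E_A / (R * T)))

def tau_ads(alpha_s0: float, omega: float, n_p: float, d_p: float) -> float:
    """Adsorption timescale, assuming k_ads = alpha_s0 (omega/4) N_p pi d_p^2."""
    return 4.0 / (alpha_s0 * omega * n_p * math.pi * d_p**2)

def tau_eq(t_des: float, t_ads: float) -> float:
    """1/tau_eq = 1/tau_des + 1/tau_ads: the faster process controls."""
    return 1.0 / (1.0 / t_des + 1.0 / t_ads)

t_d = tau_des(A=1e13, E_A=90e3, T=298.0)                       # illustrative
t_a = tau_ads(alpha_s0=1.0, omega=1.8e4, n_p=1e4, d_p=50e-7)   # illustrative
print(f"tau_des = {t_d:.0f} s, tau_ads = {t_a:.0f} s, "
      f"tau_eq = {tau_eq(t_d, t_a):.0f} s")
```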
If one process (desorption or adsorption) dominates the behavior of τ_eq, the system can be said to fall into an adsorption-controlled regime (highlighted for pyrene with blue shading) or a desorption-controlled regime (highlighted with red shading).
A multi-step process in which mass is lost and transferred in one direction can be described analogously to a series of resistors in an electrical circuit, and the term limiting can be used to describe the slowest step. In contrast, the gas-particle partitioning system is a reversible system in which mass is transferred in both directions and the relative rates of these mass-transfer processes determine the position of equilibrium. We therefore observe in Fig. 3b (and also Fig. 4b) that the equilibration time is determined primarily by the fastest process (i.e. that with the shortest timescale). We thus adopt the term controlled to characterize this behavior.
In the low particle number concentration limit, the system is in a desorption-controlled regime and the equilibration timescale is thus strongly influenced by the strength of the PAH-soot interaction, which explains the large differences in equilibration timescale between PAHs in Fig. 3a. In the high particle number concentration limit, the equilibration timescale is determined primarily by the adsorption of PAH onto particles from the near-surface gas phase and is therefore independent of PAH type, as can be seen from the convergence of curves in Fig. 3a. The equilibration timescale here coincides with the adsorption timescale τ_ads and the system is in an adsorption-controlled regime. The transition between both regimes occurs where τ_ads intersects τ_des and coincides with the point Φ_eq = 0.5. At this specific point, equal amounts of PAH are in the gas and particle phases and the timescales of desorption and adsorption contribute equally to the equilibration time.
As surface coverages θ_s are very small and PAHs generally have surface accommodation coefficients on an adsorbate-free substrate of α_s,0 = 1 (Julin et al., 2014), we find in this study a special case of the adsorption-controlled regime where molecular collision of gas molecules is the sole controller of partitioning. For adsorbates with lower α_s,0, the adsorption timescale would be longer and the system may be in the desorption-controlled regime.

Figure 3. Equilibration timescale τ_eq as a function of particle number concentration N_p, highlighting the transition between adsorption-controlled and desorption-controlled behavior.
The effect of varying temperature T on the equilibration timescale shows a behavior similar to the one seen for the particle number concentration (Fig. 4a): τ_eq is temperature-independent at low T, while τ_eq is temperature-dependent and begins to decrease at higher T. For the most weakly adsorbed PAH, anthracene, τ_eq begins decreasing at 240 K towards higher T and at 298 K is already less than 5 s. The equilibration timescales for fluoranthene and pyrene begin decreasing at ≈ 260 K and at 298 K are both less than 100 s. Strongly adsorbed PAHs including chrysene, benzo(e)pyrene and benzo(a)pyrene do not undergo a significant change in equilibration timescale in the investigated temperature range.
Again, the adsorption-controlled and desorption-controlled regimes explain this behavior (Fig. 4b). Between 210 K and 240 K, PAH molecules possess little kinetic energy and are prevented from escaping into the gas phase, thus exhibiting long desorption lifetimes (Fig. A3) and high equilibrium particulate fractions (Fig. A2b). As most PAH is adsorbed on the surface of aerosol particles, molecular collision determines the equilibration time. The system is in the adsorption-controlled regime, highlighted for pyrene with blue shading and signified by the coincidence with the adsorption timescale τ_ads (gray dotted line). The number of collisions between gas-phase PAHs and particles slightly increases as the thermal velocity increases, but this effect is much smaller compared to the effect of temperature increase on desorption rates. Note that the surface accommodation coefficient is assumed to be temperature-independent in this study. Overall, upon increase in temperature, the desorption process becomes increasingly important. At high temperature, the system is in the desorption-controlled regime, highlighted for pyrene with red shading in Fig. 4b and signified by the coincidence with the timescale of desorption τ_des (gray dashed line).
Interplay of multiphase chemistry and partitioning
Chemical reactions with O3 and OH are important loss processes for PAHs. If the rate of chemical loss is fast relative to gas-particle partitioning, the gas-particle distribution may be perturbed from its equilibrium state (cf. Fig. 2, cases B and C).
This effect is exemplified for pyrene by including surface chemistry with O3 (0, 1, 10 and 100 ppb) or gas-phase and surface chemistry with OH (0, 0.01, 0.1 and 1 ppt) in the model. 10 ppb is representative of surface background O3 concentrations (Vingarzan, 2004), while 100 ppb O3 is characteristic of concentrations at more polluted sites (Wang et al., 2017). An OH concentration of 0.01 ppt is representative of concentrations measured at night (Geyer et al., 2003), while 0.1 ppt is representative of daytime concentrations (Stone et al., 2012) and 1 ppt is an upper limit only encountered in highly polluted conditions (Hofzumahaus et al., 2009) and smoke plumes (Hobbs et al., 2003). We employ the following conditions in the pyrene-soot system: T = 280 K, N_p = 1×10^3 particles cm^−3, d_p = 50 nm. At the start of the simulation, the initial total concentration of pyrene (5×10^5 cm^−3) is distributed between the gas and particle phases according to the particulate fraction expected at equilibrium (i.e. Φ_0 = Φ_eq = 0.24).

The effect of the O3 surface reaction can be understood by observing the change in particulate fraction over time during each of the simulations (Fig. 5b). As each simulation proceeds, the particulate fraction Φ drops below the equilibrium particulate fraction Φ_eq and eventually reaches a quasi-steady state Φ_qs. At O3 concentrations of 10 ppb and 100 ppb, the particulate fractions reach values of Φ_qs = 0.18 and 0.05, respectively. This effect can be explained by slow partitioning: chemical loss reduces the surface concentration of pyrene faster than its replenishment from the gas phase (non-equilibrium case C in Fig. 2). In the quasi-steady state, chemical loss and repartitioning are balanced. Importantly, both values differ significantly from Φ_eq. In contrast, when the O3 concentration is low enough (1 ppb), the particulate fraction remains approximately equal to its value at equilibrium (Φ ≈ Φ_eq = 0.24). At this O3 concentration, the rate of partitioning is sufficiently high so that pyrene lost from the particle surface can be fully replenished from the gas phase (equilibrium case A in Fig. 2). Hence, non-equilibrium behavior increases with oxidant concentration.

Figure 5c shows the decrease in total concentration of pyrene due to the simultaneous gas and surface reactions with OH.
The lifetimes of pyrene at OH concentrations of 0.01, 0.1 and 1 ppt are 18.9 h, 1.9 h and 0.2 h, respectively. Nearly identical lifetimes are obtained if partitioning is assumed to be infinitely fast, thus indicating that non-equilibrium effects on chemical lifetime are insignificant for this system. Figure 5d shows that, in contrast to the behavior of the O3 system, the highest concentration of OH perturbs the particulate fraction to a quasi-steady state above its equilibrium value. The particulate fraction reaches a quasi-steady state with a value of Φ_qs = 0.37 at 1 ppt OH. Although chemical loss takes place in both phases simultaneously, the turnover of pyrene is higher in the gas phase. The particulate fraction thus increases, characteristic of the non-equilibrium case B in Figure 2. At lower concentrations of OH, the extent of the perturbation becomes only slight (Φ_qs = 0.25 at 0.1 ppt) and eventually disappears (0.01 ppt in Fig. 5d). Hence, non-equilibrium effects on particulate fraction can be significant, even if they are insignificant for chemical lifetime. This is due to the short reaction timescale of the OH-pyrene system compared to its partitioning timescale: pyrene reaches 1/e of its initial concentration before the quasi-steady state is established.
Visualization of non-equilibrium effects with phase portraits
The dynamic behavior of the system may be visualized as a trajectory in the phase space of gas-phase and particle-phase pyrene concentrations, [PAH]_g and [PAH]_p (Fig. 6). At every point in the phase portrait, a vector illustrates how the system would change with time. Here, the direction of each vector arrow indicates the extent to which pyrene is being lost or transferred between phases, and its length indicates the rate of change. The exact characteristics of the phase portrait depend on temperature, available particle surface area, the strength of the PAH-soot interaction and the rate of the chemical reactions involved. For a system where pyrene partitions without chemical loss, all trajectories converge onto a central line at which the system stops changing, known in mathematics as the nullcline (Fig. 6a). This line represents the point of gas-particle partitioning equilibrium. The slope of the line represents the dimensionless gas-particle partitioning coefficient K_p, from which the equilibrium particulate fraction Φ_eq can be obtained (Eq. 9).
As seen previously, chemical reactions may cause perturbation of the partitioning equilibrium. Such a perturbation would be indicated by deviation of trajectories from the nullcline in the phase portrait. The difference between perturbed and equilibrium system is depicted for [O3] = 100 ppb in Fig. 6b. The vector field fundamentally changes and the trajectory of an exemplary simulated system (red solid line) does not converge to the nullcline obtained in Fig. 6a, despite starting at equilibrium conditions Φ_eq (shown as black dashed line for reference). Instead, the trajectory converges onto a central trajectory termed the slow manifold (Fraser, 1988). All trajectories in this system (represented with gray dotted lines) converge towards this manifold, irrespective of initial conditions. After approaching the slow manifold, the trajectory proceeds towards the origin (i.e. full depletion of pyrene) with a constant slope. This constant slope indicates that a constant quasi-steady-state particulate fraction Φ_qs = 0.05 has been reached. The deviation between the nullcline (Fig. 6a) and the slow manifold (Fig. 6b) can be used to indicate the extent of non-equilibrium effects in a multiphase chemical reaction system. For example, Fig. 6c shows that for the reaction with 1 ppt OH, the discrepancy between the simulation trajectory and the partitioning nullcline is much smaller due to simultaneous loss in the gas and particle phases. The slow manifold here runs above the partitioning nullcline and is reached only just before all pyrene is consumed (compare solid blue lines in Fig. 5). Fig. 6d shows how the nullcline and the slow manifolds above or below it can be interpreted using the diagrams in Fig. 2.
Implications for chemical transport models (CTMs)
In the previous sections, an explicit, coupled model of partitioning and chemistry is used. This means that mass-transport and chemical-loss processes are simultaneously evaluated in a set of differential equations. Hereafter, this is referred to as the explicit-coupled approach (EC). As the explicit-coupled approach is computationally expensive, CTMs often treat the partitioning and chemical loss of PAHs separately using operator splitting (Brasseur and Jacob, 2017). Instantaneous equilibration (IE) is one type of operator-splitting approach: at each model time step (∆t), the gas-particle distribution of PAH is reset to the partitioning equilibrium (estimated by temperature, particle number concentration, PAH, and particle type) and chemical loss is then further integrated separately, starting from the newly established equilibrium. Time steps of global and regional transport models used to study PAH are typically around 15 min (Galarneau et al., 2014) or 30 min (Sehili and Lammel, 2007).
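The IE scheme can be mimicked with a simple loop in which the partitioning is reset to equilibrium and first-order chemical loss is then integrated phase by phase; the equilibrium particulate fraction and the loss rate coefficients below are placeholders.

```python
# Sketch of instantaneous-equilibration (IE) operator splitting: at each
# time step Dt, 1) reset the gas-particle distribution to equilibrium and
# 2) integrate chemical loss separately in each phase.
import math

def ie_step(total: float, phi_eq: float, k_loss_g: float,
            k_loss_p: float, dt: float) -> float:
    """One operator-splitting step; returns the remaining total PAH (cm^-3)."""
    pah_p = phi_eq * total                 # 1) instantaneous equilibration
    pah_g = total - pah_p
    pah_g *= math.exp(-k_loss_g * dt)      # 2) per-phase chemical loss
    pah_p *= math.exp(-k_loss_p * dt)
    return pah_g + pah_p

total, dt = 5e5, 30 * 60   # 30 min time step (cf. Sehili and Lammel, 2007)
for _ in range(48):        # integrate 24 h
    total = ie_step(total, phi_eq=0.24, k_loss_g=0.0, k_loss_p=1e-4, dt=dt)
print(f"total PAH after 24 h: {total:.3g} cm^-3")
```

Comparing the output of such a loop against a fully coupled integration for different ∆t reproduces the qualitative behavior discussed in the next section.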
Influence of model time step length
The solution obtained using the IE approach can differ from the EC solution. Using the surface reaction of O3 (100 ppb) with pyrene in the particle phase, we demonstrate that the magnitude and sign of this difference varies between ∆t = 4 min, 8 min, 30 min and 1 h (Fig. 7a). The following conditions are used in the simulations: T = 280 K, d_p = 50 nm, N_p = 1×10^3 particles cm^−3.
The lifetime of pyrene is underestimated when using time steps ∆t of 4 and 8 min (Fig. 7a), but overestimated with ∆t of 30 min and 1 h. In this example, an optimal time step exists for which the deviation from EC is minimized, ∆t_opt = 19.9 min (Fig. 7a). It is close to the equilibration timescale of gas-particle partitioning τ_eq of pyrene, which is around 15 min under these conditions. τ_eq could thus serve as an initial guess for ∆t_opt. ∆t_opt is determined using a golden-section search optimization algorithm (Kiefer, 1953) to minimize the absolute difference between EC and IE model outputs. Of note, a deviation between IE and EC, and hence a dependence of the model result on the operator-splitting time step, only arises if a significant departure from partitioning equilibrium occurs. Under equilibrium partitioning conditions, a range of sufficiently short ∆t can describe the system accurately. In an example with OH (1 ppt) reacting with gas-phase and surface-bound pyrene, all IE calculations produce negligible errors, irrespective of the time step used (Fig. 7b). This is due to the particulate fraction being very close to Φ_eq until the majority of pyrene has reacted (cf. Fig. 5d).
Deviation from explicit-coupled (EC) approach
The discrepancy between the EC and IE solutions not only depends on the length of ∆t, but also on the relative rates of partitioning and chemical loss. In this section, this discrepancy is explored as a function of desorption rate and is therefore characteristic of a range of PAHs. The reaction rate coefficients of pyrene are used as best guess for generic PAHs. The discrepancy can be quantified with an error metric, E_loss, which can be interpreted as the relative difference in loss rates (Eq. 10). ∆[PAH]_EC(t) and ∆[PAH]_IE(t) are the accumulated losses of PAH at each time point t using EC and IE, respectively (Eqs. 11 and 12). This metric is chosen as it detects discrepancies in model solutions independent of the absolute turnover, which is important when comparing scenarios at high and low oxidant concentrations. E_loss ranges between -1 and 1, and is evaluated until either t_90% or t = 24 h is reached. E_loss is positive when the IE solution overpredicts the loss of PAHs compared to the reference EC solution, and negative when loss is underestimated.

Figure 8 shows the extent and direction of deviation of IE from EC in a case study of PAH surface chemistry in which the desorption rate coefficient k_des is varied between 5×10^−8 and 5×10^−1 s^−1, and the concentration of O3 between 0 and 120 ppb for IE time steps of ∆t = 1 min and ∆t = 30 min. When ∆t = 1 min (Fig. 8a), IE overestimates PAH loss compared to EC, indicated by red coloring. Deviation is largest when k_des is between 1×10^−4 and 1×10^−2 s^−1. Here, the IE time step of 1 min causes PAH to transfer onto particles at an artificially high rate. This increases the particle-phase concentration of PAH and results in faster chemical loss. The IE solution hence shows weaker non-equilibrium effects of slow partitioning on multiphase chemistry compared to the reference EC model. At the highest k_des (> 1×10^−2 s^−1), non-equilibrium effects of slow partitioning still occur, but in the desorption-controlled regime (cf. Fig. 4 at 280 K) an increase in k_des leads to a reduction in equilibration timescale. This not only leads to weaker non-equilibrium effects of slow partitioning in the EC solution, but also to a better match between EC equilibration timescale and IE time step. Hence, the discrepancy between the IE and EC approach is reduced, as evident by the fainter red coloring. At low k_des (< 1×10^−5 s^−1), most PAH is located on the surface of particles at all times and re-partitioning of gas-phase PAH after depletion of particle-phase PAH is negligible. Thus, no deviation of the IE from the EC approach occurs.
In contrast, with a time step of ∆t = 30 min (Fig. 8b), the IE approach generally underestimates the loss of PAH compared to the EC approach, indicated by blue coloring. The largest underestimations are found at high k_des and high [O3]_g. Underestimation of PAH loss occurs because the re-partitioning induced by the longer IE time step of 30 min is slower than the true equilibration rate in the EC model. In this scenario, the IE approach thus leads to stronger non-equilibrium effects of slow partitioning compared to the EC model. When the equilibration timescale becomes shorter, at high k_des (> 1×10^−2 s^−1), the discrepancy between the IE and the EC solution further increases, especially at high [O3]_g. Notably, at k_des ≈ 1×10^−4 s^−1, the IE approach slightly overestimates loss of PAH at high [O3]_g. This is due to the EC equilibration timescale dropping below the equivalent of ∆t = 30 min just before non-equilibrium effects of slow partitioning vanish at the lowest k_des. In between, a zero contour (labeled '0') and hence no deviation between both methods occurs when k_des ≈ 1×10^−3 s^−1. Here, the IE approach matches the EC equilibration timescale by alternatingly underestimating and overestimating the concentration of PAH at different points of the simulation and a cancellation of errors occurs (cf. Fig. 7a). This case is distinct from the left zero contour at k_des ≈ 5×10^−5 s^−1, in which departure from equilibrium does not occur and both methods return truly identical results (compare Figs. A5a and A5c). Another region exists: for k_des < 1×10^−4 s^−1, the simulation proceeds for less than 30 min and therefore less than one IE time step is evaluated (Fig. A5). In this region partitioning effectively does not take place and we chose not to report the numerical value of E_loss.
E_loss is negligibly small when the concentration of [OH]_g is varied between 0 and 0.5 ppt with reaction in both the gas phase and on the surface of particles (Fig. A4a). Unlike the example with O3, PAH loss rates due to reaction with OH in each phase are similar enough that Φ is not perturbed far from Φ_eq under these conditions. Often, in chemical transport models, only the gas-phase reaction of PAH with OH is included. Varying the concentration of OH between 0 and 0.5 ppt with reaction in the gas phase only causes IE to overestimate the loss of PAH (red area, Fig. A4b).
To estimate errors for global models, it is also informative to present the discrepancy between the EC and IE approaches as a percentage difference. In Fig. 8 for instance, when E_loss = -0.5, a model using the IE assumption underestimates the loss of PAH by 34 % compared to the EC solution (Fig. 8a). Likewise, if E_loss = 0.1, a model using the IE assumption overestimates the loss of PAH by 23 %. In order to fully quantify the error introduced by the instantaneous equilibration assumption, it would be necessary to implement a non-equilibrium partitioning scheme directly into a CTM.
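A hedged sketch of such an error metric is given below. The normalisation by the summed accumulated losses is an assumption chosen to bound the metric to [-1, 1] as described for Eq. 10; the published form may differ in detail.

```python
# Sketch of an E_loss-style metric (cf. Eq. 10): positive values mean the
# IE solution overpredicts the accumulated loss relative to EC.
import numpy as np

def e_loss(dloss_ec: np.ndarray, dloss_ie: np.ndarray) -> float:
    """dloss_*: accumulated PAH losses at matching time points."""
    num = np.sum(dloss_ie - dloss_ec)
    den = np.sum(dloss_ie + dloss_ec)
    return float(num / den)

# Illustrative accumulated-loss curves: IE lags EC by 20 % at all times.
t = np.linspace(0.0, 24.0, 100)              # hours
dloss_ec = 1.0 - np.exp(-t / 6.0)            # arbitrary EC loss curve
dloss_ie = 0.8 * dloss_ec                    # IE underestimates the loss
print(f"E_loss = {e_loss(dloss_ec, dloss_ie):+.2f}")   # negative value
```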
It should be noted that alongside the gas-phase and particle-phase concentrations of PAHs, CTMs are often evaluated by comparing the predicted particulate fraction and partitioning coefficient to observational data. Both of these metrics depend on the relative concentrations of gas-phase and particle-phase PAHs. Therefore, due to error propagation, the particulate fraction and partitioning coefficient may be more sensitive than absolute concentration to the effects of a non-equilibrium partitioning scheme.
Atmospheric implications
This study shows that the chemical loss of polycyclic aromatic hydrocarbons (PAHs) and their partitioning between the gas and particle phases are closely interlinked. The equilibration timescales of adsorptive partitioning are quantified for six PAHs on the surface of soot. Our model predicts that equilibration timescales range from seconds to hours depending on temperature, available particle surface area and molecular structure of the PAH. We highlight the molecular processes governing this timescale with two regimes: adsorption-controlled and desorption-controlled partitioning.
Soot constitutes only a fraction of total ambient aerosol particles (Pöschl, 2005). Thus, a logical next step will be to investigate how equilibration timescales vary for other types of particle surfaces. For example, given the weaker desorption energies of PAHs such as anthracene on the surface of NaCl (75.3 kJ mol^−1) compared to soot (87.9 kJ mol^−1; Chu et al., 2010), one would expect equilibration timescales on NaCl to be shorter.
On other particle types, PAH molecules can undergo absorptive partitioning by diffusing through surface layers into the bulk of the particle. For secondary organic aerosol (SOA), the particle phase state influences the rates of condensation and evaporation (Shiraiwa and Seinfeld, 2012). Equilibration timescales for PAHs are therefore also expected to be dependent on particle composition and humidity. For absorptive partitioning, the equilibration timescales of PAHs are expected to be even longer than the equilibration timescales for adsorptive partitioning. Alongside the contributions to the equilibration timescale from the adsorption- and desorption-controlled regimes, absorptive partitioning is also controlled by the diffusion of PAHs through the bulk of aerosol particles. The complex interplay of partitioning and reaction in the gas and particle phases plays a critical role in the growth of SOA particles (Shiraiwa et al., 2013; Berkemeier et al., in review, 2020) and departure from partitioning equilibrium adds to this complexity (Cappa and Wilson, 2011). However, the role of bulk diffusion in determining equilibration timescales is beyond the scope of this study and will be investigated in a follow-up study that builds on the framework provided here.
Chemical reaction of pyrene with O3 on the surface of particles perturbs the particulate fraction from partitioning equilibrium at atmospherically-relevant oxidant concentrations. As the extent of this perturbation increases with the concentration of O3, the largest deviations from equilibrium particulate fraction are likely to take place in the most polluted air. The reaction of pyrene with OH in both phases results in much smaller perturbations. In general, the biggest deviations from equilibrium particulate fraction are expected for low-volatility PAHs when atmospheric conditions induce slow partitioning (i.e. cold temperatures and low particle number concentrations). Other chemical-loss processes may also be important for PAHs, such as the reaction with NO3 in both the gas phase and the particle phase (Gross and Bertram, 2008), as well as aqueous-phase photodegradation processes (Fasnacht and Blough, 2002). These reactions must eventually be studied simultaneously in order to establish whether loss in both phases balances out or whether perturbation from equilibrium takes place.
Using existing observational datasets, it may be possible to establish how the size of the deviation in particulate fraction (between observed values and the predictions of equilibrium partitioning models) depends on the concentrations of OH, O3, NO3 and other perturbing variables. This could help identify and compare key perturbing variables in a real-world setting.
It should also be noted that while in this study simulations involving chemical loss are initialized at the point of equilibrium (Φ_0 = Φ_eq), in reality PAHs may be emitted in a state far from partitioning equilibrium. Depending on the prevailing loss processes, such an effect could either enhance or inhibit perturbations from equilibrium.
Non-chemical loss processes, such as dry and wet deposition, remove PAHs from the gas and the particle phases at different rates and may also cause perturbations from partitioning equilibrium. The fastest loss processes, i.e. those operating at the shortest timescales, will cause the greatest perturbations. In the case of polybrominated diphenyl ethers (PBDEs), Li et al. (2015) attempted to include the effect of loss by deposition on partitioning and derived an equation for the partitioning coefficient assuming a steady state rather than equilibrium. However, given that for PAHs the estimated lifetimes due to dry deposition (1 to 14 days) and wet deposition (5 to 15 months; Škrdlíková et al., 2011) are much longer than lifetimes due to chemical loss (in this study less than 24 h), chemical loss is expected to be the loss process that is most likely to perturb the partitioning equilibrium.
The methodology described in this study is universally applicable to semi-volatile compounds on solid surfaces if mass-transfer parameters and chemical reaction rate coefficients are available. In some cases it may be necessary to estimate these parameters; for example, desorption energies could be obtained from quantum mechanical calculations in which graphene surfaces serve as a model for soot.
Such values are already available for PBDEs (Ding et al., 2014) and other small organic molecules (Lazar et al., 2013).
It has to be noted that an explicitly coupled solution of partitioning and chemical loss is computationally too expensive for inclusion in typical regional and global CTMs. Hence, alternative algorithms would be highly desirable. Knowledge about the position of the partitioning steady state in the presence of chemical loss (as indicated by the slow manifold that can be visualized in a phase portrait of gas- and particle-phase concentrations) could be used to develop such a method for global and regional models.
Appendix A
The flux of PAH molecules from the gas phase to the near-surface gas phase, J_diff, with concentrations [PAH]_g and [PAH]_gs, respectively, is calculated with Eq. A1.
The gas-phase diffusion coefficient D_g is fixed at 0.06 cm2 s−1 for all PAH compounds, based on measurements for anthracene and pyrene in nitrogen (Siddiqi et al., 2009). d_p is the diameter of aerosol particles. The mean free path λ is defined in Eq. A2.
The mean thermal velocity of a molecule ω depends on temperature T and its molar mass M (Eq. A3).
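The equations themselves are referenced but not shown above; for reference, a minimal reconstruction assuming the standard gas-kinetic forms used in the Pöschl et al. (2007) framework (the exact Eqs. A2 and A3 of this study may differ in prefactors) is:

$$\lambda = \frac{3D_g}{\omega} \qquad (\text{A2, assumed form})$$

$$\omega = \sqrt{\frac{8RT}{\pi M}} \qquad (\text{A3, assumed form})$$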
The adsorption flux J ads of molecules from the near-surface gas phase to the particle phase is described using Eq. A4.
$$J_{ads} = \alpha_{s,0}(1-\theta_s)\,J_{coll} \qquad (\text{A4})$$
The surface accommodation coefficient on an adsorbate-free substrate, α_{s,0}, describes the probability that a molecule adsorbs upon collision with an adsorbate-free aerosol particle and is assumed to be α_{s,0} = 1 for PAH molecules (Julin et al., 2014). The surface coverage θ_s depends on the effective molecular cross sections σ_PAH and σ_O3 of PAH and O3, respectively (Eq. A6). In order to estimate σ, each benzene-like ring of a PAH molecule is assumed to occupy 2 × 10−15 cm2. The collision flux J_coll, i.e. the flux of molecules colliding with the surface, is defined in Eq. A7.
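A hedged sketch of the corresponding expressions, assuming the standard sorption-layer treatment (the exact Eqs. A6 and A7 of this study may differ):

$$\theta_s = \sigma_{PAH}[\mathrm{PAH}]_s + \sigma_{O_3}[\mathrm{O_3}]_s \qquad (\text{A6, assumed form})$$

$$J_{coll} = \frac{\omega\,[\mathrm{PAH}]_{gs}}{4} \qquad (\text{A7, assumed form})$$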
The temperature-dependent desorption flux (J_des) due to PAH molecules evaporating from the surface of an aerosol particle depends on the rate coefficient for desorption (k_des) and the surface concentration of PAH, [PAH]_s (Eq. A8).
k des depends on the Arrhenius factor (A) and the activation energy for desorption from the aerosol particle surface (E A ; Eq. A9).
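Written out, these two relations follow directly from the definitions given in the text (a sketch with the equation labels as referenced):

$$J_{des} = k_{des}[\mathrm{PAH}]_s \qquad (\text{A8})$$

$$k_{des} = A\,\exp\!\left(-\frac{E_A}{RT}\right) \qquad (\text{A9})$$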
The temperature dependence of the desorption rate coefficient was previously determined for seven PAHs on fresh kerosene soot (Guilloteau et al., 2008, 2010) and the obtained parameters are implemented in this model (see Table A1). These activation energies of desorption for PAHs on soot are consistent with those obtained theoretically on pure graphene (Lechner and Sax, 2014) and coronene (Kubicki, 2006). It should also be noted that different types of soot can have different effects on gas-particle partitioning (Mader and Pankow, 2002) and more aged soot may have a reduced affinity for PAH. Despite the simplifications of this model, we aim to provide a basis to which further complexity can be added.
Irreversible reactions between pyrene and either O3 on the surface of aerosol particles or OH in the gas phase and on the surface of aerosol particles are investigated with the model. The equations of mass transport for O3 are identical to those for PAH and the corresponding parameters are reported in Tables A1 and A2. As the uptake of OH is considered to proceed via an Eley-Rideal mechanism, the diffusion of OH from the gas phase to the near-surface gas phase is treated using a gas-phase diffusion correction factor C_g,OH (Eq. A10). The full equation for C_g,OH can be found in Eq. 14 of Pöschl et al. (2007).
$$[\mathrm{OH}]_{gs} = C_{g,OH}[\mathrm{OH}]_g \qquad (\text{A10})$$
The rate of PAH loss from the particle surface due to chemical reaction with OH, L_s,OH, depends on the probability γ_OH,PAH that a reaction occurs following collision of OH with PAH on the particle surface (Eq. A11). The rate of gas-phase PAH loss by OH, L_g,OH, and the rate of PAH loss from the surface due to reaction with O3, L_s,O3, are defined by Eq. A12 and Eq. A13, respectively. Further details of these reactions and their parameters can be found in section 2.2.
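As these loss-rate equations are referenced but not shown, a hedged reconstruction (assuming the standard Eley-Rideal and second-order forms of the Pöschl-Rudich-Ammann framework; the exact Eqs. A11–A13 of this study may differ) is:

$$L_{s,OH} = \gamma_{OH,PAH}\,\frac{\omega_{OH}[\mathrm{OH}]_{gs}}{4}\,\sigma_{PAH}[\mathrm{PAH}]_s \qquad (\text{A11, assumed form})$$

$$L_{g,OH} = k_{g,OH}[\mathrm{OH}]_g[\mathrm{PAH}]_g \qquad (\text{A12, assumed form})$$

$$L_{s,O_3} = k_{s,O_3}[\mathrm{O_3}]_s[\mathrm{PAH}]_s \qquad (\text{A13, assumed form})$$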
Appendix B: Derivation of equation for equilibration time
An approximate equation for equilibration time τ_eq (s) is obtained analytically using the relaxation time of a simple reversible reaction (Bernasconi, 1976). The equation approximates the numerically-obtained results from the kinetic model and is derived by assuming that gas-particle partitioning can be described as two first-order processes, adsorption and desorption, with rate coefficients k_des (s−1) and k_ads (s−1). We find there is good agreement between the numerically-obtained results and the approximate equation as long as the gas diffusion flux J_diff does not significantly affect gas-particle partitioning (i.e.
[PAH]_g ≈ [PAH]_gs) and surface crowding effects do not significantly inhibit adsorption of PAH onto the surface (i.e. θ_s is small). Both sides are integrated (Eq. B11) and the equation rearranged (Eq. B12).
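A minimal sketch of the relaxation-time argument behind this derivation, assuming only the two first-order processes named above (the intermediate Eqs. B1–B12 are not reproduced here):

$$\frac{d[\mathrm{PAH}]_g}{dt} = -k_{ads}[\mathrm{PAH}]_g + k_{des}[\mathrm{PAH}]_s$$

A small perturbation δ from the equilibrium concentrations then decays exponentially,

$$\delta(t) = \delta(0)\,e^{-(k_{ads}+k_{des})t}, \qquad \tau_{eq} = \frac{1}{k_{ads}+k_{des}},$$

so the equilibration time is set by the sum of the adsorption and desorption rate coefficients, as in the relaxation kinetics of Bernasconi (1976).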
"Physics"
] |
Phonon modes in a Möbius band
It is well known that phonon modes become sensitive to the geometry of an object when the phonon wavelengths are comparable to the object's physical length scale. In contrast, the sensitivity of phonon modes toward topology is much less explored and understood. In this paper we discuss the effects of topology on phonon modes using a finite-thickness Möbius band of centerline radius a as the model system. The phonon modes are derived using the xyz algorithm based on Riemannian geometry. From the boundary conditions and parity we identify two sets of modes with wave numbers q = n/(2a) described by odd and even integers n. Modes characterized by odd integers have flexural vibrations whereas those characterized by even integers exhibit dilatational and shear/torsional motion. While the phonon dispersion at large wave numbers agrees with that of structures having simple topology (rings and wires), at low frequencies and wave numbers the Möbius topology introduces significant differences. Uniquely, we find three of the four phonon branches do not go to zero frequency with decreasing wave number, but converge on a finite frequency. We identify a new form of vibrational pattern resembling incomplete breathing modes and discuss the ramifications of the modified spectrum, including a local increase in the density of states and the existence of a phonon band gap.
Introduction
Möbius strip topology has fascinated artists and scientists since the middle of the nineteenth century, when the structure was first identified by Listing and Möbius [1][2][3][4]. The mechanical properties, equilibrium shape and surface geometry of this unique one-sided, single-boundary structure have been studied extensively [5,6]. In addition, a rich variety of research associated with the topology has been reported, including the stability of soap films [7], novel light polarization schemes [8], numerical calculations of Laplace-Beltrami eigenfunctions [9], and birdcage resonators that show half-integer harmonic behavior when configured as a Möbius strip [10]. Techniques have been developed to fabricate three-dimensional nanostructures with nontrivial topology [11][12][13][14] and the effects of topology on electron transport behavior have been discussed [15][16][17][18]. More generally, the consequences of topology, and the concept of topological order, have been the subject of extensive research on correlated electron systems, such as the fractional quantum Hall effect [19] and topological insulators [20,21].
At the nanoscale, geometric constraints give rise to phenomena related to phonon confinement, such as specific heat anomalies [22][23][24], the quantization of thermal conductance [25,26], and modifications to the Raman spectra of nano-crystalline materials [27]. The importance of low-frequency flexural modes on the thermal and mechanical properties of graphene sheets has been comprehensively studied [28], and tuning the flexural mode by modifying the geometry of graphene nano-ribbons has been considered as a means to control transport properties [29]. Also, the thermal conductivity has been shown to depend on topology, with Möbius strips displaying a lower conductivity than rings or ribbons; a result attributed to increased phonon-phonon scattering and localization [30].
In this paper we address the effect of topology on phonon modes in the elastic continuum regime. We derive the vibrational spectra of a finite-thickness Möbius strip modeled as a rectangular cross-section bar that is twisted axially by π radians and formed into a continuous structure by linking the ends. The centerline of the structure is assumed to be a perfect circle of radius a. For clarity, we term this model structure a Möbius band for the remainder of the paper. We do not consider chirality because it does not play a role in the phonon properties of individual, isolated Möbius bands. We first set up the model for the band and derive related structural parameters, such as the metric tensors and Christoffel symbols, based on Riemannian geometry. We consider the parity and boundary conditions for phonon modes specific to a Möbius band, and find an unusual coexistence of two classes of phonon mode with allowed wavelengths of 2πa divided by either integers or half-integers. Following previous work [31], the phonon modes are derived using the xyz algorithm [32] reformulated in terms of Riemannian geometry. We introduce a set of basis functions satisfying the boundary conditions in order to express the phonon mode displacements. A critical consideration is the need for displacement vector matching imposed by the π radian twist of a Möbius band. Comparing the calculated modes to those of a wire and a ring, we find the lowest frequency dispersion branches close to the zone center for a Möbius band have strikingly different characteristics. In the case of a wire (ring), four (two) branches converge to a frequency ω=0 as the wave number approaches zero. We show that for a Möbius band, one branch has ω=0 at the zone center while three branches converge around a finite frequency ω_g at zero wave number. The density of states is enhanced around ω_g, below which the spectral density becomes very low, giving rise to a phonon band gap not found in other closed structures, for example rings. Details of the phonon spectra close to the zone center are shown to be sensitive to elasticity, but insensitive to geometry. We discuss these sensitivities and conclude that topology rather than geometry is the dominant factor determining the low-frequency phonon spectrum of a Möbius band.
Möbius band and coordinate system
We consider a Möbius band modeled as a rectangular cross-section bar of single crystal that is twisted axially by π radians and is formed into a continuous structure by linking the ends, as shown in figure 1. The centerline C, denoted by the dashed line, is assumed to be a perfect circle of radius a.
We first introduce a coordinate system which rotates along C. X denotes a position on C and s is the arc length between that position and a prescribed origin on C. t, n and b are the tangential unit vector along C, the normal vector defined below, and the unit vector perpendicular to t and n. κ is the curvature and τ is the torsion. All of these are defined and related by the Frenet-Serret formulae,
$$\frac{d\mathbf{t}}{ds} = \kappa\mathbf{n}, \qquad \frac{d\mathbf{n}}{ds} = -\kappa\mathbf{t} + \tau\mathbf{b}, \qquad \frac{d\mathbf{b}}{ds} = -\tau\mathbf{n}.$$
Here n points toward the center of the circle and the curvature becomes κ = 1/a; b points along the direction normal to the plane of the circle, independent of s, so that τ = 0. Using n and b, we introduce a frame comprised of e_0 ≡ t and two unit vectors e_1 and e_2 which rotate with increasing s along C. Assuming a constant rate of rotation of the frame and putting θ = s/(2a), we define the unit vectors rotating with increasing s by
$$\begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \mathbf{n} \\ \mathbf{b} \end{pmatrix}. \qquad (2)$$
Supposing that the Möbius band is made of a material having a cubic crystal structure, for example copper or gold, the unit vectors e_0, e_1 and e_2 are set to be parallel to the crystal axes.
To describe the structure and the elastic equation of motion in terms of Riemannian geometry, a position in the Möbius band is written as $\mathbf{r}(x^0, x^1, x^2) = \mathbf{X}(x^0) + x^1\mathbf{e}_1 + x^2\mathbf{e}_2$, with coordinates connected to the rotating frame as shown in figure 1; the variables range over one circuit of C and over $-w \le x^1 \le w$, $-h \le x^2 \le h$. For the system to be physically feasible, the half-width w and half-height h should satisfy conditions ensuring that the band does not self-intersect. Geometric parameters such as the metric tensors $g_{ij}$ and Christoffel symbols $\Gamma_{ijk}$ follow from this parametrization and are summarized in the appendix.
XYZ algorithm
The xyz algorithm developed by Visscher [32] is a powerful method to numerically obtain the vibrational modes of a free-standing object. That work showed that the equation of motion for vibrations of a free-standing object becomes the same as the wave equation in a bulk material and that the exact solution automatically satisfies the boundary condition for a free surface. In the present work, we apply the method to a Möbius band. Since the method was originally developed in terms of Euclidean geometry, we reformulate it using Riemannian geometry [33].
To begin, we introduce contravariant and covariant displacement vectors $u^i$ and $u_i$, which are related by $u_i = g_{ij}u^j$. The strain tensor is defined by $\varepsilon_{ij} = \tfrac{1}{2}(\nabla_i u_j + \nabla_j u_i)$, where $\nabla_i$ denotes the covariant derivative. Assuming Hooke's law, the stress tensor $\sigma^{ij}$ is associated with $\varepsilon_{ij}$ via the stiffness tensor $C^{ijkl}$ as $\sigma^{ij} = C^{ijkl}\varepsilon_{kl}$. Using the strain and stress tensors, the Lagrangian of a Möbius band is written as a volume integral weighted by the Jacobian $\sqrt{g}$, where $g = |g_{ij}|$. From the variational principle and the boundary condition for a free surface, we have the following equation for a phonon mode with frequency ω:
$$\nabla_j\sigma^{ij} + \rho\,\omega^2 u^i = 0. \qquad (12)$$
We express the displacement vector using a series of basis functions $\Phi_\lambda$, $u^i = \sum_\lambda c^i_\lambda \Phi_\lambda$ (13), where λ stands for λ = {l, m, q} (l, m ≥ 0). In order to set up an equation for the coefficients $c^i_\lambda$, we first substitute equation (13) into (12) and multiply by the complex conjugate of $\Phi_{\lambda'}$. Integrating over the volume, we obtain equation (15), which is reformulated as a secular (generalized eigenvalue) equation (18). Numerically solving equation (18), we obtain the phonon spectra of Möbius bands and the corresponding displacement vectors.
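To make the structure of this secular problem concrete, here is a minimal, self-contained sketch (not the authors' code) of a Rayleigh-Ritz calculation of the same form, Γc = ω²Ec, applied to the toy problem of longitudinal modes of a free 1D rod; the material constants, rod length and basis size are illustrative assumptions.

# Minimal Rayleigh-Ritz sketch in the spirit of the xyz algorithm: expand the
# displacement in a basis, build the stiffness matrix Gamma and overlap/mass
# matrix E by quadrature, and solve the secular equation Gamma c = omega^2 E c.
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss
from scipy.linalg import eigh

rho, Y, L = 8960.0, 67e9, 1.0e-6   # copper-like density and Young's modulus; rod length (assumed)
nbasis = 12                         # number of Legendre basis functions (assumed)

t, w = leggauss(64)                 # Gauss-Legendre quadrature on [-1, 1]
x = 0.5 * L * (t + 1.0)             # map nodes to [0, L]
w = 0.5 * L * w

u = 2.0 * x / L - 1.0               # basis phi_l(x) = P_l(2x/L - 1)
phi = np.array([Legendre.basis(l)(u) for l in range(nbasis)])
dphi = np.array([Legendre.basis(l).deriv()(u) * 2.0 / L for l in range(nbasis)])

E = rho * (phi * w) @ phi.T         # overlap (mass) matrix  E_{ll'}
Gamma = Y * (dphi * w) @ dphi.T     # stiffness matrix       Gamma_{ll'}

omega2, c = eigh(Gamma, E)          # secular equation: Gamma c = omega^2 E c
freqs = np.sqrt(np.clip(omega2, 0.0, None))

# a free rod obeys omega_n = (n*pi/L)*sqrt(Y/rho): the printout should be ~0, 1, 2, ...
print(freqs[:5] * L / (np.pi * np.sqrt(Y / rho)))

The full Möbius-band problem replaces the 1D integrals with volume integrals weighted by √g and the scalar moduli with the stiffness tensor, but the generalized eigenproblem structure is the same.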
Parity and boundary condition
Each phonon mode component has a certain parity under inversion of the coordinates x^1 and x^2 associated with the structural symmetries. Because of the π radian twist in a Möbius band, the parity will also depend on matching the displacement vectors around the band. Since the bases are closely related to the parity, we examine the boundary condition to choose a suitable set of bases. As x^0 changes from 0 to 4πa along the centerline C, the frame rotates axially around e_0 by 2π radians and returns to its original orientation. The displacement vector then matches itself at the original position; in other words, all the phonon modes have a period of 4πa,
$$u_i(x^0 + 4\pi a, x^1, x^2) = u_i(x^0, x^1, x^2). \qquad (19)$$
In addition, since the frame rotates by π radians when x^0 increases from 0 to 2πa, the displacement vector at $(x^0 + 2\pi a, x^1, x^2)$ must match that at $(x^0, -x^1, -x^2)$. The frame rotation also inverts the displacement components u^1 and u^2. Putting these together, we have the following boundary conditions for the displacement components:
$$u_0(x^0+2\pi a, x^1, x^2) = u_0(x^0, -x^1, -x^2), \qquad u_{1,2}(x^0+2\pi a, x^1, x^2) = -u_{1,2}(x^0, -x^1, -x^2). \qquad (20\text{-}22)$$
Supposing an asymptotic case of radius extremely large in comparison with the wavelength, λ/a ≪ 1, the phonon modes of a Möbius band will closely resemble those of a rectangular wire, as understood from equation (18), which in the limit a → ∞ reduces to the secular equation for phonon modes in a rectangular wire [31]. A rectangular wire supports four kinds of phonon modes: a dilatational mode referred to as mode I, two flexural modes referred to as modes II and III, and shear/torsional modes referred to as mode IV. The shear and torsional modes are clearly separated only for a square wire, but are mixed for a rectangular wire. The displacement components of these modes are given by equations (23), with powers l and m equal to 0 or 1 that set the spatial symmetry of each displacement component. The combinations (l, m) for the phonon modes are summarized in table 1. Applying the boundary conditions to the phonon modes of a rectangular wire, i.e. putting equations (23) into (19), we obtain a condition on the wave number q,
$$q = \frac{n}{2a}, \qquad (25)$$
where n is an integer; the wave number q is thus discretized to integer multiples of 1/(2a). Since u_i changes its sign only for odd n when x^0 changes from 0 to 2πa, we obtain the parity for a change in x^0, and combining this with equations (20)-(22) it is found that the sum of powers N = l + m + n for each component must satisfy conditions (29)-(31). When n is an odd number (n_odd), l + m must be odd for u_0 and even for u_1 and u_2, and vice versa. As seen from table 1, modes II and III satisfy (29)-(31) only for n_odd, and modes I and IV satisfy the conditions only when n is an even number (n_even), including 0. From the relationship between wave number and wavelength, q = 2π/λ, and equation (25), the wavelength is given by the circumference 2πa divided by n/2,
$$\lambda = \frac{2\pi a}{n/2}.$$
When n = 0, the wavelength becomes infinite and the vibrations become uniform motions. Considering that both n_odd and n_even are possible for phonon modes in Möbius bands, modes II and III have a wavelength of the circumference divided by a half-integer while modes I and IV have that divided by an integer. It is unusual that the wavelength of modes II and III becomes a non-integer multiple of length for closed (ring-like) structures and that two different sets of phonon modes alternate as the wave number changes by 1/2a. These unusual properties are specific to phonon modes of Möbius bands. As the ratio λ/a increases, the curvature and torsion of a Möbius band will gradually modify or couple the phonon modes.
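As a compact restatement of the quantization argument, assuming the plane-wave dependence $e^{iqx^0}$ implied by equations (23):

$$e^{iq(x^0+4\pi a)} = e^{iqx^0} \;\Rightarrow\; q\cdot 4\pi a = 2\pi n \;\Rightarrow\; q = \frac{n}{2a}, \qquad \lambda = \frac{2\pi}{q} = \frac{2\pi a}{n/2}, \quad n\in\mathbb{Z},$$

so the 4πa periodicity, rather than the 2πa circumference, is what admits the half-integer wavelength relationships.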
Modifications of phonon modes from those of wire will be most important in the low frequency region near the zone center when the wavelengths become comparable to, or larger than, the circumference of a Möbius band, as shown below.
Phonon modes in a Möbius band
At large wave numbers the calculated spectra agree well with the phonon dispersion branches of a wire. In contrast, and of primary interest, at low frequency and wave number the acoustic phonon branches disagree markedly with the dispersion relations of a wire. Figure 3 is a magnified region of figure 2 revealing the detailed spectral distribution of the acoustic branches in the low frequency region near the zone center, and introduces four characteristic frequencies ω_a, ω_b, ω_c and ω_d. A noticeable feature is that the frequencies of the two lowest branches associated with modes II and III for qa < 10 deviate significantly from the parabolic dispersion relations of a wire and have finite magnitudes of the characteristic frequencies (figure 3) ω_b and ω_c, which are comparable to $\omega_g \approx v_t/a$ at small wave numbers. Here v_t is the sound velocity of transverse waves, defined by $v_t = \sqrt{C_{44}/\rho}$. The frequencies ω_b and ω_c at qa = 0.5 can be estimated numerically; for copper, ω_g and ω_{b,c} coincide accidentally.
The spectra associated with mode IV also deviate from the linear dispersion relation of a wire and the frequencies become larger than those of mode I, where the frequency at q = 0 is estimated as $\omega_a = \sqrt{C_{11}/(2\rho)}\,/a$.
On the other hand, the spectra associated with mode I almost coincide with the linear dispersion relation of a wire, and the frequency at qa = 1 is estimated as $\omega_d = \sqrt{Y/\rho}\,/a$, where Y is Young's modulus, given by $Y = C_{11} - 2C_{12}^2/(C_{11}+C_{12})$ for cubic materials. For copper, ω_c is the minimum of these frequencies, and as there are no modes with a lower finite frequency, a band gap of characteristic frequency ω_c occurs.
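As a short worked check, assuming the textbook thin-rod result that the dilatational branch of a wire is linear, $\omega(q) = \sqrt{Y/\rho}\,q$, one has $\omega_d = \omega(q{=}1/a) = \sqrt{Y/\rho}\,/a$. With textbook elastic constants for copper ($C_{11}\approx 168$ GPa, $C_{12}\approx 121$ GPa, $\rho\approx 8960$ kg m$^{-3}$), $Y\approx 67$ GPa and $\sqrt{Y/\rho}\approx 2.7$ km s$^{-1}$, so for an illustrative radius $a = 100$ nm one obtains $\omega_d \approx 2.7\times 10^{10}$ rad s$^{-1}$.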
In order to consider the details of the band gap we note that ω_a, ω_b and ω_c depend on C_11, but only ω_d depends on Young's modulus, suggesting that ω_d could possibly be lower than ω_a, ω_b and ω_c for materials other than copper. To parametrize the possibility of a band gap, we introduce the ratio f = ω_d/ω_b. For materials with f > 1 a phonon band gap exists, with modes II, III, and IV providing a high density of states around ω_b and ω_c, below which the spectral density is low. For materials with f < 1, mode I becomes the lowest non-zero energy state, somewhat analogous to having a state in the gap. Although ω_c would be better than ω_b for the definition of f, we use ω_b instead of ω_c since it is too complicated to express ω_c in a compact form. Aside from cases where ω_d falls between ω_b and ω_c, as for copper, f is a useful parameter to judge the occurrence of a phonon frequency gap. With f = 1.05 for aluminum and 1.18 for diamond, we have confirmed the frequency gaps numerically, as shown in figure 4(a) for diamond. On the other hand, gold (f = 0.82) has a mode with finite frequency below ω_d, as shown in figure 4(b).
Here we mention the dependence of these characteristic frequencies on the Möbius band thickness and width. The frequencies ω a , ω b , ω c and ω d are tolerant to changes of the cross sectional dimensions, although at higher wavenumbers the slopes of spectra of modes II and III depend strongly on thickness. It is apparent, therefore, that the fundamental properties of the frequency gap do not change with the geometry, suggesting they are characteristic of the topology.
Furthermore, we note that the frequency gap does not appear for other closed structures such as rings. Phonon modes of rings are derived by using the boundary condition $u_i(0, x^1, x^2) = u_i(2\pi a, x^1, x^2)$. While the high wave number phonon spectra of a ring agree well with the dispersion relations of wires, similar to the case for the Möbius bands, the ring spectra deviate from those of wires in the low frequency region. However, unlike the Möbius band, the ring does not have a frequency gap, since the spectra of the two lowest branches are almost the same as modes II and III of wires and tend to ω = 0 with decreasing wave number, as shown in figure 5 [35,36]. This further supports the conclusion that the frequency gap is a phenomenon characteristic of the Möbius band topology.
Vibrational patterns
From the finding that the phonon spectra of a Möbius band match the dispersion relations of wires at high frequencies or large wave numbers, we expect that the vibrational patterns of Möbius bands will also resemble those of wires. Figure 6 shows vibrational patterns at points A, B, C and D denoted in figure 2. It is apparent that they exhibit dilatational (A), torsional (D) and flexural (B and C) motions corresponding to those in the rectangular wire [31]. Here we pay attention to B and C, whose wavelengths are λ = 2πa/(19/2). We may say that the twisted structure of Möbius bands absorbs or compensates for the phase difference caused by the half wavelength, making such a wavelength realizable. Vibrational motions of modes at low frequencies are more specific to the Möbius band. Figure 7 shows vibrational patterns for modes (a) at q = 0, (b) and (c) at qa = 1/2 and (d) at qa = 1, denoted in figures 3 and 4. Because of the gentle spatial variation of the displacement vectors at small wave numbers, arrows are used to express the dynamical motions. Mode (a) exhibits uniform axial torsion along the circumference. Although the motion is common to the torsional mode of a wire, the frequency ω_a is finite, as is the case for the phonon modes in a ring referred to as 'torsional modes' in reference [36] (see figure 8(a)). Mode (d) consists of longitudinal waves whose wavelength matches the circumference, λ = 2πa, and leads to a mode with ω = 0 at q = 0. Thus the motion as well as the spectrum coincide with mode I of wires. We note that a ring does not support such a mode leading to ω = 0. The vibrational patterns of modes (b) and (c) at qa = 1/2 with λ = 4πa show new forms of vibration that resemble incomplete breathing motions, in contrast to the breathing mode of a ring shown in figure 8(b).
Summary and discussions
The phonon modes in a Möbius band with finite thickness have been derived. The Möbius band has been modeled as a rectangular bar axially twisted by π radians whose ends are perfectly linked. The phonon spectra and corresponding displacement vectors are obtained by means of the xyz algorithm that we have extended in terms of Riemannian geometry. As a result of mode parity and boundary conditions, we find that a Möbius band supports two distinct sets of allowed phonon wavelengths, characterized by odd and even integers. The wavelengths are given by λ = 2πa/(n/2), with odd integers corresponding to flexural modes II and III, and even integers to dilatational and shear/torsional modes I and IV. Importantly, this leads to integer and half-integer wavelength relationships being allowed, with the lowest frequency flexural mode (n = 1) having a wavelength of twice the centerline circumference and the lowest finite frequency dilatational mode (n = 2) having a wavelength equal to the circumference. It also follows that the nature of the modes alternates as the wave number changes by 1/2a. A structure that supports modes with integer and half-integer wavelength relationships is unusual and is a direct consequence of the nontrivial topology.
We have also shown that phonon modes with wavelengths much shorter than the centerline circumference coincide with the dilatational, shear/torsional and two flexural modes of a rectangular wire. In this limit the vibrational patterns closely match those of wires, with the twisted structure of a Möbius band accounting for the half-wavelength phase difference. Major differences between the phonon spectra of the Möbius band, wire and ring appear in the low frequency region near the zone center. For the Möbius band, three of the four acoustic branches do not go to ω = 0 with decreasing wave number; instead they converge toward a finite frequency around which the number of states locally increases, giving rise to the possibility of a phonon band gap. We have addressed the robustness of the gap with respect to geometry and have noted that a gap does not occur in other closed structures, such as rings, which have two branches with ω = 0 at q = 0. We have parameterized details of the gap in terms of characteristic frequencies that depend on the elasticity of the material. Similar to other mechanisms that modify the phonon spectral distribution, we expect that the distorted spectral distribution resulting from the topology will give rise to anomalous thermal properties, which will be discussed elsewhere.
Finally, we emphasize that the topology plays a significant role in the phonon modes. The phonon band gap is one of the significant characteristics caused by the Möbius topology. Finite systems, such as nano particles, also have phonon spectral gaps. In these cases the band gap is caused by phonon confinement, which is a size dependent effect [24]. Very interestingly, a ring does not have a phonon band gap in spite of the finite size since the lowest acoustic branch begins at ω=0 and increases parabolically with respect to wavenumber. The difference in spectra between the nano particles and rings is caused by the difference in topology, i.e. the existence of a hole. Likewise, the difference between the Möbius bands and rings is also due to the difference in topology, i.e. twisted or not-twisted. Thus, topology substantially affects the phonon modes in a finite system and introduces effects that are comparable with, or larger than, those originating from geometry.
Acknowledgments
We would like to express our appreciation to Miles P Blencowe for his helpful comments.
"Physics"
] |
MMP-9/Gelatinase B Degrades Immune Complexes in Systemic Lupus Erythematosus
Systemic Lupus Erythematosus (SLE) is a common and devastating autoimmune disease, characterized by a dysregulated adaptive immune response against intracellular antigens, which involves both autoreactive T and B cells. In SLE, mainly intracellular autoantigens generate autoantibodies and these assemble into immune complexes and activate the classical pathway of the complement system enhancing inflammation. Matrix metalloproteinase-9 (MMP-9) levels have been investigated in the serum of SLE patients and in control subjects. On the basis of specific studies, it has been suggested to treat SLE patients with MMP inhibitors. However, some of these inhibitors induce SLE. Analysis of LPR−/−MMP-9−/− double knockout mice suggested that MMP-9 plays a protective role in autoantigen clearance in SLE, but the effects of MMP-9 on immune complexes remained elusive. Therefore, we studied the role of MMP-9 in the clearance of autoantigens, autoantibodies and immune complexes and demonstrated that the lack of MMP-9 increased the levels of immune complexes in plasma and local complement activation in spleen and kidney in the LPR−/− mouse model of SLE. In addition, we showed that MMP-9 dissolved immune complexes from plasma of lupus-prone LPR−/−/MMP-9−/− mice and from blood samples of SLE patients. Surprisingly, autoantigens incorporated into immune complexes, but not immunoglobulin heavy or light chains, were cleaved by MMP-9. We discovered Apolipoprotein-B 100 as a new substrate of MMP-9 by analyzing the degradation of immune complexes from human plasma samples. These data are relevant to understand lupus immunopathology and side-effects observed with the use of known drugs. Moreover, we caution against the use of MMP inhibitors for the treatment of SLE.
INTRODUCTION
SLE is a chronic and complex systemic autoimmune disease, which affects all major organ systems. In SLE the autoantigens (aAg) are typically ubiquitous intracellular proteins and protein-nucleic acid complexes. This multi-system disease is characterized by the production of non-organ-specific, self-reactive autoantibodies (aAb) directed against these intracellular components, for example DNA, RNA, and ubiquitous proteins such as p53, actins, tubulins, and histones (1,2). In fact, more than 180 different aAb have been found in SLE patients (3). The presence of aAb leads to the formation of immune complexes (IC) which are detectable in the circulation. IC deposition occurs by hemoconcentration or other hemodynamic forces at specific anatomical sites and activates locally the complement system. This causes much of the tissue damage observed in, e.g., the kidneys, skin, lungs, and joints of SLE patients and leads to health problems, such as increased infection rates, renal and skin disorders, neurological complications, fibromyalgias, osteoporosis, rheumatoid arthritis, and osteoarthritis (2).
The origin of the aAg and the subsequent generation of aAb and IC in SLE is presently explained by a high apoptosis rate and by a defect in the clearance of apoptotic cells and neutrophil extracellular traps in SLE patients (4,5). Once IC have been formed, these are normally cleared by the complement and the macrophage phagocytic systems and defects in some of these processes have been described in SLE patients.
Some forms of SLE are provoked by the use of drugs; this type of SLE is called drug-induced lupus erythematosus (DILE). Normally, DILE resolves within days to months after withdrawal of the culprit drug in patients with no underlying immune system dysfunction (6)(7)(8).
The treatment of patients with SLE without major organ manifestation relies on anti-inflammatory glucocorticoids and antimalarial drugs and is often purely symptomatic (9). The drugs used display side effects and toxicity and do not specifically target aAg-IC per se. Some specific strategies target immune complex formation by reducing antibody production (targeting B cells), reducing the binding of aAb, reducing the availability of nucleosome material, increasing the clearance of IC, and interfering with feedback loops (10). However, none of these strategies has succeeded in completely inhibiting IC formation. Therefore, further research is necessary to understand the basic mechanisms that trigger IC formation and to improve treatments.
The role of MMP-9 as a detrimental or beneficial molecule also remains an unanswered question in SLE, due to discrepancies in the published data. Several authors report higher serum levels of MMP-9 in SLE patients compared with those of healthy controls, whereas others do not detect significant differences (14)(15)(16)(17)(18)(19). Surprisingly, no correlation exists between serum MMP-9 levels and the number of peripheral blood cells in SLE patients (20) and the levels of MMP-9 in the circulation inversely correlate with the amounts of anti-dsDNA antibodies (21) which is an indication of the severity of the disease.
Due to the unsolved role of MMP-9 in SLE, and because some MMP-inhibitory drugs are able to induce DILE, it is clinically relevant to analyze the role of MMP-9 in this disease. Therefore, we studied the role of MMP-9 in SLE by comparing the lpr loss-of-function mutation in the apoptosis mediator fas knockout mice on a C57Bl/6 background (LPR −/− mice) with mice lacking both MMP-9 and functional apoptosis receptor Fas (LPR −/− MMP-9 −/− mice). In previous work, it was shown that LPR −/− animals develop a moderate SLE-like syndrome (22), whereas the double knockout mice lacking MMP-9 present reduced survival rates, extreme lymphadenopathy and splenomegaly and increased aAb production and therefore, pronounced autoimmune tissue injury (23).
Here, we specifically studied the role of MMP-9 in the cleavage of auto-IC. We showed that switching MMP-9 off in the SLE mouse model LPR−/− results in higher levels of IC in the circulation, spleen and kidney. We proved that MMP-9 does not cleave immunoglobulins but destroys autoantigens captured in IC. We studied the effects of active MMP-9 on IC degradation by purifying IC from plasma samples of SLE mice with various degrees of SLE-like disease, as well as from SLE patients. Serendipitously, this led to the discovery of a new substrate of MMP-9. Finally, with the use of an air-pouch model in WT and MMP-9−/− mice, we studied in vivo the role and efficiency of MMP-9 in IC clearance. Collectively, all our data are in line with the thesis that MMP-9 plays essential roles in preventing the formation of IC and in IC clearance.
RESULTS
The Levels of IC Are Higher in the SLE Mouse Model When MMP-9 Is Genetically Deleted
The spleen is one of the main filter organs in charge of IC clearance (24) and kidneys are typical organs affected in SLE due to IC deposition (5). We evaluated the deposition of IC in the spleen and kidney of single LPR−/− and double knockout LPR−/− MMP-9−/− mice by analyzing local complement activation with the use of C3d immunostaining. The signal from C3d was increased in the spleens and kidneys from LPR−/− mice in comparison with WT organs. Moreover, genetic knockout of MMP-9 in this lupus mouse model (LPR−/− MMP-9−/−) led to further increases of complement activation in the spleen and kidney (Figures 1A,B and Supplemental Figures 1-10). In addition, we measured the levels of C3d in tissue extracts of spleen and kidneys by sandwich ELISA (Figures 1C,D). Whereas significantly higher C3d levels were observed in the comparisons of WT and LPR−/− mice and between WT and LPR−/−/MMP-9−/− mice, the deletion of the MMP-9 gene only yielded trends toward higher C3d levels. With these data, we were stimulated to study further the role of MMP-9 in IC clearance or avoidance of IC deposition in WT animals.
By purification of the IC from plasma samples and analysis by SDS-PAGE we showed that the abundance of IC was higher in double knockout (LPR−/− MMP-9−/−) than in single LPR−/− knockout mice (Figures 2A,B and Supplemental Figure 11). We also performed gel filtration chromatography analysis of plasma samples derived from the four mouse genotypes studied. The chromatographic profiles contained three main peaks at approximately 60, 150, and 200 kDa, which corresponded to albumin, antibodies, and large molecules/complexes, respectively. The gel filtration chromatography profiles indicated that the levels of molecules larger than 150 kDa, corresponding to the IC, were more abundant in the SLE mouse model LPR−/− than in the WT and MMP-9−/− controls. In addition, the levels of these high molecular weight proteins further increased in the absence of MMP-9 (LPR−/− MMP-9−/−) (Figures 2C,D and Supplemental Figure 12). In line with these experiments, we studied the development and increase of IC in the two SLE mouse models (LPR−/− and LPR−/− MMP-9−/−) as a function of time, and observed that the levels of IC were not elevated at 3 months in the SLE mouse models, but increased gradually with time and, consequently, with the progression of the disease (Supplemental Figure 13). The profiles of the plasma proteins from the four genotypes were similar at 3 months, whereas at 9 months the plasma of the LPR−/− mice and, to a higher degree, of the LPR−/− MMP-9−/− mice, contained increased levels of IC and antibodies. We concluded that the lack of MMP-9 correlates with increased IC plasma levels and possibly with deposition in spleens and kidneys.
actMMP-9 Does Not Cleave Immunoglobulins IgG and IgM
MMP-9 has been shown to cleave organ-specific aAg, such as myelin basic protein in multiple sclerosis (25), collagen in arthritis (26), and insulin in diabetes (27), as well as ubiquitous aAg in systemic autoimmune diseases (12,23). Therefore, it was relevant to investigate whether these cleavages also occur with aAg captured within IC. Because immunoglobulins (Ig) are much larger proteins (more than 150 kDa) than most of the named aAg, a first logical step was to investigate whether MMP-9 cleaves Ig, in particular IgG, a major antibody in SLE-related IC. Human IgG was incubated with active MMP-9 (actMMP-9) at 37 °C for 6 or 24 h at an enzyme:Ig ratio of 1:5 or 1:50. The products of the incubation were then analyzed by SDS-PAGE. Figure 3A shows that neither the heavy chain (HC) nor the light chain (LC) of IgG was cleaved under any of the conditions analyzed.
To further study the role of MMPs in degrading Igs, we incubated IgG or IgM (as the two Ig classes activating the classical complement system) with actMMP-1, -2, -3, -8, and -9 at comparable enzyme:Ig ratios; none of these MMPs cleaved IgG or IgM (Supplemental Figures 14A,B). Obviously, a strong natural selection pressure exists against proteolysis of IgG and IgM by MMP-9.
αB-Crystallin and Actin Within IC Are Cleaved by MMP-9
We next investigated whether actMMP-9 degrades known substrates such as αB-crystallin and actin (12) when embedded within IC. First, αB-crystallin and actin were incubated with their respective polyclonal or monoclonal antibodies at an Ab:antigen ratio of 2:1 for 30 min at room temperature to generate IC (Figures 3D,E and Supplemental Figures 14C,D). These IC, or the antigens alone, were incubated with actMMP-9 for 1 h. αB-crystallin alone or in IC was cut by MMP-9, but the results of the cleavages differed depending on whether the IC were formed with a polyclonal (pAb) or a monoclonal antibody (mAb). Figure 3D and Supplemental Figure 14D show that the migration patterns of the cleavage products (generated by actMMP-9) of αB-crystallin alone or in IC with a mAb were similar. However, when the IC of αB-crystallin were formed with a pAb, the cleavage of the antigen was consistently inhibited. A different result was obtained with actin as antigen. Actin IC with pAb or mAb yielded similar cleaved actin fragments after incubation with actMMP-9 (Figure 3E and Supplemental Figure 14C). Next, we studied whether the protection against cleavage conferred by the pAb reactive with αB-crystallin was maintained over a long incubation time. In Figure 3F and Supplemental Figure 14E we show that the cleavage of αB-crystallin by MMP-9 increased with incubation time and that, after 24 h, only a small band with remnant epitopes of about 10 kDa remained. The incubation of pAb-αB-crystallin IC with actMMP-9 was much less affected, suggesting that the αB-crystallin cleavage sites were protected by the pAb.
These results were in line with the thesis that actMMP-9 was able to degrade protein antigens in IC, but that the efficiency of the cleavage depended on the ability of the Ab to mask the cleavage site(s).
actMMP-9 Cleaves Mouse SLE Autoantigens in IC
We then studied if actMMP-9 was able to degrade circulating IC from LPR −/− MMP-9 −/− mice. The results in Figure 4A showed that IC incubated with actMMP-9 generated different protein banding patterns, resulting from antigen digestion by MMP-9. Again, the immunoglobulin heavy and light chains persisted after proteolysis. However, actMMP-9 generated many remnant proteins/peptides within the molecular ranges of 30-40 kDa and 5-15 kDa, respectively. The densitometric quantification of the proteins between 30 and 40 kDa and between 5 and 15 kDa were in line with the possible role of MMP-9 in dissolving IC in the SLE mouse model ( Figure 4B).
actMMP-9 Cleaves Autoantigens in IC and Dissolves Human SLE IC
Next, we studied whether actMMP-9 degraded purified human IC from SLE patients. Information on the 10 SLE patients is provided in Supplemental Table 1. In analogy with the preparation and analysis of plasma from the SLE mouse models, we processed the human IC samples in a similar way. Purified IC from individual SLE patients (P1-P10) were incubated with actMMP-9 at 37 °C. After 16 h, the products of the incubations were separated by SDS-PAGE and silver-stained. The predominant protein bands in these preparations consisted of the intact heavy chain (HC) and light chain (LC) of human Igs, as shown in the example gel in Figure 5A. After the incubation of IC with actMMP-9, different banding patterns of the proteins were observed, suggesting that MMP-9 cleaved some antigens within the IC (Figure 5A). The changes observed in the banding patterns between 30 and 40 kDa and between 5 and 15 kDa were analyzed and quantified (Figure 5B).
We next investigated which proteins were present in the bands detected after actMMP-9 incubation. To this end, we sliced out the four indicated protein bands in Figure 5C and performed trypsin digestion followed by nanoLC/TOF/MS and protein identification. The obtained results indicated the presence of already known MMP-9 substrates, including C4, fibronectin and C1q (Supplemental Table 2). Interestingly, autoantibodies against these proteins have been found in SLE (3). In addition, a novel MMP-9 substrate in band 1 was identified: Apolipoprotein B 100 (Apo-B 100). To corroborate that Apo-B 100 is a substrate of MMP-9, we purified IC from plasma samples and performed Western blot analysis for Apo-B 100. After incubation with actMMP-9, the signal of intact Apo-B 100 disappeared, demonstrating that actMMP-9 cleaved Apo-B 100 (Figure 5D). As an internal control, we also evaluated proteins/peptides appearing after cleavage (Figure 5C, band 4). In this case, we identified sequences of the fibrinogen alpha chain, a known substrate of MMP-9, and of MMP-9 itself.
To study if the cleavage of the aAg in IC resulted in a change of the IC composition, we also performed gel filtration chromatography analysis of the IC from plasma of 3 SLE patients (P5, P6, P7), before and after incubation with actMMP-9. With this method, we analyzed the size of the IC under native physiological conditions in solution ( Figure 5E). After incubation with actMMP-9, the profiles of the IC were altered in comparison with the intact non-incubated IC for P5 and P7 and to a lesser extent for P6. The main peak that corresponded to the IC was consistently reduced: 52, 49, and 41% for P5, P7, and P6, respectively. Collectively, this suggested that MMP-9 degraded antigens that formed the IC, as actMMP-9 did not degrade Igs. The quantification of the peaks is shown in the bar graphs near the gel filtration profiles. The results of these gel filtration chromatography analyses with intact IC in solution corroborated our finding of a function of actMMP-9 in cleaving circulating IC.
Lack of MMP-9 Correlates With Decreased Clearance of IC in vivo
As a proof of concept, we investigated the role of MMP-9 in dissolving IC in an in vivo model. We used an air pouch animal model, which is a standard test in pharmacological, immunological and biomaterial research (28)(29)(30). We compared WT (14 control mice and 14 mice injected with IC) and MMP-9−/− knockout animals (9 control mice and 10 mice injected with IC) for their capacities to dissolve exogenously administered IC. After the air pouch was established, 5 µg of IC were injected and 24 h later the pouch exudates were collected to characterize cells and fluids. First, we analyzed the presence of MMP-2 and MMP-9 by gelatin zymography (Figures 6A,B and Supplemental Figure 15A). We demonstrated significant increases of proMMP-9 and actMMP-9 levels when we injected IC into the WT pouches, whereas the levels of MMP-2 were not altered. As expected, MMP-9 was not present in the MMP-9−/− animals, and, interestingly, MMP-2 levels did not significantly increase in the presence of IC in WT vs. MMP-9−/− mice (Figures 6A,B, Supplemental Figure 15A). Importantly, no differences between WT and MMP-9−/− mice were detected in the amount and types of cells migrating into the pouch after IC or PBS (control) injection (Supplemental Figure 15B). Nevertheless, and as expected, the number of cells increased approximately 10 times after IC injection in both genotypes. We also characterized the cell types from the exudates by flow cytometry (Supplemental Figure 15C) and cytospin analysis (Supplemental Figures 15D,E). The characterization of the cells by cytometry revealed that after 24 h neutrophils and mostly macrophages were present in the air pouch when we injected IC in WT and MMP-9−/− mice. The cytometry analysis was corroborated by counting cytospins. Remarkably, the images not only confirmed increased macrophage percentages in the pouch after administration of IC, but also clearly showed the presence of considerable amounts of vacuoles in these cells (red arrows in Supplemental Figure 15E).
Most relevantly, we analyzed the presence and alterations of IC in the fluid exudates. We first purified the IC and analyzed them by SDS-PAGE followed by silver staining of proteins. WT and MMP-9−/− controls (injected with PBS) barely contained Ig heavy and light chains in the air pouch exudates, whereas after IC injection, Ig heavy and light chains were detected in both groups (Figure 6C). Significantly more Ig remained present in the MMP-9−/− than in the WT mouse air pouches (Figure 6C). We also examined the amount of remaining IC in the pouches by an anti-IgG ELISA (Figure 6D). The levels of IgG in control animals (PBS treated) were low and similar in both genotypes. As expected, after IC injection slightly increased IgG levels were seen in WT mice, whereas in MMP-9−/− mice these increases were significantly more pronounced, suggesting that the absence of MMP-9 delayed the degradation of IC.
Collectively, these data indicated that MMP-9 regulates IC levels not only by degrading aAg and preventing IC formation, but also by degrading the already formed IC.
DISCUSSION
In the present study, we observed that the levels of IC were significantly higher in the plasma of mice lacking MMP-9 compared with animals with SLE-like disease (LPR−/−). In the mice lacking MMP-9, only a trend toward higher IC accumulation was observed in the spleen and kidneys. This hinted that the absence of MMP-9 might lead to an increase of IC in plasma and to deposition in peripheral organs such as kidney, lymph nodes and spleen. We demonstrated that active MMP-9 is capable of degrading IC from plasma samples of SLE patients and from the SLE animal model LPR−/−. Interestingly, this clearance role is exerted not by destruction of the immunoglobulins but solely by degradation of the autoantigenic proteins. By analyzing the content of the IC degraded by MMP-9, we found not only many known MMP-9 substrates, but also discovered a new autoantigen substrate of MMP-9, Apolipoprotein B-100. In addition, we showed in vivo that the lack of MMP-9 delayed the degradation of IC.
Cauwe et al. already titrated the levels of some aAb against established MMP-9 substrates, for instance anti-actin or anti-tubulin. The serum concentrations of these aAb were significantly increased in the lupus mouse model lacking MMP-9 (LPR−/−/MMP-9−/−), compared with single LPR−/− mice (23). This finding suggested that MMP-9 clears aAg released from apoptotic and necrotic cells (Figure 7A). Interestingly, aAb against non-MMP-9 substrates, such as anti-Smith IgG, anti-histone IgG, anti-chromatin IgG and rheumatoid factor IgG, were also increased in the double knockout LPR−/−/MMP-9−/− (23). This suggests that MMP-9 has an additional protective role in SLE besides autoantigen degradation. Degradation and clearance of IC will prevent their deposition in several organs and avoid tissue damage. Follicular dendritic cells in the spleen are involved in immune complex trapping and help to clear the IC from the blood with the help of macrophages (24,31). Therefore, after observing that IC levels were higher in MMP-9−/− mice, we hypothesized that MMP-9 might disaggregate large IC already deposited in the spleen, and also in the peripheral blood circulation, to facilitate their phagocytosis and degradation by macrophages (Figure 7B).
It is established that MMPs are highly regulated proteases that degrade substrates in vivo under well-defined conditions. We showed that neither MMP-9 nor MMP-1,-2,-3 and -8 cleave immunoglobulins IgM and IgG. This is a surprising finding in view of the fact that immunoglobulins are proteins of considerable sizes. Both, immunoglobulins (e.g., IgG and IgM) and MMPs (e.g., MMP-9) mediate critical immune functions and may co-exist in the blood stream and tissues, thus acting together. Our data support the important role of MMPs in the immune system by cleaving antigens while leaving IgG and IgM intact. For comparison, typical proteases in the gut, like trypsin and chymotrypsin, efficiently digest IgG (32).
We showed that actMMP-9 cleaves known aAg, such as actin and αB-crystallin within the context of IC. The nature of the antigen and the used antibody preparations influenced the degree of autoantigen protection against proteolysis suggesting that the in vivo half-lives of IC with protein aAg might vary considerably. Standardization and methodology have been described in detail before (51) (C) SDS-PAGE and silver stain analysis of IC proteins from the air pouch after injection of PBS or IC. Prior to analysis, the IC were purified with protein-G-Sepharose. HC, heavy chain; LC, light chain. The graphs underneath the photographs represent the quantification of the heavy and light chains of IgG after injection of IC in the pouch. A.U. = arbitrary units representing the densitometry analysis of the proteins with the use of ImajeJ software. (D) ELISA for IgG in the air pouch exudates after injection of PBS or IC in WT or MMP-9 −/− mice. P-values were determined by ANOVA Kruskal-Wallis test. *p < 0.05, **p < 0.01 and ***p < 0.001 comparing PBS and IC conditions in both genotypes and #p < 0.05 comparing WT and MMP-9 −/− IC conditions. Light yellow histograms represent the data from WT mice injected with PBS, green represents data from WT mice injected with IC, light blue represents MMP-9 −/− mice injected with PBS and dark blue represents MMP-9 −/− mice injected with IC. Each individual circle represents a data point from a single animal.
Regarding the IC used to study the cleavage by actMMP-9, we were aware that in our preparations, protein G only selected for IC formed by IgG and that IC based on IgM thus were excluded. However, for the purpose of the present work, it was more critical to obtain pure IC, not contaminated with other plasma proteins, than to collect all lupus-related IC of all possible Ig (sub)classes.
The degradation of the IC was studied in human SLE plasma samples and, similarly, in plasma samples obtained from two SLE mouse lines (LPR −/− and LPR −/− /MMP-9 −/− ), with varying degrees of SLE-like phenotypes. Both, human and murine IC samples were degraded by actMMP-9, although the switches in the banding patterns varied depending on the plasma samples, in line with the heterogeneity of IC between SLE patients (33). The protein identification of plasma IC proteins degraded after incubation with actMMP-9 provided interesting information. Firstly, most of the identified proteins were already known substrates of MMP-9. This was the case for C4 (34), fibrinogen (35), fibronectin (36), and C1q (37). All proteins identified here as substrates of MMP-9 are also autoantigens in SLE (12). Consequently, autoantibodies against C4 (38), fibrinogen (39), fibronectin (40), and C1q (41) have been described in SLE. From our search of MMP-9 substrates within IC, we serendipitously discovered a new substrate of MMP-9, namely Apo-B 100. In line with our hypothesis (Figure 7B), autoantibodies against Apo-B 100 have been described in SLE plasma samples (42). After incubation with actMMP-9, the immunoreactive signal of Apo-B 100 disappeared, rather than yielding immunoreactive fragments, suggesting that MMP-9 cleaves at several sites within this protein.
Although further biochemical analysis is needed to define the MMP-9 cleavage sites, the protease specificity prediction server "PROSPER" provided preliminary information about the Apo-B 100 cleavage sites generated by actMMP-9. With PROSPER analysis we found that actMMP-9 may cleave at more than 250 sites within the human Apo-B 100 amino acid sequence.
(Figure 7 caption) (A) In the absence of actMMP-9, these intracellular proteins remain intact and highly immunogenic. Classical antigen presentation enhances T helper functions, generating a considerable immune response with antibody secretion. In the presence of actMMP-9 and other proteases, the intracellular proteins are cleaved, diminishing their immunogenic capacity and therefore generating a lower or no immune response. (B) Once the IC are formed, actMMP-9 may be released by neutrophils and macrophages. As a result, actMMP-9 degrades the antigens within the complexes, making the IC smaller and therefore easier to phagocytose and clear by macrophages and neutrophils. The lack of actMMP-9 helps to preserve the large IC in intact form and, as a consequence, the deposition of IC and tissue damage are increased.
Finally, we used the mouse air-pouch model to prove, in vivo, the role of MMP-9 in IC clearance. We found that the levels of proMMP-9 and actMMP-9, but not of MMP-2, were significantly increased in the WT animals after administration of IC. After IC injection in the pouch, 10 times more cells and a strong shift in the proportion of monocytes and macrophages were observed in both WT and MMP-9 −/− mice, whereas the IC clearance analyzed by ELISA and SDS-PAGE quantification was significantly higher in WT vs. MMP-9 −/− mice. It is known that macrophages are able to clear IC, and that SLE patients have a deficiency in IC clearance due to macrophage and complement system defects (43). M2 macrophages are stimulated by IC and consequently are more implicated in IC clearance than M1 macrophages (44). Remarkably, M2 macrophages produce more MMP-9 and less TIMP-1 (45). These data support a role for macrophage MMP-9 activity in IC clearance.
Several compounds have been suggested to cause drug-induced lupus erythematosus (DILE): hydralazine, procainamide, quinidine, doxycycline, isoniazid, diltiazem, minocycline, and D-penicillamine (6,7). Interestingly, some of these drugs are known to inhibit MMP-9. For example, doxycycline, described as a DILE inducer (8), has been used as an MMP-9 inhibitor in lung injury (46) and in aortic valve disease induced by cardiopulmonary bypass (47). Similarly, D-penicillamine has also been described to induce lupus (48) and is an MMP-9 inhibitor known to delay experimental autoimmune encephalomyelitis (49). An inevitable consequence is that, before using small- or broad-spectrum MMP-9 inhibitors in SLE patients, careful preclinical evidence of efficiency needs to be provided and sufficient safety measures will be needed in clinical trials. Based on this information about DILE and our new data, we suggest that MMP-9 acts as a major beneficial factor in SLE by clearing aAg and IC.
We have provided here biochemical, immunological, and biological as well as preclinical and clinical data that demonstrate a beneficial role of MMP-9 in SLE, not only by cleaving aAg but also by clearing IC. In this way, MMP-9 complements the complement system in IC clearance. Finally, we believe the present work is relevant for other (systemic) autoimmune diseases in which IC are pathogenic.
Mice
The generation of MMP-9-deficient mice was described previously (50). LPR −/− mice on a C57Bl/6 background were obtained from the Jackson Laboratory (Bar Harbor, ME, USA). To generate LPR −/− MMP-9 −/− knockout mice, we crossed MMP-9 −/− and LPR −/− mice. F1 heterozygote animals were mated to obtain LPR −/− MMP-9 −/− mice in the F2 generation. The genotype of every mouse in the present study was defined by PCR. In all forthcoming experiments we used black animals from the 13th-generation backcross into C57Bl/6. All mice were bred in specific pathogen-free (SPF) isolators at the Rega Institute for Medical Research and moved to non-SPF conditions after weaning. All experimental procedures were approved by the institutional Ethics Committee under license LA1210243 for animal welfare (Project 277/2014). The numbers of female and male animals were similar and the disease scores did not differ between the sexes, as already described for this SLE mouse model (22,23). Late disease stage samples were collected between months 7 and 9, when SLE disease was detectable. As described before, a high variability exists between the disease scores within specific genotypes (22,23).
Immunohistochemistry Analysis
Paraffin-embedded spleens and lymph nodes (data not shown) were sliced into 5 µm sections and dried overnight at 50 °C. For immunohistochemical staining the EnVision™ FLEX kit (DAKO) was used. Goat anti-mouse C3d (R&D Systems) and isotype control were used as primary antibodies, with peroxidase-labeled secondary antibodies as the detection system. Quantification analysis was done with the "Fiji" version of ImageJ, by measuring the mean intensity of the DAB signal (color deconvolution: color 2 = DAB) and converting it to optical density (OD) by the formula OD = log10(max intensity/mean intensity), where max intensity = 255 for 8-bit images.
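A minimal sketch of this OD conversion (the function name is ours; only the formula comes from the text above):

```python
import numpy as np

def dab_optical_density(mean_intensity, max_intensity=255.0):
    """OD = log10(max intensity / mean intensity), with max intensity
    = 255 for 8-bit images, as in the quantification described above."""
    return np.log10(max_intensity / mean_intensity)

# Example: a region whose mean DAB-channel intensity is 120.
print(dab_optical_density(120.0))  # ~0.33
```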
Spleen and Kidney Tissue Extractions
Mice were sacrificed and organs were collected. Half spleens and complete kidneys from individual mice were homogenized in hard-tissue homogenizing CK28 tubes (Bertin Technologies) in 0.5 ml of assay buffer (50 mM Tris pH 7.4, 150 mM NaCl, 5 mM CaCl2, 0.01% Tween-20) and proteins were extracted with a Precellys lysing kit (Bertin Technologies) using the Precellys®24 (2 × 5 s at 6,000 g, Bertin Technologies). To pellet tissue debris, the samples were centrifuged at 20,000 g and 4 °C for 15 min. The supernatant, containing soluble proteins, was collected.
ELISAs
A specific sandwich ELISA for C3d in mouse tissue extracts was developed by combining a specific polyclonal Ab against mouse C3d as the coating Ab (goat anti-mouse C3d, art. AF2655 from R&D Systems; coating concentration 4 µg/ml in 0.1 M NaHCO3 pH 9.6) with a monoclonal detection Ab against mouse C3 with affinity for the C3d fragment (rat mAb to mouse C3, art. ab11862 from Abcam; concentration 0.2 µg/ml in PBS with 0.5% casein and 0.05% Tween 20). Detection was done with an HRP-conjugated anti-rat Ab (art. 112-035-143 from Jackson ImmunoResearch Laboratories, 0.1 µg/ml) and the peroxide/TMB staining system. For anti-IgG ELISAs, 96-well plates were coated with 2 µg of anti-human IgG antibody. After washing and blocking the plates, the studied samples were added and incubated overnight at 4 °C. The plates were then washed and incubated with specific secondary HRP-conjugated antibodies against human IgG for 1 h at RT.
Gel Filtration Chromatography and Protein Quantification Analysis
Samples were loaded on a Superdex 200 HR 10/300 column (GE Healthcare Life Sciences) equilibrated with 20 mM sodium phosphate pH 7.4, 150 mM NaCl, at a 0.4 ml/min flow rate, and 0.5 ml fractions were collected. The absorption peaks at 280 nm obtained from 2.5 µl samples after gel filtration chromatography were analyzed with the Unicorn 5.1 software (GE Healthcare Life Sciences). Peak integration was used to measure peak areas. The relative areas of the main peaks were calculated against the total areas.
SDS-PAGE, Protein Staining and Gelatin Zymography Analyses
Samples were used in native form or chemically reduced, buffered, and proteins were separated on Tris-glycine gels (Invitrogen, Carlsbad, CA, USA). Next, proteins were stained with Coomassie Brilliant Blue or with the SilverQuest™ Silver Staining Kit (Invitrogen, Carlsbad, CA, USA). Gelatin zymography gels consisted of a 7.5% acrylamide separating gel with 1 mg/ml gelatin (Sigma Aldrich G1890), topped with a 5% stacking gel. After electrophoretic protein separation, the gels were washed twice for 20 min in re-activation solution (2.5% Triton X-100). Afterwards, the gel was incubated overnight in 10 mM CaCl2 and 50 mM Tris-HCl, pH 7.5 at 37 °C. Staining was performed with 0.1% Coomassie Brilliant Blue R-350 (GE Healthcare) and zymograms were analyzed densitometrically using the ImageQuant TL software (GE Healthcare) (51).
In vitro Cleavage of Antibodies and aAg by MMP-9
IgGs (2 µg) were incubated with active MMP-9 at 37 °C at actMMP-9:IgG ratios of 1:5 or 1:50. After 6 or 24 h, the samples were analyzed by SDS-PAGE and protein staining with Coomassie blue. EDTA (MMP inhibitor) and actMMP-3 (for activation of proMMP-9) were used as controls. For the study of other MMPs, active MMP-1, -2, -3, -8, and MMP-9 were incubated at an enzyme:Ig ratio of 1:5 for 24 h and the results of the incubations were analyzed by SDS-PAGE and Coomassie blue staining of proteins. To prepare the IC with actin or αB-crystallin (as substrates for MMP-9), 1 µg of each protein was incubated with its respective antibody at an Ab:substrate ratio of 2:1. Afterwards, the IC were incubated with actMMP-9 at an enzyme:substrate ratio of 1:20. The resulting products of the incubations were visualized after SDS-PAGE by silver staining.
Human Plasma Samples
Blood samples from SLE patients were centrifuged at RT and 1,500 rpm for 5 min. The resulting supernatants were used as plasma samples. All donors gave written consent, and all procedures were performed according to the terms of the Declaration of Helsinki and following Belgian and European legislation (ethical committee reference number: S58110). Information about the SLE patients is provided in Supplemental Table 1. All patients suffered from clinical SLE symptoms and were under various anti-inflammatory treatments at plasma sampling.
Purification of IC
IC and large, anionic proteins were precipitated by diluting 200 µl of human or 100 µl of mouse plasma with an equal volume of 7% PEG 6000 at 4 °C overnight. After incubation, the samples were centrifuged at 13,000 rpm and washed 3 times with 3% PEG 6000. Subsequently, the precipitated material was dissolved in 200 or 100 µl phosphate-buffered saline (PBS) for human and mouse samples, respectively. The IC were then purified by precipitation with protein-G-Sepharose beads, washed, and eluted with 0.1 M glycine at pH 2.2. The pH was adjusted to 7 with Tris buffer.
Cleavage of IC by MMP-9 and Densitometry Quantification
IC purified from mouse and human plasma samples were incubated with actMMP-9 (final concentration 0.15 µM) for 16 h at 37 °C. The products of the incubation were separated by SDS-PAGE and visualized by silver staining. For quantification of the different protein band patterns we used ImageJ software.
nanoLC/TOF/MS and Protein Identification
The nano liquid chromatography/time-of-flight/mass spectrometry (nanoLC/TOF/MS) and protein identification were done by Alphalyse (Alphalyse A/S, 5220 Odense, Denmark). Briefly, the protein samples were reduced and alkylated with iodoacetamide, i.e., carbamidomethylated, and subsequently digested with trypsin. The resulting peptides were concentrated by SpeedVac lyophilization and redissolved for injection on a Dionex nano-LC system and MS/MS analysis on a Bruker Maxis Impact QTOF instrument. The MS/MS spectra were used for Mascot database searching. The data were searched against in-house Alphalyse protein databases downloaded from UniProt and NCBI containing more than 80 million known non-redundant protein sequences. The Mascot software found all the matching proteins in the database by their peptide masses and peptide fragment masses. Protein identification was based on a probability-scoring algorithm (www.matrixscience.com). An identification was considered positive when at least 2 peptides had an MS-ions score above 35, or when a protein under 20 kDa had 1 peptide with an MS-ions score above 50.
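The acceptance rule lends itself to a small filter; a sketch assuming per-protein lists of peptide MS-ions scores (function name and example values are ours):

```python
def is_positive_identification(peptide_scores, protein_mass_kda):
    """Positive if >= 2 peptides score above 35, or, for proteins under
    20 kDa, if a single peptide scores above 50 (rule described above)."""
    if sum(score > 35 for score in peptide_scores) >= 2:
        return True
    return protein_mass_kda < 20 and any(s > 50 for s in peptide_scores)

print(is_positive_identification([40.2, 37.8], 55.0))  # True: two peptides > 35
print(is_positive_identification([52.1], 18.5))        # True: <20 kDa, one > 50
print(is_positive_identification([52.1], 55.0))        # False
```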
Air Pouch Assay
The air pouch analysis was performed as detailed previously (30). Briefly, dermal air pouches were generated by injecting mice at dorsal sites with 3 ml of filtered air on days 0 and 3. On day 6 the IC preparations or the negative control, phosphate-buffered saline (PBS), were injected. After 24 h, the exudates of the pouches were collected in 5 ml of PBS; the viable cells were characterized by cytospin and flow cytometry analysis, and the supernatant fluids were used for gelatin zymography and IC detection.
Cytospin Analysis
75 × 10³ cells were applied onto slides by centrifugation using a Shandon Cytospin 2 apparatus (Thermo Shandon, Pittsburgh, USA). The cells were then stained with Hemacolor (Merck Chemicals, Darmstadt, Germany). One hundred cells were counted and classified based on morphology and staining pattern. The average of three counts per slide is reported.
STATISTICAL ANALYSES
Statistical analyses were performed using GraphPad Prism 6 software. Significant differences between experiments were evaluated using the non-parametric Kruskal-Wallis ANOVA test. All p-values of 0.05 or less were considered significant.
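For illustration, the same test in Python with SciPy (the group values below are invented placeholders, not study data):

```python
from scipy import stats

# Hypothetical per-genotype measurements (e.g., C3d OD values).
wt = [0.21, 0.25, 0.19, 0.27]
mmp9_ko = [0.35, 0.31, 0.38, 0.33]
lpr = [0.48, 0.52, 0.45, 0.50]

h_stat, p_value = stats.kruskal(wt, mmp9_ko, lpr)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# p <= 0.05 would be considered significant, as in the study.
```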
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Belgian and European legislation, ethical committee reference number S58110. All subjects gave written informed consent in accordance with the Declaration of Helsinki. All mouse experimental procedures were approved by the institutional Ethics Committee under license LA1210243 for animal welfare (Project 277/2014).

[Displaced supplemental figure legends: … (Supplemental Figures 1, 2), 2 MMP-9 −/− mice (Supplemental Figures 3, 4), 2 LPR −/− mice (Supplemental Figures 5, 6) and 2 LPR −/− /MMP-9 −/− mice (Supplemental Figures 7, 8). Two pictures at three different magnifications (10x, 20x, and 40x) were shown for each mouse. The horizontal bars indicate 50 µm. The quantification of the C3d signal from these IHC pictures was used for the graph shown in Figure … Quantification of the bands of proMMP-9, actMMP-9, and MMP-2 was included to generate the graphs shown in Figure 6B. (B) Histograms representing the absolute numbers of cells collected in the air pouch experiment. (C) Flow cytometry analysis of relative cytospin counts of the lavage exudates from the air pouch after injection of PBS or IC in WT and MMP-9 −/− mice. For flow cytometry analysis, monocytes were defined by CD11b and Gr-1, macrophages by CD11b and F4/80, neutrophils by CD11b and Ly6G, and dendritic cells by CD11b and CD11c as surface markers. (D) Cytospins were stained with Hemacolor; the cells were identified on the basis of their morphology and the relative cell percentages are provided as cumulative histograms. Donut cells represent neutrophils with donut-shaped nuclei. The discrimination between monocytes and macrophages was made on the basis of size and presence of vacuoles. (E) Representative images from cytospins of the air pouch lavages after PBS and IC injection in WT and MMP-9 −/− mice.]
AUTHOR CONTRIBUTIONS
Supplemental Table 1 | Information on the SLE patient cohort of the study. The individual patient numbers (P1-P10) refer to the numbers used throughout the manuscript.
Supplemental Table 2 | Proteins identified by nanoLC-MS/MS peptide sequencing and database search. The numbers (Band 1, 2, 3, and 4) refer to the excised gel slices within the red rectangles in Figure 5C. | 9,270.8 | 2019-03-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
On Powers of General Tridiagonal Matrices
Abstract: In this paper, a method for calculating powers of general tridiagonal matrices is introduced. This method employs the close relationship among tridiagonal matrices, second-order linear homogeneous difference equations, and orthogonal polynomials. Some examples are included to demonstrate the implementation of the method.
Introduction
Tridiagonal matrices have been the focus of many researchers in recent years. This interest stems from the fact that such matrices play important roles in many applications, such as boundary value problems, parallel computing, spline interpolation, numerical solution of ordinary and partial differential equations, and telecommunication system analysis. The calculation of powers (positive and/or negative) of tridiagonal matrices is therefore needed in order to solve problems that arise in these applications. Consequently, a quite large number of publications addressing this subject have appeared in recent years. Some of these publications discuss inversion of these matrices in general, or of specially structured types of them [1,2,4,5,6,7,10,11]; some discuss powers of such specially structured matrices [8,9,12,13,14,15]; and some discuss both inversion and powers of these matrices [3].
The present work employs the close relationship among tridiagonal matrices, second-order linear homogeneous difference equations, and orthogonal polynomials in order to derive an algorithm for computing powers of tridiagonal matrices with real or complex entries.
Given a general tridiagonal matrix with real or complex entries, we start by converting this matrix into a symmetric tridiagonal matrix. The case of a tridiagonal matrix with nonnegative real entries, which results in a real symmetric matrix, was discussed in detail in [2]. However, the case of a general tridiagonal matrix with entries that may be real (both positive and negative) or complex may result in a complex symmetric or a Hermitian matrix. This work addresses these two cases and gives two different schemes that compute powers of the resulting matrices.
This work parallels the line of [2] in the sense that a second-order linear homogeneous difference equation is used to generate a set of orthogonal polynomials of degree up to n when the matrix has size n × n. Each polynomial of degree i, i = 1, ..., n, is in fact the determinant of the principal i × i submatrix of the n × n tridiagonal matrix given in (1), with diagonal entries a_i, superdiagonal entries b_i and subdiagonal entries c_i. The above-mentioned difference equation has the form

p_i(x) = (x − a_i) p_{i−1}(x) − b_{i−1} c_{i−1} p_{i−2}(x), i = 1, ..., n, (2)

with initial conditions p_{−1}(x) = 0 and p_0(x) = 1, where a_n ≠ 0, b_n ≠ 0, c_n ≠ 0 for all n ≥ 0, and c_1 = b_n = 1 (a boundary convention). It defines the recursion relation for the set of orthogonal polynomials {p_n(x)}_{n≥0} on an open interval I with respect to a nonnegative weight function w(t).
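A short numerical sketch of this recurrence (assuming the standard indexing reconstructed above; the helper name is ours), checked against a direct determinant:

```python
import numpy as np

def minor_char_polys(a, b, c, x):
    """Evaluate p_0(x), ..., p_n(x), where p_i(x) = det(x I - T_i) for the
    leading i x i principal submatrix T_i of the tridiagonal matrix with
    diagonal a (length n) and off-diagonals b, c (length n-1), via
    p_i = (x - a_i) p_{i-1} - b_{i-1} c_{i-1} p_{i-2}, p_0 = 1."""
    n = len(a)
    p = [1.0, x - a[0]]                      # p_0 and p_1
    for i in range(2, n + 1):
        p.append((x - a[i - 1]) * p[i - 1] - b[i - 2] * c[i - 2] * p[i - 2])
    return p

# Sanity check on a random 5 x 5 tridiagonal matrix.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=5), rng.normal(size=4), rng.normal(size=4)
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
x = 0.3
assert np.isclose(minor_char_polys(a, b, c, x)[-1],
                  np.linalg.det(x * np.eye(5) - T))
```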
The set of orthogonal polynomials {p_0, p_1, ..., p_n} plays an essential role in the construction of the eigenvalues and corresponding eigenvectors of the tridiagonal matrix T in (3). This matrix T is converted by a similarity transformation into a symmetric real matrix, a symmetric complex matrix, or a Hermitian matrix; we denote the resulting matrix by J.
In the cases of a real symmetric matrix or a Hermitian matrix, we construct the eigendecomposition of the matrix, which in turn is used to compute powers of the matrix. In the case of a complex symmetric matrix, it is known that such matrices do not share the property of having an easily computed eigendecomposition; we therefore use a different type of factorization, called the Takagi factorization, which is then used to compute powers of the complex symmetric matrix.
This paper is organized as follows. Section 2 contains the preliminary mathematical background needed for this work. Section 3 contains the main results of the paper and is divided into three subsections: the first discusses the case of real symmetric matrices, the second the case of Hermitian matrices, and the third the case of complex symmetric matrices. Section 4 presents some examples that demonstrate the method introduced in this article for computing powers of matrices.
Preliminaries
Tridiagonal matrices, orthogonal polynomials, and second-order linear homogeneous difference equations are closely related to each other. Equation (2) above defines the recursion relation that generates a set of orthogonal polynomials {p_n(x)}_{n≥0} on an interval I with respect to a nonnegative weight function w(x). This is also related to tridiagonal matrices in the sense that p_n(x) is equal to the determinant of the tridiagonal matrix in (1) above [4].
Equation (2) can be rewritten so that x p_{j−1}(x) is expressed as a linear combination of p_{j−2}(x), p_{j−1}(x), and p_j(x); collecting these relations for j = 1, ..., n gives the matrix form

x D p(x) = A p(x), (4)

where p(x) = [p_0(x), p_1(x), ..., p_{n−1}(x)]^T, D is a diagonal matrix, and A is a tridiagonal matrix with D^{−1}A = T. It is also known that, when {p_n(x)}_{n≥0} is a set of orthogonal polynomials on an interval I with respect to a weight function w(x), each polynomial p_j(x), 0 ≤ j ≤ n, has j distinct real roots in the interior of the interval I.
So, if x is replaced by x_j in (4), where x_j is a root of p_n(x), then (4) becomes x_j D p(x_j) = A p(x_j), which implies that x_j, j = 1, ..., n, is a solution of the generalized eigenvalue problem λ D u = A u, which is equivalent to (D^{−1}A) u = λ u, where D^{−1}A is the tridiagonal matrix T. Therefore x_j, j = 1, ..., n, are the eigenvalues of the tridiagonal matrix T, and the vector [p_0(x_j), p_1(x_j), ..., p_{n−1}(x_j)]^T is the corresponding eigenvector. Moreover, the matrix B whose columns are the eigenvectors of T is nonsingular, because the matrix A is also nonsingular; consequently, the eigenvectors are linearly independent.
Main Results
Let a tridiagonal matrix with real or complex entries be given; then this matrix can be transformed by a similarity transformation into one of the following:
1. A real symmetric matrix, when the entries of T are all nonnegative.
2. A Hermitian matrix or a complex symmetric matrix, when the entries of T are real numbers (both positive and negative) or possibly complex.
This transformation is carried out in the following theorem.

Theorem 1. Given the tridiagonal matrix T as in (3), let D_1 = Diag(γ_1, γ_2, ..., γ_n) be a diagonal matrix, where the sequence {γ_i}_{i=1}^{n} is generated backward from γ_n = 1 (see [3] for the recursion). Then J = D_1 T D_1^{−1} is a symmetric tridiagonal matrix, and J is one of the following:
1. a real symmetric matrix,
2. a Hermitian matrix, or
3. a complex symmetric matrix.
Proof. See [3].
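Since the γ-recursion itself did not survive extraction, the following sketch uses one standard choice consistent with γ_n = 1, namely γ_i = γ_{i+1} √(c_i/b_i), which makes the off-diagonal entries of J equal to √(b_i c_i) (our reconstruction, not necessarily the paper's exact formula):

```python
import numpy as np

def symmetrize_tridiagonal(a, b, c):
    """Build D1 = diag(gamma) with gamma_n = 1 and
    gamma_i = gamma_{i+1} * sqrt(c_i / b_i), so that J = D1 T D1^{-1}
    is symmetric with off-diagonal entries sqrt(b_i * c_i). Complex
    square roots appear when b_i * c_i is negative or complex."""
    n = len(a)
    gamma = np.ones(n, dtype=complex)
    for i in range(n - 2, -1, -1):            # backward from gamma_n = 1
        gamma[i] = gamma[i + 1] * np.sqrt(complex(c[i]) / b[i])
    D1 = np.diag(gamma)
    T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
    J = D1 @ T @ np.linalg.inv(D1)
    return J, D1

a, b, c = [2.0, 3.0, 4.0], [1.0, -2.0], [3.0, 5.0]
J, D1 = symmetrize_tridiagonal(a, b, c)
assert np.allclose(J, J.T)   # complex symmetric here, since b_2 * c_2 < 0
```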
Remark 2
The matrices T and J have the same eigenvalues, x 1 , x 2 , ..., x n , because they are similar.
Remark 3
The eigenvector of the matrix J that corresponds to the eigenvalue x_j is D_1 P_j = [γ_1 p_0(x_j), γ_2 p_1(x_j), ..., γ_n p_{n−1}(x_j)]^T, since J = D_1 T D_1^{−1}.
The Case of a Real Symmetric Matrix
Given the n × n tridiagonal matrix T as in (6), we use the recursion relation in (2) to generate the orthogonal polynomials p_0(x), p_1(x), ..., p_{n−1}(x), p_n(x). A root-finding method is then used to compute the n distinct real roots of p_n(x). If the roots are x_1, x_2, ..., x_n, then these are the eigenvalues of T, and the corresponding eigenvectors are P_j = [p_0(x_j), p_1(x_j), ..., p_{n−1}(x_j)]^T, j = 1, 2, ..., n. The next step is to fill the columns of the orthogonal matrix U (that is, U^{−1} = U^T) with the normalized vectors u_j = P_j/||P_j||. Since T = D_1^{−1} J D_1 and J = U D_2 U^T, where D_2 = Diag(x_1, x_2, ..., x_n), we have T = D_1^{−1} U D_2 U^T D_1, and thus

T^k = D_1^{−1} U D_2^k U^T D_1, (7)

where k is any positive integer, and can be a negative integer when T is nonsingular.
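A runnable sketch of the whole scheme for the real-symmetrizable case (b_i c_i > 0); for brevity, numpy's symmetric eigensolver on J stands in for the paper's explicit root-finding on p_n(x), which is numerically equivalent here:

```python
import numpy as np

def tridiagonal_power(a, b, c, k):
    """T^k = D1^{-1} U D2^k U^T D1, where J = D1 T D1^{-1} is real
    symmetric (requires c_i / b_i > 0) and J = U D2 U^T."""
    n = len(a)
    gamma = np.ones(n)
    for i in range(n - 2, -1, -1):
        gamma[i] = gamma[i + 1] * np.sqrt(c[i] / b[i])
    off = np.sqrt(b * c)                       # off-diagonal entries of J
    J = np.diag(a) + np.diag(off, 1) + np.diag(off, -1)
    x, U = np.linalg.eigh(J)                   # eigenvalues x_j, orthonormal U
    D1, D1_inv = np.diag(gamma), np.diag(1.0 / gamma)
    return D1_inv @ U @ np.diag(x ** k) @ U.T @ D1

a, b, c = np.array([2.0, 3.0, 4.0]), np.array([1.0, 2.0]), np.array([3.0, 1.0])
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(tridiagonal_power(a, b, c, 3), np.linalg.matrix_power(T, 3))
assert np.allclose(tridiagonal_power(a, b, c, -1), np.linalg.inv(T))
```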
The Case of a Hermitian Matrix
This case is very similar to the case of a real symmetric matrix, in the sense that a Hermitian matrix has the same kind of eigendecomposition, consisting of a unitary matrix U, structured the same way as the orthogonal matrix in the previous case, and a diagonal matrix D_2 containing the eigenvalues of the Hermitian matrix, which are all real numbers. The computation of powers of T therefore follows almost the same procedure as in the case of a real symmetric matrix.
The Case of a Complex Symmetric Matrix
Unfortunately, there are not many references in the literature that discuss the diagonalization of such matrices. One of the earliest articles is [6], which states that complex symmetric matrices can be diagonalized by a complex (orthogonal) transformation if and only if each eigenspace of the matrix has an orthonormal basis; this means that no eigenvectors of zero norm are included in the basis. Such eigenvectors are called quasi-null vectors: nonzero vectors with zero norm under the complex bilinear form x^T x, such as the vector (1, i)^T, for which x^T x = 1 + i² = 0. [14] discusses this for a special type of complex symmetric matrices, namely matrices with positive definite real and imaginary parts; [22] discusses the same case as well. The celebrated book of Horn and Johnson [15] gives an extended discussion of this subject in Section 4.4, p. 201. This book introduces the so-called Takagi factorization of complex symmetric matrices; then, in further elaboration on the topic, a necessary and sufficient condition for a complex symmetric matrix A to be diagonalizable is introduced in Theorem 4.4.13, p. 211. This condition states that the matrix A must be complex orthogonally diagonalizable, that is A = SΛS^{−1}, where Λ is diagonal and S is nonsingular; this is also equivalent to A = QΛQ^T, where Q is a complex orthogonal matrix, that is, Q Q^T = I. To make the discussion complete, we include here Takagi's factorization lemma (Corollary 4.4.4, p. 204 in [15]) and the theorem that gives necessary and sufficient conditions for a complex symmetric matrix to be diagonalizable (Theorem 4.4.13, p. 211 in [15]).

Lemma 4 (Takagi's Factorization Lemma). Let A ∈ M_n be symmetric. Then there exists a unitary matrix U ∈ M_n and a real nonnegative diagonal matrix Σ = diag(σ_1, σ_2, ..., σ_n) such that A = UΣU^T. The columns of U are an orthonormal set of eigenvectors of AĀ, and the corresponding entries of Σ are the nonnegative square roots of the corresponding eigenvalues of AĀ.
It is worth mentioning at this point that the Takagi factorization is a special singular value decomposition of the symmetric matrix; as a matter of fact, it represents a scaled SVD. For more details see [5,15].

Theorem 5 (Theorem 4.4.13 in [15]). Let A ∈ M_n be a symmetric matrix. Then A is diagonalizable if and only if it is complex orthogonally diagonalizable, that is A = SΛS^{−1} for a diagonal Λ ∈ M_n and a nonsingular S ∈ M_n, if and only if A = QΛQ^T, where Q ∈ M_n and Q Q^T = I.

It is clear that both the corollary and the theorem lead to the same conclusion, in the sense that a complex symmetric matrix can be factorized into the form A = UΣU^T; once we obtain this factorization, we can compute any powers of A using (7) above.
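The Takagi factorization can be computed from a standard SVD; the following sketch handles the generic case of distinct nonzero singular values (degenerate spectra need extra care) and is our construction, not one given in the paper:

```python
import numpy as np

def takagi(A):
    """Takagi factorization A = U Sigma U^T of complex symmetric A.
    From the SVD A = W Sigma V^H and A = A^T, one gets conj(V) = W D
    for a diagonal unitary D (distinct singular values), so that
    U = W sqrt(D) is unitary and A = U Sigma U^T."""
    W, sigma, Vh = np.linalg.svd(A)
    V_conj = Vh.T                             # conj(V) = (V^H)^T
    d = np.diag(W.conj().T @ V_conj)          # diagonal of D = W^H conj(V)
    U = W @ np.diag(np.sqrt(d))
    return U, sigma

# Check on a random complex symmetric (not Hermitian) matrix.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.T
U, sigma = takagi(A)
assert np.allclose(U @ np.diag(sigma) @ U.T, A)
assert np.allclose(U.conj().T @ U, np.eye(4))  # U is unitary
```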
Examples
[Example matrices lost in extraction: a complex tridiagonal matrix T is given, and Theorem 1 produces the corresponding complex symmetric matrix J.] | 2,718.8 | 2015-01-01T00:00:00.000 | [
"Mathematics"
] |
Distribution of dissolved water in magmatic glass records growth and resorption of bubbles
Bubbles grow in ascending magma as dissolved volatiles, chiefly water, diffuse through the melt and exsolve at the bubble walls. On rapid cooling the melt quenches to glass, preserving the distribution of water concentration around the bubbles (now vesicles) and offering a window into pre-eruptive conditions. We measure the water distribution around vesicles in experimentally-vesiculated samples, with high spatial resolution. We find that, contrary to expectation, water concentration increases towards vesicles, indicating that water is resorbed from bubbles during cooling; textural evidence suggests that resorption occurs largely before the melt solidifies. Speciation data indicate that the molecular water distribution records resorption, whilst the hydroxyl distribution records earlier decompressive growth. Our results challenge the emerging paradigm that resorption indicates fluctuating pressure conditions, and lay the foundations for a new tool for reconstructing the eruptive history of natural volcanic products.
Introduction
Bubbles nucleate when magmatic volatiles (species such as water, CO 2 and SO 2 , that are only weakly soluble in the silicate melt) exsolve from a supersaturated melt. Water is the most important volatile because it is usually the most abundant and because it strongly affects melt viscosity (Hess and Dingwell, 1996). It is dissolved in the melt as two principal species: molecular water, H 2 O m , and hydroxyl groups, OH. As magma ascends, bubbles grow through decompressive expansion and continuing exsolution of volatiles from the melt (Sparks, 1978). Together these processes control the bubble growth rate which, in turn, controls or influences almost every aspect of magma ascent and eruption, including: magma vesicularity, buoyancy, rheology and permeability; the pressure gradient that drives the eruption; and the onset of magma fragmentation. Understanding and quantifying bubble growth is, therefore, one of the most fundamental challenges in physical volcanology.
Water exsolves from the melt, into a bubble, when its solubility in the melt decreases, and resorbs into the melt when its solubility increases. The resulting change in the water concentration at the bubble wall creates a chemical potential gradient in the melt, which drives diffusion towards a growing bubble and away from a shrinking bubble (Fig. 1). The water concentration profile may be preserved when the melt quenches to glass, offering the tantalising prospect of reconstructing the bubble's history of growth and resorption. We quantify the spatial distribution of dissolved water and its species in experimentally-vesiculated magmatic glasses, using secondary ion mass spectrometry (SIMS)-calibrated backscatter scanning electron microscope (BSEM) images (Humphreys et al., 2008) and Fourier-transform infra-red (FTIR) imaging (e.g. Nichols and Wysoczanski, 2007), in order to test this hypothesis.
Two recent studies apply a similar conceptual framework to draw significant conclusions about conduit processes. Watkins et al. (2012) analyse volatile distributions around vesicles in obsidian clasts and find water concentration profiles consistent with bubble resorption (cf. Fig. 1). They infer a pressure increase in the volcanic conduit prior to eruption. Carey et al. (2013) study vesicle distributions in basaltic pyroclasts and find indirect evidence of resorption of bubbles prior to eruption, which they also interpret as evidence of a pressure increase in the conduit. Based on our data, we propose that bubble resorption may occur during the quench from melt to glass as H 2 O solubility increases with decreasing temperature, and present an alternative interpretation of these findings in Section 4.4.
Other workers have used textural evidence from experimentally-vesiculated magma samples to investigate interactions between bubbles (Castro et al., 2012). They observe dimpled and sinuous glass films between vesicles, which they interpret as preserved evidence of incipient coalescence of growing bubbles. We analyse water distribution in the same samples and offer an alternative interpretation for their observations, which is consistent with our conceptual model (Section 4.3).
Water in silicate melts
Interpretation of water distributions in glass relies on quantitative models for water solubility and diffusivity. Experimental studies of various magma compositions show that, for crustal pressures relevant to magmatic degassing, solubility increases with increasing pressure and decreasing temperature (Baker and Alletti, 2012; Newman and Lowenstern, 2002), while diffusivity (D) increases with increasing temperature, decreasing pressure, and increasing water concentration (Ni and Zhang, 2008) (Fig. 2). Temperature exerts a dominant control on water diffusivity and, to a lesser extent, on solubility; however, there remains a gap in data between ambient and magmatic temperatures, which includes the transition between melt and glass.
The two species of water present in glass (H 2 O m and OH) interconvert via the equilibrium reaction

H 2 O m + O 0 ⇌ 2OH, (1)

in which molecular water reacts with bridging oxygens (O 0 ) in the melt to produce hydroxyl groups that are bound to the silicate polymer framework (Stolper, 1982a). The 'total water' (H 2 O t ) content of a melt or glass is the sum of the contributions from H 2 O m and OH. The position of the equilibrium of Eq. (1) (the 'equilibrium speciation') changes with pressure, temperature, H 2 O t concentration and melt composition (Hui et al., 2008; Silver et al., 1990; Stolper, 1982a, 1989) (Fig. 2). The bound OH groups are effectively immobile and H 2 O m is the diffusing species; consequently, OH concentration gradients form indirectly by diffusion of H 2 O m and subsequent readjustment towards equilibrium speciation via Eq. (1) (Zhang et al., 1991). For identical conditions, D H 2 O m is therefore higher than D H 2 O t (Fig. 2). At experimental (or magmatic) temperatures the rate of the species interconversion reaction is sufficiently fast that, following a perturbation to the system, equilibrium speciation is re-established over timescales of milliseconds. As a result of the strong temperature-dependence of the reaction rate, however, the time taken to achieve equilibrium speciation becomes much longer as temperature decreases, taking minutes to hours at ∼600 °C and days at ∼400 °C (Zhang et al., 1991, 1995).

[Fig. 2 caption: Controls on diffusivity and solubility of water. Variation in water solubility (upper) and diffusivity (lower) with pressure and temperature for rhyolite composition. A data gap exists between magmatic and ambient temperatures. The high-T solubility model is from Newman and Lowenstern (2002), showing the proportion of H 2 O m and OH at equilibrium speciation; diffusivity data are from Ni and Zhang (2008) and Anovitz et al. (1999); low-temperature diffusivity data (for 0.1 MPa) are from Anovitz et al. (2006).]
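To make the speciation bookkeeping concrete, the following sketch partitions total water between H 2 O m and OH for Eq. (1) on a single-oxygen mole-fraction basis. The equilibrium constant K is a user-supplied assumption (in practice it would come from a calibration such as Nowak and Behrens, 2001):

```python
import numpy as np

def speciate(Xw, K):
    """Solve K = X_OH^2 / (X_H2Om * X_O) with X_H2Om = Xw - X_OH/2 and
    X_O = 1 - Xw - X_OH/2 (Stolper-type single-oxygen basis), which
    reduces to a quadratic in X_OH; returns (X_H2Om, X_OH)."""
    a = 1.0 - K / 4.0
    b = K / 2.0
    c = -K * Xw * (1.0 - Xw)
    X_OH = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # physical root
    return Xw - X_OH / 2.0, X_OH

# Example: 2 mol% total water with an assumed K = 0.2.
X_m, X_OH = speciate(0.02, 0.2)
print(f"X_H2Om = {X_m:.4f}, X_OH = {X_OH:.4f}")   # ~0.005, ~0.030
```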
Materials and methods
Samples are obtained from pre-existing experimental suites and were manufactured under controlled conditions of pressure (P) and temperature (T). P and T conditions are given in Table 1, along with references to the original studies; sample compositions are given in Table S1 in the Supplementary Information. The experiments were all designed to produce bubble populations with either equilibrium profiles (solubility experiments) or bubble growth profiles (decompression experiments) (Fig. 1).
Sample production
All samples were synthesised at high pressure (P syn) and temperature (T exp, constant throughout the experiment) with excess water, to form a starting melt that was water-saturated and fully equilibrated; for the solubility samples, pressure was held constant throughout, i.e. P syn = P i = P f. All samples were quenched isobarically at P f; samples were quenched immediately upon reaching P f except for sample MCN15, which was held at P f for 60 s before quenching. Experimental quench rates were not directly measured but are estimated to vary from ∼3 to ∼60 s to the glass transition temperature (Table 1). Full production details for ABG samples are found in Burgisser and Gardner (2005). IS14 was produced following the procedure of Di Carlo et al. (2006); MCN13 following the procedure of Larsen (2008) and MCN15 following the procedure of Larsen et al. (2004).
Quantifying water content
H 2 O t data are obtained following Humphreys et al. (2008): SIMS H 2 O t analyses are used to calibrate greyscale values in BSEM images of the same area ( Fig. 3a, b).
BSEM imaging
Samples were prepared by embedding in epoxy resin and grinding to expose a flat surface. Additional resin was then added to the surface to fill in exposed vesicles and reduce topographic effects during SEM imaging. Re-grinding down to the original exposed surface creates a flat surface of exposed glass and infilled vesicles. Surface topography following sample preparation was checked via confocal microscopy at the National Physical Laboratory using an Olympus LEXT OLS4000 microscope (Wertheim and Gillmore, 2014). The join between the glass and infilling resin is smooth, with typically less than 1 μm difference in height between glass and resin, and with no gradient in the slope of the glass approaching the vesicle edge. 'Edge effects' (Newbury, 1975) are thus confined to thin (<2 μm) bright white rims at vesicle boundaries. As an additional measure to rule out a topographic cause for greyscale variations approaching vesicle edges and crack margins, the sample stage was rotated and the same area imaged in a different orientation with respect to the incident electron beam and detector. Greyscale variations caused by sample topography (analogous to shadows in a photograph) would be reversed when the sample was rotated by 180°; the consistency of the greyscale variations observed here, irrespective of sample orientation, demonstrates that our results are not affected by sample surface topography. In some images taken after SIMS analysis, residual gold coat remains in cracks and vesicle interiors and appears bright white in BSEM (e.g. Fig. 3a). Images were acquired at Durham University (GJ Russell Microscopy Facility) with a Hitachi SU-70 Analytical High Resolution SEM with attached Gatan Mono CL and associated DigitalMicrograph software, using a 15 keV electron beam in backscatter mode with a working distance of 15 mm. Qualitative H 2 O t variations are seen in greyscale variations: dark glass is H 2 O t -rich and light glass is H 2 O t -poor (Fig. 3a, Fig. 4a).

[Fig. 4 caption fragment, displaced in extraction: Margins of dark, water-rich glass are also seen along cracks, but with smaller width than the water-rich halos. Cracks extend between vesicles but often terminate at the edge of water-rich halos. (b) Melt films between neighbouring vesicles are often deformed (box is enlarged in (c)). Left arrow in (a) shows a broken, buckled film. Note that contrast has not been adjusted to highlight halos in (b) and (c). Bright patches are gold-coat remnants or Fe-Ti oxides. Textures are discussed in Section 4.3.]
SIMS analysis
Samples were prepared as for BSEM work. 1 H + , 23 Na + and 28 Si + were analysed in radial profiles towards vesicles using a CAMECA ims-4f ion microprobe at the Edinburgh Ion Microprobe Facility. A 10.7 keV, 6 nA, O − primary beam was accelerated onto the sample with a net impact energy of 15 keV. Positive secondary ions were accelerated to 4.25 keV and collected sequentially on the electron multiplier detector, using a 75 eV offset with a 40 eV energy window. Profiles were run in scan mode with a 5 μm step size, with each analysed spot approximately 5 × 5 μm and <3 μm deep. H 2 O t concentration was calculated from a working curve of 1 H + / 28 Si + which was calibrated twice daily using well constrained standards of varying SiO 2 and H 2 O t content. Errors are ±10% relative.
SIMS-BSEM calibration and data processing
Following the technique of Humphreys et al. (2008), SIMS H 2 O t data were used to calibrate BSEM images in order to extract quantitative H 2 O t data at high spatial resolution (<1 μm). Greyscale values were extracted along a 5 μm-wide profile immediately adjacent to the visible SIMS track, using Gatan DigitalMicrograph software. Mean greyscale values of 5 μm segments were plotted against the corresponding SIMS measurements (Fig. 3). A linear regression fit was applied and the resulting calibration equation used to extract quantitative data in the same image, each new image requiring a separate calibration. 'Edge effects' at vesicle walls are narrow and anomalously bright and thus easily removed from extracted profiles; presented profiles therefore begin a few microns from the vesicle wall. Presented images are enhanced to make greyscale variations more apparent by varying brightness, contrast and gamma settings; these settings do not alter the raw greyscale values used for calibration and data extraction.
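A minimal sketch of the per-image calibration step (the greyscale and SIMS values below are invented placeholders, not the study's data):

```python
import numpy as np

greyscale = np.array([112.0, 126.0, 141.0, 155.0, 171.0])  # mean per 5 um segment
h2o_sims = np.array([5.1, 4.6, 4.1, 3.7, 3.2])             # wt% H2Ot from SIMS

slope, intercept = np.polyfit(greyscale, h2o_sims, 1)       # linear regression

def greyscale_to_h2o(g):
    """Apply this image's calibration to convert greyscale to wt% H2Ot."""
    return slope * g + intercept

print(greyscale_to_h2o(130.0))   # ~4.5 wt% for these placeholder values
```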
For each sample, multiple radial profiles were extracted around multiple vesicles. Once all extracted greyscale data were converted to H 2 O t data using the relevant image calibration equation they were compiled to create a composite dataset of H 2 O t as a function of distance from vesicle wall for the sample. An example figure which shows this profile-averaging methodology is presented in Supplementary Information. The mean H 2 O t value was calculated for all data within 2 μm segments; these mean values form the H 2 O t profile for each sample. Errors shown are twice the standard error of this mean. Averaging all extracted profiles for one sample thus accounts for the variation resulting from vesicles that are sectioned at different distances from their equator (which affects the profile gradient, with steepest profiles for those sectioned directly at the equator, see Supplementary Information), and for variation in H 2 O t resulting from the use of multiple SIMS tracks per sample (which affects the position on the y-axis but not the shape of the profile).
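The compositing step can be sketched as follows (array names and the synthetic demo profile are ours):

```python
import numpy as np

def composite_profile(distances, h2o, bin_width=2.0):
    """Bin calibrated H2Ot values from many radial profiles by distance
    from the vesicle wall: mean per 2-um segment, with error bars of
    twice the standard error of that mean, as described above."""
    edges = np.arange(0.0, distances.max() + bin_width, bin_width)
    centres, means, errs = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (distances >= lo) & (distances < hi)
        if sel.sum() < 2:
            continue
        vals = h2o[sel]
        centres.append(0.5 * (lo + hi))
        means.append(vals.mean())
        errs.append(2.0 * vals.std(ddof=1) / np.sqrt(vals.size))
    return np.array(centres), np.array(means), np.array(errs)

rng = np.random.default_rng(0)
d = rng.uniform(0.0, 40.0, 500)                          # um from vesicle wall
w = 4.0 + np.exp(-d / 10.0) + rng.normal(0.0, 0.05, d.size)
centres, means, errs = composite_profile(d, w)
```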
FTIR analysis
Samples were prepared as free-standing wafers polished on both sides. Their high H 2 O t contents required thin wafers (<20 μm) to avoid saturating the detector. Samples were mounted on a glass slide with Crystalbond 509 and ground with silicon carbide grit before polishing with 3 μm and 1 μm diamond paste to produce a flat, polished surface. Samples were then flipped, remounted polished side down, and ground and polished from the other side. Thickness was monitored during polishing using a micrometer on the glass surface. As the target thickness was approached, micrometer measurements were conducted on the adjacent Crystalbond to avoid damaging the delicate sample. Final thickness was determined using interference fringes on reflectance FTIR spectra. Samples were finally removed from the slide by dissolving the Crystalbond with acetone; a paintbrush (rather than tweezers) was then used to remove the fragile wafers from the acetone bath.
FTIR analyses were acquired at the Institute for Research on Earth Evolution (IFREE), Japan Agency for Marine-Earth Science and Technology (JAMSTEC), using the Varian FTS Stingray 7000 Micro Imager Analyzer spectrometer with an attached UMA 600 microscope. Mid-IR (6000-700 cm −1 ) transmittance spectroscopic images were collected over 512 scans at a resolution of 8 cm −1 using a heated ceramic (globar) infra-red source, a Ge-coated KBr beamsplitter, and the Varian Inc. Lancer Focal Plane Array (FPA) camera housed in the microscope, which consists of a liquid-nitrogen-cooled infrared photovoltaic HgCdTe (MCT) array detector. The array detector comprises 4096 channels (arranged 64 × 64) across a 350 × 350 μm area, giving a channel, or spectral, resolution of 5.5 × 5.5 μm. The FPA camera was calibrated regularly. Samples were placed on a KBr window under N 2 purge and areas were selected for analysis using the microscope. Initially a background image of the KBr window was collected, which was subtracted from the sample image. New background images were taken approximately every 300 min. Images were processed using Varian Win-IR Pro software (v3.3.1.014). Individual spectra for use in transects were extracted from the images and H 2 O concentrations were calculated in the normal manner by entering the height (absorbance) of the relevant peak above a linear background into the Beer-Lambert law (Stolper, 1982b). H 2 O t was calculated from the peak at ∼3500 cm −1 and H 2 O m from the peak at ∼1630 cm −1 , using respective molar absorptivity coefficients of 90 ± 4 l mol −1 cm −1 (Hauri et al., 2002) and 55 ± 2 l mol −1 cm −1 (Newman et al., 1986).
OH values were calculated by subtracting calculated H 2 O m from H 2 O t . Sample density was calculated iteratively from major element compositions (Lange and Carmichael, 1987) and H 2 O t content (Ochs III and Lange, 1997). Sample thicknesses for spectra along the transects were determined using the frequency of interference fringes on reflectance spectra (e.g. Nichols and Wysoczanski, 2007). Images were collected in reflected light of exactly the same area that had been analysed in transmitted light, and spectra at the same coordinates were extracted and processed. A refractive index of 1.5 (Long and Friedman, 1968) was used for rhyolite.
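A sketch of the Beer-Lambert conversion itself (the absorbance, thickness, and density values are illustrative only; the molar absorptivities are those quoted above):

```python
M_H2O = 18.02   # g/mol

def absorbance_to_wt_percent(A, thickness_um, density_g_per_l, epsilon):
    """c (wt%) = 100 * M * A / (rho * d * eps), with d in cm,
    rho in g/L and eps in L mol^-1 cm^-1 (Beer-Lambert law)."""
    d_cm = thickness_um * 1e-4
    return 100.0 * M_H2O * A / (density_g_per_l * d_cm * epsilon)

# Total water from the ~3500 cm^-1 peak (eps = 90 L mol^-1 cm^-1):
print(absorbance_to_wt_percent(0.55, 18.0, 2300.0, 90.0))   # ~2.7 wt%
# Molecular water from the ~1630 cm^-1 peak (eps = 55 L mol^-1 cm^-1):
print(absorbance_to_wt_percent(0.12, 18.0, 2300.0, 55.0))   # ~0.9 wt%
```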
Errors on sample thickness determinations are ±3 μm (Nichols and Wysoczanski, 2007). Since thickness measurements for transect spectra all fall within 3 μm of each other with no systematic variation along transect, the average sample thickness along the transect was used when processing each spectrum. FTIR images are output in terms of absorbance. They were converted to concentration by applying the Beer-Lambert law, as above, using the mean sample thickness. Analytical errors are ±15% relative, largely as a result of the relative impact of uncertainties in thickness measurement for thin samples. However, it is important to note that a change in the thickness value used would shift the H 2 O t , H 2 O m and OH profiles up or down on the y-axis, but would not alter their shapes or positions relative to one another. Additional error results from volumetric averaging of concentration variations in 3D. Thin samples relative to vesicle diameter reduce this error but increase the relative importance of errors in thickness measurement; however the ratio of H 2 O m :OH again remains unaffected.
Results
We find that all vesicles, in all samples, have higher H 2 O t concentrations adjacent to the vesicle walls than in the far-field (Table 2). In BSEM images, this is seen as a dark halo surrounding each vesicle (Fig. 3a, Fig. 4a). In some areas dark circles are present even where no vesicle is observed on the BSEM image (Fig. 4a). In these instances, optical microscopy shows that vesicles are present just below the sample surface. We conclude that a water-rich shell surrounds each vesicle in 3D, which is seen as a water-rich halo when vesicles are cross-sectioned.
The SIMS-calibrated H 2 O t concentration gradients are steepest at the vesicle wall and decay to a far-field value over a few tens of microns from the vesicle wall (Fig. 5), corresponding to the edge of the observed halos in BSEM images. We quantify halo widths using the half-fall distance, i.e. the distance at which the H 2 O t concentration is halfway between the maximum and minimum values along the profile (Anovitz et al., 2006) (Table 2, Fig. 5c). To eliminate stereological issues, which arise where a vesicle is not sectioned through its equator (see Supplementary Information), the half-fall distance for each sample is calculated using the single vesicle in each sample with the steepest profile, which will be the vesicle cross-sectioned closest to its equator. Concentration profiles caused by bubble resorption will be radially symmetric around the centre of the bubble, which, for the same conditions of resorption, will result in smaller bubbles having steeper profile gradients (hence shorter half-fall distances) than larger bubbles. With the exception of sample IS14 (radius 100 μm), all samples have vesicles of a similar size (radii 16-30 μm) and so their half-fall distances are comparable (see Supplementary Information).
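The half-fall measurement reduces to simple interpolation on a decaying profile; a sketch with a synthetic profile (interpolation scheme and values are ours):

```python
import numpy as np

def half_fall_distance(distance_um, h2o_wt):
    """Distance at which H2Ot is halfway between its maximum (vesicle
    wall) and far-field minimum (Anovitz et al., 2006). Assumes a
    monotonically decaying profile ordered by increasing distance."""
    half = 0.5 * (h2o_wt.max() + h2o_wt.min())
    # np.interp needs increasing x-values, so interpolate against -H2O:
    return np.interp(-half, -h2o_wt, distance_um)

d = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 22.0, 26.0, 30.0])   # um
w = np.array([5.0, 4.7, 4.45, 4.25, 4.12, 4.05, 4.02, 4.0])    # wt%
print(half_fall_distance(d, w))   # ~9 um for this synthetic profile
```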
FTIR images of vesicular experimental samples also show higher H 2 O t concentrations around vesicles than in the far-field (Fig. 6). Transects between vesicles show that H 2 O t and H 2 O m concentrations increase towards the vesicle walls in both MCN13, which is an undecompressed solubility sample, and MCN15, which is a decompression sample. By contrast, the OH profile is flat between vesicles in the solubility sample, and is depleted around vesicles in the decompression sample. In the decompression sample, the H 2 O m :OH ratio increases from 1.7 in the far-field to 7.0 at the vesicle wall.
Thin cracks commonly extend between vesicles (Fig. 4a). Many of these cracks do not reach the vesicle walls; instead they terminate as they enter the water-rich halo. BSEM images show that cracks have dark, water-rich margins on either side (Fig. 4a). The width of these water-rich margins is variable but is typically much less than the width of the vesicle halos. The observed width of the water-enriched margin is affected by the angle at which the crack intersects the sample surface, equalling the true width only for cracks intersecting the surface at 90°, and appearing wider with increasing obliquity of the angle of intersection. Observed half-fall distances of water-rich crack margins are therefore upper estimates. In sample ABG1 (shown in Fig. 4a), typical half-fall distances of water-rich crack margins are <3 μm, compared with a half-fall distance of 12 μm for the water-rich halos around vesicles (Table 2).
Adjacent vesicles are often separated by sinuous, rather than planar, melt films (Fig. 4b, c). These films are similar to those described by Castro et al. (2012), whose study of vesicular samples included sample ABG1, which is also investigated in this study (Fig. 4, Table 1).
Resorption mechanism
In all experiments the pressure history was carefully controlled in order to produce either growing bubbles, or bubbles in equilibrium with the melt (i.e. neither growing nor resorbing), yet all samples have increasing H 2 O t concentrations towards vesicle walls. These concentration profiles are evidence of water diffusion from the bubble/vesicle back into the melt/glass, characteristic of resorbing bubbles (Fig. 1).
We observe resorption around all vesicles in all samples, independent of melt composition, experimental apparatus used or pressure history; this ubiquity implies a common mechanism. We propose that bubble resorption is caused by an increase in the equilibrium solubility of water in the melt (H 2 O eq ), as a consequence of decreasing temperature during quench.
For all samples the water concentration at the vesicle wall exceeds H 2 O eq for the final pressure (P f) and temperature conditions of the experiment (T exp) immediately prior to quench; in four cases, the concentration even exceeds H 2 O eq at the highest pressure of the experiment (P syn). These water concentrations would require an increase in pressure of 30 to 150 MPa under isothermal conditions (Table 2). Melt-gas interfacial tension produces an overpressure within a bubble given by ΔP = 2σ/r, where σ is interfacial tension and r is bubble radius. For σ = 0.08 N m −1 (Gardner and Ketcham, 2011), the pressure increase above the reported experimental pressure is only 0.03 MPa for a bubble with a 5 μm radius, decreasing to 0.003 MPa for a 50 μm bubble radius. The observed vesicle wall water concentrations can therefore not be explained by interfacial tension effects. Although the quench process is described as isobaric, a small, transient pressure increase may occur during the quench process as the pressure medium re-equilibrates once the sample moves into the quench vessel (e.g. Holloway et al., 1992; Di Carlo et al., 2006). In the ABG and IS14 experiments described here, this fluctuation was observed on the pressure gauge as a pressure increase of <10 MPa lasting for less than one second. Comparison of the observed resorption profiles with the results of diffusion modelling of this transient pressure increase (Fig. 5d) demonstrates that it is far too small to account for the observed vesicle wall water concentrations and half-fall distances, providing strong evidence that resorption is not driven by pressure increase in our samples.

[Fig. 6 caption fragment, displaced in extraction: Thickness measured along the transect was constant for MCN13 (top), but varied slightly for MCN15 (bottom). This variation was not systematic (i.e. glass did not thin or thicken towards vesicle walls) and had a standard deviation <1 μm. Blue shading shows the resulting error on calculated concentrations and demonstrates that this error cannot explain the observed profile shapes. Grey arrows shown for both samples show the typical change in calculated concentrations assuming a ±3 μm error on the average thickness used. This error would shift the profiles up or down but would not alter their shape or the dominance of H 2 O m relative to OH. Panel (e) (green graph) shows the expected H 2 O m and OH profiles for the observed H 2 O t profiles in (d), assuming equilibrium speciation at the experimental run temperature (T exp), calculated using the relationship of Nowak and Behrens (2001). Comparison of (d) and (e) demonstrates that both samples exhibit higher H 2 O m :OH ratios than expected, with further enrichment in H 2 O m around vesicles. Solubility sample MCN13 shows a flat OH profile, whilst decompression sample MCN15 shows depletion in OH around vesicles.]

By contrast, in all samples, the water concentration at the vesicle wall is consistent with the H 2 O eq expected at P f for temperatures <600 °C (i.e. interpolated in the solubility data gap, Fig. 2; Table 2).
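Stepping back to the interfacial-tension estimate above, a quick arithmetic check of ΔP = 2σ/r:

```python
def laplace_overpressure_mpa(sigma_n_per_m, radius_um):
    """Bubble overpressure dP = 2 * sigma / r, returned in MPa."""
    return 2.0 * sigma_n_per_m / (radius_um * 1e-6) / 1e6

for r in (5.0, 50.0):
    print(r, laplace_overpressure_mpa(0.08, r))
# -> 0.032 MPa at r = 5 um and 0.0032 MPa at r = 50 um, matching the
#    0.03 and 0.003 MPa quoted above and far below the 30-150 MPa required.
```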
All experiments are conducted isothermally, so temperatures below ∼600 °C are not reached until the sample is quenched at the end of the experiment. D H 2 O t decreases dramatically with decreasing temperature (Fig. 2), so the observed halos cannot be produced by post-quench hydration at ambient temperatures (at 30 °C, D H 2 O t is ∼10 −21 m 2 /s (Anovitz et al., 2006) and ∼30 years would be required to create a 1 μm halo). We conclude that the observed bubble resorption profiles were created rapidly during quench.
This conclusion is further supported by our FTIR data, which show that samples have higher H 2 O m :OH ratios than expected for equilibrium speciation at the experimental temperature (Fig. 6). Although this could be partly attributed to the influence of the 'quench effect', whereby the slowing interconversion reaction can still maintain speciation in equilibrium with the decreasing temperature during the initial stages of quench, this cannot explain the observation that the water-rich halos around vesicles are significantly enriched in H 2 O m (Fig. 6). This enrichment instead indicates disequilibrium speciation, which is evidence that bubble resorption results from a rapid decrease in temperature. During bubble resorption, water enters the melt as H 2 O m (Stolper, 1982a). At magmatic temperatures Eq. (1) acts rapidly to convert some of this additional H 2 O m to OH in order to regain equilibrium speciation. However, during quench, the reaction slows dramatically (Zhang et al., 1991, 1995), so the melt in the resorption halos remains enriched in H 2 O m . If resorption were driven by a pressure increase at experimental/magmatic temperatures, the rate of the species interconversion reaction would be sufficiently rapid that water species would be in equilibrium; hence speciation data provide additional evidence that resorption is not driven by pressure increase in our samples. The breakdown of the interconversion reaction over rapid quench timescales means that the diffusion of resorbing H 2 O m is not slowed by conversion to immobile OH; consequently, quench resorption is controlled by D H 2 O m , which is always greater than D H 2 O t (Fig. 2). A more profound consequence is that OH concentrations in the melt are largely unaltered by bubble resorption if quench is sufficiently rapid. Fig. 6 shows the distribution of water species for two samples: MCN13 is from a solubility experiment and is expected to have had a flat, equilibrium water profile (Fig. 1) prior to quench; MCN15 is from a decompression experiment and is expected to have had a bubble growth profile (water depletion adjacent to bubbles, Fig. 1) prior to quench. Whilst both samples show enrichment in H 2 O m around vesicles, MCN13 shows a flat OH profile, and MCN15 shows depletion in OH adjacent to vesicles. We conclude that rapid quench preserves the OH profile created prior to quench. This record of pre-quench conditions, preserved in the OH distribution, may be accessed via FTIR.
Controls on quench resorption
Isobaric resorption of water is controlled by the thermal history of the sample during quench. The dependence of water solubility and diffusivity on temperature is strong, non-linear, and incompletely characterised (note gaps in Fig. 2), precluding quantitative modelling of quench resorption. Nonetheless, our data yield valuable insights into the physical controls on the process. Samples ABG1 and MCN13 have similar, rhyolitic composition and both are expected to be in equilibrium (sensu Fig. 1) prior to quench. MCN13 has the same half-fall distance as ABG1 (within error). However, its lower H 2 O t content should give lower D H 2 O m , and therefore a smaller half-fall distance than ABG1 (Figs. 5, 7). We attribute this to its slower quench (estimated 30-60 s to glass transition for MCN13, cf. 3-10 s for ABG1; see Appendix A for methodology) giving more time for resorption at high temperature, where diffusivity is high.
The five ABG samples have identical compositions and quench histories, but differ in decompression histories. The half-fall distance for the undecompressed, equilibrium sample (ABG1) is considerably longer than for the decompressed samples (Table 2). We propose that this difference reflects their different pre-quench water distributions. A flat (equilibrium) pre-quench profile is expected for ABG1, whilst the other samples' pre-quench profiles should be depleted at the bubble wall as a result of decompression-induced bubble growth (Fig. 1). For the same quench history, therefore, water is expected to diffuse furthest in sample ABG1 since there is no pre-quench, water-depleted (hence low diffusivity) region to 'fill in' near the bubble wall.
We determine a 'characteristic diffusivity', D H 2 O ch , for sample ABG1 from the observed half-fall distance (L) and the timescale over which the diffusion occurs (t), by rearranging L = (D H 2 O ch · t)^{1/2}. This is a form of the half-fall distance derived for 1D Fickian diffusion (i.e. in a plane half-space) and is valid for constant diffusivity and boundary conditions of constant H 2 O concentration (e.g. Anovitz et al., 2004). In these samples, however, both diffusivity and solubility vary as a function of time as the sample cools during the quench process, and so D H 2 O ch represents a time-integrated 'average' diffusivity over the duration of resorption/hydration. Since the cooling history of the samples is calculated (Appendix A) rather than directly measured, we calculate D H 2 O ch for ABG1 using both the upper and lower estimates of quench time (3-10 s, see Table 1). 3 s is likely to be an underestimate of the true diffusion timescale over which the profile forms, but is representative of previous assumptions of the time to the glass transition for these samples (Burgisser and Gardner, 2005; Castro et al., 2012). The evidence of minor H 2 O enrichment around cracks shows that some diffusion also occurs below the glass transition, and so an upper limit of 10 s is also used, based on the time for the sample to cool to 300 °C. Although diffusivity data below 400 °C require extrapolation of the Ni and Zhang (2008) diffusivity model beyond its temperature range, doing so suggests that diffusivity decreases a further order of magnitude between 400 and 300 °C, requiring 10 s to form a 1 μm hydration lengthscale. Since the sample spends only ∼1 s in the temperature range 310-290 °C, profiles are not expected to be significantly modified below 300 °C. Accordingly, for ABG1 we calculate D H 2 O ch as ∼6 × 10 −11 m 2 /s to ∼8 × 10 −12 m 2 /s, for 3 to 10 s respectively. Comparison with computed values of D H 2 O m for this sample (following Ni and Zhang, 2008) of 4 × 10 −11 m 2 /s at T exp and 2 × 10 −12 m 2 /s at 400 °C (Table 2) indicates that the bulk of the resorption occurs at high temperature, i.e. during the early part of the quench.
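Rearranged, D_ch = L²/t; a sketch using the ABG1 half-fall distance of 12 μm (Table 2):

```python
def characteristic_diffusivity(half_fall_um, time_s):
    """D_ch = L^2 / t, from rearranging L = sqrt(D_ch * t)."""
    L_m = half_fall_um * 1e-6
    return L_m * L_m / time_s

for t in (3.0, 10.0):
    print(t, characteristic_diffusivity(12.0, t))
# -> ~4.8e-11 and ~1.4e-11 m^2/s, broadly consistent with the
#    ~6e-11 to ~8e-12 m^2/s range quoted above.
```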
Resorption textures and the glass transition
Textural observations indicate that resorption occurs largely above the melt's glass transition temperature T g . Most samples contain cracks with thin water-rich margins (Fig. 4). The cracks must form below T g so the margins of cracks indicate that at least some hydration occurs into glass, rather than melt. However, the margins of cracks typically extend into the glass less than half as far as the resorption halos around vesicles; furthermore, the cracks often terminate in the outer part of the resorption halos.
A melt's T g varies with its water content and quench rate (Hui and Zhang, 2007). For ABG samples, we calculate T g ∼ 460 °C in the far-field (typically 4 wt% water), and T g ∼ 430 °C at the vesicle wall (typically 5 wt% water) (see Appendix A for methodology). Consequently, during quench, melt in the far-field is capable of cracking whilst the more water-rich melt near the bubbles is still plastic. Our textural observations are, therefore, consistent with early resorption of water around bubbles during quench, transition to glass of the melt in the far-field, cracking of the glass, followed, finally, by minor hydration of the crack margins during cooling to ambient temperature. When resorption occurs above the glass transition, bubbles can shrink. Integrating under the water concentration profile for ABG1 we calculate that 40% by mass of the water that was in the bubble pre-quench is resorbed into the melt during cooling (see Supplementary Information). If we assume, for purposes of illustration, that this resorption is compensated exactly by a loss of bubble volume of 40%, then the observed sample porosity would be only 60% of the pre-quench value. This is likely to be an overestimate of the volume change caused by resorption since it assumes that all excess H 2 O in the surrounding halo was derived from the bubble while it was still able to resorb, whereas the evidence of hydrated crack margins suggests that at least some of this diffusion happened below T g. Similarly it is possible that, as T g is approached, the increasing structural relaxation timescale of the melt with cooling may make it difficult for the reduction in bubble volume to keep pace with the loss of internal H 2 O vapour. However, this calculation does not account for the additional change in bubble volume that would occur due to simple thermal contraction of the H 2 O vapour. If this is included (see Supplementary Information) then the observed sample porosity could be as little as 30% of the original pre-quench porosity.
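The mass-balance step can be sketched as a spherical-shell integration of the excess water in the halo (a sketch of the idea only; the paper's actual integration, geometry corrections, and the 2300 kg/m³ melt density are our assumptions):

```python
import numpy as np

def resorbed_water_mass(r_um, c_wt, r_bubble_um, rho_melt=2300.0):
    """Excess water mass (kg) around one bubble: integrate
    4*pi*r^2 * rho * (C(r) - C_far) over the halo, with the far-field
    concentration taken from the profile tail."""
    r = (r_bubble_um + np.asarray(r_um)) * 1e-6       # radius from bubble centre, m
    excess = (np.asarray(c_wt) - c_wt[-1]) / 100.0    # wt fraction above far field
    shell = 4.0 * np.pi * r**2 * rho_melt * excess
    return float(np.sum(0.5 * (shell[1:] + shell[:-1]) * np.diff(r)))

d = np.linspace(2.0, 40.0, 20)              # um from the vesicle wall
c = 4.0 + np.exp(-d / 10.0)                 # synthetic wt% profile
print(resorbed_water_mass(d, c, r_bubble_um=20.0))
```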
Possible evidence for significant reduction in bubble volumes can be seen in the deformed films that exist between neighbouring vesicles (Fig. 4b, c). Films that stretch as adjacent bubbles grow and interact will not remain planar if the bubbles subsequently shrink, but will tend to buckle. These features have previously been interpreted as evidence of novel bubble coalescence mechanisms at high pressure (Castro et al., 2012). In the light of our findings, we reinterpret these textures as evidence of resorption-driven bubble shrinkage during quench.
Interpretation of natural samples
This study demonstrates that water resorption should be expected whenever vesicular magma quenches. The degree of resorption will vary considerably with diffusivity and quench conditions, and will be most significant where D_H2O (particularly D_H2O^m) is high and quench is slow (Fig. 7). If magma has undergone significant degassing during ascent, low residual H2O contents will result in lower D_H2O and higher melt viscosity; hence, if such a melt is rapidly quenched (e.g. in an eruption column), then quench resorption is unlikely to be significant. Where erupted material cools more slowly there is potential for significant resorption even for melts with low H2O contents. Quench resorption affects both glass transition temperature and vesicularity, and may therefore be important in processes such as the welding of ignimbrites, the formation of rheomorphic flows, and the formation of obsidian. The samples described in this study were quenched rapidly at high pressure and may have natural analogues in submarine and subglacial eruptions, where magma is erupted at higher than ambient pressure and is therefore likely to have higher H2O contents and correspondingly greater potential for resorption. We stress that bubble resorption may also result from a pressure increase; however, since both temperature and pressure usually drop dramatically through eruption, we propose that the quench mechanism developed in this study will usually provide the most straightforward explanation when evidence of resorption in natural magmatic products is found.

Watkins et al. (2012) describe resorption profiles around vesicles in obsidian clasts from the ca. 1340 AD eruption of Mono Craters, California, and propose that resorption was caused by a pressure increase of ∼10 MPa in the volcanic conduit prior to eruption. We note that a temperature decrease to ∼600 °C, which must occur during quench, is also sufficient to explain the observed total water (H2Ot) concentrations at vesicle walls (Liu et al., 2005; Newman and Lowenstern, 2002). Watkins et al. (2012) discount thermal resorption as inconsistent with a secondary vesicle population that they observe in the water-rich resorption halo around one vesicle, arguing that high melt viscosity at temperatures near the glass transition would preclude secondary bubble nucleation. By contrast, we interpret this observation as evidence of cooling and reheating of this sample. This is consistent with previous studies that use geospeedometry to interpret similar obsidian clasts, from the same locality, as fragments from chilled conduit margins that were subsequently entrained by erupting magma (Newman et al., 1988; Stolper, 1989; Zhang et al., 1995, 1997, 2000). Water-rich glass in the resorption halo produced by the first cooling event would be prone to secondary vesiculation on reheating. Consequently, we conclude that repressurization is not required to explain the observations of Watkins et al.; rather, we propose that their observations could provide a natural counterpart to our experimental observations.

Carey et al. (2013) observe similar secondary populations of vesicles in basaltic pyroclasts from explosive activity of Kilauea (Hawai'i) in 2008. They propose that repressurization of magma during convection in a lava lake produces a water-rich halo around bubbles which subsequently vesiculates during eruption; a thermal origin is not considered, but our results indicate that it is a plausible alternative.
Secondary vesicle populations like those observed by Watkins et al. (2012) and Carey et al. (2013) are evidence of a fluctuation in either pressure or temperature conditions (or both) but do not, by themselves, reveal which occurred. Our observations of quench resorption highlight the need to consider variation in temperature, as well as pressure, when evidence of bubble resorption is found. Our FTIR results indicate that water speciation data provide a methodology for distinguishing between these two mechanisms when interpreting natural samples. Quench resorption creates dramatically increased H2Om:OH ratios around bubbles because the interconversion reaction (Eq. (1)) cannot maintain equilibrium speciation as the sample cools. By contrast, pressure-driven resorption at magmatic temperatures would yield H2Om:OH ratios consistent with equilibrium speciation. Consequently, any analytical technique that can quantify water species with high spatial resolution could be used to distinguish between resorption mechanisms.
Conclusions and implications
We show that bubble resorption is a ubiquitous consequence of the increase in H2O solubility during magma cooling. We also show that resorption occurs mainly above the glass transition, whilst the melt is still plastic and the bubbles are able to respond by shrinking. We observe buckled melt films between vesicles in our samples, which we interpret as direct evidence of bubble shrinkage.
Our FTIR data show that the resorption signal is dominated by H2Om, whilst the OH distribution records pre-quench conditions. The different behaviour of the water species is a consequence of their different diffusivities and the kinetics of their interconversion reaction. Our work provides the conceptual underpinning for an analytical technique that allows both pre-quench processes, such as bubble growth, and post-eruptive quench to be interrogated. With further experimental work to fill the gap in our knowledge of water diffusivity and solubility around the glass transition temperature (Fig. 2), this should provide a quantitative tool for reconstructing both the pre- and post-eruptive history of natural samples.
The ubiquity of quench resorption has broad implications. Studies that use the final vesicle size or volume fraction of experimental or natural samples to make inferences about bubble growth, degassing mechanisms or eruptive processes may need to be revised to account for bubble shrinkage during resorption; SIMS-calibrated BSEM images represent a simple and effective way to do so. Similarly, studies that make inferences from measurements of bulk dissolved water content or speciation of vesicular samples (e.g. solubility experiments, geospeedometry) should assess for quench resorption and disequilibrium speciation. Finally, the role that bubble resorption during cooling of eruptive products may play in promoting rheomorphic flow, welding of ignimbrites and the formation of obsidian is worthy of further investigation.
Author contributions
I.M.M. collected and processed SEM, SIMS and FTIR data. A.R.L.N. assisted with FTIR sample preparation and analysis, and undertook additional FTIR data processing. A.B., C.I.S. and J.F.L. performed decompression and solubility experiments. I.M.M., E.W.L. and M.C.S.H. wrote the paper. All authors discussed the results and commented on the manuscript.
Research Data Access Statement
The data underlying this study are available from the corresponding author.
Acknowledgements
We thank A. Proussevitch and C. Martel for their detailed and constructive reviews and Tim Elliott for editorial handling.
"Geology",
"Environmental Science",
"Physics"
] |
Orbital stability of periodic wave solution for Eckhaus-Kundu equation
In this paper, we mainly study the orbital stability of the periodic traveling wave solution for the Eckhaus-Kundu equation with quintic nonlinearity, which is not a standard Hamiltonian system. Since the studied equation is not a standard Hamiltonian system, the method presented by M. Grillakis and others for proving orbital stability cannot be applied directly, and the equation has two higher-order nonlinear terms. Therefore, by constructing three conserved quantities and using detailed spectral analysis and appropriate techniques, we overcome the complications that the studied equation introduces into the calculations and proofs, and we obtain a conclusion on the orbital stability of the dn periodic wave solution for the Eckhaus-Kundu equation. As an extension of the proof of the above results, we also prove the orbital stability of the solitary wave for the studied Eckhaus-Kundu equation.
Introduction
It is well known that nonlinear phenomena exist in various fields of science and engineering, such as solid-state physics, biophysics, optical fibers, fluid dynamics, etc. [1][2][3]. With the development of nonlinear science, many nonlinear complex systems in these fields can be modeled by nonlinear evolution equations (NLEEs), and the well-known nonlinear Schrödinger-type equation (NLS) is one of the most important models among NLEEs, often applied to space plasma, coastal engineering, nonlinear optics and so on. Accordingly, the study of nonlinear science has attracted the attention of a large number of researchers. Over the years, many effective methods have been developed to study nonlinear problems, and there have been some new advances on nonlinear wave solutions in recent years. Xu et al [4] investigated the long-time asymptotics of the solution to the Cauchy problem for the Gerdjikov-Ivanov type derivative nonlinear Schrödinger equation with step-like initial data, and obtained asymptotic formulas for the solution. Wang et al [5] studied the long-time asymptotics of the focusing Kundu-Eckhaus equation with nonzero boundary conditions at infinity by the nonlinear steepest-descent method of Deift and Zhou, found three asymptotic sectors in the space-time plane, and gave asymptotic solutions for the three sectors; finally, they also studied the modulational instability to reveal the criterion for the existence of modulated elliptic waves in the central region. In 2021, Bilman et al [6] studied the families of multiple-pole solitons generated by Darboux transformations as the pole order tends to infinity, used the nonlinear steepest-descent method for analyzing Riemann-Hilbert problems, and computed the leading-order asymptotic behavior in the algebraic-decay, non-oscillatory, and oscillatory regions. In 2022, Wang et al [7] applied the finite-gap integration approach and Whitham modulation theory to the complete classification of solutions to the defocusing complex modified KdV equation with step-like initial condition.
In this paper, we will study a higher-order NLS-type equation with cubic and quintic nonlinear terms, namely the Eckhaus-Kundu (EK) equation [8][9][10][11][12]

i u_t + u_xx + 2σ|u|²u + δ²|u|⁴u + 2iδ(|u|²)_x u = 0, (1)

where u(x, t) is a complex function of x and t, σ and δ are both real constants, δ² denotes the quintic nonlinear coefficient, 2δ denotes the nonlinear dispersion coefficient, and the last term represents the nonlinearity caused by the time-retarded induced Raman process. Equation (1) was given successively by Kundu [8] and by Calogero and Eckhaus [9] in their studies on the integrability of the nonlinear Schrödinger equation (NLS), and it has a wide range of applications in physics, optics and other fields. For example, Clarkson [10] described the propagation of ultrashort femtosecond pulses in optical fibres in quantum field theory, investigated the problem of approaching the critical point of the strongly interacting many-body system of equation (1), and obtained its exact solution; exact solutions without and with current density were also obtained. Johnson [11] showed how to extend the periodic wave modulation procedure in the classical water-wave problem; through the derivation of the higher-order equations associated with the water waves of equation (1), the details of the instability of the Benjamin sidebands were investigated. These results can be used to study the change in form of soliton solutions near stability boundaries, which leads naturally to more general applications of 'soliton' solutions and inverse scattering methods. In nonlinear optics, Kodama [12] modelled a long-distance high-bit-rate transmission system, discussed the propagation of optical solitons in single-mode fibres, and investigated equation (1) with respect to the integrability of the perturbations, considering the soliton as a stable fixed point of an infinite-dimensional mapping generated by a transmission system with periodic excitations.
Clarkson and Cosgrove [13] studied the Lax pair of equation (1). Wang et al [14] and Mendoza et al [15] studied the soliton solutions of equation (1). Xie et al [16] studied the rogue wave solutions of equation (1), and Zha [17] studied its higher-order rogue wave solutions. A recent work [18] studied the integrable discretization of equation (1): Tian et al gave x-discrete, t-discrete and fully discrete forms of the Eckhaus-Kundu equation based on the bilinear form, and the single-soliton and double-soliton solutions of the derived discrete equations were successfully constructed by the Hirota bilinear method. Cimpoiasua et al [19] studied the explicit invariant solutions of equation (1) from the point of view of Lie symmetry analysis, and obtained hyperbolic-function-type soliton solutions of (1). Luo and Fan [20] used the ∂̄-dressing method to construct the two-soliton solution and the N-soliton solution of equation (1).
In the study of nonlinear systems, it is of great importance to consider the stability of solutions. Angulo Pava [21] investigated the orbital stability of the dn periodic wave solutions of the focusing nonlinear Schrödinger equation, where u = u(x, t) ∈ C and x, t ∈ R, and of the mKdV equation, by applying the ideas for proving the stability of solitary wave solutions proposed by Benjamin and Bona. Since then, the study of the orbital stability of dn periodic wave solutions in nonlinear systems has received a great deal of attention [22][23][24]. Here we study the orbital stability of the periodic wave solution for the Eckhaus-Kundu equation (1). To the best of our knowledge, the orbital stability of dn periodic wave solutions for this type of equation has not been investigated in the previous literature.
It is worth noting that the NLS and mKdV equations studied in [21] have nonlinear terms of at most cubic order, and both are standard Hamiltonian systems. In contrast, the Eckhaus-Kundu equation (1) studied in this paper is not only a higher-order NLS-type equation with cubic and quintic nonlinear terms, but is also not a standard Hamiltonian system. Because equation (1) has nonlinear terms of higher order than the equations studied in [21], and its structure is different from theirs, the methods used in this paper differ from those used in [21]. Since equation (1) is not a standard Hamiltonian system, the method proposed by Benjamin [25], Bona [26, 27] and others to prove the orbital stability of solitary wave solutions, used in [21], cannot be applied directly; moreover, equation (1) has higher-order terms. These features bring considerable complexity to our calculations and proofs. To overcome them, by constructing three conserved quantities and using detailed spectral analysis and appropriate techniques, we conclude that the dn periodic wave solution of equation (1) is orbitally stable with respect to small perturbations in the L² norm. Then, as an extension of the process used to prove the orbital stability of the periodic wave of equation (1), we prove the orbital stability of its solitary wave solution. The proofs of the orbital stability of periodic and solitary wave solutions of equation (1) given in this paper complete and supplement the previous stability studies of the Eckhaus-Kundu equation.
The main points of this paper are as follows. In section 2, the existence of the dn periodic wave solution of equation (1) is proved by a first integration and the properties of elliptic functions. In section 3, a detailed spectral analysis of the operator D is carried out. Using Floquet theory [28], Lamé's equation [29] and Weyl's essential spectrum theorem [29], we obtain the spectral properties of the operator D = -∂²_x + qm + 3ma²: it has three simple eigenvalues, with zero as its second eigenvalue, and the rest of its spectrum consists of discrete double eigenvalues. In section 4, by constructing three conserved quantities and using the ideas of Benjamin [25] and Bona [26, 27] for proving the stability of solitary waves, we show the orbital stability of the dn periodic wave solution of the Eckhaus-Kundu equation (1) with period L, as well as the orbital stability of the solitary wave solution of equation (1) when ω, v satisfy 4ω + v² < 0.
Existence of periodic traveling wave solution for equation (1)
In this section, we mainly study the existence of periodic traveling wave solutions of the Eckhaus-Kundu equation (1) of the form u(x, t) = e^{iωt} e^{iv(x-vt)/2} e^{iφ(ξ)} a(ξ), ξ = x - vt, (4), where a(ξ) is a real-valued amplitude and the auxiliary phase φ(ξ) defined in (5) involves the amplitude a and two constants E₁, E₂. Substituting this ansatz into (1) and requiring both the real and imaginary parts of (7) to vanish, then substituting (10) into the imaginary part (9) and choosing E₁, E₂ such that its coefficient is zero, (9) holds identically and (8) reduces to

a″ = m(a³ + qa), (12)

where m = 2σ, q = -(4ω + v²)/8σ. Multiplying equation (12) by a′(ξ) and integrating once with respect to ξ, we obtain that a(ξ) satisfies

(a′)² = (m/2)(a⁴ + 2qa² + M_a), (13)

where M_a is a non-zero integration constant. Let F(a) = a⁴ + 2qa² + M_a; then the solutions of equation (13) depend on the roots of the polynomial F(a). Since we consider m < 0 in this paper, -F(a) has real symmetric roots ±A and ±B. Without loss of generality, we assume 0 < B < A. From m < 0 and the nonnegativity of the left side of (14), we know that B < a < A, and A, B satisfy A² + B² = -2q, A²B² = M_a. With the substitution t = a/A, equation (14) can be changed into a standard elliptic form. According to the properties of the elliptic functions [30], using t(0) = 1 together with the relationship t = a/A, we get the dn wave solution

a(ξ) = A dn(A√(-m/2) ξ, k), with modulus k² = (A² - B²)/A². (17)

Since the dn function has fundamental period 2K, i.e., dn(u + 2K, k) = dn(u, k), where K = K(k) denotes the complete elliptic integral of the first kind, the fundamental period of the dn wave solution a(ξ) in (17) can be obtained as L = 2K(k)/(A√(-m/2)). (20) Writing this period as a function of B alone, it follows that there is a unique B ≡ B(q) that makes the fundamental period of the periodic traveling wave solution equal to the prescribed L, as stated in theorem 2.1 below (see also the consistency sketch that follows).
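Because the displayed equations of this section were damaged in extraction, the forms used above (the reduced ODE a″ = m(a³ + qa), the constraint q = -(A² + B²)/2, and the dn profile with k² = (A² - B²)/A²) are reconstructions, not verbatim quotations. The sketch below checks them for internal consistency; all parameter values are illustrative.

```python
import numpy as np
from scipy.special import ellipj

m, A, B = -2.0, 1.5, 0.8            # illustrative: m < 0 and 0 < B < A
q = -(A**2 + B**2) / 2.0
k2 = (A**2 - B**2) / A**2           # elliptic parameter k^2
alpha = A * np.sqrt(-m / 2.0)

xi = np.linspace(-5.0, 5.0, 20001)
_, _, dn, _ = ellipj(alpha * xi, k2)   # SciPy's ellipj takes the parameter m = k^2
a = A * dn

h = xi[1] - xi[0]                      # central second difference of a(xi)
a_xx = (a[2:] - 2.0 * a[1:-1] + a[:-2]) / h**2
residual = a_xx - m * (a[1:-1]**3 + q * a[1:-1])
print("max |a'' - m(a^3 + q a)| =", np.abs(residual).max())  # tiny; finite-difference error only
```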
Theorem 2.1. (1) There exist an interval W(q₀) around q₀, an interval V(B₀) around B₀, and a unique smooth function Λ: W(q₀) → V(B₀), such that for all q ∈ W(q₀) we have B = Λ(q) and the fundamental period of the corresponding dn wave equals L. (2) The periodic traveling wave solution a = a(•; A(q), B(q)), with fixed fundamental period L and satisfying equations (6) and (13), depends smoothly on q.
(3) W(q₀) can be chosen as (-∞, 2π²/(mL²)).
Proof. From the conclusions of theorem 2.1, there is B₀ = Λ(q₀). In the following we prove that the period is strictly monotone in B: from (22) and (23) it can be calculated that the modulus k(B) is a strictly decreasing function of B, while the complementary modulus k′(B) is a strictly increasing function of B ∈ (0, √(-q)). Taking the derivative of f(k′), it can be shown that f(k′) is an increasing function of k′, and thus the derivative of the period with respect to B is negative, so equation (26) holds. It then follows from the implicit function theorem that there is a unique smooth function Λ: W(q₀) → V(B₀) such that for all q ∈ W(q₀), T(Λ(q), q) = L, where W(q₀) is an interval around q₀ and V(B₀) is an interval around B₀. Therefore, conclusion (1) of theorem 2.1 is proved.
Again, since q₀ can be taken arbitrarily in the interval (-∞, 2π²/(mL²)), and by the uniqueness of the function Λ, the interval W(q₀) can be extended; conclusion (2) then holds by the smoothness of the function. Since Λ is a strictly decreasing function, it follows from (22) that k(q) is a strictly decreasing function of q.
Proof. By theorem 2.1, Λ is a strictly decreasing function of B. For all q ∈ W(q₀), using the relationship k′² = 1 - k² and the relation between B and q, Π is a strictly increasing function of q. Taking the derivative of (22) with respect to q proves that k(q) is a strictly decreasing function of q, and corollary 2.1 is proved. Following the properties of elliptic functions [30] and corollary 2.1, K(k)/E(k) is a strictly increasing function of k, and k(q) is a strictly decreasing function of q, so K(k(q))/E(k(q)) is a strictly decreasing function of q.
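The monotonicity of the complete elliptic integrals invoked here is easy to inspect numerically; note that SciPy's ellipk and ellipe take the parameter m = k², not the modulus k:

```python
from scipy.special import ellipk, ellipe

for k in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    K, E = ellipk(k**2), ellipe(k**2)
    print(f"k = {k:4.2f}   K = {K:8.5f}   E = {E:7.5f}   K/E = {K/E:8.5f}")
# K(k) increases and E(k) decreases with k, so the ratio K/E is strictly increasing.
```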
Spectral analysis
For any u, v ∈ X = H¹([0, L]), there is a real inner product (u, v). Let X* be the dual space of X; then X* = H⁻¹([0, L]), and there exists a natural isomorphism I: X → X* defined by ⟨Iu, v⟩ = (u, v), where ⟨•, •⟩ denotes the pairing between X and X*.
The explicit form of I follows from (31) and (32). Next, we study the spectral properties of the linear operator D.
On the basis of Weyl's essential spectrum theorem [29], we can determine σ_ess(D). Next, by means of the perturbation theorem, Floquet theory [28] and the eigenvalue problem of Lamé's equation [29], we analyse the spectral characteristics of the operator D as follows.
We begin with an analysis of the periodic eigenvalue problem for the operator D on [0, L].
By the theory of compact self-adjoint operators, the spectrum of the operator D in (35) is a countably infinite set of eigenvalues with λ_n → +∞ as n → +∞. By χ_n we denote the eigenfunction corresponding to the eigenvalue λ_n; a continuously differentiable function χ_n with period L can be extended to the entire real line. Using Floquet theory [28], the semi-periodic problem (37) associated with problem (35) is also self-adjoint, hence one obtains a sequence of eigenvalues {μ_n | n = 0, 1, …} with μ_n → +∞ as n → +∞. Let ζ_n be the eigenfunction of the eigenvalue μ_n. For all x, a function g is said to be semi-periodic with semi-period L if g(x + L) = -g(x); its period is then 2L. The solution of equation (39) is stable on the intervals (λ₀, μ₀), (μ₁, λ₁), …, which are therefore called stability intervals; it is unstable on the intervals (-∞, λ₀), (μ₀, μ₁), (λ₁, λ₂), …, which are called instability intervals. The instability interval (-∞, λ₀) is always present. Theorem 3.1. For the dn periodic wave solution a = a(•; A(q), B(q)), the linear operator D = -∂²_x + qm + 3ma² defined on [0, L] has three simple eigenvalues λ₀, λ₁ and λ₂, where λ₁ = 0 is the second eigenvalue, with corresponding eigenfunction a′; the rest of the spectrum consists of discrete double eigenvalues.
Proof. By (40) we show that λ₀ < λ₁ = 0 < λ₂. From equation (34) we have Da′ = 0, so the eigenvalue 0 corresponds to the eigenfunction a′. The eigenfunction a′ has two zeros on [0, L), so the position of the zero eigenvalue of the operator D can be determined from the transformed problem (41), where ρ and λ are related by equation (42). According to Floquet theory [28], equation (41) has three instability intervals, where for i ≥ 0, ρ_i is an eigenvalue of the periodic problem (40) and θ_i an eigenvalue of the semi-periodic problem (37). So the first three eigenvalues ρ₀, ρ₁, ρ₂ are simple, and the remaining eigenvalues ρ₃ ≤ ρ₄ ≤ … are double. Here Ψ_k denote the eigenfunctions corresponding to the first three eigenvalues ρ₀, ρ₁, ρ₂, and their period is 2K. It is easy to see that on [0, 2K), Ψ₀ has no zeros and Ψ₂ has two zeros for k ∈ (0, 1). Combining this with (42), and noting that λ(ρ) is an increasing function, we conclude that λ₁ = 0 is the second eigenvalue of the operator D and λ₀ < 0.
Then, through the eigenvalue problem (37), where the eigenvalue θ_i is related to μ_i by an equation analogous to (42), θ₀ and θ₁ are the first two eigenvalues of the semi-periodic problem (37), and the eigenfunction corresponding to θ₀ is proportional to cn(x)dn(x).
where A is a self-adjoint operator with only a single negative eigenvalue λ₀, whose corresponding eigenvector is f. Consider the dn wave solution a(x) defined in theorem 2.1; then the following holds. Proof.
(1) Since the periodic traveling wave solution a(x) defined in theorem 2.1 is bounded, it is easy to see that the relevant quadratic form is bounded from below. The minimizing sequence {φ_j} is bounded, and we still denote its convergent subsequence by {φ_j}. The infimum γ₀ is attained at some function Φ ≠ 0. From lemma 3.1 and theorem 3.1, the operator D has the spectral characteristics stated in lemma 3.1. Moreover, the family of solutions is continuously differentiable in q; differentiating equation (12) with respect to q gives the required identity. Following lemma 3.1, γ₀ ≥ 0. In summary, γ₀ = 0.
(2) Using the method of proof in (1), it can be shown that γ ≥ 0. We then argue by contradiction to prove that γ > 0. Suppose that γ = 0; then there exists a function Φ satisfying the constrained minimization conditions. By Lagrange's theorem, there exist multipliers α, λ, θ satisfying the corresponding Euler-Lagrange equation, from which it can be deduced that α = 0. Also, since Da′ = 0, we get λ = 0. Then Φ is proportional to a′, while a′ is orthogonal to the constraint functions, a contradiction. Therefore γ > 0.
Orbital stability of the periodic wave solution for the Eckhaus-Kundu equation (1)
In this section we follow the main ideas of the classical approach proposed by Benjamin [25] and Bona [26, 27]: the orbital stability of the dn periodic wave solution U(x, t) = e^{iωt} e^{iv(x-vt)/2} a(x - vt) for the Eckhaus-Kundu equation (1) is demonstrated by constructing three conserved functionals E, Q₁ and Q₂, where a(ξ) is given by theorem 2.1.
Since equation (1) has phase and translation symmetries, i.e., if u(x, t) is a solution of equation (1), then for any (y, θ) ∈ R × [0, 2π), e^{iθ}u(x + y, t) is also a solution of equation (1), the orbital stability is defined as follows. Definition 4.1. The orbit Ω_a = {e^{iθ}a(• + y) : (y, θ) ∈ R × [0, 2π)} (45) is stable under the action of the periodic flow produced by the Eckhaus-Kundu equation (1) if, for every ε > 0, there exists δ > 0 such that whenever the initial value u₀ satisfies ‖u₀ - a‖_X < δ, then for any t ∈ R there exist θ = θ(t) and y = y(t) such that the solution u(x, t) of equation (1) with initial value u₀ satisfies ‖u(•, t) - e^{iθ}a(• + y)‖_X < ε. In the function space X = H¹([0, L]) we consider the initial value problem (47), (48) for equation (1). Let T₁ and T₂ be the one-parameter unitary group operators on X defined in (49) and (50). Differentiating (49) and (50) with respect to s₁, s₂ at s₁ = 0, s₂ = 0, we obtain (51). According to [32], for any u₀ ∈ H^s_per([0, L]), s ≥ 0, equation (1) is globally well-posed; i.e., for u(0) = u₀(x), equation (1) has a unique solution u ∈ C(R; H^s_per([0, L])). In order to demonstrate the orbital stability of the periodic traveling wave solution, we build the three conserved quantities E(u), Q₁(u) and Q₂(u) given in (52)-(54). It is easily verified that E(u), Q₁(u) and Q₂(u) are C² functionals on the complex space X, with first-order Fréchet derivatives E′(u), Q₁′(u) and Q₂′(u), respectively. By calculation, one shows that E(u), Q₁(u) and Q₂(u) are invariant under the action of T(•), that is, unchanged by T₁(s₁)T₂(s₂) for any s₁, s₂ ∈ R, and that they are conserved along the flow u(t) of (47) for all t ∈ R. Next, we prove the orbital stability of the periodic wave solution for the Eckhaus-Kundu equation (1).
Theorem 4.1.Let L 0 > , v 0 > and v 4 0 2 w + < are arbitrary fixed.Allow for the smooth curve of the dn periodic traveling wave q mL a A q B q , 2 , , given by theorem 2.1.Then for ω, v such that ) , the orbit generated by U a x x = ( ) ˆ( ) in X is orbitally stable under the action of the periodic flow produced by equation (1).
Proof. Following the classical approach proposed by Benjamin [25], Bona [26, 27] and Weinstein [33], we consider the solution a(x) of equation (1) given by theorem 2.1. Initially, for y ∈ [0, L) and t ∈ R, we measure the distance of u(•, t) from the orbit of a by minimizing over the translation y and the phase θ; by the method of [25-27], this minimum is attained at some (y(t), θ(t)) on the interval considered. In the following, we consider the perturbation of the periodic wave a(x) defined through this optimal modulation. From the minimality of (y(t), θ(t)), we obtain the corresponding first-order (orthogonality) conditions. Therefore, from the fact that a(x) satisfies equation (7), and from (58), we obtain the compatibility conditions satisfied by the perturbation. Using the translation and rotation invariance of E(u), Q₁(u) and Q₂(u) defined in (52), (53) and (54), expression (58), the embedding H¹([0, L]) ⊂ L^r([0, L]) for each r ≥ 2, and equations (7) and (11) satisfied by a(x), we compute the variation of the conserved functional E(u) + ωQ₁(u) + vQ₂(u). Without loss of generality, assume ‖a‖ = 1, and write u = a + perturbation; from the definition of the operator D, and using the Cauchy-Schwarz inequality, we obtain (64). By the above conclusions we obtain (65), where g(x) = ax² - bx³ - cx⁴ with positive constants a, b, c, so that for arbitrarily small x, g(x) > 0. Let ε > 0; since the conserved functional controls the distance to the orbit, the stability estimate holds. Finally, since the mapping q ∈ (-∞, 2π²/(mL²)) ↦ a(•; A(q), B(q)) is continuous, and using the results derived above, we conclude that in the space X = H¹([0, L]) the dn periodic wave is orbitally stable with respect to small perturbations in the L² norm. In addition, the orbital stability of the solitary wave T₁(ωt)T₂(vt)a(x) can be discussed as follows.
Note 4.1: From the relationship A² + B² = -2q and the properties of elliptic functions [30], where k is the modulus and k′ the complementary modulus, if B → 0⁺ then k(B) → 1⁻ and A → √(-2q). Since, for the elliptic functions, dn(x, 1) = sech(x), equation (17) then loses its periodicity in this limit, giving a waveform with a single peak and 'infinite period' that is attenuated at infinity. Thus we obtain the bounded analytic solution a(ξ) = √(-2q) sech(√l ξ), where l = -ω - v²/4 > 0 and 2σ = m < 0; q, m are given with equation (12). This leads to the solitary wave solution u(x, t) = e^{iωt} e^{iv(x-vt)/2} √(-2q) sech(√l (x - vt)) (69) of equation (1). Based on the definitions of T₁ and T₂ in equations (49) and (50), we can write equation (69) as u(x, t) = T₁(ωt)T₂(vt)a(x). Combining (5), (6) and (10), it can be verified that E′(a) + ωQ₁′(a) + vQ₂′(a) = 0. (71) Define the operator H_{ω,v}: X → X* as H_{ω,v} = E″(a) + ωQ₁″(a) + vQ₂″(a), which means that I⁻¹H_{ω,v} is a bounded self-adjoint operator on X. The spectrum of H_{ω,v} consists of the real numbers λ that make H_{ω,v} - λI non-invertible. From (51), (70) and (71) we calculate (73), and by (73), Z lies in the kernel of the operator H_{ω,v}, where Z is defined by (74). Here N is a finite-dimensional subspace of X, P is a closed subspace of X, and there exists a positive constant ℓ₁, independent of u, such that ⟨H_{ω,v}u, u⟩ ≥ ℓ₁‖u‖² for u ∈ P. Denote by d″(ω, v) the Hessian matrix of d(ω, v); we represent the number of positive eigenvalues of d″ by p(d″) and the number of negative eigenvalues of H_{ω,v} by n(H_{ω,v}).
It follows from [29] that a local solution exists for the initial value problem (47), (48) of equation (1). From the analysis and discussion of the previous contents, it is clear that equation (1) has three conserved quantities satisfying (55), (56), and that the solitary wave of equation (1) satisfies (70); we have also defined the operator H_{ω,v}. Therefore, according to the 'stability theorem' in the introduction of [34], it remains to verify its spectral assumptions. Writing perturbations as f(x) = e^{iψ(x)}(r₁(x) + ir₂(x)) with real functions r₁, r₂ ∈ X = H¹(R), and using Da′(x) = 0, we obtain that the operator D has the following spectral properties: D has a unique simple negative eigenvalue, the kernel of D is spanned by a′(x), and the rest of the spectrum is positive and bounded away from zero. From [35], for r₁ orthogonal to the kernel directions there exists a positive real number ℓ > 0, independent of r₁, such that ⟨Dr₁, r₁⟩ ≥ ℓ‖r₁‖², (81) where k₁ is an arbitrary real number. Then from (79) and (80) it follows that (84) holds. In this case, for any f(x) = e^{iψ(x)}(r₁(x) + ir₂(x)) ∈ X, choosing a₁ = ⟨r₁, •⟩ suitably, and under the conditions of theorem 4.3, it follows from (68) that a(x) = √(-2q) sech(√l x), so the entries of the Hessian can be computed explicitly; substituting into (93) gives det d″ < 0. (94) To sum up, it is clear that, under the conditions of theorem 4.3, we have p(d″) = n(H_{ω,v}) = 1; that is, if σ < 0 and ω, v satisfy 4ω + v² < 0, the solitary wave T₁(ωt)T₂(vt)a(x) of the Eckhaus-Kundu equation (1) is orbitally stable on the space X = H¹(R).
Conclusion
In this paper we focus on the orbital stability of the periodic wave solution of the Eckhaus-Kundu equation (1). Since equation (1) is not a standard Hamiltonian system, the method suggested by Grillakis et al [34, 35] for proving the orbital stability of solitary wave solutions cannot be applied directly, and the equation has two higher-order nonlinear terms. To this end, we constructed three conserved quantities and used special techniques to overcome these difficulties, proving that the dn periodic wave solution of equation (1) is orbitally stable under small perturbations in the L² norm. As an extension of the proof of the above results, we also proved the orbital stability of the solitary wave of equation (1). These discussions of the orbital stability of periodic and solitary wave solutions of the Eckhaus-Kundu equation (1) complete and supplement previous stability studies of the Eckhaus-Kundu equation.
Hypothesis 4.1 (spectral decomposition of H_{ω,v}). The space X can be decomposed as the direct sum X = N ⊕ Z ⊕ P, with N, Z and P as above. To verify this, the operator D is used: it follows from the spectral properties of D in theorem 3.1 that 0 is the second eigenvalue of D and that D has only one negative eigenvalue λ₀, whose corresponding eigenfunction is Ψ₀, i.e., DΨ₀ = λ₀Ψ₀. From (68), a² → 0 as |x| → ∞, so by Weyl's essential spectrum theorem [29] the essential spectrum of D is positive and bounded away from zero, and every f(x) ∈ X can be uniquely decomposed along Ψ₀, a′ and their complement. In summary, Hypothesis 4.1 holds; it then remains to prove p(d″) = n(H_{ω,v}) = 1, i.e., det d″ < 0, which yields the orbital stability of the solitary wave of the Eckhaus-Kundu equation (1) on the space X = H¹(R). For the periodic problem of section 3, it follows from the perturbation theorem that the eigenvalues of (36) and (38) interlace; the period of the solution of equation (39) is L if and only if δ = λ_n, n = 0, 1, …, and 2L if and only if δ = μ_n, n = 0, 1, …. If the solution of equation (39) is bounded, such a solution is said to be stable; otherwise it is said to be unstable. The zeros of χ_n and ζ_n on [0, L] are obtained by analysis: χ₀ has no zeros; χ_{2n+1} and χ_{2n+2} have 2n + 2 zeros; ζ_{2n} and ζ_{2n+1} have 2n + 1 zeros. Combining these facts with theorem 4.1 in [34], we obtain the orbital stability theorem for the solitary wave of equation (1) stated above.
"Mathematics"
] |
Gintropy: Gini Index Based Generalization of Entropy
Entropy is being used in physics, mathematics, informatics and in related areas to describe equilibration, dissipation, maximal probability states and optimal compression of information. The Gini index, on the other hand, is an established measure for social and economical inequalities in a society. In this paper, we explore the mathematical similarities and connections in these two quantities and introduce a new measure that is capable of connecting these two at an interesting analogy level. This supports the idea that a generalization of the Gibbs–Boltzmann–Shannon entropy, based on a transformation of the Lorenz curve, can properly serve in quantifying different aspects of complexity in socio- and econo-physics.
Motivation
Many researchers use entropy as an appropriate measure for quantifying complexity or the inequality level in a complex system. There is an overwhelming choice of generalized entropy formulas, some of them satisfying more of the basic axioms than others [1]. The classical Boltzmann-Gibbs-Shannon formula is often used in economic and social studies without elaborating too much on the conditions under which it is an appropriate thermodynamic function. Most prominently, the additivity of entropy upon the factorization of probabilities is, as a rule, not tested, and therefore the use of entropy remains at the level of a crude analogy. Using the Tsallis or Rényi entropy formula [2] is also not a sufficient choice: although a free parameter in these entropies provides more flexibility in processing and interpreting statistical data and in generalizing additivity, there is no basic reason not to use yet another formula that satisfies the basic physical requirements for the entropy.
On the other hand, the most popular way of quantifying the inequality level in a socio-economic system is to use the Gini index, introduced for the first time by the economist Corrado Gini [3]. This measure provides a simple method of quantifying the deviation from a uniform distribution, and it is not a quantity borrowed by a simple analogy from thermodynamics. It also has the advantage that its value is a number in the [0, 1] interval, like an order parameter. The Gini index is 0 when all members of the investigated society are equal in the relevant quantity, and it is 1 if one member is monopolizing the whole of the available resources. The Gini index can be determined experimentally, either graphically by constructing the Lorenz curve [4], or by the simple formula

G = (1/(2N²⟨x⟩)) Σ_{i,j} |x_i - x_j|,

where x_i is the relevant quantity for element i, and ⟨x⟩ is its average value for the whole system with N elements. While the Gini index is traditionally used to measure wealth, income, or other inequalities, the entropy is a concept stemming from physics and mathematics and is applied to understand, describe and construct optimal or equilibrium distributions. At first glance, these two concepts show no reason to be connected. However, in recent publications it has been observed that the Gini index and the total Shannon entropy of socio-economical models and data show a synergic behavior [5].
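For a finite data set, the pairwise formula can be evaluated directly; a minimal sketch (note that for a sample of size N the one-monopolist configuration gives G = (N - 1)/N, approaching 1 only as N grows):

```python
import numpy as np

def gini(x):
    """Gini index from the mean absolute difference: sum_ij |x_i - x_j| / (2 N^2 <x>)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :]).sum() / (2.0 * x.size**2 * x.mean())

print(gini([1, 1, 1, 1]))   # 0.0  -> perfect equality
print(gini([0, 0, 0, 4]))   # 0.75 -> (N-1)/N for a single monopolist, N = 4
```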
In this paper, we shall demonstrate that the mathematical construction formulas of the Gini measure of inequality in a society, on the one hand, and the entropy-probability trace formula, on the other hand, show intriguing similarities at a certain step of their derivation. Both quantities are integrated quantities, in the sense of summing over alternative values of a basic variable, x. We propose the usage of the term "gintropy" to express the combination of the Gini index [3,6,7] and the entropy, both associated with a probability density function (PDF).
Basics
Let us consider the relevant quantity of the investigated system as a continuous variable x. This could be, for example, salary, wealth, population, etc. The occurrence frequency of a given value in a huge set of data is described by the normalized PDF ρ(x), with ∫₀^∞ ρ(x) dx = 1. An approximation to such mathematical PDFs is given in practice by counting the number of occurrences of values in a short bin [x, x + Δx] and dividing by the total number:

ρ(x) ≈ N(x, x + Δx)/(N_tot Δx),

with N_tot the total number of observed data. In income distributions, for example, N(x, x + Δx) is the number of persons having an income in the Δx interval starting at x. The total income is then obtained as X_tot = N_tot ∫₀^∞ x ρ(x) dx, and the average income is given by ⟨x⟩ = ∫₀^∞ x ρ(x) dx. Both the entropy and the Gini index can be expressed as expectation values of some functions of x over the PDF ρ(x), and we are going to demonstrate the latter in the present paper.
Not only the PDFs but frequently also the cumulative distributions are in our focus. The first reason for this is that the experimental shape of the cumulative functions is smoother, even in the case of poorer statistics. The second reason is that, especially for income distribution and inequality, the total body of the "rich" is better contrasted with the "poor".
It is straightforward to construct the quantity "the population fraction richer than x" as the tail-cumulative integral of the PDF:

C̄(x) = ∫ₓ^∞ ρ(y) dy.

A similar cumulative quantity is the wealth accumulated by this richer class, divided by the average income:

F̄(x) = (1/⟨x⟩) ∫ₓ^∞ y ρ(y) dy.

Trivially, one obtains C̄(0) = 1 and F̄(0) = 1. The famous Pareto law expresses that a p fraction of the population possesses a (1 - p) fraction of the wealth. In the original statement about the economy at the end of the 19th century it was p = 0.2, formulated as the "80/20" rule: 20 percent of the population having 80 percent of the total wealth [8][9][10]. Later, a "90/20" rule has also been suggested by Dunford [11]; this loses, however, the elegant definition of the Pareto point (see the next paragraph). Analyses of national GDP comparisons and wealth distribution in certain countries often use a power-law fit in the wealthy region, ρ(x) = cx^{-(1+α)}, calling the parameter α the Pareto index [12][13][14][15]. It is, however, largely debated where one should place the cut-off in the distribution curve, beyond which the tail is of power-law type. For part of the PDF, exponential fits can also be done [16]. As an overall fit to the whole income distribution curve, it has recently been shown that a Tsallis-Pareto cut power-law or some special beta prime distribution works well [17].
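Empirically, both tail cumulatives follow from a single sort of the data; the sketch below evaluates C̄ and F̄ at each sample point of a synthetic exponential data set (sample size and seed are arbitrary):

```python
import numpy as np

def tail_cumulatives(x):
    """Empirical C-bar (population fraction richer than x_i) and F-bar (their wealth share)."""
    xs = np.sort(np.asarray(x, dtype=float))       # incomes in ascending order
    n = xs.size
    C_bar = 1.0 - np.arange(1, n + 1) / n          # fraction richer than xs[i]
    F_bar = 1.0 - np.cumsum(xs) / xs.sum()         # wealth held by that richer class
    return xs, C_bar, F_bar

xs, C_bar, F_bar = tail_cumulatives(np.random.default_rng(1).exponential(size=100_000))
print(C_bar[0], F_bar[0])   # both close to 1 at the poor end, since C-bar(0) = F-bar(0) = 1
```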
For a simple division of the system into an upper and a lower class, the Pareto point x_P is used, satisfying

C̄(x_P) = p,  F̄(x_P) = 1 - p.  (8)

The implicit relation x_P(p) depends on the underlying PDF, ρ(x). Since C̄(0) + F̄(0) = 2 and the sum is monotonically decreasing, due to

d/dx (C̄ + F̄) = -ρ(x)(1 + x/⟨x⟩) < 0,

there is always a point x = x_P where C̄(x_P) + F̄(x_P) = 1. However, the value p cannot be arbitrary.
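For a concrete PDF, the Pareto point follows from root-finding on C̄(x) + F̄(x) - 1. As an example, the sketch below uses the exponential PDF with ⟨x⟩ = 1, whose tail cumulatives C̄(x) = e^{-x} and F̄(x) = (1 + x)e^{-x} are derived in the 'natural distribution' example later in this paper:

```python
from math import exp
from scipy.optimize import brentq

f = lambda x: exp(-x) + (1.0 + x) * exp(-x) - 1.0   # C-bar + F-bar - 1
x_P = brentq(f, 0.1, 10.0)
p = exp(-x_P)                                       # rich population fraction
print(f"x_P = {x_P:.3f}, p = {p:.3f}, wealth share = {1 - p:.3f}")
# -> p ≈ 0.32: for the exponential PDF the richest ~32% hold ~68% of the wealth.
```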
As we shall discuss in the next section, the Gini index, G, can be expressed in several alternative ways: (i) as the average of big differences in the data set, (ii) as a construction using the above cumulative quantities, or (iii) as an expectation value of the cumulative of the cumulative. G expressed as an integral over C̄ contains an integrand σ(C̄). For some PDFs, this function turns out to be formally identical with the terms in an entropy-probability trace formula known from elsewhere. These formulas define the gintropy as a function of the cumulative measure of being "richer than", σ(C̄), and this function coincides with the classical entropy for an exponential PDF, like the Gibbs-Boltzmann distribution of energy in thermodynamics. For some other distributions frequently considered in complex systems, the gintropy resembles terms of various generalizations of the Gibbs-Boltzmann-Shannon entropy. Among others, we arrive at the Tsallis entropy for the original Pareto distribution, and some further interesting cases. By construction, as we shall demonstrate later, the gintropy curve is the difference between the Lorenz curve and the diagonal in the F̄ vs. C̄ map.
In the sequel of this paper, we explore these formulas as several facets of the Gini index and its calculation. After the mathematical definitions and equivalent forms, we present certain analytically given PDFs, each reflecting a theoretical possibility for income inequalities: extreme communism, giving every person the same income; a divided society, defining two classes of the previous case with a fixed share; the eco-window, providing equal probability for any income in a fixed, possibly even infinite, interval; the exponentially distributed income, taken as an analogy to the nature of atomic physics; and finally the Pareto distribution characteristic of capitalism. A different Gini index, G, and also a different gintropy, σ(C̄), belong to each model. Finally, we collect a few ideas about what laws the Gini index and gintropy may follow: is there a trend akin to the second law of thermodynamics? Are societies closed systems or not? Can or must inflation distort our analysis?
Gross Inequality in General
Let ρ(x) be a normalized PDF. The Gini index in the continuous-x case is defined as

G = (1/(2⟨x⟩)) ∫₀^∞ ∫₀^∞ |x - y| ρ(x) ρ(y) dx dy.  (10)

It can easily be proven that its value is always between zero and one, and it is used to quantify the gross inequality in the distribution ρ(x). The original definition (10) can be expressed by using the cumulatives as (11). This expression can be further compressed by considering the cumulative of the cumulative:

h(x) = ∫ₓ^∞ C̄(y) dy.  (12)

Finally, from here, the Gini index is expressed as a ratio of two expectation values (13). Alternatively, it can be solely expressed via the cumulative population. From the corresponding definitions, we have the derivatives ρ(x) = -dC̄/dx and xρ(x) = -⟨x⟩ dF̄/dx; substituting these into (13) and integrating by parts, we get (14). Using the boundary conditions C̄(0) = 1 and h(0) = ⟨x⟩, we arrive at

G = (1/⟨x⟩) ∫₀^∞ (C̄ - C̄²) dx.  (15)

This form is reminiscent of the quantum impurity measure, Tr(ρ - ρ²), which is zero only for pure states; a similar expression also occurs in the theory of search trees in informatics [18]. Let us also note here that for scaling PDFs, i.e., ρ(x) = (1/⟨x⟩) f(x/⟨x⟩), the cumulative function C̄ and the Gini index G do not depend directly on ⟨x⟩; they depend only on the form of the function f(z). This is important when studying the history (time evolution) of G and the related constructions: an overall inflation increasing ⟨x⟩ in time will not influence this inequality measure.
Finally, we arrive now at the construction of the quantity gintropy. A fashionable representation of the Gini index is realized by plotting the cumulative wealth percentage in terms of the cumulative population possessing that wealth, as in Figure 1. It can be shown that the half-moon area between the Lorenz curve [4,19,20,21,22] and the diagonal of the unit square (known as the equality line) in such an F̄(x) vs. C̄(x) plot is exactly G/2. The integrand, σ(C̄), under the integral over C̄, which runs between zero and one, behaves like an entropy density. (The original and nowadays used Lorenz curve actually maps the low-end cumulatives, integrated from zero to x; however, σ(C), in contrast to σ(C̄), is not reminiscent of entropy formulas.) We call this quantity gintropy and define it as the difference between the rich-end-cumulative Lorenz curve and the diagonal:

σ(C̄) ≡ F̄(x) - C̄(x).  (16)

From the above definition, σ(C̄) remains to be reconstructed with the help of C̄(x). We note that, using the relations C(x) = 1 - C̄(x) and F(x) = 1 - F̄(x), this quantity can equivalently be expressed by the poor-end-cumulative Lorenz curve (the generally used form of the Lorenz curve) too: σ = C - F. The Gini index is expressed from the gintropy as a simple integral

G = 2 ∫₀¹ σ(C̄) dC̄.  (18)

It is interesting to summarize the proof of this statement here because it is a central motivation for thinking in terms of gintropy. Using the respective definitions of the tail-cumulative quantities, the half-moon area is calculated as a double integral (19). Changing the order of integration leads to (20). Here, the term with 1 in the first parenthesis integrates to zero due to the definition of the expectation value, ⟨x⟩. Then, we replace -C̄(y)ρ(y) = ½ d/dy C̄², integrate by parts and compare the result to (15) to conclude G = 2∫₀¹ σ dC̄.
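The area statement G = 2∫₀¹ σ(C̄) dC̄ can be verified directly on sampled data; here we use the uniform ('eco-window') PDF on [0, 1], for which the closed-form Gini index derived below is 1/3:

```python
import numpy as np

rng = np.random.default_rng(2)
xs = np.sort(rng.uniform(0.0, 1.0, size=500_000))   # eco-window with a=0, b=1
n = xs.size
C_bar = 1.0 - np.arange(1, n + 1) / n               # tail cumulative at each sample
F_bar = 1.0 - np.cumsum(xs) / xs.sum()              # tail Lorenz curve
sigma = F_bar - C_bar                               # gintropy

# C_bar decreases along xs, so integrate over the reversed arrays
G = 2.0 * np.trapz(sigma[::-1], C_bar[::-1])
print(G)                                            # ≈ 1/3 = (b - a)/(3(a + b))
```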
Now, we explore some basic properties of gintropy. Some of these provide further evidence to consider gintropy like a generalized entropy density.

1. The gintropy is never negative: σ = F̄ - C̄ ≥ 0 is proven by writing σ(x) = ∫ₓ^∞ (y/⟨x⟩ - 1) ρ(y) dy = ∫₀ˣ (1 - y/⟨x⟩) ρ(y) dy and taking the first form for x ≥ ⟨x⟩ and the second form in the opposite case. This implies that the rich-end wealth fraction is always bigger than or equal to the population fraction possessing it.

2. The gintropy takes its maximum at the mean, x = ⟨x⟩, since dσ/dx = ρ(x)(1 - x/⟨x⟩) changes sign there.

3. According to Equation (8), at the Pareto point the gintropy equals σ(x_P) = 1 - 2p, and therefore p ≤ 1/2 holds for the rich fraction. Since σ_max ≥ σ(x_P), in order to get a Pareto point one needs σ(⟨x⟩) ≥ 1 - 2p, i.e., the maximum of the gintropy has to be bigger than this difference value. As a consequence, the Pareto point is restricted by the maximal gintropy: (1 - σ(⟨x⟩))/2 ≤ p ≤ 1/2.

4. The expectation value of gintropy is half of the Gini index: ⟨σ⟩ = G/2.

5. The integral of gintropy over the base value x is the non-Poissonity index, ∫₀^∞ σ(x) dx = Var(x)/⟨x⟩, with Var(x) = ⟨x²⟩ - ⟨x⟩² being the variance of x. The proof of this statement uses the same mathematical trick as the one in Equation (12).

6. For some particular PDFs, σ(C̄) looks like an entropy-density formula, s(p_i). We present important examples in the next section; numerical checks of properties 4 and 5 are sketched right after this list.
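As announced in item 6, properties 4 and 5 are easy to check numerically on an exponential sample (⟨x⟩ = 1, Var(x) = 1), for which ⟨σ⟩ should approach G/2 = 1/4 and ∫σ dx should approach Var(x)/⟨x⟩ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)

xs = np.sort(x)
n = xs.size
C_bar = 1.0 - np.arange(1, n + 1) / n
F_bar = 1.0 - np.cumsum(xs) / xs.sum()
sigma = F_bar - C_bar

# sorted-sample Gini, equivalent to the pairwise definition
G = 2.0 * np.sum(np.arange(1, n + 1) * xs) / (n * xs.sum()) - (n + 1) / n

print(f"<sigma>      = {sigma.mean():.4f}  vs  G/2     = {G / 2:.4f}")                       # property 4
print(f"int sigma dx = {np.trapz(sigma, xs):.4f}  vs  Var/<x> = {x.var() / x.mean():.4f}")   # property 5
```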
Important Examples
In this section, we list some important examples of the gintropy, σ(C). We go through primitive models of income/wealth distributions, labelled as communism, communism++, eco-window, natural, or capitalism. Starting from model PDFs, the gintropy expression and the Gini index are calculated.
Communism
Our first example is communism: all incomes are equal, so the PDF is simply a singular delta distribution peaked at the single value a: ρ(x) = δ(x - a), leading to ⟨x⟩ = a, C̄(x) = Θ(a - x) and h(x) = (a - x)Θ(a - x), with Θ(x) the Heaviside step function, defined as Θ(x) = 1 for x ≥ 0 and Θ(x) = 0 otherwise. This leads to ⟨x⟩F̄ = h + xC̄ = aΘ(a - x) and, by that, to σ(C̄) = F̄ - C̄ ≡ 0, i.e., an identically vanishing gintropy. As a consequence, also G = 0. Here, the Pareto point belongs to a 50/50 division.
Communism++
The next example we present is a slight variation of the previous one: now two peaks in a given ratio constitute the PDF. This belongs to a two-class society where all are equal, but some of them are more equal. The two-peak PDF is ρ(x) = wδ(x - a) + (1 - w)δ(x - b) with a < b: the w fraction of the population has an income a and the (1 - w) fraction an income b. The cumulative rich-population graph C̄(x) shows two steps, at a and b, respectively: it has the value 1 for x ≤ a, (1 - w) for x ∈ (a, b], and 0 otherwise. Therefore, C(x) = 1 - C̄(x) is zero for x ≤ a, equals w in the mid interval, and has the value 1 otherwise. The Gini index is obtained from this as

G = w(1 - w)(b - a)/⟨x⟩, with ⟨x⟩ = wa + (1 - w)b.  (26)

Expressing the weights, w = (b - ⟨x⟩)/(b - a) and 1 - w = (⟨x⟩ - a)/(b - a), we obtain the alternative form

G = (b - ⟨x⟩)(⟨x⟩ - a)/((b - a)⟨x⟩).

It is worth noting that, for a → 0, i.e., when the lower class has (almost) zero income, the Gini index, cf. (26), tends to G → w, exactly the share of the people earning a → 0 in the population. This result is independent of b, the income in the upper class.
The gintropy, following its definition, is first expressed as a function of x. It is easy to see that, outside the interval [a, b], the gintropy is zero; inside the interval it takes the constant value σ(x) = w(1 - w)(b - a)/⟨x⟩ = G. In conclusion, σ(C̄) shows a plateau at C̄ = 1 - w with the value G, and its jumps are at C̄(a) = 1 - w/2 and C̄(b) = (1 - w)/2. It is easy to check that indeed G = 2∫₀¹ σ(C̄) dC̄. The corresponding Lorenz curve is illustrated in Figure 2a.
Eco-Window
The next example is still mathematically simple, with a window-form PDF. We label this the eco-window: here, everyone has the same chance for all possible incomes between a and b. Eventually, a = 0 and/or b = ∞ may be considered as special cases. For the PDF ρ(x) = 1/(b - a) on [a, b] (and zero otherwise), one obtains the following cumulative rich distribution:

C̄(x) = (b - x)/(b - a),  a ≤ x ≤ b.

Obviously, ⟨x⟩ = (a + b)/2 and, according to eq. (15), the Gini index becomes

G = (b - a)/(3(a + b)).

After some tedious but straightforward calculation, the gintropy is obtained as a function of C̄:

σ(C̄) = ((b - a)/(a + b)) C̄(1 - C̄).

For a specific choice of a and b, the corresponding Lorenz curve is illustrated in Figure 2b.
Natural Distribution
Our next example is the natural distribution, mimicking the Boltzmann-Gibbs exponential energy distribution known from statistical physics. This is not necessarily an equilibrium distribution; it may also be the stationary limit of "growth and resetting" type processes with quantity-independent rates [23]. The PDF is a scaling one: ρ(x) = (1/⟨x⟩) e^{-x/⟨x⟩}. The corresponding tail-cumulative probability, the rich population, is given by

C̄(x) = e^{-x/⟨x⟩},  (35)

and the Gini index becomes G = 1/2. Our gintropy formula is constructed as follows: first, we obtain the cumulative of the cumulative, h(x) = ⟨x⟩ e^{-x/⟨x⟩}. From this, it is easy to obtain the wealth share of the rich classes, ⟨x⟩F̄ = h + xC̄ = (x + ⟨x⟩)e^{-x/⟨x⟩}, and based on this the gintropy

σ(x) = (x/⟨x⟩) e^{-x/⟨x⟩}.

In order to express it as a function of C̄, we invert (35) to have x = -⟨x⟩ ln C̄. Finally, this leads to

σ(C̄) = -C̄ ln C̄.

Apart from a constant proportionality factor, this formula formally coincides with the terms in the sum of the Boltzmann-Gibbs-Shannon entropy, S = -Σᵢ pᵢ ln pᵢ. To continue the analogy, also C̄ ∈ [0, 1]. Indeed, in this case, gintropy is like an entropy density, with the caveat that the cumulative values C̄(x) are never disjoint for different x's; instead, they overlap and show a definite hierarchy. The Lorenz curve for ⟨x⟩ = 1 is illustrated in Figure 2c.
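The closed form σ(C̄) = -C̄ ln C̄ can be confirmed against the direct construction σ = F̄ - C̄ by inverting C̄(x) on a grid:

```python
import numpy as np

C = np.linspace(1e-6, 1.0, 400)
sigma_formula = -C * np.log(C)

mean = 1.0
x = -mean * np.log(C)                                     # invert C-bar(x) = exp(-x/<x>)
sigma_direct = (1.0 + x / mean) * np.exp(-x / mean) - C   # F-bar - C-bar
print(np.max(np.abs(sigma_formula - sigma_direct)))       # ~1e-15 (identical)
```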
Capitalism
Our last example is capitalism, conjecturing the base PDF to be the cut Pareto (known also as Tsallis-Pareto or Lomax II) distribution [24], here parameterized as ρ(x) = A(B + 1)(1 + Ax)^{-(B+2)}. This distribution can also be obtained as the canonical equilibrium optimizer of the Tsallis entropy [25]. The tail-cumulative integral is

C̄(x) = (1 + Ax)^{-(B+1)},  (43)

which upon integration leads to the following cumulative of the cumulative: h(x) = (1 + Ax)^{-B}/(AB). This result also delivers the expectation value, ⟨x⟩ = h(0) = 1/(AB). The Gini index, calculated in the (15) form, becomes

G = (B + 1)/(2B + 1).

The gintropy as a function of the income, x, follows the form σ(x) = A(B + 1) x (1 + Ax)^{-(B+1)}. In order to express this result akin to the entropy, we write σ as a function of C̄ using the inversion of Equation (43), 1 + Ax = C̄^{-1/(B+1)}, and we obtain σ = (B + 1)(C̄^{B/(B+1)} - C̄). Finally, using the Tsallis parameter, q = B/(B + 1), we arrive at the formula

σ(C̄) = (C̄^q - C̄)/(1 - q).

One immediately makes an analogy with the terms in the Tsallis entropy formula, S_q = Σᵢ (pᵢ^q - pᵢ)/(1 - q). The Gini index is simply G = 1/(1 + q). Similar to the previously considered cases, we illustrate the Lorenz curve for this distribution as well: for A = 1 and B = 3, the corresponding Lorenz curve is plotted in Figure 2d.
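The Tsallis-like closed form can be checked by sampling. The sketch below uses the Lomax parameterization reconstructed above (A = 1, so C̄(x) = (1 + x)^{-(B+1)} and q = B/(B + 1)); since the original displayed equations were lost in extraction, treat this normalization as an assumption rather than the paper's verbatim choice:

```python
import numpy as np
from scipy.stats import lomax

B = 3.0                        # then q = 0.75 and G = (B + 1)/(2B + 1) = 4/7
q = B / (B + 1.0)

xs = np.sort(lomax(c=B + 1.0).rvs(size=1_000_000, random_state=3))
n = xs.size
C_bar = 1.0 - np.arange(1, n + 1) / n
F_bar = 1.0 - np.cumsum(xs) / xs.sum()

sigma_emp = F_bar - C_bar
sigma_tsallis = (C_bar**q - C_bar) / (1.0 - q)      # Tsallis-entropy-like density
print(np.max(np.abs(sigma_emp - sigma_tsallis)[C_bar > 1e-3]))   # small (sampling noise)

G_emp = 2.0 * np.sum(np.arange(1, n + 1) * xs) / (n * xs.sum()) - (n + 1) / n
print(G_emp, 1.0 / (1.0 + q))                       # both ≈ 0.571
```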
Finally, we summarize the lesson of the considered theoretical examples in Table 1 and Figure 3.
Conclusions
In this work, we explored a density-like quantity called gintropy, which occurs in calculating the Gini index, G, for a given relevant socio-economic distribution, ρ(x). This gintropy can be deduced from two cumulative functions: the rich population fraction and the corresponding richness fraction, C̄(x) and F̄(x), respectively. The proposed name "gintropy" is meant to suggest a connection between the inequality measure quantified by the Gini index and the entropy. Its dependence on the rich-population-fraction cumulative function is reminiscent of terms in entropy formulas known from physics, statistics, and informatics. More precisely, we found that, for the natural, exponential PDF, the gintropy is reminiscent of the classical Boltzmann-Gibbs-Shannon formula, σ(C̄) = -C̄ ln C̄. The Gini index is then twice the expectation value of the gintropy function; for the exponential PDF, its value is 1/2. For the Tsallis-Pareto distribution, the Gini index is always above this value.
Several other PDFs have been suggested to describe income or wealth distributions and their evolution in time [17,26,27,28,29]. Many of them are not treatable analytically, so the σ(C̄) relation can only be explored numerically.
Beyond igniting theoretical phantasy, the gintropy, reminiscent of generalized entropy formulas, is also the one-variable density that lies under the Gini index, originally defined for measuring inequality. Turning this statement around: should we look for such generalizations of the classical entropy formula that are inequality or impurity measures at the same time? We believe that this criterion selects a subclass of possible statistical theories among all possible approaches to the origin, behavior and future of social and economical inequalities. Generalizations of the Gini index formula have also been suggested a few times, cf. [30,31]. We do not expect that a corresponding gintropy ("Lorenz curve minus the diagonal") would resemble any known entropy formula, but this question needs further study.
Finally, it seems that the "correct" entropy measure for economical and social theories can hardly be a simple copy of the classical formula known from physics, mathematics and informatics. Our procedure, described above, is more promising: a recipe for constructing gintropy from cumulative functions of the underlying PDF, whose expectation value is half the Gini index and whose dependence on the cumulative rich population coincides with various generalizations of the entropy-probability formula.
"Computer Science"
] |
A MODELING STUDY TO EVALUATE THE QUALITY OF WOOD SURFACE
The goal of this study was to develop a model to predict sanding conditions of different types of materials, such as Lebanon cedar (Cedrus libani) and European black pine (Pinus nigra). Specimens were prepared using different values of grit size, cutting speed, feed rate, and sanding direction. Surface quality values of the specimens were measured employing a laser-based robotic measurement system and stylus-type measurement equipment. Full-factorial-design-based Analysis of Variance was applied to determine the effective factors. These factors were used to develop Artificial Neural Network models for the two measurement systems, using the MATLAB Neural Network Toolbox. The performance of the Artificial Neural Network models was assessed using Mean Absolute Percentage Error (MAPE) and R-square values. MAPE values for the laser and stylus equipment were found to be 2.405% and 3.766%, respectively, and R-square values were determined as 96.2% and 92.7% for the laser and stylus measurement equipment, respectively. These results showed that the proposed models can be successfully used to predict surface roughness values.
INTRODUCTION
The sanding process is applied in different wood manufacturing applications. In the furniture industry, this process offers advantages such as low surface roughness, high productivity, good appearance of wood products, and high wood coating performance (Richter et al. 1995, Cool and Hernandez 2011, Scrinzi et al. 2011, Landry and Blanchet 2012, Hiziroglu et al. 2014, Gurau et al. 2015, Ugulino and Hernandez 2016, Sogutlu et al. 2016). Surface roughness is therefore the major indicator of wood surface quality. It is mainly a result of various controllable or uncontrollable sanding parameters. Anatomical structure, hardness, density, annual ring variation, cell structure, and early-/late-wood ratio are uncontrollable variables, while sanding parameters such as feed rate, cutting speed, grit size, pressure, depth of cut, sandpaper type, and cutting direction are controllable variables (Tan et al. 2012, Gurau et al. 2013, Magoss 2015, Ramananantoandro et al. 2017).
There are different roughness measuring techniques, such as pneumatic, laser, and light scattering techniques, to determine the surface quality of wood and wood products (Hiziroglu and Suziki 2007, Hazir 2013, Zhong et al. 2013). The stylus-type profilometer is commonly applied due to its usefulness and its advantage in obtaining accurate numerical results (Sandak and Tanaka 2003, Hiziroglu et al. 2014). A laser measurement system provides crucial advantages such as measuring complex surface structures with non-contact equipment, decreasing time consumption, gathering more data from surfaces in a short time, and enabling online measurements in the real production process (Sandak and Tanaka 2003, Koc et al. 2017). The full factorial design is a powerful technique to determine the significant factors: it involves all possible combinations of the input variable levels. Therefore, this design has been widely used in various engineering applications.
In recent years, artificial intelligence algorithms such as Artificial Neural Networks (ANN), Genetic Algorithms (GA), Fuzzy Logic (FL), and Particle Swarm Optimization (PSO) have been applied to different engineering problems (Ozsahin and Aydin 2014, Mahes et al. 2015, Jain and Raj 2017). Carrano et al. (2002) investigated the sanding process of hard maple, white oak, and eastern white pine as a function of spindle speed, feed rate, depth of cut, grit size, tooling resilience, and grain orientation. The results showed that grit size, tooling resilience, and grain orientation were significant for all species, while the feed rate was a significant factor for white oak and eastern white pine. Zhong et al. (2013) evaluated the surface quality of different wood materials, such as particleboard, medium density fiberboard, plywood, and ten different solid woods, using a stylus-type profilometer and a 3D image analyzer; according to the results, both methods can be successfully applied to determine surface quality. Tiryaki et al. (2014) modeled the planing and sanding of spruce and beech wood as a function of spindle speed, cutting depth, feed rate, number of cutters, wood zone, and abrasive grain size. The results indicated that the surface roughness decreased with increasing grit number and number of cutters, and that the ANN method can be used successfully for modeling surface roughness. A study carried out by Laina et al. (2017) investigated the sanding process of beech, oak, and pine as a function of grain direction, wood hardness, and machining conditions such as planing and sanding; the results showed that the surface roughness decreased from 60 to 180 grit size, and hardness was found to be a significant factor for wood surface roughness. Hazir et al. (2017) developed a mathematical model to evaluate the optimum sanding conditions of European black pine (Pinus nigra). Samples were sanded using different grit sizes, feed rates, cutting speeds, and depths of cut, and response surface methodology (RSM) was used to determine the optimum parameter values.
The objective of this study was to develop ANN models predicting the sanding conditions of two wood species, Lebanon cedar (Cedrus libani A. Rich) and European black pine (Pinus nigra Arnold), for two different measurement systems, namely laser and stylus.
MATERIALS AND METHODS
Lebanon cedar (Cedrus libani A. Rich) and European black pine (Pinus nigra Arnold) species are extensively used in the furniture industry. The samples were prepared with dimensions of 200 mm x 100 mm x 30 mm for each test. Samples were conditioned in a climate room at a temperature of 20°C and a relative humidity of 65% until they reached a moisture content of 10±1%. The densities of Lebanon cedar and black pine were found to be 570 kg/m³ and 680 kg/m³, respectively. The samples were processed with a wide-belt sanding machine equipped with open-coat aluminum oxide abrasive paper.
Evaluation of wood surface quality
In this study, wood surface quality was determined using two different methods. (1) A laser-based system was employed to evaluate the surface quality: a Cartesian robot integrated with a laser sensor was used to gather the data (Figure 1a). The robot was controlled in the X and Y axes, and the laser sensor gathered 500 readings for every 7 mm of movement; the average values were calculated and the results were transferred to the MATLAB program.
(2) The other equipment for determining the surface roughness of the machined wood material was a Taylor Hobson Surtronic device (Figure 1b). This is a stylus-based portable profilometer equipped with a diamond stylus of 5 µm tip radius, with a 12.5 mm measurement length, a 2.5 mm sampling length, a 15 mm stylus travel, and a 90° contact angle, running at a speed of 0.5 mm/s; measurements were taken from the sample surfaces. With reference to ISO 4287:1997, average roughness (Ra) and mean peak-to-valley height (Rz) are accepted roughness parameters. In this study, the Ra parameter was selected to evaluate the surface roughness of the samples.
Statistical design of experiment
The statistical analysis was performed using the Minitab 17 software package. A full factorial experimental design was used to obtain the results; this design is one of the most important methods for investigating two or more parameters (Montgomery 1997). ANOVA was applied to the experimental data in order to determine the effective factors for both the laser and the stylus-type equipment. Each independent variable had two or three levels, coded as low (−1), medium (0), and high (+1); the levels are given in Table 1.
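A minimal sketch (ours) of enumerating such a coded full-factorial design is shown below. The split into two 2-level and three 3-level factors is an assumption, chosen so that 2 * 2 * 3 * 3 * 3 = 108 matches the run count reported in the results:

```python
# A minimal sketch (ours): enumerate coded full-factorial runs. The
# assignment of level counts to factors is assumed, not from the paper.
from itertools import product

levels = {
    "material":  (-1, 1),        # assumed 2-level factor (cedar / pine)
    "direction": (-1, 1),        # assumed 2-level factor
    "grit":      (-1, 0, 1),
    "feed":      (-1, 0, 1),
    "speed":     (-1, 0, 1),
}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs), runs[0])        # 108 runs; first run is all low levels
```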
Artificial neural networks (ANN)
Artificial neural networks (ANN) are developed with inspiration from the information processing model of the biological neural system of the human brain. They provide non-linear and linear models for the prediction and optimization of data, and are applied to engineering problems such as pattern recognition, forecasting, and data processing (Karazi et al. 2009). The model consists of inputs that are multiplied by weights; the weighted sum is passed through a mathematical function determining the activation of the neuron. The model learns the correlation between the input and output factors from recorded data. An ANN system depends on neurons connected by a number of weighted links, and every piece of information is transferred to other neurons. The performance of an ANN depends on the transfer function type, the training algorithm, the sizes of the training and testing data sets, and the values of the weights and biases. The artificial neural network structure is given in Figure 2 and is formulated in Equation 1 and Equation 2:

net_j = Σ_i w_ij x_i + θ_j   (1)

y_j = f(net_j)   (2)

where net_j, w_ij, j, x_i, θ_j, and y_j are the summed information, the weight factors, the neuron index, the layer inputs, the bias of the layer, and the output values, respectively. In this study, a feedforward, backpropagation, multilayer ANN was used to predict the wood surface roughness for the two measurement systems. The hyperbolic tangent sigmoid function (tansig) and the linear transfer function were selected as transfer functions, the Levenberg-Marquardt algorithm (trainlm) was applied as the training algorithm, and gradient descent with momentum backpropagation (traingdm) was selected as the learning rule. To evaluate the ANN models, Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and R-square (R²) values were used to test the accuracy of the results. MAPE, MSE, and R² were computed with Equation 3, Equation 4, and Equation 5, where A_t, F_t, and F̄ indicate the actual values, the predicted values, and the average of the predicted values, respectively.
MAPE = (100/n) Σ_t |(A_t − F_t) / A_t|   (3)

MSE = (1/n) Σ_t (A_t − F_t)²   (4)

R² = 1 − [Σ_t (A_t − F_t)²] / [Σ_t (F_t − F̄)²]   (5)

In order to ensure an equal contribution of each variable, the parameters were normalized using Equation 6; this ensured the best generalization performance of the ANN model. The training, testing, and validation data were normalized to the range [−1, 1] using their minimum and maximum values:

X_norm = 2 (X − X_min) / (X_max − X_min) − 1   (6)

where X_norm, X_max, and X_min are the normalized value of a variable X and the maximum and minimum values of X, respectively.
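A minimal sketch (ours, with made-up numbers) of these computations is given below; note that, per the definitions above, the R-square denominator is centered on the mean of the predicted values:

```python
# A minimal sketch (ours) of Equations 3 to 6, using illustrative numbers.
import numpy as np

def mape(actual, pred):
    return 100.0 * np.mean(np.abs((actual - pred) / actual))      # Eq. (3)

def mse(actual, pred):
    return np.mean((actual - pred) ** 2)                          # Eq. (4)

def r_square(actual, pred):
    # Denominator centered on the mean of predictions, per the paper's terms.
    return 1.0 - np.sum((actual - pred) ** 2) / np.sum((pred - pred.mean()) ** 2)

def normalize(x):
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0        # Eq. (6)

a = np.array([4.1, 5.2, 3.8, 6.0])   # actual roughness values (illustrative)
f = np.array([4.0, 5.5, 3.6, 5.8])   # predicted values (illustrative)
print(mape(a, f), mse(a, f), r_square(a, f), normalize(a))
```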
RESULTS AND ANALYSIS
The experiment consisted of five factors and one replicate, making a total of 108 runs. Table 2 shows the experimental parameters and the recorded laser and stylus roughness values. In both measurement methods, the average roughness values were calculated by taking measurements from three different points on the wooden surface.
An ANOVA-based F-test was applied to evaluate the significance of each factor on the surface roughness. For each parameter, the analysis tests the hypotheses in Equation 7 and computes the F value in Equation 8:

H₀: μ₁ = μ₂ = ⋯ = μ_α;  H₁: μ_i ≠ μ_j for at least one pair (i, j)   (7)

F₀ = MS_A / MS_E   (8)

where (α − 1) and (N − α) are the degrees of freedom of parameter A and of the error, respectively, and MS_A and MS_E denote the mean squares of parameter A and of the error, respectively. The null hypothesis is rejected when F₀ is higher than the critical value of the F distribution with (α − 1, N − α) degrees of freedom at the chosen significance level (Antony 2014). Tables 3 and 4 show that the P-values are less than 0.05, indicating that the models are significant at the 95% confidence level; values of "Prob > F" lower than 0.05 indicate that the model terms are significant. In this case, feed rate, cutting speed, material type, sanding direction, and grit size were significant factors for both the stylus and the laser measurement equipment.
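An illustrative sketch (made-up data, not from the experiment) of this decision rule, using SciPy's one-way ANOVA, is shown below:

```python
# Illustrative sketch: reject H0 when F0 exceeds the critical F value.
import numpy as np
from scipy import stats

groups = [np.array([4.2, 4.5, 4.1]),   # e.g., roughness at three grit sizes
          np.array([3.6, 3.4, 3.8]),
          np.array([2.9, 3.1, 3.0])]
a = len(groups)                        # number of levels
N = sum(g.size for g in groups)        # total observations

f0, p_value = stats.f_oneway(*groups)
f_crit = stats.f.ppf(0.95, a - 1, N - a)   # critical value at 5% significance
print(f"F0={f0:.2f}, critical={f_crit:.2f}, p={p_value:.4f}, "
      f"reject H0: {f0 > f_crit}")
```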
Evaluation of the models
The normal probability plots of the residuals and the residuals versus the predicted values for Ra are shown in Figure 3. The normal probability plots (Figure 3a, Figure 3b) show that the residuals generally fall on a straight line, implying that the errors are normally distributed. Figure 3c and Figure 3d show the residuals versus the fitted values for the surface roughness data; no unusual structure is apparent in the residuals. This implies that the proposed models are adequate and there is no reason to suspect any violation of the independence or constant variance assumptions (Montgomery 1997).
Parameter prediction by using ANN
According to the results obtained from the analysis of variance, grit size, feed rate, cutting speed, material type, and sanding direction were found to be effective factors on surface quality for both the laser and the stylus-type equipment. For this reason, these variables were selected as the input parameters, while the surface roughness was selected as the output parameter for the ANN. Of the 108 gathered data points, 76 were used for training, 16 for validation, and 16 for testing. The accuracy of the models was evaluated using the correlation coefficient (R²) and MSE values; these results are given in Table 5. Because of the high R² values and the low level of errors, the models were satisfactory, and the R² value obtained from the laser-based measurement was better than that of the stylus-type equipment. Figure 5 and Figure 6 show the relationship between the measured and predicted values for the laser and stylus-type equipment, together with the MAPE values for all data. The MAPE values for the laser and stylus measurement equipment were computed as 2.405% and 3.766%, respectively; the MAPE value for the laser equipment is lower than that of the stylus-type equipment.
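A minimal sketch of such a model, assuming a scikit-learn stand-in rather than the authors' MATLAB toolbox, is shown below; scikit-learn offers no Levenberg-Marquardt solver, so 'lbfgs' replaces trainlm, the 10-neuron hidden layer is an assumption, and the data are synthetic:

```python
# A minimal sketch, not the authors' pipeline: a tanh-hidden-layer MLP on
# the 76/16/16 split described above, trained on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(108, 5))   # 5 coded sanding factors, normalized
y = 4 + X @ rng.normal(size=5) + 0.1 * rng.normal(size=108)  # stand-in Ra

X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=76, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, train_size=16, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=1)
model.fit(X_tr, y_tr)
print(f"validation R^2 = {model.score(X_val, y_val):.3f}")
print(f"test R^2       = {model.score(X_te, y_te):.3f}")
```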
Figure 1. (a) Laser-based measurement system; (b) stylus-based measurement equipment.
Figure 2. Artificial neural network structure.
Figure 3. (a, b) Normal probability plots of the residuals and (c, d) residuals versus fitted values for Ra, for the stylus and laser results.
Figure 4 displays the relationship between the measured and predicted values for the training, validation, and testing data. Figure 4a displays the R² values for the training, validation, and testing data sets in predicting the stylus measurements: 0.9450, 0.92462, and 0.87836, respectively. Figure 4b displays the R² values for the training, validation, and testing data sets in predicting the laser measurements: 0.97528, 0.94573, and 0.90692, respectively.
Figure 4. Training, validation, and testing results for (a) stylus and (b) laser-type equipment.

According to Figure 4, the R² values are close to 1 for the training, validation, and testing data, which means that there is a good relationship between the measured and predicted values. Moreover, the MSE values of the training, validation, and testing sets were found to be 1.45, 7.44, and 6.25 for the laser-based measurement system, respectively, whereas they were computed as 8.59, 1.38, and 1.97 for the stylus-based measurement equipment, respectively.
Figure 5. The relationship between the measured and predicted values for the laser measurement.
Figure 6. The relationship between the measured and predicted values for the stylus measurement.
Table 1. Sanding procedure parameters and levels.
Table 2. Experimental parameters and the recorded roughness values of the laser and stylus measurements.
Table 3. ANOVA for Ra, stylus measurement results. DF: degrees of freedom; SS: sum of squares; F: F-test value; P: probability value. ª At a given response, parameters belonging to the filled cells are effective within the 95% reliability interval.
Table 4. ANOVA for Ra, laser measurement results.
Table 5. ANN performance results.
"Materials Science",
"Engineering",
"Environmental Science"
] |
Inhibition of the Symbiotic Fungus of Leaf-Cutting Ants by Coumarins
Leaf-cutting ants are considered agricultural pests because of the large amount of plant material they use to cultivate a symbiotic fungus that provides them with food and enzymes. The mutualism between the fungus and the ants is a point to be explored when considering its possible application in alternative methods for controlling these insects. Since some plants are naturally resistant to phytophagous insects, some natural products (secondary metabolites) should be evaluated for their insecticidal and/or fungicidal properties. In this work, eight coumarins were isolated from four plant species, and their effect on the development of the symbiotic fungus of the leaf-cutting ant Atta sexdens was determined. Except for clausarin, all the coumarins were inhibitory at 64 µg mL⁻¹ to 80 µg mL⁻¹, and xanthyletin inhibited the fungus at a concentration of 25 µg mL⁻¹.
Introduction
Leaf-cutting ants, the dominant herbivores in the tropics, can be found from the Southern United States to Northern Argentina.1 They cultivate a symbiotic fungus for feeding, using leaf fragments as substrate, thus damaging agriculture.2 It is known that plants have several mechanisms to avoid herbivores, including a set of toxic chemicals.3 The presence of secondary plant metabolites toxic to the ants and/or to their symbiotic fungus may play an important role in whether or not the ants will cut such plants.4 Many of the most important crops cultivated in our country are exotic species, which have not had the possibility of co-evolution with predators and as such are preferentially attacked.5 Different methods have been proposed for the control of these ants (organophosphorus, pyrethroid, and sulfluramid insecticides). However, problems with non-target animals, high chemical stability, and environmental contamination are common, and it is necessary to continue looking for new strategies to control these insects.6-15 Among the several biological properties of coumarins are dermal photosensitizing, estrogenic, antimicrobial, vasodilator, molluscicidal, anthelmintic, sedative and hypnotic, analgesic, and hypothermic activities,16,17 but they have been associated mainly with anticoagulant activity.18 Indeed, coumarins are widely found in several botanical families, such as Rutaceae, Apiaceae, Asteraceae, Fabaceae, Oleaceae, Moraceae, and Thymeleaceae, and this seems to correlate with their action as phytoalexins. Repellent action against beetles and other terrestrial invertebrates, as well as the inhibition of both sporulation and growth of some fungal plant pathogens by some coumarins, were described by Weinmann.19 The aim of this study was to analyze the activity of eight coumarins isolated from different plant species on the development of the symbiotic fungus of the leaf-cutting ant Atta sexdens rubropilosa.
Experimental

Preparation and fractionation of crude extracts
Different parts of the plants were ground to powder, dried at 40 °C, and percolated three times, for 72 hours each at room temperature, with a series of organic solvents (hexane, dichloromethane, and methanol), followed by evaporation of the solvent under reduced pressure at 40 °C. The crude extracts were fractionated through flash chromatography under vacuum with silica gel, eluted with solvents of increasing polarity (hexane, dichloromethane, ethyl acetate, and methanol), and then purified through different techniques, including column chromatography, preparative TLC, and HPLC.
The medium for fungus maintenance and the methods for the bioassays were previously described.10 One mL of a dichloromethane solution of each coumarin was added to 9.0 mL of culture medium composed of (g L⁻¹): glucose, 10.0; sodium chloride, 5.0; peptone, 5.0; malt extract, 10.0; and agar, 15.0. Control tubes received 1.0 mL of dichloromethane and 9.0 mL of medium. After the addition, the tubes were autoclaved at 121 °C for 15 min and then slanted. The final concentration of each coumarin, in µg mL⁻¹, was: 1 = 72; 2 = 64; 3 = 70; 4 = 75; 5 = 80; 6 = 65; 7 = 25; and 8 = 75. The fungal suspension was prepared by aseptically transferring pieces of the mycelia (obtained from a 1-month-old slant culture) to an all-glass tissue grinder containing sterile peptone (1 g L⁻¹) and gently fragmenting them. One mL of this suspension was spread onto the surface of the agar slant and incubated at 25 (±1) °C for 30 days. The assays were run twice (two sets of five tubes each). Controls with and without solvent, as well as with PBO (9) (commercial piperonyl butoxide, Pirisa Piretro Industrial, 97LO106IO), were run simultaneously. Fungal growth was estimated macroscopically on the basis of the mycelial surface and density, using the modal value.
Results and Discussion
Figure 1 shows the molecular structures of the isolated coumarins, and Table 1 shows the effects of the coumarins on fungal growth. Except for clausarin (4), which had no effect on fungal development at a concentration of 75 µg mL⁻¹, all the others inhibited the fungus to different degrees. At 72 µg mL⁻¹, angelicin (1) was responsible for a moderate (40%) inhibition of fungal growth, whereas an inhibition of 60% was achieved with umbelliferone (6) at 65 µg mL⁻¹. Strong inhibition (80% at 75 µg mL⁻¹) was observed with 7-hydroxy-3-(1',1'-dimethylallyl)-8-methoxycoumarin (8), and total inhibition of fungal growth was observed with suberosin (2), xanthoxyletin (3), and isopimpinellin (5) at concentrations ranging from 64 to 80 µg mL⁻¹. Unfortunately, we did not have enough material to determine whether suberosin, xanthoxyletin, and isopimpinellin could be inhibitory at lower concentrations. Victor31 had shown total inhibition of the fungus in the presence of xanthyletin (7) at concentrations of 100 and 50 µg mL⁻¹. The same effect was now observed at 25 µg mL⁻¹, showing that this compound should be further evaluated with respect to its potential for the control of these insects.
Five of the eight coumarins assayed were isolated from the roots of Citrus limonia and Adiscanthus fusciflorus. Natural coumarins are synthesized by plants as a response to injury, disease, wilting, or drying, and they accumulate on the surface of leaves, fruits, and seeds. According to Ojala,32 their presence in the roots may provide a tool against microbial invasion.
Attempts to correlate the structure and antimicrobial activity of coumarins have been made by some authors. Kayser and Kolodziej33 reported that highly oxygenated coumarins, and the positions of the polar hydroxyl and less polar methoxy groups on the aromatic nucleus, are important for antibacterial activity. According to Dini et al. (apud Sardari et al.34), the occurrence of an aromatic hydroxyl group and/or ether or ester groups at positions 6 or 7 of the basic structure is important for antifungal action, and alkylated derivatives of 7-hydroxycoumarins can have both antifungal and antibacterial properties. On the other hand, Sardari et al.34 did not find antifungal activity for coumarins with a hydroxyl group at position 7, as observed with umbelliferone, which showed low activity against Candida albicans, Cryptococcus neoformans, and Aspergillus niger; instead, they found a relationship between antifungal activity and the presence of free 6-OH and 6-OMe groups. The results of Sardari et al.34 were confirmed by Ojala et al.35 for C. albicans, A. niger, and Saccharomyces cerevisiae, but not for the phytopathogenic fungus Fusarium culmorum, to which umbelliferone was highly inhibitory. So, even if some structure-activity relationship can be drawn, the hypothesis of a possible species-specific activity cannot be discarded.
Except for clausarin (4), our results showed that the seven linear coumarins assayed inhibited the symbiotic fungus of A. sexdens at low concentrations, whereas only a low inhibition was achieved with angelicin. Since angelicin was the only angular coumarin assayed in this work, it was not possible to establish at this moment any relationship between antifungal activity and linearity. Also, the occurrence of prenyl and MeO groups linked to the basic structure of the coumarins should be studied further in order to determine whether they are important for the antifungal activity described here.
Table 1. Inhibitory effect of natural coumarins on the symbiotic fungus of the leaf-cutting ant Atta sexdens rubropilosa. a Controls with and without solvent = 0% inhibition of fungal growth; b dry weight of inoculum = 6.2 ± 0.3 mg mL⁻¹; c control = PBO = piperonyl butoxide.

Figure 1. Molecular structures of the coumarins and of commercial piperonyl butoxide.
"Environmental Science",
"Agricultural and Food Sciences",
"Biology"
] |
Toward a More Efficient Knowledge Network in Innovation Ecosystems: A Simulated Study on Knowledge Management
Knowledge management has become increasingly important in the era of the knowledge economy. This study explores what constitutes an optimal knowledge network for more efficient knowledge diffusion among strategic partners, in order to provide insights on sustainable enterprises and a more knowledge-efficient innovation ecosystem. Based on simulated analyses of the efficiency of knowledge network models, including the regular network, the random network, and the small world network, this study shows that a random knowledge network is more efficient for knowledge diffusion when a mixture knowledge trade rule is used. This study thus helps identify which knowledge networks facilitate knowledge exchange among collaborative partners for sustainable knowledge management. Management practitioners and policymakers can use the findings to design more appropriate knowledge exchange networks to improve the efficiency of knowledge diffusion in an innovation ecosystem.
Introduction
Knowledge and knowledge management have become a new driving force of economic development in the era of the knowledge economy [1-8]. More firms have recognized the importance of knowledge and knowledge diffusion in obtaining sustainable advantages [4,8-11]. As the global market becomes more competitive, only those firms that can continuously create, transmit, and absorb new knowledge are able to achieve sustainable success in an increasingly turbulent environment [12]. Knowledge has thus become a critical factor for firms to innovate and compete with domestic and international counterparts for sustainable development [8,10,13,14].
Interfirm networks in a variety of forms (such as strategic alliances and industry clusters) have become increasingly important in helping firms improve their competitive positions through enhanced access to knowledge, innovation, and resources otherwise not available to them [6,7,15-17]. In response to the increased performance pressure from unforeseeable challenges in the global market, firms have formed various networks and moved fast from competitors to collaborators and value co-creators in order to build knowledge-efficient innovation ecosystems [17,18], a coopetition view of modern business activities. It has been widely accepted that knowledge networks in an ecosystem affect the diffusion of knowledge, depending on the knowledge management stages: in the early stage of knowledge diffusion among clustered firms, a mixture trade rule including both gift trade and barter trade of knowledge may be better at helping increase the knowledge level of all members. The results of this study show that under the new trade rule, the mixture trade rule, the optimal knowledge network for efficient knowledge diffusion among strategic partners is no longer the small world network; instead, a random network is more efficient. As a result, this study adds an important piece to the literature on knowledge networks and knowledge diffusion efficiency. According to Cowan and Jonard [20,23], the small world network is optimal for knowledge diffusion, but their research fails to consider an important condition under which the barter trade rule is used in the knowledge diffusion process, namely the benevolent nature of the partnership among strategic partners, or the early stage of network development in industrial parks. This study shows that the static view used in previous research is defective. Using a contingency perspective, it is argued that the mixture rule of knowledge diffusion (i.e., barter trade plus gift trade for knowledge exchange) may be more realistic and thus better suited to strategic partnerships and industrial parks. Consequently, the random network is optimal for efficient knowledge diffusion.
The rest of this paper is structured as follows. We first present a brief review of relevant studies in the literature; then, network models are constructed to simulate the process of knowledge diffusion, including setting a mixture knowledge trade rule, constructing different knowledge networks, and measuring the efficiency of knowledge diffusion. We then report the simulated results and a sensitivity analysis comparing our models with those of Cowan and Jonard [23]. The last section discusses the findings and provides implications for management practitioners and policymakers.
Literature Review
In response to the emerging challenges in the dynamic and interconnected market, firms have formed strategic partnerships such as joint ventures, alliances, and interfirm clusters to obtain enhanced access to innovation, knowledge, and complementary resources and capabilities not otherwise available to them [2]. Governments in different countries have also created a large number of industrial parks in order to take advantage of the synergistic effect of innovation ecosystems on economic development [34]. Strategic alliances lead to clustered firms, which can use both market and non-market interactions. Firms can exploit, participate in, and position themselves within the interfirm network to respond to radical changes in the global market [10,35]. When firms form and maintain strategic alliances with each other, they are actually weaving a relationship network, and firms embedded in the network are able to gain access to knowledge and technologies from their network partners [36]. Interfirm networks provide each firm with rapid and flexible access to resources embedded in other firms or related industries [2,24,30,31].
The Impact of Network Characteristics on Knowledge Diffusion
Network characteristics affect knowledge diffusion among network members [37-40]. Network density is one of the most important characteristics, reflecting the proportion of possible ties that are actualized among network members. Research has shown that network density can promote group communication and mutual trust in collective behaviors. Further, it can promote the recognition and coordination of group members and facilitate knowledge diffusion among network members, especially for invisible resources and tacit knowledge [37-44]. A dense network can increase the probability of forming strong ties, which is not only conducive to the transfer and diffusion of tacit knowledge, but also has a positive effect on innovation performance [22,45]. The speed of knowledge accumulation for a single firm also heavily depends on the density of its social network connections with other firms [46].
Another important network characteristic is network centrality, a network node's location in the network, which is closely related to its social capital. Studies on organizations' network positions have shown that individual firms can obtain and use more diverse resources through their social network ties [35]. Such network embeddedness is conducive to the flow of information and the acquisition of knowledge. For example, firms located in the center of the network, or with many strong ties, usually have access to abundant information, have strong influences on others, and can increase the others' dependence [47]. Central firms have access to more sources of knowledge and can foster innovation by successfully combining knowledge acquired from different sources [48]. Similarly, firms located in strategic positions in the network can spread more valuable resources, thus exerting a greater influence on decision makers [49].
The Impact of Knowledge Networks on Knowledge Diffusion
Network structure is another important network characteristic that affects knowledge diffusion among network members [20,23]. Knowledge networks can be grouped into random networks, small world networks, regular networks, scale-free networks, and complex networks, among others. There is an ongoing debate in the literature about which structure facilitates efficient knowledge diffusion and how different knowledge networks promote a fast diffusion of knowledge and collective innovation [14]. For instance, the speed of knowledge diffusion may change along with the change of network randomness, that is, the transmission speed in a regular network is much slower but it becomes much faster in a small world network [50]. The small world network can make knowledge diffusion much more comprehensive because of its high cohesion and shorter average path length and thus is widely considered the optimal structure for knowledge diffusion [20].
However, the small world network is not always efficient [11]. When there are great knowledge disparities between network members, the small world network is not an optimal knowledge diffusion structure and can only achieve a moderate level of performance: the higher the diversity among individual network members, the more likely it is that the small world network will widen the gap between network members' acquired knowledge [51]. This is because the argument that the small world network is optimal rests on the condition that a barter knowledge trade rule is used in the knowledge diffusion process, i.e., knowledge diffusion can only occur when both trade partners have something needed by the other side, and it stops when one or both sides exhaust the resources needed by the other. In addition, the small world knowledge network may not benefit every network member: members may face longer search paths when locating knowledge in an organization, and their world may be large [52]. In fact, the ideation of innovation is better communicated by a knowledge network with a "complete graph", which maximizes the number of parallel communications and encourages people to dynamically stir through a large set of conversational partners [53].
In sum, a knowledge network has an important impact on knowledge diffusion efficiency among clustered firms, but which structure is optimal for knowledge diffusion remains to be examined. For example, a random network may be better in a knowledge diffusion process without the creation of new knowledge, while a regular network may be better in a process that creates new knowledge [20]. For long-term knowledge accumulation, a small world network may be best [20]. In addition, if knowledge is scarce, a network with structural holes is better; if knowledge is abundant, a high-density network is better [26]. Moreover, a star node, often treated as an important information center of knowledge distribution, makes a highly asymmetric network more conducive to rapid knowledge diffusion; however, if the star node withdraws from the network, this severely disrupts the network's distribution, so a flat knowledge network is favored when there are increasing cases of withdrawal [54]. Furthermore, Nieves and Osorio [55] argue that the knowledge search strategy should also be considered in the search for the most suitable knowledge network. As a result, past studies have created considerable confusion, and it is critical to further explore the impact of knowledge networks on knowledge diffusion performance, in particular for the increasingly popular strategic partnerships and industrial parks, in order to help management practitioners and policymakers create more appropriate interfirm networks to facilitate knowledge creation and exchange.
Modeling the Process of Knowledge Diffusion among Clustered Firms
In order to further explore the impact of interfirm knowledge networks on knowledge diffusion efficiency for more sustainable development, we decided to use computer simulations to compare the efficiency of different knowledge networks in knowledge exchange. Computer simulations provide a cost-efficient tool to compare different interfirm networks and their performance. Several models were thus constructed in this study to analyze the relationship between knowledge networks and knowledge diffusion efficiency with different knowledge exchange rules that are more consistent with the purpose of strategic partnership and industrial parks.
Rules of Knowledge Interactions/Diffusion
Scholars have used different trade rules in their studies on knowledge networks and knowledge diffusion [20,23]. For example, the oft-used "barter trade" rule assumes that an individual network member transfers part of his/her knowledge to another and is paid back with different knowledge, and both members consider knowledge trade as mutually beneficial [20,23]. This is also the condition used in the studies by Cowan and Jonard [23] and many other scholars [25], who consequently contend that the small world network is the optimal structure for knowledge diffusion and that the barter trade rule should be used in knowledge diffusion. Contrary to this assumption, the "gift trade" rule does not expect returns from exchange partners, but rather intends to develop or maintain a social relationship between the exchange parties [28]; this rule is largely ignored in mainstream studies on knowledge diffusion [20,25]. In this study, a contingency view is used [33], and it is argued that both knowledge trade rules should be considered in clustered firms, depending on the knowledge management stage. In the early stage of knowledge diffusion among clustered firms, policymakers hope to increase the knowledge level of all members through a variety of knowledge trade activities. While it is beneficial to have barter trade of knowledge between partners, some network partners often have no definite return to offer knowledge providers, those firms with more knowledge endowment. Therefore, a mixture rule, rather than a barter trade rule, is more consistent with the policy reality at the early stage of interfirm alliances and with benevolent strategic partners; a barter trade rule is more appropriate at the mature stage of knowledge management, when both partners have developed some knowledge needed by the other. Therefore, this study models the knowledge diffusion process using a mixture rule, a more realistic knowledge diffusion rule for clustered firms in an innovation ecosystem. When two firms meet, they make either a bilateral or a unilateral knowledge trade. That is, if Firm A has knowledge Firm B does not have, and Firm B also has knowledge Firm A does not have, then a bilaterally profitable trade occurs (i.e., barter trade). On the other hand, if Firm A has knowledge Firm B does not have, but Firm B has no knowledge Firm A needs, the trade also occurs, as a unilaterally profitable trade (i.e., gift trade), and vice versa. Using a different, and more realistic, knowledge trade rule is the major difference between this study and those of Cowan and Jonard [20,23,24,54]. Further, this study also assumes that knowledge exchange can take place only between firms with direct links (edges). This process is repeated and forms the foundation on which knowledge spreads through the interfirm network for overall knowledge increase.
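A minimal sketch (ours) of the two rules, for firms holding boolean knowledge vectors, might look as follows; that every transferable knowledge type moves in a single encounter is our simplifying assumption, since the paper does not specify the per-encounter transfer amount:

```python
# A minimal sketch of the mixture and barter-style trade rules described
# above, for boolean knowledge vectors of length k.
import numpy as np

def mixture_trade(s_i, s_j):
    """Mixture rule: knowledge flows whenever either side lacks something,
    covering both barter (bilateral) and gift (unilateral) trades."""
    gives_i = s_i & ~s_j                  # types i can pass to j
    gives_j = s_j & ~s_i                  # types j can pass to i
    return s_i | gives_j, s_j | gives_i   # reduces to the union of holdings

def barter_trade(s_i, s_j):
    """Barter-style rule (simplified): trade only if mutually profitable."""
    gives_i, gives_j = s_i & ~s_j, s_j & ~s_i
    if gives_i.any() and gives_j.any():
        return s_i | gives_j, s_j | gives_i
    return s_i, s_j                       # no trade when one side offers nothing

a = np.array([1, 1, 0, 0], dtype=bool)
b = np.array([0, 1, 1, 0], dtype=bool)
print(mixture_trade(a, b))                # both firms end up holding types 0, 1, 2
```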
Basic Assumptions
We consider a population N = {1, 2, ..., n} of firms in a networked partnership. Each firm is treated as a node, and the relationship between two firms is treated as an edge. The undirected graph associated with this social network is written as G(V, E), where V = {V_i, i ∈ N} collects the sets of nodes to which each node is connected. l_ij is defined as the length of the shortest path from node i to node j, so that V_i = {j : l_ij = 1, j ∈ N}; l_ij = 1 indicates that node i and node j are directly connected. It is assumed in this study that two nodes can interact only when they are directly connected; indirect exchange is not considered.
Constructing Different Knowledge Networks
In order to model different knowledge networks, a re-wiring algorithm is used in this study, the same as the one used in similar studies [23]. According to the algorithm, we begin with a regular graph: a circular lattice with n nodes, in which each node has edges connected only to its m nearest neighbors (m is even). We operate on each edge of the nodes sequentially, in a particular order: we first begin with node one and the edge connecting it to its nearest neighbor clockwise. With probability p (0 ≤ p ≤ 1), we cut the connection to the neighbor and re-connect the edge to a node selected randomly over the entire graph; with probability 1 − p, the edge remains unchanged. We progress around the lattice clockwise, considering one edge per node and avoiding duplicated edges. After one complete round, the procedure is repeated for the second nearest clockwise neighbors, and then for progressively more distant neighbors, until every edge has been considered once. This yields structures ranging from a perfectly regular, periodic lattice (p = 0), through partially re-wired lattices (0 < p < 1), to a completely random graph (p = 1).
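A minimal sketch (ours; the paper reports using MATLAB) of this re-wiring procedure is given below:

```python
# A minimal sketch of the re-wiring algorithm described above: start from
# a ring lattice where each node links to its m nearest neighbors, then
# re-wire each edge with probability p, avoiding self-loops and duplicates.
import random

def rewired_lattice(n, m, p, seed=0):
    rnd = random.Random(seed)
    edges = set()
    for dist in range(1, m // 2 + 1):        # progressively distant neighbors
        for i in range(n):
            j = (i + dist) % n               # clockwise neighbor at this distance
            if rnd.random() < p:             # cut and re-wire this edge
                candidates = [v for v in range(n)
                              if v != i and (min(i, v), max(i, v)) not in edges]
                j = rnd.choice(candidates)
            edges.add((min(i, j), max(i, j)))
    return edges   # p = 0: regular lattice; p = 1: essentially random graph

print(len(rewired_lattice(n=1000, m=6, p=0.1)))
```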
Measuring Knowledge Diffusion Efficiency
This study aims to understand how a knowledge network affects the efficiency of knowledge diffusion between partnered firms, in order to identify optimal knowledge networks for efficient knowledge diffusion in clustered firms. The key question is how to measure the efficiency of knowledge diffusion. Scholars have used the efficiency of networks to measure the efficiency of knowledge diffusion with un-weighted networks [56,57] and weighted networks [58]. In particular, the mean µ and variance σ² have been used in past studies to measure the accumulation and dispersion of knowledge over the agents [23]. However, the absolute variance σ² may not be suitable for measuring the dispersion. For example, suppose that at the beginning the knowledge stocks of firms i and j are S_i(0) = 1 and S_j(0) = 2, respectively, and at the end they are S_i(t) = 4 and S_j(t) = 8, respectively. Under the absolute variance, the dispersion increases, because the total knowledge stock in the whole network is not taken into account. In other words, a relative measure may be more suitable. Therefore, we considered three indexes to measure knowledge diffusion efficiency in order to address these concerns.
First, a structure is better than others if it helps clustered firms reach a high overall knowledge level, which is often the purpose of strategic partnerships and industrial parks [20]. Therefore, we used the average knowledge stock, AKS(t), to measure the mean of the whole network's knowledge stock:

AKS(t) = (1/n) Σ_{i∈N} S_i(t),   (1)

where S_i(t) is the knowledge stock of firm i at time t. Second, the speed of knowledge diffusion, SKD(t), is another major concern: the faster the knowledge diffusion, the better for the clustered firms. The SKD(t), the slope of the AKS(t) curve, was thus used in our study:

SKD(t) = AKS(t) − AKS(t − 1).   (2)
Third, the goal of knowledge diffusion in clustered firms is to help every network member gain more knowledge and improve its performance. In particular, the studied network should help backward firms catch up with more advanced firms. From this perspective, the desired level of disparity in the distribution of knowledge in an interfirm alliance should be lowest. We therefore used AKD(t) to denote the level of average knowledge disparity, a relative dispersion measure written as follows:

AKD(t) = (1/n) Σ_{i∈N} |S_i(t) − AKS(t)| / AKS(t).   (3)
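The three indexes can be sketched as follows for a snapshot of the firms' knowledge holdings; the functional form of akd() shown here (mean absolute deviation over the mean) is our assumed relative dispersion measure:

```python
# A minimal sketch of the three indexes for a snapshot S of shape (n, k),
# where S[i, c] = 1 if firm i holds knowledge type c.
import numpy as np

def aks(S):
    return S.sum(axis=1).mean()            # average knowledge stock, Eq. (1)

def skd(aks_now, aks_prev):
    return aks_now - aks_prev              # slope of the AKS curve, Eq. (2)

def akd(S):
    stock = S.sum(axis=1)
    mean = stock.mean()
    # Relative dispersion (assumed form, Eq. (3)): doubling every firm's
    # stock leaves the disparity unchanged.
    return np.abs(stock - mean).mean() / mean if mean > 0 else 0.0

S = np.random.default_rng(0).random((1000, 50)) < 0.15
print(aks(S), akd(S))
```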
Setting of Parameters
The parameters used in the experiment were set as follows. Assume that the interfirm network for knowledge exchange has a total population of n = 1000 firms, each holding m = 6 links, similar to the parameters used in previous studies [23]. There are k = 50 possible types of knowledge. Initially, each firm i ∈ N holds each type of knowledge with independent, identical probability q = 0.15. Experiments with different sets of knowledge, k = 10, 20, and 30, were also conducted and showed similar results: the choice did not affect the trend of the evolution, only the specific numerical sizes and the time-cost to the steady state. Moreover, initializing firms' knowledge endowments differently, by changing the amount of knowledge held on average by individual firms, produced little variation; we also set q = 0.05, 0.1, 0.3, and 0.5, respectively, which again did not affect the trend of the evolution, only the specific numerical sizes and the time-cost to the steady state. The evolution of the knowledge network was tested by assigning the re-wiring probability p different values between 0 and 1, here p = 0, 0.1, 0.5, and 1, representing knowledge networks ranging from a regular network (p = 0), through mixture networks (0 < p < 1), to a completely random network (p = 1). The procedure was repeated and stopped when the interaction possibilities had been exhausted or the output had reached a steady state. All simulations were conducted on MATLAB R2014b.
Methodological Procedures
We used procedures similar to those of previous studies [23] to create different knowledge network structures for comparison. As described in Section 3.2.2, we first constructed a regular network with a symmetric connection matrix: if the matrix element in location (i, j) is 1, nodes i and j are connected. Then, based on the re-wiring algorithm in Section 3.2.2, as p increases the network structure changes from a regular network to a small world network and on to a random network. For the initial knowledge endowment of each node, S_i(0), we used the rand function rand(1) to draw its value: if the parameter q, as in Section 4.1, is greater than rand(1), then S_i(0) = 1, which means that node i holds the knowledge. After that, we calculated the knowledge accumulation and disparity of the whole network using the knowledge interaction rule and the formulas set out in Equations (1)-(3), and then plotted the results in the figures below.
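Putting the pieces together, a minimal driver loop (ours) might look as follows; it reuses the rewired_lattice and mixture_trade sketches above, and the fixed one-encounter-per-edge order per step is our assumption, as the paper's exact scheduling is not specified:

```python
# A minimal end-to-end driver, assuming rewired_lattice() and
# mixture_trade() from the earlier sketches are in scope.
import numpy as np

def simulate(n=1000, m=6, k=50, q=0.15, p=1.0, max_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    edges = list(rewired_lattice(n, m, p, seed))
    S = rng.random((n, k)) < q                # initial knowledge endowments
    history = [S.sum(axis=1).mean()]          # AKS(0)
    for _ in range(max_steps):
        for i, j in edges:                    # one encounter per edge per step
            S[i], S[j] = mixture_trade(S[i], S[j])
        history.append(S.sum(axis=1).mean())  # AKS(t)
        if history[-1] == history[-2]:        # steady state: no further gains
            break
    return history

print(simulate(n=200)[-1])                    # approaches k as diffusion completes
```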
The Accumulation of Knowledge in Clustered Firms
In this section, the goal is to find the optimal knowledge network based on the amount and speed of knowledge accumulation. Let p equal 0, 0.1, 0.5, and 1, respectively, representing structures ranging from a regular network, through a mixture network, to a completely random network. The results are shown in Figure 1.
As seen in Figure 1, the amount of average knowledge stock (AKS) reaches the maximum value of 50 in every case, which indicates that diffusion is eventually completed for all kinds of knowledge network. However, the time-cost of reaching the maximum decreases as the value of p increases, which implies that diffusion is fastest for p = 1. As discussed before, for the indexes AKS and SKD, the more the better. Therefore, when the accumulation of knowledge is considered under the mixture trade rule, the completely random network (p = 1) is optimal. Similar results were obtained when other values of p were tested.
The Dispersion of Knowledge in Clustered Firms

In this section, the goal is to find the optimal knowledge network based on knowledge disparity among clustered firms. Let p equal 0, 0.1, 0.5, and 1, respectively, representing structures ranging from a regular network, through a mixture network, to a completely random network. The results are shown in Figure 2.

We can see from Figure 2 that the amount of average knowledge deviation (AKD) reaches the minimum (i.e., 0) in every case, meaning that all firms eventually have the same knowledge level. However, the time-cost of reaching the steady state decreases as the value of p increases; in other words, the speed of knowledge diffusion (SKD) increases with p. As discussed before, for the index AKD, the less the better, and for the index SKD, the bigger the better. Therefore, when the knowledge disparity and the speed of knowledge diffusion are considered under the mixture knowledge trade rule, the completely random network (p = 1) is still the optimal structure. The results are very similar when other values of p are used.
Sensitivity Analysis
This study shows that, under a mixture trade rule, the random knowledge network is optimal for knowledge diffusion among strategic partners, which differs from the argument that the small world network is the optimal structure for knowledge diffusion [23]. To validate our result, a sensitivity analysis is conducted in this section to explore why our result differs from those of previous works.
Analysis of the Reliability of the Method
Using the interaction rule and performance indexes provided by Cowan and Jonard [23], that is, barter trade as the interaction rule, with µ and σ² as the performance indexes, a simulation was conducted to test whether our procedure is reliable, by using our algorithm with their parameters to see whether we would obtain the same results. Note that all the parameters used in this test were set as theirs, that is, n = 500, m = 10, k = 20, and q = 0.25. Figure 3 shows the average knowledge stock held by firms and the speed of diffusion for different values of p, and Figure 4 shows the average knowledge disparity between firms for different values of p. It is expected that, once the curves reach a steady state, the more the better for µ, but the less the better for σ². These two results indicate that the small world network is the best (i.e., p = 0.1), which is consistent with the finding of Cowan and Jonard [23]. Therefore, it can be concluded with confidence that our models and the algorithm/calculation procedure used in this study are valid: when the barter trade rule is used, our study also shows that a small world network is optimal. However, it is argued in our study that a different rule should be used to explore knowledge diffusion among strategic partners or firms in industrial parks, which often have a more benevolent relationship among network members, so that a strict barter rule for knowledge trade may not be appropriate.
Effect of the Degree of Node
Among all the parameters, the degree of node m seems to be arbitrary. Intuitively, if only the efficiency of knowledge diffusion is considered, a shorter path length and faster knowledge diffusion are better for knowledge accumulation, so it would seem that the largest degree of node (i.e., m = n − 1) should be best. In other words, the completely connected knowledge network would be optimal, and it would make little sense to search for a so-called optimal network. However, there is already a consensus that neither highly dense networks nor sparse networks are good for network knowledge diffusion, which implies that there should be an optimal degree of node between 0 and n − 1; yet how to prove that this optimal level actually exists has received limited attention in the literature. Therefore, an important question to be solved is whether an optimal value of m indeed exists, in order to confirm the reliability of our simulation experiment.
In this study, we therefore also test the marginal effect of the degree of node, measuring knowledge accumulation as m increases. In this simulation, the parameters are set as follows: n = 1001; m increases from 2 to 1000 in steps of 2 (i.e., from the minimum to the maximum value; note that m is even); k = 50; q = 0.15. The result is shown in Figure 5.
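Reusing the simulate() sketch above, the sweep can be outlined as follows (with a smaller network and fewer values of m than the experiment, for speed):

```python
# Sketch of the marginal-effect sweep over the degree of node m, assuming
# simulate() from the earlier driver sketch is in scope.
for m in (2, 6, 20, 100):
    history = simulate(n=200, m=m, k=50, q=0.15, p=1.0, seed=0)
    print(f"m={m:3d}: steps to steady state={len(history) - 1}, "
          f"final AKS={history[-1]:.1f}")
```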
Figure 5 shows that knowledge accumulation increases rapidly with m at the beginning, which indicates that more links are favorable. However, further increases in m make a diminishing difference once a steady state is reached, indicating that more links are not always better and may even bring negative effects once their cost is considered. This result suggests that a moderately dense network is more favorable, which is consistent with the current literature. It also confirms that a completely connected knowledge network is not optimal. In other words, for a given m, there is indeed an optimal knowledge network to be discovered, rather than simply adopting a completely connected structure to increase knowledge diffusion.
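A sketch of this sweep is shown below; it reuses the `simulate_barter()` function from the earlier sketch, and the number of steps per run is an illustrative choice rather than the study's exact setting.

```python
# Sweep the degree of node m and record steady-state knowledge accumulation.
import numpy as np

ms = range(2, 1001, 2)                  # even degrees, 2 .. 1000
aks_by_m = []
for m in ms:
    mu, _ = simulate_barter(n=1001, m=m, p=0.1, k=50, q=0.15, steps=50000)
    aks_by_m.append(mu[-1])             # steady-state average knowledge stock

# Marginal effect of one more unit of degree:
marginal = np.diff(aks_by_m)
print("largest marginal gains occur at small m:", marginal[:5])
```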
Effect of Knowledge Interaction Rules
In this section, we test whether the choice of the knowledge trade rule leads to different conclusions. Thus, we use the mixture rule as the interaction rule and keep the same performance indexes as used in similar studies (i.e., µ and σ²). The simulation results are shown in Figures 6 and 7.
As seen in Figures 6 and 7, the completely random network (i.e., p = 1) is the best when the mixture trade rule is used. Therefore, it can be concluded that the use of a different interaction rule leads to a result different from what Cowan and Jonard [23] found. Furthermore, since the barter trade represents a bilaterally profitable trade, knowledge trade may be terminated if there is no mutual benefit opportunity in the network. As a result, knowledge diffusion may be incomplete, as shown in Cowan and Jonard [23], which is against the goal of a strategic partnership. By contrast, a mixture rule including both barter trade and gift trade is more likely to promote knowledge diffusion among network members. Even though Cowan and Jonard [23] argue that knowledge exchange is a barter trade rather than a gift trade when competitors are involved, we argue that there is much more cooperation than competition among strategic partners or networked firms in strategic alliances, and therefore a gift rule cannot be ruled out. In other words, gift trade should also be considered for knowledge diffusion in strategic alliances. Of course, the most important task is to determine how policymakers can create a fair environment in which the mixture rule (mainly its gift trade component) works well.
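The following sketch shows one way the mixture rule could be encoded; the fallback logic (gift when only one side can gain) is our illustrative reading of "barter plus gift", not a definitive specification.

```python
# Mixture interaction rule (sketch): barter when both sides can gain,
# otherwise fall back to a one-way gift from the better-endowed partner.
import numpy as np

def interact_mixture(K, i, j, q, rng):
    gap = K[j] - K[i]
    if (gap > 0).any() and (gap < 0).any():      # mutual gain: barter
        a, b = int(np.argmax(gap)), int(np.argmin(gap))
        K[i, a] += q * gap[a]
        K[j, b] += q * (-gap[b])
    elif (gap > 0).any():                        # only i can gain: gift
        a = int(np.argmax(gap))
        K[i, a] += q * gap[a]
    elif (gap < 0).any():                        # only j can gain: gift
        b = int(np.argmin(gap))
        K[j, b] += q * (-gap[b])
```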
Effect of Performance Indexes
To further examine the validity of our study, we also tested whether the choice of performance index leads to different conclusions. We used the same interaction rule as in previous studies (i.e., the barter trade rule) and our indexes AKS, SKD, and AKD as the performance indexes, respectively. The simulation results are shown in Figures 8 and 9.
As seen in Figures 8 and 9, the small world network (i.e., p = 0.1) proves to be the best. Thus, we can confirm that changing the performance index does not lead to conclusions different from those of Cowan and Jonard [23]. Furthermore, this confirms that the interaction rule is the determining factor when more average knowledge stock and less average knowledge disparity are preferred.
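For completeness, a sketch of how the three indexes could be computed from a simulation trace follows. The definitions are illustrative readings of the acronyms: AKS as average knowledge stock, AKD as average knowledge disparity, and SKD as speed of knowledge diffusion, taken here as the time to reach 95% of the final stock.

```python
import numpy as np

def performance_indexes(K_history):
    """K_history: array of shape (T, n, k) with knowledge states over time."""
    per_firm = K_history.mean(axis=2)            # (T, n) mean stock per firm
    aks = per_firm.mean(axis=1)                  # AKS(t)
    akd = np.abs(per_firm[:, :, None]
                 - per_firm[:, None, :]).mean(axis=(1, 2))   # AKD(t)
    target = 0.95 * aks[-1]
    skd = int(np.argmax(aks >= target))          # first step reaching 95%
    return aks, akd, skd
```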
Discussion
In this study we searched for an optimal knowledge network for knowledge diffusion among networked firms in strategic partnerships. Based on a mixture rule of knowledge interaction (i.e., a combination of barter trade and gift trade) and using different performance indexes (more average knowledge stock, faster knowledge diffusion, and less knowledge disparity), a set of knowledge diffusion models was constructed and tested with computer simulation. The simulation results show that a random network, rather than a small world network, is the optimal structure when a mixture knowledge trade rule is used. In other words, our study shows that the efficiency of knowledge diffusion increases with the degree of randomness when both the barter trade rule and the gift trade rule are used.
Our finding contradicts the previous view that a small world network is best for knowledge diffusion [20,23-25]. To validate our result, sensitivity analyses were conducted to test whether our method is reliable and why our study generated a different result. The comparative analysis suggests that our method is reliable and valid, and that the key difference between our study and previous studies is the knowledge trade rule: we used the mixture rule. We argue that different knowledge interaction rules reflect different environmental conditions, and thus a contingency view better reflects the reality of interfirm alliances [33]. More specifically, the barter trade represents a formal relationship among firms based on market exchange, and therefore knowledge diffusion must be a bilaterally profitable trade. The knowledge trade will be terminated if there is no mutual profit, and consequently knowledge diffusion based on the barter trade rule tends to be slow and incomplete. The gift trade rule represents a committed relationship among firms based on friendship and sometimes mutual obligations, and thus it may also be used to form a unilaterally profitable trade. Therefore, we argue that the appropriate knowledge interaction rule among strategic partners, and in the early stage of industrial parks, is a mixture rule including barter trade and gift trade, because policymakers or government agents often require faster and more complete knowledge diffusion with less disparity, and consequently may request more advanced firms to unilaterally help backward firms in a strategic alliance.
Our findings are consistent with those of many other scholars, who contend that a small world network may not be the optimal structure under conditions such as an uncertain environment or abundant resources [13,52,53,59]. Our findings, together with theirs, show that whether a knowledge network is optimal should be contingent upon whether it fits the environmental conditions, rather than on the purely static topology of the knowledge network. Put another way, when one has the option to design a firm-based network for strategic alliances, the optimal structure really depends on whether it fits the contextual requirements. Therefore, this study adopts a contingent view that considers a more realistic knowledge trade rule, in contrast to the static view which contends that a small world network is the only optimal structure [33].
Managerial Implications
Recent years have seen increasing interest in clustering, innovation ecosystems, and localization, because the underlying assumption is that knowledge transmission and recombination are easier among agents in a geographically localized area, where industrial R&D and a knowledgeable workforce abound. However, research also shows that clustering in an innovation ecosystem can be too extensive or too dense [2,22,60]. At the same time, it is also important that cluster members maintain close links with members located outside the cluster. In this case, unlike the traditional viewpoint that a small world network is best, our study shows that a random network is best for efficient knowledge diffusion among network members within an innovation ecosystem when a more realistic knowledge trade rule is used.
The findings of this study thus have important implications for policymakers and management practitioners. First, policymakers of industrial parks and government agents should create policies that encourage firms to develop new partnerships across the borders of local clusters in order to obtain more efficient knowledge diffusion and innovation ecosystems. Second, policymakers can encourage firms to carry out extensive technical cooperation through joint R&D projects, a form of linking clustered firms into a random network, to promote knowledge diffusion among clustered members. Third, strategic alliances should promote informal communication between firms using methods such as technical cooperation forums, entrepreneur salons, firm associations, and industry forums on technological consulting. This type of communication is important not only for local firms but also for those in different geographic locations, as it creates more opportunities for different firms to facilitate knowledge diffusion and a more efficient ecosystem. Finally, policymakers can promote the mixture rule of knowledge diffusion by enhancing mutual trust and a cooperative atmosphere, which ultimately helps implement a fully random network and improves the efficiency of knowledge diffusion in interfirm alliances. This is even more important because many firms in a market economy are unwilling to share knowledge under the gift rule, whereas the findings of this study show that certain unilaterally beneficial policies may be better for the collective good.
More specifically, the findings of this study can help policymakers understand and identify efficient knowledge networks that facilitate knowledge diffusion at different stages of clustered firms, in order to build more efficient innovation ecosystems. Based on our findings, the mixture interaction rule (i.e., barter trade plus gift trade) may be more realistic and thus should be adopted in the initial stage of interfirm alliances, when many network partners do not yet possess the knowledge desired by their partners and a random network is consequently optimal for knowledge diffusion. The barter trade rule may be more feasible in the mature stage of interfirm alliances, when a small world network should be used. This is particularly important as an increasing number of high-tech parks and interfirm alliances are created in many countries to recreate the success of Silicon Valley. In the early stage of high-tech interfirm alliances, there often exist several core firms with sufficient knowledge and many non-core firms with less knowledge. The core firms are expected to help the non-core firms increase their knowledge stock as quickly and as much as possible. In such a situation, a barter trade is not feasible, because the non-core firms may have little or no knowledge to offer the core firms, and thus the small world network is not a proper structure for knowledge diffusion. Instead, the mixture rule (i.e., barter trade plus gift trade) is much more realistic, and the random network will be optimal. However, the gift trade rule may not be sustainable, because it only benefits one side of the exchange relationship. When the non-core firms obtain and create unique knowledge needed by other firms, the barter trade becomes more desirable than the mixture rule. In this situation, a small world network should be the preferred structure to facilitate knowledge diffusion in the mature stage of interfirm alliances.
Contributions
This study makes important contributions to the literature in several respects. Firstly, there is an ongoing debate about the optimal structure for knowledge diffusion in networked firms and about whether more links between nodes are better for knowledge diffusion [11]. Arguments based on structural hole theory and social capital theory have been proposed to examine the advantages or disadvantages of the relevant structures, but there is no consensus. In this study, our simulation results show that an optimal structure for knowledge diffusion is possible.
Secondly, based on a mixture rule of knowledge interaction and a contingency view [33], our results show that a random network, rather than the intuitively attractive small world network, is likely to be the optimal structure for efficient knowledge diffusion. On the one hand, this suggests that the optimal knowledge network depends on the environment of the interfirm alliance, as discussed in the literature. On the other hand, it provides a feasible method for policymakers to achieve efficient knowledge interaction by enhancing mutual trust or creating a cooperative atmosphere in order to adopt the completely random network, which ultimately improves the efficiency of knowledge diffusion among networked firms in a strategic partnership.
Limitations and Future Research
This study has its limitations. Caution should be exercised when applying the findings of this study to other contexts. In our model, the degree of node (i.e., m) is fixed for each node, but a network with high degree-heterogeneity is likely to be suitable for knowledge diffusion [61]. Furthermore, the network often evolves over time. For example, a novel mechanism of network change, namely "node collapse", is proposed by Hernandez and Menon [62], which shows that node collapses directly affect the performance of the acquirer and indirectly that of other actors, and that the direction of network evolution hinges on the degree to which firms pursue internal versus network synergies through node collapses. Moreover, the peer effect may also influence the selection of partners and consequently influence the forming of a network. For instance, exogenous factors beyond individual agency, i.e., random peers, can shape a knowledge network [63]. Therefore, future research is called on to test whether a random network is still the optimal knowledge network if the degree of node m is not fixed.
In addition, we have assumed a static knowledge diffusion process, as in other studies [20,23,24,54]. In other words, it was assumed that no new knowledge is added to the knowledge network, an assumption used for both our model and that of Cowan and Jonard [23], and thus no consideration was given to the scenario in which the stock of new knowledge keeps growing. Future research is urged to consider the situation in which more new knowledge is added during the diffusion process, a dynamic view of knowledge diffusion, in order to better capture the nature of knowledge diffusion among clustered firms in strategic partnerships. Another limitation of this study is that, compared with the studies by Cowan and Jonard [20,23,24,54], it is based on simulated models. We do not have industrial data from different firms for a more in-depth analysis, and thus this study may not be practical enough, which could limit its generalizability. Future research is therefore needed to collect more industrial data to validate the findings of this study. That being said, given that nothing is more practical than a good theory [64], this study can provide important insights for future practical research. | 12,101.6 | 2020-08-06T00:00:00.000 | [
"Business",
"Computer Science",
"Economics",
"Engineering"
] |
On the evolution of operator complexity beyond scrambling
We study operator complexity on various time scales with emphasis on those much larger than the scrambling period. We use, for systems with a large but finite number of degrees of freedom, the notion of K-complexity employed in [1] for infinite systems. We present evidence that K-complexity of ETH operators has indeed the character associated with the bulk time evolution of extremal volumes and actions. Namely, after a period of exponential growth during the scrambling period the K-complexity increases only linearly with time for exponentially long times in terms of the entropy, and it eventually saturates at a constant value also exponential in terms of the entropy. This constant value depends on the Hamiltonian and the operator but not on any extrinsic tolerance parameter. Thus K-complexity deserves to be an entry in the AdS/CFT dictionary. Invoking a concept of K-entropy and some numerical examples we also discuss the extent to which the long period of linear complexity growth entails an efficient randomization of operators.
Introduction
Quantum complexity has been proposed as a new entry in the holographic dictionary (see for instance [2,3] and references therein). The underlying idea is to characterize the entanglement of a state in an 'optimal' way, with respect to some simple building blocks, such as gates in a quantum circuit model or, more generally, a tensor network. Complexity can then be defined as the size of the smallest circuit or tensor network which approximates the state, given some prescribed set of gates or fundamental tensors (see for instance [4,5]). The quantum circuit model leads naturally to a notion of complexity which is extensive in the number of degrees of freedom, S, and furthermore grows linearly in time, for a period much longer than any ordinary thermalization time scale,
$$\mathcal{C}(t) \sim S\,\frac{t}{\beta}\;,$$
with β an effective time step for state-vector orthogonality, i.e. ⟨Ψ_t|Ψ_{t+β}⟩ ≈ 0. This linear growth is to be matched to the linear growth of spacelike volumes inside a black hole of entropy S and inverse Hawking temperature β [6,7]. An important question is whether the so-defined complexity has an upper bound. In quantum models with a finite set of qubits, a computation is regarded as finished when the target state is approximated within some a priori tolerance ϵ, with respect to a standard metric on the space of states. Complexities defined with such an implicit dependence on the tolerance parameter are bounded by the number of ϵ-cells in the space of states, which scales exponentially with the number of qubits. In section 3 we study K-complexity beyond the scrambling evolution: we establish the linear growth of K-complexity and the saturation time scale. In section 4 we define the notion of K-entropy, as a measure of the degree of randomization of the Heisenberg flow, and argue on the basis of some numerical estimates that such randomization is expected to occur in order of magnitude. Section 5 presents the conclusions and a number of open questions suggested by our work.
Review of K-complexity
We begin with a review of K-complexity and a description of the notational conventions used in this paper. The main reference for this section is [1]. Given the Hamiltonian of a lattice system, H, and a particular initial operator O_0, one defines a linearly independent set of operators in terms of the n-times nested commutators of H with O_0, which are orthonormalized into a set of operators O_n, the Krylov basis. The orthonormality can be defined with respect to any non-degenerate inner product on the operator algebra, such as
$$(A\,|\,B) = \frac{1}{N}\,\mathrm{Tr}\left(A^\dagger B\right)\;, \qquad (2.1)$$
where the trace is taken over the complete Hilbert space of dimension N. In what follows, we assume that appropriate cutoffs exist so that N is finite, but many of the expressions should admit a consistent N → ∞ limit. The construction of the Krylov basis runs iteratively as follows. Starting from the initial operator O_0 = A_0, which we assume to be normalized, one sets
$$A_n = [H,\, O_{n-1}] - b_{n-1}\, O_{n-2}\;, \qquad O_n = \frac{A_n}{b_n}\;, \qquad b_n = (A_n\,|\,A_n)^{1/2}\;, \qquad (2.4)$$
where the non-negative matrix elements b_n are called Lanczos coefficients. It is useful to exploit the notation (2.1) to introduce a vector space of operators, |O), with dimension of order N². The adjoint action of the Hamiltonian in (2.4) introduces a linear operator in this space known as the Liouvillian, defined as $\mathcal{L}\,|O) = |\,[H,O]\,)$, which acts as the generator of the Heisenberg flow $O_t = e^{itH} O_0\, e^{-itH}$ in this notation:
$$|O_t) = e^{i\mathcal{L}t}\,|O_0)\;.$$
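As a concrete illustration, the following is a minimal numerical sketch of this construction, assuming operators are represented as dense N×N matrices and using the infinite-temperature inner product above; it is not tied to any particular model.

```python
import numpy as np

def krylov_basis(H, O0, n_max):
    """Lanczos construction of the Krylov basis and coefficients b_n (sketch).

    H, O0: dense (N, N) Hermitian matrices; inner product (A|B) = Tr(A^+ B)/N.
    Returns the Lanczos coefficients b_1..b_n and the basis operators O_n.
    """
    N = H.shape[0]
    ip = lambda A, B: np.trace(A.conj().T @ B) / N
    L = lambda O: H @ O - O @ H                  # Liouvillian: [H, O]
    O0 = O0 / np.sqrt(ip(O0, O0).real)           # normalize the seed
    ops, bs = [O0], []
    O_prev = np.zeros_like(O0)
    for n in range(1, n_max + 1):
        A = L(ops[-1]) - (bs[-1] if bs else 0.0) * O_prev
        b = np.sqrt(ip(A, A).real)
        if b < 1e-12:                            # Krylov space terminated
            break
        O_prev = ops[-1]
        ops.append(A / b)
        bs.append(b)
    return np.array(bs), ops
```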
Any Hermitian operator O can be expanded in the Krylov basis according to the expression
$$|O) = \sum_n i^n\, \varphi_n\, |O_n)\;, \qquad (2.7)$$
for some real coefficients ϕ_n. In terms of these 'amplitudes', we can rewrite the Heisenberg flow as the recursion
$$\dot\varphi_n(t) = b_n\, \varphi_{n-1}(t) - b_{n+1}\, \varphi_{n+1}(t)\;, \qquad (2.8)$$
with boundary condition ϕ_{−1}(t) = 0. One can directly check that the normalization of the amplitudes is preserved under time evolution, i.e. $\partial_t \sum_n |\varphi_n(t)|^2 = 0$. Since O_0 was assumed to be normalized, we have ϕ_n(0) = δ_{n0}, and the quantities |ϕ_n(t)|² define a unit-normalized probability distribution at all times. If we start with a 'small' operator, containing few local degrees of freedom, each nested commutator with the Hamiltonian tends to increase its size. For a k-local Hamiltonian, containing products of fewer than k local degrees of freedom, we expect the size of O_n to be of order nk for large values of n. To see this, suppose O_n contains 'clusters' of size r in terms of local operators. Since H is a sum of clusters of size k, the commutator [H, O_n] is nonzero when the corresponding clusters have a non-vanishing intersection. The components with largest size are those corresponding to clusters that intersect through O(1) local operators, yielding commutators of size r + k. Hence, each commutator generically increases the size of the operator by O(k). Proceeding inductively, we find that a nested commutator of order n generates operators of size nk in order of magnitude.
Since 'operator size' is an intuitive measure of its complexity, and operator size is roughly related to the ordering in the Krylov basis, it is natural to define the notion of K-complexity as the average value of n in the Krylov basis expansion (2.7), i.e.
$$C_K \equiv \sum_n n\, |\varphi_n|^2\;, \qquad (2.9)$$
where unit normalization of the ϕ_n amplitudes is assumed. It was explicitly shown in [1] that this definition is close to operator size for the SYK model in the thermodynamic limit. Applying the general definition (2.9) to the time-evolved operator $O_t = e^{itH} O_0\, e^{-itH}$ with initial condition O_0, we are led to a natural notion of time-dependent K-complexity,
$$C_K(t) = \sum_n n\, |\varphi_n(t)|^2\;, \qquad (2.10)$$
which depends implicitly on the seed operator O_0. A given pattern of growth of the Lanczos coefficients as a function of n translates into a characteristic growth of complexity. For instance, it is shown in [1] that a system with an asymptotic large-n law
$$b_n \approx \alpha\, n \qquad (2.11)$$
accumulates K-complexity at an exponential rate,
$$C_K(t) \sim e^{2\alpha t}\;. \qquad (2.12)$$
A benchmark example of this behavior is the SYK model, for which 2α = λ is the Lyapunov exponent revealed in OTOC correlations. It is then natural to propose (2.11) as a criterion for local quantum chaos, since explicit evaluation of Lanczos coefficients in various integrable systems yields softer asymptotic laws of the form
$$b_n \sim \alpha\, n^\delta\;, \qquad 0 < \delta < 1\;. \qquad (2.13)$$
In these cases, K-complexity has a milder, power-like growth, $C_K(t) \sim (\alpha t)^{\frac{1}{1-\delta}}$. It is useful to find relations between the patterns of growth of Lanczos coefficients and more familiar objects, such as correlation functions. Let us consider the time autocorrelation
$$G(t) = (O_0\,|\,O_t)\;,$$
which coincides with the standard Wightman correlation function at infinite temperature.
In the thermodynamic limit, N → ∞, the Fourier transform G(ω) develops a non-trivial analytic structure. In particular, the singularities closest to the real axis are located at ±iπ/(2α), where α is the slope coefficient in (2.11), and G(ω) decays exponentially along the real axis with the law (cf. [14])
$$G(\omega) \sim e^{-\frac{\pi |\omega|}{2\alpha}}\;.$$
More generally, a growth law of the form (2.13) translates into a decay $G(\omega) \sim \exp\!\left(-|\omega/\omega_0|^{1/\delta}\right)$, i.e. the sharper the decay of the spectral function, the milder the growth of the Lanczos coefficients. In the case that the b_n have a finite asymptotic limit, $\lim_{n\to\infty} b_n = b_\infty$, it turns out that the spectral function has compact support in the band $|\omega| \lesssim 2 b_\infty$. There is a direct relation between the Lanczos coefficients and the moments of the Liouvillian, which in turn control the Taylor series of the autocorrelation function (only even moments contribute for Hermitian operators),
$$\mu_{2n} = (O_0\,|\,\mathcal{L}^{2n}\,|\,O_0)\;, \qquad G(t) = \sum_n \frac{(it)^{2n}}{(2n)!}\,\mu_{2n}\;.$$
The relation between b_n and μ_{2n} can be written explicitly as a combinatorial formula, a sum over so-called Dyck paths of products of the b_n (cf. appendix A of [1] for a review).
Here the Dyck paths {h_k} are sequences of 2n numbers satisfying h_0 = h_{2n} = 1/2, h_k ≥ 1/2 and |h_k − h_{k+1}| = 1 for all k. The number of such paths is the Catalan number $C_n = \frac{(2n)!}{n!\,(n+1)!}$. From this expression one can relate the large-n asymptotics of Lanczos coefficients and moments. For instance, a linear growth of b_n translates into a factorial-squared growth of the moments, i.e. μ_{2n} ∼ n^{2n}. More interesting for our purposes is the fact that an asymptotically constant Lanczos sequence, b_n ∼ b_∞, produces power-like moments,
$$\mu_{2n} \sim C_n\, b_\infty^{2n} \approx (2 b_\infty)^{2n}\, e^{o(n)}\;, \qquad (2.20)$$
where we have applied the large-n asymptotics $C_n \approx 4^n n^{-3/2}$ and used the notation o(n) in the exponent for any terms with large-n growth slower than linear, such as fractional powers or logarithms. Hence, an asymptotic power-law behavior of the moments is associated with a flat distribution of Lanczos coefficients.
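The moments can be checked numerically without enumerating Dyck paths: in the Krylov basis the Liouvillian is tridiagonal with the b_n on the off-diagonals, so $\mu_{2n} = (L^{2n})_{00}$. The sketch below uses this representation and also integrates the amplitude recursion (2.8) to obtain $C_K(t)$; the choice b_n = αn is just an example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tridiagonal_liouvillian(bs):
    """Liouvillian in the Krylov basis: zeros on the diagonal, b_n off it."""
    n = len(bs) + 1
    L = np.zeros((n, n))
    for i, b in enumerate(bs):
        L[i, i + 1] = L[i + 1, i] = b
    return L

bs = 1.0 * np.arange(1, 200)            # e.g. linear growth b_n = alpha*n
L = tridiagonal_liouvillian(bs)

# Moments mu_{2n} = (L^{2n})_{00}:
mu = [np.linalg.matrix_power(L, 2 * n)[0, 0] for n in range(5)]

# Amplitudes from (2.8): phi_n' = b_n phi_{n-1} - b_{n+1} phi_{n+1};
# in matrix form this is phi' = A phi with A antisymmetric.
A = np.tril(L, -1) - np.triu(L, 1)
phi0 = np.zeros(L.shape[0]); phi0[0] = 1.0
sol = solve_ivp(lambda t, p: A @ p, (0, 2), phi0, dense_output=True)
CK = lambda t: np.sum(np.arange(L.shape[0]) * sol.sol(t) ** 2)
print(mu[:3], CK(1.5))
```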
K-complexity of scramblers: fast and finite
In systems with a finite-dimensional Hilbert space, K-complexity is necessarily bounded by the dimensionality of the operator space, C_K ≤ N². Saturation of this bound is not guaranteed, as the Krylov basis may terminate its iterative construction before it spans the whole operator space. Still, for sufficiently generic choices of initial operator O_0 and Hamiltonian H, we expect that n_max does not lie far below N. To see this, consider the basis of operators
$$L^{(ab)} = |E_a\rangle\,\langle E_b|\;, \qquad (3.1)$$
where |E_a⟩ denotes the exact energy eigenstate with eigenvalue E_a. The N² operators L^{(ab)} define a basis of the operator space which is orthonormal with respect to the inner product (2.1). The components of O_t in this basis are proportional to its matrix elements in the exact energy basis,
$$(L^{(ab)}\,|\,O_t) \propto \langle E_a|\,O_t\,|E_b\rangle = e^{it(E_a - E_b)}\, \langle E_a|\,O_0\,|E_b\rangle\;. \qquad (3.2)$$
For a sufficiently generic initial operator, there are O(N²) non-vanishing matrix elements, which remain non-vanishing at all times. Thus, the 'supervector' |O_t) has O(N²) non-vanishing projections (L^{(ab)}|O_t). Although the Krylov basis is rotated with respect to (3.1), it is natural to expect that the number of non-vanishing K-components (O_n|O_t) will also be of O(N²). Furthermore, for generic values of the energies E_a, the N independent phases $e^{-itE_a}$ describe an ergodic motion on a real N-dimensional torus, which is embedded in the operator space by equation (3.2). Hence, |O_t) lies on an N-dimensional submanifold, and we can conclude that
$$n_{\max} = e^{O(S)} \qquad (3.3)$$
for systems with S degrees of freedom and generic choices of H and O_0.
The computation of K-complexities requires the evaluation of (2.10) once we know the amplitudes ϕ_n(t). These in turn are obtained by solving (2.8). Therefore, it is the structure of the sequence b_n that determines the relevant dynamical regimes in the growth of K-complexity. In a typical fast scrambler, such as the SYK model, small operators grow in size at an exponential rate exp(λt), where λ ≈ 2α is the Lyapunov exponent. In other words, for small operators, operator size is roughly equivalent to K-complexity.
We can regard the operator as 'scrambled' when it has spread, in order of magnitude, over the whole system. For a fast scrambler with S local degrees of freedom, this happens at the familiar time scale $t_* \sim \lambda^{-1} \log S$ [15]. The value of the K-complexity at the scrambling time is of order
$$C_K(t_*) \sim S\;. \qquad (3.4)$$
For systems with O(S) lattice sites and a finite-dimensional Hilbert space one has $N \sim e^{O(S)}$.
Since $S \ll e^{O(S)}$, it follows that K-complexity has an enormous scope for growth beyond the 'scrambling value'. Should the complexity continue to grow exponentially fast for t > t_*, it would saturate in a time of order S. In the next section we use the ETH hypothesis to argue that this estimate is far from correct.
The ETH estimate
For systems which scramble less efficiently than a 'fast' scrambler, one expects the scrambling time to scale like a power of S rather than a logarithm, but the intuitive relation between K-complexity and operator size suggests that the complexity at the scrambling time continues to satisfy (3.4). Hence, the wide gap between the complexity at scrambling, C_K(t_*) ∼ S, and the maximal complexity, of order $e^{O(S)}$, should be a general feature of any system with a finite number of degrees of freedom.
The rate of K-complexity growth after scrambling depends on the form of the b_n coefficients for n ≫ S. These can be constrained from the behavior of the moments. From the spectral decomposition of the correlation function, we obtain the expression
$$\mu_{2n} = \frac{1}{N} \sum_{a,b} |O_{ab}|^2\, (E_a - E_b)^{2n}\;, \qquad (3.7)$$
where $O_{ab} = \langle E_a|\,O_0\,|E_b\rangle$ denote the matrix elements of the initial operator in the exact energy basis. These matrix elements can be used to characterize a degree of quantum chaos. For operators whose expectation values and correlations approach thermal values at long times, it is expected that the O_{ab} satisfy the Eigenstate Thermalization Hypothesis (ETH) [16-20], which essentially says that the eigenbases of O_0 and H are uncorrelated,
related by a random unitary on the N-dimensional Hilbert space. From this assumption it follows that the off-diagonal matrix elements contributing to (3.7) have the form
$$O_{ab} \approx e^{-S(\bar E)/2}\; F(E_a, E_b)\; R_{ab}\;,$$
where R_{ab} is a random matrix whose entries have mean zero and unit variance. The form factor F carries the information about the normalization of the operator and is assumed to depend smoothly on the energies of the states. Plugging this ansatz into the spectral expression (3.7), we thus find
$$\mu_{2n} \sim \frac{1}{N} \sum_{a,b} e^{-S(\bar E)}\; |F(E_a, E_b)|^2\; (E_a - E_b)^{2n}\;. \qquad (3.9)$$
For n ≫ S the energy sum tends to be dominated by the largest possible energy differences. For a system with S degrees of freedom and extensive energy, the maximum energy difference is of order ΛS, where Λ is the UV cutoff. Hence, we expect the large-n moments in (3.9) to scale as (ΛS)^{2n} for n ≫ S. We can refine the estimate further to check that the operator form factor does not alter this conclusion significantly. The function F(E_a, E_b) is assumed to depend weakly on the average energy $\bar E = (E_a + E_b)/2$ and more sharply on the energy difference ω = E_a − E_b, with a characteristic bandwidth Γ. For local operators the bandwidth is an intensive energy scale, not scaling with S, and set by the local frequency cutoff Λ. For instance, assuming an exponential form factor F(ω) ∼ e^{−ω/Γ}, we estimate the sum in (3.9) as proportional to
$$\Gamma^{2n+1}\; \gamma\!\left(2n+1,\; \frac{\Lambda S}{\Gamma}\right),$$
where γ stands for the lower incomplete Gamma function. Upon further use of the asymptotic expansion for n/S ≫ 1, we find the following n ≫ S asymptotics for the moments:
$$\mu_{2n} \sim (\Lambda S)^{2n}\; e^{o(n)}\;.$$
The same qualitative estimate is obtained for milder form factors. Using now the general relation (2.20), we conclude that ETH suggests a saturation of the Lanczos sequence at a 'plateau' of height b_∞ ∼ ΛS, within factors of order unity, and we are led to a very simple picture for the Lanczos sequence: linear growth with slope of order λ ∼ 2α ∼ Λ for 0 < n < n_* ∼ S, morphing into an approximate plateau for n ≫ S. This conjectured form of the Lanczos band is shown in figure 1. In the figure, we denote by a dotted line our ignorance about the details of the high-energy endpoint, beyond our previous estimate that, for sufficiently generic initial operators, we expect $n_{\max} = e^{O(S)}$, as explained in the discussion leading to (3.3).
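A quick numerical check of this estimate under the assumed exponential form factor: the ω-integral is evaluated in log space to avoid overflow, and the ratio $\mu_{2n}^{1/(2n)}/(\Lambda S)$ approaches a constant of order one once n ≫ S. The parameter values below are illustrative.

```python
import numpy as np

S, Lam, Gam = 20.0, 1.0, 1.0     # entropy, UV cutoff, form-factor bandwidth

def log_moment(n, num=20000):
    """log of int_0^{Lam*S} w^{2n} exp(-w/Gam) dw, via log-sum-exp quadrature."""
    w = np.linspace(1e-6, Lam * S, num)
    logf = 2 * n * np.log(w) - w / Gam
    m = logf.max()                           # subtract the max to stay finite
    return m + np.log(np.trapz(np.exp(logf - m), w))

for n in [5, 20, 100, 400]:
    print(n, np.exp(log_moment(n) / (2 * n)) / (Lam * S))  # -> O(1) for n >> S
```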
More generally, for systems with less efficient scrambling, the initial linear growth might be substituted by (2.13), whereas the 'post-scrambling' plateau for n S is expected to be rather general. It would be very interesting to test the generality of this 'Lanczos plateau' in numerical simulations of various models, such as SYK.
The qualitative description of a fast scrambler, as determined by a Lyapunov exponent λ ∼ Λ and S degrees of freedom, can be adapted to more general situations where only a subset of the degrees of freedom is 'activated' in the scrambling process. This occurs when considering a system at a finite temperature below the UV cutoff, T < Λ. In this case, on states of entropy S, the system can be described as having about one degree of freedom per thermal cell of size β = T^{−1} participating in the scrambling process, with the remaining degrees of freedom effectively 'frozen' in their ground state, and thus not contributing to the entropy. On such states of entropy S and effective temperature T, the UV cutoff is effectively replaced by T, setting the scale of the Lyapunov exponent, λ ∼ T.
Dynamics of K-complexity
The evolution of K-complexity for a fast scrambler with linear growth (2.11) was studied in [1]. An analytic solution for the amplitudes ϕ_n(t) exists for a formal choice of Lanczos coefficients given by $b_n = \alpha\sqrt{n(n-1+\eta)}$. To simplify matters, we look at the exactly linear case, corresponding to η = 1, for which the solution reads
$$\varphi_n(t) = \tanh^n(\alpha t)\; \mathrm{sech}(\alpha t)\;. \qquad (3.12)$$
An initially sharp peak at n = 0 moves to higher n exponentially fast, $n_{\rm peak}(t) \sim e^{2\alpha t}$. The overall height of the function at large t is of order $e^{-\alpha t}$. Hence, the scrambling is
The growth of complexity is largely controlled by the ballistic motion in n-space of the solution's 'wave front'. On the other hand, operator randomization depends on whether a significant tail is left behind the wave front. For a discussion of the ballistic aspect, as well as the detailed matching between the pre-scrambling and post-scrambling regimes, it is useful to start with a continuum approximation.
Taking a coarse-grained look at the discrete function ϕ_n(t), let us introduce a lattice cutoff ε and a coordinate x = εn, and define the interpolating functions ϕ(x,t) = ϕ_n(t) and v(x) = 2ε b(εn) = 2ε b_n. A continuum form of the recursion relation (2.8) can be written as
$$\partial_t\, \varphi(x,t) = \frac{1}{2\varepsilon}\Big[\, v(x)\, \varphi(x-\varepsilon, t) - v(x+\varepsilon)\, \varphi(x+\varepsilon, t) \,\Big]\;. \qquad (3.13)$$
Expanding now in powers of ε, we find to leading order
$$\big(\partial_t + v(x)\, \partial_x\big)\, \varphi(x,t) = -\tfrac{1}{2}\, v'(x)\, \varphi(x,t)\;. \qquad (3.14)$$
This equation is simplified by the change of frame
$$y(x) = \int^x \frac{dx'}{v(x')}\;, \qquad \psi(y,t) = \sqrt{v(y)}\; \varphi(y,t)\;, \qquad (3.15)$$
in terms of which the general solution is the ballistic wave
$$\psi(y,t) = \psi_i(y - t)\;, \qquad (3.17)$$
where ψ_i(y) = ψ(y,0) is the initial condition. The rescaling (3.15) is also useful from the point of view of the intuition about probability distributions. From the discrete normalization condition $\sum_{n \ge 0} |\varphi_n|^2 = 1$ we can derive the continuum analogs
$$\frac{1}{\varepsilon}\int dx\; |\varphi(x,t)|^2 = \frac{1}{\varepsilon}\int dy\; |\psi(y,t)|^2 = 1\;, \qquad (3.18)$$
so that ψ(y) is a naive probability amplitude in y space, just as ϕ(x) is a naive probability amplitude in x space. The physics of (3.17) is that of simple ballistic motion of the initial ψ-distribution towards positive values of y at constant velocity. The problem is solved once we know the change of variables between the x-frame and the y-frame. The K-complexity as a function of time is given by
$$C_K(t) = \frac{1}{\varepsilon^2}\int dx\; x\; |\varphi(x,t)|^2\;.$$
Using the general solution (3.17) in the last expression and changing variables y → y + t, we find
$$C_K(t) = \frac{1}{\varepsilon^2}\int dy\; x(y+t)\; |\psi_i(y)|^2\;. \qquad (3.20)$$
There are various interesting cases to consider. A fast scrambler with linear Lanczos growth has v(x) = λx, where λ = 2α is the Lyapunov exponent. The corresponding change of variables is
$$x(y) = \varepsilon\, e^{\lambda y}\;, \qquad (3.21)$$
where we have chosen the additive normalization of y for convenience. Notice that, in this case, the y variable runs over the whole real line, whereas the x variable is restricted to be positive. The scrambling solution for the ϕ amplitude then reads
$$\varphi(x,t) = \frac{1}{\sqrt{\lambda x}}\; \psi_i\!\left(\frac{1}{\lambda}\log\frac{x}{\varepsilon} - t\right). \qquad (3.22)$$
An initial peak at y = 0 for ψ_i(y) moves ballistically as y_p(t) = t, corresponding to an x-frame trajectory $x_p(t) = \varepsilon\, e^{\lambda t}$, which also controls the exponential growth of K-complexity.
If the velocity has a logarithmic correction, $v(x) \approx \lambda\, x / \log(x/\varepsilon)$, as proposed in [1], the corresponding frame map is $x = \varepsilon\, e^{\sqrt{2\lambda y}}$, and the distribution peak and the K-complexity grow at a rate of order $\exp(\sqrt{2\lambda t})$. For systems with less efficient scrambling, governed by (2.13) with δ < 1, the drift velocity is given by $v(x) = 2\alpha\, \varepsilon\, (x/\varepsilon)^{\delta}$, leading to a change of variables
$$y = \frac{1}{2\alpha(1-\delta)}\left(\frac{x}{\varepsilon}\right)^{1-\delta}$$
and a power-like complexity growth proportional to $(\alpha t)^{\frac{1}{1-\delta}}$. It is interesting to compare estimates of scrambling times based on the growth of K-complexity with other heuristic models of scrambling. If we define the scrambling time by the requirement that complexity reaches the size of the system, C_K(t_*) ∼ S, then we have
$$t_* \sim \alpha^{-1}\, S^{\,1-\delta}\;.$$
On the other hand, in d spatial dimensions, ballistic scrambling takes a time of order t_* ∼ L for a system of size L. If we write S ∼ (αL)^d for the effective number of degrees of freedom (entropy) and α^{−1} for the effective dynamical time step, we have $t_* \sim \alpha^{-1} S^{1/d}$ for ballistic scrambling. If we model the scrambling by a diffusion process, characterized by a random walk of step α^{−1}, we obtain instead $t_* \sim \alpha^{-1} S^{2/d}$. Then, we find the interesting correspondences
$$\delta = 1 - \frac{1}{d} \;\leftrightarrow\; \text{ballistic scrambling}\;, \qquad \delta = 1 - \frac{2}{d} \;\leftrightarrow\; \text{diffusive scrambling}\;.$$
The post-scrambling regime. In the post-scrambling regime the x and y frames are simply proportional, $x(y) = v_*\, y$, and the amplitude ϕ(x,t) just moves ballistically towards large x with velocity v_*, the K-complexity also growing linearly. To summarize, in the simplest case of an SYK-like fast scrambler, with Lyapunov exponent λ and S extensive degrees of freedom, we have v(x) ≈ λx in the scrambling band, 0 < x < εS, and v(x) = v_* ∼ λεS in the post-scrambling band. In the scrambling period, x(y) ≈ ε e^{λy}, resulting in the expected exponential growth. If the initial operator is 'small', the initial complexity is also small, C_K(0) = O(1) and C_K(t_*) = O(S). On the other hand, in the post-scrambling regime x(y) ≈ vy with constant v = v_* = ελn_* ∼ ελS. At long times,
$$C_K(t) \approx \frac{v_*\, t}{\varepsilon^2}\int dy\; |\psi_i(y)|^2 = \frac{v_*\, t}{\varepsilon} \sim \lambda\, S\, t\;,$$
using the normalization condition (3.18). We conclude that the complexity grows exponentially fast during scrambling and only linearly after scrambling, with a rate of order λS. The time scale for the amplitude to reach n_max is of order
$$t_K \sim \frac{\varepsilon\, n_{\max}}{v_*} \sim \frac{n_{\max}}{\lambda S} = e^{O(S)}\;.$$
At times larger than t_K, the function ψ(y,t) remains stuck near the endpoint, because the drift towards large values of x prevents the distribution from bouncing back. This implies that the complexity eventually levels off and remains constant. Over extremely long time scales, however, we know that the solution of the discrete equation (2.8) will necessarily undergo Poincaré recurrences. The time scale for this to happen is of order
$$t_P \sim e^{\,e^{O(S)}}\;,$$
up to a prefactor depending on the precision ϵ with which we demand recurrence. We summarize the qualitative behavior of the K-complexity for a fast scrambler in figure 2.
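The two regimes can be seen directly by integrating the discrete recursion (2.8) with a ramp-plus-plateau Lanczos sequence; in the sketch below the crossover scale n_* and the plateau height are set by hand to mimic figure 1, and the comparison of the late-time slope with 2αn_* is an order-of-magnitude check.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, n_star, n_max = 1.0, 30, 3000          # toy values: n_* ~ S, n_max >> S
n = np.arange(1, n_max + 1)
b = alpha * np.minimum(n, n_star)             # ramp b_n = alpha*n, then plateau

def rhs(t, phi):
    # phi_n' = b_n phi_{n-1} - b_{n+1} phi_{n+1}, with phi_{-1} = 0
    d = np.empty_like(phi)
    d[0] = -b[0] * phi[1]
    d[1:-1] = b[:-2] * phi[:-2] - b[1:-1] * phi[2:]
    d[-1] = b[-2] * phi[-2]
    return d

phi0 = np.zeros(n_max); phi0[0] = 1.0
ts = np.linspace(0, 40, 200)
sol = solve_ivp(rhs, (0, ts[-1]), phi0, t_eval=ts, rtol=1e-8, atol=1e-10)
CK = (np.arange(n_max)[:, None] * sol.y ** 2).sum(axis=0)
# Exponential growth up to C_K ~ n_*, then slope ~ 2*b_plateau = 2*alpha*n_star:
print(np.polyfit(ts[100:], CK[100:], 1)[0], 2 * alpha * n_star)
```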
Operator randomization and K-entropy
Having established the existence of a very long post-scrambling era of linear K-complexity growth, we now begin a more detailed study of this dynamical regime. In particular, we discuss the degree of randomization of the operator O_t when it is expanded in the Krylov basis. For this purpose, we introduce the notion of K-entropy. In order to motivate its definition, we momentarily go back to the scrambling period. The exact solution (3.12) describes two a priori independent phenomena: there is an exponentially fast growth of K-complexity and, at the same time, an efficient randomization of the operator over the time-dependent span of the Krylov operator set. This is intuitively clear from the qualitative form of (3.12), which eventually looks like a uniform distribution of size n_peak and amplitude $1/\sqrt{n_{\rm peak}}$. A more formal characterization of this uniformity is given by the 'operator entropy' or K-entropy, which we define by
$$S_K = -\sum_n |\varphi_n|^2\, \log |\varphi_n|^2\;. \qquad (4.1)$$
Since the quantities |ϕ_n|² define a probability distribution, the so-defined S_K satisfies the usual properties of an entropy function. If the ϕ_n amplitude is sharply peaked at a particular value of n, large or small, the K-entropy is small. On the other hand, if the distribution is completely uniform over the interval [0, n_M], then S_K = log(n_M). Applying the definition (4.1) to (3.12), we can determine the growth of K-entropy to be expected from a typical fast scrambler. The result of a numerical evaluation is a linear growth with slope close to 2α = λ. Hence, the scrambling dynamics increases K-complexity at an exponential rate, and K-entropy at a linear rate. It turns out that the linear growth of K-entropy for a fast scrambler is captured by the continuum solution of the leading equation (3.14). The continuum versions of the K-entropy in the x-frame and the y-frame are
$$S_K = -\frac{1}{\varepsilon}\int dx\; |\varphi|^2 \log |\varphi|^2 = -\frac{1}{\varepsilon}\int dy\; |\psi|^2 \log\frac{|\psi|^2}{v}\;. \qquad (4.2)$$
Extracting the velocity-dependent term in the y-frame expression, we have
$$S_K = -\frac{1}{\varepsilon}\int dy\; |\psi|^2 \log |\psi|^2 \;+\; \frac{1}{\varepsilon}\int dy\; |\psi|^2 \log v(y)\;.$$
In the leading continuum approximation, any y-frame solution has the form ψ(y,t) = ψ_i(y − t). Hence, the first term is time-independent, whereas the second term computes the average of log v(y) over the operator probability distribution. In periods where the complexity growth is accelerated, such as the scrambling period of a fast scrambler, there is entropy production. Inserting the leading continuous solution (3.22) of the scrambling regime into (4.2), one obtains
$$S_K(t) \approx \lambda\, t + \mathrm{const}\;,$$
which matches the numerical evaluation for the exact solution (3.12). This means that the simple chiral wave equation with a mass term (3.14) actually gives a very accurate description of the scrambling regime, not only accounting for the growth of K-complexity but also capturing quantitatively the growth of K-entropy.
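A direct check, evaluating (4.1) on the exact amplitudes (3.12): the fitted slope should be close to λ = 2α.

```python
import numpy as np

alpha, n = 1.0, np.arange(30000)

def S_K(t):
    p = (np.tanh(alpha * t) ** n / np.cosh(alpha * t)) ** 2   # |phi_n|^2
    p = p[p > 1e-300]                                         # avoid log(0)
    return -np.sum(p * np.log(p))

ts = np.linspace(2.0, 4.5, 12)
slope = np.polyfit(ts, [S_K(t) for t in ts], 1)[0]
print(slope, 2 * alpha)        # slope ~ lambda = 2*alpha
```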
In the post-scrambling period, where v(x) ≈ constant, the mass term in (3.14) is negligible and the amplitude propagates ballistically in both frames. Therefore, the leading-order term in the continuum approximation to the amplitude does not detect any significant growth of the K-entropy. We now turn to analyze what can be seen at higher orders.
The continuum amplitude at post-scrambling
We have seen that, while operator randomization is well accounted for in the continuum approximation for the scrambling regime, it is completely missed at leading order in the post-scrambling regime. It is an important question to determine whether K-entropy can be produced at all during the enormously long post-scrambling era.
In this section we show that the next-to-leading approximation to the evolution equation (3.13) already begins to incorporate the randomization effect, but ultimately falls short of the goal. Carrying the short-distance expansion of (3.13) to higher orders, one finds (3.14) with further corrections on the right-hand side. At order ε there is a term
$$-\frac{\varepsilon}{2}\left(v'(x)\,\partial_x + \tfrac{1}{2}\,v''(x)\right)\varphi(x,t)\;,$$
which is a small effect in the scrambling regime and completely negligible in the post-scrambling regime, where v is constant. At order ε² we find two terms:
$$-\frac{\varepsilon^2}{4}\, v'(x)\, \partial_x^2\, \varphi(x,t) \qquad \text{and} \qquad -\frac{\varepsilon^2}{6}\, v(x)\, \partial_x^3\, \varphi(x,t)\;.$$
The first term is a diffusion contribution with the wrong sign of the diffusion constant, and it only acts for a short time in the scrambling era. The second term is active throughout the long post-scrambling era and thus corresponds to the leading correction which is, in principle, capable of incorporating a broadening effect.
Let us then consider the O(ε²)-corrected equation in the post-scrambling regime t > t_* and in the y-frame,
$$(\partial_t + \partial_y)\,\psi(y,t) = -\gamma\, \partial_y^3\, \psi(y,t)\;, \qquad (4.5)$$
written in terms of the rescaled amplitude $\psi(y,t) = \sqrt{v(y)}\,\varphi(y,t)$, which has standard L² norm in the y-frame. The coefficient controlling the new term is certainly small,
$$\gamma = \frac{\varepsilon^2}{6\, v^2} \sim \frac{1}{6\,(\lambda\, n_*)^2}\;,$$
where we have used that v = ελn_* ∼ ελS in the post-scrambling regime. In order to solve (4.5) we seek a solution of Fourier form,
$$\psi(y,t) = \int \frac{dk}{2\pi}\; \psi_k\; e^{i k y - i \omega_k t}\;,$$
with dispersion $\omega_k = k - \gamma k^3$. Let us set an initial condition at t = t_*, specifying the amplitude as ψ(y, t_*) = ψ_i(y), with ψ_k its Fourier transform, and write ∆t = t − t_*. By the rescaling $k \to k/(3\gamma\Delta t)^{1/3}$ we can evaluate the momentum integral in terms of the Airy function to obtain
$$\psi(y,t) = \frac{1}{(3\gamma\Delta t)^{1/3}} \int dy'\; \psi_i(y')\; \mathrm{Ai}\!\left(\frac{y - y' - \Delta t}{(3\gamma\Delta t)^{1/3}}\right), \qquad (4.9)$$
i.e. the free propagator of (4.5) is an Airy kernel evaluated at
$$z = \frac{y - \Delta t}{(3\gamma\Delta t)^{1/3}}\;.$$
It is already clear from this expression that the approximation begins to capture the randomization effect, due to the properties of the Airy function. To see this, let us consider an initial delta-function pulse, $\psi_i(y) = A\,\delta(\Delta y)$, where ∆y = y − y_*. The constant A is fixed by requiring the correct normalization of ψ(y,t). Evaluating the asymptotics long after the ballistic front y ∼ t has passed, i.e. ∆t ≫ ∆y, one obtains an oscillatory Airy tail of amplitude $|z|^{-1/4}$, which fixes the order of magnitude of the constant to be $A \sim (3\gamma)^{1/4}\sqrt{\varepsilon}$, so that the operator amplitude looks like a rapidly oscillating function over the interval 0 < ∆y < ∆t of the form
$$\psi(y,t) \sim \sqrt{\frac{\varepsilon}{\Delta t}}\;\; \mathrm{Osc}_{[0,\Delta t]}\;,$$
where Osc_{[0,t]} stands for an oscillation component with unit amplitude (a cosine function) and support on the interval [0,t]. Converting back to the x-frame amplitude ϕ = ψ/√v, we have an oscillating function with amplitude of order $\sqrt{\varepsilon/(v t)}$ and support on the interval [0, vt]. This result is interesting, since it shows perfectly efficient randomization (cf. figure 3). The very flat and long tail yields a K-entropy of order
$$S_K \sim \log(v t/\varepsilon) = \log(2 b t) \qquad (4.14)$$
at long times. However, a delta-function initial condition is not a realistic starting point for the post-scrambling regime. First, such a singular initial configuration is beyond the regime of applicability of the low-derivative approximations to (3.13). Second, it was argued that a period of fast scrambling with S degrees of freedom outputs a distribution with an x-width of order x_* ∼ εn_* ∼ εS ≫ ε. Hence, in order to check whether the present approximation captures randomization, we must input an initial distribution of width δx ∼ εS. Equivalently, in the y-frame at post-scrambling this amounts to δy ∼ δx/v_* ∼ λ^{−1}.
Picking a Gaussian ansatz of width δ for the normalized y-frame distribution, the integral (4.9) may be evaluated exactly. Looking at the long-time tail, we focus on the region of large ∆y with ∆t ≫ ∆y, where only the leading correction to the Airy function profile remains relevant. This correction induces a suppression of order $\exp(-\delta^2/6\gamma)$ of the tail amplitude. Putting all factors together, one finally finds
$$\varphi_{\rm tail} \sim \delta\, \lambda\, n_*\; e^{-(\delta\, \lambda\, n_*)^2}$$
up to O(1) factors. We conclude that, unless we pick a lattice-size distribution, with δ ∼ 1/λS, the randomization is all but washed out for smooth signals. In particular, for the choice of width δ ∼ 1/λ, which corresponds to an initial scrambling period of duration t_* = λ^{−1} log S, the tail is exponentially suppressed,
$$\varphi_{\rm tail} \sim S\; e^{-S^2}\;, \qquad (4.19)$$
and the propagation is essentially ballistic. We show the difference between the two choices of initial width in figures 4 and 5.
On the other hand, the fact that randomization arises when the signal is extrapolated to cutoff scales, beyond the domain where we trust the equation (4.5), suggests that perhaps randomization is a true property of the discrete evolution equation.
The discrete amplitude at post-scrambling
In search of K-entropy production in the post-scrambling regime, we return to the discrete problem (2.8), which becomes
$$\dot\varphi_n(t) = b\,\big(\varphi_{n-1}(t) - \varphi_{n+1}(t)\big) \qquad (4.20)$$
when the Lanczos coefficients are approximated by a constant, b_n ≈ b. In the physical situation of interest, this equation holds for n > n_* ∼ S, and the solution must be matched to a solution of the scrambling regime, such as (3.12). Ignoring boundary conditions for the time being, a particular solution of (4.20) is just a Bessel function,
$$\varphi_n(t) = J_n(2bt)\;. \qquad (4.21)$$
It has the correct normalization at t = 0, with all amplitudes vanishing except ϕ_0(0) = 1. Therefore the Bessel functions describe the spread of a distribution which begins sharply localized at the origin. A glance at the plot in figure 6 reveals that randomization is very efficient, featuring a tail similar to that of the Airy function found in the last section. Using the so-called 'approximation by tangents' (cf. [21]), we can write, for n large at fixed ratio 2bt/n > 1,
$$J_n(2bt) \approx \sqrt{\frac{2}{\pi}}\; \frac{\cos\!\left(\sqrt{4b^2t^2 - n^2} \,-\, n\,a \,-\, \frac{\pi}{4}\right)}{\left(4b^2t^2 - n^2\right)^{1/4}}\;,$$
where $a = \arctan\!\left(\sqrt{4b^2t^2 - n^2}\,/\,n\right)$. As the distribution moves to large n at constant velocity, equal to 2b, there is a rapidly oscillating tail with an almost flat envelope and height of order $(4b^2t^2 - n^2)^{-1/4}$. Therefore, the Bessel function restricted to positive n behaves qualitatively like the Airy function, featuring an oscillating tail with amplitude of order $1/\sqrt{2bt}$, supported on the interval [0, 2bt].
The Bessel function amplitude has, however, unphysical features in this case, because it leaks into the negative-n axis: the ansatz (4.21) fails to satisfy the correct boundary condition ϕ_{−1} = 0. This implies that the probability density |ϕ_n|² is not conserved on the physical configurations with n ≥ 0. The problem can be fixed by a superposition of two Bessel functions,
$$R_n(2bt) = J_n(2bt) + J_{n+2}(2bt)\;, \qquad (4.23)$$
which vanishes identically at n = −1 for all times, as one can verify using the identity $J_{-n}(z) = (-1)^n J_n(z)$. As a result, R_{−1}(2bt) = 0 is effectively a 'Dirichlet' condition separating the dynamics of the physical region n ≥ 0 from the dynamics of the unphysical region n < −1. Furthermore, $R_n(t=0) = \delta_{n,0} + \delta_{n,-2}$ and, since one can now consistently restrict attention to positive values of n, it follows that (4.23) does satisfy the physical conditions of being narrowly localized at t = 0 and permanently confined to the n ≥ 0 region. The function (4.23) can be rewritten, using a standard Bessel recurrence, in a form that makes manifest a linear enveloping behavior at large n,
$$R_n(2bt) = \frac{n+1}{b\,t}\; J_{n+1}(2bt)\;, \qquad (4.24)$$
as shown in figure 7. Despite this accumulation of probability at the higher end of the n spectrum, one can check using the form (4.24) that the K-entropy does grow at a logarithmic rate, $S_K[R_n(2bt)] \propto \log(2bt)$ at long times, the hallmark of good operator randomization. The function R_n(2bt) locates the initial pulse right next to the Dirichlet condition. In order to better simulate the type of configuration prepared by a previous scrambling period, it is convenient to engineer analogs of the R_n function with initial pulses located at any desired position. These 'displaced' pulses can be manufactured by generalizing (4.23) to
$$R^{(k)}_n(2bt) = J_{n-k}(2bt) + (-1)^k\, J_{n+k+2}(2bt) \qquad (4.25)$$
for any non-negative integer k. These functions meet the goal, since they vanish at n = −1 for all times and $R^{(k)}_n(0) = \delta_{n,k} + (-1)^k\, \delta_{n,-k-2}$. Hence, we have a function which starts with a unit pulse at any n = k ≥ 0, while remaining confined to the n ≥ 0 domain at all times; the original pulse function (4.23) corresponds to the particular case k = 0.
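These statements are easy to verify numerically with scipy's Bessel functions: the Dirichlet condition at n = −1, and the logarithmic growth of the K-entropy of the reflected solution. The truncation and normalization below are illustrative choices.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind

b = 1.0

def R(k, n, t):
    """Displaced pulse R^(k)_n(2bt) = J_{n-k}(2bt) + (-1)^k J_{n+k+2}(2bt)."""
    z = 2 * b * t
    return jv(n - k, z) + (-1) ** k * jv(n + k + 2, z)

# Dirichlet condition at n = -1 holds for all times and all k:
for k in range(4):
    assert abs(R(k, -1, 3.7)) < 1e-12

# K-entropy of the k = 0 solution grows logarithmically in 2bt:
n = np.arange(0, 40000)
for t in [10.0, 100.0, 1000.0]:
    p = R(0, n, t) ** 2
    p = p / p.sum()                      # normalize, restricted to n >= 0
    SK = -np.sum(p[p > 0] * np.log(p[p > 0]))
    print(t, SK, np.log(2 * b * t))      # S_K tracks log(2bt) up to O(1)
```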
For generic values of k and long times, the k-pulse functions $R^{(k)}_n(2bt)$ look like modulated Bessel functions: they display a tail of average height of order $1/\sqrt{2bt}$ and are supported on the ballistic domain bounded by $n_t \sim 2bt$ (cf. figure 8); therefore, they also feature logarithmically increasing K-entropies.
With these ingredients in place, we are ready to discuss the more realistic case of an initial pulse of arbitrary width K_0. This can be achieved by a superposition of k-pulses,
$$\varphi_n(t) = \sum_{k=0}^{K_0 - 1} \alpha_k\; R^{(k)}_n(2bt)\;.$$
In particular, choosing K_0 ∼ S simulates the kind of signal that is prepared by a previous period of fast scrambling. To simplify matters, let us consider a square pulse with $\alpha_k = 1/\sqrt{K_0}$. An example of the long-time evolution of such a pulse is shown in figure 9. We observe a stable peak which propagates ballistically and an approximately uniform tail obtained by averaging over the tails of single-pulse functions.

Figure 10. Growth of K-entropy for an initial square pulse of width K_0 = 5. Notice the asymptotic logarithmic growth and the initial finite-size effects due to the details of the square pulse.

Figure 11. Sketch of the K-entropy dynamics in a fast scrambler with S degrees of freedom and Lyapunov exponent λ. A linear growth proportional to λt during scrambling is followed by a logarithmic increase in the post-scrambling era, according to a scaling log(2Sλt), and a final saturation beyond times of order t_K.

Assuming that the phases of each single-
pulse function add up randomly, we estimate that a randomization tail exists with height of order $1/\sqrt{2bt}$ and width of order 2bt, leading to a logarithmic growth of the K-entropy, $S_K(t) \sim \log(2bt)$. This logarithmic growth of the K-entropy can be confirmed by direct numerical evaluation (cf. figure 10).
The conclusion is that randomization does occur in order of magnitude. There is a persistent ballistic component which makes an O(1) fraction of the normalization, but the K-entropy at long times is dominated by the oscillating tail. Eventually, after times of order t K , the K-entropy becomes of order log(n max ), thereby growing from O(log S) at t * to O(S) by the exponential time scale t K . A qualitative picture of the K-entropy dynamics in a fast scrambler is presented in figure 11.
Conclusions
In this paper we have explored the long-time behavior of K-complexity, an algebraic notion of operator complexity which relies on an effective dimensionality of a linear subspace containing the operator's time evolution. This concept was introduced in [1] as a useful characterization of chaotic behavior, in the sense of being governed by the same Lyapunov exponent as OTOC correlators.
Using the Eigenstate Thermalization Hypothesis as a starting point, we have argued that K-complexity grows linearly at late times, after the system has been scrambled, with a rate which is extensive in the size S of the system. Eventually, the K-complexity must saturate at a maximum bounded by e O(S) , in a time also proportional to e O(S) , and stays approximately constant thereafter, until Poincaré recurrences begin to show up at times scaling as a double exponential of the entropy, exp(e O(S) ).
We furthermore notice that, during the exponentially long post-scrambling period when K-complexity grows linearly, the operator is randomized in order of magnitude. This can be characterized by the logarithmic growth of the K-entropy, which measures the degree of uniformity of the amplitudes ϕ_n(t). More precisely, we find numerical evidence for a growth law of the form S_K ∼ log(2bt) for bt ≫ 1, where b denotes the asymptotic value of the Lanczos sequence. At complexity saturation, the K-entropy also saturates at a value of order log(n_max) = O(S). It would be interesting to study the consequences of this randomization on the long-time behavior of correlation functions, along the lines of [9, 22-25].
The outstanding open question regarding these results is the holographic representation of K-complexity. During the scrambling period, there is an approximate correspondence between K-complexity and operator size. There are proposals for concrete relations between operator size and bulk quantities [26-28]. In these examples, the holographic map is specified between the process of particle free-fall towards a horizon and a scrambling process in the holographic dual. The natural expectation is that a period of linear growth of complexity should be associated to properties of the motion in the interior of the black hole.
"Mathematics"
] |
Universal heavy-ball method for nonconvex optimization under Hölder continuous Hessians
We propose a new first-order method for minimizing nonconvex functions with Lipschitz continuous gradients and Hölder continuous Hessians. The proposed algorithm is a heavy-ball method equipped with two particular restart mechanisms. It finds a solution where the gradient norm is less than ε in O(H_ν^{1/(2+2ν)} ε^{−(4+3ν)/(2+2ν)}) function and gradient evaluations, where ν ∈ [0, 1] and H_ν are the Hölder exponent and constant, respectively. Our algorithm is ν-independent and thus universal; it automatically achieves the above complexity bound with the optimal ν ∈ [0, 1] without knowledge of H_ν. In addition, the algorithm does not require other problem-dependent parameters as input, including the gradient's Lipschitz constant or the target accuracy ε. Numerical results illustrate that the proposed method is promising.
Introduction
This paper studies general nonconvex optimization problems

min_{x ∈ R^d} f(x),

where f : R^d → R is twice differentiable and lower bounded, i.e., inf_{x∈R^d} f(x) > −∞. Throughout the paper, we impose the following assumption of Lipschitz continuous gradients.

Assumption 1. There exists a constant L > 0 such that ∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥ for all x, y ∈ R^d.
First-order methods [3, 31], which access f through function and gradient evaluations, have gained increasing attention because they are suitable for large-scale problems. A classical result is that the gradient descent method finds an ε-stationary point (i.e., x ∈ R^d where ∥∇f(x)∥ ≤ ε) in O(ε^{−2}) function and gradient evaluations under Assumption 1. Recently, more sophisticated first-order methods have been developed to achieve faster convergence for smoother functions. Such methods [2, 6, 28, 33-35, 53] have complexity bounds of O(ε^{−7/4}) or Õ(ε^{−7/4}) under Lipschitz continuity of Hessians in addition to gradients. This research stream raises two natural questions:

Question 1. How fast can first-order methods converge under smoothness assumptions stronger than Lipschitz continuous gradients but weaker than Lipschitz continuous Hessians?
Question 2. Can a single algorithm achieve both of the following complexity bounds: O(ε^{−2}) for functions with Lipschitz continuous gradients, and O(ε^{−7/4}) for functions with Lipschitz continuous gradients and Hessians?
Question 2 is also crucial from a practical standpoint because it is often challenging for users of optimization methods to check whether a function of interest has a Lipschitz continuous Hessian. It would be nice if there were no need to use several different algorithms to achieve faster convergence. Motivated by these questions, we propose a new first-order method and provide its complexity analysis under Hölder continuity of Hessians. Hölder continuity generalizes Lipschitz continuity and has been widely used for complexity analyses of optimization methods [12, 13, 18, 20, 22-25, 30, 38]. Several properties and an example of Hölder continuity can be found in [23, Section 2].
• Our algorithm requires no knowledge of problem-dependent parameters, including the optimal ν, the Lipschitz constant L, or the target accuracy ε.
Let us describe our ideas for developing such an algorithm. We employ the Hessian-free analysis recently developed for Lipschitz continuous Hessians [35] to estimate the Hessian's Hölder continuity with only first-order information. The Hessian-free analysis uses inequalities that include the Hessian's Lipschitz constant H_1 but not a Hessian matrix itself, enabling us to estimate H_1. Extending this analysis to general ν allows us to estimate the Hölder constant H_ν, given ν ∈ [0, 1]. We thus obtain an algorithm that requires ν as input and has the complexity bound (2) for the given ν. However, the resulting algorithm lacks usability because the ν that minimizes (2) is generally unknown.
Our main idea for developing a ν-independent algorithm is to set ν = 0 in the above ν-dependent algorithm. This may seem strange, but we prove that it works; a carefully designed algorithm for ν = 0 achieves the complexity bound (2) for any ν ∈ [0, 1]. Although we design an estimate for H_0, it also has a relationship with H_ν for ν ∈ (0, 1], as will be stated in Proposition 1. This proposition allows us to obtain the desired complexity bounds without specifying ν.
To evaluate the numerical performance of the proposed method, we conducted experiments with standard machine-learning tasks. The results illustrate that the proposed method outperforms state-of-the-art methods.
Notation. For vectors a, b ∈ R^d, let ⟨a, b⟩ denote the dot product and ∥a∥ the Euclidean norm. For a matrix A ∈ R^{m×n}, let ∥A∥ denote the operator norm, or equivalently the largest singular value.
Related work
This section reviews previous studies from several perspectives and discusses similarities and differences between them and this work.
Complexity of second-order methods using Hölder continuous Hessians. The Hölder continuity of Hessians has been used to analyze second-order methods. Grapiglia and Nesterov [23] proposed a regularized Newton method that finds an ε-stationary point in O(ε^{−(2+ν)/(1+ν)}) evaluations of f, ∇f, and ∇²f, where ν ∈ [0, 1] is the Hölder exponent of ∇²f. The complexity bound generalizes previous O(ε^{−3/2}) bounds under Lipschitz continuous Hessians [10, 11, 14, 40]. We make the same assumption of Hölder continuous Hessians as in [23] but do not compute Hessians in the algorithm. Table 2 summarizes the first-order and second-order methods together with their assumptions.
Universality for Hölder continuity. When Hölder continuity is assumed, it is preferable that algorithms not require the exponent ν as input because a suitable value of ν tends to be hard to find in real-world problems. Such ν-independent algorithms, called universal methods, were first developed as first-order methods for convex optimization [30, 38] and have since been extended to other settings, including higher-order methods and nonconvex problems [12, 13, 20, 22-25]. Within this research stream, this paper proposes a universal method in a new setting: a first-order method under Hölder continuous Hessians. Because of the differences in settings, the existing techniques for universality cannot be applied directly; we obtain a universal method by setting ν = 0 in a ν-dependent algorithm, as discussed in Section 1.
Heavy-ball methods. Heavy-ball (HB) methods are a kind of momentum method first proposed by Polyak [43] for convex optimization. Although some complexity results have been obtained for (strongly) convex settings [21, 32], they are weaker than the optimal bounds given by Nesterov's accelerated gradient method [36, 39]. For nonconvex optimization, HB and its variants [15, 29, 46, 50] have been used in practice with great success, especially in deep learning, while studies on theoretical convergence analysis are few [34, 41, 42]. O'Neill and Wright [42] analyzed the local behavior of the original HB method, showing that the method is unlikely to converge to strict saddle points. Ochs et al. [41] proposed a generalized HB method, iPiano, which enjoys a complexity bound of O(ε^{−2}) under Lipschitz continuous gradients, of the same order as that of GD. Li and Lin [34] proposed an HB method with a restart mechanism that achieves a complexity bound of O(ε^{−7/4}) under Lipschitz continuous gradients and Hessians. Our algorithm is another HB method with a different restart mechanism that enjoys more general complexity bounds than Li and Lin [34], as discussed in Section 1.
Comparison with [35]. This paper shares some mathematical tools with [35] because we utilize the Hessian-free analysis introduced there to estimate the Hessian's Hölder continuity. While the analysis in [35] is for Nesterov's accelerated gradient method under Lipschitz continuous Hessians, we here analyze Polyak's HB method under Hölder continuity. Thanks to the simplicity of the HB momentum, our estimate for the Hölder constant is easier to compute than the estimate for the Lipschitz constant proposed in [35], which improves the efficiency of our algorithm. We would like to emphasize that a ν-independent algorithm cannot be derived simply by applying the mathematical tools in [35]. It should also be mentioned that we have not confirmed whether it is impossible, or merely very challenging, to develop a ν-independent algorithm with Nesterov's momentum under Hölder continuous Hessians.
Lower bounds. So far, we have discussed upper bounds on complexity, but there are also some studies on lower bounds. Carmon et al. [8] proved that no deterministic or stochastic first-order method can improve the complexity of O(ε^{−2}) under the assumption of Lipschitz continuous gradients alone. (See [8, Theorems 1 and 2] for more rigorous statements.) This result implies that GD is optimal in terms of complexity under Lipschitz continuous gradients. Carmon et al. [9] showed a lower bound of Ω(ε^{−12/7}) for first-order methods under Lipschitz continuous gradients and Hessians.
Compared with the upper bound of O(ε^{−7/4}) under the same assumptions, there is still a Θ(ε^{−1/28}) gap. Closing this gap would be an interesting research question, though this paper does not focus on it.
Preliminary results
The following lemma is standard for the analyses of first-order methods.
Lemma 1 (e.g., [37, Lemma 1.2.3]). Under Assumption 1, the following holds for any x, y ∈ R^d:

|f(y) − f(x) − ⟨∇f(x), y − x⟩| ≤ (L/2)∥y − x∥².

This inequality helps estimate the Lipschitz constant L and evaluate the decrease in the objective function per iteration. We also use the following inequalities derived from Hölder continuous Hessians.
Lemma 3. For all x, y ∈ R^d and ν ∈ [0, 1] such that H_ν < +∞, the following holds: … The proofs are given in Appendix A.1. These lemmas generalize [35, Lemmas 2 and 3] for Lipschitz continuous Hessians (i.e., ν = 1). It is important to note that the inequalities in Lemmas 2 and 3 are Hessian-free: they include the Hessian's Hölder constant H_ν but not a Hessian matrix itself. Accordingly, we can adaptively estimate the Hölder continuity of ∇²f in the algorithm without computing Hessians.
Algorithm
The proposed method, Algorithm 1, is a heavy-ball (HB) method equipped with two particular restart schemes. In the algorithm, the iteration counter k is reset to 0 when HB restarts on Line 8 or 10, whereas the total iteration counter K is not. We refer to the period between one reset of k and the next as an epoch. Note that it is unnecessary to implement K in the algorithm; it is included here only to make the statements in our analysis concise.
The algorithm uses estimates ℓ and h_k for the Lipschitz constant L and the Hölder constant H_0, respectively. The estimate ℓ is fixed during an epoch, while h_k is updated at each iteration, hence the subscript k.
Update of solutions
With an estimate ℓ of the Lipschitz constant L, Algorithm 1 defines a solution sequence (x_k) through the update rules (3) and (4), with v_0 = 0 and for k ≥ 1. Here, (v_k) is the velocity sequence, and 0 ≤ θ_k ≤ 1 is the momentum parameter. Let x_{−1} := x_0 for convenience, which makes (4) valid for k = 0. This type of optimization method is called a heavy-ball method or Polyak's momentum method.
In this paper, we use the simplest parameter setting, θ_k = 1 for all k ≥ 1. Our choice of θ_k differs from the existing ones; the existing complexity analyses [16, 17, 21, 32, 34, 43] of HB prohibit θ_k = 1. For example, Li and Lin [34] proposed a different schedule for θ_k. Our new proof technique, described later in Section 5.1, enables us to set θ_k = 1. We will later use the averaged solution x̄_k, the running mean of x_1, …, x_k, to compute the estimate h_k for H_0 and to set the best solution x⋆_k. The averaged solution can be computed efficiently with a simple recursion: x̄_k = x̄_{k−1} + (x_k − x̄_{k−1})/k.
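As a concrete illustration, here is a minimal sketch of one epoch in Python. The exact update rules (3)-(4) are not reproduced in the text above, so the sketch assumes the standard Polyak heavy-ball form v_k = v_{k−1} − ∇f(x_{k−1})/ℓ and x_k = x_{k−1} + v_k with θ_k = 1, together with the running-average recursion for x̄_k.

```python
import numpy as np

def hb_epoch(grad_f, x0, ell, num_iters):
    """One epoch of heavy-ball iterations with theta_k = 1 (a sketch; the
    standard Polyak form is assumed for the update rules (3)-(4))."""
    x = x0.copy()
    v = np.zeros_like(x0)       # v_0 = 0
    x_bar = x0.copy()
    for k in range(1, num_iters + 1):
        v = v - grad_f(x) / ell          # momentum never damped (theta_k = 1)
        x = x + v
        x_bar += (x - x_bar) / k         # running average of x_1, ..., x_k
    return x, x_bar
```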
Estimation of Hölder continuity
Let … to simplify the notation. Our analysis uses the following inequalities, due to Lemmas 2 and 3.
Algorithm 1 requires no information on the Hölder continuity of ∇²f; it estimates it automatically. To illustrate the trick, let us first consider a prototype algorithm that works when a value of ν ∈ [0, 1] with H_ν < +∞ is given, using estimates that come from Lemma 4. This estimation scheme yields a ν-dependent algorithm that has the complexity bound (2) for the given ν, though we will omit the details. The algorithm is not so practical because it requires ν ∈ [0, 1] such that H_ν < +∞ as input. However, perhaps surprisingly, setting ν = 0 in the ν-dependent algorithm gives a ν-independent algorithm that achieves the bound (2) for all ν ∈ [0, 1]. Algorithm 1 is the ν-independent algorithm obtained in that way. Let h_0 := 0 for convenience. At iteration k ≥ 1 of each epoch, we use the estimate h_k for H_0 defined by (10). The above inequalities were obtained by plugging ν = 0 into (8) and (9).
Although we designed h_k to estimate H_0, it fortunately also relates to H_ν for general ν ∈ [0, 1]. The following upper bound on h_k shows the relationship between h_k and H_ν, which will be used in the complexity analysis.

Proposition 1. For all k ≥ 1 and ν ∈ [0, 1] such that H_ν < +∞, the following holds: …

Proof. Lemma 4 gives …; hence, definition (10) of h_k yields …, and the desired result follows inductively.

For ν = 0, Proposition 1 gives a natural upper bound, h_k ≤ H_0, since the estimate h_k is designed for H_0 based on Lemma 4. For ν ∈ (0, 1], the upper bound can become tighter when kS_k is small. Indeed, the iterates (x_k) are expected to move less significantly within an epoch as the algorithm proceeds. Accordingly, (S_k) increases more slowly in later epochs, yielding a tighter upper bound on h_k. This trick improves the complexity bound from O(ε^{−2}) for ν = 0 to O(ε^{−(4+3ν)/(2+2ν)}) for general ν ∈ [0, 1].
Restart mechanisms
Algorithm 1 is equipped with two restart mechanisms. The first one uses the standard descent condition to check whether the current estimate ℓ of the Lipschitz constant L is large enough. If the descent condition (13) does not hold, HB restarts with a larger ℓ from the best solution x⋆_k := argmin_{x ∈ {x_0, …, x_k, x̄_1, …, x̄_k}} f(x) found during the epoch. We consider not only x_0, …, x_k but also the averaged solutions x̄_1, …, x̄_k as candidates for the next starting point because averaging may stabilize the behavior of the HB method. As we will show later in Lemma 6, the gradient norm at averaged solutions is small, which leads to stability. For strongly convex quadratic problems, Danilova and Malinovsky [16] also show that averaged HB methods have a smaller maximal deviation from the optimal solution than the vanilla HB method. A similar effect is expected for nonconvex problems in the neighborhood of local optima, where a quadratic approximation is justified.
The second restart scheme resets the momentum effect when k becomes large; if condition (14) is satisfied, HB restarts from the best solution x⋆_k. At the restart, we can reset ℓ to a smaller value in the hope of improving practical performance, though decreasing ℓ is not necessary for the complexity analysis. This restart scheme guarantees that condition (15) holds at iteration k of each epoch. The Lipschitz estimate ℓ increases only when the descent condition (13) is violated. On the other hand, Lemma 1 implies that condition (13) always holds as long as ℓ ≥ L. Hence, we have the following upper bound on ℓ.
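The interplay of the two restarts can be summarized in the following self-contained skeleton. The concrete conditions (13) and (14) are not reproduced in the text, so simple stand-in tests are used and labeled as such; only the control flow (enlarge ℓ by α on a failed descent check, optionally shrink it by β on a momentum reset, and restart from the best solution) reflects the description above.

```python
import numpy as np

def universal_hb_skeleton(f, grad_f, x0, ell=1e-3, alpha=2.0, beta=0.9,
                          eps=1e-6, max_epochs=100):
    """Double-restart heavy-ball skeleton (conditions (13)/(14) are stand-ins)."""
    x_start = x0.copy()
    for _ in range(max_epochs):
        x, v, x_bar = x_start.copy(), np.zeros_like(x0), x_start.copy()
        best_x, best_f = x_start.copy(), f(x_start)
        k = 0
        while True:
            k += 1
            v = v - grad_f(x) / ell
            x = x + v
            x_bar += (x - x_bar) / k
            for cand in (x, x_bar):          # best over x_i and averaged x_i
                if f(cand) < best_f:
                    best_x, best_f = cand.copy(), f(cand)
            if np.linalg.norm(grad_f(x)) <= eps:
                return x                      # eps-stationary point found
            if f(x) > f(x_start):             # stand-in for descent check (13)
                ell *= alpha                  # ell was too small: enlarge it
                break
            if k >= 1000:                     # stand-in for reset condition (14)
                ell *= beta                   # optional decrease of ell
                break
        x_start = best_x                      # restart from the best solution
    return x_start
```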
Objective decrease for one epoch
First, we evaluate the decrease in the objective function value during one epoch.
Lemma 5. Suppose that Assumption 1 holds and that the descent condition (16) holds for all 1 ≤ i ≤ k. Then, (17) holds under condition (15). Before providing the proof, let us remark on the lemma. Evaluating the decrease in the objective function is the central part of a complexity analysis. It is also an intricate part because the function value does not necessarily decrease monotonically in nonconvex accelerated methods. To overcome the non-monotonicity, previous analyses have employed different proof techniques. For example, Li and Lin [33] constructed a quadratic approximation of the objective, diagonalized the Hessian, and evaluated the objective decrease separately for each coordinate; Marumo and Takeda [35] designed a tricky potential function and showed that it is nearly decreasing.
This paper uses another technique to deal with the non-monotonicity. We observe that the solution x_k does not need to attain a small function value; it is sufficient for at least one of x_1, …, x_k to do so, thanks to our particular restart mechanism. This observation permits the left-hand side of (17) to be min_{1≤i≤k} f(x_i) rather than f(x_k) and makes the proof easier. The proof of Lemma 5 calculates a weighted sum of 2k − 1 inequalities derived from Lemmas 1 and 3, which is elementary compared with the existing proofs. Now, we provide that proof.
Proof of Lemma 5. Combining (16) with the update rules (3) and (4) yields … for 1 ≤ i ≤ k. For 1 ≤ i < k, we also have (11) and …. We will calculate a weighted sum of 2k − 1 inequalities. The left-hand side of the weighted sum is …. On the right-hand side of the weighted sum, some calculations with v_0 = 0 show that the inner-product terms ⟨v_{i−1}, v_i⟩ cancel out. The remaining terms on the right-hand side of the weighted sum are …. We now obtain …. Finally, we evaluate the coefficient on the right-hand side with (15) as …, which completes the proof.
The proof elucidates that the second restart condition (14) was designed to derive the lower bound of ℓ(2k − 1)/(4k) in (20).
For an epoch that ends at Line 10 in iteration k ≥ 1, Lemma 5 gives the bound (21); for an epoch that ends at Line 8 in iteration k ≥ 2, the lemma gives a corresponding bound. These bounds will be used to derive the complexity bound.
Upper bound on gradient norm
Next, we prove the following upper bound on the gradient norm at the averaged solution.
Complexity bound
Let l denote the upper bound on the Lipschitz estimate ℓ given in Proposition 2: l := max{ℓ init , αL}.
The following theorem shows iteration complexity bounds for Algorithm 1. Recall that α > 1 and 0 < β ≤ 1 are the input parameters of Algorithm 1.
Theorem 1. Suppose that Assumption 1 holds and inf_{x∈R^d} f(x) > −∞.
In Algorithm 1, when ∥∇f(x_k)∥ ≤ ε holds for the first time, the total iteration count K is at most …. In particular, if we set β = 1, then c_1 = 0 and the upper bound simplifies to (25).

Proof. We classify the epochs into three types:
• successful epoch: an epoch that does not find an ε-stationary point and ends at Line 10 with the descent condition (13) satisfied,
• unsuccessful epoch: an epoch that does not find an ε-stationary point and ends at Line 8 with the descent condition (13) unsatisfied,
• last epoch: the epoch that finds an ε-stationary point.
Let N_suc and N_unsuc be the numbers of successful and unsuccessful epochs, respectively, and let K_suc be the total iteration number of all successful epochs. Below, we fix ν ∈ [0, 1] arbitrarily such that H_ν < +∞. (Note that such a ν exists since H_0 ≤ 2L < +∞.)

Successful epochs. Let us focus on a successful epoch and let k denote the total number of iterations of that epoch, i.e., the epoch ends at iteration k. If k = 1, we have …. On the other hand, putting the restart condition (14) together with Proposition 1 yields (26), and hence (27). Combining (26) and (27) leads to …. Plugging them into (21) yields …, since ν ≥ 0. Summing these bounds over all successful epochs results in …, and hence (28).

Other epochs. Let k_1, …, k_{N_unsuc} and k_{N_unsuc+1} be the iteration numbers of the unsuccessful and the last epoch, respectively. Then, the total iteration number of these epochs can be bounded with the Cauchy-Schwarz inequality as in (29), where the sum with index i: k_i ≥ 2 runs over i = 1, …, N_unsuc + 1 such that k_i ≥ 2. We will evaluate N_unsuc and the sum of k_i². First, we have ℓ_init β^{N_suc} α^{N_unsuc} ≤ l and hence, from (28), the bound (30), where c_1 and c_2 are defined by (24). Next, let us focus on an epoch that ends at iteration k ≥ 2. Lemma 6 gives ε < ℓ·8S_{k−1}/k³ and hence …. Summing this bound over all unsuccessful and last epochs results in (31). Plugging (30) and (31) into (29) yields …, where the last inequality uses …. Putting this bound together with (28) gives an upper bound on the total iteration number of all epochs.

Algorithm 1 evaluates the objective function and its gradient at two points, x_k and x̄_k, in each iteration. Therefore, the number of evaluations is of the same order as the iteration complexity in Theorem 1.
The complexity bounds given in Theorem 1 may look somewhat unfamiliar since they involve an inf-operation over ν. Such a bound is a significant benefit of ν-independent algorithms. The ν-dependent prototype algorithm described immediately after Lemma 4 achieves the bound only for the given ν. In contrast, Algorithm 1 is ν-independent and automatically achieves the bound with the optimal ν, as shown in Theorem 1. The fact that the optimal ν is difficult to find also points to the advantage of our ν-independent algorithm. The complexity bound (25) also yields a looser O(ε^{−2}) bound, obtained by taking ν = 0 and using H_0 ≤ 2L ≤ 2l. This matches the classical bound of O(ε^{−2}) for GD; Theorem 1 thus shows that our HB method has a more elaborate complexity bound than GD.
Remark 1. Although we employed global Lipschitz and Hölder continuity in Assumption 1 and Definition 1, they can be restricted to the region that the iterates reach. More precisely, if we assume that the iterates (x_k) generated by Algorithm 1 are contained in some convex set C ⊆ R^d, we can replace all R^d in our analysis with C; we then obtain the same complexity bound as Theorem 1 with Lipschitz and Hölder continuity on C.
Numerical experiments
This section compares the performance of the proposed method with several existing algorithms. The experimental setup, including the compared algorithms and problem instances, follows [35]. We implemented the code in Python with JAX [4] and Flax [26] and executed it on a computer with an Apple M3 chip (12 cores) and 36 GB RAM. The source code used in the experiments is available on GitHub.
Compared algorithms
We compared the following six algorithms.
• JNJ2018 [28, Algorithm 2] is an accelerated gradient (AG) method for nonconvex optimization. The parameters were set in accordance with [28, Eq. (3)]. The equation involves constants c and χ, whose values are difficult to determine; we set them as c = χ = 1.

The parameter setting for JNJ2018 and LL2022 requires the values of the Lipschitz constants L and H_1 and the target accuracy ε. For these two methods, we tuned the best L among {10^{−4}, 10^{−3}, …, 10^{10}} and set H_1 = 1 and ε = 10^{−16} following [33, 35]. It should be noted that if these values deviate from the actual ones, the methods do not guarantee convergence.
Problem instances
We tested the algorithms on seven different instances. The first four instances are benchmark functions from [27].
The dimension d of the above problems was fixed as d = 10^6. The starting point was set as x_init = x* + δ, where x* is the optimal solution and each entry of δ was drawn from the normal distribution N(0, 1). For the Qing function (34), we used x* = (√1, √2, …, √d) to set the starting point.
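For reference, here is a sketch of this setup for the Qing function, assuming its common definition f(x) = Σ_i (x_i² − i)² (reference [27] is not reproduced here):

```python
import numpy as np

def qing(x):
    """Qing benchmark (common definition): f(x) = sum_i (x_i**2 - i)**2."""
    i = np.arange(1, x.size + 1)
    return np.sum((x**2 - i) ** 2)

d = 10**6
x_star = np.sqrt(np.arange(1, d + 1))              # x* = (sqrt(1), ..., sqrt(d))
x_init = x_star + np.random.standard_normal(d)     # x_init = x* + delta
```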
The other three instances are more practical examples from machine learning.
• Training a neural network for classification with the MNIST dataset: min_{w∈R^d} (1/N) Σ_{i=1}^N ℓ_CE(y_i, ϕ_1(x_i; w)). The vectors x_1, …, x_N ∈ R^M and y_1, …, y_N ∈ {0, 1}^K are given data, ℓ_CE is the cross-entropy loss, and ϕ_1(·; w) : R^M → R^K is a neural network parameterized by w ∈ R^d. We used a three-layer fully connected network with bias parameters. The layers have M, 32, 16, and K nodes, where M = 784 and K = 10. The hidden layers have the logistic sigmoid activation, and the output layer has the softmax activation. The total number of parameters is d = (784 × 32 + 32 × 16 + 16 × 10) + (32 + 16 + 10) = 25818. The data size is N = 10000. (A sketch of this network appears after this list.)
• Training an autoencoder for the MNIST dataset: min_w …. The vectors x_1, …, x_N are the MNIST data as above.

• Low-rank matrix completion with the MovieLens-100K dataset: min_{U,V} Σ_{(i,j,s)∈Ω} ((UVᵀ)_{ij} − s)² plus a balancing term. The set Ω consists of N = 100000 observed entries of a p × q data matrix, and (i, j, s) ∈ Ω means that the (i, j)-th entry is s. The second term with the Frobenius norm ∥·∥_F was proposed in [51] as a way to balance U and V. The size of the data matrix is p = 943 by q = 1682, and we set the rank as r ∈ {100, 200}; thus, the number of variables is pr + qr ∈ {262500, 525000}. (A sketch of the objective appears after this list.)
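A sketch of the classification instance in JAX follows; the layer sizes and activations are taken from the description above, while the initialization scheme is an arbitrary choice of ours.

```python
import jax
import jax.numpy as jnp

sizes = [784, 32, 16, 10]  # M, 32, 16, K nodes

def init_params(key):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))          # weights and bias parameters
    return params

def loss(params, X, Y):
    h = X
    for W, b in params[:-1]:
        h = jax.nn.sigmoid(h @ W + b)          # logistic sigmoid hidden layers
    W, b = params[-1]
    return -jnp.mean(jnp.sum(Y * jax.nn.log_softmax(h @ W + b), axis=1))

grad_loss = jax.grad(loss)  # the (f, grad f) oracle used by all methods
```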
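Similarly, here is a sketch of the matrix-completion objective; the exact form and weight of the balancing regularizer from [51] are not given above, so the common choice ∥UᵀU − VᵀV∥_F² with a hypothetical weight mu is used as a stand-in.

```python
import numpy as np

def completion_objective(U, V, omega, mu=1.0):
    """Squared error on observed entries plus an (assumed) balancing term."""
    err = sum((U[i] @ V[j] - s) ** 2 for i, j, s in omega)
    balance = np.linalg.norm(U.T @ U - V.T @ V, 'fro') ** 2
    return err + mu * balance

p, q, r = 943, 1682, 100
U = np.random.standard_normal((p, r)) / np.sqrt(r)
V = np.random.standard_normal((q, r)) / np.sqrt(r)
omega = [(0, 0, 5.0), (10, 20, 3.0)]  # toy stand-in for the MovieLens entries
print(completion_objective(U, V, omega))
```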
Although we did not check whether the above seven instances have globally Lipschitz continuous gradients or Hessians, we confirmed in our experiments that the iterates generated by each algorithm were bounded. Since all of the above instances are three times continuously differentiable, both the gradients and Hessians are Lipschitz continuous on the bounded domain. Considering Remark 1, we can say that, in the experiments, the proposed algorithm achieves the same complexity bound as in Theorem 1.
Results
Figure 1 illustrates the results for the four benchmark functions. The horizontal axis is the number of calls to the oracle that computes both f(x) and ∇f(x) at a given point x ∈ R^d.

Let us first focus on the methods other than L-BFGS, which is very practical but does not have complexity guarantees for general nonconvex functions, unlike the other methods. Figures 1(a) and 1(b) show that Proposed converged faster than the existing methods except for L-BFGS, and Figure 1(c) shows that Proposed and MT2022 converged fast. Figure 1(d) shows that GD and LL2022 attained a small objective function value, while GD and Proposed converged fast in terms of gradient norm. In summary, the proposed algorithm was stable and fast.

L-BFGS successfully solved the four benchmarks, but we should note that the results do not imply that L-BFGS converged faster than the proposed algorithm in terms of execution time. Figure 2 provides the four figures in the right column of Figure 1, with the horizontal axis replaced by the elapsed time. Figure 2 shows that Proposed converged comparably to or faster than L-BFGS in terms of time. One reason for the large difference in the apparent performance of L-BFGS between Figures 1 and 2 is that the computational costs of the non-oracle parts of L-BFGS, such as updating the Hessian approximation and solving linear systems, are not negligible. In contrast, the proposed algorithm requires no heavy computation besides oracle calls and is more advantageous in execution time when function and gradient evaluations are cheap.

Figure 3 presents the results for the machine learning instances. Similar to Figure 1, Figure 3 shows that the proposed algorithm performed comparably to or better than the existing methods except for L-BFGS, especially in reducing the gradient.

Figure 4 illustrates the objective function value f(x_k) and the estimates ℓ and h_k at each iteration of the proposed algorithm for the machine learning instances. The iterations at which a restart occurred are also marked; "successful" and "unsuccessful" mean restarts at Line 10 and Line 8 of Algorithm 1, respectively. This figure shows that the proposed algorithm restarts frequently in the early stages but that the frequency decreases as the iterations progress. The frequent restarts in the early stages help update the estimate ℓ; ℓ reached suitable values in the first few iterations, even though it was initialized to the rather small value ℓ_init = 10^{−3}. The infrequent restarts in later stages enable the algorithm to take full advantage of the HB momentum.
A Omitted proofs

A.1 Proof of Lemmas 2 and 3
Proof of Lemma 2. Since f is twice differentiable, we have …, and its weighted average gives ….

A.2 Proof of (7)

Equation (7) generalizes [35, Eq. (22)], which was originally for an accelerated gradient method with Lipschitz continuous Hessians, to our heavy-ball method with Hölder continuous Hessians. The following proof of (7) is based on the one for [35, Eq. (22)] but is easier, thanks to our simple choice of θ_k = 1.
Proof. Using the triangle inequality and Lemma 2 with n = k and z_i = x_i, we obtain …, and we will evaluate each term. First, it follows from the update rule (3) that …; therefore, the first term on the right-hand side of (39) reduces to …. Next, we bound the second term. Using the triangle inequality and the Cauchy-Schwarz inequality yields …. We obtain the desired result by evaluating the right-hand side of (39).
Table 1: Complexity of first-order methods for nonconvex optimization. ("Exponent in complexity" means …)
"Mathematics",
"Computer Science"
] |
Language Acquisition: The Influential Factors And Its Connection With Age
The term language acquisition often leads to misinterpretation when it comes to English learning. However, there is one distinct point about language acquisition that differentiates it from general language learning: language acquisition is the process of acquiring a language, while language learning is the process of learning a language. This article is meant to give a perspective on language acquisition, the factors that influence acquisition, and the connection of language acquisition with age. Keywords—language acquisition, first language acquisition, second language acquisition
I. INTRODUCTION
Learning a language is different from learning other skills or knowledge because of the unique status of language. One can learn a language to understand how the language works. However, in order to be able to use the language, one must also acquire it (Krashen, 1981). The process of acquiring a language is known as language acquisition. Wilson (2000) states that "language acquisition is a subconscious process to acquire a language. In this process, language acquirers are not consciously aware of the grammatical rules of the language, but rather develop a 'feel' for correctness." Krashen describes language acquisition as the process of 'picking up' a language (Krashen, 1981, cited in Wilson, 2000). In other words, language acquisition can be defined as the way people learn a language while focusing on using it for communication rather than on its grammar.
There are two known kinds of language acquisition, which differ from each other depending on the learners and the place where they acquire the language: first and second language acquisition. First language acquisition concerns acquiring the first language in one's life, which usually starts at a young age or in childhood (Cook, 1969). The first language is the mother tongue that is heard and spoken for the first time at the early stage when a child begins to speak. Therefore, most learners of a first language are children or even infants. On the other hand, second language acquisition deals with acquiring a language other than or in addition to the first language, often known as the second language, and most of its learners are older children or adults (Cook, 1969). Both are also known as disciplines in language teaching and learning that deal with how people learn languages.
II. FACTORS THAT INFLUENCE LANGUAGE ACQUISITION
First and second language acquisition have some differences that can be attributed to several factors, as found in Cook, Long, & McDonough (1979):

A. Age

Age plays the most significant role in differentiating first and second language acquisition. Learners of a first language are mostly children or infants; therefore, the way they acquire the first language is influenced by their mental condition and their lack of self-consciousness as learners who are not afraid of making mistakes. Children usually show no anxiety and stick only to the meaning of the language they use as the sole purpose of their learning. Meanwhile, a second language is usually learned by more mature people, fully conscious of learning an additional language for purposes such as expanding social relationships, studying abroad or in another country, or getting a job. Children also develop clear intuitions about correctness that help them learn better, while second language learners are often unable to form clear grammaticality judgments, which can be a big obstacle in learning. Children usually acquire their first language naturally, while adults, who are already past the critical period, do not naturally acquire their second language, as a number of fundamental differences appear in their rationale for learning. More discussion of age and second language acquisition can be found in another section of this article.
B. Degree of Acquisition
It is widely agreed that learners of a first language come to a perfect, native mastery of the language; success is essentially guaranteed for them. This condition is very unlikely in second language learning, because the degree of success depends on many things such as motivation, support, and environment. So, complete success in acquiring a second language is rarely found, as the degree of acquisition is low.
C. Competency
The competency needed in first and second language acquisition also differs. Learners need to acquire perfect accuracy in the target language in first language acquisition, while in second language acquisition the competence may focus only on fluency in using the target language. Accuracy is not the main target in a second language because users or learners merely use the language for communication, i.e., for the function of language, with less stress on language form.
D. Instruction
Almost all first language acquisition happens in informal situations, like at home. Usually, the process of acquiring the language starts when a child tries to say something similar to what they have heard in order to get what they need from parents or other family members. Here, parents provide very little in the way of language instruction to the child. They do not actually teach their children to speak; they only correct factual falsehoods, not erroneous grammar. A very different situation occurs with second language acquisition, which mostly takes place in formal settings such as a school or classroom. This setting requires instruction, and even a teacher to make the instruction work. Here, instruction is helpful and necessary for acquiring the second language.
E. Affective Factor
Since first language acquisition is mostly performed by children, few affective factors are involved in the process. On the contrary, second language acquisition, performed by more varied and mature people, involves affective factors that play a major role in determining the level of proficiency.
First language children mostly acquire language in different settings, with different exposure to language, than second language learners, and they are at different stages of mental and social maturity (Cook, 1969). These differences indeed make the acquisition of the first language different from that of the second language, wherever it occurs.
III. AGE AND SECOND LANGUAGE ACQUISITION
Discussion of the effects of age on Second Language Acquisition has been the most debated topic compared with other variables that influence acquisition, such as motivation, learning styles, personality, sex, aptitude, and attitude. Many studies have been conducted in order to gain a better understanding of how age affects Second Language Acquisition (SLA).
A. Second Language Acquisition
As mentioned above, second language acquisition is the process of acquiring a language other than or in addition to the first language, often known as the second language, and most of the learners are older children or adults. As a discipline in language teaching and learning, the focus of SLA is language learning and use, in which learners must communicate and interact in the L2 (second language) or target language in everyday life. The acquisition of the L2 involves code switching, the ability to switch from one language (L1) to another (L2) in constructing sentences or communicating. A person who watches movies in his or her second language does not thereby acquire the language, because there is no interaction and code switching involved. Strzepek (2004) states that actual acquisition of an L2 occurs when an individual is able to code switch in his or her mind without taking extra time and without mixing up or confusing the two languages. In other words, SLA happens when the learner can give up code switching and start to think in the target language. So, one with L2 acquisition must be able to use the language along with his or her L1, even if he or she does not attain native-like ability in the L2.
B. How does age affect SLA?
Discussions and studies about the effects of age on SLA have been widely conducted, and yet the real answer remains uncertain. The discussions involve two major groups: younger and older learners. One major finding from studies of these two groups is that young learners are better at learning a language in the long term, while older learners are better in the short term. Snow & Hoefnagel-Hoehle (1978), in their study involving child and adult English speakers studying Dutch, conclude that older learners are better than younger learners in terms of rate of acquisition in the initial stage of SLA. The study reveals that within three months, older learners outperform children on the test given, while both groups show the same result after six months. Another study, conducted by Garcia-Lecumberry & Gallardo (2003) on a group of Spanish-L1 children between 4 and 11 years old studying English, shows that older children achieve better results and understanding in English.
Similar studies have come to the same conclusion. They reveal that older people are able to take in more comprehensible input because they have better cognitive skills and better memory. They have a greater ability to get past the silent period and to produce structures that they have never seen or acquired. Older people also have helpful advantages in learning a language, such as better knowledge, experience, and background information about the world, which make the input more comprehensible to them. Meanwhile, children usually acquire competence in the L2 over the long run. They are usually good at pronunciation, accent, and some aspects of L2 phonology, while older people have an advantage in grammar. Usually, children find it difficult to comprehend grammatical structures because they have fewer pragmatic skills and less memory (Johnson and Newport, 1989; Flege, Yeni-Komshian and Liu, 1999).
C. Critical Period Hypothesis (CPH)
The study of the correlation between age and SLA cannot be separated from Eric Lenneberg's theory of the critical period hypothesis. According to Lenneberg, "automatic acquisition from mere exposure to a given language seems to disappear after puberty, and foreign languages have to be taught and learned through a conscious and laboured effort" (1967, p. 176). This means that children have a better ability to learn language than teenagers or adults because their brains are at the best period to do so. However, the existence of this critical period is still controversial. Much research on the CPH has been conducted, and most of it shows contradictory evidence: some studies support the CPH while others reject it. Krashen's Input Hypothesis is one known support for the CPH. According to Krashen, a critical period does indeed exist, since the hypothesis assumes not only that L2 acquisition is similar in nature to L1 acquisition, but also that this is the case for learners of any age. Although Krashen's theories have been claimed to be faulty in some respects, their influence on the field of second language teaching cannot be denied.
A different view of the CPH can be seen in several studies. For example, Hyltenstam & Abrahamsson (2000) argue that the CPH must be interpreted as a 'younger equals better in terms of eventual outcome' hypothesis. Nowadays, studies on the connection between age and SLA mainly focus on non-native speakers' abilities and their proficiency in reaching native-like levels. Examples include the study by Ioup, Boustagui, El Tigi & Moselle (1994) on successful adult SLA learners in a naturalistic environment, and that of Nikolov (2000) on adult learners of Hungarian and English. Both of these studies try to find out whether learners can still attain native-like proficiency when they have passed the critical period. The results remain controversial: there is no clear conclusion about who is better at SLA, as both adults and children show different responses across studies. Sometimes adults are considered better under some conditions, while children can be better under different study treatments. Ehrman & Oxford (1995) conclude from their study that "younger learners are more likely to attain fluency and native-like pronunciation, while older learners have an advantage in understanding the grammatical system and in bringing greater 'world knowledge' to the language learning context".
IV. CONCLUSIONS
There are two kinds of language acquisition known in language learning: first language acquisition and second language acquisition. Language acquisition is shown to be influenced by many factors, such as the degree of acquisition, instruction, competency, some affective factors, and of course age. Age is the main factor that receives the most attention from linguists and language researchers seeking evidence about its influence. Discussions about the effect of age on SLA have resulted in different opinions: some say that age is a significant factor, while others claim it contributes nothing. However, both age groups studied in this field, children and adults, have their own advantages. Children are good at anything dealing with sound repetition, like pronunciation, while adults are better with grammatical issues. So, we cannot say that one group is better than the other in SLA. This keeps the debate over the Critical Period Hypothesis (CPH) a central issue in SLTL.
"Linguistics",
"Education"
] |
An Analytical Theory of Hall-Effect Devices with Three Contacts
Methods: In contrast to devices with four contacts, it is not possible to determine the sheet resistance of devices with three contacts by electrical measurements like that of van der Pauw. However, for both types of devices, the ratio of output voltage to input current depends only on the resistances of the ERC, the sheet resistance, and the Hall angle, irrespective of the exact shape of the devices and the size of the contacts.
INTRODUCTION
Traditional Hall plates have four contacts and two perpendicular mirror symmetries [1-4]. Current I_in is sent through two opposite contacts and the output voltage V_out is tapped between the other two opposite contacts (Fig. 1a).
At small magnetic field, the output voltage is

V_out = G_H0 µ_H B⊥ R_sh I_in   (1)

with the Hall mobility µ_H > 0, the magnetic flux density B⊥ perpendicular to the thin, plane Hall plate, the sheet resistance R_sh, and the Hall-geometry factor 0 < G_H0 < 1. The arctangent of µ_H B⊥ is called the Hall angle. The electrical behaviour of such a 4C-device at zero magnetic field is described by an Equivalent Resistor Circuit (ERC) comprising six resistors with three different resistance values (Fig. 1b) [5]. Irrespective of the size of the contacts, we can use a generalized van der Pauw technique to measure the sheet resistance and the resistances of the ERC [5]. From the ERC, we can compute the input resistance R_in between the supply contacts and the output resistance R_out between the sense contacts. Thereby, the ratios of input and output resistance over sheet resistance, λ_in = R_in/R_sh and λ_out = R_out/R_sh, depend only on the lateral geometry of the Hall plate, i.e., its layout. We call them the effective numbers of squares of input and output (2a,b). Together with the sheet resistance, these 3 DoF fully characterize the electrical behaviour of the Hall plate at zero magnetic field. Moreover, they also describe the behaviour of the Hall plate at arbitrary magnetic field. For small magnetic field, we even know an analytical relation [6], with the incomplete elliptic integral of the first kind F(x, k), the complete elliptic integral of the first kind K(k) = F(1, k), the complementary elliptic integral K'(k) = K(√(1 − k²)), and the modular lambda function L(y), defined by L(K'(k)/K(k)) = k² for 0 ≤ k ≤ 1 [5]. There is a non-trivial symmetry [6].
In modern times, Hall plates are mostly operated in the spinning current Hall probe scheme, which reduces the offset error by 2½ decades (in silicon technology from 5 mT initial offset to 10 µT residual offset) [7, 8]. The scheme comprises various operating phases in which the supply and sense contacts of the Hall plate are swapped (contact commutation). An appropriate sum or difference of the output signals of individual operating phases cancels out offset errors while boosting the magnetic sensitivity. It also cancels out flicker noise, if the spinning frequency is chosen larger than twice the flicker noise corner frequency [9]. For this scheme, it is convenient to use Hall plates with equal input and output resistances, R_in = R_out or λ_in = λ_out. This can readily be done with layouts of 90° symmetry. The ERC then has only two resistance values for the six resistors, because in Fig. (1b) we simply have to set R_Df = R_Dp → R_D. The generalized van der Pauw measurement then has a simpler formula for λ_in = λ_out → λ and R_sh (see (26) in [10]). The 2 DoF λ, R_sh are linked to the ERC via (4a,b) [10, 11]. In (4a,b) the minus-sign is used in one case and the plus-sign in the opposite case. The accuracy of (4a,b) is ±0.02%. Again the number of squares λ depends only on the layout, and it fully determines the Hall-geometry factor at small magnetic field [12] (5a), with c = 2.279, c₂ = 1.394, c₄ = 0.6699, c₆ = 0.4543. The accuracy of (5a) is −2%/+0%, and the accuracy of (5b) is −60 ppm/+0.02%. Here the symmetry is between complementary devices (cf. Fig. (8) in [6]).
With these results, one can show that the ratio of output signal over thermal noise is given by (6), with Boltzmann's constant k_b, the absolute temperature T, the effective noise bandwidth Δf, and the available Hall supply voltage V_H^sup = I_in R_in. At a given impedance R_in, the SNR becomes largest for symmetric devices (7) [6]. Hence, in silicon technology at room temperature, a 500 Ω device operated at 2.5 V supply can achieve a maximum magnetic resolution of 566 nT in 1 kHz noise-equivalent bandwidth [6]. Such a Hall plate needs to be quite thick (~45 µm); alternatively, one may connect 45 devices of 1 µm thickness in parallel (which can be accommodated in 150 µm × 150 µm chip area).
Recently, vertical Hall-effect devices (VHalls) have been attracting significant attention from industry, because they allow one to measure the in-plane magnetic field components, i.e., the magnetic field components parallel to the chip surface (B_x, B_y), whereas Hall plates, also called horizontal Hall-effect devices (HHalls), respond to the magnetic field component orthogonal to the chip surface (B_z) [13, 14]. A vast plurality of topologies has been proposed for VHalls, yet one important group of VHalls comprises a single Hall tub with only three contacts on one side and a single mirror symmetry (Fig. 2) [15, 16]. It can also be viewed as a basic building block for more complex Hall-effect devices. Besides, this type of symmetry also applies to split-drain MAG-FETs [17]. Therefore, we are interested in a theory of such devices analogous to the foregoing one for 4C-devices. And indeed, we will find many analogies and a few distinct differences between devices with 3 and 4 contacts.

Fig. (2). Vertical Hall-effect device with three contacts and a single mirror symmetry.
THE DEGREES OF FREEDOM AND THE EQUIVALENT RESISTOR CIRCUIT
Fig. (3a) shows a plane circular disk-shaped device without holes and with three arbitrary contacts on its perimeter. Obviously, its layout has 7 geometrical degrees of freedom, α₁, α₂, …, α₆ plus the diameter, but we can scale the diameter to 1 and rotate the device so that α₆ → 0 without affecting its electrical properties, and then we end up with 5 DoF. We can further map the interior of the disk in the z-plane onto the upper half of the w'-plane by a Möbius transformation w' = (a₁ + a₂z)/(1 + a₃z) with 3 complex-valued parameters a₁, a₂, a₃, which we are free to choose except for a₂ ≠ a₁a₃ [18]. Thus, of the 6 points Z₁, Z₂, …, Z₆ we can map 3 points to fixed positions in the w'-plane, so that only 3 points W₃', W₄', W₅' on the real axis of the w'-plane are free to describe the specific contact geometry. These 3 scalar parameters plus the sheet resistance give 4 electrical DoF, which fully describe the electrical behaviour of the device. On the other hand, we know from linear circuit theory that a resistive network with three terminals can be represented by its ERC, which has a resistor between each pair of contacts. This gives only 2 + 1 = 3 resistors (Fig. 3c). Hence, we have 4 DoF W₃', W₄', W₅', R_sh in the w'-plane, which are mapped to just 3 DoF R_a, R_b, R_c in the ERC! In other words, for a given ERC we can freely choose one of the 4 DoF in the w'-plane. For example, we could choose a value for the sheet resistance and select appropriate parameters W₃', W₄', W₅' to achieve any required ERC. Therefore, it is impossible to determine the sheet resistance from electrical measurements on a 3C-device at zero magnetic field. Finally, we increase the symmetry even further according to Fig. (5a). Now the device has a 120° symmetry and the layout in the z-plane has only one scalar parameter θ that affects the electrical behaviour. After a Möbius transformation, the layout in the w'-plane still has one scalar parameter; the second parameter is not free, as it follows from the first one. With the sheet resistance we have 2 electrical DoF, but the ERC has only a single resistance value (Fig. 5c). Thus, for a given electrical behaviour at zero magnetic field we can choose any arbitrary value for R_sh and select k_f according to equation (10a).
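The three-point normalization used in this counting argument is just the cross-ratio. A small numerical sketch follows; the six end-point angles are arbitrary examples:

```python
import numpy as np

def moebius_to_fixed(z, z1, z2, z3):
    """Cross-ratio map sending z1 -> 0, z2 -> 1, z3 -> infinity; it is of the
    Moebius form (a1 + a2*z)/(1 + a3*z), so it uses up three complex DoF."""
    return ((z - z1) * (z2 - z3)) / ((z - z3) * (z2 - z1))

angles = np.sort(np.random.uniform(0.0, 2.0 * np.pi, 6))
Z = np.exp(1j * angles)                      # contact end-points on the disk rim
W = moebius_to_fixed(Z[3:], Z[0], Z[1], Z[2])
print(W)  # three (numerically real) free parameters describing the layout
```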
Therefore, R_H/R_sh and R_D/R_sh represent only a single DoF, and the 2 DoF of the ERC are covered by R_H/R_sh and R_sh. The reason why we can measure sheet resistance with the van der Pauw technique is that the ratio R_H/R_D is accessible to electrical measurement and it depends only on the layout, not on the sheet resistance. From (6), (14), (15) in [10] and (A15) in [5] we derive the relation
If we reduce the symmetry according to Fig. (7a), the device has two orthogonal mirror symmetries and thus different input and output resistances. This gives 3 electrical DoF λ_in, λ_out, R_sh (see (2a,b)). The ERC has three resistance values R_H, R_Df, R_Dp, which are related via the sheet resistance: from [5] we use (1a) to express R_sh(1/R_H + 1/(2R_Df)), and then we insert (1b) and (2c) of [5] into (A14) of [5]. Combining both results, we can express R_Df/R_sh as a function of R_Dp/R_sh and R_H/R_sh. Consequently, the 3 electrical DoF can be represented by R_Dp/R_sh, R_H/R_sh, R_sh.
We skip the discussion of 4C-devices with 4 and 5 DoF and conclude with the case of entirely asymmetric devices. The 4 contacts are defined by 8 arbitrary end-points, 3 of which we can map via a Möbius transformation onto defined points on the Re{w'}-axis. This gives 5 DoF of the layout plus the sheet resistance. So we end up with 6 electrical DoF, which matches the 6 resistances in the ERC [8]. If we normalize these 6 resistances in the ERC by the sheet resistance, there must be one relation between them: the 6th one follows from the first 5. In a weaker form this has already been mentioned in the appendix of the seminal paper by van der Pauw [20]. To sum up, in this section we have shown two remarkable differences between 3C- and 4C-devices: (1) from mere electrical measurements on a 3C-device, and without knowledge of the device geometry, one cannot deduce the sheet resistance, whereas for 4C-devices this works irrespective of the size of the contacts; (2) for a fixed shape of the Hall-effect region and a fixed ERC (i.e., fixed electrical behaviour) there is exactly one contact geometry and one sheet resistance in the case of 4C-devices, whereas in the case of 3C-devices we will find infinitely many contact geometries and sheet resistances (whereby we assume plane devices, a simply connected region, homogeneous and isotropic conductivity, constant thickness, and contacts at the perimeter).
THE ERC OF A 3C-DEVICE WITH SINGLE MIRROR SYMMETRY
Here we compute the two resistances R_d, R_e of the device from Fig. (4a). Thereby, we look for operating conditions with symmetric potential distributions at zero magnetic field, which can be generated by only two contacts, because this leads to comparably simple, closed-form conformal transformations in terms of elliptic integrals.
Current Flow Across the Line of Mirror Symmetry
In Fig. (4a) the Re{z}-axis is the line of mirror symmetry. If we connect C₁ to +1 V and C₃ to −1 V, then C₂ and the Re{z}-axis will be at 0 V, and we need to study only the potential distribution in the lower half of the device with the two contacts C₁, C₂. Fig. (8) shows a sequence of conformal transformations, which map the semi-circular region onto a rectangle with the contacts at opposite sides, so that we immediately know the resistance between them. The first transformation is given in [21]; it differs only by an isotropic scaling factor, w = (tan(α₂/2))² w'. We symmetrize the contacts by a Möbius transformation. A final Schwartz-Christoffel transformation maps the upper half of the t-plane onto the interior of a rectangle in the q-plane (see also Fig. (3d) in [22]).
(14a)
sn(F(t, k), k) = t is the Jacobi sine-amplitude function. The aspect ratio of the rectangle gives the resistance between C₁ and C₂ in this operating mode.
Fig. (8).
(a-d) Sequence of transformations that map the lower half of a circular device with current flow across its axis of single mirror symmetry onto a rectangle z → w → t → q with homogeneous current density.
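Once the modulus k of the final Schwartz-Christoffel rectangle is known, the effective number of squares follows from the aspect ratio K(k')/K(k) with k' = √(1 − k²) (or its reciprocal, depending on which pair of rectangle sides carries the contacts). A minimal sketch follows; note that scipy's ellipk takes the parameter m = k²:

```python
from scipy.special import ellipk  # complete elliptic integral, argument m = k**2

def number_of_squares(k):
    """Aspect ratio K(k')/K(k) of the Schwartz-Christoffel rectangle."""
    return ellipk(1.0 - k**2) / ellipk(k**2)

R_sh = 1.0                        # sheet resistance (example value)
for k in (0.1, 0.5, 0.9):
    print(k, R_sh * number_of_squares(k))  # resistance = R_sh * squares
```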
Current Flow Along the Line of Mirror Symmetry
If we connect C
(16a)
The aspect ratio of the rectangle gives the resistance between C 1 and C 2
A final Schwartz-Christoffel transformation maps the upper half of the …-plane onto the interior of a rectangle in the …-plane.
The ERC and its Properties
Comparison of (14b) with the ERC in Fig. (4c) gives (17a); comparison of (16b) with the ERC in Fig. (4c) gives (17b). Solving (17a,b) for the two resistances of the ERC gives (18a,b). Since R_d, R_e fully define the electrical behaviour of the device, (17a,b) imply that R_sh λ_d, R_sh λ_ed also describe the 2 electrical DoF. Thus, R_sh λ_d and R_sh λ_ed are independent of each other, and therefore λ_d and λ_ed are also independent of each other. Analogous to (18a) we can define (18c), so that any single one of the parameters λ_d, λ_ed, λ_e can be expressed by the other two.
On the other hand, we can invert (13d) and (15d) to express two parameters out of W 1 , W 2 , W 3 by k d , k ed or by λ d , λ ed .
(19a) (19b)
We can obtain any arbitrary ERC by picking some arbitrary value for W₂: then W₁ and W₃ follow from (19a,b). In particular, we may choose W₂ = −1. Then we can model all possible ERCs by W₁ ∈ (−1, 0) and W₃ ∈ (−∞, −1). Thus, we have reduced the problem in three dimensions to a problem in two dimensions. Alternatively, we may also fix the location and size of contact C₁, so that we can obtain any arbitrary ERC with α₃, i.e., with a fixed size of contact C₂ (the location of C₂ is given by symmetry). This has an important application for VHalls with three contacts: we can choose any convenient size for the center contact C₂; by playing around with location and size of the outer contacts we still have all 2 DoF at our option.
With (18b) this means R_e > 0. With the L.H.S. of (14b), (16b) and with (28) in [22] this also implies an inequality which limits the allowed region in (k_d, k_ed)-space to a narrow band.
If we swap isolating boundaries with contacts in Fig. (4a) we obtain the complementary device, which also has three contacts and a single mirror symmetry (see Figs. (11a, b)). Denoting the parameters of the complementary device by an overbar and the complex conjugate by an asterisk, we get the relations (22a-e). If a device has electrical symmetry λ_d = 3λ_ed, then its complementary device also has electrical symmetry λ̄_d = 3λ̄_ed.
THE MAGNETIC SENSITIVITY OF A 3C-DEVICE WITH SINGLE MIRROR SYMMETRY
A 3C-Hall-effect device can be operated in various operating modes (see Fig. (6) in [19]). For each operating mode one can find a spinning current scheme that cancels out offset errors perfectly, as long as the device is assumed to have electrical linearity¹. First we show that even though the 3C-device may be entirely asymmetric like in Fig. (3), its current-related magnetic sensitivity is the same in all operating phases. (¹ Electrical linearity means that the resistance values in the ERC do not depend on the potentials at the various nodes. However, the device may be magnetically nonlinear, which means that the resistance values in the ERC may well depend on the applied magnetic field, so we do not have to limit this discussion to small magnetic fields.)
We use the principle of superposition as introduced in [24]. Let us consider an asymmetric VHall device with three contacts. In a first operating phase ph₁ the device is supplied at its outer contacts (see Fig. (12a) and the ERC in Fig. (12b)). We can use the ERC to compute the potentials and currents at the contacts of the device, but we have to add an extra Hall term whenever a contact is left of the current flow through the device (and we subtract it when the contact is right of the current flow). We can think of two further operating phases ph₂ and ph₃, where the current flows between an outer contact and the mid-contact. Phases ph₂ and ph₃ are chosen such that their superposition gives phase ph₁ (see Figs. (12c-12f)). This means that the sum of currents into each contact in phases ph₂ and ph₃ must equal the current into the respective contact in phase ph₁, and the sum of potentials at each contact in ph₂ and ph₃ must also equal the potential at the respective contact in ph₁ [25]. With linear circuit theory we get the potentials at contact C₁.
Note that in ph₃ the Hall action of the VHall device tries to reduce the potential at C₃ by a Hall term, but the ground wire ties C₃ to 0 V, thereby lifting all other potentials in the device by the same amount. Superposition of ph₂ and ph₃ gives ph₁:
(23d)
Inserting (23a-c) into (23d) gives an identity. Since this is valid for all B_⊥, we can differentiate the equation with respect to B_⊥ and it follows S_i(ph₂) = S_i(ph₃). We repeat the same for contact C₂. With (24a-d) it follows S_i(ph₁) = S_i(ph₃). Hence, the current-related magnetic sensitivity S_i is equal in all phases ph₁, ph₂, ph₃. This holds for 3C-devices with and without symmetry. Thereby the magnetic field may even be strong, so that the resistances R_a, R_b, R_c and the current-related magnetic sensitivity S_i become nonlinear functions of B_⊥; the principle of superposition requires only electrical linearity, not magnetic linearity. According to [19] we can define the Hall-geometry factor of 3C-Hall-effect devices by (25).
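The superposition argument can be made concrete with linear circuit theory. A numeric sketch; the star (Y) topology and the resistor values are assumptions for illustration, and the Hall terms are omitted (zero field), since only electrical linearity is needed:

```python
# Numeric sketch of the superposition of phases: ph2 + ph3 reproduces ph1 in
# a linear ERC. Nodes: C1, C2, C3, star point; C3 is grounded in every phase.
import numpy as np

r = np.array([1.0, 2.0, 3.0])             # star resistors to C1, C2, C3 (invented)
g = 1.0/r
G = np.zeros((4, 4))                      # conductance (Laplacian) matrix
for i in range(3):
    G[i, i] += g[i]; G[3, 3] += g[i]
    G[i, 3] -= g[i]; G[3, i] -= g[i]

def solve(I_inj):
    """Node potentials for injected contact currents, C3 tied to 0 V."""
    keep = [0, 1, 3]                      # drop the grounded node C3
    V = np.zeros(4)
    V[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I_inj[keep])
    return V

I = 1.0
ph1 = solve(np.array([  I, 0.0,  -I, 0.0]))   # current C1 -> C3
ph2 = solve(np.array([  I,  -I, 0.0, 0.0]))   # current C1 -> C2
ph3 = solve(np.array([0.0,   I,  -I, 0.0]))   # current C2 -> C3
print(np.allclose(ph2 + ph3, ph1))            # True: potentials superpose
```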
The Weak Field Hall-Geometry Factor of a 3C-Device with Single Mirror Symmetry
For small Hall angles one can use a perturbation approach, where the potential in the Hall-effect region is developed into powers of µ_H B_⊥ and only the lowest order term is used to compute the Hall output signal [26,27]. The procedure is developed in detail in [22]: In a first step we compute the potential at zero magnetic field. From it, we get the electric field E_p along the isolating boundaries. However, in the presence of a magnetic field, the Hall effect gives rise to an additional component of the electric field, which is normal to the isolating boundary, E_n = µ_H B_⊥ E_p, with µ_H B_⊥ being the tangent of the Hall angle. We take account of the component E_n in a second step, where we tie all supply contacts to zero potential and impose a perpendicular current density on the isolating boundaries, J_n = σ₀E_n, with σ₀ being the conductivity at zero magnetic field. As in the first step, also in this second step we use the isotropic conductivity σ₀ throughout the Hall-effect region. The output contacts are at unknown potential V_out and, depending on biasing conditions, usually no net current is flowing in or out of them. Solving the net current condition at the output contacts returns V_out proportional to µ_H B_⊥. The advantage of this method is that we can use symmetries of the device, whereas in the case of strong magnetic field these symmetries get lost. Another asset is that in the course of this calculation we can re-use conformal mappings of the ERC computation in section 3.
From section 4.1 we know that the Hall-geometry factor is identical in all operating phases. So we choose current flow from contact C₁ to C₃ in the circular device with single mirror symmetry of Fig. (8), because it gives highly symmetric current flow lines. In step 1 this gives us the inhomogeneous electric field along the isolating boundaries. Similar to [22] we make an ansatz (26) for the potential in the q̃-plane of Fig. (9d). This ansatz satisfies two boundary conditions, and V_out is the change in potential at contact C₂ caused by the action of the small magnetic field. The net current into the output contact must vanish; this gives (27a). Inserting (26) in (27a) and integrating gives
(27b)
The boundary conditions on the left and on the right edge of the rectangle in the q̃-plane are (28a-c). Positive J_n on the left edge (Re q̃ = 0) means that current flows into the Hall-effect region, whereas positive J_n on the right edge (Re q̃ = 1/λ_d) means that current flows out of the Hall-effect region. Introducing the ansatz (26) into (28a-c) and making a Fourier series expansion gives the unknown coefficients a_n, b_n in (27b). We insert (29a,b) into (27b), use J_n = µ_H B_⊥ J_p, and reverse the sequence of summation and integration.
(30)
In (30) we only have to determine the transformation of the current density from the q-plane in Fig. (8d) to the q̃-plane in Fig. (9d), which is detailed in Appendix A. The subtraction in (30) means that the current flowing along the boundary between the supply contacts C₁ and C₃ reduces the Hall output signal. Finally, the weak magnetic field limit of the geometry factor of a 3C-Hall-effect device with single mirror symmetry is given by (31), with the abbreviations L_d = L(λ_d), L_ed = L(λ_ed). Equation (31) is the core result of this work. It is valid for any operation mode where current I_in flows into one contact and out of another contact and voltage V_out is tapped at the third contact. λ_d, λ_ed are the 2 DoF of the layout, and with (17a,b) they can be expressed by ratios of resistances of the ERC over the sheet resistance. Thus, (31) gives the low-field limit of the Hall-geometry factor as a function of purely electrical parameters R_d/R_sh, R_e/R_sh, irrespective of the geometry of the device.
Discussion of Magnetic Sensitivity and SNR of 3C-Devices
With numerical integration it is straightforward to plot the weak-field Hall-geometry factor versus its 2 DoF in the allowed region 0 ≤ λ_ed ≤ λ_d (cf. (20a)), as shown in Fig. (13). There the black solid curve on the surface represents electrically symmetric devices with R_d = R_e, which means λ_d = λ_e = 3λ_ed. Obviously, the geometry factor tends to 0 for small λ_d, λ_ed, i.e., large contacts. On the other hand, with (21) we know that for point-sized contacts at arbitrary position we are located at the far end of the black solid curve; there the geometry factor tends to 1 [19]. If we plot the points on the black solid curve versus λ_ed we get the same plot as in Fig. (18) of [19]. Similar to 4C-devices, we note also for 3C-devices a symmetry of the Hall-geometry factor. For devices with electrical symmetry λ_d = 3λ_ed numerical inspection suggests the following conjecture
(32)
The physical significance is also the same: it links the Hall-geometry factor of a device with the Hall-geometry factor of its complementary device from (22e). Therefore we only need to know the Hall-geometry factor for small contacts, because we can obtain its values for large contacts with the above symmetry relation. A simple approximation is (33), with an accuracy of −2.3%/+1.7%. In [19] we showed that at a given impedance level the signal-to-noise ratio (SNR) of a 3C-Hall-effect device is proportional to the Hall-geometry factor divided by √(λ_ed λ_d). This parameter is plotted in Fig. (14) for all possible devices. Similar to 4C-devices we note a clear maximum, and this maximum occurs for devices with electrical symmetry. Such devices with circular shape may have various contact sizes (e.g., α₁ = 30°)³. (³ Meanwhile it was proven rigorously that any Hall device with three contacts and at least one mirror symmetry has the same SNR as its complementary device [30].)
THE VERTICAL HALL-EFFECT DEVICE WITH THREE CONTACTS
Such a vertical Hall-effect device is shown in Fig. (2). The Hall-effect region is a tub with contacts at its top side. In silicon technology, the tub may be a CMOS n-well, a deeper high-voltage CMOS n-well or an epitaxial layer, and the ohmic contacts are made by shallow n+ source/drain diffusion. We neglect the depth of the contacts and the inhomogeneous doping profile, and we assume a rectangular cross-section of the Hall-tub. Then we can apply our theory to clarify whether it is possible to optimize such devices despite the limitation that all contacts have to be on the top side.
To this end we simply have to find a mapping from the rectangular cross-section of Fig. (2) to the half-plane geometry in Fig. (4b). This is shown in Figs. (15a-c). There we draw a scaled rectangular device in the q-plane and transform it to the t-plane similar to (14a).
(34a)
The parameter k is given by the aspect ratio of the Hall-tub.
(34b)
d and r are the depth and the length of the rectangular Hall-tub (see Fig. 15a). The mapping of the contacts is given by (34c-e), where r_a is the length of the center contact, r_b is the spacing between center and outer contacts, and r_c is the length of the outer contacts (see Fig. 15a). A degenerate Möbius transform completes the sequence. Summarizing these findings, we may assume r, d, r_a and derive r_b, r_c for optimum SNR:
For (39a) we used (34b,c), for (39b) we used (38) and (39a), for (39c) we used (34d) and (39b), and for (39d) we used (34e) and (39c). For real-valued F(w, k) with 0 ≤ k ≤ 1 it must hold 0 ≤ w ≤ 1. Hence, from (39d) we obtain a maximum allowed r_a. The meaning of (40) is: if we assume a given aspect ratio d/r of the Hall-tub, we can only realize optimized devices (i.e., devices with electrical symmetry and optimum SNR) if r_a/r is small enough. For practical reasons r_a must be larger than the feature size of the semiconductor technology. Fig. (16) shows a plot of the R.H.S. of (40) versus d and r. Obviously, for small d we must use very small r_a. For fixed d the length r_a can be largest for r → ∞, namely ln(4/3)d/π ≈ 0.092d. On the other hand, if one can use the entire chip as Hall-tub, d/r is large. Thus, even for deep Hall-tubs the length of the center contact must be less than 5% of the device length to achieve the optimum, and for shallow Hall-tubs it must be even shorter. With respect to minimum feature size, r_b and r_c are less critical than r_a, because from (39c,d) it follows r_b > r_a and r_c > r_a. In practice, one should take care that too small r_a and r_b give too large an electric field, which leads to velocity saturation, electrical nonlinearity, temperature gradients and finally to reduced magnetic sensitivity and to poor residual offset at the output of the spinning scheme. For a device with electrical symmetry but without optimum SNR, the bound in (40) has to be adapted accordingly, and this requires even smaller r_a/r.
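The r → ∞ limit of the bound is easy to check numerically, together with the example device discussed below. A sketch (the exact finite-r expression (40) is not reproduced):

```python
# Sketch: the r -> infinity limit of the center-contact bound,
# r_a,max = ln(4/3)*d/pi ~ 0.092*d, checked against the example device
# below (d = 5 um, r_a = 0.45 um).
import numpy as np

c = np.log(4.0/3.0)/np.pi
print(f"r_a,max / d = {c:.5f}")           # ~0.09157
d = 5e-6                                  # tub depth of the example device
print(f"bound for d = 5 um: {c*d*1e6:.4f} um; the example uses r_a = 0.45 um")
```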
Example:
We aim at a device with length r = 20 µm. The Hall-tub is 5 µm deep. With (40) we may choose r_a = 0.45 µm, and from (39c,d) it follows r_b = 0.633548 µm, r_c = 6.78451 µm. The results of a finite element (FEM) simulation on this device are shown in Fig. (17). The FEM model used a conductivity of 1 S/m and a sheet resistance of 1 Ω at zero magnetic field; the complete conductivity tensor with the Hall effect was used. The mesh had 1.9 million elements and this gave 3.8 million equations. At zero magnetic field the FEM simulation gave a resistance between the outer contacts of 1.154369 Ω (phase 1), and between the center contact and the right contact it was 1.15395 Ω (phase 2). This matches up to 287 ppm and 649 ppm with the analytical formulae (37a,b). Next, the Hall output voltage was computed for phases 1 and 2 in the limit of vanishing magnetic field. In both cases it agrees up to 40 ppm with our analytical formula (31). In both phases the device was supplied by 1 A. Then the Hall-geometry factors are identical up to 0.1 ppm and the supply voltage matches up to 363 ppm for low and high magnetic fields in the range µ_H B_⊥ = 0.01, 0.02, 0.05, 0.1, ..., 5.
Please note the usefulness of an analytical treatment in this context. Not only does it prove that a device with contacts only on one side of the rectangular Hall-region can still have electrical symmetry despite its degenerate geometry; it also shows that this is possible for all rectangular aspect ratios and what penalty one has to pay for shallow Hall-tubs. It would have been painstaking to find the optimum of this problem with 4 DoF (r_a/r, r_b/r, r_c/r, d/r) by purely numerical methods.
Fig. (17). FEM simulation of a 3C-VHall device with d = 5 µm, r = 20 µm, r_a = 0.45 µm, r_b = 0.633548 µm, r_c = 6.78451 µm operated in phases 1 and 2. In both phases the voltages at the input contacts and the magnetic sensitivities are identical and match our analytical theory up to better than 649 ppm and 40 ppm, respectively. The color coding denotes the electric potential and the grey lines are current streamlines.
CONCLUSION
In this paper, we gave an analytical theory of Hall-effect devices with three contacts and a single mirror symmetry. This class of devices is of considerable practical relevance, because it includes split-drain MAG-FETs and many vertical Hall-effect devices. The Equivalent Resistor Circuit (ERC) at vanishing magnetic field has three resistors with two different resistance values, but the device geometry has three parameters (two for the layout and one for the thickness). Hence, a 3C-device has 2 electrical DoF and 3 geometrical DoF. As a consequence, one cannot determine the sheet resistance by electrical measurements on a 3C-device. This is a striking contrast to devices with four contacts, whose Hall output voltage is a unique function of the ERC for fixed input current and Hall angle, whereas the Hall output voltage of 3C-devices is not defined by the ERC alone; in addition one needs the sheet resistance. We also gave an analytical formula for the weak magnetic field limit of the Hall-geometry factor as a function of the 2 DoF of the device layout. Various properties of this Hall-geometry factor of 3C-devices were discussed. Numerical values were given in tabular form and some of them were checked by finite element simulations. The maximum signal-to-noise ratio (SNR) at given impedance level is obtained for optimum devices, where the resistance between any two contacts equals a fixed multiple of the sheet resistance. It was also shown that VHalls with three contacts can be optimum for arbitrary depths and lengths of their tubs; however, their center contact has to be smaller than 4.6% of the tub length and smaller than 9.2% of the tub depth.
COMPETING INTERESTS
The author declares that there are no competing interests.
CONSENT FOR PUBLICATION
Not applicable.
APPENDIX A
Here we compute the integrals in (30). We start with the current density J_p along the straight boundary line of the rectangle. The mapping transforms the current density in the interval 0 ≤ u ≤ 1 (cf. Fig. 8d). Input voltage and current are linked via (17a): V_in = 2λ_ed R_sh I_in. With (16a) we have the corresponding expression in −1/k_d ≤ t ≤ −1. There it holds
(A1)
where we used the substitution y = −x. Thus, we get
(A2)
Next, we change the integration variable, du = (du/dt)(dt/dw)dw. For du/dt we use (14a) in the interval
Plugging (A5) and (A9) into (30), we get with (25), in the limit of small magnetic fields,
(A10)
where we used the abbreviation defined above. With [28,29] we can evaluate the first integral. The rest of (A10) can be integrated by parts with the same formulae by Prudnikov. J₀ is solved by partial integration.
APPENDIX B
With (52a,b) in [22] we have (B2a,b). Inserting (B2a,b) into (B1a) gives
(B3)
With (A17) and (A18) from [5] this leads to (B4). For λ_ed = 0 we set L_ed = 1 + dL_ed with small dL_ed < 0. We develop the second summand in the integrand in (31) into powers of dL_ed and integrate. The result goes to zero and so it vanishes for λ_ed = 0. The first summand in the integral in (31) can be integrated by parts. The integral at the R.H.S. of (B5) is finite at L_ed = 1. So the integral in (31) remains finite for λ_ed = 0, but the numerator vanishes. Therefore the Hall-geometry factor goes to zero.
With (19c,d) we see that λ_ed = 0 means α₁ = 0 while α₂, α₃ are arbitrary. This means that contacts C₁ and C₃ touch and the Hall signal disappears.
Fig. (1). (a) Traditional 4C-Hall-plate with current streamlines at strong magnetic field µ_H B_⊥ = 1 pointing out of the drawing plane. (b) Equivalent Resistor Circuit (ERC) of the same Hall plate at zero magnetic field.
Fig. (4b). With the sheet resistance this gives 3 DoF (k_12, k_23, R_sh); however, the ERC has only two resistance values (R_d, R_e). Again we may freely choose one out of k_12, k_23, R_sh to obtain any given ERC.
Fig. (4). (a) 3C-HHall with a single mirror symmetry axis Re{z}. (b) Its conformal transformation on the upper half of the w'-plane has 3 electrical DoF k_12, k_23, R_sh. (c) Its ERC has only 2 electrical DoF R_d, R_e.
6 resistors, but with the high symmetry we have only two resistance values R_H, 2R_D (Fig. (6b,c)) [10].
Fig. (9). (a-d) Sequence of transformations that map the lower half of a circular device with current flow along its axis of single mirror symmetry onto a rectangle, z → w → t̃ → q̃, with homogeneous current density.
W₂ < −1/3 and with −∞ ≤ W₃ ≤ −3. Since this device must also have electrical symmetry, the ERC leads to R_d = R_e. With (18a,b,c) it follows λ_d = λ_e = 3λ_ed. A device without geometrical symmetry may still have electrical symmetry R_d = R_e with λ_d = λ_e = 3λ_ed. With (14b), (16b) this means K'(k_d)/K(k_d) = 12K(k_ed)/K'(k_ed). This modular equation of degree 12 can be solved in two steps.
Fig. (10). All possible parameter sets α₁, α₂, α₃ for circular 3C-devices with single mirror symmetry from Fig. (4) that have electrical symmetry R_d = R_e. The straight solid black line denotes devices with geometrical 120°-symmetry. The rest of the surface denotes devices without geometrical symmetry but with electrical symmetry. The surface is symmetric: each device right of this line has a complementary device left of this line where contacts and isolating boundaries are swapped. Two examples of such devices are shown and their respective locations on the surface are indicated. The dashed lines denote devices with constant λ_ed and λ_d and thus identical Hall-geometry factor (i.e., constant Hall output signal at fixed supply current, see (31)).
Fig. (12). (a, c, e) Asymmetric 3C-VHall in operating phases ph₁, ph₂, ph₃. (b, d, f) Its respective ERCs. The dashed lines in (a, c, e) denote the global current flow in the Hall-effect regions. The potentials at the contacts are computed by the ERCs in (b, d, f) and the extra terms shown in (a, c, e), which have to be added to the potentials at the indicated contacts to account for the Hall effect.
The Hall-geometry factor is constant for all phases ph₁, ph₂, ph₃, too. If we know it and the ERC, we know the Hall output signals in all operating conditions.
In Fig. (8d) this electric field is simple to compute and homogeneous. It becomes inhomogeneous via the transformations q → t → w. For step 2 we look at the complete original device in Fig. (4) with its symmetry. There we note that with C₁ and C₃ at zero volts and the current J_n imposed on all isolating boundaries, the current flow of the Hall reaction will not flow across the real axis. So we can use the lower semi-circular region of Fig. (9a) to compute the Hall output voltage V_out. It is easier to use the w-plane in Fig. (9b), where we already have an expression for J_n in the relevant intervals from step 1. We can transform this current via w → t̃ → q̃ onto the rectangle in Fig. (9d), where J_n is impressed on the boundaries. To sum up: the homogeneous current J_p along the isolating boundary in the q-plane is transformed into the q̃-plane via q → t → w → t̃ → q̃, where it defines the boundary condition J_n = µ_H B_⊥ J_p on parts of the isolating boundary. The solution of the potential in the q̃-plane finally gives the Hall output voltage.
Hence, our analytical formula (31) is consistent with results from numerical simulations on 120°-symmetric 3C-HHalls. Interestingly, in Fig. (13a) the black solid curve does not lie on the crest of the surface: for fixed λ_d the function has its maximum for λ_ed > λ_d/3, thus not for symmetric devices. At fixed λ_d the function goes to zero for small λ_ed (which means that the spacing between contacts C₁ and C₂ gets small) and for λ_ed → λ_d (which means that contacts C₁ and C₃ get small while C₂ remains finite) (see Appendix B).
Fig. (13). The weak-field Hall-geometry factor of devices with three contacts and a single mirror symmetry versus the 2 DoF of the layout. In (a) the 2 DoF are λ_ed, λ_d; in (b) the 2 DoF are the resistances of the ERC normalized to the sheet resistance, R_d/R_sh, R_e/R_sh; and in (c) the 2 DoF are the resistances between two contacts normalized to the sheet resistance, R_C1→C2/R_sh, R_C1→C3/R_sh. The black solid curves denote devices with electrical symmetry R_d = R_e (i.e., λ_d = λ_e = 3λ_ed and R_C1→C2 = R_C1→C3). R_e ∥ 2R_d means the parallel connection of R_e and 2R_d.
Fig. (14). The SNR parameter for all 3C-devices with single mirror symmetry versus the 2 DoF λ_d, λ_ed. The black solid curve denotes devices with electrical symmetry R_d = R_e (i.e., λ_d = λ_e = 3λ_ed). This curve goes right over the peak of the surface.
Fig. (15). (a) Vertical Hall-effect device with three contacts and single mirror symmetry in the z-plane. (b) The same device in the normalized q-plane. (c) Its conformal transformation on the upper half of the t-plane. (d) Its final transformation on the lower half of the w'-plane: there it is rotated by 180° relative to Fig. (4b), with W_a' = W_4' = k_23 and W_c' = W_6' = k_12.
Fig. (16). Minimum required r_a of an optimum 3C-VHall device of Fig. (15a). For a given depth d and length r of the Hall-tub, the length r_a of the center contact has to stay below the respective curve (see (40)). | 9,636.2 | 2018-06-29T00:00:00.000 | [
"Physics",
"Engineering"
] |
Resolution of curvature singularities from quantum mechanical and loop perspective
We analyze the persistence of curvature singularities when probed with quantum theory. First, quantum test particles obeying the Klein-Gordon and Chandrasekhar-Dirac equations are used to probe the classical timelike naked singularity. We show that the classical singularity is felt even by our quantum probes. Next, we use loop quantization to resolve the singularity hidden beneath the horizon. The singularity is resolved in this case.
Introduction
One of the important predictions of Einstein's theory of general relativity is the formation of spacetime singularities. In classical general relativity, singularities are defined as points at which the evolution of timelike or null geodesics is not defined after a finite proper time. According to the classification of classical singularities devised by Ellis and Schmidt [1], scalar curvature singularities are the strongest ones in the sense that the spacetime possesses incomplete geodesics ending in them and all the physical quantities such as the gravitational field (scalars formed from the curvature tensor), energy density and tidal forces diverge at the singular point.
But such divergence of physical quantities signifies the breakdown of the predictive power of classical general relativity. If these singularities are covered by a horizon (as supposed by the Cosmic Censorship Conjecture), then at least the physically most relevant region of spacetime is under control. Naked singularities (those not covered by a horizon), on the other hand, provide an observer with causal access to the region of diverging quantities and should be avoided. However, even singularities covered by the horizon can be accessed by an infalling observer and, more importantly, we would like to have a theory that lacks divergences, at least effectively.
The natural direction for resolving the problem of singularities in classical theory is investigating their persistence in the quantum picture. Although we do not have a final quantum theory of gravity, we still have several tools for analyzing quantum singularities. The first approach relies on examining properties of quantum particle wave functions on the background represented by the studied geometry. This is a frequently used technique based on well understood properties of operators on a Hilbert space. To move further, one might proceed to quantum fields and possibly even include the backreaction on the background geometry using semiclassical Einstein equations with a suitably regularized stress-energy tensor. Finally, one can apply quantization of the geometry itself. The last approach is in principle the most precise but relies on the selected quantization method, and we have no generally accepted one in the case of gravity.
We will apply two of the above mentioned approaches to the analysis of the singularity of the general metric of the global monopole [16], which is determined by two parameters: one characterizing the "Schwarzschild-type mass" and the other the deficit of solid angle. The singularity is generally covered by a single horizon, but the class of metrics also contains, as a special case, a naked singularity which is analyzed from the quantum mechanical point of view using the technique of Horowitz and Marolf [17] (who continued the pioneering work of Wald [18]). This method for analyzing timelike singularities is based on the investigation of self-adjoint extensions of the evolution operator associated with the given wave equation. If it is unique, the spacetime is deemed quantum mechanically non-singular. The analysis is carried out for relativistic quantum particle wave equations on a fixed background. Specifically, we review the previous results for the Klein-Gordon equation and show the calculation using the Newman-Penrose formalism for the Dirac equation, both in the case of the pure global monopole with naked singularity for which the method was developed.
But as already mentioned, the most reliable method when trying to investigate the possible removal of singularities from the geometry is quantum gravity. Here we have selected the loop quantization method inspired by [19,20,21], where the spacetime beneath the horizon (in the non-naked subclass) is isometric to the Kantowski-Sachs cosmology. Then one can apply methods from Loop Quantum Cosmology (LQC) that are based on loop quantization on the restricted configuration space. In this way, the results for the resolution of the initial cosmological singularity are translated to statements about the singularity at the origin r = 0.
The General Metric for the Global Monopole
It is well known that different types of non-standard topological objects may have formed during the early evolution of the Universe, such as domain walls, cosmic strings and monopoles [16,22]. The basic idea is that these topological defects formed as a result of a breakdown of local or global gauge symmetries. The simplest model that gives rise to a global monopole is described by the Lagrangian

L = (1/2) ∂_µφᵃ ∂^µφᵃ − (λ/4)(φᵃφᵃ − η²)²,

where φᵃ is a triplet of scalar fields, a = 1, 2, 3. The model has a global O(3) symmetry, which is spontaneously broken to U(1). The field configuration describing the monopole is φᵃ = η f(r) xᵃ/r, where xᵃxᵃ = r². We assume that the underlying geometry is general static spherically symmetric, described by the line element

ds² = −B(r) dt² + A(r) dr² + r²(dθ² + sin²θ dφ²),

with the usual relation between the spherical coordinates r, θ, φ and the Cartesian coordinates xᵃ. The Lagrangian for the above field configuration simplifies accordingly, and the energy momentum tensor is diagonal. The general solution of the Einstein equations with this T^ν_µ is

B(r) = A⁻¹(r) = 1 − 8πGη² − 2GM/r,

where M is a constant of integration. The metric describes a black hole of mass M carrying a global monopole charge characterized by η. Such a black hole can be formed if a global monopole is swallowed by an ordinary black hole [16]. The Kretschmann scalar, which indicates the formation of a curvature singularity, is given by

R_{µνρσ}R^{µνρσ} = 48M²G²/r⁶ + 128πMG²η²/r⁵ + 256π²G²η⁴/r⁴.

It is obvious that r = 0 is a typical central curvature singularity (a scalar curvature singularity according to the above classification) and the dominant contribution comes from the term corresponding to the black hole mass M. If M > 0 the singularity is evidently spacelike and covered by a single horizon.
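The quoted Kretschmann scalar can be checked symbolically. A minimal sketch, assuming the Barriola-Vilenkin form of the metric function given above; sympy is the only dependency (the quadruple loop is slow but finishes):

```python
# Symbolic sketch: Kretschmann scalar of the global-monopole black hole,
# assuming f(r) = 1 - 8*pi*G*eta**2 - 2*G*M/r.
import sympy as sp

t, th, ph = sp.symbols('t theta phi', real=True)
r = sp.Symbol('r', positive=True)
G, M, eta = sp.symbols('G M eta', positive=True)
x = [t, r, th, ph]
f = 1 - 8*sp.pi*G*eta**2 - 2*G*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # diagonal metric
gi = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}; the metric is diagonal, so only g^{aa} enters.
Gam = [[[sp.simplify(gi[a, a]*(sp.diff(g[a, b], x[c]) + sp.diff(g[a, c], x[b])
         - sp.diff(g[b, c], x[a]))/2)
         for c in range(n)] for b in range(n)] for a in range(n)]

def riem(a, b, c, d):
    """Riemann tensor component R^a_{bcd}."""
    e = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    e += sum(Gam[a][c][k]*Gam[k][b][d] - Gam[a][d][k]*Gam[k][b][c]
             for k in range(n))
    return sp.simplify(e)

# K = R_{abcd} R^{abcd}; with a diagonal metric, indices are raised and
# lowered by the diagonal entries alone.
K = 0
for a in range(n):
    for b in range(n):
        for c in range(n):
            for d in range(n):
                R = riem(a, b, c, d)
                if R != 0:
                    K += g[a, a]*gi[b, b]*gi[c, c]*gi[d, d]*R**2

print(sp.simplify(sp.expand(K)))
# expected: 48*G**2*M**2/r**6 + 128*pi*G**2*M*eta**2/r**5 + 256*pi**2*G**2*eta**4/r**4
```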
Global monopole and its singularity
If we assume that the mass term is negligible on the astrophysical scale, or vanishing, we will have

ds² = −(1 − 8πGη²) dt² + (1 − 8πGη²)⁻¹ dr² + r²(dθ² + sin²θ dφ²). (7)

For simplicity we choose α² = 1 − 8πGη², and by rescaling the r and t variables we can rewrite the monopole metric as

ds² = −dt² + dr² + α²r²(dθ² + sin²θ dφ²). (8)

If we calculate the Kretschmann scalar, there is still a (weaker) singularity at r = 0. From the metric (7) one can immediately see that the singularity is timelike. This time, because our simplified metric does not have the horizon, the singularity is naked.
Naked Singularity
As mentioned in the Introduction, a naked singularity poses serious problems and its resolution would be desirable. In this section, the occurrence of naked singularities in the global monopole will be analyzed from the quantum mechanical point of view. In probing the singularity, quantum test particles obeying the Klein-Gordon and Dirac equations are used. The reason for using two different types of fields is to clarify whether the classical singularity is sensitive to the spin of the fields.
According to Horowitz and Marolf (HM) [17], the singular character of the spacetime is defined as the ambiguity in the evolution of the wave functions. That is to say, the singular character is determined based on the number of self-adjoint extensions of the evolution operator to the entire Hilbert space. If the extension is unique, it is said that the spacetime is quantum mechanically regular. A brief review of the method follows: Consider a static spacetime (M, g_µν) with a timelike Killing vector field ξ^µ. Let t denote the Killing parameter and Σ denote a static slice. The Klein-Gordon equation in this space can be written in the form ∂²ψ/∂t² = −Aψ, in which f = −ξ^µξ_µ and D_i is the spatial covariant derivative on Σ. We assume that the Hilbert space H = L²(Σ, µ) is the space of square integrable functions on Σ with appropriate measure µ. Initially the operator A is defined on smooth functions with compact support C₀^∞(Σ). Since the operator A is real, positive and symmetric, its self-adjoint extensions always exist. If it has a unique extension A_E, then A is called essentially self-adjoint [23,24,25]. Accordingly, the Klein-Gordon equation for a free particle satisfies i dψ/dt = √(A_E) ψ, with the solution ψ(t) = exp(−it√(A_E)) ψ(0). (12) If A is not essentially self-adjoint, the future time evolution of the wave function (12) is ambiguous. Then the HM criterion defines the spacetime as quantum mechanically singular. However, if there is only a single self-adjoint extension, the operator A is said to be essentially self-adjoint and the quantum evolution described by equation (12) is uniquely determined by the initial conditions. According to the HM criterion, this spacetime is said to be quantum mechanically non-singular.
In order to determine the number of self-adjoint extensions, the concept of deficiency indices is used. The deficiency subspaces N_± ⊂ H are defined by N_± = {ψ ∈ H : A*ψ = ±iψ}, with dimensions n_± (see Ref. [26] for a detailed mathematical background). If there are no square integrable solutions (i.e., n_+ = n_− = 0), the operator A possesses a unique self-adjoint extension and it is essentially self-adjoint. Consequently, a sufficient condition for the operator A to be essentially self-adjoint is to find only solutions of Eq. (14) that do not belong to the Hilbert space.
Klein-Gordon Fields
The Klein-Gordon equation for a massless scalar particle is □ψ = 0. For the metric (8), using separation of variables, ψ = R(r) Y_l^m(θ, φ), the spatial operator A reads

A = −(1/r²) ∂_r(r² ∂_r) + l(l+1)/(α²r²),

and the equation to be solved is (A* ± i)ψ = 0. The radial portion of equation (14) then becomes

(1/r²) (r² R')' − l(l+1)/(α²r²) R = ∓ i R.

The square integrability of the solutions is checked by calculating their squared norm, in which the function space on each t = constant hypersurface Σ is defined as H = L²(Σ, µ), where µ is the measure given by the spatial metric volume element.
We easily recover the results shown in [8]: the spacetime of the global monopole remains singular in the view of relativistic quantum mechanics. The future of a given initial wave packet obeying the Klein-Gordon equation is not generally well determined, similarly to the future of a classical particle which reaches the classical singularity at r = 0.
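The square-integrability test that leads to this conclusion can be sketched symbolically. Near r = 0 the radial equation reduces to an Euler equation with R ~ r^p; the indicial equation p(p + 1) = l(l + 1)/α² is my reading of the leading behaviour of the operator above, and the sample value of α is invented:

```python
# Sketch of the HM square-integrability test for metric (8) near r = 0.
# With the measure ~ r^2 dr, the norm integral int_0 r^(2p+2) dr converges
# iff 2p + 3 > 0.
import sympy as sp

p = sp.Symbol('p')
alpha = sp.Rational(9, 10)        # sample deficit parameter alpha < 1 (invented)

for l in (0, 1, 2):
    nu = sp.Rational(l*(l + 1))/alpha**2
    for root in sp.solve(sp.Eq(p*(p + 1), nu), p):
        ok = (2*root + 3).evalf() > 0
        print(f"l={l}: p={root}  square integrable near 0: {ok}")
# For l = 0 both roots (p = -1 and p = 0) pass the test, i.e. both solutions
# are normalizable near the origin -- the hallmark of a non-unique extension.
```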
Dirac Fields
The Newman-Penrose formalism will be used here to analyze a massless Dirac particle propagating in the space of the global monopole. The signature of the metric (8) is changed to −2 in order to use the Dirac equation in the Newman-Penrose formalism. The Chandrasekhar-Dirac (CD) [10] equations in the Newman-Penrose formalism couple the components F₁, F₂, G₁ and G₂ of the wave function; ε, ρ, π, α, µ, γ, β and τ are the spin coefficients to be found, and the bar denotes complex conjugation. The null tetrad vectors are defined for the metric (19), and the directional derivatives in the Dirac equation are D = lᵃ∂_a, ∇ = nᵃ∂_a and δ = mᵃ∂_a. Substituting the nonzero spin coefficients and the definitions of the operators into the CD equations leads to (24). For the solution of the CD equations, we assume a separable solution of the form (25). Here {f₁, f₂, g₁, g₂} and {Y₁, Y₂, Y₃, Y₄} are functions of r and θ respectively, m is the azimuthal quantum number and k is the frequency of the Dirac spinor, which is assumed to be positive and real. By substituting (25) in (24) we see that, with the assumptions f₁(r) = g₂(r) and f₂(r) = g₁(r), the Dirac equation reduces to two equations, whose radial parts involve a separation constant λ. After a further substitution, equation (28) transforms into an equation with λ' = λ/α. In order to write the equations in a more compact form we combine the solutions, and after some calculation we end up with a pair of one-dimensional Schrödinger-like wave equations with effective potentials. In analogy with equation (10), the spatial operator A for the massless case follows. The solutions of the above equations are expressible in terms of Bessel functions of the first and second kind. Using the asymptotic formulas for Bessel functions when r → ∞, Y(κ, z) ≈ z^(−1/2) sin(z − κπ/2 − π/4) and J(κ, z) ≈ z^(−1/2) cos(z − κπ/2 − π/4), and noting the complex argument in both solutions, one can find a combination of constants C₁, C₂ or C'₁, C'₂ which is square integrable near infinity. (But it is also possible to choose the constants differently so that both solutions are not square integrable!)
When r → 0, the approximate expressions for the Bessel functions (Y(κ, z) ≈ z^(−κ) for κ ≠ 0, Y(0, z) ≈ ln(z/2), and J(κ, z) ≈ z^κ) imply that for C₂ = 0 and C'₂ = 0 we have square integrable solutions near zero. (Here again, if we suppose C₁ = 0 and C'₁ = 0, for κ ≥ 3/2 the solutions are not square integrable! One could restrict the analysis to only certain wave modes and purposely choose the modes to be quantum regular.)
But since we have a solution of the equations valid on the whole domain (not just asymptotic forms of the equations), we can match the behaviour at zero and infinity. Based on the results, we can have a solution square integrable over the whole domain and therefore our deficiency indices are nonzero. The operator is not essentially self-adjoint and the spacetime is quantum mechanically singular.
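The small-r integrability statements can be checked numerically. A sketch assuming the flat measure dr of the one-dimensional Schrödinger-like form (the complex argument of the actual solutions is ignored here, since only the power-law behaviour near zero matters):

```python
# Numeric sketch: small-r square integrability of the Bessel solutions.
# Near zero J_kappa ~ r^kappa is harmless, while Y_kappa ~ r^(-kappa) makes
# int_0 |Y_kappa|^2 dr diverge for kappa >= 1/2; kappa = 3/2 matches the text.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, yv

kappa = 1.5
for eps in (1e-2, 1e-3, 1e-4):
    IJ, _ = quad(lambda s: jv(kappa, s)**2, eps, 1.0, limit=200)
    IY, _ = quad(lambda s: yv(kappa, s)**2, eps, 1.0, limit=200)
    print(f"eps={eps:.0e}:  int|J|^2 = {IJ:.6f}   int|Y|^2 = {IY:.3e}")
# int|J|^2 settles to a finite value; int|Y|^2 grows ~ eps^-2, i.e. diverges.
```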
Quantum Gravity
Now we are going to investigate the singularity of the general global monopole using techniques from loop quantization in the manner of [20]. Consider equation (2) for r < 2GM/(1 − 8πGη²). This metric describes the spacetime inside the horizon of a black hole. The coordinate r is timelike and the coordinate t is spatial there; for convenience we rename them as r ≡ T and t ≡ r, with T ∈ [0, 2GM/(1 − 8πGη²)] and r ∈ (−∞, +∞), and the metric becomes

ds² = −(2GM/T − α²)⁻¹ dT² + (2GM/T − α²) dr² + T²(dθ² + sin²θ dφ²).

We eliminate the coefficient of dT² by defining a new temporal variable τ via dτ = dT/√(2GM/T − α²). Accordingly, the metric becomes

ds² = −dτ² + (2GM/T(τ) − α²) dr² + T²(τ)(dθ² + sin²θ dφ²).

We introduce two functions a²(τ) ≡ 2GM/T − α² and b²(τ) ≡ T²(τ), and redefine τ ≡ t. The metric becomes

ds² = −dt² + a²(t) dr² + b²(t)(dθ² + sin²θ dφ²);

this metric describes a homogeneous, anisotropic Kantowski-Sachs cosmological model with spatial sections having topology R × S². From this observation comes the motivation to use the LQC approach. In our case a(t) is a function of b(t).
Classical observables
The corresponding action for gravity minimally coupled with a scalar field can be written in the standard Einstein-Hilbert form. Considering the metric (38), the action reduces to a mechanical action for a(t) and b(t); using the relation between a and b, we are able to write the action in terms of a single function. Now we compute the Hamiltonian (Hamiltonian constraint). The momentum associated to the chosen configuration variable follows by the Legendre transformation, and therefore we obtain
Now, we calculate the Hamiltonian constraint in terms of ḃ
and immediately get the following solution, which is exactly equation (36). When the horizon radius r_h = 2GM/α² is much larger than the scale on which we are probing the singularity, we can approximate, so the Hamiltonian simplifies. The volume also simplifies when using the above approximation. The canonical pair is given by b ≡ x and p_b, with Poisson bracket {x, p_b} = 1.
For isotropic models, only holonomies evaluated in isotropic connections A^i_a = c̃ δ^i_a appear. Along straight lines in the direction of the translation symmetries X^a_I = ∂/∂X^a_I, holonomies exp(X^a_I A^i_a τ_i) in the fundamental representation of SU(2) have matrix elements of the form exp(iµc), where µ depends on the length of the curve used. Here it turns out to be useful to introduce c := V₀^(1/3) c̃, defined in terms of the coordinate size V₀ of the region used to define the isotropic phase space [27].
Using this motivation we introduce the following function U_γ(p), which will be used instead of the momentum (from now on we leave out the subscript b for the momentum associated with this observable) [20]; γ is a real parameter and L fixes the length scale. The parameter γ determines the separation of momentum points in the phase space. The pair (x, U_γ(p)) has a simple Poisson bracket algebra, as a straightforward calculation shows. We are concerned with the quantity 1/|x|, which can serve as an indicator for the presence of the singularity, because classically it diverges for |x| → 0, thus producing the singularity. From this moment we choose n = 1/3.
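The bracket algebra of the pair (x, U_γ) follows from {x, p} = 1. A one-line symbolic check, taking U_γ = exp(8πiGγLp) as written below in the quantization section:

```python
# Symbolic check of the Poisson algebra of (x, U_gamma) with {x, p} = 1,
# taking U_gamma = exp(8*pi*I*G*gamma*L*p) as written in the text.
import sympy as sp

x, pm, G, L, gamma = sp.symbols('x p G L gamma', real=True)
U = sp.exp(8*sp.pi*sp.I*G*gamma*L*pm)

def poisson(f, h):
    return sp.diff(f, x)*sp.diff(h, pm) - sp.diff(f, pm)*sp.diff(h, x)

print(sp.simplify(poisson(x, U)/U))   # 8*I*pi*G*L*gamma -> {x, U} is prop. to U
```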
Quantization
We will use the basis of the Hilbert space introduced in [19,20], which is formed by eigenstates of x̂. This implies the existence of a self-adjoint operator x̂ acting on the basis states. Next, we want to promote the classical momentum function U_γ = e^(8πiGγLp) to an operator. We can do so by defining the action of Û_γ on the basis states with the help of the definition equation (53), and using the commutation relation based on the Poisson bracket between x and U_γ we obtain
Volume operator and disappearance of the singularity
In the vicinity of the singularity we assume the approximate equation (48). Then the volume operator acts in a simple way on the basis states. Using equation (52) and promoting the Poisson brackets to commutators, while setting γ = 1, we find the operator corresponding to 1/|x|; on the basis states it acts according to (58), so finally we get its spectrum. We can see that the spectrum is bounded from above and so the singularity is resolved in the quantum theory (the theory gives finite predictions for observables related to the singularity). In fact, the eigenvalue of the operator 1/|x| corresponding to the state |0⟩, which probes the classical singularity, is equal to 2/(πl_p²), which is the highest eigenvalue of the spectrum. Specifically, the operator corresponding to the curvature invariant R_{µνρσ}R^{µνρσ} = 48M²G²/r⁶ + 128πMG²η²/r⁵ + 256π²G²η⁴/r⁴ is then automatically finite in quantum mechanics; promoting it to an operator and evaluating it on |0⟩ gives a finite value. On the other hand, when |µ| → ∞ the eigenvalue of 1/|x| goes to zero, which is the natural behaviour for large |x|. Also, it is possible to show that the quantum Hamiltonian constraint gives a discrete difference equation for the coefficients of the physical states.
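The finiteness of the curvature-invariant operator is a one-line evaluation once the bounded eigenvalue is in hand. A sketch using only the values quoted above; Planck units l_p = G = 1 and M = η = 1 are invented, purely for illustration:

```python
# Numeric sketch: the curvature-invariant operator on |0>, replacing each
# 1/r by the bounded top eigenvalue B = 2/(pi*l_p^2) quoted in the text.
import numpy as np

l_p, G, M, eta = 1.0, 1.0, 1.0, 1.0
B = 2.0/(np.pi*l_p**2)              # largest eigenvalue of the 1/|x| operator
K = (48*M**2*G**2*B**6 + 128*np.pi*M*G**2*eta**2*B**5
     + 256*np.pi**2*G**2*eta**4*B**4)
print(K)    # finite, in contrast to the classical divergence as r -> 0
```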
Conclusion
We have seen that we have not been successful in removing the naked singularity by using relativistic quantum mechanics (for both the Klein-Gordon and Dirac equations). On the other hand, we have shown that the curvature singularity of the general global monopole is resolved when the geometry is quantized using loop techniques. Unfortunately, one cannot directly compare the results, because the loop quantization relied on the radial coordinate being timelike beneath the horizon, which is not the case for the naked singularity of the pure monopole. But still, this might be an indication that the first method is not reliable for determining the fate of singularities in quantum theory and one should rather focus on quantization of the geometry itself. But even the approach using loop quantization, which relied on a restricted class of geometries, should not be trusted completely. One should allow, e.g., for deviations from spherical symmetry to be completely sure about the fate of singularities. | 4,695.6 | 2013-12-30T00:00:00.000 | [
"Physics"
] |
A random Q-switched fiber laser
Extensive studies have been performed on random lasers in which multiple-scattering feedback is used to generate coherent emission. Q-switching and mode-locking are well-known routes for achieving high peak power output in conventional lasers. However, in random lasers, the ubiquitous random cavities that are formed by multiple scattering inhibit energy storage, making Q-switching impossible. In this paper, widespread Rayleigh scattering arising from the intrinsic micro-scale refractive-index irregularities of fiber cores is used to form random cavities along the fiber. The Q-factor of the cavity is rapidly increased by stimulated Brillouin scattering just after the spontaneous emission is enhanced by random cavity resonances, resulting in random Q-switched pulses with high brightness and high peak power. This report is the first observation of high-brightness random Q-switched laser emission and is expected to stimulate new areas of scientific research and applications, including encryption, remote three-dimensional random imaging and the simulation of stellar lasing.
Disordered optics, which is the subject of light transport through non-uniform media, has attracted considerable interest and has numerous potential applications such as imaging, remote sensing, random lasers, and solar energy 1 . Among these areas, random lasers are especially important because of their unique properties and rich underlying laser physics, which are of fundamental scientific interest. Since the concept of the random laser was first introduced by Letokhov et al. 2 , random lasers have been realized using bulk powders, dye solutions containing particles, multilayered films, fiber configurations, non-uniform waveguides, and even atomic vapors [3][4][5][6][7][8][9][10][11][12][13][14] .
Unlike regular lasers, in which parallel mirrors are used to produce feedback and resonant modes, random lasers depend on multiple scattering in disordered media to trap light, where the interference of the scattered light results in resonant modes at particular frequencies 15 . Random lasers do not exhibit stationary resonance and thus display strong space- and time-dependent fluctuations in their emission properties, i.e., the emission direction, laser strength, and emission spectrum. In the development of random lasers, significant effort has been expended to improve the directionality of the laser emission 7,9,12,16 and frequency selection [17][18][19] , and the use of fiber configurations has resulted in a new level of control 9,10,20-23 . However, efficient control of random laser emissions in the temporal regime has not yet been achieved, and the emission brightness of random lasers is comparatively low. Standard regular lasers, which have definite cavity modes, can be Q-switched with a conventional modulator to produce giant energy pulses and high peak power. Random lasers have a large number of randomly distributed modes that are strongly coupled together, which generate emissions in all directions at various positions and rapidly deplete the pump-accumulated energy. Consequently, energy cannot be effectively stored in random lasers, and their Q-values cannot be controlled using traditional techniques.
Here, we exploit strongly nonlinear effects in random fiber lasers to modulate the Q-value of the system and to achieve Q-switching. We use feedback from Rayleigh scattering to define the random cavity modes and stimulated Brillouin scattering (SBS) to improve the Q-value of the cavity quickly and significantly, thus efficiently combining random lasing and Q-switching in a simple fiber configuration. The random Q-switched fiber laser (RQFL) outputs stored energy over a short time interval, which greatly improves the emission brightness (peak power). Experiments have shown that the RQFL produces pulses with a peak power above 2 kW. Q-switched random lasers with such high peak powers can not only reveal rich underlying laser physics and related dynamics occurring in the interaction between light and disordered media but can also greatly enhance the applicability of random lasers. Immediate intriguing applications of high-power random lasers include speckle-free imaging 24 and full-field optical coherence tomography 25 . Figure 1 displays a schematic of the apparatus and the operating principle of the RQFL. The laser system primarily consists of one piece of an ultra-high numerical aperture (UHNA) passive fiber that is fusion-spliced with one piece of a Tm³⁺ gain fiber. The suspended end of the UHNA fiber is cleaved at an angle of ~10° to ensure that feedback is only produced by randomly distributed Rayleigh scattering. The open end of the Tm³⁺ fiber is perpendicularly cleaved to provide ~4% Fresnel feedback and also acts as the output coupler. A high gain is achieved by using high-power laser diodes to pump the double-clad Tm³⁺-doped silica fiber. The unique properties of the passive fiber (small core area and large NA) enhance both the light intensity in the fiber core and the overlap integral between photons and phonons, facilitating light backscattering and reducing the SBS threshold.
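Two of the numbers above follow from textbook estimates. A back-of-envelope sketch; the refractive index, Brillouin gain coefficient, core radius and effective length are typical silica-fiber values assumed here, not the experimental parameters:

```python
# Sketch: the ~4% Fresnel feedback of a perpendicular silica cleave, and the
# classic SBS threshold estimate P_th ~ 21 * A_eff / (g_B * L_eff).
import numpy as np

n = 1.45                                  # silica refractive index
R = ((n - 1.0)/(n + 1.0))**2
print(f"Fresnel reflectance: {R:.1%}")    # ~3.4%, i.e. the ~4% feedback

g_B = 5e-11                               # Brillouin gain coefficient (m/W)
A_eff = np.pi*(1.5e-6)**2                 # small UHNA core area (m^2)
L_eff = 10.0                              # effective fiber length (m)
print(f"SBS threshold: {21*A_eff/(g_B*L_eff):.2f} W")  # small core -> low threshold
```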
Results
As a proof of concept, the underlying mechanism of the random Q-switching process is described here (see Fig. 1). First, under pumping, spontaneous emission at ~2 µm (near the gain peak) is generated in the Tm³⁺ fiber. During propagation of the spontaneous emission, micro-scale irregularities of the fiber core stimulate spontaneous Rayleigh scattering, especially in the UHNA fiber because of its specific fiber configuration. These randomly distributed Rayleigh scatterings act as a large number of frozen Rayleigh reflectors along the fiber 10,26 and return a fraction of the spontaneous emission back to the laser cavity. The light signal that is enhanced solely by the Rayleigh-scattering feedback is still very weak, and the Q-value of the cavity remains at a low level. However, when these randomly distributed Rayleigh reflectors act synchronously with the ~4% Fresnel reflection of the perpendicularly cleaved gain fiber facet, many closed feedback loops are activated, which results in the formation of a large number of random modes. These as-formed random cavity resonances increase the light intensity and consequently increase the Q-value of the cavity to a moderate level. Some random cavity modes are more highly overlapped with the gain peak of the Tm³⁺ fiber and are considerably amplified, which, in combination with the simultaneous linewidth narrowing due to Rayleigh scattering, finally stimulates the SBS processes. The occurrence of SBS dramatically increases the Q-value of the system to an extremely high level, and laser oscillations suddenly appear. Consequently, a giant pulse forms and is coupled out from the system. The export of the giant pulse depletes the intra-cavity stored energy and stops the SBS process; thereby the Q-value of the cavity drops. Under continuous pumping, the gain and the Q-value of the cavity are recurrently driven up and down by the SBS process, thus generating random pulse trains.
For low pump power, no pulse emissions are observed because spontaneous Rayleigh scattering and random cavity resonance cannot increase the Q-value of the system to a sufficiently high level. Thus, the entire cavity remains in a high loss state. However, an entirely different scenario results when the pump power is increased to a certain level (~4 W). High-intensity pulses are occasionally observed (see Fig. 2). These giant pulses appear occasionally and vanish suddenly, but this random pulsing is always observed provided the pumping is sustained. At this point, the random Q-switching regime arises. Each generated single pulse has an extremely high intensity (with a peak power above 1 kW). This generation of giant pulses in the random Q-switching state results from rapid enhancement of the Q-value of the cavity by the stimulated SBS process. Only those modes that have a sufficiently high intensity to stimulate SBS can overcome the total high system loss, thus improving the Q-value of the cavity rapidly and significantly. In this domain, the output emission exhibits significant fluctuations in both the temporal and spectral regimes (which vary significantly among measurements). This behavior occurs because of the strong coupling among a large number of spatially overlapping random modes; competition among these modes results in a highly chaotic emission 27,28 . The random resonance can occasionally cause the Q-value of the system to reach a high level (leading to giant pulse generation); however, at other times, the Q-value cannot be increased to such a level, and no giant pulses are produced.
As the pump power increases, the pulse intensity increases, and the random giant pulses become denser (the number of pulses for a fixed time interval increases). The concentration of random pulses as a function of the pumping level can be found in the supplementary information (Fig. S1). To elucidate the random pulsing characteristics of the RQFL, another pulse train is measured at a comparatively higher pump level (8 W) over a smaller time window (Fig. 3a). The results show large fluctuations in both the peak intensity and the pulsing period. To investigate the random pulsing characteristics in detail, we sample 200 consecutive single pulses to statistically analyze the variation in the pulsing period and the pulse width: the results are shown in Fig. 3b. Here, the pulse width is the envelope value of each pulse, and the deviations in the pulsing period and the pulse width are given relative to their respective mean values. Greater than 50% of the pulses show ~20% deviation in both the pulsing period and the pulse width, and some pulses even demonstrate deviations in the period and the pulse width above 60%. The standard deviation of the period with respect to the corresponding mean value (i.e., the ratio of the standard deviation to the mean value) is ~10%. The standard deviation of the pulse width with respect to the mean value is ~17%. The large fluctuations in the pulsing characteristics (period, pulse width and intensity) clearly show that the laser emission originates from random cavities (the cavity length is random and fluctuates with time) and also demonstrates the unique features of random Q-switched lasers.
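The statistical analysis described here is straightforward to reproduce on digitized pulse-train data. A minimal numpy sketch with synthetic data (the normal-distributed arrays stand in for measured periods and widths; the loc/scale values are invented):

```python
# Sketch of the pulse-train statistics: relative deviations and std/mean
# ratios over 200 consecutive pulses.
import numpy as np

rng = np.random.default_rng(0)
periods = rng.normal(loc=25e-6, scale=2.5e-6, size=200)   # s (invented)
widths  = rng.normal(loc=42e-9, scale=7e-9,  size=200)    # s (invented)

for name, v in (("period", periods), ("width", widths)):
    rel = (v - v.mean())/v.mean()
    print(f"{name}: std/mean = {v.std()/v.mean():.1%}, "
          f"fraction beyond 20% deviation = {np.mean(np.abs(rel) > 0.20):.1%}")
```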
The shape and width of a single pulse are unpredictable. The pulse shape differs significantly among pulses (see Fig. S3 in the supplementary information), and the pulse envelope width fluctuates from almost 20 ns to over 70 ns (see Fig. 3b). These pulses primarily consist of two sub-pulses (see the inset of Fig. 3a), each with a pulse width of ~20 ns. The sub-pulse width is consistent with the phonon decay time of ~20 ns in silica fiber 29 . This result confirms that the significant improvement in the Q-value of the cavity results from the SBS process aided by acoustic waves.
As the pump power is increased, the output power increases accordingly, and the pulse density continues to increase. The random pulsing regime can be sustained up to the maximum available pump power (~50 W). The evolution of several pulsing characteristics (repetition rate, pulse width and peak power) of the RQFL as a function of pump power is shown in Fig. 4. The linear increase of the repetition rate with pump power is similar to that of conventional passively Q-switched lasers 30 , which clearly demonstrates the Q-switching feature of our random fiber laser. The pulse envelope width fluctuates around a horizontal line (~42 ns) but does not significantly shrink or spread as the pumping strength increases (Fig. 4b). This behavior is very different from that of conventional Q-switched lasers, for which the pulse duration depends strongly on the pump power 30 and is sensitive to the turn-on time of active Q-switches or the recovery time of passive Q-switches. This behavior arises because the Q-switched pulses in our fiber laser are stimulated by the SBS process, which has a constant relaxation time (the phonon lifetime) for a given host material. Another key feature of our RQFL is that it operates in the high peak power regime. The maximum peak power is above 2 kW (Fig. 4c), which represents the first evidence of high peak power performance in random lasers. The peak power is clamped at approximately 2.3 kW under high pump levels, demonstrating the quantization-like behavior of the pulse energy in the RQFL. These high peak power random Q-switched lasers can considerably broaden the applicability of random lasers in areas requiring high-power emissions.
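The linear dependence of the repetition rate on pump power can be quantified with an ordinary least-squares fit. A sketch with invented placeholder data standing in for the measured values of Fig. 4a:

```python
# Sketch: least-squares line for repetition rate versus pump power, the
# linear trend expected for passively Q-switched lasers.
import numpy as np

pump = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # W (invented)
rate = np.array([8.0, 17.0, 36.0, 54.0, 73.0, 91.0])   # kHz (invented)

slope, intercept = np.polyfit(pump, rate, 1)
print(f"rate ~ {slope:.2f} kHz/W * P + {intercept:.2f} kHz")
```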
Some of the characteristics of the RQFL can be observed from the measured laser spectra (Fig. 5). At low pump power levels, few modes can go into oscillation because of the limited gain, and the spectrum varies significantly among measurements. Details of the variation in the spectral features at a fixed pump power can be found in the supplementary information (Fig. S4). At higher pump power levels, more spectral peaks appear because a greater number of random modes obtain a sufficient gain to overcome the cavity loss. Further increasing the pump power level lifts up the spectral envelope, causing more random modes to converge together. The overlap of multiple peaks in the spectral envelope clearly demonstrates the characteristics of coherent random lasers [20]. The central spectral wavelength of this RQFL is in good agreement with the gain peak of the fiber and shows a negligible dependence on the pump power level, which is also consistent with the behavior of conventional random solid-state lasers [18]. In random lasers, a large number of random modes compete with each other for gain and cancel each other out in the cavity. The modes with frequencies near the peak gain are more likely to increase their gain over those modes that are far from the peak gain and are thus able to survive.
[Figure caption, statistical properties of the pulse train: the horizontal axis shows the deviation relative to the average value, where every 20% variation relative to the average value is denoted as one range (e.g., 0% denotes deviations between −10% and 10%, and 20% denotes deviations between 10% and 30%); the vertical axis shows the pulse-number percentage with respect to the total pulse number; a total of 200 consecutive pulses is used for statistical analysis of the pulse train.]
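The binning scheme described in the caption above is straightforward to make concrete. The sketch below reproduces it on synthetic pulse energies; the data and the function name `deviation_histogram` are illustrative choices, not taken from the paper.

```python
import numpy as np

def deviation_histogram(values, bin_width=0.2):
    """Bin relative deviations from the mean into 20%-wide ranges, following
    the statistical scheme described in the caption above."""
    values = np.asarray(values, dtype=float)
    dev = (values - values.mean()) / values.mean()     # relative deviation
    # Range centers 0%, +/-20%, ...: the 0% bin covers [-10%, +10%), etc.
    centers = np.round(dev / bin_width) * bin_width
    labels, counts = np.unique(centers, return_counts=True)
    percent = 100.0 * counts / len(values)             # pulse-number percentage
    return {int(round(l * 100)): p for l, p in zip(labels, percent)}

# 200 synthetic pulse energies stand in for the measured pulse train
rng = np.random.default_rng(0)
print(deviation_histogram(rng.normal(1.0, 0.25, size=200)))
```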
Discussion
In conventional random lasers [3-5], a large number of random cavity modes exist simultaneously across the disordered material (as both localized and extended modes), simultaneously overlapping and competing with each other. Therefore, the random laser emission exhibits a strong spectral dependence and an angular dependence (i.e., the laser emission radiates in all directions) and dissipates energy at all points in space and time. This behavior renders energy storage impossible in random lasers. However, conventional Q-switching techniques (e.g., acousto-optic and electro-optic modulators) are based on periodic energy storage and depletion and therefore cannot be transplanted directly into random systems to achieve high-energy pulsing operation.
In contrast, in our system, we first adopt a fiber configuration to confine the random emission in one dimension and then use a UHNA fiber to strengthen the Rayleigh scattering and realize strong random cavity resonance (i.e., to form random cavity modes). In addition, we adopt LD-pumped gain fibers to achieve a high gain (orders of magnitude above that of conventional random lasers) and store energy. Finally, we use the SBS process to dissipate the accumulated energy. The interplay between the pumping-induced energy storage and the depletion of energy in the SBS process acts as an effective Q-switch and produces a recurrent modulation of the Q-value of the system. Our RQFL originates from random cavity modes and exhibits chaotic behavior (similar to that produced in conventional random lasers); however, the Q-switched state and high brightness of the RQFL make it completely different from conventional random lasers. Our laser system operates in a random Q-switched regime (in which giant pulses can be produced), whereas the emission from conventional random lasers is either completely stochastic [27] or a stationary CW output [10]. The pulsing state of our RQFL is very similar to that of traditional Q-switching operation but exhibits high randomness in the pulsing intensity, pulsing period and pulse shape because the RQFL originates from random cavity resonance. This random cavity resonance is caused by randomly distributed Rayleigh scattering and exhibits a random cavity length. However, the high Q-value of the cavity is maintained by the SBS process, such that the giant pulse width is commensurate with the SBS relaxation time (approximately tens of nanoseconds).
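To make the storage/depletion interplay tangible, a minimal passive-Q-switching toy model is sketched below, in which a fast saturable loss stands in for the SBS switch. All parameters are illustrative assumptions in normalized units; this is not the authors' quantitative model.

```python
import numpy as np

# Toy model: gain g is pumped and stored; a fast saturable loss q (playing the
# role of the SBS interaction, whose recovery mimics the short phonon lifetime)
# bleaches when the intensity spikes, periodically dumping the stored energy.
dt = 0.01                     # time step in units of the cavity lifetime
g, q, I = 0.0, 2.0, 1e-8      # gain, saturable loss, intracavity intensity
g0, q0, l = 4.0, 2.0, 0.5     # pump level, unsaturated loss, linear loss
tg, tq = 500.0, 2.0           # slow gain recovery, fast loss recovery
trace = np.empty(200_000)
for n in range(trace.size):
    dI = I * (g - q - l) + 1e-8          # round-trip net gain plus weak seed
    dg = (g0 - g) / tg - g * I           # pumping versus stimulated depletion
    dq = (q0 - q) / tq - 5.0 * q * I     # fast bleaching of the loss "switch"
    I = max(I + dI * dt, 0.0)
    g, q = g + dg * dt, max(q + dq * dt, 0.0)
    trace[n] = I
peaks = (trace[1:-1] > trace[:-2]) & (trace[1:-1] > trace[2:]) & (trace[1:-1] > 1.0)
print("giant pulses in the simulated window:", int(peaks.sum()))
```

Even this crude model yields a train of giant spikes whose period is governed by the slow gain recovery, while the spike buildup is governed by the fast loss dynamics, mirroring the qualitative picture described above.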
Conclusion
We used random resonance induced by Rayleigh scattering and a nonlinear optical process (SBS) in high-gain fibers to investigate the Q-switching characteristics of random lasers in one dimension for the first time. Modulation of the Q-value in the cavity results in recurrent storage and extraction of the random cavity energy (which manifests as a pulsing regime), thereby realizing a random emission with high brightness. Our high-brightness random Q-switched laser can be extremely useful in application areas that require light sources with low coherence and high intensity, such as imaging [24], full-field optical coherence tomography [25], and focusing through random media [31]. The random Q-switched laser concept demonstrated herein can be straightforwardly extended to other wavelength regimes (e.g., visible, mid-IR, and far-IR) and other random laser configurations (e.g., bulk powders, multilayer semiconductor films, and scatterers in dye solutions). Instead of the SBS effect, other nonlinear optical effects (such as Raman scattering and four-wave mixing) may also be used to switch random lasers. The transition of random lasers from CW or completely chaotic regimes to Q-switched states offers various unique advantages that should open up new avenues for random lasers, both for potential applications and for fundamental scientific research combining laser physics, nonlinear optics and fiber optics with random scattering theory (new lasing theory in which Q-switching/mode locking is mixed with random scattering).
Methods
A double-cladding pumping technique is used to achieve a high gain. The laser gain medium is a double-clad Tm³⁺-doped silica fiber (10/130 μm, 0.15/0.46 numerical aperture (NA)) with a Tm³⁺ doping concentration of ~2 wt.% and a cladding absorption of ~3 dB/m (at 793 nm). The pump sources are two 35-W 793-nm laser diodes (LDs) with an output fiber pigtail of 100/125 μm. The pump light is launched into the gain fiber through a (2+1)×1 fiber combiner with a coupling efficiency of ~95%. The fiber combiner has a signal fiber of 10/125 μm (NA of 0.15/0.46), which is almost perfectly matched to the gain fiber. The pump fiber of the combiner has the same parameters as the pigtail fiber of the pump LDs. The backscattering fiber is a UHNA passive fiber (with a total length of 50 m in this experiment) with a core diameter of 3.5 μm (NA of 0.41) and a cladding diameter of 125 μm.
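As a quick plausibility check on the quoted UHNA parameters, one can evaluate the normalized frequency V = (2πa/λ)·NA at the Tm emission wavelength; the ~2 μm value below is an assumption based on the 2-μm test source mentioned later in this section.

```python
import math

wavelength = 2.0e-6          # assumed Tm-doped fiber emission wavelength (~2 um)
core_radius = 3.5e-6 / 2     # 3.5 um core diameter of the UHNA fiber
na = 0.41                    # numerical aperture of the UHNA fiber
v = 2 * math.pi * core_radius / wavelength * na
print(f"V = {v:.2f} -> {'single-mode' if v < 2.405 else 'multimode'}")  # V ~ 2.25
```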
One end of the UHNA fiber is fusion-spliced to the signal fiber of the combiner, and the other end is cleaved at an angle of ~10° to eliminate parasitic reflections and to ensure that the feedback from this fiber end results only from randomly distributed scattering (Rayleigh scattering). One end of the Tm³⁺ fiber is fusion-spliced to the output signal fiber of the combiner, and the other end is perpendicularly cleaved to provide ~4% Fresnel feedback for the laser radiation. Propagation testing with a low-power 2-μm source shows that the total splice loss, which includes the loss for the UHNA fiber, the signal fiber of the combiner and the Tm gain fiber, is approximately 1.5 dB (primarily from the fusion point between the UHNA fiber and the combiner signal fiber).
A 3.2-m-long Tm³⁺ fiber is wrapped on a convectively cooled copper drum with a diameter of 10 cm. The laser power is extracted from the right side of the Tm³⁺ fiber. At the output end, a dichroic mirror (R > 99.9% at 793 nm, 0°) is used to filter the residual pump light. The laser output power is measured with a power meter (FieldMax II-top, Coherent Co.), and the laser spectrum is recorded with a triple-grating spectrometer (Zolix Co.) with a spectral resolution of 0.2 nm. The laser pulsing dynamics are measured with a 2-GHz Agilent oscilloscope combined with a 1-GHz InGaAs detector.
hep-th/0604215 Universal Superfield Action for N = 8 → N = 4 Partial Breaking of Global Supersymmetry in D = 1
We explicitly construct N = 4 worldline supersymmetric minimal off-shell actions for five options of 1/2 partial spontaneous breaking of N = 8, d = 1 Poincaré supersymmetry. We demonstrate that the action for the N = 4 Goldstone supermultiplet with four fermions and four auxiliary components is a universal one. The remaining actions, for the Goldstone supermultiplets with physical bosons, are obtained from the universal one by off-shell duality transformations.
Introduction
Supersymmetric mechanics, being a natural framework for testing the characteristic features it shares with more complicated higher-dimensional theories, reveals an interesting peculiarity. It turns out that among all possible N = 4, d = 1 supermultiplets there is a "root" one [1]. The action for this "root" supermultiplet proved to be a generic one, from which the actions for the rest of linear and nonlinear N = 4 supermultiplets can be easily obtained by reduction [2]. This fact seems to be rather important, and a natural question is whether such universality may show up in other supersymmetric one-dimensional actions.
Besides the sigma-model type actions there is another very important class of supersymmetric actions: the class which describes theories with spontaneous partial breaking of global supersymmetry (PBGS) [3-15]. The concept of PBGS provides a manifestly off-shell supersymmetric worldvolume description of various superbranes in terms of Goldstone superfields [3]. The physical worldvolume multiplets of the given superbrane are interpreted as Goldstone superfields realizing the spontaneous breaking of the full brane supersymmetry group down to its unbroken worldvolume subgroup. The spontaneously broken supersymmetry is realized on the Goldstone superfields by inhomogeneous and nonlinear transformations. The choice of the Goldstone supermultiplet is not unique [8]. Moreover, a proper choice of the Goldstone multiplet can greatly simplify the construction of the invariant superfield action [8,10,11]. In this respect, the preferable Goldstone supermultiplet should contain the smallest possible number of physical scalars. The reason is rather simple. The physical scalars in the Goldstone supermultiplets correspond to the central charges in the anticommutators of manifest and hidden supersymmetry, and these scalars are shifted by constants under the corresponding transformations. This means that the transformations of the bosonic Goldstone superfields under the hidden supersymmetry contain θ-dependent terms. The presence of such terms makes the construction of the proper action rather nontrivial. Moreover, in many cases the invariant Goldstone actions, once constructed, require a highly nonlinear redefinition of the Goldstone scalar superfields to bring the action to the standard form (with the Nambu-Goto action for scalars in the bosonic sector).
In contrast to the higher-dimensional supermultiplets, among the N = 4, d = 1 supermultiplets there is one which does not contain physical bosons at all [1,16]. We will use the notation (0,4,4) to describe this supermultiplet with no physical bosons, four fermions and four auxiliary components. It is natural to suppose that the N = 4 supersymmetric Goldstone superfield action for this supermultiplet, endowed with an additional nonlinearly realized N = 4 supersymmetry, would give the simplest variant of a system with N = 8 → N = 4 PBGS. The aim of this paper is to demonstrate that this is indeed so. Moreover, we will present the superfield actions which realize N = 8 → N = 4 PBGS for all N = 4, d = 1 Goldstone supermultiplets which may be obtained from (0,4,4) by dualization of the auxiliary components into physical scalars.
Universal N = 8 → N = 4 PBGS action
Our aim is to construct an N = 4 superfield action possessing an additional spontaneously broken N = 4 supersymmetry. It is clear that the N = 4 superfield formulation is preferable, because only the N = 4 supersymmetry remains unbroken and manifest. We are going to use the (0,4,4) supermultiplet as the Goldstone one. So, following [16], let us introduce a doublet of fermionic superfields Ψ i subject to the constraints (2.1), written in terms of the standard N = 4, d = 1 spinor derivatives. Let us observe that it immediately follows from (2.1) that the superfields obey the relations (2.3). By virtue of (2.3), the superfield Ψ i contains among its independent components four fermions and four auxiliary components, as it should for an irreducible (0,4,4) supermultiplet.
As usual, partial breaking implies the presence of Goldstone fermions among the component fields of the theory. Assuming that the first components ψ i (2.5) of the superfields Ψ i are just the Goldstone fermions, they should contain a pure shift in their transformation under the spontaneously broken N = 4 supersymmetry, where ǫ i, ǭ i are the transformation parameters. Clearly, in order to have a linear off-shell realization of the additional N = 4 supersymmetry one has to add one more N = 4 superfield, but which one? The idea for choosing this additional superfield is due to J. Bagger and A. Galperin [8], who found that the Lagrangian density of any PBGS action belongs to an extended supermultiplet. Keeping in mind that any such action should start from the free one (2.7) for our (0,4,4) supermultiplet, we choose this additional N = 4 superfield Φ to be a chiral one. The proper candidate for the N = 8 → N = 4 PBGS action is then (2.9). It is easy to find the transformation laws of Ψ i, Ψ̄ j and Φ forming the desired N = 4 supersymmetry algebra (2.10), with the corresponding Lie brackets. It is crucial for us that, by virtue of (2.3), the "action" (2.9) is invariant under (2.10). Now, in order to have a meaningful action, one should express the superfield Φ in terms of our Goldstone fermionic superfields Ψ i. Following [8] and motivated by the structure of the free action (2.7), let us start from the Ansatz (2.12), where f, f̄ are arbitrary functions depending only on D²Φ and D̄²Φ̄, respectively. Substituting the Ansatz (2.12) into (2.10), one can find that it is consistent, provided the conditions (2.13) hold.
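Purely as a schematic reminder, the pure-shift Goldstone transformation described above takes the following form; the index placement and the suppressed field-dependent tails are assumptions.

```latex
% Schematic Goldstone shift under the spontaneously broken N=4 supersymmetry
% (field-dependent terms suppressed; index conventions assumed):
\delta \psi^{i} = \epsilon^{i} + O(\text{fields}) , \qquad
\delta \bar{\psi}_{i} = \bar{\epsilon}_{i} + O(\text{fields}) .
```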
Thus, with the additional equations (2.14) the transformation properties (2.10) are satisfied and the action (2.9) becomes meaningful. The last step is to solve equations (2.14). The procedure simply mimics the Bagger-Galperin considerations [8], so we omit the details and present only the answer (2.15). Therefore, with (2.15), the action (2.9) acquires the form (2.17). Thus, the action (2.17) is the desired action describing a one-dimensional system with partially broken N = 8, d = 1 supersymmetry. Before going further, let us make some comments. First of all, the supermultiplet (Ψ i, Φ) with the transformation properties (2.10) is clearly a modified version of the N = 8, d = 1 (2,8,6) multiplet [17]. Secondly, one should stress that the action (2.17) describes a very special case of PBGS theory, because it does not contain any physical bosonic degrees of freedom. The supermultiplet (0,4,4) we chose as the Goldstone one represents the reduced version of the N = 2, D = 3 double vector supermultiplet [19], with all field strengths showing up as auxiliary components. The action (2.17) is a one-dimensional version of N = 4 → N = 2 PBGS in D = 3 with the double vector supermultiplet as the Goldstone multiplet. Finally, one should stress that the transformation laws (2.10) and the action (2.17), while very simple in terms of superfields, take a rather complicated form in terms of components.
N = 8 → N = 4 PBGS superparticle actions
As in other PBGS theories, the Goldstone fermions can be placed into different multiplets of the unbroken N = 4, d = 1 supersymmetry. In contrast with the higher-dimensional theories, where each case has to be considered independently or with heavy use of on-/off-shell duality transformations, in one dimension the same "universal" action (2.17) describes all possible N = 8 → N = 4 PBGS theories with different linear N = 4 Goldstone supermultiplets. The crucial property underlying this universality is the tight relation between different N = 4 supermultiplets [16,2]. In what follows we will present all possible cases of N = 8 → N = 4 PBGS.
Superparticle in D = 2
We start from the simplest situation: the superparticle in D = 2. Clearly, to describe a particle in D = 2 one should have one bosonic component in the Goldstone supermultiplet. So, we have to dualize one auxiliary component of the (0,4,4) supermultiplet into a physical boson. The resulting N = 4 Goldstone supermultiplet is the (1,4,3) one, defined by the constraints (3.1) [20,16]. If we then identify the spinor derivatives of U with Ψ, we get exactly the fermionic superfields satisfying (2.1). The content of the superfield U includes one physical bosonic and four fermionic components, together with a real triplet A ij = iD (iDj) U| of auxiliary fields. The crucial step is to check whether the transformation law (2.10) is compatible with (3.2). It is rather easy to find how the spontaneously broken N = 4 symmetry is realized on U and Φ in this case (3.3). Upon differentiation, the laws (3.3) reproduce (2.10). Finally, in order to get the action, one should replace the spinor superfields in the action (2.17) by covariant derivatives of the superfield U, according to (3.2). The bosonic part of the action, after eliminating the auxiliary fields, is just a Nambu-Goto action in the static gauge for a particle in D = 2. Therefore the action (2.17), with the substitution (3.2), describes the superparticle with N = 8 → N = 4 PBGS in D = 2.
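For reference, the static-gauge Nambu-Goto form that the bosonic core reduces to can be recalled schematically; the overall normalization is assumed, and u(t) denotes the single physical Goldstone boson.

```latex
% Static-gauge Nambu-Goto action for a particle in D = 2 (normalization assumed):
S_{\mathrm{bos}} \;\propto\; \int dt \left( 1 - \sqrt{1 - \dot{u}^{2}} \right).
```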
Superparticle in D = 3
In order to obtain the action of a particle in D = 3, let us introduce a twisted chiral multiplet [16], rather than the standard one with the (2,4,2) component structure. The independent components of this supermultiplet may be defined as the physical bosons λ, λ̄, the fermions D 1 Λ|, D 2 Λ|, D̄ 1 Λ̄|, D̄ 2 Λ̄|, and the auxiliary bosons (3.7). Now one may immediately check that the spinor superfields Ψ i, Ψ̄ i defined in (3.8) satisfy the chirality and irreducibility conditions (2.1). The hidden N = 4 supersymmetry is realized accordingly on Λ and Φ. With (3.8) the bosonic part of the action (2.17) can be evaluated; the auxiliary fields turn out to vanish on shell in the bosonic limit, and the static-gauge action of a particle moving in D = 3 takes the standard form. Thus the same superfield action (2.17) with the identifications (3.8) describes the superparticle in D = 3.
Superparticle in D = 4
In this case we express the Goldstone superfield Ψ i through a tensor supermultiplet [21] with the (3,4,1) content. Defining the fermionic superfields as in (3.12), one may check that they obey the needed constraints (2.1). The bosonic content of the supermultiplet is a real triplet v ij = V ij| and a real auxiliary field M = iD iDj V ij|. The spontaneously broken N = 4 supersymmetry is realized accordingly. With all these ingredients we may replace the fermionic superfields Ψ i in the action (2.17) by covariant derivatives of the superfield V ij (3.12). After elimination of the auxiliary field, the bosonic part of the action reads (3.15), and thus it describes the particle in D = 4.
Superparticle in D = 5
Finally, we are going to use as a Goldstone superfield the N = 4 "hypermultiplet" Q ai. This supermultiplet contains four physical bosons and four physical fermions. As in the previously considered cases, the fermionic superfields constructed from Q ai satisfy the relations (2.1). The hidden N = 4 supersymmetry may be realized on the superfields Q ia, Φ and Φ̄ as in (3.17). Once again, the action (2.17) with the relations (3.17) reproduces the action for the superparticle in D = 5. Indeed, it is rather easy to find the bosonic core of this action, which describes the particle in D = 5.
Conclusion
In the present paper we have constructed the universal nonlinear Goldstone superfield action which is manifestly invariant under N = 4, d = 1 supersymmetry and possesses an additional hidden, nonlinearly realized N = 4 supersymmetry. We have shown that this action, initially written in terms of the N = 4 (0,4,4) supermultiplet, can also serve as the proper action for the rest of the linear N = 4 supermultiplets. These actions provide a manifestly worldline supersymmetric description of superparticles in flat D = 2, ..., 5 Minkowski backgrounds. We did not aim to give an exhaustive analysis of all possible variants and realizations of the partial breaking of one-dimensional N = 8 supersymmetry. Our goal here was to demonstrate that the universality of some N = 4 supermultiplets, first noted in [1,2], extends to the case of PBGS actions too. In contrast with the sigma-model type actions, where the "root" (4,4,0) supermultiplet plays the key role, in PBGS actions the role of the "universal" supermultiplet is reserved for another one: the (0,4,4) supermultiplet. The PBGS action for this supermultiplet describes some sort of D-particle, a one-dimensional mechanics without physical bosonic degrees of freedom. Although not too illuminating in itself, this action nevertheless provides a proper description of superparticles in various dimensions after dualization of some or all of its auxiliary components into physical bosons.
In the present paper we limited our consideration to linear N = 4 supermultiplets. However, it is known that there are at least two possible variants of nonlinear N = 4 supermultiplets [16,22]. The first variant includes two types of N = 4 nonlinear supermultiplets, which may be obtained from the nonlinear realization of the N = 4 superconformal group, in the same manner as the linear ones [16]. The second type of nonlinear supermultiplets [22] is much more complicated, and the geometric origin of these multiplets is still unknown. It is an interesting problem to construct PBGS actions for these types of N = 4 supermultiplets.
X-Ray Holographic Imaging of Hydrated Biological Cells in Solution
We demonstrate nanoscale x-ray holographic imaging using optimized illumination wave fronts emitted by x-ray waveguide channels. Mode filtering minimizes wave-front distortions and artifacts encountered in most hard x-ray focusing schemes, enabling quantitative reconstruction of the projected density, as evidenced by a test pattern imaged with a field of view of about 20 × 40 μm and at 22 nm resolution. The dose efficiency and contrast sensitivity make the optical scheme compatible with samples of intrinsically low contrast, typical for hydrated soft matter. This is demonstrated by imaging bacteria in the hydrated and living state, with quantitative phase contrast revealing dense structures of the bacterial nucleoids associated with compactified DNA. In response to continued irradiation, characteristic changes in these dense structures are observed.
Imaging of biological matter at the nanoscale is characterized by three persistent challenges: resolution, contrast, and compatibility with functional or physiological conditions. For the investigation of biological cells, which are often referred to as the test tubes of the 21st century, imaging of processes and functions is particularly essential. Hard x-ray coherent imaging [1-7] is unique as a probe of the native electron density distribution within cells and thicker tissue. It is compatible with a large range of environmental conditions, does not depend on labeling or staining, and is well suited for tomography of larger specimens, due to a high penetration power and a large depth of focus. After overcoming considerable challenges related to the phase problem (Refs. [6,7] and references therein), the resolution achievable with lensless coherent x-ray diffractive imaging (CDI) has become high enough to address subcellular architectures [8-14], such as the topology of biological membranes in complex organelles, the organization of protein networks, and compactified DNA.
A major challenge of applying x-ray imaging to biological matter is the low contrast in the hydrated state and the radiation damage induced by the high dose. Most reported dose values, even for dehydrated cells with strong contrast, are in the range of 10⁷-10⁹ Gy, well above the theoretical dose-resolution curve, which increases with a power law of exponent 3 ≤ γ ≤ 4, as derived for the case of Fraunhofer far-field diffraction [15]. Such excessive dose values are prohibitive for cells in solution, let alone for live cell imaging. A recent soft x-ray CDI study has demonstrated imaging of mammalian cells under low dose conditions [16], but it was limited to the freeze-dried state. A first CDI study of cells in solution ("wet" CDI) reported 30 nm resolution (stated as half-period throughout this Letter), but at a "cost" of 10⁸ Gy [17].
In this Letter we present a different approach to nanoscale x-ray imaging, at a drastically reduced dose and with a large field of view, based on in-line holographic recordings using optimized and filtered wave fronts. The method is demonstrated here using first a lithographic test pattern imaged at a resolution of 22 nm, and second the gram-positive bacterium Deinococcus radiodurans in the freeze-dried state, at a resolution of 53 nm and a radiation dose of 10⁴ Gy. Coherent x-ray imaging provides a unique tool to shed light on the disputed structural arrangement of DNA in the nucleoid of this bacterium [18,19]. From the quantitative density contrast, constraints on DNA packing models can be obtained [13], complementing electron microscopy studies [20]. The dense, round structures observed by x-ray imaging within freeze-dried Deinococcus radiodurans cells [13,14,21] were attributed to DNA-rich regions in the bacterial nucleoids. Here, we show that these structures can be imaged even in the hydrated and living state. To this end, we present the first electron density maps of living cells in buffered solution that were obtained with less than the lethal dose. Successive images reveal structural processes in the nucleoids in response to radiation. This result casts serious doubt on previous conclusions that radiation damage does not change the observed structure of hydrated cells on the 50 nm scale for the typical high dose values of CDI [17] and underlines the need for dose-efficient imaging approaches.
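The quoted dose-resolution power law has a simple worked consequence worth spelling out; the scaling form is from Ref. [15] as cited above, while the factor-of-two example is our own illustration.

```latex
% Required dose versus half-period resolution d, with the quoted exponent range:
D(d) \propto d^{-\gamma}, \qquad 3 \le \gamma \le 4
\quad\Longrightarrow\quad
\frac{D(d/2)}{D(d)} = 2^{\gamma} \in [8,\,16].
```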
To achieve phase contrast images at a drastically reduced dose and nanoscale resolution, we use x-ray full-field imaging, with contrast formation by free space propagation, in combination with highly coherent and well-controlled spherical wave fronts emitted by x-ray waveguides [22,23]; see the sketch in Fig. 1. Introduced almost two decades ago [1,24], x-ray propagation imaging uses phase reconstruction algorithms [6,7,25,26] to invert the intensity pattern(s) recorded downstream of an object illuminated by a plane wave, or by a spherical wave for the sake of geometric magnification [27-29]. However, the wave-front errors associated with hard x-ray focusing lead to a severe loss of image quality; to correct for this, we here use waveguide mode filtering, which significantly reduces wave-front aberrations and increases the spatial coherence. Progress in the fabrication of lithographic waveguide channels has enabled us to overcome the previously low efficiency of x-ray waveguide optics [30], increasing the waveguide exit flux of the present experiment to I_WG > 10⁹ photons/s. Since the sample is positioned not in the focus but at a defocus position, the flux density at the sample can be adjusted to a tolerable level. The optical setup is combined with optimized near-field phase retrieval algorithms to achieve quantitative reconstructions from a single hologram.
Dose-efficient imaging of weakly diffracting objects becomes possible due to two distinct features of waveguide holography: (i) the homogeneous signal level within the recorded radiation cone circumvents well-known challenges associated with the limited dynamical range of x-ray detectors and does not require the use of beamstops; (ii) the waveguide transmits only the radiation modes required for the coherent imaging process and filters out background radiation, which is absorbed in the cladding [31]. The waveguide thus protects the sample from unwanted incoherent radiation which would not improve image quality but would increase the dose. Furthermore, interference of the weak diffracted wave behind the sample with the much stronger and highly coherent primary wave enhances the signal level well above the background originating at or downstream of the sample, including the Compton background of the sample itself [32]. In addition, the magnified near-field (Fresnel) diffraction pattern (in-line hologram) directly represents the location and the shape of the object, enabling easy sample alignment and providing a further optional constraint for iterative phase retrieval.
The experiments were performed using the GINIX instrument [33] at the coherence beamline P10 of the PETRA III storage ring (Hamburg, Germany; see Ref. [34] for details). The undulator beam was monochromatized [Si(111)] and focused by Kirkpatrick-Baez (KB) mirrors to about 300 nm in the horizontal and vertical directions. The two-dimensional x-ray waveguides were placed in the focal plane of the mirrors, acting as a spatial and coherence filter [31]. The samples were placed in the divergent wave field exiting the waveguide, at a distance z₁. In-line holograms, magnified by a factor of M = 1 + z₂/z₁, were recorded using a fiber-coupled sCMOS detector (Photonic Science) with a pixel size of P = 6.54 μm, positioned in the detector plane at z₁ + z₂ ≈ 5 m behind the waveguide. This is equivalent to a parallel beam case with an effective sample-detector distance z = z₂/M and a demagnified pixel size p = P/M. The recorded holographic intensity I_z(x,y) := |D_z{P(x,y) · O(x,y)}|² can be calculated based on the free space Fresnel propagator D_z acting on the product of the object transmission function O and the probe function P, which in turn emerges from the waveguide exit field. A source size of 25.4 nm (horizontal) × 30.8 nm (vertical) was determined from the near-field reconstruction shown in Fig. 2(c), using the error-reduction algorithm [23]. The small source size, compared with the channel dimensions of d_x = 97 nm and d_y = 73 nm, arises from multimodal interference, as supported by the finite difference simulations shown in Fig. 2(d). Full details on the bonded silicon waveguide (the air channel) used for the holographic recordings at 7.9 keV and the Ge/Mo/C/Mo/Ge waveguide system used for the 13.8 keV recordings are given in Ref. [34].
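The projection geometry can be checked numerically with the distances quoted below for the bacteria recording; the short sketch recovers the 20.3 nm effective pixel size reported later in the text.

```python
# Cone-beam projection geometry of waveguide holography (values from the text)
z1 = 15.9e-3              # waveguide-to-sample distance (m)
z_total = 5.12            # waveguide-to-detector distance (m)
pixel = 6.54e-6           # physical detector pixel size (m)
z2 = z_total - z1
M = 1 + z2 / z1           # geometric magnification
z_eff = z2 / M            # effective parallel-beam propagation distance
p_eff = pixel / M         # demagnified (effective) pixel size
print(f"M = {M:.0f}, z_eff = {z_eff*1e3:.1f} mm, p_eff = {p_eff*1e9:.1f} nm")
# -> M ~ 322, z_eff ~ 15.9 mm, p_eff ~ 20.3 nm
```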
The waveguide exit beam is fully coherent [31,33] and has a smooth Gaussian-like line shape [Figs. 2(b) and 2(c)]. Propagation of the empty waveguide beam is therefore well approximated by a pure geometrical enlargement (considering amplitudes), and the image formation can be expressed as I_z := |D_z{P · O}|² ≃ |D_z{P}|² · |D_z{O}|². This enables artifact-free normalization by the empty beam I_z^E := |D_z{P}|², expressed by Ī_z = I_z/I_z^E = |D_z{O}|². The normalized intensity is thus directly related to the object transmission function O = exp[−i(2π/λ) ∫ from −Δt to 0 of (δ_λ(x,y,z) − iβ_λ(x,y,z)) dz] of the object with thickness Δt and refractive index n = 1 − δ_λ + iβ_λ at wavelength λ, in contrast, for example, to cone-beam holography experiments with KB beams [28], where the empty beam intensity normalization fails [36].
To benchmark the optical setup and the phase retrieval algorithms, we first imaged a test pattern milled by a focused ion beam into a 200-nm-thick gold layer on a 200-nm-thick Si₃N₄ membrane. Figure 3(a) shows the normalized hologram Ī_z of a 200-second exposure recorded at 13.6 keV photon energy, a sample distance z₁ = 4.93 mm and a detector distance z₁ + z₂ = 5.07 m. Interference fringes extend all the way to the corners of the diffraction pattern, indicating a high-quality hologram. The different reconstructions of the object phase φ(x,y) are shown in Figs. 3(b), 3(c), and 3(d). The holographic reconstruction φ(x,y) = φ{D_{−z}{Ī_z}}, based on the free-space Fresnel diffraction operator D_z, is shown in Fig. 3(b) and exhibits the well-known twin-image artifacts of in-line holography. For objects with a slowly varying phase and negligible absorption, the image formation can be linearized and written in Fourier space as Ĩ_z(ν_x, ν_y) ≃ δ_D(ν_x, ν_y) + 2φ̃(ν_x, ν_y) sin[χ(ν, z)], where ã = F{a} denotes the two-dimensional Fourier transform, ν_x, ν_y the spatial frequencies with ν² = ν_x² + ν_y², and δ_D the Dirac delta function. The term sin[χ(ν, z)] with χ(ν, z) = πλzν² is known as the phase contrast transfer function (CTF). Phase reconstruction via filtering in Fourier space based on the CTF [24] suppresses the twin-image artifacts, as shown in Fig. 3(c). However, because of the zeros of the phase CTF at ν₀ = √(n/λz), with n ∈ ℕ, some pronounced artifacts remain, in particular at low spatial frequencies.
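A minimal sketch of the single-distance CTF inversion described here is given below, assuming a weak pure-phase object. The Tikhonov parameter `alpha` is our addition to tame the zeros of sin χ that cause the residual low-frequency artifacts.

```python
import numpy as np

def ctf_phase_retrieval(hologram, wavelength, z_eff, pixel, alpha=1e-2):
    """Linearized CTF phase retrieval for a weak pure-phase object (sketch)."""
    ny, nx = hologram.shape
    fy = np.fft.fftfreq(ny, d=pixel)                 # spatial frequencies (1/m)
    fx = np.fft.fftfreq(nx, d=pixel)
    nu2 = fx[None, :]**2 + fy[:, None]**2
    s = np.sin(np.pi * wavelength * z_eff * nu2)     # phase CTF sin(chi)
    # I(nu) ~ delta + 2*phi(nu)*sin(chi): regularized Fourier-space inversion
    contrast = np.fft.fft2(hologram - 1.0)
    return np.fft.ifft2(contrast * s / (2.0 * s**2 + alpha)).real

# Example call with test-pattern-like parameters (flat dummy hologram):
phi = ctf_phase_retrieval(np.ones((256, 256)), 9.1e-11, 4.9e-3, 6.4e-9)
```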
Here we use the CTF reconstruction to initialize a modified hybrid input-output (mHIO) algorithm which is capable of recovering the missing information [26], based on the support of the object, which is readily inferred from the deterministic CTF reconstruction. In essence, the algorithm propagates back and forth between the sample and detection planes, using a numerical implementation of the free space Fresnel diffraction operator D_z with ψ_z = D_z{ψ₀} = F⁻¹{exp(i2πz/λ) exp(−iπλzν²) F{ψ₀}}, and enforces a compact object support as well as intensity values in line with the measured data, respectively. As shown in Fig. 3(d), the phase distribution after N_it = 1000 iterations reveals the object nearly artifact free. The world map exhibits sharp edges and uniform gray values. A 100 × 100 pixel domain yields a mean phase shift of μ_φ = 0.18 rad and a standard deviation σ_φ, with σ_φ/μ_φ = 3%. In addition to the absence of low frequency artifacts, the superior quality of the mHIO reconstruction also manifests itself in an increased resolution of 22 nm (compared to 24 nm for the CTF reconstruction), as determined by fits to edges at different regions of the object. See Ref. [34] for additional information on data processing and reconstruction algorithms.
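The back-and-forth propagation scheme can be sketched as follows. This is a simplified support-constrained iteration in the spirit of the text; the actual mHIO update of Ref. [26] differs in detail, and the function and parameter names are our own.

```python
import numpy as np

def fresnel(psi, wavelength, z, pixel):
    """Free-space Fresnel propagator D_z, as given in the text."""
    ny, nx = psi.shape
    fy = np.fft.fftfreq(ny, d=pixel)
    fx = np.fft.fftfreq(nx, d=pixel)
    nu2 = fx[None, :]**2 + fy[:, None]**2
    kernel = np.exp(2j*np.pi*z/wavelength) * np.exp(-1j*np.pi*wavelength*z*nu2)
    return np.fft.ifft2(kernel * np.fft.fft2(psi))

def support_constrained_retrieval(I_meas, support, wavelength, z, pixel,
                                  n_it=1000, beta=0.8):
    """Alternate between the measured hologram magnitudes and the object
    support; in practice the iterate is initialized with the CTF result."""
    psi = np.ones_like(I_meas, dtype=complex)
    for _ in range(n_it):
        field = fresnel(psi, wavelength, z, pixel)
        field = np.sqrt(I_meas) * np.exp(1j * np.angle(field))  # data constraint
        back = fresnel(field, wavelength, -z, pixel)
        # HIO-style feedback outside the support, hard replacement inside it
        psi = np.where(support, back, psi - beta * back)
    return psi
```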
After optimization of the experimental settings and algorithms, the approach was used to image freeze-dried bacteria. Cells of the Deinococcus radiodurans strain R1 were cultivated from freeze-dried cultures and vitrified on Si₃N₄ foils by plunge freezing in liquid ethane, followed by freeze drying as in Ref. [14], as detailed in Ref. [34]. Samples were imaged at 7.9 keV, using the smooth central cone of the waveguide field shown in Fig. 2, well matched to the active area of the sCMOS detector [the dashed rectangle in Fig. 2(b)], placed at z₁ + z₂ = 5.12 m. A single 8-second accumulation of the sample placed at z₁ = 15.9 mm was recorded, along with a corresponding empty beam measurement. Figure 4(a) shows the normalized hologram of a group of Deinococcus radiodurans cells (without any further data treatment), while Fig. 4(b) depicts a mHIO reconstruction after N_it = 741 iterations. The disk-shaped domains with large relative phase shifts of up to −0.3 radian can be clearly identified and attributed to the bacterial nucleoid. With an effective pixel size of 20.3 nm, the crossover to the noise plateau of the power spectral density (PSD) at about 0.19 cycles per pixel corresponds to a resolution of about 53 nm; see Ref. [34]. The flux density at the sample plane was 5 × 10⁵ photons/μm²/s, corresponding to a total dose of D = 5.2 × 10³ Gy applied during the 8 seconds, as calculated for model protein [15]. This is almost 3 orders of magnitude less than a recent ptychographic reconstruction of the same bacteria (of the same preparation batch) at similar photon energy (6.2 keV), contrast, and resolution (50 nm), recorded at a dose of 4.9 × 10⁶ Gy [14]. And, in contrast to ptychographic scanning [14,37], a large field of view, e.g., of (20 μm)², is observed simultaneously, which is important for samples in semistable environments or dynamic states, e.g., hydrated or living samples.
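The quoted dose is consistent with back-of-the-envelope arithmetic; the mass energy-absorption coefficient used below (roughly 10 cm²/g for model protein at 7.9 keV) is an assumption, not a value taken from the paper.

```python
# Order-of-magnitude dose check for the freeze-dried cell recording
fluence = 5e5 * 8 * 1e12            # (ph/um^2/s) x 8 s, converted to ph/m^2
E_photon = 7.9e3 * 1.602e-19        # 7.9 keV photon energy in joules
mu_en_over_rho = 10.0 * 0.1         # assumed ~10 cm^2/g = 1.0 m^2/kg
dose = fluence * E_photon * mu_en_over_rho
print(f"estimated dose ~ {dose:.1e} Gy")  # ~5e3 Gy, the order quoted above
```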
Since the freeze-dried cells were imaged below the lethal dose of Deinococcus radiodurans, the next step was to image living bacteria in solution. For the measurement, the bacteria were kept in microscopy chambers compatible with cell culture (ibidi, Germany); see Ref. [34] for details.
At a photon energy of 13.8 keV, 56 images with 10-second exposure time were recorded with the sample placed at z₁ = 19.7 mm, corresponding to an effective pixel size of 25.4 nm. Eight consecutive exposures were averaged, yielding 7 frames with 80-second accumulation time each. Figure 4(c) shows the reconstruction for every other frame of the live cell recordings. To increase the signal-to-noise ratio, the holograms were binned by a factor of 2. Phase reconstruction of each frame was performed using the mHIO algorithm with 3500 iterations, on average. Resolution is degraded due to slight sample movement in the solution during the exposure and is estimated at about 2 to 3 pixels, corresponding to 100-150 nm. The results confirm that the dense round structures attributed to the nucleoids observed in the freeze-dried state [see Fig. 4(c) and Refs. [13,14,21]] are also present in the hydrated living state of the bacterium. With a total flux of 2 × 10⁷ photons/μm² in each frame, the dose is D = 8.9 × 10³ Gy, as calculated for a model protein [15], and D = 8.6 × 10³ Gy for water. These values are below the lethal dose LD₅₀ > 10⁴ Gy of Deinococcus radiodurans. The retrieved electron density map of at least the first frame should, therefore, represent the native structure in the living state of the bacteria, while in successive frames, radiation induced changes in the density distribution can be monitored. Notably, the density of the nucleoids decreases, but quite differently for individual organelles, as quantified in Figs. 4(c) and 4(d). While most nucleoids are subject to gradual density fading, for some organelles the process occurs in pronounced steps; see the colored columns in Fig. 4(d).
Importantly, the applied dose could be precisely adjusted and reduced without a breakdown of the phase retrieval process. Despite the dose reduction by orders of magnitude with respect to most far-field diffractive imaging studies reported previously, including studies of the same organism [11,13,14], we could already observe radiation induced structural changes in the course of consecutive exposures. In contrast to previous claims of wet CDI [17], we conclude that imaging of living or hydrated biological samples at 50 nm resolution is not possible, in general, without severe damage. At the same time, the onset of radiation induced processes and subsequent radiation damage could be precisely studied with the demonstrated dose-efficient holographic approach. This may enable future studies of repair processes in response to radiation damage, from a structural point of view. The role of possible cofactors could be investigated, e.g., by varying the buffer solution or the metabolic state of the bacteria. For single low dose exposures, the method enables the visualization of the subcellular density distribution within living cells, even in complex environments. This structural probe could then be enhanced by well-chosen nanodiffraction spots, yielding high resolution in reciprocal space [14,38]. Last but not least, and beyond the single cell level, this dose-efficient holographic approach should also enable 3D reconstruction with subcellular resolution in tissues, based on nested phase and tomographic reconstruction, for example [39].
FIG. 1 (color online). A monochromatic hard x-ray beam is focused by Kirkpatrick-Baez (KB) mirrors onto a waveguide (WG) system. The sample (S) is illuminated by the waveguide beam, and magnified Fresnel diffraction patterns are recorded at the detection plane (D).
FIG. 2 (color online). Silicon channel waveguide fabricated by electron-beam lithography, followed by wafer bonding. (a) Scanning electron micrograph of the exit surface. The channel cross section is enclosed by the dashed rectangle (97 × 73 nm). (b) Logarithmic far-field intensity distribution, as measured in photon numbers by a pixel detector about 5 m behind the waveguide. The smooth central part (the dashed rectangle) is used for imaging. (c) Reconstructed near-field intensity, linear color coding. (d) Simulated intensity distribution along the beam direction (z) within and right behind a two-dimensional waveguide with channel dimensions d_x = 97 nm and d_y = 73 nm, logarithmic color coding. Scale bars: (a) 100 nm, (b) 10 mm, (c) 20 nm.
FIG. 3. (a) Normalized hologram of a test structure milled into 200-nm-thick gold. (b) Holographic phase reconstruction. (c) Phase reconstruction based on the contrast transfer function (CTF). (d) Iterative mHIO phase reconstruction; the support information (the dashed line) was obtained from the reconstruction shown in (c). Scale bars, 2 μm.
FIG. 4 (color online). (a) Normalized hologram of freeze-dried Deinococcus radiodurans cells, obtained in a single recording with 8 s dwell time, along with (b) the iterative mHIO phase reconstruction. (c) mHIO reconstruction of (initially) living cells in solution. Each frame was accumulated for 8 × 10 seconds (every other frame is shown). Pronounced changes in the densities are observed after successive irradiation, as quantified in (d), showing the normalized electron density in the high-density nucleoid regions indicated by the corresponding colors as a function of dose. The images in (c) are reconstructions corresponding to averages over the colored columns in (d). Scale bars, 4 μm.
Generation of Self-Assembled 3D Network in TPU by Insertion of Al2O3/h-BN Hybrid for Thermal Conductivity Enhancement
Thermal management has become one of the crucial factors in designing electronic equipment, and creating composites with high thermal conductivity is therefore necessary. In this work, a new insight into the hybrid filler strategy is proposed to enhance the thermal conductivity of thermoplastic polyurethane (TPU). Firstly, spherical aluminium oxide/hexagonal boron nitride (ABN) functional hybrid fillers are synthesized by a spray drying process. Then, the ABN/TPU thermally conductive composite material is produced by melt mixing and hot pressing. Our results demonstrate that the incorporation of the spherical hybrid ABN filler assists in the formation of a three-dimensional continuous heat conduction structure that enhances the thermal conductivity of the neat thermoplastic TPU matrix. Hence, we present a valuable method for preparing thermal interface materials (TIMs) with high thermal conductivity, and this method can also be applied to large-scale manufacturing.
Introduction
In this modern technological era, electronic gadgets and devices play a major role in every field. The electronics industry has been incessantly focusing on designing high-performing, efficient, miniaturized, and cost-effective electronic products. Apart from the above criteria, thermal management also plays a critical part in designing electronic products. The excess irrelevant heat produced by the devices can accumulate, affecting the operational efficiency and reliability of the devices [1]. Therefore, thermal interface materials (TIM) play a vital role in enhancing heat transfer between heat sources and heat sinks [2]. Conventional synthetic polymers have the advantages of excellent electrical insulation, good processability, and being lightweight, making them suitable as substrates of thermal interface materials. Among the available polymeric matrixes for TIMs, thermoplastic polyurethanes (TPU) are attractive due to their highly versatile and unique properties. TPU is a kind of multiphase block copolymer whose thermomechanical properties can be easily tailored by changing the molecular chain structure of the soft and hard segments, and the recyclability of thermoplastics gives it an added advantage [3,4]. Unfortunately, the thermal conductivity of most conventional polymeric substrates is very low, in the range of 0.1-0.5 W m⁻¹ K⁻¹ [5]. Ren et al. claimed that the thermal conductivity of a laid-up graphite films/carbon fiber fabrics/TPU composite reaches up to 242 W m⁻¹ K⁻¹ at room temperature [6]. In addition, Dong et al. reported that the thermal conductivity and thermal stability were also enhanced by introducing carbon black into the TPU matrix [7]. However, most such reports have only academic significance and can hardly be applied to electronic devices because of the extremely high electrical conductivity of such carbon-based composites, which can lead devices to malfunction through electron leakage. Hence, the introduction of a thermally conductive but electrically non-conductive inorganic ceramic filler into the polymeric matrix is one of the promising solutions [8-10]. Many studies have reported various methods to develop highly thermally conductive and electrically insulating polymer-based composites, which include the addition of various ceramic fillers such as boron nitride (BN), aluminum oxide (Al₂O₃), aluminum nitride (AlN), silicon carbide (SiC), silicon dioxide (SiO₂), and zinc oxide (ZnO) into the polymeric matrix [11-18]. Among these ceramic fillers, hexagonal boron nitride (h-BN) with a two-dimensional (2D) layered structure stands out owing to its excellent thermal conductivity (250-300 W m⁻¹ K⁻¹), low thermal expansion coefficient, stable crystal structure, low dielectric constant, high resistivity, and non-toxic properties [19,20]. Liu et al. reported that the thermal conductivity of h-BN-filled TPU composites can be enhanced about 2-fold by controlling the alignment level of h-BN through a fused deposition modeling 3D printing technique, but the enhancement is limited to the printing direction [21].
Interfacial thermal resistance between the polymer matrix and the filler is a key factor that influences the thermal conductivity of the composites, as known from the literature [22]. The thermal energy in the system is mainly transmitted through lattice vibrations (phonons); therefore, the discontinuous coupling between the polymer and the filler causes phonon scattering, resulting in thermal resistance [5]. Several common solutions have been used to address the problem of interfacial thermal resistance. For example, surface functionalization or modification of the filler can create bonding that improves the adhesion between the filler and the polymer matrix, reducing the intensity of phonon scattering [23,24]. However, some surface functionalization methods are challenging to perform, and their effect in improving the thermal conductivity of composite materials is still limited [25-27]. Of late, many studies have reported various preparation methods for establishing a three-dimensional heat conduction network in the composites to deal with the issue mentioned above [27-30]. Through a compact network structure of continuous thermally conductive fillers, the effect of phonon scattering can be reduced, which in turn reduces the interfacial thermal resistance, thereby achieving higher thermal conductivity. On the other hand, such a network also provides continuous heat conduction paths in multiple dimensions and allows heat energy transfer throughout the network [31]. Indeed, these methods have their merits; however, some are time-consuming and complicated, limiting their use in large-scale industrial manufacturing and practical applications [32]. In general, increasing the filler content favors the establishment of a continuous thermal network structure in the composite materials. Nonetheless, adding fillers in excess leads to problems such as reduced processability of the composite and increased filler cost [33,34]. The introduction of a hybrid filler is another promising strategy for enhancing the composites, since it combines the advantages of different filler systems, such as their aspect ratios, geometric dimensions, etc. The fillers, along with the polymer matrix, form a complex three-dimensional (3D) thermal network structure, consequently improving the performance and lowering the manufacturing cost of the composite materials [35-39].
During the preparation of thermally conductive polymer composites, the dispersion of the fillers is one of the key factors that affects thermal conductivity [40]. Simple mixing strategies inevitably lead to an uncontrolled distribution of fillers, which might limit the synergistic enhancement between the fillers in the heat transfer network construction process [32]. Several methods have been reported in the literature that assist in resolving the uncontrolled distribution problem, such as solution compounding, roll mixing, and melt-compounding [41-43]. Among them, melt-compounding is the method commonly utilized in the batch manufacturing of thermoplastic composites, since it is a continuous process and involves simple operating methods wherein the fillers can be uniformly distributed in the continuously produced polymer matrix. However, during the screw extrusion process, the viscosity of the composite influences the distribution, and the high shear force generated during mixing might damage and crack the thermally conductive fillers, making large-scale manufacturing for industrial applications more challenging [44].
In this work, we propose a facile and effective method to prepare a thermally conductive composite containing compact and continuous fillers. Firstly, the spheroidized three-dimensional functional hybrid fillers, Al₂O₃/h-BN (ABN), were prepared by mechanical mixing and spray drying processes [45,46]. Secondly, the ABN functional hybrid fillers were uniformly mixed with the TPU matrix through the melt-compounding process to form the ABN/TPU thermally conductive composite, which was then made into a pellet by hot pressing. The results indicate that the thermal conductivity of the ABN/TPU composites is substantially improved by the synergistic association of the Al₂O₃ nanoparticles with h-BN. At 30 wt.% filler content, the ABN/TPU thermally conductive composites reach a high thermal conductivity of 1.39 W m⁻¹ K⁻¹ while considerably reducing the required amount of h-BN. It is confirmed that the ABN functional hybrid thermal fillers can form a continuous three-dimensional (3D) thermal network structure in the TPU matrix and maintain the network framework during composite preparation. The method presented in this study is facile and cost-effective, and therefore offers new possibilities for the large-scale production of thermally conductive composite materials containing a 3D thermal network of hybrid fillers, with commercial applications in thermal interface materials.
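For context, a well-dispersed-sphere effective-medium estimate can be sketched as below (Maxwell-Eucken model); the matrix and filler conductivities and densities are assumed typical values, not measurements from this work.

```python
def maxwell_eucken(km, kf, phi):
    """Effective conductivity of isolated spherical fillers in a matrix."""
    return km * (kf + 2*km + 2*phi*(kf - km)) / (kf + 2*km - phi*(kf - km))

def wt_to_vol(wt, rho_filler, rho_matrix):
    """Convert filler weight fraction to volume fraction."""
    return (wt / rho_filler) / (wt / rho_filler + (1 - wt) / rho_matrix)

k_tpu, k_abn = 0.2, 60.0        # W/(m K), assumed typical values
rho_tpu, rho_abn = 1.1, 3.2     # g/cm^3, assumed typical values
for wt in (0.10, 0.20, 0.30):
    phi = wt_to_vol(wt, rho_abn, rho_tpu)
    k = maxwell_eucken(k_tpu, k_abn, phi)
    print(f"{wt:.0%} wt -> {phi:.1%} vol, k ~ {k:.2f} W/(m K)")
```

Under these assumptions the isolated-sphere estimate stays well below the measured 1.39 W m⁻¹ K⁻¹ at 30 wt.%, which is consistent with the claim that the spray-dried ABN spheres build a percolating 3D network rather than remaining isolated inclusions.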
Materials
The h-BN powder with a particle size of 2-3 µm was provided by National Nitride Technologies Co., Ltd. (Taichung, Taiwan). Al₂O₃ nanoparticles with a particle size of 30-50 nm were purchased from Yong-Zhen Technomaterial Co., Ltd. (Taipei, Taiwan). Thermoplastic polyurethane (TPU, Elastollan S 85A) was provided by BASF Co., Ltd. (Ludwigshafen, Germany). All chemicals used in the experiment were of analytical grade and used without any further purification.
Preparation of Al 2 O 3 /h-BN Spherical Three-Dimensional Functional Hybrid Filler
Mechanical mixing and spray drying methods were used for the preparation of the ABN hybrid filler. At first, the Al₂O₃ nanoparticles were uniformly mixed with the h-BN powder to form the starting slurry via mechanical mixing. The Al₂O₃ nanoparticles and h-BN powders in the ratio of 1:1 (6.7 kg) were dispersed in de-ionized water (5.4 L). After continuous agitation with a magnetic stirrer, the components were thoroughly mixed in a vertical ball mill to attain a uniform Al₂O₃/h-BN suspension. The above-formed dispersion was then transferred to the spray drying system (CNK SDDNH-3, GUS Technology Co., Ltd., Taipei, Taiwan), from which the Al₂O₃/h-BN spherical three-dimensional functional hybrid thermally conductive filler (ABN) was obtained. The processing parameters of the spray drying were as follows: the inlet and outlet air temperatures were set at 200-220 °C and 60-80 °C, respectively; the chamber temperature was set in the range of 90-130 °C; the pressurized air was set at 1.5 kg/cm²; the disc rotation speed was set at 20,000 rpm; and the feed rate was fixed at 3 kg/h. The experimental procedure is shown in Figure 1.
Composite Preparation
ABN/TPU thermally conductive composites were prepared by melt-compounding. Firstly, the ABN functional hybrid filler at different loading percentages was blended with the TPU matrix at a melting temperature of 180 °C for 10 min using a twin-screw extruder (Kobelco Co., Ltd., Tokyo, Japan). After extrusion, the composite samples were cut into shots and dried in a hot-air oven at 100 °C for 24 h, after which the composite pellets with a thickness of 3 mm were hot-pressed at 180 °C under 4 MPa pressure in a Vacuum Type Heating Pressure Shaping Machine (Long Chang Co., Ltd., Tainan, Taiwan). To validate that the compact and continuous structure of the ABN fillers could assist in improving the thermal properties of the composites, the same method was applied to prepare h-BN/TPU composites with different filler loading percentages as the control sets. In this work, the weight percentages of filler loading in the ABN/TPU and h-BN/TPU composites were designated as 10 wt.%, 20 wt.%, and 30 wt.%.
Characterizations of Composite Materials
The crystal structures of the fillers and composite materials were characterized by X-ray diffraction (EMPYREAN, Panalytical Co., Ltd., Almelo, The Netherlands) using Cu Kα radiation (λ = 1.54 Å) as the X-ray source in the 2θ range of 10°-80°, with a step size of 0.05° and a scan speed of 2°/min. A Fourier transform infrared spectrometer (JASCO FT/IR-4150, Tokyo, Japan) was used to analyze the chemical structure of the thermally conductive TPU composites and fillers. The vibration spectra were acquired from 400 to 4000 cm⁻¹. The top and cross-section views of the microstructures of the prepared ABN functional hybrid filler and the thermally conductive composites were observed through field emission scanning electron microscopy (FE-SEM, JSM-7610F, JEOL, Tokyo, Japan). Energy dispersive spectroscopy (EDS) was performed using the FE-SEM equipped with an EDS detector to assess the ABN functional hybrid fillers. A thermogravimetric analyzer (STA7300, Tokyo, Japan) was used to estimate the filler content of ABN in the ABN/TPU composites. Thermal degradation evaluation of all the samples was performed from 25 °C to 900 °C at a heating rate of 10 °C/min under a nitrogen atmosphere. Based on the transient plane source (TPS) method, a thermal constants analyzer (Hot Disk TPS 2500, Gothenburg, Sweden) was used to determine the thermal conductivity of the composites, and all measuring processes followed the ISO 22007-2 standard [29,32,33,47]. Infrared thermography (IR-TCM HD, Jenoptik AG, Jena, Germany) was used to record the change in the surface temperature of the TPU composite during heating. The tensile properties were measured on a universal tensile machine (Instron 4464, Instron Corporation, Norwood, MA, USA) at a cross-head speed of 5 mm/min according to ASTM D412. All the measurements were carried out at room temperature (25 °C). For each composition, at least five specimens were tested and the average value is reported.
Results and Discussion
The ABN functional hybrid filler was dispersed in the TPU matrix to form the thermally conductive composite. FTIR was used to examine the formation of the composite (30 wt.% filler loading) and to provide a preliminary identification of its chemical composition. Figure 2a shows the FTIR spectra of the h-BN powders, Al2O3 nanoparticles, and Al2O3/h-BN (ABN). Pure h-BN exhibits two sharp characteristic peaks at 1370 cm−1 and 810 cm−1, corresponding to the B-N in-plane stretching mode and the B-N-B out-of-plane bending mode, respectively [20]. In the spectrum of the Al2O3 nanoparticles, the broad band from 400 cm−1 to 1000 cm−1 can be attributed to the Al-O-Al stretching vibration. In addition, two characteristic peaks are observed at 1600 cm−1 and 3400 cm−1, corresponding to the -OH bending mode and the hydroxyl (-OH) stretching mode of absorbed water, respectively [48]. The spectrum of ABN is dominated by the characteristic h-BN peaks, comparable to the h-BN spectrum. Correspondingly, the characteristic absorption band of alumina is seen from 400 cm−1 to 1000 cm−1, verifying that the Al2O3 nanoparticles and h-BN powders were successfully combined into ABN through the spray drying process.
In the pure TPU spectrum, the broad band observed at 3332 cm−1 corresponds to amine (-NH-) groups. Another broad band located at around 2900 cm−1, which is split into multiple peaks, can be attributed to the -CH2-O stretching mode. Two sharp peaks at 1726 and 1702 cm−1 are correlated with the ester carbonyl group and the carbonyl stretching of the urethane groups, respectively. In addition, the peaks at 1530 cm−1 and 1230 cm−1 correspond to urethane C-N stretching and N-H bending absorption, respectively [49]. The characteristic peaks of h-BN can be found in both the h-BN/TPU and ABN/TPU spectra; however, their intensity is weaker owing to overlap with the TPU and high-intensity additive peaks [27]. The overall FTIR results suggest that the inorganic fillers are not chemically linked but physically attached to the TPU polymer, so no bonds disappear or newly appear in the h-BN/TPU and ABN/TPU composites [50].
The evolution of the structure of each sample can be followed from the XRD patterns shown in Figure 3a. The main peaks of pure h-BN are observed at 2θ = 27.11°, 41.95°, 50.50°, and 55.35°, corresponding to the (002), (100), (102), and (004) planes, respectively, of the characteristic hexagonal crystal structure (JCPDS: 85-1068) [34,51]. The crystallinity of the Al2O3 nanoparticles is also reflected in the XRD pattern: the main peaks at 2θ = 19.43°, 31.99°, 37.70°, 45.88°, and 66.89° can be assigned to the (111), (220), (311), (400), and (440) planes, respectively, matching well with the hexagonal phase of Al2O3 belonging to the R-3c space group (JCPDS: 00-010-0425) [52]. The patterns correspond to the typical pure h-BN and Al2O3 structures, and no impurity phases or heterostructures are detected according to the MDI Jade database, demonstrating that the prepared h-BN and Al2O3 samples are of high purity. The XRD pattern of the ABN functional hybrid filler is composed of the characteristic peaks of h-BN and Al2O3, as seen in Figure 3a, with no additional peaks, inferring that the ABN functional filler can be effectively prepared by the spray drying process, in accord with the FTIR studies discussed for Figure 2a. The XRD patterns of TPU, h-BN/TPU, and the thermally conductive ABN/TPU are shown in Figure 3b. The broad diffraction peaks at around 2θ = 15° and 30° indicate the amorphous character of the TPU polymer.
The presence of all the characteristic peaks of h-BN and ABN in the h-BN/TPU and ABN/TPU diffraction patterns, with no obvious position shift, suggests that the crystal structures of the thermally conductive composites are not affected by the processing method. It is interesting to note that the high-intensity h-BN peaks dominate both patterns. As verified by several research works, the degree of orientation of the h-BN sheets in the polymer matrix is one of the essential factors affecting thermal conductivity, and this feature can be examined by XRD analysis. The (002) and (004) planes are attributed to horizontally oriented h-BN, while the (100) plane is due to vertically oriented h-BN [53,54]. As seen in Figure 3b, the two strong peaks at 2θ = 26.5° and 54.7° of the h-BN/TPU composites, representing (002) and (004), result from the horizontal orientation of the h-BN sheets induced by hot-pressing under perpendicular pressure. In contrast, in the XRD pattern of the ABN/TPU composite, the (100) plane appears while the intensities of (002) and (004) are substantially reduced. This indicates that the orientation of the h-BN sheets inside the ABN/TPU composite is random, contributing a higher (100) intensity, in accord with the SEM analysis discussed in the next section [29,55].
In addition, the intensity ratio of the (002) and (100) characteristic peaks can be used to evaluate the orientation degree of h-BN filled in the polymer matrix. As verified by several research works [20,53], more vertically arranged h-BN structures are present when the value of I(002)/I(100) is lower. The I(002)/I(100) ratios of the ABN/TPU and h-BN/TPU composites are 5.2 and 84.1, respectively; the value for the h-BN/TPU composite is 16 times higher than that for the ABN/TPU composite. Creating a random structure in an h-BN-filled composite by a hot-press process is complicated, since h-BN sheets tend to align in the horizontal direction. However, through the process presented in this work, more vertically arranged h-BN structures can be created, which helps the ABN filler in the composite form effective continuous heat conduction chains and provide more heat conduction paths.
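As a minimal worked example of the ratio used above, the sketch below computes I(002)/I(100) from integrated peak intensities; the intensity values are hypothetical placeholders, chosen only so the ratios match the 5.2 and 84.1 reported in the text.

```python
# Minimal sketch: estimating h-BN orientation from XRD peak intensities.
# Intensities below are illustrative placeholders, not measured data.

def orientation_ratio(i_002: float, i_100: float) -> float:
    """Return I(002)/I(100); lower values imply more vertically arranged h-BN."""
    return i_002 / i_100

# Hypothetical integrated intensities chosen so the ratios match the text:
abn_tpu = orientation_ratio(i_002=520.0, i_100=100.0)   # -> 5.2
hbn_tpu = orientation_ratio(i_002=8410.0, i_100=100.0)  # -> 84.1

print(f"ABN/TPU:  I(002)/I(100) = {abn_tpu:.1f}")
print(f"h-BN/TPU: I(002)/I(100) = {hbn_tpu:.1f}")
print(f"h-BN/TPU ratio is {hbn_tpu / abn_tpu:.0f}x higher")  # ~16x
```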
The surface morphologies of the pure h-BN powder, Al2O3 nanoparticles, and ABN functional hybrid filler were studied by field emission scanning electron microscopy (FE-SEM). In Figure 4a, it can be seen that the h-BN displays a hexagonal, plate-like shape with an average particle size of 3 µm. Uniform spherical Al2O3 NPs with nanosphere diameters ranging from 30 to 50 nm can be seen in Figure 4b. The surface morphology of the ABN functional hybrid filler after spray drying, displayed in Figure 4c, shows that most of the ABN particles form spherical-like structures with particle sizes ranging from 10 to 40 µm. At higher magnification, as demonstrated in the inset of Figure 4c, the ABN particles have a rough surface, which is due to the decoration of Al2O3 NPs on the surface. In conventional theory, if the component powders are of approximately the same size, they tend to distribute uniformly in the composite particles. However, when the component powders have two different particle sizes, radial segregation of the particles occurs: through Brownian motion, smaller particles with higher mobility occlude larger particles, and according to this mechanism a surrounding or coating of one component by the other can be created [46,56,57]. To further reveal that the h-BN powder surface was wrapped by compact Al2O3 NP layers, ABN particles were compressed to form a crack on the surface, and the results were confirmed by the elemental mapping images shown in Figure 4d. As expected, the abundant Al and O elements exhibited a uniform and continuous distribution throughout the ABN particles, demonstrating the Al2O3 NP coating on the h-BN powders. In addition, a considerable amount of Al2O3 NPs is located between neighboring h-BN particles, serving as bridges. This unique structure is conducive to forming more efficient 3-D thermally conductive networks. Figure 5a-c presents the cross-section images of the samples; all samples underwent brittle fracture after being immersed in liquid nitrogen, and the dispersion and structural distribution of fillers in the polymer can be observed in these figures. Figure 5a shows the cross-section image of pure TPU; because of the brittle fracture, its surface is clean and smooth. In contrast, as seen in Figure 5b,c, the composites incorporating hybrid functional filler exhibit a rough surface and a crumpled fracture structure with many embedded particles, a result of the local polymer deformation that occurred during cracking after the addition of the hybrid functional fillers [58]. Despite the rough surface, the thermally conductive fillers show uniform dispersion and homogeneity with no large clusters in the matrix, attributable to the state of particle dispersion achieved by the melt mixing method, as mentioned in the introduction. Most of the thermally conductive filler particles play an essential role in constructing the thermally conductive pathways, which have higher thermal conductivity along the direction of heat flow [53].
We further compared the cross-sectional morphologies of the TPU composites filled with different weight percentages of thermally conductive functional fillers. Figure 5b,c displays images of the h-BN/TPU and ABN/TPU composites with a filler loading of 20 wt.%. As seen in Figure 5b, the h-BN sheets in the h-BN/TPU composite film are almost horizontally oriented, as rendered by hot-pressing with perpendicular pressure, signifying that the thermal conductivity could differ considerably between directions, thereby limiting its use in practical applications, since heat dissipation between devices and heat sinks usually occurs in the vertical direction [55]. In addition, we observed that at a low h-BN loading (20 wt.%), the fillers are unable to create a continuous heat flow path with low interfacial thermal resistance. Because of this inherent problem, a large amount of thermally conductive filler is required to establish a thermally conductive pathway network, which increases the filler cost. On the other hand, as shown in Figure 5c, the spherical ABN functional hybrid fillers proposed in this work are connected to form a continuous thermally conductive pathway network (marked in red), which plays a pivotal role in enhancing the thermal conductivity of the composite. The most significant advantage of the spherical ABN functional hybrid filler is that it has no specific orientation after hot pressing. The spherical structure provides continuous pathways in all dimensions and ensures that most of the energy is transferred through the filler networks [59]. Therefore, compared with the h-BN filler from our previous work, the spherical ABN functional hybrid filler possesses a more continuous and longer-range structure in all dimensions, resulting in a significant improvement of the composite thermal conductivity despite the low filler concentration. As mentioned above, a self-assembled 3-D network in TPU is successfully generated by the insertion of the Al2O3/h-BN hybrid through our designed method. Moreover, our designed method is not only facile but also offers new opportunities for large-scale production.
Thermal stability is crucial for polymer materials and is a limiting factor in both processing and applications. In this work, the ABN content of the composites and their thermal stability were verified by thermogravimetric analysis (TGA) at a heating rate of 10 °C min−1 from 25 °C to 900 °C under a nitrogen atmosphere. Figure 6 demonstrates the TGA curves of the ABN filler, the pure TPU polymer, and the ABN/TPU composites with different ABN contents. The results show that the ABN functional hybrid fillers prepared by our method exhibit high thermal stability, with no significant weight loss up to 900 °C. In contrast to the ABN filler, pristine TPU and the ABN/TPU composites show two main degradation stages in the weight loss curves. The first stage, between 300 °C and 350 °C, is attributed to the cleavage of the urethane linkage into polyol and isocyanate in the TPU hard segment [60]. The second stage, between 350 °C and 480 °C, is ascribed to the cleavage of the polyol and diisocyanate into smaller molecules in the TPU soft segment [61]. The residual weight of the ABN/TPU composites is higher than that of pure TPU, which degrades completely at around 900 °C. The ABN filler content can be obtained from the residual weight at 900 °C, and from the differences in residual weight the addition of about 10 wt.%, 20 wt.%, and 30 wt.% ABN to the TPU substrate is confirmed. Figure 7 shows the thermal conductivity of the h-BN/TPU and ABN/TPU composites with filler loadings ranging from 0 wt.% to 30 wt.%. Owing to its amorphous structure and phonon scattering, the thermal conductivity of pristine TPU is extremely low, at around 0.2 W m−1 K−1. For all composites, the thermal conductivity improved remarkably with increasing filler content; however, the enhancement differed distinctly between the two fillers.
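As a sketch of the filler-content estimate described above, the snippet below derives a weight fraction from residual weights at 900 °C; the residue values are hypothetical placeholders, and only the difference-of-residues method follows the text.

```python
# Illustrative sketch: estimating filler content from TGA residual weight.
# Residue values are hypothetical placeholders, not measured data.

def filler_content(residue_composite: float, residue_matrix: float,
                   residue_filler: float = 100.0) -> float:
    """Estimate filler wt.% from residual weights (in %) at 900 C,
    assuming a thermally stable filler and a known matrix residue."""
    return 100.0 * (residue_composite - residue_matrix) / (residue_filler - residue_matrix)

# Pure TPU degrades almost completely; ABN shows no significant loss.
print(filler_content(residue_composite=29.5, residue_matrix=0.5))  # ~29 wt.%
```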
As clearly seen in Figure 7, the ABN/TPU composite showed higher thermal conductivity than h-BN/TPU at the same filler loading. For example, at 30 wt.% filler loading, the thermal conductivity of the ABN/TPU composite is 1.39 W m−1 K−1, while the h-BN/TPU composite shows only 0.48 W m−1 K−1, roughly three times lower, signifying that the spherical ABN functional filler obtained by spray drying has a clear advantage in improving the thermal conductive properties of the composite. The trend in the thermal conductivity of the ABN/TPU composites is nonlinear. At 10 wt.% filler loading, the thermal conductivity is 0.34 W m−1 K−1, an improvement of around 45%. Beyond the percolation threshold (between 10 wt.% and 20 wt.% filler loading), this value increases sharply [27,29]. When the content of spherical ABN particles is further increased to 30 wt.%, the thermal conductivity reaches 1.39 W m−1 K−1, a dramatic upsurge of 488% compared with the pure matrix. The variation in the thermal conductivity of the ABN/TPU composites can be explained by the network structure of the fillers and their distribution state in the TPU matrix. At low filler loading, the polymer matrix is interposed between adjacent fillers, disrupting particle-particle contact; this causes phonon scattering and increases the interfacial thermal resistance, which ultimately reduces heat conduction. When the filler content is increased beyond the percolation threshold (between 10 wt.% and 20 wt.%), the growing number of spherical ABN fillers contact their neighbors, forming a densely packed structure that facilitates phonon transfer through a continuous thermal network.
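As a quick consistency check of the percentages reported above (no new data), both the 45% and 488% enhancements follow from a pristine-TPU baseline of about 0.236 W m−1 K−1, consistent with the "around 0.2" figure in the text:

```python
# Consistency check of the reported enhancements (no new measurements).
baseline = 0.236  # W/(m*K), inferred from the reported percentages

for k, label in [(0.34, "10 wt.% ABN"), (1.39, "30 wt.% ABN")]:
    enhancement = 100.0 * (k - baseline) / baseline
    print(f"{label}: {k} W/(m*K) -> +{enhancement:.0f}%")
# 10 wt.% ABN: +44%  (reported ~45%)
# 30 wt.% ABN: +489% (reported 488%)
```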
To better illustrate the effect of different fillers on the formation of thermally conductive pathways, two models of the heat flow network in the composites are proposed, as shown in Figure 8. Usually, to obtain high thermal conductivity, a heat flow channel along the heat flow direction should be generated. However, the h-BN/TPU composite in Figure 8a comprises only a few thermally conductive pathways, and many h-BN sheets are not involved in their construction. The horizontal orientation of the h-BN sheets formed after vertical hot pressing interrupts heat transfer along the vertical direction, so the thermal conductivity cannot be effectively improved. In contrast, the spherical ABN particles are linked to each other to form a continuous three-dimensional network structure for unhindered heat flow in the TPU polymer, as shown in the schematic diagram in Figure 8b. These spherical ABN particles not only retain their structure after hot pressing but are also beneficial in achieving a dense packing of particles that forms thermally conductive paths with reduced interfacial thermal resistance, thereby enhancing phonon transfer. When fillers are randomly distributed in the composite, heat transfer constantly alternates between fillers and polymer, so the interfacial thermal resistance strongly hinders the improvement of thermal conductivity. The synergistic enhancement of the spherical ABN fillers on thermal conduction can also be clearly observed, in which the Al2O3 NPs connect neighboring h-BN platelets like bridges to construct continuous phonon transmission pathways. This demonstrates that the spherical ABN functional filler prepared in this work is promising for achieving high thermal conductivity in polymer composites. The heat transfer capabilities of the TPU composites with 30 wt.% h-BN and 30 wt.% ABN loadings were tested by heating on an electric hot plate for 40 s and analyzing the temperature response recorded by infrared thermography, as shown in Figure 9. Figure 9a presents photographs of the pure TPU, h-BN/TPU, and ABN/TPU composites; the pure TPU exhibits high transparency, while the h-BN/TPU and ABN/TPU composites are opaque and white.
The heating curves of the samples are shown in Figure 9b, and infrared thermography images of the three samples at different stages are illustrated in Figure 9c. The surface temperature rises with increasing heating time, and the color of all samples gradually changes from blue to red. The surface temperature of all composites changes faster than that of pure TPU. Figure 9b,c shows the heat transfer trend of the three samples when heated from 25 °C to 65 °C on the hot plate. After heating for 40 s, the surface temperature of the ABN/TPU composite increased the fastest and reached the highest value. The enhancement of the composites' heat transfer by the spherical ABN fillers is consistent with the order of their thermal conductivity values mentioned above. The IR results confirm that, in the ABN/TPU composites prepared by the proposed method, a continuous thermal network is formed and heat transfer occurs more effectively.
To further examine the structure and interfacial properties between the ABN functional filler and TPU, the mechanical properties of the ABN/TPU composites were investigated using tensile stress-strain tests. The tensile stress-strain curves of pure TPU and of TPU composites with ABN filler loadings ranging from 10 wt.% to 30 wt.% are shown in Figure 10. Pure TPU shows the typical tensile behavior of an elastomer, with a tensile strength of 8.2 MPa and a strain at failure of 570%. With the addition of the ABN functional filler, the tensile strength of the TPU composites first increased and then decreased with increasing filler loading. A maximum tensile strength of 21 MPa is reached at a filler loading of 20 wt.%, about three times that of the pure TPU matrix, while the elongation decreased only slightly. This strengthened tensile behavior is due to the intrinsically favorable mechanical properties of the ABN functional fillers and their strong interfacial interaction with the TPU matrix [62,63].
When subjected to stress, the TPU chains are initially stretched along the stress direction; the external stress applied to the composite can then be shared efficiently by transfer to the filler via the interfacial interaction between the filler and the TPU chains [51]. Furthermore, the filler has a higher tensile strength than the TPU matrix and can act as a skeleton that helps the matrix bear the load, resulting in the higher tensile stress of the composites. However, a further increase of the ABN functional filler content to 30 wt.% sharply decreases both the tensile strength and the elongation at break of the composites, which exhibit obvious brittle fracture characteristics. This phenomenon can be attributed to aggregation of the filler, which weakens the encapsulating and supporting role played by the TPU matrix, indicating that the presence of the ABN functional filler is unfavorable for maintaining the tensile ductility of the composite, especially at very high contents [29,60].
The above results demonstrate that the improvement of thermal conductivity in the TPU composites is due to the strong interfacial coupling between the ABN filler and TPU, which is beneficial for forming a dense, continuous three-dimensional network for thermal conduction.
Conclusions
In this work, a novel spherical hybrid filler (ABN) containing Al2O3 NPs and h-BN was designed through simple mechanical mixing and spray drying processes. This filler was used to prepare TPU composites with a continuous three-dimensional (3D) thermal conduction network. The ABN/TPU composites prepared by melt mixing and hot compression were compared with h-BN/TPU composites in terms of thermal conductivity: the ABN/TPU composite exhibited a thermal conductivity of 1.39 W m−1 K−1 at a filler loading of 30 wt.%, six times higher than that of pure TPU and three times higher than that of the h-BN/TPU composite. The enhancement in thermal conductivity can be attributed to the 3D thermal conduction network: as the filler, the ABN particles provided a continuous heat conduction path while reducing the interfacial thermal resistance of the matrix. SEM images confirmed that Al2O3 NPs were located between neighboring h-BN powders and served as bridges, which in turn helped build a continuous phonon transmission path. These results provide new insights into the construction of filler-containing composites with 3D isolation networks and demonstrate strong potential for the design and large-scale manufacturing of thermal interface materials.
Type-2 Fuzzy Expert System Approach for Decision-Making of Financial Assets and Investing under Different Uncertainty
Extensive research results on stock market time series using classical (type-1) fuzzy sets are available in the literature. However, type-1 fuzzy sets cannot fully capture the uncertainty associated with stock market developments due to their limited descriptiveness. This paper fills a scientific gap and focuses on type-2 fuzzy logic applied to stock markets. Type-2 fuzzy sets may include additional uncertainty resulting from unclear, uncertain, or inaccurate financial data through which model inputs are calculated. Here we propose four methods based on type-2 fuzzy logic, which differ in the level of uncertainty contained in the fuzzy sets, and compare them with a type-1 fuzzy model. The case study aims to create a model to support investment decisions in Exchange-Traded Funds (ETFs) listed on international equity markets. The created type-2 fuzzy logic models are compared with the classic type-1 fuzzy logic model. Based on the results of the comparison, it can be said that type-2 fuzzy logic with dual fuzzy sets is able to better describe data from financial time series and provides more accurate outputs. The results reflect the capability and effectiveness of the approach proposed in this paper. However, the performance of the type-2 fuzzy logic models decreases as increasing uncertainty is included in the fuzzy sets. For further research, it would be appropriate to examine different levels of uncertainty in the input parameters themselves and monitor the performance of such a modified model.
Introduction
The stock market occupies a key position in the economic system of each country. Predicting the future development of the stock market is a key task and an important area of research in the financial field, and thus in the economy as a whole. Stock markets are characterized by nonlinear, even chaotic, behavior; therefore, the data collected generally show some uncertainty and may be incomplete or even incorrect. Uncertainty is thus a major challenge in real-world applications, and there is a need for accessible approaches to deal with such vague information, as noted by Shukla et al. [1]. In this paper, attention is focused on integrating an approach that facilitates decisions on the future direction of the stock market through fuzzy logic. Fuzzy set theory was first introduced by Lotfi Zadeh in the 1960s as a way to capture uncertainty and ambiguity. Fuzzy logic can be considered a generalization of classical set theory. Over time, research has produced improvements to fuzzy logic that better reflect its true purpose, i.e., the linguistic expression of input variables, including uncertainty stemming from unclear or ambiguous information.
This idea has spawned three main representations of fuzzy logic: type-1 fuzzy sets (T1FS), interval type-2 fuzzy sets (IT2FS), and general type-2 fuzzy sets (GT2FS). The first approach is the simplest form of fuzzy logic and also the most widespread and applicable. A more complex approach is represented by IT2FS, where the concept of uncertainty in the form of intervals is introduced. Although computationally more complex than T1FS, they improve the general fuzzy model by being more resistant to external noise, as reported by Castro et al. [2], Puška et al. [3], Eren [4], and Tavossi et al. [5].
Fuzzy logic is widely used in many areas, not only because it can handle incomplete or uncertain data but also because its tools have been simplified using parameterized fuzzy sets. Building fuzzy rules and constructing the right membership function (MF) have been challenging for decades. The fuzzy membership function is a key concept in designing fuzzy systems, and its correct and accurate use is essential for the reliability of the results obtained. The construction of the membership function and the determination of its parameters therefore remain a current problem, as stated by Yankova et al. [6]. The choice of the shape and parameters of the membership functions plays an important role in a fuzzy model, as it can affect the performance of the whole system, as stated by Wijayasekara and Manic [7]. Although the user can choose from a large number of membership function shapes, the choice of parameters is individual and depends on the specific application; it requires expertise or sophisticated methods to fine-tune the membership functions. In addition, membership functions are, as reported by Kayacan et al. [8], a subjective matter of perceiving the vague concepts entering the model. Sadollah [9] further adds that there is still no clear criterion for assessing the appropriateness of a chosen membership function. MFs can take any shape and form as long as they map the data with the required degree of membership. The choice of MF is up to the designer; this is where a fuzzy system offers individual degrees of freedom, and with experience one learns which MF shape is suitable for the intended application.
In this work, we use type-2 fuzzy sets to overcome this uncertainty and develop a fuzzy system to support stock market investment decisions. This type-2 fuzzy system takes the delayed value of the stock index as input, fuzzifies it using type-2 fuzzy membership functions, and applies fuzzy rules in the inference system. The output of the fuzzy system, which is in the form of a type-2 fuzzy membership function, is reduced to a type-1 fuzzy membership function and then defuzzified to a crisp value, providing decision support for the future development of the monitored stock index. The aim of the paper is to create models based on type-2 fuzzy logic with different levels of uncertainty contained in the type-2 fuzzy sets, applied to stock markets, and to compare them with classical type-1 fuzzy logic. The model will serve as decision support for investors. The purpose is to determine whether IT2FLS provides more accurate results than classical T1FLS. The main contributions of the research are: (1) the creation of models that combine different placements of the upper and lower type-2 fuzzy membership functions; (2) a focus on the financial and economic area of interest, which has so far been insufficiently researched in terms of the applicability of type-2 fuzzy logic; (3) a model intended directly to support decision-making regarding investments in Exchange-Traded Funds; we believe that focusing on a specific investment instrument is more suitable for the general investing public and makes it easier to invest in an entire portfolio of assets through a single share than focusing on a stock index; and (4) an alternative approach to the investment evaluation of funds compared with classical statistical methods. Our study is organized as follows: Section 1 reviews the literature applying fuzzy logic to stock markets. Section 2 explains the type-2 fuzzy logic technique, including metrics for evaluating the overall performance and error rate of the model. Section 3 describes the examined data set, including the creation of the IT2FLS model, and Section 4 deals with the subsequent validation and comparison of the created models.
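To make the type-reduction and defuzzification step concrete, the sketch below uses the Nie-Tan closed-form simplification (averaging the upper and lower membership functions, then taking the centroid) as a stand-in; the paper does not specify its exact type-reduction procedure, and all values and names here are illustrative.

```python
import numpy as np

# Minimal sketch of the last step described above: reducing a type-2 output
# set to type-1 and defuzzifying it to a crisp value, via the Nie-Tan
# closed form. This is a simple stand-in, not the paper's exact procedure.

def nie_tan_defuzzify(x, lmf, umf):
    """Crisp output from sampled lower/upper membership functions."""
    embedded = (np.asarray(lmf) + np.asarray(umf)) / 2.0   # type-1 reduction
    return float(np.sum(x * embedded) / np.sum(embedded))  # centroid

x = np.linspace(0, 10, 101)               # output universe (e.g., index change)
umf = np.exp(-0.5 * ((x - 6) / 2) ** 2)   # hypothetical upper MF
lmf = 0.6 * umf                           # hypothetical lower MF
print(nie_tan_defuzzify(x, lmf, umf))     # ~5.9, pulled left by truncation at 10
```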
Review of the Scientific Literature
Fuzzy logic is used in a wide range of decision-making problems, such as risk management, finance, economics, and management, but also in weather forecasting, physics, and many other areas. The usability of fuzzy logic is enormous, mainly because it works on the principle of human thinking, unlike neural networks or genetic algorithms. Since the introduction of fuzzy logic prediction models, the method has been used increasingly in studies addressing stock market forecasts or as a decision-support tool for investors, analysts, and the general investing public.
An example is the short-term technical trading strategy discussed by Chourmouziadis and Chatzoglou [10] using fuzzy logic. The authors focused on a methodology for buying and selling securities without the support of portfolio managers. Ijegwa et al. [11] developed a fuzzy model that, based on technical indicators, provides a signal to buy, sell, or hold an investment; the model outputs provide satisfactory results. The results of Khayamim et al. [12] showed that their proposed fuzzy method responds appropriately to the psychological component of the market; in addition, for all investor profiles, the recommended strategy completely outperforms the market and the remaining strategies. A conditional fuzzy inference approach is used in the study by Hassanniakalager et al. [13].
This approach is used for forecasting under constrained conditions. Through conditional selection of rules, the model is able to achieve higher performance and interpretability. To predict the Chinese stock index, Sun et al. [14] use fuzzy sets and combine the traditional fuzzy model with the rough set method; according to the authors, this approach provides better prediction results. Mansour et al. [15] formulated a multiobjective financial portfolio selection approach involving fuzzy parameters, where the allocation options are given by fuzzy numbers derived from the information provided by the decision environment. Tsai et al. [16], in contrast to traditional methods, use more variables in the fuzzy model to better reflect stock volatility in prediction.
The results suggest that the authors' multi-period model is better and provides sufficient decision support for investors. Hasan and Fong [17] introduced components for improving decision-making through sentiment analysis and simple fuzzy decision-making; the best model was chosen as the basis for a fuzzy decision-making mechanism provided to investors.
Other researchers have focused on hybrid strategies, such as the integration of fuzzy logic and neural networks. Examples are the studies of Su and Cheng [18] and Vella and Ng [19], who used adaptive neurofuzzy inference systems (ANFIS), modifying the model to use type-2 fuzzy logic instead of classical type-1 fuzzy logic. Vlasenko et al. [20] modified the classical ANFIS model, using multidimensional Gaussian functions instead of polynomials in the fourth layer.
The experimental results showed clear advantages of the described model and its learning. Dutta [21] states that the nature of stock and capital market data makes predicting stock price movements complex and challenging. That study combined fuzzy c-means clustering with a neural network technique for stock price prediction, seeking the optimal solution for predicting the future share price. A comparison of time and space complexity showed that the proposed method is better than existing methods. Rajab and Sharma [22] focused on the Bombay Stock Exchange, CNX Nifty, and S&P 500 and proposed an effective neurofuzzy model for their prediction.
The authors point out that the hybrid model strikes a better balance between accuracy and interpretability. In her study, Janková [23] discusses the design of a neurofuzzy model to support decision-making when investing in instruments listed on the stock exchange in the Czech Republic. Empirical results show that the neurofuzzy model behaves more naturally than other statistical tools, simulating the decision-making process in stock trading without increasing the risk arising from the investor's subjective judgment. Similarly, García et al. [24] demonstrate the suitability of implementing technical indicators and their predictive ability on the German stock index DAX-30 using a hybrid neurofuzzy model. In addition to the accuracy of the model, the authors also highlight the creation of less risky strategies that are more profitable than those obtained using other methods. The studies above demonstrated exclusively the use of type-1 fuzzy logic, which is represented by membership functions or fuzzy sets ranging from zero to one. Such membership functions represent a precise point or exact degree of membership. However, this leads to problems with the inclusion of additional uncertainty and unclear information entering the model, according to Jiang et al. [25]. Therefore, type-2 fuzzy sets have been introduced, which allow uncertainty in the associated degrees of membership to address the uncertainty problems described by Liu et al. [26]. Models based on type-2 fuzzy systems have been used to solve several known problems reported in the literature, as described by Sumati et al. [27]. Examples of the use of type-2 fuzzy logic are given in the following paragraph.
Jiang et al. [25] propose an interval type-2 fuzzy system for stock index prediction based on fuzzy time series and a fuzzy logical relationship map (FLRM). The authors applied this methodology to data from the Taiwan Stock Exchange Capitalization Weighted Stock Index, the Dow Jones Industrial Average, and the National Association of Securities Dealers Automated Quotation. Their outputs indicate that the chosen method outperforms classical statistical methods. Huarng and Yu [28] studied the extension of type-1 fuzzy time series models to type-2 models. They designed a type-2 model for TAIEX index prediction, in which additional observations were used to refine the FLRs obtained from the type-1 model, resulting in better predictive performance. The authors' empirical evidence points to a lower error rate, measured by RMSE, for type-2 than for type-1 fuzzy logic. Similar results are presented by Bajestani and Zare [29], who add that this new type of fuzzy logic is more efficient than previous methods. Liu et al. [26] modified the classical hybrid neurofuzzy model and integrated a type-2 fuzzy set into it, which they used to predict the TAIEX index; the obtained results point to a higher forecasting accuracy for this hybrid model than for the individual approaches alone. Zarandi et al. [30] developed an expert system based on type-2 fuzzy rules for stock price analysis. They used a type-2 fuzzy model to predict company stock prices in Asia, and the results of the price deviation forecasts are very encouraging. A genetic type-2 fuzzy logic system was introduced by Bernardo et al. [31]. The authors used this hybrid model for prediction and modeling in the field of finance; it overcame the white-box problem and provided performance comparable to black-box models. Janková and Dostál [32] apply IT2FL to the Czech stock market for decisions on investing in shares of the PX index. Their proposed type-2 fuzzy model uses the return and risk of investment instruments as input variables. The created system is able to generate aggregated models from a number of linguistic rules, which allows the investor to understand the resulting financial model. The use of T2FLS can lead to more realistic and accurate results than T1FLS. Another hybrid IT2FLS approach was provided by Hasan et al. [33], whose results correspond to similar outputs of other authors. Thus, it can be stated that type-2 fuzzy logic is able to improve the performance of existing models.
Type-2 Fuzzy Logic System
Quantification of sustainability data is very difficult; therefore, the evaluation of such data requires a personal point of view, namely, evaluation by experienced experts in the field who provide a relevant opinion. In addition, sustainability data are often imperfect or inaccessible. This problem can be solved by fuzzy logic, whose main advantage is that it allows linguistic evaluation of each indicator. In order to implement the required assessment technique, methodologies should provide flexibility regarding the set of indicators applied, recognize data uncertainty, and mimic the human cognitive ability to assign scores to the evaluated items. This is essential for holistic sustainability evaluation and the associated large-scale impacts, as stated in [34].
Fuzzy logic is based on the ability to deal with highly uncertain, inaccurate, or chaotic data. Zadeh [34] developed acceptable techniques for characterizing this uncertainty through fuzzy sets instead of complex mathematical formulations. The obvious advantage of fuzzy models is the ability to present data through linguistic qualitative concepts rather than quantitative data [35]. Fuzzy logic consists of three basic mechanisms: fuzzification, the inference engine, and defuzzification. The relationship of these components is schematically illustrated in Figure 1, including fuzzy logic operators, membership functions, and fuzzy rules. Membership functions make it possible to represent a fuzzy set. In addition, fuzzy "if-then" rules represent the views and opinions of experts in their field in a form that can be easily computed.
The fuzzy logic method allows the system to model an environment that mimics human cognitive behavior and to accept linguistic input that recognizes the uncertainty of the record. The goal of an FLS is therefore to provide a person with a descriptive understanding of problem solving. Without loss of generality, a fuzzy set is a set whose elements are allowed different degrees of membership in the range [0, 1]. The degree of similarity of each input variable in such a fuzzy set is thus given by the membership function [36,37]. A type-1 fuzzy membership function is defined by precise, crisp values in the range [0, 1], while a type-2 fuzzy membership function can be designed for each input variable in domain x. Furthermore, the T2FLS membership function can handle a higher level of uncertainty than the T1FLS membership function [38]. This is achieved by incorporating different degrees of the footprint of uncertainty (FOU), combined with the three-dimensional nature of type-2 fuzzy sets. A secondary membership function is linked to the degree of membership; when this secondary membership function takes the maximum uncertainty 1 over a certain interval [a, b], an interval type-2 fuzzy set is formed. The key elements of interval type-2 fuzzy sets are the footprint of uncertainty (FOU), the upper membership function (UMF), and the lower membership function (LMF). Note that the maximum uncertainty expressed in the secondary membership function is equal to 1, so an interval type-2 fuzzy set can be simplified. Fuzzy sets are associated with linguistic terms that form part of fuzzy rules conditioned by statements [39]. The T2FLS structure is very similar to the T1FLS structure. The measured real variables are first transformed in a fuzzification block into linguistic variables, based on underlying base linguistic variables; Janková et al. [40] state that three to seven attributes of this base variable are usually used. The degree to which a given variable belongs to a set is represented by a mathematical function. Three types of fuzzification are available in T2FLS: perfect measured data are modeled as a crisp set, data with noise and data with stationary noise are modeled as type-1 fuzzy sets, and data with nonstationary noise are modeled as type-2 fuzzy sets. The last type of fuzzification cannot be performed in T1FLS.
Type-2 fuzzy logic systems are represented by a possibility distribution function that can be written, according to Sang et al. [41] and Mendel et al. [42], in the standard form

$$\tilde{X} = \left\{ \big((x,u), \mu_{\tilde{X}}(x,u)\big) \;\middle|\; \forall x \in X,\; \forall u \in J_x \subseteq [0,1] \right\}, \qquad 0 \le \mu_{\tilde{X}}(x,u) \le 1,$$

where x is the primary variable, J_x ⊆ [0, 1] is the primary fuzzy possibility of x, and u is the secondary variable. Mendel [43] defines the IT2FLS method using a generalized interval fuzzy set. The requirement on the secondary possibility distribution is a normality condition, meaning that the secondary grade equals 1 for all admissible (x, u); an interval type-2 set is therefore

$$\tilde{X} = \left\{ \big((x,u), 1\big) \;\middle|\; \forall x \in X,\; \forall u \in J_x \subseteq [0,1] \right\}.$$

For an IT2FLS X with upper possibility distribution $\overline{\mu}(x)$ and lower possibility distribution $\underline{\mu}(x)$, both type-1 possibility distributions, the footprint of uncertainty of X, FOU(X), is defined as [42]

$$\mathrm{FOU}(X) = \bigcup_{x \in X} J_x = \left\{ (x,u) : u \in \left[\underline{\mu}(x), \overline{\mu}(x)\right] \right\}.$$
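As an illustrative sketch (not the paper's exact model), the snippet below builds an interval type-2 Gaussian membership function with an uncertain mean; the upper and lower membership functions bound the FOU slice J_x at every input, matching the definitions above. All parameter values are assumptions.

```python
import numpy as np

# Illustrative: an interval type-2 Gaussian MF with uncertain mean m in [m1, m2].
# The UMF and LMF bound the footprint of uncertainty (FOU) for every input x.

def gauss(x, m, sigma):
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_gaussian_mf(x, m1, m2, sigma):
    """Return (lmf, umf) for a Gaussian IT2 MF with uncertain mean."""
    x = np.asarray(x, dtype=float)
    # UMF: 1 on the plateau between the two means, Gaussian shoulders outside.
    umf = np.where(x < m1, gauss(x, m1, sigma),
          np.where(x > m2, gauss(x, m2, sigma), 1.0))
    # LMF: the smaller of the two Gaussians at every point.
    lmf = np.minimum(gauss(x, m1, sigma), gauss(x, m2, sigma))
    return lmf, umf

x = np.linspace(-4, 4, 9)
lmf, umf = it2_gaussian_mf(x, m1=-0.5, m2=0.5, sigma=1.0)
for xi, lo, hi in zip(x, lmf, umf):
    print(f"x={xi:+.1f}  J_x = [{lo:.3f}, {hi:.3f}]")  # FOU slice at x
```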
Figure 1: The mechanism of type-2 fuzzy logic (input, fuzzification, rule base, inference mechanism, defuzzification, output; antecedents and consequents). The comparison method for IT2FLS is described below; it is based on the assumption of an uncertain mean and a coefficient of variation.
Let X be an IT2FLS whose possibility uncertainty mean value is defined, according to Sang and Liu [44], from the possibility uncertainty means of the upper membership function, M(X_U), and of the lower membership function, M(X_L). For all IT2FLS, as further reported by Sang and Liu [44], a coefficient of variation of possibility uncertainty is defined, where ϵ is an extremely small value and f(α) is an increasing function. Let X and Y be two IT2FLS whose comparison criteria are defined according to Sang and Liu [44]; it is denoted that > means "larger than" in the sense of order, < means "less than" in the sense of order, and ∼ means "same order".
Evaluation of the Accuracy of the Model.
The model can be used in practice if verification shows that it provides accurate results. The following metrics are used to verify and evaluate the accuracy, or error rate, of the individual MFs. The RMSE indicator compares the original data $y_t$ with the data generated by the model, $\hat{y}_t$. The Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), Relative Root Mean Squared Error (RRMSE, the RMSE normalized by the scale of the observed series), and Mean Squared Error (MSE) indicators are also used. These metrics are used to evaluate which type of MF, and which level of uncertainty, provides the best results for analyzing a stock market that exhibits a specific feature, as described by Soto et al. [45] and Bas et al. [46]. In standard form, these evaluations are

$$\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\left(y_t - \hat{y}_t\right)^2, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}},$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left|y_t - \hat{y}_t\right|, \qquad \mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|.$$
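As a sketch, the metrics above can be computed directly; the array names below are illustrative, and the implementation assumes the observed series contains no zeros (required by MAPE).

```python
import numpy as np

# Minimal implementations of the evaluation metrics listed above.

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mse(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    return float(np.mean(np.abs(y - y_hat)))

def mape(y, y_hat):
    return float(100.0 * np.mean(np.abs((y - y_hat) / y)))  # y must be nonzero

y = np.array([100.0, 102.0, 101.0, 105.0])      # observed series (illustrative)
y_hat = np.array([99.0, 103.0, 102.0, 104.0])   # model outputs (illustrative)
print(rmse(y, y_hat), mse(y, y_hat), mae(y, y_hat), mape(y, y_hat))
```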
Data and Methodology
It is well known that the stock market is a dynamic system exhibiting chaotic behavior, which makes predicting its future development very difficult. In particular, nonlinear and complex laws limit quick decisions on the right investments. For this reason, many researchers are focusing on developing intelligent systems able to reduce the amount of market risk resulting from this nonlinear nature, and alternative techniques are increasingly used in stock market modeling and analysis, including fuzzy logic, which is able to account for the uncertainty, nonlinearity, and noise that occur in financial time series. This study focuses on the application of interval type-2 fuzzy logic as a sophisticated tool able to intuitively model human judgment through linguistic values combined with quantitative data, as reported by Ulubeyli and Kazaz [47]. Fuzzy logic allows preferences and subjective opinions to be expressed better when making decisions; in other words, if objective facts cannot be accurately identified, at least their scope of membership can be defined. Interval fuzzy logic operates with the approximate numerical data used for decision-making, as stated by Liu [48].
Description and Processing of the Dataset.
For testing the interval type-2 fuzzy logic system, which in this paper serves to create a model to support decision-making on investment instruments, a data set of 40 Exchange-Traded Funds (ETFs) from four continents over the last 5 years, i.e., from 2015 to 2019, is selected. Monthly data are used. Specifically, the 10 most powerful ETFs are selected from each of four regions: Europe, the USA, Asia-Pacific, and Emerging markets. ETFs are basically index funds, which represent an alternative way of investing for institutional and retail investors. The most important characteristic of Exchange-Traded Funds is, as the name suggests, the fact that they are traded similarly to shares on the stock exchange. They are valued and traded on an ongoing basis throughout the trading day, allowing investors to buy or sell without delay. Exchange-Traded Funds invest in a defined index, or basket of assets, thus allowing investors to invest in the entire portfolio using a single share. Originally, they were established as passive funds with the aim of replicating the underlying benchmark as faithfully as possible; in recent years, however, ETFs with active management that aim to outperform the underlying index or basket of assets have also expanded. Figure 2 shows a PMFG graph built from the pairwise correlation matrix based on the monthly data of all 40 examined ETFs.
This graph is the first extension of the Minimum Spanning Tree (MST), and its full name is Planar Maximally Filtered Graph. The PMFG is a comprehensive network that was first introduced in Tumminello et al. [49] and Aste et al. [50]. In this case, the degree of similarity of the ETFs is given by the Pearson correlation coefficient. Individual ETFs are labeled by ticker. A lighter color indicates a stronger correlation between the funds; conversely, a darker color indicates independence or even a negative correlation between individual ETFs. The largest correlations are among European, American, and Asia-Pacific funds. ETFs from Europe and Asia-Pacific correlate more positively with each other, whereas the positive correlation between the USA and Europe or Asia-Pacific is not as dominant. Some Emerging markets ETFs even have a negative correlation with all other funds. Table 1 shows the basic characteristics of the ETFs by continent. The table shows that ETFs from the USA have the highest return, with an average return of 65.59%, followed by funds from the Asia-Pacific continent with an average return of 23.46%. The lowest average return, 7.88%, was achieved by ETFs from Emerging markets. In terms of fluctuations in returns, or the riskiness of the fund represented by the standard deviation, ETFs from Asia-Pacific are calculated to be the riskiest. Paradoxically, although US funds have the highest returns, they also have the lowest risks. In terms of total costs, measured by the total expense ratio (TER), ETFs from the US show the lowest cost on average, followed by ETFs from Europe. Similar costs above 0.5% are reached by ETFs from the Asia-Pacific and Emerging markets. Despite these values, all analyzed ETFs show a low overall cost compared to mutual funds, which is one of the main advantages of Exchange-Traded Funds. The selected ETFs are then compared with their underlying indices. All analyzed funds are equity funds, so to determine the coefficient of determination and subsequent indicators, the underlying index specified in the fund's articles of association is chosen, or the best alternative is chosen according to the availability of data. Funds from the USA, with a coefficient of determination of 0.94, replicate their benchmarks best. From this point of view, it can be assumed that the deviation from the underlying index is very small. ETFs from Emerging markets also replicate their underlying assets relatively well. ETFs from Europe and the Asia-Pacific continent show the same coefficient of determination, with a value of 0.76. The last of the indicators analyzed in Table 1 is the beta coefficient. A beta coefficient higher than 1 means that the fund is able to outperform the market; these are cyclical funds. A value below 1 indicates anticyclical funds that do not achieve comparable or higher performance in a bull market and are unable to outperform the market; on the other hand, these funds are suitable in a bear market, when they limit losses compared to the market as a whole. Values above 1 are achieved by ETFs from Asia-Pacific and Emerging markets and, essentially, the USA. Funds from Europe, with a beta coefficient of 0.95, lag behind the market, which can be attributed to their relatively poor replication of benchmarks.
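The PMFG construction itself enforces a planarity constraint at every edge insertion, but its starting point, the Pearson correlation matrix converted to a distance matrix and filtered into an MST, can be sketched as follows. The return data, tickers, and random seed below are hypothetical placeholders, not the paper's dataset:

```python
import numpy as np
import pandas as pd
import networkx as nx

# Hypothetical monthly returns, one column per ETF ticker (60 months x 40 ETFs).
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(size=(60, 40)),
                       columns=[f"ETF{i:02d}" for i in range(40)])

corr = returns.corr(method="pearson")   # pairwise Pearson correlations
dist = np.sqrt(2.0 * (1.0 - corr))      # Mantegna's correlation distance

G = nx.from_pandas_adjacency(dist)      # complete weighted graph on 40 ETFs
G.remove_edges_from(nx.selfloop_edges(G))  # drop any self-loops, defensively
mst = nx.minimum_spanning_tree(G)       # the MST that the PMFG then extends
print(mst.number_of_edges())            # n - 1 = 39 edges
```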
Calculation of Input Variables.
The indicators selected as input variables for the interval type-2 fuzzy model are explained in Table 2 and scored in Table 3. Based on the analyzed ETFs, indicators of return, risk, and performance are calculated. Subsequently, the results are summarized by individual continent, and an average score is determined for each indicator separately. The higher the point rating, the higher the ETF's return, risk, or performance.
From Table 3 it is clear that the highest annual return is achieved by ETFs from the USA, as already evident from Table 1. It is also evident that ETFs from the USA show the highest return above the risk-free rate and have the most telling value in the form of the information ratio. In contrast, ETFs from Europe show the highest return above the benchmark. In terms of overall profitability indicators, ETFs from Emerging markets are the worst off. In terms of risk, ETFs from the US clearly dominate with the lowest score, and thus the lowest risk, across the standard deviation, specific risk, and systematic risk. A similarly low risk can be seen in ETFs from Europe. It can therefore be concluded that especially American and European ETFs have an almost perfectly diversified portfolio. ETFs from Emerging markets show the greatest systematic risk, while the greatest specific risk and standard deviation can be seen in ETFs from Asia-Pacific. Specific risk can be diversified away with a suitable composition of investment instruments; the reason may therefore be an inadequate number of constituents included in the portfolio, but also poor management by the asset managers of the Asia-Pacific ETFs. The tracking error indicator is derived from the return above the benchmark and expresses the volatility of the differences between the performance of the fund and the benchmark by means of a standard deviation. It is desirable that the value of this indicator be as small as possible for passively managed ETFs. The best results by far were achieved by ETFs from the USA, which replicate the underlying benchmarks almost perfectly; funds from the Asia-Pacific continent are worse off. Table 3 also shows the ratios that represent the funds' performance. The Treynor ratio expresses the reward for volatility, so a higher score is desirable, meaning higher fund performance. All analyzed continents show very similar scores. However, the US is the worst off, which may be surprising given the findings so far. The reason can be seen in the fact that the shortcoming of the Treynor ratio is that it completely ignores unique risk, as it presupposes a perfectly diversified portfolio. Jensen's alpha expresses the added value of the fund manager in achieving a higher return than the market return, taking into account the sensitivity of the fund to the movement of the entire market, represented by the beta coefficient. If alpha is positive, it indicates the ability of the fund manager to beat the market and deal better with systematic risk; with a negative alpha, active portfolio management fails. All analyzed funds managed to outperform the market represented by the reference index, and ETFs from the Asia-Pacific market achieved the highest score. The last indicator is the Appraisal ratio, which evaluates the manager's quality of equity selection for the fund's portfolio and focuses on the nondiversified part of the portfolio.
Creating the IT2FLS Model
In particular, the stock market works with vague concepts, so it is appropriate to use fuzzy logic, which is able to capture such data, as decision support. The MATLAB software toolbox is used to create the type-2 fuzzy model. The fuzzy model for examining equity ETFs consists of three input variables (return, risk, and performance), one block of rules, and one output variable determining whether or not it is appropriate to invest in the ETF. This model is shown in Figure 3, which also shows that each of the three input indicators consists of further indicators, described in Section 3. The aim is to create a suitable, clear, and accurate model based on IT2FLS, which will serve as support for investors in deciding whether or not to invest in ETFs according to the input parameters. A fuzzy inference system of the Mamdani type is chosen for the creation of the model, because it works better and more intuitively with unstructured or poorly structured data inputs than the Sugeno type. In addition, it is able to imitate human thinking and comprehensively describe the system using natural language. This type of output is sufficient to interpret stock ETF analysis. Each input variable is represented by a Gaussian membership function (MF), which consists of fuzzy sets containing a total of five linguistic fuzzy values or attributes: VL: very low, L: low, M: medium, H: high, and VH: very high. Input 1 represents the overall return of the ETF, input 2 represents the overall riskiness of the ETF, and input 3 indicates the overall performance of the ETFs examined in the four regions, namely, Europe, the USA, Asia-Pacific, and Emerging markets. The output variable is also represented through five membership-function attributes based on the point rating, namely, S: sell (0 points), RS: rather sell, H: hold, RB: rather buy, and B: buy (100 points). Membership functions are in the range [0, 1] and are used to create fuzzy models with different degrees of uncertainty. Table 4 shows examples of input variables and output variables by means of numerical values as well as linguistic expressions corresponding to the attributes of the MFs.
As an example, the first ETF can be described: in terms of input 1, it achieves a score of 35.85 points, i.e., the total return for this ETF is medium. Input 2 expresses the overall riskiness of the particular ETF with a score of 24.35, which falls into the low-risk fuzzy set. Input 3 evaluates the overall performance, and the ETF's total score of 33.11 corresponds to a high fuzzy value. The recommendation for this ETF is a score of 80, which means rather buy. A similar procedure can be applied to all other analyzed ETFs.
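A minimal sketch of this fuzzification step is shown below, assuming the interval is produced by symmetrically perturbing a type-1 Gaussian. The paper relies on the MATLAB toolbox and does not state exactly how the uncertainty percentage enters the MF, so the centers, spread, and blurring scheme here are illustrative assumptions:

```python
import numpy as np

def it2_gaussian_mf(x, center, sigma, uncertainty=0.10):
    """Interval type-2 Gaussian membership function.

    Returns the lower and upper membership grades of input x. The
    `uncertainty` parameter (e.g., 0.10 for 10%) widens the gap between
    the two type-1 Gaussians that bound the footprint of uncertainty.
    """
    upper = np.exp(-0.5 * ((x - center) / (sigma * (1 + uncertainty))) ** 2)
    lower = (1 - uncertainty) * np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return lower, upper

# Five linguistic values on the 0-100 point scale used by the model:
centers = {"VL": 0, "L": 25, "M": 50, "H": 75, "VH": 100}
sigma = 12.5  # hypothetical spread; the paper does not report exact widths

# Fuzzify the first example ETF's overall-return score of 35.85 points:
for label, c in centers.items():
    lo, up = it2_gaussian_mf(35.85, c, sigma, uncertainty=0.10)
    print(f"{label}: [{lo:.3f}, {up:.3f}]")
```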
Results and Evaluations
This study focuses on IT2FLS, which consists of dual membership functions containing upper and lower MFs with different degrees of uncertainty (here 10%, 20%, 30%, and 40%); other models with higher uncertainty were not considered relevant by the authors of the paper. Table 2 defines the indicators underlying the three input variables:

Table 2: Definitions of the input indicators.

Return:
- Return above benchmark: Return achieved over the return of the underlying index or basket of assets.
- Return above risk-free rate: A risk premium required by an investor when investing in an asset with a higher risk than government bonds.

Risk:
- Standard deviation: The quadratic average of the deviations of the fund's portfolio returns from the arithmetic average, i.e., the square root of the variance.
- Systematic risk: Results from the overall economic situation and individual macroeconomic variables; it is undiversifiable and affects all economic entities.
- Specific risk: A risk unique to each asset that can be eliminated by appropriate portfolio diversification.
- Tracking error: Measures the variation between fund portfolio performance and benchmark performance.

Performance:
- Information ratio: Compares the fund's performance with the market's performance, taking risk into account.
- Treynor ratio: Represents a reward for volatility; it assumes that the fund eliminates unique risk through appropriate portfolio diversification and counts only systematic risk.
- Jensen alpha: Measures the fund manager's ability to generate a return above that of the benchmark and to deal with systematic market risk.
- Appraisal ratio: Expresses the additional return, adjusted for systematic risk, per unit of individual risk taken.

Subsequently, to evaluate the created models, it is necessary to determine the knowledge base, or block of rules. The knowledge base, representing rules in the form "if-then", expresses expert knowledge about the relationship between input variables and output variables. A total of 125 rules are created, on the basis of which the created models are evaluated. The block of rules is determined by experts and the authors of the article. The rules represent a knowledge base that describes the behavior of the entire fuzzy system; for this reason, it is necessary to describe the whole issue using a sufficient number of rules. These rules were generated by experts in the field. The result is the assignment of a verbal description to the output variables based on the knowledge base; in other words, the input data are converted to output data using these rules. The defuzzification part of the model produces the final assessment, which serves as decision support for investors on whether or not to invest in the ETF. The specific defuzzification values from all 5 models are given in Table 5; they correspond to the input values given in Table 4. In particular, an ETF with a total return of 35.85 points, a total risk of 24.35 points, and a total performance of 33.11 points should have a score of 80 points with a rather buy recommendation. Model 1, i.e., the classic T1FLS, sets the total value of the result at 70 points. Models 2 and 3, i.e., IT2FLS with 10% and 20% uncertainty contained in the MF, give a result of 75 points. Models 4 and 5, based on the knowledge base and the uncertainty contained in the MF, give a final score of 74 points. Thus, specifically for this fund, the recommendation from all models is rather buy, while the most accurate results were achieved by the models with 10% and 20% uncertainty. Other examples of ETFs entering the models can be analyzed similarly.
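With three inputs and five linguistic values each, a complete rule base has 5^3 = 125 antecedent combinations, which matches the number of rules reported above. The sketch below enumerates such a rule base; the consequent function is a hypothetical stand-in, since the actual expert rule table is not reproduced in the paper:

```python
from itertools import product

levels = ["VL", "L", "M", "H", "VH"]   # five linguistic values per input
outputs = ["S", "RS", "H", "RB", "B"]  # sell ... buy

def consequent(ret, risk, perf):
    # Hypothetical stand-in for the experts' rule table: reward high return
    # and performance, penalize high risk, then map onto the 5 output labels.
    score = levels.index(ret) - levels.index(risk) + levels.index(perf)  # -4..8
    return outputs[min(max((score + 4) // 2, 0), len(outputs) - 1)]

rules = {(r, k, p): consequent(r, k, p) for r, k, p in product(levels, repeat=3)}
print(len(rules))              # -> 125, the size of the knowledge base
print(rules[("M", "L", "H")])  # antecedents of the first example ETF
```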
From Table 5 it can further be noted that the differences in the point evaluations of the results are small across all examined models. Especially for the IT2FLS models, the results differ by only 1 to 3 points. To evaluate the performance of the analyzed models, the metrics listed in Section 2.3 are used. The results of the evaluation and comparison of the models are given in Tables 6-10. The tables also show the MFs with different levels of uncertainty, illustrating how the distance between the individual functions gradually increases as uncertainty is added. Table 6 shows the evaluation of the error rate and performance of the T1FLS model. The RMSE indicator, calculated using equation (12), is used to compare the quality of the models; a lower RMSE value indicates a better model. Model 1 shows a value of 15.43 for this indicator. Compared to the other models, T1FLS shows the worst result; in other words, the IT2FLS models containing uncertainty in their membership functions provide better results than the classic fuzzy model. For model 1, the MAE value, calculated based on equation (14), is 12.1. In terms of this indicator, model 1 achieves the highest error rate and deviates the most from reality compared to all other models. The last indicator used to compare the created models is MAPE, calculated on the basis of equation (13). The MAPE indicator is a dimensionless characteristic by which different models can be compared. Even with this indicator, the T1FLS model achieves the worst result, with a value of 13.95, compared to the other models. Table 7 evaluates the indicators described above for the IT2FLS model with a degree of uncertainty of 10%. The figures captured in this table already show the difference between the upper and lower membership functions; this space is filled with the additional uncertainty arising from uncertain data coming from the stock market. It is clear that model 2 shows the lowest values of all examined indicators. Specifically, the RMSE for this model is 12.15, the best when compared to the other IT2FLS models. The MAE and MAPE error indicators are also the lowest, so the model with 10% uncertainty shows the smallest deviation from the original result and is best able to serve as decision support for investors regarding investments in ETFs. Table 8 evaluates the performance of model 3, with 20% uncertainty included in the MF. Although this model is worse than model 2, it still provides much better results than T1FLS, as well as than the other models with a higher degree of uncertainty in the MF. In addition, the difference in performance between the best model (model 2) and model 3 is very small, even negligible, in our case: the RMSE is 12.32, the MAE is 9.29, and the MAPE is 10.88. Looking back at Table 7, there is a minimal difference in performance, and in any case, using this model would not significantly distort the final assessment of whether or not to invest in ETFs.
The evaluation of the penultimate model is given in Table 9, which assesses a model containing 30% uncertainty in the MF. The higher degree of uncertainty is also evident from the MFs shown in the figure, which are more distant from each other than in the models described earlier. In this model, as in model 3, there is a deterioration in performance according to the error indicators MAE (9.47) and MAPE (9.47) and the performance indicator RMSE (11.02). The last model examined is model 5, containing the highest degree of uncertainty, i.e., the most distant fuzzy MFs. It is also the model with the worst performance compared to the other IT2FLS models; however, even this model still achieves much better results than the classic T1FLS model. Type-2 fuzzy logic features three-dimensional membership functions. These dual membership functions are able to include additional uncertainty resulting from insufficient information and are used especially when it is difficult to determine the exact shape and location of the membership functions. At the same time, the use of type-2 fuzzy logic is severely limited, mainly by the increasing computational complexity associated with implementing these functions. Figure 4 captures the surfaces of the three input variables, overall return, overall risk, and overall performance of the analyzed ETFs, in relation to the overall model outcome. It is clear from the figure that the higher the fund's performance score and the lower the fund's risk score (Figure 4(a)), the more it is recommended to invest in the ETF. Conversely, the higher the risk rating and the lower the performance score (Figure 4(b)), the more it is recommended, according to the results of the IT2FLS, not to invest in the ETF or to sell ETF shares. The last combination concerns performance and return (Figure 4(c)). In the case of mean values of these variables, the investor is advised to hold ETF shares or refrain from any action, whether sale or purchase.
Conclusion and Future Research
The presented paper applies a sophisticated method that is rarely used in the financial field and points out to the general investor public the possibility of using it to successfully analyze the stock market based on intuitive behavior. We present specific advanced methods of fuzzy logic for decision-making in successful investing. Conventional methods for decision support in various fields require accurate and unambiguous numerical evaluation. However, an accurate numerical assessment may not fully reflect the real preferences of decision-makers. People often rely on intuitive judgments based on individual experience and knowledge. A suitable alternative is therefore an evaluation method that is able to process verbal descriptions and expressions.
This allows decision-makers to express themselves subjectively on the basis of their own judgments, and a fuzzy logic tool is well suited for this purpose.
In this research, a new fuzzy time series model is used to predict stock market prices. The proposed model is based on the type-2 fuzzy logic approach and is verified using experimental data sets originating from stock markets on different continents. Specifically, this study focuses on Exchange-Traded Fund (ETF) shares. These investment instruments, by their nature, provide investors with better performance than traditional mutual funds. In addition, the authors believe that the use of ETFs provides more realistic results for investors: because the majority of the contributions examined focus exclusively on stock indices, in which it is not easy to invest directly, ETFs, which try to replicate the underlying benchmarks as faithfully as possible, are an alternative to investing in stock indices. The aim of the paper was to create a model based on type-2 fuzzy logic, which has not yet been sufficiently researched in the literature; moreover, according to the authors' findings, no such model has previously been created in the context of ETFs. Based on the ETF time series examined over a five-year period, indicators were compiled and summarized into three input variables entering the model: overall return, overall risk, and overall performance. Based on expert judgment, a knowledge base, or set of rules, was determined, on the basis of which the overall model is compiled. The result of the model is a recommendation for potential investors on whether or not to invest in ETF shares based on the set parameters. The created type-2 fuzzy logic models are compared with the classic type-1 fuzzy logic model. Based on the evaluation and comparison of different degrees of uncertainty in the fuzzy sets, it can be stated that the analysis of the stock market represented by ETFs is best served by the MF with 10% uncertainty. Based on the results of the comparison, it can be said that type-2 fuzzy logic with dual membership functions is better able to describe data from financial time series and provides more accurate outputs. However, the performance of type-2 fuzzy logic models decreases as increasing uncertainty is included in the fuzzy sets, as evidenced by a comparison of the MAPE, MSE, and RMSE indicators, which continue to increase in the models with 20%, 30%, and 40% levels of uncertainty. The increase in these indicators is nevertheless not significant and in no way affects the overall decision for investors on whether or not to invest in ETFs. The results reflect the capability and effectiveness of the approach proposed in this paper.
However, it is necessary to point out the weaknesses and limitations of our research. As described in previous research, when applying fuzzy methods, fuzzy rules are established and defined through human judgment and may involve a degree of subjectivity. This process of setting fuzzy rules and defining membership functions can also be time consuming. For this reason, other techniques and methods that facilitate the process of setting fuzzy rules are being promoted; for example, machine learning techniques (e.g., decision trees) or combinations of fuzzy logic and neural networks (e.g., ANFIS) are used in the literature for this purpose, as they are able to generate a set of fuzzy rules automatically. In a model with a larger number of variables, the number of fuzzy rules that must be defined logically increases, thus increasing the computational complexity and the time required for computation. Inherent limitations in the selection of the database concern in particular the input data. Another limitation concerns fuzzy logic itself, as it is an approach that is not able to learn and has no memory. In addition, the results of the fuzzy model can be skewed by the choice of the shapes and numbers of membership functions.
For further research, it would be appropriate to examine different levels of uncertainty in the input parameters themselves and monitor the performance of such a modified model. It would also be appropriate to examine different types of fuzzy sets, not just the Gaussian membership function demonstrated in this study, and to monitor the validity and accuracy of the different types of fuzzy sets that could improve the model. Furthermore, it would be appropriate to apply the approach to a larger dataset using, for example, individual stocks, including those of continental companies, as most studies still concern Anglo-Saxon companies. Last but not least, it would be appropriate to further develop the model and integrate it, for example, with neural networks.
Data Availability. The authors provide the relevant calculation data used to support the findings of this study in the Supplementary Information files.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 11,039 | 2021-01-01T00:00:00.000 | [
"Business",
"Computer Science",
"Mathematics"
] |
Insider Trading and Institutional Holdings in Mergers and Acquisitions
We investigate three issues about the impact of insider trades and institutional holdings on mergers and acquisitions (M&As). First, we test how insider trades affect the trading behavior of institutional investors in M&As. Second, we test whose trading behavior, insiders' or institutional investors', has greater explanatory power for the performance of M&A firms after takeover announcements. Third, we analyze the industry-wide spillover effects of insider trades and institutional holdings. Empirically, we find that insiders and institutional investors of M&A firms may utilize similar information in their transactions because insider trades induce similar trading behavior by institutional investors. In addition, insider trades, relative to institutional holdings, have greater explanatory power for M&A firms' long-term performance. Finally, compared with insider trades, institutional holdings have a more significant spillover effect in the industry of M&A firms.
Introduction
Information asymmetry between managers and investors is a fundamental issue for investors and market observers. Some investors have an information advantage relative to others, and normally they take advantage of these information sources to benefit themselves [1][2][3][4][5][6][7][8]. Even though the existing evidence supports weak-form or semi-strong-form market efficiency, it is not uncommon to find that some investors achieve better investment performance than others due to an information advantage. For example, many investors follow institutional holdings and insider trading activity to gain valuable insights; insiders and institutional investors are two types of investors that may have information advantages over other outside or retail investors [9][10][11][12]. In general, these two parties may share the same information sources, and researchers utilize their trading behavior to forecast a firm's market performance after seasoned equity offerings (SEOs) [13][14][15][16]. We are interested in the behavior of informed traders in M&As.
In this study, we follow the previous literature by using M&As to analyze the trading behavior of insiders and institutional investors [17][18][19][20]. Several related investigations have been carried out [21][22][23][24][25][26]. In addition, we extend our research to the spillover effect of these two types of investors within industries.
There are three research questions in this study. First, we test how insider trades affect the trading behavior of institutional investors in the M&As. Some researchers, such as [13], find that insider and institutional trading influences the firms' information environment, but how the asset prices change depends on each group's relative information advantage. In addition, Luo [27] finds that managers of merging companies appear to extract information from the market reaction of institutional investors and later consider it in closing the deal. The author concludes that firms held by short-term institutional investors have a weaker bargaining position in acquisitions. Weaker monitoring from short-term institutional investors could allow managers to accept value-reducing acquisitions.
In contrast, Griffin [18] cannot find supportive evidence that institutional investors trade on information from investment bank connections through takeover advising. Therefore, there is a research gap concerning the information flow between insiders and institutional investors. Our research fills this gap and sheds light on the issue by utilizing M&As and testing how insider trades affect the institutional holdings of M&A firms.
Second, due to the different characteristics of information sources, we test whose trading behavior, either insiders or institutional investors, has greater explanatory power for long-term performance after M&As. Allen [28] finds that the trades of insiders are significantly related to post-spin-off stock returns, takeovers, and delistings of spin-off firms. This implies that the trading behaviors of insiders and institutional investors have different explanatory power for the takeover market performance.
We measure the insider trades and institutional holdings before and after M&As and analyze the impact of the trading behavior of both groups on long-term performance, which is measured by buy-and-hold abnormal returns. This analysis contributes to the related literature by improving our understanding of the predictive power of informed traders for a firm's market performance after M&As.
Third, we analyze the industry-wide spillover effect of insider trades and institutional holdings. In the M&As, the insiders or institutional investors may signal some private information through their trading behaviors. How do insiders and institutional investors of other firms in the same industry react to these signals? Based on the existing evidence, we extend our analysis to the spillover effect of insider trades and institutional holdings on the institutional holdings of matching firms in the M&As. The analysis helps to understand how insider trades and institutional holdings affect the reaction of institutional investors of firms in the same industry.
The empirical results show that insider transactions have a significant impact on institutional holdings. First, we find that institutional investors significantly decrease their holdings of acquiring firms in response to insider transactions when the net sell of insider transactions is negative. This result implies that insiders and institutional investors apply different sources of information and hold different points of view when insider net selling is negative. Second, we find that insider transactions have greater explanatory power than institutional holdings for the long-term market performance of acquiring firms after M&A announcements. In sum, we conclude that institutional investors draw on different information sources than insiders regarding M&As, and insider transactions have more explanatory power than institutional holdings for long-term market performance. Finally, we find that the insider transactions of acquiring firms have an insignificant impact on the adjustment of the institutional holdings of matching firms, whereas the institutional holdings of acquiring firms have a significant impact on the adjustment of the institutional holdings of matching firms. This result implies that there exist spillover effects of institutional holdings on the informed traders of matching firms.
The main contribution of this research is to comprehensively analyze the reactions of insiders and institutional investors to M&A events. The remainder of this paper is organized as follows. Section 2 briefly summarizes the relevant literature and develops our research hypotheses. Sections 3 and 4 describe the methodology and data collection. Section 5 reports the results of the empirical analysis, while Section 6 concludes.
Literature Review and Research Hypotheses
Many studies have examined the trading behavior of insiders and institutional investors, and both groups have information advantages relative to other outside and retail investors. However, there is limited research on the interaction between insiders and institutional investors. Frankel [9] examines how financial statement informativeness, analyst following, and news relate to the information asymmetry between insiders and outsiders. They find that increased analyst following is associated with reduced profitability of insider trades and reduced insider purchases. Luo [27] finds that the market reaction to an M&A announcement predicts the likelihood of the consummation of the proposed deal, suggesting that "insiders learn from outsiders." Based on this result, we expect that informed traders, including insiders and institutional investors, adjust their stock holdings in M&As once they observe the other group's moves.
Piotroski [13] tests how much firm-specific, market-level, and industry-level information is impounded into the firm's stock price. In their research, they find that different informed participants change the information environment and the stock price reflects the different information conveyed by various participants. Griffin [18] also employs broker-level trading data to systematically examine possible cases of connected trading. They show that neither brokerage house clients nor the brokerage houses themselves trade on inside information through the brokerage house associated with the information of M&As. They suggest that institutional investors are reluctant to use inside information in traceable manners. From their results, we are interested in testing how the different informed investors change their holdings after observing the trading of other informed investors.
In contrast, Jegadeesh [17] examines the pattern and profitability of institutional trades around takeover announcements. The authors find that the trades of funds as a group, either before or after takeover announcements, are not profitable. However, funds whose main broker is also a target advisor are net buyers of target shares before announcements, and their pre-announcement trades are significantly profitable. Therefore, leakage of inside information from brokerages that advise the target is a significant source of funds' informational advantage. Consequently, we expect that institutional investors may utilize information from insiders by observing insider trading behavior. We test how the insider trades affect institutional holdings after M&As. The first research hypothesis is as follows.
Hypothesis 1: Insider trading should have a substantial impact on institutional holdings after M&As. Therefore, the trading behavior of insiders and institutional investors should be very similar around M&As. The existing literature shows that insiders and institutional investors play an important role in the firm's strategic decision. For example, Wahal [14] finds a positive relation between industry-adjusted expenditures for property, plant, and equipment (PP&E) and research and development (R&D) and the fraction of shares owned by institutional investors. In addition, the informed traders may also utilize their information advantage to benefit themselves in their trading. Gaspar [29] investigates how the investment horizon of a firm's institutional shareholders impacts the market for corporate control. In their study, they also show that both target firms and acquiring firms with passive institutional investors have worse merging benefits relative to those with active institutional investors. Andriosopoulos [30] investigates the impact of institutional ownership on UK M&As. They find that institutional investors increase the likelihood of an M&A to be a large, cross-border deal, opting for full control.
Moreover, institutional ownership concentration and foreign institutional ownership increase the likelihood of cross-border M&As. In addition, they assess the influence of institutional shareholders' investment horizon and find that while the investment horizon has a negative influence on encouraging cross-border M&As, the presence of long-term investors encourages larger M&As. Finally, even after controlling for the 2007-08 financial crisis, the market reacts negatively to the announcement of cross-border M&As.
As for insiders, King [31] shows that both British and US evidence presented in the article confirms that insiders achieve abnormal gains and, surprisingly, that these gains persist long after the disclosure of insider trading. Damodaran [32] shows that there is substantial evidence of insider trading around corporate announcements and that this insider trading is motivated by private information. They find that insiders buy (sell) after they receive favorable (unfavorable) appraisal news, especially in the case of negative appraisals. Furthermore, positive (negative) appraisals and net insider buying (selling) elicit significant positive (negative) abnormal returns during the appraisal period. Aboody [5] finds that insider gains in R&D-intensive firms are substantially larger than insider gains in firms without R&D. Insiders also take advantage of information on planned changes in R&D budgets.
Agrawal [20] examines open market stock trades by registered insiders in about 3,700 targets of takeovers announced during 1988-2006 and in a control sample of non-targets, both during an 'informed' period and a control period. Fich [33] notes that studies of institutional monitoring typically focus on the fraction of the firm held by institutions; they instead focus on the fraction of the institution's portfolio represented by the firm. In the context of acquisitions, they hypothesize that institutional monitoring will be greatest when the target firm represents a significant allocation of funds in the institution's portfolio.
On the other hand, Ang [34] finds that shareholders of 1,283 (or 17%) target firms responded to the offer with negative market returns. These investors were disappointed at the offer, despite the price premium. In addition, Augustin [35] documents pervasive informed trading activity in equity options before the M&A announcements.
About 25% of takeovers have positive abnormal volumes. These volume patterns indicate that informed traders are likely using bullish directional strategies for the target and volatility strategies for the acquirer. Shams [36] investigates the patterns of directors' trades and returns around takeover announcements. They find that the pre-announcement net value (the difference between buy value and sell value) of directors' trading is positively related to acquirers' announcement-period abnormal returns. Therefore, we expect that both insider trading and the change in institutional holdings have certain explanatory power for the firm's performance. The unanswered question is which group of investors has greater explanatory power than the other. This is our second research question, and we construct the second research hypothesis based on it as follows.
Hypothesis 2: The trades of insiders and institutional investors have significant explanatory power for the firm's long-term performance after M&As.
Moreover, Shahrur [37] uses a sample of 816 diversifying takeovers from 1978 to 2003 to examine whether takeover announcements release negative information about the prospects of the acquirer's main industry. They find that rivals that are most similar to the acquirer (homogeneous rivals) experience significant negative cumulative abnormal returns (CAR) around takeover announcements. In contrast, Erwin [38] examines the extent to which announcements of open market share repurchase programs affect the valuation of competing firms in the same industry. On average, although firms announcing open market share repurchase programs experience a significantly positive stock price reaction at the announcement, portfolios of rival firms in the same industry experience a significant and contemporaneous negative stock price reaction. In other words, they show that open market repurchase announcements have an adverse effect on rivals in the same industry with the event firms.
Our research contributes to the related literature by analyzing the spillover effect of insider trading and changes in institutional holdings between M&A firms and non-M&A firms. To the best of our knowledge, this is the first paper to comprehensively analyze the spillover effect in M&As. We construct the third research hypothesis as follows: Hypothesis 3: There exists a spillover effect of the insider trading and institutional holdings of M&A firms on non-M&A firms in the same industry.
Methodology
In this study, we need to measure the characteristics of institutional holding, insider trading, and long-term market performance in empirical tests. Hence, we summarize these measures as follows.
Measuring Institutional Holding
We use the number of shares held by institutions divided by the number of shares outstanding to calculate the percentage of institutional holdings for a sample firm.
Measuring Insider Trading
Previous studies measure insider trading in various ways. Gombola [39,40] observes the monthly number of insider transactions, number of shares, and dollar value around SEOs. Rozeff [41] employs insider trading deflated by trading volume (the number of shares traded by insiders over the number of shares traded in the market) to investigate the direction of insider trades along the value/glamour spectrum. Lakonishok [42] uses the ratio of net insider trading (the number of insider purchases minus the number of insider sales) to total insider transactions over the past few months to examine the market reaction to insider trades. Due to the availability of insider trading data, we use the number of net selling shares (number of shares sold minus number of shares bought) over the number of shares outstanding to measure the behavior of insider trading:

$$NSH = \frac{NS - NP}{\text{number of shares outstanding}},$$

where NSH is the net selling measure, NS is the number of shares sold by all insiders, and NP is the number of shares purchased by all insiders. In order to capture asymmetric reactions to different insider trading, we decompose net selling into two components represented by two variables. One is PNSH, equal to the positive net selling when net selling is greater than zero, and zero otherwise. The other is NNSH, equal to the negative net selling when net selling is less than zero, and zero otherwise.
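A minimal sketch of this decomposition follows; the column names and toy numbers are hypothetical:

```python
import pandas as pd

def insider_net_sell(df):
    """Compute NSH = (NS - NP) / shares outstanding and split it into
    its positive (PNSH) and negative (NNSH) components, as in the text."""
    nsh = (df["shares_sold"] - df["shares_bought"]) / df["shares_outstanding"]
    return pd.DataFrame({
        "NSH": nsh,
        "PNSH": nsh.clip(lower=0),   # positive net selling, zero otherwise
        "NNSH": nsh.clip(upper=0),   # negative net selling, zero otherwise
    })

# Hypothetical monthly insider-trade totals for three firms:
trades = pd.DataFrame({"shares_sold": [1200, 0, 300],
                       "shares_bought": [200, 900, 300],
                       "shares_outstanding": [1e6, 1e6, 1e6]})
print(insider_net_sell(trades))
```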
Measuring Long-Term Market Performance
This study calculates the buy-and-hold abnormal return of a stock i as

$$BHAR_i(0, T) = \prod_{t=1}^{T}\left(1 + R_{i,t}\right) - \prod_{t=1}^{T}\left(1 + R_{bench,t}\right),$$

where $R_{i,t}$ and $R_{bench,t}$, respectively, denote firm i's return and the benchmark return on day t. We calculate BHAR starting from the announcement date of these events and set a month to 22 trading days. If the firm is delisted, returns are compounded until the delisting date.
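A minimal sketch of the BHAR computation under these conventions; the return series below are simulated placeholders, not CRSP data:

```python
import numpy as np

def bhar(firm_returns, benchmark_returns):
    """Buy-and-hold abnormal return: compound both series from the
    announcement date and take the difference, as in the formula above."""
    firm = np.prod(1.0 + np.asarray(firm_returns)) - 1.0
    bench = np.prod(1.0 + np.asarray(benchmark_returns)) - 1.0
    return firm - bench

# Hypothetical daily returns over one 22-trading-day "month":
rng = np.random.default_rng(1)
r_firm = rng.normal(0.0005, 0.02, size=22)
r_mkt = rng.normal(0.0004, 0.01, size=22)   # stand-in for a CRSP VW index
print(bhar(r_firm, r_mkt))
```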
Finally, the Compustat database provides all the accounting data we need to capture firm characteristics. Following previous studies, we add firm characteristic variables to the regression analysis, including over-investment, the book-to-market (BM) ratio, firm size, and the debt ratio. We also control for year and industry fixed effects in our regression analysis. Finally, to alleviate the effect of outliers in the following analysis, we winsorize all independent variables at the 1% level.
Empirical Models
There are two parts to the empirical analysis in this study. The first is the basic summary statistics for insider trading and institutional holdings. We summarize insider trades and institutional holdings in different periods before and after M&As. In the univariate analysis, we expect to observe the basic statistics of these two measures and check for systematic patterns. Second, we perform a multivariate analysis by running regressions of the level of institutional holdings and the long-term market performance of M&A firms. In addition, we apply regression analysis to the spillover effect and check how non-M&A firms react to the trading of insiders and institutional investors of M&A firms.
To test the first hypothesis, we summarize the basic statistics for the change in institutional holdings with respect to different insider trading in M&As and check the significance of the change in institutional holdings. To check the robustness of our results, in the multivariate regression analysis, we regress institutional holdings on insider trades and control for all firm characteristics. The empirical model is as follows:

$$INSTH_i = \beta_0 + \beta_1 PNSH_i + \beta_2 NNSH_i + \beta_3 OVERINV_i + \beta_4 BM_i + \beta_5 SIZE_i + \beta_6 DR_i + \beta_7 RUNUP_i + \epsilon_i, \quad (3)$$

where the dependent variable INSTH is the institutional holding of a firm, PNSH denotes the positive insider net selling, NNSH is the negative insider net selling, OVERINV is the capital expenditure over the expected level based on the estimation model in [43], BM is the book-to-market ratio, SIZE is the natural log of the firm's market capitalization, DR is the ratio of long-term debt to total assets, and RUNUP is the buy-and-hold abnormal return in the three months before M&As. In addition, we control for industry and year dummies in the regression analysis. Next, we measure the long-term market performance by the three-year buy-and-hold abnormal return after M&As. We sort the BHAR based on different time periods and then summarize the statistics of insider trading and institutional holdings. We also perform a multivariate analysis of the long-term market reactions. The empirical model is

$$BHAR_i(0, t) = \gamma_0 + \gamma_1 \epsilon_{INSTH,i} + \gamma_2 PNSH_i + \gamma_3 NNSH_i + \text{controls} + u_i, \quad (4)$$

where BHAR(0, t) is the t-year buy-and-hold abnormal return of stock i and $\epsilon_{INSTH}$ is the residual of the institutional holding from the previous regression of institutional holdings. Based on the result of (3), there may be an endogeneity problem because insider trading may affect the change in institutional holdings. To alleviate the endogeneity problem, we utilize two-stage least squares in the regression of (4).
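As an illustration, equation (3) can be estimated with off-the-shelf regression tooling. The data frame below is simulated, the fixed effects are reduced to year dummies, and this is a sketch of the estimation setup rather than the authors' actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level dataset with the variables defined in the text.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "INSTH": rng.uniform(0, 1, n), "PNSH": rng.uniform(0, 0.01, n),
    "NNSH": -rng.uniform(0, 0.01, n), "OVERINV": rng.normal(0, 1, n),
    "BM": rng.uniform(0.2, 2, n), "SIZE": rng.normal(13, 2, n),
    "DR": rng.uniform(0, 0.6, n), "RUNUP": rng.normal(0, 0.2, n),
    "year": rng.integers(1990, 2011, n),
})

# Equation (3): institutional holdings on insider net sell plus controls,
# with year fixed effects (industry dummies omitted in this toy example).
m = smf.ols("INSTH ~ PNSH + NNSH + OVERINV + BM + SIZE + DR + RUNUP + C(year)",
            data=df).fit(cov_type="HC1")  # robust SEs, as in the paper's tables
print(m.params[["PNSH", "NNSH"]])
```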
Finally, we test the spillover effect of insider trading and the change in institutional holdings on the institutional holdings of matching firms in the same industry. To measure the spillover effect, we measure the insider trading and the change in institutional holdings of matching firms. In the analysis of summary statistics, we check the basic statistics of insider trading and the change in institutional holdings for matching firms. In addition, we perform a multivariate regression analysis, and the empirical models are as follows:

$$MINSTH_i = \delta_0 + \delta_1 PNSH_i + \delta_2 NNSH_i + \text{controls} + e_i, \quad (5)$$

$$MINSTH_i = \theta_0 + \theta_1 PNSH_i + \theta_2 NNSH_i + \theta_3 INSTH_i + \text{controls} + e_i, \quad (6)$$

where MINSTH is the institutional holding of the matching firm. If the spillover effect exists, then we expect the coefficients $\delta_1$ or $\delta_2$ in (5) and $\theta_1$, $\theta_2$, or $\theta_3$ in (6) to be significant.
Data and Sample Characteristics
All sample firms are collected from the Thomson Financial Securities Data Corporation (SDC) Domestic M&A Database, with a transaction value of at least US$50 million. M&A characteristics, including the announcement date and the company identity, are obtained from this database. We collect a sample covering the period from 1990 to 2010, trace returns up to 2013, and construct a control sample of non-M&A firms. The sample firm's CUSIP can be matched with the Center for Research in Securities Prices (CRSP) data. To be included in our sample, the observations must meet the following criteria: 1. The M&As must involve common stocks of firms (share codes 10 and 11) listed on the NYSE, AMEX, and NASDAQ. American depository receipts (ADRs), real estate investment trusts (REITs), closed-end mutual funds, and partnerships are eliminated from the sample. 2. As in other previous studies, we exclude M&As in the financial and regulated utility industries (SIC codes 6000-6999 and 4900-4999, respectively), since firms in these industries have to meet regulatory requirements in making strategic decisions; also, the accounting items of these two industries are distinct from those of other industries. 3. A firm included in the SEO events cannot be in the M&A sample within the three years before and after M&As, since the long-term performance may arise from SEOs rather than M&As.
We collect daily returns and number of shares outstanding of the sample firms and daily market indices (CRSP VW and EW) from the CRSP database. Annual accounting data of firm-specific variables are collected from the Compustat database. In addition, we collect monthly insider trading data from the Thomson CDA Investnet database, and quarterly institutional equity holdings from the Thomson CDA Spectrum database, which include the data from the 13F filings. We use the institutional codes in the CDA Spectrum database to identify the types of institutional investors. 1 The firm characteristics may have a certain pattern that drives firms to conduct equity financing. We summarize the firm characteristics in Table 1.
This table provides summary statistics of M&A firms from 1990 to 2010; the number of M&A firms is 10,203. The variables are defined as follows. SIZE is the market value of equity on the 11th day before the M&A announcement day. BM is the book-to-market ratio at the end of the month preceding the M&A. RUNUP is the buy-and-hold abnormal return in the three months before the M&A announcement. BHAR is the buy-and-hold abnormal return for three years after the M&A. OVERINV is the capital expenditure over the expected level based on the estimation model in [43]. DR is the ratio of long-term debt to total assets.
In general, significant operational advantages can be obtained when two firms are combined; in fact, the goal of most M&As is to improve company performance and shareholder value over the long term. Meanwhile, investors may expect the stock prices of M&A firms to rise dramatically in line with the stated corporate goals of the deals. In contrast, previous research such as Andre [44] suggests that M&A firms significantly underperform over the three-year post-event period. From Table 1, we find that the long-term market performance of these M&A firms is indeed poor, which is consistent with the existing empirical evidence. In addition, the investment level is above the expected level, which implies that these firms have aggressive investment strategies. Overall, M&A firms perform poorly, and they tend to over-invest. 1 Institutional investors with more than $100 million in equities must report their equity ownership to the SEC in quarterly 13F filings. The CDA Spectrum database classifies institutional investors into five types: banks (trust departments), insurance companies, investment companies (mutual funds and closed-end funds), independent investment advisors (principally pension fund advisors), and others (miscellaneous institutions such as endowment funds or public pension funds).
Empirical Results
We analyze the institutional holdings before and after M&As. The institutional holdings for the four quarters before and after M&As are summarized in Table 2. The median and mean of both measures of institutional holdings are calculated on a quarterly basis, which is the reporting frequency in the database. The effective date of each M&A is in Quarter 1, and the first quarter before the effective date is Quarter -1. The number of M&A firms is 8,811 in Quarter 1.
From the results in Table 2, we find that institutional investors increase their holdings substantially before and after M&As, which implies that institutional investors do change their holdings around M&As. Whether these changes are correlated with firms' operational performance is a key question about the information sources behind institutional investors' information advantage. We summarize the operational performance in Table 3.
We measure the firm's operational performance using EBIT/Sales and ROA. The effective date is in year 0, and the median and mean of both measures are calculated on an annual basis. We collect the data for three years before and after M&As.
From the results in Table 3, we find that operational performance shows no obvious improvement after M&As. The median of EBIT/Sales improves in the M&A year but returns to its original level in the first year after the M&A. These results imply that institutional investors may not rely on operational performance to adjust their holdings of these sample firms. Next, we check the change in insiders' holdings, which may be another information source for institutional investors. We summarize the changes in insider transactions in Table 4.
Table 4 reports the median and mean cumulative insider trading from month -6 to month t relative to the M&A effective date. The number of M&A observations is 7,289 in the period (-6, 1). All numbers are percentages of outstanding shares. Net sell is the difference between insider sales and insider purchases.
From the results in Table 4, we find that while insiders keep their purchases below normal levels, they increase their sales even more, thus increasing their net sales; that is, insiders gradually reduce their holdings before and after M&As, which implies that they may not expect better results for M&A firms after the deals. Based on this result, we suspect that insiders are pessimistic about the M&As. To analyze whether insider transactions have a significant impact on the adjustment of institutional holdings, we regress institutional holdings on the net sell of insider transactions and control for other firm characteristics. The results are summarized in Table 5. We suspect that there is an asymmetric impact of insider transactions on institutional holdings, and therefore we create the variables PNSH and NNSH from the insider net sell of M&A firms.
The dependent variable is the institutional holding of acquiring firms in M&As, and the independent variables are defined as follows. PNSH denotes the positive net sell of insider transactions, equal to the net sell when it is greater than zero and zero otherwise; NNSH denotes the negative net sell of insider transactions, equal to the net sell when it is less than zero and zero otherwise; net sell is the difference between insider sales and insider purchases of M&A firms. BM is the book-to-market ratio. SIZE is the natural log of the firm's market capitalization. DR is the ratio of long-term debt to total assets. RUNUP is the buy-and-hold abnormal return in the three months before M&As. OVERINV is the capital expenditure over the expected level based on the estimation model in [43]. The numbers in parentheses are robust p-values. ***, **, and * represent significance at the 1%, 5%, and 10% levels, respectively.
The results in Table 5 support our expectation that the negative net sell of insider transactions has a significant impact on the adjustment of institutional holdings, whereas the positive net sell does not. Among the M&As, institutional holdings decrease with the negative net sell of insider transactions. This result implies that insiders and institutional investors apply different sources of information and hold different points of view under negative insider net sell regarding M&As. Based on the empirical evidence, we expect that both insiders and institutional investors share similar information about these M&A firms; therefore, they show the same trading behavior after M&As. Next, we analyze the impact of institutional investors and insider transactions on a firm's long-term market performance. The regression result is summarized in Table 6. The dependent variable is the three-year buy-and-hold abnormal return. The independent variables are defined as follows. ε_INSTH is the residual of institutional holding from the regression analysis in Table 5. PNSH denotes the positive net sell of insider transactions when net sell is greater than zero, and zero otherwise; NNSH is the negative insider net sell when net sell is less than zero, and zero otherwise; net sell is the difference between insider sales and insider purchases of M&As. BM is the book-to-market ratio. SIZE is the natural log of the firm's market capitalization. DR is the ratio of long-term debt to total assets. RUNUP is the buy-and-hold abnormal return in the three months before M&As. OVERINV is the capital expenditure over the expected level based on the estimation model in [43]. The numbers in parentheses are robust p-values. ***, **, * represent significance at the 1%, 5%, and 10% levels, respectively.
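For readers who want the Table 6 specification written out, one plausible reconstruction from the variable definitions above is the following; this is our notation, inferred from the text, not necessarily the authors' exact model:

```latex
BHAR^{3y}_i = \beta_0 + \beta_1\,\varepsilon^{INSTH}_i + \beta_2\,PNSH_i + \beta_3\,NNSH_i
            + \beta_4\,BM_i + \beta_5\,SIZE_i + \beta_6\,DR_i
            + \beta_7\,RUNUP_i + \beta_8\,OVERINV_i + u_i
```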
From Table 6, we find that insider transactions, both positive and negative insider net sell, have a significant impact on a firm's long-term performance after M&As. In the long run, although insiders may adjust their asset portfolios through positive net sell of their holdings, long-term market performance is significantly positive. In addition, institutional holdings have a marginally significant negative impact on a firm's long-term performance in M&A firms. In sum, we conclude that institutional investors draw on different information sources than insiders, and that insider transactions have stronger explanatory power for long-term market performance than institutional holdings regarding M&As. Furthermore, we analyze the impact of insider transactions of M&As on the institutional holdings of the matching firms. The regression result is summarized in Table 7.
We suspect that there is an asymmetric impact of insider transactions of M&As on the institutional holding of matching firms. Therefore, we create the variables PNSH and NNSH from the insider net sell of M&As.
The dependent variable is the institutional holdings of matching firms. The independent variable PNSH denotes the positive net sell of insider transactions when net sell is greater than zero, and zero otherwise; NNSH is the negative insider net sell when net sell is less than zero, and zero otherwise; net sell is the difference between insider sales and insider purchases of M&As. The other variables of matching firms are defined as follows. BM is the book-to-market ratio. SIZE is the natural log of the firm's market capitalization. DR is the ratio of long-term debt to total assets. RUNUP is the buy-and-hold abnormal return in the three months before the M&As. OVERINV is the capital expenditure over the expected level based on the estimation model in [43]. The numbers in parentheses are robust p-values. ***, **, * represent significance at the 1%, 5%, and 10% levels, respectively. The results in Table 7 show that insider transactions of M&As do not have a significant impact on the adjustment of institutional holdings of matching firms. This result implies that institutional investors of matching firms and insiders of M&A firms hold different points of view; they may rely on different information, or the institutional investors of matching firms may hold poorer expectations and therefore refrain from investing in response to the M&As. Finally, we analyze the impact of the institutional holdings of M&As on the institutional holdings of matching firms. The regression result is summarized in Table 8.
The dependent variable is the institutional holdings of matching firms. The independent variable INSTH is the institutional holding of the acquiring firm, and ε_MINSTH is the residual of the institutional holding of matching firms from the regression analysis in Table 7. PNSH denotes the positive net sell of insider transactions when net sell is greater than zero, and zero otherwise; NNSH is the negative insider net sell when net sell is less than zero, and zero otherwise; net sell is the difference between insider sales and insider purchases of M&As. The other variables are defined as follows. BM is the book-to-market ratio. SIZE is the natural log of the firm's market capitalization. DR is the ratio of long-term debt to total assets. RUNUP is the buy-and-hold abnormal return in the three months before M&As. OVERINV is the capital expenditure over the expected level based on the estimation model in [43]. The numbers in parentheses are robust p-values. ***, **, * represent significance at the 1%, 5%, and 10% levels, respectively.
From Table 8, we find no evidence that insider transactions of M&As have a significant impact on the adjustment of institutional holdings of matching firms; instead, we find that the institutional holdings of M&A firms have a significant impact on the adjustment of institutional holdings of matching firms. Among the M&As, the institutional holdings of matching firms move in the same direction as the institutional holdings of M&A firms. This result implies that institutional investors of M&A firms have a stronger spillover effect than insiders in our sample. This is an important finding for the related literature.
Conclusions
The empirical results show that insider transactions have a significant impact on institutional holdings. First, we find that institutional investors significantly decrease their holdings of acquiring firms when insider transactions show a negative net sell. This result implies that insiders and institutional investors may utilize different sources of information and hold different points of view regarding future performance after M&As. Second, we find that insider transactions have greater explanatory power than institutional holdings for the long-term market performance of acquiring firms after M&A announcements. One reason is that insiders have an informational advantage relative to institutional investors. This result is consistent with existing evidence of violations of the strong-form efficient market hypothesis.
Finally, we find that the institutional holdings of acquiring firms have a significant impact on the adjustment of institutional holdings of non-M&A matching firms. This result implies that spillover effects of institutional holdings exist among the informed traders of non-M&A matching firms in the M&As. This result is consistent with existing evidence of herding behavior among institutional investors. These institutional investors may share similar information sources for a specific industry or set of firms, which results in similar trading behavior. Insiders, in contrast, hold private information that they may not share with insiders of other firms. Therefore, we do not find a spillover effect in insider transactions.
The main contribution of this research is to comprehensively analyze the reactions of insiders and institutional investors in M&As. In addition, through the analysis of the spillover effect of the trading behavior of informed traders, we show how institutional investors of non-M&A matching firms react to the signals conveyed by insiders and institutional investors of M&A firms. To the best of our knowledge, this is the first paper that addresses the spillover effect of informed traders in the financial market.
"Business",
"Economics"
] |
First record of Branchiura sowerbyi Beddard, 1892 (Oligochaeta: Tubificidae) in Azores
The present work reports the finding of an exotic and invasive annelid, Branchiura sowerbyi Beddard, 1892, in freshwaters of São Miguel Island – Azores archipelago (Atlantic Ocean). One specimen was found near the mouth of Ribeira Quente stream in the south of São Miguel on 7 May 2008. This study increases the number of freshwater Oligochaeta species occurring in the Azores from 8 to 9.
The aquatic oligochaete Branchiura sowerbyi was first described by Beddard (1892) from a tank in the Royal Botanical Society's garden in London, and it is one of the most widespread freshwater oligochaetes in Europe and North America. It is also known from South-East Asia, South Africa, South America, Mauritius Island, and Australia (Brinkhurst and Jamieson 1971). In Europe it can be found in 22 countries, including mainland Portugal (Giani 2004). Its earliest records were restricted to South-East Asia (Brinkhurst and Jamieson 1971) and to botanical gardens in Europe (Beddard 1892). This distribution pattern led Grabowski and Jablonska (2009) to conclude that B. sowerbyi is a species originating from the Sino-Indian region that has spread elsewhere due to human activity. Its introduction may be connected with the transport of plants from one part of the world to another or with the import of fish for fish farming (Paunovic et al. 2005). Owing to its fast dispersal, successful adaptation, and mass occurrence in some recipient areas, B. sowerbyi can be characterized as an invasive species. The presence of this species could disturb relations within the benthic community and, consequently, could influence the aquatic ecosystem food chain (Paunovic et al. 2005).
The dispersal of invasive species, a particularly important theme on islands, has been recognized as one of the major threats to local biodiversity (Silva et al. 2008). Aquatic biotopes, owing to their unique features, are among the ecosystems most exposed to this kind of disturbance. Moreover, B. sowerbyi lives with its head buried in the sediment while its tail waves actively in the water layer above the bottom; it is a conveyor-belt feeder that mixes sediments (Matisoff et al. 1999). It can potentially have a large impact on the recipient environment, since it can make burrows to a depth of 20 cm and, after a short period of time, move to a new location to build new burrows. B. sowerbyi is a thermophilous species with a great capacity for adaptation. It is typical of waters with current velocities under 0.5 m s-1 (Paunovic et al. 2005). In Serbia, soon after the first finding, dense populations of B. sowerbyi were observed in several artificial, slow-running channels. The dispersal of B. sowerbyi after initial introduction and population establishment has been rapid (Paunovic et al. 2005).
In the Azores archipelago the Oligochaeta have been poorly studied; with the exception of the works of Sciaccmitano (1964) and Brinkhurst (1969), no further studies have been published on this subject, resulting in a short list of only 8 previously known species, a figure reinforced by the generally poor species richness of these young, isolated, oceanic islands (Malhão et al. 2007; Gonçalves et al. 2008; Raposeiro et al. 2009).
This short communication, reporting the occurrence of B. sowerbyi in the Ribeira Quente stream (São Miguel), results from the ongoing research on freshwater ecosystems performed by the University of the Azores (e.g., Malhão et al. 2007; Gonçalves et al. 2008; Raposeiro et al. 2009). In spite of the intense sampling carried out in the Azores in recent years, this is the first time we were able to find this species.
A single specimen of B. sowerbyi was collected from the Ribeira Quente stream (37º44'894"N, 25º14'315"W) on 7 May 2008, in a sedimentation unit of the stream, with a hand net (500 μm mesh); it was preserved in 96% industrial alcohol and stored in the Departamento de Biologia, Universidade dos Açores, under the label number DB_FW_SMG_0001 (Figure 2A).
Description: Length 31 mm. Dorsal anterior chaetal bundles with 4 short hair chaetae (Figure 2B) and crotchets that vary from simple-pointed to bifid with short upper teeth. Posteriorly, hairs fewer and shorter, and non-hair chaetae with less replication of the upper teeth. Ventral bundles with 10-11 bifid chaetae with upper teeth shorter than the lower, even simple-pointed anteriorly. The presence of 41 long (longer than the body diameter) dorsal and ventral gill filaments on the posterior half of the body (Figure 2A) makes this species easy to distinguish from all other aquatic oligochaetes occurring in Europe (Brinkhurst and Jamieson 1971).
Based on species-level identifications only (i.e., excluding genus-level identifications), the finding of B. sowerbyi in the Azores increases the number of species occurring in the archipelago from 8 to 9 (Table 1), distributed across 4 families. In terms of free-living Oligochaeta, several areas of the archipelago have been poorly studied. Therefore, we estimate that Oligochaeta species richness in the area is higher, and more intensive investigation is needed. The aquatic oligochaete B. sowerbyi (Oligochaeta, Tubificidae) is cosmopolitan (Brinkhurst 1971) and lives in aquatic sediments nearly devoid of oxygen (Brinkhurst and Jamieson 1971; Naqvi 1973), associated with shallow, stagnant or slowly flowing waters. It is a thermophilous species. The reproductive cycle of this iteroparous worm was partly described by Casellato (1984) and later by Casellato et al. (1987).
These are the conditions of the sampled location in the Ribeira Quente stream, which has warm water owing to its position just below the warm-water effluent of the Ribeira Quente power station.
In fact, it has been reported that in cooler temperate regions the species is found most frequently in artificially warmed waters (e.g., Brinkhurst and Jamieson 1971). However, Prater et al. (1980) reported this species as abundant in Ohio in areas with moderate amounts of organic input. A high number of Oligochaeta was found by Gonçalves et al. (2008) at the same site where B. sowerbyi was collected, and Inova (2005, 2006) reports this location as influenced by organic enrichment as a consequence of urban pressures.
The presence of B. sowerbyi in the Azores archipelago is probably due to human activity, and the fact that just one individual was sampled might indicate an early stage of invasion, since the initial steps towards monitoring freshwater systems using benthic macroinvertebrates started only in 2003. Isolated islands have a reduced but unique diversity compared with the mainland, owing to the different levels of separation caused by linear distance, the strength and direction of water and wind currents, and intervening depths. This makes these regions particularly vulnerable to biological invasions. Rare or occasional events that inoculate islands may be important in their establishment and colonisation to form a native biota. However, in recent decades the efficient, diverse, and far-ranging extent of transport modes has enabled access by a greater diversity of species from all world regions. Nevertheless, many arrivals to islands can be predicted on account of their appearance and spread on nearby landmasses (Michin 2007). The establishment and consequences of introduced species have been discussed in many studies (e.g., Mooney and Hobbs 2000), but we are still not able to predict the outcome of the introduction of a particular species, or the impact of invasions in general on a specific ecosystem. Therefore, every finding of a non-indigenous species, and every effort to understand the mode of transport, introduction, establishment, and spread of the species, is valuable for defining predictive models, as well as for drawing attention to the problem of endangerment of native biodiversity caused by invaders. Further work is therefore required to elucidate the distribution and habitat preferences of B. sowerbyi in the Azores and its possible effects on aquatic ecosystems. Moreover, it has been hypothesized that temporary or permanent climate change facilitates natural range expansion (Nehring 1998; Stachowicz et al. 2002).
Figure 1. Location of the Azores archipelago and the respective surveyed stream.
"Environmental Science",
"Biology"
] |
Inflorescence and floral traits of the Colombian species of Tristerix (Loranthaceae) related to hummingbird pollination
Floral diversification in Loranthaceae reaches its highest peak in the Andes. The flowers of the exclusively Andean genus Tristerix are tubular, vividly coloured, and pollinated by hummingbirds. We studied the inflorescence and flower morphoanatomy of the two Colombian species, T. longebracteatus and the highly endangered T. secundus. Both species have terminal racemes with up to 26 ebracteolate flowers, of which the proximal one opens and sets fruit first. The slightly irregular calyx initiation is followed by the simultaneous initiation of petals and the successive initiation of stamens. Anthesis is fenestrate, explosive, and triggered by the tension of the style against the abaxial petals, a mode so far not reported in Loranthaceae. Anthetic petals spread symmetrically in T. longebracteatus and asymmetrically in T. secundus. Nectar is produced by a supraovarial disk and by the petal mesophyll. Floral lifespan lasts up to 20 days. The hummingbirds Eriocnemis vestita and Pterophanes cyanopterus are the likely pollinators of T. secundus. Morphological traits are inconclusive in supporting one of the two competing sister-group relationships that involve Tristerix, as the lack of cataphylls in renewal shoots links Ligaria and Tristerix, whereas the terminal inflorescences support its relationship with Desmaria and Tupeia. González F. & Pabón-Mora N. 2017. Inflorescence and floral traits of the Colombian species of Tristerix (Loranthaceae) related to hummingbird pollination. Anales Jard. Bot. Madrid 74(2): e061. http://dx.doi.org/10.3989/ajbm.2474 Title in Spanish: Caracteres de la inflorescencia y las flores de las especies colombianas de Tristerix (Loranthaceae) relacionados con la polinización por colibríes. Received: 15‒III‒2017; accepted: 25‒VI‒2017; published online: 03‒XI‒2017; Associate Editor: J. Fuertes.
INTRODUCTION
The morphological diversification of flowers in Loranthaceae reaches its highest peak in the Andes. However, most of the studies on inflorescence and flower morphoanatomy and reproductive biology have been carried out in Old World members of the family (v.gr., Blakely 1922; Maheshwari & al. 1957; Bhatnagar & Johri 1983; Feehan 1985; Ladley & al. 1997). Thus, the inflorescence and floral traits related to pollination remain to be investigated in neotropical taxa, including Tristerix Mart., a genus that comprises 12 species confined to high elevations in the Andes from Colombia to Chile (Barlow & Wiens 1973; Kuijt 1988, 2015).
The species of Tristerix exhibit long, tubular, and vividly coloured flowers that are pollinated by hummingbirds (Reiche 1904; Tadey & Aizen 2001; Aizen 2005; Amico & al. 2007). Two species in Colombia mark the northernmost distribution of the genus, T. longebracteatus (Desr.) Barlow & Wiens and T. secundus (Benth.) Kuijt. The distribution of these two species in Colombia is disjunct, as T. longebracteatus grows in the Central Cordillera, whereas T. secundus is endemic to the Eastern Cordillera. Together with Aetanthus (Eichl.) Engl. and Gaiadendron G.Don, these are the only Loranthaceae that reach the páramos in Colombia. Tristerix longebracteatus and T. secundus grow between 2,900 and 3,900 m a.s.l. The habitats occupied by these species are increasingly threatened by agricultural expansion and strong disturbance. In particular, the current conservation status of T. secundus deserves special attention because this species is known to occur only in a few páramos of the departments of Boyacá, Cundinamarca, and Meta, near densely populated areas. The goal of the present research is to investigate the so far overlooked morphoanatomical traits of inflorescences and flowers of the páramo species of Tristerix that are supposedly pollinated by hummingbirds.
Macromorphological measurements, counts, and general observations were made in the field, avoiding invasive methods that would damage the small populations. We collected a limited number of flowers and inflorescences, as these plants are very scarce in their habitats. Nevertheless, we took abundant photographic material that was used for counts and observations, which included no less than 40 inflorescences in different developmental stages, with an average of 20 floral buds or mature flowers per inflorescence.
For anatomical studies, flowers in several developmental stages were fixed in 70% EtOH. Buds were dissected in 90% EtOH under a Leica MZ7.5 stereomicroscope -Leica Microsystems, Heerbrugg, Switzerland- and dehydrated in an absolute ethanol series -90%, 95%, to 100% × 2 ethanol, 30 min each-. Fixed material was dehydrated through an alcohol-Histochoice series, and embedded in Paraplast X-tra -Fisher Healthcare, Houston, Texas, USA-. The samples were sectioned at 12 µm with an AO Spencer 820 -GMI Inc., Minnesota, US- rotary microtome. Sections were stained with Johansen's safranin and 0.5% Astra Blue, and mounted in Permount -Fisher Scientific, Pittsburgh, Pennsylvania, USA-. Sections were viewed and digitally photographed with a Nikon Eclipse 80i compound microscope equipped with a Nikon DXM1200C digital camera with ACT (1) software.
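As a compact reference, the embedding and staining parameters above can be recorded as a structured protocol; this is a minimal sketch with field names of our own choosing, not part of the original methods.

```python
# Minimal sketch of the sectioning/staining protocol as a structured
# record (field names are illustrative, not from the original paper).
histology_protocol = {
    "fixation": "70% EtOH",
    "dissection": "90% EtOH, Leica MZ7.5 stereomicroscope",
    "dehydration_series": ["90% EtOH", "95% EtOH", "100% EtOH x2"],
    "step_duration_min": 30,
    "embedding": "Paraplast X-tra",
    "section_thickness_um": 12,
    "stains": ["Johansen's safranin", "0.5% Astra Blue"],
    "mounting_medium": "Permount",
}

for step, value in histology_protocol.items():
    print(f"{step}: {value}")
```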
RESULTS
In general, the morphoanatomical and developmental traits of the two examined species are very similar. Thus, we describe the results simultaneously for both species, except for those characters that vary between them or that were preferentially recorded for T. secundus in the field.
Inflorescence development and morphology
Individuals in both species are stem hemiparasites with a slightly thickened primary haustorium and no epicortical roots. They copiously ramify from early stages soon after seedling establishment (fig. 1a). Up to three individuals of T. longebracteatus were observed parasitizing a single tree of Escallonia myrtilloides L.f. -Escalloniaceae R.Br. ex Dumort.-, whereas up to five individuals of T. secundus were observed growing in a single shrub of Ageratina baccharoides (Kunth) R.M.King & H.Rob. -Asteraceae Bercht. & J.Presl-. Young branches are dull reddish but they turn dull green to dark gray when flowering (fig. 1a, f, g). Branching is sympodial, as inflorescences are terminal and the two axillary shoots immediately below the inflorescence successively develop into renewal shoots (figs. 1b, d, e, h, 2b, 3a-c). Shoots in both species reach up to 1 m in length and have opposite, decussate leaves (fig. 1a, c, f); young stems are terete in T. longebracteatus and quadrangular in T. secundus (fig. 1j). Young inflorescences are tightly protected by the distalmost pairs of opposite leaves (fig. 1c). They develop into a raceme whose apical meristem depletes after forming up to 20 and 26 lateral flowers in T. longebracteatus and T. secundus, respectively (figs. 1b-j, 3a-c). Each flower is subtended by a single bract that is recaulescent to the pedicel (fig. 1d, e, h-j). The free portion of the bract is scale-like, ovate, to 4 × 3.5 mm, and tightly appressed to the pedicel (fig. 1b-i) in T. secundus, whereas it is leafy, narrowly lanceolate, to 3.5 × 1.8 cm (fig. 1j) in T. longebracteatus.
Flower initiation proceeds acropetally along five ontogenetic spiral lines (figs. 1b, c, 2a, b), but mature inflorescences appear to have flowers arranged in whorls (fig. 1e, h, i). When the young flowers reach 5 cm in length, the entire inflorescence becomes pendant (fig. 1f, j). Immediately before anthesis, flowers of T. secundus are lifted to a nearly horizontal orientation due to the strong increase in pedicel thickness and a sharp angle formed between the pedicel and the flower (figs. 1h, 3a, b). The first anthetic flower -hereafter called the leading flower- is always placed in an upper position, as it corresponds to the proximal flower of the pendant raceme (figs. 1d, e, h, 3a-c). Anthesis proceeds downwards, and young fruits are found on the upper portion of the inflorescence while lower flowers are still in preanthesis or anthesis (figs. 1g, 3a-c).
Flower development and morphoanatomy
The floral primordia are radial (fig. 2a, b). Floral organogenesis proceeds centripetally; the calyx initiates as a ring meristem above which five slightly irregular lobes are apparent but remain poorly differentiated throughout development (fig. 2b). Then, five free petals initiate alternating with the sepal tips (fig. 2b). When the flower bud reaches 2.5 mm in diameter, the calyx encloses the petal primordia almost completely (fig. 2b), and five stamen primordia become evident opposite and slightly adnate to each petal. The adaxial stamen initiates first, followed by the initiation of the two lateral and, then, the two abaxial stamens; this sequence corresponds to the three length categories of the stamens throughout development, that is, the adaxial stamen is the longest, the two lateral stamens are of intermediate size, and the two abaxial stamens are the shortest (figs. 2d, e, 4b-d). The length of the coherent zone between petals and filaments reaches 2.5 cm in T. longebracteatus and 5 cm in T. secundus.
The corolla aestivation is valvate (figs. 2b, 3d-g, 4a, j, k, 5g, l). Young -< 1 cm long- corolla tubes of T. secundus undergo a stronger elongation of the adaxial petal, causing a C-shaped curvature towards the subtending bract (fig. 2c), but soon the faster elongation of the two abaxial petals shifts the curvature away from the subtending bract (fig. 2d). No early curvatures were observed in young flowers of T. longebracteatus. The elongation of the five petals is accompanied by the gradual interlocking of their margins and the postgenital fusion between the base of each petal and the opposite filament (figs. 2c-e, 4j). Corolla tubes less than 2.5 cm long are light green, but they gradually turn bright scarlet at their proximal and distal ends and yellow at their middle portion (figs. 1d-h, 3). The fully elongated tube prior to anthesis reaches up to 5.5 cm in length and 7 mm in diameter in T. longebracteatus, and 11 cm in length and 9 mm in diameter in T. secundus. The tube is nearly straight in T. longebracteatus (fig. 4f), whereas in T. secundus it is slightly S-shaped, with its distal portion, corresponding to the anther zone, slightly swollen, twisted, and oriented more or less upwards (figs. 3b-g, 4a-e). The filaments of T. secundus have a small subterminal gland (fig. 4d, g), whereas those of T. longebracteatus have minute, retrorse epidermal teeth (fig. 4h). The anthers are incumbent, versatile, and dorsifixed (fig. 4b-d, g, h). They are yellow, straight, and reach up to 8.5 mm in T. longebracteatus (fig. 4f, h), whereas they are purple, slightly crescent-shaped, and reach up to 1.5 cm in length in T. secundus (fig. 4b-d, g).
The gynoecium is formed by five congenitally fused carpels, which are evident by the five vascular bundles and the edges alternating with the petals and stamens (fig. 5b). No locules were observed at any developmental stage. By full anthesis, the solid ovary is obovoid and reaches 6-8 × 5-6 mm. Nectar in both species is produced in a slightly 5-lobed supraovarial nectary ring (fig. 5j-m). Additionally, nectar production was also detected in the mesophyll of the petals in T. secundus (fig. 4k, l). The club-shaped style is initially yellow and straight, but it turns bright scarlet and slightly sinuous due to the mechanical constraint of the interlocked petals (figs. 3d-g, 4b, d, 5a, n). The style is persistent until the first stages of fruit growth (fig. 5n). The stigma is entire and slightly capitate (figs. 4b, d, 5i). The mature fruit is a globose berry to 1 cm in diameter, and it is enclosed by and fused to the calyx except for its apical portion, which remains free (fig. 5n, o). The colour of the outer surface of the calyx in mature fruits gradually shifts from dull green to deep purple (fig. 5n, o).
Floral anatomy
Five vascular bundles enter the base of the pedicel (fig. 5b), above which they radially split into an outer ring of five traces that irrigate the common petal-stamen bases, and an inner ring that serves the gynoecium (fig. 5b). No vasculature was observed irrigating the calyx. The free portion of the mature calyx has a single epidermal layer formed by small, cuboidal, isodiametric cells. No stomata were observed. The calyx mesophyll is formed by c. 10 layers of parenchymatous cells poorly differentiated from the pericarp (fig. 5c-f). The collenchyma cap, distally formed by up to 15 bundles, lies between the pericarp and the endosperm (fig. 5c-e).
The petal epidermis is formed adaxially and abaxially by a single layer of small, slightly tangentially elongate, papillose cells (fig. 4i, j, l); the epidermal cells of adjacent petal margins are tightly interlocked and have a thicker cuticle (fig. 4j). No stomata were observed. The vascular trace that enters the common petal-stamen base splits radially at the base of the corolla tube into a petal trace and a stamen trace (fig. 5g). Each petal is irrigated by one central trace and two pairs of lateral traces (fig. 4i, l). The petal mesophyll is formed by eight layers of isodiametric cells on the outside of the vascular traces and five layers of smaller cells on the inside (fig. 4j); the mesophyll immediately outside the central vascular bundle is schizogenous, and one or more cavities are formed (fig. 4k, l). The cavity in the adaxial petal is considerably larger than those formed in the remaining petals (fig. 4k). These cavities appear to be nectariferous (fig. 4l).
Each stamen is served by a single vascular bundle (fig. 5g). Some epidermal outgrowths that point backwards are scattered along the distal half of the filaments of T. longebracteatus (fig. 4h). Anthers are dithecal, tetrasporangiate, and dehisce latrorsely through a longitudinal slit (fig. 4b-e, g, n). The anther wall is formed by a narrow layer of tangentially elongated epidermal cells, a thick, fibrous endothecium that proliferates into two layers on the outer edges of the anther, one or two middle layers, and a unistratified or bistratified secretory tapetum that is degraded by late preanthesis (fig. 4m-o). The two pollen sacs of each theca connect to each other and open through a common latrorse stomium (fig. 4n). The microsporogenesis appears to be successive, although a few tetragonal tetrads were observed along with the predominant tetrahedral tetrads (figs. 4j, 5h). The pollen grains in both species are isopolar, tricolpate, and radially symmetrically trilobed (fig. 4n, o), a shape that becomes evident even before the reabsorption of the callose sheath of the tetrads (figs. 4j, m, 5h). Pollen is yellow in T. longebracteatus and gray to dull green in T. secundus (figs. 3h, 4g, h).
The ovary is solid and served by a poorly differentiated vascular ring around the single mamelon (fig. 5f). The style is solid and its mesophyll is formed mainly by isodiametric cells; the cells of the central mesophyll are amyliferous and surround five poorly defined vascular traces (fig. 5h). The stigma is also solid and undifferentiated except for the short papillose epidermal cells; the mesophyll is formed mainly by isodiametric cells except for a strip of tangentially elongated cells located towards one side of the stigma (fig. 5i).
During the ovary-fruit transition (fig. 5n), a continuous viscin layer formed by radially elongated cells arranged into a palisade-like parenchyma is evident (fig. 5p-r). Mature fruits are covered by a leathery outer epidermis and a fleshy mesophyll derived from the calyx. The fruit proper is formed by a fleshy pericarp, a collenchyma cap, and a massive viscin layer (fig. 5n-r). The seeds are differentiated into a cup-shaped endosperm, which is formed between the viscin layer and the cylindrical embryo with two fused cotyledons (fig. 5p, q).
Anthesis, floral lifespan, and floral visitors
Anthesis in T. secundus was fully recorded in the field. The elongating style gradually bends at its midlevel, producing an outward tension against the commissure of the two abaxial petals and triggering the premature split of a fenestra; the bending style protrudes through the fenestra (fig. 3a-f). By that time, the corolla tube is fully S-shaped (fig. 3a-f). Anthesis proceeds with the explosive opening of the twisted corolla tip, exposing the bright scarlet petals, filaments, style, and stigma and the deep purple anthers (figs. 3f, g, 4a). The filaments spread and bend upwards, lifting the dorsifixed, versatile anthers, which dehisce and release the pollen 24 to 48 h after anthesis; the style -slightly longer than the stamens- is also bent upwards and occupies a more or less midpoint position with respect to the lifted anthers, which keep apart from it during pollen shed (figs. 3b, c, g, 4e, g).
Opened flowers exhibit differences between the species examined. In T. longebracteatus the petals separate only halfway, spread symmetrically, and become strongly revolute, and the stamens remain near each other, forming a loose tubular fascicle around the style (fig. 4f). In T. secundus the petals and corresponding stamens separate almost completely and spread, resulting in a bilateral flower (figs. 3b, g, 4e). The adaxial petal is located on the lower side of the anthetic flower and serves as a platform for the hummingbird beak -videos available upon request-; the stamens and style bend upwards and inwards, which ensures their contact with the hummingbird's head (figs. 1g, 3b, c, g, 4e). Fully opened flowers are odourless in the two examined species.
Elongation from a 1 cm to an 11 cm long corolla tube in T. secundus takes approximately four weeks. Anthesis lasts 14-16 days -n = 12 flowers- from the first signs of style protrusion (fig. 3a, b). Then, the petals and attached stamens fall off, and within 48 h the young fruit with the persistent style is apparent (fig. 5n). The style abscisses from its base over the following 48 h (fig. 5n). By the time the fruit reaches c. 1 cm in diameter, the green, 3-4 mm long embryo has differentiated into a radicle and a plumule, which point towards the proximal and distal ends of the fruit, respectively (fig. 5p). Morphologically, the leading flower corresponds to the proximal flower of the pendant raceme (figs. 1d, e, h, 3a, b). Approximately five days after the opening of the leading flower, the two flowers located immediately below it enter anthesis; this timing is subsequently maintained in the remaining flowers. Fruit set follows the same sequence (figs. 3b, c, 5n). The floral lifespan of T. longebracteatus is unknown.
Tristerix secundus is likely pollinated by two hummingbird species, Eriocnemis vestita and Pterophanes cyanopterus (fig. 1g; videos available upon request). Signs of nectar robbers are observed as randomly distributed punctures on the outside of the corolla tube before anthesis, except at the level of the anthers. It is likely that nectar robbers collect the nectar produced by the schizogenous hypodermal cavities of the petals, which are easily accessed from the outside (fig. 4k), rather than the nectar produced directly by the supraovarial nectary ring (figs. 5j, k).
Inflorescence development and structure
We followed the development of terminal racemes formed by ebracteolate flowers, each subtended by a bract, in the two examined species of Tristerix. The presence of two lateral bracteoles in addition to the subtending bract in flowers of T. aphyllus Tiegh. ex Barlow & Wiens and T. corymbosus (L.) Kuijt (Reiche 1904; Kuijt 1988) supports the interpretation that each flower along the terminal raceme in T. longebracteatus and T. secundus corresponds to a dichasium reduced to the terminal flower (Suaza-Gaviria & al. 2017). However, no evidence of vestigial flowers or bracteoles was found in the species studied here. Interestingly, bracts and bracteoles are formed even in T. aphyllus, a species with extreme reduction of the vegetative organs (Reiche 1904; Mauseth & al. 1985; Heide-Jørgensen 2008). Racemes are also found in a few New World Loranthaceae, such as a few species of Peristethium Tiegh., but in the latter they are always lateral (Suaza-Gaviria & al. 2017).
The most conspicuous traits of the inflorescence structure in the Colombian species of Tristerix related to hummingbird pollination are: the horizontal -T. secundus- to nearly upright -T. longebracteatus- position of the many-flowered racemes; the flowers attached to a stout pedicel that could facilitate perching; the sharp angle between the bract and the flower that maintains a suitable position for visits and perching; and the gradual anthesis, beginning with the opening of the leading flower, followed by the flowers below it, at time intervals of c. 5 days. Most of these traits have also been reported in other hummingbird-pollinated extratropical species of Tristerix (Tadey & Aizen 2001).
We also report here for the first time that fenestrate anthesis, at least in T. secundus, begins asymmetrically and is uniquely triggered by the outward tension of the elongating style against the commissure of the two abaxial petals (fig. 3a, d-f), followed by the explosive opening of the corolla tube apex (fig. 3g). The commissure between the two abaxial petals at their midlevel is looser than the remaining four commissures and offers much less mechanical constraint than the tightly interlocked and twisted petal tips (figs. 3e-g, 4k).
Fenestrate, explosive anthesis has long been associated with ornithophily, which is the primary pollination mechanism in both Old and New World Loranthaceae (Reiche 1904; Werth 1915; Blakely 1922; Feehan 1985; Galetto & al. 1990; Kirkup 1998; Aizen 2005). However, this type of anthesis is not limited to cross-pollinated mistletoes, as it can occur also under self-compatibility and even cleistogamy; for example, in the fenestrate flowers of Peraxilla colensoi and Peraxilla tetrapetala the anthers dehisce and pollen is shed during preanthesis (Ladley & al. 1997). Cleistogamy can be ruled out in the two species of Tristerix examined, as the anthers dehisce after the opening of the corolla tube.
The visits of the likely pollinator hummingbirds Eriocnemis vestita (fig. 1g) and Pterophanes cyanopterus are the first records for T. secundus, although experiments are needed to fully demonstrate their role. Nectar in flowers of the two examined species of Tristerix is produced by the supraovarial nectary disk and in the mesophyll of the petals (figs. 4k, l, 5j-l). Floral orientation in these species differs, as it is upright in T. longebracteatus and horizontal in T. secundus; however, floral orientation does not appear to affect the efficiency of hummingbird visits or the volume or concentration of nectar, as demonstrated by Tadey & Aizen (2001) in T. corymbosus. The production of nectar in the petal mesophyll, reported here for the first time in the genus, is likely related to the visits of floral piercers, detected by frequent punctures on the outside of the petals. Although Graves (1982) -see also Amico & al. (2007)- reported that two flower-piercer species of Diglossa serve as pollinators of T. longebracteatus in northern Peru, it is likely that they are not the primary pollinators of this species, as the punctures are made in preanthetic flowers and far below the anthers and the stigma. According to Vidal-Russell & Nickrent (2008), tubular and bird-pollinated flowers evolved independently from insect-pollinated ancestors, once in the clade formed by Tristerix and Ligaria Tiegh., and once in Aetanthus plus Psittacanthus. This is supported by significant differences in flower morphology and pollination strategies between them. The population of T. secundus studied for the present research is sympatric with Aetanthus mutisii, and individuals of both species grow only a few meters apart and occasionally share the same host individual (fig. 6). Aetanthus mutisii is more abundant and occupies higher strata in the subpáramo vegetation, whereas T. secundus is locally rare and occupies lower strata (fig. 6). The ornithophilous traits related to inflorescence and floral morphoanatomy of T. secundus strongly differ from those found in its sympatric Aetanthus mutisii. In T. secundus, the anthesis is fenestrate, explosive, and occurs along the entire length of the corolla tube; the anthers are dorsifixed and tetrasporangiate; the stamens spread away from the style during anthesis; and the hummingbird's beak comes into direct contact with the nectary disk as well as the nectar produced in the petals, especially the adaxial one, which serves as a platform for the hummingbird's beak. Conversely, the corolla of Aetanthus mutisii is not fenestrate and opens only at its distal portion, exposing the basifixed, polysporangiate anthers, which remain connivent, forming a tube around the style; in this species, the hummingbird's beak is far from reaching the nectar disk, and the nectar slides down and accumulates mostly around the base of the connivent anthers (fig. 6; Suaza-Gaviria & al. 2016). Thus, the site of nectar accumulation allows the short-beaked hummingbird Eriocnemis vestita to easily access it and become dusted with pollen (fig. 6). This contradicts the purported role of long-beaked hummingbirds (cf. Heide-Jørgensen 2008) as pollinators in Aetanthus.
Systematic and taxonomic significance of inflorescence and floral traits
Phylogenetic relationships of Tristerix are still unresolved, and the competing scenarios pose important biogeographic implications. Wilson & Calvin (2006) stated that the genus is sister to the subclade formed by the monotypic Desmaria Tiegh., from the Andes, and Tupeia Cham. & Schltdl., from New Zealand. Conversely, Vidal-Russell & Nickrent (2008) and Su & al. (2015) postulated a sister-group relationship with the South American Ligaria. A comparison of a number of morphological traits among these four genera is inconclusive (table 1); whereas the lack of protective cataphylls in renewal shoots occurs in Ligaria and Tristerix, the terminal position of the inflorescences occurs in Tristerix, Desmaria, and Tupeia (table 1).
Fig. 3. Anthesis and floral lifespan of Tristerix secundus (Benth.) Kuijt: a, b, lateral views of an inflorescence photographed at a two-week interval, from the beginning of style protrusion in the leading flower to fruit set -arrow-; c, frontal view of the inflorescence -note the young fruit (arrow) formed from the leading flower-; d-f, fenestrate, explosive corolla opening, lateral (D, E) and top (F) views; g, fully opened flower. [Drops in a and d correspond to raindrops; arrowheads point to styles protruding between the abaxial petals; ap, adaxial petal.]
"Biology",
"Environmental Science"
] |
Impact of Drug-Mediated Inhibition of Intestinal Transporters on Nutrient and Endogenous Substrate Disposition…an Afterthought?
A large percentage (~60%) of prescription drugs and new molecular entities are designed for oral delivery, which requires passage through a semi-impervious membrane bilayer in the gastrointestinal wall. Passage through this bilayer can be dependent on membrane transporters that regulate the absorption of nutrients or endogenous substrates. Several investigations have provided links between nutrient, endogenous substrate, or drug absorption and the activity of certain membrane transporters. This knowledge has been key in the development of new therapeutics that can alleviate various symptoms of select diseases, such as cholestasis and diabetes. Despite this progress, recent studies revealed potential clinical dangers of unintended altered nutrient or endogenous substrate disposition due to the drug-mediated disruption of intestinal transport activity. This review outlines reports of glucose, folate, thiamine, lactate, and bile acid (re)absorption changes and consequent adverse events as examples. Finally, the need to comprehensively expand research on intestinal transporter-mediated drug interactions to avoid the unwanted disruption of homeostasis and diminish therapeutic adverse events is highlighted.
Introduction
A semi-impervious membrane bilayer is necessary to protect intracellular components and whole organisms from potentially harmful xenobiotics [1]. Conversely, cellular homeostasis requires nutrients and endogenous substrates to pass through this barrier. Membrane transporters are key regulators of this process and are critical for the absorption of essential biomolecules and therapeutics. As of 2018, 62% of FDA-approved drugs were designed for oral administration, and new drugs continue to be developed for delivery through this non-invasive route [2]. As with nutrients and endogenous substrates, systemic exposure to drugs requires passage through enterocytes (and hepatocytes).
Numerous membrane transporters within enterocytes have been identified and characterized regarding the regulation of nutrient, endogenous substrate, or drug (re)absorption [3,4], and many have been recognized by the International Transporter Consortium (ITC) as important to consider throughout drug development [5]. These transporters include the efflux transporters P-glycoprotein (P-gp, encoded by the ABCB1 gene), breast cancer resistance protein (BCRP, encoded by the ABCG2 gene), and multidrug resistance proteins 2 and 3 (MRP2 and MRP3, encoded by the ABCC2 and ABCC3 genes, respectively), as well as the uptake transporters organic anion transporting polypeptide 2B1 (OATP2B1, encoded by the SLCO2B1 gene), peptide transporters 1 and 2 (PEPT1 and PEPT2, encoded by the SLC15A1 and SLC15A2 genes, respectively), monocarboxylate transporter 1 (MCT1, encoded by the SLC16A1 gene), apical sodium dependent bile acid transporter (ASBT, encoded by the SLC10A2 gene), organic solute transporters α and β (OSTα and OSTβ, encoded by the SLC51A and SLC51B genes, respectively), and thiamine transporters 1 and 2 (THTR1 and THTR2, encoded by the SLC19A2 and SLC19A3 genes, respectively). In addition to these ITC-recognized transporters, sodium glucose transporter 1 (SGLT1, encoded by the SLC5A1 gene) has been subject to many investigations due to its role in mediating intestinal glucose uptake and sensitivity to certain drugs, along with the proton-coupled folate transporter (PCFT, encoded by the SLC46A1 gene) and Niemann-Pick C1-Like1 transporter (NPC1L1, encoded by the SLC65A2 gene) due to their involvement in folate and cholesterol absorption, respectively.
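Because the paragraph above introduces many transporter-gene pairs at once, a small lookup structure can make them easier to navigate; the following sketch simply restates the pairings given in the text (the direction labels follow the text's own grouping).

```python
# Transporter -> (gene, transport role) pairs, as listed in the text above.
INTESTINAL_TRANSPORTERS = {
    "P-gp": ("ABCB1", "efflux"),
    "BCRP": ("ABCG2", "efflux"),
    "MRP2": ("ABCC2", "efflux"),
    "MRP3": ("ABCC3", "efflux"),
    "OATP2B1": ("SLCO2B1", "uptake"),
    "PEPT1": ("SLC15A1", "uptake"),
    "PEPT2": ("SLC15A2", "uptake"),
    "MCT1": ("SLC16A1", "uptake"),
    "ASBT": ("SLC10A2", "uptake"),
    "OSTalpha": ("SLC51A", "uptake"),
    "OSTbeta": ("SLC51B", "uptake"),
    "THTR1": ("SLC19A2", "uptake"),
    "THTR2": ("SLC19A3", "uptake"),
    "SGLT1": ("SLC5A1", "uptake"),
    "PCFT": ("SLC46A1", "uptake"),
    "NPC1L1": ("SLC65A2", "uptake"),
}

gene, role = INTESTINAL_TRANSPORTERS["SGLT1"]
print(f"SGLT1 is encoded by {gene} and mediates {role}.")
```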
The roles of many of the above transporters within the intestine, including P-gp, MRPs, PEPT1, and OATP2B1, are detailed in comprehensive reviews [6,7]; however, previous studies involving reduced-activity genetic variants, drug-drug interactions, or food-drug interactions have arguably been predicated upon investigating their contributions to xenobiotic disposition. Consequently, an insufficient consideration of drugs that alter the endogenous function of intestinal transporters, which contributes to unwanted effects such as nutrient malabsorption, can impact drug development and therapeutic outcomes. For example, although not a transporter expressed within enterocytes, the activity of the hepatic bile salt export pump (BSEP, encoded by the ABCB11 gene) can be inhibited by several tyrosine kinase inhibitors (TKIs), a class of drugs designed to treat various forms of cancer. These drugs can potentially disrupt BSEP-mediated bile acid secretion and promote cholestasis through the hepatic accumulation of bile acids [8]. Some TKIs have recently been shown to inhibit the activity of a variety of other transporters in a non-competitive manner [9,10]. Moreover, many drugs are associated with adverse events that can be partially explained by nutrient deficiencies. For example, patients treated with some TKIs have experienced symptoms that include hypoglycemia or thiamine deficiency. In fact, the symptoms of thiamine deficiency were a major challenge associated with the regulatory approval of fedratinib (detailed below).
Due to the above-highlighted challenges, this review underscores the importance of understanding how intestinal transporters are impacted by therapeutics and the consequent effects on nutrient or endogenous substrate disposition or homeostasis. Examples of altered nutrient or endogenous substrate disposition following exposure to xenobiotics, as well as the outcomes and characterization of these events, are described. Opportunities to improve the prediction and characterization of transporter-mediated drug interactions are also provided.
Examples of Altered Nutrient and Endogenous Substrate Disposition due to Drugs
Reports of drugs that may alter nutrient or endogenous substrate disposition, as well as the consequent biological outcomes, are limited. Indeed, current clinically relevant observations are largely limited to drugs that impact substrate interactions with SGLT1, THTR2, PCFT, MCT1, NPC1L1, ASBT, or OSTα/β. The expression, protein abundance, and membrane localization of these transporters are presented (Table 1), while details of altered substrate disposition in the presence of certain drugs are outlined below.
Disruption of Glucose Disposition
The movement of glucose, the primary source of mammalian cell energy, through the apical membrane of enterocytes is highly dependent on SGLT1. SGLT1 is a 73 kDa protein that is responsible for intestinal glucose uptake using symport via a sodium gradient, after which glucose moves into the portal circulation through glucose transporter 2 (GLUT2, encoded by the SLC2A2 gene) at the basolateral membrane (Figure 1). SGLT1 is also expressed at the apical membrane in renal proximal tubule cells, where it contributes to glucose reabsorption into the systemic circulation.
The importance of SGLT1 is best represented by individuals with genetic loss-of-function variants, who suffer from glucose-galactose malabsorption, diarrhea, and hypoglycemia resulting from diminished glucose uptake into enterocytes [19]. Consistent with these observations, Sglt1-deficient rodents were reported to exhibit these symptoms [20]. Such disrupted glucose disposition was critical to the development of gliflozins, selective inhibitors of SGLT2 (the major mediator of renal glucose reabsorption) that are used to treat diabetes. The gliflozins vary in SGLT1 inhibitory potency; notable among them is sotagliflozin (Table 2), a dual SGLT1/2 inhibitor that reduces postprandial and fasting blood glucose in patients with type 1 or type 2 diabetes [21,22]. The simultaneous inhibition of Sglt1 and Sglt2 in rodents was considered to produce a significantly greater reduction in renal glucose reabsorption than the loss of Sglt2 activity alone [23]. A dose-limiting adverse event associated with sotagliflozin is diarrhea, which results from carbohydrate accumulation in the gastrointestinal tract, along with a reversed osmotic flow of water, following the loss of SGLT1 activity within enterocytes [24].
[Table 1 note: expression is scored by staining intensity; protein abundance data are reported from the Human Protein Atlas [13].]
Table 2 (excerpt). Drug-mediated inhibition of intestinal transporter activity and reported outcomes:
- Glucose uptake by SGLT1: Sotagliflozin (0.036 µM), direct inhibitor: reduced plasma glucose concentration in patients [21,22]. Erlotinib * (NA), indirect inhibitor: reduced glucose uptake in A549, MCF10A, H322, or H292 cells [25-27]. Lapatinib * (NA), indirect inhibitor: reduced glucose uptake in A549 or MCF10A2 cells [26]. Sorafenib * (NA), indirect inhibitor: reduced plasma glucose concentration in patients [28]. Dasatinib * (NA), indirect inhibitor: reduced plasma glucose concentration in patients [28]. Sunitinib * (NA), indirect inhibitor: reduced plasma glucose concentration in patients [28]. Imatinib * (NA), indirect inhibitor: reduced plasma glucose concentration in patients [28].
- Thiamine uptake by THTR2: Fedratinib (0.94-1.36 µM), direct inhibitor
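One way to gauge whether in vitro potencies like those in Table 2 could matter clinically is the screening ratio used in regulatory drug-interaction guidance for intestinal transporters, where a theoretical gut concentration (Igut, the dose dissolved in 250 mL) is compared with the IC50, and ratios of 10 or more are typically flagged for follow-up. The sketch below illustrates the arithmetic; the 400 mg dose and ~426 g/mol molecular weight used for sotagliflozin are illustrative assumptions rather than values taken from this review.

```python
# Minimal sketch of the Igut/IC50 screening ratio for intestinal
# transporter inhibition (inputs are illustrative assumptions).

def igut_over_ic50(dose_mg: float, mol_weight_g_per_mol: float,
                   ic50_um: float, gut_volume_ml: float = 250.0) -> float:
    """Return the Igut/IC50 ratio, with Igut = dose / 250 mL expressed in µM."""
    dose_umol = dose_mg / mol_weight_g_per_mol * 1000.0  # mg -> µmol
    igut_um = dose_umol / (gut_volume_ml / 1000.0)       # µmol/L = µM
    return igut_um / ic50_um

# Illustrative numbers: a 400 mg dose, MW ~426 g/mol, SGLT1 IC50 of 0.036 µM.
ratio = igut_over_ic50(dose_mg=400, mol_weight_g_per_mol=426, ic50_um=0.036)
print(f"Igut/IC50 = {ratio:.0f} (>= 10 would flag a potential interaction)")
```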
Disruption of Thiamine Absorption
Thiamine, also known as vitamin B1, is an essential water-soluble nutrient obtained through dietary consumption. Thiamine absorption into enterocytes is largely mediated by THTR2, a 56 kDa protein located at the apical membrane of enterocytes (Figure 1). The importance of THTR2 in this process is best characterized by the reduced intestinal absorption and subsequently reduced systemic concentrations of thiamine in Thtr2-deficient mice [48]. Following uptake into enterocytes, THTR1 mediates the movement of thiamine through the basolateral membrane into the portal vein and eventually the systemic circulation. THTR1 and THTR2, which are ubiquitously expressed, also mediate thiamine uptake from the circulation into cells, along with its reabsorption from renal proximal tubule cells [49]. Thiamine (and small quantities of metabolites) is largely excreted into the urine by glomerular filtration and tubular secretion, while hepatic uptake of thiamine occurs via organic cation transporter 1 (OCT1, encoded by the SLC22A1 gene).
Patients with THTR2 reduced-activity variants are at increased risk for Wernicke's encephalopathy, a severe neurological condition caused by prolonged thiamine deficiency through malnutrition or malabsorption [50]. Although inhibitors were not designed to clinically target THTR2, the transporter was recently shown to be sensitive to inhibition by various drugs (Table 2). The first indication of this occurred following the termination of Phase 3 clinical trials of the TKI fedratinib, when many patients developed Wernicke's encephalopathy. Follow-up in vitro studies revealed that fedratinib is a THTR2 substrate and inhibitor at low, clinically relevant concentrations (Table 2), which would diminish thiamine absorption through enterocytes and promote Wernicke's encephalopathy [29]. Since this discovery, the occurrence of Wernicke's encephalopathy has been decreased by monitoring thiamine concentrations before and during treatment. However, other drugs, including amitriptyline and hydroxychloroquine, have been identified as THTR inhibitors using in vitro models (Table 2) [30,31]. Future clinical interaction studies of these drugs and associated changes in thiamine disposition are highly recommended; transporters beyond THTR2 should also be considered. Evidence supporting this broader view comes from recent clinical and animal data showing an unexpected increase in systemic thiamine concentrations upon the administration of trimethoprim, an antifolate antibiotic with THTR2-inhibitory properties [32]. This increase in systemic thiamine concentrations was believed to result from trimethoprim simultaneously inhibiting the hepatic uptake and clearance of thiamine via OCT1. Therefore, future investigations of potential THTR2 inhibitors, such as metformin and verapamil [14], and their ability to alter systemic thiamine concentrations should include assessments of other transporters that contribute to the balance between thiamine absorption and clearance, including THTR2 and OCT1. Additionally, changes to thiamine metabolism may need to be considered.
Disruption of Folate Disposition
Folate, or vitamin B9, is essential for DNA synthesis and cell growth. Folate is obtained from the diet via absorption across the apical membrane of enterocytes by PCFT, a 50 kDa protein that uses proton symport to drive transport (Figure 1). Cellular folate exits the basolateral membrane of enterocytes into the portal vein via a currently unconfirmed mechanism that is believed to involve MRPs [15]. The uptake of folate from the circulation into cells then occurs by PCFT, as well as the reduced folate carrier (RFC, encoded by the SLC19A1 gene) and folate receptors. The importance of PCFT is highlighted by genetically deficient mice that develop severe anemia and pancytopenia resulting from systemic folate deficiency, which is largely due to disrupted intestinal folate uptake [51]. Consistent with these phenotypes, patients with reduced-activity PCFT variants have impaired intestinal folate absorption, along with impaired transport into the central nervous system, which collectively leads to anemia and, in many patients, seizures or mental deficiencies [52].
Given the importance of folate and PCFT, no drugs have been deliberately designed to inhibit PCFT; however, antifolates, including methotrexate and raltitrexed, are PCFT substrates [53]. Oral methotrexate is associated with adverse events that include gastrointestinal toxicity, anemia, and myelosuppression, each of which can be attributed to folate deficiency. In fact, folate supplementation is used clinically to alleviate these symptoms. Reduction of PCFT-mediated intestinal folate absorption by methotrexate via competitive inhibition appears to be negligible, based on investigations showing that folate absorption is unchanged in the presence of methotrexate [54]. Instead, folate deficiency in methotrexate-treated patients is likely due to the inhibition of folate metabolism that is necessary for DNA synthesis, or to reduced reabsorption via PCFT in proximal tubules leading to increased renal elimination. Although methotrexate may not reduce folate absorption, other drugs, such as the anti-inflammatory agent sulfasalazine, have been identified as PCFT inhibitors at clinically relevant concentrations. Sulfasalazine is also associated with folate deficiency-related complications that are believed to result from reduced PCFT-mediated intestinal absorption of folate [55].
Disruption of Lactate Disposition
The disposition of several organic acids, including lactate, which promotes redox signaling and supplies energy for oxidative metabolism, is mediated by MCT1-4 [16]. Lactate, a product of anaerobic glycolysis, is absorbed into enterocytes through MCT1, a 43 kDa protein that uses proton symport to drive cellular lactate uptake (Figure 1). MCT4 is expressed at the basolateral membrane of enterocytes and appears to be involved in lactate efflux into the portal vein [56]. MCT1 is expressed in almost all human tissues; thus, it plays a major role beyond enterocytes in mediating lactate uptake from plasma or facilitating efflux [16]. The genetic knockout of Mct1 in mice is embryonically lethal, while Slc16a1−/+ mice have neurodegenerative complications due to reduced lactate shuttling and decreased nutrient absorption [57]. Similarly, MCT1-inactivating genetic variants in humans are associated with metabolic acidosis and diarrhea, although no changes in plasma lactate concentrations are observed [58].
MCT1 has become a drug target for multiple purposes. For example, gabapentin enacarbil, which is used clinically as an anticonvulsant (Table 2) [34,35], was designed as an MCT1 substrate to improve intestinal absorption and bioavailability. AZD3965 was recently designed as an MCT1 inhibitor (IC50 of 17 nM) with the goal of suppressing lactate uptake and altering glycolysis and pH in MCT1-overexpressing tumors [36,37]. The clinical utility of AZD3965 remains under development. The effects of MCT drug substrates or inhibitors on intestinal lactate absorption may require investigation, considering that many have a higher affinity for MCTs than lactate (Km < 3.5 mM). Indeed, metabolic acidosis has been reported following clinical exposure to AZD3965, along with increased urinary elimination of lactate and ketones [36]. These events occurred in the absence of changes in plasma lactate concentrations, consistent with MCT1 genetic deficiency in humans. The increased urinary elimination of MCT1 substrates with AZD3965 is believed to result from a lack of MCT1-mediated reabsorption from proximal tubules, while the lack of changes in plasma may be due to compensation by MCT4, which is widely expressed among tissues (except ocular tissue) and is not inhibited by AZD3965. Despite compensatory mechanisms maintaining plasma lactate concentrations and the complexity of factors that contribute to lactate metabolism and disposition, MCT1 inhibitors or substrates may still have significant potential to harm enterocyte homeostasis, especially considering that such compounds are commonly administered orally. Moreover, the genetically mediated loss of intestinal Mct1 activity alone in mice was shown not only to decrease the oral absorption of lactate but also to alter microbiome contents, glucose homeostasis, and inflammation [59].
Overall, drugs with the potential to alter MCT1 activity are relatively new, and the potential harm or benefit of changes in MCT substrate disposition in humans taking inhibitors will require future investigation. These studies should also include establishing the role of MCTs in other tissues, including hepatocytes, and examining other transporters or enzymes involved in lactate disposition and metabolism that may compensate for MCT loss of function.
Disruption of Cholesterol Disposition
Cholesterol is a major structural component of human cell membranes and is a precursor for steroid hormone, bile acid, and vitamin synthesis. The absorption of cholesterol into enterocytes is mediated by the 145 kDa transporter NPC1L1 (Figure 1), followed by esterification and chylomicron secretion or efflux by the cholesterol efflux regulatory protein (CERP, encoded by the ABCA1 gene) into the portal circulation [17]. NPC1L1 is also expressed at the apical membrane of hepatocytes. The importance of NPC1L1 is represented by genetically deficient mice, which have significantly reduced intestinal cholesterol absorption [60]. Similarly, human variants in the SLC65A2 gene are associated with reduced intestinal absorption of cholesterol and reduced LDL concentrations [61].
The discovery of NPC1L1 provided clarity into the mechanism of action of ezetimibe, the first cholesterol absorption inhibitor approved to treat hypercholesterolemia. Specifically, ezetimibe acts by inhibiting NPC1L1 [62] and, to date, remains the only clinically used inhibitor of this transporter. Ezetimibe is effective in reducing cholesterol in patients either alone or in combination with a statin [63]. Accordingly, investigations into NPC1L1 inhibitors continue, often focusing on ezetimibe analogues. However, no compound superior to ezetimibe has been identified to date.
Pathological Alteration of Bile Acid Recirculation
Bile acids are steroidal compounds derived from cholesterol that are critical for the digestion of dietary lipids. These endogenous compounds are fundamental in the regulation of multiple metabolic processes, including cholesterol and insulin homeostasis [64]. Bile acid concentrations within the gastrointestinal tract are tightly regulated by the cholehepatic shunt and enterohepatic recirculation pathways to maintain digestive homeostasis (Figure 2) [18]. Bile acid uptake into enterocytes is mediated by ASBT, a 43 kDa protein located at the apical membrane that uses sodium symport to drive transport. OSTα/β are facilitative transporters located at the basolateral membrane (with approximate molecular weights of 37 and 19 kDa, respectively) that move bile acids into the portal vein. OSTα/β are also expressed in hepatocytes, where they function with other transporters, including the uptake transporter sodium-taurocholate co-transporting polypeptide (NTCP, encoded by the SLC10A1 gene) and OATPs, as well as the efflux transporters MRP1-4 and BSEP, to regulate bile acid concentrations. The importance of ASBT is supported by observations in rodent knockout models, in which Asbt deficiency led to increased fecal cholesterol clearance, along with a decreased bile acid pool and serum concentrations [65,66]. Genetic variation associated with reduced ASBT activity in humans is linked to primary bile acid malabsorption, which presents as congenital chologenic diarrhea and a loss of bile acid transport [67]. Genetically mediated loss of OST function leads to cholestasis, liver fibrosis, and congenital diarrhea without changing systemic bile acid concentrations [68]. An increased clearance of bile acids may be beneficial for cholestatic disorders, including Alagille syndrome (ALGS), progressive familial intrahepatic cholestasis (PFIC), and biliary atresia. Regardless of cholestatic origin, these patients suffer from multiple adverse effects, including severe pruritus, which is associated with elevated serum bile acid concentrations. Targeting (inhibiting) intestinal bile acid transporters would decrease serum bile acid concentrations and is expected to reduce the severity of pruritus. Indeed, the inhibition of ASBT and OSTα/β alone can interrupt bile acid recycling and significantly increase the fecal clearance of bile acids [69,70].
ASBT inhibitors have been developed for cholestatic diseases and are approved in some countries. Maralixibat, approved in the United States in 2021, is indicated for the treatment of cholestatic pruritus in children with ALGS. The ICONIC trial showed that maralixibat lowered average observed itching scores by 2.3 points and serum bile acid concentrations by 36% by week 204 [40]. The IMAGINE-I and IMAGINE-II trials yielded similar results, and the ITCH trial showed a statistically significant improvement in observed itching scores by week 13 [41,42]. Currently, maralixibat is undergoing clinical trials as a treatment for pruritus in patients with PFIC types 1 and 2, biliary atresia, and generalized cholestatic liver disease (NCT02057718, NCT03905330, NCT04185363, NCT04524390, NCT04168385, and NCT04729751). Preliminary results indicate that this drug could be beneficial for these patient groups.
Odevixibat, another ASBT inhibitor approved in the United States and European Union in 2021, is indicated for the treatment of pruritus in children with PFIC types 1 and 2. The PEDFIC 1 trial showed statistically significant reductions in observed itching scores, scratching scores, and serum bile acid concentrations in patients receiving odevixibat for 22-24 weeks. A follow-up study, PEDFIC 2, confirmed these results. Patients also experienced improvements in sleep parameters in both PEDFIC trials [43]. Like maralixibat, odevixibat is currently undergoing clinical trials as a treatment for pruritus in patients with other cholestatic diseases [44].
A third ASBT inhibitor, elobixibat, is approved in Japan for the treatment of chronic idiopathic constipation, exploiting the expected on-target effects of interrupting bile acid recycling and promoting bowel movements. In the United States, patients in the ACCESS trial who received ≥5 mg of the drug experienced at least a twofold increase in complete spontaneous bowel movements per week compared to placebo (NCT01007123). Additionally, patients who received ≥10 mg of elobixibat experienced their first spontaneous bowel movement faster compared to placebo. As a class, the effectiveness of ASBT inhibitors in reducing serum bile acid concentrations offers new therapeutic options for treating cholestatic and constipation-related disease states.
To date, there are no OSTα/β inhibitors approved for use in patients with cholestatic disease. Thus, evidence of drug-induced disruption of OST substrate disposition is lacking. However, an in vitro study compared the effects of 77 test compounds on transporter activity in OSTα- and OSTβ-expressing cells (Flp-In 293) to those in mock cells (HEK293). Of these compounds, atorvastatin, ethinylestradiol, fidaxomicin, indomethacin, spironolactone, and troglitazone were strong OSTα/β inhibitors (≥50% inhibition relative to control) [45]. An in vitro study using fluorescence resonance energy transfer and OSTα/β-expressing cells identified clofazimine as another strong inhibitor [46]. Despite interest in developing OSTα/β inhibitors, a novel drug molecule has yet to be identified. This apparent lag in drug discovery may be due to ASBT inhibitors already showing promise as bile acid modulators, or to the potential negative impact on bile acid disposition in other OSTα/β-expressing cells such as hepatocytes. Additionally, unlike ASBT inhibitors, OSTα/β inhibitors must traverse the apical membranes of cholangiocytes and hepatocytes to reach the therapeutic (basolateral) target, creating an additional challenge with respect to drug design. To that end, both steroids and statins have shown OSTα/β inhibitory activity; thus, rigorous characterization of those interactions may lead to the development of novel drug molecules as a new class of bile acid modulators.
Conclusions and Future Considerations
Considerable knowledge has been uncovered involving the disposition of nutrients, endogenous substrates, and drugs across the intestinal barrier, improving our understanding of the factors involved in intestinal absorption and relevant disease states. Through an abundance of in vitro, ex vivo, and in vivo animal models, along with clinical investigations, several transporters have been linked to the disposition of nutrients and endogenous substrates, including PCFT, NPC1L1, MCTs, SGLT1, THTR2, and ASBT. This insight has led to new clinical strategies and therapeutics to alleviate various disease symptoms, including those associated with cholestatic disease, with the new ASBT inhibitors.
Despite these advances in our understanding of the intestinal disposition of xeno- and endobiotics, further investigation is needed. The identification of unintended nutrient uptake transporter inhibitors, such as fedratinib, has highlighted the potential patient risk of nutrient deficiencies, especially when chronic exposure is expected. Accordingly, the multiple potential transporter inhibitors described within this review will require follow-up investigations to assess their risk of promoting nutrient malabsorption. Such investigations should include in vitro and in vivo studies to assess the time dependency of inhibition or the time to recovery of transporter function. Moreover, the unexpected trimethoprim-mediated increase in plasma thiamine concentrations highlights the need to consider transport or metabolic pathways within other tissues as compensatory mechanisms.
Future studies should expand beyond the examples provided and could consider innovative outcomes, such as the apparent changes in gut microbiome content and colitis risk observed with diminished Pept1 activity in mice [71], as well as the risk of colitis in the absence of P-gp [72]. In silico approaches, which are commonly used to assess drug disposition changes under drug-drug or drug-nutrient interaction conditions, could also be devised to address the impact of these interactions on nutrient disposition and predict biological consequences. Finally, novel orphan intestinal transporters should be considered that may provide further insight into intestinal transporter-mediated drug absorption, interactions, and disease states, as well as new therapeutic strategies.
Table 2. Nutrient- or endogenous-substrate-mediated transport with potential sensitivity to drug exposure.
Hydroxycinnamyl Derived BODIPY as a Lipophilic Fluorescence Probe for Peroxyl Radicals
Herein, we describe the synthesis of a fluorescent probe NB-2 and its use for the detection of peroxyl radicals. This probe is composed of two receptor segments (4-hydroxycinnamyl moieties) sensitive towards peroxyl radicals that are conjugated with a fluorescent reporter, dipyrrometheneboron difluoride (BODIPY), whose emission changes depend on the oxidation state of the receptors. The measurement of the rate of peroxidation of methyl linoleate in a micellar system in the presence of 1.0 µM NB-2 confirmed its ability to trap lipid peroxyl radicals with the rate constant kinh = 1000 M−1·s−1, which is ten-fold smaller than that of pentamethylchromanol (an analog of α-tocopherol). The reaction of NB-2 with peroxyl radicals was further studied via fluorescence measurements in methanol, with α,α′-azobisisobutyronitrile (AIBN) used as a source of radicals generated by photolysis or thermolysis, and in the micellar system at pH 7.4, with 2,2′-azobis(2-amidinopropane) (ABAP) used as a thermal source of the radicals. The reaction of NB-2 receptors with peroxyl radicals manifests itself as a strong increase in fluorescence with a maximum at 612–616 nm, with a 14-fold enhancement of emission in methanol and a 4-fold enhancement in the micelles, as compared to the unoxidized probe. Our preliminary results indicate that NB-2 behaves as a “switch on” fluorescent probe that is suitable for sensing peroxyl radicals in an organic lipid environment and in biphasic dispersed lipid systems.
Introduction
There is much evidence of the harmful effects of excess amounts of Reactive Oxygen Species (ROS), resulting in oxidative stress and, consequently, in undesired health effects [1-8]. Excessive generation of ROS correlates with mitochondrial dysfunction and increased receptor signaling in cells in general [9]. Lipids are the main components of biomembranes, and their exposure to endogenous or exogenous ROS initiates a peroxidation affecting the assembly, composition, polarity, structure, and dynamics of cellular membranes, and can finally lead to the death of the cell [10]. Once the peroxidation is triggered by radical forms of ROS attacking the lipid (LH, Reaction (1) in Scheme 1), the process is propagated by alkyl and alkylperoxyl radicals (L• and LOO•, respectively); in cyclic reactions of O2 addition (Reaction (2)) and H abstraction (Reaction (3)), tens to hundreds of lipid molecules are converted into hydroperoxides until the process is terminated in Reaction (4).
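For orientation, the elementary steps of Scheme 1 can be written out explicitly; the following is a reconstruction of the standard radical chain mechanism of lipid peroxidation, numbered to match the reactions cited throughout the text:

LH + X• → L• + XH (1) (initiation by a ROS radical X•)
L• + O2 → LOO• (2) (O2 addition, k2 ≈ 10^9 M−1·s−1)
LOO• + LH → LOOH + L• (3) (H abstraction, kp = k3)
LOO• + LOO• → non-radical products (4) (termination)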
A number of problems connected with oxidative stress and with the mechanisms of antioxidant action still remain unsolved. For example, some compounds designed to eliminate ROS, which passed in vitro tests successfully, show dramatically low activity in vivo [11]; therefore, a deeper insight is needed into the localization of ROS and their interactions with other components of living cells. Cellular concentrations of ROS are usually at the pico- to nanomolar level and can change rapidly [12]; thus, the methods used for their monitoring should be fast, efficient, and highly sensitive. Additionally, a molecular marker for oxidative stress should affect neither the process to be monitored nor the components of the cell.
Among the main methods of ROS detection, Electron Paramagnetic Resonance (EPR), a technique using spin traps (usually nitroxides or nitrones), can be conveniently applied to living organisms. However, the expensive equipment and the advanced methodology required to interpret the results make EPR rather inaccessible for non-specialists. Another method, Magnetic Resonance Imaging (MRI) [13,14], is often too expensive and time consuming to be broadly used for monitoring oxidative stress on the cellular level (in vitro and in vivo). ROS can also be monitored by fluorescence imaging [12,15,16], a technique employing molecules of a fluorescence probe (FP) that allows for the observation of processes in single cells or their parts (via confocal microscopy) [17]. An FP should react with ROS faster than other biomolecules, and such a reaction should result in a clear fluorescence response (on/off), even at low concentrations of ROS. Figure 1 presents examples of two FPs commonly used for the detection of intracellular ROS [18,19]. The diacetate form of dichlorodihydrofluorescein (DCFDA), after crossing the cell membrane, is enzymatically hydrolyzed to the polar form (DCFH, non-fluorescent), reacts (non-selectively) with ROS, and forms fluorescent DCF. Another commonly used FP, APF, is less sensitive but more selective than DCFH; it reacts with •OH, HOCl, and ONOO−. DCFH and APF are "switch on" fluorescent probes: their reaction with ROS results in fluorescent products. This is, however, not the only mechanism FPs can follow. Some FPs act as "switch off" markers, i.e., their fluorescence is quenched after the reaction with ROS. In both cases, the fluorescence signal/intensity (on/off) of the whole molecule strictly depends on the reaction with ROS. In the first case, an electron is promoted from the ground state to the excited singlet state of the molecule, ¹GS → ¹ES*, and the reaction of ¹ES* with ROS results in relaxation to a lower-energy triplet state, ¹ES* → ³ES*. Relaxation of this state can be either mediated again by triplet oxygen or can occur via a heat dissipation route. The second mechanism involves photoinduced electron transfer (PeT), that is, LUMO → LUMO transfer within the excited donor-acceptor pair, [D*:A] → [D+:A−]*; then [D+:A−]* returns to the ground state either with or without photon emission (exciplex emission), with the subsequent return of the electron to the donor (HOMO → HOMO) and decomposition of the D:A complex. The same PeT mechanism can operate for D and A covalently bonded as two structurally and functionally different segments. The first segment is responsible for the reaction with ROS and is called a receptor (R), while the second one is called a reporter (F) and is a fluorophore whose fluorescent properties change depending on the redox status of R. Recently, Tang and coworkers designed a "switch on" type of FP with a tripolycyanamide scaffold as F and two catechol moieties as receptors for O2•− (Figure 2a) [20]. An example of a "switch off" catechol-based sensor used for the detection of •OH and H2O2 is presented in Figure 2b [21]. The catechol functionality was also utilized in a dipyrrometheneboron difluoride (BODIPY)-based fluorescent probe, shown in Figure 2c; this probe, however, is sensitive towards hypochlorous acid and hypochlorite [22]. Another probe, a lipophilic derivative of BODIPY (Figure 2d), with sensitivity to oxidation comparable to that of endogenous fatty acyl moieties, was proposed by Drummen and coworkers as an FP for monitoring lipid peroxidation and antioxidant efficacy [23]. Recently, Cosa and his group designed a series of BODIPY-α-tocopherol (and other phenolic antioxidant) FPs with excellent sensitivity towards trace amounts of peroxyl and alkoxyl radicals ("switch on" probes, see Figure 3) [24-28].
Herein, we describe the synthesis and preliminary results obtained for a molecule in which the reporter moiety (BODIPY) is covalently bonded to two receptor segments sensitive toward peroxyl radicals, i.e., two phenol moieties connected via double C=C bonds to positions 3 and 5 of the BODIPY core (Scheme 1). Phenolic segments in NB-2 are responsible for scavenging peroxyl radicals. In this particular case, we deliberately used simple phenol moieties (monohydroxy, non-hindered phenols) that are less reactive towards peroxyls than other natural antioxidants (tocopherols, carotenoids, and melatonin). Such a choice can be rationalized because our FP was designed for monitoring the system at the moment when other (more reactive) antioxidants have been consumed. The nitrophenyl moiety present at the meso-position of BODIPY is an example of a functional group that can be relatively easily converted into other functionalities in order to assemble the FP with other molecules or nanoparticles via the aromatic ring.
Chemicals and Reagents
Chemicals and reagents for synthesis: starting materials, reagents, and solvents were purchased from Sigma-Aldrich, Acros Organics, and Combi-Blocks and were used without any additional purification. The reaction progress was monitored by thin-layer chromatography performed on commercial Kieselgel 60 F254 silica gel plates with a UV254 fluorescence indicator (Merck, TLC silica gel 60 F254). For the detection of components, UV light at λ = 254 nm or λ = 365 nm was used.
2.2. General Information

Melting points were recorded using the OptiMelt Automated Melting Point System from Stanford Research Systems.
Method B (see Scheme 1): under a nitrogen atmosphere, 2,4-dimethylpyrrole (0.610 mL, 5.93 mmol, 2.2 equiv.) was slowly added to a DCM (50 mL) solution of 4-nitrobenzoyl chloride (500 mg, 2.69 mmol). The reaction mixture was stirred overnight at room temperature. Next, the flask was opened, TEA (3.00 mL, 21.5 mmol, 8.0 equiv.) was added, and the reaction mixture was stirred (still open) for an hour. Then, the flask was closed, flushed with N2, and BF3 etherate (4.00 mL, 32.4 mmol, 12 equiv.) was quickly added (rapidly dropwise or as a steady stream). After stirring the reaction mixture for another hour, the solvents were evaporated and the crude mixture was purified by flash chromatography (dry loading) using pentane/diethyl ether (gradient 3/1 -> 2/1; v/v) as the eluent. Compound NB-1 was obtained as a red precipitate (670 mg, 67% yield). Both methods gave the same compound, with ¹H NMR data in agreement with the literature [29].
Preparation of Micelles
The micelles were prepared using the methodology described in our previous kinetic studies [30,31]. Glass test tubes with 19 µL of methyl linoleate (LinMe) and 10.5 mL of 16 mM Triton X-100 were stirred on a vortex mixer for 60 s. Then, 10.5 mL of buffer was added and the mixture was shaken again for 60 s.
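As a quick sanity check on the final composition quoted later in the text (2.73 mM LinMe, 8 mM Triton X-100), the dilution arithmetic can be reproduced in a few lines; note that the molar mass and density of methyl linoleate are literature values assumed here, not data from this paper:

```python
# Sketch: back-calculating the final micelle composition from the recipe.
MW_LinMe = 294.47         # g/mol, literature value (assumption)
rho_LinMe = 0.889         # g/mL, literature value (assumption)
v_LinMe_mL = 19e-3        # 19 uL of neat methyl linoleate
v_total_mL = 10.5 + 10.5  # Triton solution + buffer (the 19 uL is negligible)

c_LinMe = (v_LinMe_mL * rho_LinMe / MW_LinMe) / (v_total_mL / 1000.0)  # mol/L
c_Triton = 16e-3 * 10.5 / v_total_mL                # two-fold dilution of 16 mM
print(f"LinMe  = {c_LinMe*1e3:.2f} mM")             # ~2.73 mM
print(f"Triton = {c_Triton*1e3:.1f} mM")            # 8.0 mM
```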
Methodology of Autoxidation Measurements
The ability of NB-2 to trap peroxyl radicals was evaluated by monitoring the rate of peroxidation of methyl linoleate dispersed in the micellar system. The uptake of dissolved oxygen during peroxidation of micelles was measured at 37 °C using an RC650 Respirometer (Strathkelvin Instruments) equipped with a Clark-type electrode, in the same way as described in our previous papers [30-32]. The samples were buffered at pH 7.0 with a Tris buffer, and the magnetically stirred chambers, containing 3 mL of micelles, were saturated with oxygen. The electrode was placed inside the chambers and peroxidation was initiated by the injection of an aqueous solution of ABAP (final concentration 10 mM). After 10% of the oxygen was consumed, 10 µL of the studied compound (PMHC, NB-1, or NB-2) in ethanol was added, giving a final concentration of the added compound of 1.0 µM.
UV-vis Studies of Stability and Reactivity in Methanol and in Micelles
UV-vis spectra of NB-2 were recorded on a Varian Cary 50 spectrometer (Agilent Technologies, Santa Clara, CA, USA) using quartz QS cuvettes with a 10 mm optical path length. Stability measurements: UV-vis spectra were recorded for 180 min for NB-2 in methanol at 37 °C. For reactivity studies in methanol, peroxyl radicals were generated from AIBN by thermolysis (30, 37, and 50 °C; spectra were recorded every 5 min) or photolysis (ambient temperature).
For studies of reactivity with peroxyl radicals in micelles at pH 4.0 and 7.0, a water-soluble initiator (ABAP) was used: 130 µL of NB-2 in ethanol and 100 µL of an aqueous solution of ABAP were injected into a quartz cuvette (with magnetic stirring) containing 2 mL of micelles. The final concentrations were 2.73 mM LinMe, 8 mM Triton X-100, 25 mM ABAP, and 9.0 µM NB-2. UV-vis spectra were recorded every 15 min.
Spectrofluorometric Measurements
Spectrofluorometric measurements were performed on a Cary Eclipse fluorescence spectrophotometer equipped with a Peltier system and magnetic stirrer (Agilent Technologies, Santa Clara, CA, USA). Measurements were performed in the liquid phase (methanol or the micellar system) using quartz QS fluorescence cuvettes with a 10 mm optical path length. The most frequently used configuration parameters were λex = 575 nm, slits = 5 and 5 nm, and gain = medium.
Experiments in methanol: a stock solution of AIBN in methanol was added with a syringe into a thermostated quartz cell containing NB-2 dissolved in methanol. The final concentrations were 15 or 10 µM NB-2 and 5 mM AIBN. For micellar systems, the water-soluble azo-initiator (ABAP) was used, but the methodology remained the same, and the samples contained 2.73 mM LinMe, 8 mM Triton X-100, 10 µM NB-2, and 10 mM ABAP. Automatic spectra collection was started immediately after the initiator was injected into the sample. The sample solution inside the cuvette was continuously stirred with a magnetic stirrer. Excitation and emission spectra were recorded at room temperature or at 30, 37, or 50 °C, depending on the experiment. The temperature, duration of the experiment, and intervals for spectra collection varied between experiments and are described for every measurement individually.
Photolysis of AIBN
In order to produce peroxyl radicals by the photolytic decomposition of AIBN, a solution of 15 µM NB-2 in methanol was placed in a fluorescence quartz cell (optical path 10 mm) and a methanolic solution of AIBN was injected into it with a glass microsyringe (final concentration of AIBN was 5 mM). The solution was continuously stirred with a magnetic bar and the cell was periodically irradiated for 5 s with a 365 nm UV High Power LED (2.7 W, 1200 mW radiant power). After every 60 s, an emission spectrum was recorded and the process was repeated for the same sample. Measurements were conducted at room temperature. A photograph of the UV irradiation chamber is presented in Figure S5 (Supplementary Material). The same methodology was used for UV-vis measurements.
Synthesis and Spectral Characteristics of NB-2
NB-1 was prepared using two different methods. As the yield of the product obtained by method A was not satisfactory, we employed the conditions described in method B (see Scheme 1). With the acyl chloride instead of the aldehyde as the substrate, the synthesis of NB-1 was much cleaner, and the compound could be easily isolated in 67% overall yield. Then, Knoevenagel condensation was carried out in refluxing acetonitrile with freshly dried molecular sieves, as proposed by Cicchi and coworkers [33]. This method gave NB-2 with a satisfactory yield of 40%. NB-2 is soluble in alcohols (methanol, ethanol), acetonitrile, THF, ethyl acetate, and acetone. Moderate solubility is observed in CH2Cl2 and CHCl3. We did not measure the octanol/water partition coefficient, but we roughly assumed that NB-2 should be similarly or even more lipophilic than the fluorescent probe with carboxyl functionality presented in Figure 2d, for which Drummen et al. determined logP = 2.49 ± 0.04 at pH 7.4 (i.e., in its anionic form) [23]. Figure 4 shows the absorption and emission spectra of NB-2 in methanol at 23 °C. The compound has its absorption maximum at λ = 647-653 nm (the molar absorptivity was calculated as ε649nm = 5.1 × 10^4 M−1·cm−1), with a shoulder band showing a local maximum at 596 nm, and strong absorption at 373 nm (ε373nm = 3.0 × 10^4 M−1·cm−1).
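For readers reproducing such measurements, the reported molar absorptivity lets one estimate the probe concentration directly from absorbance via the Beer-Lambert law (A = εcl); the sketch below is illustrative only, with the absorbance value chosen hypothetically:

```python
# Minimal sketch: Beer-Lambert estimate of [NB-2] from absorbance at 649 nm.
eps_649 = 5.1e4   # M^-1 cm^-1, reported molar absorptivity of NB-2
path_cm = 1.0     # cm, cuvette path length used throughout the paper
A_649 = 0.51      # hypothetical measured absorbance (assumption)

conc_M = A_649 / (eps_649 * path_cm)
print(f"[NB-2] = {conc_M*1e6:.1f} uM")  # 10.0 uM for this example
```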
In most cases, boron dipyrromethene difluoride (BODIPY)-based dyes are fluorochromes that display bright green fluorescence in the range of 450 to 550 nm. NB-2 is a derivative in which the boron dipyrromethene difluoride core is substituted with a nitrophenyl group at the meso-position and with two phenolic moieties via stilbene-like interconnections. Due to the extended conjugation, intact NB-2 displays a bright red fluorescence (see the photograph in Figure S5) with an emission maximum at 674 nm (red dashed line in Figure 4).
The stability of NB-2 in aerated methanol at 37 °C was examined using UV-vis measurements (for details, see Figure S6). No changes in the absorbance of the compound were observed over at least 180 min. Considering that most of the further measurements conducted in this study lasted at most 120 min under similar conditions, we can assume that all of the color and emission changes that occurred during the tests came from the reaction with radicals.
Kinetic Parameters of Reaction of NB-2 with Peroxyl Radicals
Two features are crucial for a molecule to be used as an FP for sensing/imaging peroxyl radicals: (i) the molecule should exhibit relatively high reactivity toward alkylperoxyl radicals, and (ii) such a reaction should result in an increase or decrease of fluorescence (switch on/off FP). Therefore, in the first step, we examined the ability of NB-2 to react with the peroxyl radicals mediating lipid peroxidation in a micellar system. We selected methyl linoleate (LinMe) as a lipid representing polyunsaturated fatty acids (PUFA), the constituents of natural lipid bilayers and biomembranes. During the measurement, thermal decomposition of the water-soluble azo-initiator ABAP produces primary radicals (R•) that are immediately converted into water-soluble peroxyl radicals able to abstract the H atom from the weakest C-H bond (the bis-allylic position) of the LinMe molecule. This process is regarded as the initiation of peroxidation (Reaction (1)) and triggers the propagation described by Reactions (2) and (3). A typical plot of oxygen consumption during such spontaneous PUFA peroxidation is presented in Figure 5 as curve 1, with the rate of oxidation Rox = −d[O2]/dt.
The rate of peroxidation can be significantly reduced in the presence of small amounts of chain-breaking antioxidants (CBAs): small molecules, mainly phenols, thiols, aromatic amines, or terpenoids, that sacrificially eliminate peroxyl radicals from the propagation chain (Reaction (5)) [34].
Three features are required for an effective CBA: first, its reaction with radicals (Reaction (5)) should be much faster than the propagation step of peroxidation (kinh >> k3); second, the products of Reaction (5) should not themselves be highly reactive radicals; and finally, as the propagation occurs within a lipid membrane, an efficient CBA should be localized inside it. An example of such a compound is PMHC (2,2,5,7,8-pentamethylchroman-6-ol), an analogue of α-tocopherol (the most active natural CBA). The addition of 1.0 µM PMHC into the system causes a lag phase (an induction period of length τ, see curve 2 in Figure 5). The observed time τ depends on the initial concentration of the antioxidant, [ArOH]0, and the rate of initiation, Ri:

τ = n[ArOH]0/Ri (6)

Ri can be determined from transformed Equation (6) when τ is measured for a known concentration of PMHC as a standard CBA, for which n = 2.0. The inhibition rate constant kinh can be determined from an integral form of the rate equation:

Δ[O2]t = −(kp[LH]/kinh) ln(1 − t/τ) (7)

where Δ[O2]t stands for the molar oxygen consumption [M] recorded at time t within the induction period (t < τ). With kp = 36 M−1·s−1 for methyl linoleate dispersed in Triton X-100 micelles (assuming the same kp as in SDS micelles [35]), we calculated kinh for PMHC and for NB-2. The obtained kinetic parameters are listed in Table 1. Derivative NB-1 (without phenol moieties) showed a slight retarding effect on the studied reaction (see line 3 in Figure 5), but its activity was not sufficient to effectively break the peroxidation chain and no induction period was observed.

Table 1. The lengths of induction periods, τ, the rates of initiation, Ri, the slow-down factor (Rox/Rinh, the ratio of the rate of the non-inhibited process to the rate of the inhibited process), and the inhibition rate constants, kinh, calculated for autoxidation of 2.73 mM LinMe dispersed in 8 mM Triton X-100 micelles in the presence of 1 µM PMHC, NB-1, or NB-2. The experiments were performed at 37 °C and pH 7.0. Peroxidation was initiated by 10 mM ABAP. Each measurement was run 4-6 times (see Table S1). Values are expressed as the mean ± standard deviation (SD).
Table 1 footnotes: (a) for Ri determination, see Equation (6); (b) the Rox/Rinh ratio indicates how many times the inhibited oxidation is slower than the spontaneous (non-inhibited) process, with Rox = (4.3 ± 0.3) × 10^−7 M·s−1; it can also be considered as the ratio of the kinetic chain lengths (the number of propagating cycles) of the inhibited and non-inhibited processes; (c) for this system, no induction period was detected (see curve 3 in Figure 5) and the rate of the retarded process is listed.
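To make the use of Equations (6) and (7) concrete, the sketch below shows how Ri and kinh would be extracted from oxygen-uptake data of the kind plotted in Figure 5. The constants are the values quoted in this section; the function names and the example induction period for the PMHC standard are our own illustrative assumptions, not data from the paper:

```python
import numpy as np

k_p = 36.0    # M^-1 s^-1, propagation rate constant for LinMe in micelles
LH = 2.73e-3  # M, methyl linoleate concentration

def rate_of_initiation(tau_s, conc_ArOH_M, n=2.0):
    """Equation (6) rearranged: R_i = n*[ArOH]0/tau (n = 2.0 for PMHC)."""
    return n * conc_ArOH_M / tau_s

def k_inh_from_uptake(t_s, dO2_M, tau_s):
    """Equation (7): d[O2]_t = -(k_p*[LH]/k_inh)*ln(1 - t/tau).
    A linear fit of the oxygen consumption d[O2]_t against
    -ln(1 - t/tau) over t < tau gives slope = k_p*[LH]/k_inh."""
    x = -np.log(1.0 - t_s / tau_s)
    slope = np.polyfit(x, dO2_M, 1)[0]
    return k_p * LH / slope

# Hypothetical example: an induction period of 465 s for 1.0 uM PMHC
# reproduces an R_i of ~4.3 nM/s, the value used throughout this paper.
print(rate_of_initiation(465.0, 1.0e-6))  # ~4.3e-9 M/s
```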
Compound NB-2 has a noticeable effect on the rate of methyl linoleate peroxidation, decreasing it considerably, from Rox = (4.3 ± 0.3) × 10^−7 M·s−1 for spontaneous peroxidation to Rinh = (0.9 ± 0.1) × 10^−7 M·s−1 for the sample containing 1 µM NB-2 (Figure 5), with an induction period τind = 20.2 ± 0.8 min (average of 6 measurements). Based on Equation (7), we calculated the bimolecular rate constant for the reaction of NB-2 with peroxyl radicals, kinh = 1000 ± 100 M−1·s−1, as would be expected for a moderate antioxidant. In this case, we regarded the reactivity of NB-2 being one order of magnitude smaller than that of the tocopherol analogue (PMHC) as an advantage, because the compound will perform its reporting role when other, more reactive antioxidants have been exhausted in the system. Finally, from rearranged Equation (6), with Ri, [NB-2]0, and τ (in seconds) taken from Table 1, the stoichiometric coefficient n = 5.3 was calculated for NB-2. This parameter exceeds the predicted number of four radicals scavenged by two phenolic moieties; however, the presence of a double C=C bond conjugated with an aryl ring provides additional capacity to trap more radicals than could be expected for simple phenols.
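As a quick consistency check (our own back-calculation, using the Ri = 4.3 nM·s−1 quoted with Table 1), rearranged Equation (6) gives

n = Ri·τ/[NB-2]0 = (4.3 × 10^−9 M·s−1 × 1212 s)/(1.0 × 10^−6 M) ≈ 5.2,

in line with the reported n = 5.3; the small difference reflects rounding of τ and Ri.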
UV-vis and Spectrofluorimetric Study of the Reaction of NB-2 with Peroxyl Radicals in Methanol
With the knowledge that NB-2 reacts with the peroxyl radicals mediating the peroxidation chain in a water/lipid biphasic system, we investigated whether this radical-trapping molecule could report on the presence of peroxyl radicals. Figure 6 presents the results of several experiments monitoring the photochemical response of NB-2 to peroxyl radicals generated from AIBN. At room temperature, during the photodissociation of AIBN in the liquid phase, alkyl radicals are produced with a moderately high quantum yield of ca. 0.44 (in benzene) [36], Reaction (8).
AIBN + hν → 2 R• + N2 (8)

Figure 6. (A) Absorption spectra of a solution of 15 µM NB-2 and 5 mM AIBN in methanol. The sample (optical path 10 mm) was periodically exposed to 5 s of irradiation with a 365 nm UV High Power LED; then, after 1 min, the spectrum was taken and the process was repeated for the same sample. (B) Emission spectra recorded with the same irradiation methodology (λex = 575 nm, slits = 5 and 5 nm, and gain = high).
In an oxygen-saturated solvent, once the R• radicals escape from the solvent cage (the cage effect), they immediately react with O2, producing peroxyl radicals (Reaction (2), with k2 ≈ 10^9 M−1·s−1). In our experiment, a sample of 5 mM AIBN in methanol containing 15 µM NB-2 was periodically exposed to 365 nm UV light (5 s), and after 60 s, the absorbance and emission spectra were recorded (in separate series of experiments). The obtained UV-vis spectra are presented in Figure 6A, demonstrating that oxidation (H atom abstraction from the phenolic moieties, NB-2 → NB-2ox) is accompanied by a ten-fold decrease of the absorbance at λ = 647 nm, a three-fold decrease at 373 nm, and a two-fold increase of the absorbance at λmax = 575 nm. Consequently, we decided to use 575 nm as the excitation wavelength for NB-2ox. Figure 6B presents the emission spectra of a sample of NB-2 reacting with peroxyl radicals (a separate series, but the same concentrations and conditions). As the oxidation of NB-2 progresses, an intensive emission of NB-2ox appears with a maximum at λ = 605-615 nm. The intensity of the initial red fluorescence at 674 nm, originating from the parent molecule (NB-2), increases much more slowly and overlaps with the aforementioned band at 605-615 nm. The emission intensity measured at 612 nm undergoes a ca. 14-fold enhancement, suggesting that NB-2 is a promising candidate for a fluorescent probe for peroxyl radicals.
In the next series of experiments, the radicals were generated by thermolysis of AIBN in methanol. The quartz cell with the sample was placed in a thermostated pocket of the spectrofluorometer and the emission at 612 nm was monitored online for 120 min. Upon thermolysis, the same primary alkyl radicals were produced (Reaction (9)) as during the photolysis of AIBN.
In an air-saturated solution, R• radicals were immediately converted into peroxyl radicals (Reaction (2)); thus, peroxyl radicals were generated at a constant rate Rg = 2 f kd [AIBN]. For a 5 mM initial concentration of AIBN, with kd = 0.28 × 10^−6 s−1 at 37 °C and 2.2 × 10^−6 s−1 at 50 °C in benzene [37], and with the assumption that f ≈ 0.5 [38], the values of Rg are 1.5 nM·s−1 at 37 °C and 11 nM·s−1 at 50 °C. The first value is comparable to Ri, which was determined during the peroxidation of LinMe in the micellar system in Section 3.2.
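The quoted generation rates are easy to verify from Rg = 2 f kd [AIBN]; a minimal check follows (the small difference from the quoted 1.5 nM·s−1 at 37 °C reflects rounding of kd and f):

```python
# Sketch: radical generation rate R_g = 2*f*k_d*[AIBN] at the two temperatures.
f = 0.5        # assumed cage-escape efficiency [38]
AIBN = 5.0e-3  # M, initial initiator concentration
for T_C, k_d in [(37, 0.28e-6), (50, 2.2e-6)]:   # k_d in s^-1, benzene [37]
    R_g = 2.0 * f * k_d * AIBN
    print(f"{T_C} C: R_g = {R_g*1e9:.1f} nM/s")  # ~1.4 and ~11 nM/s
```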
The emission spectra presented in Figure 7A-C show that the progress of oxidation (H atom abstraction from the phenolic hydroxyl groups) is manifested by the increased emission at 612 nm. The slopes of the lines in panel D can be considered representations of the rates of oxidation of NB-2. The comparison of those slopes gives relative oxidation rates of 1:4:23 for the processes carried out at 30, 37, and 50 °C, respectively, which is in good agreement with the accessible rate constants, kd, for the thermolysis of AIBN (Reaction (9)) and with kd calculated from the Arrhenius equation with Ea = 128.9 kJ/mol and logA = 15.19 s−1 [38].
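That agreement can be checked directly from the Arrhenius equation, kd = A·exp(−Ea/RT); the short script below (our own verification of the quoted parameters) reproduces the observed ordering:

```python
import math

# Sketch: Arrhenius check of the relative oxidation rates (Ea, logA from [38]).
Ea = 128.9e3  # J/mol, activation energy for AIBN thermolysis
logA = 15.19  # log10 of the pre-exponential factor, A in s^-1
R = 8.314     # J/(mol K)

def k_d(T_celsius):
    T = T_celsius + 273.15
    return 10**logA * math.exp(-Ea / (R * T))

ks = [k_d(T) for T in (30, 37, 50)]
print([round(k / ks[0], 1) for k in ks])  # ~[1, 3.2, 24] vs observed 1:4:23
```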
In both kinds of experiments carried out in methanol, with peroxyl radicals generated by photolysis and by thermolysis of AIBN, we observed a maximal 14-fold fluorescence enhancement. Such an enhancement cannot be assigned to processes other than the reaction with peroxyl radicals (we exclude the reaction with molecular oxygen, even over prolonged times, vide supra, and see Figure S6). Therefore, we assume that the oxidized form of NB-2 is responsible for the emission at 605-612 nm and 674 nm. Scheme 2 presents a proposed mechanism of a two-step oxidation of one receptor segment of NB-2: H atom abstraction from the phenolic hydroxy group by a peroxyl radical and a subsequent recombination of the formed phenoxyl radical with another peroxyl. This proposed mechanism is in accordance with the standard mechanism of the reaction of peroxyl radicals with derivatives of hydroxycinnamic acids, such as p-coumaric, caffeic, ferulic, and sinapic acids [39]. Since the recombination of a phenoxyl radical with ROO• is very fast (for para-substituted phenols, kr > 10^8 M−1·s−1 [36]), the first reaction, H atom abstraction, with kinh = 10^3 M−1·s−1, will be the rate-determining step. Each hydroxycinnamyl residue can react with two peroxyl radicals, giving n = 4, which is smaller than the experimentally determined n > 5 (Section 3.2). This effect can be explained by the possible addition of a further ROO• to the double C=C bond in structures a2 and a3.

Scheme 2. Reaction of the hydroxycinnamyl residue (a single one is presented for simplicity) with two peroxyl radicals ROO• and the formation of three isomeric products of recombination.
UV-vis and Spectrofluorimetric Study of the Reaction of NB-2 with Peroxyl Radicals in Micelles
Using the excitation wavelength of 575 nm (see the previous section) for the visualization of oxidized NB-2 is advantageous because, for λ > 500 nm, the signal distortion caused by autofluorescence in biological and biologically relevant systems is minimal. From this point of view, NB-2 should be a highly suitable fluorescent probe for monitoring peroxidation reactions; however, other environmental and microenvironmental factors can affect its fluorescence and detectability, for example: pH changes, solvent polarity, background fluorescence, light scattering, and interactions of the fluorochrome with other fluorochromes present in the sample. Even though experiments with biological systems are beyond the scope of this preliminary report, we decided to check the behavior of NB-2 in a micellar system, the same as for the kinetic measurements described in Section 3.2, with ABAP producing the peroxyl radicals at the same rate of initiation, R_i = 4.3 nM·s^-1 (see Table 1). The emission spectra recorded every 2.5 min during the peroxidation reaction are collected in Figure 8. The emission spectrum of NB-2 in the micellar system before the oxidation started (Figure 8, red line; see also Figure S3) shows two bands with similar intensity: the first one with a maximum at 674 nm and a second one at 616 nm. The cumulative plot presented in Figure 8 (inset) shows that the increase of emission recorded in the micellar system follows the same trend as in methanol (with AIBN). The observed 3.9-fold increase of fluorescence intensity during 20 min of reaction is lower than the analogous one in methanol, but is comparable to the 4-fold increase in emission observed by Cosa for his probe B-TOH (Figure 3) tested in phospholipid DMPC vesicles at pH 7.4 with the radicals generated with ABAP [25]. After 25 min of peroxidation, a slow decrease in emission (Figure 8, inset) can be noticed as an effect of radical-mediated BODIPY degradation at further stages of peroxidation [40]. We also monitored the changes in absorbance during the prolonged oxidation (3 h) carried out in the micellar system at pH 4.0 and 7.0. UV-vis spectra recorded every 15 min during peroxidation are presented in the Supplementary Material (Figure S7 for the stability test without peroxidation, and Figures S8 and S9 for peroxidation at pH 7 and pH 4). The observed effects are the same as for the reactions carried out in methanol, i.e., a slow decrease in intensity of the broad bands with λmax at 373 and 660 nm and a parallel build-up of the bands at 587 and 513 nm.
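For readers who want to relate the initiation rate quoted above to the duration of the emission build-up, the standard inhibited-autoxidation relation τ = n·[AH]₀/R_i gives a rough induction-period estimate. The sketch below is purely illustrative: the probe concentration used is an assumed placeholder, not a value reported in this work.

```python
# Induction-period estimate from standard inhibited-autoxidation kinetics:
#   tau = n * [AH]_0 / R_i
n = 5          # stoichiometric coefficient; the paper reports n > 5 for NB-2
AH0 = 1.0e-6   # ASSUMED illustrative probe concentration, M (not given here)
Ri = 4.3e-9    # rate of peroxyl-radical initiation with ABAP, M/s (Table 1)

tau = n * AH0 / Ri
print(f"induction period ~ {tau:.0f} s (~{tau / 60:.0f} min)")
# -> ~1163 s (~19 min) for these inputs, the same order of magnitude as the
#    ~20-25 min window of increasing emission seen in Figure 8
```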
Conclusions
We designed and prepared a fluorescent probe NB-2 and described its use for the detection of peroxyl radicals. This probe is composed of two receptor segments (4-hydroxycinnamyl moieties) sensitive towards peroxyl radicals that are conjugated with a fluorescent reporter, dipyrrometheneboron difluoride (BODIPY), whose emission changes depend on the oxidation state of the receptors.
According to the oxygen intake measurements performed with a Clark-type oxygen electrode, NB-2 behaves like a moderate chain-breaking antioxidant during the peroxidation of methyl linoleate (LinMe) in Triton X-100 at pH 7, initiated with a water-soluble azo-initiator, ABAP, at 37 °C. In this system, the rate constant for the reaction with peroxyl radicals (k_inh) is 1000 ± 100 M^-1·s^-1, one order of magnitude smaller than k_inh for the reaction of PMHC (an analogue of α-tocopherol). The stoichiometric coefficient, n, determined for NB-2 in this system, is above five, indicating that peroxyl radicals are trapped not only by the phenolic moieties (which would result in n = 4), but also by other structural parts of NB-2.
The reporter segment of NB-2 absorbs and emits light in the visible region of the spectrum (λ > 500 nm). The whole molecule shows thermal stability (up to 3 h in methanol saturated with oxygen) and chemical stability upon 30 min of exposure to peroxyl radicals. UV-vis and spectrofluorimetric experiments demonstrated that NB-2 yields a highly fluorescent product upon scavenging peroxyl radicals in a homogeneous solution (methanol), with AIBN used as a source of radicals generated by photolysis or thermolysis, and in a micellar system (LinMe/Triton X-100, pH 7), with ABAP used as a thermal source of peroxyl radicals. The emission enhancement upon such exposure to peroxyl radicals is 14-fold in methanol and 4-fold in the micelles.
The preliminary results presented in this report show that NB-2 is a promising novel lipophilic fluorescent "switch on" probe. Detailed photochemical characteristics, studies of selectivity and interactions with other antioxidants, measurements carried out in biologically relevant systems, and possible applications of NB-2 will be presented in a full paper.
Author Contributions: Individual contributions of the authors: conceptualization, G.L.; methodology of synthesis, K.S.; synthesis, K.S. and J.K.; methodology of fluorometric and spectrophotometric measurements, J.K.; kinetic studies, J.K. and A.K.; manuscript writing, editing and reviewing, all authors; supervision, G.L. All authors have read and agreed to the published version of the manuscript.
Navigating the Grey Area: Students' Ethical Dilemmas in Using AI Tools for Coding Assignments
Abstract: Integrating artificial intelligence (AI) in higher education, particularly in coding assignments for Information Technology (IT) students, represents a rapidly evolving research area with significant implications for academic practices and integrity. This study focuses on the ethical challenges faced by IT students when using AI tools like ChatGPT for coding assignments. Despite the growing use of AI in education, there is a notable gap in understanding how students perceive and navigate the ethical dilemmas associated with these technologies. To address this gap, this study employed a thematic analysis of qualitative data collected from interviews with IT students. The results reveal a complex landscape of ethical considerations, including issues of originality, academic integrity, and the potential for misuse of AI tools. Students reported challenges in balancing the benefits of AI assistance with the need to maintain independent learning and adhere to ethical standards. The implications of this research are significant for educators, institutions, and policymakers. Understanding the ethical challenges students face can inform the development of more effective teaching strategies, assessment methods, and institutional policies. This study contributes to the ongoing dialogue about AI ethics in academia, providing valuable insights for creating an educational environment that leverages the power of AI while upholding the principles of academic integrity and meaningful learning.
Introduction
Integrating artificial intelligence (AI) in education, particularly in higher education, represents a rapidly evolving research area with far-reaching implications for academic practices and integrity (Takács et al., 2023). This study focuses on a critical issue within this domain: the ethical challenges faced by Information Technology (IT) students when using AI tools like ChatGPT for coding assignments. As AI technologies become increasingly sophisticated and accessible, their potential to revolutionize learning is matched by their capacity to disrupt traditional academic norms and practices, raising complex ethical questions that demand careful consideration. The issues of academic dishonesty and ethics have long been a challenge (Mutanga, 2020). The importance of this issue cannot be overstated. As Mohamad & Nazlan (2024) point out, "the probability that AI will soon relieve humans of the daily tasks that humans usually do and as such, the trust in this technology must be paramount." This sentiment underscores the urgency of addressing the ethical implications of AI in education, particularly in coding assignments, where the line between leveraging AI as a learning aid and potential academic misconduct can become blurred. The complexity of this issue lies in balancing the benefits of AI as a learning tool with the fundamental educational goals of skill development, critical thinking, and academic integrity (Slimi & Villarejo-Carballido, 2024). The rapid advancement of AI has led to its integration into various fields, including education. In the realm of coding assignments, AI tools have emerged as a transformative technology, offering both opportunities and ethical challenges. This sentiment is echoed by Slimi & Carballido (2023), who highlight the "ethical challenges and dilemmas" that have surfaced with the swift integration of AI into the education system, particularly concerning students' misuse of the technology. The exploration of these ethical dilemmas is crucial to ensure that the integration of AI into coding education is conducted responsibly and transparently.
Recent literature has begun to explore the multifaceted impact of AI on education. Mohamad & Nazlan (2024) propose a framework for evaluating the ethics of AI, emphasizing the need for AI to adhere to moral rules and not be used for illicit purposes. They stress the importance of addressing ethical issues such as "human sense and empathy" that may be lacking in AI algorithms, as well as concerns over "the security of data" and "the opaque nature of the algorithms." The authors argue that it is "imperative to evaluate their ethics" to ensure that the technology being used does not violate ethical principles.
In the realm of IT education specifically, the emergence of AI-generated code has posed a significant challenge to traditional computer science education. Porayska-Pomsta et al. (2022) argue that "instructors should instead reconsider assessment design in their pedagogy in light of recent developments, with a focus on how students build knowledge, practice skills, and develop processes." This suggests a need to rethink how computer science is taught and assessed, recognizing the opportunities and limitations presented by AI tools.
The perspectives of educators have also been explored in recent studies. Lo (2023) conducted research to gather the views of university programming instructors on how they plan to adapt to the growing presence of AI code generation tools, such as ChatGPT and GitHub Copilot. The authors found that "in the short-term, many planned to take immediate measures to discourage AI-assisted cheating," while longer-term opinions diverged on whether to ban or integrate these tools into their courses. This highlights the diversity of approaches and the need to further explore best practices in this rapidly evolving landscape.
Similarly, Agrawal et al. (2021) investigate the balance between AI advancements and ethical concerns in higher education assessments. Their survey of diverse educators found a "growing interest in AI-based educational tools, along with a demand for rigorous training and ethical standards for their equitable application." This underscores the importance of addressing ethical considerations as AI becomes more prevalent in assessment practices.
The ethical implications of AI extend beyond coding assignments to other areas of academic work. Chan (2023) explores the ethical dilemmas in using AI for academic writing, focusing on the field of nephrology. They highlight the potential for "scholars incorporating AI-generated text into their manuscripts, potentially undermining academic integrity." The authors propose solutions, including "the adoption of sophisticated AI-driven plagiarism detection systems" and "a robust augmentation of the peer-review process with an 'AI scrutiny' phase," to mitigate the unethical use of AI in academia.
From the student perspective, Gupta et al. (2024) present a study on the impact of AI tools, such as ChatGPT, on the student experience in programming courses. Their preliminary findings "describe a range of students' attitudes and behaviours towards ChatGPT that provides insight for future research and plans for incorporating such AI tools in a course." This underscores the need to understand the student perspective and its implications for the effective integration of AI tools in programming education.
The ethical challenges associated with AI in education are part of a broader discourse on AI ethics. Abimbola et al. (2024) examine the ethical dilemma of regulating AI chatbots, particularly ChatGPT. The study addresses potential ethical issues related to "data privacy, algorithmic bias, and the potential for chatbots to replace human interaction and support." The author emphasizes the need to find a balance between regulation and innovation to maximize the benefits of ChatGPT while minimizing its potential harms.
In the medical field, Ciriaco & Marín (2023) explore the ethical dilemmas of using AI. They highlight issues related to "informed consent, respect for confidentiality, protection of personal data, and the accuracy of the information it uses." The authors emphasize that the ethical analysis of AI in medicine must address "nonmaleficence and beneficence, both in correlation with patient safety risks, ability versus inability to detect correct information from inadequate or even incorrect information." While these concerns are specific to medicine, they highlight the broader ethical considerations that arise with the integration of AI in professional and educational settings. Kooli (2023) conducts an ethics-based audit on leading Large Language Models (LLMs), including GPT-4, to assess their moral reasoning and normative values. The author employs an "experimental, evidence-based approach that challenges the models with ethical dilemmas" to probe human-AI alignment. Their findings include "underlying normative frameworks with clear bias towards particular cultural norms" and "troubling authoritarian tendencies" in many of the models. This highlights the need for rigorous evaluation and regulation of AI systems to ensure alignment with human values and ethical principles.
While existing research provides valuable insights into the broader implications of AI in education and its ethical challenges, there is a notable gap in our understanding of how students, particularly in technical fields like computer programming, perceive and navigate these ethical dilemmas. The unique nature of coding assignments, which often involve problem-solving and algorithm development, presents specific challenges when considering the ethical use of AI tools. This gap is critical, as students' perspectives and decision-making processes are central to developing effective educational policies and ethical guidelines. The severity of this gap is underscored by the rapid adoption of AI tools among students, with a survey by Mohamad & Nazlan (2024) indicating that 67% of students have used ChatGPT for schoolwork. Furthermore, the rapid evolution of AI technology, exemplified by tools like ChatGPT, has outpaced the development of ethical guidelines and educational policies. This creates a pressing need for research that explores how students are adapting to these new tools and the ethical frameworks they are developing to guide their use (Egbe et al., 2016).
To address this gap, our study poses the main research question: How do IT students perceive and navigate the ethical challenges of using ChatGPT in coding assignments? Understanding students' perspectives is crucial for developing informed strategies to harness AI's potential while maintaining academic integrity and ensuring meaningful learning outcomes. The complexity of this research question lies in several factors: the evolving nature of AI technology, which requires ongoing reassessment of ethical guidelines; the diversity of coding assignments, presenting unique ethical considerations; varying levels of AI literacy among students, influencing their ethical decision-making; the intersection of institutional policies and personal ethics; and the potential impact on skill development, balancing AI as a learning aid against the need to develop crucial coding skills. These multifaceted aspects underscore the intricate ethical landscape that students must navigate when using AI tools in their academic work.
In this paper, we adopted Kitchener's Five Ethical Principles (Kitchener, 1984) as the theoretical framework to analyze students' ethical behavior and decision-making when using AI tools in coding assignments. This framework allows us to explore key aspects of students' ethical considerations. The chosen framework has been used in other contexts; for instance, Duncan & Geist (2022) investigated the understanding and awareness of ethics among psychology students.
Our findings reveal a complex landscape of ethical considerations, encompassing students' motivations, strategies for balancing AI assistance with independent learning, and perceptions of ethical AI use in academic settings. These insights provide a deeper understanding of the ethical dilemmas faced by IT students and their decision-making processes, offering valuable potential to inform the development of more effective policies on AI use in IT education. By understanding students' perspectives, educators and institutions can create realistic guidelines that address the challenges and opportunities presented by AI in coding education. Furthermore, this work contributes to broader discussions on adapting pedagogical approaches for a future where AI is integral to professional IT practice.
As the world moves forward in this era of rapid technological advancement, we must develop a comprehensive understanding of the ethical implications of AI in education. This study, therefore, represents a step towards that understanding, focusing on the perspectives of those at the forefront of this technological revolution: the students themselves.
The rest of the paper is structured as follows: the next section presents the theoretical framework; the methodology details the data collection and analysis processes, including transcription, coding, and thematic analysis; the findings and discussion highlight IT students' ethical dilemmas when using AI tools for coding assignments, focusing on originality, academic integrity, and AI misuse; and the conclusion summarizes the key findings and their implications for educators and policymakers, emphasizing the need for ethical guidelines.
Theoretical Framework
The ethical use of artificial intelligence (AI) in academic settings is a complex and multifaceted issue. To understand how students navigate the ethical dilemmas associated with the use of AI tools, this study employs Kitchener's Five Ethical Principles (Kitchener, 1984) as the theoretical framework. These principles provide a robust foundation for analyzing ethical behaviour and decision-making in many contexts, including education.
Autonomy
Autonomy refers to respecting the individual's right to make informed decisions about their actions. In the context of using ChatGPT, students' autonomy involves their ability to decide how and when to use the AI tool, provided they are aware of the ethical implications. This principle emphasizes the importance of students' understanding of academic integrity policies and their capacity to make choices that align with these guidelines. It also highlights the role of educational institutions in providing clear and comprehensive information about acceptable and unacceptable uses of AI tools.
Nonmaleficence
The principle of nonmaleficence centres on the obligation to avoid causing harm. Applying this principle to the use of ChatGPT involves ensuring that the use of AI does not negatively impact students' learning experiences, academic development, or the integrity of their work. This principle is crucial in understanding the potential risks associated with over-reliance on AI, such as the erosion of critical thinking skills and the temptation to engage in academic dishonesty.
Beneficence
Beneficence involves actively promoting the well-being of others. In this study, beneficence is considered in terms of how ChatGPT can enhance students' learning, support their academic performance, and contribute to their overall educational experience. This principle requires examining the positive aspects of AI use, such as providing additional resources for understanding complex topics, aiding in brainstorming and idea generation, and offering personalized learning support. It also involves balancing these benefits with potential drawbacks to ensure that the use of AI is genuinely beneficial to students.
Justice
Justice pertains to fairness and the equitable distribution of benefits and burdens. In the context of ChatGPT, this principle examines whether all students have equal access to AI tools and whether the use of these tools creates or exacerbates disparities among students. Justice also involves considering the fairness of using AI-generated content in academic work and the implications for grading and assessment. Ensuring that policies regarding AI use are applied consistently and fairly across different student groups is essential to uphold this principle.
Fidelity
Fidelity involves maintaining trustworthiness, honesty, and integrity in relationships and actions. For students, fidelity means adhering to academic integrity standards and being honest about their use of ChatGPT in their work. This principle underscores the importance of transparency in disclosing AI assistance and the ethical responsibility to produce original work. Fidelity also extends to the relationship between students and educators, emphasizing the need for clear communication and mutual understanding regarding the ethical use of AI tools.
Research Design
This research employs a qualitative case study approach to explore how students navigate ethical dilemmas when using ChatGPT for academic purposes. A case study design is chosen because it allows for an in-depth examination of the area under investigation (Hennink et al., 2020). This approach is particularly well-suited to understanding complex phenomena like ethical decision-making, where contextual factors play a significant role (Hancock et al., 2021; Schoch, 2020). In essence, the rationale for using a qualitative methodology is to capture the rich, detailed narratives of students' interactions with ChatGPT, providing insights that quantitative methods may overlook.
Data Collection Methods and Procedures
Data were collected from IT students enrolled in computer programming courses through a combination of semi-structured interviews and focus groups. This approach has been shown to yield robust and comprehensive data, allowing for the triangulation of findings and thereby enhancing the credibility of the results (Heiselberg & Stępińska, 2023). In the semi-structured interviews, a purposive sample of 20 students was selected based on their willingness to discuss their experiences with ChatGPT. Each interview lasted approximately 45-60 minutes and was conducted in person or via video conferencing, based on the participants' preferences. The interviews were audio-recorded with participants' consent and transcribed verbatim for analysis. The focus groups involved three separate sessions, each consisting of 6-8 students with varied levels of experience with ChatGPT. These focus groups, facilitated by the researcher, lasted about 90 minutes. Discussions were audio-recorded and transcribed to capture diverse perspectives. This approach encouraged the students to reflect on and debate their views in a group setting.
Data Analysis Techniques
The data analysis followed a systematic and iterative process to ensure thorough and credible findings. The primary techniques used were thematic analysis and narrative analysis. Thematic analysis involved coding the transcripts from interviews and focus groups using a combination of deductive and inductive approaches, as described in the work by Kiger & Varpio (2020). Initial codes were derived from Kitchener's Five Ethical Principles, while additional codes emerged from the data. Codes were grouped into broader themes that captured the critical aspects of students' ethical dilemmas and decision-making processes. Themes were refined through multiple rounds of review and discussion. Narrative analysis focused on understanding students' stories and experiences to identify common patterns and unique variations in how they navigated ethical dilemmas.
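To make the deductive part of this coding step concrete, the hypothetical Python sketch below tags interview excerpts with codes derived from Kitchener's principles via simple keyword cues. This is only an illustration of the idea; the actual analysis was performed by human coders, and the cue lists here are invented for the example.

```python
# Hypothetical illustration of deductive coding: excerpts are tagged with
# Kitchener-derived codes when they contain indicative keyword cues.
CODEBOOK = {
    "autonomy":       ["my own rules", "i decide", "my choice"],
    "nonmaleficence": ["harm", "over-reliant", "critical thinking"],
    "beneficence":    ["helps me", "understand concepts", "learn better"],
    "justice":        ["access", "fair", "equal"],
    "fidelity":       ["cite", "honest", "integrity"],
}

def code_excerpt(excerpt: str) -> list[str]:
    """Return the list of deductive codes whose cues appear in the excerpt."""
    text = excerpt.lower()
    return [code for code, cues in CODEBOOK.items()
            if any(cue in text for cue in cues)]

excerpt = "I have my own rules for using ChatGPT. I only use it to check my work."
print(code_excerpt(excerpt))  # -> ['autonomy']
```

In a real workflow, such an automated pass could at most pre-screen transcripts; the inductive codes described above, by definition, emerge from human reading of the data.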
Awareness and Understanding of Ethical Implications of Using ChatGPT
Our findings revealed that students exhibit varied levels of awareness regarding the ethical implications of using ChatGPT. Some students clearly understand what constitutes ethical use, recognizing the importance of maintaining academic integrity and adhering to institutional guidelines. AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in developing and using technologies (Balasubramaniam et al., 2022). These students tend to use ChatGPT responsibly, employing it as a supplementary tool to enhance their learning without compromising the originality of their work. One student noted, "I know it's important to use ChatGPT wisely. I use it to get ideas and understand concepts, but I always make sure my work is my own." However, a significant portion of the student population appears to be less informed about the ethical boundaries associated with using AI tools like ChatGPT. This lack of awareness often leads to misuse, where students might unintentionally engage in academic dishonesty by submitting AI-generated content as their own work or relying too heavily on the tool, thereby undermining their learning process (Jobin et al., 2019). Academic dishonesty has long been reported as a major challenge in higher education (Barnes & Hutson, 2024). A student admitted, "Sometimes I just copy what ChatGPT gives me because I'm not sure if it's okay to use it directly. I don't want to get in trouble, but it's not always clear what the rules are." A critical factor contributing to this varied level of awareness is the absence of clear and consistent guidelines from educational institutions. Studies have been conducted to investigate the challenges related to adhering to specific ethical principles of AI, such as fairness, accountability, and privacy (Leslie, 2019). Due to the lack of explicit policies and instructions on the ethical use of AI, students are left to navigate these complexities on their own, leading to inconsistent practices and potential ethical violations (Kong et al., 2023). One student pointed out, "We don't get a lot of guidance on how to use tools like ChatGPT. Some professors talk about it, but others don't mention it at all, so it's confusing." The disparity in guidelines is further exacerbated by the differing rules set by individual lecturers. Academic institutions are still grappling with relevant standards to teach and implement AI ethics (Kong et al., 2023). The lack of AI guidelines for students creates confusion among learners (Fan & Li, 2023). This inconsistency creates confusion among students as they struggle to reconcile conflicting directives from different courses and instructors. For instance, one lecturer might emphasize the importance of citing AI assistance, while another might provide no guidance at all, leaving students uncertain about what is acceptable. One student said, "One of my lecturers says we need to cite ChatGPT if we use it, but another hasn't said anything about it, so I don't know what's right." This lack of uniformity not only affects students' understanding of ethical use but also impacts their behavior, as they may inadvertently breach academic integrity standards due to unclear expectations. Lecturer autonomy also plays a part in students' use of AI; such autonomy has been defined as teachers' perception of whether they control themselves and their work environment.
Perceptions of Ethical Use
Students' perceptions of the ethical use of ChatGPT vary significantly depending on the type of assignment and its context. Student perceptions play a vital role in determining their motivation, engagement, and academic achievement.
On the other hand, negative perceptions result in reduced motivation and hinder academic success (Lavrič & Škraba, 2023). This study found that many students differentiate their use of AI tools based on whether the assignment is a take-home task or an in-class exam. For take-home assignments, some students feel they have carte blanche to employ AI tools however they see fit. Students now find support by using ChatGPT to tackle their assignments (Wibowo et al., 2023). They argue that if they can use resources like Google to learn and perform programming tasks, then using ChatGPT, which provides precise and relevant answers, should be equally acceptable. One student stated, "If we can use Google to figure out programming tasks, why not use ChatGPT? It gives us exactly what we need." Scholars who adopt a deep approach to learning seek to understand what they are learning, are actively engaged with their learning material, and attempt to base conclusions on evidence and reasoned arguments (Gordon & Debus, 2002). This perception is often articulated in student comments such as, "Why do we need to use Google to learn how to perform certain programming tasks when ChatGPT can give us the exact thing we are looking for?" Such viewpoints reflect a pragmatic approach to AI use, where efficiency and accuracy are highly valued, sometimes at the expense of deeper learning and skill development. The motivation connected with a deep learning approach is fundamentally intrinsic: the student seeks to satisfy a personal curiosity for learning. Such students are aware of more aspects of their learning situations and experiences than students who adopt a surface approach to learning (Gordon & Debus, 2002). However, there is a consensus among students that using ChatGPT for brainstorming or getting explanations is generally seen as more acceptable than using it to generate entire essays or solve exam questions. ChatGPT tools, if used correctly, present the opportunity to enhance group brainstorming sessions (Lavrič & Škraba, 2023). The former is perceived as a way to enhance understanding and foster creativity, while the latter is viewed as crossing an ethical line by outsourcing substantial portions of academic work to an AI tool. A student explained, "It's fine to use ChatGPT to get ideas or understand something better, but writing my whole essay with it feels wrong." Despite these distinctions, high academic pressure and workload significantly influence students' ethical decision-making. ChatGPT has revolutionized educational paradigms and reorganized students' modes of engagement with digital content and their social environment (Shahzad et al., 2024). Under intense stress and time constraints, students are more likely to rationalize the unethical use of ChatGPT. Moreover, ChatGPT serves as a robust tool for effective time management and task prioritization, and as a repository of supplemental learning resources (Shahzad et al., 2024). Students may justify using the tool in ways they would normally consider inappropriate, such as copying large sections of AI-generated text directly into their assignments or relying on ChatGPT to complete tasks that they do not understand. One student admitted, "When I'm stressed and running out of time, it's easy just to take what ChatGPT gives me and use it, even if I know I shouldn't." This rationalization is driven by the urgent need to meet deadlines and achieve high grades, often overshadowing concerns about academic integrity.
Usage Patterns and Ethical Dilemmas
The study uncovered diverse usage patterns of ChatGPT among students, highlighting a spectrum that ranges from support to dependence. However, the utilization of AI in education must be approached carefully to ensure it reinforces rather than reduces critical thinking skills (Darwin et al., 2024). Our findings show that many students use ChatGPT as a supplementary tool to aid their understanding of complex concepts. However, a fine line exists between using ChatGPT for support and becoming overly dependent, and some students admitted that they do not know where to draw that line. If left unchecked, an over-reliance on AI tools for problem-solving could lead to a passive learning approach. The number of students who admitted to being overly reliant on ChatGPT was significant. One student remarked, "I use ChatGPT to understand difficult topics, but sometimes I worry I'm not putting in enough effort myself." It is, thus, imperative for lecturers to make sure that students do not overly rely on this technology at the expense of their learning.
ChatGPT's effectiveness in handling programming questions has solidified its position as a trusted study partner for many students. In essence, AI tools have become a powerful means to supplement critical thinking skills, especially in learning settings (Rogers et al., 2024). Some students expressed that they could not imagine life without ChatGPT, indicating significant reliance on the tool. One student shared, "I can't imagine studying without ChatGPT." This reliance is so pronounced that many students use ChatGPT daily, even during classes. Over-dependence on ChatGPT reduces the level of critical thinking (Bouzar et al., 2024; Wu, 2024). When lecturers pose questions, students admit that they often turn to ChatGPT for answers, bypassing the need to engage in critical thinking or problem-solving themselves. This is despite the fact that the importance of critical thinking skills cannot be overstated. Critical thinking, as a skill, is crucial for assessing information, solving problems, and making informed decisions, both in academic and real-world scenarios (Dilekli & Boyraz, 2024). If that learning opportunity is handed over entirely to ChatGPT, students are robbed of the chance to exercise their creative thinking. Another student admitted, "If a lecturer asks a tough question in class, I just type it into ChatGPT using my phone and get the answer immediately."
This behaviour shows a troubling trend in which the convenience and accuracy of AI tools diminish students' motivation to develop their own cognitive skills. Studies of ChatGPT's usage show evidence of deductive reasoning, a sequential thought process, and the ability to maintain long-term dependencies (Essel et al., 2024).
Students often struggle to distinguish between legitimate uses of ChatGPT, such as paraphrasing or seeking clarification, and unethical practices, such as submitting AI-generated text as their own work. The line between these practices can be very blurry for students, which may subsequently lead to unintentional unethical practices. One student said, "It's hard to know when I'm crossing the line. Sometimes, I just use what ChatGPT gives me because it's easier, but I know that's not always right." Students' heavy reliance on ChatGPT exacerbates this issue, as they may not fully grasp the importance of producing original work or may feel justified in using AI-generated content due to the perceived ubiquity of the tool. Despite the several potential benefits of ChatGPT as a teaching and learning tool, researchers have also highlighted causes for discussion and concern about possible disruptions to education, as this tool is still an unregulated technology presenting issues with academic integrity, data privacy, and other ethical concerns (Essel et al., 2024). This heavy reliance not only affects their learning outcomes but also raises serious ethical questions about academic integrity. This is demonstrated by one student who stated, "Everyone uses ChatGPT, so it feels like it's okay to use it too much, but I know it's not the same as doing the work myself."
Motivations and Justifications
Students who use ChatGPT typically do so primarily for academic purposes, specifically to pass their modules. During the interviews, many students defended their use of ChatGPT by emphasizing how it would help them learn the material better and get better grades. One student stated, "Using ChatGPT helps me get better grades because it explains things clearly and helps me understand concepts I struggle with." Interestingly, the use of ChatGPT purely as a learning tool was rarely mentioned by students. Instead, the emphasis was predominantly on the outcomes of using the tool, such as higher grades, rather than on the learning process itself. This indicates that the tool is primarily seen as a means to an end rather than an integral part of the learning journey.
Another major reason mentioned by most students for using ChatGPT is the reduction of the time needed to complete assignments. For many students, especially those juggling multiple responsibilities such as part-time jobs and extracurricular activities, completing assignments quickly is a strong motivation. One student explained, "ChatGPT saves me a lot of time. I can get answers fast and finish my assignments quickly, which is great because I have a lot of other commitments."
Ethical Dilemmas and Resolution Strategies
Students stated that they frequently encounter conflicting norms when using ChatGPT. Essentially, they struggle to balance personal values, peer practices, and institutional expectations. For a student, these contradictory norms create a challenging landscape. One student expressed this dilemma: "I want to do things the right way, but it's hard when I see others using ChatGPT to get ahead without any consequences." This illustrates the struggle between maintaining personal integrity and succumbing to the pressure of following what seems to be the norm among peers.
In response to these conflicts, some students stated that they developed personal ethical codes based on their experiences and understanding of what constitutes acceptable use of ChatGPT. These personal codes served as internal guidelines, helping students make decisions that align with their values and ethical beliefs. One student shared, "I have my own rules for using ChatGPT. I only use it to check my work or get explanations, but I never copy and paste answers." Such personal codes reflect a commitment to ethical practices, even without clear or consistent institutional guidelines.
By creating and adhering to these personal ethical codes, students attempt to resolve the dilemmas they face and navigate the complexities of using AI tools like ChatGPT. These codes provide a framework for ethical decision-making, enabling students to use ChatGPT to support their learning while maintaining academic integrity. However, the effectiveness of these personal strategies can vary, depending on each student's understanding of institutional policies. It is thus crucial for institutions to ensure that all students have a common understanding of the policies.
Ethical Dilemmas in Collaborative Work
In collaborative settings such as computer programming, the ethical use of ChatGPT becomes more complex due to group dynamics and peer influence, which can lead to varied approaches to using the tool and sometimes result in ethical conflicts within groups. When working on group assignments, students reported that they often face dilemmas about incorporating ChatGPT into their collective work. One student shared, "It gets tricky when you're in a group. Not everyone has the same opinion on how to use ChatGPT, and it can cause disagreements." This complexity is compounded by group members' diverse ethical standards and practices, leading to confusion and conflict. In addition, the fact that computer programming is perceived as a difficult subject (Msane et al., 2020; Egbe et al., 2020) may tempt students to use AI tools to make their lives easier.
On the other hand, students also reported that sometimes they are not fully aware of how ethically their counterparts present their contributions in a group assignment. They further reported that this usually creates tension and uncertainty within the group. As one student said, "You don't always know if someone has just copied and pasted something from ChatGPT. It makes you question the integrity of the entire project." This uncertainty can undermine trust and collaboration, making it difficult for students to work together effectively.
The issue of collective responsibility further complicates the ethical dilemmas in group projects. Students often face dilemmas regarding collective responsibility and the use of ChatGPT, especially when there are differing opinions on what constitutes ethical use. The potential for the entire group to be punished for the unethical actions of one member adds to the stress and complexity of managing group work. One student expressed this concern: "It's frustrating because if one person decides to use ChatGPT unethically, the whole group can get in trouble. It's hard to control what everyone does." This shared responsibility for maintaining academic integrity highlights the need for clear communication and agreement within the group on how to use AI tools like ChatGPT ethically.
Conclusion
Integrating artificial intelligence (AI) in higher education, particularly in computer programming assignments, presents ethical challenges. This study has highlighted the critical ethical dilemmas faced by students, such as balancing AI assistance with originality, maintaining academic integrity, and navigating the potential for misuse of AI tools like ChatGPT. By applying a theoretical framework based on autonomy, nonmaleficence, beneficence, justice, and fidelity, we have gained valuable insights into how students perceive and manage these ethical considerations.
The implications of this research are far-reaching. For educators, understanding the ethical challenges students face when using AI tools can inform the development of more effective teaching strategies and assessment methods. This includes rethinking assessment design to better integrate AI tools in a way that promotes learning while upholding academic integrity. Institutions can also benefit by developing clear guidelines and policies that address the ethical use of AI in educational settings, ensuring that students are equipped with the knowledge to use these technologies responsibly. Moreover, this study emphasizes the need for continuous dialogue between students, educators, and policymakers to adapt to the rapid advancements in AI. As AI technologies evolve, so too must the ethical frameworks and educational practices that govern their use. This research suggests that incorporating ethical training and AI literacy into the curriculum could help students better navigate the complexities of using AI tools in their academic work.
The translational importance of this work lies in its potential to shape future educational policies and practices. By providing a nuanced understanding of the ethical dilemmas associated with AI in education, this study can guide the creation of robust ethical guidelines and promote a balanced approach to AI integration. This can ensure that AI tools are used to enhance learning without compromising the core values of education.
Future research could extend this work by exploring the long-term impacts of AI use on student learning outcomes and the development of coding skills. Additionally, comparative studies across different educational contexts and disciplines could provide a broader perspective on the ethical challenges and best practices for AI integration in academia.
A data browsing application for accessing gene and module-level blood transcriptome profiles of healthy pregnant women from high- and low-resource settings
Abstract: Transcriptome profiling data, generated via RNA sequencing, are commonly deposited in public repositories. However, these data may not be easily accessible or usable by many researchers. To enhance data reuse, we present well-annotated, partially analyzed data via a user-friendly web application. This project involved transcriptome profiling of blood samples from 15 healthy pregnant women in a low-resource setting, taken at 6 consecutive time points beginning from the first trimester. Additional blood transcriptome profiles were retrieved from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) public repository, representing a cohort of healthy pregnant women from a high-resource setting. We analyzed these datasets using the fixed BloodGen3 module repertoire. We deployed a web application, accessible at https://thejacksonlaboratory.shinyapps.io/BloodGen3_Pregnancy/, which displays the module-level analysis results from both original and public pregnancy blood transcriptome datasets. Users can create custom fingerprint grid and heatmap representations via various navigation options, useful for reports and manuscript preparation. The web application serves as a standalone resource for exploring blood transcript abundance changes during pregnancy. Alternatively, users can integrate it with similar applications developed for earlier publications to analyze transcript abundance changes of a given BloodGen3 signature across a range of disease cohorts. Database URL: https://thejacksonlaboratory.shinyapps.io/BloodGen3_Pregnancy/
Introduction
Pregnancy is a critical period for both the mother and the fetus. It is associated with increased health risks for the mother, and it is a formative period for the fetus, as it is thought to influence health trajectories in later life (the developmental origins of health and disease principle) (1-3). It is also associated with marked alteration of the physiology and immunity of the mother (4-6). Such changes can be detected by measuring blood transcript abundance on a genome-wide scale (7-11). We carried out the Molecular Signature in Pregnancy (MSP) study, which aimed to identify changes in blood transcript abundance associated with, and possibly preceding, adverse clinical outcomes (12, 13). The study was carried out in a low-resource setting and involved the collection of samples at high temporal frequency (every 2 weeks from the first trimester of pregnancy). Overall, 430 women were enrolled in the study. A secondary aim was to design targeted assays that will serve as a resource for profiling changes on a large scale in the MSP study (>6000 samples available) as well as in future pregnancy monitoring studies.
As a first step, we sought to establish a reference collection of transcriptome data that could be used to inform the design of this targeted assay. For this, we generated RNA-seq profiles for a subset of MSP study subjects. We also identified and retrieved a complementary public blood transcriptome dataset generated by a study conducted in a high-resource setting (the PROMISSE study [Predictors of Pregnancy Outcome: Biomarkers in Antiphospholipid Antibody Syndrome and Systemic Lupus Erythematosus] (7)). While these datasets were primarily used as a reference to guide the design of our targeted assay, we also sought to maximize their utility: first, by depositing the data that were generated de novo in a public repository, along with extensive metadata. This will permit reuse by other investigators, who may employ different analytic approaches or combine these data with additional datasets and perform meta-analyses on a large number of samples. Second, we sought to make the data accessible to the research community at large via a data browsing application. Indeed, data deposited in a public repository such as NCBI's GEO are not readily accessible, since reuse requires downloading count matrices or even raw output files that would then need to be run through a bioinformatics pipeline for pre-processing, alignment and normalization. This is a hurdle that we aimed to address here, specifically by making transcriptional profiling data accessible to the scientific community via a user-friendly web application. Furthermore, in addition to providing access to gene-level profiling data, we are leveraging this web application to also make available the results of analyses we carried out at the module level.
Materials and Methods
The MSP reference transcriptome dataset was generated as follows: transcript abundance was measured via RNA sequencing in 88 samples collected at 6 of the ~15 available time points, from 15 women with uncomplicated pregnancies. A data descriptor will be published that will report in detail the methodologies used for sample and data processing. Briefly, 50 μl of blood collected via a fingerstick was stabilized in a solution that preserves RNA integrity (14). Following RNA extraction, libraries were prepared using the TruSeq Illumina RNA Library Prep kit. Samples were sequenced on an Illumina HiSeq 4000 instrument at a high read depth (60 million reads). Additional data were availed from a separate study: the PROMISSE study aimed to identify molecular mechanisms underlying the increased risk of pregnancy complications observed in subjects with systemic lupus erythematosus (7). For this, the authors enrolled pregnant women diagnosed with systemic lupus erythematosus and healthy pregnant women who were used as a control. They collected blood samples at each trimester and between 8 and 20 weeks post-partum and profiled transcript abundance using Illumina BeadArrays. The study was conducted in the USA and Canada, high-resource settings when compared to the MSP study, which was conducted in a mobile migrant population on the Thai-Myanmar border (12, 13).
The MSP study was approved by the ethics committee of the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand (Ethics Reference: TMEC 15-062, initial approval 1 December 2015), the Oxford Tropical Research Ethics Committee (Ethics Reference: OxTREC: 33-15, initial approval 16 December 2015) and reviewed by the local Tak Province Community Ethics Advisory Board. Written informed consent, or consent via thumbprint confirmed by an impartial, literate witness (in the case of illiterate participants), was obtained from all cohort participants. The PROMISSE study protocol and consent forms were reviewed and approved by institutional review boards, and written informed consent was obtained from all patients.
A fixed blood transcriptome module repertoire that we recently established and characterized was employed as a framework to perform analyses at the module level (15). Briefly, the 'BloodGen3' repertoire was constructed through a data-driven process, factoring in co-clustering patterns of individual gene pairs across a collection of 16 reference patient cohorts. These cohorts included patients with a wide range of autoimmune and infectious diseases, as well as patients with cancer, liver transplant recipients and pregnant subjects, encompassing overall 985 individual subject profiles. A network was constructed with genes as the nodes and co-clustering as the edges, with a weight attributed to each edge based on the number of co-clustering events for a given gene pair observed across all 16 reference datasets. This network was mined to identify densely connected sets of genes that formed the modules. In total, 382 modules were identified through this process. Downstream module-based analyses were carried out employing the 'BloodGen3Module' R package that we developed specifically for this purpose (16).
Fingerprint grid plot representation (figure caption). This fingerprint grid plot represents changes in blood transcript abundance in samples collected during the third trimester of pregnancy ('Third') from women recruited in the MSP study, relative to transcript abundance in samples collected from the same donors 3 months after delivery ('3PP' = 3 months post-partum). The position of the modules on the grid is fixed, with each row regrouping modules from the same 'module aggregate', labeled A1, A2, A3, etc., for Aggregate 1, Aggregate 2, Aggregate 3, etc. Only the 28 aggregates that were assigned more than one module are represented on this grid (15). Changes in abundance are represented on the grid by a red spot, indicating that constitutive transcripts of the corresponding module are significantly increased in third-trimester samples over post-partum samples. A blue spot shows, conversely, that its constitutive transcripts are significantly decreased. The color gradation is indicative of the 'module activity', which is the proportion of transcripts meeting the statistical cutoff employed for this comparison, i.e., P < 0.05 and a False Discovery Rate = 0.1, with values for red spots ranging from +15% to +100% (all constitutive transcripts showing a significant increase in abundance) and for blue spots from -15% to -100% (all constitutive transcripts showing a significant decrease in abundance). Finally, the grid below indicates the functional annotations assigned to the modules at their given position using a color code. Areas on the grid in white are for modules for which we could not find clear functional associations (TBD = To Be Determined). Areas on the grid in gray are not assigned to any given modules (NA = Not Applicable).
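To illustrate what the module-level statistic behind these analyses looks like in practice, the sketch below computes a signed '% response' for a module: the fraction of its constitutive transcripts that increase significantly minus the fraction that decrease. This is a simplified stand-in for what the BloodGen3Module package does, not its actual code; in particular, the package applies an FDR correction in addition to the raw p-value cutoff used here.

```python
# Simplified sketch of a module-level "% response" (not the BloodGen3Module
# package itself): the signed proportion of a module's constitutive
# transcripts changing significantly between two sample groups.
import numpy as np
from scipy import stats

def module_response(expr_case: dict, expr_ctrl: dict,
                    module_genes: list, alpha: float = 0.05) -> float:
    """expr_*: gene -> 1D array of per-sample abundances. Returns -100..+100."""
    up = down = 0
    for gene in module_genes:
        t, p = stats.ttest_ind(expr_case[gene], expr_ctrl[gene])
        if p < alpha:          # BloodGen3 additionally applies FDR = 0.1
            if t > 0:
                up += 1
            else:
                down += 1
    return 100.0 * (up - down) / len(module_genes)

# A module with |response| >= 15% would be drawn as a red (positive) or
# blue (negative) spot at its fixed position on the fingerprint grid.
```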
Results
Analysis results can be accessed via the MSP1 BloodGen3 web application, which was deployed as an R Shiny app and can be accessed at https://thejacksonlaboratory.shinyapps.io/BloodGen3_Pregnancy/. In addition to providing access to processed analysis results, it can be leveraged to generate custom plots for use in reports and publications. This is the resource that is presented in this article and is described next in more detail.
Tabs on the left side of the interface provide user access to different types of customizable plots as well as extensive annotations that will aid in their interpretation.Specifically: (i) The 'aggregate annotation' tab lists the 28 module aggregates that serve as a basis for generating fingerprint grid maps or heatmaps (Figure 1).Each module aggregate comprises several modules.Clicking on the links that are provided will open an interactive Prezi presentation in a new browser window; for instance, in the case of module Aggregate A28: https://prezi.com/view/sSTVHAGUMNgkGiNhSbgD/.Clicking on individual modules will permit to zoom in and access background information about the module (gene composition), functional profiling information (ontology profiling, pathway and literature enrichment tools, transcription factor binding motif enrichment) and transcriptional profiles for the gene set constituting the module across several reference datasets (isolated leukocyte populations and hematopoietic precursors).This is illustrated by a short screencast video deposited in FigShare (17) and accessible via this link: https://youtu.be/5OLh6T6IvOk.(ii) The 'Fingerprint grid' tab provides access to fingerprint grid plots which indicate changes in transcript abundance for a given study at a given timepoint in comparison to a non-pregnant baseline (Figure 2).The position of the modules on the grid is fixed, with the modules lined up on a given row belonging to the same aggregate [the number of modules per aggregate varies between 2 (Aggregate A16) and 42 (Aggregate A2)].Red spots indicate that a proportion of the transcripts constitutive of the corresponding module have significantly higher abundance levels in pregnant subjects compared to their baseline.Blue spots indicate that the transcripts have significantly lower abundance levels in those subjects.The colors are gradated to indicate the relative proportion of transcripts showing significant changes, with values ranging from +100% (all constitutive transcripts are increased) to −100% (all constitutive transcripts are decreased).An annotated map is provided below that uses a color code to represent the functional annotations associated with each of the modules on the map (no color means that functional associations for these modules have not yet been identified).A short screencast video deposited in FigShare (18), illustrating how fingerprint grid plots are generated and can be accessed via this link: https://youtu.be/6e2t2Ccotcc.(iii) The 'modules X studies' tab provides users access to fingerprint heatmap plots, for each of the aggregates and across the MSP and PROMISSE study groups (Figure 3).The position of the modules on the heatmap is not fixed.They are arranged instead according to similarities in abundance patterns via hierarchical clustering.
In this case, columns on the heatmap correspond to study groups, and rows correspond to individual modules. The proportion of transcripts for which abundance is significantly changed is again shown using gradated red and blue dots. Such maps can be accessed for each aggregate via the drop-down list directly above the plot ('Choose aggregate'). Notably, the zoom in/out function of the web browser can be used to increase the size of the image, thus improving its resolution. The image can then be saved for use in reports or manuscript preparation. All functionalities described here are demonstrated in a screencast that has been deposited in FigShare (19) and can be accessed via this link: https://youtu.be/G8ro8zxqUGI. (iv) The 'modules X individuals' tab provides users with the opportunity to generate custom fingerprint heatmap plots. Rows represent modules for a given aggregate, but this time columns represent individual subjects (rather than study groups as in the previous tab) (Figure 4). It is, in this instance, possible to combine multiple module aggregates, simply by typing in turn in the box the IDs of the aggregates of interest (e.g. A28 is the ID for module Aggregate A28). A drop-down menu permits choosing whether to display results for the MSP cohort only (MSP), the PROMISSE cohorts only (PROMISSE) or both cohorts combined (MSP-PROMISSE). It is also possible for users to apply a filter removing modules showing only modest changes in abundance across the set of samples selected; a sketch of this filtering logic is given below. Choosing 'Method 1' combined with the 'Check % average' slider permits setting a threshold based on the average module response value across all samples (e.g. selecting Method 1 from the drop-down menu and setting 'Check % average' at 20 will remove from the heatmap below all modules for which the average module response is <20%). Choosing 'Method 2' combined with the 'Check % value' slider permits setting a threshold based on the maximum module response observed across all samples (e.g. selecting Method 2 from the drop-down menu and setting 'Check % value' to 15 will remove from the heatmap below all modules for which the maximum module response is <15%). Of note, if the filter applied excludes all modules, the application will return an error message; the threshold should then be lowered accordingly. Once again, these functionalities are demonstrated in a screencast that is available via this link: https://youtu.be/3v529I6Ww1k and has been deposited in FigShare (20). (v) The 'BOXPLOT (% Module Response)' tab provides access to box plots showing the percentage response for individual modules across study groups, for both the MSP and PROMISSE datasets (Figure 5), and to box plots showing transcript abundance for individual genes. Modules can be selected from a drop-down menu. Individual genes can be selected by typing their official gene symbol in a search box. The corresponding screencast is accessible via this link: https://youtu.be/vmqV2UpLeaY and has been deposited in FigShare (21).
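The two filtering methods can be summarized in a few lines of code. The sketch below is a hypothetical rendition of the logic described above, not the application's actual implementation; the function name, the use of absolute values, and the input format are assumptions.

```python
# Hypothetical sketch of the 'Method 1' / 'Method 2' module filters.
def filter_modules(responses, method=1, threshold=20.0):
    """responses: dict mapping module id -> list of % response values
    (one signed value per selected sample, ranging from -100 to +100)."""
    kept = {}
    for module, values in responses.items():
        if method == 1:
            # 'Method 1': threshold on the average response across samples.
            score = sum(abs(v) for v in values) / len(values)
        else:
            # 'Method 2': threshold on the maximum response across samples.
            score = max(abs(v) for v in values)
        if score >= threshold:
            kept[module] = values
    if not kept:
        # Mirrors the app's behavior of flagging an over-aggressive filter.
        raise ValueError("Filter excluded all modules; lower the threshold.")
    return kept

# Example: keep modules whose maximum response reaches at least 15%.
print(filter_modules({"M12.4": [35, 10, 50], "M8.1": [5, -3, 8]},
                     method=2, threshold=15))
```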
Discussion
In conclusion, while vast amounts of systems-scale profiling data are available in public repositories, these data are not always readily accessible or interpretable. The web application described in this data note is meant to fill this gap and complement our GEO deposition of the primary transcriptomic data generated in the context of our MSP study. Practically, this resource is being employed by our team to support the design of targeted transcript panels and assays for the monitoring of pregnancy. The resource is also being used to generate figures for reports and peer-reviewed publications.
In another context, we used a similar BloodGen3 app as a basis for holding an 'omics data interpretation workshop'. As indicated by the title, such workshops are meant to support the interpretation of large-scale profiling data but do not require participants to carry out hands-on analyses. Instead, participants who may not have any bioinformatics skills but are medical experts or immunologists focus on the interpretation of the data, relying on the data browsing application to explore analysis results and to generate custom figures. This is illustrated in three papers exploring the use of the BloodGen3 repertoire for investigating the pathogenesis of psoriasis (22), delineating respiratory syncytial virus endotypes (23), and developing targeted transcriptional profiling panels for COVID-19 immune monitoring (24).
As we strive to enhance the application's utility, we acknowledge the potential value of enabling users to upload and analyze their own datasets within the context of the existing cohorts for comparative purposes. However, we must emphasize that our current platform does not yet support this functionality. Ensuring user data privacy and security is paramount, and any future updates that might allow personal data uploads will be developed with rigorous adherence to data protection standards, potentially including serverless operations to maintain confidentiality.
Moreover, it is pertinent to note that other 'BloodGen3' applications have been made available as companions to earlier publications and encompass a wide range of diseases and immune states (15,23,24). The user interface and functionalities follow a similar scheme, and these can be used as a resource to contextualize the analysis and interpretation of the MSP and PROMISSE fingerprint profiles.
Figure 1. BloodGen3 application user interface. Users navigate the user interface primarily through the tabs on the left, which provide access to different information and visual representations of the results. Parameters can be adjusted via drop-down menus and sliders to customize the plots. The latter can then be used in reports or publications.
Figure 2. Fingerprint grid plot representation. This fingerprint grid plot represents changes in blood transcript abundance in samples collected during the third trimester of pregnancy ('Third') from women recruited in the MSP study, relative to transcript abundance in samples collected from the same donors 3 months after delivery ('3PP' = 3 months post-partum). The position of the modules on the grid is fixed, with each row regrouping modules from the same 'module aggregate', labeled as A1, A2, A3, etc., for Aggregate 1, Aggregate 2, Aggregate 3, etc. Only the 28 aggregates that were assigned more than one module are represented on this grid (15). Changes in abundance are represented on the grid by a red spot, indicating that constitutive transcripts of the corresponding module are significantly increased in third trimester samples over post-partum samples. A blue spot shows conversely that its constitutive transcripts are significantly decreased. The color gradation is indicative of the 'module activity', which is the proportion of transcripts meeting the statistical cutoff employed for this comparison, i.e. P < 0.05 and a False Discovery Rate of 0.1, with values for red spots ranging from +15% to +100% (all constitutive transcripts showing a significant increase in abundance) and for blue spots from −15% to −100% (all constitutive transcripts showing a significant decrease in abundance). Finally, the grid below indicates the functional annotations assigned to the modules at their given position using a color code. Areas on the grid in white are for modules for which we could not find clear functional associations (TBD = To Be Determined). Areas on the grid in gray are not assigned to any given modules (NA = Not Applicable).
Figure 3. Group-level fingerprint heatmap representation. This heatmap represents changes in transcript abundance for individual modules (rows) belonging to a given aggregate (A33 in this example), across MSP and PROMISSE study groups (columns). Groups here are formed according to the sampling time point (first, second or third trimester) and baseline (1 or 3 months post-partum, noted 1PP and 3PP, respectively). Rows and columns are arranged via hierarchical clustering, based on similarities in abundance profiles. The red spots indicate an increase in transcript abundance compared to baseline, with proportions of significant transcripts for the corresponding module ranging from 15% to 100%. The blue spots indicate a decrease in transcript abundance, with proportions indicated by negative % values ranging from −15% to −100%. Functional associations for the modules shown on the heatmap are indicated by a color code on the vertical annotation track.
Figure 4. Individual-level fingerprint heatmap representation. This heatmap represents changes in transcript abundance for individual modules (rows) belonging to multiple aggregates (A27, A1, A35, A36 and A38 in this example), across individual MSP samples (columns). Columns are arranged according to study group membership (first, second or third trimester, delivery, 1 month and 3 months post-partum). Rows are arranged via hierarchical clustering, based on similarities in abundance profiles, first across module aggregates and then within module aggregates (i.e. modules from different aggregates remain on their aggregate's branch). The red spots indicate an increase in transcript abundance compared to baseline, with proportions of significant transcripts for the corresponding module ranging from 15% to 100%. The blue spots indicate a decrease in transcript abundance, with proportions indicated by negative % values ranging from −15% to −100%. Functional associations for the modules shown on the heatmap are indicated by a color code on the vertical annotation track. 'First', 'second' and 'third' = first, second and third trimester of pregnancy, respectively. 1 PP = 1 month post-partum; 3 PP = 3 months post-partum.
Figure 5. Module activity profiles. The boxplots represent activity profiles measured as 'percentage of response' (proportion of constitutive transcripts for which abundance levels are significantly different compared to the post-partum baseline). T1, T2 and T3 = first, second and third trimesters of pregnancy, respectively; D = delivery; 1 PP = 1 month post-partum; 3 PP = 3 months post-partum. Profiles are shown for 4 of the 382 modules that constitute the BloodGen3 repertoire.
"Medicine",
"Computer Science",
"Biology"
] |
Extraction and Chemical Characterization of Humic Acid from Nitric Acid Treated Lignite and Bituminous Coal Samples
Currently, conversion of coal into alternative fuel and non-fuel valuable products is in demand and of growing interest. In the present study, humic acid was extracted from two different ranks of coal, i.e., low rank and high rank (lignite and bituminous), through chemical pretreatment with nitric acid. Samples of lignite and bituminous coal were subjected to nitric acid oxidation followed by extraction using KOH and NaOH gravimetric techniques. The chemical pretreatment of both types of coal enhanced the yield of humic acid from 21.15% to 57.8% for low-rank lignite coal and from 11.6% to 49.6% for high-rank bituminous coal. The humic acid derived from native coal and nitric acid treated coal was analyzed using elemental analysis, the E4/E6 ratio of absorbance at 465 nm and 665 nm measured by UV-Visible spectrophotometry, and Fourier transform infrared spectroscopy (FTIR). The chemical characteristics of the coal treated with nitric acid showed decreased molecular weight and aromaticity, with more oxygen and nitrogen and lower C, H, and sulphur content. The E4/E6 ratio of nitric acid-treated low and high ranks of coal was high. The FTIR spectroscopic data of nitric acid-treated lignite coal indicate an intensive carboxyl group peak at 2981.84 cm−1, while bituminous coal showed incorporation of the N-H group at 2923.04 cm−1. SEM was performed to detect the morphological changes after producing humic acid from HNO3 treated and native coal. The humic acid produced from HNO3 treated coal showed clear morphological changes and some deformations on the surface. SEM-EDS detected major elements, such as nitrogen, in treated humic acid that were absent in raw coal humic acid. Hence, the humic acid produced through HNO3 oxidation contained a greater amount of humic material with improved efficiency as compared to native coal. This humic acid can be made bioactive for agricultural purposes, i.e., for soil enrichment and improvement of plant growth conditions, and for the development of green energy solutions.
Introduction
Chemically, coal is considered to have a complex structure, and conversion of coal into useful, simpler substances of low molecular weight is considered a convenient and appropriate approach compared to conventional utilization of coal [1]. A few of these low molecular weight fractions may be value-added chemical entities. These fractions from low rank lignite coal can be separated by strong alkali treatment, which mainly results in the isolation of three components (alkali-soluble and acid-insoluble, alkali-insoluble, and acid-soluble) [1,2]. The acid-insoluble and alkali-soluble fraction from lignite coal is called humic acid and contains about 20-80% of the organic content in lignite [3].
Humic acid is also known as a polyhydroxy carboxylate that includes aliphatic and aromatic subunits [4]. It is a light brown to black, heterogeneous, and multifaceted organic polymer created through secondary synthesis reactions [5]. Humic acid is highly reactive and interacts with organic and inorganic chemicals more readily than other substances. Some amine and aromatic groups present in humic acids are biologically active in plant growth. Other groups, including phenols [5,6], carboxylates, hydroxyls, and ketones [7], help to expand the ion exchange capability of soils [8].
Currently, low rank coal, which mainly includes peat and leonardite, is not used commercially because of its low energy content; on the other hand, it is considered a rich source of humic fractions [9]. Likewise, high rank bituminous coal is relatively soft and contains a tar-like substance known as bitumen. In principle, insoluble high rank coal can be made soluble through oxidation, which introduces and increases the acidic groups in coal molecules [10].
Pakistan holds about 185 billion tons of coal reserves across all of its provinces. The major coal reserves exist in Thar and Lakhra (Sindh); Loralai, Duki, and Chamalang (Balochistan); Mianwali, Makarwal, and Khushab (Punjab); and Narran and Kotli (KPK) [11]. Pakistani coal falls into different ranks and types, including lignite, sub-bituminous, and bituminous, but low rank lignite makes up the largest share of the country's reserves [11,12].
The structure of humic acid contains a mixture of small, large, and polydisperse moieties that are formed through the transformation and decomposition activity of microbial strains. Geochemical reactions also play an important role in the formation of humic acids and humates [13]. One proposed structure [14] describes humic acid as alkyl benzene moieties attached through covalent bonds, while Piccolo [15] proposed that humic acid is a structure made up of small heterogeneous molecules bound together by hydrogen bonds and hydrophobic dispersive forces. Various other authors suggested that the structure of humic acid depends on the sources that generated it and on the specific conditions of extraction as well. Humic acid has several applications in various fields, i.e., medicine, agriculture, wastewater treatment, and health. Humic acid can improve the quality of water and remove metal ions efficiently. It can also be used as a ceramic additive, water-soluble fertilizer, soil remediation agent, flocculant, surfactant, and battery cathode expansion agent [16]. It is also a good absorbent with the ability to treat pollution created by gases produced from waste substances [4][5][6][7][8][9][10][11][12][13][14][15][16][17]. Humic acid is found in marsh soils, lakes, lignite, peat, bituminous coal, shale, weathered coal, and flora and fauna residues.
The lignite type of coal attracts significant attention as its reserves account for about 45% of global coal reserves [17,18]. On the other hand, it has a large number of oxygen functional moieties, high moisture content, and a low calorific value, which have confined its wide and direct use. The humic acid extracted from the lignite type of coal has a high content of carbon, lower oxygen and nitrogen, very few carboxylic groups, and more aromatic groups. It also has cross-linking of ethylene and methylene between aromatic moieties [19]. As compared to humic acid from soil and peat, lignite humic acid contains long chains of saturated alkanoic acids [19,20]. However, the humic acid produced from bituminous and mature coal under mild oxidation produces humic substances and humins, and it is observed that few soluble non-alkali materials are present in bituminous coals [21].
Humic acid under an acidic environment is insoluble but, under alkaline and basic conditions, it becomes soluble. Various methods have been proposed for the extraction of humic acid from lignite, including physical, biological, and chemical methods [22]. The present study aims at the extraction of humic acid from indigenous lignite and bituminous coal, categorized as low and high rank, by using the alkali-acid method and characterization of precipitated humic acid by UV-VIS, FTIR, elemental analysis, and SEM-EDS. Furthermore, this study will be focused on the economic value and application of humic acid extracted from coal.
Reagents and Chemicals
All the solvents and reagents used in the study were of analytical grade and obtained from Sigma Aldrich and were used without purification. The solutions were prepared by using deionized water. The samples of lignite coal were collected from the Thar coal mine in Sindh and bituminous coal was collected from the Duki coal mine, Baluchistan, Pakistan. All the samples were collected in sterilized bags and stored in a dry place.
Coal Samples Preparation
Coal samples from the Thar and Duki coal mines, weighing almost 20-30 kg, were collected and transported to the Environmental Microbiology lab, Quaid-I-Azam University, Islamabad, Pakistan. For the experimental study, about 2 kg of each sample was selected using the coning and quartering method. The samples were crushed into fine particles using a mortar and pestle. The coal samples were then sieved through 60 mesh (0.25 mm), and about 1 kg of each was stored in a sterilized plastic bag. For proximate analysis, the coal samples were investigated using the standard ASTM method.
Acidic Pretreatment of Coal Using Nitric Acid (HNO3)
For pretreatment of the coal samples, the HNO3 working solution was prepared freshly. To prepare a 2% working solution, 65% concentrated HNO3 (laboratory grade) was used. About 50 g of each coal sample was oxidized with 100 mL of freshly prepared 2% HNO3 in a beaker, stirred gently for 1 h, and left for 24-48 h at 30 °C. The coal content was then filtered and washed repeatedly with distilled and deionized water at 8000 rpm for 5 min so that unreacted acid was properly washed out. The treated coal sample was dried in an oven at 40 °C for 2-3 h. The treated and washed coal was kept in sealed plastic bags for further use in the humic acid extraction studies.
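As a sanity check on the dilution step, the usual C1·V1 = C2·V2 relation gives the volume of concentrated acid required. The snippet below is purely illustrative arithmetic under that assumption.

```python
# Volume of 65% stock HNO3 needed to prepare 100 mL of a 2% working solution.
c_stock, c_work, v_work = 65.0, 2.0, 100.0   # %, %, mL
v_stock = c_work * v_work / c_stock           # ~3.1 mL of stock acid
print(f"Dilute {v_stock:.1f} mL of 65% HNO3 to {v_work:.0f} mL with water")
```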
Extraction of Humic Acid by KOH
The pretreated lignite and bituminous coal were mixed with 0.5%, 1.5%, 2.5%, 3.5%, and 4.5% KOH solutions, and the extractions were carried out for 24 h with continuous stirring. The reaction content was then filtered. The filtrate of each coal sample at the different KOH concentrations was kept in sealed bottles for further experimental study.
Extraction of Humic Acid Using NaOH
The method for extracting humic acid using NaOH was modified from [18]. Alkaline extraction of humic acid using NaOH is suitable at an industrial level, as the alkali is safe to use and non-toxic. It results in the production of humic fertilizers that help to improve soil aeration and aggregation and increase water holding capacity. First, 1 g of each lignite and bituminous coal was mixed with 100 mL of 0.1 M NaOH with continuous stirring at 20 °C for 24 h. The coal suspension was then centrifuged at 8000 rpm for 10 min and the supernatant was separated using Whatman no. 1 filter paper. The pH of the supernatant was adjusted to 1.8 using 6 M HCl. The supernatant was then left to precipitate for 24 h, after which the precipitate was collected by centrifugation at 8000 rpm for 15 min. The precipitated humic acid was washed with Milli-Q water 3 to 4 times, dried at 40 °C in an oven, and stored at 4 °C for further experimental study.
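The gravimetric yields reported throughout this study reduce to a simple mass ratio. The sketch below is a minimal illustration; the function name and example masses are hypothetical.

```python
# Gravimetric yield: dried humic acid precipitate mass over starting coal mass.
def humic_acid_yield(precipitate_g: float, coal_g: float) -> float:
    return 100.0 * precipitate_g / coal_g

# e.g. 0.578 g of dried humic acid from 1 g of treated lignite -> 57.8%
print(humic_acid_yield(0.578, 1.0))
```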
Humic Acid Determination Using Spectrophotometry
UV-Visible spectrophotometry was used to determine the absorbance ratios with a SPECORD 200 Analytic UV-Vis spectrophotometer (Jena, Germany). The humic content was dissolved in 0.05 M NaHCO3 solution (pH 8.4) at a humic acid concentration of 40 mg per liter. The solution was centrifuged at 7000 rpm at room temperature for 5 min. The sodium bicarbonate buffer was run as a blank, and the absorbance was read at wavelengths of 465 nm and 665 nm. The E4/E6 ratio was calculated to determine the degree of aromaticity.
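The E4/E6 calculation itself is a single ratio of the two absorbance readings. The snippet below is a minimal sketch; the absorbance values shown are hypothetical, chosen only to reproduce the order of magnitude reported in the Results.

```python
# E4/E6 humification index from absorbance at 465 nm (E4) and 665 nm (E6).
def e4_e6(abs_465_nm: float, abs_665_nm: float) -> float:
    return abs_465_nm / abs_665_nm

# Hypothetical readings reproducing the value reported for treated lignite.
print(round(e4_e6(0.561, 0.300), 2))  # ~1.87
```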
FTIR Analysis
For the identification of structural and functional groups present in the extracted humic acid, FTIR analysis was conducted. Humic acid was mixed thoroughly with 200 mg of dried KBr and pellets were obtained. The pellets were then analyzed with an FTIR Spectrum-65 (Perkin Elmer, Waltham, MA, USA) in the 4000-500 cm−1 region.
CHNSO Elemental Analysis
The elemental composition of humic acid was measured using a CHNS Analyzer (LECO TruMac Series, Saint Joseph, MI, USA) according to the ASTM standard. It was used to investigate the relative quantities of C, H, N, O, and S. Data were analyzed on an ash-free basis. Oxygen content was calculated by difference using the formula: O = 100 − (C + H + N + S).
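The oxygen-by-difference formula, together with the H/C and O/C atomic ratios discussed in the Results, can be expressed compactly. The sketch below assumes weight percentages on an ash-free basis; the input values are hypothetical.

```python
# Oxygen by difference plus H/C and O/C atomic ratios (wt% / atomic mass).
def oxygen_by_difference(c, h, n, s):
    return 100.0 - (c + h + n + s)

def atomic_ratios(c_wt, h_wt, o_wt):
    c_mol, h_mol, o_mol = c_wt / 12.011, h_wt / 1.008, o_wt / 15.999
    return h_mol / c_mol, o_mol / c_mol   # (H/C, O/C)

o = oxygen_by_difference(c=55.0, h=4.0, n=3.0, s=1.5)   # 36.5 wt% O
print(atomic_ratios(55.0, 4.0, o))
```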
SEM Analysis
A scanning electron microscope, model MIRA3 TESCAN (Japan), located at the Institute of Space and Technology, Pakistan, fitted with an energy dispersive X-ray analyzer for energy dispersive spectroscopy (EDS), was used to examine the morphology and elemental composition of the humic acid produced from native lignite and bituminous coal, and to assess the morphological deformation occurring after producing humic acid from HNO3 treated coal of both types.
Gravimetric Determination of Humic Acid Using Different KOH Concentrations
The HNO3 pretreated lignite and bituminous coal samples were treated with different percentages of KOH, i.e., 0.5%, 1.5%, 2.5%, 3.5%, and 4.5%, to determine the maximum yield of humic acid after alkali treatment. Figure 1 shows the effect of different concentrations of KOH on the percentage yield of humic acid extracted from lignite and bituminous coal. The maximum yield was obtained using 4.5% KOH: about 31.7% of humic acid was extracted from bituminous coal and 42.6% from lignite coal, while a 3.5% KOH concentration produced about 23.5% and 30.98% from bituminous and lignite coal, respectively. In one study by Zara et al. [23], the maximum yield of humic acid produced from lignite coal was obtained using a 3.5% KOH concentration. In the present study, 0.5%, 1.5%, and 2.5% concentrations of KOH produced 7.54%, 11.38%, and 17.98% yields of humic acid from bituminous coal, and 9.98%, 13.45%, and 21.89% from lignite coal, respectively. This shows that the 0.5%, 1.5%, and 2.5% concentrations of KOH did not produce the maximum content of humic acid for either type of coal. Zara et al. [23] also showed that 2.5% and 3% KOH concentrations did not produce the maximum yield of humic acid from lignite coal.
Table 1 shows the percentage yield of humic acid from both lignite and bituminous coal obtained using NaOH alkali treatment. The purity, quality, and properties of humic acid directly depend on the method and source of extraction, which in turn define their further application in industry and agriculture. Mixing coal with sodium hydroxide results in the formation of sodium cations in a hydroxide state that can replace the protons in the humic acid molecules, resulting in their activation and dissolution. This alkali extraction process converts insoluble humic acids into soluble salts, i.e., sodium humate [24]. Extraction using NaOH confirms the maximum yield of organic material [25]. In the present study, the humic acid yield from native lignite coal was 21.15% and from native bituminous coal 11.6%, while HNO3 pretreatment gave about 57.8% humic acid yield from lignite coal and 46.9% from bituminous coal. Adnan et al. [26] reported a maximum yield of 54.2% from HNO3 pretreated sub-bituminous coal, while Zara et al. [23] showed a 24.6% humic acid yield from lignite coal. Ehsan Sarlaki et al. [3] reported about 95% humic acid yield from lignite coal using NaOH alkali extraction and a membrane purification system. Shakiba et al. [22] reported about 24% humic acid yield by alkali extraction. One research study by Muhammad et al. [8] showed a maximum yield of 50.80% from HNO3 treated bituminous coal, and the same researcher showed a 60.60% yield of humic acid from lignite coal. During the oxidation process, the coal molecules gain additional acidic groups apart from degradation, due to which they become soluble under alkali conditions. The degraded products are mostly a variety of aliphatics, hydroxybenzoic acids, benzene carboxylic acids, humic acid, and humic substances. Shi kai et al. [17] obtained a high yield of humic acid by oxidizing coal with NaOH. Haider et al. [7] reported about 57% humic acid yield from lignite coal using alkali NaOH treatment.
UV-Vis Spectrophotometry of Produced Humic Acid
In spectroscopic studies, the produced humic acid is most commonly investigated by the E4/E6 ratio of absorbance at 465 nm and 665 nm. The E4/E6 ratio is also called an index of humification; it correlates the oxygen content of humic materials with the average molecular weight, and it decreases as the degree of condensation increases. It is widely used in studies of humic substances as a humification indicator (Table 2). In the present study, as shown in Table 2, native lignite coal showed an E4/E6 ratio of 1.503 and native bituminous coal 1.405, while HNO3 treated lignite coal showed a maximum E4/E6 ratio of 1.87 and treated bituminous coal 1.6607. This means that native (raw) lignite and bituminous coal had a high degree of aromaticity and molecular weight, as their E4/E6 ratios are lower than those of HNO3 treated coal. The HNO3 treated coal showed a higher E4/E6 ratio, which indicated a decrease in molecular weight and aromatic content and, ultimately, increased bioactivity of the molecule. Thus, HNO3 treated humic acid had increased bioactivity compared to native coal, which could be because the pretreatment broke down aromatic rings and structures and introduced some new functional groups into the acid molecule. The HNO3 attacks the carbon bonds present in the coal, resulting in the deformation of the coal structure by oxygenation, nitration, and other reactions. Adnan et al. [26] showed a higher E4/E6 ratio in raw coal than others, while Haider et al. [7] reported a lower E4/E6 ratio for native lignite coal.
FTIR Analysis
FTIR spectra of humic acid derived from native (raw) coal and HNO3 treated coal were obtained to determine the structural changes and the presence of functional groups in the produced humic acid [27,28].
The spectrum shown in Figure 2 corresponds to the humic acid produced from the native Thar lignite. The spectrum shows a peak in the range of 3300-3600 cm−1, which indicated the presence of OH groups and aliphatic primary amines. At 1630 cm−1, C=C stretching was observed, and aromatic C-H bending was detected in the range of 700-900 cm−1. At 1002 cm−1, the peak shows CO-O-CO stretching, which indicates the presence of an anhydride group. Figure 3 shows the IR spectrum of humic acid produced from HNO3 pretreated Thar coal. This spectrum shows a peak at 3263.69 cm−1 that indicated OH stretching and the presence of some OH groups. At 2921.84 cm−1, an intensive peak of the carboxyl group was observed that showed strong hydrogen stretching and vibrations. In the range of 1000-1300 cm−1, the presence of esters and phenol and stretching of C=C and C=O groups were observed. Similarly, C-H bending was observed in the range of 700-900 cm−1. Figure 4 shows the humic acid produced from Balochistan native (raw) coal. The presence of peaks in the range of 3300-3700 cm−1 indicated OH stretching and the presence of OH groups. The peak at 1718.82 cm−1 shows C=O stretching, indicating the presence of carboxylic groups, while the peak at 1618.82 cm−1 indicated C=C stretching and the presence of aromatic groups. C-H bending was observed in the range of 700-900 cm−1, while the presence of mineral matter was also observed in the range of 400-500 cm−1 in HNO3 pretreated coal-derived humic acid. Figure 5 shows the IR spectrum of humic acid derived from HNO3 pretreated bituminous coal. This spectrum shows a large number of peaks at different IR ranges. The peak at 3272.99 cm−1 indicated the presence of the OH group, while the peaks in the range of 2800-3000 cm−1 show asymmetric and symmetric stretching of aliphatic groups. At 2923.04 cm−1, N-H stretching was also indicated. Some aromatic group structures were observed in the range of 1500-1700 cm−1. Some phenolic, ester, and amine groups were revealed at 1533.08 cm−1, 1355.01 cm−1, and 1233.48 cm−1. In the range of 900-1100 cm−1, weak C-O stretching was observed, while C-H bending and the presence of mineral matter were observed at 400-700 cm−1. The HNO3 pretreatment oxidizes coal, breaks down certain aromatic rings, and loosens the structure of aromatic side chains. That is why the humic acid produced from HNO3 pretreated lignite and bituminous coal indicates the presence of more functional groups and shows various structural changes. Adnan et al. [26] reported very weak C-H bending in raw sub-bituminous coal, while a weak C-O shoulder was observed in the case of HNO3 treated coal. Similarly, in the present study, very weak C-O bending and stretching were observed in the case of humic acid derived from HNO3 treated bituminous coal. Adnan et al. [26] also showed the presence of carboxylic groups for HNO3 treated coal. Syahren et al. [29] also showed clear peaks at 1700-1720 cm−1 that indicated the C=O stretch, C=C aromatics at 1620-1630 cm−1, and distinct C-H and C=O bands at 1220-1250 cm−1. Similar bands were observed in the present study for humic acid derived from HNO3 pretreatment. Priyanka Shaha and Supriya Sarkar [9] indicated the absence of the OH stretch at 3040 cm−1 and the C=C bond at 1410 cm−1 for HNO3 treated lignite coal. Manoj et al. [30] showed the presence of OH groups in the range of 3500-3100 cm−1; that study also showed some N-H stretching in the range of 3100-3500 cm−1, while in the present study, N-H stretching peaks were observed in the 2600-2700 cm−1 range.
Elemental Analysis of Humic Acid
The results of the elemental composition of humic acid derived from lignite and bituminous native coal and nitric acid-treated coal are listed in Table 3. The carbon content was reduced in HNO3 treated coal compared to raw coal for both lignite and bituminous types. For the other constituents, such as nitrogen and sulfur, there was a slight difference in the case of HNO3 treated lignite coal. The oxygen content increased in HNO3 treated lignite coal because the oxidation by nitric acid added more oxygen-containing functional groups. In the case of bituminous coal, the carbon content of humic acid derived from raw coal was higher than that from HNO3 treated coal. There was also a reduction in sulfur content, which showed that desulphurization of the coal occurs in HNO3 derived humic acid, and the reduced carbon content indicated decarbonization. The nitrogen content was also higher than in raw coal because nitration during oxidation introduces nitrate as a functional group during HNO3 pretreatment of coal.
One study by Patti et al. [31] also reported nitrogen incorporation into HNO3 oxidized brown coal, with NO2 groups added during the oxidation process. MacPhee et al. [32] used TG-FTIR and reported that oxidation by HNO3 produced oxygen-containing functional groups that were not present in native coal. The increase in the percentage of nitrogen content is due to the nitration taking place during treatment. The same trend was also reported in [25]. Adnan et al. [26] found a higher carbon content in raw coal, followed by HNO3 treated sub-bituminous coal. Dong et al. [33] also reported the same trends with lignite coal.
The H/C atomic ratio was almost the same in the case of lignite coal, while bituminous coal showed a slight decrease in the H/C ratio, which indicated higher aromatic condensation and a higher molecular mass fraction. Adnan et al. [26] reported a narrow range of H/C ratios for sub-bituminous coal. There was a slight increase in the O/C ratio of lignite and bituminous raw coal, as well as nitric acid-treated coal. This shows that some oxygen-containing fraction is present as organic material, e.g., the COOH group. The range of O/C content is 0.33 to 0.40; Adnan et al. [26] reported 0.540 to 0.671 O/C content in sub-bituminous coal, and Ehsan et al. [3] found the lowest O/C content in lignite-derived humic acid as compared to standard humic acid. XU-Yun-long et al. [34] reported reduced carbon, hydrogen, and sulfur, and increased nitrogen and oxygen content. The coal oxidation resulted in a chemical reaction between nitric acid and the functional groups present in coal, i.e., oxidation of side chains producing esters, aldehydes, and ketones, while nitration and aromatic ring carboxylation also occurred.
One method, used by Kashif et al. [35], extracted humic acid from Bulgarian lignite coal by using NaOH, with HCl for precipitation of the humic acid, and obtained about an 83% yield of humic acid content. Another method, described by Hofrichter et al. [25], mixed south Moravian lignite coal with NaOH (0.5 mol/L) and Na4P2O7 (0.1 mol/L), followed by an HCl-HF solution (0.5%); the humic acid was finally precipitated by H2O2 addition and HNO3 treatment.
SEM Analysis
To characterize the humic acid produced from raw coal as well as from HNO3 treated lignite and bituminous coal morphologically, scanning electron microscopy was performed. The elemental distribution on the surface of each sample was also obtained by EDS, as shown in Figure 6. Figure 6a shows the SEM image of humic acid produced from raw lignite coal; the major elements observed in Figure 7a are C, O, Si, and Al, with some other minor elements such as S, Cl, Ca, Ti, and Fe. After treating raw lignite coal with HNO3, as shown in Figure 6b, the produced humic acid showed some structural and morphological changes along with visible differences in elemental composition. The elemental composition of HNO3 treated lignite coal, shown in Figure 7b, revealed the incorporation of nitrogen and oxygen as major elements, indicating the formation of the -NO2 group and some other new compounds. The porous structure on the surface indicates the deformation of some aliphatic and aromatic bonds present in the structure of coal. Minor elements observed included Al, Ca, and sulphur, which were reduced and dislodged, while chlorine, phosphorus, and potassium were also detected, indicating the formation of new elements produced as a result of the acidic treatment of coal with HNO3. Similarly, Figure 6c,d shows the humic acid produced from raw bituminous coal and HNO3 treated bituminous coal. The elemental composition shown in Figure 7c indicated the presence of major elements such as carbon, oxygen, and silicon. The presence of nitrogen in bituminous coal treated with HNO3 indicated the incorporation of nitrogen-containing groups such as -NO2. This showed that treatment of bituminous coal with nitric acid results in the breakdown of the toughest aromatic bonds and aliphatic side chains present in the structure of coal; accordingly, the produced humic acid also showed the presence of nitrogen on the surface by elemental analysis (Figure 7d). Some other elements also appeared to be present, such as Na, Mg, S, Ca, K, and chlorine. The acidic pretreatment destroyed the hydrogen bonds and Van der Waals forces, as well as the π-π bonds proposed by Piccolo [14] and coworkers. One study conducted by Chu-fang et al. [34] reported the introduction of chlorine and the dislodging of Al and Ca ions when treating coal with acid and producing humic acid from it. Some crystal- or needle-like structures were also seen on the humic acid produced from HNO3 treated bituminous coal, which clearly indicated that some new structures had been produced. Another study, conducted by Stefane et al. [16], showed SEM images of humic acid produced from lignite coal at different magnifications and also reported the presence of weak Van der Waals forces, hydrogen bonds, and a porous structure on the surface of the humic acid.
Conclusions
In the present study, two different types of coal, i.e., low rank and high rank, were used for extracting humic acid using different concentrations of KOH solution (0.5%, 1.5%, 2.5%, 3.5%, and 4.5%), and 0.1 M NaOH was used as an alkaline treatment for the production of organic matter (humic acid). The particle size of the coal was 60 mesh (0.25 mm), and the samples were agitated by continuous shaking for 24 h. The extracted humic acid was separated from the supernatant by centrifugation, dried at 60 °C, and analyzed using the gravimetric method, UV-Visible spectroscopy, FTIR, and SEM-EDS. According to the analysis, the 4.5% concentration of KOH gave the maximum yield of humic acid for both types of coal. The percentage yield of humic acid using NaOH for native lignite and bituminous coal was 21.15% and 11.6%, while for HNO3 treated lignite and bituminous coal it was 57.8% and 46.9%, respectively. FTIR results of HNO3 treated coal clearly indicated the presence of an N-H group peak at 2923.04 cm−1 in the case of bituminous coal and the introduction of the nitro group in lignite coal at 2921.84 cm−1, while both peaks were absent in the FTIR spectra of native coal. The elemental analysis of HNO3 treated coal showed a reduction in carbon content, a slight change in sulphur and nitrogen, and an increase in oxygen content, confirming that the oxidation process had occurred. The O/C ratio also increased in the case of HNO3 treated coal. The SEM-EDS analysis showed clear morphological and structural changes in the humic acids produced from HNO3 treated as well as native coal, and clear changes in the elemental composition of lignite and bituminous coal after treatment. The porous structure in the SEM images clearly indicated some breakdown of aliphatic and aromatic side chains by treatment with HNO3. On the basis of this study, it can be suggested that both lignite and bituminous coal can be used as raw feed to produce and extract humic acid at a commercial scale.
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Fish Scale-Derived Scaffolds for Culturing Human Corneal Endothelial Cells
Purpose To investigate the biocompatibility of fish scale-derived scaffolds (FSS) with primary human corneal endothelial cells (HCEnCs). Methods HCEnCs were isolated from 30 donor corneas in a donor-matched study and plated on precoated Lab-Tek slides (n = 15) and FSS (n = 15). Cell morphology, proliferation/migration, and glucose uptake were studied (n = 30). Hoechst, ethidium homodimer, and calcein AM (HEC) staining was performed to determine viability and toxicity (n = 6). The cell surface area was calculated based on calcein AM staining. HCEnCs were stained for ZO-1 (n = 6) to detect tight junctions and to measure cell morphology; Ki-67 (n = 6) to measure proliferating cells; and vinculin to quantify focal adhesions (n = 6). The formation of de novo extracellular matrix was analyzed using histology (n = 6). Results HCEnCs attach and grow faster on Lab-Tek slides compared to the undulating topography of the FSS. At day 11, HCEnCs on Lab-Tek slides grew to 100% confluence, while FSS were only 65% confluent (p = 0.0883), with no significant difference in glucose uptake between the two (p = 0.5181) (2.2 μg/mL in Lab-Tek versus 2.05 μg/mL in FSS). HEC staining showed no toxicity. The surface area of the cells in Lab-Tek was 409.1 μm² compared to 452.2 μm² on FSS, which was not significant (p = 0.5325). ZO-1 showed the presence of tight junctions in both conditions; however, hexagonality was higher (74% in Lab-Tek versus 45% in FSS; p = 0.0006) with significantly fewer polymorphic cells on Lab-Tek slides (8% in Lab-Tek versus 16% in FSS; p = 0.0041). Proliferative cells were detected in both conditions (4.6% in Lab-Tek versus 4.2% in FSS; p = 0.5922). Vinculin expression was marginally higher in HCEnCs cultured on Lab-Tek (234 versus 199 focal adhesions; p = 0.0507). Histological analysis did not show the formation of a basement membrane. Conclusions HCEnCs cultured on precoated FSS form a monolayer, displaying correct morphology, cytocompatibility, and absence of toxicity. FSS need further modification in terms of structure and surface chemistry before being considered as a potential carrier for cultured HCEnCs.
Introduction
The human cornea is the outermost, transparent tissue of the eye. It is the principal refractive element of the visual system, and its function depends mainly on its optical clarity. Human corneal endothelial cells (HCEnCs) are responsible for maintaining this transparency through a pump-and-leak mechanism [1]. To do so, this leaky barrier of hexagonally shaped cells allows passive diffusion of nutrients flowing from the anterior chamber to the corneal stroma and epithelium but simultaneously averts corneal edema by pumping excessive fluid back to the anterior chamber.
Due to a mitotic arrest in vivo after birth, the number of endothelial cells decreases throughout life [2]. However, this decay can be dramatically accelerated by trauma or several diseases. If the overall number of HCEnCs drops below a threshold of about 500 cells/mm², irreversible edema eventually arises, leading to an opaque cornea.
The only available treatment currently is corneal endothelial transplantation, termed endothelial keratoplasty (EK). In 2016, nearly 40% of donated corneas distributed by US eye banks were transplanted to treat endothelial dysfunction. Although EK has a high success rate in terms of visual rehabilitation and postoperative visual outcome, transplantations are often restricted by a shortage of corneal donor tissue [3].
In order to overcome this scarcity, alternative therapeutic approaches such as ex vivo expansion of HCEnCs are under investigation to enable HCEnCs transplantation as cell sheets or cell suspension [4][5][6][7]. Once HCEnCs from one donor eye can successfully be expanded, we would finally be able to overcome the current 1 : 1 ratio where one donor cornea is used to treat a single patient. Consequently, waiting lists would shorten significantly. In case of the cell sheet transplantation strategy, a scaffold is required which will act as a mechanical support (i.e., a surrogate basement membrane) that can sustain cell proliferation and phenotype. Multiple scaffolds have been reported as candidate membranes, and among these options, three different categories can be identified: (i) biological, (ii) synthetic, and (iii) biosynthetic substrates [5].
In 2010, Lin et al. proposed an oxygen- and glucose-permeable collagen scaffold derived from decalcified fish scales (Tilapia; Oreochromis mossambicus) that can be used in corneal regeneration [8]. Until now, preliminary in vitro studies have shown cytocompatibility of corneal epithelial cells on these heterogeneously patterned, biological scaffolds [9]. Their architectural features have been suggested as an important characteristic for corneal epithelial cell migration and growth. Moreover, their transparency and availability, that is, roughly 200 scales from one fish, make them an attractive biocompatible material for the generation of corneal epithelial cell grafts. Additional in vivo studies performed on rats and rabbits have demonstrated their potential as a deep anterior lamellar keratoplasty (DALK) alternative or to seal perforated corneas, respectively [10].
Although fish scale-derived collagen scaffolds (FSS) have been identified as a potential scaffold for ocular surface reconstruction, its potential to support HCEnC cultures has not yet been explored. If FSS enable early attachment and growth of HCEnCs, they could serve as a potential carrier in tissue engineering corneal endothelial grafts. This paper therefore investigates the potential of a fish scale-derived collagen scaffold to support the attachment and proliferation of primary HCEnCs. In addition, we evaluate its effect on cell viability and preservation of key proteins (i.e., ZO-1 tight junctions), which are characteristics for the HCEnC barrier formation.
Materials and Methods
2.1. Ethical Statement. Human donor corneas [n = 30, fifteen pairs] were collected from the Veneto Eye Bank Foundation (FBOV) with informed consent from the donors' next of kin to be used for research. The methods followed the tenets of the Declaration of Helsinki, and the tissues were used under the laws of the Centro Nazionale di Trapianti. The corneas were unsuitable for transplantation due to their low endothelial cell counts (<2200 cells/mm²) and thereby qualified as research grade, with no known additional complications or contraindications. All tissues were preserved in tissue culture medium at 31°C prior to use for experiments.
2.2. Donor Characteristics. The average donor age was 60.75 (±14.55) [range: 45-75] years, and the male : female ratio was 10 : 5. The postmortem time to preservation of the corneas was 16.54 (±5.89) hours. The tissues were preserved in tissue culture medium for 31.25 (±6.78) days prior to isolation of the cells. The average endothelial cell density (ECD) before isolation was 1965 (±202.83) cells/mm² in corneas obtained for Lab-Tek and 1970 (±191.76) cells/mm² for FSS. For the experiments, one cornea was used for 2 Lab-Tek wells of 0.7 cm² each, and the other cornea from the same donor (donor-matched study) was used for 1 fish scale of 13 mm diameter with a surface area of 1.32 cm². The corneas did not show any dead cells as determined using trypan blue staining before plating.
Processing and Characteristics of Fish Scale Scaffolds.
Tilapia fish scales were cleaned and acellularized using previously reported methods [8][9][10][11]. Briefly, the harvested fish scales were rinsed in distilled water and decellularized according to a four-step detergent and enzymatic processing, involving a stepwise protease, surfactant, and DNase and RNAse treatment, followed by a final surfactant treatment [12]. Acetic acid was used to increase the porosity of the scaffolds, followed by decalcification with nitric acid [8]. The resulting acellularized fish scales were rinsed extensively, stored, and transported in sterilized phosphate-buffered saline (PBS). FSS were then shipped to the FBOV labs from Body Organ Biomedical Corporation (Taipei, Taiwan) as acellularized scaffolds.
Each FSS was 13 mm in diameter with an average thickness of 100-120 μm. Tensile stress was 12.68 MPa (±9.53), and Young's modulus was 56.4 MPa (±21.91) with an elongation of 24.72 (±5.65)%. Water holding capacity of the FSS was 82% (±3.0) with an initial transparency of 92.67% within the visual spectrum (380-780 nm) as recorded by Body Organ Biomedical Corporation prior to shipping the FSS to FBOV labs. The surface topography of fish scales was observed using anterior iVue Optical Coherence Tomography (OCT) (OptoVue, California, USA).
Endothelial Cell Count and Donor Characteristics.
Cell death (%) was determined prior to isolation using 0.25% trypan blue (TB) (Thermo Fisher Scientific, New York, USA). Approximately 100 μL of TB was topically applied to stain the endothelial cells for 20 seconds, followed by washing with 1x PBS. Trypan blue-positive cells and the ECD of three random areas were counted by two operators before enzymatic digestion of the cells, using an in-built eyepiece reticule (10 × 10 grid of 1 mm² boxes) for inverted microscopy (Axiovert, Zeiss, Germany). Donor characteristics of the 15 donors (30 corneas in total) were obtained from the FBOV database to determine age, gender, postmortem time to preservation, cause of death, and duration of preservation.
2.6. Primary HCEnC Isolation.
HCEnCs were isolated from research grade donor corneas using a peel-and-digest method similar to previously published methods [7, 13-15], with limited modifications. Firstly, the corneas [n = 30] were washed in sterile PBS and Descemet's membrane with endothelium was dissected with fine forceps, similar to the stripping technique used for Descemet's membrane endothelial keratoplasty (DMEK). Secondly, the excised pieces were incubated in 2 mg/mL collagenase type 1 (Thermo Fisher Scientific, Rochester, NY, USA) solution for 2-3 hours at 31°C, 5% CO2. Once Descemet's membrane was digested, the solution was centrifuged for 5 minutes at 1000 rpm. The supernatant was removed, and the cells were resuspended in TrypLE Express (1x) for 10 minutes at 37°C (Life Technologies, Monza, Italy) to obtain a single cell suspension suitable for seeding. An overview of the performed experiments can be seen in Figure 1.
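The counting scheme above amounts to averaging per-field counts to estimate ECD and expressing trypan blue-positive cells as a percentage. The sketch below is a minimal illustration with hypothetical counts.

```python
# ECD estimate from per-field counts (cells per 1 mm^2 field) and % cell death.
def ecd_cells_per_mm2(field_counts):
    return sum(field_counts) / len(field_counts)

def percent_dead(tb_positive, total):
    return 100.0 * tb_positive / total

print(ecd_cells_per_mm2([1950, 1980, 1965]))  # ~1965 cells/mm2
print(percent_dead(0, 1965))                  # corneas showed no dead cells
```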
2.7. Cell Culture and Morphological Analysis.
Lab-Tek II chamber slides (8 × 0.7 cm² culture area) from Thermo Fisher Scientific (Rochester, NY, USA) and FSS (13 mm diameter) were used for culturing cells of each donor pair (n = 30; fifteen pairs). Per donor, two chambers of Lab-Tek slides and one FSS were coated with FNC coating mix (US Biological Life Sciences, Salem, Massachusetts, USA) for at least 30 minutes at 37°C. When seeding primary cells (passage 0), the seeding density for the Lab-Tek slide (control group) was 180,645 (±19,265) cells, divided between the two wells of the Lab-Tek slides, and 181,120 (±18,215) cells were plated on a single FSS. The cell suspension was added in a small volume on the concave side of the FSS and on the Lab-Tek slide and incubated at 37°C for 20 minutes to allow the cells to settle. An additional volume of proliferation medium was added once the cells showed attachment. Cultures were monitored and refreshed every alternate day until confluence. The percentage of confluency was measured manually from the area of outgrowth using the built-in reticule inside the eyepiece of the microscope, that is, the number of boxes filled with cells in a 10 × 10 reticule of 1 mm² each.
2.8. Glucose Uptake of the Cultured HCEnCs for Functional and Metabolic Analysis. Glucose uptake was determined from preserved medium that was stored at −20°C (n = 30) every alternate day. Quantitative analysis was performed using the D-Glucose HK kit (Megazyme International Ireland Ltd., Bray Business Park, Bray, County Wicklow, Ireland). With this, the amount of glucose utilized by the HCEnCs was determined, allowing the evaluation of metabolic activity over time. Positive controls in this experiment were cells grown on Lab-Tek slides, while negative controls were samples containing culture medium without cells.
2.9. Hoechst, Ethidium Homodimer, and Calcein Acetoxymethyl (AM) (HEC) Staining to Determine Live and Dead Cells. Cell cultures from three donors at confluence (day 11) were washed with PBS prior to the assay. The control sample consisted of isolated Descemet's membrane, with intentionally damaged areas to induce cell death. The HEC mastermix consisted of 5 μL of Hoechst 33342 (blue) (Thermo Fisher Scientific, Rochester, NY, USA), 4 μL of ethidium homodimer EthD-1 (red), and 2 μL of calcein AM (green) (Live/Dead viability/cytotoxicity kit, Thermo Fisher Scientific, Rochester, NY, USA) mixed in 1 mL of 1x PBS [17]. 100 μL of the final mastermix was applied to each sample. Images were subsequently analyzed with ImageJ. Three microscopic fields were selected for each evaluation (one central and two mid-peripheral). The cell surface area was determined on 10 cells per condition at 100x magnification using calcein AM and analyzed with "analyze particles" with size limits of 150-10,000 μm², as there were no background signal or large cell clusters. For ZO-1, the area of interest was selected and, using predefined commands in a macro that converted the image to overlay masks, the total number of cells was counted automatically, whereas hexagonal cells (with six borders) and polymorphic cells were counted based on cell structure in the selected area at 100x magnification. The macro was designed specifically for this study so that results could be obtained by simply running the algorithm in the ImageJ analysis. Ki-67-positive particles were analyzed at 100x magnification using the outline option, and watershed segmentation was applied where necessary. For vinculin, focal adhesion points of ten cells per sample were counted using binary masks, and the average number of focal adhesions was recorded for analysis. Data are expressed as the mean ± standard deviation (SD). A nonparametric Wilcoxon test for paired data using SAS statistical software was employed to check statistical significance between conditions, with p < 0.05 deemed significant.
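As an illustration of the quantification logic described above, the following is a minimal re-implementation in Python (the actual analysis used an ImageJ macro; the function names and the sample numbers here are hypothetical):

```python
# Illustrative re-implementation of the morphometry logic described above
# (the actual analysis used an ImageJ macro; names and data are hypothetical).
import numpy as np

def filter_cell_areas(areas_um2, lo=150.0, hi=10_000.0):
    """Keep detected particles within the 150-10,000 um^2 size window."""
    areas = np.asarray(areas_um2, dtype=float)
    return areas[(areas >= lo) & (areas <= hi)]

def hexagonality(border_counts):
    """Fraction of cells with exactly 6 borders (hexagonal morphology)."""
    counts = np.asarray(border_counts)
    return np.mean(counts == 6)

# Hypothetical measurements for 10 cells in one microscopic field:
areas = [320, 410, 95, 520, 11000, 450, 380, 610, 295, 470]
borders = [6, 6, 5, 6, 7, 6, 6, 5, 6, 6]
print(f"mean cell area: {filter_cell_areas(areas).mean():.1f} um^2")
print(f"hexagonal cells: {hexagonality(borders):.0%}")
```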
Results
3.1. Characteristics of the Fish Scale Scaffolds. On a montage of multiple OCT images, the scales did not appear to have a uniform thickness but were thinner peripherally (Figure 2(a)). It was also possible to detect the surface topography of the FSS with its distinct valleys and ridges (Figure 2(b)).
3.2. Morphology, Confluency, and Glucose Uptake. FSS displayed a nonhomogeneous surface architecture consisting of broad and narrow troughs and ridges, spokes, and a central flat region (Figures 3(a)-3(f)). Transparency of the FSS, assessed subjectively, remained unchanged when observed before and after HCEnC culture (Figures 3(a) and 3(g)). The cells showed improved adherence on areas with broad ridges, but also centrally, where the surface was flatter (Figure 3(h)). The growth rate of the cells in Lab-Tek was marginally higher compared to that on FSS: at day 11, cells covered 65% of the FSS, while the controls were completely confluent (p = 0.0883) (Figure 3(i)). Average glucose uptake at day 11 did not differ between FSS and control conditions (p = 0.5181), that is, 2.2 μg/mL in Lab-Tek versus 2.05 μg/mL on FSS (Figure 3(j)).
3.3. Cell Viability and Cell Area on FSS.
Triple labelling with HEC showed dead cells in red, nuclei in blue, and live cells in green. A human donor cornea used as a control to demonstrate the HEC staining showed dead cells (red), nuclei (blue), live cells (green), dying cells (blue without green), and the merged image (Figure 4(a)). Only a few apoptotic cells, identified manually by counting blue cells (nuclei) without green (cytoplasm) and marked with white arrows, were observed (Figure 4(a)). HCEnCs on Lab-Tek slides showed high viability, as shown in Figure 4(b). In agreement with the confluency data, HEC staining also showed that the cells were approximately 60% confluent on the FSS (Figure 4(c)), with almost 100% viability in both conditions. The cell area was determined on 10 cells per condition using calcein AM staining and ImageJ analysis.
The average cell area on the Lab-Tek slides was 409.1 μm² (±169.1), compared with 452.2 μm² (±131.1) on FSS; this difference was not statistically significant (p = 0.5325).
3.4. Immunostaining. ZO-1 tight junction protein was expressed in HCEnCs cultivated on both FSS (Figure 5(a)) and control (Figure 5(b)). HCEnCs cultured on Lab-Tek with FNC coating mix showed an average of 233.5 (±22.6) focal adhesions per cell (average of 10 cells counted in three microscopic fields) (Figure 5(g)), compared to 199.7 (±12.1) focal adhesions on FSS (Figure 5(h)); the difference approached statistical significance (p = 0.0507) (Figure 5(i)) at day 11. Initial investigative experiments showed that coating of the FSS was crucial for HCEnC attachment (data not shown).
3.5. Histological Analysis.
On whole mount control sections, periodic acid-Schiff (PAS) staining showed all corneal cell layers including Descemet's membrane and endothelium (Figures 6(a) and 6(b)). PAS staining showed that HCEnCs grew as a monolayer on FSS (Figure 6(c)). At day 11, the cells did not show the presence of their own basement membrane but only uniform distribution of corneal endothelial cells on the FSS (Figure 6(d)). This was further confirmed using collagen VI and laminin markers. No expression of collagen VI (Figure 6(e)) or laminin (Figure 6(f)) was observed on the FSS; however, Draq 5 showed the presence of cell nuclei on the FSS.
Discussion
One option to reduce the global donor corneal shortage is to expand the HCEnCs from a single cornea into multiple transplantable sheets. However, these sheets require a carrier for transporting the cultured cells for transplantation. Development of a scaffold for culturing and transplanting expanded HCEnCs would thereby create composite grafts similar to current Descemet's stripping automated endothelial keratoplasty (DSAEK) procedures, in which the donor endothelium with a residual layer of donor corneal stroma is inserted into the recipient's cornea using a tissue glider. The graft automatically unfolds when inserted in the anterior chamber, after which the donor stromal tissue attaches to the recipient stroma. We observed that the FSS were flexible enough to be folded and to unroll automatically without breaking, similar to DSAEK grafts. The FSS is primarily built of collagen type I, similar to the corneal stroma, so the scaffold is expected to attach in a similar manner. Moreover, the transparency of the FSS was acceptable, and it did not degrade in vivo in any of the previously published studies [8][9][10].
Fish scales are inherently calcified; however, decalcification increases their transparency and degree of flexibility, thus improving their properties as a corneal scaffold. Whether recalcification may occur following transplantation is difficult to ascertain in vitro; only long-term animal studies could give a clear answer about the possibility of calcium precipitation on the FSS. However, animal studies using the FSS as a stromal implant for 3 weeks reported no such phenomenon, which was confirmed in another rabbit study over a period of 6 months [11,18].
The scaffold proved to be nontoxic and supported endothelial cell proliferation, with an absence of dead/dying cells. HCEnCs adhered over the irregular substrate; however, we clearly observed regional differences in cellular proliferation. Cells showed better attachment on flatter areas than over the narrow and higher ridges. In the more central regions and on broad ridges, the morphology of the cells was similar to that of the control, whereas cells in the irregular regions showed a more stretched phenotype. We assume that the initial difficulty in attaching to the irregular surface and the dimensions of the ridges further impede the migration of the cells and thereby affect the formation of a confluent monolayer. These results corroborate the findings published by Rizwan et al. [19], who point out that one of the limiting factors affecting monolayer formation was the spacing of artificial guttae, basement membrane excrescences that are characteristic of Fuchs' endothelial dystrophy. When the spacing was too narrow, cells were not able to form a monolayer, whereas this did occur at a broader spacing. We made similar observations on the FSS, where cells preferentially attached to the broader ridges rather than the narrow spaces, as also described in Table 1.
In this study, the immunocytochemical staining was carried out on day 11, when the HCEnCs in the control condition reached 100% confluence. When cells become too confluent, their expression pattern can change; hence, the staining was performed at this point for the sake of identical conditions. Representative images of cell cultures were taken every alternate day. By measuring the surface area of colonies with a calibrated reticule, we objectively quantified the growth of endothelial cells by means of confluency. Traditional proliferation assays are based on cell metabolism and use a nonfluorescent dye that is added to a cell culture and, if metabolized, converted to a fluorescent dye; this signal then correlates with a certain number of cells. However, we did not know whether the metabolisms of cells grown on Lab-Tek and on FSS are equivalent. If cells on one substrate metabolize at a higher rate, the amount of converted dye will not correlate with the corresponding number of cells. By measuring glucose uptake and confluency separately, we confirmed this discrepancy. While the cells grown on Lab-Tek slides had become 90% confluent by day 9, they were only 50% confluent on the fish scales. However, glucose uptake was not significantly different, indicating a higher degree of glucose metabolism per cell for those grown on FSS. With distinct rates of metabolism, conversion-based proliferation assays therefore could not be used reliably in this study.
The cell surface area was not found to be significantly different between the two conditions. ZO-1, a tight junction protein associated with cell-cell interaction and one of the hallmarks of a HCEnC monolayer, is expressed appropriately in both conditions. This staining also allowed us to assess cell morphology, which revealed a higher degree of pleomorphism for HCEnCs when grown on FSS. Although cell areas were not significantly different and both the conditions were suitable for obtaining cell growth, the results highlight that similar cell area does not necessarily mean that the cells are hexagonal, which is an important parameter for HCEnC culture.
Vinculin, a membrane-cytoskeletal protein present in focal adhesions, is involved in the linkage of integrin adhesion molecules to the actin cytoskeleton. Staining for this protein allowed us to quantify the degree of cellular attachment with the substrate at a given time. Similar to observations with the light microscope, vinculin expression and thus attachment were higher on the Lab-Tek slides than on the FSS. As the control and FSS conditions are coated similarly, with collagen I and fibronectin, it is more likely that the surface topography is the main factor that adversely affects adhesion. Theoretically, the positive surface charge of the Lab-Tek, mimicking poly-L-lysine, could support the attachment to the slides even more. Similar to other research groups, we supplemented our proliferative medium with ROCK inhibitor since studies have reported that inhibition of ROCK signalling enhances the attachment of HCEnCs, hypothetically through the upregulation of vinculin [20,21]. During initial investigative experiments, we saw that cells did not attach without the FNC coating. With the additional coating of the FSS, cells could produce their integrins more rapidly, accelerating the concomitant focal adhesion formation and ECM secretion. After 2 weeks of culture, sectional PAS staining showed that these cells formed a monolayer and did not stratify. There was no detection of extracellular matrix deposition at this time point. However, ECM deposition by HCEnCs in vivo is also very low, with the thickness of Descemet's membrane in the elderly being reported to be around 16 μm [22,23]. The difference between FSS and human corneal Descemet's membrane is listed in Table 2.
Apart from attachment, HCEnC proliferation was higher in the control condition than on FSS. Nevertheless, we did not observe a difference in metabolic activity or Ki-67 staining, further supporting the noncytotoxic nature of the FSS and its ability to sustain cellular proliferation to a certain degree.
The domain of tissue engineering is growing with the development of cytocompatible and biodegradable materials. However, culturing HCEnCs and maintaining them on a scaffold remains challenging. The FSS has potential for culturing challenging cell types such as primary HCEnCs and, once decalcified, may make a suitable corneal substitute. However, in many aspects, our control conditions on plastic performed better than the FSS. Our FSS were surface modified using ECM proteins (fibronectin and collagen I coating mix), so the inherent capacity of the FSS for cellular adhesion is still debatable, but we do show adequate attachment and proliferation once coated. Hence, we conclude that the fish scale-derived scaffold in its current form may not be ideal for the development of tissue-engineered corneal endothelial constructs. However, this study can certainly aid the further development of the FSS substrate, or other scaffolds, towards one that is specifically suitable for HCEnC cultures. Further modification of the substrate, such as surface polishing to remove the irregular topography, overall thinning of the scaffold, and incorporation of functional groups such as fibronectin, could drastically improve its geometric and physical characteristics while also enhancing cell-matrix interactions, in order to develop the ideal endothelial cell carrier.
Conclusion
The affordable nature and wide availability of this biological scaffold may attract further research into FSS as a robust tissue engineering scaffold. However, although FSS could have a promising future after the suggested modifications, this model will have to meet regulatory requirements similar to those of an advanced therapy medicinal product (ATMP). This study is a proof of concept for culturing HCEnCs on FSS. While the scaffold possesses attractive properties and cytocompatibility with primary corneal endothelial cells, additional refinement would be desirable before testing in an animal model for cultivated endothelial transplantation [28].
Disclosure
The manuscript is part of a thesis presented in 2017 by Mohit Parekh to the University of Padova, Padova, Italy. | 5,852 | 2018-04-29T00:00:00.000 | [
"Biology",
"Medicine",
"Materials Science"
] |
A 40-nm CMOS Piezoelectric Energy Harvesting IC for Wearable Biomedical Applications
Introduction
There is a huge demand for wearable sensors, which are mainly used to monitor the health status of patients. Conventional batteries, which serve as the power supplies of these wearable sensors, are being replaced: they may leak when improperly handled or packaged, endangering the wearer. In addition, they cannot supply these wearables consistently due to limited battery capacity and the need for recharging [1]. Moreover, they occupy too much space and add weight to the wearable sensor platforms [2,3], making them uncomfortable to wear. As a substitute for batteries in wearable sensor platforms, energy harvesting is being pursued as a possible solution for powering these wearable sensors.
Meanwhile, human movements such as foot and leg motion (walking and running), finger motion (writing and typing), and muscle contractions (heart, lungs, and skeletal muscles) waste an ample amount of energy that can be harvested and converted to electricity. These biomechanical motions are commonly harvested by piezoelectric, electromagnetic, and triboelectric technologies [4,5]. Among the three, piezoelectric harvesting is the most widely used and researched: electricity is generated through the mechanical deformation of a piezoelectric material [4,6,7]. Its advantages over the other two technologies include higher energy density and output voltage [4,8]. The most widely known applications of piezoelectrics are implemented in shoes [5,9], where more energy is generated due to the weight exerted by the user standing on the device and the continuous activity of walking and running. However, the output voltage generated by the vibration of the piezoelectric material alone is not enough to supply or drive an electronic system [10]. A further advantage of piezoelectric harvesters is that they can be easily integrated and interfaced with electronic systems implemented on a chip for enhanced energy harvesting, so that the output voltage can be boosted [4,8,11,12].
Many related works have reported piezoelectric energy harvesting ICs for typical and wearable biomedical applications. A 0.5-µm piezoelectric energy harvester was developed for wireless temperature monitoring, capable of interfacing with wearables or implants whose data can be transmitted through the Internet (Internet of Things, IoT) [13]; it operates at a minimum input voltage of 0.51 V but generates an output voltage of only 3 V. In [8], a 130-nm CMOS full-bridge rectifier was used as the energy harvester, since the voltage drop across the full-bridge rectifier is lower than that of a traditional off-chip diode; its voltage conversion ratio, or pump gain, is 0.987, while the output power is 10.7 µW. Meanwhile, a piezoelectric energy harvesting circuit was developed to supply a wearable cannula for infant respiratory monitoring [14]; it has a pump gain of 0.987 but a higher output power of 11.1 µW. A buck-boost converter was used for an input voltage of 0.54 V amplitude [15]; its generated output voltage and power are 3.3 V and 57 mW, respectively. A 0.18-µm piezoelectric energy harvester for wireless sensor nodes was proposed in [16], with a possible generated output power and voltage of 94 µW and 1.8 V for an input voltage of 1 V. Lastly, a study in [17] presented a 0.5-µm CMOS vibrational energy harvester that has a pump gain of 3 but only operates at a minimum input voltage of 1 V.
In this study, a piezoelectric element made from polyvinylidene fluoride (PVDF) was used because its flexibility makes it wearable, comfortable, and non-intrusive [14], in contrast to the brittle piezoelectrics made from lead zirconate titanate (PZT) [18]. The PVDF element, shown in Figure 1, has a wire diameter of 10 µm and a wire length of 1 mm. It has a vibration frequency of 6.67 Hz and a maximum output voltage of 45 mV; the output voltage was increased to 120 mV through series connection of the fibers. It was placed on the insole of the shoe.
Based on the previous works presented, none has developed an energy harvesting circuit that can accommodate an input voltage below 0.5 V. Previously, we proposed a PVDF film-based energy harvesting circuit [3] for the detection of missing children and the elderly using a Bluetooth Low Energy (BLE) transceiver, which requires a power supply of at least 1 V. Only post-layout simulations were performed to evaluate that circuit, with no chip implementation, measurements, or theoretical analysis conducted. In this paper, an energy harvesting circuit for a low-voltage PVDF piezoelectric that accommodates an input voltage as low as 120 mV amplitude was realized for wearable biomedical applications. The TSMC 40-nm CMOS process was used for the implementation and fabrication of the energy harvesting IC. The fabricated chip was embedded inside the heel of a shoe for testing. It generates a maximum voltage of 1 V, which is enough to drive and supply low-power, low-voltage wearable sensor platforms and systems.
System Architecture of Energy Harvester
The piezoelectric energy harvesting circuit architecture for wearable biomedical applications is shown in Figure 2. It is composed of an AC/DC Converter, a Voltage Monitor, and a DC/DC Converter, which are presented in the following subsections.
AC/DC Converter
In piezoelectric energy harvesting circuits, AC/DC converters are needed, and there are many different implementations of AC/DC converter circuits, e.g., [19]. However, for efficiency and simplicity in practical applications of a PVDF energy harvesting circuit for wearable biomedical devices, a Voltage Multiplier (charge pump), shown in Figure 3, was selected in this study as the AC/DC Converter of Figure 2. This 5-stage Voltage Multiplier is widely used in energy harvesting to convert AC to DC signals. It is a switched-capacitor circuit that elevates a lower voltage to a higher value. Moreover, it provides a preliminary boost that facilitates the design of the DC/DC Converter, alleviating the problem of a too-small input voltage in the subsequent DC/DC Converter stage. Because the PVDF film's output voltages at PZ1 and PZ2 are very low, low-V_th transistors are utilized, reducing the voltage drop and boosting the voltage efficiently.
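As a rough illustration of why low-V_th devices matter at a 120 mV input, here is a back-of-the-envelope charge-pump estimate (a simplified ideal model with hypothetical per-stage drops, not a simulation of the fabricated multiplier):

```python
# Back-of-the-envelope estimate of an N-stage voltage multiplier output.
# Simplified ideal-charge-pump model: each stage contributes roughly
# (V_peak - V_drop), so device threshold drops dominate at millivolt-level
# inputs, motivating low-Vth transistors.
def multiplier_output(v_peak, v_drop, stages):
    per_stage = v_peak - v_drop
    return max(0.0, stages * per_stage)

V_PEAK = 0.120  # 120 mV PVDF amplitude (from the text)
for v_drop, label in [(0.30, "standard Vth (assumed)"), (0.05, "low Vth (assumed)")]:
    vout = multiplier_output(V_PEAK, v_drop, stages=5)
    print(f"{label}: estimated DC output = {vout*1000:.0f} mV")
```

With a standard threshold drop, the estimated output collapses to zero, while the assumed low-V_th drop yields a few hundred millivolts, consistent with the ~260 mV stored voltage reported in the measurement section.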
Voltage Monitor
Since the PVDF film generates a very low output voltage, energy must first be stored in C_st; the next stage is then driven by the large energy stored in C_st. As shown in Figure 4, V_st is monitored by a CMOS-based Schmitt trigger [20]. When V_st rises above the high switching voltage V_SPH, the power MOS switch M_sw in Figure 2 is switched on; at this moment, the DC/DC converter is started by an enable signal (V_sw_n = 1). When V_st falls below the low switching voltage V_SPL, a disable signal is sent by the Schmitt trigger (V_sw_n = 0) to turn off the DC/DC converter and restore energy in C_st.
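The following is a minimal behavioral model of this hysteretic monitor (illustrative only; the thresholds are taken from the measured values quoted later in the text, roughly 260 mV on and 237 mV off):

```python
# Minimal behavioral model of the Schmitt-trigger voltage monitor.
V_SPH, V_SPL = 0.260, 0.237  # assumed thresholds, from the measurement section

def monitor(v_st_samples):
    enabled = False
    out = []
    for v in v_st_samples:
        if not enabled and v > V_SPH:
            enabled = True      # V_sw_n -> 1: start DC/DC converter
        elif enabled and v < V_SPL:
            enabled = False     # V_sw_n -> 0: recharge C_st
        out.append(int(enabled))
    return out

samples = [0.10, 0.20, 0.265, 0.255, 0.245, 0.230, 0.25, 0.27]
print(monitor(samples))  # -> [0, 0, 1, 1, 1, 0, 0, 1]
```

The hysteresis window (V_SPH − V_SPL) prevents the converter from chattering on and off as C_st discharges.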
DC/DC Converter
The Boost DC/DC Converter is displayed in Figure 5 [21]. It is composed of Digital Logic, an On-time Mode Generator (T_on Generator), and an Off-time Mode Generator (T_off Generator). The PMOS switch P_sw replaces the diode, yielding a lower voltage drop and better efficiency; hence, the converter is a synchronous type rather than a diode architecture. Because of the very low output voltage of the PVDF, the bucket-fountain method [22] is used, in which energy is kept in C_st until it is fully charged to the preset voltage value, and the second-stage boost is then used to drive the wearable biomedical device. The Voltage Monitor determines whether the stored voltage is sufficient: if it is, the power transistor M_sw in Figure 2 is turned on, and a large enough voltage is generated through the DC/DC converter to drive the wearable biomedical device; otherwise, the system remains in a standby state waiting for the next startup. When the Voltage Monitor sends an enable signal (V_sw_n = 1), the DC/DC converter starts to operate. S1, S2, and S3 are generated by the Digital Logic, with the complements of S2 and S3 denoted S2_b and S3_b, respectively. A comparator is interfaced to the Digital Logic for output adjustment.
T on Generator
The T_on Generator in Figure 5 is shown in Figure 6. It directs the V_n signal in Figure 5 to maintain the high pulse width (V_n = 1), which governs the ON time of the power NMOS N_sw. During T_on mode, S1 is high, S2 and S3 are low, MN3 is turned on, and MP4 is in cutoff. A fixed current I_on discharges C_on until V_con < V_in is reached. During T_off mode, S1 and S3 are low, S2 is high, and MN3 and MP4 are in cutoff before V_con and V_in become equal. Figure 7 shows the timing diagram in T_on mode.
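To first order, the on-time follows from the constant-current discharge of C_on; this is a textbook estimate assuming an ideal current source and comparator (the initial voltage V_con,0 is not specified in the text):

$$ I_{on} = C_{on}\,\frac{dV_{con}}{dt} \;\Rightarrow\; T_{on} \approx \frac{C_{on}\,\big(V_{con,0}-V_{in}\big)}{I_{on}}, $$

and, analogously, the off-time follows from charging C_off at a fixed current until V_coff exceeds V_in, so both durations are set by a capacitor, a reference current, and a voltage window.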
T off Generator
The T_off Generator in Figure 5 is shown in Figure 8. It directs the V_p signal to maintain the low pulse width (V_p = 1) for the ON time of the power PMOS transistor P_sw, and it generates charging and discharging currents through the current mirror. During T_on mode, where S1 is high and S2 and S3 are low, transistors MN7, MP10, and MN11 are in cutoff. Conversely, during T_off mode, where S1 and S3 are low and S2 is high, MN11 is in cutoff while MN7 and MP10 are on; C_off is therefore charged with a fixed current until V_coff > V_in is reached. Figure 9 shows the timing diagram in T_off mode.
Digital Logic
The Digital Logic in Figure 5 controls the timing sequence of the signals in the DC/DC converter, so that the converter follows a predefined function and operates correctly. Figures 10 and 11 show the flowchart and the timing diagram, respectively, of the Digital Logic timing control in steady state. There are three modes: idle, on-time (T_on), and off-time (T_off). Looking at Figures 5 and 10, the DC/DC converter starts in idle mode, with the Digital Logic in its initial state: V_sw_n is low while S3 and V_con are high. The power MOS transistor M_sw in Figure 2 turns on when a sufficiently high voltage appears across C_st, making V_sw_n high. When S1 (connected to the Gate Buffer of N_sw) and V_sw_n are high, and S2 (connected to the Gate Buffer of P_sw) is low, the system enters T_on mode. At this time, the inductor L stores energy as I_L flows through N_sw. The T_on generator controls the duration of T_on mode: it drives S1 low after a predefined time, when V_con drops below V_in, switching the system from T_on to T_off mode.
When S2 is high and S1 is low, the system enters T_off mode. C_load charges as I_L flows through P_sw. The T_off generator controls the duration of T_off mode: it drives S2 low and S3 high after a predefined time, when V_in drops below V_coff, returning the system from T_off to idle mode. Figure 12 shows the Reference Current Generator, which provides the stable reference current I_bias needed in Figures 6 and 8. The duty cycle contributed by the power switches N_sw and P_sw depends on the Reference Current Generator. Since I_bias can be affected by supply-voltage and temperature variations, proportional-to-absolute-temperature (PTAT) and complementary-to-absolute-temperature (CTAT) circuits are implemented to suppress the influence of temperature on accuracy and stability. In the PTAT circuit, MN1 and MN2 are driven into subthreshold operation while MP3 and MP4 are biased into strong inversion; the PTAT current generator consists of transistors MN1, MN2, MP3, and MP4, and its generated current I_pt0 is mirrored by MP3 and MP9 into the resulting I_pt. The CTAT current generator is composed of MP5, MN6, MN7, and MP8; its I_nt0 is mirrored by MP8 and MP9 into the output I_nt. The output current I_bias is then the sum of I_nt and I_pt, which is insensitive to temperature variations. The two comparators in the T_on and T_off Generators regulate the durations of the on-time and off-time modes, respectively. For these two comparators, propagation delay and noise have less effect on function and performance than for the output-regulating Comparator in Figure 5; therefore, their quiescent current can be fixed at a smaller value to consume less power, at the cost of a larger propagation delay. In this design, a two-stage open-loop comparator, shown in Figure 13, is used for the AC/DC converter's output adjustment.
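To make the three-mode control flow concrete, the following is a minimal behavioral sketch of the Digital Logic state machine described above (signal names follow the text; the comparator outcomes are passed in as booleans, and the event sequence is hypothetical, not measured):

```python
# Behavioral sketch of the idle -> T_on -> T_off -> idle control loop.
IDLE, TON, TOFF = "idle", "T_on", "T_off"

def step(state, v_sw_n, v_con_below_vin, v_in_below_vcoff):
    """One transition of the Digital Logic, per the flowchart description."""
    if state == IDLE and v_sw_n:            # Voltage Monitor enables converter
        return TON                          # S1 high: N_sw on, inductor stores energy
    if state == TON and v_con_below_vin:    # T_on generator expires
        return TOFF                         # S2 high: P_sw on, C_load charges
    if state == TOFF and v_in_below_vcoff:  # T_off generator expires
        return IDLE                         # S3 high: wait for next enable
    return state

state = IDLE
events = [dict(v_sw_n=True, v_con_below_vin=False, v_in_below_vcoff=False),
          dict(v_sw_n=True, v_con_below_vin=True,  v_in_below_vcoff=False),
          dict(v_sw_n=True, v_con_below_vin=False, v_in_below_vcoff=True)]
for e in events:
    state = step(state, **e)
    print(state)   # -> T_on, T_off, idle
```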
Measurement Results and Discussion
The TSMC 40-nm CMOS process was used to implement and fabricate the PVDF piezoelectric energy harvesting IC. All transistor lengths are at the minimum size, and the NMOS-to-PMOS width ratio used in this chip is 2:1. The layout view of the core, which has an area of 0.1223 × 0.1065 mm², is shown in Figure 14, and Figure 15 shows the die photo of the chip, which has an area of 0.7097 × 0.7097 mm². The core is covered by a dummy layer, as seen in the die photo, because the foundry requires a minimum metal density; hence, only the pads and the wire bonds are visible in the photo. The fabricated chip was connected to a PCB board with SMA connectors to reduce the loading effect. A power supply (ABM PRT3230; ABM, Hsinchu City, Taiwan) provided VDD (0.9 V) and GND. A signal generator (Tektronix AFG3252; Tektronix, Johnston, OH, USA) provided input signals for testing: it generated a 120 mV-amplitude sinusoid mimicking the excitation and output voltage of the PVDF film, for easy testing and validation of the proposed energy harvesting circuit. An oscilloscope (Teledyne LeCroy WaveRunner 610Zi; Teledyne LeCroy, Thousand Oaks, CA, USA) captured the input and output waveforms and was also used for the measurement and characterization of the PVDF film's output voltage.
The output signal generated by the PVDF film and the V_st and V_sw_n signals are shown in Figure 16. A maximum vibration frequency of 100 kHz was used for this testing. From Figure 17, it can be seen that at the beginning the capacitor C_st is charged toward 260 mV. When V_st reaches at least 260 mV, the Voltage Monitor sends an enable signal (V_sw_n = 1 V) to the DC/DC Converter to start the second boosting phase, which is the T_on mode. Figure 18 summarizes the output specifications: the output voltage, current, and power are 1 V, 4.2 mA, and 4.2 mW, respectively. When V_st falls below the turn-off value V_on (237 mV), V_sw_n returns to 0 and the system waits for the next enable signal. The resulting waveforms of the Digital Logic timing control in steady state are shown in Figure 18 and are consistent with the designed timing-control waveforms in Figure 19. Figure 20 shows the waveform of the output voltage V_out generated by the piezoelectric energy harvesting IC, which is 1 V. For a PVDF-generated voltage of 120 mV, a regulated output voltage of 1 V is obtained. The developed piezoelectric energy harvesting IC can operate with a PVDF vibration frequency of 6.67 Hz to 100 kHz. The measurement results agree with the post-layout simulation results of the prior work [3]. With this, the IC can provide a stable power supply to low-power, low-voltage wearable biomedical devices. The proposed piezoelectric energy harvesting IC is compared with prior studies in Table 1, where the pump gain is defined as the ratio of V_out to the PVDF-generated voltage. As shown in Table 1, the proposed piezoelectric energy harvester IC achieves the largest pump gain among all the prior energy harvesting circuits cited in the literature.
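From the quantities quoted above (1 V regulated output for a 120 mV PVDF amplitude), the pump gain of the proposed IC works out to

$$ \text{pump gain}=\frac{V_\text{out}}{V_\text{PVDF}}=\frac{1\ \text{V}}{0.12\ \text{V}}\approx 8.3, $$

which is the figure compared against the prior works in Table 1.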
Conclusions
A 40-nm CMOS piezoelectric energy harvesting IC has been presented. A Voltage Multiplier alleviates the problem of the very low input voltage from the PVDF piezoelectric by boosting this low voltage, and the DC/DC converter further improves the boosted voltage by following the Digital Logic's predefined timing across three operating modes, namely idle, T_on, and T_off. With this, low-power, low-voltage wearable biomedical devices, which normally require a 1 V supply, can be operated stably by the developed piezoelectric energy harvesting IC. Among the prior studies, the proposed energy harvesting IC exhibits the highest pump gain and accommodates the lowest piezoelectric-generated voltage.
"Engineering",
"Medicine"
] |
Performance model of community food business development in East Nusa Tenggara Province
Community Food Business Development (PUPM) is one of the Indonesian government's programs to achieve food security. Farmers act as producers in PUPM, so they have an important role in realizing food security. This study aims to examine the performance model of PUPM based on the characteristics of production areas, consumption, and entrepreneurship in realizing food security in East Nusa Tenggara (NTT) Province according to farmers' perceptions. The sample was 93 farmers from six Gapoktans that act as Community Food Business Institutions (LUPM). The data analysis techniques used are descriptive statistics and non-parametric statistics, namely Partial Least Squares (PLS). The results show that the PUPM performance model based on farmers' perceptions has not been fully able to realize food security in NTT Province, because it has only reached a sufficient level: the food availability aspect makes a very small contribution of 0.01%, while the aspects of access, utilization, and stability have been fulfilled. Therefore, to realize food security in NTT through the PUPM performance model, the food availability aspect needs to be improved.
Introduction
Food is a basic need of every human being, so the fulfilment of food needs is a manifestation of the fulfilment of human rights. Food contributes to food security and sustainability, so comprehensive studies related to food are necessary, especially for farmers, who play a dual role as both producers and consumers [1,2].
Because food is a basic human need, the government is obliged to establish a food policy to ensure the fulfilment of food for all of society. Government policies related to food are based on the applicable Food Law, namely Law Number 18 of 2012, which mandates that the fulfilment of food as a basic human need be carried out fairly, equitably, and sustainably. Based on this Law, food security is one of the important goals that needs to be realized.
Availability, access, utilization, and stability are the focal concerns of food security [3]. To achieve food security, the government implements the Community Food Business Development Program (PUPM), which is expected to fulfil every dimension of food security for the community. The implementation of PUPM activities involves several parties: the Association of Farmer Groups (Gapoktan), which carries out its function as a Community Food Business Institution (LUPM) and acts as producer; the Farmers' Shop, which acts as distributor; and low-income rice buyers at the Farmers' Shops, who act as rice consumers.
Farmers, as producers, are an important aspect of realizing food security, because farmers act not only as producers but also as consumers. The ever-changing seasons influence food availability [4]. Farmers also play a role in providing food for the community and raw materials for the food processing industry [5]. However, the low purchasing power of farmers is one of the factors behind farmers' inability to provide food for their own households [6].
NTT Province is one of the provinces that still faces food security [7] and food insecurity [8] problems. The PUPM activity, the Indonesian government's effort to achieve food security by involving farmers, consumers, and business actors, has so far only been able to fulfil the dimensions of food access and price stability; the dimension of food availability has not been fulfilled [9]. To overcome these food security problems, the policies carried out in each region need to be adjusted to the characteristics of that region so that they are on target [10].
Based on the description above, it is clear that farmers have an important role as producers of food, especially the staple food rice. The objective of this research is to examine the performance model of PUPM based on the perceptions of rice farmers in order to realize food security in NTT Province. The novelty of this research lies in studying the PUPM performance model focused specifically on the perceptions of rice farmers in NTT. A similar study was carried out by [9], but it was based on the perceptions of farmers, consumers, and business actors together, with the performance model analysed jointly over a diverse sample of 219 respondents. The present study focuses on farmers' perceptions because farmers have a dual role as both food producers and food consumers, so that if the food needs of farmers are fulfilled, their food production can also be expected to fulfil the needs of the community. This research is therefore useful for the government as policy maker, for Gapoktans, and for business actors in the food sector, especially rice.
Research site
This study focused on six Gapoktans in six districts designated by the government to carry out the role of LUPM, namely the Tunmuni Gapoktan in Kupang, Roda Mandiri in North Central Timor, Eka Tua in Southwest Sumba, Sinar Usaha in West Manggarai, Rentung in Manggarai, and Ine Pare in Ende. The Gapoktans acting as LUPM are located on the three major islands of NTT, namely Sumba, Timor, and Flores.
Size of sample and respondents
The population consists of 1263 rice farmers in the six Gapoktans. The sample size was determined using Slovin's formula (Sevilla et al. 2007), yielding a sample of 93 farmers, distributed proportionally across the Gapoktans.
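For reference, Slovin's formula gives the sample size n from the population size N and a margin of error e; the text does not state e, but assuming the commonly used e = 10% reproduces the reported sample:

$$ n=\frac{N}{1+Ne^2}=\frac{1263}{1+1263\times(0.1)^2}\approx 92.7\;\Rightarrow\;93. $$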
Analysis method
The indicators of the research variables, arranged according to the research objectives, are used to measure the variables (for example, whether all of the production is sold, part of the production is sold, or all of the production is kept for the farmer's own consumption). The data analysis techniques used in this study are descriptive statistical analysis and non-parametric statistics, namely a variance-based approach known as Partial Least Squares (PLS). PLS is a variance-based structural equation analysis method that can test the measurement model and the structural model at the same time. This study uses PLS analysis because it can predict the model for theory development.
The PUPM performance model based on the perception of rice farmers using PLS is shown in Figure 1.
Characteristics of rice farmers in East Nusa Tenggara
The characteristics of rice farmers in East Nusa Tenggara (NTT) are presented in Table 2. Based on these data, it can be seen that rice farmers in NTT need to optimize land use properly to provide for the food needs of their families. The number of dependents in the farmer's family is one of the factors that influence the food needs of the farmer's household, although the productive age of the rice farmers and their land ownership can be factors that support the success of rice farming activities.
Validity and reliability test
A valid instrument is one that correctly measures the intended quantity; validity is assessed by correlating the score of each instrument item with the total score of all question items. If the correlation coefficient exceeds 0.3, the question or statement in the instrument is declared valid. Reliability, in turn, reflects consistency; a good reliability coefficient corresponds to a Cronbach's alpha value above 0.6.
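As an illustration of the reliability criterion, here is a minimal computation of Cronbach's alpha using the standard formula (the item scores below are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = instrument items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical Likert-scale answers from 6 respondents on 4 items:
scores = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
          [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.3f}  ->  {'reliable' if alpha > 0.6 else 'not reliable'}")
```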
The validity test of the research instrument on the rice farmer respondents showed that the correlation coefficient for each question indicator is a strong construct, with values > 0.3; thus the measuring instrument used in this study meets validity requirements. The reliability test showed Cronbach's alpha values > 0.6, so the variables Characteristics of Production Areas (X1), Consumption (X2), Entrepreneurship (X3), and PUPM Performance (Y) are acceptable, with reliability values ranging from 0.650 to 0.936.
Evaluation of the measurement model (outer model)
Evaluation of the measurement model (outer model) was carried out on the model based on the perceptions of rice farmers. This study has four latent variables, namely the characteristics of the production area (X1), consumption characteristics (X2), entrepreneurial characteristics (X3), and PUPM performance (Y). Because the measurement model uses reflective indicators, the convergent validity, discriminant validity, and composite reliability of the indicators were evaluated.
Based on Figure 1, the indicator for the amount of product sold (X13), with an outer loading of 0.652, and the indicator for the source of the product consumed (X22), with an outer loading of 0.697, are invalid, because their outer loadings are < 0.700 [11,12]. These indicators were removed from the model, the outer model was tested again, and the results are shown in Figure 2. Evaluation of the discriminant validity of the measurement model with reflective indicators in this study used the cross-loading values, the average variance extracted (AVE), and the square root of the AVE (root AVE). Discriminant validity is assessed based on the cross-loadings with the constructs: a cross-loading value is said to be valid if the correlation of each indicator with its own construct is greater than with the other constructs and the value is > 0.700 [11,12], meaning that the latent construct predicts its own indicators better than other constructs do. The AVE and root AVE indicate whether the indicators used explain the variables they form better than other indicators do. If the root AVE of each latent variable is greater than its correlations with the other latent variables, the instrument is also said to be discriminant valid. Discriminant validity testing can be explained as follows:
1) Cross loading
The cross-loading calculations for the constructs of production area characteristics, consumption, and entrepreneurship show that the cross-loading values on all indicators are > 0.700, so all indicators used are discriminant valid.
2) Average variance extracted (AVE) and root AVE.
The calculated AVE and root-AVE values for the variables production area characteristics, consumption, entrepreneurship, and PUPM performance are all above the tolerance limit of 0.500, so the instrument for each variable is discriminant valid [11,12]. Table 3 shows the AVE and root-AVE values of the research variables based on the perceptions of rice farmers. The reliability of a construct with reflective indicators is tested using composite reliability and Cronbach's alpha; these values are considered good if they are > 0.60, indicating that discriminant validity has been achieved [11,12]. The composite reliability and Cronbach's alpha tests of this study's measurement model show that all tested variables are reliable, so the latent variables used have good composite reliability and high reliability values. All instruments measuring rice farmers' perceptions of the characteristics of the production area, consumption, entrepreneurship, and PUPM performance meet the criteria and are feasible for measuring all latent variables, and can subsequently be used in the evaluation of the inner (structural) model. Table 4 shows the composite reliability and Cronbach's alpha values of the research variables based on the perceptions of rice farmers.
Evaluation of the structural model (inner model)
The evaluation of the structural model (inner model) aims to examine the relationships between the latent constructs (causal paths) through the estimated path coefficients and their significance levels, in order to test the stated hypotheses. The structural model of this study was analysed using bootstrapping and evaluated by examining the R-square value (R²) from the goodness-of-fit test and the Q-square value (Q²) from the predictive-relevance test. The value of Q² is based on the coefficients of determination (R²) of the endogenous variables and measures how well the observed values are reproduced by the model. Q² lies in the range 0 < Q² < 1; the closer to 1, the better the model [11,12].
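For reference, the predictive-relevance statistic commonly used in PLS is computed from the R² values of the endogenous constructs; with a single final endogenous construct it reduces to that construct's R², which appears to be the computation used here:

$$ Q^2 = 1-\prod_i\big(1-R_i^2\big) \;\xrightarrow{\ \text{single construct}\ }\; Q^2 = 1-(1-R^2)=R^2=0.514. $$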
The results of the analysis of the coefficients of determination (R²) of the structural model based on the perceptions of rice farmers are presented in Table 5. The production area characteristics variable is explained by the consumption and entrepreneurial characteristics variables to the extent of 0.235, or 23.5%, with the rest explained by other factors not examined. The entrepreneurial characteristics variable is explained by the consumption characteristics variable to the extent of 0.173, or 17.3%, with the rest explained by other factors not examined. The PUPM performance variable is explained by the production area, consumption, and entrepreneurship characteristics variables to the extent of 0.514, or 51.4%, with the rest explained by other factors not examined. According to Ghozali (2008), an R² of 0.514 is classified as a moderate or sufficient model; thus the production area, consumption, and entrepreneurship characteristics variables are moderately able to explain PUPM performance. Based on the coefficient of determination (R²) of the endogenous variable PUPM Performance (Y), a Q² value of 0.514 is obtained, so the structural model based on the perceptions of rice farmers shows acceptable fit and strong predictive relevance, because the value is > 0.35 [11,12]. This means that the latent variables in the structural model can predict the model well and can be used to test the hypotheses of this study. Based on the results of the structural model analysis, hypothesis testing was carried out by examining the estimated path coefficients, with significance assessed at the 95% confidence level (p-value < 0.05) and with t-statistic > t-table = 1.662, in which case the proposed hypothesis is accepted.
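To illustrate how bootstrapping yields the t-statistics used in these hypothesis tests, here is a minimal generic resampling sketch for a single path (slope) coefficient (not tied to any particular PLS package; the farmer scores are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t(x, y, n_boot=1000):
    """t-statistic of a simple path (slope) coefficient via resampling."""
    n = len(x)
    slope = lambda xs, ys: np.polyfit(xs, ys, 1)[0]
    est = slope(x, y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondents with replacement
        boots.append(slope(x[idx], y[idx]))
    return est / np.std(boots, ddof=1)       # t = estimate / bootstrap SE

# Hypothetical standardized scores for 93 farmers:
x = rng.normal(size=93)                      # e.g., consumption characteristics
y = 0.4 * x + rng.normal(scale=0.9, size=93) # e.g., PUPM performance
t = bootstrap_t(x, y)
print(f"t-statistic = {t:.2f}  (significant if > 1.662)")
```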
The path coefficients and hypothesis tests are shown in Table 6 and Figure 3. The three exogenous variables, namely the characteristics of the production area (X1), consumption (X2), and entrepreneurship (X3), have positive path coefficients with t-statistics > 1.662 and p-values < 0.05 with respect to the endogenous variable, PUPM performance; these three variables thus have a direct and significant effect on the PUPM performance variable (Y). The results also show the relationships among the three exogenous variables: the consumption characteristics variable (X2) has a direct and significant effect on the production area characteristics variable (X1), with t-statistic = 6.916 > 1.662 and p-value = 0.000 < 0.05, and a direct and significant effect on the entrepreneurial characteristics variable (X3), with t-statistic = 5.276 > 1.662 and p-value = 0.000 < 0.05. The entrepreneurial characteristics variable (X3) has no direct or significant effect on the production area characteristics variable (X1), because it has a negative path coefficient with t-statistic = 0.793 < 1.662 and p-value = 0.562 > 0.05.
The path coefficient analysis, the t-statistic critical values, and the p-values also serve to answer the proposed hypothesis. The structural model analysis based on the perceptions of rice farmers provides empirically strong evidence to accept the proposed hypothesis, namely that the characteristics of the production area, consumption, and entrepreneurship simultaneously affect the performance of PUPM in NTT Province. From Table 7, it can be seen that the role of PUPM in meeting the food availability aspect is still very small, covering only 0.01% of the rice consumption needs of NTT in 2017. In terms of food access and utilization, 99.18% of the rice from PUPM could be accessed and utilized by low-income people. In terms of stability, there is only a small gap between the market price and the price at TTI, namely IDR 1,100. Thus, the existing PUPM performance model has been able to realize regional food security in NTT in terms of access, utilization, and price stability, while the availability aspect plays a very small role.
Conclusion
Based on the results of the analysis and discussion, it can be concluded that the PUPM performance model based on producers' perceptions has not been fully able to realize food security in NTT Province, because it has only reached a sufficient level (R² = 0.514). The food availability aspect makes a very small contribution, namely 0.01%. Regarding access and utilization, 99.18% of the rice from the Gapoktans was distributed to TTI to meet consumer needs, thus fulfilling these aspects. Regarding stability, the difference between the market price and the price at TTI is small, fulfilling the price stability aspect. Therefore, to realize food security in NTT Province through the PUPM performance model, the food availability aspect needs to be improved and its performance enhanced.
"Computer Science"
] |
Constraints on the two-pion contribution to hadronic vacuum polarization
At low energies hadronic vacuum polarization (HVP) is strongly dominated by two-pion intermediate states, which are responsible for about $70\%$ of the HVP contribution to the anomalous magnetic moment of the muon, $a_\mu^\text{HVP}$. Lattice-QCD evaluations of the latter indicate that it might be larger than calculated dispersively on the basis of $e^+e^-\to\text{hadrons}$ data, at a level which would contest the long-standing discrepancy with the $a_\mu$ measurement. In this Letter we study to which extent this $2\pi$ contribution can be modified without, at the same time, producing a conflict elsewhere in low-energy hadron phenomenology. To this end we consider a dispersive representation of the $e^+e^- \to 2\pi$ process and study the correlations which thereby emerge between $a_\mu^\text{HVP}$, the hadronic running of the fine-structure constant, the $P$-wave $\pi\pi$ phase shift, and the charge radius of the pion. Inelastic effects play an important role, despite being constrained by the Eidelman-Lukaszuk bound. We identify scenarios in which $a_\mu^\text{HVP}$ can be altered substantially, driven by changes in the phase shift and/or the inelastic contribution, and illustrate the ensuing changes in the $e^+e^-\to 2\pi$ cross section. In the combined scenario, which minimizes the effect in the cross section, a uniform shift around $4\%$ is required. At the same time the analytic continuation into the space-like region as well as the pion charge radius are affected at a level that could be probed in future lattice-QCD calculations.
This situation has triggered renewed interest in the consequences of large changes to HVP elsewhere, especially for global electroweak fits due to its impact on the hadronic running of the fine-structure constant α [46][47][48][49]. These analyses have shown that to avoid a significant tension with electroweak precision data, the changes to the hadronic cross sections need to be concentrated at low energies, at least below 2 GeV, a scenario indeed indicated by Ref. [39].
In previous work [47][48][49] changes to the hadronic cross sections were considered as a whole, with specific assumptions on the energy dependence. However, if the changes are concentrated in the low-energy region, it is clear that the most relevant absolute effect will occur in the dominant $2\pi$ channel, since the required relative changes in the subleading channels would become prohibitively large. In this region, the $2\pi$ channel is essentially elastic and dominated by the $\rho$ resonance. The relevant hadronic matrix element, the pion vector form factor (VFF), is strongly constrained by analyticity and unitarity, which imply that below 1 GeV it is essentially determined by the $P$-wave $\pi\pi$ phase shift [8], which is in turn constrained by analyticity, unitarity, and crossing symmetry, taking the form of Roy equations [50][51][52][53]. The main conclusion of the analysis in Ref. [8] is that the VFF below 1 GeV can be described in terms of a handful of parameters, which can all be determined by a fit to the $e^+e^-\to 2\pi$ data. The fact that these data, which have now reached a remarkable level of precision, typically below 1%, can be well described by this highly constrained representation is a nontrivial test of their quality. Within this framework it is possible to address the question which changes become possible without violating analyticity and unitarity and without incurring other tensions elsewhere, besides those with the $e^+e^-\to 2\pi$ cross-section data. To this end, we first of all determine what changes in the parameters of the dispersive representation may generate the desired change in $a_\mu^\text{HVP}$. With the same set of parameters we then calculate the $P$-wave $\pi\pi$ phase shifts, the hadronic running of $\alpha$, as well as the charge radius of the pion, and thereby establish correlations among all these quantities.
Finally, we identify scenarios in which significant changes to HVP remain possible despite these independent constraints on the pion VFF. The comparison of the resulting predictions for the $e^+e^-\to 2\pi$ cross section to data allows us to quantify by how much the experimental cross sections would need to be changed to accommodate such an increase in $a_\mu^\text{HVP}$.
The pion vector form factor
The HVP contribution to the anomalous magnetic moment of the muon, expressed in terms of the $e^+e^-\to\text{hadrons}$ cross section, reads [54,55]
$$a_\mu^\text{HVP}=\frac{\alpha^2}{3\pi^2}\int_{s_\text{thr}}^\infty \frac{ds}{s}\,K(s)\,R_\text{had}(s),\qquad R_\text{had}(s)=\frac{3s}{4\pi\alpha^2}\,\sigma(e^+e^-\to\text{hadrons}),$$
with a known kernel function $K(s)$. With the pion VFF $F_\pi^V(s)$ defined as the matrix element of the electromagnetic current $j_\mu^\text{em}$,
$$\langle\pi^\pm(p')|j_\mu^\text{em}(0)|\pi^\pm(p)\rangle=\pm(p'+p)_\mu\,F_\pi^V\big((p'-p)^2\big),\qquad(6)$$
the $2\pi$ contribution becomes
$$a_\mu^{\pi\pi}=\frac{\alpha^2}{3\pi^2}\int_{4M_\pi^2}^\infty\frac{ds}{s}\,K(s)\,\frac{\sigma_\pi^3(s)}{4}\,\big|F_\pi^V(s)\big|^2,$$
where $\sigma_\pi(s)=\sqrt{1-4M_\pi^2/s}$. Similarly, the two-pion contribution to the hadronic running of $\alpha$, evaluated at $M_Z^2$, is determined by $F_\pi^V(s)$. In both cases, the integration threshold becomes $s_\text{thr}=4M_\pi^2$, and radiative corrections to the cross section are implemented in such a way that vacuum polarization is removed, but final-state radiation (FSR) included. Since Eq. (6) defines the matrix element in pure QCD, this implies that FSR corrections need to be included in the final step, see Ref. [8] for further details. In addition, we consider the correlation with the pion charge radius,
$$\langle r_\pi^2\rangle=\frac{6}{\pi}\int_{4M_\pi^2}^\infty ds\,\frac{\text{Im}\,F_\pi^V(s)}{s^2},\qquad(9)$$
which, contrary to $a_\mu^\text{HVP}$ and $\Delta\alpha^{(5)}_\text{had}$, is also explicitly sensitive to the phase of $F_\pi^V(s)$. In the elastic region, where $2\pi$ is again the only relevant intermediate state, $F_\pi^V(s)$ is strongly constrained by analyticity and unitarity. If the elastic region extended all the way to infinity, the solution to the unitarity and analyticity constraints would be given by the Omnès factor [56]
$$\Omega_1^1(s)=\exp\left\{\frac{s}{\pi}\int_{4M_\pi^2}^\infty ds'\,\frac{\delta_1^1(s')}{s'(s'-s)}\right\},\qquad(10)$$
with the $P$-wave $\pi\pi$ scattering phase shift $\delta_1^1(s)$. This phase shift, in turn, is strongly constrained by $\pi\pi$ Roy equations [50][51][52][53], which further limits the permissible changes in $F_\pi^V(s)$, see Refs. [57][58][59][60][61][62][63][64] for representations that exploit this intimate connection between the VFF and $\pi\pi$ scattering. Below 1 GeV inelastic effects are small but, at the level of precision necessary here, have to be taken into account. To do this we multiply the fully elastic Omnès factor (10) by two additional factors, as in Refs. [8,58,59],
$$F_\pi^V(s)=\Omega_1^1(s)\,G_\omega(s)\,G_\text{in}^N(s),\qquad(11)$$
where $G_\omega(s)$ accounts for the isospin-violating $3\pi$ cut, which is completely dominated by $\rho$-$\omega$ mixing, and the $4\pi$ cut is expanded into a conformal polynomial
$$G_\text{in}^N(s)=1+\sum_{k=1}^{N}c_k\big(z^k(s)-z^k(0)\big),$$
where the conformal variable
$$z(s)=\frac{\sqrt{s_\text{in}-s_c}-\sqrt{s_\text{in}-s}}{\sqrt{s_\text{in}-s_c}+\sqrt{s_\text{in}-s}}$$
permits inelastic phases above the $\pi\omega$ threshold $s_\text{in}=(M_{\pi^0}+M_\omega)^2$. The parameter $s_c$ is the value of $s$ mapped to the origin, $z(s_c)=0$, and is varied around $-1\,\text{GeV}^2$. To ensure the correct threshold behavior, the $c_k$ are related by an additional constraint that removes the $S$-wave singularity. In total, the dispersive representation from Ref. [8] then involves the following free parameters: first, the solution of the $\pi\pi$ Roy equations is determined once the phase shifts at $s_0=(0.8\,\text{GeV})^2$ and $s_1=(1.15\,\text{GeV})^2$ are specified, so that $\delta_1^1(s_0)$ and $\delta_1^1(s_1)$ are free fit parameters. Second, $G_\omega(s)$ depends on the $\omega$ pole parameters as well as the overall strength of $\rho$-$\omega$ mixing. Third, there are $N-1$ free parameters in $G_\text{in}^N(s)$ to describe inelastic effects.
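As a numerical illustration of the Omnès construction (Eq. (10)), the following sketch evaluates the integral at a space-like point, where the integrand is non-singular so no principal value is needed; the phase shift is a toy $\rho$-resonance model, not the Roy-equation solution used in the actual analysis:

```python
# Minimal numerical sketch of the Omnes factor (Eq. (10)) at space-like s.
import numpy as np
from scipy.integrate import quad

M_PI, M_RHO, GAMMA_RHO = 0.13957, 0.7752, 0.1478  # GeV (illustrative values)
S_THR = 4 * M_PI**2

def delta11(sp):
    """Toy P-wave pi-pi phase shift rising from ~0 to ~pi through the rho."""
    return np.pi / 2 + np.arctan((sp - M_RHO**2) / (M_RHO * GAMMA_RHO))

def omnes(s):
    integrand = lambda sp: delta11(sp) / (sp * (sp - s))
    val, _ = quad(integrand, S_THR, np.inf, limit=200)
    return np.exp(s / np.pi * val)

print(f"Omega(-0.1 GeV^2) ~ {omnes(-0.1):.3f}")  # < 1: the form factor decreases at space-like s
```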
The results for the phase shifts from a fit to VFF data are given in Eq. (14) of Ref. [8], but for the purpose of this work it is crucial to understand within which ranges they can be constrained without relying on $e^+e^-\to 2\pi$ data (or $\tau\to\pi\pi\nu_\tau$). In principle, one could even consider indirect constraints that arise, via the Roy equations, from low-energy data in crossed channels, such as $K_{e4}$ data [65–67], but here we simply quote, in Eq. (15), the results from the partial-wave analyses of Ref. [68], where our values, extracted from the global fit to $e^+e^-\to 2\pi$ data, are shown in brackets for comparison. The parameters in $G_\omega(s)$ do not need to be considered further, because either one would have to be changed beyond any plausible range to produce a relevant effect in $a_\mu^\text{HVP}$. Finally, if several free parameters in the conformal polynomial are introduced, the resulting inelastic phase in general leads to unacceptably large violations of Watson's final-state theorem [70]. A quantitative phenomenological bound, Eq. (17), can be formulated based on the ratio of non-$2\pi$ to $2\pi$ hadronic cross sections for isospin $I=1$, e.g., for the total phase $\psi$ of the VFF [71,72]. This Eidelman–Łukaszuk (EŁ) bound shows that inelastic effects below the $\pi\omega$ threshold are indeed negligible, and limits the size of the inelastic phase above it. In practice, we use the implementation of the EŁ bound from Ref. [8], but note that these details are of limited importance in the present context: once the EŁ bound becomes active, the increase in the $\chi^2$ is rather steep, so that the excluded parameter space is essentially insensitive to the exact implementation of the EŁ bound.
Changing HVP
We start from the main results of Ref. [8], where the representation (11) is fit to a combination of the data sets of Refs. [73–85], leading to a two-pion contribution to $a_\mu^\text{HVP}$ below 1 GeV of [8]
\[
a_\mu^{\pi\pi}\big|_{\leq 1\,\text{GeV}} = 495.0(1.5)(2.1)\times 10^{-10} = 495.0(2.6)\times 10^{-10},
\]
where the first error is the fit uncertainty (inflated by $\sqrt{\chi^2/\text{dof}}$) and the second error includes all systematic uncertainties of the representation (11). The central configuration uses $N-1=4$ free parameters in the conformal polynomial. Due to the sensitivity of the radius sum rule (9) to the phase of the VFF, fits with too many free parameters in the conformal polynomial tend to become unstable for $\langle r_\pi^2\rangle$, because the phase needs to be extrapolated above the energy up to which the EŁ bound can be used in practice to constrain the size of the imaginary part. For this reason, in Ref. [8] the central evaluation of $\langle r_\pi^2\rangle$ was obtained with $N-1=1$, but the full variation with $N$ was kept as a systematic uncertainty, which dominates the uncertainty assigned to the final result of Ref. [8]. Here, we use as reference point the value for $N-1=4$ [8], where the error refers to the fit uncertainty only. Finally, the fit configuration with $N-1=4$ leads to the corresponding two-pion contribution to the hadronic running of $\alpha$, $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)$.

Starting from the central fit results, we now modify the contribution to $a_\mu^\text{HVP}$ by including in the fit an additional hypothetical "lattice" observation of $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$. The uncertainty for this input could be chosen arbitrarily, but a tiny uncertainty of, e.g., $0.1\times 10^{-10}$ essentially forces the output for $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ to be very close to the input.¹ At the same time, the fit finds the parameter values that minimize the tension with the cross-section data (a toy sketch of this penalty-fit strategy follows the list of scenarios below). We consider the following three scenarios:

(1) "Low-energy" scenario: we fix all parameters of the dispersive representation of the VFF to the central fit results with $N-1=4$ without "lattice" input for $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$, apart from the two phase-shift parameters $\delta_1^1(s_0)$ and $\delta_1^1(s_1)$, which are used as free parameters in a fit to data and "lattice" input for $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$.
(2) "High-energy" scenario: we fix all parameters apart from the parameters c k in the conformal polynomial.
(3) Combined scenario: all parameters are used as free fit parameters.
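The toy sketch of the penalty-fit strategy common to all three scenarios reads as follows. Everything here is illustrative: a Breit-Wigner stand-in for the VFF, pseudo-data in place of the $e^+e^-\to 2\pi$ measurements, and two fit parameters instead of the dispersive parameter set of Ref. [8]; only the structure of the $\chi^2$, with the hypothetical "lattice" datum entering as a sharp Gaussian penalty, mirrors the procedure described above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

M_PI, M_MU, ALPHA = 0.13957, 0.10566, 1 / 137.036

def kernel_K(s):  # standard QED kernel, as in the earlier sketch
    return quad(lambda x: x**2 * (1 - x) / (x**2 + (1 - x) * s / M_MU**2), 0, 1)[0]

def F2(s, m, g):  # toy |F_pi^V(s)|^2: plain rho Breit-Wigner
    return abs(m**2 / (m**2 - s - 1j * m * g))**2

def a_mu(m, g):  # two-pion master-formula integral up to 1 GeV
    f = lambda s: kernel_K(s) / s * 0.25 * (1 - 4 * M_PI**2 / s)**1.5 * F2(s, m, g)
    return ALPHA**2 / (3 * np.pi**2) * quad(f, 4 * M_PI**2, 1.0, limit=200)[0]

# pseudo-data standing in for the e+e- -> 2pi cross-section measurements
s_pts = np.linspace(0.35, 0.95, 25)
data = np.array([F2(s, 0.775, 0.149) for s in s_pts])
err = 0.01 * data  # ~1% errors, roughly the precision of the modern 2pi data

def chi2(p, a_lat=None, sig_lat=0.1e-10):
    model = np.array([F2(s, *p) for s in s_pts])
    c2 = np.sum(((model - data) / err)**2)
    if a_lat is not None:  # hypothetical "lattice" datum as a Gaussian penalty
        c2 += ((a_mu(*p) - a_lat) / sig_lat)**2
    return c2

target = a_mu(0.775, 0.149) + 18.5e-10  # "lattice" input shifted upwards
fit = minimize(lambda p: chi2(p, a_lat=target), x0=[0.775, 0.149], method="Nelder-Mead")
print("tension with the pseudo-data at the shifted solution:", chi2(fit.x))
```

The tiny penalty width pins the fit output to the "lattice" input, so the remaining data-only $\chi^2$ directly quantifies the induced tension, which is the logic behind Fig. 9 below.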
We are interested in the region of parameter space that allows for a significant upward shift in $a_\mu^{\pi\pi}$. For definiteness, we take $\Delta a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}} = 18.5\times 10^{-10}$ as reference point, which corresponds to the difference between Eqs. (2) and (4). The dependence of the VFF on the two free phase parameters $\delta_1^1(s_0)$ and $\delta_1^1(s_1)$ is intertwined with the solution of the Roy equations for the phase $\delta_1^1(s)$, which in turn determines the Omnès function (10). In contrast, the dependence on the parameters of the conformal polynomial is much more direct, as the constraint that removes the $S$-wave singularity is a linear relation among the parameters $c_k$. Therefore, the VFF is linear in the parameters $c_k$, and the same is true for the contribution to the charge radius, while $a_\mu^{\pi\pi}$ and $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)$ are quadratic in the conformal parameters $c_k$. However, in the relevant parameter range the non-linearities prove to be very small.
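Schematically (our notation), since $G_\text{in}^N$ is linear in the $c_k$, any quantity built from $|F_\pi^V|^2$ is at most quadratic in them,
\[
a_\mu^{\pi\pi}[c] = a^{(0)} + \sum_k b_k\, c_k + \sum_{k,l} Q_{kl}\, c_k c_l ,
\]
with the quadratic coefficients $Q_{kl}$ numerically small in the relevant range, whereas $\langle r_\pi^2\rangle$, being linear in $F_\pi^V$ itself, contains only the constant and linear terms.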
In order to further restrict the possible variants in scenarios (2) and (3), we first investigate the role of the EŁ bound in the context of variations of $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$.
Constraints due to the EŁ bound
The EŁ bound (17) provides an additional restriction on the permissible parameter space that is independent of the two-pion cross-section measurements. Using the implementation of Ref. [8] and the data compilation of Ref. [72], this constraint leads to a steep rise of the $\chi^2$ function unless the inelastic phase stays small. To illustrate this effect, we consider scenario (2) and fit configurations with $N-1 = 1,\ldots,4$ free parameters in the conformal polynomial. Starting from the central fit results, we vary the input value for $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$. The impact of the EŁ bound on the $\chi^2$ is shown in Fig. 1, as a function of the fit output $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$. We find that the bound severely restricts the possible changes in $a_\mu^{\pi\pi}$ for $N-1=1$: inducing larger shifts with only a single free parameter in the conformal polynomial automatically leads to a significant effect in the inelastic phase that violates the EŁ bound, thus excluding such a scenario. With two free parameters in the conformal polynomial, the EŁ bound permits larger changes in $a_\mu^{\pi\pi}$, but still imposes a restriction. To evade the EŁ bound for large changes in $a_\mu^{\pi\pi}$, more freedom in the parameterization is required, and indeed the situation changes if we consider three or more free parameters in the conformal polynomial, see Fig. 1.
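To illustrate why the inelastic phase reacts so directly to the $c_k$, the following sketch evaluates the phase of the conformal polynomial above the $\pi\omega$ threshold and converts any excess over a bound into a schematically steep $\chi^2$ penalty. The bound value used here is a placeholder, not the data-driven EŁ bound of Refs. [71,72], and the actual implementation of Ref. [8] differs in detail (which, as argued above, hardly matters for the excluded region).

```python
import numpy as np

S_IN = (0.1350 + 0.7827)**2  # pi-omega threshold (M_pi0 + M_omega)^2 [GeV^2]
S_C = -1.0                   # s_c, varied around -1 GeV^2 in the text

def z(s):
    """Conformal variable; sqrt(S_IN - s) turns imaginary above threshold."""
    root = np.sqrt(S_IN - s + 0j)
    a = np.sqrt(S_IN - S_C)
    return (a - root) / (a + root)

def G_in(s, c):
    """Conformal polynomial 1 + sum_k c_k (z(s)^k - z(0)^k)."""
    return 1.0 + sum(ck * (z(s)**k - z(0.0)**k) for k, ck in enumerate(c, start=1))

def chi2_EL(c, psi_max_deg=5.0, s_grid=np.linspace(0.85, 1.0, 16)):
    """Schematic penalty: ~0 while the inelastic phase |arg G_in| stays below
    a placeholder bound, rising steeply once the bound is exceeded."""
    psi = np.degrees([np.angle(G_in(s, c)) for s in s_grid])
    excess = np.maximum(0.0, np.abs(psi) - psi_max_deg)
    return float(np.sum(excess**2))

print(chi2_EL([0.1, -0.1]))  # small coefficients -> small inelastic phase -> ~0
```

With a single coefficient, any sizeable shift drives the phase through the bound; with two or more, combinations along an anti-correlated direction keep the phase small while still moving $a_\mu^{\pi\pi}$, which is the mechanism visible in Fig. 2.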
In order to better understand this effect, we consider in some detail the case $N-1=2$. The fit to data alone fixes central values for the two coefficients $c_2$ and $c_3$. Varying the two parameters $c_{2,3}$ away from these central fit results, we find that the EŁ bound gives a contribution to the $\chi^2$ that results in a strong anti-correlation between the permissible values of the two free parameters. This is illustrated in Fig. 2, where we show the contours for $\chi^2_{\text{EŁ}} \in \{0.1, 1, 10\}$ in the $c_2$–$c_3$ plane. In the close-up plot, we also overlay a heat map for the resulting value of $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$. Accordingly, for two free parameters in the conformal polynomial the EŁ bound alone no longer excludes very large shifts in $a_\mu^{\pi\pi}$, as shown by the ellipses in Fig. 2. However, large parts of the $\chi^2_{\text{EŁ}}$ ellipse are in strong tension with the cross-section data. Minimizing the total $\chi^2$ in scenario (2) results in the brown dashed path in Fig. 2, which corresponds to the brown curve shown in Fig. 1. For even more free parameters, $N-1>2$, the situation remains qualitatively similar: the EŁ bound again strongly correlates the free parameters of the conformal polynomial, essentially imposing one linear constraint, but the values of $a_\mu^{\pi\pi}$ that can be reached are no longer bounded. Therefore, in the following we will only consider fit variants with $N-1=3$ and $N-1=4$, where the EŁ bound is easily fulfilled even for large shifts in $a_\mu^{\pi\pi}$.
Correlations with $\Delta\alpha^{(5)}_\text{had}$ and $\langle r_\pi^2\rangle$
We now turn our attention to the correlations among the three quantities derived from HVP: the two-pion contribution to the anomalous magnetic moment of the muon, $a_\mu^{\pi\pi}$, the pion charge radius, $\langle r_\pi^2\rangle$, and the two-pion contribution to the hadronic running of $\alpha$, $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)$. We vary the hypothetical "lattice" input for $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$, perform the fits according to the three scenarios defined in Sect. 3, and compute the resulting output values for the three quantities. The results in Figs. 3 and 4 show the correlations of $a_\mu^{\pi\pi}$ with $\langle r_\pi^2\rangle$ and $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)$, respectively, as induced in each of the scenarios.
If the changes in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ are induced only by variations of the two phase-shift parameters $\delta_1^1(s_0)$ and $\delta_1^1(s_1)$, they have little impact on the charge radius $\langle r_\pi^2\rangle$, see Fig. 3. Hence, in practice, changes of $a_\mu^{\pi\pi}$ induced by these parameters cannot be detected by a precision measurement of $\langle r_\pi^2\rangle$. However, a scenario where the changes in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ are induced by shifts in the parameters $c_k$ of the conformal polynomial generates large shifts in $\langle r_\pi^2\rangle$ and could, at least in principle, be constrained by additional information on the charge radius of the pion. At present, lattice determinations of the charge radius [86,87] have not yet reached the precision that could exclude these shifts: the current lattice uncertainties cover the entire plot range in Fig. 3, but future progress on the determination of the charge radius could further constrain the allowed parameter range. Interestingly, the combined scenario (3), where all parameters are allowed to vary, leads to the largest effect in the pion charge radius, even slightly larger than the effect in scenario (2). By definition, this is the scenario with minimal tension with the cross-section data, but Fig. 3 shows that this comes at the expense of the largest shift in the charge radius.
In contrast to the pion charge radius, all scenarios lead to very similar correlations with the hadronic running of $\alpha$, as shown in Fig. 4. A shift in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ by $18.5\times 10^{-10}$ corresponds to a shift in $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)|_{\leq 1\,\text{GeV}}$ between $1.2\times 10^{-4}$ and $1.4\times 10^{-4}$, as shown in Fig. 4.² The existence of such a correlation emerges because we do not allow for arbitrary changes in the hadronic cross section: while in general the two quantities need not be correlated, due to the different energy dependence of their kernel functions, we find that a correlation does arise if only changes in the $\pi\pi$ channel are considered, as allowed by analyticity and unitarity constraints, while trying to minimize the tension with the $\pi\pi$ cross-section data.

² This shift is slightly smaller than the $1.8\times 10^{-4}$ estimated in Ref. [47] if the relative changes occur below 1.94 GeV but are otherwise energy independent. Shifts of this size violate the bound on $\Delta\alpha^{(5)}_\text{had}(M_Z^2)$ derived in Ref. [88]. Since this bound was derived on the basis of assumptions (a dim-6 operator as sole origin of the shift in $\Delta\alpha^{(5)}_\text{had}(M_Z^2)$ and an arbitrary scale choice when converting the derivative of the HVP function to $\Delta\alpha^{(5)}_\text{had}(M_Z^2)$), we have to conclude that these assumptions are not tenable. The result for $\Delta\alpha^{(5)}_\text{had}(M_Z^2)$ indicated by Ref. [39] leads to the same conclusion.
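The origin of the correlation can be made explicit (schematically, in the conventions above, and neglecting FSR corrections): both shifts are integrals of the same $\delta|F_\pi^V(s)|^2$ against fixed positive weights,
\[
\Delta a_\mu^{\pi\pi} = \int_{4M_\pi^2}^{1\,\text{GeV}^2} ds\; w_a(s)\,\delta|F_\pi^V(s)|^2 , \qquad
\Delta\Bigl[\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)\Bigr] = \int_{4M_\pi^2}^{1\,\text{GeV}^2} ds\; w_\alpha(s)\,\delta|F_\pi^V(s)|^2 ,
\]
with
\[
w_a(s) = \left(\frac{\alpha m_\mu}{3\pi}\right)^{\!2} \frac{\hat K(s)}{s^2}\,\frac{\sigma_\pi^3(s)}{4} , \qquad
w_\alpha(s) = \frac{\alpha M_Z^2}{3\pi}\,\frac{1}{s\,(M_Z^2-s)}\,\frac{\sigma_\pi^3(s)}{4} .
\]
Once $\delta|F_\pi^V|^2$ is confined below 1 GeV, the ratio of the two shifts can only vary within the band spanned by $w_\alpha(s)/w_a(s)$ over this region, which is what Fig. 4 shows numerically.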
Impact on the phase shift and cross section
In scenario (1) we only allow the two phase-shift parameters $\delta_1^1(s_0)$ and $\delta_1^1(s_1)$ to deviate from the central fit results to data. If only the phase at $s_0 = (0.8\,\text{GeV})^2$ were varied, a huge change in the phase shift, of about $\Delta\delta_1^1(s_0) = 10°$, would be necessary to obtain a shift in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ by $18.5\times 10^{-10}$. On the other hand, such a change in $a_\mu^{\pi\pi}$ could be induced by the parameter $\delta_1^1(s_1)$ alone with a shift of $1.8°$. If we fit the two parameters simultaneously to a combination of the space- and time-like data on the VFF and the hypothetical "lattice" input on $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$, a shift in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ by $18.5\times 10^{-10}$ then corresponds to modest changes in the phase, by $\Delta\delta_1^1(s_0) = 0.8°$ and $\Delta\delta_1^1(s_1) = 1.7°$, see Fig. 5. We note that the partial-wave solutions given in Eq. (15) would actually favor values slightly below our reference point (14), but certainly exclude the required change in $\delta_1^1(s_0)$ if the shift in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ were induced by this parameter alone. As discussed in Sect. 5, indirect constraints on scenario (1) from a determination of the pion charge radius seem out of reach. However, direct constraints on $\delta_1^1(s_0)$ and $\delta_1^1(s_1)$ could be obtained from lattice determinations of the elastic $\pi\pi$ phase shift [93–102], not only at these exact points in energy, but in the whole $\rho$ resonance region: given the phase values $\delta_1^1(s_{0,1})$, the Roy solutions determine the modified phase shift over the whole energy range. However, the precision of lattice data is not yet sufficient to add meaningful constraints on the parameter space, and only a significant increase in precision will have an impact on $a_\mu^\text{HVP}$ determinations. Figure 5 also shows the resulting shifts in the phase parameters for scenario (3), $\Delta\delta_1^1(s_0) = -0.2°$ and $\Delta\delta_1^1(s_1) = -0.3°$. As discussed in Sect. 5, it is most promising to constrain such a scenario indirectly, with an improved determination of the pion charge radius. In fact, not only the radius is relevant in this regard, but the VFF in the whole space-like region, as shown in Fig. 6. Scenarios (2) and (3) lead to shifted solutions well outside the uncertainty band of the central fit to data. Precise lattice-QCD determinations of the space-like VFF [87] could start to discriminate between the central solution and these shifted variants. Consistently with the small effect on the radius, scenario (1), with shifts only in the two phase-shift parameters, has a negligible effect on the space-like VFF: the shifted solution remains well within the uncertainties of the central fit result.
Finally, we take a closer look at the pion VFF in the time-like region. The dispersive representation of the VFF allows us to quantify in detail how the cross sections would need to be altered to achieve a given change in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$, in each of the three scenarios. In Fig. 7, a close-up view of the $\rho$–$\omega$ interference region is shown. It reveals that if the change in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ were explained with the help of $\delta_1^1(s_{0,1})$, a dramatic shift of up to 8% of the cross section would be necessary. If the shift were obtained by changing the parameters $c_k$, the effect in the cross section at the $\rho$ resonance would be only about half as large, although the resulting cross section would still lie far outside the combined fit to the data. The combined scenario is very close to the one where shifts are only allowed in the parameters $c_k$.
In Fig. 8, we compare both the data sets and the shifted variants of the VFF to the central fit result, as relative differences normalized to the fit result. We again see that by using the conformal polynomial to induce the shift, the effect on the cross sections is smaller around the $\rho$ resonance than in the scenario with a shift in $\delta_1^1(s_{0,1})$, while the effect is larger below about 0.72 GeV. Compared to the spread of the data points, the necessary shift in the cross sections is again significant, although less drastic than in scenario (1), where the changes are concentrated in the $\rho$ region. This is consistent with the fact that the conformal polynomial parameterizes the effects of inelasticities above the $\pi\omega$ threshold.
While Figs. 7 and 8 make it evident that the changes in the cross section that would generate the desired change in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ are incompatible with the data, Fig. 9 shows the corresponding change in $\chi^2$ as a function of $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$, and provides a quantitative measure of the discrepancy. The most dramatic clash with the data would occur in scenario (1), but even in the other two any significant change in $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ comes at the price of huge increases in $\chi^2$.
The results in Figs. 7 and 8 show that, to minimize the effect in the cross section, the changes mainly affect the inelastic part of the VFF parameterization and thus energies above the $\pi\omega$ threshold. In principle, these inelastic contributions could be further constrained by $e^+e^-\to 2\pi$ data above 1 GeV [81,83,103], $\tau\to\pi\pi\nu_\tau$ data [104], and explicit input on the inelastic channels, but this requires an extension of our dispersive formalism that is left for future work. We remark that any changes in the physics above 1 GeV will also have an impact on $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)$ that is not yet accounted for here: the higher in energy these changes are pushed, the higher the risk of exacerbating tensions in the global electroweak fit [46–49].
Conclusions
In this Letter we examined the two-pion contribution to HVP in view of recent hints from lattice-QCD calculations that this contribution to the anomalous magnetic moment of the muon could be much larger than obtained from $e^+e^-\to\text{hadrons}$ cross-section data, with most of the changes concentrated at low energies. We relied on a dispersive representation of the pion vector form factor and studied which of its parameters could be varied without contradicting other low-energy observables besides the $e^+e^-\to 2\pi$ cross section itself. We identified three scenarios, in which (1) only the elastic $\pi\pi$ phase shift, (2) only inelastic effects, or (3) all parameters at the same time are allowed to change, see Sect. 3 for more details. In these scenarios, we then derived the correlations with the pion charge radius and the hadronic running of the fine-structure constant.
We found that in scenario (1) the changes in the cross section are mainly concentrated around the $\rho$ resonance, amounting to a relative effect of up to 8%, see Figs. 7 and 8, while in scenarios (2) and (3) the changes are more uniformly distributed over the entire energy range, at a level around 4%. The first insight from our analysis is thus that a largely uniform change in the cross section is actually allowed by the constraints from analyticity, unitarity, and low-energy hadron phenomenology. Moreover, this is the configuration that minimizes the discrepancy with the data as one tries to increase $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ while respecting all constraints; still, even this scenario is in strong disagreement with the $e^+e^-\to 2\pi$ data, see Fig. 9.

Figure 9: Increase in the $\chi^2$ as a function of the fit output $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ in the three scenarios, excluding the contribution of the "lattice" input (since this depends on the arbitrary uncertainty that acts as a weight, see Sect. 3).
The correlations with the pion charge radius and the hadronic running of the fine-structure constant are shown in Figs. 3 and 4, respectively. One of our main conclusions is that in our framework we can establish a firm correlation between $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ and $\Delta\alpha^{(5)}_{\pi\pi}(M_Z^2)$: the required change in the former implies an upward shift between $1.2\times 10^{-4}$ and $1.4\times 10^{-4}$ in the latter, for all scenarios. For the charge radius, the correlation with $a_\mu^{\pi\pi}|_{\leq 1\,\text{GeV}}$ depends on the scenario, with the largest effect arising in scenario (3), the one for which the change in the cross section is minimized. A similar observation applies to the entire space-like region, see Fig. 6. This opens the possibility to challenge this scenario with future lattice-QCD calculations of the pion charge radius as well as the space-like pion form factor [86,87]. Competitive constraints would require a precision around $\Delta\langle r_\pi^2\rangle = 0.005\,\text{fm}^2$, a factor of 3 below the sensitivity of Ref. [87]. Similarly, a precision calculation of the P-wave $\pi\pi$ phase shift would provide further independent constraints on our dispersive representation, but here the precision goal of $\Delta\delta_1^1(s_{0,1}) = 2°$ would require significant advances over current calculations.
To further improve the phenomenological determination of the two-pion contribution to HVP, the most important future development naturally concerns new $e^+e^-\to 2\pi$ data, with BESIII [105,106] and SND [107] supporting the results already included in the present analysis, and new data from CMD-3 [108] forthcoming. As for direct lattice-QCD evaluations of the HVP contribution, the results of Ref. [39] are being scrutinized by other lattice collaborations, and more detailed comparisons to phenomenology will allow for refined conclusions as to where the $e^+e^-\to\text{hadrons}$ cross section would need to be modified. In addition, a direct measurement of HVP in the space-like region would become possible with the MUonE project [109,110], providing further complementary information on the role of HVP in the SM prediction for the anomalous magnetic moment of the muon.
"Physics"
] |
Myristicin and Elemicin: Potentially Toxic Alkenylbenzenes in Food
Alkenylbenzenes represent a group of naturally occurring substances that are synthesized as secondary metabolites in various plants, including nutmeg and basil. Many of the alkenylbenzene-containing plants are common spice plants, and preparations thereof are used for flavoring purposes. However, many alkenylbenzenes are known toxicants. For example, safrole and methyleugenol were classified as genotoxic carcinogens based on extensive toxicological evidence. In contrast, reliable toxicological data, in particular regarding genotoxicity, carcinogenicity, and reproductive toxicity, are missing for several other structurally closely related alkenylbenzenes, such as myristicin and elemicin. Moreover, existing data on the occurrence of these substances in various foods suffer from several limitations. Together, these data gaps regarding exposure and toxicity make it difficult to evaluate the health risks for humans. This review gives an overview of the available occurrence data on myristicin, elemicin, and other selected alkenylbenzenes in certain foods. Moreover, the current knowledge on the toxicity of myristicin and elemicin in comparison to their structurally related and well-characterized derivatives safrole and methyleugenol, especially with respect to their genotoxic and carcinogenic potential, is discussed. Finally, this article highlights the existing data gaps regarding exposure and toxicity that currently impede the evaluation of adverse health effects potentially caused by myristicin and elemicin.
Myristicin and Elemicin in Nutmeg and Mace
The name "myristicin" originally referred to the solids that crystallize from nutmeg oil while in prolonged storage. These, however, are today known to be myristic acid [17]. Elemicin was first identified as a component of the myristicin fraction from nutmeg oil [18].
The nutmeg tree (Myristica fragrans Houtt., family Myristicaceae) is a tropical tree indigenous to the Maluku Islands of Indonesia. Its seeds consist of a kernel and a covering aril. Whereas mace designates the red lacy aril, the dried kernels of the ripe seeds are named nutmeg. When ground material or powders are hydrodistilled, about 2.4% crude oil can be obtained [19]. These oils are rich in alkenylbenzenes, such as eugenol (19.9%), methyleugenol (16.7%), methyl iso-eugenol (16.8%), myristicin (2.3%), safrole (1.6%), and elemicin (1.7%). In contrast, a slightly different composition was reported for oils from the dried kernels of Myristica fragrans originating from Sri Lanka, with almost no eugenol (0.2%) or methyleugenol (0.6%), nearly equal amounts of safrole (1.4%) and elemicin (2.1%), but higher levels of myristicin (4.9%) [20]. Numerous reports on the composition of nutmeg oils have been published, documenting varying levels of alkenylbenzenes in nutmeg seeds. Both the storage of ground powders [21] and, above all, the geographical origin determine the volatile composition of nutmeg extracts, as recognized by Baldry and colleagues [22]. They showed high variability in alkenylbenzene contents, with myristicin ranging from 0.5% to 12.4%, safrole from 0.1% to 3.2%, elemicin from 0.3% to 4.6%, methyleugenol from 0.1% to 1.2%, and eugenol from 0.1% to 0.7% in different nutmeg oils from the West Indies and South East Asia. Mace powders and mace oils contain constituents similar to those of nutmeg powders and nutmeg oils. For example, in 10 powdered genuine Indonesian nutmeg seeds extracted with boiling methanol, myristicin accounted for up to 2.9% and safrole for up to 0.39%. Nutmeg oil from Indonesian nutmegs contained 9.73% myristicin and 2.16% safrole [23].
Nutmeg and mace are used as domestic spices and as flavoring ingredients in many food products, such as gelatins, puddings, sweet sauces, baked goods, meats, fish, pickles (processed vegetables), candy, ice cream, and non-alcoholic beverages [24,25]. In addition, several globally available plant food supplements (PFS) contain nutmeg seed powders or nutmeg oils to widely varying extents [11].
Further relevant examples of plants and their essential oils containing myristicin and elemicin, as well as other selected alkenylbenzenes, are listed in Table 1.

Table 1. Occurrence of safrole, myristicin, methyleugenol, and elemicin found in essential oils (EO) from culinary plants.
Myristicin and Elemicin in Food Flavorings
Due to the intentional use of essential oils and of the dried powder of nutmeg or mace for flavoring purposes, certain types of soft drinks, pastries, and some types of crisps contain high levels of myristicin and elemicin.
Cola-flavored soft drinks may contain nutmeg oil and/or mace oil, which consist of different major compounds, such as sabinenes and myrcene, as well as at least five different alkenylbenzenes. Myristicin, safrole, and elemicin mainly determine the flavor of these oils. Accordingly, myristicin, safrole, elemicin, methyleugenol, and eugenol were detected in cola-flavored soft drinks [57]. In 2013, Raffo et al. published quantitative data on the amounts of safrole and myristicin in cola-flavored soft drinks of different brands following different processing procedures, including various storage conditions. Levels of safrole and myristicin varied by approximately 2–3 orders of magnitude. In flavored soft drinks, the average concentrations of safrole and myristicin were 23.0 and 168.3 µg/L, with minimum contents of 0.6 and 0.4 µg/L and maximum levels of 43.9 and 325.6 µg/L, respectively [12]. These variations might be due to variable levels of alkenylbenzenes in the added essential oils. For example, measurements of alkenylbenzene concentrations in different nutmeg oils of specific geographical origins revealed at least 30-fold variations, e.g., in the levels of safrole (ranging from 0.1% to 3.2%) and myristicin (0.5% to 13.5%) [57]. In the study of Raffo et al., only the levels of myristicin and safrole were measured in cola-flavored soft drinks, but not those of other alkenylbenzenes. Therefore, the total amount of alkenylbenzenes in cola-flavored soft drinks remains unknown so far.
Parsley and dill teas can be purchased without restriction. Recently, the levels of alkenylbenzenes in such teas were investigated. Myristicin, methyleugenol, apiol, and estragole were detected to varying extents in dry tea samples and in hot-water herbal extracts containing parsley, dill leaves, or seeds, either alone or in a mixture with other herbs. The total amount of alkenylbenzenes in the dry tea samples ranged from 18 to 1269 µg/g dry preparation [59]. In 2017, Alajlouni and colleagues also found relevant levels of the alkenylbenzenes myristicin, apiol, and estragole (17–6487 µg/g) in parsley- and dill-based PFS [60].
Besides this, baked goods, meat products, condiments, relishes, soft candy, gelatin, pudding, soups, alcoholic beverages, and gravies may also contain myristicin and elemicin in various and often unknown amounts if refined with oils from parsley, nutmeg, or mace [24,61]. This also applies to other alkenylbenzenes [62,63]. Therefore, monitoring of myristicin, elemicin, and other alkenylbenzenes in many food commodities appears justified in order to gain a reliable database for future exposure assessments [25].
Myristicin and Elemicin in Foods
Analytical methods are already in place to monitor myristicin and elemicin in complex food matrices [25]. An early study reported 16.9 mg myristicin per gram of dried nutmeg powder following 12 h methanol extraction at 50 °C [26]. Methods for analyzing myristicin from ground nutmeg (502 µg/g), from wine and beer spices (11.87 µg/g), from some food commodities (2.46–15.22 µg/g), and even from human serum (17.60–33.25 µg/g from human volunteers who ingested 100 mg myristicin 1 h before blood sampling) utilize ultrasonic-assisted extraction, followed by solid-phase extraction and gas chromatography–mass spectrometry (GC–MS) [64,65]. Other methods use functionalized magnetic microspheres for the isolation of allyl-benzodioxoles such as myristicin (264.2–599.6 µg/L) and safrole (14.0–40.35 µg/L) from cola drinks, followed by GC–MS [66]. However, in these methods, varying and often not fully validated analytical procedures were used, hampering the comparability of the analytical results. In addition, for many food categories no data are available at all. Since there is no legal mandate for monitoring all potentially toxic alkenylbenzenes in all relevant food categories, the availability of comprehensive and reliable occurrence data is currently rather limited. Taken together, the actual occurrence levels of myristicin and elemicin, as well as of many other alkenylbenzenes, are still widely unknown for many foods.
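As a rough, illustrative intake comparison (our arithmetic, using only the figures quoted above and assuming a 0.5 L serving):
\[
0.5\,\text{L} \times 325.6\,\mu\text{g/L} \approx 163\,\mu\text{g of myristicin}, \qquad
1\,\text{g nutmeg} \times 16.9\,\text{mg/g} = 16.9\,\text{mg} \approx 100 \times 163\,\mu\text{g},
\]
i.e., even the maximum level reported for cola-flavored soft drinks corresponds to roughly one hundredth of the myristicin in a single gram of nutmeg powder; such order-of-magnitude estimates are all that the current occurrence data permit.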
Toxicity of Myristicin and Elemicin: Lessons Learned from Safrole and Methyleugenol
In the following, the current knowledge on the toxicity of myristicin and elemicin in comparison to the structurally related and well-investigated alkenylbenzenes, safrole and methyleugenol, is summarized.
Common Structural Features
Initial steps of the hepatic activation of methylenedioxy- and methoxy-substituted allylic alkenylbenzenes include epoxidation of the exocyclic double bond, followed by its cleavage by microsomal or cytosolic epoxide hydrolases or by spontaneous hydration, to generate 2′,3′-dihydrodiols [67]. Such metabolites are detected in the urine of animals treated with allylbenzenes [6,68–70]. Another pathway may be the hydroxylation of the 1′-carbon atom adjacent to this 2′,3′-double bond [71]. Side-chain reactions of alkenylbenzenes are catalyzed by various cytochrome P450 monooxygenases (CYPs). Epoxides and dihydrodiols may be derived not only from the allylbenzene compounds but also from some of their metabolites that still possess an intact allyl group, such as the allylcatechols [72]. However, phenolic and catecholic compounds typically undergo rapid phase II conjugation, which might be a predominant pathway for such metabolites, as also shown for the alkenylbenzene eugenol, which contains a free phenolic group [73]. Thus, in contrast to alkenylbenzenes that bear only methoxy or methylenedioxy substituents, the high first-pass conjugation and rapid elimination may explain why eugenol is deemed less toxic than the well-known hepatocarcinogens methyleugenol and safrole.
Following hydroxylation at the 1′-position (Figure 2), the alcoholic metabolite can be sulfonated. Subsequent heterolytic cleavage of the formed sulfate moiety would generate an electrophilic carbenium ion intermediate, which is highly reactive towards nucleophilic sites [74,75] and may, for example, generate glutathione (GSH) conjugates, as well as adducts with proteins, RNA, or DNA [76]. Since the charge of the carbenium ion is delocalized, adducts can be formed at the 1′- or 3′-position, with the 3′-position being the preferred site [77,78].
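This bioactivation sequence can be summarized in one line (R = ring-substituted aryl; a schematic rendering of the pathway shown in Figure 2):
\[
\text{R–CH}_2\text{–CH=CH}_2 \xrightarrow{\text{CYP}} \text{R–CH(OH)–CH=CH}_2 \xrightarrow{\text{SULT/PAPS}} \text{R–CH(OSO}_3^-\text{)–CH=CH}_2 \xrightarrow{-\,\text{SO}_4^{2-}} \bigl[\text{R–CH}{\cdots}\text{CH}{\cdots}\text{CH}_2\bigr]^+ \longrightarrow \text{adducts at }1'\text{ or }3',
\]
where the bracketed species is the allylic cation with delocalized charge.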
Figure 2. (CYP: cytochrome P450 monooxygenases; SULT: sulfotransferases; EH: epoxide hydrolases; nuc: nucleophilic structures such as DNA or proteins.)
Metabolite excretion of safrole in the rat is reported to be 93% within 72 h, and most of this material (86%; [79]) consists of metabolites formed via demethylenation of the methylenedioxy moiety to yield carbon monoxide or formate and the dihydroxy-benzene moiety [80]. The other metabolic routes observed were allylic hydroxylation and the epoxide–diol pathway [70,79]. Oxidations of the allylic side chain of safrole may proceed (i) via an epoxide, resulting in side-chain propane diols at different stages of the metabolic steps [72], or (ii) via 1′-hydroxylation followed by sulfonation, which might lead to a reactive carbocation intermediate [5]. Other possible metabolic steps of safrole are (iii) the subsequent oxidation of 1′-hydroxy-safrole to 1′-oxo-safrole [81], (iv) oxidation at the 3′-position to yield 3′-hydroxy-isosafrole, and (v) the demethylenation of safrole to 4-allylcatechol, which may isomerize to its quinone methide [82–84]. The occurrence of glutathione conjugates at the 1′-position may be indicative of the intermediate formation of para-quinone methide tautomers [82], whereas glutathione conjugates at the benzene ring point to reactions with ortho-quinone intermediates [82]. However, the metabolic pathway to the carbenium ions is only one selected pathway, already often discussed with respect to the cyto- and genotoxic activity of alkenylbenzenes. This metabolic pathway presumes the presence of sulfotransferases (SULT) and cofactors such as 3′-phosphoadenosine-5′-phosphosulfate (PAPS) [5].
On the other hand, alkenylbenzenes and their metabolites that bear ortho- and/or para-phenolic groups may form quinone methide intermediates (Figure 2) [82,85], which are also prone to conjugation by GSH or react directly with other nucleophiles in the cell. The transient formation of a quinone methide of eugenol appears plausible [85], since a eugenol GSH conjugate was detected utilizing rat liver or rat lung microsomes [86]. The cytotoxic effects of eugenol recognized in rat hepatocytes are reasoned to be due to the formation of a reactive quinone methide intermediate [87]. In 1990, Fischer et al. tentatively identified metabolites, including thiophenol metabolites (11%), in the urine of humans following eugenol ingestion, presumably formed by GSH conjugation at an aromatic ring position [73]. Thus, methoxylated non-phenolic substances (e.g., methyleugenol and elemicin) may as well undergo CYP-mediated O-demethylation and subsequent quinone methide formation, followed by GSH conjugation. Similarly, there can be oxidative demethylation of the methoxy groups of elemicin by CYP1B1 [88], creating the possibility of also yielding catechols or other phenols and conjugates, as was shown for the benzodioxole-substituted alkenylbenzenes myristicin and safrole in rat and human urine using GC–MS [8].
Apart from the epoxide, carbenium ion, and quinone methide pathways of the alkenylbenzenes already discussed, another metabolic pathway, which may occur after rearrangement of the double bond from the 2′,3′-position to the 1′,2′-position, is the oxidation of the 3′-hydroxy metabolites of alkenylbenzenes, leading to cinnamic acids and propionic acids [6,68,69]. In principle, 3′-hydroxy-1′,2′-propenylbenzenes may be equivalent to 1′-hydroxy-allylbenzenes as substrates for hepatic SULTs. On the other hand, for steric reasons, further side-chain oxidation of the 3′-hydroxy-propenylbenzenes, yielding cinnamaldehydes and cinnamic acids that can be conjugated with GSH or glycine, appears to dominate. Further oxidation, probably via the fatty acid β-oxidation cycle, would lead to side-chain cleavage and the formation of benzoic acids and their glycine conjugates [5].
Seemingly small but relevant structural differences in the benzene ring substituents of the parent alkenylbenzenes call for a closer look at the potential metabolic pathways of elemicin, myristicin, methyleugenol, and safrole. In an attempt to identify similarities and possible differences between elemicin and myristicin, we compared their metabolic features to those of the closely related derivatives methyleugenol and safrole. The two compounds bearing only methoxy groups at the benzene ring, without a methylene bridge, are methyleugenol and elemicin. The two compounds with a methylenedioxy moiety are safrole and myristicin, which are categorized as benzodioxoles.
Metabolism of Methyleugenol
Results of ADME experiments performed in 2000 within a study of the National Toxicology Program (NTP) led to the conclusion that absorption of orally ingested methyleugenol in rats and mice is rapid and complete, and that the distribution of methyleugenol to tissues is fast. In rodents, methyleugenol is extensively metabolized in the liver and more than 70% of the dose administered is found in the urine of rats and mice as hydroxylated, sulfated, or glucuronidated metabolites [92].
With a view to the toxicity of methyleugenol, it is generally assumed that bioactivation is mainly mediated via 1′-hydroxylation at the allylic side chain followed by sulfo conjugation, yielding a highly reactive sulfate ester [10,94].
In the NTP study, it was shown that repeated ingestion of methyleugenol may saturate metabolic enzymes [92], leading to greater tissue accumulation and thus a higher probability of genotoxicity, mutations, and malignant cell transformation. Saturability of metabolism is of special concern in cases where 1′-hydroxylation of the allylic side chain becomes more prominent than other pathways. This may enhance hepatocarcinogenesis in rodents at higher dose levels [95].
In rat bile, methyleugenol could be found in the form of GSH conjugates. These conjugates, detected by Yao and colleagues, potentially resulted from reactions with methyleugenol-derived epoxide metabolites, α,β-unsaturated aldehydes, carbenium ions, and quinone methides [96]. These conjugates were further metabolized, yielding the cysteine conjugates found in rat urine. In GSH-fortified microsomal preparations that lack SULT and PAPS, it was generally not expected that carbenium intermediates would be formed. However, Yao et al. found 1′-bound GSH and related cysteine conjugates in such incubations [96]. Thus, it is hypothesized that 1′-hydroxy metabolites, or metabolites other than the sulfate esters, may directly react with GSH under certain conditions.
Besides 1′-hydroxylation, the metabolites observed in rats and mice suggest that methyleugenol can also undergo demethylation as well as ring and/or further side-chain oxidations [92].
The NTP authors further concluded that the risk to humans ingesting methyleugenol is expected to be subject to marked inter-individual metabolic variability. Indeed, hydroxylation of methyleugenol investigated in human liver microsomes varied considerably (37-fold), with the highest hydroxylation rate being similar to that observed with liver microsomes from rats [97]. Moreover, one study by Tremmel et al. demonstrated that methyleugenol-induced DNA adduct levels in human liver samples were dependent on the SULT1A1 copy number [94].
Metabolism of Elemicin
Elemicin is the structural continuation of methyleugenol, bearing two meta- and one para-methoxy group relative to the allyl side chain. For this compound, the O-demethylation pathway becomes more prominent, which leads to some divergent metabolites compared to methyleugenol. In 1980, Solheim and Scheline revealed that the two major metabolic pathways of elemicin in rats follow the cinnamoyl pathway and the epoxide–diol pathway [6]. The former route gives 3,4,5-trimethoxyphenyl-propionic acid and its glycine conjugate as major urinary metabolites, whereas 2′,3′-dihydroxy-elemicin is the most prominent metabolite of the latter route. In addition, elemicin can also be 1′-hydroxylated at the allylic side chain. When comparing the kinetic constants for the conversion of elemicin and 1′-hydroxy-elemicin by male rat liver and mixed-gender pooled human liver fractions, van den Berg et al. concluded that glucuronidation of 1′-hydroxy-elemicin, representing a detoxification pathway, is the most important pathway in rats and in humans. In contrast, bioactivation of 1′-hydroxy-elemicin by sulfonation was suggested to be only a minor pathway in both rat and human liver [76].
In 2019, Wang et al. confirmed and extended these studies. They found a total of 22 metabolites of elemicin in mice, e.g., in urine, feces, and plasma [88]. In vivo, elemicin and most of its metabolites were mainly excreted in the urine collected 0–24 h post-administration in metabolic cages from male C57BL/6 mice that had been orally administered 100 mg/kg elemicin. The results indicate that phase I metabolic reactions of elemicin include demethylation, hydroxylation, hydration, allyl rearrangement, reduction, hydroformylation, and carboxylation. Phase II metabolism of elemicin yielded several conjugates, e.g., with cysteine, N-acetyl cysteine, glucuronic acid, glycine, or taurine [88]. In addition, the 4-demethoxylated forms of elemicin and of 2′,3′-dihydroxy-elemicin could be detected in human urine after nutmeg abuse [8].
Metabolism of Safrole
From a toxicological point of view, safrole bioactivation by sequential 1′-hydroxylation and sulfonation, resulting in reactive sulfate esters capable of forming adducts with cellular nucleophiles such as DNA, is of high relevance [71,89]. In 1983, Boberg et al. identified 1′-sulfoxy-safrole as an ultimate electrophilic metabolite of safrole and as an initiator of hepatic carcinogenicity in vivo. The toxicological relevance of this pathway was demonstrated in mice co-treated with the hepatic SULT inhibitor pentachlorophenol (0.05% added to the diet) in vivo and in mice genetically defective with respect to the hepatic synthesis of PAPS [75].
Urinary metabolites of safrole in the rat were also identified via GC–MS in a further study performed in 1982. Metabolite excretion was 93% within 72 h, and most of this material (86%) consisted of metabolites formed via demethylenation of the methylenedioxy moiety. The other metabolic routes observed were allylic hydroxylation and the epoxide–diol pathway [79].
Metabolism of Myristicin
Myristicin is well absorbed following oral exposure and is metabolized extensively. Metabolism of the volatile alkenylbenzene myristicin results in the formation of less volatile metabolites, predominantly remaining in the aqueous phase on extraction with ether [99].
Early experiments highlighted the cleavage of the methylenedioxyphenyl moiety, concomitant with CO₂ release from myristicin, as an important metabolic pathway. Within 48 h after oral administration of radiolabeled myristicin to male albino mice, 73% of the radiocarbon was released as ¹⁴CO₂ [98], potentially formed by hydroxylation of the methylene group of myristicin and subsequent release and degradation of [¹⁴C]formate. This demethylenation reaction was found to be catalyzed by microsomal CYPs and would yield the corresponding catechol derivative.
Isolation of metabolites from the urine of male Sprague-Dawley rats after a single oral administration of 100 mg/kg myristicin, and comparison before and after glucuronidase treatment, suggests that the catechol metabolite 5-allyl-1-methoxy-2,3-dihydroxy-benzene and 1′-hydroxy-myristicin are also excreted in their respective conjugated forms [7].
Currently, no comprehensive studies with respect to quantitative metabolism and excretion of myristicin in humans are available. However, one study examined metabolites present in the urine of a patient who ingested five nutmeg seeds, resulting in an intoxication [8].
Genotoxicity
As described above, different metabolic pathways may lead to the formation of reactive intermediates capable of binding to DNA, thereby causing genotoxicity. For many alkenylbenzenes, it is widely accepted that 1′-hydroxylation at the allylic side chain, followed by SULT-mediated sulfo conjugation yielding a highly electrophilic sulfate ester, is the most relevant pathway leading to toxicity [10]. The sulfate ester may form, inter alia, DNA adducts, as demonstrated by ³²P-postlabeling techniques and mass spectrometry [78,100–105]. The structures of four DNA adducts formed in mouse liver after administration of the proximate hepatocarcinogen 1′-hydroxy-estragole were initially described by Phillips et al. in 1981 [106,107]. Similar kinds of studies, as well as studies on other genotoxicity endpoints and on mutagenicity, were performed for many alkenylbenzenes, as systematically reviewed in detail elsewhere [5,10,108]. In the following, the most relevant studies on the genotoxicity of methyleugenol, elemicin, safrole, and myristicin are briefly described.
Genotoxicity of Methyleugenol vs. Elemicin
Methyleugenol was found to induce sister chromatid exchange (SCE) in Chinese hamster ovary (CHO) cells after metabolic activation, as well as intrachromosomal recombination in yeast with and without metabolic activation [92]. Some years later, Groh and colleagues further characterized the impact of methyleugenol and its metabolites on the induction of DNA damage in vitro. It was observed that 1′-hydroxy-methyleugenol and 2′,3′-epoxy-methyleugenol had a higher DNA strand-breaking activity than the parent compound methyleugenol in Chinese hamster lung fibroblast (V79) cells, demonstrating the marked relevance of these metabolites. However, in the same study, only 3′-oxo-methylisoeugenol and 2′,3′-epoxy-methyleugenol induced the formation of micronucleated V79 cells [109]. Furthermore, methyleugenol and its oxidative metabolites concentration-dependently increased the amount of DNA strand breaks, as measured using the in vitro alkaline comet assay in human colon carcinoma HT29 cells [110,111].
In 1992, Chan and Caldwell found that methyleugenol, 1′-hydroxy-methyleugenol, and 2′,3′-epoxy-methyleugenol caused unscheduled DNA synthesis (UDS) in rat hepatocytes, and that the inducing potency of the 1′-hydroxy metabolite was higher than that of the parent substance in vitro [112]. In 2006, methyleugenol was also shown to form DNA adducts after hydroxylation and sulfonation. DNA adducts of methyleugenol were detected using ³²P-postlabeling techniques in the livers of F344 rats (n = 4 out of 8) exposed orally to 5 mg/kg/day for 28 days. No adducts were found after exposure to 1 mg/kg/day [113].
In 2013, Herrmann et al. detected methyleugenol-induced DNA adducts also in human liver samples [114]. Twenty-nine human liver samples unambiguously contained the N²-(trans-methylisoeugenol-3′-yl)-2′-deoxyguanosine adduct (N²-MIE-dG). A second adduct, N⁶-(trans-methylisoeugenol-3′-yl)-2′-deoxyadenosine (N⁶-MIE-dA), was also found in most samples, but at much lower levels. The median methyleugenol DNA adduct level detected in human non-tumorous liver samples was 13 per 10⁸ nucleotides for N²-MIE-dG and N⁶-MIE-dA combined, corresponding to about 1700 adducts per diploid genome (6.6 × 10⁹ base pairs). As further reported, hepatic DNA adduct formation by methyleugenol in mice is strongly affected by their SULT1A content [115,116], proving the toxicological relevance of this metabolic pathway. Indeed, an association between the SULT1A1 copy number and the adduct level was also demonstrated in human liver samples [94]. Moreover, it has been shown in vitro for the structural derivative estragole that the resulting DNA adducts are inefficiently repaired [117], which might contribute to the accumulation of substantial levels of DNA adducts upon prolonged dietary exposure.
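The genome-wide number quoted above follows directly from the adduct frequency once both DNA strands are counted (our arithmetic):
\[
\frac{13\ \text{adducts}}{10^{8}\ \text{nucleotides}} \times \underbrace{2 \times 6.6 \times 10^{9}}_{\text{nucleotides per diploid genome}} \approx 1.7 \times 10^{3}\ \text{adducts}.
\]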
Besides this, Yang et al. recently showed that reactive metabolites of methyleugenol are also able to form RNA adducts [118]. However, the biological consequences of these RNA adductions are so far unclear, as also mentioned by the authors.
As shown for methyleugenol [112], elemicin, too, was found positive in a DNA binding assay and in UDS assays [24,76,119,120].
Despite the well-recognized DNA damage, methyleugenol is reported to be only weakly mutagenic or non-mutagenic in different bacterial test systems, with or without metabolic activation [3,92,121,122]. In another study, by Groh et al. in 2012, it was shown that methyleugenol did not cause mutations at the hprt locus in cultured V79 cells after 1 h of incubation. After extended treatment (24 h), only 2′,3′-epoxy-methyleugenol exhibited slight mutagenic activity, with a mutation frequency 4–5 times higher than the spontaneous mutation frequency of the solvent control [109]. A possible explanation for the lack of mutagenicity, especially in bacterial systems, might be the lack of metabolic competence, especially with regard to SULTs or the cofactor PAPS [5,122].
The mutagenic potential of methyleugenol was also studied in vivo [3]. Data published by the NTP indicate that oral administration of methyleugenol via gavage (10–1000 mg/kg bw; 5 days/week) to B6C3F1 mice does not cause micronucleus formation in peripheral blood erythrocytes [92]. Likewise, it was unable to induce chromosomal aberrations in CHO cells or micronucleus formation in peripheral blood erythrocytes of mice in other studies [92,122]. In contrast, Devereux and colleagues observed a higher frequency of β-catenin gene mutations (20/29; 69%) in hepatocellular carcinomas of mice exposed to methyleugenol (37–150 mg/kg) than in spontaneous liver tumors (2/22; 9%) from unexposed mice [123]. Since deregulation of Wnt/β-catenin signaling is considered an early event in chemically induced hepatocarcinogenesis, this observation represents an indication of the genotoxic potential of methyleugenol [3,122].
Besides this, the mutagenicity of methyleugenol was recently verified in vivo, utilizing a xanthine-guanine phosphoribosyltransferase (gpt) delta rodent gene mutation assay [124]. For this in vivo mutation assay, transgenic gpt delta rats (n = 10/group, both sexes) were treated for 13 weeks with different doses of methyleugenol via gavage (0, 10, 30, and 100 mg/kg). A significant increase in mutagenicity, assessed via gpt and Spi⁻ mutant frequencies, was observed in rat hepatocytes of the highest dose group. Mutant frequencies were further associated with pro-carcinogenic processes. From these data, the authors concluded that genotoxic mechanisms might be involved in methyleugenol-induced hepatocarcinogenesis [124].
In contrast to methyleugenol, there is currently no literature available regarding the mutagenic potential of elemicin. However, the structural features and the few data on genotoxicity suggest such an activity also for elemicin.
Genotoxicity of Safrole vs. Myristicin
The genotoxic activity of safrole is well documented. For example, it was demonstrated that safrole is capable of inducing sister chromatid exchanges, chromosomal aberrations, replicative DNA synthesis, and DNA adducts in rat liver in vivo [125]. These effects appear to result from 1′-hydroxylation followed by sulfo conjugation yielding reactive sulfate esters, because the concomitant application of the SULT inhibitor PCP, or the use of brachymorphic mice deficient in the SULT cofactor PAPS, strongly reduced the genotoxic effects [126]. Already in 1986, Reddy and Randerath reported that two DNA adducts were detected by ³²P-postlabeling techniques in the liver of adult female CD-1 mice treated with safrole [104]. These DNA adducts were identified as N²-(trans-isosafrol-3′-yl)-2′-deoxyguanosine and N²-(safrol-1′-yl)-2′-deoxyguanosine. In 1998, using the same ³²P-postlabeling assay, Daimon et al. studied DNA adduct formation in hepatocytes isolated from male F344 rats exposed to safrole [127]. The sum of the two above-mentioned major DNA adducts was 898 DNA adducts per 10⁸ nucleotides. In this study, hepatocytes were isolated 24 h after a single dose of safrole, or after five repeated doses (once a day) by gavage, and allowed to proliferate in Williams medium E supplemented with epidermal growth factor. This enabled a certain degree of DNA repair in situ. Besides this, safrole was shown to cause UDS in cultured rat hepatocytes, but not in HeLa cells [128,129].
Randerath et al. investigated the DNA adduct formation of a series of alkenylbenzenes in the liver of adult female CD-1 mice by ³²P-postlabeling, 24 h after i.p. administration of the non-radioactive test compounds (2 or 10 mg/mouse). The known hepatocarcinogens safrole, estragole, and methyleugenol exhibited the strongest binding to mouse liver DNA. However, the formation of DNA adducts in the liver was also demonstrated for myristicin in male B6C3F1 mice and female CD-1 mice. In comparison to safrole, estragole, and methyleugenol, substitution at the 3-, 4-, and 5-positions of the benzene ring of allylbenzenes (elemicin, myristicin) results in compounds with intermediate DNA binding capability [100]. In 2007, Zhou et al. further proved that myristicin forms DNA adducts comparable to those of safrole and methyleugenol in cultured human hepatocytes as well as in adult mouse liver, as analyzed via ³²P-postlabeling [130]. With the exception of methyleugenol, DNA adduction was dose-dependent in these experiments, decreasing in the order methyleugenol > safrole ≈ myristicin.
In other experiments, female mice were exposed to soft drinks. Covalent liver DNA adducts detected by ³²P-postlabeling were identical to those detectable with the single compounds myristicin and safrole. Liver adduct levels increased with exposure duration [101].
In freshly isolated hepatocytes from male F344 rats, myristicin induced a dose-dependent but slight increase in UDS, an indicator of DNA excision repair activity [120]. However, the authors concluded from the obtained data that myristicin was negative in that assay [120]. Decreased DNA damage repair might be an important indirect genotoxic mode of action, as highlighted by Martins et al. in 2014 and 2018 [10,134]. They showed in vitro that exposure of human leukemia cells (K562) for 6 h to 100 µM myristicin led to reduced expression of various DNA damage response genes, including OGG1 (base excision repair), ERCC1 (nucleotide excision repair), RAD50 (double-strand break repair), ATM (DNA damage signaling), and GADD45G (stress response). As summarized by Célia Maria da Silva Martins in her 2016 dissertation, myristicin appears to activate apoptotic mechanisms and downregulate DNA damage response genes involved in nucleotide excision repair, double-strand break repair, DNA damage signaling, and the stress response [135]. In 2011, Martins et al. studied the mutagenic potential of myristicin in vitro in mammalian cells [136]. In this experimental setting, myristicin tested without metabolic activation was negative in a comet assay used to evaluate DNA breaks, as well as in a γH2AX assay (sometimes recognized as an indicator of DNA double-strand breaks) performed in CHO cells.
The DNA-damaging activity may lead to the manifestation of heritable mutations. The mutagenic potential of safrole and its metabolites was studied in different experimental settings [4]. In the bacterial reverse mutation assay (Ames test), safrole was generally negative or, at most, weakly positive [137–139]. In contrast to the parent compound safrole, 1′-hydroxy-safrole, as well as other metabolites (2′,3′-epoxy-safrole, 1′-acetoxy-safrole, and 1′-oxo-safrole), were demonstrated to be directly mutagenic in the Ames test [139,140]. In addition, safrole was shown to be mutagenic in other experimental settings (bacteria and yeast) and to induce cell transformation in vitro [141,142]. The mutagenic potential of safrole, including the induction of gene mutations, chromosomal aberrations, DNA single-strand breaks, and SCEs, was also demonstrated in mammalian cells [143–145].
Safrole's mutagenic potential was also studied in vivo [4]. In 1972, Epstein and colleagues obtained negative results for safrole in a mouse dominant lethal assay [146]. In line with this, testing of safrole in a bone marrow micronucleus assay and in a rat liver UDS assay also led to negative outcomes [147,148].
However, other studies clearly indicated the mutagenic potential of safrole in vivo. The first studies, performed by Green and Savage in 1978, showed that safrole was positive in an in vivo i.p. host-mediated assay with Salmonella typhimurium [138]. Similar findings were published by Poirier and de Serres in 1979, utilizing the same assay with S. typhimurium or Saccharomyces cerevisiae [141]. Some years later, Daimon and colleagues showed that repeated-dose treatment of F344 rats with 125 or 250 mg safrole/kg bw dose-dependently induced chromosome aberrations in rat liver cells [127]. Moreover, single-dose treatment of rats with 10–500 mg safrole/kg bw also caused SCEs in rat livers in a dose-dependent manner. These effects were associated with the generation of DNA adducts in the hepatocytes of these rats [127].
The aforementioned indication of safrole's mutagenic activity was substantiated by the findings of Jin et al., who observed an increased gpt mutant frequency in transgenic gpt delta rats after a 13-week exposure to safrole via diet at the highest dose group tested (0, 0.1%, 0.5%; n = 10/group, both sexes). The authors concluded that these data clearly demonstrated the mutagenicity of safrole in vivo [149]. These findings were confirmed by results of another study performed in 2013, utilizing a similar in vivo transgenic rodent model [150]. In this study, male F344/NSlc-Tg (gpt delta) rats (n = 15 per dose) were fed with 0.5% safrole via diet for 4 weeks. This dose was identified as a carcinogenic dose from an earlier study [140]. In this experimental setting, safrole caused a significantly increased gpt mutant frequency, which was associated with a tumor-promoting activity, as suggested by an elevated number and area of GST-P-positive foci in rat livers, compared to controls. The authors stated that these data confirm that safrole is a genotoxic carcinogen [150].
In contrast to safrole, data on myristicin's mutagenic potential are sparse. A study published by Damhoeri et al. in 1985 examined the mutagenic activity of oleoresins prepared from myristicin-containing nutmeg fruits, tested without metabolic activation in an in vitro mutagenicity assay in S. typhimurium. The authors reported that the tested oleoresins were mutagenic. Moreover, pure myristicin was also positive in the mutagenicity test [151]. Based on these data, it was suggested by Hallstrom and Thuvander in 1997 that both nutmeg and myristicin may be weakly mutagenic, but additional studies were required to reach a final conclusion on the mutagenic potential [24].
Just recently, in 2019, the NTP characterized the mutagenic potential of myristicin. Myristicin was not mutagenic in S. typhimurium with or without metabolic activation. In addition, a micronucleus test was integrated in the subchronic toxicity study in which myristicin (0, 10, 30, 100, 300, or 600 mg/kg bw; 5 days/week) was administered via gavage to F344/NTac rats and B6C3F1/N mice (10 male and 10 female/group) for 13 weeks. There was a significant dose-dependent decrease in the percentage of polychromatic erythrocytes (PCEs) in the peripheral blood of male and female mice, illustrating toxicity to the bone marrow in mice and suggesting that the test compound reached the target tissue. In mice, however, no significant effect of myristicin on micronucleated red blood cells was observed. A significant increase in micronucleated immature erythrocytes in the peripheral blood was observed in male and female rats of the highest dose group (600 mg/kg bw). This was accompanied by significantly elevated amounts of circulating PCEs. Therefore, the authors suggested that myristicin might have stimulated erythropoiesis in rats. It was concluded that the studies performed by the NTP provide limited evidence for the genotoxicity of myristicin [152]. However, findings from others indicated that myristicin, similar to other genotoxic alkenylbenzenes, e.g., safrole and methyleugenol, forms DNA adducts in vivo [100,130,132]. The NTP authors stated that the consequence of these adducts is unknown, as myristicin was not tested for mutation induction in vivo [152]. Therefore, further and more adequate studies are needed to allow for a conclusive evaluation of the mutagenic potential of myristicin. Ideally, those studies should be designed as comparative studies (e.g., testing of myristicin vs. other alkenylbenzenes, such as safrole, in a similar experimental setting, e.g., as proposed by Nohmi and colleagues [150,153]), to allow a ranking regarding the genotoxic potential of these substances. As demonstrated with other alkenylbenzenes, in vivo assays capable of detecting gene mutations, i.e., transgenic rodent assays like the gpt delta assay, might be appropriate test systems for detecting potential alkenylbenzene-induced mutations.
Genotoxic Effects in Pregnant Mice and in Offspring
Since the altered hormone constitution in pregnancy may profoundly affect the activity of maternal xenobiotic metabolizing enzymes [154], a period of heightened susceptibility to chemical carcinogenesis may exist not only for the developing conceptus, but also for the dam [155,156]. For example, the effects of pregnancy on the covalent binding of several carcinogens to DNA were investigated in mice. Non-pregnant or timed-pregnant (18th day of gestation) mice of similar age were treated with safrole or 1′-hydroxysafrole per os. Tissue DNA adduct levels at 24 h after treatment were analyzed via 32P-postlabeling. Binding of safrole and its proximate carcinogen, 1′-hydroxysafrole, to maternal liver and kidney DNA was increased by a factor of 2.3-3.5 during pregnancy in mice [157]. In 1993, Randerath et al. observed a similar effect in the liver of pregnant mice exposed to myristicin (48,000 adducts/10⁹ nucleotides in liver DNA from dams vs. 17,000 adducts/10⁹ nucleotides in liver DNA from non-pregnant mice) [101]. This indicates that exposure to genotoxic compounds may be more hazardous for the maternal body during pregnancy than for non-pregnant adult females. In addition, safrole and myristicin may not quantitatively react in a first-pass manner in the mouse maternal liver alone. Part of the maternally administered amount of safrole and some reactive metabolites may reach the fetus transplacentally. Indeed, DNA adduct formation was observed by Randerath et al. in fetal liver after exposure of pregnant mice to myristicin [101]. The ability of myristicin to form DNA adducts transplacentally is of concern with respect to the rapid cell divisions occurring in fetal liver cells, which increase the possibility of fixing potential mutagenic lesions that may further lead to carcinogenesis. In this context, administration of safrole to pregnant mice during the second half of gestation also led to the development of epithelial kidney tumors in female offspring, demonstrating transplacental carcinogenesis [155]. In this study, a strong age- and sex-dependent difference (p < 0.01) in offspring renal carcinogenesis by safrole was observed. For comparison, in the case of the direct alkylating carcinogen ethylnitrosourea, no significant sex-dependent differences were observed [158], and preweaning as well as adult mice were equally sensitive to renal carcinogenesis by ethylnitrosourea [159].
Carcinogenicity of Safrole, Methyleugenol, Myristicin and Elemicin
Mutagenicity may lead to the development of cancer. For example, mutations in tumor suppressor genes or proto-oncogenes can cause uncontrolled cell division [160,161].
In contrast to safrole and methyleugenol [92,122,168,169], data on the carcinogenicity of myristicin and elemicin are sparse. However, some limited experimental information is available suggesting the possible carcinogenic activity of these compounds [24,152,170].
Although results from an early experimental study using a preweaning mouse model suggest that myristicin is not hepatocarcinogenic [166], the reliability of this study must be questioned. Based on an in silico analysis, Auerbach et al., in 2010, reported that myristicin might potentially act as a weak carcinogen [170]. They predicted that administration of myristicin at 2 mmol/kg/day for 2 years would lead to a weak, albeit significant, increase in hepatic tumor burden in male rats. However, it should be noted that the informative value of in silico testing, with respect to the endpoint carcinogenicity, is rather limited [171].
For elemicin, first indications of tumorigenicity were reported by Wiseman and colleagues, who administered 1′-hydroxyelemicin or 1′-acetoxyelemicin i.p. to male B6C3F1 mice in 4 doses during the first 21 days postnatally [126]. In this study, an average of 0.8 hepatoma/mouse relative to 0.1 hepatoma/mouse for the solvent-treated controls was observed after 13 months. An earlier and similar assay with 1′-hydroxyelemicin, but using only 50% of the doses used by Wiseman et al., however, provided no evidence for its hepatocarcinogenicity when administered to preweaning male mice [166]. Data from two-year combined toxicity and carcinogenicity studies do not exist so far, neither for myristicin nor for elemicin. Those studies are crucial for a conclusive evaluation of the carcinogenic potential of myristicin and elemicin, as also stated by others [10,24,61]. Thus, the possible carcinogenic potential (including the underlying mode of action) of myristicin and elemicin merits further attention.
Other Toxicological Endpoints
In the following part of the manuscript, further toxicologically relevant effects of methyleugenol, elemicin, safrole, and myristicin will be described in a comparative manner. This includes acute as well as subchronic toxicity studied in vivo.
Acute Toxicity of Methyleugenol vs. Elemicin and Safrole vs. Myristicin
In 2000, results of a short-term animal study done by NTP showed that methyleugenol is moderately toxic following a single oral dose. The median lethal oral dose (LD50) was 810 to 1560 mg/kg body weight (bw) for rats and 540 mg/kg bw for mice [92]. The undiluted chemical (98% purity) was found to be neither an eye irritant nor a skin irritant to rats and mice [92,172]. In contrast to methyleugenol, there is currently no literature available regarding the acute toxicity of elemicin.
Safrole was shown to be moderately toxic [173]. Its LD50 following oral administration was 1950 mg/kg bw and 2350 mg/kg bw in rats and mice, respectively [174,175]. Moreover, for safrole, acute neurological effects were described, including depression and ataxia in rats, as well as psychoactive and hallucinogenic effects in humans, which were considered similar to those reported for other methylenedioxybenzene compounds, including myristicin [173,176,177]. The availability of literature regarding the acute toxicity of myristicin is limited. In 1961, Truitt and colleagues performed an acute toxicity study in rats treated i.p. with myristicin (200-1000 mg/kg bw) [178]. In the highest dose group, myristicin induced hyperexcitability followed by central nervous depression in rats. From these data, the authors derived an LD50 > 1000 mg myristicin/kg in rats following i.p. application [178]. Although the database on myristicin is rather limited, its acute toxicity after oral administration was considered to be low [24]. Taken together, the acute toxicity of myristicin seems to be comparable to that of safrole, especially regarding neurological effects.
Subchronic Toxic Effects of Methyleugenol vs. Elemicin and Safrole vs. Myristicin
In 2000, NTP published the results of 14-week rat and mouse studies, in which the subchronic toxicity of the oral administration of methyleugenol (0, 10, 30, 100, 300, or 1000 mg/kg bw via gavage; 5 days/week) to male and female F344/N rats and B6C3F1 mice was investigated [92]. Regarding the experiments done with rats, all animals survived until the end of the study. However, exposure to methyleugenol reduced body weight gain and caused cholestasis, hepatic dysfunction with hypoproteinemia and hypoalbuminemia, as well as atrophic gastritis. Moreover, exposure led to increased liver and testis weight and adrenal gland hypertrophy [92]. A no observed effect level (NOEL) of 30 mg/kg bw per day was identified [3,92]. In the mouse study, 9 out of 10 males and all females of the highest dose group died before the end of the study [92]. Methyleugenol exposure was associated with reduced body weight gain, elevated liver weight in mice, and increased incidences of cytological alteration, necrosis, bile duct hyperplasia, and subacute inflammation in livers. Furthermore, there were increased incidences of atrophy, necrosis, oedema, mitotic alteration, and cystic glands of the fundic region of the glandular stomach in mice of both sexes [92]. A NOEL of 10 mg methyleugenol/kg bw and day was identified for mice [3,92]. In sum, the available subchronic studies indicated that methyleugenol is moderately toxic, causing a range of adverse effects, primarily in the liver and stomach [3,92,179].
In contrast to methyleugenol, there is currently no literature available regarding elemicin's subchronic toxicity.
In 1965, Hagan et al. performed a subchronic toxicity study, in which safrole (250, 500 and 750 mg/kg bw per day) was administered via gavage to Osborne-Mendel rats of both sexes for 105 days [180]. In the two highest dose groups, several rats died before the scheduled end of the study. In the lowest dose group, all rats survived until the end of the study. Several organotoxic effects were observed in this rat study, including liver hypertrophy, focal necrosis with slight fibrosis, steatosis, bile duct proliferation, and adrenal enlargement with fatty infiltration [4,180].
Comparable findings were obtained by Jin and colleagues in a rat study performed in 2011 [149]. In this study, safrole was administered to rats via diet (doses: 0, 0.1, and 0.5%; n = 10/group; both sexes) for 13 weeks. The main findings of this study were significantly reduced final body weights in male and female rats of all dose groups and hepatotoxic effects, including increased relative liver weights and significantly increased incidences of centrilobular hypertrophy, centrilobular vacuolar degeneration, and single cell necrosis of hepatocytes. Moreover, the authors found that the relative kidney weights of male and female rats were significantly increased after 13 weeks. Accompanying this, different nephrotoxic effects were observed in male rats of the highest dose group, such as significantly increased incidences of tubular hyaline droplets, granular cast, pelvic calcification, and interstitial cell infiltration in the kidney [149]. Taken together, the liver and kidney appeared to be the target organs with the most severe effects.
Regarding myristicin's subchronic toxicity, NTP published in 2019 the results of 90-day toxicity studies performed in F344/NTac rats and B6C3F1/N mice [152]. In these studies, different doses of myristicin (0, 10, 30, 100, 300, or 600 mg/kg bw) were administered via gavage 5 days/week for 13 weeks to rats and mice of both sexes (n = 10). In the rat study, all males survived until the end, whereas three female rats of the highest dose group died within 4 days of the study [152]. Exposure of rats to myristicin led to various treatment-related effects, including reduced mean body weight, enlarged livers, increased relative liver and kidney weights, as well as increased triglycerides and alanine aminotransferase activity regarding clinical pathology. Accompanying this, several treatment-related lesions were identified in rats, such as centrilobular hepatocyte hypertrophy and necrosis in the liver; epithelium atrophy and hyperplasia as well as necrosis in the glandular stomach; and renal tubule hyaline droplet accumulation as well as a slightly increased severity of nephropathy [152]. Moreover, myristicin also affected the reproductive system of male rats, which included decreased absolute left cauda and left epididymis weights, as well as a lowered number of sperm per cauda epididymis, germinal epithelium degeneration, elongated spermatid retention in seminiferous tubules of the testis, and exfoliated germ cells in epididymal duct lumina. Therefore, the authors concluded that oral myristicin exposure exhibited the potential to induce reproductive toxicity in male F344/NTac rats [152]. In the mouse study, all animals survived until the end. In mice exposed to myristicin, mean body weights were reduced, livers were enlarged, absolute and relative liver weights were elevated, and hematology parameters were affected, which included increased leukocyte counts and segmented neutrophil numbers. Moreover, various treatment-related lesions were observed in mice, such as oval cell hyperplasia, centrilobular hepatocyte hypertrophy, and necrosis of the liver, as well as epithelial and nerve atrophy, gland hyperplasia, hyaline droplet accumulation, and cytoplasmic vacuolization of the respiratory epithelium in the nose. Besides this, there was a significantly increased incidence of atrophy and hyperplasia in the epithelium of the glandular stomach as well as of chronic and epithelial suppurative inflammation in the forestomach [152]. From these findings, the authors concluded that the major targets after oral myristicin administration in rats and mice were the liver and glandular stomach. Additional targets were the salivary glands, nose, kidney, testis, epididymis, and forestomach. The study authors identified a lowest observed effect level (LOEL) of 30 mg/kg bw (increased relative liver weight) for male rats, 10 mg/kg bw (clinical chemistry) for female rats, 100 mg/kg bw (increased liver weights) for male mice, and 10 mg/kg bw (increased liver weights) for female mice. Moreover, a NOEL of 10 mg/kg bw for male rats and of 30 mg/kg bw for male mice was identified, but not for female rats or mice [152]. Together, the aforementioned data clearly indicate that the spectrum of toxic effects following subchronic myristicin exposure is at least in part, and especially regarding the hepatic and renal effects, comparable to that of the structurally similar compound safrole.
Conclusions
The limited toxicological data and the lack of occurrence and consumption data preclude a comprehensive evaluation of adverse health effects potentially associated with myristicin, elemicin, and other alkenylbenzenes.
Therefore, additional occurrence data are needed for all toxicologically relevant alkenylbenzenes in different food products, especially those containing high levels of alkenylbenzenes (e.g., essential oils, basil-containing pesto, or PFS) [5,11,58]. Alkenylbenzenes can be separated either via GC or high-performance liquid chromatography (HPLC) techniques followed by MS [12,25,[181][182][183][184]. However, constant standardization efforts are needed to increase the specificity and accuracy of the methods used for sample preparation, extraction, and substance separation. Furthermore, data on the consumption of alkenylbenzene-containing foods are required. These data should be collected via appropriate consumption surveys.
The alkenylbenzenes safrole and myristicin as well as methyleugenol and elemicin are structurally closely related (Figure 1). This in turn suggests that the hazard potential of those compounds could exhibit similarities. In this regard, it appears reasonable to identify potential hazards of the toxicologically widely unexplored alkenylbenzenes myristicin and elemicin in comparison to those of the known genotoxic carcinogens safrole and methyleugenol. The available toxicological data, e.g., data on toxicokinetics and genotoxicity, already suggest that both myristicin and elemicin might form reactive metabolites similar to those formed from safrole and methyleugenol. However, the sparse data also indicate that there might be quantitative differences that may result in an altered toxicity profile. This, in turn, cannot be conclusively evaluated at present. Indeed, their genotoxic and carcinogenic potential is largely unknown so far. In this context, two-year combined oral toxicity and carcinogenicity studies are mandatory for the evaluation of the long-term effects, as well as of the carcinogenic potential, of myristicin and elemicin, as also recommended by others [10,24,61]. Moreover, the underlying modes of action of these compounds merit further attention, too. In this context, an appropriate experimental setting should be designed taking into account the alkenylbenzene-specific bioactivation (e.g., via SULTs) discussed in detail before [5].
It is important to note that the conventional bacterial reverse mutation test (Organization for Economic Co-Operation and Development (OECD) Test Guideline (TG) 471; Ames test [185]) lacks the metabolic competence to yield the ultimate carcinogenic sulfoxy intermediates from alkenylbenzenes [186]. However, genetic modifications of the bacteria enabling SULT expression may lead to a more adequate in vitro setting for the mutagenicity testing of compounds metabolically activated via this pathway, such as methyleugenol, myristicin, and elemicin [5,186]. Substantiating this, Monien et al. demonstrated in 2011 that furfuryl alcohol was negative in the standard Ames test, whereas it was mutagenic in a modified setting utilizing S. typhimurium TA100 engineered for the expression of human SULT1 [187]. In line with this, in 2016, Honda and colleagues found methyleugenol, which is not mutagenic in the standard Ames test [92], to be mutagenic in a modified Ames test using a human SULT1-expressing S. typhimurium TA100 strain [186]. Although scientific approaches exist that augment bacteria with human sulfotransferases, these systems are not yet internationally standardized and validated for regulatory purposes.
An alternative approach is the hypoxanthine guanine phosphoribosyltransferase (HPRT) assay (OECD TG 476), which is an in vitro mammalian cell gene mutation test using the hprt and xprt genes for gene mutation measurement in mammalian cells [188]. The method is described in detail elsewhere [189]. Modification of the HPRT assay via the use of replication-competent cells (e.g., human liver cells) expressing human SULT1A1 could also offer an appropriate setting for in vitro mutagenicity testing of compounds bioactivated in a SULT-dependent manner, such as safrole, methyleugenol, myristicin, and elemicin.
From a toxicological point of view, and for the sake of animal welfare, initial mutagenicity testing of alkenylbenzenes with unknown modes of action, such as elemicin and myristicin, should be done in vitro. This might be sufficient if initial testing of mutagenicity is conducted using appropriate test systems, enabling the intracellular activation to reactive sulfate esters by SULT-proficient bacterial or mammalian cells. For regulatory purposes, however, it appears reasonable to recommend transgenic rodent (TGR) models (OECD TG 488 [190]) as ultimate confirmatory assays to decide on the mutagenic potencies of alkenylbenzenes in vivo following a positive in vitro finding [189].
A promising candidate among those TGR models appears to be the gpt delta rodent gene mutation assay, developed by Nohmi et al. [153,[191][192][193]. Since its development, it has already been successfully used in various studies in the context of food safety research [153,193]. Regarding alkenylbenzenes, the gpt delta TGR model was demonstrated to reliably identify safrole, methyleugenol, and estragole as mutagens [124,149,194], as also concluded by others [3,4,195]. One additional benefit of such test systems is the option to evaluate mutagenicity in any tissue of interest [189]. This is of particular interest when mutagenicity has to be tested in distinct organs, such as the liver, e.g., for testing suspected hepatocarcinogens, such as methyleugenol and elemicin [189]. Moreover, such an approach might pave the way for testing mutagenicity in different tissues simultaneously. Furthermore, such in vivo assays are needed to distinguish between genotoxic (e.g., aflatoxin B1) and non-genotoxic carcinogens (e.g., 3-chloro-1,2-propanediol) [153].
Together, the aforementioned approaches would shed more light on the existing, and currently still serious, data gaps, and could help to reduce considerable uncertainties currently impeding the evaluation of adverse health effects potentially associated with the consumption of foods containing alkenylbenzenes.
Conflicts of Interest:
The authors declare no conflict of interest.
"Fear, greed, and dedication": the representation of self-entrepreneurship in international English textbooks
A number of studies have reported neoliberal representations in English textbooks in a variety of contexts around the world. However, studies focusing on self-entrepreneurship, one of the critical neoliberal tenets, are scantily addressed. To fill this void, the present study investigates the representation of self-entrepreneurship deliberately inculcated in English textbooks. Anchored in critical discourse analysis and Halliday's Systemic Functional Linguistics (SFL), this study investigated three Business English textbooks used in higher education in Indonesia. The findings of the study revealed that the English textbooks employed role-playing, presenting celebrity and fame, exhibiting famous entrepreneur figures, presenting distinct images of entrepreneurial figures, and portraying entrepreneur figures through articles or literature to disseminate self-entrepreneurship notions displayed in a variety of discourses. The findings of the current study call for equipping educational practitioners (e.g., teachers, policymakers, book designers) with critical thinking skills as well as providing them with practical tools to interrogate the ideology, norms, and values encapsulated within curriculum artefacts such as language textbooks.
Introduction
Recently, neoliberalism has attracted the interest of linguistics scholars, particularly in the field of foreign language teaching and learning (Bori, 2020a). In the educational landscape, global English textbooks not only serve as objects of neoliberalism but also act as vehicles to (re)produce its discourses. Under neoliberal regimes, individuals are shaped to become enterprising individuals as well as competitive entrepreneurs (Olssen & Peters, 2005). From this view, Bernstein et al. (2015) argue that language learners are entrepreneurs who selectively choose a proper language to learn and perceive learning that language as an act of investment that will raise their competitiveness in the labour market.
The growing body of research on neoliberalism has revealed the methods, principles, and rationales by which neoliberal ideologies are (re)shaped and mediated through language teaching (e.g., English). In this regard, textbooks have been not only pedagogical tools but also a venue for disseminating and reproducing neoliberal tenets such as self-entrepreneurship. Framed in this context, the current study interrogates self-entrepreneurship as one of the prominent notions of neoliberal governmentality (re)produced and maintained in EFL textbooks. To operationalise the data collection, a corpus of data was extracted from three prominent Business English textbooks.
The author's ideological stand (e.g., neoliberalism) and moral values are often concealed in textbooks and other curricular materials (Brown, 1997; Gebregeorgis, 2016). In a similar vein, English teachers and learners are, to some extent, unaware of the self-entrepreneurship notions encapsulated in English texts, or they take them as common sense without critically questioning them as value laden. Thus, English teachers, students, and educational practitioners need practical guidance to reconsider the existence of such a hidden curriculum (see Babaii & Sheikhi, 2017). However, the challenge is that they have neither a practical pedagogical yardstick to systematically and critically analyse such materials (Babaii & Sheikhi, 2017) nor an overall picture of the self-entrepreneurship covertly represented within EFL textbooks. Consequently, many of them leave it unaddressed, while others are merely aware of it but cannot explain or give proper guidance to their students about the core ideological concepts and how these ideologies are mediated and maintained within global ELT textbooks. In this regard, a critical discourse study is urgently needed for such enlightenment, which has become the aim of the current study. As a prevalent economic and political ideology, neoliberalism dynamically moves as a massive phenomenon adjusted to multiple contexts and periods of time (Peck et al., 2012). As a dominant political ideology, neoliberalism has influenced a wide spectrum of political dimensions, such as reducing government intervention and budgetary policy, lowering controls of capital on international monetary flows, and deregulating market activity, while encouraging neoliberal general principles such as free trade, competitiveness, and privatisation (Desjardins, 2013). Some critics have even linked neoliberal rationalities and disciplines to the most malicious impacts of the current global crisis (Peck et al., 2012). Others believe that this ideology is responsible for asset redistribution from projects of social welfare to market enterprise, the exploitation of both natural and human resources, and the obliteration of complementary and mandatory education as one of the fundamental human rights (Lakes & Carter, 2011).
Many neoliberal studies may be divided into three analytical categories: through the lens of a policy framework, ideology, or governmentality (Larner, 2006). The current article views neoliberalism as governmentality, as explained by Foucault in his famous lecture entitled "Sécurité, territoire, population", where he introduced governmentality, which soon changed how people view 'how to govern'. Semantically, the word 'governmentality' links two words (gouverner and mentalité), indicating that Foucault's governmentality hypothesis is based on the reciprocal relation between the technology of power and the form of knowledge, as it is impossible to study technologies of power alone without analysing the construction of the political rationality underlying them (Lemke, 2001). The concept of Foucauldian governmentality lies upon the notion that government is viewed as the "conduct of conduct", or acting on other individuals' actions in order to manage how individuals think and behave (Foucault, 1991). Governmentality is built on the subject's behaviour in the realm of open possibilities, or the subject's actions to regulate, direct, shape, and construct the actions of others (Foucault, 1982). Thus, the power relation model of governmentality may only be applicable to subjects who have many possible choices through "systems of knowledge or discourses" (Varman et al., 2011, p. 1165). This is what distinguishes governmentality from domination relations models and strategic games that emphasise domination and sovereign force. In a nutshell, the concept of governmentality best accentuates the naturalisation and pauperisation of particular ways of thinking and behaving as the result of the reciprocal relation between subject and government. Neoliberal governmentality shapes political rationality, such as individual ways of life, life expectations, behaviours, habits, and subjectivity, to control how individuals think and behave using technologies and techniques (Lemke, 2002; Lorenzini, 2018). Moreover, it constructs the subjectivity of both the consumer and the entrepreneur, who are encouraged to compete in the global market by maximising their potential (Lorenzini, 2018). From the Foucauldian lens, technology (or, specifically, techniques) is a set of practices enacted to socially and physically control the world through a certain routine, enabling neoliberalism to shape docile individuals who can be managed yet perceive themselves as subjects who hold the freedom to act (Bori, 2020a). In this case, individuals have become complacent in their perceptions of freedom, while simultaneously allowing collective power to exert control over them (Davies & Bansel, 2007). Thus, governmentality works as a valuable framework to investigate the subject's constitution by means of a critical approach to language policy (Haque, 2014).
Recently, the study of neoliberalism in ELT textbooks has attracted a plethora of scholarly attention. To begin with, Gray (2010), who examined representations of the world of work from the late 1970s until the present, found that enterprise became one of the central topics in English textbooks, where entrepreneurial figures such as Vijay and Bhikhu Patel and Anita Roddick, successful entrepreneurs who built their careers from scratch, are clearly depicted. In addition, readers were also invited to reflect on how these figures inspire the younger generation. A textbook study by Copley (2017) explored the content of ELT coursebooks over 40 years. It was found that the ideological positioning of global textbooks has evolved, with modern global textbooks accentuating subjectivity rather than a sense of communal citizenship. They also represent competitive, aspirational, and atomised individuals in search of their self-realisation under the free-market framework.
Xiong and Yuan (2018) reported that material orientation and neoliberalism in English language education have a very close relationship. This can be seen from two things. First, English language competence is seen as a linguistic-cultural capital. This commodification is believed to be an intelligent investment that can provide both short- and long-term benefits for students. English learners are perceived as competitive individuals in the job market. Language learners with linguistic-cultural capital competence have a greater opportunity to increase their marketability. As a result, society is represented as human capital and a competitive set of entrepreneurs in the labour market. Second, success stories of English learners, both as individuals and as communities, can be considered a reflection of neoliberal personality stories characterised by individual entrepreneurial triumphs. Using thematic content analysis to interrogate both locally developed and imported textbooks in the Malaysian classroom context, Daghigh and Rahim (2020) reported that international textbooks represented a set of famous figures (e.g., prominent artists, top athletes, and entrepreneurs) characterised as not only gaining a huge amount of public attention but also making a lot of fortune. Conversely, in locally developed English textbooks, those figures were depicted not on the basis of their fortune and fame, but for their success in humanitarian contributions to the nation.
A recent textbook study by Bori (2020a) looked into two market-leading global English textbooks produced in the UK. The findings showed that self-entrepreneur notions are highly represented as central figures in many texts, pictorials, and student activities. Furthermore, students are encouraged to adopt entrepreneurial spirits by being involved in a role play as a new entrepreneur. They were also invited to imagine contextualised challenges faced by actual entrepreneurial figures, such as running a restaurant. In this case, they need to think of initial investment, product development, advertising, and building a strong and loyal team. They were also provided some instructions on what to do to become successful self-entrepreneurs. The studies cited above mainly provided empirical evidence of how neoliberalism is disseminated through language textbooks, analysed through multiple neoliberal parameters (e.g., consumerism, competitiveness, self-entrepreneurship, etc.). However, studies focusing on self-entrepreneurship tenets in English textbooks, especially in the Indonesian context, are scantily addressed. Thus, the current study addresses two research questions as follows: (1) How are self-entrepreneurship tenets inculcated into Business English textbooks used in higher education in Indonesia? (2) To what extent do the Indonesian Business English textbooks represent self-entrepreneurship as one of the neoliberal tenets? Prior investigations provided a vivid foundation for subsequent studies into how neoliberalism (e.g., self-entrepreneurship) was instilled in English textbooks in different contexts. Compared to earlier investigations, the current study not only focused on self-entrepreneurship as a unit of investigation but also emphasised methodological robustness and empirical evidence. In an attempt to fill these gaps, the present study used Critical Discourse Analysis (CDA) and Systemic Functional Linguistics (SFL) theory to undertake an in-depth analysis of the self-entrepreneurship purposefully incorporated in curriculum artefacts such as English textbooks. The blending of these theories can give a better understanding of how the discourse of self-entrepreneurship serves as a neoliberal tenet interrelated with language learning in the educational landscape. Such a process enables language learners, as neoliberal agents, to incorporate this idea into their minds. Among other neoliberal tenets, the current study focused on investigating self-entrepreneurship. Thus, it provides rich empirical data showcasing how self-entrepreneurship tenets were canalised throughout multi-layered discourses. To extend critical investigation into neoliberalism represented in language textbooks, the current study focuses on how Business English textbooks elucidate self-entrepreneurship in Indonesian higher education. This study attempts two main contributions. First, it empirically investigates how language textbooks represent self-entrepreneurship as one of the critical principles of neoliberalism and to what extent Indonesian Business English textbooks disseminate self-entrepreneurship. Methodologically speaking, the second contribution of the current study is to demonstrate how CDA, along with SFL, can be a viable tool to analyse the verbal or textual hidden curriculum (e.g., neoliberalism) embedded in English textbooks.
Textbook corpus
The current study garnered data from three Business English textbooks widely used in Indonesian higher education. Published by Cambridge University Press in 2004, the first textbook, Communicating in Business: A Short Course for Business English Students, was authored by Sweeney. This textbook is designed to develop students' language skills in five areas of communication. To accentuate students' listening and speaking skills, the textbook is equipped with recorded materials (Sweeney, 2004). Authored by David Cotton and Sue Robins, the second textbook is entitled Business Class. This Business English textbook consists of 15 chapters adopting a topic-based and skill-based approach.
Published by Pearson in 2001, this textbook employs an integrated approach to help students master grammar and vocabulary in context (Cotton & Robins, 2001). The third textbook is entitled Business Correspondence: A Guide to Everyday Writing. This textbook, published by Pearson in 2003, was authored by Lougheed. Across 15 units of material, students are guided to enhance their writing skills, ranging from understanding the format of business letters to the use of formal and informal Business English (Lougheed, 2003).
Data analysis
In pursuit of interrogating the representation of self-entrepreneurship in both visual and verbal text in three Business English textbooks, the current study deployed Critical Discourse Analysis (CDA) to examine self-entrepreneurship as a neoliberal discourse instilled in textbooks. De Los Heros (2009) noted that CDA best elucidates how texts and social practices construe a societal, ideological system that might glorify or marginalise specific values. In the educational landscape, social practices can be traced within curricular texts such as language textbooks, which are often used as the main educational tool in classroom settings. To operationalise this analysis, the current study employed a CDA framework. This framework best elucidates how textbook writers create particular effective discourses through a variety of texts. Further, the current study incorporated CDA with Halliday's Systemic Functional Linguistics (SFL) meta-functional framework for the textual analysis. From the lens of SFL, language is viewed as a system of social semiotics to construe experiences (Anafo & Ngula, 2020). At the lexical level, lexicogrammar in a variety of discourses is deemed a set of meaningful choices called register, which is often represented in both situational and cultural contexts (Gu, 2021). Language textbooks often showcase such register features influenced by value-laden positions (e.g., neoliberal ideology) of which both teachers and students are unaware (Setyono & Widodo, 2019). In this regard, SFL enables this study to see neoliberal tenets canalised through curriculum texts such as language textbooks. The current study began the analysis by manually counting the number of units that describe entrepreneurial practices or values. The identified representations of self-entrepreneurship consist of textual data, visual data, or both. In pursuit of rigor and trustworthiness, the present study adopted Mullet's (2018) general analytical framework for CDA. This framework consists of five stages. To begin with, the current study selected the discourses of self-entrepreneurship gleaned from both textual and visual data. Then, the selected discourses were prepared for the analytical process. Entering the analytical procedure stage, the selected data moved to the interrogation of the historical and social background of the text makers of the selected data. Next, the analytical procedure turned to qualitative coding to conceptually organise and categorise the raw data. In this case, open (in this step, SFL theory was often critical to employ), axial, and selective coding were carefully enacted (Qureshi & Unlu, 2020). Finally, the analytical procedure entered the interdiscursive stage, which focuses on identifying the interrelation of discourses.
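To make the counting and tallying step above concrete, the following is a minimal sketch in Python of how manually coded units can be aggregated into the distribution reported in the findings that follow. It assumes the manual coding has already been done; the category labels mirror those reported below, and the counts for role-play (18), celebrity and fame (1), and article and literacy (21) come from the reported findings, while the counts for the remaining two categories are hypothetical placeholders chosen only so that the percentages come out consistently.

```python
from collections import Counter

# Each identified textbook unit is assumed to carry one manually assigned
# representation category. The first, second, and last counts are taken
# from the reported findings; the other two are illustrative placeholders.
coded_units = (
    ["role-play"] * 18
    + ["celebrity and fame"] * 1
    + ["famous entrepreneur figures"] * 4   # placeholder count
    + ["visual text"] * 3                   # placeholder count
    + ["article and literacy"] * 21
)

counts = Counter(coded_units)
total = sum(counts.values())

# Print each category with its raw count and share of all coded units.
for category, n in counts.most_common():
    print(f"{category:30s} {n:3d}  {n / total:5.1%}")
```

Run on these illustrative counts, the tally reproduces the reported shares: article and literacy at about 45%, role-play at about 38%, and celebrity and fame at about 2%.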
An in-depth analysis of the textbooks provided empirical data showing the methods used by the textbook producers to inculcate self-entrepreneurship, represented in both textual and pictorial data. From the data in Table 2, it is apparent that there were five methods commonly deployed by the textbooks, namely using role-play, displaying celebrity, showing entrepreneur figures, presenting visual text, and employing articles and literacy. Surprisingly, it was found that employing articles and literacies was the most used approach, accounting for 45% (21), while showcasing celebrity to convey entrepreneurial ideas was the least used approach, at only 2% (1). Another significant finding was that role-play ranked second with 38% (18). We now turn to the selected data to provide empirical evidence of how the English textbooks under study encapsulate certain entrepreneurial ideas internalised in multi-layered discourses and to answer the second research question. The following data were therefore worthy of discussion. In an attempt to inculcate the notion of self-entrepreneurship in students' minds, the English textbooks used many ways. Neoliberal English textbooks often convey such notions in interactive forms that actively engage students to act as entrepreneur figures. One of the prominent ways is to give students a role-play activity. In this challenging activity, the students were initially given materials related to a certain topic; in Business English, the topics might be related to products and advertisement. Next, students were encouraged to form groups where they could work together as a team. This not only makes them feel comfortable performing the given task but also lets them experience an artificial business atmosphere and readjust their minds and behaviour to the real situation. As seen in Figure 1, the students were given a task to perform a presentation and a meeting, both of which are critical language skills to master in a Business English class. Yet the ideas presented to generate these skills merit critical investigation.
Internalising self-entrepreneurship through role play
Right after the students were encouraged to form groups, they were motivated to create a new leather product for a certain company to develop ('brainstorm your ideas for a new leather product...'). Brainstorming is a mental process which imperatively demands that students mentally generate a certain phenomenon (new ideas). In this case, the sensers (students) were informed that the purpose of this activity is to create a new product to develop. This indicates that the author of the textbook leads students to train their ability and triggers their creativity in generating products. The students were then asked to select the best idea. As entrepreneurial agents, the students need to consider whether the selected product fits the market demand ('choose your best idea...'). It is interesting to note that in this simple phrase, the author described the lexis 'best idea' with the possessive 'your'. Semantically, the lexical item 'your' indicates that the selected idea has to originate from their own thinking. In this regard, the author might highlight that the students can certainly create a new product. Along with selecting the best idea to develop, they were led to think and act like real entrepreneurs, as seen in the following excerpt: Work out the details of the products, target marketing, marketing, productions, etc. (Data from Business Class) Following transitivity in SFL theory, the actors (students) were encouraged to perform a material process (work out) to follow up their idea. This indicates that the text maker urged the students to enact their idea by elaborating on it. In this regard, they need to detail the product and its production. Additionally, they were motivated to set out ways to sell the products by addressing the right and potential consumers and how to reach them. To polish the students' speaking ability, they were encouraged to present their ideas in front of other groups. As they were asked to perform a business presentation, students not only learn how to present their ideas but also hone their marketing skills, deploying resources to convince other groups about their products. The students were also trained to use their 'entrepreneurial instinct' to read business opportunities ('discuss the merits and demerits of each proposal and try to reach a decision...'). This activity demanded that students use their faculties to read the opportunities of each proposed business idea. This pivotal skill enables entrepreneurs to develop products and services adjusted to market demands, so that their business will be sustainable and profitable. By performing these activities, the students might think and behave like entrepreneurs in real life. From the lens of neoliberalism, it is clear that the text makers might attempt to (re)shape the entrepreneurial mindset, which is one of the critical principles of neoliberalism.
Showing celebrity and fame
Athletes and sports celebrities have served as a promising venue to propagate cultural products, act as brand ambassadors, and provide a 'short cut' to obtaining mass interest and attention (Sassenberg, 2018). In this respect, celebrities or other renowned figures might become role models for many people (Lines, 2001; Osborne et al., 2016). Following this trend, neoliberal textbooks often use famous people, such as celebrities or celebpreneurs, to propagate a certain ideology, notion, or even product, which might have a significant effect on others. The following excerpt showcases how the text makers internalise the self-entrepreneur notion in language learning through a famous American sportsman, Michael Jordan. In this regard, the text makers showcase the success story of Michael Jordan by discussing one of his most successful endorsement achievements: Endorsement can be very profitable both for the sporting personalities and for the companies which sign them. Take the case of the famous American basketball player, Michael Jordan, one of the most graceful and charismatic players ever to appear on a basketball court. In one year alone, his endorsement of Nike helped sell one and half million pairs of shoes -and earned him $20 million. (Data from Business Class) Following the SFL principle, the text makers began the discourse by giving an attribute (profitable) to endorsement. The attribute was even amplified with 'very' and completed with a purpose (both for the sporting personalities and the companies which sign them...), which indicates that the text makers wanted the students to incorporate the idea (endorsement). In the next sentences, they kept glorifying endorsement by giving a real example of how this promising business benefits the company and the sportsman. They also listed some celebpreneur characteristics, such as being a basketball player and the most graceful and charismatic player, before mentioning economic achievement. Semantically, those characteristics, which might inspire the students, are described as a venue to achieve a successful endorsement.
Displaying famous entrepreneur figures
As seen in Figure 2, the depiction of a famous businessman is represented to bring entrepreneurial virtues and reasoning into students' minds. Linguistically speaking, the sentences 'What is management skill?' and 'What are entrepreneurial skills?' are deliberately deployed to shape a cognitive evaluation distinguishing between managerial and entrepreneurial skills. The use of the interrogative mood may be inferred as an attempt to soften the author's imperative mood. In this case, the author might use such a structure to loosen the force (direct command) and give some space to negotiate. Visually, to reinforce entrepreneurial virtue and reasoning, the author employed a famous entrepreneur figure. As new neoliberal agents, the students were nurtured to shape a sense of being entrepreneurs. The use of such multimodal text provides an ideal image of how entrepreneurs look and behave. Strategically placed under Figure 2, the lexical choices showed the text makers' ideological position on entrepreneurs. In this regard, they employed 'internationally recognized' and 'successful' to glorify the demonstrated entrepreneur figure. The author of the textbook may attempt to create a strategy to shape students' entrepreneurial agency. Following SFL theory, the lexis 'recognised' is a mental process followed by a role circumstance (as a successful entrepreneur). Framed in the passive form (the senser was omitted), the sentence might be inferred as the textbook producers' attempt to build a general perception of the figure. The eye contact in the visual text between the figure and readers may provide an act of fostering interaction in which students might cultivate entrepreneurial literacy. This multimodal literacy may affect how students think and behave in a larger community. The constructed asymmetrical domination between the man on the TV screen and the men and women is deliberately highlighted. Among the five actors in the picture, the facial expression and body language of the man on the TV screen are highly emphasised, while the rest are not overtly accentuated. The dominating power held by the man on the TV screen is also showcased through his gaze. Framed in a top-down visual metaphor, the pictorial text stresses the dominant position of this central figure. Semiotically, the unequal power among the actors in the selected picture is projected through the possession of attire attributes, where only the man on the TV wears a coat. The depiction of the figure leading the meeting through an old-fashioned TV might suggest a situation set in past decades. Yet the way this distinct figure is emphasised in English textbooks is not significantly different today. This denotes that the text producers might intend to strategically deploy visual text to corroborate the self-entrepreneur figure. The excerpt above apparently states that being 'always on the move' is the common factor in becoming a successful entrepreneur. Semantically, 'on the move' means invariably improving one's skills, knowledge, and attitude in line with admired goals, indicating that the text makers wanted the students to incrementally empower themselves with various entrepreneurial attitudes, skills, and knowledge to raise their competitiveness. To this point, it is clear that the text makers tried to shape an entrepreneurial mindset in students' minds.
Discussion
The current study examines the representation of self-entrepreneurship notions disseminated through Business English textbooks. The first research question investigated how self-entrepreneurship principles are instilled in Business English textbooks used in higher education in Indonesia. The results of the study show that the three Business English textbooks under study strategically perpetuated self-entrepreneurship tenets displayed in a variety of techniques and discourses, such as using role-play, showcasing celebrity and fame, displaying famous entrepreneur figures, showing distinct images of entrepreneurial figures, and displaying entrepreneur figures through articles or literacy. The findings reported that engaging students with self-entrepreneurship literacies is the way most employed by text makers to intentionally inculcate self-entrepreneurship notions. In contrast, showcasing celebrities or famous figures to inspire language learners is scantily represented. The second research question investigated the extent to which self-entrepreneurship is represented as a neoliberal concept in Indonesian Business English textbooks. The findings of the present study suggest that role plays were reiterated a multitude of times in the Business English textbooks under study to disseminate self-entrepreneurship notions. It is encouraging to compare this finding with Bori (2020a), who found that neoliberal English textbooks often ask students to practise their entrepreneurial spirits while imagining themselves opening their own business. The difference between his findings and the present study is that the latter reported that students were not only encouraged to imagine themselves as entrepreneurs but also to perform some of the technical entrepreneurial skills (e.g., generating, producing, and selling new products), defined as "a disparate set of practices, knowledge, and ways of acting and being" (Urciuoli, 2008, p. 212). For neoliberal individuals, these skills are perceived as valuable assets worthy of investing in, nurturing, regulating, and enhancing (Martin, 2000). This is because, under the neoliberal framework, individuals are encouraged to employ 'responsibility of self', characterised by having the moral imperative and rationality to invest at crucial points of life (e.g., education, skills, health care, retirement, etc.) (Peters, 2001). Lemke (2001) argued that the dissemination of responsibility of self in various discourses makes neoliberal individuals view their misfortunes (e.g., illness, poverty, and unemployment) as their own responsibility rather than a state or communal responsibility. In this regard, responsibilisation serves as the foundation of lifelong educational praxis, skill and knowledge accumulation, and other self-investments. Another significant finding was the representation of celebrity figures who, with fame and talent in hand, were depicted as having amassed a great fortune. This result corroborates Gray (2013), Bori (2020a), and Daghigh and Rahim (2020), who argued that, in line with other neoliberal tenets, self-entrepreneurship was inculcated through celebrity discourses. This is because, in the language education landscape, celebrity discourses have a significant role in promoting English as a representation of individualism, outstanding accomplishment, and gaining a lot of fortune (Gray, 2013).
In the classroom context, such glorification might affect students' minds in terms of how they perceive celebrity and fame in a broader context. It is interesting to note that the celebrity figure in this study was framed under an endorsement discourse from which the celebrity gains a lot of fortune. From the neoliberal perspective, Carrier (2010) argued that by signing such endorsement contracts, celebrities propagate the idea that social causes can be mediated mainly through ethical consumption instead of political engagement.
Under neoliberal frameworks, every social conduct should be aligned with neoliberal governmentality, where individuals are transformed from state reliance to self-reliance and entrepreneurial self-enhancement (Peters, 2001). In an attempt to embed self-entrepreneurship notions, the textbooks under study glorified entrepreneur figures in a variety of ways. First, this glorification can be seen, for example, in the demonstration of famous entrepreneur figures. Famous entrepreneur figures were often accentuated through what Peters (2001) called 'self-empowerment' to enhance students' entrepreneurial skills projected for broader contexts. This argument also echoes our fifth finding, where the students were encouraged to be in 'always on the move' mode. The internalisation of this entrepreneurial behaviour is viewed as an incremental construction of certain forms of subjectivity (Dardot & Laval, 2014).
The educational reformation under neoliberalism comes to represent new kinds of subjectivity (Ball & Olmedo, 2013). These distinguished subjects were shaped to have a sense of responsibility for their own skills, characteristics, specific preferences, interests, and motives (Martin, 2000). Moreover, they have to be aware of their subjectivity by developing the idea that their "capacity to market themselves" is the key to their own success (Urciuoli, 2019, p. 93). This is because, as the 'conduct of conduct', neoliberal governmentality transforms individuals into human capital by internalising a certain form of subjectivity characterised by being an autonomous, flexible, and individualised subject who works under the entrepreneur-of-self concept (Turken et al., 2015). Bearing human capital characterised by the endeavour to maximise one's own self-potential is perceived as the most fundamental way in which neoliberalism construes subjectivity (Weidner, 2009). Thus, neoliberal subjects refer to self-entrepreneurs who have the freedom to maximise their potential and accumulated skills adjusted to dynamic societal demand (Lorenzini, 2018; Walkerdine, 2006). Second, the textbook makers portray the entrepreneur figures in a certain way. Along with this notion, Bori (2020a) argued that in neoliberal textbooks, celebrity figures are often described as central figures. The present findings seem consistent with this argument, as it was empirically found that entrepreneur figures in the present study were semiotically depicted with a certain attire, gaze, and body language that reflect their dominant and authoritative stature in the selected images. Finally, the glorification of entrepreneurial subjects was demonstrated through literacy, including intact articles taken from well-known dailies. Indeed, such an approach might serve as a viable resource for students to learn self-entrepreneurship tenets.
Rojo and Percio (2019) argued that students' interaction with these neoliberal literacies allows them to carry out self-care and self-reflection, through which they can compare their skills with the presented successful enterprise model. They further argued that such activity might enable the students to implement the technology of self. Technology of self is a term Foucault (1988) used to describe an individual's desire to do something without any coercion or control from outsiders (Miller & Rose, 2008). The technology of self can be mediated by the process of channelling certain values or ideologies into the mindset of neoliberal individuals to support the construction of the neoliberal society. In this regard, the free market is a utopian concept of societal neoliberalism in which freedom from government intervention in economic resources can be achieved.
In a nutshell, the present study investigated how neoliberal English textbooks represent self-entrepreneurship. It illuminates some of the strategic approaches used by textbook producers to inculcate this prominent neoliberal tenet. The current study's findings indicate that the idea of self-entrepreneurship was strategically embedded in English textbooks through multiple discourses and approaches. Thus, as the main consumers of the global English textbook, students (and educational practitioners) should also be encouraged to view English textbooks critically, as a fertile area for cultivating hidden agendas (e.g., neoliberalism), rather than simply a source of learning material. Hopefully, students can be motivated to harness their critical literacy to unmask the hidden curriculum.
Conclusion
Grounded in CDA and SFL theory, this interpretive study interrogated the representation of self-entrepreneurship strategically embedded in English textbooks. The empirical findings show that the textbooks under study integrated self-entrepreneurship notions and pedagogy into English language education using several techniques, such as role-play, celebrity and famous entrepreneur figures, visual texts, and the use of articles and literacy. In this regard, the current study might be a valuable yardstick for raising awareness of the neoliberalism embedded in English language textbooks. In the classroom setting, students might be familiarised with critical reading to hone their higher-order thinking skills and employ their metacognitive ability to actively negotiate what they read and construe their own understanding beyond the text (Sutherland & Incera, 2021). Moreover, the present study implied the need for a dynamic and contextualised critical pedagogy with a myriad of viable starting points (Martin, 2015) to neutralise neoliberalism and its tenets (e.g., the entrepreneur of self) (Bori, 2020b). It is hoped that such activity could help students decipher hidden curricula (e.g., neoliberalism). Anchored in subjectivism and interpretivism as its ontological and epistemological paradigms, the current study sought to make both a methodological and a practical contribution. Methodologically speaking, this study serves as a pedagogical yardstick for classroom practitioners, policymakers, book designers, and others to interrogate the cultures, ideologies, norms, values, and other hidden curricula encapsulated in English textbooks. Practically, the findings of this study contribute to the enhancement of language education by reappropriating neoliberal tenets such as self-entrepreneurship. As for further research, broader studies focusing on the representation of self-entrepreneurship in other contexts worldwide are critical to carry out. Further, teachers' as well as students' attitudes and perceptions toward neoliberal tenets (e.g., self-entrepreneurship) are worthy of close investigation.
The present study has several limitations. To begin with, the emerging issues of the current study were empirically examined under the researcher's subjective perspective. Grounded in subjectivism and interpretivism, the findings of the current study might shed light on textbook analysis in language education. However, the present study might not resonate with students' own study objectives. In this case, future studies can involve students, as the main users of the textbooks, and other relevant users such as educators, policymakers, or textbook authors. Additionally, other ontological and epistemological stances are worth considering to enrich the range of perspectives and approaches. Second, the present study employed only three Business English textbooks as the object of investigation, adjusted to the Indonesian context. In this regard, the results of the current study should be interpreted critically within the context of the selected textbooks. Employing a wider variety of English textbooks is a fertile area that future research can address. It is hoped that future research can provide more vivid empirical data regarding how the notions of the self-entrepreneur are deliberately (re)shaped, nurtured, and enhanced. Methodologically speaking, the current study utilised CDA and SFL as the means of scrutinising the self-entrepreneurship tenets embedded in English textbooks. Future studies can explore other methodologies to gain more empirical results conceptualising self-entrepreneurship ideas.
"Business",
"Education",
"Linguistics"
] |
Micronized Copper Wood Preservatives: Efficacy of Ion, Nano, and Bulk Copper against the Brown Rot Fungus Rhodonia placenta
Micronized copper (MC) formulations, consisting of a nanosized fraction of basic copper (Cu) carbonate (CuCO3·Cu(OH)2) nanoparticles (NPs), were recently introduced to the market for wood protection. Cu NPs may presumably be more effective against wood-destroying fungi than bulk or ionic Cu compounds. In particular, Cu-tolerant wood-destroying fungi may not recognize NPs, which may penetrate fungal cell walls and membranes and exert their impact. The objective of this study was to assess whether MC wood preservative formulations have a superior efficacy against Cu-tolerant wood-destroying fungi, due to nano effects, compared with conventional Cu biocides. After screening a range of wood-destroying fungi for their resistance to Cu, we investigated fungal growth of the Cu-tolerant fungus Rhodonia placenta in solid and liquid media and on wood treated with MC azole (MCA). In liquid cultures we evaluated the fungal response to ion, nano, and bulk Cu, distinguishing the ionic and particle effects by means of the Cu2+ chelator ammonium tetrathiomolybdate (TTM) and measuring fungal biomass, oxalic acid production, and laccase activity of R. placenta. Our results do not support the presence of particular nano effects of MCA against R. placenta that would account for an increased antifungal efficacy, but provide evidence attributing the main effectiveness of MCA to azoles.
Introduction
Copper (Cu) has long been known for its fungicidal properties, and it is an essential biocide for wood in contact with the soil, as it is the only active substance that hitherto successfully inhibits wood decomposition by soft rot fungi [1]. The efficacy of Cu-based wood preservatives against wood-destroying fungi is mainly exerted by Cu in its soluble form, as Cu2+ ions [2][3][4]. However, an increased copper efficacy may be achieved at the nanoscale: several nanoparticles (NPs) have been shown to be more toxic to prokaryotes and eukaryotes than larger particles of the same material.

Materials and Methods

Materials

2,2-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), bulk CuCO3·Cu(OH)2, CuSO4, TBA, and TTM were purchased from Sigma Aldrich, while agar, malt extract, and potassium chloride were obtained from VWR (Oxoid, Darmstadt, Germany).
The oxalic acid assay kit (Enzytec Oxalic acid) was purchased from R-Biopharm AG. The silver stain kit (Dodeca Silver Stain) was obtained from BIO-RAD.
Two commercial aqueous suspensions of MCA were investigated. The two MCA formulations contain comparable amounts of Cu particles but differ in the amount of TBA: MCA_HTBA contained 5% w/w and MCA_LTBA 0.4% w/w TBA.
NP characterization. Cu particles in the MCA formulations were characterized prior to fungal exposure. Particle morphology was assessed by transmission electron microscopy (TEM) with a Zeiss 900 microscope (Zeiss SMT, Oberkochen, Germany). TEM grids (400 mesh) coated with 8 nm of carbon were incubated for 20 s on a 10 μl droplet of MCA diluted with nanopure water. The excess suspension fluid was drawn off with filter paper.
Particle size distribution was measured by nanoparticle tracking analysis (NTA) using a NanoSight LM20 (NanoSight Ltd., UK) on MCA diluted with Milli-Q water. Data analysis was performed with NTA 2.3.5 software (NanoSight Ltd., UK). Particle diameters are reported as the average and standard deviation of seven video recordings of the sample. Zeta potential measurements were carried out on MCA diluted with Milli-Q water using a Zetasizer NanoZS (Malvern Instruments, UK).
Screening for Cu-tolerance
All fungi used in the solid media study were wood-destroying basidiomycetes commonly used in EN 113 tests: Antrodia serialis (Fr.) Donk isolate 43, Coniophora puteana (Schumach.) Karst isolate 62, Gloeophyllum trabeum (Pers.) Murrill isolate 100, R. placenta isolate 45, and Trametes versicolor (L.) Lloyd isolate 159 from the Empa culture collection. Fungal mycelia (9 mm in diameter) were grown in 9 cm Petri dishes with 25 mL solid medium (autoclave sterilized) containing 4% (w/v) malt extract and 2.5% (w/v) agar. The media were amended with either 0.01% (w/v) or 0.05% (w/v) of MCA_HTBA or MCA_LTBA. Three replicates were prepared for each condition. Cultures were stored at 22°C and 70% RH. The cultures were inspected regularly, and their 4 cardinal points were marked to determine the growth radii until the colonies reached the edges of the Petri dishes. Fungal growth rate for each colony (in mm per day) was determined by dividing the mean value of the latest growth radii minus that of the earliest by the number of days elapsed between the measurements.
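As a minimal illustration of this calculation (not the authors' code; the radii values are made-up stand-ins), the growth rate of one colony can be computed as follows:

def growth_rate_mm_per_day(earliest_radii_mm, latest_radii_mm, days_elapsed):
    # Mean of the latest radii minus mean of the earliest, divided by days.
    mean_early = sum(earliest_radii_mm) / len(earliest_radii_mm)
    mean_late = sum(latest_radii_mm) / len(latest_radii_mm)
    return (mean_late - mean_early) / days_elapsed

# Radii (mm) at the 4 cardinal points of one colony, measured 7 days apart.
print(growth_rate_mm_per_day([10, 11, 9, 10], [24, 25, 23, 24], 7))  # 2.0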
Fungal response to Cu2+ ions and particles
The fungus used in the liquid culture study was R. placenta isolate 45 from the Empa culture collection. Fungal mycelia were grown in 500 mL Erlenmeyer flasks with 250 mL liquid culture medium (autoclave sterilized) containing 1% (w/v) malt extract and 0.6% (w/v) potassium chloride.
To understand the effect of ion, nano, or bulk Cu, the following materials were added, respectively: CuSO4, MCA, and CuCO3·Cu(OH)2. The concentration of CuCO3·Cu(OH)2 and CuSO4 was 0.02 mM. The quantity of MCA was calculated based on the equivalent 0.02 mM of total Cu. The interference caused by the azole biocide on the fungus was assessed by adding TBA. The amount of TBA, alone or with Cu, was calculated as the TBA content in MCA (5% w/w). TTM (0.02 mM) was used to separate the Cu-based particles from the Cu2+ ions. All these materials were added to the liquid cultures as indicated in Table 1.
Each treatment was performed in triplicate. Each flask was inoculated with one disc (8 mm in diameter) of the strain pre-cultured in solid medium and incubated on an orbital shaker (100 rpm) at 22°C for 9 weeks. The pH of the liquid culture was between 4.5 and 5, so it was assumed that some Cu particles would not solubilize but would remain suspended in the liquid media. After incubation, the biomass was harvested by filtration and oven dried at 107°C for 24 hours. Fungal growth was estimated as wet and dry biomass weight.
Laccase activity. Laccase activity in R. placenta 45 was measured after 16 weeks of incubation in untreated, MCA_LTBA-, and MCA_HTBA-treated wood colonized by R. placenta 45 (see protective effectiveness of MCA active ingredients) and after 2, 4, and 9 weeks in liquid cultures. For detection of laccase in wood, the colonized samples were treated according to Wei et al. [30]. The ground wood was stirred overnight at 4°C in dist. H2O containing 1 M NaCl to extract extracellular proteins. The liquid phase was separated from the solids by vacuum filtration through Whatman no. 1 paper and concentrated in an ultrafiltration cell (Ultracel, Millipore) fitted with a 10-kDa cutoff membrane.
Oxalic acid assay. After incubation, 0.5 mL of liquid media were taken from each culture medium and oxalate concentration in the samples was analyzed with a spectrophotometer (Genesys 10S UV-vis, Thermo Scientific Inc., Waltham, MA, USA) at 590 nm as specified by the instruction manual (Enzytec Oxalic acid, R-Biopharm AG). Samples were diluted 100-fold with distilled water before being used in the assay as they showed very high activities.
To determine any further changes in the protein secretions of R. placenta 45 exposed to the different amended media after incubation, 0.2 mL of liquid media was taken from each culture medium. Protein extracts from the control, CuCO3·Cu(OH)2, CuSO4, and MCA liquid media -with and without TTM- plus markers (BIO-RAD) were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) using 10% gels loaded with 5 μL of each liquid culture sample and the marker. Gels were silver stained with the Dodeca Silver Stain kit according to the instruction manual (BIO-RAD).

Protective effectiveness of MCA active ingredients. Wood samples were impregnated according to the European standard EN 113 [37] with different concentrations of MCA (2%, 1.6%, 1.33%, 1.07%, 0.8%, 0%). Scots pine sapwood blocks (50 x 25 x 15 mm) were also impregnated with 2% and 1.6% equivalent concentrations of CuCO3·Cu(OH)2 or TBA. No permits were required for the described study, which complied with all relevant regulations. No endangered or protected species were involved. After drying, the samples were exposed to R. placenta 45 at 22°C and 70% RH. Test procedures were performed according to the European standard EN 113 [37]. After incubation, wood blocks were removed from the culture vessels, brushed free of mycelium, and oven dried at 103±1°C. The percentage of weight loss was calculated from the dry weights before and after the test.
Statistical analysis
Growth data and oxalic acid concentrations from fungi growing on solid and liquid media and on wood were log-transformed, and data expressed as percentages, such as mass loss, were arcsine-transformed prior to analysis (ANOVA) and back-transformed to numerical values for visualization. Means were separated using Tukey's HSD (Honestly Significant Difference) test at significance level p < 0.05. The statistical package used for all analyses was SPSS (Version 17.0, SPSS Inc., Chicago, IL, USA).
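The analyses were run in SPSS; as a rough Python sketch of the same pipeline (the library choice and toy values are our assumptions, not the authors' scripts):

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Toy growth-rate data (mm/day) per treatment; replace with measured values.
groups = {"control": [2.1, 2.3, 2.2],
          "MCA_LTBA": [1.1, 1.0, 1.2],
          "MCA_HTBA": [0.6, 0.5, 0.7]}

# Log-transform growth data; percentage data (e.g., mass loss) would instead
# be arcsine-transformed: np.arcsin(np.sqrt(p)) for a proportion p.
logged = {k: np.log(np.asarray(v)) for k, v in groups.items()}

print(f_oneway(*logged.values()))  # one-way ANOVA on the transformed data

values = np.concatenate(list(logged.values()))
labels = np.concatenate([[k] * len(v) for k, v in logged.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey's HSD, p < 0.05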
NP characterization
The Cu particles in the two MCA formulations were comparable and appeared heterogeneous in size and morphology, as shown in the TEM micrographs (Fig 1a and 1b). The size distributions were also similar for the two MCA formulations (Fig 1c and 1d). The mean diameter was 104±1.7 nm (mode: 87±2.2 nm) for MCA_HTBA and 174±5.9 nm (mode: 150±8.2 nm) for MCA_LTBA. Therefore, the Cu particles were not solely in the nano-range. The Cu particles in diluted MCA_HTBA and MCA_LTBA had an average zeta potential of -21.0±0.4 mV and -16.5±1.4 mV, respectively, indicating suspensions that tend to aggregate.
Screening for Cu-tolerance
We evaluated the growth of different wood-destroying fungi in MCA-amended media to identify the most Cu-tolerant strain for subsequent studies. Both concentrations of the MCA formulations caused appreciable reductions in fungal growth rate compared to the controls (p-value < 0.001). C. puteana 62 was not able to grow in any of the amended media, showing no tolerance to Cu or TBA, whereas A. serialis 43 and G. trabeum 100 effectively grew only in 0.01% MCA_LTBA. Differences in overall fungal growth rates were more evident at 0.01% for MCA_LTBA and MCA_HTBA, as distinct patterns were apparent (p-value < 0.001). Media with 0.05% MCA similarly inhibited fungal growth. Minor growth rates at such concentrations were recorded only for T. versicolor 159 and R. placenta 45. These two strains also outperformed the other fungi at lower concentrations (p-value < 0.001). In particular, mean growth of R. placenta 45 was overall the highest, which indicated a high Cu-tolerance. Therefore, this strain was selected for the subsequent tests.
Fungal response to Cu2+ ions and particles
We assessed the response of R. placenta 45 to Cu ions, NPs, or bulk material to determine whether nano effects may account for a superior efficacy of MCA. The effects of Cu2+ from dissolution of CuSO4, nano Cu from MCA, bulk CuCO3·Cu(OH)2, and TBA on fungal growth were compared. The differences observed between the tested groups were significant, as indicated by Tukey's test on the wet biomass production. The addition of TTM to liquid cultures substantially reduced biomass production. What emerged is that TBA strongly suppresses growth of R. placenta 45. When TBA was associated with Cu (in MCA or with CuCO3·Cu(OH)2), its inhibition was reduced as follows: TBA > TBA + CuCO3·Cu(OH)2 > MCA.
Laccase activity. Laccase was detected in both MCA-treated (MCA_LTBA and MCA_HTBA) and untreated wood. The amount detected was minor (approx. 1 U/L).

Oxalic acid production. Fig 4 shows the mean values of oxalic acid produced by R. placenta 45 under the different conditions. The amount of oxalic acid measured represents only soluble free acid and salts; it does not take into account the water-insoluble copper oxalate and/or calcium oxalate precipitates. The differences observed between the different groups were significant (Tukey's test). Similarly to the results highlighted in the fungal biomass tests, TBA heavily suppressed the production of oxalic acid by R. placenta 45. Oxalic acid production in cultures with MCA, CuCO3·Cu(OH)2, and CuCO3·Cu(OH)2+TBA was lower than in the controls. The addition of TTM to the Cu-amended liquid cultures resulted in an increase in oxalic acid production for MCA and CuCO3·Cu(OH)2, both containing CuCO3·Cu(OH)2, whereas it did not cause any major difference for CuCO3·Cu(OH)2+TBA and CuSO4. The highest oxalic acid concentration was measured in cultures exposed to MCA + TTM. The protein profiles obtained by SDS-PAGE showed no difference in the protein expression profiles of R. placenta 45 under the different growth conditions (data available through the ETH Data Archive at: http://doi.org/10.5905/ethz-1007-21); therefore, laccase and oxalic acid are evidently the main contributors, and no further protein analysis to determine different behavior due to Cu exposure was performed.
Discussion
Cu is currently used to protect wood from fungal decomposition due to its antifungal properties. In particular, Cu is responsible for interference with homeostatic processes and cell membrane functions [41], protein and enzyme damage and precipitation [42], production of reactive oxygen species [43,44], and DNA disruption [45]. When Cu is available as NPs, these effects may be enhanced. We investigated here, in a systematic approach, which formulation of Cu (ionic, nano, or bulk) is the most effective against Cu-tolerant basidiomycetes. We discriminated between the effects caused by the particles themselves and those caused by their dissolution into Cu2+ ions using TTM, a chelator for Cu2+ ions.
Our results showed that T. versicolor 159 and R. placenta 45 were the two strains least influenced by the MCA formulations. We mainly attributed this behavior to TBA-tolerance mechanisms for the white rot fungus T. versicolor 159 [46], and to Cu-tolerance mechanisms for the brown rot fungus R. placenta 45 [47]. The main mode of action of TBA consists in fungal cell membrane disruption by inhibition of ergosterol formation [48], whereas Cu exerts its toxic effects on fungal cells by disrupting different basic metabolic processes. Therefore, R. placenta 45 was selected for the subsequent studies, as it would provide an indication of possible nano effects exerted by MCA on highly Cu-tolerant fungi that would reduce the Cu threshold level and would result in effective protection of wood at a lower Cu concentration than the Cu2+ ion or bulk counterpart.
The liquid culture study confirmed the suitability of TTM to discriminate between Cu2+ ionic and particle effects. In TTM-only-amended cultures, biomass and oxalic acid production were lower than in the control cultures, indicating that TTM bound essential Cu2+. Therefore, the study shows that TTM can be effectively employed for studies on Cu-based NPs, for instance in the field of nanotoxicology, where a similar approach has been developed for zinc oxide NPs by Bürki-Thurnherr et al. [49].
It was not possible to identify the presence of laccase in liquid cultures, although this was clearly detected on wood in the EN 113 study. In addition, the amount of laccase detected in untreated and MCA-treated wood was similar. These two findings indicate that laccase is probably not the principal mechanism for Cu detoxification in R. placenta. Furthermore, we have evidence that supports the fundamental role played by laccase in the Fenton reaction. The Fenton reaction is used by brown and white rot fungi to initialize the attack of wood, as it allows the depolymerization of polysaccharides and lignin by generating radicals [50], whereas in artificial media sugars are readily available, hence radicals are not required. Fungal biomass and oxalic acid production measurements provided a clear picture of the fungal response to Cu in its various forms and to TBA. The lower oxalic acid concentrations found in MCA, CuCO3·Cu(OH)2, and CuCO3·Cu(OH)2+TBA cultures are in good agreement with the findings of Green and Clausen [51], who revealed that the oxalic acid production of two R. placenta strains was reduced in Cu-treated wood. The higher oxalic acid concentrations in CuSO4 (Cu2+ ions)-amended cultures can be related to the increased biomass produced. In addition, the oxalic acid production was stimulated in Cu+TTM-amended cultures, confirming that free Cu can reduce oxalic acid production. For both fungal biomass and oxalic acid production, the major inhibiting agent was TBA; however, the effects were reduced in the presence of Cu, especially for MCA. Therefore, we hypothesize that Cu, here at sub-lethal concentrations, can stimulate growth and enzyme production of R. placenta, as indicated in former studies [52,53]. In addition, for MCA, other chemicals in the formulation may have influenced fungal growth by providing additional nutrients. Thus, for the concentrations used, there was no indication of Cu+TBA additive or synergistic effects against Cu-tolerant fungi. Although there is a lack of scientific literature on Cu and TBA additive, synergistic, or antagonistic effects, Sun et al. [54] showed a similar behavior for a range of moulds that can biotransform TBA [55,56]; hence the effectiveness of pure Cu was higher than that of Cu combined with TBA. In any case, we found no evidence for a specific nano effect against R. placenta, and the main active substance against R. placenta was clearly TBA. This means that the fungus is able to recognize Cu also as MCA NPs and can trigger the same Cu-tolerance mechanisms typically shown in the presence of bulk Cu or Cu ions. No additional protein expression patterns were evident in the SDS-PAGE analysis; therefore, oxalic acid and laccase were valid parameters for determining the fungal response to Cu.
Finally, to complete the study, we investigated fungal growth on treated wood, i.e., in a more natural setting. In this case, the EN 113 guidelines were applied, and mass losses of wood treated with bulk CuCO3·Cu(OH)2, TBA, and two MCA formulations differing in TBA content were compared. This test provides indications of the expected short-term effectiveness of the wood treatments.
The recorded mass losses were in good agreement with our findings on fungal growth in liquid cultures. Even in treated wood, TBA is largely responsible for the effectiveness of MCA, although high concentrations of Cu also affect the performance. In particular, for the formulations and wood decay fungi assessed, we propose a Cu threshold between 1.6% and 2% MCA (1.6% MCA < Cu threshold ≤ 2% MCA). The low effectiveness of CuCO3·Cu(OH)2 is mainly attributed to its poor penetration into the wood: the wood samples did not show any color change towards green/blue due to the presence of Cu, and unreacted CuCO3·Cu(OH)2 only appeared as fine dust unbound on the sample surfaces.
In conclusion, the NPs in the MCA formulations assessed did not provide additional protection against R. placenta, and the main effectiveness has to be attributed to TBA. Therefore, considering the antifungal properties, the efficacy of the MCA formulations tested is not better than that of conventional Cu azole formulations that do not employ nanotechnologies. MCA-treated wood will still be susceptible to biodegradation by Cu- or TBA-tolerant fungi. From a life cycle assessment perspective, MCA is less eco-efficient than Cu azole, due to the higher energy consumption during the milling process of MC [57]. However, this would also imply no additional risk for the microbial community in the vicinity of MCA-treated wood.
Further studies with other wood-destroying fungi and different MC formulations are required to provide a more comprehensive picture of MC NP effects on wood-destroying fungi. In addition, field studies are required to confirm our lab-scale findings and assess the long-term performance. In particular, TTM could not be used in the EN 113 test, due to the absence of a liquid environment that would allow chelation of Cu2+ ions. Therefore, tests with wood samples immersed in liquid cultures and TTM, in a setting similar to the one suggested by the EN 275 [58] guidelines, may provide further details on the fungal response to Cu2+ ions and particles. Future studies should focus on the fungal gene pathways that are involved in tolerance mechanisms against TBA and Cu.
"Materials Science"
] |
Rapid and Blind Watermarking Approach of the 3D Objects Using QR Code Images for Securing Copyright
Watermarking techniques use a wide range of digital media as a host cover to hide or embed an information message in such a way that it is invisible to a human observer. This study aims to develop an enhanced rapid and blind method for producing a watermarked 3D object using QR code images with high imperceptibility and transparency. The proposed method is based on the spatial domain, and it starts with converting the 3D object triangles from the three-dimensional Cartesian coordinate system to the two-dimensional coordinate domain using the corresponding transformation matrix. Then, it applies a direct modification to the third vertex point of each triangle. Each triangle's coordinates in the 3D object can be used to embed one pixel from the QR code image. In the extraction process, the QR code pixels can be successfully extracted without the need for the original image. The imperceptibility and transparency performances of the proposed watermarking algorithm were evaluated using Euclidean distance, Manhattan distance, cosine distance, and correlation distance values. The proposed method was tested under various filtering attacks, such as rotation, scaling, and translation. The proposed watermarking method improved the robustness and visibility of extracting the QR code image. The results reveal that the proposed watermarking method yields watermarked 3D objects with excellent execution time, imperceptibility, and robustness to common filtering attacks.
Introduction
Digital watermarking has been proven effective for protecting digital media and has recently gained considerable research interest. The watermarking process aims to embed secret data such that the resulting object is not greatly distorted. Also, the embedded watermark bits should resist malicious attacks to protect and/or verify the object's ownership. Although 3D objects are widely available and important, there are only a few existing watermarking techniques for them. Therefore, copyright protection of 3D objects demands further research towards developing protection techniques. The various watermarking methods for 3D objects can be classified according to the embedding domain, such as the spatial domain [1][2][3], the spectral domain [4,5], and the transform domain [6,7].
A mesh of a 3D object is a collection of polygonal facets that can be entirely described by two kinds of information: the geometry information describes the 3D positions (coordinates) of all its vertices, while the topology information provides the adjacency relations between the different elements [8,9]. Considering these two attributes, watermark information can be added by modifying either of them [10]. Hence, they are usually called embedding primitives. Upon embedding, the quantity of the primitive is modified, typically by a very small amount, so that the watermarked model can still be used normally in any of its intended applications. Moreover, the watermark bits can be any stream of data to identify the owner while the Quick Response code (QR code) image stores data efficiently and has strong error correction capability [11,12].
For document protection, in [13], Arkah et al. presented a color document watermarking technique based on embedding a QR code image into a document. The proposed method gets the document signature and information about the embedding and generates multiple QR codes to watermark the color document. The main contribution of the implemented prototype is to protect the color document against alteration and tampering. Moreover, in [14], Cardamone et al. proposed a nonblind watermarking method for documents for digital rights protection. The proposed method uses a QR code image that contains a signed ID of the user, and then it embeds the QR code image into the third level of the approximation coefficients from the discrete wavelet transform. On the other hand, in [15], Peng et al. generated a 3D QR code computed from its 2D counterpart. The 3D QR code has a special structure and is designed to be embedded in 3D shapes. The resultant QR code is a 3D-printable structure that can be placed on any curved surface using homogeneous material.
In [16], Rosales-Roldan et al. presented three color image watermarking techniques based on the singular value decomposition (SVD), discrete wavelet transform (DWT), and discrete cosine transform (DCT), which use the QR code image for authentication. The proposed methods apply an Arnold permutation to the QR code image to improve security. Thus, the proposed method uses the transformed luminance channel (Y) of the YCbCr color space to embed the QR code image using Quantization Index Modulation (QIM). In the same context, in [17], Patvardhan et al. presented a color image watermarking method that employs a combination of the discrete wavelet transform (DWT) and the singular value decomposition (SVD) for embedding the QR code image in the YCbCr color space. The advantages of the proposed method are resistance to common attacks and good imperceptibility. Furthermore, in [18], Ran presented a QR code watermarking technique based on embedding the QR code itself in the spatial domain. The proposed method uses MD5 encryption and logistic chaotic mapping algorithms to directly embed the watermark information into the original QR code image.
In [19], Chow et al. proposed a watermarking method for digital images using QR code images, based on the discrete wavelet transform (DWT) and the discrete cosine transform (DCT). The proposed method decomposes the cover image using the discrete wavelet transform and then applies the discrete cosine transform to each block of the cover image. The QR code image is transformed using the Arnold transform to increase security. Then, two pseudorandom number sequences are generated to embed the QR code information in the DCT blocks of the cover image. The main idea of the proposed watermarking method benefits from the QR code structure, whose inherent error correction improves the robustness of the watermarking against attacks. Based on the discrete wavelet transform, in [20], Abdul et al. proposed a blind watermarking technique for images using the QR code.
In this paper, we develop an enhanced rapid and blind method for producing a watermarked 3D object using QR code images with high imperceptibility and transparency. This paper is organized as follows. Section 2 discusses the materials and methods used for the 3D object watermarking method and related procedures. The proposed watermarking method is described in Section 3. Section 4 presents the analysis and discussion of the experimental results. Finally, conclusions are summarized in Section 5.
Materials and Methods
This study aims to watermark a 3D object using a QR code to identify the ownership rights of the original 3D object. The proposed method converts the triangle vertices of the 3D object from 3D coordinates to 2D coordinates by using the corresponding transformation matrix. Then, the watermarking step is applied using the 2D coordinates of the triangle vertices and the QR code image pixel. The objective of digital watermarking can be summarized as embedding information into the "cover media" in such a way that the watermarked "stego media" is perceptually indistinguishable from the original. Furthermore, a good watermarking algorithm should be robust to removal or modification attempts.
This is of great importance, especially if the watermark will be used to authenticate the source. Another critical issue with watermarking is how secure it is; in other words, how hard it is for an unauthorized user to decode the hidden information even if the watermarking technique is known.
The QR Code.
The Quick Response code, usually abbreviated to QR code, is a barcode that is readable by an imaging device such as a camera or smartphone. The QR code system was originally invented and designed in 1994 by the Japanese company Denso Wave, and it was registered as a trademark of the same company [21,22]. Simply put, the QR code is a two-dimensional matrix barcode consisting of black squares arranged in a square grid on a white background. Unlike one-dimensional barcodes that were designed to be scanned by a narrow beam of light, the QR code is scanned by a digital image sensor and then digitally analyzed by a programmed processor. The QR code includes three main distinct squares at the corners to set up image size normalization, orientation, and viewing angle. Moreover, the small dots throughout the QR code are converted to binary numbers and validated with the Reed-Solomon error-correcting algorithm [23], encoded as bytes of 8 bits. In practice, QR codes often contain data for a locator, identifier, or tracker that points to a standard URL for a website or application. A QR code uses four standardized encoding modes (numeric: max. 7,089 characters; alphanumeric: max. 4,296 characters; byte/binary: max. 2,953 characters; and kanji: max. 1,817 characters) to store data efficiently; extensions may also be used [23].
In this paper, the QR code generator is based on the ZXing (zebra crossing) library [24][25][26]. The QR code generator is software that encodes data into a QR code image using two pieces of format information: the error correction level and the mask pattern used for the symbol. The mask patterns are specified on a grid that is repeated as necessary to cover the whole symbol and are protected from errors with a Bose-Chaudhuri-Hocquenghem (BCH) code; a couple of complete copies are included in each QR pattern. ZXing is an open-source library project implemented in Java, with ports to other languages, which supports generating and decoding multiformat 1D/2D barcodes such as QR code and Data Matrix within images; all files can be imported on the fly from a Maven repository or can be downloaded via a command. Figure 1 shows the four generated QR codes of size 69×69 that are used as the embedded images within the 3D objects.
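The paper uses the Java ZXing library; purely as an illustration, a comparable QR code image can be produced in Python with the third-party qrcode package (a substitute tool and hypothetical payload, not what the authors used):

import qrcode

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_M,  # Reed-Solomon level M
    box_size=1,  # one pixel per module, giving a small matrix such as 69x69
    border=2,
)
qr.add_data("https://example.org/owner-id")  # hypothetical payload
qr.make(fit=True)
img = qr.make_image(fill_color="black", back_color="white")
img.save("qr_watermark.png")
print(qr.modules_count)  # number of modules per side (excluding the border)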
Comparison Methods.
In this study, the imperceptibility and transparency performance of the proposed watermarking method were evaluated between the original 3D object u and the watermarked object v by four different comparison methods. These include the Euclidean distance, Manhattan distance, cosine distance, and correlation distance. The Euclidean distance, or Euclidean metric, is the length of a segment connecting two points in the plane or in 3-dimensional Euclidean space [27]. Therefore, it is the most obvious way of representing the differences between points in two 3D objects. It is given as:

$d_E(u,v)=\sqrt{(u_x-v_x)^2+(u_y-v_y)^2+(u_z-v_z)^2},$

where $u_x$, $u_y$, and $u_z$ are the Cartesian coordinates of the original 3D object u, and $v_x$, $v_y$, and $v_z$ are the Cartesian coordinates of the watermarked object v.
The Manhattan distance, also known as the taxicab metric, is the sum of the absolute differences of the Cartesian coordinates between two points [28]. It is known as the taxicab distance because the shortest path a car could take between two intersections has the same length in taxicab geometry. The Manhattan distance is given as:

$d_M(u,v)=|u_x-v_x|+|u_y-v_y|+|u_z-v_z|,$

where $u_x$, $u_y$, and $u_z$ are the Cartesian coordinates of the original 3D object u, and $v_x$, $v_y$, and $v_z$ are the Cartesian coordinates of the watermarked object v.
Mathematically, the cosine distance is a metric based on the cosine of the angle between the two 3D objects in an inner product space projected into a multidimensional space [29]. Thus, it is a judgment of orientation rather than magnitude and determines whether the two objects point in roughly the same direction. It is given as:

$d_C(u,v)=1-\dfrac{u_x v_x+u_y v_y+u_z v_z}{\sqrt{u_x^2+u_y^2+u_z^2}\;\sqrt{v_x^2+v_y^2+v_z^2}}.$

The correlation distance is a statistic that measures the dependence between two related 3D objects and is given as follows [30]:

$d_R(u,v)=1-\dfrac{(u-\bar{u})\cdot(v-\bar{v})}{\lVert u-\bar{u}\rVert_2\,\lVert v-\bar{v}\rVert_2},$

where $\bar{u}$ and $\bar{v}$ are the means of the coordinates, and $u_x$, $u_y$, $u_z$ and $v_x$, $v_y$, $v_z$ are the Cartesian coordinates of the original 3D object u and the watermarked object v, respectively.
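All four metrics have standard implementations; a short sketch using SciPy (an illustration with made-up vertices, not the authors' evaluation code):

import numpy as np
from scipy.spatial.distance import euclidean, cityblock, cosine, correlation

u = np.array([0.12, 0.50, 0.33])  # one vertex of the original object
v = np.array([0.12, 0.50, 0.34])  # the matching watermarked vertex

print(euclidean(u, v))    # straight-line distance
print(cityblock(u, v))    # Manhattan (taxicab) distance
print(cosine(u, v))       # 1 - cosine similarity (orientation only)
print(correlation(u, v))  # 1 - Pearson correlation of the coordinates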
Converting 3D Coordinates to 2D Coordinates.

Let us consider the basic representation of triangle vertices A, B, and C in the 3D object coordinate system. There is a plane P defined by three points A(xa, ya, za), B(xb, yb, zb), and C(xc, yc, zc) in the three-dimensional Cartesian coordinate system. Thus, to transform the 3D coordinates to 2D coordinates and later restore them using the transformation matrix, the first step is to set A as the origin point of the coordinate system. The next step is to produce a new vector called localz that is perpendicular to both AB and AC using the cross product. Then, localx is calculated as the line segment that begins at the origin and ends at B, and localy is the cross product between localz and localx. Finally, Figure 2 shows a piece of MATLAB source code that produces the transformation matrix.
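Since the paper's Figure 2 (the MATLAB listing) is not reproduced here, the following Python sketch re-expresses the described construction under our reading of the text; variable names follow the description:

import numpy as np

def local_frame(A, B, C):
    # Rows of the returned matrix are the local x, y, z unit vectors.
    AB, AC = B - A, C - A
    localz = np.cross(AB, AC)          # perpendicular to both AB and AC
    localx = AB                        # x axis along the segment AB
    localy = np.cross(localz, localx)  # completes a right-handed frame
    M = np.array([localx, localy, localz], dtype=float)
    return M / np.linalg.norm(M, axis=1, keepdims=True)

A, B, C = (np.array(p, dtype=float) for p in ([0, 0, 0], [2, 0, 0], [1, 1, 1]))
T = local_frame(A, B, C)
print(T @ (C - A))  # C in local coordinates; the z component is ~0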
Based on the above, the main contributions of this paper are as follows: (1) we introduce a 3D object watermarking method that hides a QR code image in the 3D object vertices; (2) we propose a blind extraction based on reversing the steps of the embedding process to recover the QR code image; (3) we provide evidence that the proposed watermarking method ensures minimal shape distortion across different 3D objects; (4) we present comprehensive experimentation examining the performance of our method and comparing it with other methods.
The Proposed Method
Assume that the 3D object information is stored in an STL file. This format describes only the surface geometry of a three-dimensional object, without any representation of color, texture, or other common model attributes. The mathematical representation of the 3D object vertices is defined as Obj ⊆ R^3. In this study, Figure 3 illustrates a general overview of the proposed method for watermarking the 3D object using a QR code image. Each set of three vertex coordinates in Obj is used to embed one pixel value from the QR code image. The proposed method starts with converting the Obj triangles from the three-dimensional Cartesian coordinate system to the two-dimensional coordinate domain using the corresponding transformation matrix, as mentioned in Section 2. Then, the watermarking process is applied using the 2D coordinates of the vertices, the QR code image, and a secret key. The watermarked 3D object is constructed by the inverse transformation of the modified Obj triangles back to 3D coordinates.
3.1. The Watermarking Procedure. In this step, the watermarking process mainly focuses on embedding the QR code image into the 3D object. The proposed method applies a direct modification to the third vertex point of the Obj triangle in the 2D coordinates. Hence, let us assume that the three vertices of the current triangle in the 2D coordinates are A(0, 0), B(x, 0), and C(x, y). There is a point D(x, 0) located on AB, which is the projection of the point C on AB.
In this paper, the proposed embedding method calculates the point D'(x + Δ, 0) from the current QR code image pixel, where distAB, distAD, and distAD′ are the Euclidean distances between A and B, A and D, and A and D′, respectively, and QRcode refers to the current pixel of the QR code image. An additional parameter β is used for the embedding, which indicates the number of intervals into which the distance AB is divided. For security, the embedding process uses a secret key to generate random permutation numbers, such as in [31,32], which identify the current index of the Obj vertices. Moreover, to retrieve the QR code image correctly and avoid the overflow problem, a preprocessing step is applied to the QR code image using a small integer value α. Finally, the watermarked 3D object Obj' is produced by restoring the 3D coordinates from the modified two-dimensional coordinate domain using the corresponding transformation matrix. The detailed steps of the watermarking process are listed in Algorithm 1; a sketch of the embedding and extraction idea is given after the extraction procedure below.
The Extraction Procedure.
In the extraction process, the steps carried out in the watermarking process are generally reversed to retrieve the QR code image using the secret key. Therefore, the extraction process starts with converting the 3D coordinates of the watermarked 3D object Obj' to 2D coordinates using the transformation matrix. Then, the point D'(x', 0) on AB, which is the perpendicular projection of C', is calculated. In a blind manner and using β, the QR code pixel is computed from the distances AB and AD′. Notice that the secret key is required to identify the index at which the current QR code pixel is located. The detailed steps of the extraction procedure are given in Algorithm 2.
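Since the embedding and extraction equations could not be recovered from the source, the following sketch encodes only our reading of the description: AB is divided into β intervals and D is shifted so that the fractional position of D′ inside its interval carries the α-adjusted pixel value. It is an interpretation, not the authors' algorithm:

def embed_pixel(dist_ab, dist_ad, pixel, alpha=5, beta=500):
    # Interpretation, not the published equation: keep the interval D
    # occupies and place D' inside it at an offset carrying the pixel value.
    value = pixel + alpha            # preprocessing with the small integer alpha
    cell = dist_ab / beta            # width of one of the beta intervals on AB
    base = int(dist_ad / cell)       # index of the interval containing D
    return base * cell + (value / beta) * cell  # new distance dist(A, D')

def extract_pixel(dist_ab, dist_ad_prime, alpha=5, beta=500):
    # Recover the pixel from the fractional position of D' in its interval.
    cell = dist_ab / beta
    frac = (dist_ad_prime % cell) / cell
    return round(frac * beta) - alpha

d_prime = embed_pixel(dist_ab=10.0, dist_ad=4.203, pixel=1)
print(extract_pixel(10.0, d_prime))  # -> 1, without the original object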
Results and Discussion
This section presents the performance and analysis results of the proposed watermarking and extraction algorithms using the Egg [33], Bunny [34], Horse [34], and Cat Figurine [35] standard 3D objects. Table 1 presents a detailed description of the number of vertices and maximum capacity in bytes for each 3D object, along with the image size in pixels of the corresponding QR code and its decoded text in bytes, as mentioned in Section 2.
Time Performance Results of the Proposed Algorithms.
The proposed algorithms were implemented on an Intel(R) Core(TM) i7-4700MQ CPU (2.40 GHz) with 8 GB of RAM, using MATLAB R2017b (64-bit). The parameter α and the secret key for the QR code image adjustment and the random permutation number generator were set to 5 and 1987, respectively. Figure 4 records the execution times, in seconds, of the proposed watermarking and extraction processes with β values ranging from 100 to 1000 for each 3D object. Clearly, the extraction execution time is less than the watermarking execution time for the same 3D object.
Imperceptibility and Transparency Performance Results.
The imperceptibility and transparency performances of the proposed watermarking algorithm were evaluated using the Euclidean distance, Manhattan distance, cosine distance, and correlation distance values, whose details were explained in Section 2. Figure 5 shows the obtained comparison results of the proposed method between the original 3D object and the watermarked 3D object using values of β between 100 and 1000. The results show that higher values of β offer a better visual quality of the watermarked 3D object.

Algorithm 1 (fragment). Input: 3D object (Obj), QR code image, Secret Key, α and β. Output: the watermarked 3D object (Obj'). (1) Read the 3D object Cartesian coordinate values ⟶ Obj. (2) Read the QR code image ⟶ QRcode. (3) Preprocess the QRcode pixels using the α adjustment. (4) Generate random permutation numbers using the Secret Key ⟶ Index.

Additionally, Figure 6 illustrates the corresponding resultant values of the Structural Similarity (SSIM) index of the extracted QR code image, a perceptual metric that quantifies image quality degradation as a perceived change in structural information [36]:

$\mathrm{SSIM}(x,y)=\dfrac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)},$

where $\mu_x$ and $\mu_y$ are the local means, $\sigma_x^2$ and $\sigma_y^2$ are the variances of x and y, $\sigma_{xy}$ is the cross-covariance for images x and y, and $c_1$ and $c_2$ are stabilizing constants. Figure 7 shows real samples of extracted QR code images from the 3D models using various values of β, where the extracted QR code images were decoded using the ZXing library and the online tool Scan QR code and barcode from IMGonline.com.ua [37].
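As an illustration of the SSIM evaluation (using scikit-image, a tool choice of ours; the paper reports only the resulting values):

import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
original = rng.integers(0, 2, size=(69, 69)).astype(float)  # stand-in QR matrix
extracted = original.copy()
extracted[0, 0] = 1 - extracted[0, 0]                       # one flipped pixel

print(ssim(original, extracted, data_range=1.0))  # 1.0 = perfect recovery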
Robustness Performance Results.
In this subsection, the robustness of the proposed watermarking method is evaluated against common 3D object filtering operations such as rotation, scaling, and translation. To test the robustness, the Egg and Bunny watermarked 3D objects with β = 500 were attacked using the open-source system MeshLab (v2016.12) as follows: (a) rotation (with rotation angle = 90°, 180°, and 270°); (b) scaling (with uniform scaling by 2, 3, and 4); (c) translation (with XYZ translation by -1, 0.5, and 1). Table 2 presents the SSIM results of the extracted QR code images after these attacks on the watermarked 3D objects. The extracted QR code image qualities are partially degraded after the attacks; however, they remain recognizable and can be decoded using the ZXing library and the online tool Scan QR code and barcode from IMGonline.com.ua. Thus, the experimental results prove that the proposed watermarking method maintains almost perfect retrieval of the QR code image and is robust against these attacks.
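The attacks were applied in MeshLab; for illustration, the same rigid and affine transforms can be applied to an N×3 vertex array as follows (a sketch with random vertices, not the test harness used in the paper):

import numpy as np

def rotate_z(V, deg):
    # Rotate an N x 3 vertex array about the z axis by `deg` degrees.
    t = np.deg2rad(deg)
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return V @ R.T

vertices = np.random.default_rng(1).random((1000, 3))
attacked = rotate_z(vertices, 90)                 # rotation attack
attacked = attacked * 2.0                         # uniform scaling by 2
attacked = attacked + np.array([-1.0, 0.5, 1.0])  # XYZ translation
print(attacked.shape)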
Comparison with Related Techniques.
The main characteristics of the proposed method are compared with other existing methods to confirm its validity and efficiency. The comparative study verifies the cover media used, the watermark sequence, the embedding space, the domain, and the blindness of the extraction process for the proposed method and the other methods. Table 3 shows a comparison of the recorded details of the related methods. In [13,14,16,17,19], the presented methods were based on embedding the QR code into images in various domains. On the other hand, in [1,3], the presented methods were based on watermarking the 3D object using a different watermark sequence. Hence, in this paper, the proposed 3D object watermarking method achieves the advantageous characteristic of using the QR code as the embedded sequence in a 3D object.
Conclusions
This paper proposes a rapid watermarking method that embeds a QR code image in a 3D object based on the spatial domain.
The proposed method starts with converting the 3D object triangles from the three-dimensional Cartesian coordinate system to the two-dimensional coordinate domain using the corresponding transformation matrix, and then applies a direct modification to the third vertex point of each triangle. Each set of three vertex coordinates in the 3D object can be used to embed one pixel from the QR code image using the proposed watermarking algorithm. The extraction algorithm is totally blind, based on the secret key and the reverse steps of the embedding process to recover the QR code image. The execution time of the proposed method to embed 225 bytes is about 0.69 seconds, while the extraction process takes 0.52 seconds for the same watermark bytes. The imperceptibility and transparency performances of the proposed watermarking algorithm were evaluated using Euclidean distance, Manhattan distance, cosine distance, and correlation distance values. The results show that higher values of the division parameter β offer a better visual quality of the watermarked 3D object. The proposed method was tested under various filtering attacks, such as rotation, scaling, and translation, and it improved the robustness and visibility of extracting the QR code image.
Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest to report regarding the present study.

Table 3 (fragment). Method | Cover media | Watermark | Embedding space | Domain | Blind:
Chow et al. [19] | Grayscale image | QR code | Grayscale | DWT-DCT | Yes
Rosales-Roldan et al. [16] | Color image | QR code | YCbCr | SVD-DWT-DCT | Yes
Patvardhan et al. [17] | Color image | QR code | YCbCr | SVD-DWT | No
Arkah et al. [13] | Color document | QR code | RGB, gray | Spatial domain | Yes
Cardamone et al. [14] | Color document | QR code | RGB, gray | DWT | No
Ran [18] | QR code image | QR code | Binary image | Spatial domain | Yes
Jiang et al. [3] | 3D
"Computer Science"
] |
Sensitive cells extend antibiotic-driven containment of drug-resistant bacterial populations
Treatment strategies for infectious disease often aim to rapidly clear the pathogen population in hopes of minimizing the potential for antibiotic resistance. However, a number of recent studies highlight the potential of alternative strategies that attempt to inhibit the growth of resistant pathogens by maintaining a competing population of drug-sensitive cells. Unfortunately, to date there is little direct experimental evidence that drug sensitive cells can be leveraged to enhance antibiotic containment strategies. In this work, we combine in vitro experiments in computer-controlled bioreactors with simple mathematical models to show that drug-sensitive cells can enhance our ability to control bacterial populations with antibiotics. To do so, we measured the “escape time” required for drug-resistant E. coli populations to eclipse a threshold density maintained by adaptive antibiotic dosing. While populations containing only resistant cells rapidly escape containment, we found that matched populations with sensitive cells added could be contained for significantly longer. The increase in escape time occurs only when the threshold density–the acceptable bacterial burden–is sufficiently high, an effect that mathematical models attribute to increased competition. The results provide direct experimental evidence linking the presence of sensitive cells to improved control of microbial populations.
Introduction
Our ability to successfully treat disease is often determined by our capacity to manage drug resistance [1][2][3][4][5][6]. To minimize the risk of resistance evolution, treatment is often aimed at rapidly reducing -and hopefully clearing -the pathogen population [7][8][9][10][11][12][13][14][15][16][17]. This principle dominates our approach to both infectious disease and cancer, where treatment is often aimed at achieving rapid and dramatic reductions in tumor burden [18][19][20][21][22][23][24][25][26][27][28][29]. Unfortunately, the increasing number of treatment failures associated with drug resistance suggests that this aggressive approach may not be optimal in all cases [1,[4][5][6]].

Figure caption: Containment strategies may leverage competition to extend time below the failure threshold. A. Aggressive treatment uses high drug concentrations (lightning flashes), which eliminates sensitive cells (blue) but may fail when resistant cells (red) emerge and the population exceeds the failure threshold ("acceptable burden", light blue circle). B. Containment strategies attempt to maintain the population just below the failure threshold by using lower drug concentrations, leveraging competition between sensitive (blue) and emergent resistant (red) cells to potentially prolong time to failure. C. Schematic of potential feedback between growth processes in mixed populations. Drug (lightning flash) inhibits sensitive cells (blue), which in turn inhibit resistant cells (red) through competition but may also contribute to the resistant population via mutation.
Maintaining a sensitive population during treatment comes with both a cost and a benefit [49]. Drug sensitive cells are costly because they are a source for de novo resistance, yet they may also be beneficial because they can competitively suppress growth of the resistant population. In theory, the benefits of competition dominate under some conditions-for example, when sufficiently high pathogen densities can be tolerated. In these cases, treatments designed to maintain a sensitive population should outperform aggressive therapies. Although there is increasing interest in the idea of using sensitive cells to manage resistance, experimental support for this idea remains scarce and is often indirect -involving a comparison of different treatment strategies as opposed to comparing the presence versus absence of sensitive cells [35, 43-48, 50, 61-63].
In this work, we combine in vitro experiments in computer-controlled bioreactors with simple mathematical models to show that drug-sensitive cells can enhance our ability to control bacterial populations with antibiotics. Specifically, we measured the "escape time" required for different E. coli populations to eclipse a threshold density when exposed to adaptive drug dosing designed to maintain a constant cell density using a minimal amount of drug. Surprisingly, we found that adding sensitive cells led to longer escape times. The increased escape time in mixed populations (resistant populations with sensitive cells added) occurs only when cells can be maintained at a sufficiently high total density-that is, when the "acceptable bacterial burden" (here called Pmax) is large enough to allow for competition between cells. The results provide, to our knowledge, the first direct experimental evidence linking the presence of sensitive cells to improved control of microbial populations. The findings are particularly striking because they occur in well-mixed populations with a continual renewal of resources-conditions not typically associated with strong competition-indicating that similar control schemes may be broadly applicable.

Figure 2 caption: Resistant cells exhibit increased resistance to doxycycline and a small fitness cost. Left panels: per capita growth rate in bioreactors for ancestral (sensitive, blue) and resistant (red) populations exposed to increasing concentrations of doxycycline (top to bottom in each panel). Real-time per capita growth rate (light blue or red curves) is estimated from the flow rates required to maintain constant cell density at each drug concentration (Methods). Mean growth rate (thick solid lines) is estimated between 200-300 minutes post drug addition (shaded regions), when the system has reached steady state. Doxycycline concentrations are 10, 30, 50, and 80 ng/mL (top panel, top to bottom) and 50, 150, 300, and 500 ng/mL (bottom panel, top to bottom). Right panel: dose-response curves for sensitive (blue) and resistant (red) populations. Filled circles correspond to curves shown in the left panels, with error bars corresponding to ± one standard deviation over the measured interval. Solid lines: fit to the Hill-like dose-response function $r = r_0\left(1+(D/h)^k\right)^{-1}$, with $r$ the growth rate, $D$ the drug concentration, $r_0$ the growth rate in the absence of drug, $h$ the half-maximal inhibitory concentration (IC50), and $k$ the Hill coefficient. Half-maximal inhibitory concentrations are estimated to be h = 49 ng/mL (sensitive cells) and h = 210 ng/mL (resistant cells). Resistant cells also exhibit a fitness cost of approximately 10% in the absence of drug.
Results
Our primary goal is to investigate whether drug-sensitive populations of E. coli can suppress the growth of drug-resistant E. coli in the presence of antibiotics. To do so, we grew bacterial populations in well mixed bioreactors where environmental conditions, including drug concentration and nutrient levels, can be modulated using a series of computer-controlled peristaltic pumps. Population size is measured using light scattering (optical density, OD), and drug concentration can be adjusted in real time in response to population dynamics or predetermined protocols ( Figure S1). As a model system, we chose E. coli strains REL606 and REL607, which are well-characterized ancestral strains used in the celebrated long-term evolution experiment in E. coli [64]. The strains differ by a single point mutation in araA, which serves as a neutral marker for competition experiments; REL606 (REL607) appears red (pink) when grown on tetrazolium arabinose (TA) plates. In the absence of environmental input (i.e. no influx or outflow), these E. coli exhibit standard logistic growth in the bioreactors ( Figure S2), suggesting that competition does occur at high cell densities. Despite this implicit evidence of competition within a single population, it is not clear whether competition significantly impacts resistance dynamics in mixed populations or in the presence of antibiotics.
To answer this question, we first characterized the response of both drug-sensitive and drug-resistant E. coli isolates to doxycycline, a frequently used protein synthesis inhibitor. To isolate a doxycycline-resistant mutant, we exposed populations of the REL606 strain to increasing concentrations of doxycycline over several days using standard laboratory evolution with daily dilutions and isolated a single colony ("resistant mutant") from the resulting population (Methods). To quantify the responses of the drug-sensitive (REL607) and drug-resistant (REL606-derived mutant) cells to doxycycline, we measured real-time per capita growth rates for isogenic populations of each strain exposed to different concentrations of drug ( Figure 2). Briefly, growth rate was estimated using the influx rate of media required to maintain populations at a constant density (Methods). The resistant isolate exhibits both increased resistance to doxycycline (increased half-maximal inhibitory concentration) as well as a 10% fitness cost in the absence of drug ( Figure 2). We note that in the experiments that follow, drug concentrations are sufficiently high that resistant cells always have a selective advantage over sensitive cells, despite this fitness cost.
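The growth-rate estimate itself follows from a mass-balance argument: when the controller holds OD constant, the per capita growth rate must equal the dilution rate F/V. A minimal sketch (the 17 mL culture volume is taken from the Methods; the flow values are hypothetical):

```python
import numpy as np

V_ML = 17.0  # culture volume from the Methods (mL)

def growth_rate_per_hr(flow_ml_per_min):
    """At constant density, growth exactly balances dilution,
    so the per capita growth rate is r = F/V."""
    return 60.0 * np.asarray(flow_ml_per_min) / V_ML

# e.g., holding OD constant with 0.17 mL/min influx implies r = 0.6/hr
print(growth_rate_per_hr([0.17, 0.10, 0.05]))
```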
To test our primary hypothesis-that the presence of sensitive cells can enhance the efficacy of antibiotic control strategies-we designed a simple experiment that directly compares identical "treatment" regimens in three different populations: one seeded with a large number of sensitive cells, one seeded with only a small number of resistant cells, and one mixed population combining a large sensitive population and small resistant population ( Figure 3A). Because high concentrations of drug are expected to completely inhibit growth of sensitive cells and therefore eliminate any potential competition, we designed an adaptive drug dosing protocol intended to maintain the mixed population at a fixed density (P max ) using minimal drug. The adaptive protocol uses simple feedback control to adjust the drug concentration in real time in response to changes in population density ( Figure 3A, Methods). Because drug is restricted to a finite range (0-125 ng/mL), populations containing resistant cells cannot be contained indefinitely and will eventually eclipse the target density (P max ). The time required for this crossover is defined as the escape time, and the goal of the experiment is to compare escape times-which correspond, intuitively, to times of treatment failure-for different populations exposed to the same drug dosing.
Specifically, we applied the adaptive dosing protocol to a mixed population of sensitive (90%) and resistant (10%) cells with an initial density just below the threshold P max. In parallel, we also applied an identical drug dosing protocol to matched populations of resistant-only and sensitive-only cells. The initial density of the resistant-only (sensitive-only) population was set to 0.1 P max (0.9 P max) to match the density of the resistant (sensitive) sub-population in the mixed population. We stress that while the temporal dynamics of the mixed population-but not the other populations-completely determine the dosing protocol, all three populations receive identical drug dosing and therefore experience identical drug concentrations over time.
This experimental design tests the effect of sensitive cells by comparing escape time in two extreme scenarios: 1) in the absence of sensitive cells, and 2) in the presence of the largest possible sensitive population (subject to the threshold constraint). In the absence of competition or other intercellular interactions, the dynamics of the mixed population should be a simple sum of the dynamics in the two single-species populations. As a result, escape times for both the resistant-only (scenario 1) and mixed (scenario 2) populations should be approximately equal. Intuitively, the drug is expected to inhibit the sensitive cells but have minimal effect on resistant cells, which determine the escape time. On the other hand, if competition suppresses the growth of resistant cells, one would expect the escape time of the mixed population to exceed that of the resistant-only population.
To quantitatively guide our experiments and refine this intuition, we developed a simple mathematical model for population growth in the bioreactors in the presence of an adaptive therapy (Methods). The model implicitly incorporates competition via a logistic growth term, similar to the classic Lotka-Volterra model. The model parameters are fully determined by independent experiments, such as those in Figure 2, that characterize the response of individual populations (sensitive-only or resistant-only) to fixed drug concentrations (see Table 1 and Methods for a detailed description of the model). As intuition suggests, the model predicts that escape times for the mixed population will be extended relative to those for the resistant-only population only when the threshold density-the acceptable burden, P max-is sufficiently large.
To test these predictions, we first performed the experiment at P max = 0.2 ( Figure 3B), a threshold density which the model predicts will lead to competitive inhibition. Note that this density lies in the range of exponential growth, below the stationary-phase limit of unperturbed populations ( Figure S2). To account for batch effects and day-to-day experimental fluctuations, we repeated the experiment multiple times across different days, using different media and drug preparations. Unsurprisingly, the experiments confirm that sensitive-only populations are significantly inhibited under this treatment protocol and never reach the containment threshold; in fact, the overall density decreases slowly over time due to a combination of strong drug inhibition and effluent flow ( Figure 3B, blue curves). By contrast, the resistant-only population grows steadily and eclipses the threshold in 6-9 hours ( Figure 3B, red curves). Remarkably, however, the mixed population (black curves) is contained below threshold-in almost all cases-for the entire length of the experiment, which spans more than 18 hours. At the end of the experiment, we plated representative examples of resistant-only and mixed populations ( Figure S3), which confirmed that the mixed vial was predominantly resistant at the end of the experiment. Matched drug-free controls indicate that containment in the mixed vial is due to drug, not artifacts from media inflow or outflow ( Figure S3). The experiments also show remarkable agreement with the model (with no adjustable parameters; compare left and right panels in Figure 3B).
If competition were driving the increased escape time, one would expect the effect to be reduced as the threshold density (P max ) is decreased. To test this hypothesis, we repeated the experiments at P max = 0.1 ( Figure 3C). As before, the sensitive-only population is strongly inhibited by the drug and decreases in size over time (blue). Also as before, the resistant-only population (red) escapes the containment threshold, typically between 5-8 hours (faster than in the high P max experiment due to the lower threshold). In contrast to the previous experiment, however, the mixed population also escapes the containment threshold, and furthermore, it does so on similar timescales as the resistant-only population. Again, the agreement between model and experiment is quite good, though the model does predict a slightly longer escape time in the mixed population. The small discrepancy between the model and the experiment suggests that competition may decrease even more rapidly as density is lowered than the model assumes.
To quantify these results, we calculated the time to escape for each experiment. We defined time to escape for a particular experiment as the first time at which the growth curve (OD) exceeded the threshold density P max by at least 0.025 OD units (note that the 0.025 was chosen to allow for noise fluctuations in the OD time series without triggering a threshold crossing event). For low values of acceptable burden (P max), the escape times for resistant-only and mixed populations are nearly identical (Figure 4, left). By contrast, at higher values of P max, the escape time is dramatically increased in the mixed population relative to the resistant-only population (Figure 4, right), even though both receive identical drug treatment and start with identically sized resistant populations.

Figure 4. Escape times are increased in the presence of sensitive cells only for sufficiently large threshold densities. Time to escape for populations maintained at low (P max = 0.1, left) and high (P max = 0.2, right) threshold densities ("acceptable burdens"). Small circles: escape times for individual experiments in mixed (black) or resistant-only (red) populations. Large circles: mean escape time across experiments, with error bars corresponding to ± one standard deviation. Time to escape is defined as the time at which the population exceeds the threshold OD of P max + 0.025, where the 0.025 is padding provided to account for experimental fluctuations. Time to escape is normalized by the total length of the experiment (mean length 22.5 hours). Note that in the high P max case (right), the mixed population (black) reached the threshold density during the course of the experiment in only one case, so the escape times are set to 1 in all other cases.
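For concreteness, the escape-time definition above can be written in a few lines of code; this is a sketch assuming an OD time series sampled on a common time grid, with names chosen for illustration.

```python
import numpy as np

def normalized_escape_time(t_hr, od, p_max, pad=0.025):
    """First time OD exceeds p_max + pad, normalized by the experiment
    length; returns 1.0 if the population never escapes, matching the
    convention used for the contained mixed populations."""
    above = np.nonzero(np.asarray(od) > p_max + pad)[0]
    t_esc = t_hr[above[0]] if above.size else t_hr[-1]
    return t_esc / t_hr[-1]

t = np.linspace(0.0, 22.5, 1000)   # hypothetical 22.5 hr run
od = 0.1 * np.exp(0.15 * t)        # hypothetical growth curve
print(normalized_escape_time(t, od, p_max=0.2))
```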
We note that previous theoretical work [49] indicates that sensitive cells may also come with a cost because they serve as a potential reservoir for de novo resistance. In fact, one can show that for our experimental system, the effect of mutations is expected to be negligible for biologically relevant mutation rates (see SI, Methods). As a result, we neglect mutation in the current model. Importantly, our experiments suggest that sensitive cells are beneficial at high values of P max and have little effect at low P max , consistent with the assumption that mutation-driven costs of sensitive cells in our system are negligible.
Discussion
In this work, we provide direct experimental evidence that the presence of drug-sensitive cells can lead to improved antibiotic-driven control of bacterial populations in vitro. Specifically, we show that adaptive antibiotic dosing strategies can contain mixed populations of sensitive and resistant cells below a threshold density for significantly longer than matched populations containing only resistant cells. The increase in escape time occurs only when the threshold density is sufficiently high that competition is significant. The findings are particularly remarkable given that experiments are performed in well mixed bioreactors with continuous resource renewal, and even the highest density thresholds occur in the exponential growth regime for unperturbed populations. The surprisingly strong effect of competition under these conditions suggests that similar approaches may yield even more dramatic results in natural environments, where spatial heterogeneity and limited diffusion may enhance competition [65][66][67].
Notably, our experiments do not uncover scenarios where sensitive cells may actually be detrimental and accelerate resistance emergence. Theory suggests that these scenarios do indeed exist [49], but because of the typical mutation rates observed in bacteria, they cannot be reliably produced with our experimental system (see SI for extended discussion).
It is important to keep in mind several technical limitations of our study. First, we measured population density using light scattering (OD), which is a widely used experimental surrogate for microbial population size but is sensitive to changes in cell shape [68]. Because we use protein synthesis inhibitors primarily at sub-MIC concentrations, we do not anticipate significant artifacts from this limitation, though it may pose challenges when trying to extend these results to drugs such as fluoroquinolones, which are known to induce filamentation [69,70]. In addition, in the absence of cell lysis, OD cannot distinguish between dead and living cells. However, our experiments include a slow background flow that adds fresh media and removes waste, leading to a clear distinction between non-growing and growing populations. Under these conditions, fully inhibited (or dead) populations would experience a decrease in OD over time, while populations maintained at a constant density are required to divide at an effective rate equal to this background refresh rate.
Most importantly, our results are based entirely on in-vitro experiments, which allow for precise environmental control and quantitative measurements but clearly lack important complexities of realistic in-vivo and clinical scenarios. Developing drug protocols for clinical use is an extremely challenging problem spanning multiple length and time scales. Our goal was not to design clinically realistic adaptive therapies, but instead to provide direct experimental evidence that sensitive cells can improve drug-driven control protocols in a tractable setting. The use of drug-sensitive cells to manage resistance will (and should) remain controversial, particularly in the absence of detailed in-vivo investigations. Containment-based strategies come with a number of potentially dangerous drawbacks arising from maintaining larger pathogen loads, including the possibility of increased resistance. At the same time, there are proposed benefits of less aggressive strategies, including fewer adverse effects for the patient. Our results are compelling because they provide empirical evidence that competitive suppression can enhance containment of resistant cells in-vitro, raising the question of whether similar competitive dynamics may play out in-vivo. We therefore hope they will motivate continued experimental, theoretical, and perhaps even clinical investigations.
Methods
Bacterial Strains, Media, and Growth Conditions
Experiments were performed with Escherichia coli strains REL606 and REL607 [64]. Resistant strains were isolated from lab-evolved populations of REL606 undergoing daily dilutions (200X) into fresh media with increasing doxycycline (Research Products International) concentrations for 3 days. A single resistant isolate was used for all experiments. Stock solutions were frozen at -80 °C in 30 percent glycerol and streaked onto fresh agar plates (Davis Minimal Media (Sigma) with 2000 µg/ml glucose) as needed. Overnight cultures of resistant and sensitive cells for each experiment were grown from single colonies and then incubated in sterile Davis Minimal Media with 1000 µg/ml glucose liquid media overnight at 30 °C while rotating at 240 rpm. All bioreactor experiments were performed in a temperature-controlled warm room at 30 °C.
Continuous Culture Bioreactors
Experiments were performed in custom-built, computer-controlled bioreactors as described in [71], which are based, in part, on similar designs from [72,73]. Briefly, constant volume bacterial cultures (17 mL) are grown in glass vials with customized Teflon tops that allow inflow and outflow of fluid via silicone tubing. Flow is managed by a series of computer-controlled peristaltic pumps-up to 6 per vial-which are connected to media and drug reservoirs and allow for precise control of various environmental conditions. Cell density is monitored by light scattering using infrared LED/Detector pairs on the side of each vial holder. Voltage readings are converted to optical density (OD) using a calibration curve based on separate readings with a table top OD reader. Up to 9 cultures can be grown simultaneously using a series of multi-position magnetic stirrers. The entire system is controlled by custom Matlab software.
Experimental Mixtures and Set up
Before the experiments began, vials were seeded with sensitive or resistant strains of E. coli and cells were allowed to grow to the desired density in the bioreactor vials. Cells were then mixed (to create the desired population compositions) and diluted as appropriate to achieve the desired starting densities. Each vial is connected to 1) a drug reservoir containing media and doxycycline (500 µg/ml), 2) a drug-free media reservoir that provides constant renewal of media, and 3) an effluent waste reservoir. Flow from reservoir 1 (the drug reservoir) is determined in real time according to a simple feedback algorithm intended to maintain cells at a constant target density with minimal drug. Flow to/from reservoirs 2 and 3 provides a slow renewal of media and nutrients while maintaining a constant culture volume in each vial.
Drug Dosing Protocols
To determine the appropriate antibiotic dosing strategy, the computer records the optical density in each vial every 3 seconds. Every 3 minutes, the computer computes: (i) the average optical density OD_avg in the mixed vial over the last 30 seconds and (ii) the current drug concentration in the vial. If OD_avg is greater than P max, the desired containment density, and the current drug concentration is less than d_max = 125 ng/mL, then drug and media will be added to the vial for 21 seconds at a flow rate of 1 mL/min. In a typical experiment, this control algorithm is applied to one of the mixed populations to determine, in real time, the drug dosing protocol (i.e. influx of drug solution over time). The exact same drug dosing protocol is then simultaneously applied to all experimental populations ( Figure 3). In parallel, an identical dosing protocol is applied to a series of control populations, but in these populations, the drug solution is replaced by drug-free media ( Figure S3).
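In pseudocode-like Python, the feedback rule reads as follows; read_od, current_drug_conc, and run_drug_pump are hypothetical stand-ins for the bioreactor's actual (Matlab) control interfaces.

```python
import time

P_MAX = 0.2          # containment threshold (OD units)
D_MAX = 125.0        # maximum allowed drug concentration (ng/mL)
SAMPLE_S = 3         # OD sampled every 3 seconds
DECIDE_EVERY = 60    # 60 samples x 3 s = one decision every 3 minutes

window, n = [], 0
while True:
    window.append(read_od())          # hypothetical OD sensor call
    window = window[-10:]             # last 10 samples = last 30 s
    n += 1
    if n % DECIDE_EVERY == 0:
        od_avg = sum(window) / len(window)
        if od_avg > P_MAX and current_drug_conc() < D_MAX:
            # add drug + media for 21 s at 1 mL/min, as in the Methods
            run_drug_pump(duration_s=21, rate_ml_per_min=1.0)
    time.sleep(SAMPLE_S)
```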
Mathematical Model
The mathematical model used in the simulations is

$$
\begin{aligned}
\dot S &= r_S\big(D(t-\tau_S)\big)\,S\left(1 - \frac{S+R}{C}\right) - \frac{F_N + F_D\,\chi_D}{V}\,S\\
\dot R &= r_R\big(D(t-\tau_R)\big)\,R\left(1 - \frac{S+R}{C}\right) - \frac{F_N + F_D\,\chi_D}{V}\,R\\
\dot R_{\mathrm{only}} &= r_R\big(D(t-\tau_R)\big)\,R_{\mathrm{only}}\left(1 - \frac{R_{\mathrm{only}}}{C}\right) - \frac{F_N + F_D\,\chi_D}{V}\,R_{\mathrm{only}}\\
\dot S_{\mathrm{only}} &= r_S\big(D(t-\tau_S)\big)\,S_{\mathrm{only}}\left(1 - \frac{S_{\mathrm{only}}}{C}\right) - \frac{F_N + F_D\,\chi_D}{V}\,S_{\mathrm{only}}\\
\dot D &= \frac{F_D\,\chi_D\,D_{\mathrm{in}} - (F_D\,\chi_D + F_N)\,D}{V}
\end{aligned}
\tag{1}
$$

where S and R are the drug-sensitive and drug-resistant densities in the mixed vial, R_only is the bacterial density in the vial that contains only drug-resistant bacteria, S_only is the bacterial density in the vial that contains only drug-sensitive bacteria, and D is the drug concentration in the vials. The initial conditions for the simulations (and experiments) are given in Table 2. The effect of drug on growth rate is modeled as a Hill function with parameters r_S, k_S, and h_S for the sensitive strain and parameters r_R, k_R, and h_R for the resistant strain. There is also a time delay associated with the effect of drug (denoted by τ_S for the sensitive strain and τ_R for the resistant strain). Competition in the model is captured by using a logistic growth term with carrying capacity C. It is assumed that the sensitive and resistant strains have similar carrying capacities. Finally, the bioreactor has a continual efflux to maintain constant volume. The rate of this outflow is the sum of the constant background nutrient flow F_N and any additional outflow required to compensate for the inflow of drug, which enters at a rate F_D χ_D. F_D is a constant rate and χ_D is an indicator function which is 1 when drug is being added to the vials and 0 when it is not. In the simulations, the decision of when to add drug is based on the same control algorithm that was used in the actual experiment (see Methods: Drug Dosing Protocols). Since Model (1) describes the rate of change of bacterial density, the total efflux (F_N + F_D χ_D) is divided by the volume of the vials V. The drug concentration in the vials is determined by the rate of drug flow into the vials (F_D χ_D D_in, where D_in is the concentration of drug in the reservoir) and the rate of efflux out of the vials (F_D χ_D + F_N). The values of D_in, V, F_D, and F_N were chosen to match the associated values in the experimental system; all other parameters in the model were fit using independent experimental data (see SI for details) and are given in Table 1.
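A forward-Euler integration of Model (1), including the drug delays via a stored concentration history, might look like the sketch below; the parameter values are illustrative placeholders rather than the fitted values of Table 1, and the dosing decision is simplified to the threshold rule.

```python
import numpy as np

# Placeholder parameters (per-minute rates), not the Table 1 fits
rS, kS, hS, tauS = 0.010, 1.5, 49.0, 30.0    # sensitive strain
rR, kR, hR, tauR = 0.009, 1.5, 210.0, 30.0   # resistant strain
C, V = 1.0, 17.0                             # carrying capacity (OD), volume (mL)
FN, FD, Din = 0.02, 1.0, 500.0               # flows (mL/min), reservoir drug conc.
Pmax, Dmax = 0.2, 125.0

def g(r0, k, h, D):
    return r0 / (1.0 + (D / h) ** k)         # Hill-inhibited growth rate

dt, T = 1.0, 18 * 60                         # 1-min steps, 18 hours
n = int(T / dt)
S, R, D = 0.9 * Pmax, 0.1 * Pmax, 0.0
Dh = np.zeros(n + 1)                         # drug history for the delays
for i in range(n):
    Dh[i] = D
    DS = Dh[max(0, i - int(tauS / dt))]      # drug "seen" by S (delayed)
    DR = Dh[max(0, i - int(tauR / dt))]
    chi = 1.0 if (S + R > Pmax and D < Dmax) else 0.0  # dosing decision
    out = (FN + FD * chi) / V                # total per-volume efflux
    dS = g(rS, kS, hS, DS) * S * (1 - (S + R) / C) - out * S
    dR = g(rR, kR, hR, DR) * R * (1 - (S + R) / C) - out * R
    dD = (FD * chi * Din - (FD * chi + FN) * D) / V
    S, R, D = S + dt * dS, R + dt * dR, D + dt * dD
print(f"S = {S:.3f}, R = {R:.3f}, D = {D:.1f} ng/mL after 18 hr")
```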
"Medicine",
"Computer Science"
] |
Shed Light in the DaRk LineagES of the Fungal Tree of Life—STRES
The polyphyletic group of black fungi within the Ascomycota (Arthoniomycetes, Dothideomycetes, and Eurotiomycetes) is ubiquitous in natural and anthropogenic habitats. Partly because of their dark, melanin-based pigmentation, black fungi are resistant to stresses including UV- and ionizing-radiation, heat and desiccation, toxic metals, and organic pollutants. Consequently, they are amongst the most stunning extremophiles and poly-extreme-tolerant organisms on Earth. Even though ca. 60 black fungal genomes have been sequenced to date, [mostly in the family Herpotrichiellaceae (Eurotiomycetes)], the class Dothideomycetes, which hosts the vast majority of extremophiles, has only been sparsely sampled. By sequencing up to 92 species that will become reference genomes, the “Shed light in The daRk lineagES of the fungal tree of life” (STRES) project will cover a broad collection of black fungal diversity spread throughout the Fungal Tree of Life. Interestingly, the STRES project will focus on mostly unsampled genera that display different ecologies and life-styles (e.g., ant- and lichen-associated fungi, rock-inhabiting fungi, etc.). With a resequencing strategy of 10- to 15-fold depth coverage of up to ~550 strains, numerous new reference genomes will be established. To identify metabolites and functional processes, these new genomic resources will be enriched with metabolomics analyses coupled with transcriptomics experiments on selected species under various stress conditions (salinity, dryness, UV radiation, oligotrophy). The data acquired will serve as a reference and foundation for establishing an encyclopedic database for fungal metagenomics as well as the biology, evolution, and ecology of the fungi in extreme environments.
Introduction
Fungi are a large group of eukaryotic organisms ranging from unicellular yeasts to multicellular filamentous forms. They have a global distribution due to their small size, their cryptic lifestyle in soil and decomposing matter, and their ability to form symbioses with algae, plants, and animals [1][2][3][4]. Fungi are found in every biome including polar, temperate, and tropical environments. Black fungi are an ecologically defined group of stress-tolerant specialists that share morphological similarity despite diverse phylogenetic placement. Black fungi form a polyphyletic morpho-ecological group within Ascomycota, Eurotiomycetes, and "Dothideomyceta" (a clade encompassing Arthoniomycetes and Dothideomycetes) [5]. They are often described with the terms black fungi, black yeasts (BY) and relatives, meristematic fungi, microcolonial fungi (MCF), and rock inhabiting fungi (RIF).
A few examples of their morphology are reported in Figure 1.
Black yeasts are among the most successful extremophiles and extreme-tolerant organisms on Earth; they are distributed globally in harsh environments that impede colonization by most life-forms. All black yeasts and meristematic fungi share a number of characters, such as yeast-like polar budding, deep melanization, meristematic growth [6], thick and even multi-layered cell walls, and exo-polysaccharide production, resulting in an extraordinary ability to tolerate chemical and physical stresses. These stresses include extreme pH, high and low temperature, heavy metals, radionuclides, desiccation, high concentrations of different kosmotropic and chaotropic salts [7], UV and ionizing radiation, alpha particles, and even real Space and simulated Mars conditions [8][9][10][11][12][13]. They also display a tremendous capacity to resurrect from dry conditions [14]. Constitutive melanization and meristematic growth (i.e., conversion towards isodiametric expansion) are infrequent in the fungal kingdom and are a specific response to stress, thus providing the ability to cope with and adapt to highly diverse stressing environments. The black yeasts are also known for their ability to survive in extreme habitats including saltpans [15], acidic and hydrocarbon-contaminated sites [16][17][18], exposed natural rocks [19] and stone monument surfaces [20], hot deserts [21], photocatalytic [22] and solar panel [23] surfaces, and very cold icy habitats [24][25][26][27][28][29][30][31].

These fungi can also colonize human environments like dishwashers, steam baths, or sauna facilities; some have been isolated from silicone seals in hospitals and from tap water [32][33][34], while other species are domatia-associated [35] (Figure 2). A few of them are involved in a broad range of diseases [36,37], while others, because of their ability to degrade pollutants, are good candidates for bioremediation [38].

To date, black fungi genome sequencing results are only a drop in the ocean: sequences are available for only ca. 60 strains, mainly in the family Herpotrichiellaceae (Eurotiomycetes). In contrast, the class Dothideomycetes, which hosts the vast majority of extremophilic black fungi, remains largely unsampled. As a result, our understanding of the evolution and adaptation strategies of this intriguing group of fungi remains limited. Studies on the genome evolution of these microorganisms, which colonize a diverse array of inhospitable ecological niches, may reveal important genetic factors that govern their success in the extremes and will provide insights into novel enzymes that keep metabolism active under conditions normally incompatible with life [39][40][41].
Black Fungi Profit from the Era of Genome Consortia
In 1996, the genome of Saccharomyces cerevisiae was published, marking the beginning of a new era in fungal biology [42]. Advances in high-throughput sequencing technology have progressed rapidly, leading to the sequencing of species that can be incorporated into genome-scale phylogenies, as evidenced by MycoCosm [43], with more than 1700 fungal genomes (http://mycocosm.jgi.doe.gov), making these data the starting point for an increasing number and variety of studies.
With this rapid development of DNA sequencing technology, the time is ripe for large-scale, collaborative genomic studies. An international research team in collaboration with the U.S. Department of Energy Joint Genome Institute has embarked on a five-year project to sequence 1000 fungal genomes from across the Fungal Tree of Life (FTOL). The 1000 Fungal Genomes (1KFG) project, which started in 2011, aimed to sequence representatives of approximately two genera from each of the roughly 656 recognized families of Fungi [44]; to date, more than 1500 reference genomes are available [4,45], yet several lineages still remain unexplored. In this era of genome consortia, the overall plan of the "Shed light in The daRk lineagES of the Fungal Tree Of Life" (STRES) project is to fill gaps in the branches of the FTOL where black yeasts are found, to better reveal the genomic traits and fungal metabolites that enable these microorganisms to inhabit and exploit the extremes.
The STRES project will cover the breadth of black fungal biodiversity along the FTOL by sequencing up to 92 strains as reference genomes, representing primarily unsampled genera from different ecologies and life-styles (e.g., ant- and lichen-associated fungi, rock-inhabiting fungi, etc.), as well as more than 500 additional strains of black yeasts. We also propose transcriptomics and metabolomics experiments on a selection of reference species to track transcripts and expressed genes under different stress conditions (i.e., salinity, dryness, UV radiation, and oligotrophy), to further discern their roles in nutrient cycling and interactions in the environment, and to investigate the role of melanin in utilizing radiation as an energy source. The project workflow is outlined in Figure 3. All strains proposed are currently preserved in private or public culture collections of the international consortium assembled for this project.

The data acquired will serve as a reference and foundation for establishing an encyclopedic database for fungal metagenomics, biology, evolution, and ecology and will further clarify how such fungi adapt and succeed under extreme conditions. These data will also inform on their possible applications in pollutant treatment, as well as possible preventive measures for material protection.
Available Genomic Data
The application of high-throughput sequencing technologies to elucidate the genetic bases of niche adaptation in black fungi started in 2011, when the first whole genome, that of Exophiala dermatitidis (Chaetothyriales, Eurotiomycetes, Ascomycota) [46], was sequenced as part of the Fungal Genome Initiative (http://www.broadinstitute.org/annotation/genome/Black_Yeasts/MultiHome.html). This work was followed by the sequencing of four Aureobasidium pullulans varieties [47].
Continued efforts generated genomes for ca. 50 additional black fungi, producing an avalanche of data for comparative genomics. We anticipate the genomes of strains proposed in this project will be relatively small and haploid, with GC content varying between 49% and 57% and a very low abundance of repetitive elements.
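For instance, checking whether a new assembly falls in that anticipated GC range takes only a few lines; this sketch assumes a plain FASTA file whose name is hypothetical.

```python
def gc_content(fasta_path):
    """Percent G+C over all A/C/G/T bases in a FASTA file."""
    gc = total = 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):
                continue  # skip sequence headers
            seq = line.strip().upper()
            gc += seq.count("G") + seq.count("C")
            total += sum(seq.count(b) for b in "ACGT")
    return 100.0 * gc / total if total else 0.0

print(f"GC content: {gc_content('assembly.fasta'):.1f}%")
```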
In 2013, Lenassi et al. [48] reported the genome of Hortaea werneckii (Dothideomycetes) as 51.6 Mb, larger than most phylogenetically related fungi and coding for almost twice the usual number of predicted genes (23k), due to a possible relatively recent whole-genome duplication or hybridization. Gene duplication events might have enabled the rapid evolution of proteins and consequently enhanced metabolic plasticity, increasing fitness during the colonization of hostile ecological niches. In 2014, the genome of an Antarctic endolithic black fungus, Cryomyces antarcticus, was released for the first time [49]. Several Antarctic cryptoendolithic black fungi (i.e., Friedmanniomyces endolithicus, F. simplex) have genomes of about 48 Mbp and a high frequency of gene duplications compared to other extreme-tolerant fungi [50,51]. Analyses of the transcriptome of Cladophialophora immunda (Chaetothyriales, Eurotiomycetes), a black fungus typically associated with hydrocarbon-polluted environments, revealed that exposure to toluene activated degradation genes, which likely protects the fungus [52]. Teixeira et al. [53] sequenced and annotated 23 Chaetothyriales genomes, reporting genome sizes varying from 25.81 Mb to 43.03 Mb and identifying a reduction of carbohydrate-degrading enzymes. Moreover, some genomes of domatia-associated species showed a relatively small size (ca. 20 Mbp) compared to other Chaetothyriales; it was speculated that, despite the reduction of several protein families, members of the clade might tolerate toxic compounds produced from exocrine glands of the ants as a defense against microbes [35].
Main Objectives
The STRES project has three overarching objectives: (I) Cover unsampled lineages and ecologies of black fungi.
During the 1st and 2nd years of the project, STRES aims to sequence and make available to the scientific community the whole genomes of 92 black fungal taxa. Fifty-two species in Dothideomycetes, one in Arthoniomycetes, and 39 in Eurotiomycetes have been selected as references, covering all the main phylogenetic lineages of black fungi. The majority of the selection represents hitherto unsampled groups. Other species will be included to improve on previously poor assembly resolution or because of their very distant phylogenetic relationships with the closest lineages (e.g., Coniosporium sp.). Several new taxa have been included and will be described during the project. The selected strains represent diverse ecologies and the breadth of phylogenetic lineages of black fungi for a comprehensive study of evolutionary processes and adaptations of these fungi which could not be undertaken by a single laboratory.
(II) Track transcripts and metabolites under different stress conditions.
Transcriptomics and metabolomics experiments will be performed on a selection of reference species, chosen as the best representatives of the main phylogenetic lineages and ecologies, to track transcripts and expressed genes under four different stress conditions (salinity, dryness, UV radiation, oligotrophy), to discern their roles in nutrient cycling and interactions in the environment, and to investigate the role of melanin in utilizing radiation. The selection includes F. endolithicus, an endemic species of the Antarctic desert and the most widespread, and C. antarcticus, a recurrent test organism for astrobiological experiments with high multi-stress resistance [9,10,12]. Additional representatives of different ecologies and phylogenies will be sampled among ant- and lichen-associated species, polluted environments, and highly oxidizing surfaces.
(III) Black fungal stress database.
During the 3rd year, a curated repository to provide access to data generated from STRES for comprehensive curated analyses will be developed. Genomics, transcriptomics, and metabolomics will be integrated to look for genes encoding stress response proteins with verified physiological functions and placed in a black fungal stress database. This deep genomic sampling of the diversity of these fungi through the whole genome and transcriptome sequencing will be an immense and valuable resource to understand the organization, regulation, and evolution of stress response systems on black fungi as the background of all major fungal phyla.
Sampling to Sequencing
Sampling has been designed and performed in consultation with all the members of the consortium and will leverage existing biological resources and expertise present in both internationally recognized and private culture collections available for the STRES project.
Selection of the 92 black fungal species as reference genomes has been developed in concert with existing large-scale genome studies to minimize redundancies, overarching most of the main unsampled phylogenetic lineages where black fungi are placed, resulting in a total of 52 Dothideomycetes, 1 Arthoniomycetes, and 39 Eurotiomycetes (Figure 4).
A genome will be sampled from a new lineage in Arthoniomycetes which is sister to Dothideomycetes and the largest taxonomic group of primarily lichenized fungi outside of Lecanoromycetes.
Furthermore, 550 strains (within ~95% nucleotide identity of reference genomes) will be re-sequenced to identify single-nucleotide polymorphisms (SNPs) and characterize intraspecific genomic variability related to specific stress adaptation, geography, and ecology. Most of the 550 strains will be selected from culture collections involved in the project, but additional taxa proposed by international specialists or scientists interested in joining the consortium may be evaluated by the consortium and JGI and eventually included in the project.
Methodologies
Here, the methodologies summarized in the workflow of Figure 3 are briefly described. We will extract DNA and RNA following community protocols for high purity (e.g., https://dx.doi.org/10.17504/protocols.io.rzkd74w). For the 92 standard-coverage genomes, we will provide high-quality DNA/RNA and proper nucleic acid quantification for Illumina sequencing. The short-insert library alone, at standard coverage, has been demonstrated to be more than sufficient for reference genomes, as shown by previous experience (e.g., the 1KFG project). We will use low-coverage Illumina resequencing for up to 550 additional strains within ~95% nucleotide identity of the reference genomes.
The STRES project will be able to address critical evolutionary and biological research questions by applying effective analysis methods.
(I) Description of particular genes as hallmarks for the whole group of black fungi.
• Phylogenomic profiling to give insights into the evolutionary history of uncovered clades throughout the FTOL (e.g., the origin of symbioses).
• Single-nucleotide polymorphism (SNP) calling to identify genomic regions contributing to local adaptation or even speciation.
• Detection of gene duplications or whole-genome duplications as events contributing to the ability to adapt to the extremes.
• Carbohydrate-active enzymes (CAZymes), assuming that predicted metabolic competences vary among different groups of black yeasts according to their phylogenetic affiliation and ecology.
• Hydrocarbon- and monoaromatic-active enzymes. Some black fungi, particularly in the order Chaetothyriales, are well known for their ability to degrade pollutants and hydrocarbons. Understanding the distribution and functionality of these genes will also inform us of their possible applications in bioremediation.
• Stress-tolerance enzymes. Genes involved in stress responses (e.g., UV and ionizing radiation, osmotic, and thermal stresses) will be characterized.
• Secondary metabolite biosynthetic pathway genes as potential contributors to local adaptation.
• Transcription factors (TFs) as drivers of adaptation and speciation.
(II) Transcripts and metabolites under different stress conditions.
Different stress conditions will be tested on a selection of reference species in a special climate chamber, the "Environment Emulation System" (http://eq-vibt.boku.ac.at/equipment/extreme-climatechamber/) (relative humidity up to 10%; oligotrophy; UV radiation and salinity stress), available at BOKU University (Austria). We aim to (i) identify potential common/different metabolic patterns across the different ecologies, and (ii) integrate metabolomic (both polar and non-polar metabolites) and transcriptomic data. We are particularly interested in the role of melanin, which enables black fungi to utilize radiation for growth [54]; the utilization of these unconventional sources of energy may play a significant role in conditions of continuous nutrient deficiency.
(III) Black fungi genome database and evolution of the stress response system of black fungi.
The Fungal Stress Response Database (http://internal.med.unideb.hu/fsrd2/?p=consortium) [55,56] and the Saccharomyces cerevisiae- and Aspergillus nidulans-based stress response databases [57] currently incorporate filamentous fungi and yeasts but do not specifically address stress-adapted species. These existing databases will be amended with the genomes, transcriptomes, and metabolomes that will be obtained in the frame of the STRES project.
Future Directions
The STRES project will generate an unprecedented, comprehensive data set of black fungal genomes, allowing us to nearly complete the phylogenomic tree for the dark lineages of the FTOL and, in concert with other projects, fungi in general. A broad research community of fungal systematists, ecologists, and geneticists will benefit from the generated data, i.e., the reference genomes and complementary information on fungal biology (metabolic pathways), ecology, and adaptation to stress conditions and extremes. Furthermore, our results will play a critical role in the fungal metagenomics community by providing a much-needed source of phylogenetically diverse reference genomes. The application of multi-omics approaches to extreme-tolerant and extremophilic fungi will strengthen an existing community of users and attract interest from industries, enabling new, exploitable biotechnological applications.
Additionally, the black fungi stress database generated from this project will integrate physiological, ecological, and geographic data with completely sequenced and annotated genomes and will represent, for the first time, a systematic, comprehensive, and detailed overview of the stress response of these microorganisms, aiming to decipher the remarkable stress tolerance of these fungi and to stimulate further research in the field of fungal biology. The data acquired will serve to elucidate the possible role of black fungi both in bioremediation and in developing material protection measures for stone monuments and solar panels, but most importantly to understand the balance and functionality of extreme ecosystems and to speculate on how life, as we know it, can adapt and evolve up to the edge of life.
"Biology",
"Environmental Science"
] |
Mining the role of angiopoietin‐like protein family in gastric cancer and seeking potential therapeutic targets by integrative bioinformatics analysis
Abstract Background The indistinctive effects of antiangiogenesis agents in gastric cancer (GC) can be attributed to multifaceted gene dysregulation associated with angiogenesis. Angiopoietin‐like (ANGPTL) proteins are secreted proteins regulating angiogenesis. They are also involved in inflammation and metabolism. Emerging evidence has revealed their various roles in carcinogenesis and metastasis development. However, the mRNA expression profiles, prognostic values, and biological functions of ANGPTL proteins in GC remain to be elucidated. Methods We compared the transcriptional expression levels of ANGPTL proteins between GC and normal gastric tissues using ONCOMINE and TCGA‐STAD. The prognostic values were evaluated by LinkedOmics and Kaplan–Meier Plotter, while the association of expression levels with clinicopathological features was generated through cBioPortal. We conducted the functional enrichment analysis with Metascape. Results The expression of ANGPTL1/3/6 was lower in GC tissues than in normal gastric tissues. High expression of ANGPTL1/2/4 was correlated with short overall survival and post‐progression survival in GC patients. Upregulated ANGPTL1/2 was correlated with higher histological grade, non‐intestinal Lauren classification, and advanced T stage, while ANGPTL4 exhibited high expression in early T stage, M1 stage, and non‐intestinal Lauren classification. Conclusions Integrative bioinformatics analysis suggests that ANGPTL1/2/4 may be potential therapeutic targets in GC patients. Among them, ANGPTL2 acts as a GC promoter, while ANGPTL1/4’s role in GC is still uncertain.
| INTRODUCTION
Gastric cancer (GC) is the fifth most common cancer in the world and the second leading cause of death from cancer. Considering the significant role of angiogenesis during cancer development, antiangiogenesis agents are expected to improve GC patients' prognosis significantly. However, so far, only the vascular endothelial growth factor receptor (VEGFR) antibody ramucirumab 1 and the VEGFR-targeted tyrosine kinase inhibitor (TKI) apatinib 2 have shown a slight survival advantage in the second- and third-line therapy of advanced GC, respectively. This phenomenon can be attributed to multifaceted gene dysregulation and complex molecular mechanisms associated with angiogenesis. Thus, identifying other novel biomarkers related to the angiogenesis process in GC may help improve the precision and efficacy of therapies.
Angiopoietin-like (ANGPTL) proteins are a family of secreted glycoproteins structurally similar to the angiopoietins. 3 To date, eight ANGPTL proteins have been discovered, namely ANGPTL1 to ANGPTL8. Both the angiopoietin and ANGPTL protein families are characterized by two domains: an N-terminal coiled-coil domain and a C-terminal fibrinogen-like domain. 3 Although the sequence is similar, ANGPTL proteins do not bind to the same receptors, namely Tie2 or Tie1, as angiopoietins do. Instead, some of them bind to other kinds of receptors, such as leukocyte immunoglobulin-like receptor B2 (LILRB2) 4 and integrin α5β1, 5 while some are orphan ligands. 4 Angiopoietin/Tie receptor signaling is involved in modulating angiogenesis and preserving vascular integrity and permeability. 6 Although none of the ANGPTL proteins binds to the angiopoietin receptors, most members still show effects on angiogenesis. 7 ANGPTL proteins also exhibit many other biological properties in lipid, glucose, and energy metabolism, inflammation, hematopoiesis, as well as cancer progression and metastasis. 3,7,8 ANGPTL proteins are widely expressed in many organs, such as skin, liver, breast, and the gastrointestinal (GI) tract. 3,8 Researchers have discovered that ANGPTL proteins' transcriptional expression affects the prognosis of patients in multiple types of cancer, including lung cancer, 9 colorectal cancer (CRC), 10 liver cancer, 11 etc. However, the influences can be quite different across different types of ANGPTL proteins and cancers. As far as we know, only ANGPTL2 12,13 and ANGPTL4 14,15 have been discussed previously in GC patients. Therefore, we consider it necessary to thoroughly investigate ANGPTL proteins' transcriptional expression levels across different clinicopathological situations and their relationship with the prognosis of GC patients. In the present study, we implemented a deep bioinformatics analysis of ANGPTL proteins' mRNA expression data together with available clinical data of GC patients based on several large public databases in order to illustrate their prognostic and potential therapeutic values in the treatment of GC.
| ONCOMINE data-mining analysis
ONCOMINE (www.oncomine.org), an online web-based cancer database of RNA and DNA sequences, was used to facilitate data mining of the transcriptional expression levels of ANGPTL proteins in 20 types of cancer. 16 The transcriptional expression levels of ANGPTL proteins in GC samples were compared with those in normal gastric samples using Student's t-test. The thresholds for statistical significance were set at P < .05 and fold change (FC) > 2.
| TCGA-STAD dataset
The Cancer Genome Atlas (TCGA) (https://cancergenome.nih.gov) contains gene expression data obtained by sequencing, along with accurate clinicopathological data for many cancers. The stomach adenocarcinoma (STAD) dataset contains data from 375 GC tissues and 32 normal gastric tissues. The expression level of ANGPTL proteins was described using log2(counts) together with the 95% confidence interval (CI). The Kruskal-Wallis (K-W) test was applied to compare the expression levels of different ANGPTL proteins, followed by Dunn's multiple comparisons test when the K-W result was significant; P < .05 was considered significant. Transcriptional expression of ANGPTL proteins in GC samples was compared with that in normal gastric samples using the R statistical software package (http://www.R-project.org). P < .01 and an absolute log FC greater than 1 were used as the filter to identify differentially expressed ANGPTL proteins.
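For illustration, the screening step above can be sketched as follows. This is a minimal, hedged sketch in Python (the original analysis was run in R, and the exact test used for the tumor-vs-normal comparison is not specified); the DataFrame `expr` and mask `is_tumor` are hypothetical inputs, not objects from the original pipeline.

```python
# Hedged sketch of the TCGA-STAD screening step described above.
# Assumes a pandas DataFrame `expr` of log2(counts) with genes as rows
# and samples as columns, plus a boolean array `is_tumor` marking the
# 375 GC vs 32 normal samples. All names are illustrative.
import numpy as np
import pandas as pd
from scipy import stats

def screen_degs(expr: pd.DataFrame, is_tumor: np.ndarray,
                p_cut: float = 0.01, lfc_cut: float = 1.0) -> pd.DataFrame:
    """Flag differentially expressed genes with P < .01 and |logFC| > 1."""
    rows = []
    for gene, values in expr.iterrows():
        tumor, normal = values[is_tumor], values[~is_tumor]
        # Mann-Whitney U as a stand-in for the (unspecified) R test
        _, p = stats.mannwhitneyu(tumor, normal, alternative="two-sided")
        logfc = tumor.mean() - normal.mean()  # difference of log2 means
        rows.append((gene, logfc, p))
    out = pd.DataFrame(rows, columns=["gene", "logFC", "p"]).set_index("gene")
    return out[(out["p"] < p_cut) & (out["logFC"].abs() > lfc_cut)]
```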
| LinkedOmics
The prognostic value of ANGPTL proteins' mRNA transcription levels was measured using an online portal, LinkedOmics (www.linkedomics.org), 17 which includes gene expression profiles and clinical information of 375 GC patients from TCGA-STAD. Patients with GC were separated into two groups based on the median gene expression (high vs low). The overall survival (OS) of the two groups was compared by Cox regression analysis and illustrated with Kaplan-Meier (K-M) survival curves. The hazard ratio (HR) and P value were calculated as well; P < .05 was considered significant.
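The median-split Cox comparison described here can be sketched as below; LinkedOmics performs this server-side, so the code is only an assumed local equivalent using the `lifelines` package, and the DataFrame `df` with columns "time", "event", and "expr" is hypothetical.

```python
# Hedged sketch of a median-split Cox regression for OS.
import pandas as pd
from lifelines import CoxPHFitter

def cox_median_split(df: pd.DataFrame):
    # 1 = expression at or above the median, 0 = below
    df = df.assign(high=(df["expr"] >= df["expr"].median()).astype(int))
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "high"]],
            duration_col="time", event_col="event")
    # Hazard ratio (high vs low) and its P value
    return cph.hazard_ratios_["high"], cph.summary.loc["high", "p"]
```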
| Kaplan-Meier plotter
The prognostic value of ANGPTL proteins' mRNA transcription levels was also measured using an online open database, the Kaplan-Meier Plotter (www.kmplot.com), 18 which includes gene expression profiles and survival information of 876 GC patients from six datasets other than TCGA. Patients with GC were separated into two groups based on the median gene expression (high vs low). The OS and post-progression survival (PPS) of the two groups were compared on Kaplan-Meier survival plots, and the HR with 95% CI and the log-rank P value were calculated; P < .05 was considered significant. We plotted K-M survival curves based on each ANGPTL protein's most frequently detected probe, with the number at risk displayed below the curves.
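The K-M curve and log-rank test described above can be sketched as below, again as an assumed local equivalent of what the web tool computes; the `lifelines` package and the hypothetical DataFrame `df` (columns "expr", "time", "event") are not part of the original analysis.

```python
# Hedged sketch of the median-split Kaplan-Meier comparison.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_median_split(df: pd.DataFrame):
    high = df["expr"] >= df["expr"].median()
    result = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                          event_observed_A=df.loc[high, "event"],
                          event_observed_B=df.loc[~high, "event"])
    kmf = KaplanMeierFitter()
    for label, mask in [("high", high), ("low", ~high)]:
        kmf.fit(df.loc[mask, "time"], df.loc[mask, "event"], label=label)
        kmf.plot_survival_function()  # one K-M curve per group
    return result.p_value             # log-rank P value
```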
| cBioPortal
The cBioPortal (www.cbioportal.org) is an open-access platform for exploring multiple cancer genomics datasets. The GC dataset contains data from 415 cases with pathologic diagnosis, chosen through cBioPortal for further analyses of ANGPTL proteins. 19,20 The mRNA expression levels of ANGPTL proteins in two groups of GC patients with different clinicopathological features were compared by the Mann-Whitney (M-W) test. The K-W test was applied for multiple-group comparisons, followed by Dunn's multiple comparisons test when the K-W result was significant; P < .05 was considered significant. In addition, the genes with the highest expression correlation with each ANGPTL protein were generated by cBioPortal, and the top 120 co-expressed genes with the highest Spearman correlation scores were included in the subsequent functional enrichment analysis.
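The co-expression ranking step can be sketched as below; cBioPortal computes this internally, so the sketch only illustrates the idea of ranking genes by Spearman correlation and keeping the top 120. The genes-by-samples DataFrame `expr` is a hypothetical input.

```python
# Hedged sketch: top-k genes co-expressed with a given ANGPTL protein.
import pandas as pd
from scipy.stats import spearmanr

def top_coexpressed(expr: pd.DataFrame, target: str, k: int = 120) -> pd.Series:
    target_vec = expr.loc[target]
    # Spearman rho between every other gene and the target gene
    rho = expr.drop(index=target).apply(
        lambda row: spearmanr(row, target_vec).correlation, axis=1)
    return rho.sort_values(ascending=False).head(k)
```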
| Metascape
Functional enrichment analysis was carried out using Metascape (http://metascape.org) 21 in the context of gene ontology (GO) 22 and Kyoto Encyclopedia of Genes and Genomes (KEGG) 23 biological pathways. The genes were assigned to functional groups based on molecular functions, biological processes, and specific pathways.
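For readers unfamiliar with enrichment testing, the core statistic underlying this kind of analysis is an over-representation test. The sketch below shows a bare-bones hypergeometric version; Metascape's actual pipeline is considerably more involved (multiple ontologies, clustering, multiple-testing correction), and the input sets are hypothetical.

```python
# Hedged sketch of a hypergeometric over-representation test.
from scipy.stats import hypergeom

def enrichment_p(hits: set, term_genes: set, background: set) -> float:
    """P(X >= observed overlap) under the hypergeometric null."""
    N = len(background)                      # population size
    K = len(term_genes & background)         # annotated genes in population
    n = len(hits)                            # genes in the query list
    x = len(hits & term_genes & background)  # observed overlap
    return hypergeom.sf(x - 1, N, K, n)
```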
| Downregulated ANGPTL1/3/6 in patients with GC
Seven ANGPTL proteins were identified using the ONCOMINE database, 16 excluding ANGPTL8. As shown in Figure 1, we first measured the expression of ANGPTL proteins in 20 types of cancer samples and compared those to normal tissue samples. The mRNA expression of ANGPTL1/2/3/6 was significantly dysregulated in GC samples in multiple datasets.
In accordance with Table 1, downregulation of ANGPTL1 was observed in GC tissues compared with normal tissues, with an FC of −2.313 in Cui's dataset 24 and an FC of −3.118 in DErrico's dataset. 25 Overexpression of ANGPTL2 has been reported in diffuse gastric adenocarcinoma compared with normal gastric tissue, according to Chen's 26 (FC = 2.239) and DErrico's (FC = 2.360) datasets, while ANGPTL2 was also upregulated in gastric cancer (FC = 2.249) according to Wang's dataset. 27 The mRNA expression of ANGPTL3 was found to be downregulated in many types of GC compared to normal gastric tissues. Both Cho 28 (FC = −2.092) and DErrico 25 (FC = −3.247) reported reduced transcriptional levels of ANGPTL3 in mixed-type gastric adenocarcinoma. Markedly decreased expression of ANGPTL3 was reported in diffuse-type GC and intestinal-type GC, with FC = −2.090 and −3.599, by Cho and DErrico, respectively. Across datasets for ANGPTL3, we observed gastric adenocarcinoma with FC = −2.074 compared with normal stomach, as reported by Cho, 28 and a similar trend was found in Cui. 24 Besides the datasets included in ONCOMINE, the transcriptional expression levels of ANGPTL proteins in the TCGA-STAD dataset are also demonstrated in Table 2 and Figure 2A,B. ANGPTL1/2/4 exhibited much higher expression levels than the other members of the ANGPTL family in both gastric cancer and normal gastric samples (data not shown; all P values generated by Dunn's multiple comparisons test were less than .05). The mean expression levels [measured by log2(counts)] of ANGPTL1/2/4 were 9.982, 11.690, and 9.548 in normal tissue and 7.142, 11.930, and 8.792 in cancer tissue, respectively. With P < .01 and absolute log FC greater than 1 as the filter, the downregulated genes were ANGPTL1 (P = 4.480 × 10^−21), ANGPTL3 (P = 4.140 × 10^−5), ANGPTL4 (P = 3.882 × 10^−7), ANGPTL5 (P = 1.490 × 10^−9), ANGPTL6 (P = 4.150 × 10^−18), and ANGPTL7 (P = 1.600 × 10^−25). Therefore, both the ONCOMINE database and the TCGA-STAD dataset indicated that ANGPTL1/3/6 were downregulated in gastric cancer samples (Figure 3).
| Upregulated ANGPTL1/2/4 were correlated with poor prognosis in GC patients
We separated all GC patients into two groups (high vs low) based on the median expression value of each ANGPTL protein across all GC samples. As shown in Table 3 and Figure 4, seven ANGPTL proteins were identified using LinkedOmics. 17 We also investigated the prognostic value of ANGPTL proteins' expression levels using Kaplan-Meier Plotter, 18 in which four members of the family could be identified. High expression of ANGPTL1 (≥27) predicted poor OS in 631 patients (HR: 2.17 [1.74, 2.72]; P = 3.2 × 10^−12). Similar associations were observed for the other identified members [Table 4(a) and Figure 5A].
[Figure 1 caption: Significantly changed expression of ANGPTL proteins in different types of cancers. This information was obtained from ONCOMINE and indicates the numbers of datasets with statistically significant (P ≤ 10^−4, fold change ≥ 2, and gene rank in the top 10%) mRNA high expression (red) or low expression (blue) of ANGPTL proteins (different types of cancer vs corresponding normal tissue). Cell color shade was determined by the best gene rank for the analyses within the cell, where the gene rank was analyzed by the percentile of target genes among all genes measured in each study.]
[Table 1 caption: Differential expression of ANGPTL proteins between different types of GC and normal gastric tissue (ONCOMINE).]
With regard to the relationship of ANGPTL1/2/3/4 expression with PPS, associations similar to those with OS were found through K-M Plotter [Table 4(b) and Figure 5B]. Therefore, we concluded that upregulated ANGPTL1/2/4 was correlated with poor prognosis in GC patients based on data from LinkedOmics and Kaplan-Meier Plotter (Figure 6).
| The association between ANGPTL1/2/3/4/6 expression level and clinicopathological features of GC patients
Taking ANGPTL proteins with aberrant expression or prognostic value into consideration, we next focused on the association between ANGPTL1/2/3/4/6 expression levels and the clinicopathological features of GC patients. The M-W test was used to compare the mRNA expression levels of ANGPTLs between two groups of GC patients from cBioPortal 19,20 with different clinicopathological features. Only ANGPTL4 exhibited lower expression in Asian patients than in patients of other races (P = .0016). For the age criterion, there was no significant difference between the <65-year and ≥65-year groups, except for ANGPTL1, which showed reduced levels in the older group (P = .0231) (Figure 7A). No significant difference in ANGPTL expression was observed across gender. As shown in Figure 7B, GC patients with higher histological grade (G3) showed higher expression of ANGPTL1 (P < .0001) and ANGPTL2 (P = .0002) than G1 and G2 patients. Intestinal-type GC patients tended to express low levels of ANGPTL1 (P < .0001), ANGPTL2 (P < .0001), and ANGPTL4 (P = .0357) compared to patients with mixed- and diffuse-type GC. The location of the tumor lesion also affected the expression levels of ANGPTL1 (P = .0030), ANGPTL2 (P = .0014), ANGPTL3 (P = .0100), and ANGPTL4 (P = .0350) according to the K-W test. Further Dunn's multiple comparisons test revealed increased ANGPTL1 (P = .0033)/ANGPTL2 (P = .0012) and decreased ANGPTL3 (P = .0071)/ANGPTL4 (P = .0030) expression in the antrum/distal group compared to the gastroesophageal junction group (data not shown). We found that the expression of ANGPTL1 (P = .0010) and ANGPTL2 (P = .0020) mRNA was significantly increased in the T3 and T4 groups, while ANGPTL4 (P = .0217) mRNA expression was significantly reduced in the T3 and T4 groups. Upregulated ANGPTL4 was also found to be associated with metastasis (P = .0324) (Figure 7C). Lymph node (N) status and clinical stage had no relationship with the expression level of any of these ANGPTL proteins.
[Figure 4 caption: K-M curves revealing the association between OS and ANGPTL protein expression in GC patients (LinkedOmics).]
[Table 4 caption: Association of ANGPTL proteins' expression with prognosis in GC patients revealed by K-M Plotter.]
| Functional enrichment analysis of genes co-expressed with ANGPTL1/2/4
Besides differential expression in cancer tissue and prognostic value, the expression levels of ANGPTL proteins in gastric tissue were also taken into account when we selected candidates for functional enrichment analysis. Since ANGPTL1/2/4 exhibited much higher expression levels than the other members of the ANGPTL family in gastric tissue, we decided to further investigate their roles in GC genesis and development. The top 120 genes most significantly correlated with ANGPTL1/2/4 were generated by cBioPortal 19,20 and included in the subsequent functional enrichment analysis using Metascape. 21 As shown in Figure 8A, the top 120 genes co-expressed with ANGPTL1 were mainly enriched in molecular functions, biological processes, and pathways involved in interactions with the extracellular matrix (ECM) (eg, glycosaminoglycan binding, keratan sulfate catabolic process, and ECM structural constituent and organization), cell differentiation and proliferation (eg, positive regulation of epithelial cell proliferation), tissue development (eg, mesenchyme development), and the muscle system (eg, actin binding). Figure 8B is a network exhibiting the interactions among the clusters of genes enriched in the molecular functions, biological processes, and pathways mentioned above. The genes enriched in ECM-related functions showed a closer relationship with those enriched in cell differentiation and proliferation and in tissue development-related processes.
According to Figure 8C, ANGPTL2 probably participates in the development of the tumor microenvironment, especially vasculature development. It may also regulate ECM binding and adhesion. The genes co-expressed with ANGPTL4 were not enriched in specific biological processes as remarkably as those of ANGPTL1/2. However, ANGPTL4 seemed to play a role in multiple different processes rather than only angiogenesis-related ones. It probably takes part in the stress-activated mitogen-activated protein kinase (MAPK) cascade, the peroxisome proliferator-activated receptor (PPAR) signaling pathway, the phosphatidylinositol-3-phosphate (PI3P) biosynthetic process, the epithelial cell apoptotic process, and the regulation of the inflammatory response. ANGPTL4 may also be associated with enzyme/kinase activator activity, phosphotransferase/phosphatase activity, and steroid hormone receptor binding (Figure 8E). The network in Figure 8F revealed the association between genes enriched in the stress-activated MAPK cascade and those enriched in the PI3P biosynthetic process and phosphotransferase activity. Genes enriched in enzyme/kinase activator activity interacted only with each other.
| DISCUSSION
ANGPTL proteins are a family of secreted glycoproteins that participate in multiple biological processes, mainly including angiogenesis, inflammation, and metabolism. 3 During the last decade, emerging evidence has revealed the roles of ANGPTL proteins in regulating different steps of carcinogenesis and metastasis through their effects on the processes mentioned above. Recently, Carbone et al 3 published a systematic review outlining the current knowledge about ANGPTL proteins' functions in angiogenesis, inflammation, cancer progression, and metastasis. They also discussed the most recent evidence supporting ANGPTL proteins' role as prognostic biomarkers for cancer therapy. However, the roles of ANGPTL proteins in cancer progression and metastasis can be quite tumor type-dependent; even studies investigating the same type of cancer sometimes generate conflicting conclusions. So far, few articles have explored the mRNA expression of ANGPTL proteins in GC. 12,14 Their prognostic value and biological functions in GC also remain to be elucidated. To our knowledge, this is the first study to comprehensively exhibit ANGPTL proteins' transcriptional expression in GC and to identify specific ANGPTL proteins with prognostic value and biological functions in GC development using integrative bioinformatics.
[Figure 7 caption: ANGPTL proteins with significantly changed expression according to various clinicopathological features among ANGPTL1/2/3/4/6 (cBioPortal). (A) Differentially expressed ANGPTL proteins according to demographic features; (B) differentially expressed ANGPTL proteins according to pathological features; (C) differentially expressed ANGPTL proteins according to clinical staging. Clinical staging was based on the 7th edition of the AJCC TNM staging. Red boxes highlight the box plots demonstrating significant differences between groups (P < .05). GC: gastric cancer.]
| ANGPTL proteins' mRNA expression in gastric tissues
Carbone et al briefly summarized the mRNA expression levels of ANGPTL family members in esophageal and colorectal tissues based on previously published research. ANGPTL1 was highly expressed in all parts of the GI tract except the esophagus. 3 ANGPTL2 exhibited high expression in esophageal 3 and colorectal cancer. 10 The expression levels of ANGPTL4/6/7 were also high in CRC. [29-31] Our article revealed, through bioinformatics, a significantly higher expression level of ANGPTL1/2/4 than of ANGPTL3/5/6/7 in gastric tissues, which was not demonstrated in Carbone's review. 3 Taken together, this information may provide a rough impression of the landscape of ANGPTL proteins' mRNA expression levels along the GI tract.
| ANGPTL1: expression, prognostic value, and roles in GC
According to our results, both the ONCOMINE database and the TCGA-STAD dataset showed that ANGPTL1 was downregulated in GC samples compared to normal gastric samples, suggesting a potential inhibitory role of ANGPTL1 in gastric tumorigenesis. Downregulation of ANGPTL1 has been observed in kidney, lung, prostate, bladder, and thyroid cancers as well. 3 In fact, ANGPTL1 is generally supported as a tumor suppressor across different types of tumor by both in vitro and in vivo experiments. For example, ANGPTL1 overexpression in breast cancer (BC) cells resulted in a significant reduction in the number and size of tumor nodules. 32 Primary melanoma tumors derived from ANGPTL1-secreting cells grew more slowly in vivo compared to empty vector-transfected cells. 32 It has also been reported that ANGPTL1 treatment remarkably inhibited the in vitro and in vivo migration and invasion ability of hepatocellular carcinoma (HCC) cells. 11 Meanwhile, immunohistochemistry (IHC) analysis of HCC samples revealed that patients with higher ANGPTL1 expression had less metastasis as well as longer survival times. 11 Several possible mechanisms underlie this suppressive activity. First, ANGPTL1 may play an essential role in tumor inhibition by balancing angiogenesis and permeability: it can both inhibit VEGF-induced endothelial cell proliferation and induce extracellular signal-regulated kinase 1/2 (ERK1/2)-related antiapoptotic activity. 32 Second, ANGPTL1 is responsible for reorganization of the cytoskeleton through inhibition of actin stress fiber formation, which probably results in an altered cellular morphology. Third, ANGPTL1 can induce mesenchymal-to-epithelial transition (MET) through the integrin α1β1, miR-630, and SLUG (SNAIL-related zinc-finger transcription factor) pathway, thus allowing cancer cells to regain epithelial properties. 9 However, according to our results, upregulated ANGPTL1 was correlated with poor prognosis, higher histological grade, non-intestinal Lauren classification, and advanced T stage in GC patients, suggesting a GC-promoting role of this molecule. Perhaps ANGPTL1 plays different roles in gastric tumorigenesis and gastric tumor progression. There is a lack of research focusing on ANGPTL1's roles in GC. Our functional enrichment analysis indicated several pathways by which ANGPTL1 may exert influence on GC progression.
Most of the pathways are related to interactions with ECM (eg, glycosaminoglycan binding, keratan sulfate catabolic process, and ECM structural constituent and organization), cell differentiation and proliferation (eg, positive regulation of epithelial cell proliferation), tissue development (eg, mesenchyme development), and muscle system (eg, actin binding). Further studies could be conducted based on this analysis.
| ANGPTL2: expression, prognostic value, and roles in GC
ANGPTL2 has been shown to be tumor-promoting in several types of cancer. High ANGPTL2 expression has been observed in many types of cancer, including esophageal, 3 colorectal, 10 prostate, 5 pancreatic, 33 lung, 34 breast, 35 liver, 36 and skin 37 cancers. ANGPTL2 mainly exerts its proangiogenic and antiapoptotic abilities in the tumor microenvironment. ANGPTL2 also increases cancer cells' migratory and invasive ability, thus facilitating tumor metastasis through different mechanisms. For instance, autocrine signaling between ANGPTL2 and its receptor LILRB2 was able to induce early EMT and tumor progression in preneoplastic pancreatic ductal cells. 33 ANGPTL2 also strengthened the responsiveness of BC cells to chemokine (C-X-C motif) ligand 12 (CXCL12) through upregulation of the C-X-C motif receptor 4 (CXCR4), thus promoting the recruitment of these cells to bone metastatic sites. 35 In addition, ANGPTL2 induced inflammation and oxidative stress, generating a tumor microenvironment that supported methylation and consequently reduced the gene expression of DNA repair enzymes, such as mutS homolog 2 (MSH2), leading to DNA mutations and cancer initiation in an experimental model of skin cancer. 37 Our bioinformatics analysis revealed elevated expression of ANGPTL2 in GC tissue compared to normal gastric tissue based on data from ONCOMINE, and no significant difference in ANGPTL2 expression between normal and cancer tissue based on data from TCGA-STAD. Moreover, upregulated ANGPTL2 was correlated with poor prognosis, higher histological grade, non-intestinal Lauren classification, and advanced T stage in GC patients according to our results. Therefore, we may conclude that ANGPTL2 behaves as a tumor promoter in GC, just as in many other types of cancer. Several studies on ANGPTL2 and GC have generated similar conclusions. As in vitro studies indicated, ANGPTL2 knockdown caused anoikis and inhibited proliferation, invasion, and migration in GC cells, 12 while the proliferation rate and invasive ability of ANGPTL2-overexpressing GC cells were higher than those of control cells. 13 Besides, higher expression of ANGPTL2 was observed in highly malignant and undifferentiated GC cell lines. 13,38 Clinical studies suggested that upregulated ANGPTL2 was associated with GC progression, early recurrence, and poor prognosis. 12 Moreover, ANGPTL2 could be a potential novel noninvasive biomarker for GC: the serum ANGPTL2 levels of GC patients were significantly higher than those of healthy controls, and recent studies reported receiver operating characteristic (ROC) curves yielding a robust AUC (0.831) accompanied by high sensitivity (73.0%) and specificity (82.2%) in distinguishing GC patients from healthy controls. 12 Although ANGPTL2's GC-promoting activity has been observed in both in vitro studies and clinical research, few studies have explored the possible underlying mechanisms. According to our functional analysis of co-expressed genes in GC, ANGPTL2 probably participates in the development of the tumor microenvironment, especially vasculature development. It may also regulate ECM binding and adhesion.
Meanwhile, ANGPTL2 could possibly influence cell proliferation and differentiation. The Wnt signaling pathway and HMG box domain binding were also among the top 20 functions enriched with ANGPTL2 co-expressed genes. These candidate pathways need further experimental validation and could serve as hints for future mechanistic studies.
| ANGPTL3: expression and prognostic value in GC
Few articles about ANGPTL3's role in cancer growth and invasion have been reported, and existing studies show contradictory results in different types of cancer. For instance, ANGPTL3 was significantly upregulated in oral squamous cell carcinoma (OSCC)-derived cell lines compared to normal tissues. In vitro and in vivo OSCC models showed that ANGPTL3 knockdown arrested the cell cycle at the G1 phase through upregulating cyclin-dependent kinase inhibitors, thus reducing cancer cell proliferation and growth. 39 Nonetheless, in HCC cells, ANGPTL3 inhibited cell proliferation and invasion through downregulation of the p38 MAPK and MMP-9 cascade's activation. 40 Although ANGPTL3 exhibited low overall expression in gastric tissues, it was further downregulated in GC tissues compared to normal gastric tissues, suggesting a role for ANGPTL3 as a GC suppressor. However, this difference in expression level made no difference to GC patients' prognosis and was not associated with other clinicopathological factors, except the primary tumor site. Therefore, we favor the view that ANGPTL3 may not play an important role in regulating GC genesis and progression.
| ANGPTL4: expression, prognostic value, and roles in GC
Recent research has revealed ANGPTL4's wide spectrum of action, including cancer growth, angiogenesis, metabolism, and metastasis. However, ANGPTL4 appears to act in a tumor type-dependent manner, and even findings within the same type of cancer sometimes contradict one another.
For instance, ANGPTL4 could be induced by hypoxia through upregulation of the PGE2 receptor in CRC, thereby promoting cancer cell proliferation. It could also stimulate a redox-based mechanism that enhanced tumor cell survival by altering the O2:H2O2 ratio, leading to the activation of extracellular signal-regulated kinase in CRC. 14 In contrast to its upregulated expression in CRC, ANGPTL4 expression was significantly lower in HCC tissues than in nontumor tissues, and low expression of ANGPTL4 was significantly associated with advanced tumor stage, poor differentiation, and poor overall and disease-free survival (DFS) of HCC patients. 15 However, one study showed that serum ANGPTL4 protein is higher in HCC patients than in normal controls. 41 Besides, ANGPTL4 was reported to be expressed at a higher level in the blood of BC patients, 42 and high expression of ANGPTL4 correlated with shorter DFS in young BC patients. 43 Yet an in vitro study indicated that PPAR β/δ-regulated ANGPTL4 strongly inhibited the transforming growth factor β (TGF-β)-induced invasion of MDA-MB-231 human BC cells. 44 Proteolytic cleavage of ANGPTL4 generates two isoforms: an N-terminal coiled-coil domain (nANGPTL4) and a large fibrinogen-like COOH-terminal domain (cANGPTL4). Whereas the former is mainly involved in the endocrine regulation of lipid metabolism, insulin sensitivity, and glucose homeostasis, the latter may be a key regulator of the complex signaling during cancer development. 3 The complexity of ANGPTL4's role in cancer genesis and development probably results from alterations in cleavage and posttranslational modification. 45 Our integrated bioinformatics analysis further demonstrated the complexity of ANGPTL4's role in gastric cancer. According to data from TCGA-STAD, ANGPTL4 exhibited lower expression in GC tissue than in normal gastric tissue and higher expression in early T stage than in advanced T stage. However, elevated ANGPTL4 expression was also correlated with poor prognosis, unfavorable Lauren classification, and metastasis in GC patients. Previous research focusing on ANGPTL4 and GC has not provided consistent conclusions either. Kubo et al suggested that hypoxia-induced ANGPTL4 expression is independent of hypoxia-inducible factor-1α (HIF-1α) in hypoxic GC cells and that ANGPTL4 may be a favorable marker for predicting a long survival time. 46 Meanwhile, Baba et al demonstrated that hypoxia-induced ANGPTL4 expression was regulated by HIF-1α in scirrhous GC cells and was essential for tumor growth, metastasis, and resistance to anoikis through different mechanisms, including downregulation of c-Myc and the focal adhesion kinase (FAK)/Src/phosphoinositide 3-kinase (PI3K)-protein kinase B (Akt)/ERK pathway, upregulation of p27, and the apoptotic factors caspases-3, -8, and -9. 47 Besides, Tan et al found that cANGPTL4 bearing the T266M mutation (T266M cANGPTL4) bound to integrin α5β1 with a reduced affinity compared to wild-type cANGPTL4, leading to weaker activation of downstream signaling molecules. Tumors with T266M cANGPTL4 exhibited impaired proliferation, anoikis resistance, and migratory capability, and had a reduced adenylate energy charge. Further investigations also revealed that cANGPTL4 regulated the expression of glucose transporter 2 (Glut2). 48
Many of the pathways by which ANGPTL4 exerts influence on other types of cancer were enriched in our functional enrichment analysis of ANGPTL4 co-expressed genes in GC, such as the stress-activated MAPK cascade, the PPAR signaling pathway, the PI3P biosynthetic process, the epithelial cell apoptotic process, and the regulation of the inflammatory response. However, some of them have not been verified by experiments in GC, so further research is needed.
| ANGPTL6: expression and prognostic value in GC
Similar to ANGPTL3, ANGPTL6 also showed low expression in gastric tissues and was downregulated in GC tissues. Based on our results, ANGPTL6's expression level did not exhibit prognostic value in GC patients and was not associated with other clinicopathological factors either. All of this information seems to suggest that ANGPTL6 may not exert much influence on GC genesis and progression.
However, it has been reported that the interaction between hepatic ANGPTL6 and tumoral integrin/E-cadherin drives liver homing and colonization by CRC cells. Furthermore, an ANGPTL6-mimicking peptide was capable of interfering with this interaction, thus acting as an antimetastatic compound. 30 Hence, investigating ANGPTL6's role in liver metastasis of GC remains a possible research direction.
| ANGPTL1/2/4 and resistance of antiangiogenesis agents
Emerging evidence has already demonstrated ANGPTL proteins' potential roles in resistance to antiangiogenesis agents in other types of cancer. For example, one study indicated that ANGPTL1 could inhibit sorafenib resistance and cancer stemness in HCC cells by acting as a Met receptor inhibitor. 49 Besides, ANGPTL2 proved to be among the pro-inflammatory factors overexpressed in pancreatic cancer (PC) cells that led to epithelial-to-mesenchymal transition (EMT) and resistance to anti-VEGF treatment. 50,51 Moreover, in a study on triple-negative breast cancer (TNBC), heparin-binding epidermal growth factor (HB-EGF) was found to play a pivotal role in the acquisition of tumor aggressiveness by regulating both ANGPTL4 and VEGFA. 52 This research suggests that blocking VEGFR alone may not be enough to shut down angiogenesis in TNBC. Whether ANGPTL proteins participate in the mechanisms of antiangiogenesis drug resistance in GC has not been studied thoroughly before.
| CONCLUSIONS
In the current study, we systematically analyzed the transcriptional expression levels and prognostic values of ANGPTL proteins in GC patients. We also exhibited the association of expression levels with clinicopathological features and supplied a functional enrichment analysis. Integrative bioinformatics analysis suggests that ANGPTL1/2/4, compared to other ANGPTL proteins, may be potential therapeutic targets in GC patients. Among ANGPTL1/2/4, ANGPTL2 tends to be a GC promoter according to our results. However, we cannot conclude whether ANGPTL1/4 are GC promoters or suppressors based on the diverse information provided by our analysis. The roles of ANGPTL proteins in GC are so complex that more well-conducted clinical research and in-depth experiments are required to validate the diagnostic value of these ANGPTL proteins and to explore the underlying mechanisms by which they influence GC development.
"Biology"
] |
Quasi-Normal Modes and Stability of Einstein-Born-Infeld Black Holes in de Sitter Space
We study gravitational perturbations of electrically charged black holes in (3+1)-dimensional Einstein-Born-Infeld gravity with a positive cosmological constant. For the axial perturbations, we obtain a set of decoupled Schrödinger-type equations whose formal expressions, in terms of metric functions, are the same as those without a cosmological constant, corresponding to the Regge-Wheeler equation in the proper limit. We compute the quasi-normal modes (QNMs) of the decoupled perturbations using the Schutz-Iyer-Will WKB method. We discuss the stability of the charged black holes by investigating the dependence of the quasi-normal frequencies on the parameters of the theory, correcting some errors in the literature. It is found that all the axial perturbations are stable for the cases where the WKB method applies. There are cases where the conventional WKB method does not apply, such as the three-turning-points problem, so that a more generalized formalism is necessary for studying their QNMs and stabilities. We find that, for the degenerate horizons with the "point-like" horizons at the origin, the QNMs are quite long-lived, close to the quasi-resonance modes, in addition to the "frozen" QNMs for the Nariai-type horizons and the usual (short-lived) QNMs for the extremal black hole horizons. This is a genuine effect of the branch which does not have the general relativity limit. We also study the exact solution near the (charged) Nariai limit and find good agreement, even far beyond the limit, for the imaginary frequency parts.
I. INTRODUCTION
The organization of this paper is as follows. In Sec. II, we review the electrically charged black hole solution in EBI gravity with a positive cosmological constant. In Sec. III, we consider the axial perturbations around the spherically symmetric EBI black hole background and obtain a set of decoupled Regge-Wheeler equations. In Sec. IV, we use the Schutz-Iyer-Will third-order WKB method to compute the QNMs numerically and discuss the stabilities of the EBI black holes. In Sec. V, we study the exact solution near the (charged) Nariai limit and compare it with the numerical results of Sec. IV. In Sec. VI, we conclude with several discussions. Throughout this paper, we use the conventional units for the speed of light $c$, the electric and magnetic constants $\epsilon_0$, $\mu_0$, and the Boltzmann constant $k_B$, namely $c = 4\mu_0 = 4/\epsilon_0 = k_B = 1$, but keep the Newton constant $G$ and the Planck constant $\hbar$ unless stated otherwise.
II. BACKGROUND SOLUTIONS
In this section, we briefly review the black hole solution in EBI gravity with a positive cosmological constant Λ in (3+1) dimensions, from which the gravitational perturbations will be considered. The EBI action consists of the Einstein-Hilbert term with Λ and the BI Lagrangian density $L(F)$, where β is the BI coupling constant with dimensions $[\mathrm{length}]^{-2}$ [5,6]. In the weak-field, or equivalently strong-coupling, limit $|F_{\mu\nu}F^{\mu\nu}| \ll 2\beta^2$, $L(F)$ may be expanded so that the usual Maxwell electrodynamics is recovered at the lowest order, where $F^2 \equiv F_{\mu\nu}F^{\mu\nu}$. The correction terms, relevant when $F$ is comparable to β, represent the effect of the nonlinear BI fields at the short distance $\sim \beta^{-1}$, and the possible electromagnetic field strength is bounded by $-F^2 \le 2\beta^2$. Taking $16\pi G = 1$ for simplicity, the equations of motion follow by varying the action, with the energy-momentum tensor for the BI fields as the source. Let us now consider a static and spherically symmetric solution with the metric ansatz
$$ds^2 = -N^2(r)\,dt^2 + \frac{1}{f(r)}\,dr^2 + r^2\big(d\theta^2 + \sin^2\theta\, d\phi^2\big).$$
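For concreteness, the BI Lagrangian density being expanded here takes, in a common convention consistent with the bound $-F^2 \le 2\beta^2$ quoted above, the form below; this is a reconstruction based on the surrounding description, and the overall normalization of $L(F)$ is an assumption rather than the paper's own display:
$$L(F) = 4\beta^2\left(1 - \sqrt{1 + \frac{F^2}{2\beta^2}}\right) \simeq -F^2 + \frac{F^4}{8\beta^2} + \mathcal{O}\!\left(\frac{F^6}{\beta^4}\right), \qquad F^2 \equiv F_{\mu\nu}F^{\mu\nu}.$$
The Maxwell term is recovered at lowest order, and reality of the square root enforces the bound $-F^2 \le 2\beta^2$.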
For the static, electrically charged case, where the only non-vanishing component of the field strength tensor is $F_{rt} \equiv E_r$, the general solutions for the metric and the BI electric field are obtained in terms of the hypergeometric function [12,14]. Here $Q$ is the electric charge and $M$ is the ADM mass, composed of the intrinsic mass $C$ and the (finite) self-energy of a point charge, $M_0$. In the large-distance limit $r \gg \sqrt{Q/\beta}$, the solutions reduce to the usual Reissner-Nordström-(anti-)de Sitter (RN-(A)dS) black hole at the leading order, with short-distance corrections at the sub-leading orders. On the other hand, in the short-distance limit $r \ll \sqrt{Q/\beta}$, the solutions are regularized near the origin, with a milder (curvature) singularity of the metric and a finite electric field at the origin [14,15], due to the nonlinear BI fields for a finite coupling β.
The solution (8) can generally have three horizons, i.e., two (inner and outer) black hole horizons, $r_-$ and $r_+$, and one cosmological horizon $r_{++}$. The Hawking temperature for the outer black hole horizon $r_+$ is given by (16) and plotted in Fig. 1. There exist two extreme limits of vanishing temperature, when the outer black hole horizon $r_+$ meets either the inner horizon $r_-$ (extremal black holes) or the cosmological horizon $r_{++}$ (Nariai solution), at the radii given in (17), for a positive cosmological constant Λ ($r^*_H < r^*_C$) (Figs. 1 and 2). At the extreme points, the ADM mass $M$ takes the extremal value (19) [15,16]. The first law of black hole thermodynamics is found to hold with the usual Bekenstein-Hawking formula for the black hole entropy and the scalar potential [12-14] $A_0(r) = (Q/r)\,{}_2F_1(\cdots)$. The integrated first law of black hole thermodynamics, called the generalized Smarr relation, is given in [13,14,17,18].
[Footnote 2: The generalized Smarr relation without the last term has been obtained for the planar black holes in [14,18]. If one considers topological EBI black holes with the topological parameter $k = +1, 0, -1$ for spherical, planar, and hyperbolic geometry generally, with the solid angle $\Omega_k$, one finds [13] the last term as $(k/3)\sqrt{4\hbar S_{BH}/\Omega_k G}$.]
[Figure caption: The plots of the extremal horizons $r^*_H$ and $r^*_C$ ($> r^*_H$) vs. βQ for varying β. We consider β = 2, 1, 1/2, 1/3 (right to left) with Λ = 0.2. The solid/dashed lines denote the '+/−' roots in (17), and there is no '+' root for the last case.]
[Figure caption: The plots of the ADM mass M vs. the black hole horizon radius $r_+$ for varying βQ with a fixed Λ > 0. The marginal mass $M_0$ is given by the mass value at $r_+ = 0$. The top two curves on the left represent $M_0 > M^*$ and $M_0 = M^*$ for βQ > 1/2 and βQ = 1/2, respectively, with the extremal mass $M^*$, whereas the bottom two curves represent the cases where $M^*$ is absent, for βQ < 1/2. We consider βQ = 2/3, 1/2, 2/4.5, 1/3 (top to bottom) with β = 2 and Λ = 0.2. The effect of the cosmological constant is not significant for small $r_+$ (left) but is important for large $r_+$ (right). In the latter, we compare the dS case of Λ = 0.2 (thick curve) with the flat case Λ = 0 (medium curve) and the AdS case of Λ = −0.2 (thin curve).]
Now, from the relation (17), one can classify the black holes by the values of βQ [15].
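The first law referred to above can be written in the standard form below. This is a reconstruction consistent with the quoted Bekenstein-Hawking entropy and scalar potential, not the paper's own display, and the precise normalization of $S_{BH}$ depends on the units adopted with $16\pi G = 1$:
$$dM = T_H\, dS_{BH} + A_0(r_+)\, dQ, \qquad S_{BH} = \frac{A_H}{4\hbar G} = \frac{\pi r_+^2}{\hbar G}.$$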
(i) βQ > 1/2: In this case, there are generally three horizons (two smaller ones for black holes and the largest one for the cosmological horizon) for M * H ≤ M ≤ M * C (< M 0 ), depending on M and Λ (Fig. 3, left), where M * H and M * C denote the values of the extremal mass M * in (19) at r * H and r * C for the extremal black holes and the (charged) Nariai solutions, respectively, with the vanishing Hawking temperature (Fig.1). When the mass is outside this range, i.e., M < M * H or M > M * C , the singularity at r = 0 becomes naked as in the RN black hole.
(ii) βQ = 1/2: In this case, only Schwarzschild-de Sitter (Sch-dS)-like black holes with a non-degenerate black hole horizon $r_+$ are possible, for $M_0 < M \le M^*_C$ (Fig. 3, center). It is peculiar that the black hole horizon $r_+$ shrinks to zero size, i.e., the point-like horizon, for the marginal case $M = M_0$ (Fig. 4). This corresponds to the extremal black hole, with vanishing Hawking temperature (Fig. 1), where the horizon degenerates at the origin. When the mass M is smaller than the marginal mass $M_0$ or larger than $M^*_C$, the singularity at r = 0 becomes naked. Note that, in this case, the GR limit β → ∞ does not exist.
(iii) βQ < 1/2: This case is similar to the case (ii), except that the marginal case M = M 0 has no (even a point) horizon so that its curvature singularity at r = 0 is naked always (Fig. 3, right), even though the (non-degenerate) black hole horizon can arbitrarily approach close to the point-like horizon r + = 0 in the limit M → M 0 , with the divergent Hawking temperature (Fig. 1) 3 .
III. PERTURBATION EQUATIONS
In this section, we consider the gravitational perturbations of an electrically charged black hole in EBI gravity with a positive cosmological constant Λ > 0, whose solutions for the metric and fields are given by (8) and (9), respectively. We follow the procedure of Chandrasekhar [19] for the perturbations of the RN solution, but in different conventions that agree with those of Wald [20]. We generalize the earlier computations of [21] to a positive cosmological constant, with some important corrections of errors. To accommodate the procedure of [19], it is useful to rewrite the metric (7) in the form (24), with $e^{2\mu_3} \equiv r^2$ and $e^{2\psi} \equiv r^2 \sin^2\theta$.
We will consider (24) as the background metric and consider first-order metric perturbations. In our linear perturbations of the spherically symmetric system, we may restrict ourselves to axisymmetric modes of perturbations without loss of generality [19], whose metric may generally be written as (26), where the metric functions ν, ψ, $\mu_2$, $\mu_3$, $q_0$, $q_2$, and $q_3$ are functions of only t, r, and θ, due to axisymmetry. There are two kinds of perturbations: the perturbations of $q_0$, $q_2$, and $q_3$, called "axial" perturbations, induce a dragging effect and impart a rotation to the black hole, whereas the increments δν, δψ, $\delta\mu_2$, and $\delta\mu_3$, called "polar" perturbations, do not impart such a rotation. These two perturbations are decoupled and can be treated independently [19]. In this paper, we will focus only on the axial perturbations for simplicity. To obtain the solutions for the perturbed metric (26), we first need to consider the perturbations of the BI and Einstein equations, (4) and (5). For this purpose, we adopt the tetrad formalism as in [19] and use the Roman indices a, b = 0, 1, 2, 3 for the tetrad indices and the Greek indices μ, ν = t, φ, r, θ for the curved coordinate indices. The tetrad basis $e^a{}_\mu$ satisfies $e^a{}_\mu e^b{}_\nu \eta_{ab} = g_{\mu\nu}$ with $\eta_{ab} = \mathrm{diag}(-1,+1,+1,+1)$, and for the metric (26) it can be read off directly.
[Footnote 3: ...its mass M still has a non-vanishing remnant $M_0$. If this tiny black hole with a unit charge $Q = \sqrt{4\pi\epsilon_0 \alpha \hbar c}$ (α is the fine-structure constant) is created at the Planck energy scale in the early universe, i.e., $M_0 \approx \sqrt{\hbar c/G}$, we obtain $\beta \approx (9/2)\,\Gamma(3/4)^4\, \pi^{-7/2} \alpha^{-3/2} (\epsilon_0 \hbar c)^{-1/2} G^{-1}$. If we consider $\alpha_{Pl} \sim 1.5$ in the early universe, in contrast to $\alpha \sim 10^{-2}$ at the current epoch, we have $\beta_{Pl} \sim 0.1\,(\epsilon_0 \hbar c)^{-1/2} G^{-1}$ and $\beta_{Pl} Q \approx 9\,\Gamma(3/4)^4 \pi^{-3} \alpha_{Pl}^{-1} G^{-1} \sim 0.4\, G^{-1}$.]
[Footnote 4: We use a metric with signature (− + + +) and Wald's conventions for the Riemann and Ricci tensors, which differ from [19,21,22].]
Tensors in the two bases are related by the tetrads; this applies, for example, to the field strength and the Ricci tensor. We will now consider the equations of motion (4) and (5) in the tetrad basis.
Here, (37), (38), and (39) represent Gauss's law and Ampère's law (with Maxwell's correction term) in curved space. On the other hand, (40), which corresponds to the φ-component of Ampère's law, shows the induced source terms corresponding to the electric currents $J^{(e)} \sim F_{ab} Q^{ab}$, due to nonlinear interactions between the electromagnetic and gravitational fields, similar to (33). These induced source terms in (40) and (33) represent the back-reaction from the dynamical metric functions $q_0$, $q_2$, and $q_3$, which are also driven by $T_{\mu\nu}$ in the Einstein equations (5).
Up to now, we have not assumed any specific solution for the background BI fields. Now, using the fact that the only non-zero components of F and D in the background solution (9) are the radial electric ones, and substituting ψ, ν, $\mu_2$, $\mu_3$ with ψ + δψ, ν + δν, $\mu_2 + \delta\mu_2$, $\mu_3 + \delta\mu_3$ in the above equations, we find the linearized versions of the perturbed BI equations.
B. The perturbed Ricci tensor equations
To find the metric perturbations of the Einstein equation (5), we may conveniently rewrite the Einstein equations (5) in the Ricci form (48). With a simple computation, one can easily find that the cosmological constant term does not appear explicitly in the perturbation of the Einstein equations (48). The linearized forms of the perturbed Ricci tensor equations are then given by (51). Since the only non-zero component of the background $F_{ab}$ is $F_{02}$, we replace $\delta F_{ab}$ by $F_{ab}$, except for $\delta F_{02}$, for simplification, following [19]. Explicit forms of the right-hand sides of the perturbed Ricci tensor equations (51) follow accordingly.

C. The wave equations for the axial perturbations

The linearized Einstein equations can be obtained by equating $\delta R_{ab}$ on the left-hand side of the above equations with the Ricci tensors computed from the metric (26) in the tetrad basis, whose explicit forms are given in the Appendix. As mentioned before, the perturbation equations fall into two classes: the axial perturbations, characterized by the non-vanishing of $q_0$, $q_2$, $q_3$, $F_{01}$, $F_{12}$, and $F_{13}$; and the polar perturbations, characterized by the non-vanishing of $\delta F_{02}$, $F_{03}$, $F_{23}$, δν, δψ, $\delta\mu_2$, and $\delta\mu_3$. In this paper, we consider only the axial perturbations, which are relatively easier and can be treated independently of the polar perturbations. For this purpose, we first consider (57) and (58). Eliminating $D_{12}$ and $D_{13}$ from (44) using (42) and (43), we obtain (60) and (61), which, after a suitable substitution, take the forms (66) and (67), where $\Delta \equiv r^2 e^{2\nu}$. Assuming that the perturbed fields $q_0$, $q_2$, $q_3$, and Q have a time dependence $e^{-i\omega t}$ and eliminating $q_0$ from (66) and (67), we obtain (68). Similarly, eliminating $(q_{0,2} - q_{2,0})_{,t}$ from (62) using (66), we obtain (69). We can further separate the variables r and θ in (68) and (69) using (70) and (71), where $C^m_n$ is the Gegenbauer function and we have used its recurrence relation. Substituting (70) and (71) into (68) and (69), we obtain two radial wave equations, where $l = 0, 1, 2, \cdots$ denotes the orbital number. Introducing the tortoise coordinate $r_*$ and further redefining Q and D, we find a pair of coupled equations for $H_1$ and $H_2$ [21], (79) and (80). To find the one-dimensional Schrödinger-type wave equations, we rewrite the coupled equations (79) and (80) in matrix form. The matrix $V_{ij}$ can be diagonalized by a similarity transformation, as in the RN case [19]. The two decoupled, one-dimensional Schrödinger-type equations then follow, with the real-valued effective potentials $U_1$ and $U_2$ given by (87) and (88). Here, the decoupled solutions $Z_1$ and $Z_2$ are related to the coupled solutions $H_1$ and $H_2$, which are basically the perturbations of the metric variable $Q_{23}$ and the field strength $D_{01}$ [19]. The explicit forms of $U_1$, $U_2$, $q_1$, and $q_2$ become available upon substituting $e^{2\nu}$ and $e^{\varphi}$ with the background solutions (25) and (78), respectively, when we compute the QNMs in the next section.
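In the notation above, the decoupled equations take the standard Regge-Wheeler form shown below; this is a reconstruction based on the surrounding description rather than the paper's own display:
$$\frac{d^2 Z_i}{dr_*^2} + \left(\omega^2 - U_i\right) Z_i = 0, \qquad i = 1, 2,$$
with $U_i$ the real-valued effective potentials of (87) and (88), and $r_*$ the tortoise coordinate.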
IV. QUASI-NORMAL MODES FOR THE AXIAL PERTURBATIONS: THE WKB APPROACH
A. The WKB approximation

Quasi-normal modes (QNMs) are defined as the solutions which are purely ingoing, $\sim e^{-i(\omega t + k r_*)}$, near the (outer) black hole horizon $r_+$, where $r_* = -\infty$. In this paper, in order to compute the QNMs, we consider the WKB approximation method [9-11], which is simple to apply in our case, though it has several inherent limitations, as explained below. Its applicability also depends on several conditions, such as the (asymptotic) boundary conditions; our dS space-time satisfies these conditions, which is a technical reason for using the WKB approach in this paper. In this section, we briefly summarize the basic results of the WKB approach.
The master equation for the WKB approach may be written as a one-dimensional Schrödinger-type equation (92), where the potential function $-Q(x)$ is assumed to be constant at the asymptotic boundaries $x = \pm\infty$, although not necessarily the same at both ends, and to have a single maximum at some finite $x_0$ (Fig. 5). In the black hole case, Φ represents the radial part of the perturbation with the usual time dependence of a positive-frequency mode $e^{-i\omega t}$, as well as the appropriate angular dependence. The coordinate x is related to the tortoise coordinate $r_*$, which ranges from $-\infty$ at the (outer) black hole horizon $r_+$ to $+\infty$ at spatial infinity $r = \infty$ for the asymptotically flat case [10], or at the cosmological horizon $r_{++}$ for the asymptotically dS case [22], as adopted in this paper.
[Fig. 5 caption: A typical plot of the real part of the potential function $-Q(x) \equiv U - \omega^2$ with the effective potential U, which is independent of the frequency $\omega \equiv \omega_R - i\omega_I$; the (complex-valued) "energy" and two turning points at $x_1$ and $x_2$ are assumed.]
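At lowest order, the matching described below yields the familiar Schutz-Will quantization condition. The following is a standard-textbook reconstruction consistent with the notation $Q_0 = \omega^2 - U_0$, not the paper's own display:
$$\frac{i\,Q_0}{\sqrt{2\,Q_0''}} = n + \frac{1}{2} \quad\Longleftrightarrow\quad \omega^2 = U_0 - i\left(n + \tfrac{1}{2}\right)\sqrt{-2\,U_0''}, \qquad n = 0, 1, 2, \dots,$$
where primes denote $d/dr_*$ evaluated at the potential peak; the higher-order corrections $\Lambda_{2k}$ and $\Omega_{2k+1}$ quoted in the text refine this condition.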
In order to solve the one-dimensional potential barrier problem of (92), we adopt a modified WKB approach, matching the two exterior WKB solutions in regions I and III across the two turning points $x_1$ and $x_2$ simultaneously [23]. In the interior region II, we expand the potential function $-Q(x)$ near its maximum at $x = x_0$ and use it to connect the two exterior WKB solutions. For the black hole case with the quasi-normal-mode boundary conditions, i.e., purely "outgoing" (from the potential barrier), the result may be written as the simple semi-analytic formula (93), where the primes denote derivatives with respect to x and $Q''_0 > 0$ due to the extremum of the potential $-Q$ at $x_0$. Here, the subscripts 2k and 2k+1 denote the order of the WKB approximation, and $\Lambda_{2k}$, $\Omega_{2k+1}$ are polynomials of the derivatives $Q^{(m)}_0$. It is important to note that the even- and odd-order terms become pure-imaginary and real-valued, respectively, so that, considering $Q_0 = \omega^2 - U_0$ in the black hole case, they play distinct roles in obtaining the quasi-normal frequencies $\omega = \omega_R - i\omega_I$. Then, the result up to third WKB order [10,11] is given by (94), where $\alpha \equiv n + 1/2$ ($n = 0, 1, 2, \cdots$), $\bar\Omega_3 \equiv \Omega_3/(n + 1/2)$, and the primes and the superscript (m) of $U^{(m)}_0$ denote derivatives of the effective potential $U(r_*)$ with respect to the tortoise coordinate $r_*$, evaluated at its maximum point $r_0$. The maximum point $r_0$ is determined by $dU/dr_* = e^{2\nu}\, dU/dr = 0$, which has no analytic solution in our case, where the effective potentials $U_i$ in (87) and (88) are quite complicated, so that $r_0$ can be found only numerically. The QNMs can then be found numerically by solving (94), using the numerical computations of (95) and (96), as functions of the parameters of the theory.
[Footnote 7: This fact does not seem to be well emphasized in the literature. This causes some misleading results, for example in [21], due to the related typos in the original work on the third-order terms in [10], where the "i" factor in the even part of (93) is missing. Similar typos also appear in the 4th-order terms in [24] (Appendix A), in contrast to the correct i factors in the corresponding 4th-order terms in [10] (Appendix) and [25] (private communications; we thank the authors for kindly sharing their results).]
[Footnote 8: The convergence at each order may not be guaranteed in general, since the expansions are merely asymptotic. However, the popular third-order result of [10], which works well in many cases, can also be obtained as the lower-order result of the phase-integral approach, which allows convergent expansions with controlled accuracy [26] and works with real-valued WKB correction terms. For extensions to higher orders, see [10] (Appendix) (4th order), [24] (6th order), and [25] (13th order). See also [27] for a recent review.]

B. QNMs vs. β

[Fig. 7 caption: QNMs with n = 0 as functions of the BI parameter β.]
As can be seen in Fig. 6 (left), there is one degenerate event horizon for $\beta = \beta_{min}$, and one degenerate black hole horizon with one cosmological horizon for $\beta = \beta_{max}$. In between, $\beta_{min} < \beta < \beta_{max}$, there are generally three horizons (two black hole horizons and one cosmological horizon), depending on β. Only in this dS black hole regime may the WKB formula (94) for QNMs be applied. Fig. 7 shows that the black holes are stable for the range $\beta_{min} < \beta \le \beta_{max}$, since their metric perturbations decay (the "ring-down" phase) with the decay time $\tau \sim \omega_I^{-1}$ for $\omega_I > 0$. Moreover, the results show $\omega \approx 0$, i.e., the "frozen" mode, at $\beta = \beta_{min} \approx 0.103$, and no QNMs beyond $\beta_{max} \approx 1.166$, where the WKB formula (94) does not apply, as expected from the behaviors of the metric functions and the effective potentials in Fig. 6. As β increases, the negative imaginary parts of the QNMs, $\omega_I$, first increase monotonically and reach maxima around βQ = 1/2 (here, β = 0.5 with Q = 1), which divides the Sch-type and RN-type black holes [15], and then decrease monotonically. Outside this range, $\beta < \beta_{min}$ or $\beta > \beta_{max}$, one cannot say anything about stability, since the WKB formula for QNMs cannot be applied due to the lack of the required boundary conditions, and other analyses are needed to test stability. For example, the WKB formula says nothing about the stability of the naked singularity for $\beta > \beta_{max}$. On the other hand, the real parts of the QNMs, $\omega_R$, increase monotonically and saturate to maximum values corresponding to those of the RN-dS case [22,34], showing two decoupled oscillation modes with a rough relation $\omega_{R(1)} \sim 2\,\omega_{R(2)}$. This is in contrast to the imaginary parts, with a rough relation $\omega_{I(1)} \sim \omega_{I(2)}$, so that the real-to-imaginary frequency ratios are $\omega_{R(1)}/\omega_{I(1)} \sim 2\,\omega_{R(2)}/\omega_{I(2)} \sim 10$.

C. QNMs vs. Q

Figs. 8 and 9 show the effective potentials and the corresponding lowest QNMs as functions of the electric charge Q. As can be seen in Fig. 8, there is one degenerate event horizon (charged Nariai solution) with one Cauchy horizon for $Q = Q_{min}$, and one degenerate black hole horizon with one cosmological horizon for $Q = Q_{max}$. In between, $Q_{min} < Q < Q_{max}$, there are generally three (two black hole and one cosmological) horizons, depending on Q. The results show $\omega \approx 0$ at $Q = Q_{min} \approx 0.9320$ and no QNMs beyond $Q_{max} \approx 1.0041$, as expected from the behaviors of the metric functions and the effective potentials in Fig. 8. The behaviors of $\omega_I$ and $\omega_R$ are almost the same as those of Fig. 7 for β < 0.5, with the similar relations $\omega_{R(1)} \sim 2\,\omega_{R(2)}$, $\omega_{I(1)} \sim \omega_{I(2)}$, and $\omega_{R(1)}/\omega_{I(1)} \sim 2\,\omega_{R(2)}/\omega_{I(2)} \sim 10$; the other discussions are similar.

D. QNMs vs. M

Here, we divide into three cases depending on the value of βQ, which separates the Sch-type and RN-type at βQ = 1/2 [15]. First, Figs. 10 and 11 show the effective potentials and the corresponding lowest QNMs as functions of the mass M for an RN-type black hole with fixed βQ > 1/2. As can be seen in Fig. 10, there is one degenerate event horizon (charged Nariai solution) with one Cauchy horizon for $M = M_{max}$, and one degenerate black hole horizon for $M = M_{min}$. Figs. 12 and 13 show the corresponding case for βQ = 1/2; here the behavior of $\omega_R$ is similar to that of the βQ > 1/2 case in Fig. 10, except for a rapid but finite oscillation near $M_{min}$.
Figs. 14 and 15 show the cases for a Sch-type black hole with βQ < 1/2. The basic difference from the case of βQ = 1/2 in Figs. 12 and 13 is that there is no black hole horizon at all (not even a point-like one) in addition to the cosmological horizon for $M = M_{min}$, even though the (non-degenerate) black hole horizon can approach arbitrarily close to $r_+ = 0$ as $M \to M_0$, with divergent Hawking temperature (Fig. 1). One more remarkable difference is that, as $M \to M_{min}$, ω diverges. Fig. 15 also shows $\omega \approx 0$ at $M_{max} \approx 0.9774$, similarly to the case of βQ > 1/2 in Fig. 11.
[Footnote 10: In the numerical computations of the WKB formula (93), we find that the "smallness" of $U'_0$, which should be "0" by definition, can be a good barometer of the numerical errors. As $M \to M_{min}$, the (non-vanishing) magnitude of $U'_0$ increases, as shown in Fig. 17.]
[Fig. 18 caption: The plots of $f = e^{2\nu(r)}$ and the effective potentials $U_1(r)$ and $U_2(r)$ for varying orbital number l with a fixed βQ = 1/2. Here, we consider l = 0 to l = 5 (bottom to top in the barrier region) with M = 0.95, β = 1/2, Q = 1, and Λ = 0.2, for which there are two horizons (one black hole and one cosmological).]
E. QNMs vs. l
We also divide into three cases depending on the value of βQ. Figs. 18 and 19 show the effective potentials and the corresponding lowest QNMs as functions of the orbital number l for fixed βQ > 1/2. As shown in Fig. 18, there are three (two black hole and one cosmological) horizons, independent of l (Fig. 18, left). On the other hand, the effective potentials depend on l, representing the angular momentum barriers, with a single maximum for higher l. However, for lower l, there is either no local maximum (but a minimum) in the effective potential (the l = 0 case for $U_2$), or there are "three turning points", with one additional turning point between the usual two turning points, i.e., the outer black hole horizon at $r_+ \approx 1.2$ and the cosmological horizon at $r_{++} \approx 2.4$ (the l = 0 case for $U_1$ and the l = 1 case for $U_2$), so that the usual WKB formula does not apply. One remarkable feature of the result in Fig. 19 is that, in contrast to the other varying parameters, $\omega_{I(1)}$ and $\omega_{I(2)}$ respond differently to varying l, while $\omega_{R(1)}$ and $\omega_{R(2)}$ respond almost identically. In particular, in the large-l limit, $\omega_I$ approaches asymptotically a limiting value $\omega_I \approx 0.03055$, while $\omega_R$ shows a linear dependence $\omega_R = \sigma l$ with $\sigma \approx 0.1092$, similar to earlier results [11]. Figs. 20 and 21 show the case βQ = 1/2. The main difference from the βQ > 1/2 case in Figs. 18 and 19 is the "magnitude flip" between $\omega_{I(1)}$ and $\omega_{I(2)}$ in Fig. 21. In the large-l limit, $\omega_I$ approaches asymptotically a limiting value $\omega_I \approx 0.03349$, while $\omega_R$ shows a linear dependence $\omega_R = \sigma l$ with $\sigma \approx 0.1006$. On the other hand, for lower l, i.e., l = 0 for $U_1$ and l = 0, 1 for $U_2$, the usual WKB formula does not apply, due to either "three turning points" (l = 0 for $U_1$ and l = 1 for $U_2$) or "no local maximum" (l = 0 for $U_2$) between the usual two turning points $r_+ \approx 1.4$ and $r_{++} \approx 2.4$.
Figs. 22 and 23 show the cases for βQ < 1/2, but the results show no qualitative difference from the βQ = 1/2 case in Figs. 20 and 21. In the large-l limit, $\omega_I$ approaches asymptotically a limiting value $\omega_I \approx 0.03345$, while $\omega_R$ shows a linear dependence $\omega_R = \sigma l$ with $\sigma \approx 0.0910$. For lower l, i.e., l = 0 for $U_1$ and l = 0, 1 for $U_2$, the usual WKB formula does not apply, due to either "three turning points" (l = 0 for $U_1$ and l = 1 for $U_2$) or "no local maximum" (l = 0 for $U_2$) between the usual two turning points $r_+ \approx 1.5$ and $r_{++} \approx 2.4$.

F. QNMs vs. Λ

Turning to the dependence on the cosmological constant, for βQ > 1/2 (Figs. 24 and 25) there is one degenerate event horizon (charged Nariai solution) with one Cauchy horizon for $\Lambda = \Lambda_{max}$, and one degenerate black hole horizon with one cosmological horizon for $\Lambda = \Lambda_{min}$. In between, $\Lambda_{min} < \Lambda < \Lambda_{max}$, there are generally three horizons depending on Λ, similar to Fig. 10. Fig. 25 shows $\omega \approx 0$ at $\Lambda_{max} \approx 0.2352$ and no QNMs below $\Lambda_{min} \approx 0.1796$, similar to the case in Fig. 11. Figs. 26 and 27 show the case βQ = 1/2. As can be seen in Fig. 26, there is one degenerate event horizon for $\Lambda = \Lambda_{max}$ and one non-degenerate black hole horizon for Λ = "0", i.e., the flat case. In between, $0 < \Lambda < \Lambda_{max}$, there are generally two horizons (one black hole and one cosmological). The main difference of Fig. 27 from the βQ > 1/2 case in Fig. 25 is the "magnitude flip" between $\omega_{I(1)}$ and $\omega_{I(2)}$, as in Figs. 13 and 21. Figs. 28 and 29 show the case βQ < 1/2, with no qualitative difference in the results from the βQ = 1/2 case in Fig. 27.
V. EXACT SOLUTION
Generally, finding analytic expressions for QNMs is difficult and one needs to consider their numerical computation. However, there exist some special cases where exact solutions can be found. In this section, we consider the exact solution near the (charged) Nariai solution, where the black hole horizon r_+ and the cosmological horizon r_++ merge. To this end, we first note that the metric function f(r) = e^{2ν(r)} in (7) and (24) can be written [28,29], near the Nariai limit r_+ → r_++, as

f(r) ≈ (2κ_+/(r_++ − r_+)) (r − r_+)(r_++ − r), (98)

where ǫ ≡ (r_++ − r_+)/r_+ ≪ 1 and κ_+ ≡ (1/2)(df/dr)|_{r=r_+} is the surface gravity at the horizon r_+, which is related to the Hawking temperature T_H = ℏκ/(2π) in (16). In this limit, the tortoise coordinate r_*, for the physical region r_+ ≤ r ≤ r_++, can be obtained as

r_* = (1/(2κ_+)) ln[(r − r_+)/(r_++ − r)], (99)

where we have chosen the integration constant such that r_* = −∞ at r = r_+ and r_* = +∞ at r = r_++. Then, one can invert the relation (99) to get

r(r_*) = r_+ + (r_++ − r_+) e^{2κ_+ r_*}/(1 + e^{2κ_+ r_*}) (100)

and

f(r_*) = κ_+ (r_++ − r_+)/(2 cosh²(κ_+ r_*)). (101)

Now, substituting (100) and (101) in the effective potentials U_i of (87) and (88), one can find

U_i(r_*) = U_0/cosh²(κ_+ r_*), (102)

where U_0 is the peak value U_i(r_* = 0) of each potential. Here, we have used f(r_*) ∼ O(ǫ²) and df/dr = −2κ_+ tanh(κ_+ r_*) + O(ǫ³) ∼ O(ǫ²) from κ_+ ∼ O(ǫ) near the Nariai limit. The potential in (102) is known as the Pöschl-Teller potential [30] and its QNMs can be solved analytically [31] as

ω = κ_+ [ (U_0/κ_+² − 1/4)^{1/2} − i(n + 1/2) ], (108)

where n = 0, 1, 2, ... is the overtone mode number. In Figs. 30-33, considering the dependence on the Hawking temperature T_H or the black-hole horizon radius r_+, the analytic result for the QNMs in (108) is compared with the numerical results based on the WKB approximations in Sec. IV. The results show quite good agreement for the imaginary parts ω_I even beyond the Nariai limit (Figs. 30 and 31). The best-fit curves for the numerical results in Fig. 30 near the Nariai limit are ω_I(1)/T_H ≈ 3.1559, 3.0757, 3.013 and ω_I(2)/T_H ≈ 3.1555, 3.0887, 2.996 for the U_1 and U_2 cases when β = 1, 1/2, 1/3, respectively, while the analytic result is ω_I/T_H = π from (108) for the lowest mode, n = 0. This result may indicate the accuracy of our WKB approach itself. This is contrary to the real parts ω_R, which depart from the numerical results beyond the Nariai limit (Figs. 32 and 33). But this would be partly due to the non-constant or scale-dependent nature of the real part of ω/κ_+ in (108) for the EBI-dS case, which may be compared with the RN-dS case in [29] or the Sch-dS case in [28,32]. Before finishing this section, we note that the Nariai limit is also associated with the large-l limit, where U_0 ≈ κ_+(r_++ − r_+) l(l+1)/2r_+² and the linear dependence ω_R ≈ σl with σ = [κ_+(r_++ − r_+)/2r_+²]^{1/2}, as observed in Figs. 19, 21, and 23, which may be compared with the RN case in [31] or the Sch-dS case in [33]. On the other hand, the asymptotic approach of ω_I to a limiting value corresponds to ω_I = κ_+(n + 1/2) in (108) 12.

Fig. 30 (caption): The orange lines denote the exact solutions near the Nariai limit at the bottom, and the blue lines represent the best-fit curves near the limit.
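The inversion leading to (100) and (101) can be checked symbolically; a minimal sympy sketch, assuming only the quadratic near-Nariai form of f(r) in (98):

import sympy as sp

r = sp.symbols('r', positive=True)
rs = sp.symbols('r_star', real=True)
rp, rpp, kp = sp.symbols('r_p r_pp kappa_p', positive=True)

# quadratic near-Nariai metric function, eq. (98)
f = 2*kp/(rpp - rp) * (r - rp) * (rpp - r)

# (1/2) df/dr at r = r_p must reproduce the surface gravity kappa_p
assert sp.simplify(sp.diff(f, r).subs(r, rp)/2 - kp) == 0

# inverse of the tortoise coordinate (99), i.e. eq. (100)
r_of_rs = rp + (rpp - rp)*sp.exp(2*kp*rs)/(1 + sp.exp(2*kp*rs))

# f(r_*) should reduce to the Poschl-Teller form of eq. (101)
target = kp*(rpp - rp)/(2*sp.cosh(kp*rs)**2)
residual = sp.simplify((f.subs(r, r_of_rs) - target).rewrite(sp.exp))
assert residual == 0
print('near-Nariai relations (98)-(101) verified')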
We also remark that Figs. 34 and 35 show, as r_+ → 0, the divergent ω of the power-law form ω_I ≈ b̄/r_+^ā and ω_R ≈ d̄/r_+^c̄, with (ā, b̄, c̄, d̄) ≈ (1.0239, 2.7715, 0.9834, 0.4133) and (1.0356, 2.9366, 1.0215, 1.0623) for U_1 and U_2, respectively. This is consistent with the similar behavior in Fig. 16. It is interesting to note that a similar divergence can also be seen in the r_+ → 0 limit of the Nariai-limit formula (108), with a quite close exponent ā = 1 for ω_I but a somewhat different exponent c̄ = 3/2 for ω_R, even though the point-like horizon limit r_+ → 0 would be far beyond the Nariai limit.
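The quoted power-law fits are easy to reproduce with scipy; the (r_+, ω_I) values below are synthetic placeholders rather than the data behind Figs. 34 and 35:

import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in data mimicking omega_I ~ b / r_+^a as r_+ -> 0.
r_plus = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(r_plus.size)
omega_I = 2.7715 / r_plus**1.0239 * noise

def power_law(r, a, b):
    return b / r**a

(a_fit, b_fit), _ = curve_fit(power_law, r_plus, omega_I, p0=(1.0, 1.0))
print(f"fitted exponent a ~ {a_fit:.4f}, coefficient b ~ {b_fit:.4f}")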
VI. DISCUSSION
We have studied, using the Schutz-Iyer-Will WKB method, the QNMs for the axial gravitational perturbations of electrically charged black holes in EBI gravity with a positive cosmological constant. We have found that all the axial perturbations are stable, i.e., have "non-negative" ω_I, including the flat (Λ = 0) as well as dS (Λ > 0) cases, for all cases where the WKB method applies, correcting some errors in the literature [21].
It is also found that there are cases where the conventional WKB method does not apply, like the three-turning-points problems for the lower-l cases in Figs. 18 -23, and a more generalized formalism is necessary for studying their QNMs and stabilities [26]. Regarding the degenerate horizons, where the WKB formula can be marginally applicable, our results seem to be consistent with stability [19,22,34] 13 .
On the other hand, we may note in our case that there are actually three types of degenerate horizons with the vanishing Hawking temperature. The first is the Nariai-type horizons, where the (outer) black hole horizons, regardless of whether it is the RN-type or the Sch-type, coincide with the cosmological horizon. In this case, we find that the QNMs are completely "frozen", i.e., ω ≈ 0, and this would indicate the solution as the final state.

12 Of course, these two behaviors may not be directly compared with those of Figs. 19, 21, and 23 because of different proportional coefficients depending on whether it is near the Nariai limit or not. Actually, we have ω = −0.01763i + 0.12083 l, −0.02278i + 0.10775 l, −0.02089i + 0.08954 l near the Nariai limit, while ω = −0.03055i + 0.1092 l, −0.03349i + 0.1006 l, −0.03345i + 0.0910 l for the cases of Figs. 19, 21, and 23, which correspond to ǫ = 0.99091, 0.70872, 0.57611, respectively. It is rather surprising that we have rough agreements already even beyond the Nariai limit.

13 This may be contrasted with the "horizon instability" of axisymmetric extremal horizons [35]. It would be a challenging problem to study the horizon instability of axisymmetric extremal horizons within WKB approaches or in a more generalized approach [26].

| 9,039.8 | 2020-04-25T00:00:00.000 | [ "Physics" ] |
Chemical Speciation, Bioavailability and Risk Assessment of Potentially Toxic Metals in Soils Around Petroleum Product Marketing Company as Environmental Degradation Indicators
The study aims at investigating the chemical speciation, bioavailability, and risk assessment of some selected metals in soils around a refined petroleum depot, using the concentrations of the metals as variables to ascertain the impacts of the activities within the depot. Surface soils were collected from within the premises of the Pipelines and Product Marketing Company, Ibadan, Nigeria, while control samples were collected 200 m away from the study location. Electrical conductivity and pH were measured using a calibrated dual-purpose meter, while elemental analysis was done using the atomic absorption spectroscopy (AAS) analytical technique. The results showed that the soils exhibited low ecological risk; minor enrichment for Mn, moderately severe enrichment for Ni and Co, severe enrichment for Cr, and extremely severe enrichment for Pb, Zn, and Cd; low contamination factors for Pb, Ni, Mn, Cr, Co, and Fe and moderate contamination for Zn and Cd. Geo-accumulation index results indicated soils unpolluted with Ni, Mn, Cr, Co, and Fe, unpolluted to moderately polluted with Pb and Zn, and moderately to strongly polluted with Cd. Inter-element clustering results indicated chemical affinity and/or similar genetic origin among the elements. Speciation analysis suggested that Fe, Co, Cr, Cd, and Ni occurred mainly in the residual fraction and Pb and Zn in the carbonate fraction, while Mn had its highest percentage in the Fe-Mn oxides fraction. Percentage mobility and bioavailability showed that most of the metals are immobile and non-bioavailable. The study concluded that the oil-impacted soils were contaminated with most of the metals, but with low ecological risk.
Introduction
The studied petroleum product marketing company (oil depot), a subsidiary of the Nigerian National Petroleum Corporation (NNPC), supplies various refined petroleum products to retail outlets for end users in the southwestern part of the country, as well as some northern states. The depot, situated in Ibadan, Nigeria, has the largest loading facilities in Southwestern Nigeria. It was established in a remote area suitable for depot activities; however, as a result of population explosion, expanses of land close to the perimeter fence of the storage facilities are currently being exploited for residential purposes, and the occupants engage in farming activities even within and around the perimeter fence of the oil depot. A shallow stream, which is used for farming and other activities, traverses the depot premises and extends to the residential areas. When these are contaminated with oil products, there will be great health and environmental hazards to humans, aquatic life, and other forms of life in the locality as a result of infiltration, plant uptake, bioaccumulation, and biomagnification in the food chain. The existence of petroleum products has significant environmental effects due to diverse petroleum development processes. Tissot and Welte (1980) stated that crude oil commonly amasses trace metals from source rocks and sea-salt intrusion, during migration, and via its treatment and refining processes. These contaminants can enter the environmental matrices, affect flora and fauna, and finally enter the food chain. Thus, it is necessary to evaluate the concentrations of the selected contaminants (Pb, Ni, Zn, Cd, Mn, Cr, Co, and Fe) in the premises of the study area as indicators of environmental degradation, hence this study.
In this study, the elemental analysis was carried out using the atomic absorption spectrometry (AAS) analytical technique. Its principle is founded on the absorption of electromagnetic radiation at a particular wavelength by unexcited ground-state gaseous atoms to generate a signal that can be measured. The absorbed radiation is directly proportional to the concentration of the element present in the path length of the optical device. The technique is regularly used in analytical chemistry for determining the concentration of a particular element of concern (the analyte) in the sample matrix. The AAS technique is capable of determining the levels of more than 70 different elements in solution and is employed in several areas of science (Harvey 2000).
Study area
The study area is in Ibadan, South-Western Nigeria, with coordinates at latitude 7º15'2" N and longitude 5º12'36" E, in the tropical rainforest zone of Nigeria. Figure 1 presents the map showing the sampling points in the study area.
Sampling and sample treatment
Surface soil samples from diverse points in the study area were collected with a hand trowel, dropped into pre-labeled airtight vessels, and conveyed to the laboratory for analysis. Control samples were obtained outside the premises of the oil depot, about 200 m away, where there were little or no anthropogenic influences (Table 1). Rocks, pebbles, and stones were discarded from the soil samples, which were then air-dried at room temperature for two weeks, well protected with plain white sheets to prevent fugitive dusts from sullying the samples. The samples were then powdered with a thoroughly cleaned agate mortar and pestle, and the ground samples were mixed to homogenize the soil particle size. The prepared samples were then divided into two sets for pH/conductivity determination and elemental analysis, respectively.
pH and Electrical conductivity determination
Standard analytical methods described by Bailey (1986) and Adebiyi and Adeyemi (2010) were used for pH and electrical conductivity determinations, using a calibrated dual-purpose digital pH/electrical-conductivity meter (Jenway 4510) at 25 °C. The pH values were also verified using a handheld glass-electrode meter HI 2209 (Hanna Instruments).
Quality assurance
The technique of Laxen and Harrison (1981) was implemented for the cleaning of the sample bottles and glassware, while blank and triplicate analyses were done on the samples as reported by Adebiyi and Ayeni (2010).
Recovery analysis/Quality control
The accuracy of the AAS analytical procedure was established by spiking 0.5 g of some soil samples with 5.0 μg/g of a standard mixture of the heavy metal solutions of the selected metals (Cd, Cr, Cu, and Ni), while another set of 0.5 g soil samples was weighed accurately but without the spiked standard solution. Both the spiked and the unspiked soil samples were taken through the same digestion process and subjected to AAS analysis. The percentage of the heavy metal recovered (%R) was evaluated using the following expression:

%R = ((C′ − C)/B) × 100,

where C′ = concentration of the heavy metal in the spiked soil sample, C = concentration of the heavy metal in the unspiked sample, and B = amount of heavy metal used for spiking (Oyewole and Adebiyi, 2017).
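A direct transcription of this recovery calculation, with hypothetical concentration values:

# Percentage recovery for a spiked soil sample, following the expression above.
def percent_recovery(c_spiked, c_unspiked, amount_spiked):
    """%R = ((C' - C) / B) x 100, with concentrations in ug/g."""
    return (c_spiked - c_unspiked) / amount_spiked * 100.0

# Hypothetical example: a 5.0 ug/g spike.
print(f"{percent_recovery(6.1, 1.4, 5.0):.1f}%")  # -> 94.0%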
Total metal analysis
Based on the description by Adebiyi and Ayeni (2010), powdered soil samples (0.5 g) were weighed precisely and pretreated using 20 mL of aqua regia (HCl and HNO3 acids in the ratio 3:1).
Water soluble fraction (F1)
A 0.5 g portion of each homogenized soil sample was thoroughly mixed with 10 mL of distilled water, and the mixture was subjected to constant shaking on a mechanical shaker for 1 hour. It was then permitted to stand for 30 minutes, and the supernatant was transferred into a volumetric flask and made up to 25 mL with doubly distilled water. The filtrate solution was poured into cleaned plastic containers and stored for elemental analysis.
Exchangeable fraction (F2)
The residue from F1 was mixed with 20 mL of 1 M MgCl2 solution at pH 7 and shaken carefully for 1 hour at room temperature, then permitted to stand for 30 minutes. The supernatant was decanted and made up to the mark in a 25 mL standard volumetric flask with distilled water.
Fraction bound to carbonates (F3)
The residue from F2 was treated with 20 mL of 1 M C2H3NaO2/CH3COOH buffer at pH 5 for 5 hours at room temperature. The ensuing blend was permitted to stand for 30 minutes, and the supernatant was decanted from the residual mixture.
Fraction bound to iron and manganese oxides (F4)
The residue from F3 was extracted under mildly reducing conditions: 0.69 g of hydroxylamine hydrochloride (NH2OH·HCl) was dissolved in 250 mL of water in a standard volumetric flask to prepare 0.04 M NH2OH·HCl. The residue was then extracted with 20 mL of the 0.04 M NH2OH·HCl in 25% (v/v) acetic acid, with shaking at 96 ± 1 ºC in a water bath for 6 hours. The extract was decanted from the residual soil sample into a 25 mL standard volumetric flask and made up to the mark with doubly distilled water.
Fraction bound to organic matter and sulphide (F5)
The residue from F4 was oxidized as follows: 3 mL of 0.02 M HNO3 and 5 mL of 30% (v/v) hydrogen peroxide at pH 2 were added to the residue, and the mixture was heated to 85 ºC in a water bath for 2 hours with intermittent shaking and then allowed to cool. This was followed by the addition of 3 mL of 30% hydrogen peroxide that had been adjusted to pH 2 with HNO3. The mixture was then heated at 85 ºC for 3 hours with intermittent shaking and allowed to cool, followed by the addition of 5 mL of 3.2 M ammonium acetate in 20% (v/v) HNO3 and dilution to a final volume of 20 mL with doubly distilled water.
The extracted metal solution was then decanted from the residual sediment, which was used for the next and final extraction.
Residual or inert fraction (F6)
The final fraction, the residue from F5, was oven-dried at 105 ºC and digested with a mixture of 5 mL concentrated nitric acid (HNO3, 70% w/w), 10 mL hydrofluoric acid (HF, 40% w/w), and 10 mL perchloric acid (HClO4, 60% w/w) in Teflon beakers. The supernatant was poured into a 25 mL volumetric flask, made up to the mark with doubly distilled water, and taken for elemental analysis using the AAS technique.
Data management
Analysis of the geochemical data gathered in this study involved the application of statistical techniques, including descriptive statistics (range, mean, standard deviation), enrichment factor, geo-accumulation index (Igeo), contamination factor, modified degree of contamination, pollution load index, t-test, cluster analysis, principal component analysis/ANOVA, and potential ecological risk assessment, all deployed to interpret and explain the data obtained in this research. The temporal and spatial distribution patterns of the analysed metals were evaluated using the coefficient of variation. Information about the metals available for transfer within the studied ecosystem was obtained through the mobility factor (Oyewole and Adebiyi, 2017).
Mobility factor (MF)
The mobility of metals in the studied soils was appraised on the basis of the absolute and relative contents of the fractions weakly bound to the soil components. The relative index of metal mobility was determined as a mobility factor (MF) according to the equation below (Salbu et al., 1998):

MF = [(F1 + F2 + F3) / (F1 + F2 + F3 + F4 + F5 + F6)] × 100.
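In code, with the six operationally defined fractions as inputs (the values shown are hypothetical):

# Mobility factor (MF) after Salbu et al. (1998): weakly bound fractions
# (F1-F3) relative to the sum of all six fractions, in percent.
def mobility_factor(f1, f2, f3, f4, f5, f6):
    mobile = f1 + f2 + f3
    total = mobile + f4 + f5 + f6
    return 100.0 * mobile / total

# Hypothetical fraction concentrations (mg/kg) for one metal:
print(f"MF = {mobility_factor(0.2, 0.5, 1.1, 2.0, 0.8, 4.4):.1f}%")  # -> 20.0%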
Contamination factor (CF)
The level of contamination of the studied soils by the elements is expressed in terms of a contamination factor (CF), calculated using the following expression:

CF = C_metal / C_background,

where C_metal is the concentration of the metal in the studied soil and C_background is its geochemical background concentration. CF < 1 indicates low contamination; 1 ≤ CF < 3 implies moderate contamination; 3 ≤ CF ≤ 6 denotes considerable contamination; and CF > 6 indicates a very high contamination level (Hakanson, 1980).
Geoaccumulation index (Igeo)
The geo-accumulation index (Igeo) is employed to quantify the extent of heavy metal contamination associated with the soils. It was calculated using the equation below:

Igeo = log2[Cn / (1.5 × Bn)],

where Cn is the measured concentration of the metal, Bn is its geochemical background value, and the factor 1.5 corrects for possible variations in the background data.
Enrichment factor (E.F)
Enrichment factor (EF) was used in the study to assess the relative contributions of natural and anthropogenic heavy metal inputs to soil (Barbieri, 2016). It has also been used to indicate the degree of pollution or contamination or both. It is calculated using the expression: For the case of this study, Iron (Fe) was the referred element as a result of its abundance value of 4700 mg/kg.
Modified degree of contamination (mCd)
The degree of contamination by the metals of interest in soil was determined by the equation below:

mCd = (Σ CF) / N,

where N is the number of analyzed metals and CF is the contamination factor. The modified degree of contamination is divided into seven categories: mCd < 1.5, nil to very low degree of contamination; 1.5 ≤ mCd < 2, low degree of contamination; 2 ≤ mCd < 4, moderate degree of contamination; 4 ≤ mCd < 8, high degree of contamination; 8 ≤ mCd < 16, very high degree of contamination; 16 ≤ mCd < 32, extremely high degree of contamination; and mCd ≥ 32, ultra-high degree of contamination (Abrahim and Parker, 2008).
Pollution load index (PLI)
The pollution load index, proposed by Tomlinson et al. (1980), was determined by the following equation:

PLI = (CF1 × CF2 × ... × CFN)^(1/N),

where N is the number of metals under study and CF is the contamination factor. The PLI gives an estimate of the metal contamination status and of the necessary action that should be taken.
There are three indicator ranges for the pollution load index of a particular site: PLI < 1 indicates perfection (no pollution); PLI = 1 means that only baseline levels of pollutants are present; and PLI > 1 indicates that the quality of the site has deteriorated (Tomlinson et al., 1980).
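Taken together, these indices reduce to a few arithmetic operations on measured and background concentrations; a minimal Python sketch, with all input values as hypothetical placeholders:

import math

def contamination_factor(c_sample, c_background):
    return c_sample / c_background

def igeo(c_sample, c_background):
    # geo-accumulation index; the factor 1.5 corrects background variability
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, fe_sample, c_background, fe_background):
    # Fe is the reference (normalizing) element in this study
    return (c_sample / fe_sample) / (c_background / fe_background)

def modified_degree_of_contamination(cf_values):
    return sum(cf_values) / len(cf_values)

def pollution_load_index(cf_values):
    return math.prod(cf_values) ** (1.0 / len(cf_values))

# Hypothetical CF values for the eight analysed metals:
cfs = [0.8, 1.6, 0.4, 2.1, 0.5, 0.7, 0.6, 0.9]
print(f"mCd = {modified_degree_of_contamination(cfs):.2f}, "
      f"PLI = {pollution_load_index(cfs):.2f}")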
Potential ecological risk assessment
The potential ecological risk index (RI) reflects the sensitivity of biological communities in large contaminated areas (Hakanson, 1980). This method comprehensively considers the synergy, toxicity level, and concentration of the heavy metals, as well as the ecological sensitivity to them (Singh et al., 2010; Douay et al., 2013). It takes into account the CF of the metals, their potential ecological risk factor (Er), and the toxicological response factor (Tr) taken from Duodu et al. (2016). It is determined by the following equations:

Er = Tr × CF,  RI = Σ Er,

where Er indicates the potential ecological risk factor of an individual metal, Tr indicates the toxicological response factor of each metal (Duodu et al., 2016), and CF is the contamination factor of each metal. The MRI takes into account the EF and the toxicological response factor (Tr) of each metal. It is calculated by the following equations:

MEr = Tr × EF,  MRI = Σ MEr,

where MEr is the modified potential ecological risk factor of an individual metal and EF represents the enrichment factor of each metal.
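A compact sketch of the risk calculation follows; the Tr values are commonly quoted Hakanson-type factors and, together with the CF inputs, are illustrative placeholders rather than the tabulated values of Duodu et al. (2016):

# Er = Tr x CF per metal; RI = sum of Er. (MEr = Tr x EF; MRI = sum of MEr.)
tr = {"Pb": 5, "Ni": 5, "Zn": 1, "Cd": 30, "Mn": 1, "Cr": 2, "Co": 5}
cf = {"Pb": 0.8, "Ni": 0.5, "Zn": 1.6, "Cd": 2.1, "Mn": 0.4, "Cr": 0.7, "Co": 0.6}

er = {metal: tr[metal] * cf[metal] for metal in tr}
ri = sum(er.values())
print(er)
print(f"RI = {ri:.1f}")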
The pollutants considered for the potential ecological risk index in this study are Pb, Ni, Zn, Cd, Mn, Cr, and Co. Table 2 shows the ecological risk index categories adopted from Hakanson (1980).
Result of recovery analysis
The recovery, precision, accuracy, and sensitivity of the analytical procedures used in this research for the elemental analysis of the soil samples were tested and confirmed reliable. The percentage recovery results for the selected heavy metals are shown in Table 3. The percentage of recovered metals ranged between 85% and 96%. The acceptable range for percentage recovery of elements is 70% to 110%; hence, the results of this study are reliable, falling within the acceptable limits.
pH and Electrical conductivity results
The mean values of the analyzed parameters (pH and electrical conductivity (EC)) for the oil-impacted soils were elevated relative to the control soils (Egbenda et al., 2015), which can be attributed to soluble salts of metals introduced into the soil via discarded petroleum products.
Elemental analysis results
The comparison of the elemental concentrations of the oil-impacted and control soils, based on mean values, is presented in Figure 3. It shows the relative concentrations of the elements in the oil-impacted and control soil samples. From the results, Fe has the highest mean concentration in both the control and oil-impacted soils, while Cd has the lowest. The highest mean concentration of 1187.08 mg/kg is recorded for Fe, while the lowest mean concentration of 1.81 mg/kg is recorded for Cd in the oil-impacted soil.
The control samples have a mean concentration of 725.24 mg/kg for Fe and a mean concentration of 1.53 mg/kg for Cd. The relatively high mean concentration recorded for Fe in the studied soil is in agreement with the earlier observation that Nigerian soils have very high Fe content. It is observed that the elemental composition of the oil-impacted soils is higher than that of the control soils, implying that the oil-impacted soils accumulated the metals from anthropogenic sources such as refined petroleum. Generally, the concentrations of heavy metals in the oil-impacted soils follow the order Fe > Zn > Pb > Cr > Ni > Mn > Co > Cd, while they follow the order Fe > Zn > Cr > Pb > Mn > Ni > Co > Cd in the control soils.
Analysis of variance results
The single-factor analysis of variance was carried out on the oil-impacted soils using the Microsoft Excel software package. If F > F_critical, the null hypothesis is rejected. This is not the case here, as 0.12 < 2.01 (Table 4); therefore, we accept the null hypothesis. By implication, the means of the ten populations are not significantly different.
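The same test can be reproduced outside Excel with scipy; the three groups below are hypothetical concentration measurements:

from scipy.stats import f_oneway

# Single-factor ANOVA across hypothetical metal-concentration groups.
group_a = [1.2, 1.4, 1.1, 1.3]
group_b = [1.3, 1.2, 1.4, 1.1]
group_c = [1.1, 1.5, 1.2, 1.3]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
# If F < F_critical (equivalently p > 0.05), the null hypothesis of equal
# group means is retained.
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")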
Hierarchical cluster analysis results
Hierarchical cluster analysis was used in this study to unveil the correlations among the selected heavy metals, using Euclidean distance as the basis for similarity measurement. The Statistical Package for the Social Sciences (SPSS) was used for this analysis. The inter-element clustering resulting from the analysis of the selected heavy metals in the oil-impacted soil samples is shown in Figure 4.
The x-axis carries the Euclidean distance used to produce the similarity matrix, and the y-axis carries the analyzed metals. The result showed two major groups: the first group being Fe and the second being a cluster of Cd, Co, Ni, Mn, Cr, Pb, and Zn. Within the second group, there is inter-element clustering between Zn and Cr, and another among Cd, Co, Ni, Mn, and Pb. The latter group members show the closest inter-element clustering, indicating that they have the same chemical affinity and/or a similar origin.
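For reference, the same kind of dendrogram can be generated with scipy; the concentration profiles below are random placeholders standing in for the per-sampling-point measurements:

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Cluster metals by (hypothetical) concentration profiles across sampling
# points, using Euclidean distance as in the study.
metals = ["Pb", "Ni", "Zn", "Cd", "Mn", "Cr", "Co", "Fe"]
profiles = np.random.default_rng(1).random((len(metals), 10))

tree = linkage(pdist(profiles, metric="euclidean"), method="average")
info = dendrogram(tree, labels=metals, no_plot=True)  # no_plot=False to draw
print(info["ivl"])  # leaf ordering along the dendrogram's x-axis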
A comparison of the concentrations of the analyzed metals in the oil-impacted and control soils using t-test values at the 95% confidence interval is presented in Table 5. A significant difference is confirmed when the value of T_experiment is greater than 2.13 at this confidence level, and otherwise if less. In this study, there is a significant difference between the concentrations of each of the analyzed metals in the oil-impacted and control soils.
Comparison of the elemental values with their standard permissible limits
The comparison of the elemental values with their standard permissible limits is presented in Table 6. The mean concentrations of the metals are below the maximum allowable limits set by regulatory standards, which indicates that the anthropogenic influence of oil spills might not contribute significantly to the general level of metals in the oil depot. Table 7 shows the comparison between the mean concentrations of the analysed metals in this study and those of other similar studies. The mean concentrations obtained in this study are comparatively higher than those reported by Fu et al. (2014) and Aigberua and Inengite (2019), except for Ni and Cr. However, the mean concentration values obtained in this study are lower than those reported by Adebiyi and Ayeni (2010). This difference is probably due to variations in the soil composition of the study area compared to the other studies.
Indices of pollution
Various pollution indices were employed in this study to estimate the contamination level and degree of pollution of the oil-impacted soils, namely the contamination factor, geo-accumulation index, enrichment factor, modified degree of contamination, and pollution load index. The results of these estimations are presented in Table 8. The Turekian and Wedepohl (1961) background geochemical values were adopted in this study as the standard reference. There is variation in the contamination factors of the analyzed metals in the oil-impacted soils, as deduced from Table 8. With regard to the calculation of the enrichment factor for this study, Fe was chosen as the geochemical normalizer or reference element because of its conservative nature during diagenesis.
The oil-impacted soils exhibited minor enrichment for Mn, moderately severe enrichment for Ni and Co, severe enrichment for Cr and extremely severe enrichment for Pb, Zn and Cd.
The geo-accumulation index is a reflection of both natural and anthropogenic metal inputs to the soils. The oil-impacted soils are unpolluted with Ni, Mn, Cr, Co and Fe, unpolluted to moderately polluted with Pb and Zn and moderately to strongly polluted with Cd.
With respect to the results of the modified degree of contamination and pollution load index, the oil-impacted soils showed a very low degree of contamination by the investigated metals, and the pollution load index (PLI < 1) indicated perfection of the site (oil depot) quality.
Potential ecological risk assessment
The potential ecological risk assessment values of the analyzed metals in the oil-impacted soils are presented in Table 9. The potential ecological risk index (RI) is a reflection of the general pollution situation brought about by the presence of the individual metals. Considering the individual modified ecological risk index/potential ecological risk factor, the oil-impacted soils exhibited a low contamination risk from Pb, Ni, Zn, Mn, Cr, and Co, while they exhibited a moderate contamination risk from Cd. Moreover, results of the modified potential ecological risk factor (MEr) showed that Cd had high adverse effects on the oil depot as a result of its significantly high MEr value. The RI value of the oil-impacted soils indicated low ecological risk from the analyzed metals.
Chemical speciation results
Chemical speciation was used in this study to determine the biological availability of the selected heavy metals in the oil-impacted soils of the area under study. The percentage mean extraction of the heavy metals for the oil-impacted soils is presented in Figure 5a, while that of the control soils is presented in Figure 5b. The residual fractions constitute a significant amount of the analyzed heavy metals, most notably nickel, cadmium, chromium, cobalt, and iron. This is consistent with the reports of Ramirez et al. (2005). Iron exhibited the highest percentage of extraction (43.07%) in the residual fraction. The high percentage of these metals in the residual phase is an indication that they were not available for uptake by organisms. Residual fractions usually contain abundant mineral compounds, composed mainly of sand, in which metals can bind strongly within the crystal structure. Hence, metals in this fraction are in non-toxic, non-mobile, non-reactive, and non-available forms. A soil sample is considered pollution-free when a relatively high percentage of its metals resides in this fraction: the higher the percentage of metals in this form, the more environmentally friendly the sample. A dominant residual or inert fraction, implying low mobility and bioavailability, indicates a probably low degree of pollution by the metals considered.
The Fe-Mn oxide fraction showed relatively high extraction for Mn in the oil-impacted soils. Manganese has a percentage extraction of 33.19% in the Fe-Mn oxide fraction. This phase is also next in importance for Fe, which showed 23.90% extraction in the Fe-Mn oxide phase. The high amounts of heavy metals found in Fe-Mn oxide form may be due to the chemical composition of the parent rock of tropical soils, which is rich in Fe-Mn minerals. The association of higher concentrations of metals with these fractions is caused by adsorption of the metals onto the Fe-Mn mineral surfaces (Zakir et al., 2008). A high percentage abundance of metals in the Fe-Mn oxide phase has been reported to be influenced by the high concentration of Fe-Mn minerals in the soil (Etim and Adie, 2012) and may limit the mobility and bioavailability of the heavy metals attached to these minerals. Nevertheless, metals in the Fe and Mn oxide fractions can be relatively more sensitive to environmental changes, unlike metals in the residual fraction, which are often unreactive on the timescales of metal dynamics (Akinyemi et al., 2012). Precipitation and oxidation reactions control the availability of Fe and Mn in soils because they are present in appreciable quantities in tropical soils. Metals associated with oxide minerals are likely to be released under reducing conditions, because relatively small changes in pH toward reducing conditions would cause reduction of the Fe and Mn oxide species, leading to dissolution of the associated metals (Zakir et al., 2008).
The percentages of Pb and Zn extracted in the carbonate fraction are 20.76% and 19.06%, respectively. The implication is that Pb can be remobilized to a more available status with changes in pH and redox potential under reducing conditions (Iwegbue, 2011). Relative to other metals, Zn has been known to be mobile and bioavailable in soil, which is attributed to the significant percentages in the exchangeable or carbonate fraction as well as the reducible fraction (Teixera et al., 2010). The significant levels of these fractions are commonly attributed to Zn adsorbed on the variable-charge surface sites of Fe, Mn, or Al oxides (Iwegbue, 2011). However, the adsorption of Zn to the carbonate fraction reported in this study is consistent with the report of Ideriah et al.
Bioavailability of the analyzed metals
The mobility of a metal is a measure of its bioavailability. Mobility is measured by how much of the metal is present in the first three geochemical fractions (the bioavailable forms) relative to how much is present in all six fractions; it measures the relative amount of the metal weakly bound to the soil components (Adamma et al., 2014). Figure 6 shows the bioavailable and non-bioavailable fractions of the analyzed metals in the oil-impacted soils. Generally, the bioavailability of the metals follows the trend: Ni > Zn > Pb > Co > Cr > Cd > Mn > Fe. The relatively high percentages observed for some of these metals suggest that the metals are available for uptake, and when these metals are introduced into soils via anthropogenic inputs such as oil spills, they have a high potential to be mobile and bioavailable.
Conclusion and recommendations
Oil-impacted soils in the premises of the Pipelines and Product Marketing Company (PPMC), Ibadan, Nigeria were analyzed for potentially toxic metal levels using an atomic absorption spectrophotometer. This was done to evaluate the levels of potentially toxic metals and the ecological risk of the contaminants in the area of study. The levels of the heavy metals were assessed in terms of their total elemental concentrations and chemical speciation, as well as with various statistical approaches, in order to determine the distribution and chemical forms of the analysed metals in the soils. The clustering analysis showed the correlations between the metals, indicating chemical affinity and/or common sources. The study indicated that the oil-impacted soils have elevated levels of metals compared with the control soils. The contamination factor indicated low contamination for all the analyzed heavy metals other than Zn and Cd, which showed moderate contamination. The geo-accumulation index equally revealed varying levels of contamination of the oil depot by the heavy metals. The potential ecological risk assessment indicated a low ecological risk from the metals. The speciation of the heavy metals revealed the chemical behavior and forms of the analysed metals in the soils, and the chemical fractionation gave information on the percentages of the heavy metals that are mobile and bioavailable. Variations existed in the behavior of these metals in the soil: some reside in the residual fraction while some are available for uptake. This investigation will also serve as a reference guide for studies of similar settings.
It is therefore recommended that the levels of the analysed metals be monitored periodically for the effects of anthropogenic contributions; indiscriminate disposal of used oils should also be prevented.
Conflict of Interest Statement
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Table header: Analyzed metals | Oil-impacted soils | Control soils. Abbreviations: Tr = toxicological response factor, Er = potential ecological risk factor, MEr = modified ecological risk factor, RI = potential ecological risk index, MRI = modified potential ecological risk index.

Figure 1: Map showing the sampling locations of the study area.

Figure 4: Dendrogram showing the hierarchical cluster analysis of the analyzed metals in the oil-impacted soils.

Figure 6: Bioavailability of the analyzed metals in the oil-impacted soils.

| 6,567.4 | 2021-04-23T00:00:00.000 | [ "Environmental Science", "Chemistry" ] |
Multiwavelength Spectral Analysis and Neural Network Classification of Counterparts to 4FGL Unassociated Sources
The Fermi-LAT unassociated sources represent some of the most enigmatic gamma-ray sources in the sky. Observations with the Swift-XRT and -UVOT telescopes have identified hundreds of likely X-ray and UV/optical counterparts in the uncertainty ellipses of the unassociated sources. In this work we present spectral fitting results for 205 possible X-ray/UV/optical counterparts to 4FGL unassociated targets. Assuming that the unassociated sources contain mostly pulsars and blazars, we develop a neural network classifier approach that applies gamma-ray, X-ray, and UV/optical spectral parameters to yield descriptive classification of unassociated spectra into pulsars and blazars. From our primary sample of 174 Fermi sources with a single X-ray/UV/optical counterpart, we present 132 P_bzr>0.99 likely blazars and 14 P_bzr<0.01 likely pulsars, with 28 remaining ambiguous. These subsets of the unassociated sources suggest a systematic expansion to catalogs of gamma-ray pulsars and blazars. Compared to previous classification approaches our neural network classifier achieves significantly higher validation accuracy and returns more bifurcated P_bzr values, suggesting that multiwavelength analysis is a valuable tool for confident classification of Fermi unassociated sources.
INTRODUCTION
The Fermi Gamma-ray Space Telescope Large Area Telescope (Fermi-LAT) 4FGL catalog includes 5064 sources, 3376 of which are associated with extragalactic blazars or nearby pulsars (Abdollahi et al. 2020). 352 other 4FGL sources include supernova remnants, X-ray binaries, starburst galaxies, and other objects. 1336 4FGL sources are "unassociated", lacking confident astrophysical explanations or source counterparts at longer wavelengths. Given that the bulk of the 4FGL sources are blazars or pulsars, it is feasible that many of the unassociated sources are also blazars or pulsars that failed to be classified. The unassociated sources therefore tease systematic expansions of blazar and pulsar catalogs by extending them to less obvious sources.
Identifying blazars and pulsars among the unassociated 4FGL sources is an important step towards confident population studies of both classes. The Fermi-LAT unassociated sources might include blazars that are lower luminosity, higher redshift, or viewed at a larger angle with respect to the jet than their more easily detected and identified cousins in the established Fermi blazar catalog (Ferrara et al. 2015). A more complete blazar catalog will serve numerous scientific goals, including the verification and analysis of the blazar sequence (e.g., Fossati et al. 1998; Ghisellini et al. 2017) as a theoretical unifying scheme for blazars.
Similarly, there may be undiscovered pulsars among the unassociated 4FGL sources, valuable additions to the significantly shorter list of Fermi pulsars. The Fermi pulsar list includes canonical and millisecond pulsars but is also notable for dramatically expanding research into 'Black Widow' pulsars (e.g., Wu et al. 2018), and the unassociated sources may contain several pulsars of these various types. Finally, some unassociated objects may defy classification as blazars or pulsars even after multiwavelength analysis. Identifying and classifying the "low-hanging fruit" of pulsars and blazars among the unassociated sources is a first step in identifying and studying other astronomical objects. While gamma-ray observations from Fermi-LAT are the foundation for the 4FGL catalog, more confident classification of pulsars and blazars can be achieved by extending spectral analysis to lower energies. To this end, the Neil Gehrels Swift Observatory (aka Swift) (Gehrels et al. 2004) is conducting a continuing survey of the Fermi unassociated sources. With the XRT (Burrows et al. 2005) and UVOT (Roming et al. 2005) instruments onboard Swift, X-ray observations from 0.3-10.0 keV and UV-visual observations from 450-900 nm systematically cover the uncertainty ellipse of each Fermi-LAT unassociated source.
Given the variety of spectral features in blazars and pulsars, this multiwavelength capability is a powerful tool for studying 4FGL unassociated sources at lower energies and for supporting identification and classification efforts. For example, Kaur et al. (2019) sorted 217 high-S/N unassociated 3FGL sources into blazars and pulsars using machine learning on gamma-ray parameters and a single X-ray spectral parameter. The sample was limited to only those sources with a single possible X-ray counterpart in the error ellipse. Training an ML routine with known gamma-ray pulsars and blazars, the authors identified 173 likely blazars with P_bzr > 90% (134 with P_bzr > 99%) and 13 likely pulsars with P_bzr < 10% (7 with P_bzr < 1%). From their initial list, 31 sources from the 3FGL unassociated list defied categorization and were labeled 'ambiguous'. Recent work by this collaboration (Kerby et al. 2021) continued that analysis by including more detailed X-ray spectral fitting and systematically adding spectral parameters to the machine learning. Adding UV-visual observations from the Swift-UVOT telescope is particularly useful, as pulsars are usually extremely dim in the UV-visual range (Saz Parkinson et al. 2016) while blazars emit at all wavelengths and at low redshift can be observed across the electromagnetic spectrum (Ghisellini & Tavecchio 2008).
The basis of expanding analysis of the unassociated sources to lower energies is the assumption that a gamma-ray source is likely to also be emitting X-rays, due to the ubiquity of X-ray synchrotron emission in energetic gamma-ray systems. Given the spatial resolution and X-ray sensitivity of our observations with Swift-XRT and the ubiquity of pulsars and blazars emitting in both gamma- and X-rays, if only a single X-ray source is present in a gamma-ray ellipse then there is likely a relationship between the two. Because gamma-ray emitters like pulsars and blazars normally emit X-rays via mechanisms like synchrotron radiation, an association between an unassociated gamma-ray source and a solitary X-ray source within its uncertainty ellipse is theoretically sound. The automated analysis after Swift observations calculates the probability of a coincident but unrelated X-ray source in the Fermi 4FGL 95% confidence region based on exposure time and confidence region size; for a typical 4 ks exposure in these ~5 arcmin semi-major axis regions, the probability is < 0.01. For the handful of exposures longer than 4 ks and/or with larger than typical 4FGL confidence regions, the probability of additional spurious X-ray source detections increases. Finally, there are a large number of 4FGL target fields for which no X-ray detection was found in the typical 4 ks exposure, which is consistent with the estimated low chance probability for spurious X-ray source detection.
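The scaling of this chance probability can be illustrated with a simple Poisson estimate; the serendipitous-source surface density used below is a hypothetical placeholder, not the value used by the automated survey analysis:

import math

# Rough Poisson estimate of the chance of >= 1 unrelated X-ray source in an
# elliptical confidence region of given semi-axes.
def p_spurious(density_per_sq_deg, semi_major_arcmin, semi_minor_arcmin):
    area_sq_deg = math.pi * (semi_major_arcmin / 60.0) * (semi_minor_arcmin / 60.0)
    return 1.0 - math.exp(-density_per_sq_deg * area_sq_deg)

# e.g. ~0.3 sources/deg^2 above threshold in a 4 ks XRT exposure (assumed):
print(f"{p_spurious(0.3, 5.0, 4.0):.4f}")  # ~0.005, consistent with < 0.01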
In previous work on the Fermi unassociated sources (e.g., Kaur et al. 2019; Kerby et al. 2021), analysis was restricted to Fermi unassociated sources with only one high-S/N X-ray source in their gamma-ray confidence ellipse. Still, it may be the case that Swift observations reveal multiple X-ray sources in the uncertainty ellipse of a single Fermi unassociated gamma-ray source, as a 4FGL gamma-ray uncertainty ellipse spans several arcminutes while the positional uncertainty of an XRT detection is a few arcseconds. In this case, other factors must be considered before determining a likely association between high- and low-energy photons. It is less likely that there will be degeneracy between UV-visual and X-ray sources, as both the XRT and UVOT instruments on Swift have relatively small PSFs.
Our approach to classification of 4FGL unassociated sources has several unique factors compared to other classification efforts and builds on our work on the 3FGL catalog. While Lefaucheur et al. (2017) conducted IR observations of unassociated sources to look for low-energy counterparts, skipping over X-ray/UV/optical observations would complicate attempts to use spectral information to link gamma-ray sources to counterparts. The sky is much more prolifically filled with IR sources than X-ray sources, and a gamma-ray uncertainty ellipse with just one possible X-ray counterpart might have dozens of counterparts in the IR bands. Searching in the radio band is similarly difficult, requiring assumptions to link gamma-ray and radio emission without interpreting the intermediate wavelengths, but the detection of characteristic radio pulsations is a direct route to locating pulsars, and some previous works have used radio properties to predict pulsar membership after efficient searches (for example, Frail et al. 2018). Recently, Zhu et al. (2021) conducted ML on the 4FGL unassociated sources, but restricted their analysis to only gamma-ray properties. Analysis of X-ray observations specifically can capture the synchrotron peaks of non-thermal emitters like blazars, a valuable region for discriminating the spectra of pulsars and blazars.
In this work, we extend investigations of the unassociated sources by combining gamma-ray data with Swift-XRT and -UVOT observations for 4FGL unassociated targets observed thus far by Swift's systematic program. With this multiwavelength set of parameters, we build a neural network classification (NNC) routine using samples of known pulsars and blazars to classify the unassociated sources. In section 2, we describe the gamma-ray, X-ray, and UV/optical observations and sources used in this work, plus reasons for excluding certain entries from our base sample. Next, in section 3 we describe our production of spectral parameters and present tabulated results. In section 4 we discuss the classifications of the unassociated gamma-ray sources via a NNC method. In section 5 we present and discuss spectral fitting and classification results, summarize our findings, and posit next steps.
Fermi-LAT Unassociated and Swift-XRT Sources
The unassociated sources of the Fermi-LAT 4FGL catalog are comprised of gamma-ray sources of unknown astrophysical nature with no known counterpart (for a discussion of the entire 4FGL catalog, see Fermi-LAT-Collaboration 2019). After ten continuous years of observations, the 4FGL-DR2 catalog (Ballet et al. 2020) contains 5064 sources in total, of which over 1300 are unassociated. 1410 4FGL sources are targets for the Swift-XRT survey of unassociated sources (slightly higher than the total number of currently unassociated sources due to sources gaining or losing associations between DR1 and DR2). By observing the unassociated gamma-ray sources, the Swift program provides high-resolution X-ray, UV, and visual observations to find lower-energy counterparts to the unassociated sources. This survey is a further development after previous analysis of unassociated 3FGL sources (detailed in Kerby et al. 2021) and has covered approximately 500 4FGL unassociated targets with > 4 ks of observations each. So far Swift has detected possible X-ray counterparts within the Fermi-LAT uncertainty ellipse of 208 4FGL unassociated sources, with some unassociated sources having multiple possible X-ray counterparts.
Combining archival and new Swift observations, an automated analysis process described in Falcone et al. (2011) produces lists of X-ray sources. For our purposes, a source is deemed 'notable' if it is contained within the 95% uncertainty ellipse of the 4FGL source and if the signal-to-noise ratio calculated with the Ximage function SOSTA is greater than 4. We take the produced list of 238 notable X-ray sources as containing the possible Xray counterparts to the 4FGL unassociated sources, and focus on these detections as the sample for further X-ray analysis.
The HEASARC query interface allows for downloads of Swift-XRT observations within 8′ of all 238 notable X-ray sources spread across the 205 4FGL targets. Some X-ray sources were matched with Swift-XRT observations only upon expanding the HEASARC search radius to 10′, as they sit close to the edge of the field of view. Expanding the search radius of the HEASARC query does not impinge too closely on the edges of the 23.6′ field of view of the Swift-XRT telescope.
We immediately eliminate two 4FGL targets from consideration, along with all related possible counterparts. The two targets, J1836.8-2354 and J1649.2-4513, have very uneven Swift-XRT coverage near the 4FGL centroid position due to a bright, highly-observed X-ray source just outside the field of view. The nine notable X-ray sources from those two targets appear to be related to the extremely long (> 100 ks) observations of the nearby X-ray source that is outside the uncertainty ellipse. Several other 4FGL targets are eliminated if their only possible X-ray counterpart is coincident with a catalogued star that contributes a false X-ray signal via optical loading, the pile-up of optical photons in the Swift-XRT detector, or by a star of such brightness that it would radically dominate the UV/optical emission from an unassociated source, putting it outside of the range of known pulsars and blazars.
Of the 205 4FGL targets with possible X-ray counterparts examined in this work, 14 have more than one notable X-ray source within their uncertainty ellipse. For the purposes of spectral analysis and ML classification, we fit all X-ray sources while making no judgements about which is most likely the counterpart to the gamma-ray source. We tabulate and discuss the unitary sources as our primary sample separately from the more "confused" unassociated sources with more than one X-ray/UV/optical possible counterpart.
Swift-UVOT Source Analysis
For each X-ray source, we also used Swift-UVOT data within 8-10′ of the XRT centroid to search for X-ray/UV/optical counterparts with the Swift-UVOT telescope. For each XRT/UVOT source, we generated a fully integrated UVOT image by performing UVOTIMSUM on the UVOT FITS file. We placed circular extraction regions at the coordinates of these UVOT detections; if there was no UVOT detection within the ~5″ positional uncertainty of the XRT detection, we centered the extraction region on the X-ray source's centroid position to determine an upper limit to the brightness. The radius of the UVOT extraction region was set to 5″, the PSF of the Swift-UVOT telescope for a point source. UVOTSOURCE was then used to gather counts from the source region, and the background was measured using a source-free circular background region of radius 20″ elsewhere in the image for each position. The background-subtracted count rate was converted to flux and magnitude following the process described in Breeveld et al. (2011) to obtain a UVOT magnitude in one of the bands in Table 2. UVOT counterparts imaged in multiple UVOT bands were analyzed once separately for each band.
We applied a 3σ detection threshold for UVOT counterparts using UVOTDETECT. Though the vast majority of XRT detections had unique UVOT counterparts as well, fifteen regions had clustered UVOT sources that could not be disentangled within the constraints of the PSF of the telescopes on Swift. Training, validation, and research datasets with complete column coverage are vital to the machine learning process described below, so we exclude those confused sources from the unassociated sample.
X-ray Spectral Analysis
In total the HEASARC query for Swift-XRT observations collected over 1000 individual observations. Here, we only use observations in the photon counting (PC) mode of the Swift-XRT, enabling two-dimensional imaging across the XRT field-of-view with reasonable energy resolution. Summed exposure time for the X-ray sources varies from a few kiloseconds to many dozens of kiloseconds.
Each level 1 event file was processed and cleaned using xrtpipeline v.0.13.5 from the HEASOFT software (https://heasarc.gsfc.nasa.gov/docs/software.html). Only events graded 0 through 12 were used in this analysis. Merging the events and exposure files with other observations of each particular X-ray source was conducted using xselect v.2.4g and ximage v.4.5.1, resulting in a single summed event list for each source plus a summed exposure map and ancillary response file using xrtmkarf. This merging precludes any time series or variability analysis of the X-ray sources, but few X-ray sources in our sample have sufficient observation time or photon counts to enable detailed temporal analysis.
For each possible X-ray counterpart, xselect produced spectra for source and background regions. The source region was circular with radius 20 arcseconds, and the background region was annular with inner and outer radii of 50 and 150 arcseconds, respectively. Both regions were centered on the coordinates of the examined X-ray source. The background region size was chosen to be far outside the PSF of a point source for the Swift-XRT detector (half-power diameter of 18″).
If the count rate in the center of the source region exceeded 0.5 counts per second (with PC mode having frames of 2.5 s each, this is approximately greater than one photon per frame), drawing a new annular source region with an inner radius depending on the count rate would avoid photon pile-up and saturation on the detector. The possible X-ray counterparts to 4FGL unassociated sources are faint enough that none caused such pile-up.
We used Xspec v.12.10.1f (Arnaud 1996) to fit each spectrum. The fitting model included three nested functions: tbabs, cflux, and powerlaw. cflux calculated the total unabsorbed flux between 0.3 and 10 keV and tbabs modeled line-of-sight hydrogen absorption using galactic values from the nH lookup function described in Wilms et al. (2000). The galactic line-of-sight extinction is fixed at the catalog value for each spectrum analyzed. powerlaw is a simple power law with photon index Γ X . Uncertainties on the fitted photon index and X-ray flux were jointly measured using the iterative steppar routine, and the spectral fitting results are included in an accompanying machine-readable database.
Fitting was executed using the C-statistic as the optimization metric. The C-statistic is a useful fitting statistic for spectra with few counts, particularly in cases for which there are not sufficient photons to bin counts for a χ² fit (Cash 1976). Compared to the χ² statistic, which assumes Gaussian behavior in bins, the C-statistic operates under Poisson statistics. In this way, it is much more applicable to fitting spectra with very few counts in each energy bin. Up to a model-independent constant, the C-statistic is given by

C = 2 Σ_i [t m_i − S_i ln(t m_i)],

with t the exposure time, m_i the predicted count rate in any particular bin, and S_i the observed counts in each bin. For an X-ray source with only a few dozen detected X-ray photons, binning the events for a χ² approach would result in only one or two bins, which is not useful for detailed spectral fitting. The Cash statistic does not require any such binning, as it assumes the more appropriate Poisson distribution of events. The model incorporating hydrogen-absorbed power-law spectra returned unusually high or low Γ_X for nine 4FGL X-ray counterparts, compared to the expected 0 < Γ_X < 4 for pulsars and blazars. Given that some of these X-ray sources also coincided with catalogued stars, we viewed these sources as dubious for pulsar/blazar classification but worthy of additional investigation with other approaches. We do not include these spectra in the ML classification. These sources are listed in Table 1.
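A numpy transcription of this statistic (in the same constant-offset convention) looks as follows; the five-bin spectrum is a hypothetical example:

import numpy as np

# Cash statistic for Poisson-distributed spectra, up to a model-independent
# constant, as used for the Xspec fits described above.
def cash_statistic(exposure, model_rates, observed_counts):
    predicted = exposure * np.asarray(model_rates, dtype=float)
    observed = np.asarray(observed_counts, dtype=float)
    return 2.0 * np.sum(predicted - observed * np.log(predicted))

# Hypothetical 5-bin spectrum: 4 ks exposure, model rates in counts/s per bin.
print(cash_statistic(4000.0, [1e-3, 2e-3, 1.5e-3, 8e-4, 4e-4], [5, 9, 6, 3, 1]))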
V-magnitude Conversions
The analysis of Swift-UVOT data produces magnitudes in the six UVOT bands (vv, bb, uu, w1, m2, w2). However, the training sample of known pulsars and blazars uses Johnson V magnitudes, requiring a conversion from the UVOT bands to the Johnson system. To facilitate this conversion, we convert the UVOT magnitudes to fluxes, then use a power-law scaling relationship to predict the V-band flux, converting back to magnitude. The power-law scaling relation uses the central wavelengths of the UVOT bands (given in Table 2) and the V band (540 nm) plus an assumed UV/optical spectral index of α c = 0.5.
The ratio of two fluxes at two different wavelengths is simply which inserted into the magnitude equation gives a predicted conversion between the magnitude in UVOT band m i and in the Johnson V band.
Clearly, a conversion from the UVOT vv magnitude to the Johnson V magnitude is the most direct and most desirable conversion, because the UVOT vv band has close to the same central wavelength as the Johnson V band. Unfortunately, most Swift-UVOT observations were in the higher-energy bands, requiring significant conversions and introducing uncertainties into the analysis. For Swift-XRT sources with observations in multiple UVOT bands, we only used the converted V magnitude originating from the UVOT band closest to the Johnson V band, regarding that observation as most faithfully approximating the Johnson V magnitude.
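In code, the conversion is a one-liner per band; the central wavelengths below are approximate published UVOT values (cf. Table 2), and the F_ν ∝ ν^(−α_c) sign convention is the one assumed above:

import math

# UVOT-band magnitude -> predicted Johnson V magnitude.
UVOT_CENTRAL_NM = {"vv": 546.8, "bb": 439.2, "uu": 346.5,
                   "w1": 260.0, "m2": 224.6, "w2": 192.8}  # approximate
V_BAND_NM = 540.0

def predicted_v_mag(m_band, band, alpha_c=0.5):
    flux_ratio = (V_BAND_NM / UVOT_CENTRAL_NM[band]) ** alpha_c  # F_V / F_i
    return m_band - 2.5 * math.log10(flux_ratio)

print(f"{predicted_v_mag(18.2, 'w1'):.2f}")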
Conversion between one spectral band and another with an assumed slope is fraught with uncertainty. However, because the average V magnitudes of the known pulsar and blazar samples are so distinct (the medians separated by five magnitudes, at least a factor of 100x in flux), these uncertainties are tolerable, as pulsars are typically much dimmer than blazars in visual and UV photons. The V magnitudes in our training and research samples are not corrected for extinction, which would mainly impact pulsars within the galactic plane. However, using the hydrogen column densities of the pulsars in our sample and the n_H vs. A_V scaling relation given in Güver & Özel (2009), it is unlikely that an entire sample of pulsars is dimmed enough to create a fictitious separation in V magnitudes from the blazar population that would bias our classification efforts. Figure 1 shows the converted V magnitude distribution (using V magnitude upper limits for those pulsars without solid estimates) of the known Fermi pulsars and blazars compared to the UV/optical counterparts in our unassociated sample.
Known and Unknown Samples
To train the ML classification routines, we gathered a sample of 74 known gamma-ray pulsars (including radio-loud, radio-quiet, and millisecond pulsars) and 635 known gamma-ray blazars. This sample was derived from catalogs of Fermi-LAT pulsars and blazars (Abdo et al.).

Table 1. X-ray counterparts to 4FGL unassociated sources with extreme X-ray photon indices, Γ_X < −1 or Γ_X > 5, excluded from the main ML classification effort. Possible stellar counterparts include spectral type and apparent magnitude from SIMBAD where available.
Overall, our training sample is selected from those pulsars and blazars for which gamma-ray, X-ray, and optical brightness data are available, which implicitly limits the training of our classifier to the subsample of pulsars and blazars with notable gamma-ray, X-ray, and optical flux. This is especially impactful for pulsars, which have a wide range of intrinsic and extrinsic properties, including physical distance from our position in the galaxy. While we found that our training sample of pulsars is representative of catalogued gamma-ray pulsars in the Fermi-LAT pulsar lists in terms of period and spin-down rate, canonical pulsars tend to be younger and more energetic than the general pulsar population. For pulsars, this leads to a training sample that preferentially includes nearby or energetic pulsars. Fortunately, our research sample of unassociated sources is similarly limited by X-ray and UV/optical flux, and has a similar fraction of pulsar subtypes (65% canonical, 35% millisecond) compared to the overall Fermi-LAT pulsar sample (55% canonical, 45% millisecond). Any sources deemed likely pulsars by our classification efforts are therefore astrophysically similar to our training sample.
Preferring parameters that are distance-independent, we expressed the X-ray and V-band brightness as ratios with the gamma-ray flux. While log F_X/F_γ is a simple ratio of two fluxes, the conversion of V magnitude to flux was slightly more complicated. Using the magnitude equation
$$m_1 - m_2 = -2.5\log_{10}\!\left(\frac{F_1}{F_2}\right),$$
we converted the V magnitudes to fluxes using a reference magnitude and flux, $m_2$ and $F_2$. However, because we take the logarithm of the ratio of V-band flux to gamma-ray flux, and because the ML procedures first rescale and recenter each parameter, the exact reference flux and magnitude used herein are irrelevant. For our purposes, we adopt the conversion from Bessell et al. (1998).
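For illustration, a sketch of the V-band flux-ratio computation. The zero point is an approximate Bessell et al. (1998) value, and, as noted above, its exact choice (and any unit mismatch with the gamma-ray flux) washes out as a constant offset once the parameters are recentered and rescaled.

```python
import math

F_V_REF = 3.63e-9  # approx. V-band zero-point flux density for m_V = 0
                   # (erg/s/cm^2/A); illustrative value only

def log_fv_over_fgamma(m_v, f_gamma):
    """log10(F_V / F_gamma) from a V magnitude and a gamma-ray energy flux.
    The constant offset from units/zero point is absorbed when the ML
    pipeline recenters and rescales each parameter."""
    f_v = F_V_REF * 10 ** (-0.4 * m_v)
    return math.log10(f_v / f_gamma)

print(log_fv_over_fgamma(17.5, 2e-11))  # hypothetical source values
```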
The parameters for each training or research source include:

• X-ray photon index, Γ_X
• Gamma-ray photon index, Γ_γ (PL_Index in the 4FGL catalog)
• The logarithm of the gamma-ray flux, log F_γ, in erg/s/cm² (Energy_Flux in the 4FGL catalog)
• The logarithm of the X-ray to gamma-ray flux ratio, log F_X/F_γ
• The logarithm of the V-band to gamma-ray flux ratio, log F_V/F_γ
• The significance of the curvature in the gamma-ray spectrum, henceforth simply curvature (PLEC_SigCurv in the 4FGL catalog)
• The year-over-year gamma-ray variability index (Variability_Index in the 4FGL catalog)

Normalized histograms of all parameters for both the known pulsars and blazars and the unassociated spectra are shown in Figure 2. For the 174 X-ray sources that are the unique XRT/UVOT possible counterpart to their respective unassociated targets in our research sample, the gamma-ray, X-ray, and UV/optical parameters are given in Table 3. For the unassociated targets with multiple possible counterparts in the uncertainty ellipse, the parameters are given in Table 4.
It is worth noting that while the histograms in Figure 2 show notable overlaps between the unassociated and known samples, there are still qualitative differences between them. The unassociated sources have systematically lower gamma-ray flux than the known blazars and pulsars, meaning that the variability and curvature estimates from the 4FGL catalog rest on weaker photon statistics. A bright Fermi-LAT blazar with significant variability on many timescales might show no observable variability if moved to a greater distance, for no other reason than the Poisson-distributed arrival statistics of the photons. Still, the current flux-limited samples of pulsars and blazars (for example, the flux- and redshift-limited sample of Fermi blazars in Ghisellini et al. 2017) suggest that some objects are left out of the Fermi association lists simply because they are slightly dimmer.
Because there are many more known blazars than known pulsars in the training sample, we used the Synthetic Minority Over-sampling Technique (SMOTE; Chawla et al. 2002) to generate additional pulsars that mirror the distribution of real pulsars using a k-nearest-neighbors approach. Previous classification efforts have shown that unbalanced training datasets can lead to classifiers biased against the underrepresented class (for example, Last et al. 2017). The result of the SMOTE expansion is a training set with equal numbers of blazars and pulsars, with the artificial pulsars generated via SMOTE from the real pulsar distribution. The final database is then split into training and validation subsamples, which contain real known blazars, real known pulsars, and SMOTE-generated 'known pulsars' that mirror the distributions of spectral properties of the real known pulsars.
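A minimal sketch of this balancing step, using the imbalanced-learn implementation of SMOTE. The feature arrays here are random stand-ins for the real seven-parameter training set, and the hyperparameters are illustrative.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
# Stand-in features: 74 "pulsars" (y=0) and 635 "blazars" (y=1), 7 params
X = np.vstack([rng.normal(-1, 1, (74, 7)), rng.normal(1, 1, (635, 7))])
y = np.array([0] * 74 + [1] * 635)

# Interpolate between each real pulsar and its k nearest pulsar neighbors
# until both classes have 635 members
X_bal, y_bal = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(np.bincount(y_bal))  # -> [635 635]
```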
Neural Network Design and Training
For the NNC method, we constructed an approach in which each unassociated source is assigned a blazar probability P_bzr according to its similarity to known pulsars and known blazars: P_bzr = 0 denotes a 0% probability of the object being a blazar, while P_bzr = 1 corresponds to a 100% probability that the object is a blazar. The NNC had seven input nodes (one for each parameter of the training dataset), one hidden layer with four neurons, and one output node returning the predicted blazar probability. In this research we use the MLPClassifier function of the scikit-learn Python package for NNC training and validation, training the NNC with the 'adam' optimization approach (Kingma & Ba 2014). To allow for validation and accuracy checks of the trained NNC, we took a random selection (20% of the training dataset) as a validation subsample, leaving the remainder of known pulsars and blazars as the training subsample. The random selection method StratifiedShuffleSplit was used so that the training and validation subsamples have exactly the same proportion of pulsars and blazars.
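A sketch of this setup with scikit-learn, continuing from the SMOTE sketch above (X_bal, y_bal). The StandardScaler step reflects the rescaling/recentering mentioned earlier, though the exact preprocessing is an assumption.

```python
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# 80/20 split that preserves the pulsar/blazar proportions exactly
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, valid_idx = next(sss.split(X_bal, y_bal))

scaler = StandardScaler().fit(X_bal[train_idx])
X_train, X_valid = scaler.transform(X_bal[train_idx]), scaler.transform(X_bal[valid_idx])
y_train, y_valid = y_bal[train_idx], y_bal[valid_idx]

# Seven inputs -> one hidden layer of four neurons -> blazar probability
nnc = MLPClassifier(hidden_layer_sizes=(4,), solver="adam",
                    max_iter=5000, random_state=0)
nnc.fit(X_train, y_train)
p_bzr = nnc.predict_proba(X_valid)[:, 1]  # column 1 = P(blazar)
```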
To determine when to stop iterating and training the NNC, we relied on the Log-Loss parameter, an error measure in binary classification approaches that is similar to a likelihood estimator. For a sample with true classifications $y_i \in \{0, 1\}$ and predicted classifications $p_i \in [0, 1]$, the Log-Loss is given by
$$L_{\log} = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \ln p_i + (1 - y_i)\ln(1 - p_i)\,\right].$$
At each iteration step in training the NNC, we calculate $L_{\log}$ for both the training and validation subsamples. As training continues, the training-subsample $L_{\log}$ should continuously decrease as the NNC approaches a more exacting fit. However, at some point the validation $L_{\log}$ begins to increase as the NNC starts to overfit the training sample and lose predictive capability on unknown data points. At this point, training is stopped. Figure 3 shows how the NNC training is stopped at approximately 2000 iterations, when the validation $L_{\log}$ levels out. After using the training subsample to construct the NNC, we passed the validation subsample through the NNC and recorded the predicted blazar probabilities. Ideally, the NNC would predict blazar probabilities of P_bzr = 0 or P_bzr = 1 for the validation pulsars or blazars. A validation score for the NNC method depends on the cutoff used to determine what constitutes a "likely" pulsar or blazar. For example, a 90% cutoff would designate any source with P_bzr < 0.1 a pulsar and any source with P_bzr > 0.9 a blazar, while a 99% cutoff would only capture sources with 99%-confident classifications.
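A sketch of such a stopping rule, stepping the same MLPClassifier with partial_fit and watching the validation L_log; this continues the previous sketch's arrays, and the patience threshold is an arbitrary choice, not the paper's exact procedure.

```python
import numpy as np
from sklearn.metrics import log_loss
from sklearn.neural_network import MLPClassifier

nnc = MLPClassifier(hidden_layer_sizes=(4,), solver="adam", random_state=0)
best_loss, patience, stalled = np.inf, 100, 0
for step in range(5000):
    nnc.partial_fit(X_train, y_train, classes=[0, 1])  # one training step
    val_loss = log_loss(y_valid, nnc.predict_proba(X_valid), labels=[0, 1])
    if val_loss < best_loss - 1e-6:
        best_loss, stalled = val_loss, 0   # validation L_log still improving
    else:
        stalled += 1
    if stalled >= patience:  # validation L_log has leveled off or risen
        print(f"stopping at iteration {step}, L_log = {best_loss:.4f}")
        break
```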
An optimally trained NNC should return P_bzr as close to 0 or 1 as possible for the validation subsample. Typically, NNCs are judged on the proportion of validation scores above some cutoff value; in a two-class example, the default cutoff for a 'correct' classification is normally 0.5. However, this single cutoff score does not probe the degree of confidence with which the NNC classifies the validation sources: an NNC that grades all validation pulsars at P_bzr = 0.3 and all validation blazars at P_bzr = 0.7 would have a 100% validation score with a cutoff of 0.5 but a 0% score if one strives for greater confidence in classification. Figure 4 shows a generalized approach for determining both the reliability and the confidence of classifications from an NNC. Scanning through a logarithmic range of possible cutoff scores approaching unity (with P_psr = 1 − P_bzr for pulsars), the figure shows how the fraction of known pulsars and blazars correctly classified during the validation step changes with the P_bzr cutoff used. This general approach illustrates the progress in classifying pulsars and blazars at higher degrees of confidence as additional spectral parameters are added to the samples.
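A sketch of this cutoff scan on the validation outputs (p_bzr and y_valid from the sketches above):

```python
import numpy as np

# Confidence cutoffs approaching unity logarithmically: 0.5, ..., 0.9999
cutoffs = 1.0 - np.logspace(np.log10(0.5), -4, 15)
for c in cutoffs:
    frac_bzr = np.mean(p_bzr[y_valid == 1] > c)          # blazars recovered
    frac_psr = np.mean((1.0 - p_bzr)[y_valid == 0] > c)  # P_psr = 1 - P_bzr
    print(f"cutoff {c:.4f}: blazars {frac_bzr:.2%}, pulsars {frac_psr:.2%}")
```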
Applying the default 50% cutoff used in previous classification efforts (i.e., a validation blazar need only have P_bzr > 0.5 to be considered correctly classified) returns an overall accuracy of 99.2% for pulsars and blazars. However, a major improvement of this NNC approach over previous classification efforts (Kaur et al. 2019; Kerby et al. 2021) is its capability to classify validation pulsars and blazars with over 99% confidence. As shown in Figure 4, over 90% of validation pulsars and blazars are classified into the correct categories with greater than 99% confidence, buoying our hope that this NNC routine can pick likely blazars and pulsars from the unassociated sample.
Calculating the importances of the different parameters in a neural-network ML approach is slightly more difficult than in other related classification schemes. Neurons in an NNC use an activation function, which means that certain parameters may not be considered at all for certain entries. Still, the NNC is described by a weight matrix, and for a network with only a single hidden layer these weights are appropriate measures of the importance of the different features. Figure 5 shows a visualization of the weights of the different parameters in our NNC approach, with darker shaded squares representing more impactful features in classifying blazars and pulsars.

Figure 4. The fraction of validation pulsars (blue) and blazars (red) classified with P_bzr above different cutoff values after training the NNC. The NNC classifies validation blazars with greater P_bzr than validation pulsars, with no validation pulsars having P_bzr < 10⁻³ but many validation blazars having P_bzr > 0.999.

Figure 5. A qualitative visualization of the importances of the spectral features (XrayInd, GammaInd, LogFG, LogFXFG, LogVar, LogCurve, LogFVFG) in our NNC routine. The columns represent the four neurons in our single hidden layer; the rows represent the seven spectral features used herein. Squares that are shaded darker carry linearly heavier weights in the NNC and therefore play a more important role in discriminating pulsars from blazars.
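A sketch of reading these first-layer weights out of the trained scikit-learn model from the earlier sketches; the feature order follows the labels in Figure 5, and the plotting choices are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

features = ["XrayInd", "GammaInd", "LogFG", "LogFXFG",
            "LogVar", "LogCurve", "LogFVFG"]
weights = np.abs(nnc.coefs_[0])  # shape: (7 input features, 4 hidden neurons)

fig, ax = plt.subplots()
ax.imshow(weights, cmap="Greys")  # darker square = linearly heavier weight
ax.set_yticks(range(len(features)))
ax.set_yticklabels(features)
ax.set_xlabel("hidden-layer neuron")
plt.show()
```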
Unassociated Classification Results
After training and validating the NNC, we recorded the output P_bzr value for each spectrum in the unassociated sample. Of the 205 fully examined sources, we found 157 with P_bzr > 0.99 and 18 with P_bzr < 0.01. As some of the unassociated targets have more than one notable X-ray source in the gamma-ray uncertainty ellipse, additional work is needed to decipher the results around those targets, including which, if any, X-ray possible counterpart is the actual partner to the gamma-ray emission. However, of the 174 X-ray sources that are the unique XRT/UVOT possible counterpart to an unassociated gamma-ray source, 14 have P_bzr < 0.01 and 132 have P_bzr > 0.99. These portions of our research dataset are highly likely pulsar and blazar candidates. The results of the NNC classification for the 4FGL sources with a single possible X-ray counterpart are given in the last column of Table 3, organized into likely pulsars (P_bzr < 0.01), likely blazars (P_bzr > 0.99), and ambiguous sources.
Several of the 4FGL targets examined herein have since been the subject of discoveries that shed additional light on their nature. A redback pulsar has been detected near 4FGL J0955.3-3949, matching the X-ray counterpart in our analysis both in X-ray spectral parameters and in sky position. 4FGL J0212.1+5321 has also been connected to a millisecond redback pulsar (Linares et al. 2017; Li et al. 2016). 4FGL J2039.5-5617 has recently been identified as a redback pulsar (Clark et al. 2021), as has 4FGL J1306-4035 after observations by the Parkes radio telescope (Keane et al. 2018), and 4FGL J1304.4+1203 is unassociated in 4FGL but classified as a pulsar in 4FGL-DR2 (Ballet et al. 2020). These sources are all classified as likely pulsars in this work, showing that our NNC can independently predict classifications for sources that have been verified in other works. This increases our confidence that our classifier can point towards other promising sources to examine, and that the link between Swift X-ray/UV/optical and Fermi-LAT gamma-ray sources is valuable and well-founded. Additionally, 4FGL J0838.7-2827 and J0523.3-2527, sources that remained ambiguous in this work, have since been identified as redback pulsars (Halpern et al. 2017; Strader et al. 2014).
To gain additional insight into 4FGL unassociated sources with multiple notable X-ray sources, we leveraged the much higher spatial resolution of the Swift telescopes to perform a coordinate search at the positions of the X-ray counterparts. For many lower-energy counterparts, this search returned a coincident catalogued object and provided useful information for deciding whether the XRT/UVOT source is a likely counterpart to the unassociated target. Table 4 gives the NNC classification results for these possible counterparts grouped by 4FGL target, and Table 5 lists the results of our spatial cross-reference. Several possible counterparts are coincident with catalogued stars, X-ray binaries, or galaxies, associations that might be useful for finding the most likely counterpart among multiple candidates near a single 4FGL target. Interestingly, many of the pairs of possible counterparts have very similar P_bzr values, suggesting that the 4FGL source in question may be a pulsar or blazar regardless of which X-ray source is truly linked with the gamma-ray emission.
The classification of the unassociated sample is more bifurcated in terms of P_bzr than in previous works (Kaur et al. 2019; Kerby et al. 2021): most of the unassociated sources have P_bzr very close to 0 or 1, showing that the NNC makes confident predictions of blazar or pulsar class membership. The NNC's high validation accuracy and confidence suggest that the unassociated sources can be classified properly if they are pulsars and blazars similar to those in our training sample. Even outside the context of the NNC as a whole, the histograms in Figure 2 and the feature weights in Figure 5 show that log F_X/F_γ and log F_V/F_γ, the flux ratios introduced in this work, are important parameters for distinguishing pulsars from blazars, with pulsars having systematically lower values of both.
While previous approaches have classified unassociated sources with gamma-ray spectral parameters alone, Figure 2 shows that the X-ray/UV/optical properties of counterparts to 4FGL targets can dramatically improve the discrimination of pulsars from blazars. The unassociated sources have lower gamma-ray flux than either of the known pulsar/blazar distributions, the gamma-ray photon index is not a particularly useful discriminatory variable in any capacity, and the variability and spectral curvature measures in the Fermi catalog send mixed signals: the unassociated sources show the low variability expected of pulsars but the low spectral curvature expected of blazars, probably due to their inherently weaker photon statistics. Our Swift-XRT and -UVOT analysis adds three new distance-independent features; Figure 5 shows that log F_V/F_γ and log F_X/F_γ are features of moderate to major importance in the NNC weight matrix, as is the X-ray photon index. This reinforces the importance of systematic and methodical follow-up observations of unassociated gamma-ray sources at lower energies to uncover their astrophysical nature.
Overall, the spectral analysis and classification using Swift-XRT and -UVOT data herein is valuable not only for characterizing the Fermi unassociated sources but also for guiding targeted follow-up observations of likely pulsars and blazars, supporting numerous additions to the catalogs of known gamma-ray pulsars and blazars.
Comparison with Random Forest Classifier
A random forest (RF) classifier uses an array of decision trees to classify an object into one of several categories. Each individual decision tree consists of several inequalities over the different parameters of a dataset in a "choose your own adventure"-style series of judgements; compounded many times, an RF classifies an unknown object based on the fraction of the constituent trees returning each classification.
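For concreteness, a sketch of the RF analogue of the NNC's probability output, reusing the training arrays from the earlier sketches; the hyperparameters are illustrative, not those of Kerby et al. (2021).

```python
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

# predict_proba reports the fraction of trees voting for each class,
# so column 1 plays the role of P_bzr for the RF
p_bzr_rf = rf.predict_proba(X_valid)[:, 1]
```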
Previous works by this group on earlier catalogs of Fermi unassociated sources (Kaur et al. 2019; Kerby et al. 2021) used RF classifiers to discern likely pulsars from likely blazars among the Fermi-LAT unassociated sources. While we have used an NNC approach in this work, it is worthwhile to compare the NNC P_bzr output to the values produced by an RF classifier. The NNC and RF approaches make independent predictions of P_bzr for the unassociated sources, so similar outputs on the same dataset would increase confidence in the reproducibility of our results across different classification approaches. Additionally, comparing the two methods illuminates whether one is more descriptive or predictive in its P_bzr output on our multiwavelength data.
We applied the RF classifier developed in Kerby et al. (2021) to the 4FGL sample of this paper. Figure 6 shows that while the NNC and RF both classify many sources with P_bzr close to 0 or 1, the NNC classifies many sources as high or low P_bzr that the RF leaves ambiguous. Indeed, the NNC results have only a few sources with P_bzr not close to 0 or 1, and these sources almost uniformly have RF P_bzr values around 0.8. This trend suggests that the NNC is more sensitive to pulsars than the RF approach, classifying as likely pulsars sources that the RF method leaves ambiguous. The more bifurcated nature of the NNC P_bzr values compared to the RF results suggests that our NNC approach tends towards more confident classification and leaves fewer sources ambiguous, producing more immediately verifiable results rather than hedging with ambiguity.
Summary and Next Steps
In this work we have classified 174 unique gamma-ray/X-ray/UV/optical sources into 14 likely pulsars, 132 likely blazars, and 28 ambiguous sources. Using Swift-XRT and -UVOT observations, we built a collection of results, presented in Tables 3 and 4, representing a significant set of observations within the uncertainty ellipses of the 4FGL unassociated gamma-ray sources. It is likely that the X-ray/UV/optical sources described herein have the same astrophysical origin as the gamma rays described in the Fermi-LAT unassociated catalog, so the unique correspondences in Table 3 describe the lower-energy spectra of the astronomical objects behind the gamma-ray emission of Fermi unassociated sources. Next, we built a neural-network classification approach that uses spectral information to divide the unassociated sample into likely blazars and pulsars based on samples of known gamma-ray blazars and pulsars, reaching higher accuracy on validation subsamples and greater confidence in the classification of unassociated sources than previous approaches. Of the 174 unique gamma-ray/X-ray/UV/optical spectra constructed and described in Table 3, 132 are P_bzr > 0.99 likely blazars and 14 are P_bzr < 0.01 likely pulsars. Leveraging the advantages of multiwavelength analysis, our new subsamples of likely pulsars and blazars can expand the known gamma-ray pulsar and blazar catalogs to include sources with lower gamma-ray luminosity that were previously unassociated.

Figure 6. Comparison of the P_bzr scores from the NNC and RF classifiers for the 4FGL unassociated sample, using the exact same training and unknown datasets. The histograms of the two datasets have limited Y-axes to show the sparsely populated bins between the two extremes. In both cases, the number of sources with P_bzr close to unity is over 200.
As Swift continues its observation campaign of the Fermi-LAT unassociated sources, additional X-ray sources will be detected around previously unobserved targets. Planned follow-up observations across the electromagnetic spectrum can continue to investigate interesting sources discovered herein, including likely pulsars and ambiguously classified objects. These additional observations could validate likely-pulsar classifications by searching for radio pulsations near the X-ray source, or investigate ambiguous sources in greater detail to discern their true origin.
The subsamples of likely pulsars and blazars classified with our NNC approach are prime candidates for inclusion in population studies of gamma-ray pulsars and blazars. For example, using archival infrared observations of the likely blazars should allow for classification into likely BL Lac or likely FSRQ subsets, illuminating the biases and drawbacks of the Fermi blazar catalogs used to investigate blazars as a class of AGN. The likely pulsar subset, if validated with radio pulsation searches, would expand the list of known gamma-ray pulsars by almost 10%.
Software: Astropy (The Astropy Collaboration et al. 2013), numpy (Harris et al. 2020), Matplotlib (Hunter 2007), scikit-learn (Pedregosa et al. 2011), FTools (Blackburn 1995)

ACKNOWLEDGMENTS

This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. We gratefully acknowledge the support of NASA grants 80NSSC17K0752 and 80NSSC18K1730. E. Ferrara is supported by NASA under award number 80GSFC17M0002. Fermi research at NRL is supported by NASA.

Table 3. Fermi-LAT features for the unassociated sample investigated in this work, along with Swift-XRT and -UVOT parameters for the likely X-ray/UV/optical counterpart. Only unassociated sources with a single possible counterpart are included in this table, and sources are organized by P_bzr, listing likely pulsars and likely blazars separately. The extraction and derivation of the parameters are described in Sections 3 and 2. The m_V estimates were produced using the noted UVOT filter, the closest available to the V-band central wavelength.

Table 4. Fermi-LAT features for the unassociated sample investigated in this work, along with Swift-XRT and -UVOT parameters for all possible X-ray/UV/optical counterparts. Only unassociated sources with multiple notable X-ray sources within the Fermi-LAT uncertainty ellipse are included here.
The extraction and derivation of the parameters are described in Sections 3 and 2. The m_V estimates were produced using the noted UVOT filter, the closest available to the V-band central wavelength. Two sources, noted with asterisks, are coincident with dim catalogued stars; the UV/optical emission from those stars is probably not related to a pulsar or blazar, and thus the P_bzr values should not be trusted.

Table 5. Results from a SIMBAD position cross-reference for the possible counterparts to unassociated sources with multiple notable excesses in the gamma-ray uncertainty ellipse, helping discriminate excesses that might be eliminated from consideration as possible counterparts given additional astrophysical information.
Target name | XRT excess name | Cross-reference results