text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Ab initio parameterisation of the 14 band k·p Hamiltonian: Zincblende study
Despite continued and rapid progress in high performance computing, atomistic level device modelling is still largely out of reach, necessitating the use of quantum mechanical continuum methods, including k·p perturbation theory. The effective use of such methods requires reliable parameterisation, often obtained from experiment and ab initio calculations. A major limitation of this, the systematic tendency of ab initio density functional theory to underestimate semiconducting material energy band gaps and related properties, can be greatly improved upon by the inclusion of exact exchange, calculated within the Hartree-Fock formalism. We demonstrate that the 14 band k·p Hamiltonian can be effectively parameterised using this method, at greatly reduced cost in comparison to GW methods.
Introduction
The modelling of semiconductor devices typically requires multiscale methods, whereupon calculations made at fundamental levels of theory are used to feed more approximate, but critically less computationally expensive, methods. Ab initio density functional theory (DFT) is commonly used for atomistic level calculations, which are then used to parameterise tightbinding (TB), empirical pseudopotential (EPM), or multiband k·p Hamiltonians [1]. Kohn-Sham DFT, however, does not yield the real band structure of semiconductors. Although a good approximation, the energy band gaps are systematically underestimated in this class of materials.
Many body perturbation theory, typically Green's function methods, can be used to more accurately predict the band gaps [2], but at considerable computational expense. An efficient and reasonably accurate alternative is the use of hybrid functionals [3,4], which incorporate into the DFT exchange-correlation energy either a screened long range Coulomb term or a fraction of exact exchange calculated from Hartree-Fock theory using the Kohn-Sham orbitals.
Using this method, we calculate the necessary energy band gaps, dipole matrix elements, and effective masses at characteristic points in the first irreducible Brillouin zone (IBZ), i.e. at Γ point, to parameterise the 8 and 14 band k·p Hamiltonians. Although the 8 band k·p Hamiltonian, which has C 4v symmetry, is popular and widely used, it overlooks the correct C 2v atomistic symmetry of the zincblende (ZB) lattice [5]. In order to restore the correct symmetry it is necessary to implement the 14 band model, including couplings induced by the second conduction band (labelled as Γ 5c in fig. 1) stemming from p-bonding states of atoms in the ZB lattice [5]. We compare both to the DFT band structures in demonstration of this. We further show the importance of non-locality in hybrid DFT calculations to the accurate prediction of band parameters.
Methodology
All DFT calculations are performed using the Crystal [6,7] code, which implements localised basis sets of Gaussian type orbitals (GTOs). The calculation of exact exchange is very efficient using this basis. The hybrid PBE0 [8] and B3LYP [9] exchange correlation functionals are used, the former incorporating 25% of exact exchange energy to the latter's 20%. The IBZ of the ZB lattice unit cell is sampled using the Monkhorst-Pack scheme [10], with shrinking factors 16×16×16. Dipole matrix elements p ij are calculated from Bloch functions, u ki (r), such that: From the dipole matrix elements, Eq. 1, the Kane energies are calculated as follows: The kppw code [11], parameterised with the ab initio data, is used for all k·p calculations. Effective masses are defined by the curvature of the band dispersion at high symmetry points in the IBZ, in this case Γ: An alternative definition of the effective mass, from k·p theory, depends on the interaction between bands around the conduction and valence band edges, where i is the band for which the effective mass is being calculated:
Results and discussion
The effective mass calculated using this expression can be systematically improved by considering additional interactions from remote bands. For the methodology to be consistent, convergence of the two should be observed as further bands are considered in Eq. 4. Indeed, this is observed, as shown in Table 2. We use shortened notation: GaAs 0.0561 0.0552 0.0561 0.0561 Figure 1: Schematic of ZB band structure around Γ point. Top of valence band, Γ 5v , bottom of conduction band, Γ 1c , and second conduction band, Γ 5c . Energy gaps, E g i , and coupling parameters, p i , correspond to the notation in table 1. The Luttinger parameters relate to the valence band edge effective masses along directions of high symmetry in the IBZ and largely determine the band curvature. They are calculated from appropriate pairs of the following system of equations, along different high symmetry directions: As the effective mass tensors are anisotropic, we obtain different Luttinger parameters along each direction, as detailed in table 1. Fig. 1 depicts the schematic band structure of the ZB structure, taken into account by the 14 band k·p Hamiltonian. The 8 band Hamiltonian (C 4v ) includes only the Γ 5v valence and Γ 1c conduction bands, coupled by the p 0 dipole interaction. The p 1 interaction is isomorphic with p 0 , however the inclusion of the p 2 interaction between Γ 5v and Γ 5c bands has the effect of reducing the symmetry to C 2v , as required to resemble the correct atomistic structure of ZB lattice.
Figs. 2 and 3 show the band structures along the <001> direction in the IBZ for GaAs and CdSe, respectively. Largely good agreement between the DFT and k·p calculations is seen up to 0.1Å −1 . In addition to observing the correct C 2v symmetry, the 14 band model shows a notable improvement in its depiction of the heavy hole band well beyond this region. Of greatest note is the inclusion of the higher conduction bands in the 14 band model.
Conclusion
We have demonstrated that the use of hybrid functionals in DFT calculations can be used to effectively and accurately parameterise multiband k·p Hamiltonians for some common ZB semiconductors. Further, we have shown that the 14 band model, by inclusion of higher conduction bands and having the correct C 2v symmetry of the ZB lattice, affords greater accuracy in the prediction of the band structure than the more widely used 8 band model, which is typically synonymous with k·p perturbation theory. | 1,428 | 2014-06-18T00:00:00.000 | [
"Chemistry"
] |
Optical data transmission at 44 Terabits/s with a 49GHz Kerr soliton crystal microcomb
We
report world record high data transmission over standard optical fiber from a
single optical source. We achieve a line rate of 44.2 Terabits per second
(Tb/s) employing only the C-band at 1550nm, resulting in a spectral efficiency of
10.4 bits/s/Hz. We use a new and powerful class of micro-comb called soliton
crystals that exhibit robust operation and stable generation as well as a high
intrinsic efficiency that, together with an extremely low spacing of 48.9 GHz
enables a very high coherent data modulation format of 64 QAM. We achieve error
free transmission across 75 km of standard optical fiber in the lab and over a
field trial with a metropolitan optical fiber network. This work demonstrates
the ability of optical micro-combs to exceed other approaches in performance
for the most demanding practical optical communications applications.
. Field trial network in greater Metropolitan Melbourne. Abstract-We report world record high data transmission over standard optical fiber from a single optical source. We achieve a line rate of 44.2 Terabits per second (Tb/s) employing only the C-band at 1550nm, resulting in a spectral efficiency of 10.4 bits/s/Hz. We use a new and powerful class of micro-comb called soliton crystals that exhibit robust operation and stable generation as well as a high intrinsic efficiency that, together with an extremely low spacing of 48.9 GHz enables a very high coherent data modulation format of 64 QAM. We achieve error free transmission across 75 km of standard optical fiber in the lab and over a field trial with a metropolitan optical fiber network. This work demonstrates the ability of optical micro-combs to exceed other approaches in performance for the most demanding practical optical communications applications.
I. INTRODUCTION
Kerr micro-combs [1][2][3][4] offer the full potential of their bulk counterparts [5,6] but in an integrated footprint, since they generate optical frequency combs in integrated micro-cavity resonators. The realization of soliton temporal states called dissipative Kerr solitons (DKSs) [7][8][9][10][11] opened up a new method of mode-locking micro-combs that has in turn underpinned major breakthroughs in many fields such as spectroscopy [12,13], microwave and RF photonics [14], optical frequency synthesis [15], optical ranging including LIDAR [16,17], quantum photonic sources [18][19][20][21], metrology [22,23] and much more. One of their most promising applications has been in the area of optical fibre data communications where they have formed the basis of massively parallel multiplexed ultrahigh capacity optical data transmission [4, 24 -26]. In this paper, [26] by employing a powerful new type of micro-comb based on soliton crystals [11], we report a world record speed of data transmission across standard optical fibre from any single optical source. We achieve a line rate of 44.2 Terabits/s (Tb/s) utilizing only the telecom 1550nm C-band, and achieve a very high spectral efficiency of 10.4 bits/s/Hz. Spectral efficiency is a critically important performance parameter since it directly governs how much total bandwidth can be realized in a system. Soliton crystals display very stable and robust operation and generation as well as a very high intrinsic conversion efficiency that, all taken together with the extremely low soliton micro-comb FSR spacing of 48.9 GHz that we achieve, enabled us to use a record high modulation coherent data format of 64 quadrature amplitude modulation (QAM). We demonstrate error free data transmission across a 75 km distance of standard optical fibre in our lab, but more importantly in a real-world field trial in an installed metropolitan area optical fibre testbed network in the Melbourne region. Our results were significantly helped by the capacity of the soliton crystals to work without any stabilization or feedback control at all, but only with very simple open loop systems. This significantly reduced the amount and sophistication of the instrumentation required. Our work directly proves the capability of optical Kerr microcombs to out-perform any other approach for practical demanding optical communications systems.
Currently, 100's of Terabits/s are transmitted every instant across the world's fibre optic networks and the global bandwidth is growing at a rate of 25% /yr [27]. Ultrahigh capacity data links that use parallel massive wavelength division multiplexing (WDM) systems combined with coherent advanced modulation formats [28], are critical to meet this demand. Space-division multiplexing (SDM) is another emerging approach where multiple signals are transmitted either over multiple core or multiple mode fibre, or both [29]. In parallel with all of this, there is a growing movement towards very short links but still with very high capacity, particularly for data centres. Even just ten years ago, long haul networks such as undersea links spanning thousands of kilometres, used to dominate the global infrastructure, but nowadays the demand has dramatically shifted towards smaller scale applications including the aforementioned data centres as well as metropolitan area networks (tens to hundreds of kilometres in size). These trends demand highly compact, energy efficient and low-cost devices. Photonic integrated circuits are the only approach that can address these needs, where the optical source is absolutely key to each link, and therefore has the greatest need to meet these requirements. The capability of generated all of the wavelengths by a single chip that is both integrated and compact in order to replace multiple lasers, will yield the highest benefits [30][31][32].
Kerr optical microcombs have attracted a great deal of interest and one of their main applications has been in this area. They have successfully been used as optical sources for ultra-high bandwidth optical fiber transmission of data [24 -26]. A key factor has been achieving the capacity to modelock all of the microcomb lines, and this has been characterized by the discovery of new states of temporal optical soliton oscillation that include feedback-stabilized Kerr combs [25], dark solitons [32] and dissipative Kerr solitons (DKS) [24]. The last one (DKS) has achieved the greatest success, being the basis of extremely high data transmission rates across the full C and L telecom bands, at a rate of 30 Tb/s using only a single source, and 55 Tb/s by using two microcombs [24].
Despite this success, though, micro-combs still need to be even more stable and simpler and robust in both operation and generation, in order to meet the demands of real-world installed fibreoptic systems [26. 28 -32]. They particularly must work without the need for complicated stabilization feedback, preferably in uncomplicated open-loop fashion and without the need for complicated pumping schemes that DKS states need in order to be generated. Furthermore, the conversion efficiency from pump to comb lines must be much higher and their threshold pump power much lower. Systems that use microcombs also must achieve a much higher spectral efficiency (SE) since to date they have only achieved about ¼ of the theoretical maximum. Spectral efficiency is an absolutely key and fundamental parameter that limits the total data capacity of systems [28,29].
In this paper, [26] we demonstrate a world-record high bandwidth for optical fibre data transmission using standard single mode fibre together with a single optical source. Our use of a new and powerful type of micro-comb that operates through states that have been called "soliton crystals" [11,26], based on CMOS -compatible chips [2, 3, 33 -50], enabled us to reach a transmission data rate of 44.2 Terabits per second using only a single chip -almost 50% greater than previously achieved [24,25]. More importantly, we report a significant improvement by a factor of 3.7 times in the enormously important SE, achieving 10.4 bits / s / Hz which is a record high value for microcombs. We do this through the use of a very high coherent modulation format of 64 QAM, together with a microcomb that has a record low spacing, or FSR, at 48.9 GHz. We only use the telecom C-band, leaving room for significant expansion in our capacity. We report experiments in the lab with 75 km of fibre as well as over an installed optical fibre network in the greater Melbourne metropolitan area. These results were made possible because of the highly and stable and robust generation and operation of the soliton crystals, together with their very high natural efficiency. All of these features are intimately tied to the CMOS compatible nature of the integrated platform.
Oscillation states in micro-resonators that have a crystalline type of profile along the resonator path, forming in the angular domain of tightly packed self-localized pulses within micro-ring resonators [11]. Soliton crystals can occur in integrated ring resonators that have a higher order mode crossing. Further, they do not need the dynamic and very complicated pumping schemes or elaborate stabilization that self-localised DKS states need [51]. The basis of their stable behaviour originates from the fact that their intra-cavity power is dramatically higher than DKS states. In fact, it is very similar to the power levels of the chaotic temporal states [11,52]. As a result, there is a very small difference in power levels in the cavity when the soliton crystal states are created out of chaos, and so there is no change in the resonant frequency. It is this self-induced frequency detuning arising from thermal instability due to the soliton step that renders pumping of DKS states, for example, very challenging [53]. The combined effect of natural stability and robust and simple manual generation and the overall efficiency of soliton crystals that makes them extremely attractive for very high bandwidth data transmission exceeding a Terabit per second.
II. EXPERIMENT
A map of the metropolitan network used for the system field trial is given in Fig. 1, while the soliton crystal comb spectrum is shown in Fig. 2 and the experimental setup for the demonstration of high capacity optical data transmission in Fig. 3. The microcomb featured a 48.9 GHz FSR, producing a soliton crystal output with a spectrum spanning across > 80 nm while pumping at 1.8 watts of CW power at a wavelength of 1550nm. The soliton crystal micro-comb was preceded first by the primary comb and displayed very variation in comb line powers at < +/-0.9 dB, for ten different incidents of initiation, and was achieved by sweeping the wavelength manually from 1550.300 -1550.527 nm. This clearly proves the micro-comb turn-key generation repeatability for our devices.
Out of the total number of generated comb lines, eighty were chosen from the 3.95 THz, 32 nm wide C-band window at 1536 -1567 nm. The spectrum was then flattened using a WaveShaper. Following this the number of wavelengths was doubled to 160, corresponding to a 24.5 GHz spacing, to increase the spectral efficiency. This was accomplished with a single sideband modulation technique that generated both even and odd channels that were not correlated. We then grouped six wavelengths, with the rest of the bands supporting data loaded channels based on the same even-odd structure. We were able to use a record high order 64 QAM coherent modulation format that modulated the whole comb at a baud rate of 23 Giga-baud, that achieved 94% utilization of the available spectrum. We performed 2 experiments, the first across 75 km of single mode optical fiber in the lab and the second in a field trial using a metropolitan network in the greater Melbourne area, also based o standard SMF (Fig. 1), which linked Monash University's Clayton campus to the RMIT campus in the Melbourne CBD. The signal was recovered at the receiver with a standard offline digital signal processor (DSP). The constellation diagrams (Fig. 4) at 194.34 THz for the back-to-back configuration where the transmitter was connected directly to the receiver, show that the quality of the signal as reflected in Q2, from error vector magnitude, was almost 18.5 dB, decreasing slightly to 17.5 dB the entire set of comb lines were modulated across the span.
III. RESULTS AND DISCUSSION
The performance of the transmission as measured by the bit error ratio (BER) as a metric for each channel is shown in Figure 5. We studied 3 cases: i) directly connecting the receiver to the transmitter stages termed back-to-back (B2B), following transmission across the ii) fiber in the lab and iii) transmitting the data across the installed metro area network. The performance for all the channels was degraded by transmission, but this was anticipated. Figure 5a shows the 20% threshold for soft-decision forward error correction (SD-FEC), a common benchmark for performance, using a proven code, at a BER of 4x10 -2 [53]. All measurements achieved under the FEC limit. However, since SD-FEC thresholds based on BER can be less accurate at higher modulation formats as well as at higher BERs [54][55], we also used generalized mutual information (GMI) to determine the performance of the system. Figure 5 shows the GMI for every channel as well as its corresponding SE, where we include lines to indicate the projected overheads. We succeeded in demonstrating a line bit rate (raw bitrate) of 44.2 Terabits per second, which corresponds to a net coded rate at 40.1 Terabits/s (for B2B), which dips to 39.2 and 39.0 Terabits/s in the lab and metro network trials, respectively. We also achieved SEs reaching 10.4, 10.2 and 10.1 bits / s / Hz.
Our data rate is an increase of almost 50% compared to the highest previously reported values achieved with a single source [26]. Even more importantly the SE is enhanced even more, being a factor of 3.7 higher than previous reports. This is quite extraordinary since we conducted the experiments under the most challenging of conditions. This includes the absence of any closed loop feedback systems or external stabilization, as well as without the use of any complex generation pump methods. On top of this, we actually fully flattened or equalized, the comb lines, even though this was not required [56]. We did this because we wanted to address any possibilities of the non-flat soliton crystal comb spectrum being construed as representing any sort of limitation. Since we performed our experiments with the use of comb flattening, and this was not necessary, then doing the experiments without flattening would only reduce the system impairments and would actually improve our results even further. Hence, in doing this we clearly show that having a nonuniform spectrum is not any sort of limitation. This identical line of argument applies to the issue of not needing any closed-loop feedback control for the micro-comb. We could always An indicative FEC threshold is given at 410 -2 , corresponding to a pre-FEC error rate for a 20% soft-decision FEC based on spatially-coupled LDPC codes [25] (dashed line). After transmission, all channels were considered to be error-free, b) GMI and spectral efficiency measured for each comb line. GMI was calculated after normalization to scale measured constellations in order to account for received signal-to-noise ratio (SNR). Lines are for 20% and 10% overheads. Spectral efficiency was derived from GMI, and the ratio of symbol rate to comb spacing. GMI indicates a higher overall capacity than BER with the indicated SD-FEC threshold, as GMI assumes the adoption of an ideal code for the system. include this as all other experiments have done, and this would again improve our performance even more. The record high spectral efficiency and absolute bandwidth that we achieve were greatly aided by the very high conversion efficiency we achieved between the pump and the soliton crystal comb lines [11,52]. Again, as mentioned this results from the very small power step in the cavity that occurs when the soliton crystals are generated from the chaos states.
We only used the telecom C-band, and yet the bandwidth of the microcomb was larger than 80 nm. Therefore, wavelengths in both the L (1565-1605 nm) and even S (1500-1535 nm) bands could easily be used. In fact even broader bandwidths can be achieved by increasing the power, by varying the wavelength of the pump, by engineering the dispersion or further methods. This would yield an increase of more than a factor of 3 in total bandwidth, resulting in >120 Terabits per second using only a single source.
Achieving even lower spacings, or FSRs, with miro-combs would yield yet higher SEs since the quality of the signal increases for smaller baud rates. This may result in a smaller overall comb bandwidth however. For our experiments, the use of single sideband modulation allowed multiplexing two channels using one single wavelength, which cut the comb spacing by a factor of 2 while enhancing the back-to-back performance that was limited by transceiver noise. This was made possible by the stability of the soliton crystals. Conversely, electro-optic modulation has also been used to sub-divide the micro-comb repetition rate, and this would also create broader comb bandwidths. This, however, would require locking the comb FSR spacing to an external RF source, although this is feasible since sub megahertz stabilization of microcombs has been achieved [57,58]. Furthermore, increasing the comb conversion efficiency by using a newly discovered class of soliton, called laser cavity-soliton micro-combs [34] will offer a powerful way to increase the system capacity as well as the quality of the signal even further. For recently installed networks, our approach can easily be complemented by using spatial division multiplexing based on multiple core fibre [29,59], yielding bandwidths of more than a petabit per second using a single microcomb. Our results join the many breakthroughs achieved with microcombs, and in particular using soliton crystal combs. These particularly include our applications of soliton crystals to RF and microwave signal processing [60 -81]. This work presented here is the most challenging demonstration ever reported for micro-combs in terms of ease of generation, coherence, stability, noise, efficiency, and others, and is a direct result of the superior soliton crystal microcomb qualities.
IV. CONCLUSIONS
We demonstrate a new world record for performance of ultra-high bandwidth optical transmission systems using a single optical source over standard optical fiber. We achieve this through the use of soliton crystal micro-combs that have a very low FSR spacing of 48.9GHz. Our achievement results from this record low comb spacing together with the efficient, broad bandwidth, and stable nature of soliton crystals, together with their CMOS compatible integration platform. Soliton crystal micro-combs are fundamentally low noise and coherent and can easily be initialised and operated using only very simple open-loop control that only requires commercially available components. Our results clearly show the ability of soliton crystal microcombs to achieve world record high bandwidths for optical data transmission over fibre in very demanding real-world applications. | 4,274.2 | 2020-10-30T00:00:00.000 | [
"Physics"
] |
A Maximally Split and Adaptive Relaxed Alternating Direction Method of Multipliers for Regularized Extreme Learning Machines
: One of the significant features of extreme learning machines (ELMs) is their fast convergence. However, in the big data environment, the ELM based on the Moore–Penrose matrix inverse still suffers from excessive calculation loads. Leveraging the decomposability of the alternating direction method of multipliers (ADMM), a convex model-fitting problem can be split into a set of sub-problems which can be executed in parallel. Using a maximally splitting technique and a relaxation technique, the sub-problems can be split into multiple univariate sub-problems. On this basis, we propose an adaptive parameter selection method that automatically tunes the key algorithm parameters during training. To confirm the effectiveness of this algorithm, experiments are conducted on eight classification datasets. We have verified the effectiveness of this algorithm in terms of the number of iterations, computation time, and acceleration ratios. The results show that the method proposed by this paper can greatly improve the speed of data processing while increasing the parallelism.
Introduction
The extreme learning machine (ELM) has been extensively applied in many areas [1] due to its fast learning ability and satisfactory generalization performance.The regularized ELM (RELM) [2] is an extended variant of the standard ELM [3] which improves the generalization performance and stability of ELMs by adding a regularization term in the loss function [4].However, the dimension and the volume of data have increased significantly with the development in big data.When the number of training samples and the number of hidden layer nodes are especially large, the size of the output matrix of the ELM model will be particularly large.Therefore, the calculation of the ELM based on the Moore-Penrose matrix inverse requires humongous storage and calculations, significantly increasing the computational complexity of the ELM.
To address the above problems, several enhanced ELMs were proposed.By decomposing the data matrix into a set of smaller block matrices, Wang et al. [5] adopted a clustering technique with a message-passing interface to train the block-matrix-based ELM in parallel with the aim of improving computing efficiency.Liu et al. [6] proposed a Spark-distributed parallel computing mechanism to achieve a parallel transformation of ELMs.Chen et al. [7] used a clustering technique with GPUs to parallelize ELMs.Based on the Spark framework, Duan et al. [8] improved the learning speed of the ELM when processing large-scale data by dividing the dataset.All the methods discussed above focus on computation schemes of the RELM using parallel or distributed hardware structures.However, the matrix-inversionbased (MI-based) solution process has low efficiency and high computational complexity, leading to low convergence [9].Therefore, all the methods discussed above cannot solve the problem of the low efficiency of RELMs in the big data scenario.
The alternating direction method of multipliers (ADMM) is a powerful computational framework for separable convex optimization.It has been extensively applied in many fields owing to its fast processing speed and convergence performance [10].Wang et al. [11] used the ADMM to solve the center selection problem in fault-tolerant radial-basis-function networks.Wei et al. [12] applied the ADMM to neural networks to solve the problem of slow training for large-scale data.Wang et al. [13] applied the ADMM to SVMs to achieve distributed learning by splitting the training samples.Luo et al. [14] used the decomposability of the ADMM; then, the regularized least-squares (RLS) problem of the RELM can be split into a set of sub-problems that can be executed in parallel to achieve the purpose of improving computation efficiency.Li et al. [15] used the ADMM to solve the predictive control problem of a distributed model, which enables the model to have a fast-response ability.Xu et al. [16] applied the ADMM to a quantized recurrent neural network language training model into an optimization problem to improve its convergence speed.
One of the main problems of the classical ADMM is the convergence speed.In general, the numerical performance of the ADMM largely depends on an effective solution to the sub-problems-there can be several different sub-problem splitting representations in practical applications.Thus, a generalization of the N-block ADMM is needed because the classical ADMM algorithm is only suitable for solving two-block convex optimization problems, and the sub-problems structure cannot be fully utilized.
To further improve the ADMM convergence speed and generalization performance, several extended variants of the ADMM were presented, including the generalized ADMM [17][18][19][20] and the relaxed ADMM (RADMM) [21].Lai et al. achieved fast convergence by using a novel relaxation technique to modify the ADMM to have an ADMM with a highly parallelized structure.Based on the RADMM, Xiaoping et al. [22] proposed a maximally split and relaxed ADMM (MS-RADMM) by considering splitting model coefficients to improve convergence and parallelism.Su et al. [23] introduced a binary splitting operator into the ADMM, and the optimal solution of the original problem is obtained through the iterative calculation of the intermediate operators to achieve the purpose of improving the convergence speed.Ma et al. [24] used an MS-RADMM with a highly parallel structure to optimize a 2D FIR filter, and a practical scheme for algorithm parameter setting was provided.Hou et al. [25] utilized a tunable step-size algorithm to accelerate MS-ADMM convergence speed.However, the convergence speed of the ADMM largely depends on the choice of parameters in the iterative process.For this reason, we propose an adaptive parameter selection method which uses the improved Barzilai-Borwein spectral gradient method to automatically tune the algorithm parameters to achieve an optimal convergence speed.
For the implementation of the MS-RADMM, we propose an adaptive parameter selection method for joint tuning of the penalty and relaxation parameters.Our main contributions are as follows: (1) Improving Global Convergence: To improve the global convergence of the algorithm, a non-monotonic Wolfe-type strategy is introduced into the memory gradient method.
The global optimal solution is achieved by combining the iteration information of current and past multiple function points.(2) Solving Sub-problem: To improve the convergence speed of the algorithm, the Barzilai-Borwein spectral gradient method is optimized by adding step-size selection constraints to simplify the computational complexity of the MS-RADMM sub-problems.
Fundamentals of the RELM and the ADMM
With an increase in the volume and complexity of datasets, the size of training samples N and the number of the hidden nodes L are very large.As such, the MI-based solutions require enormous memory space and suffer from excessive computational loads.To address these challenges, the ADMM is used to handle the convex model-fitting problem of the ELM.
RELM Method
As a training framework for solving single hidden-layer neural networks [26], the ELM has a good learning speed and generalization performance.For an m-category classification problem, assuming that the training sample is x and number of hidden-layer nodes is L, the ELM model output is given by where ] is the hidden-layer output matrix, w i , s i are the input weight and bias of the ith hidden nodes, g(.) is the activation function, β is the output weight matrix, and T denotes the target output matrix of the network.The actual performance of the ELM depends on the number of neurons in the hiddenlayer.If the number of neurons is too small, the extracted information is insufficient, and it is hard to generalize and reflect the inherent disciplines of the data.If the number is too large, the network structure is too complex, thus reducing the generalization performance.
To further improve the generalization performance and the stability, regularization theory is imported into the ELM to minimize the training error and the norm of the output weight matrix β [27][28][29].The RELM solves for the output weight β in the following RLS problem min where || • || F denotes the Frobenius norm, and µ > 0 is a regularizer that controls the tradeoff between the loss function and a regularization term.However, the MI-based RELM leads to an excessive computational load, particularly in problems concerning high-dimensional data.An effective way to solve large-scale data processing problems is through parallel or distributed optimization methods.The ADMM is a powerful technique for large-scale convex optimization.
ADMM for Convex Optimization
As a computational framework for solving constrained optimization problems, the ADMM achieves good performance on convergence speed and parallel structures.The ADMM [30] decomposes a large global problem into multiple local sub-problems, and the solution of the global problem is obtained through coordinating the solutions of the sub-problems.The following convex model-fitting problem is studied: where is the target output vector, f (.) means a convex loss function, and r(.) is a regularization term.By defining equality constraints z i = a i •x i , the model-fitting problem (3) can be transformed into The augmented Lagrangian of problem (5) is where ρ > 0 is the penalty parameter, and λ i ∈ R N×m is the dual variable.
The ADMM uses the Gauss-Seidel iteration method [31] to minimize the augmented Lagrangian function of optimized variables x and z and updates the dual variable λ according to a multiplier method.The iterative solution process of the model-fitting problem is easily obtained as The global optimal solution is obtained by alternately updating variables x and z [32].
Maximally Split and Relaxed ADMM
The numerical performance of the ADMM largely depends on the efficient solving of sub-problems [33].A maximally split technique and a relaxation technique are used to speed up the ADMM convergence [34].
The MS-ADMM splits the model-fitting problem into multiple univariate sub-problems flexibly with a reasonable scale.It reconstructs the method, based on matrix operations, ensuring that there is only one scalar component in each sub-problem.This gives the MS-ADMM an ideal highly parallel structure.By considering the L-partition ADMM [35], matrix A is simplified to a column vector a i and the vector coefficient x is simplified to a scalar coefficient x i .These scalar characteristics play an important role in improving the parallel computing efficiency and the highly parallel structure of the MS-ADMM.
On the basis of the MS-ADMM, the MS-RADMM [36] is acquired by adopting a relaxation technique.It reconstructs the convergence conditions; past iterations are considered in the next iteration, which makes the MS-RADMM have linear convergence.By enlarging the equality constraint residuals (7).The MS-RADMM is expressed as where α > 0 is the relaxation parameter, magnifying the equality constraint residuals.
Scalars MS-ARADMM
The efficiency of the MS-RADMM depends strongly on the choice of the penalty and relaxation parameters.A suitable parameter selection scheme is key to improving the computational efficiency of the MS-RADMM.
We propose an adaptive parameter selection method for the MS-RADMM and obtain the MS-ARADMM.The MS-ARADMM allows for automatic tuning of the key algorithm parameters to improve the convergence speed.The convergence is measured by using the primal and dual residuals, defined as From the perspective of the convergence principle, when the algorithm approaches the optimal solution, γ k , d k residuals are close to zero.The specific termination conditions are shown as where tol represents the stop tolerance and is a constant; the specific value can be set according to the actual error range.Considering the time cost in the experiment process, the stop tolerance is set to 10 −3 in this paper, and the setting of the stop tolerance is only related to the accuracy of the error and does not depend on the dataset.
Spectral Adaptive Step-Size Rule
The spectral adaptive step-size rule is derived by studying the close relationship between the RADMM [37] and the relaxed Douglas-Rachford splitting (DRS) [38].
For problem (5), assume that a local linear model of ∂ f (x) and ∂r(x) at iteration k is given by where θ k > 0, γ k > 0 are the local curvature estimates of f and r, respectively.ψ, φ are constants.
According to the equivalence of the RADMM and the DRS, the linear model is fitted to the gradient of the target by using DRS theory for problem (13).In order to obtain the optimal step-size with zero residuals on the model problem, such that f (x) + r(x) residuals are zero, the following needs to be satisfied: The optimal penalty parameter for the linear model is given by We can readily find the optimal relaxation parameter under the optimal penalty parameter condition
Estimation of Step-Size
The local curvature estimates θ k , and γ k can often be estimated simply from the results of iteration k and an earlier iteration k 0 .The initial value of the spectral step-size can be calculated by using the local curvature estimation, and the ADMM can modify the spectral step-size by updating the dual variables in the iterative process so as to achieve the best penalty parameter and relaxation parameter. Define where σ is a scaling parameter.
When solving an unconstrained optimization problem, dual variables λ k and spectral step-size affect the convergence performance of the MS-RADMM.At present, line search is commonly used to select θ k , γ k .We can overcome the oscillation phenomenon by adopting a non-monotonic technique.However, when the initial value is taken near a local valley of the function, it is easy to obtain a local extreme value.
To avoid being trapped in a local optimum, a non-monotonic Wolfe-shaped line search strategy is incorporated into the memory gradient method [40].By combining the iteration information of current and past multiple points, the global convergence of the algorithm is improved.
The dual variable update rule is derived from Combined with the idea of the Barzilai-Borwein gradient method, the spectral stepsize θ k is readily obtained as [41] The spectral step-size γ k is solved likewise.
Parameter Update Rules
In the case where the linear model assumptions break down or an unstable stepsize is produced, we can employ a correlation criterion to verify the local linear model assumptions. Define where θ cor k are the correlations between ∆ f k , and γ cor k are the correlations between ∆λ k .The update rules of penalty and relaxation parameters are given by where ε cor is the threshold of curvature estimation and is a constant, which is set as 0.2 in this paper with reference to paper [41].The setting of this threshold further avoids the problem of inaccurate curvature estimation and ensures convergence.
RELM Based on the Scalars MS-ARADMM
The MS-ARADMM is employed to solve the convex model-fitting problem of the RELM and improves the convergence speed of the RELM.
Scalars MS-RADMM for RELM
For an m-category classification problem, calculation of the RELM objective function (3) is equivalent to (4).First, the hidden layer output matrix H is acquired by using the RELM.Then, the MS-ARADMM algorithm is used to solve the optimal output weight of the RELM.The iteration process is given by )) where H j and H i are the jth row and the ith column of the matrix H, k represents the number of iterations, and m represents the number of columns in the matrix.The schematic diagram of the specific model is shown in Figure 1.
Simulation Experiment and Result Analysis
The MS-ARADMM-based RELM is used to train single hidden-layer feedforward neural networks (SLFNs) on eight datasets.The performances of the MS-ARADMM, the MS-RADMM, and the RS-ADMM are evaluated by convergence speed and time cost.The specifications of the datasets are shown in Table 1.
Performance Analysis of Adaptive Parameter Selection Methods
According to the principle of iterative calculation of the gradient algorithm, since the computational complexity of each iteration is the same, it means that the total number of iterations is positively correlated with the time cost.In other words, the convergence performance of the algorithm can be evaluated by comparing the iterative convergence curves of the algorithm, and the time cost of the algorithm can also be analyzed by analyzing the convergence curves.
In order to verify the convergence of the adaptive parameter selection method, numerical experiments were carried out under the same environmental conditions and compared with the current popular improved Barzilai-Borwein algorithms (MBBH, NABBH, and MTBBH).The effectiveness of the method was evaluated by comparing the total number of iterations at the end of execution.
The MBBH algorithm modifies the standard Barzilai-Borwein step-size [42] to have specific quasi-Newton characteristics.However, the curvature condition is not added, and the generated approximate Hessian matrix cannot meet the iterative requirements, which affects the speed of the algorithm.
The NABBH algorithm improves the convergence speed by simplifying the computational complexity of the inverse operation of Hessian matrix [43]; that is, only the inverse matrix of the first derivative matrix of the function is calculated, and the second derivative matrix of the function is omitted.A step-size selection strategy is designed to speed up the convergence of the algorithm.However, this algorithm fails to converge if the condition of monotonic decrease is not met at each iteration.
The MTBBH algorithm realizes monotonic descent by replacing the exact Hessian with a positive-definite data matrix.However, due to adoption of a non-monotonic technique, the algorithm easily falls into local optima.
For problems that tend to fall into local extremes, a new Barzilai-Borwein-type gradient method is proposed by modifying the original Barzilai-Borwein step-size.By introducing a non-monotonic Wolfe-type strategy into the memory gradient method, the global optimal solution is obtained.The purpose of improving convergence speed is achieved by adding step-size constraints [43].In theory, the proposed adaptive parameter selection method has better global convergence and convergence speeds.
A comparison of the performances of the MBBH, the NABBH, the MTBBH, and the proposed algorithms through tests was made.Table 2 and Figure 1 show the simulation results of different methods, which demonstrate the correctness of our theoretical analysis.According to Table 2, under different constraint conditions, the proposed method is found to terminate with the least number of iterations, indicating that it has the fastest convergence speed.It is also clearly shown in Figure 2 that the proposed method has better global convergence and non-monotonicity than the other algorithms.
Convergence Analysis
The key performances of the classification model is convergence speed and accuracy.Considering the background of big data, this paper focuses on the convergence speed of the model in algorithm optimization.In order to evaluate the effectiveness of the proposed algorithm, the convergence of the proposed algorithm is evaluated by comparing the time cost, the number of iterations convergence, and the classification accuracy with the newer improved ADMM algorithm.
The proposed MS-ARADMM is compared with the MS-AADMM and RB-ADMM methods on eight datasets.The experiment is conducted by setting the same termination conditions.The evaluation indicators include the number of iterations and computational time.
The RB-ADMM algorithm decomposes the objective function of the model into a loss function and a regularization function; it uses the ADMM to transform the least square problem into the least square problem without a regularization term so as to improve the calculation speed of the model.However, this method does not fully utilize the model structure of the ADMM, leading to slow convergence.
The MS-AADMM adopts a tunable step-size to accelerate convergence.However, parameter selection plays an important role in the convergence of the algorithm.Inappropriate parameter selection may cause the algorithm to not converge.
The MS-ARADMM is realized by employing an adaptive parameter selection method to improve the convergence speed.
Given the hidden-layer output matrix H, the optimal output weights of the MS-ARADMM are calculated with (27).The output weights of the RB-ADMM and the MS-AADMM are updated by the following:
Comparison of Convergence of MS-ARADMM and RB-ADMM
The difference between the output weight updates ( 27) and ( 28) is that all iterations in the MS-ARADMM are for scalar variable updates.The update method of scalar variables simplifies the sub-problem solving, thus improving the convergence.Although the RB-ADMM can adaptively choose penalty parameters to improve the convergence to a certain extent, it suffers from several flaws.The performance of the RB-ADMM can vary wildly depending on the problem size.Furthermore, without a suitable choice of a residual balancing factor, the algorithm may not converge.Aiming at solving the problems of the RB-ADMM, the MS-ARADMM implements adaptive parameter selection by adding step-size selection constraints, thereby improving the convergence.
The simulation results are given in Table 3.As can be seen from Table 3, with the optimization of the algorithm, the time and number of iterations spent by the model to process large-scale data become less and less, which also means that the algorithm proposed in this paper has a better convergence speed.At the same time, to see the improvement effect of the MS-ARADMM algorithm, the algorithm improvement effect calculated from the results in Table 3 is given in Table 4.This can be seen from Table 4 for the convergence speed improvement, in which the convergence speed of the MS-ARADMM is increased by an average of 99.3032% compared with the RB-ADMM in the two-category datasets.In the six-category datasets, compared with the RB-ADMM, the convergence speed of the MS-ARADMM is increased by 98.4375% on average.In the ten-category classification datasets, from Table 4, the convergence speed of the MS-ARADMM is increased by an average of 96.7624% compared with the RB-ADMM.
Comparison of Convergence of MS-ARADMM and MS-AADMM
As with the calculation formula of β in MS-ARADMM, the introduction of the scalar variable update method in MS-AADMM leads to much more efficient computation.However, parameter selection must be addressed.From the MS-AADMM perspective, this manner does not take into account that relaxation techniques can further accelerate the convergence.MS-ARADMM simplifies the calculation by designing an adaptive parameter selection method to jointly adjust the penalty and relaxation parameters.
From Table 3, the convergence speed becomes faster and faster.This can also be seen from Table 4 for the convergence speed improvement; the convergence speed of the MS-ARADMM is increased by an average of 69.2445% compared to the MS-AADMM in the two-category datasets.In the six-category datasets, compared to the MS-AADMM, the convergence speed of the MS-ARADMM is increased by 71.7948% on average.In the tencategory classification datasets, from Table 3, the convergence speed of the MS-ARADMM by an average of 48.9966% compared to the MS-AADMM.
For the case of the PCMAC, Pendigits, or Optical-Digits dataset, due to the limited dimension and size of the dataset, this dataset leads to lower improvements in the convergence speed.For instance, in the ten-category datasets, the USPS dataset already achieves improvements of 83.8709%.However, the Optical-Digits dataset only achieved improvements of 14.6341%.This huge difference arises from the fact that the MS-ARADMM is suitable for large-scale optimization problems.This greatly reduces the convergence speed improvement for the Optical-Digits dataset, because the size of the Optical-Digits dataset is 64 × 5620 and that of the USPS dataset is 256 × 9298.
Convergence Rate Comparison
Implicit in the MS-ARADMM is the assumption of automatically tuning the parameters to achieve an optimal performance.On this basis, we show that the MS-ARADMM generally gives better convergence than other algorithms.
The convergence performance of different algorithms is compared on eight benchmark datasets.Table 3 and Figure 3 show the simulation results of the three algorithms.The results are in full agreement with the theoretical analysis.According to Table 3, the MS-ARADMM algorithm has the lowest computational complexity and the least iterations of all datasets among all algorithms.From Figure 3, with a maximum of 2000 iterations and an error of 10 −3 , the MS-ARADMM can meet the termination condition within the minimum number of iterations.
Parallelism Analysis
Parallelism is an important indicator for evaluating the convergence speed of the ADMM algorithms.High parallelism performance can effectively relieve the computational burden and improve algorithm efficiency.To verify that the MA-ARADMM has a better convergence speed, simulations are carried out on the datasets.The parallelism performance of the MS-ARADMM is evaluated by analyzing the GPU acceleration ratios and the relationship between the acceleration ratios and the number of CPU cores.
Parallel Implementation on Multicore Computers
Using a maximally splitting technique, the RLS problem can be maximally split into univariate sub-problems that can be executed in parallel, leading to a highly parallel structure.
To verify our theoretical analysis, experiments are conducted on the Gisette dataset on different multicore computers.The relationship between acceleration ratios and the number of cores is characterized by the acceleration ratio R, defined by the single-core runtime divided by the n-core runtime.The experiments are carried out on three multi-core computers.The hardware configurations of the three computers are, respectively, an Intel Core i7-10700 8-core CPU @ 2.9 GHz, an Intel Core i7-4790 4-core CPU @ 3.60 GHz, and an Intel Core i7-8700 6-core CPU @ 3.2 GHz.
The three computers are shown in Figure 4. From Figure 4, the relationship between the acceleration ratios and the number of CPU cores is close to the lower bound, demonstrating the high parallelism of the MS-ARADMM.
Parallel Implementation on GPU
As one of the important indexes to evaluate the convergence performance of the algorithm, the high parallel performance effectively alleviates the computational pressure and further improves the operation efficiency of the algorithm.Through internal multiprocess parallel computing, the GPU can have a speed that is one order of magnitude higher than the CPU; it also has a strong ability of floating point arithmetics, which can greatly improve the computing speed of the ADMM and shorten the calculation time.
In case of high dimensional data, MI-based RELM requires a large amount of storage and computation.To verify the high parallelism of the algorithm, parallel accelerated experiments of MS-ARADMM-based and MI-based RELMs are realized on an NVIDIA GeForce GT 730 display card.The parallel implementations on the GPU are implemented by using the gpuArray function in the MATLAB toolbox.
The MS-ARADMM-based RELM splits the model-fitting problem into a set of univariate sub-problems that can be executed in parallel.Its convergence speed is improved by the parameter selection scheme.Theoretically, the MS-ARADMM has good convergence speed and parallelism.
The simulation results from all of the datasets are given in Table 5.From Table 5, on all the datasets except USPS, Pendigits, and Optical-Digits, the computational complexity of the MS-ARADMM-based RELM is much smaller than that of the MI-based RELM.On all of the datasets, the computational complexy of the MS-ARADMM-based RELM is much smaller than that of the MI-based RELM when implemented on the GPU.The acceleration ratio of the MI-based method is about 5.3443, whereas that of the MS-ARADMM is about 23.5065-an acceleration of four times that of the MI-based method.
Accuracy Analysis
The classification accuracy is an important indicator of classifier performance.Accuracy was compared on the MS-ARADMM-based, MS-AADMM-based, and MI-based RELMs.
Table 6 compares the training accuracy and the testing accuracy of the MS-ARADMM, the MS-AADMM, and the MI-based RELMs.From Table 6, we can see that the classification accuracy by the MS-ARADMM is not affected.From Tables 3 and 6, under approximately identical training and the testing accuracy, the computational time for the MS-ARADMM is less than those of both the MI-based and the MS-AADMM-based methods.Thus, the convergence speed of MS-ARADMM is greatly improved in solving large-scale optimization problems.
Conclusions
In this paper, an MS-ARADMM algorithm is proposed to solve the RLS problem in the RELM.Its novelty is reflected in two aspects: (1) The non-monotonic Wolfe-type strategy is introduced into the memory gradient method to improve the global convergence; (2) The step selection constraint is added to simplify the computational complexity of MS-RADMM subproblems.Since the MS-ARADMM is a convex optimization method with superlinear global convergence, it can ensure a fast response and global optimal solution of the RELM, so it is more suitable to realize the distributed computation of large-scale convex optimization problems of the RELM compared with other ADMM methods.
We focused on the influences of parameters ρ and α on the convergence performance of the RELM model.To verify the performance of the proposed algorithms, we applied them to various large-scale classification datasets, and compared the simulation results with the methods implemented in Tables 2 and 3.The results confirm that the computation efficiency of the RELM model is obviously improved, especially when applied to large-scale convex optimization problems.Therefore, the MS-ARADMM algorithm could enhance the convergence speed since it has a simpler solution process.
Figure 1 .Algorithm 1 : 2 Initialization; 3 4 5 6
Figure 1.Illustration of the MS-ARADMM-based RELM.4.2.Learning Algorithm for MS-ARADMM-Based RELMBy adding step-size selection constraints in the MS-ARADMM iteration, it is ensured that the penalty and relaxation parameters can converge under the bounded conditions.The steps are shown in Algorithm 1.
Figure 2 .
Figure 2. Convergence comparison of different methods.
Table 2 .
Comparison of iterations for different methods.
Table 3 .
Comparison of the convergence speed of RELM for different algorithms.
Table 4 .
Comparison of convergence speed improvement of MS-ARADMM and different methods.
Table 6 .
Training accuracy and testing accuracy. | 6,549.2 | 2023-07-21T00:00:00.000 | [
"Computer Science"
] |
Bound-state effects on dark matter coannihilation: Pushing the boundaries of conversion-driven freeze-out
Bound-state formation can have a large impact on the dynamics of dark matter freeze-out in the early Universe, in particular for colored coannihilators. We present a general formalism to include an arbitrary number of excited bound states in terms of an effective annihilation cross section, taking bound-state formation, decay and transitions into account, and derive analytic approximations in the limiting cases of no or efficient transitions. Furthermore, we provide explicit expressions for radiative bound-state formation rates for states with arbitrary principal and angular quantum numbers $n,\ell$ for a mediator in the fundamental representation of $SU(3)_c$, as well as electromagnetic transition rates among them in the Coulomb approximation. We then assess the impact of bound states within a model with Majorana dark matter and a colored scalar $t$-channel mediator. We consider the regime of coannihilation as well as conversion-driven freeze-out (or coscattering), where the relic abundance is set by the freeze-out of conversion processes. We find that the region in parameter space where the latter occurs is considerably enhanced into the multi-TeV regime. For conversion-driven freeze-out, dark matter is very weakly coupled, evading direct and indirect detection constraints but leading to prominent signatures of long-lived particles that provide great prospects to be probed by dedicated searches at the upcoming LHC runs.
I. INTRODUCTION
Thermal freeze-out of dark matter has proved to be a successful framework for explaining the measured dark matter abundance in the Universe. However, the sizeable couplings of dark matter to the Standard Model (SM) particles required in the simplest realizations of this mechanism have been put under pressure by experimental null-results at colliders [1], direct [2] and indirect [3] detection experiments. Hence, fulfilling the relic density constraint often requires the exploration of 'exceptional' [4] regions, e.g. the region where coannihilation effects increase the effective annihilation rate [5].
Such effects commonly occur in models with a so-called t-channel mediator, where the mediator is taken to be odd under the Z 2 -parity that stabilizes dark matter and for relatively small mass splittings between the mediator and the dark matter particle. Prominent and wellstudied examples are the sfermion coannihilation regions in the minimal supersymmetric standard model (MSSM), see e.g. [6][7][8]. They have, in turn, motivated a wide range of phenomenological studies of t-channel mediators in the simplified model framework exploring different spin assignments and a wide range of coupling strengths [9][10][11][12][13][14].
While the presence of coannihilating mediators can increase the effective dark matter annihilation rate, toward small mass splittings, mediator pair-annihilation alone can become so efficient that dark matter is rendered under-abundant (seemingly) independent of the dark matter coupling. However, this conclusion is only valid if dark matter remains in chemical equilibrium with the mediator during freeze-out through efficient conversions. Dropping this assumption opens up a cosmologically viable part of the parameter space where the relic density is set by conversion-driven freeze-out [15] (or coscattering [16]). 1 In this scenario, thermal decoupling is initiated by the breakdown of efficient conversions between dark matter and the coannihilating partner(s). The required coupling is several orders of magnitude smaller than the one required to initiate the breakdown of dark matter pair-annihilations. This is due to the significantly larger number density for light standardmodel initial states in conversion processes with respect to the dark matter number density.
The boundary between the coannihilation and conversion-driven freeze-out region marks a significant change in the phenomenology within the parameter space of a given model. While the former is characterized by sizeable couplings that give rise to observable signals in conventional dark matter searches, the latter is largely immune to constraints from (in)direct detection but predicts long-lived particles with typical lifetimes of the order of millimeters to meters to be searched for at the LHC. The conversion-driven freeze-out region was unex-plored terrain for a long time, and often flagged underabundant when displaying the viable parameter space in terms of masses, see e.g. [9]. Recently, it has been studied in various contexts [17][18][19][20][21][22] and often constitutes one of a few regions still allowed within a given model [14].
For electrically and color-charged coannihilators -interacting via massless force carriers -non-perturbative effects such as Sommerfeld enhancement and, in particular, bound-state formation can play an important role in dark matter freeze-out. Radiative bound-state formation has been studied for a variety of dark matter models and for general unbroken Abelian and non-Abelian gauge theories [23][24][25][26][27]. The latter is related to earlier results for quarkonium formation inside the quark-gluon plasma obtained in potential nonrelativistic quantum chromodynamics (pNRQCD), see e.g. [28,29]. Recently, nextto-leading-order (NLO) finite temperature corrections of the general singlet-adjoint dipole interactions have been computed [30].
While it has been shown that bound-state formation effects provide sizeable corrections to the effective annihilation cross section for a coannihilation scenario with a colored mediator [26,31,32], it has widely been overlooked that their effects become considerably more relevant for scenarios with small dark matter couplings such as conversion-driven freeze-out. As a consequence of the small coupling, freeze-out is a prolonged process and the mediator annihilation down to significantly smaller temperatures (i.e. later times) becomes important. This increases the relevance of bound-state effects further prolonging the freeze-out process. Furthermore, studies have focussed on the effect of the ground state, while excited bound states become (increasingly) relevant toward smaller temperatures.
In this work, we extend the study of bound-state effects in several aspects: • First, we revisit the formulation of the underlying Boltzmann equations in the presence of excited bound states and derive a general framework for incorporating their effects in terms of an effective annihilation cross section of the coannihilator. This general form requires not only the knowledge of bound-state formation and decay rates but also the transition between the various excited states. However, we formulate two meaningful limiting cases considering fully efficient or non-efficient transitions, the latter of which is considered as a (conservative) benchmark scenario. This part is model independent and applies to any set of bound states in general.
• Secondly, focussing on the case of a colored mediator in the fundamental representation of SU (3) c , we derive general expressions for the bound-state formation rates of arbitrary n, (the principal and angular momentum quantum numbers of the bound state, respectively) and estimates for the transition in some cases. Furthermore, we discuss the impact of higher-order corrections to the bound-state formation and decay rates.
• Finally, we assess the impact of bound states for a colored t-channel mediator model and study the phenomenological consequences of bound-state effects in the coannihilation and conversion-driven freeze-out region. In particular, we observe a drastic shift in the boundary between the two regimes, greatly enlarging the latter region. These considerations allow us to assess the importance of the various corrections in the prescription of bound-state effects studied here.
The remainder of the paper is structured as follows. In Sec. II we introduce the considered benchmark model and review the Boltzmann equations that describe both the coannihilation and conversion-driven freeze-out scenario. In Sec. III we develop our general formalism to include bound states and discuss various limiting cases analytically. In Sec. IV we compute the involved rates for a colored mediator in the fundamental representation of SU (3) c . Section V is dedicated to the phenomenological application before concluding in Sec. VI. Appendices A and B contain further details of the computation of bound-state formation cross sections and discuss NLO QCD effects, respectively.
II. MODEL AND CONVERSION-DRIVEN FREEZE-OUT
We consider a simplified t-channel model with a singlet Majorana fermion χ providing the dark matter candidate, and a colored scalar mediatorq that exhibits a Yukawa coupling involving χ and a right-handed SM quark q, The scalar mediatorq transforms as a triplet under SU (3) c , as a singlet under SU (2) L , and has hypercharge that is identical to the one of right-handed SM quarks. It gives rise to a t-channel annihilation diagram for a pair of χ particles, and the corresponding process χχ →qq leads to a relic abundance of χ via thermal freeze-out. If the masses of χ andq are of comparable size, coannihilation processes need to be taken into account as well, in particular mediator pair annihilation, which dominantly proceeds via the processqq † → gg. (Annihilation into a pair of quarks is p-wave suppressed.) Being a pure QCD process, its cross section is entirely determined by the strong coupling α s . Indeed, this contribution can be so large that the χ relic density falls below the measured dark matter abundance, independently of the value of λ χ [9].
However, this conclusion hinges on the assumption that χ andq are in chemical equilibrium during the freeze-out process, i.e. that the corresponding conversion rates are large compared to the Hubble expansion rate H during the freeze-out process. Since the rates of all conversion processes necessarily involve some power of the coupling λ χ , the assumption of chemical equilibrium can be violated if the coupling strength is small enough. In that case, the conversions have to be included along with (co-) annihilation processes in the Boltzmann equations. This scenario is known as conversion-driven freezeout [15], or coscattering [16].
In general, for the minimal t-channel model considered here, the coupled set of Boltzmann equations reads [15] where x = m χ /T and Y i = n i /s, with number density n i and entropy density s, with where M pl 2.4 × 10 18 GeV is the reduced Planck mass. Yq represents the summed contribution of the mediator and its anti-particle, leading to the various factors 1/2. Here, gq = gq † = N c = 3, and fq is the distribution function that is assumed to be identical for particles and antiparticles as well as all colors. The processes in the first line of each equation denote the usual (co-)annihilation processes into SM particles. The conversion terms in the second line of each equation can be split into processes of the formq → χ and qq † → χχ. The former case requires accompanying SM particles, and can be further decomposed into 1 → 2 and 2 → 2 processes, with where Γ is the decay rate at rest, and where n eq i = T /(2π 2 ) g i m 2 i K 2 (m i /T ) and K i denote modified Bessel functions of the second kind. Depending on kinematic constraints further 1 → 3, 1 → 4 or 2 → 3 process can be relevant, especially for a coupling to top quarks,q =t [12]. In the following, we focus on a coupling to bottom quarks,q =b, and include the processes stated in eq. (6).
The set of Boltzmann equations can describe both coannihilations in and out of chemical equilibrium, with well-known simplifications being possible in the former case by summing both equations [5]. Out of chemical equilibrium, the coupled set of equations needs to be solved. However, since the coupling λ χ is small in this case, all terms except for the ones involving σqq † v and Γ conv can be neglected for conversion-driven freezeout. The former process is considerably Sommerfeld enhanced for small relative velocities, due to the attractive potential generated by gluon exchange in the color singlet configuration of theqq † pair [10]. In addition, the same potential leads to the formation of bound states [25,26,32,33]. In this work, we improve the computations of the relic density in the coannihilation and conversion-driven freeze-out scenario by considering bound-state effects, including an exploration of the role of excited states.
III. INCLUDING BOUND STATES
Within the t-channel model, bound states ofqq † pairs in the color singlet configuration exist and can contribute to the freeze-out process. We consider an extension of the Boltzmann equation by including a set of bound states B i , enumerated by an abstract index i, and with g Bi in-ternal degrees of freedom. Within the model considered here, the bound states are characterized by their n and quantum numbers, i ≡ (n, ) and g B n = 2 + 1, but the discussion in this section applies to any set of bound states in general.
We add a Boltzmann equation for the yield Y Bi = n Bi /s for each bound state, taking into account ionization (or equivalently breaking) into an unboundqq † pair via gluon or photon absorption, direct decay of the bound state into SM particles, and transitions between two bound states. In addition, the collision term in the Boltzmann equation of the mediatorq picks up an extra term due to ionization and its inverse process, recombination [or equivalently bound-state formation (BSF)]. The changes in the Boltzmann equations compared to eqs. (2) and (3) are given by The ionization rate Γ i ion is related to the thermally averaged recombination cross section σ BSF,i v via the Milne relation originating from the detailed balance condition in thermal equilibrium. Indeed, the Milne relation ensures that the ionization and recombination terms drop out in the sum d(Yq + 2 i Y B,i )/dx, consistent with the conservation of the total number ofq andq † in the absence of decays. Note that in the non-relativistic limit where E Bi = 2mq − m Bi > 0 is the binding energy, and we used that Yq denotes the yield of the sum ofq andq † . In addition, detailed balance requires Also, here, we can see that transition terms drop out when summing the Boltzmann equations for all bound states, as required. Before discussing explicit expressions for corresponding rates in Sec. IV, we investigate generic features of the coupled set of equations.
A. Single bound state
We first recall the case of a single bound state B. In a typical cosmological setting, the ionization and decay rates (mediated by the strong interaction) are much larger than H. In this case, the density of bound states almost instantaneously adjusts to a quasistationary number (from the point of view of cosmological versus strong interaction timescales) that can be obtained by setting the left-hand side of the Boltzmann equation for B to zero, turning it into an algebraic equation [34]. For the case of a single bound state (dropping the index i and transition terms), one obtains Inserting this relation in eq. (10) yields the same form as eq. (3) but with the substitution This means it is sufficient to solve the Boltzmann equations forq and χ, while the impact of the bound state is captured by replacing theqq † annihilation cross section by the effective cross section.
In the limit H Γ dec Γ ion the ionization and recombination processes establish equilibrium between the bound state and unboundq (ionization equilibrium). The corresponding rates therefore drop out of the effective cross section, which only depends on the decay rate Γ dec , as can be seen using the Milne relation, eq. (11), The effective cross section increases exponentially with falling temperature, due to the energetic preference for bound states in equilibrium. This increase stops once the ionization rate, which itself becomes exponentially suppressed at low temperatures, falls below the decay rate, and ionization equilibrium breaks down. Therefore, at low enough temperatures, the regime H Γ ion Γ dec becomes relevant, for which In that limit, any bound state that forms decays almost immediately, and therefore the effective cross section is only sensitive to the recombination cross section σ BSF v .
B. Multiple bound states
Let us now generalize the previous findings to a set of bound states. When assuming as before that all relevant ionization, decay and transition rates are much larger than H, we obtain a set of coupled algebraic equations for the yields Y Bi from setting the left-hand sides of the Boltzmann equations (9) to zero. It can be written as where we used eq. (13) and introduced the total width of B i , From the structure of the Boltzmann equation, it is a priori not clear whether the impact of bound states can be captured by an effective cross section when inserting the solution to eq. (18) into the Boltzmann equation (10) forq. However, this turns out to be the case in general.
To see it, we rewrite eq. (18) in the form where we defined y i ≡ Y B,i /Y eq B,i and y ≡ Yq/Y eq q . Introducing the matrix the solution for the bound-state abundances reads Inserting it in the Boltzmann equation (10) forq indeed yields a contribution that has the form of the annihilation term, involving in particular a factor y 2 − 1. Therefore, provided the rates are large compared to the expansion rate, the impact of a set of bound states can in general be captured by an effective cross section, given by with The effective cross section, eq. (23), describes the impact of an arbitrary number of bound states on theq abundance, which can all individually be populated by recombination processes, decay into SM particles, and undergo a network of transitions among them, with the corresponding rates entering in the determination of R i . For a given setup, the R i can be determined numerically. Nevertheless, it is instructive to study two limiting cases analytically.
No transition limit
In the limit Γ i→j trans Γ i dec , Γ i ion we can neglect the transition terms, such that M ij → δ ij becomes the unity matrix, and the total width depends only on ionization and decay rates. The effective cross section becomes In the absence of transitions, each bound state therefore gives a contribution to the effective cross section that is analogous to the case for a single bound state, see eq. (15). In particular, each summand exhibits the limiting cases of ionization equilibrium (Γ i ion Γ i dec ) or instantaneous decay (Γ i ion Γ i dec ) in close analogy to the case of a single bound state.
Efficient transition limit
In the limit Γ i→j trans Γ i dec , Γ i ion , we expect that the transitions establish chemical equilibrium among the bound states, which is indeed a solution to eq. (18) in that limit. The most straightforward way to derive the effective cross section in that limit is to proceed similarly to the case of coannihilations [5], introducing and summing up all Boltzmann equations (9) for the B i , such that the transition terms drop out. Using (26) to write one obtains with effective ionization and decay rates Setting again the left-hand side of the Boltzmann equation (29) to zero, and inserting the resulting algebraic expression together with eq. (28) into eq. (10) yields where The result is similar in form to the case of a single bound state, eq. (15), but with the ionization and decay rates replaced by a thermal average over all bound states and the recombination cross section replaced by the sum. It turns out that obtaining this result directly from the general expression eq. (23) is tedious. The reason is that naively neglecting the ionization and decay rates in the total width would lead to a singular matrix M ij . However, by carefully expanding the abundances around the chemical equilibrium solution y i =const., and treating Γ i ion /Γ i and Γ i dec /Γ i as small, one ultimately arrives at the same expression, eq. (31).
We also note that using the Milne relation, eq. (11), for each bound state, one finds i.e. the summed recombination cross section and the effective ionization rate satisfy a generalized Milne relation. This implies that, in analogy to the case of a single bound state, within the regime of ionization equilibrium (Γ eff ion Γ eff dec ), the effective cross section becomes independent of the recombination cross section, and only depends on the effective decay rate. In the opposite limit Γ eff ion Γ eff dec of almost instantaneous decay, the decay rate drops out, and the effective cross section depends only on σ BSF v sum .
Ionization equilibrium
The limit of ionization equilibrium is somewhat orthogonal to the two limiting cases considered above. When ionization and recombination processes are assumed to be efficient enough to establish ionization equilibrium, the effective cross section approaches the universal form which is a straightforward generalization of eq. (16) and independent of ionization rates Γ i ion as well as transition rates Γ i→j trans . The reason is that efficient ionization and recombination processes establish chemical equilibrium with the unboundq particles in that case for each bound state. This means, in turn, that they are in chemical equilibrium among each other, such that the transition processes play no role for their relative abundances in that limit. This result agrees with the finding in [35], in which a set of bound states in ionization equilibrium is considered.
Indeed, it is easy to see that eq. (33) follows from both the effective cross section in either the limiting case of no transitions or the case of efficient transitions when assuming in addition that Γ i ion Γ i dec . Moreover, the fact that eq. (33) is even valid independently of the size of transition rates can be seen by noticing that the derivation presented in Sec. III B 2 relies only on the assumption of chemical equilibrium among the bound states, which is satisfied in ionization equilibrium.
Therefore, as long as ionization equilibrium holds, the effective cross section is only sensitive to the bound-state decay rates, independently of the size of transition and ionization rates.
In a realistic setup, the limiting assumptions made above may be too restrictive and at best hold only for a subset of bound states and a subset of the corresponding ionization, decay or transition processes. In this case, the effective cross section can be computed using the general result, eq. (23).
IV. RATES
While the discussion in the previous section was generic, we focus on the set of bound states and ionization, decay and transition rates that are relevant for the scalar mediatorq that carries hypercharge and transforms under the fundamental representation of SU (N c ) with N c = 3 in the following.
A heavy (mq Λ QCD ), non-relativisticqq † pair can be described by two wave functions ψ [R] , one for the color octet ( [8]) and one for the color singlet ([1]) configuration. They obey a Schrödinger equation with kinetic energy is the reduced mass, and potential in Coulomb approximation [26] with effective coupling strength Here C [R] 2 denotes the quadratic Casimir of SU (N c ) with 2 = N c = 3, and α s = g 2 s /(4π) is related to the strong coupling constant. Thus, The singlet configuration feels an attractive potential, while it is repulsive for the octet. Therefore, bound states exist for the singlet only. Note that we treat the m quantum number as an internal degree of freedom of the bound state in the Boltzmann equation, and therefore label the bound states by n and only.
In the Coulomb approximation, the bound states are described by hydrogen-like wave functions ψ [1] n m , with the fine-structure constant replaced by α eff [1] and the electron mass by the reduced mass µ. On the other hand, unbound scattering states ψ [R] p rel exist for both the octet and singlet, with wave functions containing the respective effective coupling strength (see App. A).
A. Ionization and recombination
The leading-order QCD process for bound-state formation is where the initial state corresponds to a scattering state in the octet configuration due to color conservation. The matrix element can be computed within pNRQCD analogously to hydrogen recombination [29,36], with a dipole interaction Hamiltonian of the form g s ωr·E where E = t a E a is the color-electric field, r is the relative coordinate, and is the energy difference of initial and final state, which corresponds to the energy of the emitted gluon in the non-relativistic limit. The thermally averaged ionization (or breaking) rate and recombination (or bound-state formation) cross section are given by [26] which can be checked to satisfy the Milne relation, eq. (11), with f g (ω) = 1/(e ω/T − 1). The recombination cross section can be expressed as [29] σ BSF,n v rel = ω 2πN 2 with the matrix element for the QCD process given by and | ψ [1] n |r|ψ [8] One can also consider the analogous electromagnetic process, that proceed from a color singlet scattering to bound state and matrix element obtained from the electromagnetic dipole interaction, where Qq = 1/3 is the electric charge ofq. We evaluate the strong couplings entering in the scattering (s) and bound (b) state wave function at renormalization scale of the typical momentum transfer related to bound and scattering states, respectively, using the notation For the strong coupling that enters via the interaction Hamiltonian we choose α BSF s = α s (µ MS = ω), evaluated at the gluon momentum scale. In contrast to [26], we use an identical scale choice for couplings entering either via Abelian or non-Abelian vertices. Within pNRQCD, the latter manifest themselves exclusively by the C A contribution to α eff s for the gluonic recombination process. For the binding energy we use Using a partial wave decomposition as well as an integral representation for the hypergeometric function entering the scattering-state wave function, and the generating function of the Laguerre polynomials contained in the bound-state wave function, we arrive at the following expressions for the bound-state formation cross sections via where we defined and Here, corresponds to the partial wave of the scattering state, which is constrained by the usual selection rule, and the radial part of the wave function yields the overlap integral This expression can be easily evaluated numerically. The result has the structure where s BSF n is a polynomial, with explicit expressions for n ≤ 3 given in Tab. I. For the 1s ground state our result agrees with [26], and for the 2s state it agrees with [29,30]. The result for 2p differs from the one given in [29,30] (by a factor 3 for the s-wave contribution with = 0, and a factor 3/2 for the d-wave contribution with = 2) but matches the result for hydrogen when translated to the electromagnetic case [36].
We show the functions S BSF n (ζ s , ζ b ), which are proportional to the bound-state formation cross section, in Fig. 1. For the figure we assume that the strong coupling entering in ζ s and ζ b is evaluated at a common renormalization scale, such that S BSF n depends only on the ratio α s /v rel . Furthermore, we show the sum over all = 0, . . . , n − 1 for a given n (solid lines), as well as the results for the s-orbital with = 0 (dashed lines). For α s /v rel 1 the bound-state formation cross section scales as (α s /v rel ) 4+2 for all n. The limit is given by where the second line arises from the = + 1 contribu- tion, and the first from = − 1 exists only for > 0. The contribution from = 0 orbitals therefore dominates for α s /v rel 1, as can also be seen by the convergence of solid and dashed lines for each n ≥ 2 in Fig. 1 in that limit.
In the opposite limit α s /v rel 1, where Here s BSF n | 4n−2 corresponds to the polynomial obtained when keeping only the terms with maximal combined power in ζ s and ζ b in s BSF n (ζ s , ζ b ), being 4n−2 , such that f BSF n depends only on the ratio ζ s /ζ b = α eff s /α eff b . Up to the different renormalization scale at which the effective couplings are evaluated, f BSF n approaches a constant for α s /v rel 1. The behavior at small relative velocities is therefore governed dominantly by the first factor in eq. (55). It exhibits a qualitatively different behavior depending on the sign of ζ s . For (qq † ) [8] → B [1] n + g, the repulsive potential relevant for the initial state implies ζ s < 0, leading to an exponential suppression for small relative velocities, S BSF n → 2π|ζ s |e −2π|ζs| f BSF n . For the electromagnetic process (qq † ) [1] → B [1] n + γ, both the initialand final-state wave function are sensitive to the attractive color singlet potential, such that in particular ζ s > 0, and S BSF n → 2πζ s f BSF n grows with ζ s ∝ α s /v rel . The different shape of S BSF n for the two processes can clearly be seen in Fig. 1. For the electromagnetic process, the combined contribution from all angular momentum states S BSF n decreases with increasing values of n, for all velocities v rel . On the other hand, for the strong process the exponential suppression at large ζ s leads to a maximum of S BSF n . Its position shifts to higher values of α s /v rel for excited states with increasing n. In addition, the value at the maximum increases with n. This indicates that excited levels become more and more relevant the smaller the relative velocity, i.e. the lower the temperature that is relevant for determining the relic density.
B. Decay
The leading decay process is due to annihilation of the constituents of the bound state into a pair of gluons, B n → gg. Here, we briefly review the derivation of the decay rate following [23], provide an expression for general n (for = 0) and discuss the role of higher-order corrections.
For a generic 1 → N decay process, B n → X 1 X 2 . . . X N the matrix element M n can be related to the usual Feynman matrix element for the process with N q → µ in the nonrelativistic limit, and boundstate wave function ψ n m ≡ ψ [1] n m in momentum space, normalized such that d 3 x|ψ n m (x)| 2 = 1 in position space. Here, K is the four-momentum of the bound state.
The bound-state decay rate is given by where m B n = 2mq − E B n 2mq, and |M n | 2 = 1 2 +1 m g X j |M n m | 2 is averaged over the 2 +1 states with different m, and summed over final-state degrees of freedom. Furthermore, the usual factor 1/S! is included if S particles in the final state are of identical type. For two-body decays at rest, the integration over the Lorentzinvariant phase space (LIPS) reduces to a factor 1/(8π).
At leading order in the small relative momentum q and in the non-relativistic expansion, with on-shell four-momentum K 2 = m 2 B n , and The wave-function at the origin is non-zero for orbitals with = 0 only, while the decay of bound states with orbital angular momentum would require keeping further terms in the expansion of M s in q, leading to a suppression of the decay rate of order q 2 /K 2 ∼ E n /mq. We therefore focus on the decay of = 0 states in the following.
The decay rate can also be obtained from an effective operator that describes the interaction of an = 0 bound state with a pair of gluons, with a scalar field Φ n that describes the B n, =0 bound state, and a form factor F (Q) ≡ Z n00 /Q 2 , where Q 2 ≡ p 1 · p 2 . The coefficient can be obtained by matching the matrix element for the two-body decay in the full and effective description.
Note that the matching does not require the gluons to be on-shell. Accordingly, the effective operator, eq. (64), can also be used to compute the 2 → 2 scattering processes of the form Bq → gq. Implementing the effective operator in MadGraph5_aMC@NLO [37], we checked that 2 → 2 processes can only compete with the boundstate decay for very early times, x < ∼ 5 − 10, for which the mediator is still in thermal equilibrium with the SM plasma. Hence, these processes are negligible for the dynamics of dark matter freeze-out considered here.
In contrast, NLO corrections to the two-body decay rate are potentially relevant since in ionization equilibrium, the impact of bound states on the effective cross section is determined predominantly by their decay rate, see the discussion in Sec. III. Following earlier results in the context of quarkonium [38][39][40], the virtual and real corrections to the B 10 → gg decay rate at O(α s ) have been computed for the comparable case of stoponium [41], resulting in: where n f is the number of light quarks, T F = 1/2, and b 0 = 11/3 C A − (1/3 + 4/3 n f )T F . The parameter δ 4q is either 1 or 0 depending on whether or not the four-point interaction ofq is introduced. In the simplified model considered here, δ 4q = 0, while in the MSSM, δ 4q = 1.
For the scale choice (63), we find that the correction (65) is reduced to a few percent rendering the leadingorder (LO) and NLO predictions fully compatible with each other within scale uncertainties, see App. B for a detailed discussion. For definiteness, we will consider the LO decay rate in the main results in the following.
The decay of bound states with > 0 is suppressed compared to those with = 0. Nevertheless, due to the large number of such states, it would be interesting to include them, which is, however, beyond the scope of this work. We note that the two-body decay into a pair of gluons is forbidden for = 1 states due to the Landau-Yang theorem. A decay channel that is possible for these states is into a pair of electrically charged particles, via an intermediate photon or Z-boson. Note that an analogous process with an intermediate gluon is forbidden by color conservation. Furthermore, the decay rate into gqq via an intermediate off-shell gluon also vanishes for = 1, as can be checked by expanding the matrix element in eq. (57) to first order in the relative momentum q. However, a decay into three gluons could be mediated by the strong interactions.
C. Transitions
Since bound states exist only in the color singlet configuration, transitions between energy levels cannot proceed via single gluon emission or absorption. In this work, we do not consider transitions involving two gluons, which can be mediated by the strong interaction. Instead, we provide a lower bound on the size of transition rates by considering the electromagnetic process which is allowed by color and charge conservation. The transition matrix element squared obtained from the electric dipole interaction is given by [36] where ω = |E n −E n | is the photon energy. The matrix element is averaged over m and m , The transition rate from higher to lower energy levels is given by Fermi's golden rule The rate of the inverse process of photoabsorption can be obtained from the detailed balance condition eq. (13). Using the hydrogen-like wave functions and the generating function of the Laguerre polynomials (see App. A) we find | ψ n |r|ψ n | 2 = δ , +1 + δ , +1 (2 + 1)(2 + 1) |I trans where with N n (κ) = κ 3/2 4(n − − 1)! n 4 (n + )! 2κ n . Here with effective strong coupling defined as in eq. (47) and evaluated for the respective energy level as indicated by the subscript. They differ only in the scale choice of the strong coupling constant, related to the typical Bohr momentum of the two energy levels. We checked agreement with various explicit expressions given for specific n and , and all n as well as for n = n in [36], when translating the result to the analogous hydrogen transition rates.
D. Effective cross section
Using the ionization, decay and transition rates discussed above we can compute the effective cross section, eq. (23), that encapsulates the impact of bound states on the freeze-out dynamics. The contribution to the effective cross section, eq. (23), due to bound states, is shown in Fig. 2 for various approximations as a function of x = m χ /T . In the left panel, we include bound states up to n = 6 and for all ≤ n − 1.
The contributions from individual n, states to the effective cross section are indicated by the colored and gray lines in the left panel of Fig. 2. For small x, the = 0 states dominate. The reason is that in this limit, ionization equilibrium holds, and the effective cross section is determined by the decay rate, see eq. (33). For large x, the contribution from each n, level becomes suppressed due to a combination of two effects: (i) the suppression due to the repulsive interaction in the scattering state discussed in Sec. IV A, and (ii) Boltzmann suppression for T E B n . Consequently, each individual contribution features a maximum. Its position shifts to the right for higher n. This implies that excited states dominate the effective cross section for large x. The larger x, the higher n have to be taken into account to obtain a converged result for the total effective cross section.
The line labeled "R i -solution" shows the total result obtained when including all rates as given above using the general expression eq. (23) for the effective cross section. For comparison, we show the limit of efficient transitions, eq. (31), as well as the limit of no transitions, eq. (25). For small x, that is, large enough temperature, all results agree and approach the ionization equilibrium result, eq. (33), that is also shown. The effective cross section can in this limit be written as That is, in ionization equilibrium, excited states lead to a 20% correction to the effective cross section. The factor in front of the sum is the ground-state contribution to eq. (33).
The impact of excited states is much larger for large x, where they give the dominant contribution. The precise value depends in this regime on the recombination as well as transition rates. The efficient transition limit provides an upper bound on the effective cross section (since all orbitals contribute), while the limit of no transitions provides a lower limit (only the bound states with a sizeable decay rate into SM particles contribute, being = 0 orbitals in our approximation). The actual effective cross section is therefore expected to lie in between these two limits. The "R i -solution" result taking into account the electromagnetic transition rates considered in this work can only be considered as illustrative since additional processes mediating further transitions are expected to play an important role. We therefore conservatively adopt the no-transition limit in our numerical analysis in the following.
The effective cross section in the no-transition approximation eq. (25) is shown in the right panel of Fig. 2. We show the result summed up to some maximum n, for n = 1, 3, 6, 10, 15, respectively. While each individual contribution becomes suppressed at large x, the summed result continues to grow with increasing x. The decline at very large x is due to the restriction to n ≤ 15. For x 10 5 , we consider the effective cross section with n ≤ 15 as converged. We leave an exploration of the full result including transitions to future work, and use the no-transition limit with n ≤ 15 as the default choice in the following. For a discussion of the impact of a certain class of higher-order corrections (related to collisional ionization and recombination processes and the associated virtual contributions) computed in [27,30] as well as to bound-state decay we refer to App. B.
V. VIABLE PARAMETER SPACE
To determine the relic abundance, we solve the coupled set of Boltzmann equations (2) for Y χ and (3) for Yq. We compute the involved conversion and annihilation cross sections, σq k→χl (s) and σ χχ (s), σ χq (s), σqq †(s), respectively, with MadGraph5_aMC@NLO [37]. We take into account the leading conversions in α s and regularize m χ =1TeV, Δm=20GeV the soft divergence occurring in the processqg → χb (see the discussion in [15]) by introducing a thermal mass for the gluon [42]. To include the impact of bound states, we replace the annihilation cross section ofqq † pairs by the effective cross section, eq. (23). In addition, we include Sommerfeld enhancement in the contribution from direct mediator annihilation as described in [15]. Figure 3 exemplifies the effective cross section. The long-dashed curve ('pert.') shows the perturbative direct annihilation cross section while the short-dashed ('Som'.), dot-dashed ('BS, n = 1') and solid ('BS, n ≤ 15') curves display the effective cross section after successively including Sommerfeld enhancement, bound-state formation effects of the ground state and excited bound states up to n = 15 (in the no transition limit), respectively. In the following, we choose the latter for our main results. We also show the effective cross section under the assumption of ionization equilibrium in the limit of large n as the gray dotted curve ('ion-eq'). For two benchmark points in the conversion-driven freeze-out scenario, the evolution of the abundances is shown in Fig. 4. Because of the small coupling λ χ , the χ particle cannot annihilate efficiently by itself, and its abundance is reduced only due to conversions intoq. While the colored mediatorq starts to depart from thermal equilibrium at x > ∼ 25, the χ abundance already significantly exceeds the equilibrium value at this time. Subsequently, for x > ∼ 25, conversion processes -which are on the edge of being efficient -gradually transform χ intoq particles. This leads to a prolonged duration of the freeze-out dynamics, which can last until x ∼ O(10 2 −10 3 ). The mediatorq continues to annihilate and is in addition depleted due to bound-state formation.
The duration is further enhanced for a small relative mass splitting ∆m/m χ , which implies that the equilibrium abundances of the mediator and χ are comparable until x ∼ m χ /∆m even if the conversion processes were fully efficient, i.e. in the usual coannihilation scenario. Eventually, the mediators decay viaq → bχ, thereby transferring their remaining abundance to the population of χ particles. For the chosen value of the coupling λ χ for the two benchmark points shown in Fig. 4, the amount of conversions is sufficient to reduce the χ abundance to a final value that matches the observed relic density, Ωh 2 = 0.12 [43].
In Fig. 5, we show the coupling λ χ that is required to achieve Ωh 2 = 0.12 as a function of the mass splitting, ∆m, for fixed mass m χ = 1 TeV (left panel) and as a function of the dark matter mass, m χ , for fixed ∆m = 5 GeV (right panel). The drastic change in the coupling at ∆m 35 GeV and m χ 2850 GeV, respectively, is due to the transition between the conversiondriven freeze-out (to the left) and coannihilation regime (to the right), see below for details.
The gray lines in Fig. 5 show the impact on the relic density for various levels of approximation, relative to our fiducial choice with Sommerfeld enhancement and bound states up to n = 15. The relic density differs up to a factor of order 10 relative to the perturbative leading-order approximation, and for small ∆m/m χ . Relative to the case when including Sommerfeld enhancement, we find differences of up to a factor of order five. The gray line labeled 'BS, n = 1' corresponds to the case when including the ground state. As apparent from the relatively small deviation of this curve from one, excited states with n ≤ 15 only yield a comparably small correction for most of the shown parameter space.
The kink at the transition between the conversiondriven freeze-out and coannihilation regime that can be seen in most curves arises due to the sudden increase of the coupling at this point that causes χχ and χq annihilation processes to become relevant. Accordingly, in the latter regime, not only does the relative importance of non-perturbative effects on the effective mediator annihilation change but also the importance of the effective mediator annihilation with respect to χχ and χq annihilation changes. This causes the quicker decrease of seen in the left panel of Fig. 5. Here, we can also observe that all curves approach unity toward large mass splittings as both the Boltzmann suppression of the mediator abundance during freeze-out and the larger coupling, λ χ , diminishes the relative importance of the mediator annihilation.
In the parameter slice with m χ = 1 TeV, chosen in the left panel, freeze-out mainly occurs while the system is still close to ionization equilibrium. This can also be seen from the gray dotted curve showing the result assuming ionization equilibrium (for all n). It only deviates significantly for low ∆m where freeze-out extends to large x. For even smaller relative mass splittings, ∆m/m χ , considered in the right panel, this effect is even more pronounced as freeze-out extends to larger x (even in the coannihilation region). Here, the result for ionization equilibrium differs by orders of magnitude from the one of our fiducial choice reaching Ωh 2 /0.12 < ∼ 10 −4 in the considered range of m χ (outside the displayed range in Fig. 5).
A. Boundary between coannihilation and conversion-driven regime
In this section, we determine the part of parameter space of the model for which conversion-driven freezeout is relevant. For small mass splitting ∆m ≡ mq − m χ and mass m χ , theqq † annihilation process becomes very efficient, and would deplete the relic abundance below the observed dark matter density, if χ andq were in chemical equilibrium during freeze-out. Within the region of parameter space where this happens, the correct dark matter abundance can only be explained if the as- sumption of chemical equilibrium does not hold. The dynamics are described by conversion-driven freeze-out in this regime, and one obtains a viable relic density for couplings λ χ 1. On the other hand, for points in parameter space where theqq † annihilation cross section is small enough, the standard scenario of coannihilation yields the observed dark matter abundance, with λ χ ∼ O (1). The division between these regimes can effectively be obtained with high precision by solving the Boltzmann equation using the conventional coannihilation approximation in the limit λ χ 1. The relic density obtained in this limit matches the observed dark matter abundance along a line in the two-dimensional parameter space (m χ , ∆m), which we refer to as the boundary line.
Altogether, the correct relic density can be reproduced for any point within the two-dimensional parameter space for a suitable value of λ χ , via conversion-driven freezeout below the boundary line, and via conventional coannihilation above the boundary line. Note that the effectiveqq † cross section, eq. (23), including bound-state effects is relevant both in the coannihilation as well as the conversion-driven regimes, and therefore also for determining the boundary between them.
In Fig. 6, we show the boundary line in the (m χ , ∆m) plane obtained for various approximations, which successively include a number of effects. When using the perturbative tree-levelqq † annihilation cross section only, one obtains the line labeled 'pert.'. This is the result one would obtain when using standard tools for the relic density computation [44][45][46] without further modification. The line labeled 'Som.' is obtained when including Sommerfeld enhancement ofqq † annihilation, and this approximation has been used in previous works in the context of conversion-driven freeze-out with colored mediators [12,15]. 3 The regime of conversion-driven freeze-out extends significantly when including the bound-state effects considered in this work. The line labeled 'BS, n = 1' in Fig. 6 corresponds to including the contribution from the ground state only. Finally, adding excited states up to n = 15 within the default approximation discussed in Sec. IV D yields the thick solid line. We observe that the conversion-driven freeze-out region reaches to significantly higher values of m χ and also ∆m due to the impact of bound states.
Let us briefly comment on the role of excited states. For m χ /∆m < ∼ O(10 2 ), freeze-out dominantly takes place in the regime of ionization equilibrium. In that case, excited states lead to a correction of the effective cross section of order 20%, due to the additional available decay channels, see eq. (75). For m χ /∆m > ∼ O(10 2 ), the freezeout extends to lower temperatures. In this regime, a combination of two effects leads to a significant enhancement of the impact of excited states. First, since ionization equilibrium breaks down for the ground state, its contribution to the effective cross section drops. Secondly, the bound-state formation rate for excited states exceeds the one of the ground state by many orders of magnitude at low temperatures. Hence, excitations remain in ionization equilibrium toward smaller temperatures and dominate the effective cross section. 4 Potentially, the region of conversion-driven freeze-out could even become larger when including transitions between the bound states, which is beyond the scope of this work. To provide a maximal upper bound we show the result that would be obtained when assuming ionization equilibrium to hold during the entire freeze-out and including all n using eq. (75), indicated by the gray dotted line. The full result when including transitions is expected to lie significantly below this line, and above the solid line, cf. the respective results for σqq †v BS eff in the left panel of Fig. 2. For the regime where the gray and thick solid lines differ from each other, ionization equilibrium breaks down during the freeze-out. The boundary therefore becomes insensitive to uncertainties from transitions among bound states where both lines converge, i.e. for m χ < ∼ 2 TeV.
B. Coannihilation regime
While the main focus of this work is on the impact of bound states on conversion-driven freeze-out, we also assess the relevance in the coannihilation regime. As is already apparent from Fig. 6, bound states and Sommerfeld enhancement have a significant impact on the boundary, and therefore on coannihilations as well. In Fig. 7, we show the contours in the (m χ , ∆m) plane for which freeze-out in the coannihilation regime yields the correct dark matter relic abundance for three values of the coupling, λ χ = 0.169, 0.5, 1, respectively. The former choice is motivated by supersymmetry, for which the χ particle can be viewed as the bino and the mediator as the righthanded sbottom quark within the MSSM. In this case, the coupling is fixed by the bottom hypercharge. We note that for large λ χ > ∼ O(1), additional annihilation diagrams forqq † → bb as well asqq → bb contribute, which are modified by bound-state formation. In this work, we are mainly interested in the case of small λ χ , and therefore do not take these contributions into account, since their cross section scales as λ 4 χ and is subleading compared to the QCD contributions toqq † annihilation. The red lines in Fig. 7 correspond to the case with perturbative leading-order annihilation, and the blue lines correspond to our fiducial approximation that includes Sommerfeld enhancement and bound states up to n = 15. It is apparent that the blue contours allow for significantly larger masses m χ for a given λ χ . For example, for λ χ = 0.5 and ∆m = 20 GeV, the mass for which the relic density matches the observed value shifts from m χ 1.2 TeV to 2 TeV when including the aforementioned corrections. For the MSSM value, λ χ = 0.169, the contour almost coincides with the boundary, and the mass shifts from m χ 0.9 TeV to 1.8 TeV (for ∆m = 20 GeV). In addition, for a very small mass splitting, including bound states allows for mediator masses in the multi-TeV regime, around m χ = 3 TeV for ∆m = 5 GeV. This shift can be expected to be of major relevance for experimental searches for colored t-channel mediators within the coannihilation regime. It re-opens part of the parameter space that is constraint by conventional dark matter searches.
C. Conversion-driven regime and collider limits
In Fig. 8 we show the viable parameter space within the regime of conversion-driven freeze-out. The value of the coupling that is required to obtain the measured dark matter abundance is of order 10 −6 − 10 −7 in that case. We show several contours for λ χ /10 −7 = 2, 3, 5, 7. The smallness of the coupling implies that this production mechanism is compatible with null results from direct and indirect dark matter detection experiments, while still providing an explanation of the abundance of dark matter that is insensitive to the initial conditions.
The decay length cτ of the mediator, where τ is its lifetime, is shown by the gray contour lines in Fig. 8. It is of the order of a few centimeters to 1 m within most of the parameter space, going down to 1 mm close to the boundary. For the freeze-out computation, we limit ourselves to the parameter space where ∆m > m b , such that the two-body decayq → χb is kinematically allowed. For even smaller mass splitting, conversions proceed via scatterings, and the mediator would be stable on detector timescales.
The primary signal of conversion-driven dark matter production with a colored mediator are searches for heavy, (meta-)stable colored particles at the LHC. For ∆m < m b , the colored mediator becomes detector stable as its decay is four-body suppressed. We can directly apply the limit from the 13 TeV ATLAS search [48] derived for an R-hadron containing a b-squark. It excludes masses below 1250 GeV. The resulting limit is shown in Fig. 8 as a solid blue curve (and blue shaded exclusion region). For larger ∆m the decay length is in the range 1 mm ∼ 1 m such that a sizeable fraction of decays take place inside the inner detector. To estimate the reach of the same search for this case, we employ the reported cross section upper limits for the muon-system-agnostic analysis for a b-squark R-hadron. We rescale them by the relative suppression of the cross section upper limits toward small lifetimes reported in the similar ATLAS analysis [49] where the case of a gluino R-hadron has been considered. Note that this introduces a certain level of approximation. A recasting of the search is, however, beyond the scope of this work. We use the cross-section predictions from [50]. The resulting limit is displayed as the blue, dashed curve in Fig. 8. Furthermore, we display the limit from the recasting of the CMS 13 TeV R-hadron search [51] performed in [15] as the blue, dotdashed curve.
Being only sensitive to the fraction of R-hadrons traversing a significant part of the detector, the sensitivity of these searches is exponentially suppressed for small lifetimes. Dedicated analyses exploiting the displaced nature of the decay are, hence, expected to greatly improve the sensitivity to this scenario. While several such analyses have been performed by the collaborations, their target model differs considerably from the one considered here, significantly reducing their reach or raising questions about their applicability as pointed out in [52] (contribution 7). For instance, the sensitivity of the displaced jets search [53] considerably suffers from the imposed cut on the invariant mass of the displaced tracks. While the respective choice was optimized for the scenario considered in the search, it reduces the signal of the one considered here by around two orders of magnitude [52]. This is due to its relatively small mass splittings ∆m of order tens of GeV in our scenario, resulting in softer tracks. The search has been targeted to mass splittings of the order of hundreds of GeV. Another example of a potentially sensitive search is the one for disappearing tracks. The existing searches are targeted to charginos whose long lifetime arises due to a tiny mass splitting, O(100 MeV), to the dark matter particle. Accordingly, in the decay, an ultra-soft pion is emitted facilitating the use of a disappearance condition. In our scenario, the emitted b-jet is considerably harder than in the targeted model. However, the search is estimated to still provide sensitivity to the model considered here, as shown in the approximate recasting of [54] performed in [52]. In this recasting, the probability of the R-hadron to cause a charged track was also taken into account. We overlay the respective limit as the purple dotted curve in Fig. 8.
We conclude that, after including the impact of bound states, a wide part of the parameter space for conversiondriven freeze-out is still viable, and provides a clear target for long-lived particle searches at future LHC runs.
VI. CONCLUSION
In this work, we revisited the computation of the relic density in the presence of bound-state effects during dark matter freeze-out. With respect to previous work, we improved the calculations in various aspects and demonstrated the respective phenomenological implications on the cosmologically viable parameter space in the coannihilation and conversion-driven freeze-out scenario.
In the first part of this work, we reformulated the Boltzmann equations including arbitrary excitations of bound states and derived a general framework for incorporating their effects in terms of an effective annihilation cross section. While a full treatment of these effects requires the knowledge of all involved bound-state formation, decay, and transition rates, we introduced meaningful limiting cases when assuming fully efficient or nonefficient transitions. We provided simple analytical expressions for the effective cross section in these limits, as well as a general result. Furthermore, we showed that for an arbitrary set of bound states in ionization equilibrium, the effective cross section is independent of bound-state formation and transition rates, and only depends on a weighted sum of bound-state decay rates.
For the case of a colored coannihilator, we computed the radiative bound-state formation rates for arbitrary excitations with quantum numbers n, , and estimate the lowest order transition rates. Furthermore, we investigated the impact of NLO corrections to bound-state decays. We further discuss the relevance of NLO effects on bound-state formation and decay in App. B.
We then solved the coupled Boltzmann equation for the mediator and the dark matter particle in a t-channel model and assessed the impact of bound states for coannihilations as well as conversion-driven freeze-out. On the one hand, in ionization equilibrium, the effective mediator annihilation cross section is insensitive to the boundstate formation but directly proportional to the boundstate decay rates. Including excited states increases the effective cross section by about 20% in that case. On the other hand, after the breakdown of ionization equilibrium of the ground state, higher excitations become increasingly important. At the same time, a large boundstate formation rate extends the duration of ionization equilibrium down to smaller temperatures. Nevertheless, we found that freeze-out significantly extends beyond the period of ionization equilibrium for small relative mass splittings between the mediator and dark matter, phenomenologically most relevant in the region of high masses, m χ > ∼ 2 TeV. In this region of parameter space, our fiducial approximation that neglects boundstate transitions is expected to underestimate the effects of excited bound states, motivating further studies. In addition, we demonstrated that NLO corrections to the bound-state formation rate itself play only a moderate role in the setup considered here.
Evaluating the cosmologically viable parameter space, we found that the region for which conversion-driven freeze-out is relevant extends significantly when including bound-state effects, ranging up to the multi-TeV region. In addition, our findings imply that significantly higher dark matter masses are viable also within the coannihilation region. This has immediate consequences for dark matter searches. For instance, considering a mass splitting of 20 GeV and a coupling of ∼ 0.169, as predicted in the MSSM, the dark matter mass that matches the relic density is shifted from around 900 GeV to 1.8 TeV by the inclusion of the discussed effects. On the other hand, when keeping the masses fixed at m χ = 900 GeV and ∆m = 20 GeV, the coupling would change from 0.169 to around 5 × 10 −7 as it lies in the conversion-driven freezeout regime.
Dark matter produced via conversion-driven freeze-out is compatible with (in)direct detection limits due to a very weak coupling but yields signatures of long-lived particles at the LHC. We discussed the applicability of existing searches for R-hadrons, disappearing tracks and displaced jets, which exclude masses below about 0.6 − 1.2 TeV. Because of the increase of the viable parameter space for conversion-driven freeze-out, extending into the multi-TeV region, the scenario provides great prospects for long-lived particle searches at future LHC runs.
The computations considered here can be improved in future work in several ways, regarding the description of transitions among bound states, the decay of excited states with angular momentum, as well as the inclusion of thermal corrections.
with the radial overlap integral where f n (ρ) = F n (r)/p 3/2 rel is the dimensionless radial wave function of the bound state.
To compute the radial integral we use an integral representation of the hypergeometric function that appears in the scattering wave function, Note that by substituting s → 1 − s one finds that F (ρ) is real, such that we can drop the complex conjugate in I R . In addition, we use the generating function of the Laguerre polynomials for the bound-state wave function, to write The ρ integration can be performed using the definition of the Γ function, and we obtain with some rational coefficients c r , that will be unimportant in the following. We can generate the required integral by differentiating + +3 with respect to b. Because of the selection rule, + + 3 ≥ 2 + 2 > 2 , such that the sum over r in the square bracket drops out, as announced. Using setting z ≡ ζ b n 1+t 1−t = −ib and using sin(aπ) = i sinh(ζ s π) yields 1 0 ds s −iζs (1 − s) +iζs (z + i(2s − 1)) + +4 = π sinh(πζ s ) (1 + z 2 ) e −2ζsarccot(z) , (A16) which allows us to evaluate the radial integral. Using finally gives the result, eq. (52), for the radial integral. section, eq. (23), in the no-transition limit, eq. (25), is shown in Fig. 9. In [27,30] it was pointed out that the correction to the bound-state formation cross section becomes very large for small enough x, corresponding to T > ∼ E B n . Nevertheless, for these temperatures, ionization equilibrium holds to a large extent. In ionization equilibrium, the effective cross section becomes insensitive to the bound-state formation cross section. Therefore, the effect of the NLO corrections considered in [30] on the effective cross section is almost negligible for small x (left part of the left panel in Fig. 9). For large x, on the other hand, the temperature is so small that the finite-temperature contribution of the NLO corrections gives a negligible contribution. In this region, the zerotemperature correction dominates. This is the reason why the difference between LO and the NLO correction considered in [30] is moderate in the right part of the left panel in Fig. 9. However, it becomes more relevant for excited states, due to the larger effective strong coupling, given our scale choice eq. (47).
In order to further assess the impact of NLO corrections, we show the dependence of the effective cross section when changing all scales at which the strong coupling is evaluated by a factor of two or a half, respectively, by the colored bands in Fig. 9. Within the perturbative uncertainty, both results are consistent with each other. We observe that including the NLO corrections considered in [30] leads only to a small reduction of the scale uncertainty (right panel of Fig. 9). This indicates that further sources of higher-order corrections, including those listed above, would have to be taken into account for a complete NLO analysis.
The effect of the NLO corrections on the boundary between the coannihilation and conversion-driven regime is shown in Fig. 10, and compared to the impact of taking excited states into account. We find that the latter is [30] on the boundary between the coannihilation and conversion-driven regime (red: with BSF NLO correction, blue: without). The impact on the boundary is smaller than the difference that arises when including excited states (n ≤ 15) as opposed to the ground state only (n = 1), which is shown for comparison for both cases, respectively. significantly more important.
NLO corrections to bound-state decay
Real and virtual correction to the decay B 10 → gg have been computed in [41]. The relative correction at NLO in the limit of massless quarks is given by eq. (65) in the main text. Note that collinear singularities in the real correction cancel when including the virtual piece [41], analogously to heavy quarkonium decay [38][39][40].
In Fig. 11 scale. The central line corresponds to µ MS = mq while the lower and upper boundaries of the red shaded band correspond to the choices µ MS /mq = 1/2 and 2, respectively. We adopted µ MS = mq in the main text, while µ MS = 2mq is used e.g. in [41]. We observe that the NLO correction is significantly smaller for µ MS = mq, at the level of a few percent. This justifies using the LO decay rate in our main analysis for this scale choice. In Fig. 11 (right panel) we show the dependence of the decay rate on µ MS at LO and NLO, respectively. As expected, the NLO result is significantly less sensitive to the scale choice. Note that for these figures we have set n f = 5 and neglected the contribution from the top quark since the use of the massless approximation is in general not well justified in that case. Using the expressions for the real corrections for massive quarks obtained in [41] confirms that the top quark contribution would amount to a small change of the already small NLO correction. Note that in Fig. 11 we only vary α ann s while keeping α eff b fixed.
In Fig. 12 we show the impact on the boundary line between conversion-driven freeze-out and coannihilation when taking into account NLO corrections to the decay. As expected, their impact is very small both for the ground state only and when taking into account excitations. Note that to obtain the NLO line when taking excited states into account we have assumed that Γ NLO dec /Γ LO dec is identical for all states with arbitrary n and = 0. | 15,720.6 | 2021-12-02T00:00:00.000 | [
"Physics"
] |
On several ill-posed and ill-conditioned mathematical problems of soil physics
Several well-known mathematical models of concentration fields in the soil (both at the single aggregate and the profile scales) are considered. It is shown that the respective boundary value problems for steady-state profiles belong to the class of ill-posed problems, since their solution does not exist. It occurs because a certain set of processes (for example, diffusion transport + first-order kinetic of the consumption) restricts possible boundary conditions, which, therefore, can no longer be arbitrary. Ill-posed inverse problems are also briefly described as well as one ill-conditioned inverse problem of parameters identification for mathematical model of the soil organic matter concentration profile. Exact solution for this model is the sum of two exponents. For a certain input data it was shown that this problem belongs to the class of ill-conditioned, since a small bias in the input data causes a significantly larger error in the solution (i.e. in calculated parameters).
Introduction
Methods of mathematical processing of the experimental results and mathematical modeling in ecology and soil science are currently used very widely and continue to develop [1][2][3][4]. In addition, the basic mathematical tasks in soil science are similar (or even identical) to the tasks of mathematical physics.
It is well known that the problems of mathematical physics can be divided into forward and inverse. Tasks which start with the causes and then calculate the effects represent the forward tasks. The inverse problems start from observed effects and then calculate causes not yet known [5]. For example, the soil properties and the intensity of organic matter input are the causes for the formation of soil organic matter (hereinafter -SOM) concentration profile. Here calculation of such a profile based on known soil properties and litter input represents a forward problem, whereas a calculation of soil properties or litter input, based on the profile of organic matter represents an inverse problem (hereinafter -IP). Besides the concept of forward and IP, classification of problems as well-posed and ill-posed has been introduced.
If sets of valid input data F and possible solutions V are given, the computational problem of determining v Î V according to f Î F from the equation Âv = f (where  -a continuous operator acting from the solution space to the input data space) is considered as well-posed if: (i) its solution v exists; (ii) for each f there is a unique v; (iii) solution continuously depends on f i.e. solution is stable (small changes in input data f correspond to small variations of the resulting v). In this case, instability means that the solution is not a continuous function of data, i.e. a small perturbation of the data corresponds to a large perturbation of the solution [5].
A problem is ill-posed (hereinafter -IPoP), if one of the above conditions (i)-(iii) is not satisfied [5]. However, IPoP should not be considered as something wrong, as a manifestation of mathematical unprofessionalism. This is just a term for a special class of problems. Moreover, the ill-posed formulation of the problem makes it often possible to realize some important properties of the object under study. IPoP can arise among both forward and inverse problems. In addition, there is a large class of ill-conditioned problems (hereinafter -ICoP), which, although theoretically are well-posed, in fact, do not differ from IPoP (from the point of view of practical calculations).
Let us assume that the problem is correct (its solution exists, the solution is unique and stable to the uncertainty in the input data). Theoretically, a problem has stability if a sufficiently small error in input data induces a small error in the solution. However, in practice, the errors of the input data cannot be made arbitrarily small: their accuracy is limited. The sensitivity of solution to small errors in the input data is called the conditionality of a computational problem. A problem is called illconditioned if small input errors (δf) cause strong changes in the solution (δv). The value α in the inequality δv ≤ α·δf is called condition number (here δv and δf are relative errors). For ICoP α >> 1. However, an exact answer to the question about the value α, at which a problem should be considered as ICoP, essentially depends on the solution accuracy requirements and on the level of the accuracy provided for the source data [6]. In addition, for non-linear models, we should expect α ¹ const, therefore, the problem can be well-conditioned with some values of δf, but become ICoP with others (usually high) δf. Note that the problem that is ill-posed due to instability, is an extreme case of ICoP (with α = ¥). However, in practice, there is no difference between such ill-posed problem and an ICoP, which is formally well-posed, but having so high α ¹ ¥ that (for a given δf) δv turns out to be completely unsatisfactory. The minimum possible values of δf are limited both by the accuracy of measurements (available at a given level of technological development) and by the minimum error of the model (available at a given level of the scientific progress).
In this paper, we consider some examples of both ICoP and IPoP that arose in soil science practice.
Failed condition for the solution existence
Apparently, in literature more attention was paid to inverse IPoP -see, for example, [5][6][7][8][9], but in soil science forward IPoP also occurs. In this section, some examples of both types are considered.
(another conditionf(h) = Co with a large but finite h -is also mentioned in [10]; however, since the original solution is given only for equations (2), we will consider the boundary value problem with these BoC). The "solution" to this problem is suggested in the form where , So However, it is obvious that such function is not a solution, because , whereas according to equations (2) f(¥) = Co.
Non-existence of problem (1)-(2) solution.
One can assume that the mistake in the solution mentioned above could be easily fixed. However, the origin of the mistake is much deeper: the solution of problem (1)-(2) does not exist (so this problem belongs to the class IPoP). It could be shown as follows. One could write well established (see, for example, [11]) general solution of equation (1) in this form: , where , C1, C2 -arbitrary constants. This solution can be rewritten as: but if x®¥ f(x) cannot have arbitrary value Co, and inevitably must be zero: with a root-associated carbon source. Example described above is not unique. In soil science calculation of spatial concentration profiles is often an IPoP. Let us give another example, which will be considered in general form. Smagin [10] developed the model (1) assuming that root systems can serve as the source of SOM: (4) where the exponential term describes the intensity of the organic matter input from roots (distributed vertically in accordance with the exponential law). BoC (2) were chosen again. This problem also belongs to the IPoP class, since its solution does not exist. Indeed, according to [12], a general solution of inhomogeneous linear equation (4) can be presented as the sum of its particular solution and its general solution of the corresponding homogeneous equation. In this case equation (1) will be a homogeneous equation needed and its general solution is equation (3). It is easy to check that function is a particular solution for equation (4). Therefore, its general solution is: This again shows that when x®¥ the only possible boundary condition for function f(x) (i.e. solution) is zero and equations (2) could not been satisfied with arbitrary value Co. It also means that the solution of the problem (2), (4) for Co ¹ 0 does not exist.
Oxygen concentration profile in a soil aggregate.
At the end of this section, we briefly mention the model of biogenic O2 consumption inside a moist spherical soil aggregate considered in [13]: , where C is O2 concentration; r is distance from the center of the aggregate (0 ≤ r ≤ rm; rm corresponds to the surface of the aggregate); complete formulation of the problem in [13] includes equation (5) and the following BoC: , where Co > 0 is treated as the concentration of O2 at the surface of the aggregate (and depends on the depth of its location in the soil). The "solution" of the boundary value problem (5)-(6) is given by the author in the form But this function could not be a solution, since after applying L'Hopital's rule it is easy to show that while according to equations (6) C(0) should be 0. In general, it can be proved that the solution of the problem (5)-(6) does not exist: for equation (5) and BoC C(rm) = Co the value of C(0) is always positive. Therefore, it is impossible to satisfy BoC C(0) = 0 and the problem of solving equation (5) with conditions (6) is IPoP. But, as mentioned above, this fact should not be considered as negative. IPoP arises when the author relies on a set of assumptions and experimental facts: (a) inside the ( ) aggregate only diffusion transport occurs; (b) consumption of O2 satisfies the first-order kinetics; (c) in the center of aggregate the oxygen sensor reported zero concentration of O2... Occurrence of IPoP (considering forward problem) indicates the need to carry out comprehensive analysis of all assumptions and "facts" to find those that most likely contain uncertainties (they are not real). For example, from the above items (a) -(c) the most reliable is (a). But the fact that the kinetics could differ from the first order seems to be quite probable, and here it is necessary to investigate how the change in the kinetics type (within acceptable limits) will affect the problem. Further, the fact that the concentration of O2 in the center of the aggregate will be small (outside the sensitivity of the sensor), but, nevertheless, not zero, in general seems very likely. Fortunately, a similar model was built by Gontchar-Zaykin et al. [14]. They showed that a slight modification gives a well-posed problem: instead of zero boundary condition, it is enough to claim the limitations: 0 < C(0) < Co. Thus, the analysis of ill-posed problem leads to the possibility of a reasonable choice of further ways to study the object by experiment: using more accurate sensors could show whether the O2 concentration in the center of the aggregate will differ from zero.
Failed condition of the solution uniqueness
Completely different reasons can lead to multiple (non-unique) solutions of a certain problem. In a short manuscript we are not able to cover this topic comprehensively and just give a few typical examples.
Cluster analysis
3.1.1. The multiple options of distance selection in classification of objects. In soil science in general and in soil physics specifically cluster analysis is widely used for the classification of any objects [15,16,17,18]. The cluster analysis is designed to classify observations (objects) into more or less homogeneous groups. Its governing idea is to calculate a certain measure of similarity or dissimilarity (for example, some distance) between each pair of objects. Low value of this distance indicates that objects are similar or close to each other, while a large value indicates a lack of similarity. But similarity estimates could be different [19]. Moreover, other approaches are based on a non-distance concept. Specifically, an approach associated with fundamental concepts of the graph theory can be used [20]. For example, let us consider only MATLAB environment which is not designed namely for cluster analysis and operated only with the most common distances. Even in this case there are 10 different distance options (Chebychev distance, City Block metric, Hamming distance, Mahalanobis distance, Minkowski metric, Euclidean and Standardized Euclidean distances etc.). Moreover, for example, in the Minkowski metric parameter of the exponent (fixed number) must be set by user since there is an indefinite number of possible exponents. Therefore, there is an indefinite number of options for cluster analysis even if we use Minkowski metric only.
The multiple options of object grouping algorithms.
The next problem is to obtain a hierarchical grouping of objects, where the objects with the highest similarity coefficients are placed together. Then the groups of objects are bound to form new groups (they are most closely connected too), and this process continues until a complete classification of the objects is obtained. Unfortunately, at this stage there are many methods for analyzing groups as well [19]. Moreover, along with the approach of binding single objects into clusters, it is possible to divide the entire set of objects into clusters. The difference in procedures may be caused by other reasons, for example, by choosing initial points of clustering. This can be a random selection of one or several points as well as not random "typical" point [20]. Considering only MATLAB environment again, there are 7 available algorithms ("nearest neighbor", "furthest neighbor", "average linkage" etc.). It may seem that this situation is not unusual because it is typical for almost any problems. For example, in the classic numerical analysis we have many algorithms for integrating of differential equations (see, for example, [6,12,21,22]). However, this difference is fundamental. Various algorithms of the differential equations numerical integration give solutions with error less than a small value ε comparing with the exact solution of a differential equation. Therefore, all the algorithms give the same solution with an error that can be considered arbitrarily small (within reasonable limits) while various algorithms of grouping in cluster analysis can give completely different solutions.
Multi-optional choice of the classes' number.
Finally, one of the most important issues in solving a cluster problem is the choice of the required number of clusters [23]. In the literature hundreds (!) cluster analysis methods are described. Their diversity origins from the fact that the identification of clusters in many ways is "art". Many of researchers who use the techniques of this analysis create a new method themselves [20]. For example, in MATLAB there are two fundamentally different possibilities for this choice: "Finding the Natural Divisions in the Data Set" and "Specifying Arbitrary Clusters". Each of them generates a set of variants of the clusters number (in the latter case this number is just defined by user). Thus, at each stage of cluster analysis there is a variety of criteria and/or algorithms, that results in the non-uniqueness of the solution of the whole classification task.
Non-uniqueness of the model parameter identification problem.
When one identifies parameters of mathematical models, some distance between model prediction and experimental data is calculated [5,19,22,24]. Since this distance is a function of the model parameters, it is possible to solve the minimization problem: to find parameters for which the distance reaches minimal value. With these values of parameters the model fits the experimental data in the best way. Therefore, these values are considered as a solution of the parameters identifying problem. Obviously, non-uniqueness arises here from one of the reasons that we have already mentioned discussing non-uniqueness of cluster analysis. Indeed, in practice various distances are used: sum of squared deviations between model predictions and experimental data [5] (it corresponds to the sum of the absolute error squares); sum of weighted squared deviations [6,22,25]; sum of relative error squares [26]; some other types of distances are also used [19,27,28]. However, in practice nonuniqueness of solutions does not always lead to catastrophic consequences, which are usually associated with IPoP. Although we get formally different solutions (different sets of numerical values of parameters), but the values of the same parameter, (i) will not greatly differ from each other and (ii) assuming the adequacy of the model, will be close to the true values for models supplied with comprehensive set of experimental information. Therefore, in our opinion, the ill-posedness associated with the solution instability is much more dangerous, as well as the poor conditionality of the formally well-posed problem. Both occur when the model of the chosen level of complexity is not provided with adequate experimental data (for example if the complexity of the model exceeds the experimental data resolution).
Ill-conditionality of one inverse problem of the parameter identification
In soil science as well as in chemistry, biology, physics [20,22,29,30] a problem of representing experimental data as a sum of exponential functions (where coefficients have an important physical meaning) [10,31,32] is common.
Many models of SOM decomposition use only two exponential functions (one for labile components and one for recalcitrant): (7) where f is substrate (in % of the initial amount of substrate) at time x; C1 and C2 are initial amounts of substrates (%, therefore C1 + C2 = 100%) which are decomposed, respectively, with rate constants k1 and k2 [33]. If one needs to determine C1, C2, k1 and k2 from the residual substrate temporal dynamics, the solution of the problem can be presented (using the designation introduced in section "Introduction") as a vector ( ) ( ) ( ) The formula (7) is also a solution of the boundary value problem (2), (4) at Co = 0. In this case, the coefficients in equation (7) depend on physical parameters D, k, L, R, b: (8) Shape of the SOM depth profile represents SOM accumulation, decomposition and mass transfer processes rates. Therefore, it seems that it is theoretically possible to assess the intensity of these processes from the data on the SOM profile distribution [10]. But can it be used in practice?
So let us consider the problem of the SOM profile distribution model parameters identifying, i.e. we need to find the parameters C1, k1, C2 and k2 based on known SOM concentrations at various depths fi(xi) -and then calculate the values D, k, b and R: (9) As mentioned above, the problem of parameters identification is well studied and can be solved, for example, using the least squares method. However, we face the most difficult problem: how to find out that the correct results are obtained [34]. Apparently, the simplest solution is to test the properties of IP and its solution algorithm by following method: (i) set some values D, k, b, R and L; (ii) then calculate values fi(xi) by equation (7) for the depths xi where data on measured SOM concentration are available; (iii) then slightly "bias" the calculated values by adding random noise Δfi simulating the uncertainty of experimental data; and finally (iv) solve IP with "biased" values (fi(xi) + Δfi). The solution is the set of parameters that we denote D*, k*, R* and b* (since their values are slightly different from the original). Because we know the output values D, k, b and R exactly, the described algorithm gives an estimate of the error which arises from a certain IP solution method at given level of noise (if we perform the described algorithm N times). Therefore, it is possible to estimate the influence of the uncertainty in the "experimental" data on the error of the result, i.e. assess the conditionality of the problem or, in other words, its sensitivity to the source data uncertainty. According to [35] , where Np -number of model parameters, Ne -number of experimental points.
The idea underlying this sensitivity test is very simple and has a general character: to solve a problem with several datasets and to check how sensitive the solution is to a variation in input data [34]. Let us give a particular example. For calculations, we use values given in [10] for typical chernozem: D = 0.000251 m 2 year -1 , R = 0.227523 kg year -1 m -3 , k = 0.00278 year -1 , b = 2.558 m -1 and L = 0.008 kg year -1 m -2 . The values fi (given in table 1) were calculated according to the formula (7) with coefficients calculated by equations (8). These values (but with Gaussian noise Δfi) were used as "pseudo-experimental data" (further -PED). Several sets of PED for coefficient of variation CV = 5% are presented in table 1. For each ( ) ( ) were obtained using equations (9) and CVs for these parameters were evaluated (see MATLAB-code in Appendix).
The results are presented on a figure 1. Obviously, the identification problem is well-conditioned for parameter b because its value of CV approximately corresponds to CV of source data f. But for all other parameters we have typical ICoP. For example, if CV input data is 10% than CV for D will be 27.763·10 1.36 ≈ 636%, for R and k -about 1000% (k and R have almost identical curves of "CV -noise level" dependence, so on a figure 1 they are overlapping). In other words, if the experimental measurements (f) were obtained with an uncertainty of ≈10%, then the coefficients R and k obtained by solution of IP for model (7)-(9) could differ from the real ones by a factor of 10.
The reason of ill-conditional character of parameter identification is briefly illustrated on figure 2. As we see, the formula (7) with completely different values of the parameters (cf. C1 = -92.276 and C1 = -144.148) gives almost identical curves. Application of the method described here (conditional analysis through perturbation sensitivity) for other tasks in biology and soil science see, for example, in [36][37][38].
Basic principles of ill-posed problems solution
The question of the IP solution existence consists of two aspects. One is the physical existence of the set of parameters generating the observed data; the second is the existence of a mathematical solution of operator equation Âv = f (see "Introduction"). A formal mathematical solution may not exist. To understand it better, we note that the measured data fe always have some uncertainty (Δf): fe = f+Δf. The question is whether it is possible to find a set of parameters ve, that strictly generates a dataset fe: fe = Âve. The answer is that sometimes we cannot find such parameters, and it is easy to understand why. Indeed, fe -are dataset f spoiled by noise. But this noise is not related to model parameters because it could be produced by phenomena that were described by concentration field equations given above. This is a reason why noise could not be predicted using the same operator equation, as for idealized dataset. It means that we should not expect that we always can find physically realistic model that exactly corresponds to observed dataset [5]. If a model fitted data perfectly, the following equality would have place: ||Âve -fe|| = 0.
But does it make sense to find such a model? It does not give meaningful results for noisy data (data are always noisy). So we should think about certain approximate approach to inversion, based on the search for a model that would fit observations within the given accuracy. Therefore, we are IOP Conf. Series: Earth and Environmental Science 368 (2019) 012011 IOP Publishing doi:10.1088/1755-1315/368/1/012011 9 looking for the pseudosolution of inverse problem, i.e. about such a solution from the certain class of models that fits the observed data in the best way. Pseudosolution of IP exists if there is such ve that , (10) where ||•|| means a certain measurement of discrepancy between observed and predicted noisy data [5]. Different formulations of this measurement for variable types of ve and fe -vectors, functions, etc.see, for example, in [5,34,39,40]). Based on a using measure one can choose Δ = Δ(Δf) but strictly speaking Δ ~ Δf and often simple rule of thumb Δ = Δf is used. This simple idea (inequality (10)) -is a cornerstone of a regularization theory [5]. However, this inequality provides an existence of the solution but not its uniqueness. Indeed, a variety of different ve can satisfy this inequality: for some of them left-hand side would be much less than Δ, for others -just slightly less, and for a certain value of а ve right-and left-hand sides would be strictly equal. The correct solution of IPoP and ICoP can be obtained using a variety of regularization methods (deterministic, statistical or descriptive regularization) -see, for example, [5,7,39,41] and references therein. The most commonly used method of regularization of ill-posed problems is Tikhonov regularization ("ridge regression", "weight decay", "Tikhonov-Miller method", "Tikhonov-Phillips regularization").
Tikhonov regularization is based on minimization of smoothing functional , where ξ > 0 -regularization parameter, chosen based on equation (10); ||Âve -fe|| -functional of discrepancy between predicted and observed data; Ω -regularization term [40]. Both functionals should be constructed in a way to be nonnegative. Let us describe briefly one of possible options of Tikhonov regularization method (where Morozov principle for choosing the regularization parameter [5,42,43] was used, but there are also different principles of this choice -see, for example [42][43][44][45]). For an arbitrary ξо corresponding value of ve(ξо) is calculated, providing a minimal value of J. If for calculated ve(ξо) it was obtained that ||Âve(ξо) -fe|| < Δ, next iteration (of functional minimization) is carried out using the value ξ1 > ξо. If it was obtained that ||Âve(ξо) -fe|| > Δ, for the next iteration value ξ1 < ξо should be used. For the obtained new value ve(ξ1) next check of discrepancy between functional and uncertainty level should be performed. If ||Âve(ξ1) -fe|| ≠ Δ, using the procedure described above new value of ξ2 should be chosen. Then a new level of ve(ξ2), minimizing functional, should be found, and again discrepancy should be compared with uncertainty level until it would be found a value of ξ* providing ||Âve(ξ*) -fe|| = Δ. Corresponding value of ve(ξ*), providing a minimum of functional J, would be considered as a regularized solution.
It is obvious, that the scheme described above can be directly used in a case when solution of operator equation (v) is a function. But here we want to discuss a simpler example, when v is a vector (containing a certain number of components). A problem of parameter identification for equation (7) described in section 4 belongs to this class. In this case J is a function and not a functional. So instead of functional minimization relatively simple problem of minimization of multi-argument function should be considered. The term ||Âve -fe|| is a sum of squared deviations between model predictions and observed data. If there are any prior estimates of parameter values in equation (7), that could be considered as components of vector va, term Ω = ||ve -va|| can be calculated as a sum of squared deviations between components of a resulting vector and components of a priory [46]. If components of the vector vary considerably, normalization is usually carried out in Ω. For example, these squared deviations are divided into squares of corresponding va vector components [41]. A priori estimates are usually based on data for systems similar in certain characteristics to systems of interest. For example 10 if we need to find parameters for chernozem and parameters for luvisol are known, one can use them for calculations.
Concluding remarks
Apparently Chudnovskii [47] was one of the first scientists who raised the question about ill-posed inverse problem in soil science. He considered IPoP for soil thermal conductivity identification and methods for solving of this problem. Some inverse IPoP of ecology and soil science were solved in [48][49][50][51] etc. However, the ideology of IPoP in soil science unfortunately has not been developed comprehensively, despite the fact that IPoP is common in this area. Some researchers treated IPoP as a usual (i.e. in this case -well-posed) problem of numerical analysis. Of course, the results obtained with this approach, are ambiguous, since their uncertainty is high.
In this brief article we were able to consider only a few issues related to ill-posed and illconditioned problems, as well as only a very limited number of examples of such problems in the soil science. Thus, we absolutely did not discuss the classical problems of experimental data processing. But the number of IPoP is very high among them. Specifically, the problem of function differentiation (if the function has an uncertainty), probability density estimation for empirical distributions, solving some types of integral equations, etc. lead to IPoP. On the other hand, in recent years, the processes of automatic collection and processing of experimental data in soil science and ecology have become increasingly widespread [52][53][54][55]. So solving different ill-posed and ill-conditioned problems could become much more important in the near future.
Finally, it should be noted that due to the limited volume of the paper we briefly considered the most important topic: how to solve IPoP and ICoP. Fortunately, these methods are well developed in the mathematical literature. | 6,630.4 | 2019-11-28T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Interfacial Dzyaloshinskii-Moriya interaction and spin-orbit torque in Au1-xPtx/Co bilayers with varying interfacial spin-orbit coupling
The quantitative roles of the interfacial spin-orbit coupling (SOC) in Dzyaloshinskii-Moriya interaction (DMI) and dampinglike spin-orbit torque ({\tau}DL) have remained unsettled after a decade of intensive study. Here, we report a conclusive experiment evidence that, because of the critical role of the interfacial orbital hybridization, the interfacial DMI is not necessarily a linear function of the interfacial SOC, e.g. at Au1-xPtx/Co interfaces where the interfacial SOC can be tuned significantly via strongly composition (x)-dependent spin-orbit proximity effect without varying the bulk SOC and the electronegativity of the Au1-xPtx layer. We also find that {\tau}DL in the Au1-xPtx/Co bilayers varies distinctly from the interfacial SOC as a function of x, indicating no important {\tau}DL contribution from the interfacial Rashba-Edelstein effect.
Spin-orbit coupling (SOC) phenomena is a central theme in condensed-matter physics.
The SOC-induced Dzyaloshinskii-Moriya interaction (DMI) and spin-orbit torques (SOTs) in heavy metal/ferromagnet (HM/FM) bilayers have become two of the foundational aspects of spintronics research [1][2][3][4][5][6][7]. The DMI at HM/FM interfaces is a short-range anti-symmetric exchange interaction due to interfacial SOC and inversion asymmetry [1][2][3]. A strong interfacial DMI can compete with ferromagnetic exchange interaction and perpendicular magnetic anisotropy such that skyrmions [1,2] or Neél domain walls [3][4][5] can be stabilized and be displaced by SOT in chiral magnetic memories and logic. The interfacial DMI can affect micromagnetic nonuniformity [8] and thus magnetic damping [9], and ultrafast dynamics [10,11] during SOT switching of in-plane magnetization. The interfacial DMI also requires an in-plane magnetic field or its equivalent for SOT switching of perpendicular magnetization [12]. Despite the great technological importance and the intensive studies [13][14][15][16][17][18][19][20][21], the understanding of the underlying physics of the interfacial DMI has remained far from complete. Experiments have reported that the magnitude and in some cases even the sign of the interfacial DMI are phenomenally sensitive to the types of HM and FM [20], the HM thickness [19], or even atomic inter-diffusion at the interface [15,16,19]. However, despite the widespread recognition of its interfacial SOC nature [14][15][16][17][18][19][20][21][22], there has been no direct quantification of the interfacial DMI as a function of the strengths of the interfacial SOC (ξ) and the interfacial orbital hybridization in a HM/FM system. The basic question as the relative role and interplay of these two effects on the DMI has yet to be answered. A major challenge for any such experiments is how to widely vary and quantify ξ in a HM/FM system that also exhibits a strong, accurately measurable DMI.
At the same time, the long-standing debate over the quantitative role of the interfacial SOC in the generation of the dampinglike SOT (τDL) in HM/FM bilayers has remained unresolved. Some theories [22][23][24][25] and experiments [26][27][28] have indicated that the interfacial SOC can generate only a negligible τDL via the "two-dimensional" Rashba-Edelstein effect at HM/FM interfaces but instead a substantial loss of the spin angular momentum of an incident spin current to the lattice via interfacial spin-flip scattering [29,30]. In sharp contrast, other theories [31][32][33] predict that, if carriers can be scattered across the HM/FM interface, the τDL generated by interfacial SOC that can be comparable to that arising from the spin Hall effect (SHE) of the HM. The conclusions from experimental studies of the interfacial Rashba-Edelstein effect disagree strongly with regard to both the sign and magnitude of the associated τDL [7,34,35]. These theoretical and experimental divergences provide a strong motivation for a direct experimental determination of how τDL in a HM/FM sample is correlated with the interfacial SOC.
In this Letter, we demonstrate that the interfacial SOC at the Au1-xPtx/Co interface can be tuned significantly via a strongly composition-dependent spin-orbit proximity effect (SOPE). From this ability, we establish that neither the interfacial DMI nor τDL of the Au1-xPtx/Co heterostructure is a linear function of interfacial SOC due to the fundamental role of interfacial orbital hybridization and the absence of any significant interfacial τDL, respectively.
It has been well established that s ISOC at the HM/Co interfaces originates from SOC-enhanced perpendicular orbital magnetic moments ( o ⊥ ) localized at the first Co atomic layer adjacent to the interface [40,41]. According to Bruno's model [41,42] [41]) is twofold of that for the Pt/Co interface (≈ 0.18 µB/Co [40,43]). To quantify s ISOC for the Au1-xPtx/Co interface using ST-FMR [36], we first determined the total interfacial perpendicular magnetic anisotropy energy density (Ks) of the two Co interfaces of the Au1-xPtx/Co/MgO samples from the fits of the effective demagnetization field (4πMeff) vs tCo -1 [ Fig. 1(a)] to the relation 4πMeff ≈ 4πMs + 2Ks/MstCo. We obtain Ms ≈ 1200-1300 emu/cm 3 and Ks ≈ 1.3-2.1 erg/cm 2 [ Fig. 1 Fig. 1(b). s ISOC increases from 1.02 erg/cm 2 for x = 0 to 1.69 erg/cm 2 for x = 0.25-0.75, and then gradually decreases to 0.95 erg/cm 2 for x = 1. From this we can obtain ξPt/Co/ξAu/Co ≈ 2. Provided that orb,i ⏊ for the Au1-xPtx/Co interfaces is not significantly smaller than that of the Pt/Co interface, the 1.8 times variation of s ISOC with x would indicate a strong, but less than 3.6 times, tuning of ξ.
It is interesting to note that, despite the strong tuning of the interfacial SOC, the bulk SOC strength for the Au1-xPtx layer, which should vary in between that of pure Au and pure Pt [45], is expected to be approximately invariant with x because Au and Pt have almost the same SOC strength (0.41 eV) [46]. We have also previously observed a threefold enhancement in the interfacial SOC of Pt/Co heterostructures by thermal engineering of the SOPE, without varying the composition and thus the bulk SOC of the HM [30]. These observations consistently demonstrate the distinct difference between the interfacial and the bulk SOCs, with the former being very sensitive to the local details of the interfaces. We determined the interfacial DMI of the Au1-xPtx 4 nm/Co 3.6 nm bilayers by measuring the DMI-induced frequency difference (ΔfDMI) between counter-propagating Damon-Eshbach spin waves using Brillouin light scattering (BLS) [13][14][15][16][17][18][19][20][21]. Figure 2(a) shows the geometry of the BLS measurements. The laser wavelength (λ) is 532 nm. The light incident angle (θ) with respect to the film normal was varied from 0 o to 32 o to tune the magnon wave-vector (k = 4πsinθ/λ). A magnetic field (H) of ±1700 Oe was applied along the x direction to align the magnetization of the Co layer. The anti-Stokes (Stokes) peaks in BLS spectra [ Fig. 2(b)] correspond to the annihilation (creation) of magnons with k (-k), while the total in-plane momentum is conserved during the BLS process. In Fig. 2(c), we plot ΔfDMI as a function of |k| for Au1-xPtx/Co interface with different x. Here ΔfDMI is the frequency difference of the ± k peaks and averaged for H = ±1700 Oe (see [20] for more details). The linear relation between ΔfDMI and k for each x agrees with the expected relation [13,47] ΔfDMI ≈ (2γ/πμ0Ms)Dk, where γ ≈ 176 GHz/T is the gyromagnetic ratio and D is the volumetric DMI constant.
The fivefold variation of interfacial DMI with x for the Au1-xPtx/Co interfaces is clearly not in linear proportion to the 1.8 times (<3.6 times) variation of s ISOC (ξ). Indeed when we plot D as a function of s ISOC [ Fig. 2(e)], it becomes quite apparent that there is not a linear correlation between D and s ISOC . This is a strikingly observation as it indicates that there must be another composition-sensitive effect that is, at least, as critical as the interfacial SOC for interfacial DMI. So far, effects including electronegativity [4], intermixing [14], proximity magnetism [14], orbital anisotropy [43], and orbital hybridization [14,20] have been suggested to affect DMI. Since Au and Pt have quite similar electronegativities (2.2 for Pt and 2.4 for Au) [50], a substantial composition-induced electronegativity variation seems unlikely to occur at the Au1-xPtx/Co interface. Previous studies have suggested that interfacial alloying, if significant, may substantially degrade both the DMI [14] and s ISOC [30,51]. However, our Au1-xPtx/Co bilayers have very high Ds and s ISOC . In previous work, we have reported on the structural characterizations of HM/FM samples prepared with similar conditions by transmission electron microscopy [52], x-ray reflectivity, and secondary ion mass spectrometry [30], all of which have indicated minimal interface alloying. The relatively low values of Ms (1200-1300 emu/cm 3 ) also indicate a rather minimal proximity magnetism at these Au1-xPtx/Co interfaces [53]. The DMI in Pt/Co bilayers was also suggested to correlate linearly with orbital anisotropy, with the latter being quantified by the o,i ⊥ / o ∥ ratio [43]. Fig. 2(e) we conclude that the variation of the DMI at the Au1-xPtx/Co interfaces with x cannot be attributed to the change of orbital anisotropy.
As we discuss below, the non-monotonic and exceptionally strong tunability of the DMI (stronger than that can be provided solely by the interfacial SOC) can be understood by taking into account the essential role of the varying orbital hybridization at the Au1-xPtx/Co interface. Theoretical calculations have shown that, besides the interfacial SOC, the 3d orbital occupations and their spin-flip scattering with the spin-orbit active 5d states collectively control the overall DMI [54]. The strength of the interfacial orbital hybridization is expected to be inversely proportional to the on-site energy difference of the 3d and 5d states [55]. The Fermi level (EF) of bulk Au (5d 10 6s 1 ) is located ~2 eV above the top of 5d band so that there are no 5d orbitals at the Fermi surface [56]. In contrast, EF of bulk Pt (5d 9 6s 1 ) is located in the top region of its 5d band. In both bulk Au and Pt [36], the density of states (DOS) has a sharp peak at the top region of the 5d band [56]. First-principles calculations [57] have indicated that, in magnetic multilayers consisting of repeats of HM (2 monolayers)/Co (monolayer) multilayers where the interfacial orbital hybridization becomes very important, both the HM 5d bands and the Co 3d bands are significantly broadened compared to their bulk properties. In Fig. 2(f), we compare the calculated results [57] for the local DOS of the Au/Co and Pt/Co systems. For Au/Co, the top of the Au 5d band is located ≈1.2 eV below EF, the minority spin band of Co 3d is centered at EF. For the Pt/Co system, the top of the Pt 5d band is 0.7 eV above EF, and the minority spin band of Co 3d is 0.63 eV above EF for the Pt/Co. In both cases, the majority spin band of Co 3d is located well below EF due to the exchange splitting.
As x increases, the 5d band of the Au1-xPtx/Co interfaces can be expected to be lifted continuously and pass through the Fermi level (i.e. from -1.2 eV for x = 0 to +0.7 eV for x = 1 with respect to EF). Meanwhile, the top of the minority spin band of Co 3d should also be shifted to be well above EF. As a consequence, with increasing x, the hybridization of the 5d orbitals of the Au1-xPtx with the 3d orbitals of Co would be first strengthened (mainly due to the shift of 5d band), then be maximized at an intermediate composition where the DOS peak of the 5d band is approximately at EF and is well aligned with the DOS peak of the 3d minority spin band of Co, and then finally decrease slightly as x approaches 1 (the DOS peak of the minority spin band Co 3d is moved away from EF). This is quite consistent with our experimental observation that, as x increases, the DMI is first increasingly enhanced by, we propose, 5d-3d hybridization, then is maximized at about x = 0.85, and finally decrease [ Fig. 2(d)]. The moderate reduction of interfacial SOC for x > 0.85 [see s ISOC in Fig. 1(c)] is also a likely contributor to the decrease of the DMI in this composition range.
In our previous study of Pd1-xPtx/Fe60Co20B20 interfaces [48], we found that the DMI does vary proportionally with s ISOC , which suggests a negligible variation of the interfacial orbital hybridization in that particular material system. This seems reasonable because, in the bulk, the energy distribution of the 4d DOS of Pd (4d 10 6s 0 ) is rather analogous to that of Pt 5d, with the first DOS peak of the Pd 4d band lying at about EF [36,56]. Since the orbital hybridization at the Pd1-xPtx/Fe60Co20B20 interfaces can further broaden the 5d and 3d bands, the DOS distribution and thus the strength of the interfacial 5d-3d orbital hybridization should be reasonably similar as a function of the Pd1-xPtx composition. This conclusion is very well supported by a first-principles calculation [58]. Pd also has the same electronegativity (2.2) [50] and valence electron number (10) [46] as Pt. Therefore, in the case of Pd1-xPtx/Fe60Co20B20, the interfacial SOC is left as the only important variable that determines the variation of the interfacial DMI with the Pd1-xPtx composition. The SOTs due to an interfacial Rashba-Edelstein effect should increase linearly with the Rashba constant (αR) [22][23][24][25]. It has been established that αR ∝ ξ o,i ⊥ / o ∥ [44] and that o ∥ is approximately constant at the HM/Co interfaces [40,41,43,44]. As a result, s ISOC of the Au1-xPtx/Co interfaces is a good linear indicator for αR. The strong tunability of s ISOC and thus αR of the Au1-xPtx/Co interfaces with x provides a novel access to test for any significant τDL due to the Rashba-Edelstein effect. If we define the apparent FMR spin-torque efficiency ( FMR ) from the ratio of the symmetric and anti-symmetric components of the magnetoresistance response of the ST-FMR [38,39], the efficiency of τDL per current density ( DL j ) for the Au1-xPtx/Co bilayers can be determined as the inverse intercept in the linear fit of ξFMR -1 vs tCo -1 [ Fig. 3(a)]. In Fig. 3(b), we summarize the ST-FMR results of DL j for Au1-xPtx 4 nm/Co 2-7 nm together with the "in-plane" harmonic response results for Au1-xPtx 4 nm/Co 1.4 nm bilayers, "out-of-plane" harmonic response results for Au1-xPtx 4 nm/Co 0.8 nm bilayers [37], and SOT switching results for in-plane SOTmagnetic tunnel junctions (MTJs) [11] increases from 0 to 0.75. When we take into account the spin memory loss to the lattice by interfacial spin-flip scattering, which increases linearly with s ISOC ∝ [30], the variation in the spin current generation is much greater than a factor of 15. We also find that the fieldlike SOT of the Au1-xPtx/Co samples is minimal and uncorrelated to αR [36].
This result disagrees with any significant interfacial torques because the latter, if any, should be proportional to αR or s ISOC so that it should vary by only a factor of 1.8. This conclusion is also supported by our recent experiments that the thermally-tuned interfacial SOC at HM/FM interfaces acts as spin memory loss rather than generates SOTs [30]. Instead, as discussed in [37] the variation of DL j can be well attributed to the composition-dependent resistivity and intrinsic spin Hall conductivity of the Au1-xPtx layer. It is also an interesting observation that DL j for Au1-xPtx 4/Co varies by a factor of 15 when the bulk SOC in Au1-xPtx remains the same. This is consistent with the intrinsic SHE being determined by the spin Berry curvature of the band structure of the HM rather than simply the bulk SOC strength [46]. Therefore, τDL of a HM/FM interfaces, at least in this system, behaves very differently from the interfacial Rashba-Edelstein effect, the interfacial DMI, and the bulk SOC.
In summary, we have demonstrated that the interfacial SOC at Au1-xPtx/Co can be tuned significantly via a strongly composition-dependent SOPE, without varying the bulk SOC and the electronegativity of the HM. We find that both the interfacial SOC and the interfacial orbital hybridization of the HM/FM interfaces play critical roles in the determination of the DMI. As a consequence, the interfacial DMI is not always a linear indicator of interfacial SOC, e.g. in the Au1-xPtx/Co bilayers where the interfacial orbital hybridization varies substantially with composition. The distinct composition dependences of interfacial SOC and τDL suggest minimal τDL from the Rashba-Edelstein effect. This is only consistent with the theoretical prediction that the localized Rashba-Edelstein effect contributes to negligible τDL [22][23][24][25]. These findings provide an in-depth understanding of interfacial SOC, interfacial DMI, and τDL, which will advance the development of high-efficient chiral magnetic devices. The large amplitudes and the strong tunability of both the DMI and τDL provided by the Au1-xPtx composition make Au1-xPtx/Co heterostructure an especially intriguing system for chiral spintronics. | 3,897.6 | 2020-07-19T00:00:00.000 | [
"Physics"
] |
The Electrochemical Behavior of Carbon Fiber Microelectrodes Modified with Carbon Nanotubes Using a Two-Step Electroless Plating/Chemical Vapor Deposition Process
Carbon fiber microelectrode (CFME) has been extensively applied in the biosensor and chemical sensor domains. In order to improve the electrochemical activity and sensitivity of the CFME, a new CFME modified with carbon nanotubes (CNTs), denoted as CNTs/CFME, was fabricated and investigated. First, carbon fiber (CF) monofilaments grafted with CNTs (simplified as CNTs/CFs) were fabricated in two key steps: (i) nickel electroless plating, followed by (ii) chemical vapor deposition (CVD). Second, a single CNTs/CF monofilament was selected and encapsulated into a CNTs/CFME with a simple packaging method. The morphologies of as-prepared CNTs/CFs were characterized by scanning electron microscopy. The electrochemical properties of CNTs/CFMEs were measured in potassium ferrocyanide solution (K4Fe(CN)6), by using a cyclic voltammetry (CV) and a chronoamperometry method. Compared with a bare CFME, a CNTs/CFME showed better CV curves with a higher distinguishable redox peak and response current; the higher the CNT content was, the better the CV curves were. Because the as-grown CNTs significantly enhanced the effective electrode area of CNTs/CFME, the contact area between the electrode and reactant was enlarged, further increasing the electrocatalytic active site density. Furthermore, the modified microelectrode displayed almost the same electrochemical behavior after 104 days, exhibiting remarkable stability and outstanding reproducibility.
Introduction
Microelectrodes are miniaturized working electrodes with micrometer dimensions that can be made with metallic or non-metallic conductors. Due to its tiny dimension, a microelectrode exhibits several unique electrochemical properties, such as a negligible ohmic drop, high detection sensitivity, high mass transfer rate, and enhanced signal-to-noise ratio [1]. These properties have led to its extensive applications in micro biosensors [2] and chemical analysis sensors [3]. Typical microelectrode materials include platinum, gold, silver, and carbon fiber [1]. Carbon fiber (CF) is an attractive electrode material because of its great physicochemical and electrochemical properties, such as its good electrical and thermal conductivities, adequate corrosion resistance, low density, and elasticity [4].
Since a single CF monofilament is only several microns in diameter, it can be directly used as a CF microelectrode (CFME). Due to their microscale volumes and fast response times, CFMEs have been applied in high performance biosensors for detecting secretory elements such as dopamine (DA), and for monitoring signal generation in a single cell in vivo [5,6]. Rodrigo et al. [7] successfully detected glucose concentrations in rats using a CFME biosensor. CFMEs have also been applied in electrochemical analysis and in the monitoring of environmental pollutants [8], owing to their advantageous properties which include a thin diffusion layer, small IR drop, and high signal-to-noise ratio. Yu et al. [9] presented a copper-modified CFME for the electrochemical determination of nitrate in PM 2.5 (airborne particulate matter with aerodynamic diameters less than 2.5 µm).
The surface structure of CF-which is the only effective working electrode in a CFME-is a disordered graphite layer with a low specific surface area (SSA) and activity, resulting in weak electronic responses that often elude the detection by conventional instruments [10]. For this reason, CF in a CFME is never used as-is, without modification. In fact, the surface of CF used in a CFME is always modified with techniques such as electrochemical oxidation, or derivatized with enzymes and nanoparticles, to enhance its sensitivity and selectivity toward biochemical molecules. Additionally, these modifications can effectively expand the application of CFME in the fields of analytical chemistry, environmental and health science, fuel cells, and biofuel cells [11].
Carbon nanotube (CNT) is a nanomaterial suitable for the modification of the CFME because it can significantly increase the overall surface area without substantially changing the size of an electrode [12]. CNTs have been found to promote electron-transfer reactions, minimize the fouling of electrode surfaces, and enhance the electrocatalytic activity of the electrodes [13]. Some researchers have tried to modify CF microdisk electrodes with single-walled CNTs via dip-coating [14,15] or electrochemical deposition [16]. They successfully used these modified microelectrodes to detect specific biochemical substances like dopamine. Among the existing methods used to modify the CFME with CNTs, a two-step method combining electroless plating and a chemical vapor deposition (CVD) process shows great promise. Using this method [17], a CF-Ni-CNTs coaxial fiber structure was successfully fabricated on the surface of a CF. Experimental results validated the fact that the SSA and capacitance of CF were significantly enhanced by this structure. Furthermore, the CNTs grafted on the CF surface by the CVD technique distribute more uniformly and make contact with the pristine CF surface more intimately. In addition, it is simple to adjust the thickness and density of a CNT layer by varying the CVD process parameters, such as temperature, catalyst composition, and process gas mixture [18,19]. As such, this two-step method is very compatible and scalable for the mass production of CNTs/CF, potentially enabling the low cost fabrication of CNTs/CFMEs exhibiting high productivity [19].
In this work, a new cylindrical CFME modified with multi-walled carbon nanotubes (CNTs), denoted as CNTs/CFME, was fabricated and its properties were characterized. First, numerous carbon fiber (CF) monofilaments grafted with CNTs (simplified as CNTs/CFs) that appeared in a bundle were fabricated using a two-step electroless plating/CVD method. Second, a single CNTs/CF monofilament was selected and encapsulated into a CNTs/CFME using a simple sealing method. The electrochemical properties of the as-prepared CNTs/CFMEs with different coating parameters (content, morphology, specific surface area, etc.) were investigated through cyclic voltammetry and chronoamperometry methods.
Materials and Reagents
The CFs used throughout this work are T700SC-12K-50C polyacrylonitrile (PAN)-based carbon fibers (Toray Industries, Inc., Tokyo, Japan), in the form of a bundle comprising 12,000 monofilaments (12K tow). For a single continuous CF filament, the average diameter is 7 µm and the average resistivity is 1.6 × 10 −3 Ω·cm (625 S·cm −1 in conductivity).
During electroless plating, the sensitizing solution was prepared by mixing SnCl 2 ·2H 2 O (10 g·L −1 ) and HCl (40 ml·L −1 ), and the activating solution was prepared by mixing PdCl 2 (0.5 g·L −1 ) and HCl (20 ml·L −1 ). The solution used for electrochemical tests was 5.0 mM potassium ferrocyanide (K 4 Fe(CN) 6 ) aqueous solution, with 1.0 M KCl aqueous solution added as a supporting electrolyte. The purity of the acetylene and argon gases used in this experiment was 99.999 vol%. All of the other reagents were of analytical reagent grade and were used without further purification.
Preparation of CNTs/CFMEs
A segment of CFs which was 100 mm-long was tailored from the CF bundle. Then, it was integrally grafted with CNTs. Following this, a single CNTs/CF monofilament was separated and used to make a CNTs/CFME, as shown in Figure 1. During electroless plating, the sensitizing solution was prepared by mixing SnCl2·2H2O (10 g·L −1 ) and HCl (40 ml·L −1 ), and the activating solution was prepared by mixing PdCl2 (0.5 g·L −1 ) and HCl (20 ml·L −1 ). The solution used for electrochemical tests was 5.0 mM potassium ferrocyanide (K4Fe(CN)6) aqueous solution, with 1.0 M KCl aqueous solution added as a supporting electrolyte. The purity of the acetylene and argon gases used in this experiment was 99.999 vol%. All of the other reagents were of analytical reagent grade and were used without further purification.
Preparation of CNTs/CFMEs
A segment of CFs which was 100 mm-long was tailored from the CF bundle. Then, it was integrally grafted with CNTs. Following this, a single CNTs/CF monofilament was separated and used to make a CNTs/CFME, as shown in Figure 1. The CNTs were directly grown on the CFs segment that was first coated with catalytic electroless nickel (Ni), followed by a CVD process in an acetylene/argon environment. A series of pre-treatments for CFs were carried out to ensure the successful growth of CNTs [17]. These steps included: (1) removal of the sizing agent on CFs through immersing the samples in acetone for 40 min; (2) sensitization and activation treatments using the aforementioned sensitizing and activating solutions for 10 min sequentially, with ultrasonic vibration assistance; (3) electroless plating of a Ni-layer on the CF surface using a recipe listed in Table 1. To differentiate this work from previous work [17], the plating time was set at 2 min, in order to obtain a thin Ni layer; and (4) growth of the CNTs via a vapor-liquid-solid mechanism via the CVD method in a vacuum tube furnace (FWL(ZK)-08/70/3, Facerom, Hefei, China). During the CVD process, the growth temperature was set to 680 °C, and acetylene gas (20 sccm) was introduced as the carbon source and argon (50 sccm) as the carrier gas. In order to elucidate the influence of CNT dimensions and morphologies on the electrochemical properties of CFMEs, the CNTs growth time was individually controlled to be 5 min, 10 min, and 15 min, to obtain a series of CNTs/CF samples with different CNTs contents and morphologies, which were denoted as CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15, respectively. The CNTs were directly grown on the CFs segment that was first coated with catalytic electroless nickel (Ni), followed by a CVD process in an acetylene/argon environment. A series of pre-treatments for CFs were carried out to ensure the successful growth of CNTs [17]. These steps included: (1) removal of the sizing agent on CFs through immersing the samples in acetone for 40 min; (2) sensitization and activation treatments using the aforementioned sensitizing and activating solutions for 10 min sequentially, with ultrasonic vibration assistance; (3) electroless plating of a Ni-layer on the CF surface using a recipe listed in Table 1. To differentiate this work from previous work [17], the plating time was set at 2 min, in order to obtain a thin Ni layer; and (4) growth of the CNTs via a vapor-liquid-solid mechanism via the CVD method in a vacuum tube furnace (FWL(ZK)-08/70/3, Facerom, Hefei, China). During the CVD process, the growth temperature was set to 680 • C, and acetylene gas (20 sccm) was introduced as the carbon source and argon (50 sccm) as the carrier gas. In order to elucidate the influence of CNT dimensions and morphologies on the electrochemical properties of CFMEs, the CNTs growth time was individually controlled to be 5 min, 10 min, and 15 min, to obtain a series of CNTs/CF samples with different CNTs contents and morphologies, which were denoted as CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15, respectively. After grafting CNTs on CFs, a simple sealing method adopted epoxy resin (E44-6101) to encapsulate CNTs/CFMEs. First, a 10-mm long single CNTs/CF monofilament was selected from the as-prepared CNTs/CF bundle. Then, one end of the monofilament was attached to a piece of insulated copper wire (UL1007 24AWG), which was about 1.43 mm in diameter and 30 mm in length, using conductive silver lacquer (MCN-DJ002, Mechanic, Hongkong, China). In order to minimize contact resistance, the CNTs/CF monofilament overlapped with the exposed copper wire for about 3-4 mm. After the conductive silver lacquer was cured, the junction and adjacent bare part of the copper wire were encapsulated with epoxy resin. Finally, the protruding end of the CNTs/CF was carefully trimmed to 250-300 µm in length under a microscope (VHX-2000, KEYENCE, Osaka, Japan), and a CNTs/CFME was fabricated. As such, a series of CNTs/CFMEs, made from pristine CF, CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15 were prepared, denoted as CFME-T0, CNTs/CFME-T5, CNTs/CFME-T10, and CNTs/CFME-T15, respectively.
Performance Characterization
The surface morphologies of CFs with and without CNTs were observed using a high resolution thermal field emission scanning electron microscope (ZEISS Merlin, Jena, Germany). The modified CNTs were observed using a field emission transmission electron microscope (TEM, JEM-2100F, JEOL Ltd., Tokyo, Japan). The electrical conductivities of CFMEs were expressed using the corresponding CF or CNTs/CFs monofilament, instead of being measured directly, to avoid damage to the electrodes. A four-point probe method [20] was employed to measure the electrical conductivity of single CF or CNTs/CF monofilaments. After grafting CNTs on CFs, a simple sealing method adopted epoxy resin (E44-6101) to encapsulate CNTs/CFMEs. First, a 10-mm long single CNTs/CF monofilament was selected from the as-prepared CNTs/CF bundle. Then, one end of the monofilament was attached to a piece of insulated copper wire (UL1007 24AWG), which was about 1.43 mm in diameter and 30 mm in length, using conductive silver lacquer (MCN-DJ002, Mechanic, Hongkong, China). In order to minimize contact resistance, the CNTs/CF monofilament overlapped with the exposed copper wire for about 3-4 mm. After the conductive silver lacquer was cured, the junction and adjacent bare part of the copper wire were encapsulated with epoxy resin. Finally, the protruding end of the CNTs/CF was carefully trimmed to 250-300 μm in length under a microscope (VHX-2000, KEYENCE, Osaka, Japan), and a CNTs/CFME was fabricated. As such, a series of CNTs/CFMEs, made from pristine CF, CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15 were prepared, denoted as CFME-T0, CNTs/CFME-T5, CNTs/CFME-T10, and CNTs/CFME-T15, respectively.
Performance Characterization
The surface morphologies of CFs with and without CNTs were observed using a high resolution thermal field emission scanning electron microscope (ZEISS Merlin, Jena, Germany). The modified CNTs were observed using a field emission transmission electron microscope (TEM, JEM-2100F, JEOL Ltd., Tokyo, Japan). The electrical conductivities of CFMEs were expressed using the corresponding CF or CNTs/CFs monofilament, instead of being measured directly, to avoid damage to the electrodes. A four-point probe method [20] was employed to measure the electrical conductivity of single CF or CNTs/CF monofilaments. The SSA of CFs with and without CNTs was measured by N 2 adsorption tests in a Brunauer-Emmett-Teller (BET) analyzer (ASAP 2020 V4.00, Micromeritics Instrument Corporation, Atlanta, GA, USA). The electrochemical properties of the as-prepared CFME and CNTs/CFME were mainly characterized by cyclic voltammetry, using an electrochemical workstation (CHI 650D, CH Instruments, Shanghai, China) and a three-electrode system, as shown in Figure 2. In the three-electrode system, the working electrode is an as-prepared CFME or CNTs/CFME, the reference electrode is Ag/AgCl (ceramic core, 3.0 M KCl, Φ12 × 120), and the counter electrode is a Pt wire (1 mm in diameter and 37 mm in length). Before electrochemical property tests, both the CFME and CNTs/CFMEs were electrochemically pretreated in 0.5 M H 2 SO 4 with the cyclic voltammetry method, within a potential range of 0 to +1.0 V at a scan rate of 0.1 V·s −1 , until a stable cyclic voltammogram was obtained.
Morphology Observation of CNTs/CFs
Using the aforementioned two-step method, a layer of coaxial CNTs was successfully grown on the CF surface. As shown in Figure 3a-d, the entire surface of the CF is fully covered by slender CNTs, and the diameter of the CNTs/CF composite increases with increasing CNTs growth time. The inset in Figure 3h shows a TEM image of a CNT, clearly indicating that the CNT has a hollow tubular structure like bamboo. The main constituent of a CNT is the carbon element, generated from the decomposition of acetylene on the surface of Ni nanoparticles. Sengupta and Jacob [21] concluded that the dissolved carbon element would diffuse toward the bottom of the Ni particles and segregates as graphite on the CF surface. The Raman spectrum proved that the structure of the CNTs was multi-walled [17]. As the CNTs growth time increases, carbon generated from the decomposition of acetylene increases, causing CNTs to grow in length (as shown in Figure 3f-h), thus enlarging the thickness of the CNT layer, as revealed by Brukh and Mitra [22]. Consequently, the diameter of a CNTs/CF is larger than that of the pristine CF, and increases with increasing CNTs growth time. As can be seen from Figure 3e-h, the surface of the pristine CF is very smooth, but those of CNTs/CF samples are extremely rough and porous, indicating that a CNTs/CF has a larger SSA than a pristine CF. Since the SSA of a CNT (theoretical surface areas for multi-walled CNTs are diameter-dependent and estimated to be in the range of a few hundred m 2 ·g −1 [23]) is much higher than a pristine CF (0.15 m 2 ·g −1 obtained from BET analyses), the thicker the CNTs layer, the larger the SSA of a CNTs/CF. Therefore, the SSAs of CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15 were increased remarkably from 0.15 m 2 ·g −1 to 35.95, 84.97, and 119.53 m 2 ·g −1 , respectively, demonstrating that the SSA of a CF will be significantly enlarged after its surface is covered with CNTs.
The as-fabricated CNTs/CFMEs are shown in Figure 4a. Each CNTs/CFME has two exposed ends. One exposed end is a piece of copper wire, whose length (10 mm in this work) is selected based on the connection requirements to the outer circuits. The other exposed end is a piece of protruding CF monofilament (250-300 µm) modified with or without CNTs, which acts as the working electrode. Figure 4b shows a magnified view of the protruding CF monofilament, which clearly shows that the CF was well encapsulated by epoxy resin. Following proper sample preparation and encapsulation protocols, the epoxy resin keeps the protruding CF clean and away from contaminants.
Electrical Conductivity Analysis of CNTs/CFs
Our test results show that the electrical conductivities of CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15 were enhanced by 15%, 38%, and 57%, respectively, compared to that of the pristine CF (about 625 S·cm −1 ). This validates the beneficial role of CNTs: that longer, denser CNTs lead to better electrical conductivity in the CNTs/CF. Among the CF and CNTs/CFs, CNTs/CF-T15 has the largest electrical conductivity, of 965.94 S·cm −1 , which can be attributed to the following two reasons. First, the highly conductive multi-walled CNTs (about 1000-2000 S·cm −1 [24]) significantly elevated the conductivity of a pristine CF. Second, and importantly, CNTs with a large length-to-diameter ratio would entangle with each other on the CF surface and form a three-dimensional (3D) coaxial conductive network, providing a larger contact area and additional electron conduction pathways, and thus enhancing the electrical conductivity [25]. By extension, the microelectrodes fabricated from
Electrical Conductivity Analysis of CNTs/CFs
Our test results show that the electrical conductivities of CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15 were enhanced by 15%, 38%, and 57%, respectively, compared to that of the pristine CF (about 625 S·cm −1 ). This validates the beneficial role of CNTs: that longer, denser CNTs lead to better electrical conductivity in the CNTs/CF. Among the CF and CNTs/CFs, CNTs/CF-T15 has the largest electrical conductivity, of 965.94 S·cm −1 , which can be attributed to the following two reasons. First, the highly conductive multi-walled CNTs (about 1000-2000 S·cm −1 [24]) significantly elevated the conductivity of a pristine CF. Second, and importantly, CNTs with a large length-to-diameter ratio would entangle with each other on the CF surface and form a three-dimensional (3D) coaxial conductive network, providing a larger contact area and additional electron conduction pathways, and thus enhancing the electrical conductivity [25]. By extension, the microelectrodes fabricated from
Electrical Conductivity Analysis of CNTs/CFs
Our test results show that the electrical conductivities of CNTs/CF-T5, CNTs/CF-T10, and CNTs/CF-T15 were enhanced by 15%, 38%, and 57%, respectively, compared to that of the pristine CF (about 625 S·cm −1 ). This validates the beneficial role of CNTs: that longer, denser CNTs lead to better electrical conductivity in the CNTs/CF. Among the CF and CNTs/CFs, CNTs/CF-T15 has the largest electrical conductivity, of 965.94 S·cm −1 , which can be attributed to the following two reasons. First, the highly conductive multi-walled CNTs (about 1000-2000 S·cm −1 [24]) significantly elevated the conductivity of a pristine CF. Second, and importantly, CNTs with a large length-to-diameter ratio would entangle with each other on the CF surface and form a three-dimensional (3D) coaxial conductive network, providing a larger contact area and additional electron conduction pathways, and thus enhancing the electrical conductivity [25]. By extension, the microelectrodes fabricated from the CNTs/CFs monofilament would therefore have markedly improved electrical conductivity compared to the one made from the pristine CF. Figure 5 shows the CV curves of the as-prepared CFMEs in 5.0 mM K 4 Fe(CN) 6 , performed at a scan rate of 0.10 V·s −1 . As shown, all of the CV curves exhibit highly symmetrical shapes for both the forward and reverse potential scans, indicating that highly reversible redox reactions are taking place at the CNTs/CFMEs. It demonstrates that the CNTs/CFMEs, as microelectrodes, exhibit an outstanding electrochemical property in the presence of K 4 Fe(CN) 6 and are able to reproduce the electrode reaction process of active substances [26]. From Figure 5a, it can be seen that the CV curve of the pure CFME has relatively gentle peaks, low peak currents, and a narrow area under the curve, similarly illustrated by Chen et al. [27]. In comparison, the CNTs/CFMEs show better CV behaviors, e.g., more distinguishable peaks, as well as much higher response currents, as shown in Figure 5b-d. In addition, the microelectrode decorated with longer and denser CNTs has more evident peaks and higher peak currents, demonstrating that the CNTs/CFME has a higher sensitivity than the unmodified CFME [28]. the CNTs/CFs monofilament would therefore have markedly improved electrical conductivity compared to the one made from the pristine CF. Figure 5 shows the CV curves of the as-prepared CFMEs in 5.0 mM K4Fe(CN)6, performed at a scan rate of 0.10 V·s −1 . As shown, all of the CV curves exhibit highly symmetrical shapes for both the forward and reverse potential scans, indicating that highly reversible redox reactions are taking place at the CNTs/CFMEs. It demonstrates that the CNTs/CFMEs, as microelectrodes, exhibit an outstanding electrochemical property in the presence of K4Fe(CN)6 and are able to reproduce the electrode reaction process of active substances [26]. From Figure 5a, it can be seen that the CV curve of the pure CFME has relatively gentle peaks, low peak currents, and a narrow area under the curve, similarly illustrated by Chen et al. [27]. In comparison, the CNTs/CFMEs show better CV behaviors, e.g., more distinguishable peaks, as well as much higher response currents, as shown in Figure 5b-d. In addition, the microelectrode decorated with longer and denser CNTs has more evident peaks and higher peak currents, demonstrating that the CNTs/CFME has a higher sensitivity than the unmodified CFME [28]. Figure 6 shows the individual CV curves of pure CFME and CNTs/CFMEs at scan rates of 0.01 V·s −1 , 0.05 V·s −1 , 0.10 V·s −1 , and 0.50 V·s −1 in 5.0 mM K4Fe(CN)6. As shown, the peak potentials (including the oxidation peak, Epa, and reduction peak, Epc) of identical electrodes at different scan rates show a negligible difference, with Epa approaching 0.35 V and Epc displaying a value of 0.17 V for all four electrode samples. Figure 6 shows the individual CV curves of pure CFME and CNTs/CFMEs at scan rates of 0.01 V·s −1 , 0.05 V·s −1 , 0.10 V·s −1 , and 0.50 V·s −1 in 5.0 mM K 4 Fe(CN) 6 . As shown, the peak potentials (including the oxidation peak, E pa , and reduction peak, E pc ) of identical electrodes at different scan rates show a negligible difference, with E pa approaching 0.35 V and E pc displaying a value of 0.17 V for all four electrode samples. According to the Nernst equation, the boundary condition of the reversible process can be expressed by [29]:
Cyclic Voltammetry (CV) Analysis of CNTs/CFMEs
where ∆E p is the peak-to-peak potential difference, R is the universal gas constant (8.3143 J·K −1 ·mol −1 ), T is the thermodynamic temperature (298.15 K for room temperature), n is the number of electrons involved in the reaction, and F is the Faraday constant (96,485.3383 C·mol −1 ). Consequently, Equation (1) can be simplified as: Since the main electrode reaction in the K4Fe(CN)6 solution is: [Fe(CN)6] 4− − e − = [Fe(CN)6] 3− , the number of electrons involved n is one. Therefore, the boundary condition is theoretically equal to ∆E p ≤ 59.0 mV. Often, the experimentally observed ∆E p values are greater than the theoretical value of 59.0 mV. Figure 7a depicts the ∆E p values of CFMEs at different scan rates. All experimentally observed ∆E p values are larger than the theoretical value of 59.0 mV. However, as can be seen from Figure 6, the peak currents Ip for all electrodes increase remarkably with the increasing scan rates of potential, and the ratios of the oxidation peak currents Ipa to the reduction peak currents Ipc approach unity. Therefore, it can be concluded that the reactions occurring at the CNTs/CFMEs and its pristine CFME are reversible or quasi-reversible processes [30]. Figure 7a shows that the ∆E p values of all CNTs/CFMEs are smaller than that of the pristine CFME at the same potential scan rate. Furthermore, for CNTs/CFMEs, as the CNTs growth time increases-hence producing greater CNT lengths and densities-∆E p decreases proportionally toward the theoretical minimum. Such a decrease of ∆E p According to the Nernst equation, the boundary condition of the reversible process can be expressed by [29]: where ∆E p is the peak-to-peak potential difference, R is the universal gas constant (8.3143 J·K −1 ·mol −1 ), T is the thermodynamic temperature (298.15 K for room temperature), n is the number of electrons involved in the reaction, and F is the Faraday constant (96,485.3383 C·mol −1 ). Consequently, Equation (1) can be simplified as: Since the main electrode reaction in the K 4 Fe(CN) 6 solution is: 6 ] 3− , the number of electrons involved n is one. Therefore, the boundary condition is theoretically equal to ∆E p ≤ 59.0 mV. Often, the experimentally observed ∆E p values are greater than the theoretical value of 59.0 mV. Figure 7a depicts the ∆E p values of CFMEs at different scan rates. All experimentally observed ∆E p values are larger than the theoretical value of 59.0 mV. However, as can be seen from Figure 6, the peak currents I p for all electrodes increase remarkably with the increasing scan rates of potential, and the ratios of the oxidation peak currents I pa to the reduction peak currents I pc approach unity. Therefore, it can be concluded that the reactions occurring at the CNTs/CFMEs and its pristine CFME are reversible or quasi-reversible processes [30]. Figure 7a shows that the ∆E p values of all CNTs/CFMEs are smaller than that of the pristine CFME at the same potential scan rate. Furthermore, for CNTs/CFMEs, as the CNTs growth time increases-hence producing greater CNT lengths and densities-∆E p decreases proportionally toward the theoretical minimum. Such a decrease of ∆E p implies that the redox reactions taking place at the CNTs/CFMEs tended to be increasingly reversible processes as the dimensions and densities of CNTs increased [31]. implies that the redox reactions taking place at the CNTs/CFMEs tended to be increasingly reversible processes as the dimensions and densities of CNTs increased [31]. The oxidation peak current (Ipa) values of all of the electrodes versus the square root of the scan rates (v 1/2 ) are shown in Figure 7b. The linear relationship between Ipa and v 1/2 is demonstrated with linear fitting, and the coefficients of determination are all over 0.99. Compared to the unmodified CFME, all of the CNTs/CFMEs have increasingly larger linear slopes between Ipa and v 1/2 as the thicknesses of the CNT layers increase, indicating that the peak currents of CNTs/CFMEs have a progressively higher sensitivity towards potential scan rates. As shown in Figure 6, even at a slow scan rate of 0.01 V·s −1 , the oxidation peaks and reduction peaks of each CNTs/CFME are clearly visible, especially for the CNTs/CFME-T15 electrode. On the contrary, no such peaks were observed for the pristine CFME. This phenomena proves that the CNTs are beneficial to enhancing the sensitivity of CFMEs, and a greater electrode sensitivity to scan rate can be achieved with thicker and denser CNTs [32].
Electrocatalytic Activity of the Microelectrode
An electrochemical effective area is a critical parameter for characterizing the electrocatalytic activity of an electrode, since it provides the reaction active sites and contact interface area between the electrode and analytes [33]. In light of this, CNTs can appreciably improve the electrochemical performance of a CFME through enlarging its surface area and providing more reaction active sites. The chronoamperogram in Figure 8a shows that the response currents of the CNTs/CFMEs are higher than that of the CFME, which is consistent with the results obtained by the CV method mentioned above. Moreover, the response current decays very slowly in the long time zone, so it can be recognized as a quasi-steady state [34], and its current is defined as the quasi-steady state current iqss.
Szabo et al. [35] reported an approximate formula describing the relationship between the current and time for cylindrical microelectrodes: where i is the current in amps, A is the effective electrode area in cm 2 , D is the diffusion coefficient in cm 2 ·s −1 , c is the concentration of the electroactive species in mol·cm −3 , r is the radius of the cylindrical microelectrode in cm, and τ is a coefficient related to time and is equal to τ=4Dt/r 2 . For a quasi-steady state, since τ is very large in the long time zone, Equation (3) can be simplified as [34]: The oxidation peak current (I pa ) values of all of the electrodes versus the square root of the scan rates (v 1/2 ) are shown in Figure 7b. The linear relationship between I pa and v 1/2 is demonstrated with linear fitting, and the coefficients of determination are all over 0.99. Compared to the unmodified CFME, all of the CNTs/CFMEs have increasingly larger linear slopes between I pa and v 1/2 as the thicknesses of the CNT layers increase, indicating that the peak currents of CNTs/CFMEs have a progressively higher sensitivity towards potential scan rates. As shown in Figure 6, even at a slow scan rate of 0.01 V·s −1 , the oxidation peaks and reduction peaks of each CNTs/CFME are clearly visible, especially for the CNTs/CFME-T15 electrode. On the contrary, no such peaks were observed for the pristine CFME. This phenomena proves that the CNTs are beneficial to enhancing the sensitivity of CFMEs, and a greater electrode sensitivity to scan rate can be achieved with thicker and denser CNTs [32].
Electrocatalytic Activity of the Microelectrode
An electrochemical effective area is a critical parameter for characterizing the electrocatalytic activity of an electrode, since it provides the reaction active sites and contact interface area between the electrode and analytes [33]. In light of this, CNTs can appreciably improve the electrochemical performance of a CFME through enlarging its surface area and providing more reaction active sites. The chronoamperogram in Figure 8a shows that the response currents of the CNTs/CFMEs are higher than that of the CFME, which is consistent with the results obtained by the CV method mentioned above. Moreover, the response current decays very slowly in the long time zone, so it can be recognized as a quasi-steady state [34], and its current is defined as the quasi-steady state current i qss .
Szabo et al. [35] reported an approximate formula describing the relationship between the current and time for cylindrical microelectrodes: where i is the current in amps, A is the effective electrode area in cm 2 , D is the diffusion coefficient in cm 2 ·s −1 , c is the concentration of the electroactive species in mol·cm −3 , r is the radius of the cylindrical microelectrode in cm, and τ is a coefficient related to time and is equal to τ = 4Dt/r 2 . For a quasi-steady state, since τ is very large in the long time zone, Equation (3) can be simplified as [34]: Figure 8b-e shows the scatter plots of the chronoamperometry response current vs. l/lnτ for different CF electrodes. The response current decays very slowly with the increase of time (after about 0.2 s), confirming the occurrence of the quasi-steady state (the front part of the scatter plots). Therefore, a linear fitting was adopted to fit the plots of this part. Table 2 lists the obtained equations of linear regression, whose coefficients of determination are all larger than 0.99, further validating the good linear relationship between the i qss and l/lnτ. Assuming a diffusion coefficient for 5.0 mM ferrocyanide of 6.67 × 10 −6 cm 2 ·s −1 in 1.0 M KCl, as presented by Konopka and Mcduffie [36], the effective electrode area A can be calculated according to Equation (4). The results were listed in Table 3. It indicates increased ratios in the effective electrode area of 79%, 443%, and 749% for CNTs/CFME-T5, CNTs/CFME-T10, and CNTs/CFME-T15, respectively, as compared to a pristine CFME. This result is consistent with the SSA variation of CNTs/CFs after the introduction of CNTs, as mentioned above. Thus, the electrocatalytic activities of CNTs/CFMEs are enhanced significantly, owing to the massive active sites provided by the CNTs. Figure 8b-e shows the scatter plots of the chronoamperometry response current vs. l/lnτ for different CF electrodes. The response current decays very slowly with the increase of time (after about 0.2 s), confirming the occurrence of the quasi-steady state (the front part of the scatter plots). Therefore, a linear fitting was adopted to fit the plots of this part. Table 2 lists the obtained equations of linear regression, whose coefficients of determination are all larger than 0.99, further validating the good linear relationship between the iqss and l/lnτ. Assuming a diffusion coefficient for 5.0 mM ferrocyanide of 6.67 × 10 −6 cm 2 ·s −1 in 1.0 M KCl, as presented by Konopka and Mcduffie [36], the effective electrode area A can be calculated according to Equation (4). The results were listed in Table 3. It indicates increased ratios in the effective electrode area of 79%, 443%, and 749% for CNTs/CFME-T5, CNTs/CFME-T10, and CNTs/CFME-T15, respectively, as compared to a pristine CFME. This result is consistent with the SSA variation of CNTs/CFs after the introduction of CNTs, as mentioned above. Thus, the electrocatalytic activities of CNTs/CFMEs are enhanced significantly, owing to the massive active sites provided by the CNTs. The chronoamperometry response currents of the electrodes at the tenth second were selected as the quasi-steady state currents and the corresponding current densities were calculated (listed in Table 3). The current densities of the CNTs/CFMEs are much higher than that of an unmodified CFME, with a maximum two times increase (CNTs/CFME-T5). It indicates that the CNTs/CFMEs have a higher mass transfer rate and reaction rate. In addition, the current densities of CNTs/CFME-T10 and CNTs/CFME-T15 are very close, and they are both less than that of CNTs/CFME-T5. It is possible that the diameter of the CF microelectrodes enlarges with the increased content of CNTs, resulting in the reduction of its mass transfer rate. Plots of chronoamperometry response current vs. l/lnτ of (b) CFME-T0; (c) CNTs/CFME-T5; (d) CNTs/CFME-T10 and (e) CNTs/CFME-T15, respectively. The chronoamperometry response currents of the electrodes at the tenth second were selected as the quasi-steady state currents and the corresponding current densities were calculated (listed in Table 3). The current densities of the CNTs/CFMEs are much higher than that of an unmodified CFME, with a maximum two times increase (CNTs/CFME-T5). It indicates that the CNTs/CFMEs have a higher mass transfer rate and reaction rate. In addition, the current densities of CNTs/CFME-T10 and CNTs/CFME-T15 are very close, and they are both less than that of CNTs/CFME-T5. It is possible that the diameter of the CF microelectrodes enlarges with the increased content of CNTs, resulting in the reduction of its mass transfer rate.
Reproducibility Analysis of the CNTs/CFME
The reproducibility of the CNTs/CFME-T15-with the largest CNT content and the best reflection of the stability of CNTs/CFMEs-was tested by consecutive cyclic potential scans in the 5.0 mM K 4 Fe(CN) 6 (in 1.0 M KCl), at a scan rate of 0.01 V·s −1 . After 20 consecutive cycles, the CNTs/CFME-T15 was removed from the solution, rinsed in deionized water, and exposed to the air for about 104 days. Subsequently, the same cyclic voltammetry test was performed again. The first, tenth, and twentieth CV curves before and after 104 days are shown in Figure 9. The results indicate that all of the CV curves are nearly identical with constant response currents, indicating that the CNTs/CFME is exceptionally stable and has extraordinary reproducibility. This is largely attributed to the CNT being a remarkable electrode material that possesses high stability and outstanding reproducibility [37].
Reproducibility Analysis of the CNTs/CFME
The reproducibility of the CNTs/CFME-T15-with the largest CNT content and the best reflection of the stability of CNTs/CFMEs-was tested by consecutive cyclic potential scans in the 5.0 mM K4Fe(CN)6 (in 1.0 M KCl), at a scan rate of 0.01 V·s −1 . After 20 consecutive cycles, the CNTs/CFME-T15 was removed from the solution, rinsed in deionized water, and exposed to the air for about 104 days. Subsequently, the same cyclic voltammetry test was performed again. The first, tenth, and twentieth CV curves before and after 104 days are shown in Figure 9. The results indicate that all of the CV curves are nearly identical with constant response currents, indicating that the CNTs/CFME is exceptionally stable and has extraordinary reproducibility. This is largely attributed to the CNT being a remarkable electrode material that possesses high stability and outstanding reproducibility [37].
Conclusions
We fabricated a series of carbon fiber microelectrodes modified with carbon nanotubes (CNTs/CFMEs) using different process parameters. The effective working electrode of a CNTs/CFME was made by Ni electroless plating, followed by CVD processes. The encapsulation of the CNTs/CFME was done via a simple process using an insulated copper wire and epoxy resin sealing. The as-prepared CNTs/CFMEs displayed quasi-reversible electrode behaviors in cyclic voltammetry and a quasi-steady state in chronoamperometry. CNTs are validated to be able to significantly enhance the electrical conductivity of the CFs. Compared with a bare CFME, a CNTs/CFME shows a better CV curve, exhibiting a higher distinguishable redox peak and response current. The modified CNTs can provide the CNTs/CFME with a large effective electrode area, substantially increasing the active sites and current density. Consequently, a CNTs/CFME exhibits better electrochemical properties in comparison with the unmodified CFME, in terms of its remarkable reversibility, electrocatalytic activity, and high mass transfer rate. Moreover, a CNTs/CFME is shown to have
Conclusions
We fabricated a series of carbon fiber microelectrodes modified with carbon nanotubes (CNTs/CFMEs) using different process parameters. The effective working electrode of a CNTs/CFME was made by Ni electroless plating, followed by CVD processes. The encapsulation of the CNTs/CFME was done via a simple process using an insulated copper wire and epoxy resin sealing. The as-prepared CNTs/CFMEs displayed quasi-reversible electrode behaviors in cyclic voltammetry and a quasi-steady state in chronoamperometry. CNTs are validated to be able to significantly enhance the electrical conductivity of the CFs. Compared with a bare CFME, a CNTs/CFME shows a better CV curve, exhibiting a higher distinguishable redox peak and response current. The modified CNTs can provide the CNTs/CFME with a large effective electrode area, substantially increasing the active sites and current density. Consequently, a CNTs/CFME exhibits better electrochemical properties in comparison with the unmodified CFME, in terms of its remarkable reversibility, electrocatalytic activity, and high mass transfer rate. Moreover, a CNTs/CFME is shown to have extraordinary stability and reproducibility, making it more applicable as a sensor in the domains of microscopic biochemical analyses and measurements. | 9,115.4 | 2017-03-30T00:00:00.000 | [
"Materials Science"
] |
Next-to-leading order corrections for $gg \to ZH$ with top quark mass dependence
In this Letter, we present for the first time a calculation of the complete next-to-leading order corrections to the $gg \to ZH$ process. We use the method of small mass expansion to tackle the most challenging two-loop virtual amplitude, in which the top quark mass dependence is retained throughout the calculations. We show that our method provides reliable numeric results in all kinematic regions, and present phenomenological predictions for the total and differential cross sections at the Large Hadron Collider and its future upgrades. Our results are necessary ingredients towards reducing the theoretical uncertainties of the $pp \to ZH$ cross sections down to the percent-level, and provide important theoretical inputs for future precision experimental collider programs.
Introduction
One of the top priorities of the Large Hadron Collider (LHC) and the High Luminosity LHC (HL-LHC) is to accurately measure the properties of the Higgs boson including its various couplings. In particular, the Yukawa couplings of the Higgs boson to fermions are key to understand the origin of fermion masses as well as to verify the family universality of different fermion generations. At the LHC and the HL-LHC, the main production channel gg → H followed by hadronic decays of the Higgs boson is overwhelmed by large hadronic activities arising from strong interactions. Therefore, to measure the Yukawa couplings of quarks lighter than the top quark, the pp → ZH production channel is the most promising one, where the leptonic decay of the Z boson can be efficiently detected and leads to a significant reduction of the hadronic background [1,2].
Besides, the pp → ZH process also provides an independent handle of the HZZ coupling with respect to the H → ZZ * decay process [3,4]. Therefore, it is of crucial importance to have precise knowledge of its differential cross sections both from the theoretical and the experimental points of view.
At the partonic level, the pp → ZH process proceeds through the subprocess qq → Z * → ZH at the leading order (LO) in standard model (SM) couplings. Since the quantum effects due to strong interactions are often quite large for such processes, it is necessary to calculate higher order corrections in the strong coupling α s within perturbative quantum chromodynamics (QCD), in order to improve the accuracy for the theoretical predictions. The next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) QCD corrections were calculated in [5][6][7] and are implemented in the program package vh@nnlo [8,9]. The NLO electroweak (EW) corrections are given by [10,11]. Combined QCD+EW corrections at NLO are also available [12][13][14]. Despite of these progresses, the theoretical precision is still not enough to match the potential of future LHC and HL-LHC runs [15].
There is one special class of higher order corrections coming from the partonic subprocess gg → ZH mediated by a closed top quark loop. Such contributions enter at order α 2 s in perturbation theory. However, this formal suppression by the coupling constant is compensated partly by the high luminosity of gluons at colliders such as the LHC and the HL-LHC, and partly by the large Yukawa coupling of the top quark. Moreover, the differential cross sections for this subprocess receive additional enhancement by the 2m t threshold effects in the boosted regime, e.g., when the Higgs transverse momentum p T ≥ 150 GeV [16,17]. For the total cross section at the 13 TeV LHC, the one-loop gg → ZH contribution is roughly 50% in size compared to the NLO corrections for the qq → ZH channel, and is much larger than the NNLO corrections for the qq channel. Therefore, it is reasonable to expect that the formally α 3 s contributions in the gg → ZH category are not small compared to the formally α 2 s terms in the qq channel. In fact, the estimate in the heavy top limit [18,19] suggests that, the gg → ZH contribution at order α 3 s is similar in size to that at order α 2 s . In the case of Higgs boson pair production, it was realized that the heavy top limit is not quite reliable, and the finite top mass effects are significant [19]. As a result, the unknown size of the exact gg → ZH contribution at order α 3 s has become a major uncertainty of the theoretical predictions for the pp → ZH cross sections [15].
Last but not least, the loop-induced gg → ZH subprocess is interesting on its own right. It is sensitive to possible new heavy particles running in the loop. From the low energy point of view, it has a different dependence on the HZZ anomalous coupling than the tree-mediated qq channel. It is also sensitive to other anomalous couplings such as Htt and Ztt. This channel therefore provides unique information on the possible new physics beyond the SM.
The above considerations make it highly interesting to calculate the gg → ZH subprocess to the next order in perturbation theory. With a slight abuse of notation (as adopted in the literature), we will refer to the one-loop amplitudesquared contributions as the "LO" cross sections for this process. These have been computed in [20,21]. The "NLO" corrections to this process then consist of two parts: the interference between the two-loop and one-loop amplitudes (the virtual contributions), and the squared-amplitude with one loop and one extra parton radiation (the real contributions). Both contributions are infrared (IR) divergent, while their sum leads to a finite prediction. The bottleneck in such a calculation lies in the two-loop virtual amplitude, which involves multiple physical scales including three masses (m Z , m H and m t ) and two Mandelstam variables.
The presence of multiple physical scales makes the computation of the twoloop amplitude rather challenging. Very recently, a purely numeric study based on sector decomposition [22][23][24][25] has appeared [26]. Approximations in certain kinematic regions have also been worked out, including the high energy expansion [27] and the small transverse momentum expansion [28]. In [29], some of the authors of this work proposed a novel approach to calculate loop integrals with a heavy top quark loop and lighter external particles such as the Higgs boson and weak gauge bosons, which is valid in all phenomenologically relevant kinematic regions. In this work, we apply this method to calculate the NLO virtual corrections to the process gg → ZH. We show that our method gives accurate numeric results for the two-loop amplitudes in the entire physical phase-space. Combining with the real corrections, we for the first time provide the complete NLO predictions for the total and differential cross sections of the gg → ZH channel. We also give the total cross sections for the full pp → ZH process by adding the qq contributions.
Setup and analytic calculations
We consider the partonic process g α a (p 1 ) + g β b (p 2 ) → Z µ (p 3 ) + H(p 4 ), where a and b are color indices, while α, β and µ are Lorentz indices. The amplitude can be written as where the tensor structures A αβµ i are given in [21], whose coefficients F i are functions of the masses m t , m Z , m H and the Mandelstam variablesŝ = ( These variables satisfŷ s+t 1 +û 1 = 0. For later convenience, we also definet =t 1 +m 2 Z andû =û 1 +m 2 H . To calculate the squared-amplitude, we multiply Eq. (1) by its complex conjugate, and sum over the polarization states of the gluon and the Z boson. For simplicity, we define the coefficientsĈ i ≡ 7 j=1 M ij F j , where the matrix elements M ij are given by The coefficientsĈ i can be expanded in terms of the strong coupling constant are the NLO virtual corrections, which involve complicated two-loop Feynman integrals with massive external legs. In the following, we describe the calculation ofĈ (1) i using the method of small mass expansion. We generate the relevant two-loop virtual diagrams and the corresponding amplitudes using FeynArts [30]. The resulting expressions are manipulated with FeynCalc [31] and FORM [32]. The two-loop diagrams can be classified into several categories: diagrams consisting of two one-loop sub-diagrams, two-loop triangle diagrams (involving both top quark and bottom quark loops attached to an offshell Z boson), and two-loop box diagrams. While the first two categories can be easily calculated [19,27,[33][34][35], the two-loop box diagrams are rather challenging. We denote the corresponding contributions to theĈ (1) i coefficients asĈ (1) i,box , and calculate them using the small mass expansion. Namely, we expandĈ (1) i,box as a power series in m 2 Z and m 2 H : It should be noted that in contrast to the case of Higgs boson pair production [29,36], there is an extra factor of 1/m 2 Z due to the polarization sum of the Z boson, as is evident in Eq. (2). For the power counting we regard m 2 Z ∼ m 2 H , and denote the nth term (n ≥ 0) in the expansion as order m 2(n−1) . We will also use the notation O(m 2(n−1) ) to denote the partial sum of the series up to the nth term. In practice, we have performed the expansion up to n = 3, corresponding to O(m 4 ).
Acting on the amplitudes, the partial derivatives ∂ m 2 Z and ∂ m 2 H can be written as partial derivatives with respect to the external momenta: H → 0 after taking the derivatives, the coefficientsĈ (1) i,box can be written as linear combinations of scalar loop integrals with massless external legs. These integrals have already been computed in [29,[36][37][38].
The loop integrals contain both ultraviolet (UV) and infrared (IR) divergences which are regularized in dimensional regularization. The UV divergences are removed via renormalization. We adopt the on-shell scheme for m t and the five-flavor MS scheme for α s . In addition, to handle γ 5 in the amplitudes, we apply the Larin scheme [39] which requires a finite renormalization constant The IR divergences will be eventually cancelled by the real corrections and the renormalization of the parton distribution functions (PDFs). We apply the Catani-Seymour dipole subtraction method [40] to subtract the IR divergences from the virtual corrections. After that, we define the UV-and IR-finite coef-ficientsĈ i,ren are the renormalized coefficients and the IR subtraction operator I is defined as where β 0 = 11C A /3 − 4T F N l /3 with T F = 1/2 and N l = 5, andK g is the finite part and is given byK The contributions of the finite coefficientsĈ (1) i,fin to the squared-amplitude can be summarized into the so-called finite part of the NLO virtual corrections: In the small mass expansion, the computation of V fin is rather efficient, which only takes 2 seconds on average for one phase-space point on a workstation with an Intel Xeon W-2155 CPU. Most of the evaluation time is spent on the computation of scalar master integrals. To further accelerate the phase-space integration (which requires to repeatedly evaluate the amplitude), we have generated a very large grid for the master integrals with 63175 points on the twodimensional plane (−4m 2 t /ŝ, −4m 2 t /t 1 ). Note that this grid can be reused for different masses (including m t , m H and m Z ), as well as for different couplings (HZZ, Htt and Ztt). Using the grid bypasses the most time-consuming part of the numeric evaluation. As a result, computing the amplitude with the grid only takes 0.003 seconds on average per phase-space point.
We now need to add the IR-subtracted real corrections to obtain physical cross sections. We consider squared amplitudes of four partonic subprocesses gg → ZH + g, gq → ZH + q, gq → ZH +q and qq → ZH + g, and perform the dipole subtraction to remove the IR divergences in the phase-space integration. In the above four subprocesses, we only include the diagrams with a closed quark loop. The IR limit of these diagrams has the same topology as that of the two-loop virtual diagrams. This selection also allows us to compare our results with those in the heavy top limit [18]. We compute the amplitudes using the package Gosam [41,42], and integrate over the phase-space using the Vegas algorithm implemented in the Cuba library [43]. To avoid numeric instabilities, we require the transverse momentum of the extra radiation to be larger than δ √ŝ . We have varied δ between 0.05 and 0.0005 and find that the result is stable within the integration precision of 0.3%.
Numeric results
In this section, we present our numeric results for the total cross sections and the M ZH distributions, where M ZH = (p 3 + p 4 ) 2 is the invariant mass of the Z boson and the Higgs boson. For the input parameters, we take G F = 1.16637 × 10 −5 GeV −2 , m Z = 91.1876 GeV, m H = 125 GeV, m t = 172.5 GeV [44], and use NNPDF3.1 NNLO PDFs [45] with α s (m Z ) = 0.118. We neglect the mass of the bottom quark appearing in the loop. According to [46], we set the default values for the renormalization scale µ r and the factorization scale µ f to be µ def = M ZH , while the scale uncertainties are estimated by varying the two scales simultaneously up and down by a factor of three.
The validity of the small mass expansion has already been demonstrated in the case of Higgs boson pair production [36]. To be more careful, we have performed a similar analysis for ZH production. First of all, we have compared our results for the finite part of the virtual corrections with the results of Ref. [26] obtained using the sector decomposition program pySecDec. It should be noted that the conventions for the finite part are slightly different between Ref. [26] and our work, due to the different choices of the IR subtraction operator I( ) 1 . For comparison, we have performed the necessary transformation to arrive at their convention (dubbed V fin later). The results of V fin at six phase-space points are listed in Table 1. We find that our results at O(m 4 ) agree quite well with the pySecDec results. In particular, for the first four points where the pySecDec results are accurate enough, the relative errors of our results are much smaller than 0.1%. We emphasize that our results can still be systematically improved by incorporating higher order terms in the small mass expansion. The last two points correspond to the high energy region far above the 2m t threshold, where the pySecDec results show large numeric uncertainties. On the other hand, the small mass expansion is expected to converge fast in the high energy region, and our results should be reliable here. The high energy expansion of Ref. [27] also works in this region, and it will be interesting to compare our results with theirs as a crosscheck.
In Table 1, we also list the O(m 0 ) and O(m 2 ) results of our calculation, which shows the excellent convergence of the small mass expansion. To examine the convergence for a broader range of phase-space points, we show V fin as a function of √ŝ for several representative values of p T in Fig. 1. The blue and red marks represent the O(m 0 ) and O(m 2 ) results, respectively; while the black lines correspond to the O(m 4 ) ones. It can be seen that the O(m 2 ) and O(m 4 ) results almost overlap with each other completely, which demonstrates the reliability of the expansion in the entire phase-space. We expect that the terms at O(m 6 ) are irrelevant for phenomenological applications.
We now combine the finite part of the virtual corrections with the IRsubtracted real corrections, and present our predictions for the total and differential cross sections. We first consider the LHC with a center-of-mass energy of √ s = 13 TeV. We use the program package vh@nnlo [8,9] to calculate the contributions from the qq channel (including QCD and EW corrections). This program also gives the gg → ZH contributions up to the NLO in the heavy top limit, which we use as a reference to compare our results with. The various results for three values of µ r = µ f are listed in Table 2. As expected, the NLO corrections lead to significant enhancement (about 100%) to the gg → ZH cross section. Combining our results with the qq contributions, we arrive at the state-of-the-art fixed-order prediction for the pp → ZH total cross section at the 13 TeV LHC: In the last two columns of Table 2, we show for comparison the results in the heavy top limit given by vh@nnlo. We find that the situation is quite different from the Higgs boson pair production: the finite top mass effects are much milder, which reduces the NLO cross sections in the gg channel only by about 4%. This accidental fact makes it possible that by calculating the O(α 4 s ) contributions in the heavy top limit, one could reduce the residue theoretical uncertainty of the total cross section down to the percent-level. We now turn to the differential cross sections. It is well-known that the heavy top limit is not valid above the 2m t threshold. On the other hand, the small mass expansion provides reliable results for differential cross sections in the entire phase-space. As an example, we show in Fig. 2 the LO and NLO differential cross sections in the gg → ZH channel with respect to the invariant mass M ZH of the Z boson and the Higgs boson at the 13 TeV LHC. The upper plot employs a logarithmic scale for the vertical axis to access the distributions in the broad range 200 GeV M ZH 2500 GeV, while the lower plot shows the ratios to the central values of the LO differential cross sections. It is clear that the sizes of the corrections are kinematics-dependent, and it is not sufficient to use a uniform K-factor to rescale the LO differential cross sections. The NLO corrections are rather large across the whole range, especially around the peak and to the far tail. The significant corrections in the tail region have important implications for new physics searches, since new phenomena are usually most evident in the high energy regime. The boosted region is also relevant when using jet substructure techniques to measure the hadronic decays of the Higgs boson [47]. The corrections in the peak region, on the other hand, are the most important to the total cross section. To see the peak region more clearly, we show the M ZH distributions in a narrower range in Fig. 3, with a linear vertical axis. It can be seen that the total cross section receives its most contributions from regions around the 2m t threshold, where the NLO corrections are significant. The ratio plot also shows a small kink at the 2m t threshold, which comes from the Coulomb-type enhancement in that region entering at NLO. Finally, we envision a possible future high-energy upgrade of the LHC (HE-LHC) operating at a center-of-mass energy of 27 TeV. In Table 3 we list the results for the total cross section at 27 TeV. Again, the NLO corrections are significant, with the top quark mass effect slightly larger than that in the 13 TeV case. The differential cross sections can also be easily computed, which we leave for future investigations.
Conclusion
In this work, we present for the first time a calculation of the complete NLO corrections to the gg → ZH process. We use the method of small mass expansion to tackle the most challenging two-loop virtual amplitude, in which the top quark mass dependence is retained throughout the calculations. We compare our results of the two-loop amplitude with the purely numeric results from pySecDec. We find that at phase-space points where pySecDec is precise enough, the relative deviations of our results are much smaller than 0.1%. We have also demonstrated the excellent convergence of the small mass expansion in the entire phase-space, which makes us confident that the expansion up to O(m 4 ) is sufficient for phenomenological applications.
We employ the dipole subtraction method to combine the virtual corrections with the real radiation contributions, and find that the IR divergences all cancel out. This allows us to give numeric predictions for the total and differential cross sections at the NLO. We add the contributions from the qq channel to obtain the state-of-the-art fixed-order predictions for the total cross sections, which amount to σ pp→ZH = 882.9 +3.5% −2.5% fb at the 13 TeV LHC, and σ pp→ZH = 2.555 +4.0% −2.7% pb at the 27 TeV HE-LHC. We further present our results for a representative differential cross section: the invariant mass distribution of the Z boson and the Higgs boson. We demonstrate that our method can provide reliable predictions for the differential cross sections from the low energy region all the way up to the highly-boosted regime. Our results are necessary ingredients towards reducing the theoretical uncertainties of the pp → ZH cross sections down to the percent-level, and provide important theoretical inputs for future precision experimental programs at the LHC and the HL-LHC. The results for other phenomenologically interesting distributions at the 13 TeV LHC, and the distributions at the 14 TeV LHC/HL-LHC and the 27 TeV HE-LHC will be presented in a forthcoming article. | 5,080.6 | 2021-07-17T00:00:00.000 | [
"Physics"
] |
Biomedical Relevance of Novel Anticancer Peptides in the Sensitive Treatment of Cancer
The global increase in cancer mortality and economic losses necessitates the cautious quest for therapeutic agents with compensatory advantages over conventional therapies. Anticancer peptides (ACPs) are a subset of host defense peptides, also known as antimicrobial peptides, which have emerged as therapeutic and diagnostic candidates due to several compensatory advantages over the non-specificity of the current treatment regimens. This review aimed to highlight the ravaging incidence of cancer, the use of ACPs in cancer treatment with their mechanisms, ACP discovery and delivery methods, and the limitations for their use. This would create awareness for identifying more ACPs with better specificity, accuracy and sensitivity towards the disease. It would also promote their efficacious utilization in biotechnology, medical sciences and molecular biology to ease the severity of the disease and enable the patients living with these conditions to develop an accommodating lifestyle.
Introduction
Cancer is used synonymously to mean either malignant tumors or neoplasms and refers to a group of diseases that affect any part of the human body [1]. According to the World Health Organization (WHO), it is the leading cause of global mortality, accounting for about 20 million deaths in 2020, in which the most common causes of death include lung (1.80 million), colon and rectum (935,000), liver (830,000), stomach (769,000) and breast (685,000) [2]. It occurs through the rapid formation of abnormal cells that develop uncontrollably to invade the surrounding body parts and organs through the process of metastasis, the primary cause of death from cancer [3]. Cancer affects all age groups, but the incidence of cancer rises with age due to the build-up risk of specific cancers that increases with age, coupled with the fact that the capacity for cellular amelioration mechanisms becomes less effective with age [4].
The causes of cancer have been linked to a series of interactions between an individual's genetic factors and the three categories of external agents [5]. These external agents include biological carcinogens through infections from some parasites, viruses and bacteria [6]; physical carcinogens through the interaction with ultraviolet and ionizing radiation [7]; and chemical carcinogens through exposure to asbestos, tobacco smoke, water contaminants, such as arsenic, and food contaminants such as aflatoxins [8]. Approximately thirteen percent of cancers diagnosed in 2018, for instance, were caused by carcinogenic infections such as hepatitis B virus, hepatitis C virus, Epstein Barr virus, human papillomavirus (HPV) and Helicobacter pylori [9]. Particularly, the risk of liver and cervical cancer increases with some HPV and Hepatitis B and C viruses, while the incidence of HIV increases the risk of cervical cancer substantially [10].
Understanding the molecular mechanism of cancer formation from uncontrolled cell division to tissue invasiveness can shed light on the mutations of genes and proteins involved in cell cycle inactivation and suppression [11]. Epigenetic alterations have also been reported in cancer, where some protein-coding genes were altered due to methylation in colon cancer [12]. The epigenetic alterations implicated in DNA-repair genes that cause reduced expression of DNA-repair proteins have been involved in cancer progression's genetic instability at the early stage [13]. Another cause of mutation in cancer is DNAmismatch repair or homologous recombinational repair (HRR) in defective cells 9899. DNA repair inhibition has also been said to be involved in heavy metal-induced carcinogenicity. At the same time, the regulatory activities of miRNAs can target and reduce the expression of some protein-coding genes [14].
Appropriate and efficacious treatment is associated with early and correct cancer diagnosis because every cancer treatment is related to a specific treatment regimen. The current treatment regimens of cancer include immunotherapy, hormone therapy, stem cell transplant, biomarker testing (the use of genes, proteins and other substances referred to as tumor markers or biomarkers), radiotherapy, chemotherapy and surgery [15]. The goal of these treatment procedures is primarily to either remove/kill cancer from the body (primary treatment), to reduce the chance that cancer will recur by killing the remaining cancer cells after primary treatment (adjuvant treatment) or to relieve the side effects of treatment (palliative treatment). Despite these interventions, late side effects are associated with these treatment regimens, which depend on the type of cancer being treated and where in the body the cancer treatment is conducted [16]. Certain general side effects reported include lymphedema, fertility issues, nerve problems, sexual health issues, urinary issues, insomnia, anemia, loss of appetite, thrombocytopenia, constipation, delirium, diarrhea, edema, fatigue and others. Researchers and the international community have called for a more sensitive treatment regimen with little or no side effects to reduce cancer incidence and its growing menace. The use of anticancer peptides (ACPs) would eliminate the limitations of most cancer treatments, such as low solubility and restrictive and negative side effects [17].
Host defense peptides (HDPs), also known as host-defense antimicrobial peptides (AMPs), are key components of the innate immune system in all life forms, both vertebrates and invertebrates. In fact, some invertebrates, such as insects and crustaceans, do not have an adaptive immune system and use only the innate immune system for their protective mechanism. HDPs have emerged as amphipathic and short cationic biomolecules of diverse sequences, synthesized from the cells and tissues of complex life forms [18]. HDPs perform various functions, which include antimicrobial, anti-inflammatory, immunomodulatory, antioxidant, protease inhibitors, antiparasitic, and anticancer (the use of anticancer peptides (ACPs)) functions [19]. Many HDPs have been identified, which differ in structure and sequence and have been classified into αhelical peptides, βsheets with disulfide bridges, cyclic peptides, and peptides with extended flexible loop structures [20]. Several HDP databases exist as a catalog for over 2600 naturally occurring anticancer peptides (ACPs) [21]. However, synthetic ACPs for cancer treatment bring consistency and stability and allow for more consistent and accurate research results tailored to providing the desired effect by binding to a specific receptor without interfering with other receptor subtypes under proper usage conditions with fewer side effects and great benefits [22]. This review attempts to explore the biological importance of anticancer peptides (ACPs) in the treatment of cancer through their mechanism of action and technologies used for their identification and delivery, as well as their challenges.
Modes of Action of Host-Defense Peptides (HDPs)
HDPs have many modes of action, which seem to be conserved to some degree across different cell types, which include bacteria and cancer cells [23]. Cationic HDPs can interact with the membrane of cancer cells as they predominantly contain negatively charged molecules [24]. An example of such peptides is Leucine-leucine-37 (LL-37), which has membranolytic activities (Table 1). Such cationic HDPs have an electrostatic interaction with cancer cell membranes, which plays an important role in eliciting a cytotoxic effect on the cancer cells [25,26]. Arias et al., 2020 [22] undertook a study to improve the anticancer activity of synthetic HDPs. Arias et al., 2020 [22] could substitute an arginine for a lysine amino acid, which resulted in an enhanced electrostatic interaction and selectivity for Jurkat Leukemia cells compared to non-cancerous peripheral blood mononuclear cells.
It was observed that HDPs have the ability to interact with the cell membrane of cells, such as bacteria, and neutralize the charge allowing the HDPs to further penetrate through the membrane, thus increasing their cytotoxic effects [27]. It should be noted that cancer cells have an abnormal cell membrane composition when compared to normal cells [28]. The fluidity of cell membranes may be determined by the cholesterol concentration and distribution throughout the membrane. Cancer cells with lower cholesterol deposition will have an increased sensitivity to certain HDPs [24,29]. A study by Frislev et al., 2017 [30] observed the use of a liprotide HAMLET (human α-lactalbumin made lethal to tumor cells) in order to kill cancer cells ( Figure 1). [30] which represents the liprotide complex, HAMLET, and the target organelles in MCF7 (human breast adenocarcinoma cell line) cells. The HAMLET complex is comprised of α-lactalbumin and oleic acid. The α-lactalbumin function in the complex is to retain the oleic acid (OA) in solution and transport the OA to vesicles of MCF7 cells, increasing membrane fluidity (Frislev et al., 2017). Accessed at https://s100.copyright.com/CustomerAdmin/ PLF.jsp?ref=1054733c-0e16-42b8-9270-647b8f875da8 (accessed on 19 June 2021).
The study showed that the liprotides were able to increase the fluidity of the membrane. The same study also observed that by knocking down Annexin A6, a protein that is responsible for plasma membrane repair, they could enhance the liprotide's killing effects. A study by Mamusa et al., 2017 [31] observed that derivatives of the HAL-2 peptide were able to directly damage yeast cell membranes resulting in increased death of the cells. The same study also observed that, by replacing a methionine with a valine amino acid, the therapeutic effect was decreased to all cells tested. These studies show that the composition of the antimicrobial peptide is important and that making informed modifications may result in enhanced therapeutic effects in different cell types such as bacterial or cancer cells.
Antimicrobial peptides have also been reported to interact with cell receptors in order to either engage secondary effector proteins or initiate immunomodulatory processes such as the inflammatory signaling pathways [32,33]. An example of such anticancer peptide with immunomodulatory activities includes high mobility group box protein 1 (HMGB1) ( Table 1) [34]. It should be noted that the selective interaction of these HDPs is attributed to their physicochemical properties [32]. It has also been observed that identifying the exact functions of HDPs is difficult as their expression levels may dictate the role they play in processes such as inflammation (pro-inflammatory or anti-inflammatory) [32,35,36]. A study by Li et al., 2019 [37] showed that the antimicrobial peptide LL-37 interacted with the P2X 7 receptor, which allowed for the internalization of the HDP into the cell. Once the AMP LL-37 activated the P2X 7 receptor, it induced pore formation in the membrane [38]. It was observed that a scrambled form of LL-37 would not interact strongly with a neutral membrane and cause the pore-forming phenomenon, showing how important the structure of an antimicrobial peptide was to its ability to interact with receptors [37].
Research is ongoing to improve the therapeutic effects of ACPs while reducing their toxicity [38]. In order to achieve the aforementioned goal, studies modifying natural cationic ACPs have been performed. A study by Alexander et al., 2002 [39] performed a study to observe the effects of AMPs to affect macromolecular synthesis. The study showed that Pleurocidin derivatives at low levels were able to inhibit macromolecular synthesis within cells while causing less damage to the cell membrane. This study shows that by understanding the composition of AMPs and how they function, we may produce AMPs that are active and effective at lower concentrations, which may reduce their toxicity.
Novel Anticancer Peptides (ACPs) Used in Cancer Therapy
The negative charges exhibited more by cancer cells than by normal cells are contributed by several factors: overexpression of heparan sulfate proteoglycans, the abundance of zwitterionic phosphatidylethanolamine, PS overexpression, glycolipids glycosylation deregulation, and membrane glycoproteins with O-glycosylation repeat [55]. The loss of asymmetry of the phospholipid distribution within the extracellular and intracellular plasma membrane layers in tumor cells exposes the tumor cells' PS exterior [56]. Internalizing anticancer HDPs at the hydrophobic core also compromises the stiffness and fluidity of the cancer cell membrane and promotes their lytic effects [51].
The novelty of the use of ACPs in the sensitive treatment of cancer has been wellestablished, utilizing their structure and functional relationships. ACPs, such as LL-37-an alpha-helical peptide that belongs to the cathelicidin family-are under clinical trials in a phase I-II stage against melanoma using intratumor injection [40]. A nonamer peptide, LTX-315-derived from structure-activity relationship studies of HDP bovine lactoferricin-has also been tested as an efficacious drug either as a single or in combination with immune checkpoint inhibitors (such as anti-CTLA4/anti-PD-1) for its effect of causing changes in a tumor microenvironment such as an upsurge of T effector cells, necrosis of tumors and a decrease in immunosuppressive cells in a human clinical trial (phase 1) [58]. The phase II stage clinical trial of LTX-315 is ongoing to validate the intratumor administration for patients living with advanced metastatic sarcoma using tumor-infiltrating lymphocytes (TILs). Dusquetide (SGX942), a novel innate defense regulator of both pathogen-associated molec-ular patterns (PAMPs) and damage-associated molecular patterns (DAMPs) by binding p62, is being explored in phase III clinical trials against neck and head cancer [59]. D-K6L9, in combination with IL-12, was found to reduce neovascularization in breast and prostate cancer cell lines, while (KLAKLAK)2 uses apoptotic-induced-mitochondrial-membrane damage to treat Hela cell lines [60,61]. Some novel ACPs are summarized in Table 1 against different cancer forms as well as the mechanisms involved.
Mechanism of ACPs for Cancer Treatment
ACPs can exhibit anticancer activity through various mechanisms; mainly through membrane disruption or pore formation. These membrane-active mechanisms are an essential feature of ACPs, as the chance of resistance developing against this type of treatment is low [60]. ACPs can also express anticancer activity through non-membranolytic mechanisms by focusing on intracellular targets, mediating innate host immunity or actively blocking pathways that lead to tumor formation.
The carpet model can describe one of the mechanisms of ACPs, which directly involves cell membrane disruption. This carpet model starts with HDP, which is positively charged and interacts with a negatively charged phospholipid located on the outer layer of the cell membrane. This interaction causes the peptides to align parallel to the cell membrane without protruding into the phospholipid bilayer and, in turn, covering the cell, thus it is termed 'carpet.' Once enough peptides protect the cell membrane, a threshold concentration will be reached. Then, the peptides start to rotate on themselves and insert into the membrane, causing permeabilization of the cell membrane. The continuation of this process will ultimately lead to the formation of micelles (Figure 2) [60]. A study by Arias et al. [22] demonstrated the membrane disruption mechanism of ACP Tritrp-Arg by incorporating fluorescent dye propidium iodide (PI) into the treatment of Jurkat cells. This study showed an increase in PI fluorescent intensity, indicating that the cell membrane was permeated, and the fluorescent dye could bind to the DNA within the cell. Specific examples of these ACPs are indicated in Table 1. In the barrel and stave model (Figure 3), transmembrane channels are formed from the collection of monomer peptides on the cell membrane; the monomer peptides undergo a structural change and aggregate together to form the 'stave' within the membrane bilayer. The insertion process is the result of the aggregation, which forces the peptides into the lipid core region; the insertion of the peptide creates a hydrophilic channel that blocks off the hydrophobic part of the bilayer. Once the channel has formed, more monomers will be accumulated to further increase the channel's size. In addition, the cell membrane is also weakened due to the hydrophobic forces exerted by the peptide. At the moment, the only known ACP that employs this mechanism is alamethicin [43]. The toroidal pore (Figure 4) can be described as temporary holes within the cell membrane created by host defense peptides that are long enough to span across the bilayer before the disintegration of the membrane. The host defense peptides will align parallel to the cell membrane. Once the peptides reach a certain concentration, the peptides will start to insert into the membrane causing the lipid layer to bend inwards, resulting in a toroidallike pore structure, which is lined by the polar head of the lipid layer and the inserted peptide [60,62]. The toroidal pore allows for host defense peptides to gain access to the intracellular space of the cell, where peptides can disrupt pathways that are responsible for DNA replication, protein synthesis, or permeating the mitochondrial membrane. Examples of host defense peptides that use this mechanism include cecropin A, protegrin-1 and magainin-2 [62]. Host defense peptides capable of translocation across cellular membranes can trigger apoptosis via disruption of the mitochondria. The host defense peptides, which can enter the intracellular space of the cell through the pores, will permeate the membrane of the mitochondria, which will result in the expulsion of protein cytochrome c. The release of cytochrome c will cause a cascade of effects whereby Apaf-1 oligomerization will be activated followed by the activation of caspase 9, which will lead to the conversion of pro-caspase 3 to caspase 3. The caspase 3, in turn, will induce apoptosis [63]. Host defense peptides, such as LL-37 and CATH-2, which has previously been shown to be able to cross the cell membrane and induce apoptosis [64].
Anticancer inhibition can also occur without directly acting on the cancerous cell membrane. Certain host defense peptides can actively compete for binding sites, blocking pathways that lead to cancer formation ( Figure 5). We can see this demonstrated using synthetic peptide BLBD, which can bind to β-catenin and LEF-1 sites, which results in the decreased formation of breast cancer cells [65]. In addition, certain pathways require protein to undergo conformational changes, such as dimerization or isomerization, before binding to active sites; host defense peptides can be used to bind and merge with these proteins, ultimately disabling them [66,67]. Peptides can also be used to target tumor cells to make for more efficient drug delivery and to enhance the effects of chemotherapy drugs. This peptide has already been used in Phase I/II clinical trials for advanced solid tumors. For example, the peptide BT1718 will target overexpressed MMP14 sites found in tumor growth and improve drug delivery and efficiency [68]. Another way that the host defense peptides can be used for the treatment of cancer is by mediating the body's own immune system. Peptides such as alloferon-1 and alloferon-2 found in insect venom can induce natural killer cells in mammals. Cancer treatment with only alloferon-1 has shown cancer suppression activity close to the results from a low dose of chemotherapy [67].
In cancer treatment, peptides can be used in a variety of ways. This includes the use of peptides as medications (for example, as angiogenesis inhibitors), tumor-targeting agents that transport cytotoxic pharmaceuticals and radionuclides (targeted chemotherapy and radiation therapy), hormones and vaccinations [69]. Proapoptotic peptides, for instance DP1, could be used to treat a variety of solid human tumors such as head and neck cancers, melanomas, and papillomas. This primary application of ACPs could be an adjuvant to pre-existing treatment methods such as chemotherapy or radiotherapy [70].
Transient pore formation is another mechanism used by HDPs to establish their effects. Melittin, for instance, can bind on the vesicle translocated and redistributed on both sides of the membrane to induce stable and transient membrane permeabilization above a critical peptide-to-lipid ratio at the nanomolar range that allows for the transmembrane conduction of atomic ions without glucose or larger molecule leakage [71]. HDPs also use electroporation to establish functional domains of intracellular peptides or to gain insight into the peptide inhibition of signal transduction in adherent cells, such as chondrocytes, through transiently forming pores [43]. The use of membrane depolarization is another mechanism utilized by the HDPs to establish their therapeutic effects through the release of cellular contents, leading to death [72]. For instance, abeta-induced membrane depolarization in PC12 cells has been seen to be sensitive to metabotropic glutamate receptor mGluR (1) antagonists and to pertussis and cholera toxins with the involvement of G-protein.
Discovery Techniques for the Identification of Sensitive HDPs
Several technologies have been explored to identify HDPs of novel importance. Bakare et al., 2020 [20] used the HMMER (a name given by the software developers Sean Eddy and Travis Wheeler) and other in silico technologies to develop putative ACPs that could target the ligand-binding sites of cadherin-1 to monitor the peptides' prognostic efficacy in patients living with cancer. HMMER is used to identify homologous protein or nucleotide sequences and sequence alignment to discover more sensitive peptides [73]. Grafskaia et al., 2018 [74] used transcriptomic technologies in combination with in silico analysis to discover ten novel synthetic antimicrobial peptides from the sea anemone Cnipodus japonicas, three of which were verified to be potent against bacterial strains. Transcriptomic technologies are used to study the sum of an organism's RNA transcripts, giving rise to the genome's ability to synthesize biomolecules within the cells and control its gene expression regulation [75].
More potent, cost-effective, broad-spectrum HDPs have been developed by Liu et al., 2020 [76] using advanced computer-assisted design strategies to address challenging problems of translating a primary sequence to peptide structures to solve myriad multi-drug resistance problems. Fields et al., 2020 [77] used a machine learning approach and a simple biophysical trait to develop 20-amino acid bacteriocin peptides that can traverse the membrane of pathogens causing cytotoxic, antimicrobial and hemolytic activities. Antimicrobial peptides with potent and broad-spectrum activities have also been recently designed using molecular engineering technologies to elucidate the peptide motifs and translation opportunities to explore rational design for industrial collaboration [78]. Molecular engineering technologies are being explored in this regard to design and test molecular properties, behavior and interactions in order to assemble better peptides, systems and processes for specified functions [79].
Apart from this, Aruleba et al., 2018 [80] studied the ligand-binding sites and molecular docking interaction of Slc2a4 as a target for the treatment of cancer with putative HDPs using HMMER for the discovery of the peptides. Wang, 2017 [81] described the discovery, design and treatment strategies of HDPs using the conserved genes from the genomic and proteomic approaches outside the HDP genes with experimental validation, which guarantee a complete mapping utilizing the procedures of sequence shuffling, library screening, hybridization and de novo design. Tucker et al., 2018 [82] used bacterial self-screening of surface-displayed peptide libraries to discover diverse physicochemical parameters of next-generation antimicrobials to unravel the current limitations of peptide applications. Molecular dynamic (MD) approaches have been adopted to study the relationship between the biological function and mechanism of HDPs to optimize these antibiotic candidates. In silico technologies, such as HMMER, with molecular validation techniques have also been used to explore the use of novel HDPs as diagnostic candidates against three bacterial pneumonias and HIV with great promise for industrial collaborations in a lateral flow device [83,84].
Molecular Validation Techniques Used for HDPs
Several anticancer peptides (ACPs) have been discovered to manage cancer and other diseases, which are being subjected to molecular validation to ascertain their activities and application in terms of sensitivity and specificity. Tincho et al., 2016 [85] used molecular methods, such as cell viability, cytotoxicity and other anti-HIV assays, to show the anti-HIV activities of HDPs against HIV gp120 and NL4-3 receptors. The use of cell viability for peptides' molecular validation has limitations because cell viability is very diverse due to the redox potential of the cell population, cell membrane integrity, or the activity of cellular enzymes that form a snapshot of cytotoxicity or drug efficacy [86]. Even specially designed cell viability indicators, such as fluorescence microscopes, microplate readers or flow cytometers, have positive and negative attributes with their sensitivity, reliability and compatibility being determined by relevant cell lines [87]. Williams et al., 2017 [84] employed site-directed mutagenesis to generate variant HDPs from previously identified ones and used molecular validation techniques, such as a lateral flow device (LFD) binding assay and nanotechnologies recombinant technologies, to explore their activities against HIV p24. The advantages of the use of LFD include easy pick-up from testing sites and pharmacies, with the fast generation of results within 30 min, thus making it popular [88]. However, researchers compared 5869 people with both an LFD and a PCR test in mass testing in Liverpool. Seventy of these people were positive from the PCR tests and, of these 70, only 28 were positive on LFD tests, showing a sensitivity of about 40% [89]. Cardoso et al., 2021 [78] enumerated the molecular targets and mechanisms involved in several HDPs' activities and pharmacokinetics. An example of a molecular target that binds one or more signaling peptides or signaling proteins is tropomyosin-receptor kinase B (TrkB), which is bound and activated by the neurotrophic protein brain-derived neurotrophic factor (BDNF) [90]. Prada-Prada et al., 2020 [91] used circular dichroism, cytotoxicity, minimum inhibitory concentration (MIC), growth and timekill kinetics to explore Ib-M's mechanisms and structural activities of peptides against E coli. Scanning electron microscopy (SEM), transmission electron microscopy [73], NaCl permeability, agarose diffusion, and inhibitory concentration assays were utilized to evaluate the hemolytic activities, toxicity, stability and mechanism of action of chicken hemoglobin HDPs [92]. These technologies, as listed, are necessary to assess and ensure the specific activities of the optimized peptides for clinical trials with negligible toxicity. Regmi et al., 2017 [93] highlighted the combinatorial drug therapy of HDP from Bacillus amyloliquefaciens with beta-lactams evaluating its stability, MIC, susceptibility testing, synergy testing and antibiofilm property against pathogens. This combined application of two or more HDPs or the use of antibiotics with HDPs is very promising to prevent the development of antimicrobial resistance and provide the susceptible host with an optimized therapy [94].
Challenges for the Use of Anticancer Peptides (ACPs) in Cancer Treatment
Many advancements were made in cancer therapy because of the complications experienced with cancerous cells' resistance to cancer treatments and the low specificity of the currently employed drugs in chemotherapy [95]. The anti-neoplastic drugs, presently used in cancer treatments, consist of various damaging side effects, mainly because they target rapidly dividing cells rather than only cancerous cells [96]. Thus, anticancer peptides (ACPs) provide an alternative anticancer drug [95]. ACPs are beneficial because they differentiate between neoplastic and non-neoplastic cells, which interact specifically with the negatively charged membrane components (these differ between cancer and non-cancer cells [95]. ACPs have a similar ability, in which these peptides have specificity towards malignant cells [96]. Despite the several effective in vivo studies published on ACPs, none of these approaches have made it onto the market [95]. The challenges for using ACPs in cancer treatments are the poor bioavailability of peptides, the toxicity of peptides, immune response to treatments, and the cost-inefficiency of these approaches.
In the cancer treatment serum, proteolytic degradation is a major threat to the potency of the antitumor peptides [96,97], as this promotes unspecific binding to the serum components, reducing the half-life of these molecules in the serum, leading to proteolytic degradation, thus decreasing the bioavailability and affecting the stability of the serum. To overcome these challenges, research needs to be conducted to improve the pharmacodynamic properties of the serum. Baker et al., 1993 [97] successfully exhibited this phenomenon of bioavailability of the peptides by introducing D-amino acids or substituting naturally occurring L-amino acids by diastereomers, such that the peptides' cytotoxicity was decreased against normal, non-cancerous cells and that these diastereomers peptides maintained their anticancer activities. Ultimately, these researchers recorded a reduction in serum inactivation and enzymatic degradation in in vivo studies [98]. Other approaches were also taken by introducing vector-mediated delivery of the genes that encode the active anticancer peptides [80], using various delivery systems (liposomes, polymer nanoparticles or quantum dots [96]. Some of the naturally occurring ACPs and some synthetic ACPs are more stable in the serum, especially synthetic ACPs containing D-amino acids, which confer stability against proteolytic degradation [96]. All anticancer peptides (ACPs) represent a degree of anticancer activity. However, not all these peptides are suitably selective against cancer cells [96]. In cancer treatments, the hydrophobicity of the peptides is essential for membrane penetration, and a fundamental advocate of the membrane interactions of peptides is amphiphilicity. ACPs promote tumor tissue penetration and therefore kill target cells rapidly by perturbing the integrity of the plasma membrane. The problem arises when peptides showing cancer cell specificity only fall within a narrow range of 0.53-0.78 and have non-tilted helical peptide structures, when these specific peptides, with further analysis, indicate toxic effects to both cancerous and non-cancerous cells and when tilted helical peptides represent a nonspecific means of cell membrane lysis. Therefore, research needs to be conducted that focuses on the structure of peptides, and the activity relationship needs to be further understood to design promising novel antitumor agents [95].
The introduction of foreign ACPs into a host can elicit treatment neutralizing antibodies and/or cause potentially harmful allergic responses in cancer patients [96]. Thus, a few approaches have been considered to overcome this problem. To avoid causing deleterious anti-ACP immune responses, the introduction of host-defense peptides (HDPs) could be a possible alleviation strategy, or the co-administration of foreign ACPs with immunosuppressive drugs. Alternatively, the encapsulation of ACPs in liposomes that are engineered to deliver their cargo straight to the tumor sites could be promising, as this minimizes the opportunity for the host to acquire anti-ACP immunity [96].
In most new treatment approaches, the cost efficiency of the process is always a major setback. In the case of ACPs and AMPs, the cost of isolating naturally occurring ACPs and even the production of synthetic ACPs tends to exceed the costs of the normal production of conventional chemotherapeutic agents that are currently employed in cancer treatments [95,98]. This is not a problem with lipopeptides and other small peptides since chemoenzymatic methods and solution synthesis are employed to generate them; however, when it comes to large peptides, the production costs increase [99].
Conclusions
The challenges of cancer treatment procedures and associated side effects have necessitated the quest for new therapeutic interventions to combat its menace. ACPs have demonstrated potential therapeutic efficacy against different forms of cancer because of their specificity and the inability of tumors to develop resistance towards them. To this end, this review provides a comprehensive account of novel anticancer ACPs ranging from modes of action, relevant ACPs used in cancer treatment, mechanisms of action, molecular validation, technologies employed in their discovery, and limitations for their use as anticancer agents. It is essential to state that the different structures in terms of amino acid composition and residues gave rise to variation in the mechanisms of action of various ACPs and their targets for cancer therapy. Utilizing the strategies from this review would enhance the development of more sensitive and specific HDPs to solve cancer incidence completely. Acknowledgments: The authors would like to thank the various institutions, namely, the University of the Western Cape and the University of Free State, for infrastructure and administrative support. | 7,002.8 | 2021-07-29T00:00:00.000 | [
"Biology"
] |
An Efficient Technique-Based Distributed Energy Management for Hybrid MG System: A Hybrid SBLA-CGO Technique
This manuscript presents an optimal energy management on microgrid (MG) connected to the grid that chooses the energy scheduling based on the proposed method. The present method is the joint implementation of the Side-Blotched Lizard Algorithm (SBLA) and the Chaos Game Optimization Algorithm (CGO) and hence it is named as SBLA-CGO method. Here, the MG system contains a photovoltaic system (PV), wind turbine (WT), battery storage (BS) and fuel cell (FC). Constantly, the necessary load demand of MG system connected to the grid is measured with SBLA method. The CGO increases the perfect match of MG with the expected load requirement. Moreover, renewable energy forecasting errors are evaluated twice by MG energy management for minimizing the control. Through the operation of MG schedule of several RES to decrease the electricity cost using the first method. Balancing the energy flow and minimize the effects of prediction errors according to the rule presented as planned power reference is second method. The main aim of the present method is evaluated with connection of fuel cost, the variation of energy per hour of the electrical grid, the cost of operation and maintenance of MG system connected to the network. According to RES, the energy demand and SOC of the storage elements are the conditions. Renewable energy system units use batteries as energy sources to allow them to operate continuously on stable and sustainable power generation. The analysis of the present method is analyzed by comparing with the other systems. The results of the comparison assess the strength of the present system and confirm their potential for solving the issues.
(DR) was assessed and extra codes were also evaluated. To calculate the generation, storage and receptive load offers, the ALO system was introduced to rectify the economic dispatch problems.
A distributed energy management system of MG community was presented by G. . In all repetitions, the central controller MG aligns the DER and ESS programming at MG level. Optimization occurs while the disturbed power of entire buses approaches zero. The dynamic thermal model house was interconnected with the HEMS to control heating, ventilation and air conditioning system through clients. Y. Liu et al. have established a safe distributed phrase energy management (S-DTEM) strategy of multiple interrelated MGs .
Every MG was maintained through a distributed MG ESS that only converts the commercial quantity and price information with other MGs for preserving the privacy of information. While every MG involves a buyer, S-DTEM will change the energy selling price and working period for reducing their local cost by trading energy with MGs / main stage. This method could reduce the cost of integrated MGs. With quadratic barrier functions, the finite time integration of S-DTEM could analyze certain difficult operating conditions. An intentional MG-EMS can penalize the prohibition of maximum function; the misbehavior detection mechanism was also developed by the finite-time convergence property. X. Yang et al. have described a bilayer game theoretic method for IEM of multi-MG (MMG). The maximum layer of the method has maintained the energy trade and consumption behavior of every MG; the cost model was intended based on economic factors and the will of the users. The minimized segment executes at high frequency and regulates the MG operations for reducing the larger and actual segments. Also, the supply and demand variables and the output events could be managed correctly and the energy trading behind the MGs may gain directly devoid of disturbance. J.
Zeng et al. have developed a completely distributed operating method of MEMS with maximum penetration and demand for renewable energy (Zeng et al. 2018). The iterative best methodresponse algorithm was developed for choosing the Nash equilibrium game. Finally, to validate the effective and validity MG method was evaluated.
Background of Research Work
A review of current investigation operation illustrates that, energy management of hybrid RES devices for MG. At energy management system, the power generator persistence is a great challenge. The energy source management is established using energy supervision system and committed to solve the energy sources of RES and cost factors included in the problem.
Additionally, there are several methods that are used: fuzzy, neuro-fuzzy and optimization methods on MG energy management system. Through fuzzy logic controller provides best outcomes hence it does not categorize fuzzy system theory unique nature. On contrary, it has been evaluated that PSO has a better optimal search capacity. Moreover, the PSO algorithm, there are random variations in velocity equation, thus varying the better value from unstably. Also, a RES control method is primarily evaluated for tracking the power requirement and control the DC bus voltage. Finally, the integrated MG system is evaluated to defeated this challenge and offer a hopeful solution. Several work-based methods are established at bibliography to remedy this issue; these demerits and troubles have encouraged this investigation work.
Energy Management Configurations with MG Connected System
This section introduces the energy management system with the MG system connected. EMS is used to assess and verify the MG power management is evaluated on grid-connected mode for the purpose of performance. The MG may be organized through corresponding classifications subsequent to the control tasks have been performed. Below the grid mode during MG task, load demand should be met at entire time. To gain the main goal, the MGs goal may be set and imagined using optimization method. The main objective is to establish with the production of wind / solar energy, Fc and battery. Resources are energy storage devices and unassignable / assignable resources. The control factors considered the dynamic power and operating position of ESS and transmitting units to determine energy management tasks. Moreover, a power equilibrium condition, the power of the controllable units and optimal start-up time of the transmitting units are evaluated. At upcoming time steps the necessary increase/reductions for future time steps is stable. The power of transmissible / non-transmissible resource units and ESS is coordinated premature. By choosing the most optimal control group of controllable units, assuming the market prices, the prediction power of non-transmissible units load level, the proposed method is improving the revenues in a provided time horizon. Using the proposed method, the cost of fuel for dispatchable resources is described as energy management problems.
Problem Formulation
The mathematical modelling of renewable sources like WT, PV, FC and Battery are articulated as follows,
Modelling of Photovoltaic (PV)
In RES one of the sources is PV, it can able to occupy the sunlight and directly transform the sunlight into electricity. Based on the solar radiation, output power produced by PV is expressed as follows,
Modelling of Wind Turbine (WT)
The WT output power is evaluated through power curve shown at below equation.
From the above equation (2), the optimum unit of wind power expressed as WG p , then the wind speed, minimum wind speed, cut out and cut in wind speed is expressed as
Modelling of Battery Energy Storage System (BESS)
Depending on weather conditions, the energy production of RES is evaluated. To defend the frequency and system voltage for saving the excess power the BESS is employed and the MG system load changes or power production of Res is less means the BESS will provide the power to the load. The variation of PV and WT was smoothening by the BESS. Compared to the MG system, the load is low while the power is produced by RES then for succeeding usages the extra power will save in BESS. Related to the earlier SOC, generated power of RES and specified overall loads of the system, the energy saved in the BESS at time t which is expressed from below, e Ch Ac Dc here the efficiency of battery charging is expressed as , based on RES, the overall energy produced is represented as ) (T e RES , in the MG system, and the overall provided energy is expressed as Load e .
Constraints
The overall power produced from the DER, the energy saved at BESS and energy exchanged through the grid will fulfill the overall demand for MG. At time t output constraints of DER convey the generated power of each DER which will be in upper and lower bounds of production of power for all kinds of DER. The constraints of DER is described as follows, The increased and decreased suitable saving ability of BESS in energy storage is expressed as follows, From the above equation, the DOD is expressed as the Depth of battery discharge. The overall produced power by DER, energy storage in BESS and the interchanged energy at all time by grid will be fulfill the overall demand of MG.
Formulations of Objective Function
The main purpose of the work is to reduce the annualized cost of system. The overall cost of the system comprises: replacement cost, overall capital cost, operating cost, power purchasing cost and maintenance cost. In the capital cost the updating cost is also occurred. The cost minimization of the objective function is shown below, here the annualized cost of system is expressed as c, the annual cost of the PV, WT and Fc is represented as PV c , WT c , FC c . The overall cost of electricity is expressed as GS c , GP c . The annualized cost of each element is the capital, operational and replacement cost.
Annualized Cost
The cost of capital of the element comprises the cost of installation and purchase. The annualized cost of all elements like PV, WT and FC which s expressed as follows, From the above equation, the initial capital cost of WT is expressed as
Annual Replacement Cost
Replacing the WT final life is denoted as annual replacement cost. The overall replacement cost is expressed as follows,
Proposed Approach of SBLA-CGO Based on Energy Management System
Based on proposed method is presented in this paper. The present method is the joint implementation of both the Side-Blotched Lizard Algorithm and Chaos Game Optimization and hence it is named as SBLA-CGO method. Here, the MG system contains a Photo-Voltaic (PV) system, Wind Turbine (WT), Battery Storage (BS) and Fuel cell (FC). The required load requirement of grid associated MG system is measured using SBLA method. The exact combination of MG is increased with the load demand condition forecasted using CGO. The energy management system with HIRES is a figure 3. A step-by-step process of the proposed method
Load demands using SBLA
Step 1: Initialization The input parameters like PV, WT, FC and battery
Step 2: Random Generation
After the process of initialization arbitrarily create the distributed solutions along lower and upper bounds.
Step 3: Fitness Function
Utilize the below functionality to establish cost reduction.
Step 4: Subpopulation Produce the frequencies of subpopulation and for deflect the depletion of morph a less number of population is maintained.
Step 5: Population Changes
Based on the fitness function if the initial population is produced and then the subpopulation colors are dispatched. In the population changes it has many subpopulations which are described as follows,
Delete Function
If the population of morph interchanged as negative then the algorithm cells delete the function of lizard.
Transform Function
When the changes in the population of the color with the described population has a positive change and the one affected by it has a negative one the methods conducts the transform lizard function.
Add Function
In one population if there is a positive change in the population index or one is affected and it did not have negative change means the algorithm will add a population function.
Step 6: Termination After completing the above process the population will give best position and to enhance more values the iterative process will repeat.
Chaos Game Optimization for MG System
Step 1: The preliminary location of applicant's solution or the previous eligible points on search space is specified interms of random process of selection approach Step 2: Previous points are measured to the fitness values of the previous applicant's solution based on self-similarity.
Step 3: Global best process: In this process the maximum stage of qualification is evaluated.
Fig 4: Flowchart of Proposed SBLA-CGO
Step 4: The mean group is evaluated at the search point for the qualification point based on the random process.
Step 5: In the search space for the qualification point with three vertices a transitory triangle is evaluated.
Step 6: The fitness values of the new candidate solution are measured depends on selfcompatibility problems.
Step 7: The early qualifying points with poor fitness values equivalent with poor self-similarity levels are replaced through novel seeds.
Step 8: The termination criteria is evaluated.
Result and Discussion
The simulation consequences of proposed and other method are evaluated on this section. To reduce the overall cost creation and enlarge PV and WT usage. Figure 20 illustrates that comparison of CO2 emissions with the proposed and existing system. Figure 21 shows the comparison of COE through proposed and existing method. Figure 22 shows a comparison of proposed and existing system. Figure 23 shows the amount of fuel consumption compared to the proposed and existing method. Table 1 illustrates that statistic investigation of proposed and existing method based on mean, median and standard deviation (SD).
Conclusion
In this paper presents the optimal energy management of PV-WT, FC and ESS hybrid energy systems connected to grid using the SBLA-CGO method. The paper evaluates the modelling of | 3,131.8 | 2021-04-28T00:00:00.000 | [
"Engineering"
] |
Polymer-Assisted Metal Deposited Wood-Based Composites with Antibacterial and Conductive Properties
: Compressible metallic porous materials (CMPMs) have great potential for development in the energy and environmental fields. However, the scale-up preparation of CMPMs with stable metal layers, excellent elasticity, and multifunctionality remains exceedingly challenging. In this study, we designed a novel strategy with the aid of polymer-assisted metal deposition to synthesize metallic porous wood (Ni-PW) with a hierarchical cellular structure and excellent elasticity. Our approach can produce highly compressible MPW using intrinsically porous delignified wood with only 15.16% strain loss under a large compressive strain of 40% after 1000 loading-unloading cycles and 129.4 µ m of the average porous size of the Ni-PW measured by mercury injection method. The result-ing Ni-PW displays excellent antibacterial properties for Escherichia coli ( E. coli ) and Staphylococcus aureus ( S. aureus ) and electric conductivity (Resistance < 7 ty), which renders great potential in energy and environmental applications. This research provides a new insight into the fabrication of CMPMs in a cost-effective (~56.5 ¥ m − 2 ) and scalable way.
Introduction
Compressible metallic porous materials (CMPMs) with large surface areas, high porosity, and superior conductivity have received widespread attention from researchers [1,2], especially in the fields of energy conversion and storage and antibacterial treatments [3][4][5][6][7][8][9]. Metallic materials with porous structures allow for rapid access by electrons, molecules, and solutions, which substantially increases material utilization and resultant multifunctionality. Metal nanomaterials, such as nickel, silver, and gold nanoparticles, have been widely demonstrated as effective uses for antibacterial coating and antibacterial treatment, but their high cost is the major obstacle for large-scale applications [10][11][12][13]. Therefore, a tremendous amount of research has been done to seek alternative cost-effective approaches to synthesize multifunctional CMPMs in order to accelerate their practical applications. Until recently, one of the most promising strategies has been to integrate metal particles into highly porous matrices, such as porous silica, graphene nanoshell, metalorganic framework, polymer, and hydrogel [14][15][16][17][18]; however, most synthesis methods of the metallic porous materials reported require tough and tedious steps, which limits their feasibility of large-scale production.
Directly deposited metal coating on a target host surface via physical vapor deposition (PVD), electroplating, electroless deposition (ELD), etc., is regarded as a facile way to acquire homogeneous metallic porous materials [19][20][21][22]. PVD is the most widely used surface metallization technique for forming a conductive and conformal thin metal layer and reliable barrier layers that protect against metal particle diffusion prior to metal depositing, but expensive equipment, harsh environments, and sophisticated steps are required. The "shadow effect" of this method also deeply affects the metal deposition of the porous matrices. Electroplating is a popular method to synthesize metallic porous materials in laboratories and factories due to the low cost and ease of operation. Nevertheless, it is only applicable to conductive substrates. Conversely, ELD is another economic and facile approach to creating metal coating, but can be carried out on both conductive and non-conductive substrates. ELD, essentially, is an autocatalytic redox process in which the metal particles are chemically reduced and deposited on any catalyst preloaded substrates, including flexible polymeric ones such as polyurethane (PU), poly(ethylene terephthalate) (PET), yarn, paper, carbon fibers, etc [21][22][23][24][25]; however, most of these substrates are not porous and exhibit single performance under incompressible resilience. It is a big challenge to construct metallic porous materials endowed with superior antibacterial and conductive properties without compromising the intrinsic compressibility of the substrates.
Herein, we propose a facile and scalable approach for preparing durable compressive metallic porous wood (Ni-PW) via the ELD process. The resultant Ni-PW is demonstrably antibacterial and conductive. In our research, polyethyleneimine (PEI), a functional polymer with abundant amino groups, is first cross-linked onto the PW, which forms a uniform surface for metal ion adsorption through chelation. Reliable nickel (Ni)-coated PW can be obtained after depositing a metal layer on the surface of PEI-PW by an aqueous ELD process. We found that the as-obtained Ni-PW can significantly facilitate the mass transfer and show multifunctional properties. In addition to being remarkably antibacterial against Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus), the Ni-PW is also proven to be highly conductive and compressible, which are the ideal properties for elasticity-responsive conductive materials.
Preparation of PW Substrates
The procedure of preparing PW was based on our previous work [26][27][28]. First, the wood was cut into specific sizes and these blocks were immersed for 10 h in boiling deionized (DI) water (1 L) containing NaOH (99.99 g) and Na 2 SO 3 (56.82 g); next, the blocks were soaked in boiling H 2 O 2 until completely white. The white ones were washed with DI water to remove residual chemicals. Finally, the wet blocks were frozen at −18 • C several times and subsequently freeze-dried. To make the oxidized wood, 0.5 g dry wooden blocks were submerged in 50 mL DI water with a pH of 10 adjusted with 1 M NaOH. About 5 mM NaClO was introduced to initiate the oxidation process. During the reaction, a pH meter was used to monitor the pH of the solution and a pH (approximately 10) was maintained by adding 1 M NaOH. The reaction was stopped after 2.5 h by adding 1 M HCl to adjust the pH to neutral. From there, the mixture was poured out and the blocks were washed with 1 M HCl. The wet blocks were rinsed with DI water until the pH was above 5. Finally, the PW was frozen at −18 • C several times and subsequently freeze-dried for further PEI grafting.
Preparation of PEI Grafted on PW Substrates
A certain amount of dry PW was put into a 100 mL beaker, and 4 wt.% of PEI solution dissolved with methanol was added until it completely swamped the white. The PW suspension was put in a water bath of 35 • C for 24 h and the PW was squeezed lightly at intervals. After the immersion, the solution was poured and the wet PW was rinsed several times by DI water to remove uncombined PEI. Subsequently, the wet PW was immersed by 4 wt.% of glutaraldehyde solution and squeezed lightly for 1 h at room temperature.
Finally, the PEI grafted PW was washed thoroughly with DI water to remove uncombined glutaraldehyde and then freeze-dried for further ELD performing.
Preparation of Metallic PW (Ni-PW)
The dried PEI-PW was immersed into a 5 mM AgNO 3 solution and dissolved in a 50 g mixture solution of ethanol and ethylene glycol in a ratio of 2:1 by weight and extruded continuously for 1 h. Later, the samples were rinsed with DI water several times to remove the residues. Finally, the Ag + -supported PW was immersed into the Ni ELD bath for different intervals and the target Ni-deposited PW was obtained. The specific experiment was as follows: the ELD of Ni was conducted in a plating bath, which is composed of a 1:1 mixture of pre-prepared solution A and B, and the pH was adjusted to 10 with an ammonia solution. Among these, solution A contains 20 g L −1 Ni 2 SO 4 ·6H 2 O, 33 g L −1 sodium citrate, and 14 g L −1 sodium hypophosphite in DI water; solution B contains 3 g L −1 DMAB reductant in DI water. The ELD process was conducted at room temperature and after Ni ELD the sample was washed thoroughly with DI water and fully freeze-dried.
Antibacterial Test
The antibacterial activities of the Ni-PW against S. aureus and E. coli were evaluated through modified Kirby-Bauer and turbidity assays methods.
Conductive Test
The resistance of the Ni-PW was measured using a multimeter. In addition, the resistance variation of such modified wood under cyclic compression was determined by a self-designed compression apparatus.
The Fabrication of Metallic Porous Woods
Wood, a renewable and natural carbon resource, contains approximately 45% cellulose and is the preferred raw material for low environmental and human health safety risks; it can also be chemically modified for use in traditional and advanced materials fields [29][30][31][32]. Porous-wood (PW), composed of cellulose by a "top-down" approach from natural wood [33], is an ideal substrate for fabricating conductive material through ELD due to its excellent porous structure and mechanical properties. We developed the CMPWs based on three criteria: (1) the growth direction of wood fiber must not be destroyed and the whole volume structure of the original wood must be preserved; (2) the metal layers must be firmly adhered to the surface of the wood fiber; and (3) the cellular architecture and porous structure of the metal-coated wood must not collapse during a continuous compression-relaxation process. Polymer-assisted metal deposition (PAMD), a modified ELD method, can enormously increase bonding between the metal layer and the substrate as well as the mechanical strength of the deposited conductive surface. Instead of a catalyst layer, long functional polymer chains are immobilized firmly on the catalyst preloaded substrate before the chemical deposition of metal particles [34,35]. Although PAMD is considered to be one of the most effective methods of preparing CMPWs, choosing an appropriate polymerization reaction can significantly facilitate the fabrication process and enhance the efficiency and reproducibility. For example, an inert atmosphere is a necessity for some polymerization processes such as surface initiated-atom transfer radical polymerization (SI-ATRP), and it usually takes several hours to complete on the PW [36]; however, the commercial polymer PEI can be grafted firmly onto the surface of PW by simple cross-linking.
As shown in Figure 1, Ni-PW with metal coating was successfully prepared via PAMD. In this specific experiment, readily available, pre-washed wood was delignified and bleached completely. Functional polymers were then immobilized onto the treated wood surface through a cross-linking reaction with numerous amine groups on cellulose.
As a proof-of-concept, PEI and glutaraldehyde were chosen as the functional polymer and cross-linking agents, respectively. The cross-linking reaction occurs between amino groups on the PEI and hydroxyls on the PW in the presence of glutaraldehyde, which forms a functional branched polymer coated on PW. The PEI-grafted PW was then immersed into an AgNO 3 aqueous solution, where silver ions were adsorbed onto the secondary amine groups of the copolymer layer through ion exchange [37,38]. Finally, Ni-coated PW were obtained after immersing the catalyst-loaded samples into ELD baths of Ni for a certain amount of time. The detailed data of surface morphology (SEM images), chemical composition (FTIR, XPS spectra and EDS images), crystal structure (XRD spectra), thermal behaviors (TGA spectra), pore size and distribution (mercury intrusion method), and cyclic compression release tests of the as-prepared Ni-PW were based on our recent work [39].
The Antibacterial Properties of the Ni-PW
It is well known that nanometallic materials (such as nickel) exhibit outstanding antibacterial activities [40][41][42]. The as-prepared Ni-PW with active Ni nanoparticles and unique porous structure is expected to possess superior antibacterial properties [39]. In this study, Gram-positive S. aureus and Gram-negative E. coli were used as bacterial models to investigate the antibacterial activities of the as-made Ni-PW.
The antibacterial activities of the modified woods were first assessed by calculating the growth inhibition of the bacteria on LB agar mediums through a modified Kirby-Bauer method. In the disk diffusion antimicrobial tests, a certain amount of S. aureus and E. coli were inoculated on the mediums, respectively, before various samples with equal quality were placed uniformly. Subsequently, the mediums were cultured for several days at 37 • C, and the antibacterial properties were investigated via measuring the inhibition zones. Surprisingly, no inhibition zone was observed for the original PW and the PEI modified PW against S. aureus or E. coli, even after 7 days of bacterial culture ( Figure 2); however, the Ni-PW exhibited obvious inhibition zones for the two bacteria ( Figure 3). The prominent antibacterial activity of Ni-PW can be mainly attributed to two factors: the hierarchical porous structure of the raw wood, which can provide enough space for the bacteriostatic areas, and phosphorous-containing lipopolysaccharides of E. coli and teichoic acids of S. aureus on the outer bacterial surface, which are susceptible to nanoparticles [43]. Hence, when the enhanced dispersion stability of the metal layer in the fiber substrate significantly increases the contact site between the Ni-layer and the bacteria, the bacterial cell wall is further destroyed and the bacteria are killed.
In addition, the turbidity assays were also used to investigate the antibacterial abilities of Ni-PW against S. aureus and E. coli. Two bacteria were cultured in the LB liquid mediums at 37 • C for 24 h in the presence of various samples, and the change in turbidity was shown in Figure 4. By our observation, all groups became more turbid after 24 h cultivation except the Ni-PW, which verifies our finding that the growth of bacteria was effectively inhibited in the medium containing the Ni-PW, which shows its excellent antibacterial activity. To study the antibacterial activities of the samples quantitatively, the survival rate of S. aureus and E. coli was determined in the liquid medium after 24 h cultivation. The bacterial survival was observed by the optical density at 600 nm (OD 600 ), and the survival rate was calculated by the equation: Survival (%) = A/B × 100% (where A and B are the OD 600 values in the group and control groups after 24 h cultivation, respectively). As displayed in Figure 5, the survival rates of the control group containing PW and PEI-PW are over 90% regardless of S. aureus or E. coli after 24 h cultivation, whereas the survival rates of groups containing Ni-PW are below 10%, which is remarkably lower than those of the control groups. These results reconfirm the conspicuous antibacterial properties of the as-prepared Ni-based wood against S. aureus and E. coli, which is in accordance with the result of the disk diffusion assays.
The Electrical Conductivity of the Ni-PW
Three-dimensional (3D) compressible conductors with excellent electrical conductivity can expand the scope of applications within the new fields of robotic skins, camera eyes, and pressure detection [44,45]. In addition to the excellent conductivity offered by the coated metal, the original PW also possesses a spongy porous structure with a high surface area that facilitates electron transfer. More notably, the as-prepared Ni-PW still exhibited superior elasticity after 1000 loading-unloading cycles [39]. These characteristics endow the as-prepared Ni-PW with outstanding electrical conductivity.
To evaluate the electrical conductivity of the Ni-PW, we first tested the resistance of the stationary fresh Ni-PW in the air, and the initial resistance was recorded as~6.7 Ω, indicating that the presence of electron transfer within the material; however, the resistance of the Ni-PW was found to increase to 11.3 Ω (by~68%) after it was stored in the air for 3 days, and then remained stable for the following 30 days (Figure 6). This increase in resistance is due to the formation of metallic oxide films on the surface of the Ni-based material in a humid environment and can be addressed via electrochemical deposition, self-assembly, physical encapsulation, and other metal-capped methods that have been applied in the electronic industry. Notably, the Ni-PW simultaneously displayed remarkable compressibility and superior electrical conductivity [36]. The same resistance measured above was conducted on the metal-coated wood under mechanical compression-relaxation cycles. Interestingly, the conductive woods showed higher resistance (~7.9 Ω) in the original state and lower resistance (~1.1 Ω) under a compression strain of 60%. As shown in Figure 7, the curve of the normalized electrical resistances, (R/R 0 )-compressive strain, could be well fitted to a similar profile under a 60% compression strain, where R 0 and R were the resistance of the sample before and after compressing, respectively. Since the conductivity of a material depends heavily on the contact resistance between the fibers, the more closely the fibers contact-or the larger the contact area between the fibers is-the more superior the conductivity is. When the compression strain was higher than 40%, the contact area between fibers did not increase remarkably, thus the R/R 0 dropped slowly (Figure 7a). Moreover, the symmetry of the unloading and loading curves indicated the rapid recovery of the Ni-PW in compression cycles. The R/R 0 of the Ni-PW returned to the original state is~1.1, almost the same as the initial value (R/R 0 = 1). This negligible change in R/R 0 is the result of the excellent compression capacity and fatigue resistance of the as-made Ni-PW. The resistance durability of the Ni-PW was tested under compression strains varying from 0 to 40% for 1000 compression cycles. As shown in Figure 7b, the R/R 0 of the as-prepared Ni-PW recovered to the original state is~1.15, indicating that such material still held superior elasticity even after 1000 compression cycles. In addition, the R/R 1 (R 1 is the resistance in the first cycle with a different compression strain) increased to~1.2, 1.4, 1.6, 2.4, and 2 at the compression strain of 0%, 10%, 20%, 30%, and 40%, respectively, after 1000 cycles (Figure 8). This increase in resistance was due to the accumulation of microcracks appearing on the metal layer upon compression cycling. A 3-V circuit with the as-made Ni-PW as the electronic pressure sensor was fabricated to display its elasticity-dependent electrical conductivity. A blue light-emitting diode (LED) was lit when connected to the Ni-PW, and its luminance fluctuated as the sensor was compressed and relaxed (insets in Figure 7 and Supplementary Video S1). Moreover, the asprepared Ni-PW was used as electrical substrate to drive an LED display with four letters, "S C A U", on a 3-V circuit ( Figure 9). Finally, this compression-resilience was applied to adjust the motor speed and the power the fan rotation and ball movement ( Figure 10 and Supplementary Video S2). All demonstrations strongly support the application of Ni-PW as flexible electronic pressure sensors in various fields.
Conclusions
In summary, we have developed an environmentally friendly and economical path for the production of Ni-PW with a stable structure and outstanding antibacterial and conductive properties. Ni-PW possesses a hierarchical cellular structure, and the size and shape of the materials are highly controllable. Based on the remarkable cyclic compressibility of the PW and the unique multi-functionality of the metals, the as-prepared Ni-PW demonstrates prominent antibacterial abilities against S. aureus and E. coli, as well as superior electrical conductivity. In light of these advantages, we believe that Ni-PW has promising prospects in multiple fields such as developing antibacterial agents, dampers, pressure sensors, and flexible wearable electronics.
Author Contributions: Writing-Original draft, F.S.; methodology, writing-review and editing, Y.Y.; formal analysis, investigation, data curation, funding acquisition, Z.C.; Conceptualization, writing-review and editing, supervision, funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
The data created in this study are fully depicted in the article.
Conflicts of Interest:
The authors declare no conflict of interest. | 4,365.2 | 2022-08-11T00:00:00.000 | [
"Materials Science"
] |
Joule-Level Twelve-Pass LD End-Pumped Bonded Neodymium Glass Laser Amplifier
: This paper reports on a Joule-level multi-pass laser amplification device with diode end-pumped square-rod neodymium glass (Nd:glass) bonded to K9 glass. The device generated 1.17 J pulse energy at 1 Hz and 1053 nm. The optical-to-optical efficiency was 13.01%, and the effective energy extraction efficiency was 44.23%. Comparing Nd:glass of the same specification without K9 glass under the same conditions, the thermal wave aberration of the former was 85.71% of that of the latter, which is 0.78 um. The near-field modulation degree at the highest energy output was 1.42 within 90% of the spot, and the far-field energy concentration was 81.88% within the 2.5-fold diffraction limit. The Nd:glass bonding method of the square rod is relatively novel in laser amplification systems pumped by the diode end face and can be further studied in future works. mJ energy injection. The optical-to-optical efficiency of 13.01% and the effective energy extraction efficiency of 44.23% are higher than those of the existing similar laser systems. In addition, the single-pass thermal wavefront aberration was 0.78 µ m , which is 85.71% of the non-bonded square-rod Nd:glass under the same conditions. In the case of the maximum output energy of the twelve-pass amplification system, the near-field modulation of the beam was 1.42 within the 90% spot range, and the far-field energy concentration was 81.88% at the 2.5-fold diffraction limit. In the subsequent works, a spatial light modulator and a wavefront corrector will be used to improve the quality of the laser beam. Experiments show that the Joule-level bonded Nd:glass laser amplification system with high optical efficiency has potential application prospects in high-power laser amplification systems and can be used as a pump source in various systems, e.g., in optical parametric chirped pulse amplification (OPCPA) systems.
Introduction
Solid-state laser systems with high energy and high repetition rates have attracted wide attention. In China's SG (Shen Guang) Up facility [1] and the US National Ignition Facility (NIF) [2], the preamplification module is a Joule-level solid-state laser amplification device. High-energy, intense, and high-repetition rate laser has a wide range of applications in astrophysics, plasma jets, and other high-energy density physics [3][4][5]. In terms of engineering applications, these laser systems have a lot of value. It is used as a pump source in chirped pulse amplification (CPA) or optical parametric chirped pulse amplification (OPCPA) systems [6][7][8]. In laser shock peening [9], laser-induced damage threshold measurement [10] and other materials processing also have great potential. However, in current application scenarios, many laser systems use flashlamp pumping methods, which have high thermal effects and low electrooptical conversion efficiency. The emission spectrum of a laser diode has a central wavelength of 802 nm and a narrow full width at half maximum (FWHM) of 3 nm, which can be well-matched with the Nd:glass with absorption wavelength of 802 nm and FWHM of 14 nm. In this case, the small heat sink and high electrooptical conversion efficiency are suitable for laser systems with high repetition rates. Therefore, laser diode pumped solid-state lasers have had some good reports in recent years. A DiPOLE (Diode Pumped Optical Laser Experiment) laser has achieved 105 J output energy based on diode pumping at a repetition frequency of 10 Hz, and the average power has reached the kW level [11,12]. A laser system with an output energy of 9.3 J and a repetition frequency of 33.3 Hz has been reported [13]. Other projects, such as POLARIS (Petawatt Optical Laser Amplifier for Radiation Intensive Experiments) in Germany and HAPLS (High-repetition-rate Advanced Petawatt Laser System) in the United States, have achieved output energy of tens of Joules at multiple repetition frequencies [14,15]. Neodymium phosphate glass (Nd:glass) lasers currently have distinctive features among Joule-level lasers. They have a higher level of energy storage density, lower quantum defects, and can be produced in large sizes. However, due to the lower thermal conductivity, Nd:glass is limited in the repetitive frequency laser systems, and a better heat dissipation strategy is required. In order to have a smaller thermal effect, the structure of the plate gain medium is generally used. Huang et al. demonstrated the composite plate technology of a sapphire cooling plate for Nd:glass lasers, and the relay imaging multi-pass technology obtained 560 mJ output energy at 1 Hz [16]. Recently, Yao et al. used square-rod Nd:glass lasers to achieve a 1 Hz output energy in a 1 J system under LD (laser diode) pumping [17], but the effective energy extraction efficiency is not high, only 12%. The cooling of rod-shaped Nd:glass lasers generally adopts side-side cooling, but if it is based on end-face pumping, the heat generated due to quantum defects and other reasons is mainly concentrated at the end surface of the gain medium, and the thermal effect is more obvious. Therefore, a solution for end-face bonding glass was proposed.
The current bonding technology has several methods, including surface-activated bonding [18], chemically activated direct bonding [19], and thermal diffusion bonding [20]. The method of atomic diffusion bonding has also been reported [21]. Surface-activated bonding is to treat the bonding surface with acetone or other solutions and then irradiate it with a fast atom beam to form suspending bonds, but this method is more expensive to process. Chemical surface activation bonding uses a strong acid and a strong base to act on the bonding surface and then optically connect them to form a stable bonding structure. Someone reported a kind of thermal diffusion bonding in which the oxide layer of the crystal is removed after surface treatment and a phosphate glass layer of several nanometers in thickness is formed [22]. After multi-stage heating (the highest temperature reaches above 1000 • C), the part diffuses to the bonded crystal in the formation of a strong bonding layer. As for Nd:glass, its bonding ability cannot reach the diffusion level within its acceptable temperature range. For Nd:glass, the degree of diffusion of end-face molecules in its endurable temperature range is weaker than that of other crystals. The Nd:glass bonding method mentioned in this article is a thermal bonding method based on thermal diffusion bonding, and the heating temperature is about 100 • C.
In this paper, a technical scheme of diode end-pumping a square-rod Nd:glass endface bonding K9 glass laser is proposed, which realizes the Joule-level amplification with high light efficiency and high effective energy extraction efficiency. Due to the same substrate between Nd:glass and K9 glass, the thermal expansion coefficient, refractive index, and other parameters are basically the same, which ensures transmittance of the gain medium and avoids the separation of the bonding surface. The experiment shows that the transmittance of bonded Nd:glass is 99.53%, and the Fresnel diffraction at the bonding surface is very small. The thermal conductivity of K9 glass is three times that of Nd:glass. In the case of end-face pumping, part of the heat is conducted through K9 glass, which can reduce the heat density of the entire gain medium. In the experiment, compared with unbonded glass in the same state, the thermal wavefront aberration of the former is 85.71% of that of the latter. Under the diode end-pumping of 9.02 J at 802 nm, the relay imaging multi-pass amplification technology is adopted to achieve twelve-pass amplification, and the output energy is 1.17 J at 1 Hz and 1053 nm. The optical-to-optical efficiency is 13.01%, and the effective energy extraction efficiency is 44.23%. The twelvepass near-field modulation is 1.42 within 90% of the spot range. The far-field energy concentration is 81.88% within 2.5 times the diffraction limit.
Amplification System Setup
The schematic diagram of the laser diode end-pump-bonded Nd:glass multi-pass laser amplifier system is shown in Figure 1. The laser amplifier system mainly includes a pump system, a beam expander (BE), a serrated aperture (SA), a polarization beam splitter (PBS), a half-wave plate (λ/2), two 45 • Faraday rotators (FR1 and FR2), a Pockels cell (PC), two thin-film polarizers (P1 and P2), two sets of 4F relay imaging vacuum telescope systems (VT1 and VT2), and a bonded Nd:glass laser amplifier head (AMP), as well as some mirrors (M1, M2, M3, TRM1, and TRM2). The pump system includes two sets of pump-coupling optical paths, which pump the two end faces of the laser amplifier head vertically. The LD array in each group of coupled optical paths is composed of 60 closely-arranged bars, with an emission area of 11 cm × 1.5 cm. Each LD array has a maximum pump power of 20 kW at a wavelength of 802 nm. The beam is smoothed through the lens group, and finally the two sets of coupled optical paths are aligned in the middle of the laser amplifier head to form an 8 mm × 8 mm square flat-top pump spot, which is also the position of the image plane of the two pump spots. The pump distribution is shown in Figure 2. The pump distribution measured on the image plane maintains high uniformity in both the horizontal and vertical directions, and the spatial intensity modulation within the entire platform is less than 15%. Both VT1 and VT2 are composed of planoconcave lenses with a focal length of 750 mm. The central section of AMP is on their image plane (object plane) and the three end mirrors M1, M2, and M3 also have their image plane positions respectively to maintain the beam quality during the beam transmission. A 5 mm diameter pinhole plate is installed at the focal point of VT1 to filter and block high-frequency spatial light and some stray light. The input signal laser is a 1053-nm pulse laser with a pulse width of 5 ns generated by a regenerative amplifier with a repetition frequency of 1 Hz. After passing through a 5× beam expander and an 8-mm square sawtooth aperture, the signal light is spatially shaped from a circular Gaussian beam into an 8-mm square flat-top beam. Subsequently, the laser passes through the PBS, an isolator composed of λ/2 and FR1, and P1, its polarization state is P polarization, and it enters the traditional four-pass amplification optical path. When the laser passes through the PC for the first time, the power of the PC is turned off. Due to the polarization control effect of FR2, the laser goes to the fourth power in the optical path and is coupled out from P2. Before the laser passes through the PC for the second time, the power of the PC is turned on and the voltage is adjusted to a half-wave voltage. At this time, the PC is used as a half-wave plate. After passing through the PC, the laser polarization state becomes S polarization, passing through the M2 mirror, and passing through the energized PC for the third time, the laser polarization state becomes P polarization and continues to be amplified four times in the optical path, and then the laser is coupled out from P2. The power is turned off, and the laser passes through the PC for the fourth time, and the eight-pass amplified laser is coupled out from the PBS. In order to achieve twelve-pass magnification, when the laser passes through the PC for the fourth and fifth times, the PC is kept energized, so that the laser performs an additional four-pass magnification in the optical path. Before passing the PC for the sixth time, the power of the PC is turned off, and the twelve-pass amplified laser is coupled out from the PBS.
Amplifier Head
The amplifier head is side-cooled by circulating water at 23 °C as shown in Figure 3. Figure 3a is an assembly drawing of the amplifier head; Figure 3b is a cross-sectional view of it and it is sealed with O-rings on both sides of the square bar.
Amplifier Head
The amplifier head is side-cooled by circulating water at 23 • C as shown in Figure 3. Figure 3a is an assembly drawing of the amplifier head; Figure 3b is a cross-sectional view of it and it is sealed with O-rings on both sides of the square bar.
Amplifier Head
The amplifier head is side-cooled by circulating water at 23 °C as shown in Figure 3. Figure 3a is an assembly drawing of the amplifier head; Figure 3b is a cross-sectional view of it and it is sealed with O-rings on both sides of the square bar. The gain medium of the laser amplifier head is a bonded Nd:glass square rod with a size of 15 mm × 15 mm × 60 mm, and the gain zone is 0.5 wt.% doped Nd:glass with a length of 40 mm, and the two end faces are bonded separately with 10-mm thick K9 glass. The Nd:glass type N31 was independently made and processed by the Shanghai Institute of Optics and Fine Mechanics (SIOM). Its density is 2.53 g/cm 3 , the stimulation emission cross-section is 3.8 × 10 −20 cm 2 , and the fluorescence lifetime is 351 µs. The substrate of Nd:glass is K9 glass, and some physical parameters of the two materials such as refractive index (1.53) and thermal expansion coefficient (1.07 × 10 −5 /K) are consistent, which reduces the laser transmission loss. It also avoids the separation of the bonding surface caused by the thermal deformation of the medium during the pumping process. The two end faces of the bonded square rod correspond to the antireflection coatings of the pump wavelength and the laser wavelength, and the transmittance of the bonded square rod is 99.53% at a wavelength of 1053 nm. As shown in Figure 3b, the clamping position of the bonded square rod is on the K9 glass, which avoids the mechanical stress caused by the clamping of Nd:glass. Without the bonding method, the heat accumulated on the transparent end faces of the neodymium glass can only be dissipated to the side surfaces (water-cooled convective heat transfer coefficient is 500 W/m 2 ·k), and the heat dissipation of the air in the axial direction of the end face is too low to be noticed (the natural convection heat transfer coefficient of air at room temperature is 5 W/m 2 ·k). The thermal conductivity of K9 glass (1.4 W/m·k) is higher than that of Nd:glass (0.56 W/m·k), which enables Nd:glass to transfer the heat accumulated on the end surface to K9 glass for diffusion, and the overall thermal density of the medium is reduced in the end. As a result, it reduces the thermal stress in the Nd:glass and the risk of its collapse. The two outermost end faces are not deformed due to thermal expansion, so the beam quality can also be improved.
The gain distribution measured in the experiment is shown in Figure 4, the total pump energy is 9.02 J, and the pump pulse width is 500 us. The gain uniformity measured on the entire platform is less than 6.94% rms in the 95% area, and the single-pass small signal gain is about 2.30. The energy storage of the amplifier head is calculated according to Equations (1) and (2) [23].
where E s is the saturated energy storage density; for a four-level system, γ = 1; σ l is the emission cross-section of Nd:glass; the calculated saturated energy storage density is 4.97 J/cm 2 ; G 0 is the single-pass small signal gain obtained from the experiment; A is the gain area; and the calculated energy storage of Nd:glass is 2.65 J.
at a wavelength of 1053 nm. As shown in Figure 3b, the clamping position of the bonded square rod is on t glass, which avoids the mechanical stress caused by the clamping of Nd:glass. Wi the bonding method, the heat accumulated on the transparent end faces of the neo ium glass can only be dissipated to the side surfaces (water-cooled convective heat fer coefficient is 500 W/m 2 •k), and the heat dissipation of the air in the axial directi the end face is too low to be noticed (the natural convection heat transfer coefficient at room temperature is 5 W/m 2 •k). The thermal conductivity of K9 glass (1.4 W/m higher than that of Nd:glass (0.56 W/m•k), which enables Nd:glass to transfer the accumulated on the end surface to K9 glass for diffusion, and the overall thermal de of the medium is reduced in the end. As a result, it reduces the thermal stress i Nd:glass and the risk of its collapse. The two outermost end faces are not deforme to thermal expansion, so the beam quality can also be improved.
The gain distribution measured in the experiment is shown in Figure 4, the pump energy is 9.02 J, and the pump pulse width is 500 us. The gain uniformity mea on the entire platform is less than 6.94% rms in the 95% area, and the single-pass signal gain is about 2.30. The energy storage of the amplifier head is calculated acco to Equations (1) and (2) [23]. ℎ where Es is the saturated energy storage density; for a four-level system, γ = 1; emission cross-section of Nd:glass; the calculated saturated energy storage density i J/cm 2 ; G0 is the single-pass small signal gain obtained from the experiment; A is the area; and the calculated energy storage of Nd:glass is 2.65 J.
Thermal Effects
Under the condition of repetition frequency of 1 Hz and a pump energy of 9.02 J, the two bonding surfaces of the bonded Nd:glass laser amplifier head gathers a lot of heat, resulting in wavefront aberration. As shown in Figure 5, the thermally induced wavefront difference measured by a wavefront sensor (SID4, Phasics) in the experiment was 0.78 µm. With the same pumping conditions, the thermally induced wavefront profile difference of the unbonded square-rod Nd:glass of the same specification was measured to be 0.91 µm. The former one is 85.71% of the latter one; for this Nd:glass square rod, the bonding method is not the best for optimizing its thermal effect. A new cooling structure should be tried in later works. of the unbonded square-rod Nd:glass of the same specification was mea μm. The former one is 85.71% of the latter one; for this Nd:glass square r method is not the best for optimizing its thermal effect. A new cooling be tried in later works. In order to understand the main wavefront types, SID4 was used to Legendre polynomial of the wavefront aberration in Figure 5. Figure 6 s terms in the analysis of the wavefront aberration's Legendre polynomia fourth, the sixth, and the thirteenth terms are dominant; the fourth and th the defocus of x and y, respectively. The thirteenth item was dominant medium and the pump cross-section were both square, and the distance b of the pump area and the central point was different, resulting in different capabilities. These items can be compensated for by wavefront correctors In order to understand the main wavefront types, SID4 was used to decompose the Legendre polynomial of the wavefront aberration in Figure 5. Figure 6 shows the first 21 terms in the analysis of the wavefront aberration's Legendre polynomial coefficient. The fourth, the sixth, and the thirteenth terms are dominant; the fourth and the sixth terms are the defocus of x and y, respectively. The thirteenth item was dominant because the gain medium and the pump cross-section were both square, and the distance between the edge of the pump area and the central point was different, resulting in different heat dissipation capabilities. These items can be compensated for by wavefront correctors.
μm. The former one is 85.71% of the latter one; for this Nd:glass square rod, the method is not the best for optimizing its thermal effect. A new cooling structur be tried in later works. In order to understand the main wavefront types, SID4 was used to decom Legendre polynomial of the wavefront aberration in Figure 5. Figure 6 shows th terms in the analysis of the wavefront aberration's Legendre polynomial coeffic fourth, the sixth, and the thirteenth terms are dominant; the fourth and the sixth t the defocus of x and y, respectively. The thirteenth item was dominant because medium and the pump cross-section were both square, and the distance between of the pump area and the central point was different, resulting in different heat di capabilities. These items can be compensated for by wavefront correctors. Another function of FR2 in Figure 1 is to compensate for thermally induced depolarization [24]. Figure 7a,b show the near fields of a two-pass laser when the FR2 is not placed and when the FR2 is placed and the depolarization effect is compensated.
Another function of FR2 in Figure 1 is to compensate for thermally induced d ization [24]. Figure 7a,b show the near fields of a two-pass laser when the FR2 is no and when the FR2 is placed and the depolarization effect is compensated.
Output Energy
Nd:glass has a high energy storage density, but its limited single-pass gain m unable to efficiently extract the stored energy during single-pass amplification. Mu amplification can improve energy extraction efficiency. However, the gain of each the multi-pass amplification gradually decreases due to the extraction of stored en the previous pass. The theoretical calculation of the multi-pass amplification out ergy is based on the iterative calculation [25], and Equations (3)-(5) are the main tion formulas: where Ein is the energy injected in a certain pass; Eout is the energy of the single-p plification; Es is the saturated energy storage density; g0 is the single-pass small sign coefficient; and l is the length of the gain zone; the average single-pass transmittan the amplifier obtained by experimental measurement is 85.04%; is the single-p traction efficiency; and is the next single-pass small signal gain coefficient. Afte iterations of calculation, the output energy curves of differently injected laser en multi-pass amplification are shown in Figure 8. The experimental measurement and the theoretical calculation results were in good agreement in the four-pass, eig and twelve-pass amplification. A sampling mirror was used to sample the amplifi energy and reflect it to the energy harvester (Gentec QE65S) because the output was relatively large. For the four-pass and eight-pass amplification, when the inje ser energy was 6.50 mJ, energies of 83.86 mJ and 704.72 mJ were obtained, respe and neither reached gain saturation. For the twelve-pass amplification, when the energy was 3.00 mJ, the gain saturation was reached, and the maximum output was 1.17 J. The energy extraction efficiency reached 44.23%. The optical-to-opti ciency reached 13.01%.
Output Energy
Nd:glass has a high energy storage density, but its limited single-pass gain makes it unable to efficiently extract the stored energy during single-pass amplification. Multipass amplification can improve energy extraction efficiency. However, the gain of each pass in the multi-pass amplification gradually decreases due to the extraction of stored energy in the previous pass. The theoretical calculation of the multi-pass amplification output energy is based on the iterative calculation [25], and Equations (3)-(5) are the main calculation formulas: where E in is the energy injected in a certain pass; E out is the energy of the single-pass amplification; E s is the saturated energy storage density; g 0 is the single-pass small signal gain coefficient; and l is the length of the gain zone; the average single-pass transmittance T of the amplifier obtained by experimental measurement is 85.04%; η l is the single-pass extraction efficiency; and g 0 is the next single-pass small signal gain coefficient. After many iterations of calculation, the output energy curves of differently injected laser energy in multi-pass amplification are shown in Figure 8. The experimental measurement results and the theoretical calculation results were in good agreement in the four-pass, eight-pass, and twelve-pass amplification. A sampling mirror was used to sample the amplified laser energy and reflect it to the energy harvester (Gentec QE65S) because the output energy was relatively large. For the four-pass and eight-pass amplification, when the injected laser energy was 6.50 mJ, energies of 83.86 mJ and 704.72 mJ were obtained, respectively, and neither reached gain saturation. For the twelve-pass amplification, when the injected energy was 3.00 mJ, the gain saturation was reached, and the maximum output energy was 1.17 J. The energy extraction efficiency reached 44.23%. The optical-to-optical efficiency reached 13.01%. Figure 9 shows the twelve-pass near-field profile at the laser image plane position with a repetition frequency of 1 Hz and an output energy of 1.17 J by using a CCD (Charge Coupled Device) camera (Camyu Corp., GYD-SG1024B12GA). Some clear diffraction rings in the image were caused by some dead pixels in the measurement system. The modulation degree of the laser intensity within the range of 90% was 1.42. Because the Pockels cell produced a few small damage points in the previous experiment, it accumulated during the twelve-pass amplification process. The rings appearing in Figure 9 are diffraction phenomena caused by some dust in the diagnostic channel. In subsequent research, a spatial light modulator should be used for beam shaping to improve modulation. Figure 9 shows the twelve-pass near-field profile at the laser image plane position with a repetition frequency of 1 Hz and an output energy of 1.17 J by using a CCD (Charge Coupled Device) camera (Camyu Corp., GYD-SG1024B12GA). Some clear diffraction rings in the image were caused by some dead pixels in the measurement system. The modulation degree of the laser intensity within the range of 90% was 1.42. Because the Pockels cell produced a few small damage points in the previous experiment, it accumulated during the twelve-pass amplification process. The rings appearing in Figure 9 are diffraction phenomena caused by some dust in the diagnostic channel. In subsequent research, a spatial light modulator should be used for beam shaping to improve modulation. Figure 9 shows the twelve-pass near-field profile at the laser image plane position with a repetition frequency of 1 Hz and an output energy of 1.17 J by using a CCD (Charge Coupled Device) camera (Camyu Corp., GYD-SG1024B12GA). Some clear diffraction rings in the image were caused by some dead pixels in the measurement system. The modulation degree of the laser intensity within the range of 90% was 1.42. Because the Pockels cell produced a few small damage points in the previous experiment, it accumulated during the twelve-pass amplification process. The rings appearing in Figure 9 are diffraction phenomena caused by some dust in the diagnostic channel. In subsequent research, a spatial light modulator should be used for beam shaping to improve modulation. The corresponding far-field mode and the far-field energy concentration measured at the focal length of the lens are shown in Figure 10. The far-field energy concentration was 81.88%, at 2.5 times the diffraction limit. There are some speckles around the spot, which reduces the energy concentration. Some of the defocus aberrations can be compensated for by adjusting the lens position in the laser-magnifying cavity, and the remaining aberrations require further improvement and optimization by a wavefront corrector. The corresponding far-field mode and the far-field energy concentration measured at the focal length of the lens are shown in Figure 10. The far-field energy concentration was 81.88%, at 2.5 times the diffraction limit. There are some speckles around the spot, which reduces the energy concentration. Some of the defocus aberrations can be compensated for by adjusting the lens position in the laser-magnifying cavity, and the remaining aberrations require further improvement and optimization by a wavefront corrector.
Conclusions
In this paper, a twelve-pass amplification system with diode end-pumped bonded Nd:glass to achieve Joule-level output energy is described. Under the 9.02 J pump energy at 802 nm, 1.17 J saturated output energy was achieved at a repetition frequency of 1 Hz and 3.00 mJ energy injection. The optical-to-optical efficiency of 13.01% and the effective energy extraction efficiency of 44.23% are higher than those of the existing similar laser systems. In addition, the single-pass thermal wavefront aberration was 0.78 μm, which is 85.71% of the non-bonded square-rod Nd:glass under the same conditions. In the case of the maximum output energy of the twelve-pass amplification system, the near-field modulation of the beam was 1.42 within the 90% spot range, and the far-field energy concentration was 81.88% at the 2.5-fold diffraction limit. In the subsequent works, a spatial light modulator and a wavefront corrector will be used to improve the quality of the laser beam. Experiments show that the Joule-level bonded Nd:glass laser amplification system with high optical efficiency has potential application prospects in high-power laser amplification systems and can be used as a pump source in various systems, e.g., in optical parametric chirped pulse amplification (OPCPA) systems.
Conclusions
In this paper, a twelve-pass amplification system with diode end-pumped bonded Nd:glass to achieve Joule-level output energy is described. Under the 9.02 J pump energy at 802 nm, 1.17 J saturated output energy was achieved at a repetition frequency of 1 Hz and 3.00 mJ energy injection. The optical-to-optical efficiency of 13.01% and the effective energy extraction efficiency of 44.23% are higher than those of the existing similar laser systems. In addition, the single-pass thermal wavefront aberration was 0.78 µm , which is 85.71% of the non-bonded square-rod Nd:glass under the same conditions. In the case of the maximum output energy of the twelve-pass amplification system, the near-field modulation of the beam was 1.42 within the 90% spot range, and the far-field energy concentration was 81.88% at the 2.5-fold diffraction limit. In the subsequent works, a spatial light modulator and a wavefront corrector will be used to improve the quality of the laser beam. Experiments show that the Joule-level bonded Nd:glass laser amplification system with high optical efficiency has potential application prospects in high-power laser amplification systems and can be used as a pump source in various systems, e.g., in optical parametric chirped pulse amplification (OPCPA) systems. | 6,910.4 | 2021-03-30T00:00:00.000 | [
"Physics"
] |
Comparative gene expression profiling of mouse ovaries upon stimulation with natural equine chorionic gonadotropin (N-eCG) and tethered recombinant-eCG (R-eCG)
Background Equine chorionic gonadotropin (eCG) induces super-ovulation in laboratory animals. Notwithstanding its extensive usage, limited information is available regarding the differences between the in vivo effects of natural eCG (N-eCG) and recombinant eCG (R-eCG). This study aimed to investigate the gene expression profiles of mouse ovaries upon stimulation with N-eCG and R-eCG produced from CHO-suspension (CHO-S) cells. R-eCG gene was constructed and transfected into CHO-S cells and quantified. Subsequently, we determined the metabolic clearance rate (MCR) of N-eCG and R-eCG up to 24 h after intravenous administration through the mice tail vein and identified differentially expressed genes in both ovarian tissues, via quantitative real-time PCR (qRT-PCR) and immunohistochemistry (IHC). Results R-eCG was markedly expressed initially after transfection and maintained until recovery on day 9. Glycan chains were substantially modified in R-eCG protein produced from CHO-S cells and eliminated through PNGase F treatment. The MCR was higher for R-eCG than for N-eCG, and no significant difference was observed after 60 min. Notwithstanding their low concentrations, R-eCG and N-eCG were detected in the blood at 24 h post-injection. Microarray analysis of ovarian tissue revealed that 20 of 12,816 genes assessed therein were significantly up-regulated and 43 genes were down-regulated by > 2-fold in the group that received R-eCG (63 [0.49%] differentially regulated genes in total). The microarray results were concurrent with and hence validated by those of RT-PCR, qRT-PCR, and IHC analyses. Conclusions The present results indicate that R-eCG can be adequately produced through a cell-based expression system through post-translational modification of eCG and can induce ovulation in vivo. These results provide novel insights into the molecular mechanisms underlying the up- or down-regulation of specific ovarian genes and the production of R-eCG with enhanced biological activity in vivo.
The β-subunits of eCG and equine LH (eLH) have an identical primary structure and are reportedly expressed from the same gene [7,9]. Thus, eCG is a potentially suitable good model to study structure-function relationships among gonadotropins owing to its dual LH-and FSH-activities in nonequid animals [10][11][12]. In equidaes, eCG exhibits only LH activity [12].
Owing to its long half-life in blood, a single dose of eCG, as opposed to multiple doses, is adequate to stimulate ovarian gene expression [13]. Furthermore, eCG and human CG (hCG) together stimulate ovulation in rats and mice [14,15]. Moreover, eCG administration in cows is reportedly associated with an increase in their ovulation rate [16], particularly in early postpartum calves [17,18]. Therefore, we speculate that ovulation rate is very important to determine the litter size in experimental animals and animal stock farms.
The glycosylation sites at amino acid residue 52 in the αsubunit of human FSH (hFSH) [19] and hCG [5] and residue 56 in eCG [7] are important for signal transduction when the cAMP response is impaired, and the binding activities of these hormones are increased by 2-to 3-fold [20], consistent with our previous findings [2,6]. Thus, post-translational glycosylation of glycoprotein hormones plays a pivotal role in receptor-mediated signal transduction. N-and O-linked oligosaccharides at residue 56 of the α-subunit of eCG and a C-terminal extension (residues 114-149) in the β-subunit are included in vectors expressing eCG gene to produce recombinant-eCG (R-eCG) and to investigate the role of these regions in the biological activity of eCG.
High-throughput RNA sequencing and microarray analysis are useful during transcriptome profiling and gene expression analysis [21,22]. A microarray contains thousands of millions of complementary DNA fragments or oligonucleotides that hybridize with specific RNA molecules in a sample [22]. A recent study revealed differentially expressed genes (DEGs) upon RNA-seq using ovarian tissue of dairy goats upon repeated eCG treatment [23], indicating that three-time eCG treatment dysregulated several ovarian genes including glucagon, follistatin-related protein 3 (FSTL3), and aquaporin-3 (AQP3), thereby reducing reproductive function.
We previously attempted to assess the different roles of R-eCG with respect to their attached oligosaccharides [2,24], glycosylation sites for LH-and FSH-like activity [2], tethered R-eCGs [6], internalization of rat FSH and LH receptors by R-eCG [25], and signal transduction through eel FSH receptor and LH receptor by R-eCG and Natural-eCG (N-eCG) [26]. Furthermore, we analyzed the ovulation rates between N-eCG and deglycosylated R-eCGs in mice [27] and demonstrated that deglycosylated R-eCG mutants were induced at markedly lower levels in nonfunctional oocytes compared to N-eCG treated group. Non-functional oocytes in N-eCG and R-eCG mutants were approximately 20 and 2%, respectively. Numerous studies have reported the effects of a combination of eCG and hCG on reproductive performance and estrous synchronization [13][14][15][16]. However, no studies have investigated the effects of N-eCG and R-eCG on gene regulation through RNA-based microarray analysis.
In the present study, we hypothesized that treatment of ovarian tissues with N-eCG and R-eCG results in different DEG profiles. We produced R-eCG proteins in CHO-S cells, characterized their physiological function in vivo, and analyzed the differences in gene expression profiles through microarray analysis.
Results
Production of R-eCG and western blot analysis eCG contains two N-linked glycosylation sites at amino acid positions 56 and 82 in the α-subunit of eCG. The β-subunit of eCG contains one N-linked glycosylation site at position 13 and approximately 12 O-linked glycosylation sites at the C-terminal region (Fig. 1). Thus, we constructed an expression vector encoding the tethered R-eCG mutant, which was linked with the C-terminal region of the β-subunit without the signal peptide region of the α-subunit composed of 24 amino acids. Tethered R-eCG gene is comprised of 813 bp containing signal sequence 60 bp of eCG β-subunit as shown in Fig. 1.
Further, we analyzed the molecular weight of R-eCG. On western blot analysis, the approximate molecular weight of R-eCG was 40-46 kDa (Fig. 2b). After deglycosylation with PNGase F, the molecular weight significantly decreased to approximately 30-36 kDa (Fig. 2b). The glycan chains were substantially modified posttranslation in tethered R-eCG, confirming the loss of the its chains upon PNGase F treatment.
Metabolic clearance rates (MCRs) of N-eCG and tethered R-eCG in vivo
To analyze the MCR, eCG was detected in both groups (~550 mIU/mL) in the serum at 1 h after injection, as shown in Fig. 3. Although the MCR was slightly higher in the R-eCG-treated groups, no significant difference was observed between N-eCG and R-eCG treatment after 1 h. Their concentrations were low (~100 mIU/mL) until 24 h. These results indicate that R-eCG produced herein had a normal MCR and induced ovulation, as previously described [27].
Further, we analyzed the data to gain insight into the biological processes and functions of the DEGs. The distribution of the 63 DEGs (at least 2-fold) between ovaries treated with N-eCG and R-eCG and their distribution in different Gene Ontology (GO) categories were analyzed (Supplementary Material Fig. 1). GO analysis was performed using the Panther database (http:// www.pantherdb.org). GO terms "biological process" were the most represented (> 4 genes) among R-eCGtreated ovarian tissue, including "signal transduction (16 genes)," "developmental processes (14)," "protein metabolism and modification (9)," "cell culture and motility (5)," and "nucleoside, nucleotide and nucleic acid metabolism (4)." In category "molecular function," these genes were classified into 18 subcategories through GO analysis, with the largest number of genes represented in "protease (6 genes)," "signaling molecule (5)," "oxidoreductase (5)," "nucleic acid binding (5)," and "hydrolase (5)." Seven genes were categorized as "molecular function unclassified" (Supplementary Material Fig. 2). The number of classified genes is the number of genes in categories after excluding overlapping categories.
Gene expression analysis through quantitative reversetranscription PCR (qRT-PCR) analysis
To validate the results of microarray analysis, we performed RT-PCR and qRT-PCR analyses using specific primers (Supplementary Material Table 1) for the 14 genes identified herein ( Fig. 5a, b). Among the upregulated genes identified through microarray analysis of R-eCG-treated ovaries, six genes, i.e., Tex19.2, Sectm1b, Ctsk, Gpnmb, Sectm1a, and Hsd17b1, were confirmed to be up-regulated through qRT-PCR (Fig. 5a). Among the down-regulated genes, eight genes, i.e., OVGP1, BC048546, Tmem68, Dcpp1, Prkg2, Edn2, Adamts1, and Akr1b7, were confirmed to be down-regulated by > 2-fold in the R-eCG-treated mouse ovarian tissue, of which seven, i.e., OVGP1, BC048546, Tmem68, Dcpp1, Edn2, Adamts1, and Akr1b7, were confirmed to be down-regulated via qRT-PCR analysis (Fig. 5b). Nonetheless, one gene, Prkg2, displayed no significant change in expression levels upon qRT-PCR analysis. The fold-change in the expression levels of these genes was consistent with the results of the microarray analysis, confirming that the results of qRT-PCR analysis correlated with those of the microarray analysis. The metabolic clearance rate (MCR) of N-eCG and of R-eCGβ/α. Both eCGs were intravenously administrated at 5 IU through the tail vein. Blood samples were collected after 10 and 30 min and 1, 2, and 24 h. The samples were centrifuged at 5000 rpm for 15 min at 4°C, and eCG concentrations in the serum were estimated using a PMSG ELISA kit. The levels of eCG were analyzed via sandwich ELISA in triplicate. Superscripts indicate significant differences in the groups (p < 0.05)
Immunohistochemical analysis of ovarian tissue
To determine the cell types expressing four proteins (HSD17β1, ADAMTS1, Edn2, and OVGP1), immunohistochemical analysis was performed for the same ovarian tissues used for microarray analysis (Fig. 6). Among the up-regulated genes in the R-eCG-treated ovarian tissue, HSD17β1 was localized in the granulosa cells and theca folliculi. Among the down-regulated genes in R-eCGtreated ovarian tissue, ADAMTS1, which is required for normal ovulation and is localized in the cumulus oocyte complex during the preovulatory stage, was also localized in granulosa cells. Edn2, which is transiently expressed in granulosa cells immediately prior to ovulatory follicle rupture, was also strongly expressed in the ovarian stroma of a N-eCG-treated ovarian tissue. OVGP1, which improves the efficiency of in vitro fertilization and increases the number of fertilized eggs, was weakly expressed in the ovaries after ovulation. These results indicate that the expression of these four proteins was directly correlated with the time of ovulation in mice.
Discussion
This study examined the biological activity of tethered R-eCG, containing N-and the O-linked oligosaccharide chains and their MCR in vivo. Furthermore, this study evaluated differential gene expression profiles in mouse ovaries stimulated with N-eCG and R-eCG in combination with hCG. The present study shows differences in the up-and Fig. 4 Hierarchical clustering of gene expression profiles in N-eCG-treated and R-eCG-treated ovarian tissues. The ovaries were excised from 8week-old ICR female mice. The mice were induced to superovulate with 10 IU of N-eCG or R-eCG and then 10 IU of hCG after 48 h, and the ovulated oocytes were collected in an oviduct ampulla after 13 h. Thereafter, the ovaries were harvested after 13 h and RNA was analyzed via microarray analysis. Gene expression levels were evaluated through microarray analysis with 12,816 gene probes. Genes showing > 2-fold differences in expression were identified. The expression of 63 (0.49%) of 12,816 genes differed by at least 2-fold between N-eCG-treated and R-eCG-treated ovaries. Twenty of 63 genes were up-regulated in R-eCG-treated ovaries and 43 genes were down-regulated down-regulated genes (> 2-fold) in ovarian tissues treated with N-eCG and R-eCG.
Thus, far, we have expressed R-eCG in only CHO-K1 cells and stable CHO-K1 cells under G418 selection [6,[25][26][27]. Hence, levels of secreted R-eCG at 24 h posttransfection have remained unknown. However, supernatants of the culture media of CHO-S cells were recovered until 9 days after transfection. In the present study, single-chain R-eCG was markedly up-regulated on day 1 after transfection in CHO-S cells. However, R-eCG with a C-terminal deletion in the β-subunit was detected at a low concentration on day 1 and 3 post-transfection (data now shown). The present results indicate that the CTP region including up to 12 O-linked oligosaccharides plays a pivotal role in the early secretion of eCG from cells into the supernatant medium after transfection.
Various studies have reported that R-eCG proteins lead to the production and secretion of stable heterodimeric eCG in COS-7 cells [28] and infected Sf9 cells [29], with thermal stability similar to that of native pituitary LH [30]. Secreted single-chain eCG in COS-7 cells is detectable as a doublet of 46 and 44 kDa [12]. The present results show that the molecular weight of R-eCG greatly decreases upon elimination of the N-linked oligosaccharide chains via PNGase F treatment, decreasing the molecular weight to approximately 30-36 kDa. Our results are consistent with those of other studies, suggesting that R-eCG contained highly modified N-linked glycosylation sites in COS-7 cells and CHO-K1 cells post-translation [12,27].
Furthermore, R-eCG mutants deglycosylated through site-directed mutagenesis was markedly low in number (< 2.4%) in nonfunctional oocytes in comparison with N-eCG (21.2%) [27]. These results suggest a specific model for ovulation without displaying a long-half-life and to only induce functional oocytes in experimental animals despite using N-eCG. Furthermore, the MCR of R-eCG was somewhat higher than that of N-eCG at 10-60 min after injection and was similarly maintained at 2 and 24 h. These MCR results suggest that R-eCG derivatives can be beneficially utilized for animal experiments. In the present study, the expression level of R-eCG produced from CHO-S cells was extremely low for experimental animals and animal stock farms application. Thus, plans to isolate the single colony cells expressing lots of R-eCG quantity using the DG44 cells which would produce more R-eCG are underway.
Furthermore, we previously reported that R-eCG exerts dual LH-and FSH-like activity in in vitro bioassays involving rat Leydig and granulosa cells, respectively [2,6]. Moreover, we previously reported that R-eCG has both LH-and FSH-like activity in cells expressing rat LH/CGR and rat FSHR [25]. Nevertheless, no studies have examined differential gene expression in ovaries stimulated with N-eCG and R-eCG. We performed gene expression profiling for ovarian tissue through microarray analysis after administration of N-eCG or R-eCG. We identified genes upand down-regulated by > 2-fold. The present results show that 63 genes were up-and down-regulated (0.49% of 12, 816 genes in R-eCG-injected ovaries). These changes in gene expression profiles directly render oocytes nonfunctional upon comparing N-eCG-treated and R-eCG-treated ovaries, suggesting that tethered R-eCG derivatives used herein can cause slightly aberrant gene expression in the ovaries and produce functional oocytes without nonfunctional oocytes, in comparison with N-eCG-treated ovaries. We also reported the ovulation rate, indicating that deglycosylated R-eCG mutants displayed only 2% nonfunctional oocytes compared to about 20% in N-eCG treated ovaries [27]. We revealed that gene expression profiles differ slightly differ between N-eCG and R-eCG mutants, thus suggesting that the glycan chains play a pivotal role in the ovulation rate and gene expression of ovaries treated with R-eCG compared to N-eCG treated ovaries.
In the "biological process" category, the largest number of deregulated genes included signal transduction (16 proteins; Supplementary Material Figs. 1 and 2). In contrast, the largest number of genes (6 genes) among the 18 "molecular function" categories were present in "proteases." Seven genes were categorized as "molecular function unclassified." We assessed differences in the expression of ovary-specific genes between groups treated with N-eCG Bold genes were adjusted to RT-PCR and qRT-PCR and R-eCG through qRT-PCR analysis. The differences in gene expression were confirmed for six genes. These genes, Tex19.2, Sectm1b, Ctsk, Gpnmb, Sectm1a, and Hsd17β1, were specifically over-expressed in R-eCG-treated ovaries. Among the genes found to be down-regulated on microarray analysis, seven genes were confirmed to be downregulated through qRT-PCR analysis. These differences should be further assessed through a systematic study. Immunohistochemical analysis was conducted to determine the cell type responsible for protein expression in the ovaries. We first confirmed that 17β-hydroxysteroid dehydrogenase type 1 (17β-HSD1), which catalyzes the conversion of estrone to estradiol, is primarily localized in ovarian granulosa cells after R-eCG injection. Our results are consistent with those of another study, showing that 17β-HSD1 is expressed in the placenta and ovarian granulosa cells [31]. Some studies have reported that ADAM TS-1 is induced in granulosa cells in preovulatory follicles after LH administration [32] and is important for follicular development and the maintenance of normal granulosa cell layers in follicles [33]. Endothelin-2 (Edn2), a potent vasoconstrictive peptide, is abundantly produced by preovulatory follicles during ovulation at the onset of CL formation [34]. Edn2 directly induces vascular endothelial growth factor in granulosa cells of the bovine ovary [35] and ovulation and CL formation are significantly impaired in Edn2-knockout mice [34]. Oviduct-specific glycoprotein (OVGP1), also known as oviductin, is the major nonserum glycoprotein in the oviduct fluid during fertilization and increases the number of fertilized eggs and promotes The other eight genes were down-regulated in R-eCG-treated ovaries. The microarray results were compared and further analyzed via RT-PCR and qRT-PCR. Actb served as an endogenous control early embryonic development [36]. Furthermore, Edn2 and OVGP1 are primarily localized in the ovaries after N-eCG administration. These results suggest that 17β-HSD1, ADAMTS-1, Edn2, and OVGP1 perform pivotal functions as ovulatory factors during ovulation in mice. Although the differences in MCR and gene expression profiles were expected, the glycosylated glycans were the absolutely cause of these changes, indicating that R-eCG produced from CHO cells is highly modified with mannose, with small amount of sialic acid attached to the end of glycosylated chains. Therefore, mammalian cells should be developed to investigate the role of modified glycosylation on the expression levels of R-eCG.
Conclusions
This study shows that R-eCG produced from CHO-S cells has high biological activity in vivo. Although R-eCG disappears rapidly from the circulation immediately after its administration, R-eCG displayed a wide range of biological activity including the induction of ovulation and oogenesis. We showed that 63 ovarian genes were differentially expressed between N-eCG-treated and R-eCG-treated ovaries. Differential expression patterns of these genes were further confirmed through RT-PCR, qRT-PCR, and immunostaining analyses. Further systematic analyses are required to investigate the role of these DEGs in ovulation. Nevertheless, our results suggest that these differences may have resulted from the nature of the hormone, including oligosaccharides and folding. Therefore, R-eCG derivatives can potentially be produced at high levels with high biological activity to induce oocytes in vivo.
Materials
The oligonucleotides used herein were synthesized by Genotech (Daejon, Korea). The restriction enzymes and the DNA ligation kit were purchased from Takara (Tokyo, Japan). The QIAprep-Spin plasmid kit was acquired from QIAGEN, Inc. (Hilden, Germany). The Lumi-Light western blot kit was purchased from Roche (Basel, Switzerland), and the pcDNA3 mammalian expression vector, FreeStyle CHO-S suspension cells, PNGase F, FreeStyle MAX transfection reagent, and TRIzol reagent were obtained from Fig. 6 Localization of HSD17β1, ADAMTS1, EDN2, and OVGP1. The ovaries were induced to superovulate with 10 IU of either N-eCG or R-eCGβ/α, followed by 10 IU of hCG after 48 h. Representative immunohistochemical analyses for HSD17β1, ADAMTS1, EDN2, and OVGP1 were conducted with antisera, and a goat anti-rabbit IgG antibody (secondary antibody). According to the microarray and qRT-PCR results, HSD17β1 was upregulated in the R-eCG-treated ovaries, while the other three proteins (ADAMTS1, EDN2, and OVGP1) were up-regulated in the N-eCG-treated ovaries. Immunohistochemistry was performed with a Vectastain ABC kit. Scale bar = 200 μm Invitrogen (Carlsbad, CA, USA). The PMSG ELISA kit was purchased from DRG International, Inc. (Mountain side, NJ, USA), Centriplus Centrifugal Filter Devices from Amicon Bio separations (Merck, Billerica, MA, USA), and an anti-myc antibody and antibodies against HSD17β1, ADAMTS1, Edn2, and OVGP1 were purchased from Santa Cruz Biotechnology (Dallas, TX, USA). Disposable spinner flasks were obtained from Corning Inc. (Corning, NY, USA). A peroxidase-conjugated anti-mouse IgG antibody was obtained from Bio-Rad (Hercules, CA, USA), whereas pregnant-mare serum gonadotropin (eCG; ≥1000 IU/mg, G4877) and hCG (5000 IU, CG5) from Sigma-Aldrich Corp. (St. Louis, MO, USA), as were all other reagents. PMSG and hCG reagents are generally used to induce the ovulation in mice as we have previously reported [15]. All protocols complied with the approved Guidelines for Animal Experiments of Hankyong National University, Korea, and were approved by the Animal Care and Use Committee of Hankyong National University, Korea (Approval ID: 2015-8).
Construction of tethered eCG gene
cDNA encoding the tethered R-eCGβ/α was inserted into the mammalian expression vector pcDNA3, as previously reported [6]. The same method was used to insert a myc tag (Glu-Gln-Lys-Leu-Ile-Ser-Glu-Glu-Asp-Leu) between the first and second amino acid residues of the β-subunit of the mature eCG protein [27]. Plasmid DNA was then purified and sequenced in both directions through automated DNA sequencing to ensure correct inserts. The cloned expression vector of tethered eCG was designated as pcDNA3-eCGβ/α, as previously reported [6]. A schematic representation for tethered R-eCG β/α is shown in Fig. 1.
Cell culture and generation of tethered R-eCG
In CHO-S cells, the tethered R-eCG expression vector was transfected into CHO-S cells using the FreeStyle MAX reagent (Invitrogen; Carlsbad, CA, USA) transfection method, in accordance with manufacturer's instructions. Flasks were placed on an orbital shaking platform, rotating at 120-135 rpm at 37°C in a humidified atmosphere of 8% CO 2 in air. On transfection, the cell density was approximately 1.2-1.5 × 10 6 cells/mL. The plasmid DNA (260 μg) and a FreeStyle™ MAX Reagent complexes were gradually added to 200 mL of medium containing cells. Finally, culture media were sampled on day 9 after transfection and centrifuged to eliminate cell debris. The supernatant was sampled and stored at − 20°C until the assay. The samples were concentrated using a Centricon filter or by freeze-drying and mixed with PBS.
Quantification of R-eCG proteins
R-eCG protein was quantified with the PMSG ELISA kit (DRG Diagnostics; Mountain side, NJ, USA). Briefly, the PMSG standard and R-eCG samples (100 μL) were dispensed into the wells of a plate coated with the antibody and incubated for 60 min at ambient temperature. After rinsing thrice, 100 μL of anti-PMSG antibody conjugated with horseradish peroxidase was added into each well and incubated for 60 min. The plate wells were rinsed five times, and substrate solution (100 μL) was added and incubated for 30 min at ambient temperature. Finally, 50 μL of a stop solution was added and the absorbance was measured at 450 nm, using a microtiter plate reader Cytation™ 3 (BioTeK, Winooski, VT, USA). The average absorbance of each standard was plotted against its corresponding concentration in a linear-log graph. We determined the average absorbance of each sample to determine the corresponding PMSG value via simple interpolation through a standard curve. Given the low expression level of R-eCG in CHO-S cells, samples were concentrated approximately 40~50 times for application of R-eCG in MCR and superovulation. Concentrated R-eCG samples were diluted about 40 times for standard curve calibration. Samples for the standard curve were 0. 25, 100, 200, 400, 800 mIU/mL. Finally, 1 IU was considered 100 ng in accordance with the conversion factor of the suggested assay protocol.
Detection of R-eCGs via western blotting and enzymatic digestion of N-linked oligosaccharides
Concentrated sample media were subjected to SDS-PAGE (12.5% resolving gel) via the Laemmli method [37]. After SDS-PAGE, the proteins were electro-transferred to a nitrocellulose membrane for 2 h in a Mini Trans-Blot Electrophoretic Transfer cell. To eliminate all N-linked oligosaccharides, the R-eCG sample was incubated for 24 h at 37°C with PNGase F [2 μL of the enzyme (2.5 U/mL) per 30 μL of sample+ 8 μL of 5× reaction buffer]. The reaction was terminated by boiling for 10 min, and the samples were subjected to SDS-PAGE and the proteins were electro-transferred on to a membrane. After blocking the membrane with a 1% blocking reagent for 1 h, followed by probing with monoclonal anti-myc antibody (1: 5000) for 2 h, the membrane was washed and probed with a secondary antibody (peroxidase-conjugated anti-mouse IgG antibody 37.5 μL/15 mL of the blocking solution) for 30 min. The membrane was then incubated for 5 min with 2 mL of the Lumi-Light substrate solution and X-ray film was exposed to the membrane for 1-10 min.
Assessment of the MCR of N-eCG and R-eCG
Each animal was intravenously administered 5 IU of N-eCG or R-eCG through the tail vein to determine the 50% dose for the induction of superovulation. Blood was sampled from the transorbital vein in heparinized microhematocrit tubes. Blood samples were obtained at 10 and 30 min and at 1, 2, and 24 h and centrifuged for 15 min at 5000 rpm at 4°C, and plasma eCG concentrations were estimated using the PMSG ELISA kit (DRG Diagnostics).
Animals
The MCRs of N-eCG and R-eCG were determined in 8week-old male B6D2F1 (C57BL6 × DBA/2) 12 mice. The female 16 mice (8-week-old B6D2F1; Oriental Bio, Gyeonggi, Korea) were superovulated by injection of 10 IU of N-eCG and R-eCG and then 10 IU hCG after 48 h. The ovarian tissues were sampled at 13 h after hCG administration. All mice were euthanized with carbon dioxide inhalation, and the ovarian tissues were collected at the end of study. All the mice were raised in an environment with the temperature of 23 ± 1°C with regular 12 h light/dark cycle and allowed free access to feed and water. The animals were processed according to the Animal Care and Use Committee procedure. The protocol was approved by the Committee on Ethics of Animal Experiments at the Hankyong National University (Approval ID: 2015-8).
Microarray analysis
Total RNA was extracted from ovaries, using TRIzol reagent, and purified using RNeasy columns in accordance with the manufacturers' protocols, as previously described [15].
1) Labeling and purification
Total RNA was amplified and purified using an Ambion Illumina RNA amplification kit (Ambion, Austin, TX, USA) in accordance with the manufacturer's instructions to obtain biotinylated cRNA. Briefly, 550 ng of total RNA was reverse-transcribed into cDNA with a T7 oligo(dT) primer. Second-strand cDNA was synthesized, transcribed in vitro, and labeled with biotin-NTP.
3) Raw data preparation and statistical analysis
Raw data were extracted using the software provided by the manufacturer (Illumina Genome Studio v.2009.2) and filtered using a detection p-value of < 0.05 (a signal value higher than that of the background was necessary to set the detection p-value of < 0.05). The selected gene signal value was logarithmically transformed and normalized to XYZ. Comparative analysis between two groups was conducted on the basis of the p-value evaluation, via the local-pooled-error test (adjusted Benjamini-Hochberg false discovery rate had to be< 5%) and the fold-change. Biological ontology-based analysis was performed for the Panther database (http://www. pantherdb.org). Furthermore, genes whose expression levels differed by > 2-fold were considered differentially expressed between the two groups.
Immunohistochemistry
Immunohistochemical staining of ovarian samples was performed using the Vectastain ABC kit (Vector Laboratories, Burlingame, CA, USA) in accordance with the manufacturer's instructions. The samples were fixed in 10% neutral-buffered formalin at ambient temperature for 24 h and washed with PBS. Thereafter, the fixed samples were rehydrated in graded ethanol (EtOH) solutions (3 min each in 100% 2×; 95% 1×; 70% 1×; and 50% 1×) and embedded in paraffin. Paraffin-embedded tissues were sectioned into 8-μm-thick sections, which were then mounted onto poly-L-lysine-coated slides. The slides were boiled in 10 mM sodium citrate for 10 min and chilled on ice for 20 min. Thereafter, they were washed with 3% hydrogen peroxide for 10 min and blocked for 1 h at ambient temperature. The slides were incubated with the primary antibody and then with an anti-rabbit IgG antibody (secondary antibody). Finally, the slides were immunostained using the ABC detection kit in accordance with the manufacturer's instructions and stained with DAB. The slides were examined under a Nikon Eclipse TE-2000-E confocal microscope (Tokyo, Japan).
Data and statistical analysis
Data are presented as mean ± SEM values. One-way ANOVA with Tukey's multiple-comparison test was conducted to compare the results between samples. In figures, the superscripts indicate significant differences between groups (p < 0.05). | 6,368.6 | 2020-05-26T00:00:00.000 | [
"Biology"
] |
The generalized density approach in progressive enlargement of filtrations
Motivated by credit risk modelling, we consider a type of default times whose probability law can have atoms, where standard intensity and density hypotheses in the enlargement of filtrations are not satisfied. We propose a generalized density approach in order to treat such random times in the framework of progressive enlargement of filtrations. We determine the compensator process of the random time and study the martingale and semimartingale processes in the enlarged filtration which are important for the change of probability measures and the evaluation of credit derivatives. The generalized density approach can also be applied to model simultaneous default events in the multi-default setting
Introduction
In the credit risk analysis, the theory of enlargement of filtrations, which has been developed by the French school of probability since the 1970s (see e.g.Jacod [14], Jeulin [17], Jeulin and Yor [18]), has been systematically adopted to model the default event.In the work of Elliot, Jeanblanc and Yor [10] and Bielecki and Rutkowski [2], the authors have proposed to use the progressive enlargement of filtrations to describe the market information which includes both the ambient information and the default information.Let (Ω, A, P) be a probability space equipped with a reference filtration F = (F t ) t≥0 representing the default-free market information.Let τ be a positive random variable which represents a default time.Then the global market information is modelled by the filtration G = (G t ) t≥0 , which is the smallest filtration containing F such that τ is a G-stopping time and G is called the progressive enlargement of F by τ .In this framework, the reduced-form modelling approach has been widely used where one often supposes the existence of the G-intensity of τ , i.e. the G-adapted process (λ t , t ≥ 0) such that (1 1 {τ ≤t} − τ ∧t 0 λ s ds, t ≥ 0) is a G-martingale.The process λ, also called the default intensity process, plays an important role in the default event modelling.More recently, in order to study the impact of default events, a new approach has been developed by El Karoui, Jeanblanc and Jiao [8,9] where we suppose the density hypothesis: the F-conditional law of τ admits a density with respect to a non-atomic measure η, i.e. for all θ, t ≥ 0, P(τ ∈ dθ|F t ) = α t (θ)η(dθ) where α t (•) is an F t ⊗ B(R + )-measurable function.
The density hypothesis has been firstly introduced by Jacod [14] in a theoretical setting of initial enlargement of filtrations and is essential to ensure that an F-martingale remains a semimartingale in the initially enlarged filtration.There exist explicit links between the intensity and density processes of the default time τ , which establish a relationship between the two approaches of default modelling.In particular, the density approach allows us to analyze what happens after a default event, i.e. on the set {τ ≤ t}, and has interesting applications in the study of counterparty default risks.We note that, in both intensity and density approaches, the random time τ is a totally inaccessible G-stopping time which avoids F-stopping times.
In this paper, we consider a type of random times which can be either accessible or totally inaccessible.The motivation comes from recent sovereign credit risks where the government of a sovereign country may default on its debt or obligations.Compared to the classical credit risk, the sovereign default is often influenced by political events.For example, the euro area members and IMF agree on a 110-billion-euro financial aid package for Greece on 02/05/2010 and another financial aid program of 109-billion-euro on 21/07/2011.The eventuality of default-or-not of the Greek government depends on the decisions made at the political meetings held at these dates.Viewed from a market investor, there are important risks that the Greek government may default at such critical dates.
From a mathematical point of view, the existence of these political events and critical dates means that the probability law of the random time τ admits atoms.Hence the sovereign default time can coincide with some pre-determined dates.In this case, the classical default modelling approaches, in particular, both intensity and density models are no longer adapted.To overcome this difficulty, we propose to generalize the density approach in [8].More precisely, we assume that the F-conditional law of τ contains a discontinuous part, besides the absolutely continuous part which has a density.This generalized density approach allows to consider a random time τ which has positive probability to meet a finite family of F-stopping times.
There are related works in the credit risk modelling.In Bélanger, Shreve and Wong [1], a general framework is proposed where reduced-form models, in particular the widely-used Cox process model, can be extended to the case where default can occur at specific dates.In Gehmlich and Schmidt [12], the authors consider models where the Azéma supermartingale of τ , i.e. the process (P(τ > t|F t )) t≥0 contains jumps (so that the intensity does not exist) and develop the associated HJM credit term structures and no-arbitrage conditions.Carr and Linetsky [3] and Chen and Filipović [4] have studied the hybrid credit models where the default time depends on both a first-hitting time in the structural approach and an intensity-based random time in the reduced-form approach.The generalized density model that we propose can also be viewed as hybrid credit model.
In this paper, we first investigate, under the generalized density hypothesis, some classical problems in the enlargement of filtrations from a theoretical point of view.
In particular, we deduce the compensator process of the random time τ , which is discontinuous in this case.This means that the intensity process does not necessarily exist.We also characterize the martingale processes in the enlarged filtration G and EJP 20 (2015), paper 85. obtain the G-semimartingale decomposition for an F-martingale, which shows that in the generalized density setting, the (H')-hypothesis of Jacod (c.f.[14]) is satisfied, that is, any F-martingale is a G-semimartingale.The main contribution of our work is to focus on the impact of the discontinuous part of the F-conditional law of τ and study the impact of the critical dates on the random time.
For applications of the generalized density approach, we study the immersion property, also called the H-hypothesis in literature, i.e., any F-martingale is a G-martingale, which is commonly adopted in the default modelling.We give the criterion for the immersion property to hold in this context.The immersion property is in general not preserved under a change of probability measure.As one consequence of the characterization results of G-martingales, we study the change of probability and the associated Radon-Nikodym derivatives.Another application consists of a model of two default times where the occurrence of simultaneous defaults is possible.In the literature of multiple defaults, it is often assumed that two default events do not occur at the same time.The generalized density framework provides tools to study simultaneous defaults, which is important for researches of extremal risks during a financial crisis.
The paper is organized in the following way.In section 2, we make precise the key assumption of the generalized density approach and deduce some basic results.
The Section 3 is devoted to the compensator of τ and we conduct the additive and multiplicative decompositions of the Azéma supermartingale.In Section 4, we study the decomposition of G-semimartingales in the generalized density framework by carefully dealing with the discontinuous part of the F-conditional distributions of τ .Section 5 concludes the paper with applications to the immersion property and a model where double default is allowed.
Generalized density hypothesis
In this section, we present our key hypothesis, the generalized density hypothesis, and some basic properties.Let (Ω, A, F, P) be a filtered probability space where F = (F t ) t≥0 is a reference filtration satisfying the usual conditions, namely the filtration F is right continuous and F 0 is a P-complete σ-algebra.We use the expressions O(F) and P(F) to denote the optional and predictable σ-algebras associated to the filtration F respectively.Let τ be a random time on the probability space valued in [0, +∞].Denote by G = (G t ) t≥0 the progressive enlargement of F by τ , defined as G t = s>t σ({τ ≤ u} : u ≤ s) ∨ F t , t ≥ 0. Let (τ i ) N i=1 be a finite family of F-stopping times.We assume that the F-conditional distribution of τ avoiding (τ i ) N i=1 has a density with respect to a non-atomic σ-finite Borel measure η on R + .Namely, for any t ≥ 0, there exists a positive F t ⊗ B(R + )-measurable random variable (ω, u) → α t (ω, u) such that, for any bounded Borel function h on R + , one has where H denotes the event In particular, the case where the function h is constant and takes the value 1 leads to the relation Remark 2.1.The above assumption implies that the random time τ avoids any Fstopping time σ such that P(σ = τ i < ∞) = 0 for all i ∈ {1, • • • , N }.Namely for such F-stopping time σ one has P(τ = σ < ∞) = 0.However, the random time τ is allowed to EJP 20 (2015), paper 85.
coincide with some of the stopping times in the family (τ i ) N i=1 with a positive probability.
Moreover, without loss of generality, we may assume that the family (τ i ) N i=1 is increasing.
In fact, if we denote by (τ (i) ) N i=1 the order statistics of (τ i ) N i=1 , then The following proposition shows that we can even assume that the family (τ i ) N i=1 is strictly increasing until reaching infinity.Proposition 2.2.Let (τ i ) N i=1 be an increasing family of F-stopping times.Then there exists a family of F-stopping times (σ i ) N i=1 which verify the following conditions: (a) For any ω ∈ Ω and i, j Proof.The case where N = 1 is trivial.We prove the result by induction and assume Moreover, for k ∈ {2, . . ., N }, we define Note that for each i k, the set E i is F τ k -measurable.Therefore , where τ N +1 = ∞.One also has, for any ω, Moreover, the strict inequality τ 1 < τ 2 holds on {τ 1 < ∞}.Then by the induction hypothesis on (τ 2 , • • • , τ N +1 ), we obtain the required result.
For purpose of the dynamical study of the random time τ , we need the following result which is analogous to [14, Lemme 1.8].
Proposition 2.3. There exists a non
for any bounded Borel function h.
Proof.Let (α t (•)) t≥0 be a family of random functions such that the relation (2.1) holds for any t ≥ 0. We fixe a coutable dense subset D in R + such as the set of all nonnegative rational numbers.If s and t are two elements in D, s < t, there exists a positive Note that for any bounded Borel function h, one has Hence there exists an η-negligeable set B t,s such that α s (u) = α t|s (u) P-a.s. for any We then obtain that α s (u) = E[ α t (u)|F s ], P-a.s. for any u ∈ R + and all elements s, t in D such that s < t.Moreover, since B is still η-negligeable, for any t ∈ D, By [7, Theorem VI.1.2],for any θ ∈ R + , there exists a P-negligeable subset E θ of Ω such that, for any ω ∈ Ω \ E θ , the following limits exist: Moreover, we define Then α(θ) is a càdlàg F-martingale, and therefore the random function α(•) is O(F) ⊗ B(R + )-measurable.We then deduce the proposition from (2.3).
We summarize the generalized density hypothesis as below.In what follows, we always assume this hypothesis.Assumption 2.4.We assume that there exists a non-atomic σ-finite Borel measure η on R + , a finite family of F-stopping times (τ i ) N i=1 such that P( for any bounded Borel function h. Remark 2.5. 1) The condition P(τ i = τ j < +∞) = 0 is not essential in Assumption 2.4.
In fact, for an arbitrary finite family of F-stopping times (τ i ) N i=1 , if we suppose that the random time τ has an F-density α(•) with respect to η avoiding (τ i ) N i=1 , then by Remark 2.1 and Proposition 2.2, we can always obtain another family of F-stopping times (σ i ) N i=1 such that P(σ i = σ j < +∞) = 0 for i = j and that τ has an F-density avoiding the family Ωi is a totally inaccessible F-stopping time.Note that τ also admits an F-density avoiding the family (τ i , τ i ) N i=1 and the F-density is still α(•).Therefore, without loss of generality, we may assume in addition that each F-stopping time τ i is either accessible or totally inaccessible.
We compute firstly the conditional distribution of τ 1 .For any 0 ≤ t < θ, one has So τ satisfies Assumption 2.4 with the generalized density We also consider the case where τ may reach infinity and denote by p ∞ a càdlàg version of the F-martingale Note that Assumption 2.4 implies that, for any t ≥ 0, We define (2.5) Note that G t = P(τ > t|F t ), P-a.s..The process G = (G t ) t≥0 is a càdlàg F-supermartingale and called the Azéma supermatingale of the random time τ .Moreover, for any bounded Borel function h, one has (2.6) The following result shows that any G t -conditional expectation can be computed in a decomposed form, which can be viewed as a direct extension to [8, Theorem 3.1].
EJP 20 (2015), paper 85. and (2.9) Proof.We may assume that Y T (•) is non-negative without loss of generality so that the following proof works without discussing the integrability (as a byproduct, we can prove the case where Y T (•) is non-negative without any integrability condition).The integrability of Y T (τ ) results from the finiteness of each term in the following formulas.
The first term on the right-hand side of (2.7) is obtained as a consequence of the so-called key lemma in the progressive enlargement of filtration ([10, Lemma 3.1]): Note that which implies (2.8).For the second term in (2.7), we shall prove by verification.Let Z t (•) be a bounded F t ⊗ B(R + )-measurable random variable, one has Note that Moreover, Therefore we obtain we find another random function Y T (•) such that Y T (τ ) = Z T (τ ), P-a.s.Moreover, Y T (•) satisfies the integrability conditions as in the proposition.
Compensator process
In the credit risk literature, the compensator and the intensity processes of τ play an important role in the default event modelling.The general method for computing the compensator is given in [18] by using the Doob-Meyer decomposition of the Azéma supermartingale G.In [8], an explicit result is obtained under the density hypothesis (see also [11] and [20]) where the compensator is absolutely continuous and the intensity exists.In this section, we focus on the compensator process under the generalized density hypothesis.
We introduce the following notations.For any i ∈ {1, • • • , N }, denote by D i the process (1 1 {τi≤t} ) t≥0 .We use the expression Λ i to denote the F-compensator process of D i , that is, Λ i is an increasing F-predictable process such that M i := D i − Λ i is an F-martingale with M i 0 = 0. Note that, if τ i is a predictable F-stopping time, then Λ i = D i and M i = 0.The following result generalizes [8, Proposition 4.1 (1)].Here the Azéma supermartingale G is a process with jumps and needs to be treated with care.
Proposition 3.1.The Doob-Meyer decomposition of the Azéma's supermartingale G is given by G t = G 0 + M t − A t , where A is an F-predictable increasing process given by Proof.For any t ≥ 0, let The process C is F-adapted and increasing.It is moreover continuous since η is assumed to be non-atomic.Note that by (2.5), The process Moreover, one has where One can also rewrite [D i , p i ] as Note that [Λ i , p i ] is an F-martingale since Λ i is F-predictible and p i is an F-martingale (see [7,VIII.19]).Moreover M i , p i is an F-predictable process such that [M i , p i ] − M i , p i is an F-martingale.Therefore we obtain that is a predictable process, and G + A is an F-martingale.
In the following, we denote by Λ F the process which is an F-predictable process.It is well known that the G-compensator of τ is Λ G = (Λ F τ ∧t ) t≥0 (c.f.[18, Proposition 2]).We observe from Proposition 3.1 that the compensator Λ F is in general a discontinuous process and may have jump at the stopping times (τ i ) N i=1 , so that the intensity does not exist in this case.A similar phenomenon appears in the generalized Cox process model proposed in [1] where the default can occur at specific dates.A general model where the Azéma supermartingale is discontinuous has also been studied in [12].
We can treat general F-stopping times (τ i ) N i=1 , (see Remark 2.5).In case they are predictable F-stopping times, Λ i t = 1 1 {τi≤t} and M i t = 0, so the last term on the right-hand side of (3.1) vanishes and we obtain In case where {τ i } N i=1 are totally inaccessible F-stopping times, then τ is a totally inaccessible G-stopping time.In this case, the compensator process of τ is continuous.
A similar result can be found in Coculescu [5].
i=1 are totally inaccessible F-stopping times, then τ is a totally inaccessible G-stopping time.Proof.Since τ i is totally inaccessible, the F-compensator process Λ i is continuous.Moreover, M i , p i is the compensator of the process [D i , p i ] = (1 1 {τi≤t} ∆p i τi ) t≥0 and hence is continuous (see [7,VI.78] and the second part of its proof for details).Therefore the process A in the Doob-Meyer decomposition of G is continuous since η is nonatomic.This implies that the F-compensator Λ F of τ is continous.Thus the process (1 1 {τ >t} + Λ F τ ∧t ) t≥0 is a uniformly integrable G-martingale, which is continuous outside the graphe of τ , and has jump size 1 at τ .Still by [7, VI.78], τ is a totally inaccessible G-stopping time.
There exists a multiplicative decomposition of the Azéma supermartingale.By [13,Corollary 6.35], G exp(Λ F ) is an F-martingale, which is the Doléans-Dade exponential of the F-martingale M such that In the following, we give the explicit multiplicative decomposition under the generalized density hypothesis as a general case of [8, Proposition 4.1 ( 2 where L is an F-martingale solution of the stochastic differential equation dM s , t ≥ 0. (3.4) Proof.On the one hand, for any t ≥ 0, if there exists u ∈]0, t] such that ∆Λ F u = 1, making the right-hand side of (3.3) vanish, then we have p (1 1 [[0,τ [[ ) u = 0, which implies that G u = 0.It is a classic result that G is a non-negative supermartingale which sticks at 0 (c.f.[21, page 379]), then G t = 0. On the other hand, if ∆Λ F = 1, we denote by M F the F-martingale defined as is the solution of
Martingales and semimartingales in G
In this section, we are interested in the G-martingales.We first characterize the Gmartingales by using F-martingale conditions, as done in [8,Proposition 5.6].However, under the generalized density hypothesis, we shall distinguish necessary and sufficient EJP 20 (2015), paper 85. conditions although they have similar forms at the first sight.In fact, the decomposition of a G-adapted process is not unique, and the martingale property can not hold true for all modifications.This makes the necessary and sufficient conditions subtly different.Proposition 4.1.Let Y G be a G-adapted process, which is written in the decomposed form Y G t = 1 1 {τ >t} Y t + 1 1 {τ ≤t} Y t (τ ), t ≥ 0, P-a.s.where Y is an F-adapted process and Proof.We first treat the martingale case.By Proposition 2.7, the conditional expectation can be written as the sum of t equals the sum of the following terms and Since the measure η is non-atomic, one has By the condition (a), it is equal to where we use again the fact that η is non-atomic.Therefore, by the condition (b), one can rewrite the term (4.1) as which vanishes thanks to the condition (c).Moreover, by condition (a) and (b), we can rewrite (4.2) as which also vanishes.
In the following, we treat the local martingale case.Assume that the processes in (a)-(c) are local F-martingales, then there exists a common sequence of F-stopping times which localizes the processes (a)-(c) simultaneously.Thus it remains to prove the following claim: assume that σ is an F-stopping time such that Note that the processes α(θ) and p i are all F-martingales for θ ≥ 0, i ∈ {1, . . ., N }.
Therefore, the conditions (1) and ( 2) imply the corresponding conditions in replacing α σ (θ) and p i,σ by α(θ) and p i respectively.We then deduce the following conditions (2) leads to (2').Finally, by (2.5), we obtain that Generalized density approach is an F-martingale and hence is also an F-martingale.Hence the condition (3) leads to (3').By the martingale case of the proposition proved above, applied to the process ).The proposition is thus proved.
In view of Proposition 4.1, it is natural to examine whether the converse is true.
However, given a G-adapted process Y G , the decomposition Y G t = 1 1 {τ >t} Y t +1 1 {τ ≤t} Y t (τ ), P-a.s. is not unique.For example, if one modifies arbitrarily the value of Y (θ) on n i=1 {τ i = θ} for θ in an η-negligiable set, the decomposition equality remains valid.However, the F-martingale property of 1 1 ∩ N i=1 {τi =θ} Y (θ)α(θ) cannot hold for all such modifications.In the following, we prove that, if Y G is a G-martingale, then one can find at least one decomposition of Y G such that Y and Y (.) satisfy the F-martingale conditions in Proposition 4.1.Proposition 4.2.Let Y G be a G-martingale.There exist a càdlàg F-adapted process Y and an O(F) ⊗ B(R + )-measurable processes Y (•) which verify the following conditions : and such that, for any t ≥ 0 one has Y G t = 1 1 {τ >t} Y t + 1 1 {τ ≤t} Y t (τ ), t ≥ 0, P-a.s.
Proof.The process Y G can be written in the following decomposition form where Ỹ and Ŷ (•) are respectively F-adpated and F ⊗ B(R which implies This equality shows that Ŷ (τ i which implies Ŷt (θ)α t (θ)η(dθ).Let D be a countable dense subset of R + .For any θ ∈ R + and all s, t ∈ D such that θ ≤ s ≤ t, let Ŷt|s The equality (4.5) shows that there exists an η-negligeable Borel subset B of R + such that Ŷt|s (θ)α s (θ) = E[ Ŷt (θ)α t (θ)|F s ] provided that θ ∈ B. By the same arguments as in the proof of Proposition 2.3, we obtain a càdlàg F ⊗ B(R + )-adapted process Y (•) verifying the conditions (a) and (b), and such that Y G t = 1 1 {τ >t} Ỹt + 1 1 {τ ≤t} Y t (τ ), P-a.s..For the last condition (c), for any t ≥ 0, let The process Y F is an is also an F-martingale.Let Z be a càdlàg version of this F-martingale and let which is a càdlàg version of the process Ỹ .The equality Y G t = 1 1 {τ >t} Y t + 1 1 {τ ≤t} Y t (τ ), P-a.s.still holds.The result is thus proved.
In the theory of enlargement of filtrations, it is a classical problem to study whether an F-martingale remains a G-semimartingale.The standard hypothesis under which this assertion holds true is the density hypothesis (c.f.[14, Section 2] in the initial enlargement and [8, Proposition 5.9], [15,Theorem 3.1] in the progress enlargement of filtrations).We now give an affirmative answer to this question under the generalized density hypothesis, which provides a weaker condition.
where U G is a G-local martingale and M is the F-martingale defined as One has G = M − Ā.We denote by where the second equality comes from the fact p i τi∧t = p i t and the third equality comes from (2.4).Since Y is an F-martingale, (Y τi∧t − Y t )p i τi∧t t≥0 is an F-martingale for any i = 1, • • • , N .Hence we obtain the result.
However, the condition (a) may not hold in general since we are allowed to change the value of α t (θ) for θ in a η-negligible set without changing the F-conditional law of τ .
The immersion property is not necessarily preserved under a change of probability measure.In the following, we study the change of probability measures based on the previous results of G-martingale characterization, similar as in [8, Section 6.1].Firstly, we deduce relevant processes under a change of probability measure, as a generalization of [8,Theorem 6.1].Secondly, we show that to begin from an arbitrary probability measure (where the immersion is not necessarily satisfied), we can always find a change of probability which is invariant on F, and the immersion property holds under the new probability measure.Proposition 5.2.Let Y G be a positive G-martingale of expectation 1, which is written in the decomposed form as Y where Y and Y (•) are positive processes which are respectively F-adapted and F ⊗ B(R + )-adapted.Let Q be the probability measure such that dQ/dP = Y G t on G t for any t ≥ 0. Then the random time τ satisfies Assumption 2.4 under the probability Q, and the (F, Q)-conditional density avoiding (τ i ) N i=1 and the (F, Q)-conditional probabilty of τ = τ i < ∞ can be written in the following form In particular, one has α Q θ (θ) = α θ (θ) on N i=1 {τ i = θ} and p i,Q τi = p i τi .Moreover, by Proposition 5.1 we obtain that (F, G) satisfies to the immersion property under the probability Q.The result is thus proved.
A two-name model with simultaneous default
The density approach has been adopted to study multiple random times in [9], [16] and [19].In the classical literature of multi-default modelling, one often supposes that there is no simultaneous defaults, notably in the classical intensity and density models.For example, if we suppose that the conditional joint F-density exists for two default times, then the probability that the two defaults coincide equals to zero (see [9]).However, during the financial crisis where the risk of contagious defaults is high, it is important to study simultaneous defaults whose occurrence is rare but will have significant impact on financial market.The generalized density approach provides mathematical tools to study simultaneous defaults.The idea consists of using a recurrence method.
We shall apply previous results to this two-default model.Let F 1 be the progressive enlargement of F by the random time σ 1 .Then σ 1 is an F 1 -stopping time.The filtration F 1 will play the role of the reference filtration in the previous sections.
Proposition 4 . 3 .
Any F-local martingale U F is a G-semimartingale which has the following decomposition: | 6,549.2 | 2015-01-01T00:00:00.000 | [
"Mathematics"
] |
Novel relationships between some coordinate systems and their effects on mechanics of an intrinsically curved filament
The fixed frame, Frenet-Serret frame and generalized Frenet-Serret frame are commonly used coordinate systems in the study of a filament or a moving rigid body. In terms of Eulerian angles, we derive some relations in these frames and apply these relations to find some significant results. Especially, we find the angle between the normal of centerline of a filament and the line of nodes which is the crossover between the horizontal plane of fixed frame and normal plane of centerline. We prove that the general solution of a set of nonlinear differential equations represents a circular helix or the corresponding filament has a unique helical ground-state configuration. We show that the effective description of a planar filament depends on the value of its torsional modulus. Finally, we find the expression of energy for a three-dimensional intrinsically curved filament when its cross-section area vanishes, and show that under an applied force the finite intrinsic curvature alone can induce a discontinuous transition in extension.
Introduction
Filamentary structure is ubiquitous in the world and it has wide applications in engineering and science so that the study on the conformal and mechanical properties of a filament has a long history dating back to Euler and Lagrange [1,2]. The interest in filaments has been increasing owing to that recent experiments and theories have revealed its relevance to microscopic objects such as carbon nanotubes [3] and biomaterials .
The configuration of a filament is determined by the shape of its centerline and the twist of its cross-section around the centerline. The centerline of a filament is a curve passing through the center of cross-section, so that it can be described by the Frenet-Serret equations introduced in differential geometry [30,31]. The Frenet-Serret equations define a local Cartesian coordinate system and we refer to it as Frenet-Serret frame. The Frenet-Serret frame can also be used to describe the trajectory of a particle or the center-of-mass of a many-body system. However, a curve has vanishing cross-section so that the Frenet-Serret frame cannot describe the twist of crosssection of a filament. Therefore, when the twist is significant, it is commonly to use another local frame called generalized Frenet-Serret frame [4][5][6][7][8][9][10] to study a filament. Meanwhile, to study the motion of a rigid body, in classic mechanics it is more convenient to use Eulerian angles [32] so introduces Euler body-frame which can be identified to the generalized Frenet-Serret frame [4][5][6][7][8][9][10]. Moreover, to specify the position of a filament completely it also needs a fixed frame. The relationships between these frames are therefore very important. In particular, Eulerian frame introduces a line-of-nodes which is a line perpendicular to the tangent of centerline of the filament and lies on the horizontal plane of the fixed frame. Note that the normal of centerline is also perpendicular to tangent, we can ask an intriguing question, i.e., would the line-of-nodes coincide with the normal of centerline? The answer to this question helps us to find some significant results as we will report in this paper. Moreover, when the cross-section area of filament tends to zero, independent variables are reduced from 3 (Eulerian angles θ, f and ψ) to 2 (curvature and torsion, or θ and f), what is ψ in this limit? The answer to this question helps us to set up a three-dimensional (3D) elastic model to study the effect of a finite intrinsiccurvature (IC) on the mechanical property of a filament.
Description of a filament
2.1. Frenet-Serret frame Using arclength s as variable, the position vector of a curve in a 3D fixed Cartesian coordinate system is written r (s)=(x(s), y(s), z(s)). The unit tangent of the curve is defined as º º ( )˙( ) s s d ds t r r , and symbol '˙' represents the derivative with respect to s. The curvature, κ(s), of the curve is given by where n is the unit normal. It requires k 0 in 3D case and = | | t 1 results int n. We can define further a binormal unit vector by b=t×n, so that t, n and b form a local right-hand Cartesian coordinate system, i.e., the Frenet-Serret frame, and where τ is the torsion and represents the rotational rate of n around t. The plane perpendicular to t is called normal plane. Clearly, both n and b lie on the normal plane. We can then find κ and τ from Equations (1)-(2) are called Frenet-Serret equations. In differential geometry, it has been shown that once κ and τ are known, the shape of a curve is completely determined [30,31]. The necessary and sufficient condition to have a planar curve is τ=0 at arbitrary s. On the other hand, a general helix is defined as a curve in which t makes a constant angle with a fixed direction. This condition is equivalent to that κ/τ is s−independent. When both κ and τ are s−independent, the helix is called a circular helix.
Generalized Frenet-Serret frame
Frenet-Serret equations cannot describe the twist of cross-section of a filament so that we need to generalize them. For a filament with a finite cross-section, we can still represent its centerline by r, but it is more convenient to describe its configuration by a triad of unit vectors , t 1 and t 2 are oriented along the principal axes of cross-section [1,[4][5][6][7][8]. The orientation of the triad is given by the solution of generalized Frenet equations [1,8,20,28], where ò ijk is the antisymmetric tensor, and w w w w = + + t t t 1 1 2 2 3 3 is a vector in which ω 1 and ω 2 are components of curvature so k w w = + 2 1 2 2 2 , and ω 3 is the twist rate. t, t 1 and t 2 form another local coordinate system and we call it as generalized Frenet-Serret frame. t 1 and t 2 are coplanar with n and b, so that we can rotate the Frenet-Serret frame counterclockwise an angle α around the common axis t to the generalized Frenet-Serret frame, i.e., a a a a = + =-+ ( ) t n b t n b cos sin , sin cos . 5 1 2 From equations (2)-(5), it is straightforward to show w a w a w t a = = = +˙( ) c c sin , cos , It follows that in general w t ¹ 3 , or s−independent κ and ω 3 do not result in a helical centerline automatically. Clearly α represents the distortion of cross-section around the centerline.
Note that independent variables in two frames are different. There are two independent variables, κ and τ, in Frenet-Serret frame; but there are three independent variables, ω 1 , ω 2 and ω 3 , in generalized Frenet-Serret frame. Equation (6) provides the relations between these variables.
In Eulerian angles
In mechanics, it is convenient to use Eulerian angles to describe the motion of a rigid body [32]. The Eulerian angles give relations between a fixed coordinate system and a body-frame rigidly embedded in the rigid body. The same ideas can be readily applied to describe the configuration of a filament by using the generalized Frenet frame as the body-frame and replacing time used in a moving body by s [5][6][7][8][9][10]29], since intuitionally the configuration of a filament looks like the trajectory of a flat rigid thin plate. The line common to x−y plane in the fixed frame and t 1 −t 2 plane is called the line-of-nodes, shown as the green-line in figure 1, where x is the unit vector along green-line. The Eulerian angles are generated by three rotations to move the fixed frame into the body-frame [32], as shown in figure 1. The first rotation is through f, the angle between x-axis and line-ofnodes, about z-axis so to move x-axis into the line of nodes. The second rotation is through an angle θ about the line-of-nodes, to move z-axis to t-axis. The final rotation is through an angle ψ about t-axis to move y-axis into t 2 -axis. Replacing s by time, w becomes the angular velocity of a rigid body, so we have [32] w q y f y q = +˙( ) sin sin cos , 7 1 w q y f y q = -˙( ) sin cos sin , 8 2 w qf y = +˙( ) cos . 9 3 In this convention, in fixed frame we can write [32] f sin sin , cos sin , cos , 10 cos cos cos sin sin , sin cos cos cos sin , sin sin , 11 1 and t 2 =t×t 1 . Using equations (3) and (10), we can expressṫ, n, κ and τ in Eulerian angles explicitly, as Equation (14) suggests that for a planar curve, it is always possible to choose f=0 so τ=0 by a proper rotation of the fixed frame.
The advantage of using Eulerian angles to study a filament is that it satisfies = | | t 1 i automatically, so we do not need to consider it as a constraint in many calculations and it may simplify calculations greatly.
3D model
The energy for a 3D filament with a finite IC κ 0 , a finite intrinsic-twist rate τ 0 and under a uniaxial force (along where k 1 , k are bending rigidities and k 3 the twisting rigidity or torsional modulus, respectively. κ 01 and κ 02 are components of κ 0 and k k k = + 2 . In this model the end at s=0 is fixed at r(0)=0 and f is applied to another end at s=L. L is the total contour length and is a constant, i.e., we consider an inextensible filament. When κ 0 =τ 0 =0, it reduces into the WLC model.
When κ 0 =τ 0 =0, replacing s by time, becomes the Lagrangian of a rigid body with one point fixed and under a constant force, and k i becomes principal moment of inertia [32].
3 0 3 as a special potential energy, though it is difficult to find an analogy in the dynamics of a rigid body, owing to the complex of ω i .
2D models
In 2D case, we can write q q º = ( ) t r cos , sin with r=(y, z). There are two elastic models for a 2D intrinsically curved filament [18,[25][26][27]. The energy of the model 1 is [18,21,22,[25][26][27] where q º -˙ẏz zÿ¨is the signed curvature, κ 0 is the intrinsic signed curvature, and q = |˙| |˙| t . In this model, q and κ 0 can be either positive or negative. Meanwhile, the energy of the model 2 is [27] In this model, q |˙| is curvature. These two models have very different mechanical properties [21,22,27]. Especially, model 1 shows a discontinuous change in z(L) with a varying f [21,22]; but there is no such a transition in model 2 [27].
Some important relations in variables 4.1. A relation between α and Eulerian angles
Equation (6) provides a relation between Frenet-Serret and generalized Frenet-Serret frames. However, using equation (6) to calculate α is uneasy in many cases so that some other ways to calculate α are useful. A related intriguing problem is that after a comparison between equations (6) and (9), we can ask would α=ψ? Since α−ψ is the angle between n and x, the problem is equivalent to finding the relation between n and x. To find this relation we can combine equations (6), (9) and (14) so obtain where β is a s−independent constant. From equations (6)- (7), we find further w q y f y q = = +0 sin sin cos 1 or y q qf = -˙ṫan sin when α=0. It follows β=π/2 and Equation (19) indicates that α=ψ requires either f = 0 or q = 0. f = 0 gives a planar centerline and q = 0 gives a helical centerline. In more general cases a y ¹ , or n does not coincide with x.
Relations in variables when the cross-section area vanishes
Next we consider another unsolved question, i.e., when the cross-section area vanishes, the filament becomes a curve so that we need only two variables, θ and f, to describe it, in this case what are the limiting forms of ψ?
Could we simply take ψ as a s−independent constant or even ψ=0 in equations (7)- (9) and (15)? Physically in this case we can ignore the distortion of cross-section and set α=0 so ω 3 =τ, and then from equation (19) it follows Therefore, in general ψ cannot be s−independent. ψ is s−independent, being 0 or −π/2, only when either q = 0 or f = 0, i.e., the filament becomes a helix or a planar curve. Equations (14), (19) and (20) are very useful and we will apply them to solve some intriguing problems for an intrinsically curved filament.
GSC of the general model when f=0
A very important question on the general 3D model, given by equation (15), is whether it has a unique GSC when f=0. In this case, the GSC of the model is given by = 0 so ω 1 =κ 01 , ω 2 =κ 02 and ω 3 =τ 0 . Therefore, from equations (7) We should stress again that in general ω 3 =τ 0 is not equivalent to τ=τ 0 . For instance, when κ 0 =0, the general solution of equations (21)-(23) is simply a straight twisted cylinder so that τ=0 but a t = 0 . When k ¹ 0 0 , the general solution of equations (21)-(23) is unavailable in literature, and there is not a direct way to find the general solution since these equations are nonlinear. Moreover, even if we could find the general solution, it should have three undetermined parameters and be rather complex, so that it would be difficult to justify the uniqueness of corresponding configuration. Note that it is not a necessity to have a unique GSC for every physical model. A typical example is that the model given by equation (17) has uncountable GSCs when f=0 [22]. Fortunately, using equation (19) we can solve this problem readily in an indirect way. Substituting equations (21)-(23) into equation (19), we obtain a = 0. It follows ω 3 =τ=τ 0 from equation (6). In other words, when f=0 and k ¹ 0 0 , the model takes a circular helix as its unique GSC. Note that this conclusion is valid even if k, k 1 and k 3 are s−dependent.
We can find a particular solution of equations (22) . ψ and ḟ are also sindependent in the solution. All other forms of solution can be obtained by a coordinate transformation from this solution. When τ 0 =0, we get θ=π/2 so the GSC becomes a circle. Moreover, κ 02 =0 leads to θ=0 so gives a straight centerline.
Note that the above conclusions are based on k 3 >0 which leads to equation (23). When k 3 =0, there is not equation (23) so that we have only two equations, equations (21)- (22), but three unknowns, consequently there are infinite numbers of GSCs. In fact, when k 3 =0, the filament can be arbitrarily twisted so that ẏ ( ) s or ȧ ( ) s can take arbitrary values. Moreover, from equations (21)- (22) we know that θ is coupled to f and ψ when k ¹ 0 0 , so that infinite possibility of ψ results in infinite numbers of GSC. We should address that the method used here should also be instructive in solving other nonlinear differential equations. This is because κ and τ determine the shape of a curve completely [30,31], and to find κ and τ so identify the solution curve in many cases may be much easier than to find a solution directly.
κ 01 and κ 02 when the cross-section area vanishes
Another intriguing question is that when the cross-section area vanishes, how to assign k 01 and κ 02 in 0 ? To answer this question we can substitute equation (20) into equations (7)-(8) so obtain ω 1 =0 and ω 2 =κ, as well as κ 01 =0 and k k = 2 . Note that since ω 1 and ω 2 can be either positive or negative, there is not special reason to limit the signs of κ 01 and κ 02 , or κ 02 =±κ 0 in this case. Together with ω 3 =τ, the energy density in equation (15) is reduced into This is a natural result for a rigid curve since κ and τ are two independent variables for its shape. In 3D case, we only need to take sign '−' in the first term of D 3 since it requires κ>0 and κ 0 >0. However, to be consistence with 2D models, retaining the sign '+' in equation (25) is still necessary, as we will explain later.
From equations (14) and (25), we know that when k 3 >0 the model requires continuous q and ḟ implicitly to prohibit q ¥ or f ¥ since otherwise it results in infinite τ and E 3D at discontinuous points. The same rule should be also applied to the more general model given by equation (15) to avoid undefined τ, α and E 3D .
From 3D model to 2D models
Next we consider another unsolved question: what is the limit form of equation (25) in 2D case? A planar curve requires τ=0 at arbitrary s and from equation (14) it is equivalent to f = ( ) s 0. Consequently, from equations (13) and (25), at first glance the model 2 would be correct. But we have to be careful for the conclusion since these two models have quite different mechanical properties. The key point is that the model 2 allows to change sign of q ( ) s at some points [27] or it allows a discontinuous q ( ) s . However, from the arguments in the last paragraph of section 5.2, we know that k 3 >0 prohibits a discontinuous q ( ) s . We can confirm this conclusion by looking at a simple case when κ 0 is independent of s. In this case, it is straightforward to find that when f=0 the model 1 has a unique circular GSC but the model 2 has infinite numbers of GSCs [21,22,27]. Note that in section 5.1 we have shown that the corresponding 3D model has a unique circular GSC when f=0 and k 3 >0, agreeing with that obtained from the model 1. Therefore, when k 3 >0 the 3D model must be reduced into the model 1.
It is also clear that when κ 0 keeps the same sign for all s, we only need to take sign '−' in the first term of D 3 . However, note that in 2D case κ 0 can be either positive or negative, to maintain a reasonable GSC and be consistent with the 2D model, we need to take the sign '+' in the first term of D 3 when κ 0 <0. It is also equivalent to adopt the model 1.
In contrast, if k 3 =0 or the effect of ω 3 is negligible, the 3D model should be reduced into the model 2. Ignoring the effect of torsion completely is impractical for a macroscopic filament, but may be a good approximation for some biopolymers. The model 2 is analogous to a free rotating chain model for biopolymers.
In conclusion, exactly two planar elastic models of a filament come from a same 3D model. The model 1 corresponds to taking k 3 >0 in D 3 but the model 2 corresponds to taking k 3 =0. In other words, though k 3 or the torsional term in energy disappears in 2D models, the effective description of a planar filament still strongly depends on the value of k 3 .
Shape equations and boundary conditions for a 3D system with vanishing cross-section
Equation (25) can help clarify the role of a finite κ 0 to the transition in z(L) for a 3D filament. Using standard variational technique to extremize E 3D , we obtain the shape equations and six BCs Equation (26) is a 4th-order nonlinear differential equation in θ and equation (27) is a 2nd-order nonlinear differential equation in ḟ . Full expressions of both equations are lengthy so we do not present them here. To derived these expressions are rather easy by using some softwares, such as Mathematica.
Note that at both s=0 and s=L using implies that ḟ ( ) 0 , ḟ ( ) L , θ(0), θ(L), q ( ) 0 and q ( ) L are free, or adopt the hinged-hinged BCs. The hinged-hinged BCs is also the most commonly used BC in force experiment for a biopolymer. In contrast, one can adopt fully fixed BCs so take ḟ ( ) 0 , ḟ ( ) L , θ(0), θ(L), q ( ) 0 and q ( ) L as constants. We can also choose partial fixed and partial free BCs. Results obtained from different BCs usually require different constrains so are basically different.
Since the model 1 in 2D case corresponds to having a finite k 3 in 3D case and under an applied force the model 1 shows a discontinuous jump in z(L), we focus on the case with k 3 >0 henceforth.
To find the general solution of shape equations is an arduous task and so does to show exactly that there is a unique solution though physically in most cases the solution should be unique. However, we can examine two simple solutions. The first possible solution is a planar curve and the second is a helix.
Equations (26)-(29) are valid even when k 0 , τ 0 , k and k 3 are s−dependent. But in this work we focus on the effect of a finite κ 0 so that for simplicity we assume κ 0 , τ 0 k and k 3 are all s−independent henceforth. Moreover, without loss of generality we also assume τ 0 0. k 3 is irrelevant in this case owing to t = 0. Equation (32) recovers the shape equation and BC for model 1 [21,22,26]. The corresponding 2D system has been studied thoroughly [21,22,26] and the main conclusions are: in ground state z(L) can undergo a multiple-step discontinuous transition. The transition is accompanied by unwinding loops, and the critical force reaches a limit quickly with decreasing number of loops [21]. Owing to symmetry, it is natural that free of twist the 3D filament has the same behavior as that of the 2D filament.
Helix solution
On the other hand, for a helix solution, substitute a s−independent θ into shape equations, we obtain 0 . It follows that ḟ is also s-independent and so does for τ since equation (14) gives t qf =ċos . Therefore, the filament forms a circular helix. Note that either k 3 =0 or k=0 leads to f=0 or no helix at a finite f. other is from P 4 to P 3 . It means that in practice the discontinuous change in z r will be more likely to occur at P 3 with increasing | | F , or occur at P 2 with decreasing | | F . The hysteresis indicates that the phase transition is first order. These behaviors are similar to that reported in [9] and [10], except for that in this work the transition requires the fixed BC, but in [9] and [10] the transition occurs at the hinged-hinged BC.
In another limit, i.e., when ¥ k r or ¥ k 3 , we find exactly that when k > 32 243 r , it has always g(z r )>0 so that the helix is stable. In contrast, when k < 32 243 r , g(z r ) has always two zeros, so z r also shows a discontinuous change in z r , similar to that presented in figure 3.
In the model, g(z r ) can have three real zeros when k > k 1 r r . A typical example with k r = 20 and κ r =0.28 is shown in figure 4. In this case, the first zero of g(z r ) indicates that a helix with z r <z s is unstable, and the second and third zeros of g(z r ) gives a critical regime similar to that presented in figure 3. We should also note that the critical regime of F becomes much narrower and the change of z r (Δz) becomes quite large in this case, owing to the larger linear regime at low F. For instance, in figure 4 the critical regime is ΔF=0.03298−0.03037=0.00261 and Δz≈0.6 which is about twice of z r before transition. Such a small ΔF and large Δ z means that the hysteresis may be negligible so that the filament may work as a sensitive switch or sensor. Moreover, the sharp transition of z r occurs at F>0 in this sample.
Our above discussions reveal that g(z r ) can have not any real zero, have one real zero, have two real zeros and have three real zeros, and different number of zeros results in different mechanical behaviors. Consequently, the phase diagram for the helix can be divided into four regimes, as shown in figure 5. In regime I, g(z r ) has not any real zero, so z r varies smoothly between 0 to 1. In regime II, g(z r ) has only one real zero and g(0)<0, so that the helix with a small z r is unstable, but z r of the stable helix increases smoothly to 1 with increasing F. In regime III, g(z r ) has two real zeros and g(0)>0, and there exists a critical regime bounded by z r in which z r has a first order transition with varying F, similar to that shown in figure 3. We also note that the regime III has two disconnected parts separated by regime I, as shown in figure 5. Finally, in regime IV, g(z r ) has three real zeros and g(0)<0, so that the helix with a small z r is unstable, and there is a critical regime, bounded by the remained two real zeros, in which z r shows a discontinuous change, similar to that shown in figure 4. From figure 5, we also find that at either 1.35>k r >0.492 or k > > 4 3 32 243 r , there is not discontinuous change in z r . In conclusion, in this subsection we show exactly that an intrinsically curved 3D filament has a planar GSC only when τ 0 =0 and it recovers 2D results. It follows that a finite IC alone is indeed enough to induce a discontinuous change in z(L) even in 3D case. Moreover, with hinged-hinged BC, the filament can form a helix only when f=0. In contrast, with fixed BC and proper parameters, the filament can form a helix and z r of the helix can subject to a first order transition. These results suggest that a finite κ 0 is crucial for the conformal and mechanical properties of a filament. In contrast, a finite τ 0 is not a necessity for the phase transition.
Conclusions and discussions
In summary, in terms of Eulerian angles, we derive some useful relations in variables between some coordinate systems. Especially, we find the angle between the normal of centerline of a filament and the line of nodes defined in Eulerian frame.
We apply these relations to explore some physical problems. We show exactly that free of external force and torque, the ground-state configuration of a three-dimensional filament is unique and is a circular helix when the filament has a finite torsional modulus. A byproduct is that the general solution of a set of nonlinear differential equations represents a circular helix. We derive the expression of energy for a three-dimensional intrinsically curved filament when its cross-section area vanishes. We show that the effective description of a planar filament depends on the way to take the planar limit. More exactly, it dependents on whether the torsion becomes zero at a fixed torsional modulus or the torsion and torsional modulus go to zero simultaneously. It also helps explain why the two planar models have very different mechanical properties. We find that when the intrinsic torsion is zero, the three-dimensional filament has the same ground-state configuration as that of the two-dimensional one, so that a finite intrinsic-curvature alone is indeed enough to induce the first order transition in extension. We show that under a finite external force to form a helix it requires the fixed boundary conditions, and present the phase diagram for such a helix. We reveal that with proper parameters, a helical filament can subject to a first order phase transition in extension. The transition can be sensitive to applied force, so that such a filament may be used as a sensitive switch or sensor. Our results confirm that the finite intrinsic-curvature plays the key role in the phase transition but a finite intrinsic torsion is not so crucial for the transition.
In this work we do not consider the effect of temperature so that our results on filament can be applied only to the systems with negligible thermal effect, such as a macroscopic filament or a very rigid biopolymer. However, ignoring thermal effect may be inappropriate for many biopolymers. Indeed, in two-dimensional systems it has been shown that a finite temperature can suppress transition [21,22]. Note that in general the temperature results in a much stronger thermal fluctuation in a three-dimensional system than that in a twodimensional case, we can expect that the temperature may depress the transition further so lead to different results. Therefore, the effect of a finite temperature on a three-dimensional system deserves a further study.
Our results are not only helpful for the understanding of the mechanical property of a semiflexible biopolymer or a filament, but also useful for the study of dynamics of a rigid body. The new relations in variables for different frames may be also significant in differential geometry. Finally, the method to find the helix solution for nonlinear differential equations may be instructive to other mathematical or physical problems. | 7,162.4 | 2018-03-05T00:00:00.000 | [
"Physics",
"Engineering"
] |
Deriving Polarization Properties of Desert-Reflected Solar Spectra 1 with PARASOL Data
. One of the major objectives of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) is to conduct highly accurate spectral observations to provide an on- orbit inter-calibration standard for relevant Earth- observing sensors with various channels. To 3 calibrate an Earth- observing sensor’s measurements with the highly accurate data from the CLARREO, errors in the measurements caused by the sensor’s sensitivity to the polarization 5 state of light must be corrected. For correction of the measurement errors due to the light’s 6 polarization, both the instrument’s dependence to on the incidentce’s polarization stateus and the 7 on-orbit knowledge of the polarization state of light as a function of observed scene type, 8 viewing geometry, and solar wavelength, are required. In this study, an algorithm for deriving 9 the spectral polarization state of solar light from desert is reported. The desert/bare land surface 10 is assumed to be composed of two types of areas: Fine sand grains with diffuse reflection 11 (Lambertian non-polarizer) and quartz-rich sand particles with facets of various orientations 12 (specular-reflection polarizer). The adding-doubling radiative transfer model (ADRTM) is 13 applied to integrate the atmospheric absorption and scattering in the system. Empirical models 14 are adopted in obtaining the diffuse spectral reflectance of sands and the optical depth of the dust 15 aerosols over the desert. The ratio of non-polarizer area to polarizer area and the angular 16 distribution of the facet orientations are determined by fitting the modeled polarization states of 17 light to the measurements at 3 polarized channels (490, 670, and 865 nm) by the Polarization and 18 Anisotropy of Reflectances for Atmospheric Science instrument coupled with Observations from 19 a Lidar (PARASOL). Based on this physical model of the surface, the desert-reflected solar 20 light’s polarization state at any wavelength in the whole solar spectra can be calculated with the 21 ADRTM.
Abstract. One of the major objectives of the Climate Absolute Radiance and Refractivity 1 Observatory (CLARREO) is to conduct highly accurate spectral observations to provide an on-2 orbit inter-calibration standard for relevant Earth-observing sensors with various channels. To 3 calibrate an Earth-observing sensor's measurements with the highly accurate data from the 4 CLARREO, errors in the measurements caused by the sensor's sensitivity to the polarization 5 state of light must be corrected. For correction of the measurement errors due to the light's 6 polarization, both the instrument's dependence to on the incidentce's polarization stateus and the 7 on-orbit knowledge of the polarization state of light as a function of observed scene type, 8 viewing geometry, and solar wavelength, are required. In this study, an algorithm for deriving 9 the spectral polarization state of solar light from desert is reported. The desert/bare land surface 10 is assumed to be composed of two types of areas: Fine sand grains with diffuse reflection 11 (Lambertian non-polarizer) and quartz-rich sand particles with facets of various orientations One of the major objectives of the Climate Absolute Radiance and Refractivity Observatory 5 (CLARREO) (Wielicki et al., 2013) is to conduct highly accurate spectral observations to 6 provide an on-orbit inter-calibration standard for relevant Earth-observing sensors with various 7 channels. To calibrate an Earth-observing sensor's measurements with the highly accurate data 8 from the CLARREO, errors in the measurements caused by the sensor's sensitivity to the 9 polarization state of light must be corrected (Lukashin et al., 2013;Sun and Lukashin, 2013;Sun 10 et al., 2015). For correction of the measurement errors due to light's polarization, both the 11 instrument's dependence on the incidentce's polarization stateus and the on-orbit knowledge of 12 the polarization state of light as a function of observed scene type, viewing geometry, and solar 13 wavelength, are required. Empirical polarization distribution models (PDMs) (Nadal and Breon, 14 1999;Maignan et al., 2009) (Deschamps et al., 1994) may be used to correct radiometric bias (Lukashin et al., 2013). But 17 these can only be done at 3 or 4 solar wavelengths (i.e. 490, 670, and 865 nm) at which the 18 PARASOL has reliable polarization measurements. Since the CLARREO is designed to measure 19 solar spectra from 320 to 2300 nm with a spectral sampling of 4 nm (Wielicki et al., 2013), 20 which has potential to inter-calibrate space-borne sensors at nearly all of the solar wavelengths 21 (Sun and Lukashin, 2013), the PDMs for the inter-calibration applications should be made as 22 functions of every sampling wavelength of the CLARREO. Due to strong dependence of solar 23 light's polarization on wavelength (Sun and Lukashin, 2013), the applicability of empirical 1 PDMs based on only 3 or 4 channels of PARASOL polarization measurements will be very 2 limited. In our previous studies (Sun and Lukashin, 2013;Sun et al., 2015), polarized solar 3 radiation from the ocean-atmosphere system is accurately modeled. Because the refractive index 4 of water at solar spectra is well known (Thormählen et al., 1985), Sun and Lukashin (2013) 5 actually can produce the PDMs for ocean-atmosphere system at any solar wavelength. However, 6 it is still a difficult problem to obtain spectral PDMs for other scene types. For scene types other 7 than water bodies, although many studies have been conducted (Coulson et al., 1964;Egan, 8 1968;Egan 1969;Wolff, 1975;Egan, 1970;Vanderbilt and Grant, 1985;Tamalge and Curran, 9 1986;Grant, 1987), no reliable surface reflection matrix such as that based on the Cox and Munk 10 (1954; 1956) wave slope distribution models for oceans is available. For scene types dominated 11 by diffuse reflection, like fresh snow, grass lands or needle-leaf trees/bushes, this may not be a 12 serious problem. But for scene types like desert, snow crust/ice surfaces, or even broad-leaf trees, 13 specular reflection is still significant (like what happens at the ocean surface), polarization of the 14 reflected light can be very strong, thus needs to be accurately accounted for. For example, the 15 PARASOL data show that the degree of polarization (DOP) of reflected light from clear-sky 16 desert can be ~30%. The broad-leaf trees also can reflect solar light with a DOP of ~70%. For a 17 sensor with a sensitivity-to-polarization factor of only ~1%, its measurement for light with a 18 DOP of ~30% and ~70% will have relative errors of ~0.3% and ~0.7%, respectively, solely due In this study, an algorithm for obtaining the spectral polarization state of solar light from desert 6 with the PARASOL data is developed. The method of deriving the polarization state of solar 7 light from desert-atmosphere system at any wavelength with the PARASOL-measured polarized 8 radiances at 490, 670, and 865 nm is reported in Section 2. Numerical results and discussions are 9 presented in Section 3. Summary and conclusions are given in Section 4. The polarization of reflected light is related to the surface roughness (Wolff, 1975) and to the 12 size of reflecting elements (Egan, 1970). In this study, the desert/bare land surface is assumed to Assuming desert is a stationary sand "ocean" with quartz-rich sand-particle facets as specular-4 reflection "waves" and Lambertian reflection sand grains as "foams", we can adopt the formula 5 given in Cox and Munk (1956) for where denotes the roughness parameter of the desert surface, and where n is the real refractive index of the silica and λ denotes the solar wavelength in μm. In this 2 study, to account for the impurity absorption in the quartz-rich sands, we assume the imaginary 3 part of the sand refractive index to be 0.02. This assumption of sand's imaginary refractive index 4 could have a small effect on the modeled total reflectance from the desert, but has little effect on 5 the DOP and AOLP calculations.
6 However, f, , and RL must be obtained from observations for desert. In this study, the spectral is the reflectance of the Lambertian desert area, which 18 as the first element of the RL, is linearly extrapolated to the CLARREO solar wavelength limit of 19 320 nm. The empirical spectral reflectance of desert from this process is displayed in Fig. 1 is applied for calculation of the Stokes parameters of the reflected light from the desert- , 2006; 2009; 2013). Two-mode lognormal size distributions (Davies, 1974;Whitby, 1978;16 Reist, 1984;Ott, 1990;Porter and Clarke, 1997)
21
where λ is the solar wavelength in μm. Dust AOD decreases with the increase of wavelength.
22
In this study, the ratio of the non-polarizer area to polarizer area of the desert and the angular However, it is's worth noting here that the errors in the AOLP from the ADRTM due to our 21 assumptions for dust refractive index will only have a minor effect on the polarization correction 22 accuracy. This is due to the fact that the DOPs at these observation angles are very small, and 23 also that the AOLP errors in these observation angles actually will not result in any significant 1 difference in polarization correction, i.e. AOLP = ~0 o and AOLP = ~180 o means the same to the 2 satellite sensor. However, at 670 nm, the PARASOL data for desert show stronger reflectance in 3 the backward-reflecting directions than in the forward-reflecting directions. This is significantly 4 different from the ocean cases. Desert's reflection ofto solar radiation is a complicated 5 phenomenonissue thatwhich is neither Lambertian nor specular-reflection. Thus, our simple 6 approach here shows some difference in reflectance from the data. However, our objective for 7 this study is to accurately model the desert DOP accurately, and to accurately model the desert 8 AOLP accurately when the DOP is not trivial. Such modeling errors in the total reflectance are to 9 be expected and not the concern of this study. Errors in modeling the reflectance is ignorable for 10 this purpose.
11
For an even longer wavelength of 865 nm, Figures 14 to 19 show that, similar to the cases for the | 2,303 | 2015-07-15T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
Lac Operon Boolean Models: Dynamical Robustness and Alternative Improvements
: In Veliz-Cuba and Stigler 2011, Boolean models were proposed for the lac operon in Escherichia coli capable of reproducing the operon being OFF, ON and bistable for three (low, medium and high) and two (low and high) parameters, representing the concentration ranges of lactose and glucose, respectively. Of these 6 possible combinations of parameters, 5 produce results that match with the biological experiments of Ozbudak et al. 2004. In the remaining one, the models predict the operon being OFF while biological experiments show a bistable behavior. In this paper, we first explore the robustness of two such models in the sense of how much its attractors change against any deterministic update schedule. We prove mathematically that, in cases where there is no bistability, all the dynamics in both models lack limit cycles while, when bistability appears, one model presents 30% of its dynamics with limit cycles while the other only 23%. Secondly, we propose two alternative improvements consisting of biologically supported modifications; one in which both models match with Ozbudak et al. 2004 in all 6 combinations of parameters and, the other one, where we increase the number of parameters to 9, matching in all these cases with the biological experiments of Ozbudak et al. 2004.
Introduction
The lac operon in Escherichia coli is a paradigmatic example of a genetic regulation system involving the interaction of positive and negative regulatory molecules. The system encodes a pathway for lactose catabolism that is hierarchically controlled by glucose availability, and it has been reported to exhibit a bistable behavior, since the catabolic genes are either uninduced (OFF) or induced (ON) in a single cell, depending on previous activation history and specific extracellular lactose and/or glucose concentrations [1,2]. The lac operon has been used as a model system for gene regulation since its initial description in [3], where the concept of an operon was first introduced. Since the molecular components of the regulatory system have been well characterized, it is an excellent candidate for global analysis and modeling. The operon includes three genes involved in the uptake and catabolism of lactose and/or structurally similar sugars. The first gene, lacZ, encodes for the enzyme β-galactosidase, which converts lactose to allolactose and to subsequent catabolic intermediates; lacY, the second structural gene of the pathway, encodes for lactose permease, a membrane transporter for lactose uptake. Finally, the product of gene lacA is an acetyltransferase that takes part in the degradation and excretion of non-lactose sugars that may be misrouted through the lactose degradation pathway.
The fundamental states of the lac operon are described schematically in Figure 1. In brief, when a low level of extracellular lactose occurs, a strong binding of the LacI protein to the operator sequences strongly represses lac gene expression, thus allowing only a very low amount of mRNA to be transcribed (Figure 1a), which can be considered as an OFF state. However, even at this low expression level, a few LacY permease molecules can reach the cytoplasmic membrane and take up any eventual lactose that appears outside the cells. The presence of low amounts of LacZ in the cytoplasm leads to transformation of this lactose into allolactose, the true natural inducer of the lac operon. When this happens, allolactose interacts allosterically with the LacI repressor protein, decreasing its affinity for operator sequences and releasing the negative control of the lacZ promoter, rendering the system into an ON state (Figure 1b). Positive feedback loops, like the one described here for the lac system, have been proposed as a typical property of bistable systems [4]. On the other hand, an essential feature of the regulatory system is that high extracellular glucose concentrations can also inhibit the expression of lac genes, a phenomenon traditionally considered as a type of catabolite repression that is directly mediated by the glucose molecule. Such a repression has usually been assumed to involve the transcriptional activator protein Crp (cAMP receptor protein), which also takes part in lac gene expression as a cAMP-dependent positive regulator (Figure 1). In the repression model, glucose uptake by the cells produces a drastic reduction in the intracellular cAMP levels, negatively affecting Crp activity and abolishing lac gene expression, which would explain the metabolic preference of glucose over lactose in E. coli [5]. An additional explanation of the catabolic hierarchy of glucose is given by the inducer exclusion effect, which involves glucose-mediated inhibition of lactose uptake, by means of a direct reduction of LacY permease activity [6]. This would greatly contribute to the regulatory effect of glucose by preventing the inducer molecule to reach the LacI repressor.
Bistability is normally defined as a condition where a system can respond to the same external signal or input in two different manners, depending on the internal state and/or the previous history of the system. However, true bistability requires the system to switch in an all-or-none way between alternative fixed points, without the appearance of intermediate states or a "quantitative" cellular response. Although a quantitative continuous response to the presence of lactose, or the gratuitous inducer thiomethyl-β-D-galactoside (TMG), was initially shown in experiments measuring the average response of whole culture populations, containing several millions of cells, observations with individual bacteria showed unequivocally that each cell responds in a discrete manner, switching alternatively among the OFF and ON states [1,2], with virtually no evidence of intermediate responses.
This shows that bistability can be a characteristic of the lac system. Using genetically engineered E. coli reporter cells, gratuitous inducers and sophisticated experiments, the authors in [7] showed a discrete hysteretic response of lac gene expression in individual bacteria when pre-incubated in the presence or absence of extracellular lactose and then transferred to fresh medium with different lactose concentrations. Their results showed that, depending on the cell's previous history, the response profile to lactose in the fresh medium varied significantly within a specific concentration range, while outside its limits, cells would respond independently of their previous incubation conditions: Any concentration lower than 3 µM would not lead to lacZ induction, whereas any concentration higher than 30 µM would induce the reporter gene. However, most values within the 3-30 µM window were found to induce lacZ expression only for lactose preincubated cells. This window of bistability was very clearly defined in the absence of extracellular glucose but when this sugar was present, the concentration window moved towards higher lactose concentrations (see Figure 2c in [7]).
Thus, the above all-or-none and bistable behavior that characterizes the lac system makes it suitable for discrete modeling. In particular, Boolean models have been used successfully to analyze genetic regulatory systems on numerous occasions [8][9][10]. They are useful for the analysis of large regulatory networks when mechanistic details underlying are scarce, insufficiently defined, or even controversial, as it has been discussed that, in such models, the interaction type (inhibition or activation) and network topology are enough for capturing dynamics of gene networks [11,12], without the need of estimating parameters, thereby reducing model complexity. For example, they are being used extensively to represent interactions revealed by high throughput sequencing technologies in the context of systems biology [13,14]. Although some of these assumptions make them less applicable for certain biological processes, they have proved very valuable for predicting the behavior of genes during bacterial metabolism [15] or in the context of interspecies interactions [16]. Even more, other researchers have been able to reproduce bistability for eukaryotic cell differentiation [17], surfactant production in bacteria [18] and microbial signaling [19] using Boolean models.
In this context, the authors of [20] modeled the lac regulation system using a Boolean network that was able to reproduce the bistability of the system under certain but not all of the conditions described in [7]. Figure 2 displays a scheme of this model, where G e and (L e , L em ) are parameters that account for different concentration levels of glucose and lactose, respectively. This allows to assign specific values as inputs in order to predict the final outcome of the network in terms of fixed points (steady states) that are consistent with the operon being ON, OFF, or in a bistable condition. When the nodes are updated in a parallel regime, the model can reproduce the experimental results in [7] when extracellular glucose is low or absent. However, the predicted outcome of the model differs from the actual behavior of the lac system when lactose and glucose concentrations are high, a feature that limits the usefulness of this initial model for representing lactose fermentation by E. coli. [20] (left) and its local Boolean functions (right). A solid (dashed) edge represents an activation (inhibition). We use the red box to differentiate the parameters of the model (outside) from its variables (inside).
In this work, we first analyze the dynamics of the Boolean models considered but under all different updating schemes in order to know how robust they are in the sense of whether or not new attractors appear. In general, this is a difficult task because the number of different updates schemes associated to a network grow exponentially with its size [21]. Fortunately, there are currently mathematical tools that can significantly reduce this analysis [22]. Secondly, we propose improvements for the performance of the model considered by a rational redesign of the specific Boolean functions in order to predict the operon behavior in each of the sugar combinations tested experimentally in [7], including combined scenarios of low, medium and high concentrations of glucose and lactose. Furthermore, we have taken into account current biological data, regarding catabolic repression and inducer exclusion mechanisms, to bring function definitions up to date.
The paper is organized as follows; Section 2 summarizes the basic mathematical concepts and definitions used along the manuscript, Section 3 contains the most relevant aspects taking into account of [20], in Section 4 our main mathematical results are presented, in Section 5 alternative improvements for the studied models are proposed. Finally, we discuss on our findings in the conclusions section.
Mathematical Background
A Boolean Network (BN) is a couple (G, F), where G = (V, E) is a finite directed graph named interaction digraph (e.g., Figure 2(left)), V is a set of n nodes (n also called the network size), E ⊆ V × V is the set of edges and F = ( f 1 , ..., f n ) : {0, 1} n → {0, 1} n is a Boolean function composed by n local functions f i : {0, 1} n → {0, 1} such that x ∈ {0, 1} n → x i = f i (x) ∈ {0, 1} (x is called a configuration while the value of the variable x k associated with node k is known as an state). Besides, the local functions f i depends only on variables x j such that (j, i) ∈ E (e.g., Figure 2 (right)).
Among the different ways that the states of a BN can be updated is the family of (deterministic) update schedules [22] defined as functions s : V → {1, ..., n} such that s(V) = {1, ..., m} for some m ≤ n. In particular, the well-known parallel (or synchronous) and sequential (or asynchronous) update schedules are obtained when m = 1 (i.e., all states are updated at the same time) and m = n (i.e., all states are updated at different times but following a predetermined order), respectively. We denote by S n the set of deterministic update schedules that exist for a BN of size n. If the states of a configuration x ∈ {0, 1} n are updated according to a given update schedule s, a new configuration x = F(x) ∈ {0, 1} n will be obtained. The change from x to x can be represented by the arc x → x and is known as transition from x to x . Thus, the 2 n configurations and their respective transitions give rise to the dynamics of the system which can be represented in a state transition graph Furthermore, because it is finite, the dynamics has limit behaviors called attractors, which are of 2 types: , for all j = 0, ..., l − 2, F(x l−1 ) = x 0 and l > 1 is a integer named the length of the limit cycle.
The set of configurations that converge to a specific attractor is called the attraction basin.
The lac Operon Boolean Models to be Considered: Main Aspects and Choice Justification
In [20] the authors proposed different Boolean models explaining the bistability of the lac operon in Escherichia coli. These are BNs that have the following characteristics in common:
1.
All of them have qualitative behaviors that match very well with the experiments performed in [7].
2.
The interaction digraph is composed by nodes that represent mRNA, proteins, and sugars. Its edges represent the type of interaction between the nodes (activation/inhibition).
3.
The dynamic is obtained considering the parallel update schedule.
Next, we will specify which are the models of [20] that we choose. Hence, we will summarize the main points that their authors studied and with which we will work from Section 4 onwards. Finally, we justify our choices.
The Chosen Models of Veliz-Cuba and Stigler 2011: Those without Catabolite Repression
We will consider the alternative model without catabolite repression proposed in [20] which differs from the initial one presented in that work only in the definition of the local function f C ; in the initial one f C = ¬G e while in the alternative model without catabolite repression f C = 1 (see our justification in Section 3.4). In addition, we will also consider its reduced version (without catabolite repression) and, from now on, we will refer to this alternative model without catabolite repression and its reduced version as the original and reduced one, respectively. We show the details of these models in Figures 2 and 3, where we point out that the networks have been drawn "disarming" those of [20] which consider some pairs of nodes as a single node, but the reader can easily check that our networks and the corresponding of [20] are equivalents.
The Dynamics Produced by the Original and Reduced Models
According to [20], the original model ( Figure 2) assumes two concentration of glucose, low (G e = 0) or high (G e = 1), and three concentrations of extracellular lactose, low, medium or high, represented by the pairs of parameters (L e , L em ) = (0, 0), (0, 1) and (1, 1), respectively (the pair (L e , L em ) = (1, 0) has no meaning, therefore is not considered). This gives a total of six different combinations of parameters and, consequently, each one will have its corresponding dynamic. Here, a configuration corresponds to the vector (M, P, B, C, R, R m , A, A m , L, L m ) ∈ {0, 1} 10 and, for purposes of relating the dynamics to the results of [7] (see details in Section 3.3), only its steady states matter, i.e., limit cycles (if they exist) are ignored. Besides, when (M, P, B) = (1, 1, 1) (when (M, P, B) = (0, 0, 0)) it will be said that the operon is ON (OFF). Abusing this notation, henceforth, we will call ON (OFF) to a steady state that verifies the above condition.
Notice that case 1 implicitly covers three of the six combinations of parameters (those obtained when high concentration of glucose is combined with low, medium and high concentration of extracellular lactose, respectively), all three having the same dynamic. The remaining three combinations of parameters are those obtained in cases 2, 3 and 4, each with a different dynamic.
We summarize the four cases in Table 1 and their respective dynamics in Figure 4. Regarding the reduced model (Figure 3), four cases similar to the original one are obtained, which can be summarized using the same Table 1. It should be mentioned that, in this reduced model, a configuration is of the form (M, L, L m ) ∈ {0, 1} 3 , and the operon is ON (OFF) when M = 1 (M = 0). In the Figure 5, we show the dynamics of its four cases.
Notice that although only 3 different dynamics are generated (because its cases 1 and 2, involving 4 out of 6 combinations of parameters, generate a single dynamic), they are still in accordance with the summary of Table 1.
Glucose
Low The bistability of the lac Operon, according to [23], means that it must be a region of bistability, i.e., for a range of parameters, a set of "cells" may have both, OFF and ON, fixed points. In [20], this is accomplished by considering stochasticity in the uptake of the inducer in the following way, considering that glucose is absent (G e = 0) while that extracellular lactose is given as the discretization of a random variable L ∼ N(µ, σ) taken from a normal distribution with µ equal to the mean of {0.6, ..., 2.8} incremented by 0.2 and σ = 0.3 (these values are chosen in order to be consistent with those measured in the biological experiments of [7]). Then, the extracellular lactose (L e , L em ) is given as a function of L defined by Below, we summarize the stochastic experiment carried out in [20]: (1) Starting with the normal distribution N(0.6, 0.3) for L, to generate randomly a set of values for L and calculate for each of them the corresponding value for the pair (L e , L em ). (2) Assuming G e = 0, to find which are the steady states of the dynamic obtained for each value (L e , L em ) of (1), it is simply one of the three possibilities showed in the second column of Table 1; OFF, bistable (i.e., OFF and ON) or ON. The results of this experiment were those that matched with the biological experiments of [7] in 5 out of 6 combinations of parameters described in Section 3.2. In the remaining one, where glucose and lactose concentrations are high (see Table 1), the original model (as well as the reduced one) predicts the operon being OFF while the biological experiments of [7] show a bistable behavior.
Our Justification for the Choice of Models without Catabolite Repression
One of the most relevant outcomes of the Boolean models proposed in [20] is that the presence of glucose at a medium or high concentration would be able to completely repress the expression of the lac mRNA, which is a direct consequence of the inhibition function f C = ¬G e , which aims to take into account one of the proposed aspects of catabolite repression. However, this assumption fails to consider the well-described fact that glucose is not able to shut down LacZ expression in bacteria when the operon has been pre-induced experimentally [24]. Furthermore, the whole concept of glucose repression mediated by a drop in cAMP and subsequent lack of the CRP-cAMP activator complex has been under critical revision [25][26][27][28] based on the following compelling challenges: experimental data indicate that intracellular cAMP concentrations are similar when lactose, glucose, or a mixture of both substrates is present at concentrations lower than 300 µM and that cAMP abundance is only reduced by a factor of 5-8 when external glucose concentrations are above 300 µM [29,30], which is higher than the bistability window explored in [7]. Exogenous addition of cAMP did not abolish glucose-mediated inhibition of lac gene expression, even though the metabolite was shown to enter the cells [31]. On the other hand, when the repressor protein LacI is inactivated by deletion of the lacI gene, no catabolite repression of LacZ is observed, indicating that this process relies solely on the activity of the repressor protein, shutting down transcriptional activity by means of inducer exclusion [32]. Finally, the CRP-cAMP activator complex is also required for the expression of the PtsG glucose uptake transporter [33], which would imply that, if this pathway of catabolite repression were valid, glucose would eventually inhibit its own transport, which has never been observed. This inconsistency with experimental data and the relevant objections discussed above justify our choice of the models without catabolite repression of [20], where f C = 1, to account for the current view of glucose-induced lac repression.
Results: Dynamical Robustness of the Original and Reduced Models
Our goal is to study the dynamics of the original and reduced models not only for the parallel update schedule but for the whole family of deterministic update schedules defined in Section 2. As evidenced in [21], this set grows exponentially with the network size. In particular, its sizes are |S 3 | = 13 and |S 10 | = 102, 247, 563 for the reduced and original models, respectively.
Observe that it is easy to deduce that the steady states OFF and ON obtained in the dynamics of the original, and reduced models will also continue to appear with any other deterministic update schedule because their are known to be invariant under update schemes; however, limit cycles could also appear making the desired modeling less robust. We will analyze this aspect below.
Dynamical Robustness of the Original Model
We begin by giving a short known Lemma that allows us to prove that the dynamics of the first three cases mentioned in Section 3.2 are highly robust for any deterministic update schedule. Proof. It is known that the nodes of an acyclic digraph can be ordered in m layers, being the first and last those that have the root and leaves nodes, respectively, such that the arcs belongs between the i-layer and the j-layer, for 1 ≤ i ≤ m − 1 and i < j. Since there are no loops, the dynamic evolves fixing the states of the nodes, in the worst case, from the first to the last layer. Therefore, the only possible attractor is a fixed point. Proof. Let s be an arbitrary update schedule and (M, P, B, C, R, R m , A, A m , L, L m ) ∈ {0, 1} 10 an initial configuration at time t = 0. We will show that in every case, we can obtain one of the acyclic digraphs without loops showed in Figure 6: Cases 1 and 2. It is easy to check that C = 1 and L = L m = 0, ∀t ≥ 1. This implies that, A = A m = 0, ∀t ≥ 2. Therefore, we have the left digraph of Figure 6. Case 3.
Cases 1, 2 and 3 for the Original Model
Notice that C = L m = 1, ∀t ≥ 1. Therefore, we have the following sequence of implications; A m = 1, ∀t ≥ 2 ⇒ R = 0, ∀t ≥ 3 ⇒ R m = 0, ∀t ≥ 4. Therefore, we have the right digraph of Figure 6. Thus, in every case, Lemma 1 assures that the attractors are only fixed points, i.e., there is no limit cycle. Proposition 1 implies that for the first 3 cases, and whatever be the update schedule, the attraction basin of its dynamics will be the whole space of configurations {0, 1} 10 , as summarized in Table 2. It is easy to see that for an arbitrary update schedule and a initial configuration (M, P, B, C, R, R m , A, A m , L, L m ) ∈ {0, 1} 10 at t = 0, we will have that C = 1 and L = 0, ∀t ≥ 1. This implies that A = 0, ∀t ≥ 2, and we obtain the digraph of Figure 7 that has three cycles; one of length 6 and two of length 5. Therefore, we cannot apply the previous Lemma. We will analyze its dynamical behavior through a detailed study of all its different dynamics by using the algorithms developed in [22] that, roughly speaking, list all the sets of schemes that generate exactly the same dynamics, significantly reducing the number of dynamics to analyze. These algorithms have been shown to be effective in obtaining new and relevant information for the study of concrete biological networks [34,35] as well as in the field of the cellular automata theory [36][37][38][39][40][41].
Simply, we generate one dynamic for each of the above representative update schedules and summarize its results in Table 3. Table 3. Average size of the attraction basin for case 4 of the original model (Section 3.2) and calculated over the set S 10 , FP ⊆ S (set of update schedules whose dynamic have only steady states) and LC ⊆ S (set of update schedules whose dynamic have limit cycles). Its respective sizes are |S 10 |=102,247,563 (100%), |FP|=71,891,966 (70.3%) and |LC|=30,355,597 (29.7%). We can observe that less than 30% of all the dynamics present limit cycles and with a balanced proportion between ON/OFF basins, but they are smaller than those associated to the limit cycles, which are on average 5 times larger than that of ON and OFF, respectively. On the other hand, more than 70% of all the dynamics did not have any limit cycle, i.e., they only had the fixed points OFF and ON (being, on average, about 8 times greater the OFF basin than the ON basin).
In Figure 8 we exhibit one example of the dynamical behavior that may have the original model when the update schedule considered is slightly different from the parallel one. There are, in addition to the fixed points ON an OFF, three limit cycles of length 4 and one of length 2.
Dynamical Robustness of the Reduced Model
Clearly, the dynamical robustness of the reduced model is quick and straightforward to analyze, so we only summarize its results.
Cases 1, 2 and 3 for the Reduced Model
Proposition 2. The dynamics associated with the cases 1, 2 and 3 of the reduced model of Section 3.2 have no limit cycle, whatever the update schedule considered.
The dynamics for the above three cases also are quite robust for any deterministic update schedule, and this is summarized in Table 4. Table 4. Size of the attraction basin associated with the dynamics of cases 1, 2 and 3 for the reduced model (Section 3.2) and considering any of the |S 3 | = 13 possible updates schedules.
Case 4 for the Reduced Model
Now the exhaustive analysis boils down to just 13 possible update schedules and the results are summarized in Table 5. Table 5. Average size of the attraction basin for case 4 of the reduced model (Section 3.2) and calculated over the set S 3 , FP and LC (see definitions in Table 3). Its respective sizes are |S 3 | = 13 (100%), |FP| = 10 (77%) and |LC| = 3 (23%). Observe that about 77% of the dynamics have only the fixed points OFF and ON, but, in this case the average size of the ON attractor basin is almost 2 times bigger than that of OFF. The same occurs in the other 23%. Furthermore, the OFF and ON basins add up, on average, exactly the same as that of the limit cycles.
Examples of the dynamical behavior of the reduced model exhibiting limit cycles are those of Figure 5(left).
Alternative Improvements for the Studied Models
As mentioned at the end of Section 3.3, of the 6 parameter combinations, (G e , L e , L em ) = (1, 1, 1) -meaning glucose and lactose concentrations in high levels (see Table 1)-it is the only one that does not matches with the experimental data (see Figure 2c in [7]) where bistability should be observed again, instead of the operon being OFF as obtained from the models. Therefore, the first improvement naturally consists in solving that in both models. The second one will consist of increasing the parameters of the original model from 6 to 9, matching in all these cases with the biological experiments of [7].
Improvement 1: The Original and Reduced Models Match in All 6 Parameter Combinations with Ozbudak et al. 2004
Observe that according to what was discussed at the beginning of Section 4, some local functions of both models must necessarily be modified to make the steady states OFF and ON appear at the same time (bistability). On the other hand, by considering carefully the arguments we gave in Section 3.4, the local function f C = 1 is not sufficient to adjust the Boolean model to represent the actual behavior of the system when both glucose and lactose are present in the extracellular environment of E. coli. To further refine the response of the Boolean network to the balance of each different sugar and to highlight the relevance of the previous history of the cell, a new definition has been proposed for f L and f L m , considering the inhibitory effect of glucose on lactose transport will be significant in the short term as long as a low concentration of lactose was initially present, while the opposite will be true if a high lactose concentration was already inside the cell. These changes are shown in Figures 9 and 10. Thus, the improved original and reduced models predicts:
3.
ON: when (G e , L e , L em ) = (0, 1, 1). The above is summarized in Table 6, which match for all the 6 combinations of parameters with the biological experiments of [7].
Improvement 2: (Improved) Original Model Extended to 9 Parameters
As explained in the introduction, there is a window of bistability (see Figure 2c in [7]) which justifies that three parameters can also be considered for the glucose (low, medium and high). Taking into account again the arguments given in Section 3.4 as well as those discussed for the first improvement, the following changes are proposed over the improved original model (see Figures 13 and 14).
At this point, we repeat the stochastic experiment described in Section 3.3, but for different glucose values. Its results are shown in Figure 15 and summarized in Table 7.
Observe how the results of this extended model match the entire window of bistability (see Figure 2c in [7]) in each of its 9 combinations of parameters.
Conclusions
The lac Operon Boolean model without catabolite repression and its respective reduced version proposed in [20] were analyzed. These models have the particularity of being simple but capable of reproducing the operon being OFF, ON and bistable for different levels of lactose and glucose, matching very well with biological experiments such as those published in [7]. Unlike the models of [20], the Boolean networks proposed here predict bistability even at high glucose concentrations. This feature has been observed experimentally when inducer (a non-metabolizable lactose analogue) concentrations are also high [7]. Furthermore, our models take into account intermediate glucose concentrations, thus increasing sensibility to changes within the "window of bistability" of the lac system (see more details in the second part below).
In a first part of this paper, we have studied the dynamical robustness of these models where we made the following contributions: • For the first 3 cases described in Section 3.2 and that included 5 of the 6 combinations of parameters allowed in the original and reduced models, we establish two Propositions proving the non-existence of any limit cycle, whatever the update schedule used. • For the case 4, where bistability appears, we made an exhaustive analysis of all its possible dynamics generated with any update schedule. Here we detail for both models the average sizes of their attraction basins, the number of dynamics without limit cycles (i.e., only with fixed points) and the number of dynamics with limit cycles (being less than 30% in both models). • Again in the case 4, its predominant attractor (those that have the bigger attraction basin), changes dramatically; OFF attraction basin being, in average, 8 times bigger than that of ON for the original model while that in the reduced one, the ON basin is almost 2 times bigger than that of OFF.
The above findings allow us to conclude that the effect of the glucose and lactose parameters in the interaction digraph associated to each network breaks its cycles for cases 1, 2 and 3, transformed into an acyclic network that will have only fixed points as attractors and, consequently, being completely robust in these situations. However, in the case of bistability, the role of the update schedule can change this property significantly because limit cycles with large attraction basins can appear. Moreover, this work supports the hypothesis of [20] that network topology is a key factor for qualitative dynamical properties but not for quantitative ones.
As a second part, we have proposed two alternative improvements for the Boolean models studied, with biological support and that involve small modifications in their local functions. In the first improvement, the prediction is corrected for the case in which the parameters of the models represent high glucose and lactose levels, achieving the bistability observed in the biological experiments of [7]. In this way, with such an improvement, the original and reduced models match perfectly with the above experiments of [7] for all the 6 combinations of parameters. The second improvement increases the possible combinations of parameters for the original model, going from 6 to 9, enriching the dynamics of the models and matching the bistability window observed in [7] with each of the 9 possible combinations of parameters. By keeping in mind that in our models, inducer exclusion can effectively explain operon regulation, which takes into account the current challenges against the glucose-mediated repression model, our results can also be compared with some continuous models such as those of [42], based on differential equations, where bistability windows are displayed in a wide range of glucose concentrations. It is worth mentioning that the main softwares used in this manuscript were Matlab (for the exhaustive analysis of all the different dynamics of each model and which allowed us to build Tables 3 and 5) and RStudio (for the visualization of most of the state transition graphs presented in this work as well as for the stochastic experiment; Figure 15).
Although the lac operon is a classical model and many of its key molecular players were identified a long time ago, our understanding of how these players interact with each other has evolved continuously through the years [43]. Full comprehension of the system requires robust and thorough knowledge of key regulatory features, like catabolite repression, where the view of a direct inhibitory effect of glucose on the cAMP-CRP regulatory system has been challenged during the last decades [25][26][27][28][29][30]. Furthermore, lac operon bistability is a feature that is hard to reproduce by models in general, and the applicability of a Boolean network including glucose-mediated regulatory systems has only been tried previously in [20]. Our present work shows how the translation of updated mechanistic information about the actual role of glucose into the network allows taking better advantage of a Boolean model to test and reproduce bistability in a way that is faithfully representative of experimental data. This has multiple implications as, in general, bistability has been considered a ubiquitous feature in bacteria, involving several processes [44], and understanding the dynamics of global gene regulatory systems revealed by high throughput technologies has become increasingly complex, thus demanding simpler, robust modelling algorithms, such as Boolean networks. Future models could include additional molecular features of the operon for the lac system, like repressor oligomerization, inducer (lactose) degradation by LacZ activity, and consideration of additional binding sites within the lac promoter region to provide a more detailed description of biological outcomes. Another natural future extension consists of giving a more quantitative approach to the models here studied, similar to what has been done in works such as [45][46][47].
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data used to support the findings of this study are included within the article.
Conflicts of Interest:
The authors declare no conflict of interest. | 8,425.2 | 2021-03-11T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Ekaterinburg city development in the experience economy context
. The article discusses the socio-economic aspects of the development of the city of Ekaterinburg from the standpoint of the cultural and entertainment potential of its development. The authors examined the dynamics of the development of the quest industry in the Russian Federation within the framework of the experience economy. The main development trends in the field of entertainment are identified. Collected, analyzed and systematized statistical and economic indicators characterizing the current state of the entertainment industry in the post-pandemic period of economic development of the country and the city of Ekaterinburg. The most competitive areas of the entertainment industry have been identified. Based on consumer preferences, the authors formulated conclusions about the further growth of the market share of quest services in the total volume of entertainment services in the city of Ekaterinburg.
Introduction
The service sector is a complex of sectors of the economy, enterprises and organizations of which provide various kinds of services and perform various jobs for the population, as well as for other enterprises and organizations [1, p. 28].According to Reznik, Maskaeva and Ponomarenko, the fast-growing entertainment sector belongs to the tertiary service sector in accordance with their classification, according to a functional basis and is characterized by a high degree of elasticity of the population's demand [2].At the same time, today there are a large number of interpretations of the essence of services, highlighting various specific features of their provision.Many works of foreign and domestic scientists have been devoted to this issue since the end of the 19th century [3,4].
Thus, after analyzing the conceptual and terminological apparatus of the essential interpretations of the service, we can conclude that the provision of services occupies a significant position in the conscious economic activity of any person, which affects various aspects of his activity [5].
In recent decades, questions of time distribution in the economy have changed, as have the boundaries of economic human behavior.Now people are in a situation where they need to come up with solutions to distribute their free time between family, work and personal hobbies rationally.In addition, in order to approach the process of time distribution thoroughly, it is necessary to measure this resource [6, p. 28].
Obviously, in terms of organizing leisure, the interests of people are different and often do not coincide.Some individuals prefer to read a book or watch TV at home after a hard day's work, others go in for sports and maintain their health, and still others prefer to go to the cinema or museums.Naturally, the economy and the market adapt to the needs of people and new types of services appear in society, offering to spend their leisure time.
Materials and methods
Statistical, accounting and reporting and information and analytical sources of activity of entertainment industry enterprises in Russia and abroad served as materials and research base.The authors in their work relied on the study of the colossal experience in organizing the leisure of the population both in economically developed countries, and on the experience of Russian organizations in the field of entertainment, including organizations in the city of Ekaterinburg.Materials of scientific researches of domestic and foreign scientists in the field of development of economy of impressions are used.When processing and systematizing the information and analytical material, the methods of grouping, detailing, synthesis, the historical and logical method of research, as well as methods of economic and statistical analysis were used.
Literature review
According to World Bank statistics, almost 2/3 of GDP in economically developed countries is produced in the service sector, given this fact, the economy can be defined as a "service" one [7].An integral property of a service is its inseparability from production (simultaneous production and consumption of a good), as a result of which the service consumes not only the time of the producer himself, but also the time of the consumer, connecting the producer's working time with the consumer's free time.The peculiarity of the modern economy is that it treats the consumer's free time as its resource, and this time can be optimized, controlled and rationalized [8, p. 534].
In modern society, the consumption of services occurs in real time, while a person has a wide choice of what to spend his free time on.In the modern world, most of what brings pleasure to a person is a service produced by someone.Thus, we can conclude that the consumer's free time for the service economy becomes a limited economic resource that needs to be analyzed and evaluated, since the production of services is inseparable from their consumption.
According to the official statistics of the Russian Federation, since 2016-2020, there has been an increase in the volume of paid services per capita (Table 1).It indicates a high level of demand in the service sector of the economy.All types of services show positive dynamics until 2020.The drop in demand by more than 15% in 2020 compared to 2019 is due to the preventive measures of the coronovirus infection in 2020-2021.Moreover, it should be noted that the entertainment sector, including the tourism business, suffered first.It should be noted that not only in Russia, but also throughout the world, of all aspects of the economy, the service sector suffered the greatest losses [10].In addition, it should be noted that the share of paid entertainment services accounts for about 7-8% in the total structure of services provided.Moreover, for the analyzed period of time, their share remains unchanged, with the exception of the 2020 pandemic year, in which their share decreased to 5% against the backdrop of an increase in housing and communal, medical and telecommunications services, which once again confirms the greatest vulnerability of entertainment services in the context of epidemiological instability.
The peculiarity of the modern economy is that it treats the consumer's free time as its resource and this time can be optimized, controlled and rationalized [8, p. 539].In addition, in the modern world, most of what brings pleasure to a person is a service produced by someone.Thus, we can conclude that the consumer's free time for the service economy becomes a limited economic resource that needs to be analyzed and evaluated, since the production of services is inseparable from their consumption.
Research by neuroscientists has shown that people whose brains are damaged in the area that generates emotions are unable to make decisions.This idea is significant because it helps us understand that human beings are not as logical as we might imagine.Understanding this has important consequences that affect the formation of the entertainment services market [11].
According to the researchers, experiences create longer lasting happiness because they are more open to positive reframing; they tend to become more significant parts of the personality; and they have a stronger influence on the development of social relations.Experiences help us learn, grow, and connect with each other, so it is no wonder we spend our money and time on them.
Back in 1998, the Harvard Business Journal coined the term "experience economy" in an article about how more people are spending money on experiences rather than products.What is the experience economy?The experience economy is defined as an economy in which many goods or services are marketed with an emphasis on the impact they can have on people's lives.The development of the experience economy that earlier a person could receive half of the services of leisure activities free, and now these services are becoming an object of sale.
World Economic Forum experts note that the entertainment and mass media industry has experienced several digital transformations over the past 20 years.In total, they include four similar waves: the emergence and development of file-sharing systems in the mid-1990s; the emergence of the first video streaming services in the early 2000s; the growth of mobile traffic and the development of cloud technologies in the late 2000s; at the moment, the fourth wave begins, associated with the introduction of the Internet of things and the emergence of entire ecosystems around the information provider [12].
Results
Traditionally, the global entertainment industry is referred to by the acronym REST (Recreation, Entertainment, Sports, Tourism) and includes businesses specializing in recreation, entertainment, sports and tourism.Everyone wants to be able to take a break from work, get new vivid impressions and emotions, and try new things.In this regard, the entertainment market is one of the most dynamic and developing, always ready to meet the rapidly growing needs of consumers [13].
Pine and James [14] consider the experience economy in terms of the scale of consumer participation and environmental impact and divide it into four sub-segments: entertainment, education, escapism, and aesthetics.One of the most promising areas that combines all four areas of the experience economy is the quest industry.
To date, experts distinguish the following types of quests focused on various consumer goals: experiential room, performance, action game, city quest, role-playing quest, corporate quest, immersion quest theater [15].In addition, in the world of global digital reality, a person needs to switch to other activities in order to maintain his psycho-emotional state of health.Therefore, according to many domestic and foreign scientists, games in virtual reality do not have the same strong impression as games in the real world, when a person can touch objects, smell and see everything with his own eyes without the help of special glasses [16].
At the end of 2019, the Levada Center research organization conducted a survey among the Russian population to identify the most popular type of leisure among Russians.Sociologists interviewed 1,616 citizens over 18 years of age (Fig. 1).It can be seen from the diagram that the most common type of recreation among Russians is passive recreation, which consists in watching movies and series on TV.About a third of Russians meet with friends almost every day.This is because people tend to make useful contacts, communicate with new people and increase their social connections.Visiting theaters and museums has become the most unpopular type of leisure, only 2% of the population prefer to do it once a week, monthly -3-8%.More than half of the respondents (55%) stated that they never go to theaters and museums.It can be concluded that many consumers prefer to stay at home, in a place where they can use the Internet as a leisure activity, read blogs, information pages, communicate with friends on social networks.
According to the statistics of Russian Standard Bank, for the three quarters of 2021, the total number of transactions with bank cards when paying for entertainment among Russians increased by 35%, and the average check -by 19% compared to the same period in 2020 and amounted to 2,041 rubles [18].Ekaterinburg is in fourth place in the list of Russian cities where residents spent the most money on entertainment.The average check amounted to 1,509 rubles, for comparison, the largest amount in Moscow is 2,141 rubles [19].
Ekaterinburg is the capital of the Urals with a population of almost one and a half million people.The city is an industrial and industrial center of the entire Sverdlovsk region and attracts tourists not only from nearby cities.According to the Minister of Investment and Development of the Region, Victoria Kazakova, since the beginning of 2021, more than 1 million guests have visited the Sverdlovsk region.Kazakova noted that Ekaterinburg is one of the "leaders in business tourism" in the country.Residents and guests of the city want to take a break from everyday routine, work and spend their leisure time in different ways.Due to the large population and its constantly growing needs, the entertainment market in Ekaterinburg is trying to offer various types of leisure.Nowadays the city has a large number of libraries, museums and quests.These organizations occupy a leading position.The number of museums is increasing every year, as the desire of people to immerse themselves not only in history, but also to visit contemporary art exhibitions and interactive platforms increases.New shopping centers are being built in the city, in which cinemas are opened; therefore, their number is increasing.Various large objects, such as zoos, circuses, theaters, parks, practically do not change their composition, since the construction of buildings requires large areas.
Based on the data presented in the graph, we can conclude that quest organizations rank third in terms of the number of all entertainment companies in Ekaterinburg.Every year their number increases, and this industry is gaining momentum.
According to the report on the development of the quest industry around the world, published by the international booking service Xola, in 2016, there were 3,000 quests in the world, and in just a few years their number has grown rapidly [20].In March 2020, it was estimated that more than 50,000 quests are already functioning worldwide (Fig. 3).The report also reveals that 41% of repeat quest bookings are made on the same day that the team attends the first game.This does not apply to planning large events such as birthdays or team building.The report also notes an upward trend in mobile bookings, prompting businesses to develop not only a full-screen version of the website, but also a scaled-down version of the website to fit any screen size.It should be noted that in each of the countries represented there has been a significant increase in the number of escape rooms.Statistics do not take into account the number of people who lived in these periods in a particular country, but represents a quantitative increase in interest in the quest industry around the world.
From the data presented, it is clearly seen that Russia is in the lead in terms of the number of quest rooms in the world.It should be noted that the Russian quest market is different from the Western one.There are many companies in Russia, high competition and quality of escape rooms, original formats.However, at the same time, unlike the Western market, which hosts various conferences on the construction of quests, there are numerous communities, the Russian industry is more closed, and our companies do not reveal their secrets.
The Japanese Takao Kato developed the very first quests in reality in 2007.After gaining popularity in Asia, this type of entertainment began to appear in the United States.In Europe, the popularity of quests came in 2011, many companies began to appear, and the city of Budapest became the leader.The first Russian quests appeared in the capital of the Urals in the spring of 2012.Snail.Quest room opened the first classic "reality quest" and horror project in Ekaterinburg.From the moment of its appearance to the present day, the most famous company in the quest industry in Russia is the international network "Claustrophobia", which opened its first quest in Moscow in 2013 [21].After nine years of work, "Claustrophobia" has 193 unique scenarios that are played out in 203 quests.The company's projects are open in nine cities in seven countries around the world.More than a million games have been played on these venues and about four million players have had an unforgettable experience [22].
The new entertainment gained great popularity in a short time, despite the fact that the organizers practically did not engage in marketing and promotion of projects.Many fans of computer games with pleasure became participants in real stories and decided to visit unique microuniverses, where different types of mental and physical activity of a person are combined.
Discussion
For the first half of 2021 in Russia, quest services are presented at 6,200 sites in various cities of the country, offering quests of various kinds, quizzes, role-playing games and other interactive entertainment.According to RBC estimates, the industry turnover for 2021 amounted to 1.25 billion rubles [23].
At the beginning of 2022, the championship in Russia in terms of the number of quests per capita is shared by Novosibirsk, Krasnoyarsk and Ekaterinburg (7 per 100 thousand people).This is followed by Kazan (6 per 100 thousand) and Perm (5 per 100 thousand) [24].
Statistical indicators characterizing the structure of entertainment in the city of Ekaterinburg indicate a high level of demand among the population for quest services, giving priority in the ranking of popular types of entertainment (Fig. 4).[25].
Fig. 4. Choice of entertainment in Ekaterinburg
The list of entertainment included, in addition to quests, cinemas, concert venues, sports complexes, rides and other outdoor activities.The lowest rating is 3.62 and refers to mini golf.This is explained by the fact that there are almost no playgrounds in the city and this sport is not very popular among the city residents.From the diagram, we can conclude that quest rooms have the highest rating -4.84/5.00,therefore, they are in demand entertainment on the market.
Conclusions
Thus, after analyzing the state of the entertainment market in the city of Ekaterinburg, we can conclude that the population mainly chooses an active form of leisure.Entertainment that combines physical and mental activities, as well as aesthetic pleasure and the development of interpersonal skills is gaining popularity.One of the best offers in this area are quest rooms, which combine all the necessary characteristics for excellent leisure.The Ekaterinburg market follows the needs of the population and new quests appear in the city, and competition among companies is becoming more and more serious.
Fig. 2 .
Fig. 2.The number of organizations in the entertainment sector in Ekaterinburg[19]. | 3,981.6 | 2023-01-01T00:00:00.000 | [
"Business",
"Economics"
] |
“ The Spirit in Prison ” : An Exegetical and Theological Study of 1 Peter 3 : 19
The Phrase “the spirit in Prison” in 1 Peter 3:19 has generated confusion and misunderstanding in the right interpretation of the phrase among scholars. Some translate this phrase to mean disembodied souls that Jesus Christ preached to at His death, others see it as fallen angels and or demons. Against the above, this paper employed historical grammatical method of exegesis to ascertain the meaning of this phrase considering its content and context. It was discovered from the inquiry that the phrase means “people in darkness” with its implications on the theology of: suffering and Justification, death and resurrection, sin, grace, baptism and ascension of Christians. DOI: 10.7176/RHSS/9-12-04 Publication date:June 3
Cappadocia, Asia, and Bithynia" (1 Pet. 1: 1b NKJV). Further, some early church fathers such as Eusebius and Jerome claim that these "strangers" or "pilgrims" were "native-born Jews, who had been converted to the Christian faith". Other scholars opine that they were "all of Gentile origin". In fact, some claim that "they were Gentiles by birth, but had been Jewish proselytes, or 'proselytes of the gate', and had been converted to Christianity" (Barnes, 1975). Besides, Maclaren (1984) claims that the pilgrims in those Gentile nations mentioned in 1 Pet. l: lb, "may refer to the scattered Jewish people". Barnes adds that they were "not Jews in general, but those of the ten tribes who had wandered from Babylon and the adjacent regions into Asia Minor". The last postulation however, is that: the people whom the Apostle Peter addressed were basically Christians. It is possible they were converts from both Judaism and Heathenism, but Peter simply employed a language common to the Jews in his opening statements (Barners, 1975). In sum, Peter was writing to Gentile congregations which had a minority of Jews in their membership.
Historical Background
The infant church after Pentecost indeed suffered persecution, even from the hands of the Sanhedrin (Bigg, 1975), which consequently led to the death of Stephen and the spread of the Gospel (Acts 3-4, 7-8). In Acts we also see that the church suffered persecution through Saul before his conversion (Acts 9). More so, it seems that the church was not just persecuted by the Jewish leaders, but the Roman leaders were involved. As a matter of fact, Jews in Rome were ordered by Emperor Claudius to evacuate the city (Acts 18:2).
Moreover, that the Christians were scattered throughout the cities of Asia Minor which Peter mentions in 1 Pet. 1: 1b, is enough reason for us to infer that they did go through severe persecution. Besides, it is also possible that it was even the persecution that caused them to scatter abroad. To add to that, if the persecution led to the loss of life, in terms of martyrdom, Peter would have probably mentioned. Another possible view is that, there was religious discrimination by the Gentile government towards the Christian community of faith; also the Christian servants were oppressed by their masters (1 Pet 2: [13][14][16][17][18][19]. Nevertheless, scholars argue back and forth regarding the scope of the persecution (Elliot, 2000). Now, whether this persecution was world-wide or not, Peter's first epistle does not answer that. But one thing we do know is that the Christians were treated unjustly (1 Pet. 2:12). However, from the tone of Peter's first epistle, we may deduce that what led to the writing was the oppression which the Christian church was passing through. However, Peter describes their condition (persecution) as "the trial of your faith" (1 Pet. 1: 7).
First of all, Peter wrote his first epistle to indoctrinate the "newly-converted Jews" regarding the Christian religion (Church and Hist, 1972). It is postulated that the churches which Peter wrote to were founded not by Peter but Paul during his second missionary journey (see Acts 16: 6-7). Consequently, Paul and Peter met in Rome (64 A.D.); shortly after Paul would be away to Spain: which was why "he asked Peter to keep an eye on his great fields in Asia" (Lenski, 1956). In writing therefore, Peter began with their calling and conversion into the family of God. Secondly, Church, (1939) opines that Peter wrote his epistle to instruct the converts to live a holy life (1 Pet. 1: 15-16). Thence, Peter was admonishing them to deny worldliness and carnal affections, that with their whole soul they may desire "the celestial kingdom of Christ". The third purpose for Peter's writing is to prepare their hearts for suffering in the world for the sake of their faith. Seeing these were new converts into the Christian fold, the persecution could have reduced them in number (not necessarily in terms of death) back to the world they were called out from. Therefore, Peter wrote to fortify and strengthen them in a time of "special affliction" (Downey, 1929); and to establish them firmly in Christ. Peter himself demonstrated this unwavering faith when he held on to his testimony even to the point of death. "Shortly after Peter had written this letter he suffered martyrdom".
Exegetical and Theological Study of 1 Pet. 3:19
This section carries uses the historic-grammatical method of exegesis in studying the passage under question placing it in its right context. In doing that, we would employ the basic steps of doing exegesis: Literary unit (Immediate and Larger context), Genre, Textual Analysis, Translation, Structure and Interpretation/ Theology. More so, we would place the text in its historical context so as to garner what it meant to the author (Peter) as well as the immediate audience, "the strangers scattered" (1 Pet. l: lb). Literary Unit The literary unit is the contextual framework which strategically houses the passage under discussion. Hence, this literary unit places the text (1 Pet. 3: 19) within its right context, considering the preceding issues as well as the matters that follow up. Usually, the literary unit is divided in two: Larger and Immediate Context.
Larger Context
The larger context contains blocks of various yet interconnected stories (in terms of a narrative) and ideas through a chapter or segment. While we keep that in mind, we proceed to identify the larger context of 1 Pet. 3:19. Further, the larger context of 1 Pet. 3: 19 is: 1 Pet. 3: 13-4: 7. Actually, Peter introduces the theme of suffering (or persecution as the case may be) in 1 Pet. 3: 13, then he ends that theme with an admonition regarding their preparation for the Parousia in 1 Pet. 4: 7.
Immediate Context
The immediate context of 1 Pet. 3: 19 is 1 Pet. 3:18-22; within these verses Peter uses the example of Jesus' suffering in connection with what the then Christians were passing through. Thus 1 Pet. 3:19 strategically falls within the literary framework of 1 Pet. 3:18-22, seeing then that the subject of discussion is Christ.
Genre
First Peter's genre is epistolary. However, the genre of 1 Pet. 3: 18-22 is at first multiple in nature; although it is primarily a prose, it could be subdivided into metaphor (1 Pet. 3: 19), and even history (1 Pet. 20). While acknowledging the difficulty of ascertaining the original text, a majority of the Committee preferred the reading peri amartion epathen because (a) this verb, which is a favorite of the author (it occurs elsewhere in 1 Peter eleven times), carries on the thought of ver. 17, whereas apothneskeiv (which occurs nowhere else in the epistle) abruptly introduces a new idea; (b) in view of the presence of the expression peri amartion scribes would be more likely to substitute apethanen for epathen than vice versa; and (c) the readings with hemon or humon (which in later Greek had the same pronunciation) are natural and, indeed, expected scribal expansions.
{C} humas
The Committee was inclined to prefer humas accidentally omits the pronoun) ACK81 614 1739vg syr hmg cop sa ' bo Clement), because copyists would have been more likely to alter the second person to the first person (as more inclusive) than vice versa.
en o kai
Several scholars have advocated the conjectural emendation that introduces the subject "Enoch" Instead of improving the intelligibility of the passage (as a conjectured reading ought to do), the word Enoch breaks the continuity of the argument by introducing an abrupt and unexpected change of subject from that of ver. 18.
Ho
Despite the difficulty of construing ho, the Committee felt obliged to accept it as the text, (a) because it is strongly and widely supported by --c A B C K P ---33 81 614 1739 Byz it 65 vg arm Cyprian Origen lat al, and (b) because the other readings are obvious ameliorations of the difficulty, some witnesses having omitted the word, and others having substituted for it either O (69 206 216 241 630 1518) or ----(cop bo vid Augustine vld ).
Theo
After Theo most manuscripts of the Vulgate insert deglutiens mortem ut vitae aeternae haeredes efficeremur ("swallowing up death that we might be made heirs of eternal life"). As is suggested by the use of the present participle deglutiens in the sense of the past tense, it is probable that the addition is a translation of a Greek gloss, which, according to Harnack's reconstruction, may have read katapion (tov) thanaton, hin zoes aioniou kleronomoi genetomen (BibleWorks~[c:\program files (x86)\bible works 7\init\700.swc].)
Translation
Based on the textual analysis above, there is obviously nothing that warrants another translation of the passage under discussion. However, for the sake of our study, we shall provide a working translation of the text (1 Pet. 3: 18-22). "Since Christ also underwent suffering once and for all in reference to sin, the just for the unjust, so that he may bring us to God; after dying as a mortal, he was brought back to life by the Holy Spirit: through whom he carried out the preaching unto the people living in darkness: who were at some time disobedient when God eagerly waited with forbearance in the days of Noah, while he was in the process of constructing the ark, in which only few, even eight persons were delivered through the flood. Now this water of the flood stands as an antitype for baptism which saves us-not the outward washing of the body but a conscientious seeking of God-through the resurrection of Jesus Christ; who has ascended into heaven, and is on the right hand of God, having angels and authorities and powers all subject to him".
Interpretation of 1 Pet. 3:19
In carrying out the interpretation, we would pay a keen attention to two crucial points: the person who did the preaching, and the time which the preaching was done. Once these points are clarified, then the expression "the and the various species of living things (Gen. 6:14-7:24). As a matter of fact, Peter makes it explicitly clear that the preaching was done; that is "while [Noah] was still constructing the ark" (1 Pet. 3: 20c). Out of the multitude of the people in the generation of Noah, only eight souls were saved by water because they believed in the preaching of the forth coming destruction of the world through the flood (Gen. 7:1,7; 1 Pet 3:20d; 2 Pet. 3:6).
Theological relevance of 1 Pet.3:19
First of all, 1 Pet. 3:19 in the context of the entire epistle presents us with the theology of suffering and justification. Suffering is intricately part of the Christian experience, aimed at purifying and establishing believers firmly in the faith which they profess. In fact, when Christians suffer, they should consider such as a moment of testing. By the way, even Christ Himself suffered as a sinner for the justification of our sins hence leaving us an example that we may follow His steps . So to us Christians today, in the course of our suffering, we should have Christ as our model and consolation.
Second, the text also provides the theology of death and resurrection. Suffering may lead to death, but death is not the end of the Christian who died believing in Christ. So this eternal hope of rising from the dead at the resurrection should strengthen us Christians to hold on to our faith to the very end. In fact, it will serve as a grand testimony unto those outside the Christian community of faith.
Third, the passage hints on the theology of sin. To any Christian who disregards the voice of God through His servants and apostles, such is counted as disobedience. And disobedience bounds the heart of man to enormous evil and immorality, making such bound to their lustful passions as if they were in prison. Christians should walk in the light God has provided in His Word which alone can keep us from stumbling and keep us within the confines of righteousness.
Fourth, the passage presents the theology of Grace. God does not immediately pass judgment on any generation of people that is rebellious; He rather gives sinners the chance for repentance so that when He acts finally He shall be justified. More so, Christians today should not count the seeming delay in the judgment of God (as in the Parousia) as slackness, but He is simply demonstrating forbearance towards us as He did in the time of Noah so that no one would perish (1 Pet. 3: 20; 2 Pet. 3: 9) Fifth, the text emphasizes the theology of Baptism. Baptism is essential for our salvation. It is a sign of death to our old manner of living as well as a mark of new life in Christ (1 Pet. 3: 21; Rom. 6: 1-7). Thence, Christians who are baptized should also forfeit their sinfulness and walk in righteousness, and not taking baptism to mean a mere rite or tradition.
Finally, the text also provides the theology of Ascension. Peter didn't only mention the theme of suffering, death and resurrection; but emphasized the glorification of Christ after his ascension into heaven (1 Pet. 3:22). Therefore within the context of 1 Pet. 3:18-22, Peter discusses the subject of salvation, and the reward which Christians shall receive should they endure to the very end.
Conclusion
From the findings of this research, these are the following conclusions: First, Peter wrote his epistle (through Silvanus) in a time when the Gentile Christians were passing through trials and temptations; hence his aim was to comfort and fortify them regarding the Christian faith which they adhered to. Second, we conclude that the preaching to the spirits in prison was done through the agency of the Holy Spirit. Third, we conclude that the spirits in prison were not demons nor disembodied souls but the people living in the time of Noah before the flood. Fourth, we conclude that in the actual sense it wasn't Christ that did the preaching but Noah, who forewarned the people of his time. Finally, the preaching was not done at Jesus' death but (by Noah) in the days before the destruction of the world through the flood, while Noah was still preparing the ark. | 3,501.4 | 2019-01-01T00:00:00.000 | [
"Philosophy"
] |
Xanthones, A Promising Anti-Inflammatory Scaffold: Structure, Activity, and Drug Likeness Analysis
Inflammation is the body’s self-protective response to multiple stimulus, from external harmful substances to internal danger signals released after trauma or cell dysfunction. Many diseases are considered to be related to inflammation, such as cancer, metabolic disorders, aging, and neurodegenerative diseases. Current therapeutic approaches include mainly non-steroidal anti-inflammatory drugs and glucocorticoids, which are generally of limited effectiveness and severe side-effects. Thus, it is urgent to develop novel effective anti-inflammatory therapeutic agents. Xanthones, a unique scaffold with a 9H-Xanthen-9-one core structure, widely exist in natural sources. Till now, over 250 xanthones were isolated and identified in plants from the families Gentianaceae and Hypericaceae. Many xanthones have been disclosed with anti-inflammatory properties on different models, either in vitro or in vivo. Herein, we provide a comprehensive and up-to-date review of xanthones with anti-inflammatory properties, and analyzed their drug likeness, which might be potential therapeutic agents to fight against inflammation-related diseases.
Introduction
Inflammation is a kind of active defense reaction of organisms to external stimulations, such as infectious microorganisms, or internal processes, such as tissue injury, cell death, and cancer [1][2][3]. However, long-term low-grade inflammation leads to many human diseases, including aging, metabolic disorders, cancer, and neurodegenerative diseases [4][5][6][7]. Thus, the discovery of anti-inflammatory medicines has been and is continuing to be one of the hotspots of pharmaceutical research.
During inflammatory responses, a variety of cytokines and chemokines are released to restore tissue integrity and orchestrate cell infiltration. Tumor necrosis factor-α (TNF-α) is a major pro-inflammatory cytokine that is secreted from various cells and is associated with immune and inflammatory diseases in humans [8]. Interleukin-1β (IL-1β) is another pro-inflammatory cytokine that is crucial for host defense responses to infection and injury [9]. The IL-6 and IL-12 family of cytokines possess both proand anti-inflammatory functions [10] while IL-10 is a potent anti-inflammatory cytokine that impedes the action of many pro-inflammatory mediators to maintain tissue homeostasis and attenuate the damage [11]. Alterations in prostaglandin E2 (PGE2) activity are associated with inflammatory diseases. The pathway of PG synthesis starts with the generation of arachidonic acid from cell membrane phospholipids by phospholipase A2 (PLA2). Then, arachidonic acid is converted to PGs by the enzyme cyclooxygenase (COX) [12]. The inducible COX-2 is recognized as the most active mediator during arachidonic acid is converted to PGs by the enzyme cyclooxygenase (COX) [12]. The inducible COX-2 is recognized as the most active mediator during inflammatory processes. Additionally, inducible nitric oxide synthase (iNOS) is highly expressed under inflammatory conditions, which catalyzes the synthesis of nitric oxide (NO) [13]. Because macrophages produce a wide range of biologically active molecules participating in both beneficial and detrimental outcomes in inflammation, therapeutic interventions targeting macrophages and their products have attracted lots of attention for controlling inflammatory diseases.
Currently, anti-inflammatory therapy mainly includes non-steroidal anti-inflammatory drugs (NSAIDS) and glucocorticoids, both of which possess various side effects, such as cardiotoxicity, hepatotoxicity, and immunological dysfunction [14,15]. Natural products have attracted increasingly more attention due to their safety and effectiveness [16]. Emerging evidence indicates that natural products always function as multi-component and multi-target patterns [17]. Naturally occurring anti-inflammatory compounds might be promising candidates for the treatment of enteritis, arthritis, and skin inflammation. Xanthones were firstly isolated in 1855 by a German scientist pursuing research on dysentery and then named by the Greek word for yellow, xanthos [18]. Xanthones possess a unique 9H-Xanthen-9-one scaffold (Figure 1), which mainly occurs in the plants of the families Gentianaceae and Hypericaceae, as well as some fungi and lichens [19]. Several types of xanthones have been identified, including simple oxygenated xanthones, xanthone glycosides, prenylated xanthones, xanthonolignoids, and miscellaneous [20]. The studies of xanthone are provoking not only due to the structural diversity but also a variety of pharmacological activities. Many xanthones have been reported with potent anti-inflammatory properties [21−25]. Herein, we provided a comprehensive and up-to-date review of xanthones with anti-inflammatory properties and analyzed their drug likeness, which might be further developed to treat inflammation-related diseases.
Xanthones with Anti-Inflammatory Properties
Using the keywords xanthone and inflammation, we collected data from Google Scholar, Web of Science, Scopus, and Pubmed. A total of 44 xanthones were found with anti-inflammatory properties, containing 6 simple oxygenated xanthones (1-6), 2 xanthone glycosides (7, 8), 33 prenylated xanthones (9-41), and 3 xanthonolignoids (42-44) ( Figure 2). Many models, either in vitro or in vivo, have been recruited to evaluate the anti-inflammatory properties of xanthones. To organize the review, the xanthones were classified based on bioassays (Table 1).
Xanthones with Anti-Inflammatory Properties
Using the keywords xanthone and inflammation, we collected data from Google Scholar, Web of Science, Scopus, and Pubmed. A total of 44 xanthones were found with anti-inflammatory properties, containing 6 simple oxygenated xanthones (1-6), 2 xanthone glycosides (7, 8), 33 prenylated xanthones (9-41), and 3 xanthonolignoids (42-44) ( Figure 2). Many models, either in vitro or in vivo, have been recruited to evaluate the anti-inflammatory properties of xanthones. To organize the review, the xanthones were classified based on bioassays (Table 1).
The major role of neutrophils in the host defense is to eliminate invading microorganisms [41]. In neutrophils, N-formylmethionyl-leucyl-phenylalanine (fMLP) is a powerful activator of polymorphonuclear and mononuclear phagocytes, and the effects of fMLP on neutrophil activity can be inhibited by pertussis toxin [42]. The neutrophil-mediated inflammatory response is regarded as a multi-step process involving the initial adhesion of circulating neutrophils to activated vascular endothelium [43]. In fMLP/CB-stimulated human neutrophils, several gambogic acid analogs (23, 24, 29, 31, 32, 34, and 35) inhibited superoxide anion generation and elastase release [44]. Several xanthons (3,9,42, and 43) were isolated from the twigs of Hypericum oblongifolium wall, which showed anti-inflammatory activity in isolated human neutrophils [45].
CD3 − synovial cells are suggested to play an important role in RA development and therefore are a perfect model in the search for new anti-arthritic drugs. Mangiferin (7) downregulated TNF-α, IL-1β, and IFN-γ expression in TNF-α-stimulated CD3 − synovial cells from rheumatoid arthritis (RA) patients, which indicated that mangiferin could be a potent candidate for the treatment of RA [46].
Sepsis is a major cause of death worldwide [47]. Infection-induced inflammation is strongly regulated by many endogenous negative feedback mechanisms that modulate the intensity of inflammation, promote its eventual resolution, and return it back to homeostasis. Mangiferin (7) dose-dependently upregulated the expression and activity of HO-1 in the lung from septic mice [48].
Carrageenan is a pro-inflammatory agent used as a tool to induce inflammatory hyperalgesia in rats and mice [49]. The carrageenan-induced peripheral inflammatory pain model is widely used because it resembles inflammatory pain susceptible to both steroidal and nonsteroidal anti-inflammatory drugs [50]. Local administration of mangiferin (7) prevented inflammatory mechanical hyperalgesia induced by carrageenan in rats, which depended on the inhibition of TNF-α production/release and the CINC1 (cytokine-induced neutrophil chemoattractant 1)/epinephrine/PKA (protein kinase A) pathway [51].
MC3T3 is an osteoblast precursor cell line derived from Mus musculus (mouse), which is one of the most convenient and physiologically relevant systems for the study of transcriptional control in calvarial osteoblasts [52]. Dexamethasone is a known synthetic glucocorticoid, which induces sodium-dependent vitamin C transporter in MC3T3-E1 cells [53]. Bone morphogenetic protein 2 (BMP2) plays a role in postnatal bone formation, mediated by activating ligand-bound Small Mothers Against Decapentaplegic (SMAD) family members [38]. Mangiferin (7) attenuated dexamethasone-induced injury and inflammation in MC3T3-E1 cells by activating the BMP2/Smad-1 signaling pathway [54].
HFLS-RA is a human fibroblast-like synoviocyte with high proliferating ability and susceptibility. HFLS-RA cell is an excellent cellular model for studying synoviocyte physiology in relation to the development and treatment of RA [55]. α-Mangostin (10) (10 µg/mL) was found to suppress the expression and activation of key proteins in the NF-κB pathway and inhibit the nuclear translocation of p65 in HFLS-RA cells [56].
Adjuvant-induced arthritis (AA) is evaluated by paw edema, arthritis score, and hematological parameters. α-Mangostin (10) protected joints from rats suffering from AA, indicated by attenuated paw swelling, reduced inflammatory cell infiltration, decreased secretion of IL-1β and TNF-α in serum, and inhibition of NF-κB activation in synovia [56].
The presence of neuroinflammation is a common feature of dementia [57]. Reactive microgliosis, oxidative damage, and mitochondrial dysfunction are associated with the pathogenesis of all types of neurodegenerative dementia, such as Parkinson's disease dementia (PDD), frontotemporal dementia (FTD), Alzheimer's disease (AD), and Lewy body dementia (LBD). Peripheral LPS-induced neuroinflammation in C57bl/6J mice has been used to evaluate neuroinflammation and neurodegeneration as an adjuvant therapeutic strategy. α-Mangostin (10) reduced the levels of proinflammatory cytokine IL-6, COX-2, and 18 kDa translocator protein (TSPO) in the brain from LPS-induced neuroinflammation in C57BL/6J mice, which was considered as an adjuvant treatment in preclinical models of AD, PD, and multiple sclerosis [58].
RA is a long-term autoimmune disease in which the body's immune system mistakenly attacks the joints; RA causes pain, stiffness, and swelling in the joints [59]. α-Mangostin (10) decreased the clinical score at both doses (10 and 40 mg/kg) and decreased the histopathological score at the high dose in collagen-induced arthritis (CIA) in DBA/1J mice [60].
Asthma is a chronic inflammatory disease of the airways characterized by reversible airway obstruction, airway hyperreactivity (AHR), and remodeling of the airways [61]. Allergic asthma is associated with excessive T helper type 2 (Th 2) cell activation and AHR [55]. α-Mangostin (10) and γ-mangostin (9) reduced the major pathophysiological features of allergic asthma in ovalbumin-induced allergic asthma mice, including inflammatory cell recruitment into the airway, AHR, and increased levels of Th2 cytokines and phosphoinositide 3-kinase (PI3K) activity, which indicated both compounds might have therapeutic potential for the treatment of allergic asthma [62].
Comparison of the Drug Likeness of Anti-Inflammatory Xanthones with Marketed Drugs
Swiss Institute of Bioinformatics provides SwissADME to calculate molecular descriptors of the identified anti-inflammatory xanthones [70]. For each compound, the following descriptors were calculated: Molecular weight (MW); number of stereogenic centers; number of hydrogen bond acceptors (HBA) and donors (HBD), described as the electrostatic bond between a hydrogen and a lone pair of electrons; number of rotatable bonds (RB); number of rings; fraction of sp 3 carbons (Fsp 3 ) defined as the ratio of sp 3 hybridized carbons over the total number of carbons; and fraction of aromatic heavy atoms (Far), defined as the number of aromatic heavy atoms divided by the total number of heavy atoms [68].
The obtained values for each molecular descriptor are shown in Table S1 (Supplementary Materials), grouped according to the categories defined in the previous section. Drug development involves the assessment of absorption, distribution, metabolism, and excretion (ADME), drug-likeness, and medicinal chemistry friendliness. Physicochemical properties, pharmacokinetics, polar surface area (PSA), Log S and iLOGP, and bioavailability properties for xanthone derivatives are presented in Table S2 (Supplementary Materials). Especially for log P and log S, more than one algorithm was used in the process. Seven molecular descriptors were calculated, including the mean and median values for anti-inflammatory xanthone derivatives ( Figure 3). tissue inflammation mice 19 20 mg/kg Reduced macrophage content through inhibiting MAPKs and NF-κB activation [29]
Comparison of the Drug Likeness of Anti-Inflammatory Xanthones with Marketed Drugs
Swiss Institute of Bioinformatics provides SwissADME to calculate molecular descriptors of the identified anti-inflammatory xanthones [70]. For each compound, the following descriptors were calculated: Molecular weight (MW); number of stereogenic centers; number of hydrogen bond acceptors (HBA) and donors (HBD), described as the electrostatic bond between a hydrogen and a lone pair of electrons; number of rotatable bonds (RB); number of rings; fraction of sp 3 carbons (Fsp 3 ) defined as the ratio of sp 3 hybridized carbons over the total number of carbons; and fraction of aromatic heavy atoms (Far), defined as the number of aromatic heavy atoms divided by the total number of heavy atoms [68].
The obtained values for each molecular descriptor are shown in Table S1 (Supplementary Materials), grouped according to the categories defined in the previous section. Drug development involves the assessment of absorption, distribution, metabolism, and excretion (ADME), druglikeness, and medicinal chemistry friendliness. Physicochemical properties, pharmacokinetics, polar surface area (PSA), Log S and iLOGP, and bioavailability properties for xanthone derivatives are presented in Table S2 (Supplementary Materials). Especially for log P and log S, more than one algorithm was used in the process. Seven molecular descriptors were calculated, including the mean and median values for anti-inflammatory xanthone derivatives (Figure 3).
For the sake of comparison between the chemical properties of the anti-inflammatory xanthone derivatives and marketed drugs, these compounds were divided into synthetic compounds, assumed synthetic compounds, natural product-type macrocycles, polycyclic compounds, natural products, and natural product derivatives [71] (Figure 3). For the sake of comparison between the chemical properties of the anti-inflammatory xanthone derivatives and marketed drugs, these compounds were divided into synthetic compounds, assumed synthetic compounds, natural product-type macrocycles, polycyclic compounds, natural products, and natural product derivatives [71] (Figure 3).
Size: Molecular Weight
Traditional therapeutic agents are small molecules that fall within the Lipinski's rule of five [72], including a molecular mass less than 500 Da, no more than 5 HBD, no more than 10 HBA, and an octanol-water partition coefficient logP not great than 5. According to the results, the mean molecular weight for anti-inflammatory xanthone derivatives was 401.3 Da, which adhered to Lipinski's rule ( Figure 3B). Most NSAIDS typically adjust to Lipinski's rule, with a molecular mass of less than 500 Da [73]. Among all the reviewed anti-inflammatory xanthone derivatives, about 95% of them have a molecular weight less than 500 Da, except two dimers.
Chirality: Number of Stereogenic Centers
Because the core structure of xanthone is planar, the number of stereogenic centers in xanthones was less than that of synthetic compounds, assumed synthetic compounds, natural product-type macrocycles, polycyclic compounds, natural products, and natural product derivatives [71]. The average number of the stereogenic center is 0.5 for the identified anti-inflammatory xanthone derivatives ( Figure 3B). The highest value of the stereogenic center is natural product-type macrocycles, with a mean value of 12.0. For the synthesis of new drugs, the more chiral centers, the more difficult and costly the synthesis is. The mean value of the identified anti-inflammatory xanthone derivatives is satisfied with the new drug development criteria.
Polarity: PSA and HBD/HBA
Prediction of the permeability is a major challenge in drug discovery. Solubility governs the skill of drugs to transport across systemic circulation, brain penetration, and the gastrointestinal membrane. Polarity is highly relevant to the prediction of permeability, and PSA is used in the practice of medicinal chemistry to quantify polarity [74]. PSA is defined as the surface area of a molecule that arises from oxygen or nitrogen atoms, plus hydrogen atoms attached to nitrogen or oxygen atoms. The PSA principle takes into account the contribution to polarity, arising from electronegative atoms different from nitrogen and oxygen, but as different atoms have different electronegativity, they will produce a redistribution of the electron density. Thus, some drugs are neglected in PSA calculation. PSA does not distinguish HBD from HBA properties and shows a high degree of correlation with the number of HBA groups but lower correlation with the number of HBD groups [75]. PSA is widely used with discrete success as a molecular descriptor model of permeability and other ADME-related properties to obtain a better understanding and thus prediction of biological events influenced by polarity.
The PSA mean values were 99.4 Å 2 for xanthone derivatives, 86.9 Å 2 for polycyclic new drugs, and 105.3 Å 2 for natural products. Similarly, the HBA/HBD and PSA values for the anti-inflammatory xanthone derivatives increased accompanied by an increase of the molecular weight ( Figure 3D-F and Figure 4). According to the rules of five (Ro 5), HBD < 5, HBA < 10, and PSA < 140 Å 2 [76], most of the anti-inflammatory xanthone derivatives satisfied this criterion, which indicated that they might have good oral absorption.
Molecular Flexibility: Rotatable Bonds and Aromatic Character
RBs are defined as any single bond, not in a ring, bound to a nonterminal heavy atom. The amide C-N bonds are excluded because of their high rotational energy barrier. Reduced molecular flexibility, as measured by the number of RBs, and low PSA or total HB are important predictors of good oral bioavailability [77]. The RB number was found to influence oral bioavailability, with 65% of compounds with ≦7 RBs exhibiting an oral bioavailability of ≧20% [78]. The increased RB number has a negative effect on the permeation rate. A threshold permeation rate is a prerequisite of oral
Molecular Flexibility: Rotatable Bonds and Aromatic Character
RBs are defined as any single bond, not in a ring, bound to a nonterminal heavy atom. The amide C-N bonds are excluded because of their high rotational energy barrier. Reduced molecular flexibility, as measured by the number of RBs, and low PSA or total HB are important predictors of good oral bioavailability [77]. The RB number was found to influence oral bioavailability, with 65% of compounds with 7 RBs exhibiting an oral bioavailability of 20% [78]. The increased RB number has a negative effect on the permeation rate. A threshold permeation rate is a prerequisite of oral bioavailability.
The mean number of RBs for the anti-inflammatory xanthone derivatives is 3.7, and the mean number of aromatic heavy atoms is 14.8. The mean values of RBs for the polycyclic compounds, natural products, natural product derivatives, and synthetic drugs are 7.4, 9.4, 7.4, and 5.4, respectively. The RBs for most of the identified anti-inflammatory xanthone derivatives are less than those of polycyclic natural products, indicating a good permeation rate ( Figure 3G). Compared to synthetic compounds (mean Fsp 3 of 0.27), natural products (mean Fsp 3 of 0.55) are more like a typical trait [70]. The identified anti-inflammatory xanthone derivatives have a mean Fsp 3 of 0.24 because xanthone derivatives have a higher aromatic character.
Lipophilicity: LogP
The major role of lipophilicity in drug discovery is to balance potency and ADME properties [79]. Lipophilicity is commonly described as logD, where the distribution coefficient, D, is quantified by the concentration of all species (unionized and ionized) of a compound at a given pH in two immiscible phases (commonly 1-octanol and water/buffer) at equilibrium. The distribution coefficient (D) is replaced with the partition coefficient (P) at any given pH if only one species (typically neutral) is present.
The log P values of the anti-inflammatory xanthone derivatives vary a lot depending on the predict method on Swiss ADME. MLOGP is the most discrepant in all the logP index ( Figure 5). Compared to the natural products, natural derivatives, synthetic compounds, assumed synthetic compounds, natural product-type macrocycles, and natural product polycyclic, the logP value of the anti-inflammatory xanthone derivatives (3.7) is higher, which indicated a lower oral bioavailability.
Solubility: Log S
It has been reported that over 75% of drug candidates have low solubility based on the biopharmaceutics classification system (BCS). Solubility is one of the challenging properties in drug discovery. Compounds that are not fully soluble in bioassays result in erratic assay results, such as enzyme and cell-based assay. Because the actual concentration in solution is much lower than the target concentration, it can appear as an artificially low potency. Solubility issues cause a lot of frustration and lots of productivity in drug discovery [80]. In some cases, a high amount of organic solvent has to be used to dissolve the compounds, which causes an unexpected toxicity. The development of insoluble compounds can be expensive and time consuming. Solubility is expressed as log S and values greater than -4 are acceptable for a drug [81].
The relationship between the molecular size and aqueous solubility of xanthone derivatives is fairly stable; when the molecular weight gets higher, the solubility of anti-inflammatory xanthone derivatives decreased (Figure 6). Most anti-inflammatory xanthone derivatives might face the solubility issue.
Solubility: Log S
It has been reported that over 75% of drug candidates have low solubility based on the biopharmaceutics classification system (BCS). Solubility is one of the challenging properties in drug discovery. Compounds that are not fully soluble in bioassays result in erratic assay results, such as enzyme and cell-based assay. Because the actual concentration in solution is much lower than the target concentration, it can appear as an artificially low potency. Solubility issues cause a lot of frustration and lots of productivity in drug discovery [80]. In some cases, a high amount of organic solvent has to be used to dissolve the compounds, which causes an unexpected toxicity. The development of insoluble compounds can be expensive and time consuming. Solubility is expressed as log S and values greater than -4 are acceptable for a drug [81].
The relationship between the molecular size and aqueous solubility of xanthone derivatives is fairly stable; when the molecular weight gets higher, the solubility of anti-inflammatory xanthone derivatives decreased ( Figure 6). Most anti-inflammatory xanthone derivatives might face the solubility issue. enzyme and cell-based assay. Because the actual concentration in solution is much lower than the target concentration, it can appear as an artificially low potency. Solubility issues cause a lot of frustration and lots of productivity in drug discovery [80]. In some cases, a high amount of organic solvent has to be used to dissolve the compounds, which causes an unexpected toxicity. The development of insoluble compounds can be expensive and time consuming. Solubility is expressed as log S and values greater than -4 are acceptable for a drug [81].
The relationship between the molecular size and aqueous solubility of xanthone derivatives is fairly stable; when the molecular weight gets higher, the solubility of anti-inflammatory xanthone derivatives decreased ( Figure 6). Most anti-inflammatory xanthone derivatives might face the solubility issue.
Compliance of Xanthones with the Rules of Drug Likeness
In order to quickly eliminate lead candidates that have poor physicochemical properties for oral bioavailability, the five rules of drug likeness have been widely adopted in the pharmaceutical industry, which helps to predict the in vivo behavior of potential drugs [77]. The biophysicochemical properties and molecular descriptors of the anti-inflammatory xanthone derivatives were framed as different rules of compliance. Most anti-inflammation xanthone derivatives appear to have a good drug likeness, which are green in the visualization map in Table S3 (Supplementary Materials).
Trends on the PK Behavior of Xanthones
The brain or intestinal estimated permeation method (BOILED-Egg) is proposed as an accurate predictive model that works by computing the lipophilicity and polarity of small molecules [82]. It delivers a rapid, intuitive, and easily reproducible yet statistically unprecedented robust method to
Compliance of Xanthones with the Rules of Drug Likeness
In order to quickly eliminate lead candidates that have poor physicochemical properties for oral bioavailability, the five rules of drug likeness have been widely adopted in the pharmaceutical industry, which helps to predict the in vivo behavior of potential drugs [77]. The biophysicochemical properties and molecular descriptors of the anti-inflammatory xanthone derivatives were framed as different rules of compliance. Most anti-inflammation xanthone derivatives appear to have a good drug likeness, which are green in the visualization map in Table S3 (Supplementary Materials).
Trends on the PK Behavior of Xanthones
The brain or intestinal estimated permeation method (BOILED-Egg) is proposed as an accurate predictive model that works by computing the lipophilicity and polarity of small molecules [82]. It delivers a rapid, intuitive, and easily reproducible yet statistically unprecedented robust method to predict the passive gastrointestinal (GI) absorption and brain access of small molecules useful for drug discovery and development [83].
According to the results, about 75% of anti-inflammatory xanthone derivatives have a higher probability of being highly absorbed in the GI ( Figure 7A). It might be due to their lower MW and lower polarity of the benzene rings. In total, 33 anti-inflammation xanthone derivatives have higher GI absorption, and 10 xanthone derivatives have a high probability of being a substrate for P-glycoprotein (P-gp, Figure 7A). predict the passive gastrointestinal (GI) absorption and brain access of small molecules useful for drug discovery and development [83]. According to the results, about 75% of anti-inflammatory xanthone derivatives have a higher probability of being highly absorbed in the GI ( Figure 7A). It might be due to their lower MW and lower polarity of the benzene rings. In total, 33 anti-inflammation xanthone derivatives have higher GI absorption, and 10 xanthone derivatives have a high probability of being a substrate for Pglycoprotein (P-gp, Figure 7A).
The blood-brain barrier (BBB) is a highly selective semipermeable border that separates the circulating blood from the brain and extracellular fluid in the central nervous system [84]. Most of the anti-inflammatory xanthone derivatives have a low probability of being able to cross the BBB ( Figure 7B), and there are 10 xanthone derivative with potential abilities to be a substrate for P-gp (Table S5, Supplementary Materials). SwissADME provides the potential ability of xanthone derivatives to be a P-gp substrate to inhibit one of five major isoforms of cytochrome P450, CYP450 (CYP1A2, CYP2C19, CYP2C9, CYP2D6, and CYP3A4) [85,86]. The predicted results are shown in Table S4 (Supplementary Materials). The anti-inflammatory xanthone derivatives have higher opportunities to be CYP450 enzyme inhibitors, especially for the CYP2C9 (Figure 8). Compound 35 was identified as a possible inhibitor of all the CYP isoforms (Table S4, Anti-inflammatory xanthone derivatives with high GI absorption were classified accordingly to its P-gp substrate (right pie chart). (B) BBB permeability of the identified xanthone derivatives.
The blood-brain barrier (BBB) is a highly selective semipermeable border that separates the circulating blood from the brain and extracellular fluid in the central nervous system [84]. Most of the anti-inflammatory xanthone derivatives have a low probability of being able to cross the BBB ( Figure 7B), and there are 10 xanthone derivative with potential abilities to be a substrate for P-gp (Table S5, Supplementary Materials). SwissADME provides the potential ability of xanthone derivatives to be a P-gp substrate to inhibit one of five major isoforms of cytochrome P450, CYP450 (CYP1A2, CYP2C19, CYP2C9, CYP2D6, and CYP3A4) [85,86]. The predicted results are shown in Table S4 (Supplementary Materials). The anti-inflammatory xanthone derivatives have higher opportunities to be CYP450 enzyme inhibitors, especially for the CYP2C9 (Figure 8). Compound 35 was identified as a possible inhibitor of all the CYP isoforms (Table S4, Supplementary Materials). enzyme inhibitors, especially for the CYP2C9 (Figure 8). Compound 35 was identified as a possible inhibitor of all the CYP isoforms ( Anti-inflammatory xanthone derivatives with high GI absorption were classified accordingly to its Pgp substrate (right pie chart). (B) BBB permeability of the identified xanthone derivatives.
Conclusions
Xanthones have been implicated in biological activities and chemical isolation, as well as total synthesis. In the last decade, increased reports of xanthones as potential anti-inflammatory reagents have been challenged in the phytochemical, pharmacological, and synthetic community to innate challenges of the construction of this class of natural products. However, although most of the recent research has concentrated on anti-inflammatory activities in vitro and their mechanisms, in vivo information is still restricted and lacks good-quality preclinical models to make a further step in clinical application. More efforts should be paid to verify the therapeutic effects of xanthones using in vivo animal models. Besides mangiferin and α-mangostin, there is a hint of the emergence of studies from other xanthones concerning the discovery of drug candidates.
So far, there are still limited data available on the bioavailability of xanthones. The lack of toxicity studies on xanthones does not negate its importance, as the safety and efficacy of drugs are related to each other. Future structure-activity relationship studies on simplified fragments of the members of this natural product family are also necessary to ascertain both the key features related to activity and the mode of action of these natural products. Ongoing exciting results remain to be discovered and reviewed. Future research on the chemistry and biology on anti-inflammatory xanthones looks very bright and challenging, and with tremendous therapeutic applications.
By using the online bioinformatics tool SwissADME, the biophysicochemical properties, molecular descriptors, and PK parameters were predicted and evaluated for xanthones with anti-inflammatory properties. A series of drug-likeness analysis methods and parameters were mentioned to proceed with the anti-inflammatory xanthone derivatives, such as logP, MW, logS, HBA, HBD, PSA, number of stereogenic centers, and RBs, even with CYP450 inhibitors. Xanthone derivatives have good compliance with the drug-likeness chemical properties. Many new drugs were developed from natural products and natural plants. Experimental data combined with bioinformatics predictive tools could be an efficient and economical way to discover new health products and new anti-inflammatory drugs. Despite some compounds not obeying the usual drug-likeness rules, many others have been successfully developed as new drugs. | 6,623.4 | 2020-01-30T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Screening of high-Z grains and related phenomena in colloidal plasmas
Recent important results are briefly presented concerning the screening of high-Z impurities in colloidal plasmas. The review focuses on the phenomenon of nonlinear screening and its effects on the structure of colloidal plasmas, the role of trapped ions in grain screening, and the effects of strong collisions in the plasma background. It is shown that the above effects may strongly modify the properties of the grain screening giving rise to considerable deviations from the conventional Debye-Hückel theory as dependent on the physical processes in the plasma background.
Introduction
Screening of charged objects embedded in a plasma background is one of the important problems of plasma physics, which attracted the attention of researchers during decades.Our interest in this issue is connected, first of all, with its implications in spatial ordering phenomena in colloidal plasmas (CP) such as dusty plasmas (DP) or charged colloidal suspensions (CCS).CP consist of a large number of highly charged (Z 10 3 −10 5 ) colloidal particles of submicron size immersed in a plasma background.Experiments have revealed a number of collective effects in CP, in particular, the formation of Coulomb liquids or crystals associated with the strong Coulomb coupling in the colloidal subsystem [1][2][3][4][5].It is clear that the properties of grain screening play therewith an important role, since the effective screened potentials produce the most significant contributions to the grain-grain interactions and thus determine collective properties of the colloidal component in CP.
The simplest approach conventionally employed in describing the grain screening in CP is the Debye-Hückel (DH) approximation, or, its modification for the case of the grain of finite size, the DLVO theory [6,7].The DH approximation represents the version of Poisson-Boltzmann (PB) approach linearized with respect to the effective potential based on the assumption that the system is in the state of thermodynamic equilibrium.The DH theory yields the effective interparticle interaction in the form of the so-called Yukawa potential which constitutes the basis for the Yukawa model.
Extensive molecular dynamics and Monte Carlo (MC) computer simulations performed for the Yukawa system (YS) [8][9][10] indicate that the latter makes it possible to explain the formation of condensed state in CP.However, it is clear that an accurate description of grain screening in CP requires more accurate approaches.Let us point out some important issues which should be primarily taken into account.
Firstly, these are the nonlinear effects in grain screening.Simple estimates evidence that the magnitude of the ratio eφ/k B T (where e is the electron charge, φ is the potential, k B and T are the Boltzmann constant and the temperature, respectively), which determines the significance of nonlinear effects, is of the order of 10 near the grain surface in real experiments on DP and CCS.This means that the linear approximation may fail in this case.
Secondly, a distinguishing feature of DP is that the charge of dust grains is maintained by plasma currents to the grain surface.Thus, DP are far from thermodynamic equilibrium even in the steady state.In these conditions, the Boltzmann distribution for plasma particles does not hold, which makes the equilibrium PB as well as the DH theory inapplicable.In other words, the kinetic description of grain screening is needed.Note that in this case the properties of grain screening may essentially depend on the presence of collisions in the plasma background.
It should be pointed out that the above issues have been the subject of numerous studies, where a number of important results have been obtained.The effects of nonlinear screening in the thermodynamically equilibrium case of CCS were studied in references [11][12][13].It was found that in the presence of nonlinear effects, the effective potential at distances can be described by the linear Debye theory.However, the effective charge is smaller here than the bare grain charge.
A basic reference model for the case of collisionless plasma background with regard to the absorption of plasma particles by the grain, the OML theory, has been initiated by the paper of Bernstein and Rabinovitz [14].As mentioned in this work, the asymptotic behavior of the screened potential for collisionless case is inversely proportional to the square distance.The authors also formulate here the question about the role of the bound ionic states in the grain screening.Later on the OML theory and the closely related collisionless approaches have been developed in numerous works [15][16][17][18][19][20][21][22].The particular interest to the collisionless case is due to its industrial implications and due to the fact that the laboratory and astrophysical DP may be considered collisionless in most of the cases with a good accuracy.It was found that within the range of plasma parameters and grain sizes typical of the experimental observations, the effective potentials in the vicinity of the grain approach the predictions of DH theory, i.e., the allowance for being charged by plasma currents does not considerably affect the properties of screening.
Let us say a few words about the role of bound ionic states.In most of the literature, the effects related to the ions trapped by negatively charged grains are neglected.Nevertheless, it is a priori unclear to what extent the properties of screen-ing can be affected by the presence of the bound states.An attempt to give some insight into this problem is made in references [14,[19][20][21][22].
Strictly speaking, the relative contribution of bound states within the collisionless models is in principle indeterminate, which is, eventually, the consequence of the time-reversibility of Vlasov equation.The matter is that the stationary solutions of the Vlasov equation are dependent on the way the steady state of the system is formed.Thus, to tackle this problem, one has to employ additional considerations or principles in evaluating the number of the trapped ions.
As mentioned above, this problem was originally pointed out in the work [14].The authors related the generation of bound states to the ion collisions.Recently, this idea was used while considering the presence of trapped ions in both analytical [21,22] and numerical [23] studies.These papers give answers to a number of important questions but many aspects of the problem still remain open.In particular, the bound ion distributions found in references [21,22] in the approximation of small collision frequency based on the calculations of free and bound ion balance, do not exhaust the variety of many other distributions which could exist in the absence of collisions.
The opposite case of strongly collisional plasma background is much less examined.In references [24][25][26] the grain screening has been studied based on the continuous drift-diffusion (DD) approximation.The effects of grain charging by plasma currents are essential in this case and the effective screened field considerably deviates from the predictions of DH theory.The main conclusion of the authors is that the effective potentials can be still fitted by DH theory, though, with effective parameters, and the screening length is, typically, longer than the Debye radius.
The goal of this paper is to give a concise review of further important results on the above issues recently obtained in [27][28][29][30][31].
Nonlinear phenomena in the grain screening and the structure of colloidal plasmas
In the case of thermodynamic equilibrium, an accurate description of nonlinear effects in grain screening can be obtained within the Poisson-Boltzmann (PB) approach describing the plasma as a two-component gas with Boltzmann distribution.The relevant equation for the case of a single spherical high-Z grain of a radius a in a plasma background reads ∆ϕ with the boundary conditions for the effective self-consistent potential ϕ ϕ (a) = Ze/a 2 , ϕ(∞) = 0, specifying the electric field on the grain surface and the potential at infinity.Here e is the charge of a positively charged plasma particle and n is the plasma concentration at infinity.
The conventional treatment based on the assumption eϕ kT 1 (2) yields, after linearization with respect to ϕ, the well-known DLVO solution [6,7] with the effective charge where r D denotes the Debye screening length.
It is clear that at short distances the condition (2) is definitely violated.Thus, the transition a → 0 with the DH limit is incorrect.In other words, in the case of a grain of a small size the nonlinear effects in screening may be significant and the applicability of equations (3,4) would break down.
To estimate the validity of the linear approximation, it is convenient to introduce the plasma-grain coupling χ = Ze 2 kT a giving the potential-to-kinetic energy ratio for a plasma particle on the grain surface.As mentioned above, its magnitude for DP and CCS with high-Z impurities is of the order of 10, which casts doubt on the validity of the linear DH theory for the description of screening.
Below we consider the problem of screening of a finite-size charge Z in a plasma background for the range χ 1 − 50 in two ways.The first one is the accurate numerical solution of the above PB equations.The other one is the method of MC computer simulations providing a microscopic description of screening.
The PB boundary problem (1) has been solved numerically, by using the shooting method and the second-order Runge-Cutta numerical algorithm [32].The MC simulations of screening were performed for the NVT-ensemble using the conventional Metropolis algorithm [33], within the finite model with the microscopic twocomponent plasma background represented by a large number of charged hard spheres confined in a spherical volume with the grain fixed in the centre.The goal of computations was the effective screened potential ϕ(r) and the charge distribution function Q(r) defined as the ratio of the total charge residing within a sphere of a radius r to the grain charge.
Let us say a few words about the choice of parameters.The PB theory is based on the notion of the mean field, which loses its sense for strongly coupled plasma background, in the case that Γ = e 2 kT d 1, as the plasma correlations become significant.Here d = (4πn) −1/3 is the average distance between plasma particles.Typical of CP are the values Z 1, Γ 1. Hereinafter we give the comparison of the results obtained within the two above approaches for Z = 25, Γ = 0.1 and 0.05, χ = 2, 10, 20.As follows from our computations, both approaches evidence (at strong plasma-grain coupling, and in a distinct contrast with the linear DH theory) the existence of an interesting effect connected with the accumulation of plasma charge on the grain surface ("plasma condensation"), which sharply affects the characteristics of screening, figures 1,2.While the asymptotic behavior of the screened potential at long distances retains its Yukawa-like form given by equation ( 5), the magnitude of the effective charge Z * can be well described by the DLVO theory only for small χ.For stronger plasma-grain coupling, in a sharp contrast with the predictions of linear screening theory, the effective charge approaches zero, which evidences a pronounced enhancement of screening in this case.
An important point is the existence of a critical magnitude of this parameter, χ 4 weakly depending on the other plasma parameters.It is interesting to note that this critical magnitude is much larger than unity, which means that the linear screening approximation is quite accurate even for the expansion parameter (2) ranging up to 4 − 5.
It is clear that the above phenomenon of nonlinear screening is of importance to the structural properties of CP.Its effect on the phase diagram for CCS can be illustrated based on the model of effective intergrain forces in the following way.
As mentioned above, the basic reference system of CP based on the notion of effective interaction is the Yukawa-system with the interparticle effective potential given by with two dimensionless parameters: the coupling Γ and the screening length ∆; x is the dimensionless distance.
Our study employs the connection between the dimensionless parameters Γ and ∆ of a YS and the microscopic parameters of CP, which can be established as follows.
Let us consider a two-component asymmetric strongly coupled plasma, which is the simplest example of CP.Here a good microscopic model appears to be a system of charged hard spheres interacting through Coulomb forces.In case the size of a plasma particle is negligibly small (in agreement with physical situation in CP), such a system can be described by three dimensionless parameters, such as the packing fraction for the colloidal component the charge asymmetry Z, and the plasma-grain coupling χ.
Under the assumption that the screening of the grains is produced purely by plasma background and that the screening can be described in terms of the linear DH theory for point charges, one easily gets the effective interaction in the form (6), with the parameters of YS determined by and the dimensionless distance specified as x = r/d.Here d = (4πn c ) −1/3 is the average distance between colloidal plasma particles; r D = (4πn bg e 2 /k B T ) −1/2 is the Debye screening length produced by the single plasma component.Thus, the parameters entering the effective Yukawa interaction are expressed herewith via the microscopic plasma parameters.Our further considerations are based upon the idea that the properties of CP can be described by an effective pair interaction in the form (6) even in the case that the nonlinear screening is significant.As mentioned above and shown in [13], the effective screened potential retains in this case the Yukawa-like form at long distances.The effective charge, however, should be found from the exact solution of the relevant PB equation; the background density n bg , which determines the Debye length, is assumed to be equal to the average plasma background concentration.Within such an approach, the account of nonlinear effects reduces to re-scaling the well known melting curve for YS [9] with the use of the relevant effective charge Z * instead of the bare charge Z.This latter can be evaluated by numerically solving the nonlinear PB equation for a single grain in one-component plasma background in a spherical cell with relevant parameters.An important point is that due to the connection between the microscopic parameters of two-component asymmetric plasmas and the parameters of the Yukawa model one can obtain important qualitative conclusions about a minimal charge asymmetry Z min needed for the formation of Coulomb lattices in CP.Namely, there is a connection between the parameters Z, Γ and ∆: which is the consequence of the relations ( 7), ( 8) and the global charge neutrality condition Zn = n bg .Therefore, the parameter Z specified via equation ( 9), which has the physical meaning of charge asymmetry, can be used for the description of YS instead of coupling Γ.In other words, the relation ( 9) makes it possible to transfer the melting curves onto the Z − ∆ plane.The results of computations of the melting curves are given in figure 3.As can be seen, there exists a minimal charge asymmetry Z min = 355 needed in order to obtain a crystal state.The same conclusion and a close value of Z min = 360 was obtained in [34] based on the Lindemann melting criterion for the case of specific effective graingrain forces.The lower melting curve in the figure 3 is close to that given in that work.However, we see that the allowance for the nonlinear screening results in shifting the melting curves to higher values of charge asymmetry Z at small packing fractions of a colloidal component.
It should be noted that our considerations are based on the effective Yukawa interaction in the form (6) which is expected to work in the case of dilute charged colloidal suspensions with high charge asymmetry and weakly coupled plasma background [35].The effects of nonlinear background screening are commonly accepted to be associated with induced many-body forces between colloidal particles and are expected to become relevant for moderate packing fractions.The present example shows that the nonlinear screening may be important in the case of small packing fractions as well.
A more accurate description of the structure of CP can be obtained by means of MC computer simulations based on the microscopic model of asymmetric twocomponent plasmas (TCP).As shown above, the nonlinear grain screening obtained within PB theory has the direct analogue in the MC simulations with the microscopic description of plasma background, i.e., the phenomenon of "plasma condensation" near the grain surface.This suggests that the above phenomenon should manifest itself in MC simulations of asymmetric TCP affecting its structural properties as well.
Below we give our results of MC simulations of strongly coupled TCP with the charge asymmetry up to Z = 100 based on the "primitive model" aimed at the elucidation of the nonlinear effects on the structural properties of TCP.
Within the "primitive" model, a TCP is considered as an overall charge neutral mixture of charged spherical grains in a compensating plasma background.In all the simulations we assume the size of a plasma particle to be negligibly small, in accord with the physical situation in CP.We performed MC simulations of such a system for canonical ensemble by using the conventional Metropolis algorithm and periodic boundary conditions [33].An accurate account of long-range Coulomb forces was achieved due to Ewald's summation procedure [36].
The idea of simulations was to study radial grain-grain and plasma-plasma distributions near the critical point χ If the coupling between the components χ is smaller than 4, the TCP grain-grain distribution exhibits an oscillatory behavior characteristic of a liquid phase.It means that the effects of screening produced by the plasma component, do not qualitatively change the properties of the colloidal component, and the latter behaves like one-component plasma.In figure 4, we can see that in this case (for χ = 2 and 3) the plasma-plasma distributions are characteristic of a gas phase.In the case of strong plasma-grain coupling χ > 4 the reduction in grain-grain correlations, and the appearance of correlations (on the length of the order of the grain diameter σ c ) in plasma-plasma distributions are observed, figures 4, 5.These indicate a pronounced enhancement of grain screening and the accumulation of plasma particles near grain surfaces.Remarkably, this was observed in all the simulations near the same threshold value χ = 4 within a wide range of other parameters of TCP regardless of the way of simulations.Direct visual observations of equilibrium configurations for strong plasma-grain coupling also evidence the accumulation of plasma particles near the grains, figure 6.
Thus, we see that the qualitative change in the structural properties near the point χ = 4 is a rather general feature of asymmetric TCP and the threshold value obtained in MC simulations of this system is in a good agreement with the studies of nonlinear screening of a single grain based on the continuous PB theory.
Grain screening in collisionless plasmas and the effects of trapped ions
As mentioned in the introduction, the problem of grain screening in collisionless background with regard to the effect of plasma particle loss at the grain surface has attracted much attention of the researchers.However, the effects of trapped ions remain in many respects poorly known.
The purpose of this section is an attempt to elucidate the properties and the role of bound ionic states in grain screening within the nonlinear collisionless model in the case of a grain charged by plasma currents.In particular, we are going to focus on the effects produced by various numbers of trapped ions on the charge densities and the effective screened potentials.
We start from the conventional Poisson equation for a single charged spherical grain of a radius a immersed in a plasma background ∆φ with the ion and electron densities n i (r) and n e (r) being specified as where and Here n ib/if (r) is the density of bound/free ions, n 0i/0e is the ion/electron density at infinity; φ(r) is the self-consistent effective potential; e is the absolute value of the electron charge; T i/e is the ion/electron temperature; s i/e = k B T i/e /m i/e is the thermal ion/electron velocity; m i/e is the ion/electron mass, and Z i is the ion charge number.
Also, here we introduced the notation The relations ( 12)-( 14) can be obtained by integrating the Maxwellian distributions over velocities taking into account the energy and angular momentum conservation laws and the limitations imposed by the presence of the absorbing grain.I.e., i) we take into account all the ion and electron trajectories which do not touch the grain, ii) we exclude from the phase space all the finite ion trajectories which intersect the grain surface, and iii) we exclude the outgoing free ion or electron trajectories, which previously met the grain.Notice that the above equations also follow from the stationary solution of the Vlasov equation with the appropriate boundary conditions (Maxwellian distributions at the infinity and zero value of distribution functions with positive radial velocity at the grain surface).
It should be noted that in the derivation of the density for bound ions, we also start from the Maxwellian distribution, though the finite trajectories do not reach the infinity and, therefore, cannot be coupled to the heatbath.Thus, we employ an additional assumption that the bound states are initially formed with equilibrium distribution.
The relation ( 13) contains a free parameter, the amplitude A, which determines the relative contribution of bound ionic states to the charge density.As mentioned in the Introduction, its value is indeterminate within the collisionless model, because the concentration of bound states cannot be related in any way to the ion concentration at infinity.However, some reasonable estimates for the magnitude of A can be obtained as follows.Consider the limit a → 0 in equations ( 12)-( 14).It can be verified that the value A = 1 can be found from the additional requirement for the distributions to be Boltzmannian, which corresponds to the case of thermodynamic equilibrium.It is natural to assume that, in general case, the value of A, though being dependent on various physical situations (i.e., on how the system reaches its steady state), would have a magnitude of the same order.
In order to solve the problem ( 11)-( 14) within the interval a r r max , we have to formulate the boundary conditions for the effective potential φ(r) φ(a) = φ 0 , (15) φ(r max ) = φ as . ( Here, the right boundary r max has to be chosen at a sufficiently long distance, r max r D , so that the potential is described by its asymptotic value φ as .The latter is known [17,18], and it reads The boundary value of the potential φ 0 at the grain surface is determined by the balance of plasma currents to the grain surface.In order to find it, we use the well-known equation [37] ω 2 pe Here ω 2 pσ = 4πe 2 σ n σ /m σ , s σ = (k B T σ /m σ ) 1/2 , t = T i /T e Z i , n σ is the particle density of σ species at infinity, and u = e|φ 0 |/k B T e is the sought-for dimensionless potential at the grain surface.
The two-point boundary value problem for the effective potential ( 11)-( 16) was solved numerically by using the shooting methods [32].The computations were performed for the following range of parameters: τ = T i /T e = 0.08 − 1.0, ρ = a/r D = 0.015 − 3.0, A = 0 − 10, The results of computations are given in the figures.In figure 7 the behavior of plasma charge densities associated with the calculated effective potentials are displayed within a typical range of parameters.As is seen, the bound ionic states tend to concentrate in the vicinity of the grain surface.The most remarkable feature in the behavior of the bound ion states is that, beginning with some critical distance r c , the density of bound states is strictly equal to zero.In figure 8 we give the relevant dependencies for r c obtained in our calculations.Notice that, with the increasing the grain size, the value r c diminishes.As a result, in the case of very large grains, for a 2 − 3r D , the bound states cannot exist at all.This conclusion is in agreement with the results of reference [14], where it was mentioned that the role of the bound states for large grain sizes is insignificant.
Let us show that this effect is connected with the asymptotic behavior of the effective potential inversely proportional to the square distance.The expression for the density of the bound states (13) contains a multiplier, θ-function accounting for the loss of ions with the trajectories meeting the grain.Its argument is given by At larger distances, the potential φ(r) may be replaced by its asymptotic expression (17).In this case, the argument of the θ-function It means that the density of the bound states is equal to zero (i.e. they are absent) at larger distances, where the potential assumes its asymptotic form.
In figure 9 the calculated effective potentials are displayed.As can be seen, the allowance for the bound ionic states for the amplitudes A = 0 − 1 results in rather insignificant changes in effective potentials, suggesting that the densities of free and bound ions adjust themselves self-consistently to produce very close potentials for different values of A. 1) with the calculated effective potentials for A = 0 (2), 0.1 (3), 1.0 (4).The ion-to-electron temperature ratio is τ = 0.08, the grain radius is ρ = 0.015 (left) and ρ = 1.5 (right).
Remarkably, for smaller grain sizes, a 0.01r D , the critical radius r c tends to increase indicating that the region where the potential takes its asymptotic form moves to larger distances.In this case, the effective potentials within the region r < r c are very close to those predicted by DH theory.This conclusion is in agreement with the theoretical results of the papers [16,20], as well as with the recent experiments [38], where the Yukawa type of the effective grain-grain interactions was demonstrated in the direct measurements.
Screening of a grain charged by plasma currents in strongly collisional background
In this section we consider the screening of a spherical grain charged by plasma currents in a weakly ionized high pressure gas.As will be shown below, the properties of grain screening in this case substantially depend on the type of boundary conditions (BC).In contrast to the works [24][25][26], where complicated semirealistic multigrain systems with relevant specific BC are considered, we are going to examine the simplest case of a single grain with the emphasis on the basic features of this problem.
Thus, we examine a single spherical grain of a radius a imbedded in a weakly ionized high pressure gas.In this case, it is natural to use the drift-diffusion (DD) approach, because the collisions of plasma particles with the neutrals play here a dominant role.Assuming two types of plasma particles (ions and electrons) only , we write the general time-dependent equations for the unknown ion/electron densities n i,e and self-consistent potential φ in the form Here, α is the coefficient of recombination, I 0 is the intensity of plasma ionization (we examine the case of uniformly distributed plasma sources).The expression for the current densities j i,e is as follows: where µ i,e and D i,e are the ionic/electronic mobility and diffusivity, respectively.These latter are assumed to be related by the Einstein's equation µ i,e = z i,e e i,e D i,e /k B T (here z i,e = ±1 is the ion/electron charge number).In a weakly ionized gas with dominating plasma-neutrals collisions, it is reasonable to assume that the ion and electron temperatures are equal.Thus, hereinafter we consider only the case that T i = T e = T .The grain charge emerges as a result of plasma currents due to the difference in electron and ion diffusivities.With regard to spherical symmetry, the relevant equation for the grain charge number Z reads where the subscript (r) denotes the radial component of a current.In order to formulate the BC, we admit that the system is confined within a spherical volume of sufficiently large radius R 50 − 500r D (where r D is the Debye screening length) with the grain placed at the center.The BC are specified at the surface of this sphere and at the surface of the grain.In our simulations, we consider the two basic cases and two types of BC, respectively.In the first case (I), the sources of plasma ionization, which compensate the losses of plasma particles due to the absorption on the grain surface, are assumed to be far from the grain (outside the spherical volume).The action of these sources is modelled by maintaining constant electron and ion densities on the surface of the sphere, n i = n e = n 0 .According to this, we write the BC for the densities n i,e n i,e = n 0 , r = R, and assume the rates of plasma ionization and recombination over the volume I 0 and α to be equal to zero.In the second case (II), we examine the problem with uniformly distributed plasma sources (I 0 = 0) with allowance for the plasma recombination over the volume (α = 0).Note, that in this case the quantities I 0 and α are related to the unperturbed bulk plasma density n 0 by the equation I 0 = αn 2 0 valid in the absence of the grain.The relevant BC read The BC for the potential at the grain surface have the form ∂φ ∂r = − Z(t)e a 2 , r = a and for the densities n i,e we use the BC [24] n i,e = 0, r = a appropriate for the case of strongly collisional background.We solved the above system of equations ( 19)-( 21) using the method of lines and the Gear's method.In addition, we performed a limited number of Brownian dynamics (BD) simulations based on the particle-in-cell (PIC) method [39] with spherically symmetric concentric cells and the BC corresponding to the case (I).In these simulations, the plasma background is modelled by finite numbers of particles of two types representing the ion and the electron components.The dynamics of the system is governed by the reduced Langevin equations of overdamped motion Here, x k is the radius vector of the k-th particle, and U is the potential energy of the configuration.The friction coefficient h and the random force F k (t) are determined by the properties of the heatbath (in our case the role of the heatbath is played by the high pressure neutral gas).Random force acting on k-th particle is specified by the Gaussian distribution which determines the probability for the momentum to be transferred to the k-th plasma particle during the time span ∆t.The random forces, which act on different plasma particles are uncorrelated.It is clear that the quantities h and D related to the ion and the electron components are different.
In the above expressions, we omitted the subscripts for simplicity.Note that the friction coefficient h can be expressed via diffusivity and temperature, hD = k B T , which enables one to establish the correspondence with the continuous DD approach.
A detailed presentation of the issues concerning BD and its relation to the continuous probabilistic approaches, such as Fokker-Planck and Smolukhovsky equations, can be found in references [40,41].Here we would like to point out that the overdamped BD represents the direct microscopic analogue to the DD approach, since the latter can be derived from the Smolukhovsky equations for one-particle distributions (i.e., within the additional mean field approximation).The aim of BD simulations was to test the results of the DD approximation.
The range of parameters is typical of the DP experiments in high pressure weakly ionized noble gases like Ne or Ar: plasma background coupling Γ 10 −3 ; plasma density n 0 10 10 cm −3 ; the density of the neutrals n 10 18 cm −3 ; radius of the grain a 10 −3 cm; electron-ion recombination coefficient α 10 −7 cm 3 /sec; the ratio of the Debye length to the grain radius r D /a 0.1 − 50.The ratio of diffusivities in all computations was fixed, A = D e /D i = 10 3 (with the exception for the BD simulations).The goal of the simulations was the final timeindependent density and charge distributions which establish themselves after sufficiently long period of relaxation.
The results of computations are given in the figures.In figure 10, we give the relative charge distributions for different types of BC.Remarkably, in the case of BC (I), we observe the Coulomb-type asymptotic behavior of the screened field with the effective charge determined by the asymptotic value of the charge distribution.Note that such an asymptotic behavior of the screened field may be viewed as a consequence of the Ohm's law for the problem under consideration.In contrast to the case (I), the screening in the case of ionization over the volume has a finite screening length 10 − 50r D .The computations performed for the same plasma parameters, in particular, for the same steady bulk density n 0 at long distances for the cases (I) and (II) indicate that there exist a sheath ranging up to 10r D independent of the type of BC (provided that the ionization rate is relatively low).At longer distances, a distinct difference in the asymptotic behavior is observed.The stationary grain charges acquired by the grain in both cases are nearly equal.
Figure 11 illustrates the behavior of the relative charge distributions as dependent on the rate of ionization.The bulk plasma density is held constant therewith due to the simultane- ous appropriate change of the recombination coefficients.The approximate straightness of the lines outside the sheath (on the log scale) suggests the exponential type of the screening at distances.Different rates of ionization (and recombination) correspond to the different slopes and the screening lengths, respectively.The higher is the intensity of ionization, the shorter is the length of screening.At higher rates, the relative indifference of the sheath is likely to break down, and the properties of screening approach the predictions of the DH theory (the bold line in figure 11).We see that, typically, the charging plasma currents in the presence of collisions result in the increase of the length of screening, as compared to the equilibrium DH theory.These results correlate qualitatively with those of reference [26] dealing with a more complicated case of non-isothermic nitrogen plasma.
Comparison of the continuous DD approach and the microscopic BD simulations shows a qualitative agreement between both cases, see figure 12.Some discrepancy (DD approach yields approximately 10% higher absolute value of the stationary grain charge) is, apparently, the result of microscopic effects in the plasma background in BD simulations.
Conclusions
Thus, we see that the properties of screening of high-Z impurities in colloidal plasmas may considerably vary depending on the physical processes in the plasma background.
The nonlinear effects in screening in the thermodynamically equilibrium case of a high-Z grain with a fixed charge (e.g., the case of colloidal suspensions) are essential for a strong plasma-grain coupling.The nonlinearity is associated with the accumulation of plasma particles on the grain surface and results in a sharp decrease of the effective charge as compared with the linear theory.The linear DLVO theory works well only for weak plasma-grain coupling, χ < 4. The nonlinear effects have a number of consequences for the structural properties of strongly coupled CP.In particular, they give rise to qualitative changes in the pair distribution functions and result in shifting the melting curves to larger magnitudes of charge asymmetry.
The grain screening in the collisionless background, with regard to the absorbtion of plasma particles by the grain, is close to the predictions of the DH theory (in the vicinity of grains) for the range of plasma parameters typical of DP and for small grain sizes (a r Deb ).At longer distances, we observe the asymptotic behavior of the effective potentials inversely proportional to the squared distance.The bound ionic states result in considerable changes in the plasma densities near the grain.However, they weakly affect the effective potentials in these conditions.The presence of the bound states is limited by some critical distance ( 2 − 3r Deb ), beyond which they cannot exist at all.
The processes of grain charging in strongly collisional plasma background result in a considerable deviation from the equilibrium DH theory.In case the plasma sources are placed at infinity, at long distances we observe the Coulomb field with a certain effective charge.The effect of screening manifests itself in the decrease of this effective charge as compared to the stationary grain charge.The smaller is the ratio of the Debye length to the grain size, the smaller effective charge is observed.In case the plasma sources are distributed uniformly over the volume, there exists a finite screening length depending on the rate of ionization.Typically, this screening length in the presence of plasma currents and strong collisions considerably exceeds the Debye radius.The stationary grain charge as well as the field within the sheath around the grain ( 10r D ) does not depend on the type of BC and on the ionization rate, provided that this latter is relatively low.At higher ionization rates, the properties of screening approach the predictions of DH theory.
In conclusion, we would like to mention that an important problem, which still remains poorly examined, is the properties of grain screening in a weakly collisional and intermediate case.It would be interesting to study this issue within the Bhatnagar-Gross-Krook model, or based on the Fokker-Planck equations.Of particular interest is the collisionless limit obtained within these approaches, which could be compared to the results of the paper [21].Further valuable information on the above issues could be obtained by means of microscopic computer simulations in the spirit of reference [23].
4 .
In particular, we performed a number of simulations for different values of plasma-grain coupling χ though for a fixed value of coupling Γ c = Z 2 e 2 /k B T d c in the colloidal component (we use here a slightly different definition for the average interparticle distance d c = (4πn c /3) −1/3 ).The range of parameters was as follows: the charge asymmetry Z = 10 − 100, volume fractions of colloidal component v c = 0.001 − 0.1, the coupling χ = 1 − 50.Note that these parameters are connected with the coupling Γ c by the relation Γ c = Zχv c 1/3 .(10) Therefore, by varying the charge asymmetry Z of a TCP, one can change the parameter χ while holding the above coupling Γ c constant.The results for Z = 10; 15; 24; 60; v c = 0.01; χ = 2 − 40 are presented in the figures.The most remarkable result consists in a pronounced change in the behavior of the system near the point χ 4.
Figure 5 .
Figure 5. Radial grain-grain distribution functions for infinite TCP for the same grain-grain coupling Γ = 26 and packing fraction v c = 0.01 (liquid state).Solid line: χ = 2; dashed line: χ = 8; the charge asymmetry Z = 60 and 15.The unit of distance is d c .
Figure 6 .
Figure 6.Equilibrium configuration for the plasma component near a single grain, Z = 100; the coupling in the plasma background is Γ p = 0.05; χ = 20.The unit of distance is σ c .
Figure 10 .
Figure 10.Comparison of charge distributions for different types of BC for the same stationary bulk plasma parameters.The Debye length is r D /a = 10 for (1) and (1a), and r D /a = 2 for (2) and (2a).Dashed and solid lines relate to the BC (I) and (II), respectively.
Figure 12 .
Figure 12.Comparison of the results of DD and BD simulations for the same parameters, a/r D = 0.373, A = 10.0.Left: relative charge distributions.Right: comparison of ion (1) and electron (2) densities obtained in DD approximation (dashed lines) and in BD simulations (solid lines). | 9,277.8 | 2003-01-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Effect of Pile Driving on Ground Vibration in Clay Soil: Numerical and Experimental Study
In this study, peak particle velocity (PPV) values for driving three piles with diameters of 40 cm, 50 cm, and 70 cm in a clayey soil through the impact piling method are investigated by an experimental study and a numerical simulation. An experimental study is carried out on a scale of 1:20 of the operation. Numerical simulation is performed by using an axisymmetric model in PLAXIS 2D finite element software. Properties of the soil and the piles used in the experimental study are obtained from geotechnical tests and employed in the numerical simulation. The model has been verified by comparing the acquired PPV values with those measured in the experimental study. The results show a good agreement between the computed values and the experimental data. Moreover, measured peak particle velocities in the experimental study indicate that an increase in the diameter of the pile can increase the level of ground vibration. Some sensitivity analyses have been performed by numerical modeling to determine the effect of soil and pile properties on the changes of PPV. Also, increase in friction angle of the soil and pile diameter and reduction in elastic modulus of soil will increase the level of ground vibration. The results indicate that the amount of PPV at a distance of 100 cm is about 10.33% of the amount of PPV at a distance of 25 cm from the impact site to the pile with a diameter of 3.5 cm. In addition, this amount of reduction for pile with a diameter of 2.5 and 2 cm is equal to 8.31% and 12.77%, respectively.
Introduction
In most of the construction operations where an adequate ground support is not available, pile diving for construction of foundations is required. Thus, it is essential to predict and estimate the PPV value of ground vibration induced by pile driving to prevent disturbing the residents and damages to the structures adjacent to the operation site.
Many researchers, including; Wiss (1981), Attewell et al. (1992), Fellenius (2008, 2014a, b), and Deckner et al. (2017) have investigated the ground vibrations resulting from impact pile driving method and presented various equations for estimation of the peak particle velocity (PPV) value and particle image velocimetry (Jiang et al. 2020). Most of the abovementioned researchers concluded that attenuation of ground vibration is dependent to the hammer impact force and the radial distance from the vibration source. However, Massarsch and Fellenius (2008) presented other equations for estimation of the level of ground vibrations due to impact pile driving. They concluded that ground vibration due to pile driving is based on three different waves. Spherical waves that are generated from the pile toe, cylindrical waves which are emitted from the pile shaft and Rayleigh waves which are generated from the interaction of these two waves on the ground surface. In addition to these waves, dynamic soil resistance at shaft and toe of the pile have to be taken into account for estimation of level of ground vibration due to impact pile driving. Field study is counted as another method of estimating and predicting the ground vibrations caused by pile driving. In this method, propagation of ground vibrations induced from pile driving is investigated by measuring and analyzing the peak particle acceleration or peak particle velocity of vibrations at specified distances from the pile driving operation. Various studies have been conducted in this area, including studies by Kim and Lee (2000), Hadjuk et al. (2004), Dungca et al. (2016).
In addition to the aforementioned methods, experimental study in small scale has been done by researchers like; Musir and Abdul Ghani (2013), Mahmood and Abdulrahman (2017) to inspect the ground vibrations caused by pile driving. Musir and Abdul Ghani (2013) investigated the effects of various parameters in propagation of ground vibrations that affect the surrounding structures by means of a scaled laboratory test. The soil used in their study was river sand and the piles were squared piles with 7.5 mm  7.5 mm, and 10 mm  10 mm diameters and the length of 300 mm. The hammer weight and the height of fall were considered 1200 g and 150 mm, respectively. Results of their study showed that the vibration at the bottom of the building were higher than the top of the building and the highest value for particle velocity was recorded at the points close to the piles. Moreover, researchers found that smaller piles generate more vibrations at the top of the structures.
In this regard, Mahmood and Abdulrahman (2017) undertook an experimental study in small scale and considered a concrete pile with dimensions of 20 mm  20 mm and a container made of steel in dimensions of 1200 mm  1200 mm  900 mm to evaluate the changes in the values of PPV at the distances 2.5d, 5d, 10d, 17d, 20d and 25d from the pile. A sandy soil was used in their study. The hammer weight was 1.68 kg and the hammer height of fall was considered 279 mm. The results indicated that the maximum particle velocity was recorded at the points close to the pile and at the penetration depths of 20 cm to 24 cm.
In the recent decades, due to an increasing development in numerical analysis of geotechnical problems, use of geotechnical software to investigate the propagation and attenuation of ground vibrations caused by pile driving has become more popular. In this regard, Masoumi et al. (2007) developed a numerical model using the Finite Element-Boundary Element model by ABAQUS software to predict the inductive ground vibrations from installation of piles through vibratory and impact pile driving methods. In their analysis, soil medium was considered elastic with hysteresis damping and separation between the soil medium and the pile had been neglected. In case of impact pile driving, the results showed that body waves were attenuated at points located far from the pile and Rayleigh waves, independent of the penetration depth of the pile, had the most influence on the ground vibrations at the points on the ground surface. Masoumi et al. (2009) performed another study using the same FE-BE model by ABAQUS software. In this study, the non-linear behavior for the soil around the pile and also the dynamic interaction between the soil and the pile had been taken into account. Researchers concluded that this method presents more accurate results relating to the prediction of ground vibrations. Madheswaran et al. (2009) analyzed the peak particle acceleration induced by a piling operation in a real condition through numerical modeling using finite element method by PLAXIS software. After verifying the results of their numerical modeling, researchers investigated the impact of concrete trenches on absorption of ground vibrations due to pile driving. The acquired results from numerical modeling indicated that the finite element analysis done by PLAXIS overestimated the peak particle acceleration about 20% more than the field data. By analyzing the obtained PPV from pile driving at different distances from the pile, researchers concluded that the presence of concrete-filled trenches at specific distances from the pile considerably affect the absorption of ground vibrations. Khoubani and Ahmadi (2012) simulated the continuous penetration of a pile to a desired depth, using the commercial code ABAQUS. In their study, the effects of plastic deformations around the pile and the frictional contact between the pile and the soil on the level of vibrations had been taken into account. Furthermore, continuous pile penetration modeling allowed the researchers to predict the magnitude of vibrations at different depths. The maximum value for particle velocity in their study was related to the penetration of the pile to the critical depth of penetration. Likewise, the PPV value was higher at lower depths, especially at points close to the pile. This value increased on the ground surface for points located at farther distances from the pile.
Recently, Rezaei et al. (2016) investigated the changes in the values of PPV at radial and vertical distances from impact pile driving operation by modeling the continuous penetration of the pile to the desired depth through ABAQUS software. They also investigated the effects of soil properties and pile geometry on the level of vibrations. The results indicated that the Mohr-Coulomb model can successfully simulate the pile penetration into the soil. An increase in cohesion and friction angle and a decrease in elastic modulus of soil increase the PPV value.
Most of the numerical studies on the ground vibrations due to impact pile driving have used the data presented by Masoumi et al. (2009) to validate the numerical modeling and compared the computed PPV values with those presented by Wiss (1981). On the other hand, experimental studies about inductive ground vibrations from impact pile driving method have been performed on sandy soils. In this study, ground vibration due to impact pile driving in a clayey soil is investigated through an experimental study in small simulating scale and also numerical modeling through finite element PLAXIS 2D software. To validate the results of numerical modeling, the soil and pile properties used in the experimental study have been extracted by implementing geotechnical tests and employed in the numerical modeling. Afterwards, the computed PPV values are compared to the field data. Moreover, sensitivity analysis is performed to determine the effects of pile geometry and soil properties on the level of ground vibration.
Experimental Study
In order to investigate the level of ground vibrations due to driving three piles with diameters of 40 cm, 50 cm and 70 cm and the length of 600 cm in a clayey soil, an experimental study in scale of 1:20 of a hypothetical pile driving site with 28 m width is carried out. In this study, a clayey soil container is used to simulate the pile driving site and three wooden cylinders are utilized to simulate the piles. A steel box is also used as the pile driving hammer.
Clayey Soil Container
Due to simulating the pile driving process in scale of 1:20, the container used in this study is made in dimensions of 1.4 m  1 m  0.5 m. Inside the container is insulated in order to prevent soil penetration into the outer space and also to absorb the vibrations emitted from the hammer impact.
Piles Used in Experimental Study
For simulating the piles used in the hypothetical pile driving operation, three wooden cylinders in diameters of 2 cm, 2.5 cm and 3.5 cm and the length of 30 cm have been made to simulate the piles with diameters of 40 cm, 50 cm, and 70 cm and the length of 600 cm, respectively. Figure 1 show the wooden cylinders made for the simulation of the piles.
Simulated Pile Driving Hammer
The hammer weight considered for the current study was 1000 g. Weight of the steel box with dimensions of 60 mm  60 mm  60 mm was 700 g. So as to reach the required weight, a 300 g box has been attached to the steel box. The hammer height of fall has been considered 200 mm. Figure 2 shows the simulated pile driving hammer.
Clayey Soil Deposit
The soil used in this study is a clayey soil. The constituent elements of this soil have been identified through X-ray fluorescence test and presented in Table 1.
Experimental Study Procedure
In order to estimate the PPV values in predetermined distances from driving three piles into the soil, piles have been placed at the specified point. After inserting the pile, the seismograph (SPSEISw-3) sensors were located at 25 cm, 50 cm, 75 cm and 100 cm intervals from the pile. The next step involved placing a wooden box with 50 cm length around the pile to prevent the hammer from deviating from the pile head. Vibrations were measured for each blow, and this process continued until the full penetration of the pile into the soil. Figure 3 illustrates the placement of sensors (CH1 to CH2) as well as the wooden box used to prevent the deviation of the hammer.
Test Results
Peak vertical velocity (PVV) values for driving each pile into the depth of 30 cm through impact driving The seismograph (SPSEISw-3) sensors were located at 25 cm (CH1), 50 cm (CH2), 75 cm (CH3) and 100 cm (CH4) intervals from the pile. As the distance from the impact site increases, the consistency between the results increases. The most inconsistent results is in the CH1, which is the sensor closest to the impact site. The process of PPV changes in CH2 to CH4 (all penetration depths) is almost the same.
After converting the units from mV/g to mm/s, PVV values are shown in Fig. 4a to c. In this figure, PVV values are shown at depths of 5 cm, 10 cm, 15 cm, 25 cm and 30 cm from the ground surface. PVV value increased by reaching to the critical depth of penetration and it has been reduced by penetrating to the lower depths. The recorded particle velocities at each point are extracted from Fig. 6a to c and shown as PPV at specified distances from the pile in Fig. 5.
The maximum PPV value recorded for penetration of all three piles was at the distance 25 cm from the piles and its value had a slump at distance 100 cm from the piles. The maximum rate of attenuation of ground vibration in penetration of all the piles to the desired depth was related to the distances 25 cm to 50 cm from the piles. After that, the trend of attenuation of ground vibration declined to slightly less than 2.5 mm/s. Among the piles tested in the current study, the highest value of PPV is related to the penetration of the pile with 3.5 cm diameter with the value of almost 22.5 mm/s. The PPV value diminished with a decrease in the diameter of the piles.
Geotechnical Laboratory Tests
Various geotechnical tests such as; direct shear test and uniaxial compressive strength test have been performed on the soil used in the experimental study to determine the friction angle, cohesion and elastic modulus of the soil. To determine the elastic modulus of the pile with 2.5 cm diameter, a uniaxial compressive strength test has been performed on this pile. The results of geotechnical tests on the soil and the pile are presented in Figs. 6 and 7 and Table 2.
Numerical Simulation of Impact Pile Driving
Numerical simulation has been performed by means of PLAXIS 2D software. This software is capable of modeling and analyzing static and dynamic geotechnical problems in two forms, i.e. plain-strain and axisymmetric (Brinkgreve and Vermeer 1998). A 15-nodded axisymmetric model is used in this study. Half of the pile and the medium are modeled in the numerical simulation. The pile has a diameter of 50 cm and a length of 6 m, which only 25 cm of its diameter has been modeled due to numerical simulation in axisymmetric form. To estimate the PPV values at distances 5 m to 20 m from the pile driving operation, width of the model has been extended to 28 m. In dynamic analysis of pile driving through PLAXIS 2D, Pile is embedded into the desired depth and a distributed dynamic load is applied on the pile head. Interface element has been employed at the contact area between the pile and the soil. Boundaries are set in two forms; standard fixities on both sides and the bottom of the model to prevent deformations around the model and dynamic absorbent boundaries to prevent reflection of seismic waves into the model (see Fig. 8). Mohr-Coulomb model is assumed for the soil and linear-elastic model is used for modelling the pile. Since the estimation of level of ground vibration has been considered, pile is regarded as a rigid body which transfers the energy generated from impact of the hammer to the pile head into the surrounding soil (Khoubani and Ahmadi 2012). Properties of the soil and the pile with 2.5 cm diameter obtained from geotechnical tests (Table 2) have been used in the numerical simulation. Mesh generation is done as medium in points located far from the pile and in areas close to the pile; mesh generation is done as fine (see Fig. 9). The calculations consist of three phases of plastic analysis, dynamic analysis at the time of the hammer impact to the pile head, and the second dynamic analysis phase for a complete dynamic cycle. The amount of applied force on the pile head and the duration of the dynamic cycles are selected based on the aforementioned experimental study on this pile in scale of 1:20. PPV values are extracted from the velocity-time curve for points located at 5 m, 10 m, 15 m and 20 m from the pile (see Fig. 10).
Verification of the Model
PPV values for driving a pile with 50 cm diameter to the depth of 6 m have been extracted from the velocity-time curve in PLAXIS software, shown in Fig. 10, for the points located at distances 5 m, 10 m, 15 m and 20 m from the pile. Moreover, The PPV values recorded in the 1:20 scale experimental study on the same pile have been extracted from Fig. 7. These values are shown in Fig. 11. Both of the obtained PPV values are compared in Fig. 12. Considering the Fig. 12, it can be seen that the PPV values within 5 m from the pile were approximately equal in both methods. Numerical study overestimated the PPV values about 21.2% at distances of 10 to 20 m from the pile. In general, the computed values are in good agreement with the field data.
Sensitivity Analysis
Sensitivity analysis was performed by means of numerical modelling to identify the effects of different parameters such as pile diameter and soil properties including friction angle and elastic modulus on the changes in the level of ground vibration. In order to determine the effect of these parameters on the level of ground vibrations, an impact force of 1.5 MN was considered in the numerical simulation. Piles used in the sensitivity analysis had 60 cm, 80 cm and 100 cm diameter. Figure 13 shows the PPV values due to the increase in the elastic modulus of the soil. The soil elastic modulus was considered 20 MPa in the first analysis. 50% and 66.5% have been added to its amount in the subsequent analyses. By increasing the elastic modulus of the soil to 66.5% more than its initial value, 37.5% is diminished from the PPV value at the distance of 5 m from the pile. Most of the changes in level of ground vibration have been experienced at distances close to the pile, whereas the percentage of these changes has been almost the same at farther distances.
Friction Angle of the Soil
In order to study the changes in the level of ground vibration due to the changes in the friction angle of the soil, analyses have been done by considering 25°, 35°F ig. 10 Velocity-time curve Figure 14 shows the PPV values for three friction angles used in the sensitivity analysis. With 28% increase in friction angle, the PPV value increased 6% within 5 m from the pile, and by increasing the friction angle to 45% more than its initial value, PPV increased by 11%. Most of the changes in level of ground vibration due to variation of the friction angle were related to the points close to the pile and these changes were negligible at points located at distances far from the pile.
Pile Diameter
The effect of pile diameter on the value of PPV is shown in Fig. 15. Diameters of the piles used in this analysis were 0.6 m, 0.8 m and 1 m. With 25% and 40% increase in the diameter, the PPV value increased about 10% and 28% of its initial value at the distance of 5 m from the pile, respectively. A 0.2 m increase in the diameter of the pile, resulted in an approximately 30% increase in PPV value at distances 10 to 20 m from the pile. By increasing the pile diameter from 0.8 m to 1 m, there was a 13.5% increase in the level of ground vibration. In order to better understand the research process, the modeling framework is as shown in Fig. 16.
Conclusions
In this study, estimation of level of ground vibration due to impact pile driving in a clayey soil has been done by implementation of an experimental study and also numerical simulation by means of finite element PLAXIS software. Properties of materials used in the experimental study were extracted from geotechnical ig. 14 PPV values at ground surface points for various friction angles The PPV values obtained from the experimental study were compared with the computed PPV values in numerical simulation and indicated a close agreement between the results of two methods. Moreover, sensitivity analyses were performed to identify the effects of different parameters on the level of ground vibration induced by pile driving. The highest value of PPV is related to the penetration of the pile with 3.5 cm diameter with the value of almost 22.36 mm/s. The amount of PPV at a distance of 100 cm is about 10.33% of the amount of PPV at a distance of 25 cm from the impact site to the pile with a diameter of 3.5 cm. And this amount of reduction for pile with a diameter of 2.5 and 2 cm is equal to 8.31% and 12.77%, respectively. The results of this study are summarized as below: • The proximity of the results acquired from the experimental study and numerical modelling indicates that the Mohr-Coulomb model in PLAXIS software can be used in dynamic analysis of pile driving. • As the distance from the pile increases, the PPV value decreases. Highest attenuation rate of ground vibrations is related to the points close to the pile and as the distance from the pile increases, the attenuation trend levels out. • The results obtained from the experimental study as well as the sensitivity analysis by numerical modelling showed an increase in PPV value due to an increase in the diameter of the pile. • An increase in the elastic modulus of the soil will lead to a decrease in the level of ground vibration. • An increase in the friction angle of the soil increases the level of ground vibration at the points close to the pile. • Pile diameter and elastic modulus of soil have the greatest effect on the changes of PPV value and the friction angle of soil has the least effect on these changes. • The effect of friction angle and elastic modulus of soil on the changes of PPV value are greater at points located close to the pile and at farther distances, level of ground vibration is under the influence of geometrical damping and independent to the changes in different parameters. • An increase in the diameter of the pile causes a noticeable increase in the level of ground vibration at all the points.
Authors contribution The main research targets were expanded by Hirad Shamimi Noori and Reza Shirinabadi. Hirad Shamimi Noori and Reza Shirinabadi established the models and calculated the results. Ehsan Moosavi analyzed the calculated results and edited the draft of manuscript. Mehran Gholinejad managed laboratory tests and collaborated with the team in analyzing experimental results. All authors replied to reviewers & apos; comments and revised the final version.
Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 5,546.8 | 2021-08-05T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Impact of sustainable project management on project plan and project success of the manufacturing firm: Structural model assessment
This study has aimed to investigate the impact of sustainable project management on sustainable project planning and success in manufacturing firms. Data was collected from project management professionals in a manufacturing firm in Malaysia. A total of 231 responses were analyzed using the partial least square (PLS) method. The findings revealed that sustainable project management has a significant impact on sustainable project success and sustainable project planning. Sustainable project planning is positively correlated with sustainable project success. The results also indicated that sustainable project planning mediates the effect of sustainable project management on sustainable project success. The findings have significant insight into the body of knowledge of the project life cycle and indicated that sustainable project planning is a crucial tool attributed to project management towards the project success of the manufacturing firm. The results can be used as a guideline for organizations, providing direction in project management to achieve sustainable development for business.
Introduction
Sustainability is the capacity to be maintained at a certain level. In this study, sustainability refers to project management's economic, environmental, and social benefits in a manufacturing firm. Sustainability is an integral part of project management practices that maintain the economic, environmental, and social (triple bottom line) future benefits. Kivilä et al. [1] indicated sustainable project management (SPM) with Triple Bottom Line (TBL)-economic, environmental, and social has a significant impact on project success. SPM focuses on the planning, monitoring, controlling, and ensuring project delivery process along the project life cycle. Project managers are responsible for looking at the overview of project management with the integration of sustainability as one of the project's objectives [2][3][4]. Sustainable project management can lead to maintaining sustainable project planning, which reflects the manufacturing firm's sustainable project success. The integration of sustainability into project planning practices is essential to ensure the project management process and planning. In this study, sustainable project planning (SPP) comprises three main dimensions: managerial control, risk response, and work consensus. In Malaysia, the manufacturing firm lacks innovation, competitiveness, labor-intensive industries, and inadequate enablers [5]. Tay et al. [6] stated that due to limited raw material, data storage capacity, handling variability, and streaming stability, manufacturing companies in Malaysia lack comprehensive business sustainability. Business sustainability is often defined as managing the triple bottom line. Terrafiniti [7] reported that the work of sustainability managers is slowly entering established practice in the manufacturing industry. Many organizations have adopted a sustainability approach in Malaysia; however, there is still great variability in practice, and sustainability managers and project planners are often hampered by resistance, apathy, and misunderstanding.
The role of project planning facilitates project management throughout the project life cycle and leads to the project's success [8]. To encourage the integration of sustainability [9] in project management and project planning, sustainability measurement dimensions need to be added as a project success criterion. There are six dimensions to evaluate sustainable project success (SPS), project efficiency, stakeholders, team, business success, preparation for the future, and sustainability. The main objective of this study is to explore what are the sustainable measurement dimensions related to sustainable project management to predict sustainable project planning and sustainable project success in the manufacturing industry.
Alsawafi et al. [10] and Elkington [11] explained the concept of sustainability business, which is known as the Triple Bottom Line (TBL): Economic, Environmental, and Social. TBL is a measurement of performance incorporate. TBL can lead manufacturing companies to focus on social and environmental concerns and generate profits [12]. The manufacturing companies face pressure in operating their business activities [13,14]. This is because of the unstable of economic (recession), environmental (depletion of natural resources), and social (labors and human rights) issues [11]. Sustainable project management can change the policies of the business organizations to achieve specific objectives as success criteria [15]. The governments highly acknowledge the responsibility to adopt sustainability into project development strategies in many countries due to increases in population and limited resources, especially in the area of sustainable development [16]. Consequently, a sustainable project management mechanism can lead to maintaining a sustainable project plan, leading to sustainable project success for the manufacturing companies. This study emphasized that sustainable project management is a crucial tool that predicts sustainable project plans and sustainable project success.
Underpinning theory
This study has adopted the concept of sustainability in project management [16]; developed the relationship between sustainable project management and project success. Yu et al.
[17] stated that sustainable project success is a key determinant that leads to the success of projects of the companies. Following the concept of sustainable project management [16], we propose an approach that considers sustainability from the triple bottom line (TBL) perspective (e.g., economic, environmental, and social). In this study, sustainable project management (economic, social, and environment) reflects sustainable project planning (managerial control, risk response, work consensus), and sustainable project success (efficiency, team business success, preparation for future, sustainability). Shokri-Ghasabeh and Kavoousi-Chabok [18] indicated that sustainable project management contributes toward the success of project encourage more organizations to practice sustainability in project management, as sustainability is one of the measurements in project success.
Sustainable project management
Project management is the processes, methods, knowledge, skills, and experience to achieve specific project objectives. Sustainable project management refers to implementing projects that will serve to support future generations and society in economic, environmental, and social benefits [19]. This study evaluates sustainable project management by the manufacturing firm's economic, environmental, social benefits. Reducing the use of natural resources, liquid waste, biodiversity, and energy can lead to sustainable project management. Besides, the manufacturing firm or company's relationship with the local community, labor practices management, and human rights management is a crucial element that assists in maintaining sustainable project management. Sustainable development has been widely promoted and has started a new development paradigm in governmental and non-governmental organizations. Many organizations are moving towards sustainability in project management, which could incorporate sustainable project planning and sustainable project success of the company.
Sustainability has also moved towards changing the profession in project management. The researchers explored the importance and practices of sustainability in project management [20][21][22]. Martens and Carvalho [23] suggested that the sustainability principle with TBL dimensions (economic, environmental and social) should be included in the project management process, which leads to integrating sustainable project planning and contributing to business success organizations. Martens and Carvalho [24] explained the challenges of sustainability in the project management function. Dvir et al. [25] dispute that project planning and project success should also be included in the sustainable project management context. Martens and Carvalho [23] analyzed the sustainable project management from different manufacturing industries and applications and clustered it into the TBL (e.g. economic, environmental and social) dimensions. In their study, the main concerns of the economic variable are the company's financial performance and its advantages from social and environmental practices, cost management, stakeholder management, and business ethics in economic performance. Thus, we postulated that: H1: Sustainable project management has a significant impact on sustainable project success.
H2: Sustainable project management has a significant impact on sustainable project planning.
Sustainable project planning and success
Project planning contains the project activities [25], schedule, cost and resources planned within the project life cycle of the business organization. This study emphasizes sustainable project planning, which is evaluated by managerial control, risk response, and work consensus in the manufacturing firm. The managerial control, project task, process, and solution for potential risks are the crucial elements for measuring sustainable project planning. Sustainable project success refers to the development that meets the needs of the present. This study considers project efficiency, stakeholders, team, business success, preparation for the future, and sustainability to evaluate the manufacturing firm's sustainable project success. For the sustainability of project success, economic costs and benefits of government policy or business strategy are required to be taken into consideration. The key elements of project efficiency (e.g. cost or budget, completion on time, and scope of the project) consist of sustainable project success. The technical specification of project meetings, solving customers' problems, and improving customers' quality of life are crucial components for evaluating sustainable project success. Besides, the realization and perpetuation of economic, environmental, and social benefits are major elements for the success of the sustainable project. The team's productivity, profitability, market share, and new technologies can lead to sustainable project success for the manufacturing firm.
In this study, sustainable project success refers to the company's efficiency, stakeholder, teamwork, and preparation for future business success. Project planning is the most critical step to be performed in the project management process [26] and sustainable project success in the company. Managing a project is challenging, as it is challenging to construct a project plan suitable for all types of the project due to different projects operation. Therefore, project monitoring and controlling should be applied to adapt to the fast-changing situation in a project environment. Some researchers argue that sustainability in project planning can help moderate the dynamic project environment by reducing the uncertainties and pre-determine underlying problems within the project context [27]. Project risk is reviewed during the planning process, and risk management help to mitigate the high-risk activities; thus, the project uncertainties can be a remedy through the planning process [28]. Besides, proper detailed planning allows a project team to understand the project objectives clearly, and lead the behavior of a project to improve the efficiency of the execution. The previous literature explored sustainable project planning efforts that affect project success [17]. However, an integration of sustainability in project planning into sustainable project management and the impact on sustainable project success has yet to be realized and put into a research model. Therefore, we postulated that: H3: Sustainable project planning has a significant impact on sustainable project success.
Mediating effect of sustainable project planning
The project success corresponds to good project management, which comprises the objectives and benefits envisioned by the project team. It is good to include sustainability in the initial objectives, thus enabling the organization to enjoy the sustainable project outcome. For a project-oriented company, Kerzner [29] stated that project success is closely related to the results through the projects, as these are the company's fundamental business and core competencies. The performance measurement and planning ability are also considered part of the measurements for project management, which contribute to the success of projects [30]. Project Management Institute [15] explained the measures of project management time, cost, scope and quality, resources, and risk, which are widely applied in project management for the company's success. The previous studies [16, 23, 24, 31] extended and classified the dimensions of project success such as project efficiency, impact on the customer, team, business success, preparation for the future, and sustainability. Dvir et al. [25] examined the relationship between sustainable project management and sustainable project success. Carvalho et al. [32] stated that the sustainability dimensions in economic, environmental, and social contexts could influence the manufacturing company's project success. Thus, we postulated that: H4: Sustainable project planning mediates the effect of sustainable project management on sustainable project success.
Based on the review of literature and concept of underpinning theory, Fig 1 shows the conceptual model of this study.
Operationalization of construct
The measurement instruments of this study were adopted and modified from the review of the literature. In this study, sustainable project management consists of three major dimensions of the triple bottom line (TBL) such as economic, social and environmental, which measured with nine items adopted from Martens and Carvalho [23], using a five-point Likert scale ranging from 1 (unimportant) to 5 (very important). Similarly, sustainable project planning is entailed with three dimensions such as managerial control, risk response, and work consensus, which evaluated with ten items modified from [17] and used a five-point Likert scale ranging from 1-5, whereas 1 for "very little extent", 2 for "little extent", 3 for "some extent", 4 for "great extent" and 5 for "very great extent". Sustainability project success is contained five dimensions such as efficiency, stakeholder, team, business success, preparation for future, and sustainability in this study, which measured with eighteen items modified from Martens and Carvalho [24], using the five-point Likert scale ranging from 1 (very little extent) to 5 (great extent).
Data collection and sampling method
We used a self-administered questionnaire for distributing to the respondents using an online survey form. For distributing questionnaires, the online survey method is easy to reach the respondents through the internet [33]. The respondents' addresses are obtained from the Project Management Institute (PMI). PMI is a well-known global non-profit project management organization for project management. In this study, the respondents' list is limited to members . We used a stratified random sampling method for collecting data. We collected the profile and email ID of the project manager, engineer, director, and CEO of the manufacturing company from the site of PMIMY. We distributed the questionnaire with a consent form and politely request the respondents to participate in the study. We ensured the respondents that the survey has been conducted only for academic purposes, and there is no personal identification, and responses will remain anonymous.
We distributed the questionnaires with Sekaran and Bougie [33] suggested that it is suitable for selecting the participants in this study due to different levels of management having another point of view in project management. Data was collected from the low-level project management (e.g., Project Engineer), middle-level management (e.g., Project Managers and Senior Project Managers), and top management (e.g., Managing Director and CEO) of the manufacturing firm in Malaysia. Online survey questionnaire links were distributed through social media platforms such as LinkedIn and WhatsApp group. A total of 300 questionnaires with survey links were distributed to respondents. We received a total of 238 return questionnaires out of 300. During the data screening process, we identified 7 incomplete responses, and thus, 231 valid responses were received for data analysis with a response rate of 77%. To evaluate the reliability of the sample size, we used G-Power 3.1 statistical tool. The results indicated with the effect size of 0.15, error 0.05, and the number of predictor 2, G-Power suggest sample size of 107 with the actual power of 0.95 to examine the conceptual model. Reinartz et al. [35] postulated that using the partial least square (PLS) method, the minimum sample size is required 100. In this study, we have collected 231 valid responses from the manufacturing firm, exceeding the minimum sample size requirement. Thus, the sample size of this study is acceptable and adequate for the analysis.
Common method variance
Common method variance plays a significant role in social science studies due to the single source of data. Podsakoff et al. [36] postulated that Harman's single-factor test is crucial to measure the common method variance. In this study, we used Harman's [37] single-factor test. The result indicated that the highest factor estimated 21.53% of the variance, which is lower than 50%, indicating no common method variance in this study.
Demographic information
The demographic information and outcome of the responded are shown in Table 1. The results revealed that Middle-level management ranked the highest at 57.2% of the total sample size. Meanwhile, 36.8% of respondents are holding low-level management. The rest 6.0% are holding a significant position (high-level management). In academic qualification, the majority of the respondents are bachelor's degrees (67.3%) followed by 31.7% respondents with master's degrees, and only 1% respondent is Doctoral degrees. Most of the respondents have less than 5 years in project management experiences of 35.6%; while 32.7% of respondents have 6 to 10 years of project management experience and 11.9% and 10.9% of respondents had have 11 to 15 years and 16-20 years of project management experiences, and rest 8.9% of respondents have over 20 years of project management experiences. In the highest project Capital expenditures (CAPEX) category, 47.5% of respondents have experience in managing over RM20 million of project value in a single project. Following by the second "less than RM5 millions" and third "less than RM1 mil" rank of the CAPEX amount with 39.6% of respondents, followed by 8.9% not exceeding RM10 million and 4.0% of them is not exceeding RM20 million of highest project capital expenditure. Table 2 illustrates the information of respondents' firm, which consists of the industry category of the company, numbers of employees in a firm, ownership status, company's annual sales turnover amount, and practicably of sustainability in business or project management. The distribution of industry category is mainly on petrochemical industry 20.8% of respondents, following by construction 19.8% and 18.8% for information technology (IT) and telecommunication industry. About 13.9% of respondents are working in the consultancy industry, while 7.9% of respondents are in the chemical/gas industry. This is followed by accounting/ finance, health care, and manufacturing industries, which sharing of 9.0% equally. The remaining industry contributed 9.8%, which are logistics, machine makers. Regarding respondents' firm, the number of employees was 35.6%, more than 1000 employees, which is considered a larger company in Malaysia. This is followed by 26.7% less than 100, 12.9% from 101-250, and 501-1000. The least proportion of 11.9% of respondents works in a firm that consists of 251-500 employees. In terms of firm ownership, approximately 59.4% of respondents work in private-owned companies, 37.6% work in publicly listed companies, and 3.0% work in government-linked companies and non-governmental organizations. Most of the company's annual sales turnover amount over RM100 million (46.5%), 26.7% of the company have RM500k to RM20 million annual sales turnover, 19.8% has RM50 million to RM100 million, and 4.8% of respondent's firm has RM20 million to RM50 millions yearly sales turnover, and 2.0% of them has less than RM500k. For a firm's sustainability practice, 75.2% of the company practice sustainability in business or project management, while the remaining 24.8% companies are not.
Measurement model assessment
SmartPLS 3.0 software is used through the structural equation modelling approach to assess the model of this study, and reliability analysis. The findings revealed that composite reliability (CR) ranged from 0.831-0.953, while the accepted threshold value is >0.70 [38]. It implies that all items are sufficient to represent the respective constructs and all constructs are reliable. The Rho_A is reflected in the range between 0.758 and 0.945, the threshold value is >0.70 [39], which satisfy the internal consistency reliability. The factor loading ranged (FL) from 0.725-0.939, which is greater than 0.60 [38], thus, it disclosed sufficient internal reliability. The average variance extracted (AVE) for all variables shown in ranging from 0.498 to 0.869 (Table 3). Hence, the AVE value for the economic dimension shows 0.498 that is close to the cut-off point of 0.50 [38]. In addition, other criteria such as factor loading, CR, and Rho_A values are achieved the satisfactory level except for the AVE value. Thus, it suggests that the study explained adequate convergent validity [39,40]. The findings of variance inflation factor (VIF) are ranged between 1.384 and 2.980, which is less than cut-off point 5. Hair et al. [40] believed that collinearity issues may occur if VIF values exceed 5. Thus, the findings indicating that the multicollinearity issue does not exist in this study. The measurement items are included in Appendix.
The results of the Fornell and Larcker [41] criterion for this study is shown in Table 4. The square root of the AVE each construct (diagonal) exceeded and it is highest among the other inter-correlation constructs (horizontal axis), which explained that all constructs share more variance with their associated indicators than with any other construct [38], which indicated a positive result in supporting discriminant validity.
In the cross-loading criterion, the item loadings are generated using the PLS algorithm function and illustrated in Table 5. The result of all the items cross-loading of the respective construct is higher against the items' loading of other constructs. The highest item loading for each construct is italic to represent the respective intended constructs. Hence, we can conclude that each construct is more closely correlated to its items than other constructs' items, signifying the discriminant validity.
Structural model assessment
The hypothesis of the research was tested using the PLS algorithm and bootstrapping approach. The larger R 2 value implies more accurate and precise constructs. The findings revealed that sustainable project planning explains 21.2% variance, and sustainable project success explains 40.1% variance. The f 2 values showed 0.269, 0.400, and 0.028 for sustainable project management (SPM), sustainable project planning (SPP) and sustainable project success (SPS) respectively. The results found that SPM has a positive correlation to SPS (β = 0.147, t = 1.938 and p<0.05), therefore, H1 is supported. The relationship between SPM and SPP is found significant (β = 0.460, t = 5.178 and p<0.01), and thus H2 is supported. Furthermore, SPP has the strongest relationship with SPS (β = 0.552, t = 7.321 and p<0.01), and therefore H3 is supported (Table 6). Henseler et al. [42] stated that assessing the direct and indirect relationships between exogenous and endogenous latent variables is important for the evaluation of a structural model. The findings revealed that sustainable project planning mediates the effect of sustainable project management on sustainable project success (β = 0.254, t = 7.321 and p<0.01).
Discussion
The findings revealed that sustainable project management has a highly significant impact on sustainable project planning. This finding is relevant to [2,17], highlighted in integrating project management and project planning for a construction engineering project. There is a lack of empirical studies that measure the relationship between sustainable project management and sustainable project planning in Malaysian manufacturing firms. Eid [43] addressed the impact of sustainable development on project management processes, specifically focuses on project management processes from different perspectives (e.g., initiation, planning, execution, controlling, and closure). Yu et al. [17] suggested that sustainability in project planning helps to realize the goal of sustainable project management, specifically in the project life cycle process. However, this study concluded new findings that there is a significant positive correlation between sustainable project management and sustainable project planning. The project planning assists in guiding the project team in execution, controlling and monitoring the project. Sustainable project planning can lead to identifying and minimizing the project risk and communicating with the team and stakeholders who have certain credits contribute to sustainable project management.
The results revealed that there is a significant positive link between sustainable project management and project success. This finding is relevant to [23, 32], who identified a significant positive relationship between sustainability in project management and process success in a different context. Mir and Pinnington [44] indicated that the project management process was implemented with a project life cycle model (planning, execution, monitoring, controlling, and closure) and appropriate planning procedures. Besides, the sustainable project success was evaluated with the project efficiency, impact on stakeholders, team, business success, preparation for future, and sustainability measurement to reflect the integration of sustainability in project management. Sustainable project planning has a highly significant impact on sustainable project success. It implies that sustainable project planning is an essential tool that affects the manufacturing company's project success. This finding is relevant to the previous studies [17, 45,46], whereas project planning was found a significant impact on project success. Zwikael et al. [46] TECON TENV TSOC TMC TRR TWC TPE TISE TIMT TBS TPPF identified project risk as a significant factor that measuring the project efficiency and effectiveness with the presence of risks. It implies that the potential risks, project planning delivery, and solutions for potential risks during the project planning process can reflect sustainable project planning. The adoption of managerial control, risk response, and work consensus can improve the manufacturing firm's sustainable project planning. The findings revealed that sustainable project planning mediates the effect of sustainable project management on sustainable project success. It implies that sustainable project planning is a critical factor contributing to sustainable project success in the life cycle of the project management process. In the context of sustainable project management, practising good project planning could lead to sustainable project success for the industry. There is some research on project planning in a project management context, which in turn leads to project success [47]. This study found that sustainable project management is highly correlated to sustainable project planning, which directly reflects success. Moreover, sustainable project planning is highly associated with sustainable project success. It denotes that both sustainable project management and sustainable project planning lead to deliver better sustainable project success for the manufacturing industry.
The findings of this study concluded that sustainable project management could be composed of three essential constructs, with economic, social, and environmental dimensions; these findings are related to previous studies [16,23] in which the researchers highlighted the key factors of sustainable project management and challenges of sustainable project management functions. Over past decades, researchers conducted an exploratory study on sustainability in project management, project success, and sustainability in project planning separately. Sustainable project management is closely related to project success and integration of project planning [17,24], and project success. Sustainable project management and planning can lead to sustainable project success for the manufacturing firm. The findings identified that the economic, social and environmental dimensions are crucial for sustainable project management because, manufacturing company's financial and economic performance, financial benefits, cost management, natural resources, energy, labor practices management, and relationships with the local community can assist to improve the sustainable project management.
Conclusion
The findings of the study have a crucial implication. Practically, the importance of sustainable project management is essential in improving the success of a project of business organizations. The social, environmental, and economic dimensions can play a significant role in leading the company's sustainable project management. The relation between sustainable project management and sustainable project planning was evaluated as a new finding. Sustainable project management, including economic, environmental and social planning needs effort on managerial control, risk response, and work consensus during project planning of the business industry. Sustainable project planning can lead to predicting the project success of the industries. The significant relationship between sustainable project planning, sustainability in project management, and project success was identified as a new empirical finding of this study. The findings of this study could contribute to the business organizations in providing direction in project management to achieve sustainable development and success for business organizations. The project managers can evaluate and improve the relationship between sustainable project management (with economic, environmental, and social context) and sustainable project planning for evaluating the success (project efficiency, impact on stakeholders, external impact on team, business success, and preparation for future) of the project. There is required more attention to the role of sustainable project planning on directing and controlling the project management, reducing project risks, and forming the understanding and commitment for the sustainable success of the project. This study was conducted only based on the project management professional (PMP) and project executives who are working in Malaysia. It implies the limitation of sample size and generalization of the study due to the participation of limited numbers of PMP and project executives in this study. The respondents were not defined based on projects' experiences, as different project experiences causing different knowledge backgrounds, which may have affected the results. Thus, more explanatory studies can be conducted in the future with different cultures and countries and enhance the generalization of sustainable project management, sustainable project planning, and sustainable project success scale. Future research can focus on the relationship between the importance of sustainable project management, the extent of efforts in sustainable project planning, and the impact on sustainable project success in specific project contexts such as information technology and business-related projects. Future research can be conducted by including other project management professionals (e.g. managers and directors) with large sample sizes or respondents to better generalise the study.
The importance of sustainability in project management is found to be significant in this study. To achieve business objectives, it is crucial to include sustainability in project management with the triple bottom line such as economic, environmental and social. Based on the study's findings, we identified that there is a highly significant relationship between sustainable project planning and sustainable project success. It implies that sustainable project planning predicts the sustainable project success of manufacturing firms in Malaysia. In addition, the results also indicated that sustainable project management is highly correlated with sustainable project planning. It denotes that sustainable project management is crucial for the success of a sustainable project in the company.
Moreover, there is a positive and significant relationship between sustainable project management and sustainable project success. These findings imply that sustainable project management and planning are the key functions for the development of sustainable project success. Sustainable project planning serves as a bridge to link sustainable project management and sustainable project success. This study emphasizes that sustainable project planning manifested three dimensions: managerial control, risk response and work consensus. Sustainable project planning in the project life cycle can predict project success. It denotes that sustainable project planning is a critical tool to maintain sustainability in project management. The study identified the importance of sustainable project planning in the project life cycle from a sustainable project management perspective, which in turn leads to the manufacturing firm's sustainable project success. The measurement scale of this study may help the business organization develop sustainability in project management towards sustainable project success through the proper implementation of sustainable project planning. | 6,822.2 | 2021-11-24T00:00:00.000 | [
"Business",
"Environmental Science",
"Engineering"
] |
Applying Noise-Based Reverse Correlation to Relate Consumer Perception to Product Complex Form Features
Consumer behavior knowledge is essential to designing successful products. However, measuring subjective perceptions affecting this behavior is a complex issue that depends onmany factors. Identifying visual cues elicited by the product’s appearance is key in many cases. Marketing research on this topic has produced different approaches to the question. +is paper proposes the use of Noise-Based Reverse Correlation techniques in the identification of product form features carrying a particular semantic message. +is technique has been successfully utilized in social sciences to obtain prototypical images of faces representing social stereotypes from different judgements. In this work, an exploratory study on subcompact cars is performed by applying Noise-Based Reverse Correlation to identify relevant form features conveying a sports car image. +e results provide meaningful information about the car attributes involved in communicating this idea, thus validating the use of the technique in this particular case. More research is needed to generalize and adapt Noise-Based Reverse Correlation procedures to different product scenarios and semantic concepts.
Introduction
Knowledge about customer requirements is essential to developing successful products [1,2]. Moreover, in marketing research, product aesthetics is considered a key factor to the affective response and purchase intention of consumers [3][4][5][6][7][8]. Scholars have studied for long the complex relationship between the visual appearance of an object and the elicited consumer response. As a result, a variety of models have been proposed to explain the consumer affective response (CAR) to product design, either generic ones, such as the Unified Model of Aesthetics [9], or directly actionable such as Kansei Engineering [10,11]. e modeling of this consumer behavior is a complex problem due to two main reasons. e first one is the difficulty of measuring subjective judgements [1]. e second one is that products need to be parameterized, it is, described in terms of shape characteristics (product form features, PFF), to find relationships between them and the consumer response. ese parameterized models must reflect the complexity of the product visual representation using a limited number of variables to make their use feasible. erefore, from a marketing perspective, it is essential to detect the most relevant product features to evaluate their influence on consumer perception [12]. e problem of parameterization is also present in some psychological and social sciences experiments; for example, those focused on studying prototypical images of social stereotypes. In these cases, indirect approaches such as Reverse Correlation (RC) are used to overcome the problem. RC operates by presenting random variations of stimuli of the object under study (pictures of faces, in most cases) with no prior assumptions, leaving the participants the task of making the most meaningful attributes emerge through their responses to a given judgement [13]. RC studies produce a Classification Image (CI), which is considered to represent a mental image of the prototypical object the participants were asked about.
In this study, we propose applying the same approach to the case of product perception. In particular, we analyze the application of a RC technique, the Noise-based Reverse Correlation (NBRC), to the identification of the aesthetic features of a product that contribute to convey a desired affective concept, through the determination of the CI for that product-concept pair. To assess the viability of this approach, we performed an exploratory experiment using sports cars as the object of study.
1.1.
e Relevance of the Aesthetic Factor in Marketing Research.
e visual appearance of consumer products plays a crucial role as a marketing differentiation factor, especially in highly saturated consumer markets [3][4][5][6][7]. e image is the first information channel in the consumer-product interaction [14], and consequently, aesthetics becomes a key factor in the product judgement by consumers [15][16][17]. It has been shown, for instance, that products with higher hedonic qualities are more appealing [18]. An in-depth analysis of these complex aspects of user-product interaction is carried out in [19].
Due to this reason, the influence of the visual appearance of products has been studied for long [8,[20][21][22], even in terms of purchase experience design [23]. However, knowledge about how users perceive a given product cannot be implemented through design without knowing why it is perceived this way. us, much research has focused on the relationship between this consumer perceptual response and the design parameters of the product, as they are the objective features that the designer can control [24,25].
For instance, Kansei Engineering [10,12] and Conjoint Analysis [26,27] constitute two effective tools to find relationships between consumer response and product visual features. However, several factors are difficult in their use in a more general approach. For example, it is necessary to preestablish the formal characteristics of the product in order to use these tools. Product parameterization is needed in advance to obtain results. en, the utility of these results depends on the ability of researchers to select product features relevant to the consumer affective response [12]. Moreover, these features are often broadly described to simplify the product description, thus limiting the ability of these techniques to capture the influence of specific design details [28]. Techniques such as eye-tracking [29] have been used to overcome these limitations.
is paper proposes an alternative approach to overcome the problem of detecting product features meaningful to consumer perception. It consists of using the RC technique, described in the next section, to obtain a "prototypical image," an image containing the visual features of an object that convey a specific message. is image may provide the designer with information useful to communicate this concept through the product's appearance. is information is especially interesting as typicality is a relevant factor in consumer' judgement. It has been shown that typicality influences the consumer response to a product [30]. e identification of "prototypical features," product form features highly contributing to the whole perception of the prototypical product, would allow designing with sound criteria to control to which extent the image of the product is conveying typicality in the desired concept.
RC methods have an extensive application in psychosocial studies, but no research has been found to be applied to product design. We explore in this work the feasibility of using RC techniques to obtain the prototypical image of a product representing an aesthetic/affective concept. e next section is devoted to explaining this technique thoroughly.
e Reverse Correlation Methods.
People generally agree in their judgements of the aesthetic of consumer products. erefore, there must be a relationship between the stimulus (the product) and the response (the perception). Several methods have been used to model this complex relationship. Direct approaches use sets of stimuli built by varying the values of the attributes which define the product (product form features, PFF) to produce different responses. ese responses are then correlated to the stimuli to determine the relationship model. However, using this approach will produce larger experiment designs as the number of defining attributes and their possible values increase. is is the case of most consumer products, which need many attributes to get their form fully defined. In these situations, it might be preferable to use a different approach, such as RC [31,32], to develop this type of stimulus-response model.
In direct methods, the relevant attributes of the stimulus are fixed, and their values are systematically manipulated and correlated to the responses. In RC methods, it is the opposite. e relevant attributes of the stimulus are not fixed, while the response variable is. Each stimulus is randomly generated, and the obtained responses are used to classify each input regarding the judgement. Due to this, these techniques are called "reverse," as the information about the influence of the attributes on the judgement is obtained by correlating the presented stimuli with the given responses. RC is a data-driven technique that does not need priory suppositions about the relevant attributes of the stimulus and permits the participant to use the criteria they want to judge the stimuli [32,33].
Materials and Methods
e procedure for conduct a RC experiment is based on the use of a base stimulus, which is randomly modified, generating many samples for a survey. Participants are asked to judge them, and the CI is derived from their answers. ere are different variations of this basic approach [34]. One of them is NBRC, often used to obtain mental representations [33]. In recent years, NBRC procedures have been mainly used in face perception research [32,33,[35][36][37][38][39][40]. When we see a face for the first time, we infer the personality traits of that person by matching the visual input to our mental prototypes of faces with different attributes. From the result of this match, we infer the personality traits of the owner of 2 Complexity the face [38], making attributions such as trustworthiness or dominance [41][42][43][44][45][46]. NBRC methods produce relevant CIs displaying the image that the participants use as a referent to evaluate the required judgement, referred to as "prototypical image" [33]. Following the same approach in this work, the NBRC method was used to obtain the prototypical image of a consumer product. As far as we know, this work is the first study applying this approach to product design. e NBRC technique uses a starting base face to generate many variations by applying random noise layers over it. Usually, this base image is not an actual one but a composition of different greyscale images in which the face contour is made coincident and subsequently blurred. e base face features (gender, age, expression) are selected according to the requirements of the study. Once the base face is obtained, variations are generated applying different types of noise over it. e most usual ones are sinusoidal noise, white noise, or Gabor noise [32,40,47].
A survey is then prepared using these images. It consists of around 300/1000 tasks per participant, and in each one, a pair of pictures is shown. One of them is the base image with a noise pattern applied. e other one uses the same base image, but the noise pattern is inverted. e participants must select one of them for each task. e CI is created by averaging all noise patterns of the chosen pictures. When this average pattern is applied over the base face, the resulting image displays the traits that induced the judgement under study. In other words, this image represents a face conveying this particular social judgement (prototypical face). According to [32], the image obtained by applying the average pattern from nonselected pictures in the survey (the anti-CI) would display the inverse of the prototypical face.
Our intention in this work was to check if the NBRC could be used to visualize mental representations of products that fit some target judgement in the same way that it does for faces. Perceiving faces is a critical task for humans, and, to meet this need, after millions of years of evolution, our brain developed complex specialized neural networks intended to perceive faces [48]. For this reason, faces are perceived in a different way from other kinds of objects [49][50][51]. Human observers perceive faces in some objects that resemble the shape of a human face, such as a clock or the front of a car. Facial perception research has studied this effect extensively, known as pareidolia.
Several works found similarities between the neural activity of observers during the perception of faces and those registered when an object that resembles the shape of a face is perceived [52][53][54][55]. e front end of a car is one of the best examples of anthropomorphization in the perception of consumer products [56]. Cars can be classified using many criteria. One of the most common and wellknown taxonomies is the sports car versus the family car, and the difference between these kinds of cars can be clearly seen on the front end of the cars. erefore, in this first attempt to obtain the prototypical image of a consumer product using NBRC, we selected the frontal view of a car as the product and the appearance of a sports car as the fitting judgement.
Case Study
Sports cars were used as a case study to test the applicability of NBRC to relate consumer perception to complex product form features. We perform the task following the procedure depicted in Figure 1, which consists of 5 steps detailed in the following sections.
Stimuli and Participants.
Randomly varied images of cars were created, overlaying different random noise patterns over a base image (Figure 1 (A)). e base image is obtained by averaging grayscale images of the object analyzed (typically a face), which leads to base images with blurry contours. To create our base image of a front end of a car, frontal images of six subcompact cars (B-segment) were selected. e images were converted to grayscale, cropped, and centered to get the area of the cars to span as a large part of the image as possible. e six images were overlaid using different transparency values, finally obtaining the base car image ( Figure 2).
In this work, we used sinusoidal noise [32,40] to generate the stimuli because it generated more meaningful variations of the base car than other commonly used types of noise such as white [57,58] or Gabor noise [47]. To use sinusoidal noise in this task, the image height and width must be equal and have a power of 2. erefore, the resulting image was resized to 512 × 512 pixels. e rcicr R package [59] was used to generate the stimuli. Firstly, 300 sinusoidal noise patterns were created by combining five layers of sinusoidal patches. e five images differ in the spatial frequency of the sinusoids (2,4,8,16, and 32 cycles per image). In the same way, each one of these images was obtained by averaging twelve sinusoidal patches that differ in orientation and phase (6 different orientations and two phases) and in the contrast of the image, that was randomly assigned. Lastly, a final pool of 600 paired images was obtained by inverting each noise pattern. is complete noise generation process is shown in [32]. Finally, each noise pattern pair was superimposed on the base car image, obtaining 300 slightly different pairs of stimuli. Figure 2 shows the base car image and one stimuli pair obtained following this process.
25 Spanish young adults (65% men and 35% women) between 24 and 35 years old (M � 27.40, SD � 3.67) participated in the experiment [60], which was approved by the ethics committee at the Universitat Politecnica de Valencia (P15_10_01_20). Individual consent forms were also gathered.
Survey Procedure.
e survey consisted of two blocks of 150 tasks. Each one presented a pair of pictures (direct and inverse noise layers) side-by-side (Figure 3). e participants were asked to quickly choose in each task the car they reckoned as having a sports car appearance at first impression, insisting on this point despite the understandable difficulty of the task. e participants were shown both the stimuli pairs and the position of direct and inverse pictures randomly. ey had to select one of the stimuli by clicking the corresponding Complexity 3 button under the stimuli (see Figure 3) or by pressing the left/right arrow key.
Data Processing.
After the survey, a CI per participant was obtained by averaging the noise patterns of the chosen pictures and, similarly, an anti-CI was produced using the noise patterns of the unselected ones. A total of 7,500 answers were processed, with the average response time by trial across all participants being 3.67 seconds. e rcicr R package (v. 0.3.4.1) was used for this task [59]. According to [32], the CI and anti-CI represent the extreme images in the individual judgement scale (an image displaying the visual
Complexity features of a sports car and another one showing what is not considered a sports car). Finally, the average CI and anti-CI
for all participants were generated by averaging the noise pattern of the individual CIs and anti-CIs (Figure 1 (D)).
Results
e individual CIs and anti-CIs of each participant were overlaid on the base image (Figure 1 (E)). S1 Table in the Supplementary Material of this paper shows all the 50 images. As an example, Figure 4(a) shows the images obtained for participant 3. To increase the visibility of the results, a selective Gaussian Blur filter (radius � 30; max delta � 10) and a shadow/highlight compensation filter were applied, resulting in the images in Figure 4(b). e global CI and anti-CI images in Figure 5 were obtained by overlaying the average CI and anti-CI for all the participants on the base car, while Figure 5(b) shows the filtered version of the global CIs.
To show which parts of the image were most relevant to convey the sports/nonsports appearance, we used rcicr to prepare a z-map ( Figure 6) using a Gaussian filter of radius 5, a background mask and applying a z-transform over the luminance of the CIs noise pixels [32]. Green zones in this representation correspond to areas of the image that directly convey the sports car look, while the red and white zones provoke the inverse response (nonsports car).
Discussion
In this work, we have proposed the use of RC, a technique used in social research, to the identification of product visual features relevant to eliciting a particular consumer response. To explore the viability of this approach, a case study has been conducted.
Due to the exploratory nature of this work, cars were selected as the object of study to facilitate the generation of a distinguishable CI. Cars are products of very widespread use and they display easily interpretable visual attributes. NBCR has proved to be a successful tool in the face perception field [61] and the front end of a car resembles the shape of a face, being one of the best examples of anthropomorphizing in the perception of consumer products [56]. In addition, sports cars are generally recognizable by many people, and their stereotype features are easy to forecast. erefore, we could contrast if the resulting CI depicted some of the main characteristics typical of this kind of product.
According to this, the results of the experiment are satisfactory and the image obtained can be related to that of a sports car in several of its visual features. It is true that, as expected, individual CIs ( Figure 4) are difficult to interpret. However, the addition of the information contained in the noise of all individual CIs leads to a clearer pattern, and some features typically related to a sports car can be identified in the global CI ( Figure 5(a)) while they are not present in the global anti-CI ( Figure 5(b)). In this regard, it should be noted that the base image used in an NBRC task significantly influences and limits the space of attainable results. e information sampled in the noise patterns of the CIs cannot deeply change the base image. We created our base image using frontal images of six subcompact cars out of the typical sports car segment. erefore, slight changes in the form and details increasing the sportiness perception of the base car were expected, rather than major changes to the main dimensions or basic shapes that would transform a subcompact car into a typical sports car. Despite this, interestingly, a modification of the ratio height-width can be perceived in the CI.
Moreover, comparing the global CI image with the global anti-CI and the base image, some differences can be noticed ( Figure 5). e vehicle in the global CI seems lower and presents a slight increase in the width of the front from the anti-CI. ere are differences in the headlights area, giving a more aggressive impression on the global CI due to Seleccionar Seleccionar Complexity its bigger size and curvature. e front bonnet looks different on both cars. e bonnet appears dark in the center and clearer on the sides in the CI, while the anti-CI presents the opposite pattern. is conveys the impression of the presence of elevated feature lines near the bonnet boundary in the CI. In the anti-CI, the bonnet seems to be a rounded continuous metal sheet, as in the CI, it looks like a more complex concave/convex surface. Finally, the CI has the appearance of having longer side-view mirrors and shorter ground clearance. All these individual features are common characteristics of sports vehicles and, in general, the car in the CI conveys the impression of a sports car to a greater extent than the car in the anti-CI. erefore, we could conclude that the technique has performed successfully in this particular case. ese subjective impressions are compatible with the results of the cluster test performed on the noise data of the CIs (Figure 6). e resulting z-map shows the parts of the car that significantly influence the stimuli classification (green, red, and white zones). It can be seen how, in general, the elements of the car mentioned above fall into these areas. e luminance of the pixels in the bonnet and headlight areas of the car shows that these parts have a great influence on the sportiness perception, while those in the lateral rear mirror area do the same but correlate inversely. e features related to structural changes (for example, changes in the apparent height, width, or ground clearance of the car) are more difficult to detect in the z-map. However, the green zone between the car underbody and the ground can be related to the shorter ground clearance appearance of the car in the CI than that in the anti-CI.
As aforementioned, assessing consumer perception is a complex process. However, the results of this study have been satisfactory and the NBRC can be considered a promising marketing tool in the field of product design. e obtained global CI reflects, at different grades, several of the features expected. Some of these features are easily noticeable, such as the bonnet line, the headlights, or the lateral rear mirrors. It is worth noting that the arising of these features is strongly restricted by the original car typology. RC studies have by their very nature this kind of limitation and the resulting CI is heavily dependent on the stimuli utilized [33]. us, the obtained prototypical image should be considered that of a subcompact sports car. is is something to consider when dealing with specific marketing goals (for example, when assessing sustainable or socially responsible product features).
We must also consider that RC works very efficiently with subtleties affecting facial features, but major changes in product structure are presumably harder to show up. As said above, in the present experiment, some of these structural variations were partially observed, such as changes in the appearance of the whole car proportions, which is very promising. An interesting future research field aims at studying if other kinds of products are less variable in shape but not in surface details (shoes, helmets, packaging). is factor is not so relevant and RC proves more powerful.
However, some limitations in this study must be pointed out. Every participant performed 300 trials in the NBRC task. e selection of this number was based on the face perception research literature using NBRC [62]. While increasing the number of trials by respondent would lead to more detailed CIs, the demotivation of the participants would also increase, raising the probability of random responding. Regarding the number of respondents, 25 participants took part in the NBRC task. Previous works in face perception research obtained well-defined CIs using between 20 and 30 participants [32,39,63]. erefore, although 300 trials and 25 participants seem to be optimal in face perception, more research is needed to establish these values in the application of NBRC in the field of marketing. Generally, product research intends to gather meaningful information about at least market segment sizes, so the CI should account for a significant number of individuals. Moreover, focusing on more specific products and judgements would require appropriately dimensioning the sample from a demographic point of view.
On the contrary, fewer trials may suffice if the shape of the analyzed product is simple, without small features, and with limited relevant details. e global CI obtained in this work is the result of the analysis of 7500 trials. Figure 7 shows how the number of trials affects the information gathered by the noise pattern of the final global CI. is graph represents the Pearson correlation coefficient between the luminance of the pixels of the final global CI and those in CIs obtained using fewer trials. To get these data, we varied the number of trials used to generate CI from 10 to 7500 (step-size � 10). As can be seen in Figure 7, the correlation between the cumulative CI and the final CI reaches 0.82 when half part of the available trials are used.
In any case, it is not possible to generalize the results obtained until more research is performed. As aforementioned, the product chosen is widespread and the prototypical image should supposedly be solid. In this sense, it is necessary to conduct more studies with different kinds of products to study the suitability of this technique with less familiar examples. Given the small number of existing studies using this approach outside face perception research, there is still a lack of information on how the base image used during the survey can influence its outcome. More experiments with different types of products would be required to contrast the validity of the method and to build a solid methodology. It is assumed that objects with a less generic shape or with a wider visual variability will require Complexity 7 specific graphic treatment of the base images used. On the other hand, we have already pointed out that there is some similarity between the way in which our brain processes facial information and that in which we perceive objects that resemble faces [52][53][54][55]. More studies are needed using less anthropomorphic products to check if the NBRC performs equally when the pareidolia effect does not occur.
Our future work will address these issues and explore other possibilities. It might be interesting to contrast the results obtained through NBRC with those from other userproduct interaction assessment techniques such as Kansei Engineering or to contrast the z-maps obtained by NBRC with those produced using eye-tracking for the same product/judgement pair.
Conclusions
is work presents a proposal of the use of Noise-Based Reverse Correlation for product perception assessment. is is, to our knowledge, the first application of this technique to product analysis. e results obtained are satisfactory and promising: e CI produced in the exploratory study portrays some of the expected features of a sports car, thus validating this particular case.
However, as we have described, the product was chosen to meet certain requirements in order to intuitively facilitate the application of RC. erefore, despite the favorable results of this study, similar experiments in other different cases are needed to be able to generalize the suitability of the application of this technique in marketing research.
Data Availability
e image data used to support the findings of this study are included within the article.
Conflicts of Interest
e authors declare that there are no conflicts of interest regarding the publication of this paper.
Supplementary Materials
Individual Classification and Anti-Classification Images by experimental subject are shown in Table S1 in the Supplementary Materials file of this paper. (Supplementary Materials) | 6,343.4 | 2022-08-16T00:00:00.000 | [
"Business",
"Computer Science",
"Psychology"
] |
Social, economic, political and health system and program determinants of child mortality reduction in China between 1990 and 2006: A systematic analysis
Background Between 1990 and 2006, China reduced its under-five mortality rate (U5MR) from 64.6 to 20.6 per 1000 live births and achieved the fourth United Nation’s Millennium Development Goal nine years ahead of target. This study explores the contribution of social, economic and political determinants, health system and policy determinants, and health programmes and interventions to this success. Methods For each of the years between 1990 and 2006, we obtained an estimate of U5MR for 30 Chinese provinces from the annual China Health Statistics Yearbook. For each year, we also obtained data describing the status of 8 social, 10 economic, 2 political, 9 health system and policy, and six health programmes and intervention indicators for each province. These government data are not of the same quality as some other health information sources in modern China, such as articles with primary research data available in Chinese National Knowledge Infrastructure (CNKI) and Wan Fang databases, or Chinese Maternal and Child Mortality Surveillance system. Still, the comparison of relative changes in underlying indicators with the undisputed strong general trend of childhood mortality reduction over 17 years should still capture the main effects at the macro-level. We used multivariate random effect regression models to determine the effect of 35 indicators individually and 5 constructs defined by factor analysis (reflecting effects of social, economic, political, health systems and policy, and health programmes) on the reduction of U5MR in China. Results In the univariate regression applied with a one-year time lag, social determinants of health construct showed the strongest crude association with U5MR reduction (R2 = 0.74), followed by the constructs for health programmes and interventions (R2 = 0.65), economic (R2 = 0.47), political (R2 = 0.28) and health system and policy determinants (R2 = 0.26), respectively. Similarly, when multivariate regression was applied with a one-year time lag, the social determinants construct showed the strongest effect (beta = 11.79, P < 0.0001), followed by the construct for political factors (beta = 4.24, P < 0.0001) and health programmes and interventions (beta = −3.45, P < 0.0001). The 5 studied constructs accounted for about 80% of variability in U5MR reduction across provinces over the 17-year period. Conclusion Vertical intervention programs, health systems strengthening or economic growth alone may all fail to achieve the desired reduction in child mortality when improvement of the key social determinants of health is lagging behind. To accelerate progress toward MDG4, low- and middle-income countries should undertake appropriate efforts to promote maternal education, reduce fertility rates, integrate minority populations and improve access to clean water and safe sanitation. A cross-sectoral approach seems most likely to have the greatest impact on U5MR.
Social, economic, political and health system and program determinants of child mortality reduction in China between 1990 and 2006: A systematic analysis Background Between 1990 and 2006, China reduced its under-five mortality rate (U5MR) from 64.6 to 20.6 per 1000 live births and achieved the fourth United Nation' s Millennium Development Goal nine years ahead of target. This study explores the contribution of social, economic and political determinants, health system and policy determinants, and health programmes and interventions to this success. Methods For each of the years between 1990 and 2006, we obtained an estimate of U5MR for 30 Chinese provinces from the annual China Health Statistics Yearbook. For each year, we also obtained data describing the status of 8 social, 10 economic, 2 political, 9 health system and policy, and six health programmes and intervention indicators for each province. These government data are not of the same quality as some other health information sources in modern China, such as articles with primary research data available in Chinese National Knowledge Infrastructure (CNKI) and Wan Fang databases, or Chinese Maternal and Child Mortality Surveillance system. Still, the comparison of relative changes in underlying indicators with the undisputed strong general trend of childhood mortality reduction over 17 years should still capture the main effects at the macro-level. We used multivariate random effect regression models to determine the effect of 35 indicators individually and 5 constructs defined by factor analysis (reflecting effects of social, economic, political, health systems and policy, and health programmes) on the reduction of U5MR in China. Results In the univariate regression applied with a one-year time lag, social determinants of health construct showed the strongest crude association with U5MR reduction (R 2 = 0.74), followed by the constructs for health programmes and interventions (R 2 = 0.65), economic (R 2 = 0.47), political (R 2 = 0.28) and health system and policy determinants (R 2 = 0.26), respectively. Similarly, when multivariate regression was applied with a one-year time lag, the social determinants construct showed the strongest effect (beta = 11.79, P < 0.0001), followed by the construct for political factors (beta = 4.24, P < 0.0001) and health programmes and interventions (beta = −3.45, P < 0.0001). The 5 studied constructs accounted for about 80% of variability in U5MR reduction across provinces over the 17-year period. Conclusion Vertical intervention programs, health systems strengthening or economic growth alone may all fail to achieve the desired reduction in child mortality when improvement of the key social determinants of health is lagging behind. To accelerate progress toward MDG4, low-and middleincome countries should undertake appropriate efforts to promote maternal education, reduce fertility rates, integrate minority populations and improve access to clean water and safe sanitation. A cross-sectoral approach seems most likely to have the greatest impact on U5MR. Reduction of the under-five mortality rate (U5MR) has been recognized by the United Nations as one of the leading global priorities, and the fourth Millennium Development Goal (MDG4) calls on countries to reduce their U5MR by two-thirds from their 1990 baseline [1]. The latest Countdown Report finds only 19 of 68 target countries are on track to achieving this goal [2]. Evidence based guidance on the optimal mix of investments could greatly assist in accelerating progress.
Industrialized western countries achieved reductions in U5MR greater than 70% in the 30-year period between 1900 and 1930 [3][4][5][6], from baselines comparable to the rates observed in sub-Saharan African countries today [5,6]. This large decline has been attributed to economic development, improved diets and housing [7,8]. Economic progress alone, however, is not the answer; while there is clearly a correlation between U5MR and gross domestic product per capita (GDP) [9], there are many pairs of countries with 10-fold or greater difference in GDP but the same level of U5MR, and vice versa [10]. Analysis of more recent declines in child mortality have broadly identified several other key determinants of child survival, including maternal education [11][12][13][14], parental socio-economic status [15,16], public health expenditure and access to health services [14][15][16][17][18], sanitation and access to clean water and electricity [17], fertility rate [15,19], household income [15,19] and integration of minority population groups [14,20]. However, inconsistency and even contradiction among studies abounds and the interplay among these determinants and their relative importance in reducing U5MR remain unclear.
Most studies that have tried to identify the main drivers of U5MR reduction have relied on national-level data assembled through time series studies and indicators from nationally representative exercises such as Demographic Health Surveys (DHS) or Multi-Indicator Cluster Surveys (MICS) [11,12]. These studies were limited in their scope, the number of indicators that they used, the quality and quantity of the information available on mortality trends and the rigour of the analytic methods, thus limiting the inferences that can be drawn from them. The availability of annual child mortality data along with data related to a wide range of relevant determinants for each of 30 Chinese provinces over a 17-year period [21,22] provided the opportunity to conduct a more rigorous assessment of determinants of child mortality.
In the period between 1990 and 2006, China reduced its U5MR from 64.6 to 20.6 per 1000 live births, thus achieving MDG4 nine years ahead of schedule in a population of over 80 million under-fives [23]. This study explores the contribution of social, economic and political determinants, health system and policy determinants, and health programmes and interventions to this success using 35 indicators and provincial U5MRs from 30 Chinese provinces over the period 1990-2006.
Data sources
For the starting year (1990), we obtained U5MR data for 30 provinces, measured as the number of under-five deaths per 1000 live births, from the Chinese national report on neonatal, infant and under-five mortality [10,22]. We believe that those baseline rates are plausible because they were derived from a nation-wide neonatal, infant and under-five mortality rate study conducted in 1990 [22]. For each year between 1992 and 2006, we obtained an estimate of U5MR for the same 30 provinces from the China Health Statistics Yearbook (CHSY) [21]. We combined Chongqing and Sichuan Province for consistency across time, because Chongqing had been under the administration of Sichuan Province and became a Municipality directly under the Central Government in 1997. The CHSY reports province level U5MR estimates based on data from China' s Maternal and Child Health Annual Report system. This vital registration system collects information on births and maternal and child deaths at rural county and urban district level. A detailed description of the annual report system and its quality is available in recent publications [23][24][25]. The reliability of province-level U5MR estimates was much improved from 1996 onwards when the "Maternal and Infant Law" [26] was passed and the collection and management of the data were centralized by statisticians in the School of Public Health, Peking University. For further details about data sources and quality please see Online Supplementary Document (table w1). Based on an explicit set of criteria (Online Supplementary Document, table w2), we decided to impute the data in the period 1991-1995 in the 16 provinces with inconsistent data during this period. In the other 14 provinces with plausible data, the missing U5MR data for 1991 were imputed by assuming a linear trend between 1990 and 1992. Overall, 416 (81.6%) data points for the U5MR outcome variable were based on the reported estimates and 94 (18.4%) were imputed because of concerns over data quality. Figure 1 displays the trends of U5MR in each province between 1990 and 2006.
For each study year we also obtained province-level data on different social (n = 8), economic (n = 10), political (n = 2), health system and policy (n = 9) and health programmes and intervention (n = 6) indicators, available for each province and every year. The 20 social, political and economic indicators were extracted from the National and Provincial Statistics Yearbook (NPSY) [21]. Seven of the health system indicators were identified from the CHSY and the other seven were retrieved from the Health Finance Annual Report [27]. We also created a dummy variable indicating the coverage of China' s Safe Motherhood Program which was initiated in 2000 in selected high U5MR provinces [24]. A detailed description of the source, definition, and measurement unit of each indicator is provided in Online Supplementary Document (tables w1 and w3).
The government data on province-level mortality and indicators are not of the same quality as some other health information sources in modern China, such as articles with primary research data available in CNKI and Wan Fang databases, or Chinese Maternal and Child Mortality Surveillance system, which were used in some of our recent highprofile publications [23][24][25]. Still, we believe that the comparison of relative changes in underlying indicators with the undisputed strong general trend of childhood mortality reduction over 17 years should still capture the main effects at the macro-level and should be useful for drawing very general conclusions.
Statistical analysis
A detailed description of our step-wise approach to the analysis of these data is presented in the Online Supplementary Document (table w2). We based our analysis on a conceptual framework that is adapted from the widely accepted Mosley and Chen child survival framework [28]. We conceptualized that distal determinants, including social, economic, political, health system and policy and health programs and interventions, act through a set of proximal determinants to affect child survival, as measured by U5MR. We took a reduced-form approach [29] to specifically examine the association between the 5 distal determinants and U5MR (see Figure 2).
Based on this conceptual framework, we first ran univariate and multivariate regression models to estimate the association between each of the 35 indicators and U5MR in each province (Online Supplementary Document, tables w4 and w5). We used a random effects linear regression model, taking into account the clustering of annual U5MR within each province. The indicators were standardized to facilitate comparison of the regression coefficients across indicators (see Online Supplementary Document, table w2, for details).
We then grouped the 35 indicators into 5 separate categories to capture the effects of social, economic, political, health system and policy, and health programmes and in-terventions determinants in each province. Factor analysis was conducted to extract the main variation from variables in each group. One factor was created per group to represent the majority proportion of common variation within that group. The 35 indicators were assigned to each of the 5 factors ('constructs') based on their maximum loadings on each factor, as shown in Table 1.
The 5 constructs, ie, the social, economic, political, health system and policy, and health programmes and interventions, were entered into the same random effects linear regression model described above. Univariate and multivariate regressions were again conducted to compare the unadjusted and adjusted associations between the 5 covariates and the province-level U5MR. To take into consideration possible time lags between changes in the 5 distal covariates and their effect on U5MR, time lags of zero and one years were applied for univariate regression, and zero, one, two and three years for multiple regression ( Table 2).
In an attempt to gain further programmatic insights from our data, the 30 provinces were stratified into two groups using three different criteria: (i) those above and below the median rate of U5MR decline (which was -1.720 per 1000 live births per year); (ii) those above and below the median U5MR in 1990 (which was 54.5 per 1000 live births); and (iii) those above and below the median GDP per capita in 2006 (which was US$ 1709). We conducted multivariate analyses (stratified analyses with 1-year time lag) of the 5 constructs separately in each subset of 15 provinces to identify the key determinants of child mortality reduction in different contexts ( Table 3).
RESULTS
Between 1990 and 2006, the U5MR decreased substantially in all 30 provinces in China (Figure 1). It varied more than 9-fold across provinces in 1990, ranging from 13. Factor analysis showed that within each of the social, economic, political, health system and policy, and health programs and interventions constructs, all the indicators correlated well with the resulting factor (as suggested by the large factor loadings). Each of the 5 extracted factors captured 68-91% of the common within-group variation of its affiliated indicators ( Table 1).
Crude and adjusted associations between U5MR and these 5 constructs are presented in Table 2. In the univariate analysis with one-year time lag, determinants within the social construct showed the strongest crude association with U5MR reduction (R 2 = 0.74), followed by strong effects of health programmes and intervention (R 2 = 0.65), economic determinants related to both population and local governments (R 2 = 0.47), political determinants as measured by decentralization indices (R 2 = 0.28) and health system and policy determinants (R 2 = 0.26). When multivariate regres- sion was applied with a one year time lag, 78% of the variation in the system was explained by the 5 constructs. Again, the social determinants showed the strongest effect (beta = 11.79, P < 0.0001), followed by political determinants (beta = 4.24, P < 0.0001), and health programmes and interventions determinants (beta = −3.45, P < 0.001). Health system and policy determinants had a counter-intuitive adjusted effect (beta = 4.11, P < 0.0025) and the effect of economic factor was not statistically significantly different from 0 (beta = −2.08, P = 0.2123) ( Table 2).
The associations showed distinctive patterns of change when different time lags (0-3 years) were applied ( Table 2) Our interpretation of this finding, which is important for health planning and resource allocation, is that social determinants of child survival act both within a short and midterm period. The effects of health programmes, interventions and political budgetary decisions are more likely to be felt within a short time period. The effects of economic growth and investments into health systems also contribute substantially to child mortality reduction, but they require a mid-term period to be detected in full. The notable change was the increased importance of determinants in the health systems and policies construct in the sub-group of provinces that started with lower U5MR, higher GDP and slower declines in U5MR. With a few exceptions, the determinants in the social construct were nearly always associated with the largest contribution to U5MR reduction (beta range 5.6 to 11.9). Economic factors have a positive role in the reduction of child mortality across all 6 strata (beta range −3.9 to −12.6), followed by health programs and intervention determinants (beta range 0.1 to −4.1). However, the associations of determinants in the health program and intervention construct with U5MR differed across stratified groups. They seemed to have most importance in the 15 provinces with higher starting U5MR and lower GDP. The effect of political determinants was significant in the provinces with higher starting U5MR and faster rate of U5MR decline. In the 15 provinces with a faster-than-median rate of U5MR decline, economic determinants were the strongest factors independently associated with U5MR (one-year lag model beta = −12.5, P < 0.001), followed by social determinants (beta = 6.1, P < 0.001). The same pattern was observed in the study of association between the 5 constructs and U5MR in the 15 provinces with above-median baseline U5MR level (economic determinants: beta = −12.6, P < 0.001; social determinants: beta = 5.6; P < 0.001), and with below median levels of GDP per capita (economic determinants: beta = −12.6, P < 0.001; social determinants: beta = 5.6; P < 0.001) ( Table 3).
We conducted several additional sensitivity analyses to examine the robustness of our reported results. We reclassified the 'crude birth rate' indicator from the social to health system and policy construct (Online Supplementary Document, table w7). Although this indicator increased the overall effect of the latter construct substantially across all 4 time lags, the construct with social indicators remained the most significant determinant of child survival reduction. This analysis gave 2 important results: (i) the large effect of social determinants on child survival reduction is not dependent on fertility reduction; and (ii) fertility reduction has a very strong independent effect on child mortality. We also repeated the multivariate analysis after excluding the indicators that were not associated with U5MR in the univariate analysis (Online Supplementary Document, table w8), again with little overall change to the main conclusions. Finally, we ran the analysis only using data for 1996-2006, to avoid any biases that may have been introduced by use of imputed trends in 16 provinces in the 1991−1995 period (Online Supplementary Document, table w9). None of these analyses generated substantially different results. We presented the immunization coverage for all main vaccines against childhood diseases in the 1990−2006 period, to demonstrate that vaccination rates remained consistently very high with little variation throughout the study period and were thus not expected to influence our results (Online Supplementary Document, table w10).
DISCUSSION
We are not aware of any other studies of this scale that have explored the impact of many diverse determinants of child survival in large child populations over an extended period of time, during which genuine progress in U5MR reduction has been achieved. The results of our analysis showed that the identified determinants accounted for almost 90% of the observed U5MR reduction during the years examined.
Importance of social determinants
The fall in U5MR observed in China since 1990 was most influenced by social determinants -although the health system, health program, political and economic determinants also had important and independent roles. Along with the creation of the community-based "barefoot doctor" health providers in rural areas (whose role also included also promotion of literacy, sanitation and hygiene), which was hailed as one of the foundations of the primary health care movement [30,31], the Chinese government launched effective efforts to control population growth even before the one-child policy. Those efforts had already halved the total fertility rate from 5.9 to 2.9 by 1979 [32,33]. Although good quality child mortality data are not available for China from 1950-1980, available data report a large reduction in infant mortality rate from about 250 per 1000 live births in 1950 to 50 by 1980 [34]. Based on our analysis, the continuing decline in China' s U5MR owes much to its broad social progress and political stability, with economic development also benefiting from these determinants, and in turn influencing the number of child deaths prevented [21,22,27,34].
Importance of fertility decline
Our results suggest that China' s success in reducing fertility rates and the resulting community approaches to improved parenting and protection of child health had a major influence on child mortality. Although it is difficult to isolate this factor and make secure inferences about its independent effects, we found that fertility rate had the highest loading on the "social factor" cluster, which itself explained most of child mortality reduction. In these circumstances the effects of the other determinants that we studied may be attenuated in other countries in the absence of the level of fertility rate reduction observed in China. This hypothesis is reinforced by the sensitivity analysis presented in the Online Supplementary Document, table w7, where the indicator of fertility decline was moved to the health systems and policy construct where it substantially increased the effect size of this construct. There have been debates about the direction of the causal association between fertility reduction and child mortality reduction [35,36]. We believe that the example of China, where fertility was dramatically and suddenly reduced by law regardless of the second variable (U5MR), which then led to large reduction of U5MR during the following two decades, represents strong evidence in favor of a causal role of effective fertility measures on child mortality reduction.
Variability of the impact of determinants of child mortality reduction
Social determinants seemed to be strongly associated with the reduction in U5MR when all 30 provinces, 35 indicators and 17 years were included in the analysis, closely followed by determinants in the health programmes and interventions construct. However, more detailed analyses revealed several interesting findings relevant for health policy and planning. If short-term effects are required, investments are better placed in social determinants, health programmes and interventions, and political determinants that include empowerment of local governments. However, if more strategic and long-term effects are expected, investments should once again support social determinants, but also health system development and economic development. In the context of a high baseline U5MR, low GDP and a planned rapid rate of U5MR decline, the greatest ef-fect should be expected from action on economic and social determinants, but also health programmes and interventions and political determinants. However, in the context of low U5MR, higher GDP and a planned moderate rate of U5MR decline, the greatest effect should be expected from action on social determinants and health system and policy determinants. These findings are consistent with previous observations on similar data sets [15,18].
Limitations of the study
There were many interesting potential determinants which we could not study in the absence of reliable year-to-year information. This includes immunization rates, although we performed a separate analysis of their likely effects on our overall results (Online Supplementary Document, table w10). We would have also liked to investigate the effects of more specific health-program variables (for example child nutrition status and practices, management of diarrhea and pneumonia, vitamin A supplementation), more detailed data on maternal education level, levels of health facility access and use, health insurance coverage, poverty thresholds, corruption indices, and many others [37][38][39][40]. None of these were included because we could not, at the time of analysis, obtain reliable information on any of these indicators from Chinese information sources. In this study, we used only indicators for which the available data during the period 1990-2006 suggested a level of completeness and reliability that would allow sufficient statistical power to address the main aims of this study. The Online Supplementary Document shows the approaches and sensitivity analyses that we used to assure and verify the quality of our input data.
This Chinese example, in which child health inequities do not appear to have been widening over the past 15 years, is important as a case study in the wider global context [41,42]. We suggest there would be value in encouraging other nations to collate a similar set of determinants (for example through large scale intermittent surveys such as serial MICS and DHS augmented with data from other sources) and then apply the conceptual framework and methodology we adopt in this study. There have already been a few good reports of such analyses in the literature [15,[43][44][45].
While we employed many excellent indicators to capture social, economic, health systems and policy, and health programmes and interventions determinants of U5MR reduction, it is very difficult to evaluate the impact of political determinants in the same way. We believe that our two political indicators represented a proxy of the level of decentralization and the spending power of the local governments. However, we believe that the mismatch between local resources and spending responsibilities in the absence of adequate central-local grants / transfers at the provincial and sub-provincial levels is an important political issue which may, in large part, explain why insufficient public resources are employed to target social and health indicators in poor localities [46]. Given the wide disparities within provinces, the provincial GDP per capita may have little impact on the living conditions (and U5MR) in remote 'pockets of poverty' within provinces. Future analyses should seek to extend and develop more appropriate indicators of political determinants to better reflect the well documented imbalance between available resources and spending responsibilities at the provincial and sub-provincial levels in China. Given the size of China' s provinces, such analyses will be highly relevant to similar analyses at country level elsewhere, and should contribute to reforms in the equity of public resource allocation.
CONCLUSION
The results presented in this study support the recent calls to broaden vertical programs to include strengthening of health systems [47,48]. However our research suggests that this approach also has its limitations, as it potentially ignores the broader social, economic and political determinants that impact on all sectors of society. In addition to maternal and child health and nutrition programs, approaches to reducing child mortality should also incorporate improvements in general literacy and particularly education of women; access to fertility control options; access to clean water and sanitation; integration of minority populations, along with ensuring underlying political stability and good governance. As many of these determinants are not traditionally under the purview of health authorities, there is a risk that those determinants are inadequately considered in national approaches to reducing child mortality. An analysis of the relative importance of these and other determinants, if data are available, and the further study of the possible reasons for their impact, may help explain large disparities between the U5MRs of nations with similar rates of economic development. It may also explain the difficulty in further reducing U5MR after communicable disease mortality is controlled by disease-specific and other health-and nutrition-focused interventions. The WHO Commission on Social Determinants of Health was a step toward an analysis of these factors [49,50], but without convincing attempts until now to apply this approach to a key child health indicator such as U5MR.
In conclusion, this analysis has shown that China has achieved its remarkable progress in reducing U5MR through an inter-sectoral approach made possible through political stability over a prolonged period of time. The key characteristics of child mortality reduction were sustained economic growth and a focus on social development alongside key investments in health systems and expanded health intervention coverage. | 6,591.2 | 2012-06-01T00:00:00.000 | [
"Economics",
"Medicine",
"Political Science"
] |
Effective Trace Acquirement during Product Information Diffusion and Application
Information dissemination has become part of people’s daily communication and there is great interest for both academic and industrial communities. Most previous studies have focused on the strategy and mechanisms. The methods controlling the process of information diffusion have rarely been studied. Thus, previous studies have failed to effectively mine the value of product information diffusion on social networks. In this study, based on the information diffusion product in consumer self-organized social networks, the control of the product information diffusion process was explored. The node identification principle of the QR code sender designed in this study and the linked list that associated information with specific nodes allowed the acquisition of effective traces in long-chain transmission from the information source to the value nodes, and solved user information disclosure during the transmission process. This method was applied to the tracing system of defective vehicles, achieving accurate recall of defective vehicles.
Introduction
WeChat, social media websites, and mobile intelligent terminals, as consumer-led media and technology, have developed rapidly. Consumers not only produce a great amount of data online but also spontaneously create their own marketing networks. This blurs the boundaries between enterprises and consumers. As content providers and information publishers, consumers have built their own media networks; however, enterprises publish abundant product information through various forms of social media, such as network platforms and Intelligent Information Management intelligent media terminals, to attract the attention of potential users. Incentives have been offered to encourage consumers to share product information on their self-organized social media. Therefore, consumers are transformed from people who simply browse product information and purchase products into enterprise collaborators and product promoters. Such partnerships can benefit both consumers and enterprises.
Consumers form circles of friends on social media. Consumers do not always know how to make the right choices when faced with massive amounts of data and media advertising. Therefore, they prefer to receive recommendations from friends that they trust in their communication circles (Figure 1).
There are currently two avenues of information transmission; namely, the rumor diffusion model that was developed based on the epidemic model and information cascade. The classic epidemic disease model, Susceptible-Infectious-Recovered (SIR), proposed by Anderson in 1991, reference [1] [2], has been widely employed, especially for the spread of rumors. Scholars have improved the SIR model and the improved models are summarized in Table 1. The classic rumor-spread model is the Daley and Kendal (DK) model [3] [4] [5], which suggests that rumor spread is similar to the spread of infectious diseases. People are divided into three categories in the DK model: the ignorant, the spreader, and the terminator. In the SSIC model [6], the spread of rumors on super networks can be effectively hampered [7] by: 1) identifying rumors and isolating them, and 2) improving the extent of rumors, which allows the public to know more about them, to weaken their spread. The SEIR model [8] considers the ambiguity and attractiveness of the rumor content. Mean-field equations are used to characterize the dynamics in the SEIR model considering homogeneous and heterogeneous networks. Rumors spread faster in the BA network in the SEIR model than in the WS network, while the diffusion scales of rumors spread are exactly the opposite in the two networks. Mean-field equations are used to describe the dynamics of rumor models to consider the characteristics of rumors spreading and analyze the key events in complex networks. A novel SIR model [9] was applied to heterogeneous/homogeneous networks. The results showed that rumors spread faster in homogeneous networks compared to heterogeneous networks. The diffusion scales of rumor spread in the two networks, however, were exactly the opposite, as shown in Table 1.
In general, understanding the spread of rumors has been improved using the SIR model, mainly in two aspects: 1) constructing network structures based on different attributes, and 2) increasing the attributes of the study objects.
The improved rumor spread model and the epidemic disease model differ from the information diffusion model in that networks are self-organized by consumers in two ways. First, information is diffused in various ways. Node infection is forced and spontaneous in the epidemic disease model, while information diffusion in a consumer network is optional and voluntary. Second, the targets vary. Scholars assess rumor models to disturb and suppress rumor spread in hopes of minimizing its impact. Studies on product information diffusion in consumer networks encourage consumers' forwarding behaviors and maximize their effects. Therefore, the epidemic model as well as the improved rumor model, which is based on the epidemic model, cannot abstractly represent information dissemination processes in consumer networks. was the sum of the weight of all the activated nodes. In the IC and LT models, each activated node attempted to influence nodes that were not activated; and in the voter model [18], each node had two options, and two types of information competed for activating more nodes.
In addition to the traditional information diffusion models, as mentioned above, there are also many expansion models based on these models. The IC model with negative opinions (IC-N) [19] also considers the diffusion of negative information. In the IC-N model, successful activation of silent nodes by a positive activated node will lead to the spread of positive and negative information simultaneously, but successful activation of silent nodes by a negative activated node will lead to the diffusion of only negative information. The competitive LT model (CLT) model [20] extended the LT model and considered two types of competing information in the network. In the CLT model, the seed nodes are activated and attached to one of the two competing information to be spread; when activating silent nodes, the seed nodes attempt to persuade the silent nodes to accept the information they supported. The Signed Voter model [21] is an extension of the Voter Model; in this model, when the two nodes on one edge are friends, one node successfully activates the other node and attempts to persuade it to accept the information he or she supports. The IC-N model and the CLT model produce a similar negative impact when successfully activating silent nodes, which makes the other node hold the opposite view.
In the above studies, a default hypothesis was commonly adopted, which was: the consumer spontaneous information diffusion cannot be controlled. Therefore, even if previous studies had realized that blindly launching advertising in social media was inefficient, they would have been able to do nothing but promote information diffusion through seeking and developing influential seed nodes. Once the seed nodes activate the forwarding behavior and start self-organized diffusion among consumers, the follow-up forwarding behavior will be invisible to the external diffusion network; and the enterprise cannot be sure whether the purchasing behavior provoked is driven by the selected seed node or not. Thus, the problems studied in the past are only about macroscopic strategic research or microscopic seed node identification [22] [23] [24] [25] [26].
Therefore, all the information diffusion models currently available are "black process. The node identification principle of the QR code sender designed in this study and the linked list that associated information with specific nodes realized the acquisition of effective traces in long-chain transmission from the information source to the value nodes. The coding system for public and private codes solved user information disclosure during information diffusion. The above method has not only controlled the product information diffusion process, but can also be applied to the technical reasoning of defective products to achieve accurate recall of defective vehicles.
The logic structure of the study is as follows: In the second part, the method to obtain effective traces from the value node to the information source was explored. In the third part, the technical process reasoning of defective products was studied, including the tracing system of defective products and technical process reasoning of defective products based on the method to obtain effective trace. The conclusions are provided in the fourth part.
Effective Trace Acquirement from Value Node to Information Source
This section aims to obtain the effective trace from a given value node z to the information source, thereby achieving control of the product information diffusion process. However, there should be a prerequisite; that is, to identify and record the nodes that diffuse information to obtain the network where information about one specific product is diffused.
The development of QR code technology enabled us to realize the above objectives. Consider the following scenario: When a consumer shops in the mall and sees something that his friend wanted to buy, he sends the QR code of this product to his friend who will buy this product if his demand happens to be satisfied after seeing the information linked to this code; or, if through a quantity of nodes, the information is finally sent to the person who needs this product, then the above process is the product information diffusion in the consumer self-organized network, with QR code as the carrier.
To obtain the above product information diffusion network, it is necessary to record and identify the nodes that diffuse the information and associate relevant information to those nodes, which is based on the QR code technology, as shown in Figure 2. In the process of information diffusion, only the product information is visible and the identification code linked with personal information is only visible to the code sender. On the premise that the transmission nodes agree to identify the personal information, when the enterprise wants to find out the contribution made by consumers, the personal information can be provided for relevant enterprises so that they can give awards to the transmission nodes that have created value so as to encourage their behavior of diffusing relevant product information in social networks.
Before a consumer forwards the information of a product, the two parts in information of a product, the two are associated, that is, the scanned QR code is attached with the consumer's personal information identification code, which is then entered into the database. If this consumer forwards the QR code to his friend, the QR code sent to his friend also contains the identification code of the forwarder in addition to the product information, but it is invisible to his friend who can only see the product information. In this way, user information during product information diffusion is protected, so that the disclosure of personal information can be avoided to the largest extent and consumers can diffuse product information without the worries behind it. The mechanism showing how the above process is carried out is shown in Figure 3.
Here, it is assumed that this consumer is willing to provide the only information that can reveal his personal identity to the QR code sender (e.g., WeChat ID, mobile phone quantity, QQ quantity, or an account registered on a social networking site etc.) so as to obtain the identification code (private) generated by this QR sender for this consumer, and the sender promises to keep the personal information confidential. If the user has entered his personal information and obtained the identification code, the system can automatically identify this user and attach his identification code to the product information if the consumer scans the QR code using the mobile APP developed by the QR sender.
If the consumer has not yet received the personal identification code from the QR code sender and scans the QR code using the APP, the system will pop up a page asking if it is allowed to generate the consumer's personal identification code to record the cumulative value contribution made by this consumer during product information diffusion and to be used in follow-up awards. It is assumed The association of information and nodes in a linked list. Note: A linked list was used to save product information, identification nodes and the association between the two, and it was assumed that the probability of information forwarding between consumers is estimable. (b) The association method of the linked list was simulated with products of K 2 type as the case.
here that all consumers discussed in this study are willing to accept the invitation from the QR code sender to generate their personal identification codes.
If the node is an information source, a new linked list will be established under the directory of the product diffused by this node. If the information received by this node is from other consumers, the formation will be included in the existing linked list according to the product information and information source identification code.
When a consumer scans the QR code linked with certain product information, this code does not contain the identification codes of other users, and this consumer forwards the product information to his friend, then the consumer is an information source that diffuses information about this product.
To obtain the effective trace from the value node (the node generating purchasing) to the information source node, the information source corresponding Intelligent Information Management to the value node should be found in the system's linked list structure according to the product-consumer catalogue. There may be more than one effective trace, for there may be more than one source of information that initiates the long chain spread to the value node z.
Reasoning of the Technical Process of Defective Products
The method to control product information diffusion; that is, the approach to obtain an effective trace was applied in technical process reasoning of defective products. According to the module production in the auto industry, the method to trace defective vehicles was designed. The product supply chain tracing model was constructed based on the parts batch relation and the order information system. In addition, the above QR code identification principle and the method for tracking effective trace were used to achieve the goals of detection of defective products, tracing to the cause, detection of defective products in the same batch, and accurate recall of defective products.
Tracing System of Defective Products
The recall of defective vehicles includes active recall and commanded recall. The tracing process is the same in both cases: Defective parts detected first, and then internal and external tracing is performed. Defective vehicles are caused by two major aspects; namely design and manufacturing. Design-related defective vehicles are often recalled based on the production time, while vehicles with defects caused during the production process are usually recalled per the production batch. Part suppliers and assembly unit suppliers are the information carriers in internal tracing. The information carriers in external tracing are vehicle manufacturers, dealers and client bases. The scope and quantity of defective products are confirmed according to the batch and quantity of defective parts determined by the internal tracing.
Internal Tracing System
For an auto production chain, the internal tracing system can be divided into three steps, as shown in Figure 4. 1) It is completed in the 4S store or auto maintenance department where the technical personnel identify the defective parts, confirm the identification code and purchase batch of the defective parts according to the batch list, and trace the supplier X that have produced those parts.
2) The supplier X inquires about the designer of the defective parts, processing/testing process of the parts and the workers who finished this process according to the identification code and production data of the parts. The batch and quantity of the designer, operator or defective products are determined to minimize the quantity of defective products.
The specific processes are: 1) to determine the link where the defects were
External Tracing System
The internal tracing system aims at one auto supply chain while the external tracing system has to accommodate more than one auto sales chain. The manufacturing of auto parts belongs to mass customization. Therefore, the external tracing system should recall defective products based on the specific orders.
Application of QR Code Technology in the Tracing System of Defective Products
The QR code technology is mainly used for product identification and tracking, data collection, entry and information transmission in the tracing system. Detailed, comprehensive product data information and dynamic product tracking technology are the basis of an effective tracing system; the QR code identification principle can effectively meet the tracing system requirements for information collection and product tracking. The QR code embedded in the product can track products and record and transfer information about the key parts. Also, a waterproof, high temperature resistant, QR code can work in harsh environments. Based on the node identification principle of the QR code sender, information needing to be identified is listed in Table 2.
The production process is described below. The supplier provides the parts to the manufacturer who manufactures the parts. After a series of processes and with the participation of the designers and operators, the subparts are processed into parent parts, which are then assembled in the final working procedure. After that, the finished product enters the 4S store to be purchased by one consumer, completing the normal direction flow from procurement to production, sales, and consumer. During such a process, information about the product, organizations and personnel involved above should be associated, as shown in Figure 6. The QR code associated with relevant information is printed onto the corresponding single items, cartons and vehicles. Table 3 shows the nodes needing to be recorded and their symbolic representation.
The above mechanism to associate information with nodes is realized by the linked list data structure in the system as shown in Figure 7. If there is a problem with the production process, whether there is a design problem or whether it is caused by improper operation of the operator should be checked. For whatever reason, it is important to identify the processes as well as personnel associated with it, and inspect the products manufactured in the same batch to determine whether they also have quality problems.
At this point, the causes and links of this defective product are found. Then, it is still necessary to determine whether it is an individual case or all the products in the same batch are unqualified. If all the products in the same batch are unqualified, then the models these products were assembled into need to be identified. Moreover, among these vehicles, those that have been sold and those in the warehouse need to be determined. Consumers who have purchased those vehicles need to be contacted to recall the vehicles, and those in the warehouse should be melted down and remade.
The above product process identification and defective product tracing avoids the recall of qualified products, achieving a precise recall process and preventing the economic losses of enterprises.
Conclusions and Future Research Work
The novel contribution of the paper is that, based on the product information diffusion in consumer self-organized social networks, the control of the product information diffusion process was explored, making the following achievements and innovations. To solve the problem of node identification and information association in the process of information diffusion, a method to control the product information diffusion process was proposed based on the designed information identification principle of QR code senders, a method to create QR codes containing disclosed or encrypted information, and a method to generate the linked list associating information with nodes. This method addressed the threat of personal information leakage during user information transmission, and achieved control of product information diffusion. Moreover, this method was applied to trace and record the technical process of defective products, realizing the accurate recall of defective vehicles.
In light of the findings of this paper, the following questions for future research are presented. As the diffusion of product information in social network is driven by spontaneous behavior of consumers, and the choice of consumers would be affected by many factors. And, the analysis of influencing factors and consumers' behavior selection also needs theoretical support from sociology, economics and psychology. Therefore, the simulation and control of product | 4,622.8 | 2020-04-07T00:00:00.000 | [
"Computer Science"
] |
FACTORS OF CUSTOMER SATISFACTION WITH THE QUALITY OF BANKING SERVICES AND PREDICTION OF THEIR SIGNIFICANCE
: A single paragraph of about 150-200 words. In modern business, the quality of products and services is, not without reason, given considerable attention. That is why it is understandable that quality is considered a key business paradigm in banking as well. In particular, the quality of banking services is viewed as an essential prerequisite for gaining new clients and retaining existing ones. In this paper, scientific attention is focused on researching the factors of customer satisfaction with the quality of banking services and predicting their importance. In line with that, a descriptive "survey" method (survey-research method) was applied, while other methods were not completely ignored, because without of them a complete answer to all the questions could not be given. By applying factor analysis (orthogonal varimax method), five factors were obtained, which were, in further data processing and analyzing the research results, treated as the main factors of client satisfaction with the quality of banking services. One-factor analysis of variance - ANOVA - was used to predict the importance of selected factors from the client's point of view. The results of this research can contribute to the improvement of customer relations by improving the existing practice of managing customer relations.
Introduction
Quality is a concept that has been around since ancient times. It can be said that the first written traces related to the existence of human civilization have some record related to quality. Of course, the term itself and its explanation was always in the context of the time in which it was created. From that initial period to the present day, the meaning of quality has evolved significantly. Today, the term "quality" is indispensable in any form of communication. That is why it can be quite justified that without high quality there is no survival of the organization on it but we have to bearing in mind the competition that reigns in the world market. It is evident, that today products or services on the market are most often recognized by their quality. Consequently, it is quite understandable that significant attention is paid to quality, which raises quality to the level of a new paradigm in modern business systems. Accordingly, it is quite understandable that the question of the quality of banking services is increasingly being raised in the banking industry as a priority for gaining a prominent position in a competitive business environment.
In conditions of increasingly difficult acquisition of new clients, banks takes all necessary measures to retain existing users of their services for as long as possible. At the same time, by maintaining their level of satisfaction with the quality of the provided services, the bank encourages its clients to recommend it to new potential users. In those circumstances, functionality, security, reliability and integrity are the basic prerequisites for the vitality of banking services. Care, kindness, flexibility, responsibility and courtesy do not have to be expected from the user, but if they are provided, they can significantly improve the user's perception of the service provided. Factors with the potential to produce customer delight, such as commitment, attentiveness and helpfulness, represent areas where banks gain a good reputation and perfection of service delivered. The challenge for a bank that wants to delight its clients is to convince its workpeople to demonstrate warmth and sympathy towards their clients.
Different approaches and models are observed in quality analysis, and the following four models that are most often used: • Quality as customer satisfaction. It is achieved by analyzing buyers/clients and planning products and/or services that meet these needs; • Quality as a process. Delivery of goods and services can be considered a set of processes, a chain that connects needs analysis, general setting of goals, definition of plans, production of products or preparation of services, placement of goods and/or services. Similar sets of processes can be defined in management procedures, but also in each link of the chain. Quality implies that each step of the process is implemented in the correct way; • Results-based quality. The quality of the product or service must also be taken into account when evaluating the efficiency of the process. It is difficult to judge quality based on pure results alone; • Value-based quality. It is relatively easy to define in production and most service activities, but there are also activities in which this model is difficult to apply. Some of these could be education, judiciary, healthcare, etc. For the success of the policy of improving the quality of banking services, several key dimensions were introduced (Bahtijarević- Šiber et al., 1991): • reliability -the probability of failure-free operation in a certain period of time or after a certain number of cycles; • performance -basic operational characteristics (ability, features, functions); • convenience -in creating transactions and obtaining customer services; • sensitivity -according to the needs of service users, and • adaptability -individual customization of the service. Banks focus their efforts, for the most part, on appreciating and highlighting the previously mentioned dimensions. At the same time, reliability, openness in the specificities of banking products and clients' trust in the services provided by the bank, becomes the most important dimension, and performance loses its importance over time.
When looking at data and analyzing the wide range of aspects from which the user evaluates the quality of a banking service, it can be concluded that it is very difficult to provide all these dimensions in one service. That is why banks, in order to achieve their competitive advantage, pay attention to a certain group of quality dimensions.
Problem and subject of research
Experience and research unequivocally say that the quality of products and services play a crucial role in gaining new and retaining existing users. That is why it is understandable that the basic assumption of the progress of competitive advantage in the market is quality. Therefore, the basic premise for the survival and improvement of business activities and success of a business entity is the skill of the quality management system to ensure reliable and unchanging quality of products and services with minimal costs.
Considering that services have a great importance and a large participation in national economies, which is especially manifested in developed countries, research attention in this paper is devoted to that aspect. Therefore, the quality of banking services is in the field of scientific research, which is a research problem. Bearing in mind that this problem can be studied from different aspects, it is quite justified to narrow it down to the specific subject of research -the factors of clients' satisfaction with the quality of banking services and predicting their importance.
Research methods
Considering the nature of the problem and subject, it was quite justified to apply the descriptive "survey" method (survey-research method) in the research. This variant of scientific description involves the active involvement of the respondents in providing information about the advent that are the subject of study, based on which they can get to the essence of the research problem and determine its condition, but also reveal cause-and-effect connections and relationships. However, this does not mean that the application of other research methods was completely ignored. On the contrary, in order to be able to answer to all the questions raised by this research, beside the descriptive, as a special, but also basic scientific research method, it was also necessary to apply general research methods, of which the statistical method was the most represented in this research.
However, this does not mean that the application of other research methods is completely ignored, on the contrary, in order to answer all the questions posed by this research, beside this basic scientific research method, it is necessary to rely on the following methods: synthesis analysis; inductions -deductions; abstractions -concretizations; generalization -specialization, as well as the method of special sciences -the method of content analysis. Such a methodological approach ensured that the research was more consistent and trustworthy.
Also, the application of the content analysis method as an individual scientific research method was unavoidable in this research.
Research instruments
For the purposes of this research, it was necessary to construct and validate an instrument for examining the opinions and attitudes of clients on satisfaction with the quality of provided banking services. This procedure is complex and involves a certain procedure that takes place as follows: first, the variables that most fully represent the features, features or characteristics of the research object are defined, and then, depending on the direction of influence, they are conditionally classified into independent (predictor) and dependent (criterion).
The basic instrument is a scale of attitudes of users of banking services (clients) about satisfaction with the quality of services which was constructed and validated in 2022. by Nina Mitić for the purpose of preparing a doctoral dissertation.
The validity of the manifest variables expressing the satisfaction of clients with the quality of the provided banking services was also confirmed by the utility values (Table 1.). We started from the knowledge that high communality values indicate good internal consistency of the applied scale and also the validity of the defined manifest variables (items), as well as the validity of the applied scale as a whole. The data in the above The reliability of the scale of attitudes is determined by the Cronbach's alpha coefficient (Cronbach's Alph), which is 0.8968, that means that it has high reliability considering the number of items, and the statements (24) which are included in it.
Sample research
The research was carried out on a convenience sample, which was formed from users of banking services who could be reached during the period of the research (who are "near at hand"). For practical reasons, the sample on which the research was conducted was formed in a bank that gave its consent to survey clients in its office.
Of course, this sample has certain disadvantages. However, the specificity of the research subject allows a sample to be used representatively because it was formed from users of banking services who were available at the given moment. A total of 120 of them were examined.
The selected sample can be considered large, because it exceeds the upper limit of the small sample. Namely, the threshold value is, according to some authors, somewhere in the range of 25 to 30 respondents (Gilford, J. P, 1968), and according to others, below 50 respondents (Petz, B, 1981).
The structure of the sample according to the information on the basis of which the respondents decided to use the services of a particular office, i.e. bank, is shown in the following chart: Chart 1. Choice of bank branch Source: Mitić (2022) As it can be seen in the chart, the recommendation of a friend had the most significant influence on respondents to choose for the services of a particular bank branch, while the influence of printed propaganda materials was the least on the scale.
Research results and their interpretation
The attempt to empirically determine the manifestations of client satisfaction with the quality of banking services is often reduced to a greater number of procedures by which they are expressed. Therefore, in this research, they were summarized according to a single criterion that leaves no room for bias and arbitrariness. In line with that, a factor analysis was applied, which enabled a larger number of manifest variables to be reduced to a smaller number of latent variables -factors based on their mutual connection and according to predetermined mathematical and logical conditions. After that, the obtained factors were rotated using the orthogonal varimax (Varimax) method and in further data processing and analysis of research results were treated as the main factors of client satisfaction with the quality of services provided.
To determine the influence of differences in the choice of bank branches on the differences in the importance given to the isolated factors of the quality of banking services, a one-factor analysis of variance -ANOVA was applied.
Factor structure of the quality of banking services
The initial basis for determining the factors was the inter correlation matrix of all manifest variables (24) that expressed the satisfaction of users of banking services. The matrix was subjected to significance tests to check whether the data in it were acceptable for factor analysis. This resulted in a very good (0.826) Kaiser-Meyer-Olkin "Sample Adequacy Index" of 0.826, which is considered a very good indicator. (Table 2). The data in this table also show that the value of Bartlett's test of specificity is high (p = 0.000) and rightly represents a reliable basis for the application of factor analysis. (2023) To determine the number of common factors of client satisfaction with the quality of banking services, Kaiser's (Kaiser, H. G) criterion was applied, according to which only factors with characteristic values (eigenroots) greater than unity that can be used to explain the variance, and in this case there are five of them (7,654; 2,381; 1,819; 1,372; 1,287). Those five factors explain a total of 60.468% of the variance, which is shown in Table 3. (2023) Katel's scree test (Scree test) was also used as a criterion for checking the number of isolated factors, which confirmed the five-factor solution, i.e. that five factors were isolated and they are representing all 24 manifest variables -the indicators of the quality of the banking services (Chart 2.).
Chart 2. Katel's "scree" test for determining the number of factors of client satisfaction with the quality of banking services
Source: Mitić (2023) Table 4 shows the matrix of the factor structure of clients' satisfaction with the quality of banking services. The statements (attitudes) expressed by the manifest variables that are listed in the same order as in the rating scale.
Each of the isolated factors represents several manifest variables of satisfaction with the quality of the provided banking services. At the same time, for the definition of the factor, only those manifest variables were taken into account whose correlation coefficient with the factor is greater than 0.30, which is the best indicator of the adequacy, that is, the uniqueness of that representation, and at the same time the starting point for determining the nature of that factor. The first factor is most significantly defined by the following manifest variables ‡ : bank employees are attentive and always ready to help (5/0.818), customers are approached with respect (6/0.816), bank employees are kind to users of banking services (3/0.755), the users of banking services the necessary information are provided in a timely manner (7/0.730), whenever possible certain concessions are made to the client (10/0.589), the services are adapted to the needs of the client (11/0.549), there is a pleasant atmosphere in the bank branch (18/0.414), bank employees are characterized by a professional appearance (4/0.409), the speed of bank employees in providing services is quite acceptable (2/0.376), bank employees are efficient in providing services (1/0.318) and a quality mobile application is available to clients (12/0.302 ).
It is noticeable that the listed manifest variables are uniquely directed towards clients with the aspiration to meet them. Therefore, this factor can be defined as KINDNESS AND WILLINGNESS TO HELP THE CUSTOMER. (Table 4) and the value of the correlation coefficient of the claim with the factor ("r").
22.
Bank branches have a separate area for children.
-0,038 0,397 0,124 0,169 0,550 Source: Mitić (2023) The second factor is determined by variables that are mainly focused on the individual needs of users. These are: the price of banking services is affordable (17/0.690), the client's personal and financial data are protected, the services are adapted to the needs of the clients (16/628), the services are adapted to the needs of the clients (11/0.626), whenever possible, the client is certain concessions (10/0.606), access to each client is individual (8/0.589), a wide range of banking products and services are available to clients (9/0.579), bank branches have parking spaces for clients (21/0.402), regular creative promotional campaigns are implemented (24/0.397) and users of banking services are given the necessary information in a timely manner (7/0.362). According to the mentioned manifest variables, this factor can be labeled as INDIVIDUAL APPROACH TO CLIENTS.
The third factor is mostly determined by the following manifest variables: availability of services through payment and deposit ATMs and mobile applications 24 hours a day (14/0.686), the spread of businesses and ATMs is in accordance with the needs of clients (13/0.682), clients have a quality mobile application available ( 12/0.675), bank branches have a special area for children (22/0.0.671) and in bank branches priority is given to vulnerable categories -pregnant women, disabled people, etc. (23/0.554).
According to the content of the manifest variables, it can be seen that they refer to the temporal and spatial availability of bank branches and ATMs and that they are aligned with the needs of all categories of clients. In line with that, the most adequate name for this factor is AVAILABILITY OF BUSINESS OFFICES AND ATMS TO ALL CATEGORIES OF CUSTOMERS.
The fourth factor is most closely related to the manifest variables related to equipment, their neatness and hygiene, without which there is not even a pleasant environment. The following variables are in question: the cleanliness and hygiene of the bank branch is at a high level (19/0833), the bank branch is equipped with the most modern equipment (20.0723), the bank employees are characterized by a professional appearance (4/0.530), the bank branch has a pleasant atmosphere ( 18/0.491), personal and financial data of the client are protected (16/0.350), access to each client is individual (8/0.346) and bank branches have a parking space for clients (21/0.310). According to the mentioned variables, the most adequate name for this factor is EQUIPMENT, TIDY AND HYGIENE OF THE BANKING OFFICE.
The fifth factor unites manifest variables directly and indirectly focused on efficiency in the provision of services, and they determine it to the greatest extent: bank employees are efficient in providing services (1/0.799), the speed of bank employees in providing services is quite acceptable (2/0.792) and regularly implement creative promotional campaigns (24/0.550).
Bearing in mind the common features of the presented variables, this factor can be called -EFFICIENCY IN PROVIDING BANKING SERVICES AND PROMOTIONAL CAMPAIGNS.
As can be seen, the factor analysis made it possible to reduce the 24 manifest variables related to customer satisfaction with the quality of banking services to five factors (latent dimensions), with each of them representing several manifestations of the quality of banking services with which it is correlated. (r0.30).
The importance of factors of satisfaction with the quality of banking services
The identified factors of satisfaction with the quality of banking services do not have the same importance, because they have different participation in explaining the total variance. All five factors participate in explaining the total variance with 60.468% (Table 5). At the same time, the defined factors, as well as the manifest variables that determine them, are sufficiently relevant indicators of client satisfaction with the quality of the provided banking services. However, all the isolated factors of client satisfaction with the quality of banking services provided (Table 5) do not participate equally in the total variance, and therefore, do not contribute equally to the variability of the researched phenomenon, that is, they do not have the same significance. This is indicated by the fact that all five isolated factors explain the total variance with 60.468%. At the same time, the first singled out factor -kindness and willingness to help the client -is undoubtedly the most significant and has the greatest impact on the total variance (31.890%). Of course, that was realistic and to be expected, because it is known that customers expect, if nothing more, at least a fair relationship from officials from any field of activity. Directly related to this factor is the next (second) factor -individual approach to clients -(9.919%) because it is impossible to be kind and willing to help a client without an individual approach. It is realistic to assume, because banking transactions are carried out individually in order to protect customer data.
The third factor -the availability of branches and ATMs to all categories of clients -participates in explaining the total variance with 7.570%, which is not negligible. Many choose to use the services of bank branches and ATMs of banks whose clients they are just in order to avoid paying commissions that are charged when using services in other bank branches, as well as their ATMs.
Apparently, the fourth factor -the equipment, orderliness and hygiene of the bank branchesparticipates in explaining the total variance with 5.719%, which is understandable, because clients attach great importance to the attitude of bank employees towards them and simpler access to bank branches and their ATMs.
The fact that the fifth factor -efficiency in providing banking services and promotional campaignsmodestly participates in explaining the total variance (5.363%) is not unexpected, because the previous factors include variables that agree with this factor as well.
As it can be seen, the first three factors are the most significant, because they explain the total variance with almost 50% (49.387%), while the remaining two factors with only 11.081%. This does not mean that the last two factors should be ignored, because they must also be taken into account if a more realistic picture of the factors of client satisfaction with the quality of the provided banking services is to be formed.
The influence of the choice of a banking branch on the differences in giving importance to factors of satisfaction with the quality of banking services
In this analysis, attention is focused on the comparison of average values. To compare the average results of more than two groups, it was necessary to apply one-factor analysis of variance (ANOVA), which was emphasized earlier.
The influence of the choice of a bank branch on the differences in giving importance to the first factor -kindness and willingness to help the client
In t This part of the researched aimed to determine whether there is a statistically significant difference between the arithmetic means of the factor scores of the first factor in the four groups of bank branches.
If you look at the value of Levine's test of homogeneity of variance in Table 6, you can see that it is 0,118 § , and it can be concluded that it is above the threshold value of 0.05. This suggests that the assumption of equality of variances in the results of each of the four groups was not violated in determining the differences in the influence of the choice of banking MPs in giving importance to the first factor. According to the data in Table 7, it can be concluded that there is no statistically significant difference between the factor scores of the first factor in the four groups of bank branches selections. In other words, this means that the differences in the way of accessing information about the banking branches do not have a statistically significant effect on the reporting of differences in the importance given by respondents to the first factor of customer satisfaction with the quality of banking services, marked as -kindness and willingness to help the client. This statement stems from the fact that p>0.05, because F=0.534, and the level of significance is p=0.660. In essence, clients, regardless of what information they were guided by the banking branch, have approximate attitudes about kindness and willingness to help the client.
The previous findings that the influence of differences in the mean values of the groups (categories) of bank branches selection on the first factor is very small and that is confirmed by the value of the eta square, which is only 0,01. ** § Values of the Levine homogeneity test are significant for further articulation of the calculation and interpretation of the F-ratio value and its significance. (2023) If you look at Chart 3., you can see that clients who chose bank branches based on printed propaganda materials attach the greatest importance to kindness and willingness to help the client. The reason for this is probably the information and promises contained in the written propaganda materials.
The influence of the choice of bank branches on the differences in giving importance to another factor -individual approach to clients
Even when we are looking at the influence of the choice of bank branches in giving importance to the second factor, Levin's test of homogeneity of variances (Table 8) suggests that the level of statistical significance is 0.387, which significantly exceeds the threshold value of 0.05. This indicates that the assumption of equality of variances in the results of each of the four groups was not violated when determining the differences in the influence of the choice of bank branches in giving importance to the second factor. (2023) According to the data in Table 9, it can be observed that there is no statistically significant difference between the arithmetic means of the factor scores of the second factor in all treated groups of the banking post card selection. Based on this, it can be concluded that the influence of differences in the way of choosing a bank branches on the differences in giving importance to the second factor of client satisfaction with banking services -individual approach to clients -is statistically insignificant, because p>0.05, F=1.021, and the level of significance p =0.386. In line with this, it can be stated that users of banking services, regardless of what guided them to choose a particular bank branch, have similar attitudes when it comes to the individual approach of bank employees to clients. This statement is supported by the value of eta square, which is 0.026. This indicates that the influence of the differences in the mean values of the factor scores for certain ways of choosing a bank branches on the other factor is small. According to the data shown on Chart 4, it can be seen that clients who chose a certain bank branch random (by chance) attach the most importance to the second factor of satisfaction with the quality of banking services, which is probably in accordance with their nature, because they themselves, ad hoc, chose a bank branches for the performance of certain financial transactions. On the other hand, surveyed clients who chose a particular bank branch based on printed propaganda materials attach the least importance to this factor. It is likely that this group of respondents has a discrepancy between what was promised in the printed propaganda materials and the actual access of clients to banking branch.
The influence of the choice of a bank branch on the differences in giving importance to the third factor -the availability of bank branches and ATMs to all categories of clients
In line with this the Levin's homogeneity of variance test (0.293), which is above the value of 0.05, in determining the differences about the influence of the choice of bank branches in assigning importance to the third factor of satisfaction with the quality of banking services -the availability of bank branch and ATMs to all categories of clients, the assumption of equality of variances in the test results of each of the four possibilities offered. (2023) Based on the data in Table 11, it can be concluded that the influence of differences in the choice of a bank branches and giving importance to the third factor of client satisfaction with the quality of banking services -the availability of branches and ATMs to all categories of clients -is not statistically significant, because p>0.05 (F= 0.329; p=0.804). This is in agreement with the values of eta square, which is 0.01. This finding is relatively acceptable, because clients sometimes use other bank branches, especially their ATMs. This can also be understood as a reminder to banks to plan the layout of their offices and ATMs. (2023) Chart 5 clearly shows that two groups of surveyed clients attach the greatest importance to the third factor. This is the group of respondents who chose a bank branchs quite random (by chance) and the group who did it through printed propaganda materials. But this is understandable, because friends are not often territorially well connected, and the media act frontally in providing information about bank branches and their ATMs.
The influence of the choice of bank branches on the differences in giving importance to the fourth factor -the equipment, orderliness and hygiene of the bank branches
Considering the value of Levin's homogeneity of variance test (0.924), which significantly exceeds the threshold value of 0.05, in determining the differences about the influence of the choice of bank branches at giving importance to the fourth factor of satisfaction with the quality (Table 12). According to the data in Table 13, it can be observed that there is no statistically significant difference between the arithmetic means of the factor scores of the fourth factor in all four groups of bank branches choices. This can be understood as the relative uniformity of the equipment, orderliness and hygiene of bankbranches, so the influence of differences in the choice of bank branches does not come to the fore. This is also confirmed by the value of eta square, which is very low (0.01). However, if you look at the data on Chart 6, it can be seen that the group that decided to choose a bank branches based on information from the media and the Internet attaches the least importance to the fourth factor. It is obvious that these are clients whose expectations regarding the equipment, tidiness and hygiene of the banking offices are unjustified.
Chart 6. Arithmetic averages of the factor scores of the fourth factor u in relation to the choice of a bank branches Source: Mitić (2023)
The influence of the choice of a bank branches on the differences in giving importance to the fifth factor -efficiency in the provision of banking services and promotional campaigns
Considering According to the data in Table 14, it can be seen that the obtained value of Levin's test of homogeneity of variance is 0.161, which indicates the fact that the assumption of homogeneity of variance was not violated, because Levin's test significantly exceeds the threshold value of 0.05. (2023) It is evident that the F-ratio value is 2.913 (Table 15), which is at the significance level of 0.37 (p0.5). This indicates that the influence of the differences in the values of the arithmetic averages of the factor scores by groups of bank branches choices significantly affects the difference and giving importance to the fifth factor -efficiency in the provision of banking services and promotional campaigns -as a factor of client satisfaction with the quality of services provided. The eta square (0.07) is in agreement with this finding, which indicates that the influence of the differences in the arithmetic averages of the factor scores by categories of bank branches selection is medium. From the mentioned data, it can be seen that there is a statistically significant difference between the group that decided to choose a bank branches based on the recommendation of a friends and the group that did so by chance (p=0.026; p0.5). (2023) Why this difference can be best seen in Chart 7? The group of respondents (clients) who chose a certain bank branch by chance (random) attach the most importance to the fifth factor. The reason for this is of course the positive experience gained by using the services in the selected bank branch. On the other hand, the group of clients who were guided by the recommendations of friends attach the least importance to the fifth factor -efficiency in the provision of banking services and promotional campaigns. It is obvious that they did not "get" what they expected based on the recommendations of a friends.
Chart 7. Arithmetic averages of factor scores of the fifth factor in in relation to the choice of a bank branches Source: Mitić (2023) Overall, the influence of the choice of bank branches on the preference of the customer satisfaction factor with the quality of the provided services is statistically significant but only in the case of the fifth factor, when the group that decided to use the services of certain bank branches based on the recommendation of a friends or completely by chance. It can also be stated that the influence of the differences of arithmetic means of factor scores by groups of bank branches selection on the previous four factors of satisfaction with the exists quality of banking services, but is not statistically significant.
Conclusion
It is evident that the importance of services is growing, as well as that their participation in national economies and it is large. Therefore, it is quite logical that in modern business, the quality of services is given priority in all organizations that want to balance their services in a highly demanding environment. That is why the quality of banking services, i.e. the factors of satisfaction with the quality of banking services and the prediction of their importance, were found in the field of scientific-informative orientation of this work.
Satisfaction that is given with the quality of banking services can be expressed by a greater number of variables, it was necessary to carry out their selection, which was done by creating a scale for assessing clients' views on satisfaction with the quality of banking services provided. The scale consisted of 24 manifest variables. Manifest variables were summed up by factor analysis, using the orthogonal varimax (Varimax) method. Thus, 24 manifest variables were Factors of Customer Satisfaction with the Quality of Banking Services and Prediction of their Significance reduced to five factors of customer satisfaction with the quality of banking services, which explain the total variance with 60.468%. The first identified factor -kindness and willingness to help the client -is undoubtedly the most significant and has the greatest impact on the total variance (31.890%). This is logical, because clients expect at least a minimum of attention from bank employees.
The second factor -individual approach to clients -is also significant because it participates in explaining the total variance with 9.919%. It is closely related to the previous factor, because kindness and willingness to help the client is difficult to achieve without an individual approach.
The third factor -the availability of bank branchs and ATMs to all categories of clients -has a solid share in explaining the total variance (7.570%). Its allocation is logical, because most users of banking services tend to have bank branches and ATMs " within one's reach".
The fourth factor -equipment, orderliness and hygiene of the banking branch -participate in explaining the total variance with 5.719%. This factor significantly contributes to a pleasant environment in the bank branches, as well as the area where the ATM is located. No one can stand untidiness and bad air in the room, as well as the rain all over their shoulders or the blinding of the ATM screen by the sun when withdrawing money from the ATMs.
The fifth factor -efficiency in providing banking services and promotional campaigns -has a modest share in explaining the total variance (5.363%). This was expected because the previous factors to some extent include manifestations that agree with this factor as well.
In the case of the first four factors of satisfaction with the quality of banking services, the research showed that the differences in the way of accessing information about the banking branches do not have a statistically significant effect on the occurrence of differences in giving importance to these factors. Only in the case of the fifth factor was found a statistically significant difference (0.026) when the group that decided to use the services of certain bank branch on a friend's recommendation or completely random (by chance).
By understanding the factors of client satisfaction with the quality of the provided banking services, it contributes to take measures to avoid the negative aspects, and to further tendency the positive aspects of the banking business. | 8,358 | 2023-06-23T00:00:00.000 | [
"Business",
"Economics"
] |
Influence of Chitosan Addition on Resorcinol–Formaldehyde Xerogel Structure
: Gels are usually not environment-friendly due to their di ffi cult biodegradability. Therefore, the addition of chitosan, even in small amounts, will make such gels biodegradable and thus can be useful in many applications that require environment-friendly materials. The addition of small quantities of chitosan to the reacting solution resorcinol–formaldehyde xerogel was investigated. Di ff erent hybrid resorcinol–formaldehyde–chitosan xerogels were characterized by di ff erent techniques, including Raman spectra, FTIR, XRD, TGA, SEM, surface area and porosity analyzer, and CHNS / O microanalyzer. It was seen that the addition of chitosan, even in a minor quantity, has a significant influence on the structural features of the resulting xerogels. The lattice order and crystallinity, chemical functions, thermal stability, morphology, elemental ratio, pore structure, and appearance were changed by adding chitosan into the xerogel
Introduction
Chitin, which is biopolymer extracted from sea creatures including crustaceans of shrimps and crab shells, is the second most common polymer after cellulose on earth [1,2]. Chitosan (Cs) is considered a deacetylated derivative of chitin [3]. It is a biodegradable non-toxic natural polymer. Chitosan has some unique features such as biocompatibility, biodegradability, non-toxicity, and complexing with metal ions [4]. Therefore, it is exploited universally in various applications such as in pharmaceuticals [5], the food industry [6], and water remediation [7]. Furthermore, the surface groups, namely, hydroxyl and amino groups, contribute to the hydrophilicity and active adsorption sites of Cs [8]. Pekala introduced the polycondensation reaction of resorcinol (R) with formaldehyde (F) to form a xerogel (X), which is named hereafter as RFX [9]. Xerogels are a kind of solid-formed gels, which are being synthesized through a slow drying process at room temperature with an unconstrained shrinkage [10]. The difference between aerogels and xerogels lies in the drying process, where the excess of solvent is extracted from the gel to obtain dry gels. Regarding aerogels, the solvent is removed by supercritical CO 2 extraction, which is a tedious technique that preserves the hierarchy of the pore structure formed in the gelation process. On the other hand, removing the solvent by convective drying at ambient conditions results in xerogels. In the later case, the micropores are preserved whereas the macro-and mesopores could collapse depending on the mechanical strength of the gel [11]. Xerogels are considered to be excellent support materials for metal and non-metal functional groups, and are utilized broadly in fuel cells due to their ability to exhibit controlled structures and adjustable pore sizes as well as their good stability [12,13]. They are widely used in thermal insulation, nuclear particle detection, light guides, as wood adhesives, and electronic devices [14][15][16][17][18]. Kinnertová and Slovák [19] investigated the effect of catalyst amount on properties of RF xerogels and found that it has a significant effect on the pore structure. Pincipe et al. [20] studied the effects of various parameters on the properties of melamine-resorcinol-formaldehyde xerogels and confirmed that they have a significant influence on the gel formation and its pore structure.
Various studies in literature were mentioned that address mixing resorcinol-formaldehyde gels with different components. For instance, Alshrah et al. [21] reported that the addition of polyacrylonitrile nanofibers into resorcinol-formaldehyde aerogels enhanced their thermal conductivity and other properties. Grishechko et al. [22] investigated a mixed lignin-phenol-formaldehyde and found that the resulting pore-size distributions depended strongly on the initial composition, but not on the method of drying. Haghgoo et al. [23] studied the composite made of multi-walled carbon nanotubes with resorcinol-formaldehyde gels. The composite gels were synthesized by conducting the sol-gel reaction of resorcinol and formaldehyde in a suspension of carbon nanotubes in water, followed by CO 2 supercritical drying.
Due to the importance and wide applications of both xerogels and chitosan, chitosan can import its unique features to xerogels and therefore widen their range of applications. Furthermore, the addition of chitosan to gels can make them more biodegradable and thus environment-friendly materials. The authors investigate here hybrid compounds of minor quantities of chitosan into RFX. Structure changes (such as lattice orders/defects, crystallinity, morphology, pore properties), thermal stability, chemical structure, and appearance of outcome hybrid products are investigated.
Synthesis of RFX with Chitosan
Xerogels (X) were synthesized from resorcinol (R), formaldehyde (F), water (W), Na 2 CO 3 catalyst (C), and chitosan (Cs). A stock solution was prepared as 0.41 wt.% of Cs into acetic acid. R (12.44 g) and C (0.024 g) were weighted and mixed with ultrapure water (W), and then mixed with a suitable amount (x mL, where x = 0, 1, 2, 3, 4, and 5 mL) of Cs solution. The corresponding produced gels will be denoted as RFX-Cs-"x", where "x" is the used volume of the Cs stock solution used (mL). The total volume of W and Cs stock solutions were fixed at 32.60 mL, where an amount (x mL) of Cs stock solution corresponds to an amount (32.60-x) mL of W. This mixture was stirred magnetically until all contents were fully dissolved, and then, 17.40 mL of F solution was added onto the dissolved reactants. The weight percentages of the above ingredients of the six xerogels (RFX-Cs-0 to RFX-Cs-5) are listed in Table 1. After that, the solution pH value was adjusted at 5 by using diluted HNO 3 and NH 4 OH buffers. RF solutions were then poured into polypropylene vials, sealed, and placed to cure in an oven at 50 ± 1 • C for 7 days. To prohibit the dehydration of the gels, and to increase the crosslinking in produced products, 2% acetic acid solution was poured upon the gel surfaces after their solidification for 7 days. After the 7-day period, the cured samples were allowed to cool to room temperature. The remaining solutions above the cured gels were then decanted and exchanged with acetone at room temperature. The solvent exchange process was done by leaving the gels in acetone at room temperature for 24 h, and replacing the remaining acetone daily with fresh acetone for 3 consecutive days. After the third day of solvent exchange, the cured gels and the accompanying fresh acetone were placed in an oven at 50 ± 1 • C for 2 days to evaporate acetone completely [24].
Characterization
Fourier transform infrared (FTIR) spectroscopy (NICOLET, iS10, Thermo-Scientific) was used to confirm the structure of the prepared samples. FT-Raman spectra were estimated by a Bruker FT-Raman spectrometer of type RFS 100/S that is attached to a high resolution (better than 0.10 cm −1 ) Bruker-IFS 66/S spectrometer. The morphologies of RFX-Cs composite gels were observed with a FEI Nova™ nanoscanning electron microscopy 450 (Nova NanoSEM). CHNS/O, Thermo Scientific FLASH 2000 Organic Elemental Analyzer (Melano, Italy) was used to determine the elemental compositions of the samples. Thermogravimetric analyses (TGA) were carried out using a PerkinElmer Pyris6 TGA analyzer under a flow of N 2 gas in the range of 30 to 800 • C, with a heating rate of 10 • C/min. X-ray diffraction (XRD) measurements were conducted by Miniflex II Benchtop XRD apparatus, manufactured by Rigaku Corporation, Japan. The 2θ scan data were collected at 0.05 • intervals over the range of 5 to 90 • , and at a scan speed of 0.05 • /min. A Micromeritics ASAP2420 ® accelerated surface area and porosimetry analyzer system, with an enhanced micropore capability (utilizing 1-Torr pressure transducer), was used to measure the pore structures of RFX-Cs-0, RFX-Cs-1, RFX-Cs-2, RFX-Cs-3, RFX-Cs-4, and RFX-Cs-5 samples using the adsorption isotherms of N 2 at 77 K. Prior to the adsorption measurements, the samples were regenerated in-situ for 24 h at a temperature of 473 K under vacuum (1 × 10 −4 Pa). The pore structure properties were obtained by built-in calculations based on the density functional theory (DFT) [24]. Figure 1 shows the Raman (a) and FTIR (b, c) spectra for the six RFX-Cs samples. Figure 1a shows two characteristic Raman spectra peaks at 1355 and 1589 cm −1 , which refer to disorder peak (D-band) and graphitic peak (G-band), respectively. The intensity ratio of the D-band to the G-band (i.e., I D /I G ratio) helps to estimate the defects and disorders of RFX-Cs samples [25]. The results of I D /I G ratios, along with CHNS/O analyses and pore characteristics of RFX-Cs-0 through RFX-Cs-5, are presented in Table 2. Overall, it is observed from Table 2 that increasing the concentration of Cs into RFX leads to increasing I D /I G ratios, which indicates a significant increase of disorders and defects inside the structures of RFXs (even though the added amounts of Cs is very minor). Furthermore, increasing the concentration of Cs into the RFX results in slightly reducing concentrations of carbon and hydrogen, and slightly increasing concentration of nitrogen. The presence of Cs into RFX affects its pore structure, pore volume, surface area, crystal order/disorder, N 2 adsorption capacity, and average pore width. Moreover, the data listed in Table 2 shows that the adsorption capacities of RFX-Cs-1 and RFX-Cs-2 samples are low. This is due to lowering their total pore volumes. Figure 1b exposes the whole scale of FTIR spectra from 4000 to 400 cm −1 , and Figure 1c shows a zoomed-in range from 1700 to 500 cm −1 . The peaks at 2935, 2870, and 1477 cm −1 (related to the CH 2 stretching and bending vibrations) were observed in RFX-Cs-0 (curve 1), but not in other samples. The broadband at 3302 cm −1 indicates the aromatic OH group of resorcinol, whereas that at 1607 cm −1 refers to the aromatic ring stretches. The medium-to-weak peak appearing at 1218 cm −1 refers to that of methylene ether linkages between resorcinol rings [26]. Curves 2 through 6 correspond to the presence of Cs into the matrix of RFX. The dotted blue lines crossing the peak at~1153 cm -1 is for asymmetric stretch of C-O-C, and that at 1298 cm -1 is for the C-N stretching vibration of type I amine. The peak at 1298 cm -1 that appeared for RFX-Cs-0 started to diminish and became shoulder peak when adding Cs into the structure of RFX. On the other hand, a new small peak appears at 995 cm −1 in curves 2 to 6 (crossed via dotted blue lines), which is not found in pure RFX (i.e., sample RFX-Cs-0, represented by curve 1). Therefore, changes occurred in the structure of RFX by adding Cs even in minor amounts. Figure 2a,b exhibits the XRD patterns and TGA thermograms, respectively, for the six RFX-Cs samples. In Figure 2a, the full width at half-maximum (FWHM) peeks of XRD profiles was used. It was observed that the intensity of XRD peaks decreases by adding Cs to the RFX. For instance, the intensity decreased from 1998 to 1480 cps when comparing RFX-Cs-0 to RFX-Cs-5, respectively, which corresponds to a decrease of~26% in intensity. This observation aligns with the Raman results, which indicate an increasing disorder with the addition of Cs into the RFX-Cs matrix. Furthermore, the XRD peak of RFX-Cs-0 at 15.89 • shifts gradually to 19.75 • for RFX-CS-5. This shift is due to the insertion of Cs into the RFX matrix. Figure 2b exposes that the presence of Cs into the RFX matrix affects the thermal stability noticeably. Overall, the increasing presence of Cs in the RFX structure decreases its thermal stability, which could be attributed to the already observed increasing disorder or defects. Figure 3a that the color of samples changes gradually from an almost black color for RFX-Cs-0 to a pale yellow color for RFX-Cs-5. This can be attributed to the increasing distance between RF cross-linked segments upon the inclusion of Cs. Further, it was observed that the samples with higher Cs content exhibited less shrinkage than those of low Cs content upon drying. Figure 3b shows that RFX-Cs-3 (image 4) has a distinct topography that starts to form microspheres of RFX-Cs gel rather than interconnected lumps of samples RFX-Cs-0 through RFX-Cs-2 (image 1 through 3, respectively). On the other hand, noticeable changes appear between the morphologies of RFX-Cs-3 and RFX-Cs-4 samples (images 4 and 5, respectively). This could indicate a critical composition between these two samples whereas the behavior changes considerably, which is worthy of future investigations. Samples RFX-Cs-4 and RFX-Cs-5 (images 5 and 6) show more voids between in the matrix of the gel, which is consistent with the observation obtained from Figure 3a. Therefore, the presence of Cs into the matrix of RFX samples has a significant effect on both the visual and morphological properties of the resulting hybrid RFX-Cs gels.
Conclusions
The impact of adding chitosan (Cs) into resorcinol-formaldehyde xerogel (RFX) structures upon their synthesis was studied. The properties of different samples were characterized by Raman spectra, FTIR, XRD, TGA, SEM, surface area and porosity analyzer, and CHNS/O microanalyzer. The results showed that, even though the Cs amounts added are minor, they have a significant effect on the RFX matrix. This impact led to clear changes on the features of gels formed (such as structure order, structure functions, crystallinity, elemental composition, morphology, pore structure, and optical appearances). Moreover, the Cs-embedded gels are more biodegradable and thus can be utilized for applications that require such environment-friendly materials. This includes, but is not limited to, the encapsulation and controlled release of bioactive compounds, biomedical fields, and tissue engineering. Other applications include packaging and thermal insulation tools, and as templates for the fabrication of new functional materials. Therefore, this study turns the spotlight on Cs-embedded gels that have potential to be developed as new advanced materials for various applications. the authors. Technical support from the Department of Chemical Engineering, Central Laboratory Unit (CLU) and Gas Processing Centre (GPC) at Qatar University is also acknowledged. Further, the publication of this article was funded by the Qatar National Library.
Conflicts of Interest:
The authors declare no conflict of interest. | 3,215.8 | 2019-10-28T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Environmental Science"
] |
Bioelectroanalytical Detection of Lactic Acid Bacteria
Lactic acid bacteria (LAB) are an industrial important group of organisms that are notable for their inability to respire without growth supplements. Recently described bioelectroanalytical detectors that can specifically detect and enumerate microorganisms depend on a phenomenon known as extracellular electron transport (EET) for effective detection. EET is often described as a type of microbial respiration, which logically excludes LAB from such a detection platform. However, members of the LAB have recently been described as electroactive with the ability to carry out EET, providing a timely impetus to revisit the utility of bioelectroanalytical detectors in LAB detection. Here, we show that an LAB, Enterococcus faecalis, is easily detected bioelectroanalytically using the defined substrate resorufin-β-d-galactopyranoside. Detection is rapid, ranging from 34 to 235 min for inoculum sizes between 107 and 104 CFU mL−1, respectively. We show that, although the signal achieved by Enterococcus faecalis is comparable to systems that rely on the respiratory EET strategies of target bacteria, E. faecalis is not dependent on the electrode for energy, and it is only necessary to capture small amounts of an organism’s metabolic energy to, in this case 1.6%, to achieve good detection. The results pave the way for new means of detecting an industrially important group of organisms, particularly in the food industry.
Introduction
Lactic acid bacteria (LAB) are an economically important group of microorganisms that have utility in the food industry, in clinical settings, and the environment [1]. The ability to detect and enumerate a range of lactic acid bacteria is relevant to, for example, the identification of beer-spoilage organisms in the brewing industry [2], in characterizing persistent endodontic infections in dentistry [3], and in monitoring recreational water for fecal contamination in municipal settings [4]. Genera belonging to the LAB include Lactobacillus, Streptococcus, Vagococcus, and Enterococcus. Simple and rapid techniques for enumerating LAB that could be operated by non-specialists would, therefore, find application in numerous settings. Lactic acid bacteria belong to an exclusively Grampositive phylum, the firmicutes, and are fermentative organisms distinguished by their inability to produce heme [5]. Even though LAB possess many of the components of respiratory chains, they do not respire on account of their inability to synthesize functional heme-containing cytochromes that typically act as the terminal reductases in oxidative respiratory chains [6].
Recently, methods for bioelectroanalytical detection of low numbers of organisms in environmental samples have been described [7]. The technique involves tagging enzymespecific substrates to electrochemical reporters to achieve specific detection of a target organism by exploiting a phenomenon known as extracellular electron transfer (EET)-the metabolic process that transports electrons from the cytosol to the exterior of a cell [8,9]. When the electrochemical reporter, or redox mediator, is released into the medium, it is indicative of the presence of the target organism; the redox mediator is reduced metabolically and subsequently reduces the electrode. EET has been extensively studied, particularly in the model Gram-negative electrogens Geobacter spp. and Shewanella spp., and is usually described as a type of anaerobic respiration achieved by heme-containing electron transfer protein complexes [9,10]. The environmental significance of EET is in biogeochemical cycling of metal oxides in the subsurface and is achieved biologically, as microbes transport electrons across their membranes and reduce solid terminal electron acceptors, such as Fe (III) or Mn (V), in a process that yields metabolic energy in the form of ATP [11,12]. The practical role of respiration in achieving a good detection signal in bioelectroanalytical systems was recently deduced from the protracted detection times for Escherichia coli (E. coli) strains with deficient respiratory chains that are only able to grow by fermentation [13]. Therefore, the utility of bioelectroanalytical systems to rapidly detect lactic acid bacteria is unknown, as LAB are unable to respire and instead gain their metabolic energy exclusively by fermentation.
EET is well described in Gram-negative organisms but has typically remained more obscure in Gram-positive organisms, except for a few notable exceptions, and despite Firmicutes regularly turning up in the phylogeny of bioelectrochemical systems [14][15][16]. Recent research suggests that EET in Gram-positive bacteria is evolutionarily more ancient than in Gram-negative organisms [17]. Lately, it has become commonplace to describe EET mechanisms in Gram-positive organisms that include LAB. Light et al. (2019) recently described a flavin-based EET mechanism in Listeria monocytogenes and electrode-dependent growth that is distinct from more well-described mechanisms of EET [9]. Similarly, EET has been described in another clinically important LAB, Enterococcus faecalis (E. faecalis), and although the exact mechanism is yet to be elucidated, its ability to reduce an electrode and Fe (III) has been demonstrated [8,18,19]. In addition, for both Listeria monocytogenes and E. faecalis, EET has been implicated in virulence either through increased competitive capabilities, enhanced biofilm potential, or through EET mediated synergistic interactions with commensals. In these systems, however, basic quantitative assessments of the EET process is not always reported.
In light of the recent insights into the EET in LAB, we decided to revisit the idea of their compatibility in bioelectroanalytical detection. The aim of this contribution is to determine whether a clinically relevant member of the LAB, E. faecalis, is amenable to rapid detection and enumeration in a bioelectroanalytical system ( Figure 1). A secondary aim of this contribution is to comment on the quantitative importance of EET mechanisms in E. faecalis by looking at EET efficiency. To our knowledge, this is the first report describing the potential for specific and swift bioelectroanalytical LAB detection and to quantitatively assess the efficiency of the EET process and how this relates to EET mechanisms of E. faecalis in the context of biosensing.
Appl. Sci. 2022, 12, x FOR PEER REVIEW 2 of specific substrates to electrochemical reporters to achieve specific detection of a target ganism by exploiting a phenomenon known as extracellular electron transfer (EET)metabolic process that transports electrons from the cytosol to the exterior of a cell [8 When the electrochemical reporter, or redox mediator, is released into the medium, i indicative of the presence of the target organism; the redox mediator is reduced metab ically and subsequently reduces the electrode. EET has been extensively studied, parti larly in the model Gram-negative electrogens Geobacter spp. and Shewanella spp., and usually described as a type of anaerobic respiration achieved by heme-containing electr transfer protein complexes [9,10]. The environmental significance of EET is in biog chemical cycling of metal oxides in the subsurface and is achieved biologically, as m crobes transport electrons across their membranes and reduce solid terminal electron ceptors, such as Fe (III) or Mn (V), in a process that yields metabolic energy in the form ATP [11,12]. The practical role of respiration in achieving a good detection signal in bi lectroanalytical systems was recently deduced from the protracted detection times Escherichia coli (E. coli) strains with deficient respiratory chains that are only able to gr by fermentation [13]. Therefore, the utility of bioelectroanalytical systems to rapidly det lactic acid bacteria is unknown, as LAB are unable to respire and instead gain their me bolic energy exclusively by fermentation.
EET is well described in Gram-negative organisms but has typically remained m obscure in Gram-positive organisms, except for a few notable exceptions, and despite F micutes regularly turning up in the phylogeny of bioelectrochemical systems [14][15][16]. R cent research suggests that EET in Gram-positive bacteria is evolutionarily more anci than in Gram-negative organisms [17]. Lately, it has become commonplace to descr EET mechanisms in Gram-positive organisms that include LAB. Light et al. (2019) recen described a flavin-based EET mechanism in Listeria monocytogenes and electrode-depen ent growth that is distinct from more well-described mechanisms of EET [9]. Similar EET has been described in another clinically important LAB, Enterococcus faecalis (E. calis), and although the exact mechanism is yet to be elucidated, its ability to reduce electrode and Fe (III) has been demonstrated [8,18,19]. In addition, for both Listeria mo cytogenes and E. faecalis, EET has been implicated in virulence either through increas competitive capabilities, enhanced biofilm potential, or through EET mediated synergis interactions with commensals. In these systems, however, basic quantitative assessme of the EET process is not always reported.
In light of the recent insights into the EET in LAB, we decided to revisit the idea their compatibility in bioelectroanalytical detection. The aim of this contribution is to termine whether a clinically relevant member of the LAB, E. faecalis, is amenable to rap detection and enumeration in a bioelectroanalytical system ( Figure 1). A secondary a of this contribution is to comment on the quantitative importance of EET mechanisms E. faecalis by looking at EET efficiency. To our knowledge, this is the first report describ the potential for specific and swift bioelectroanalytical LAB detection and to quant tively assess the efficiency of the EET process and how this relates to EET mechanisms E. faecalis in the context of biosensing. Figure 1. Schematic of LAB detection concept: the LAB (i) reduces a redox mediator (ii) that is released from a specific detection substrate, in this case resorufin-β-D-galactopyranoside (not shown), which is subsequently oxidized (iii) at the working electrode (iv) of a screen-printed electrode 4 mm in diameter printed with carbon ink. The counter electrode (v) is also carbon ink, and the reference electrode (vi) is silver. All are connected to the potentiostat by a USB connection (vii). Once oxidized, the mediator can be reduced again by the microbe in a cyclical fashion.
Growth Conditions for E. faecalis OG1RF and Other Lactoacid Bacilli (LABs)
Overnight growth of E. faecalis OG1RF was attained by inoculating a single E. faecalis colony grown on BHI agar into BHI broth (Acumedia, San Bernardino, CA, USA) and grown for 18 h at 37 • C at 200 rpm in a shaking incubator. Overnight bacterial cultures were centrifuged; the supernatant was discarded, and the pellet washed three times in phosphatebuffered saline (PBS) before adjusting to the desired inoculum density in fresh media.
Respiratory Stimulation (and Inhibition) of LAB
Stock concentrations of menaquinone (Mk-4 Cayman Chemical, Ann Arbor, MI, USA) (10 mg mL −1 ) and Heme (Sigma, Singapore) (0.5 mg mL −1 ) were made by dissolving in absolute ethanol and deionized H 2 O and were diluted to a final concentration of 0.02 mg mL −1 and 0.002 mg mL −1 , respectively. Stock dissolved in water was filtered and stored in 4 • C for up to a week. The end point optical density of the microbial growth in each well with initial inoculum of 5 × 10 5 CFU mL −1 was recorded at 48 h. For aerobic growth, flasks were incubated at 30 • C shaking at 200 rpm, while the bacteria incubated for anaerobic growth were kept in an anaerobic chamber (Bactron, Sheldon Manufacturing, Cornelius, OR, USA) with an N 2 , CO 2 , and H 2 atmosphere, also at 30 • C, and only removed for periodic measurements. The absorbance at 600 nm wavelength was recorded with the Tecan Spark™ 10M microplate reader. End-point pH was recorded with a pH meter from pooled samples to ensure sufficient volume for accurate pH measurements from the small volumes incubated
Preparation of Electrochemical Reactors and Mini Electrochemical Cells
Conical electrochemical cells with stirrers were prepared as previously reported [7]. For screen-printed electrodes (SPE) reactors, the caps of 1.5 mL Eppendorf tubes were trimmed off and a hole was punctured at the conical portion of the Eppendorf tube using a 30G needle. After autoclaving (120 • C, 15 psi 15 min), the modified tubes were attached to the SPEs with epoxy resin. The SPEs consist of a circular carbon working electrode with a diameter of 4 mm, a carbon counter electrode, and a silver reference electrode on a ceramic support (Metrohm DropSens, DRP-C110, Herisau, Switzerland). SPEs were sterilized by soaking for one minute in 70% ethanol, followed by ultraviolet sterilization for 20 min. Reactors were assembled in a biological safety hood and the epoxy was allowed to cure for a minimum of 12 h before use. SPE technology is convenient for non-specialist users and can be commercially implemented at very low cost.
For electrochemical experiments, dissolved oxygen was displaced from the reactors by sparging N 2 gas through the medium prior to dispensing the media into the reactor in an anaerobic chamber. The only inlet was sealed with tape before its removal from the anaerobic chamber and subsequent bench top operation in a bead bath at 37 • C.
Potentiometric Enumeration
To enhance E. faecalis communication with the electrode when required, resorufin or the electrochemically active selective agent for E. faecalis, resorufin-β-D-galactopyranoside, was added at a final concentration of 50 µM, as previously reported [7]. Potentiometric measurements were performed with a VSP-150 multichannel potentiostat (Bio-Logic SAS, Claix, France) or DRP-STAT8000 (Metrohm DropSens, Oviedo, Spain). All electrochemical potential values are reported with respect to an Ag reference electrode, and average current was recorded every 60 s during chronoamperometric measurements.
Estimation of Columbic Efficiency
To estimate the quantitative contribution of electrochemical output relative to overall metabolism, the consumption of carbon in a system inoculated with 10 2 CFU ml was determined by estimating the change in chemical oxygen demand (COD) over 20 h using a high-range COD kit (Hach, Loveland, CO, USA). COD measurements were conducted in triplicate and the mean value was used in the to calculate CE. The relationship between the number of electrons consumed and the number of electrons captured at the electrode (charge) was calculated as reported by Logan (2008): where C E is Coulombic efficiency, I is current, F is Faraday's constant, V An is the volume of the anode chamber, and ∆ COD is the change in COD over a 20 h period.
Results and Discussion
E. faecalis can easily be detected and enumerated using the proposed detection framework described previously [7,13]. Chronoamperometric analysis of E. facecalis reveals that all test inocula, with the exception of 10 7 CFU mL −1 where the current onset is almost instant, produce a distinctive curve showing an initial flat period of baseline current representing lag phase growth followed by a sharp increase in current generation resulting from increased metabolic activity (and thus EET) as the culture enters the exponential growth phase (Figure 2A). A previously reported method defines a detection event when the slope of the chronoamperometric readout exceeds five standard deviations of the baseline current for more than five consecutive time points [13]. We apply a more conservative definition here and define detection as the time at which the current passes a threshold value (20 µA). This is an objective and robust way to define a detection time and yields a linear calibration curve that inversely correlates with inoculum size. For every log fold increase in inoculum size ranging from 10,000 to 10,000,000 CFU mL −1 , the detection time increased linearly, yielding mean detection times of 235 (±16), 148 (±12), 62 (±12), and 32 (±8) min for 10 4 , 10 5 , 10 6 , and 10 7 CFU mL −1 , respectively, where the values in parentheses represents the standard deviation of three replicates. This observation is easily interpreted from the relationship between inoculum density and the duration of the lag phase in classical growth curve theory. This property can be used to construct a linear standard curve with a regression coefficient (r 2 ) of 0.96 ( Figure 2B). Additionally, the standard deviation of the mean detection time for three replicates is between 7 and 23%, which is comparable to previous reports and also to commercially available systems for bacterial detection [4,7,20]. In short, the bioelectroanalytical system is an effective means of enumerating E. faecalis and should be further developed to effect real world detection of lactobacilli in the multitude of settings for which they are relevant. This will require development of selective medium to suppress non-target organisms that are specific to the individual settings.
The detection compounds used here are electroactive glycosides comprising glucose conjugated to resorufin. When the glycosidic bond is cleaved by native glucosidases expressed by E. faecalis, the electroactive resorufin is liberated from the pyranose ring and subsequently reduced by microbial electron carriers to dihydroresorufin [21]. Following its reduction by microbes, the mobile dihydroresorufin in turn reduces the electrode generating a current proportional to the metabolic activity of the culture [22]. Microbial metabolic coupling to an electrode via a mobile redox mediator is well documented and usually described as a component of an energy-yielding type of respiration called mediated EET [23]. The quantitative importance of respiration compared to fermentation in a similar bioelectroanalytical detector was recently demonstrated by contrasting mediated detection times achieved with wildtype E. coli vs. a mutant, E. coli SHSP 18-an E. coli K12 derivative that is auxotrophic for δ-aminolevulinic acid, a growth factor critical in heme synthesis and therefore respiration [13,24]. The wildtype E. coli detection time, for an inoculum size of 5000 CF, was about 6 h compared with the detection time of around 14 h for the SHSP 18 mutant that was unable to respire. Thus, from this previously reported study, it appears that fermentative metabolism is not ideally compatible with bioelectroanalytical detection even in the presence of mediators. The detection compounds used here are electroactive glycosides comprising glucose conjugated to resorufin. When the glycosidic bond is cleaved by native glucosidases expressed by E. faecalis, the electroactive resorufin is liberated from the pyranose ring and subsequently reduced by microbial electron carriers to dihydroresorufin [21]. Following its reduction by microbes, the mobile dihydroresorufin in turn reduces the electrode generating a current proportional to the metabolic activity of the culture [22]. Microbial metabolic coupling to an electrode via a mobile redox mediator is well documented and usually described as a component of an energy-yielding type of respiration called mediated EET [23]. The quantitative importance of respiration compared to fermentation in a similar bioelectroanalytical detector was recently demonstrated by contrasting mediated detection times achieved with wildtype E. coli vs. a mutant, E. coli SHSP 18-an E. coli K12 derivative that is auxotrophic for δ-aminolevulinic acid, a growth factor critical in heme synthesis and therefore respiration [13,24]. The wildtype E. coli detection time, for an inoculum size of 5000 CF, was about 6 h compared with the detection time of around 14 h for the SHSP 18 mutant that was unable to respire. Thus, from this previously reported study, it appears that fermentative metabolism is not ideally compatible with bioelectroanalytical detection even in the presence of mediators.
The detection time for E. faecalis observed here (Figure 2A) cannot be directly compared to the previous studies, but it is likely to be similar. For a higher inoculum size of 10,000 CFU, double that reported for the E. coli SHSP mutant, the detection time is around three hours vs. a six-hour detection time for 5000 CFU of wildtype E. coli and 14 h for the fermentative strain [13]. The detection time achieved here for 10,000 CFU E. faecalis resembles respiratory behavior.
The genome of E. faecalis is reportedly deficient in heme, but supplementation of this growth factor and or menaquinone induces respiratory behavior in E. faecalis by activating The detection time for E. faecalis observed here (Figure 2A) cannot be directly compared to the previous studies, but it is likely to be similar. For a higher inoculum size of 10,000 CFU, double that reported for the E. coli SHSP mutant, the detection time is around three hours vs. a six-hour detection time for 5000 CFU of wildtype E. coli and 14 h for the fermentative strain [13]. The detection time achieved here for 10,000 CFU E. faecalis resembles respiratory behavior.
The genome of E. faecalis is reportedly deficient in heme, but supplementation of this growth factor and or menaquinone induces respiratory behavior in E. faecalis by activating the redox center of cytrochrome bd and providing a quinone pool to transfer electrons from NADH + to terminal reductases. Respiratory behavior is defined in this sense by observations of increased vitality or biomass production accompanied by less acidification of the medium from lactate accumulation [25].
Upon analyzing the growth yields of E. faecalis in bench top experiments, we observed that the optical density (OD 600 ) was always higher in aerobic conditions than it was in anaerobic ones ( Figure 3A). This phenomenon was reported previously [26] and it is suggestive of respiratory behavior and could arise from residual heme or quinoids in the undefined medium that we used as was reported for the Todd Hewitt broth r by Del Papa and Perego (2008). Further analysis shows that the addition of heme, menaquinone, or both does not alter this trend, although menaquinone does give a slight boost in growth, observed by an increase in OD in anaerobic conditions, suggesting a role in managing reactive oxygen species (Figure 2A). A comparative analysis of Lactobacilli electron trans-port chain stimulation by heme and menaquinone by Brooijman et al. (2009) [25] showed similar behavior in E. faecalis, i.e., that E. faecalis growth is stimulated by menaquinone and, to greater extent, by the addition of both heme and menaquinone. However, the difference in pH between 'fermentative' and 'respiratory' growth ranged only between 0.2 pH units (5.62-5.82) [25], although growth increased by 38%. Under equivalent conditions, we observed a quantitatively lower increase in growth (OD 600 ) of around 14% and the pH to be similarly stable ( Figure 3B). We observed a pH with a delta (∆ pH ) of only 0.13 pH units but an overall moderately lower pH of 5.14 and 5.27 between the unsupplemented control and supplemented conditions, respectively. The ∆ pH between fermentative LAB and the supplemented treatment thathad been induced to respire was previously reported to be greater what we have observed here, and the observed ∆ pH is usually close to a single pH unit; for example, the ∆ pH between a heme and menaquinone-stimulated Streptococcus entericus culture compared to an unsupplemented control was 1.07, with the final pH being 4.42 and 5.49 for fermentative and respiratory conditions, respectively [25]. Taken together, although E. faecalis exhibits less respiratory stimulation upon the addition of heme in BHI medium relative to other LAB, it is clear that it does derive some growth benefits from medium supplementation with respiratory components. Despite the small difference in pH between respiratory and fermentative growth in E. faecalis, Broojiman (2009) still concluded that E. faecalis is both heme and menaquinone stimulated [25]. Concluding that E. faecalis OG1RF will grow vigorously and that its response to additive respiratory components is minimal, we proceeded to examine the quantitative extent of E. faecalis EET in our microscale detectors using BHI as a medium and the redox active aglycone used in the proposed detection approach, resorufin, to effect electron transport. The charge generated by E. faecalis was substantially greater (20 mC) when mediated with resorufin than the control current which barely increased above a baseline current of around 2 mC ( Figure 4A). Over a 20 h period, around 600 mg L −1 COD more was consumed in the mediated system (ΔCOD = 2020 mg L −1 ) compared to the unmediated system (ΔCOD = 1420 mg L −1 ), which is suggestive of an increase in EET by E. faecalis upon the addition of the mediator ( Figure 3B). However, the response of E. faecalis to mediator addition in terms of COD consumption was similar in both benchtop controls and in bio- Concluding that E. faecalis OG1RF will grow vigorously and that its response to additive respiratory components is minimal, we proceeded to examine the quantitative extent of E. faecalis EET in our microscale detectors using BHI as a medium and the redox active aglycone used in the proposed detection approach, resorufin, to effect electron transport. The charge generated by E. faecalis was substantially greater (20 mC) when mediated with resorufin than the control current which barely increased above a baseline current of around 2 mC ( Figure 4A). Over a 20 h period, around 600 mg L −1 COD more was consumed in the mediated system (∆ COD = 2020 mg L −1 ) compared to the unmediated system (∆ COD = 1420 mg L −1 ), which is suggestive of an increase in EET by E. faecalis upon the addition of the mediator ( Figure 3B). However, the response of E. faecalis to mediator addition in terms of COD consumption was similar in both benchtop controls and in bioelectrochemical systems. The ∆ COD in equivalent reactors but without the electrode and incubated under identical conditions was 1160 and 1920 mg L −1 for the unmediated and mediated systems, respectively, equating to 760 mg L −1 more COD consumed in the benchtop system that was supplemented with resorufin compared with the unsupplemented control. Thus, while the electrochemical behavior of E. faecalis is stimulated with the addition of redox mediators, so too is a planktonic culture, and this is reflected in the observation in Figure 4A as well as with the growth studies in Figure 3A, where the addition of menaquinone only resulted in some growth stimulation under anaerobic conditions. This is also in keeping with observations by other researchers [25]. Coulombic efficiency (CE) of the electrochemical response of E. faecalis is low: 1.6% in the mediated system and 0.1% in the unmediated system ( Figure 4B). EET is increasingly being described in Firmicutes isolated from the gut [8,9,27,28] and, more recently, in situ directly by inserting electrodes into a mouse gut and comparing the output with germ-free organisms [29]. These studies, in the main, have lacked a quantitative assessment of the electron flux as a function of the carbon utilization in the system. EET is increasingly being described in Firmicutes isolated from the gut [8,9,27,28] and, more recently, in situ directly by inserting electrodes into a mouse gut and comparing the output with germ-free organisms [29]. These studies, in the main, have lacked a quantitative assessment of the electron flux as a function of the carbon utilization in the system. Where columbic efficiency (CE) is reported, it is low (typically <1%) and of a comparative magnitude to that reported here. For example, similarly diminutive CE observations were made by Naradasu et al. (2019), where a CE of only 0.02% was reported for a gut-isolated Entrococcacea sp. with a 99% 16 S RNA sequence similarity to E. avium [27]. The question therefore remains whether EET in many LAB-based systems, either to an electrode or in the reduction in metal oxides (e.g., the commonly reported Fe (III) reduction), is a respiratory phenomenon or if it is a strategy for Firmicutes to exert redox control over their environment, e.g., to enable nutrient uptake or even mitigate metal toxicity, or even if it is just redox leakage. Since the first option is involved in energy conservation, and the other comes at a metabolic cost, the distinction is an important one. However, the low CE of EET in LAB systems reported to date suggest that such a strategy, if it used to conserve energy, is a minor one.
Mechanisms for EET in Firmicutes have been reported to be flavin dependent [9,28], although this does not necessarily mean that the flavins are mobile redox mediators as proposed in Shewanella spp. Light et al. (2018) and Hederstedt (2020) identified key genes for EET in Firmicutes. In L. monocytogenes, the important EET gene cluster appears to contain a flavoprotein (PplA) a type-two dehydrogenase (NdH2), and two small proteins, EetA and EetB, as well as genes for quinone synthesis. Orthologous genes in E. faecalis, ppl3 and ndhA, appear to have some role in EET, but alternative dehydrogenases Ndh2 and Ndh3, as well as EetA and EetB, also have a role depending on the type of EET mechanism at play. It appears that Ndh3 and EetA are essential to EET in wildtype E. faecalis, i.e., where there is no supplementary heme and, hence, they are not respiring (Hederstedt, 2020). Under such conditions, in our systems at least, there appears to be both a growth benefit and an increase in carbon utilization upon introducing a redox mediator. While a useful detection signal (current) from E. faecalis can only be recorded at an electrode in the presence of a mediator, the growth benefit to E. faecalis of such an addition appears to be independent of the electrode. Additionally, Hederstedt (2020) and Pankratova (2018) suggest that EET is always promoted by the presence of a mediator and that EET is curtailed by the activation of cytochromes [8]. Taken together, the relatively low bioelectricity yield of E. faecalis, the absence of advanced COD consumption at the electrode, and the fact that assembly of a fully functioning electron transport chain reportedly attenuates EET in E. faecalis, suggests that the observed current production in our systems is incidental. This may arise from the fact that the quinone pool (which has been shown essential to all EET described in Firmicutes to date) may become charged with electrons and can provide reducing power for other biological processes that are not essential to conserving energy. Alternatively, the current signal in these systems may arise from leakage, a phenomenon well known in mammalian electron transport chains and that produces free radicals [30]. Nonetheless, the reducing power captured here is sufficient to achieve detection that is likelybe more rapid than traditional techniques. Accordingly, the question arises of whether it is helpful to describe the bioelectricity production observed here in E. faecalis detection systems as EET. If the phenomenon is not special, i.e., that it applies to most organisms and it is not quantitatively substantial, then the focus should be on the analytical procedure describing EET rather than the biological significance, i.e., the ability to sense the metabolism of a particular organism rather than the 'special qualities' of that organism's metabolism. In this investigation, we observed CE comparable to that attributed to an electrochemically isolated organism in a member of the same genus. Recognizing the need to objectively screen for electrogenicity, Zhou et al. (2015) proposed a rapid colorimetric screening procedure to assign EET capabilities to different microbial genera. They only screened one member of the Firmicutes, and they did not assign to it the ability to carry out EET when it is objectively compared to known EET-capable genera.
Conclusions
The ability to detect microbial metabolism at an electrode is not dependent upon an organism's respiratory behavior, nor is it constrained to genera with definitive and substantial EET capabilities afforded by specific electron-transporting outer membrane components. Such a finding renders an industrial important group of organisms, the LAB, amenable to rapid bioelectroanalytical detection. As a phylogenetically distinct group, LAB do not respire but conserve energy fermentatively. We showed that, to specifically detect LAB through bioelectroanalysis with confidence, redox mediator-based detection compounds are required. Using a simple but sometimes overlooked means to determine CE, we showed that it is only necessary to capture one or two percent of a microbe's metabolic activity to set up an effective detection signal. While we have demonstrated this concept with E. faecalis, the technique is likely to be compatible with many other industrially or clinically relevant microorganisms. Here, we show the utility of a detection compound that has some specificity for the test organism; however, with medium development, it is possible that tests could be designed that specifically detect LAB under the range of conditions that reflects the diversity of applications where they are important. | 7,291.2 | 2022-01-25T00:00:00.000 | [
"Biology"
] |
Learning from a lot: Empirical Bayes for high‐dimensional model‐based prediction
Abstract Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well‐known model‐based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross‐validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co‐data”. In particular, we present two novel examples that allow for co‐data: first, a Bayesian spike‐and‐slab setting that facilitates inclusion of multiple co‐data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
Baseball batting example, revisited
We shortly revisit the famous baseball batting example (Efron and Morris, 1975), often used as a scholarly example of EB. While this is an estimation problem instead of a prediction problem, we revisit it for several reasons: i) it is a well-known example for which the true values are known; ii) the EB objective function is the same as for diagonal linear discriminant analysis; and iii) by casting the problem to a large p setting it allows us to show the importance of p being large.
For 18 baseball players, their batting averages over the first 45 bats are recorded and denoted by B i . The batting averages over the remainder of the season are also known, and considered to be the truth. We follow Van Houwelingen (2014) by modeling B i ∼ N (θ i , σ 2 i ), where the aim is to estimate θ i . The variances are estimated byσ 2 i = B i (1−B i )/45. Then, to effectuate shrinkage Van Houwelingen (2014) applies a Gaussian prior N (µ, τ 2 ) to θ i . In the formulation of the marginal likelihood (see Main Document), this implies hyper-parameter α = (µ, τ 2 ), and estimation of α is straightforward due to the conjugacy of the likelihood and the prior: Then, the posterior mean estimate equalŝ θ i = E(θ i |B i ; (μ,τ 2 )) =μ +τ 2 (τ 2 +σ 2 i ) −1 (B i −μ). The conclusion in Van Houwelingen (2014) is that the shrinkage prior slightly reduces the mean squared error, but enforces too strong shrinkage for the extremes. E.g. for the best playerθ 1 = 0.271, whereas X 1 = 0.400 and true θ 1 = 0.346. Two possible explanations come to mind: the estimate of the prior parameters is not good due to p = n being small and/or the prior does not accommodate the extremes well. We investigate this.
First, we simulate 10,000 additional true values from a density estimate with Gaussian kernel (using R's density command) applied to (θ 1 , . . . , θ 18 ). To obtain B i , i = 19, . . . , 10018, Gaussian noise was added with variances θ i (1 − θ i )/45. The estimates obtained in Van Houwelingen (2014) wereμ = 0.256 andτ 2 = 0.000623. The latter seems to be a major cause of over-shrinkage: the true variance computed from the 18 known θ i 's equals 0.00143. If we estimate τ 2 from the large data set, a much better estimate is obtained:τ 2 = 0.00195, as compared to the variance of the 18 known plus 10,000 generated true θ i 's, equaling 0.00166. From this, we obtain posterior mean estimateθ 1 = 0.293, which is substantially closer to θ 1 = 0.346 thanθ 1 = 0.271. Estimates for all 18 players are displayed in Figure 1(a).
In this example, it is natural to replace the Gaussian prior by a 3-component Gaussian mixture prior (bad, mediocre and good players): θ i ∼ 3 k=1 p k N (µ k , τ 2 k ). Then, α consists of 8 hyperparameters given that p 3 = 1 − p 1 − p 2 . We employed the EM-type algorithm of Van de Wiel et al. (2012) to maximize the marginal likelihood (see Main Document) in terms of α. Here, we use that the likelihood is Gaussian, and the Gaussian mixture prior is conjugate to it. The latter also facilitates straightforward computation of the shrunken estimatorθ Mixt i = E(θ|B i ; α). In this setting, the mixture prior is fairly close to the estimated Gaussian prior, and so are the shrunken estimates, as displayed in Figure 1(b). Slightly less shrinkage for the extremes is observed, though. For example,θ Mixt 1 = 0.298.
Bayesian elastic net
The Bayesian linear elastic net model, as used in the Main Document, is (Li and Lin, 2010): with some arbitrary (possibly improper) density f (σ 2 ). The normalizing constant g(λ 1 , λ 2 , σ 2 ) is given by: Since the simulations are for illustrative purposes only, the error variance was kept fixed at its true value (σ 2 = 1) throughout the simulations. Then, after introducing the latent variables τ = τ 1 · · · τ p T , we have the following conditional distributions for β and τ : , and GIG denotes the generalized inverse Gaussian distribution.
Marginal likelihood from Gibbs samples
According to Chib (1995), the log marginal likelihood of a Bayesian model may be calculated from the converged Gibbs samples as: where β * is some high posterior density point of p(β|Y) and τ (k) are Gibbs samples indexed by k = 1, . . . , K. In principle, any point β * may be used, but for the sake of efficiency a high-density point of β is preferred, such as the posterior mode. Then, for fixed σ 2 , the log marginal likelihood is approximated by: Sampling from the multivariate normal is a costly operation in high dimensions. In Bhattacharya et al. (2015) an efficient sampling scheme for β is described: Furthermore, if (τ j − 1)|Y, σ 2 , β j ∼ GIG(1/2, ψ, χ j ), then 1/(τ j − 1)|Y, σ 2 , β j ∼ IGauss(µ j = ψ/χ j , λ = ψ). Sampling from this inverse Gaussian is done by the following scheme: 3 Proof for Theorem 1: EMSE τ 2 for linear regression Then, let us first compute the expected squared bias w.r.t. β: where we used the central moments of Gaussian random variables, available from Isserlis' Theorem (Isserlis, 1918): Hence, we need to compute V (β 2 j ) and Cov(β 2 j ,β 2 k ). These are again derived from expressions of the central moments of Gaussian random variables. Let us first express the non-central moments in Cov Y (β 2 j ,β 2 k ) = Cov(β 2 j ,β 2 k ) = E(β 2 jβ 2 k ) − E(β 2 j )E(β 2 k ) in terms of the central ones. Denote the centralized value ofβ j byβ j =β j − β j . Then, because T 2 = 0 due to the symmetry of the central Gaussian distribution. Likewise, the second term of the covariance equals: Subtracting the latter from T 1 cancels the latter 3 terms in both expressions, rendering where we used the equations for the central moments of Gaussian random variables (Isserlis, 1918).
Note that the latter can also be obtained by writingβ 2 (5) and (6) into (4) renders: Taking expectation w.r.t. β gives: because we assume i.i.d. central priors for β j . Now to compute n). Hence, the requested moments are known (Press, 1982): where we assume p < n − 3. Substituting (8) into (7) and aggregating with the expected squared bias (3) finalizes the result: This simplifies for independent X i , because then ψ jj = 1 and ψ jk = 0:
Simulation Example
Here, we show the results for all simulation settings presented in the Simulation example. | 1,976.8 | 2018-06-01T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Wonder and the Patient
Is it possible to distinguish, as sociologist Arthur Frank proposes, an ‘ideal of wonder’ within which ill persons could recover some of their former sense of life and flourishing, even within the constraints of ill-health? Beyond this, are there more general benefits in terms of health and well-being that could accrue from cultivating an openness to wonder? In this paper I will first outline and defend a notion of wonder that gives philosophical support to Frank’s proposal, noting why thinking about medical treatment may readily provoke a sense of wonder. Second I will however limit the normative force of such an ‘ideal of wonder’ noting its demands and some of the challenges facing it. The paper goes on, third, to conjecture wider benefits within and beyond the clinical encounter arising from being mindful of the wonder of embodied human agency. Fourth the paper will consider alignments between the foregoing analysis and some theoretical commitments in recent work in health geography. Finally I will briefly reconsider the notion of the body-as-territory, and the role of the imagination in bringing it under wonder’s gaze.
circumstances of serious illness. Those are circumstances of which he is able to write with great authority both as a distinguished scholar and as an acute witness in his own case. I want to begin by drawing on his own reflections upon the salient part of the story. Describing his experiences of becoming progressively ill, submitting to tests, entering hospital, undergoing surgery, and beginning to recuperate, he comes face to face with the challenge of having as it were to negotiate both with his ill body and with his professional carers. Medicine's instincts were to seek to control his illness through controlling his body: Every day society sends us messages that the body can and ought to be controlled. Advertisements for prescription and non-prescription drugs, grooming and beauty advice, diet books, and fitness promotion literature all presuppose an ideal of control of the body. Control is good manners as well as a moral duty; to lose control is to fail socially and morally. But then along comes illness, and the body goes out of control. … Physicians justifiably think it is their duty to restore, in the name of society, the control that the sick are believed to have lost. Control, or at least management, becomes a medical ideal (Frank 2002, 58).
Frank's experiences of medicine's attempts at control were discouraging. Partly this arose from correctable deficiencies in procedure (and in common courtesy) in the institutionalised healthcare he received; but partly it arose from the fearful realisation that medicine might dissemble over something that he would ultimately have to acknowledge openly, namely that cure might not be possible. The resources he needed had to include an attitude towards his own ill body that was grounded on something other than its subjugation. This attitude, it transpired, was one of wonder at the body: a wonder held on his part certainly, but also, ideally, on the part of his medical carers: What I recommend, to both medical staff and ill persons, is to recognize the wonder of the body rather than try to control it. Wondering at the body means trusting it and acknowledging its control. I do not mean we should stop trying to change the direction the body is taking. I certainly did all I could, and I value all that my physicians did, to use treatment to change the direction my body was taking. Wonder and treatment can be complementary; wonder is an attitude in which the treatment can best proceed. (59, my emphasis) Illness might simply be something he had to live with, but an attitude of wonder could make this meaningful.
Wonder is almost always possible; control may not be. If the ill person can focus on an ideal of wonder in place of control, then living in a diseased body can recover some of its joy. (Frank 59,my emphasis) Frank's experience is striking. But not only that: as he describes it, it is also heartening (irrespective of the fact that, fortunately, he also enjoyed a good clinical outcome: seminoma is one of the more treatable cancers). However, is his advocacy of a sense of wonder realistic? To what benefits or goods might it credibly give rise? (In particular, how does it help treatment to 'best proceed'?) And how widely might one really urge this 'ideal of wonder' upon other patients? In proposing it, Frank is obviously making a normative claim, but he doesn't indicate its strength; in particular he doesn't tell us whether this is something that we can ordinarily urge upon patients, as distinct from applauding it when we happen to find it in them. Nor does he spell out what he takes wonder and wondering to consist in, as exercised in the circumstances he envisages. I think these are solid questions worth pursuing, and it is in an appreciative spirit that I will try to do so.
First, I will briefly outline a notion of wonder that I believe is substantial, supports Frank's proposal (albeit from a philosophical rather than a social-scientific perspective), and is relevant more widely to the clinical encounterincluding those in general practice as distinct from hospital medicine. I will also note why thinking about medical treatment readily provokes wonder. Second, as for the normative force of the 'ideal of wonder,' I will be prepared to limit it. It is an ideal that is initially difficult to approach, and relatively few patients may habitually feel wonderment in relation to their own embodiment, ill or otherwise. This doesn't make Frank's ideal any the less worthwhile: the occupants of minor roles deserve, and may well reward, study. So, third, I will conjecture what further good things might follow (mainly, but not exclusively, for the clinical encounter) when a patient is indeed mindful of the wonder of embodied human agency in general and of their own in particular. (Frank's is fundamentally a patient-centred view; my own analysis is situated between the perspectives of patient and physician.) Fourth I will consider the extent to which the analysis offered here aligns with some of the theoretical commitments underlying some recent work in health geography. Finally I will briefly reconsider the notion of the body-as-territory, and the role of the imagination in bringing it under wonder's gaze.
Wonder and treatment
Wonder, for Frank, seems to be a specially-reassuring mindfulness and attentiveness to one's circumstances, both mediated through and often focused upon the bodyill or not. It is clearly and importantly positive: it is a source of recovered joy for the ill person, a manifestation of trust towards to the body, and a vitally enabling circumstance, 'an attitude in which treatment can best proceed'(59). I certainly endorse its being an attitude rather than an emotion (Evans 2012, 85)one of intensified and compelling attention, in which the ordinary is presented to us anew, commingled with and suffused by aspects of the extraordinary; our imagination is engaged in advance of our understanding though that might follow (Evans 2002, 127). Wonder is, I would argue, more profound and more durable than curiosity; for instance, it survives explanation: the wonder of the universe is frequently increased, not diminished, with our understanding of its inter-penetrated complexity. Its objects need not threaten to impact upon us as might those of awe; shorn of an aesthetic, it need not engage the sublime or the terrifying (Evans 2012, 127). It can be both impersonal (especially not directed at the self [Moore 2005, 269]) and intimatethink of the simple 'tangled bank' upon which Darwin saw written the entire story of evolution (1859). Wonder is above all a glimpsing; a disclosure (Miller 1992, 51).
This view is, I believe, echoed in Frank's description of its source on the occasion that concerns him (and he mentions no precursors): it seems to have come about through his defiance of the physical misery of his illness by enjoying a uniquely vivid experience of walking in the September rain, which on the occasion in question he valued for its own sake (not least, perhaps, because of the possibility of its impending loss); crucially, he was undistracted by any plans, purposes or inward-facing concerns. Perhaps through the mediation of intense experiences, wonder for Frank derives from the body, from which it was 'learned' (59) and for which he notes that he can claim 'little credit' (59). The embodied ground of experience of all kinds, wonder included, seems inescapable; it is understandable that his focus is upon the ill body that he has become. I tend to think that wonder swiftly proceeds outside ourselves, taking us away from ourselves towards 'something in the face of which we set aside our own concerns and even our self-conscious awareness, in the most powerful instances' (Evans 2012, 127). In wonder, we find the world, and ourselves within it, temporarily 'transfigured' (there is a measure of consistency among the views emanating from the relatively narrow range of disciplines that have considered wonder from a scholarly viewpoint [Evans 2012]). That 'green September day' was, I suspect, transfigured for a while. Now consider the provocation of wonder in relation to treatment: if transfiguring is essentially a matter of perception, treatment aims at substantial material change: 'It is no small thing to have your body rearranged, first by disease and then by surgical and chemical interventions intended to cure that disease' (54).
When we can pause and acknowledge it, wonder is dramatically invited by our bodies; by their constitutions, complexities and capacities; and by their being the material fundament and medium of that extraordinarily elusive yet ever-present song of life, daily conscious experience. Few things spotlight this more dramatically than medical treatment (Evans 2014).
Consider, first, what it is to be doing nothing in particular, thinking and feeling nothing in particular, in no discomfort or agitation but no especial pleasure or stimulation eithersimply in the 'background' state of being ready to do or think the next thing that comes along needing to be done or thought, the unnoticed small change of bodily existence ready to be cashed out in judgment or action. This is the taken-for-grantedness of ordinary being, and it could last not a moment without the extraordinary complexity and frantic silent activity of the bio-physicochemical constellation that is our bodies. 'Health is life lived in the silence of the organs,' says Leriche (quoted Canguilhem 1989, 91). Woven into and emergent from this constellation, its patterns as reliable as they are intricate, are our ordinary perceptual and kinaesthetic and proprioceptual experience; our qualitative sensations; our capacity for recognition and memory, movement and conjecture, decision and willed action; in short, our embodied agency. If we are aware of any of this at all, ordinarily it is only subliminally as we foreground only what is of concern to us. 1 Consider, second, that upon this improbable fabric are wrought extraordinary metamorphoses: of an ordinary life-cycle; of disease and recovery; andthis the intelligent purposive work of other embodied agenciesof the treatments that are medicine's names for its own organised changes in our constituent flesh. These bodily 'rearrangements' are indeed 'no small thing,' in either experiential or, when we stop to think keenly about it, metaphysical terms (Evans 2014). Successful or not, medical treatments provoke wonder and they constitute wondersalbeit usually unattended-to. (Indeed, illnesses might also provoke wonder for some patients.) 2 I have elsewhere suggested how this provocation can be important in the clinical encounter especially from the clinician's viewpoint (Evans 2012). Wonder conceived as a transfiguring attentiveness offers to the clinician an ever-present source of ethical regard (since even the most damaged patient retains embodiment, always worthy of wonder, in a context that plausibly joins wonder to respect); a sense of wonder can revitalise diagnostic imagination amid the dulling impact of clinical routine; and through wonder at embodied human experience, the clinician may find irresistible the recalling of her shared vulnerability with the patient. It is to both the first and the last of these, to ethical source and to shared vulnerability, that I suspect Frank implicitly appeals when he says that 'The body is not a territory to be controlled by either the physician's treatment or the patient's will' (Frank 2002, 62).
The inference we may draw is, I think, that for Frank wonder stands in an ambiguous relationship to treatment. Wonder invites us to attenuate our dependence upon treatment when this is too-readily misconceived under the ideal of control. However, Frank endorses treatment in his commending wonder as that attitude in which treatment may 'best proceed.' Perhaps wonder is ambiguous like this: perhaps both aspects are true.
The normative force of the 'ideal of wonder'
The very term 'ideal' is almost paradigmatically normativethat at which one should most perfectly aimbut it is a term often used in intentional contrast with the practical world, and different ideals (like different norms) often compete: indeed, the ideal of wonder is urged by Frank precisely as a rejection of the ideal of control. He is under no doubt about the significance of wonder to the clinician: A physician who does not have this sense of wonder seeks only to cure disease. Sometimes he succeeds, but if cure is the only objective, not achieving it means he has failed. For the artful physician, wonder precludes failure. The physician and the ill person enter into a relationship of joint wonder at the body, in which failure is as irrelevant as control. (62) It seems hyperbolic to suppose that 'wonder precludes failure,' except in the rather stipulative sense of suppressing cure as a goal whenever cure seems beyond us. A sense of wonder might indeed give us a supplementary reason for doing so, perhaps through giving us a way of seeing the patient's world anew (and seeing anew is a characteristic of a state of wonder), but wonder has no monopoly upon our recognising that other goals sometimes supplant the aim of cure.
Frank's conjecture that wonder is 'almost always possible' (59) seems equally ambitious; it is a surprisingly large claim in those terms, and perhaps we should infer it to mean rather that wonder is almost always possible for those who already know how to access it. For a sense of wonder is something that someone may or may not have in ready response to a particular situation: Frank is openly troubled by the difference between those doctors who in the clinical context have it and those doctors who do not, insofar as this determines whether they might share a sense of wonder with the patient.
As it stands, Frank's view of wonder as both importantly beneficial and readily available is strongly normative. Yet, gripped as I am myself by the call of a sense of wonder in response to human embodiment, I am not at all certain how generally I could expect everyone to share it. This is a matter quite separate from how good I think it would be if they did (generally I would think it a very good thing indeed). Rather as risks are properly understood as a combination of two independent variables, the magnitude and the likelihood of the harm in question, so perhaps normative force mightat least, herebe thought a combination of both the scale and the attainability of the good in question. To the nature of that good we shall shortly turn, but I think its attainability is subject to a limitation that in turn limits the normative claim of wonder as an ideal. People vary in their inclination towards wonder according to habit, disposition, even talent (Opdal 2001) not to mention the filtering of sensitivity and imagination by upbringing (education and developmental psychology are alike foci of disciplinary enthusiasm for studying wonder [Minney and Potter 1984]). Indeed, were modern industrial societies and their institutionalised healthcare systems not more orientated towards the ideal of control than that of wonder, Frank would not be saying anything distinctive and nor would he have felt the need to say it. 3 Thus, habitual openness to wonder may be no more easily producible from a standing start than is, say, humility or compassion; this does not undermine them as normative ideas, but it might temper our expectations.
This importantly distinguishes clinicians from patientsand recall that Frank commends wonder to both groups. I strongly believe that for physicians an openness to wonder is an educational good that could well be promoted in aspects of the medical curriculum. However, while wondering (and empathetic, and humble, and compassionate) clinicians can be encouraged in these respects, they are more plausibly identified and encouraged by medical schools' admissions processes than they are manufactured by their courses of instruction. But no such selection process applies to patients whose vocation is first and foremost an inevitability of the frailty of flesh. Frank is, in my view, right to commend wonder to both physicians and 'ill persons,' but it might be only a minority of patientswell or illwho are naturally disposed to respond to his call, at least at first. Frank's own experience notwithstanding, serious illness may for many patients be not at all the right time to start to develop a sense of wonder: for many, the very illness of illness may preclude it.
None of this makes Frank's claim any less important, nor the 'goods' made available in wonder any less valuable, but it does remind us that the claim's normative force is limited. With this acknowledgement, let us now consider what good things might indeed follow, where a patient is open to wonder.
Some 'goods' arising from a patient's sense of wonder Recognising shared embodied agency I recalled above that the physician who is open to wonder at embodied human nature is thereby also mindful that she shares this nature, and its attendant vulnerability, with the patient. The patient who is open to wonder can reciprocate that sense of shared nature and vulnerability. Patient and physician may thus jointly recognise the intensity of the physician's task, and the commitment (to helping the patient who may be the locus of significant suffering) and understanding (of that same shared fragility of the flesh) that this task requires of her.
I suspect Frank has exactly such reciprocity in mind when he observes that 'The ill person who finds a physician to join in this wonder is fortunate' (62). This might mean that such fortune is rare as well as good, of course. Both the rarity and the goodness are acknowledged in Michelle Clifton-Soderstrom's attempt to ground the ethical foundation of medicine in the reciprocal 'otherness' of other people. In this account, interpersonal relationships (including, specifically, the clinical relationship between patient and physician) have a primary ethical and ontological character that precedes the 'knowing' relationships of science: 'the first encounter with the Other is not one of comprehension' but one of ethical recognition and acknowledgement (2003,450). Signally, Clifton-Soderstrom grounds this squarely in wonder, implicitly of a self-diminishing kind that she contrasts with something more akin to scientific curiosity. As she puts it, the otherness of another person transcends the idea of 'the Other' in oneself, …point [ing] to the phenomenon of wonder of another person, and wonder as the experience that distinguishes human beings. The experience of wonder is often neglected in the practice of medicine. Instead of a profession filled with the wonder of who the Other is, scientific wonder becomes the main or only parameter for medical practice. (453, my emphasis) It may be that both physician and patient sense the existential, rather than simply scientific, wonder of embodiment and the wonder of addressing it. But even if that recognition be a vibrant one only for the patient, the clinical encounter can still in consequence be a more mutually-respectful occasion, surely a good thing for patient and clinician alike, and it becomes an occasion with greater imaginative possibilityincreasingly important with the rising proportion of presenting cases (particularly in family practice) whose problems are 'functional disorders' and whose basis, being at least in part emotional or social, may require correspondingly more imaginative investigation (Muller-Lissner et al. 2001). If we are looking for merely instrumental goods from the patient's having a sense of wonder, then our first operative example seems to lie within the clinical encounter.
Shared commitment to the clinical endeavour
A second gain, more obviously featuring in the clinical context, will be a consequence of the first. An appreciation of and respect for his own (and others') bodily powers and limitations combined with an intensified appreciation for the clinician's engagement with those limitations in his own case seems likely to result in the patient's having a more strongly shared commitment to the clinical endeavour. (Whether or not the greater imaginative freedom, that one might think implied by an openness to wonder, tends towards the giving of a richer history rather than a merely more unfathomable one is, of course, a separate question.) This notion of shared commitment is obvious enough and easily understood. However, the evident reciprocity that it involves on the patient's part seems too easily eclipsed by the predominant focus upon the physician's commitment and obligations, and I think this asymmetry is worth a little attention. Clifton-Soderstrom, for instance, draws her conclusion in terms that rest mainly upon the physician's need to recognise the otherness of the patient as constituting the prior grounding of ethical response. Her analysis is an application to clinical medicine of the ethics of Emmanuel Levinas (2003, 451). But it seems to me that this cuts both ways in the clinical encounter: to the patient the Other is the physician whose own otherness must alike be respected. This requirement may understandably be dimmed in the context of a patient's significant suffering. However in ordinary workaday clinical consultationsperhaps most especially in primary care -I see no reason suddenly to drop this reciprocal requirement of acknowledgment of the Other simply because the project of the clinical encounter is an asymmetric one (that is, primarily conceived towards the benefit of the patient). Even the context of the clinical encounter as clinical is, on the story that Clifton-Soderstrom is presenting to us, epistemically subsequent to the primary ethical context of an encounter between two people who are reciprocally Other. All the more reason then to emphasise that the patient open to wonder at his own embodiment is likely to feel more fully a part of that endeavour and to behave as a more engaged partner in discerning and fostering the means to his own recovery.
When, as sometimes happens, the patient's conception of his needs differs sharply from the physician's, then a demonstrably shared sense of responsibility can only improve the patient's chances of persuading the clinician to take seriously his dissenting view and to respect it.
These first two goods are 'instrumental' in that they subserve other goods or goals. But it is possible that there may be something intrinsically, constitutively, good about having a sense of wonder. An 'ideal of wonder,' like an 'ideal of service,' suggests something whose virtue cannot simply be reduced to the accumulated virtues of its good results.
Wonder as constitutive of a flourishing life?
One way of responding would be to think in terms of what constitutes a flourishing life: what goes into a lifeany lifethat in some recognisable sense 'goes well'? There is no need here for any essentialist attempt to specify necessary and sufficient conditions for a flourishing life. We need simply identify things whose inclusion in a life would typically make that a better life than it would have been without them. An otherwise sustainable life that additionally has such things is, to that extent, also a flourishing life. Examples might include: kinship; friendship; a sense of the community to which one belongs; a sense of one's identity and purpose; laughter; imagination; intimacy; beauty; the ability to make enduring meaning; and doubtless many others besides. I claim that we can readily add 'a sense of wonder' to the list, wherein line with the account we gave abovethis sense means the inclination and ability to remain imaginatively open to the fullness of the world around one beyond the reach of immediate explanation. It is reasonable to prefer a life in which wonder is possible to an otherwise corresponding life in which it was not: this is not because the ability or inclination to wonder does or makes or supports anything else in particular, but simply becauselike kinship and imaginationit is prima facie a good thing to have, engage, enjoy. Thus wonder does not simply support a flourishing life: rather, it part-constitutes one; it is not so much a route to flourishing: rather it is part of what it is to flourish, one flavour of what Nussbaum would call a 'sense of life ' (1987). 4 Additional work is needed, both in making a general case and in quarantining exceptions, but I would expect such work to yield a strong argument in favour of including a sense of wonderor, in Frank's terms, an 'ideal of wonder'among life's intrinsically good things. Indeed, it invites a variation on Socrates' famous dictum concerning the unexamined life: "The un-wondering life is less worth living than one lived in wonder." Wonder as constitutive of a healthy life?
Is openness to a sense of wonder more specifically constitutive of a healthy life? This is more elusive. It would look more plausible as we construed 'health' more broadlyin effect, the more that we took health to converge upon flourishing. But the objections to such convergence are substantial and well-known. A healthy life cannot be eo ipso a flourishing one, since not all flourish who enjoy good physical health or vice versa; and a more fundamental problem concerns the limitless authority and responsibility for flourishing that such convergence then appears to place upon medicine. In any case, Frank's own narrative distinguishes clearly medical and also clearly non-medical dimensions to his illness and recovery; this is not a route that could support his account.
A healthy life sounds like (and indeed presumably is) an intrinsically good thing, other things being equal; so now we need to know how remaining open to wonder counts as an integral part of the healthiness of that healthy life. But to maintain that a sense of wonder is itself a 'healthy' thing is either to present it as conducive to health in the particular ways that we have already considered or more adventurously to align it with other clearly existential goods such as kinship, intimacy, fulfilment and so forth that are often regarded as healthsustaining or health-promoting (and then to seek out the supporting evidence [White 2009]) or to speak metaphorically. Frank's account suggests that he might endorse the first and the second of these; whereas relying on the third usually means that all analytic bets are off.
Alignments and misalignments: geographers and wondering
In some respects my discussion may appear to echo, even to align with, aspects of the 'turn to affect' that engages much thinking in geographical writing, including health geography writing. 5 The key aspect here is the centrality of embodiment in experience. I have noted that wonder is both mediated through and focused upon the body, even (for Frank) 'learned from' the body. I have conjectured inter alia: that in wonder our imagination is engaged in advance of our understanding; that (pace Clifton-Soderstrom) existential considerations are pre-eminent over scientific ones in addressing the wonder of embodiment; that ordinary being is taken-for-granted rather than analysed; that our bodies are the medium of daily conscious experience; and that willed action is not wholly distinguishable from other aspects of embodied agencyagency which itself, taken as a whole, is often only subliminally available to our conscious awareness, and is in some respects forever mysterious. In the same spirit, I contrast imaginative awareness with immediate explanation. This much appears in some respects to align with some of the tenets of 'nonrepresentational theories' in geography. In particular the emphasis on situating our view of understanding (epistemology) in the context of our material nature (ontology) might seem to sit well with non-representational theories' concern to enfold the (otherwise detached, external) objects of experience and symbolic representation into an essentially acted, lived, bodied world of practices and inhabitings. Ben Anderson and Paul Harrison summarise this concern neatly thus: non-representational theories 'share an approach to meaning and value as "thought-in-action"' (2010). States of wonder might be taken to be particularly intense illustrations of what Anderson and Harrison dub 'constant relations of modification and reciprocity with their environs ' (2010, 7). Or again NRT's emphasis on relational ontologies (Bissell 2010) might provisionally seem echoed in my grounding the ethical foundation of medicine in a mutual otherness and in my suggestion of the self-abnegation of some intense states of wonder. And Greenhough's insistence on 'understanding the world through an engagement with its materiality ' (2010, 43) appears prima facie to offer the very stuff of some of the most intense wondering experiences (though unfortunately experiences that are not considered in this paper), concerning the very ipseity, the very 'this-ness' of things that reminds us of what Wittgenstein regarded as the most fundamental of philosophical mysteries, that there is a world at all, that there is anything at all rather than nothing (1961, section 6.44).
But there are crucial misalignments, too. Within Frank's reported experience, and lurking too in my analysis, lies an aspect of stubborn residual dualismthe self that he is, wondering at his body, temporarily externalised or detachedand I do not see that this can (or should) be fully dissipated. Moreover, wonder more generally seems in one sense highly externalising. One's reaction to the thing wondered-at may be deeply embodied as well as deeply imaginative, but at the very least, wonder is liable to concern the world-out-there as something that is really present, in order that it may also be, briefly, newly-present. This declares wonder's intentionality, which seems to me to be of its essence: in wonder, one wonders-at; one does not simply 'wonder' tout court. I have no grasp of states of wonder that are void of intentionality and no sense that such states are even coherent.
More generally, my analysis neither requires nor can tolerate very much subordinating of meaning or significance. Anderson and Harrison argue for a sophisticatedly altered priority among the sources of meaning rather than for meaning's wholesale relegation, but some of their predecessors in theory may be less cautious. Thus the criticisms directed at Nigel Thrift among others, in Ruth Leys' sustained scepticism concerning the 'turn to affect' and the dangers of ontological privileging of 'corporeal affective reactions' over cognition and intentionality, which are reduced seemingly to little more than epiphenomena (Leys 2011). Pace Leys, the avowedly embodied grounding of experience does not seem to me even momentarily to entail that its intentionality must, or even could, be dispensed with. More specifically, I believe that wonder is conceived and encountered in response to (or even as a form of) the representation of the thing wondered-at; the world is made 'newly-present' in changed representation and signification. What less hard-line affective theorists generally, or non-representational theorists in particular, might make of wonder other than as an intense, atypical, form of embodied attunement is not immediately clear, but the question could supply an intriguing dialogue to be pursued with them.
Elsewhere, perhaps contemporary geographers more readily engage with considerations of curiosity than of wonder; while they are related notions, curiosity has a more obviously teleological nature than does wonder, necessarily constituting rather than contingently provoking a desire to understand. Ironically, even curiosity has been too often dismissed as insufficiently practical in, for instance, discourses about academic impact, something that Richard Phillips laments in the course of advocating resistance through 'geographies of curiosity' in which for instance geographers can 'illuminat[e] the ways in which environments variously encourage and discourage curiosity ' (2010, 448) and by dispelling anxieties 'about emotional dimensions of academic practice' (449). Neither curiosity nor wonder is, to my mind, plausibly an emotion, but securing physical, intellectual and moral spaces that are conducive to curiosity may invite further analysis about whether there can be geographical underpinnings to attitudes of wonder. If so, such attention might valuably be paid to the clinical, ancillary and social spaces of healthcare, and to those characteristics of spaces (clinical or otherwise) that might be able to encourage untrammelled, rather than trammelled, reflective experience. Phillips also makes a valuable point about the plainly geographic savour of descriptions of curiosity that are couched as exploring unknown territories, though whether states of wonder are so readily captured in geographically-orientated terms needs further thought (449). More generally, geographical engagement with wonder may be a focus for interesting future work although the most obvious geographical sources of wonder of the kind celebrated by Keatsthe prospect of Cortez staring with 'eagle eyes' upon the Pacific, 'Silent, upon a peak in Darien'sadly belong to a world less-charted than our own (Keats 1884).
Wonder, the body and imagination
As we've noted, Frank declares that 'The body is not a territory to be controlled by either the physician's treatment or the patient's will' (62) in justification of the hope that for 'the ill person [who] can focus on an ideal of wonder in place of control, then living in a diseased body can recover some of its joy' (59). Frank's view combines an overt ethical programme with an ontology that is only implicit and ambiguous; objectors to dualist accounts of our embodied state will find only correspondingly ambiguous support here, and there are echoes of the notion of our 'inhabiting' our bodies that cannot be entirely stilled.
Yet it seems to me that our experiences of embodiment are ambiguous in this way. Our identities are grounded in such continuities as we (and those around us) can most readily find. Dementia and other erosions of our cognitive and interpersonal faculties can amount to wholesale transformation within an apparently unperturbed embodiment. However, diseases such as the cancer that Frank faced can bring about physical metamorphosis of the gravest kind accelerating or distorting aspects of the metamorphosis of our typical embodied lifetime careers. Disease and medicine's response do, alike, achieve changes in us, but they also achieve changes in what I've called 'our constituent flesh;' in the throes of the struggles involved, the hoped for 'changes in the direction that my body was taking,' as Frank put it, we do indeed inhabit our diseased bodies even while we hope for their restoration.
At the very least this is the dualism of relation: we may succeed in replacing the ideal of control with the ideal of wondering-at, but both are attitudes towards the body: both address the body from a position that is somehow both immanent within, and transcendent towards, the body in question. For me this is a further ground of wondermetaphysical wonder -but I notice in more prosaic terms that it relies heavily upon our acts of imagination.
Imagination is very much a mode of agency. Frank's commending the ideal of wonder consists throughout in emphasising agencyhe commends and urges a sense of wonder as itself active and as something that we can attain through learning and development. It was an act of the imagination that disclosed to him what, he feared, medicine was implicitly hidingthat cure might not be possible. It is a forward-looking imagination that conjectures wonder as an attitude 'in which treatment can best proceed'something for whose first experiment in any individual's case there can hardly be prior evidence. Reading about Frank's experience could prepare me for facing a comparable adversity; but when the time came to put his thesis to the test, I would have nothing more solid than an act of faith and imagination. And indeed any reasonable analysis of wonder as an idealsomething to be elevated, generalised, striven for even if not actually attainedrequires our imaginative projection. The contrast between the attunement involved and the workaday entrapments of lazy or fearful thinking constrain the imagination as they constrain the sense of wonder more generally.
This does not make a sense or an ideal of wonder at the body any less importantsimply more demanding of our imaginative energy. I can think of no more worthwhile expenditure of that energy than in pursuit of Frank's implicit view that a sense of wonder, like a sense of beauty, is a transfiguring ideal capable of presenting the world to us anew, in health and in sickness alike. | 8,488.2 | 2014-12-09T00:00:00.000 | [
"Philosophy",
"Sociology"
] |
Distinct tumour antigen-specific T-cell immune response profiles at different hepatocellular carcinoma stages
Cancer-testis antigens (CTAs) and tumour-associated antigens (TAAs) are frequently expressed in hepatocellular carcinoma (HCC); however, the role of tumour-antigen-specific T cell immunity in HCC progression is poorly defined. We characterized CTA- and TAA-specific T cell responses in different HCC stages and investigated their alterations during HCC progression. Fifty-eight HCC patients, 15 liver cirrhosis patients, 15 chronic hepatitis B patients and 10 heathy controls were enrolled in total. IFN-γ ELSPOT using CTAs, including MAGE-A1, MAGE-A3, NY-ESO-1, and SSX2, and two TAAs, SALL4 and AFP, was performed to characterize the T-cell immune response in the enrolled individuals. The functional phenotype of T cells and the responsive T cell populations were analyzed using short-term T-cell culture. T cell responses against CTAs and TAAs were specific to HCC. In early-stage HCC patients, the SALL4-specific response was the strongest, followed by MAGE-A3, NY-ESO-1, MAGE-A1 and SSX2. One-year recurrence-free survival after transcatheter arterial chemoembolization plus radiofrequency ablation treatment suggested the protective role of CTA-specific responses. The four CTA- and SALL4-specific T cell responses decreased with the progression of HCC, while the AFP-specific T cell response increased. A higher proportion of CD4+ T cells specific to CTA/SALL4 was observed than AFP-specific T cell responses. The IFN-γ ELISPOT assay characterized distinct profiles of tumour-antigen-specific T cell responses in HCC patients. CTA- and SALL4-specific T cell responses may be important for controlling HCC in the early stage, whereas AFP-specific T cell responses might be a signature of malignant tumour status in the advanced stage. The application of immunotherapy at an early stage of HCC development should be considered.
Introduction
Hepatocellular carcinoma (HCC) is the fourth most common cause of cancer-related death and ranks sixth in incidence worldwide [1]. The incidence of HCC is particularly serious in China, and over 50 % of global newly diagnosed liver cancer cases and liver cancerrelated deaths occur in China [2]. Therefore, there is an urgent need for effective HCC therapies, including those targeting antigens expressed by HCC as a result of tumour occurrence.
Host immunosurveillance, which plays an important role in tumorigenesis by eliminating tumour cells and suppressing tumour growth, was proposed by Paul Ehrlich a century ago [3,4]. Several studies have shown that the immune system plays an important role in the occurrence and development of HCC [5,6]. The function of the immune system changes during the development of HCC. Cytotoxic T lymphocytes, which target HCC tumour cells, are especially important regulators of tumour progression and protect HCC patients [7,8]. Recently, immune checkpoint inhibitor-based immunotherapy for HCC [9,10] has not only provided additional evidence supporting the role of the immune system in controlling HCC progression but also revealed that our understanding of the T cell immune response to HCC is insufficient, especially in terms of diverse T cell immunity in different stages of HCC.
The tumour antigens recognized by T cells have not been well characterized and may be immunogenic neoantigens that have not yet been identified in HCC. However, several cancer testis antigens (CTAs) whose expression is limited to cancer cells and reproductive tissues and is not found in adult somatic tissue can spontaneously induce a T cell response in HCC patients. CTAs comprise a range of self-derived proteins, such as melanoma-associated antigen A1 (MAGE-A1), MAGE-A3, New York esophageal squamous cell carcinoma antigen 1 (NY-ESO-1), and synovial sarcoma X break point gene 2 (SSX2), that can become immunogenic in HCC either by mutation or aberrant expression. These are currently popular and widely investigated CTAs in the field of HCC; however, the data on their involvement in HCC are insufficient [11][12][13]. In addition to CTAs, tumour-associated antigens (TAA) are also enriched (but not specific) in cancer cells [14]. Sal-like protein 4 (SALL4) is a type of TAA; although SALL4 is not expressed in the majority of normal human tissues, it is expressed in human embryonic stem cells, testes and ovaries, is highly expressed in HCC and is associated with aggressive HCC [15]. Alpha-fetoprotein (AFP) is derived from embryonic endoderm tissue cells. The content of AFP is high during foetal development and gradually decreases to the level observed in adults after birth. The majority of HCC patients have high levels of this antigen. Other malignant tumours of the stomach and pancreas are also often accompanied by a small number of increased AFPs [16]. Targeting CTA and/or TAA using vaccination strategies has been suggested because of their frequent expression in a large proportion of HCC cells (i.e.: MAGE-A1 and A3: > 50%; NY-ESO-1 > 30%, and SSX2 > 70%) [11,17,18].
Very few researchers have performed detailed and combined analyses of important tumour antigen-specific T cell responses and their associations with different HCC statuses. To address this issue, 98 individuals, including healthy controls (HCs) and those with different stages of HCC, liver cirrhosis (LC), or chronic hepatitis B virus infection (CHB), were enrolled. Overlapping peptides were synthesized to perform comprehensive T cell response analysis, covering CTAs (MAGE-A1, MAGE-A3, NY-ESO-1, SSX2) and TAAs (SALL4 and AFP). The goal of this study was to further clarify the diverse characteristics of tumour antigen-specific T cells among different HCCs.
Patients and samples
In total, 98 individuals from Beijing YouAn Hospital were recruited, including 58 HCC, 15 LC, 15 CHB, and 10 HC individuals. The inclusion criteria were as follows: 1) diagnosis of HCC; 2) age between 18 and 75 years; and 3) patients who were unsuitable for or unwilling to receive surgery and were assessed as able to tolerate transcatheter arterial chemoembolization (TACE) or/and radiofrequency ablation (RFA) as a palliative or curative therapy. The exclusion criteria were as follows: 1) other malignancies; 2) severe coagulation disorders; 3) secondary liver cancer; 4) other immune-related diseases; and 5) any immunotherapy. The diagnostic criteria of HCC were applied according to the European Association for the Study of the Liver-European Organization for Research and Treatment of Cancer Clinical Practice Guidelines: Management of Hepatocellular Carcinoma [19], and HCC was classified based on the Barcelona Clinic Liver Cancer (BCLC) staging system [20]. At our interventional therapy centre, TACE combined with ablation therapy is the best option among the available interventional treatment strategies and is more effective than TACE or ablation treatment alone [21,22]. We have performed many studies, and this strategy has become the standard therapy applied by our team [23][24][25][26]. Forty-one of the 58 HCC patients who were evaluated as suitable for TACE combined with RFA therapy received curative treatments and were followed up with every 3 months for 1 year. All 41 HCC patients underwent dynamic contrast enhancement CT scans to evaluate recurrence [27]. In addition, the diagnosis of LC and CHB was made according to previously reported guidelines [28,29]. The study, conforming to the tenets of the 1990 Declaration of Helsinki, was approved by the Institutional Review Board of Beijing YouAn Hospital. Written informed consent was obtained from all candidates. Ten millilitres of blood from each patient was collected, and PBMCs were isolated by density gradient centrifugation.
Synthetic peptides for T-cell analysis
A total of 334 overlapping peptides (18-mers overlapping by 10 amino acids) spanning the complete amino acid sequence of SALL4, MAGE-A1, MAGE-A3, NY-ESO-1, SSX2 and AFP were utilized. Their purities were determined to be > 90% by analytical high-performance liquid chromatography. Peptides were dissolved in dimethylsulfoxide (Sigma, Haverhill, Suffolk, UK) and diluted with RPMI 1640 before being combined into nine pools with 23-45 peptides per pool (Table S1).
Human IFN-γ ELISPOT assay As described previously [30], a total of 250,000 PBMCs with 8 μg/mL peptide per well containing RPMI 1640 medium with 10% FCS were used in a standard human IFN-γ ELISPOT assay. In brief, assays were carried out in 96-well MultiScreen filter plates (Millipore) coated with 15 mg/mL anti-IFN-γ mAb (1-DIK; Mabtech). Phytohaemagglutinin (10 μg/mL) was used as a positive control. Plates were incubated for 16-18 h. The plate was washed 5 times, and biotin-conjugated anti-human IFN-γ Ab (Mabtech, Nacka, Sweden) was added and reacted for 2 h. After washing the plate 5 times, streptavidin-ALP (Mabtech, Nacka, Sweden) was added and reacted for 1 h. Finally, newly prepared NBT/BCIP solution (Bio-Rad, Hercules, CA) was added for colour development after washing. The reaction was stopped by washing with distilled water, and the plate was dried at room temperature. Spot enumeration was performed with a CTL ELISPOT reader system (Cellular Technology Ltd., S6 Universal, America). To quantify antigenspecific responses, mean spots of the negative control were subtracted from the reaction wells, and the results were expressed as spot-forming units (SFUs) per 10 6 PBMCs. Responses were regarded as positive if the results were at least three times the mean of the negative control wells and above 25 SFUs/10 6 PBMCs. If background wells were 25 SFUs/10 6 PBMCs or positive control wells were negative, the results were excluded from further analysis.
Generation of tumour antigen specific T-cell lines
According to the IFN-γ ELISPOT results and the remaining samples, PBMCs from 5 HCC subjects were stimulated with the corresponding responsive antigen. Overlapping peptides were added to 200,000 cells for stimulation for 1 h and then the cells were grown in 96-well plates. Short-term T cell lines were grown for 10 days in AIM-V + 10% human AB serum (Invitrogen, Carlsbad, CA) supplemented with 100 μg/mL (final concentration) interleukin (IL)-2 (R&D Systems, Minneapolis, MN). In total, 19 antigen-specific T cell lines were generated.
Statistical analysis
Continuous variables are expressed as the mean ± standard deviation (SD). Statistical analysis of the data was performed using the χ 2 test for constituent ratio analysis. Two-tailed Student's t tests were used to compare parametric continuous data, and the Mann-Whitney U test was used when data were not normally distributed. Statistical significance was set at P < 0.05. Analyses were performed with SPSS software v25 (IBM, New York, USA), and graphs were constructed with GraphPad Prism 8.0 (GraphPad software Inc).
CTA and TAA-specific T cell responses were detected in HCC patients
A total of 98 individuals were enrolled in the study: 58 had HCC, 15 had LC, 15 had CHB, and the remaining 10 were HCs. All HCC patients were classified according to the BCLC staging system. The tumour burden was significantly different among patients with different stages of HCC. Patients with early-stage (BCLC-0/A) HCC were more likely than patients with advanced-stage (BCLC-B/C) HCC to have solitary tumour lesions (84% vs. 39.39%, P = 0.001), small tumour volumes (76% vs. 30.3%, P = 0.001), and no vascular invasion/metastasis (100% vs. 54.55%, P < 0.0001). AFP and protein induced by vitamin K absence or antagonist-II (PIVKA-II), two HCC biomarkers, were analysed, and the PIVKA-II value of early-stage patients was lower than that in advancedstage patients [84.50 (23.00, 374.25) mAU/mL vs. 512 (60.50, 4051.50) mAU/mL, P = 0.005] ( Table 1).
CTA-and TAA-specific T cell responses were detected by IFN-γ ELISPOT assays in all 98 individuals to evaluate the comprehensive T cell response and its specificity. In total, 67.24% (39/58) of HCC patients responded to at least one CTA or TAA. In contrast, no positive response was found in any individual in any control group (LC, CHB, and HC) (Fig. S1). The difference between HCC and all control individuals was significant (P < 0.0001) (Fig. 1
Distinct profiles of CTA-and TAA-specific T cell responses in patients with different HCC clinical characteristics
The analysis of the distribution of T cell responses revealed distinct patterns of CTA-and TAA-specific T cell responses among patients with different stages of HCC. Among patients with early-stage HCC, the strongest response was against SALL4 (66.88 ± 22.23 SFUs/10 6 cells), followed by MAGE-A3 (32.16 ± 14.95 SFUs/10 6 cells), NY-ESO-1 (22.21 ± 10.32 SFUs/10 6 cells), MAGE-A1(18.21 ± 6.79 SFUs/10 6 cells) and SSX2 (12.84 ± 6.38 SFUs/10 6 cells), while the AFPspecific T cell response was relatively low (only 18.88 ± 10.01 SFUs/10 6 cells). The difference between the SALL4-specific and AFP-specific T cell responses in these patients was significant (P = 0.0173) ( Fig. 2A). Among patients with advanced-stage disease, the AFP-specific T cell response was the strongest (146.18 ± 58.75 SFUs/10 6 cells), and much higher than SALL4 (19.15 ± 10.87 SFUs/10 6 cells, P = 0.0157), MAGE-A3 (16 ± 5.63 SFUs/10 6 cells, P = 0.0504), MAGE-A1 (4.93 ± 2.42 SFUs/10 6 cells, P = 0.0015), (Fig. 2B). Interestingly, the magnitude and frequency of CTAand SALL4-specific T cell responses was decreased in patients with advanced-stage HCC compared to those with early-stage HCC, although not all differences reached statistical significance. On the other hand, the AFP-specific T cell response showed a trend of increasing with the progression of HCC (Fig. 2C). Further analysis of the T cell response profile was therefore performed comparing early and advanced stages of HCC. The results showed that the combination of a positive CTA-and SALL4-specific T cell response and a negative AFP-specific T cell response was present in the majority of early-stage HCC patients. In contrast, a combination of a negative CTA-and SALL4-specific T cell CTAs & SALL4-specific T cell response was defined as "+" if one antigen was recognized by PBMCs at least. Non-parametric test was used to compare the response magnitude of patients in different stages, and chi-square test was used to compare the recognition frequencies response with a positive AFP-specific T cell response was observed in most patients with advanced-stage HCC (Fig. 2D). This result highlighted the potential protective role of CTA-and SALL4-specific T cell responses in HCC patients with early-stage disease [13].
Further analyses of the correlations between the breadth of the T cell response, i.e., the number of CTA, SALL4 and AFP being recognized, and the tumour stage and other tumour characteristics were performed according to the distinct profile of the tumour antigenspecific T cell response. The association between tumour stage and the breadth of CTA-and the SALL4specific T cell response was analyzed, and more CTAand SALL4-specific T cells were detectable in patients with early-stage HCC than in those with advanced-stage HCC (P = 0.0104, Fig. 3A). In addition, a comparison of the frequency of CTA-specific T cells between different tumour burdens showed that the breadth of recognition in patients with a low tumour burden (< 15 cm 3 ) was broader than that in patients with a high tumour burden (> 15 cm 3 , P = 0.0017, Fig. 3B).
Forty-one out of 58 HCC patients received TACE combined with RFA therapy. All 41 patients were evaluated as having complete ablation of their tumours on dynamic contrast enhancement CT scans after the operation. Among them, 17 patients did not experience recurrence of HCC during the one-year follow-up. The correlation between CTA-and SALL4-specific T cell responses and recurrence was analyzed, and a significantly stronger T cell response to tumour antigens was found in patients without recurrence than in those who experienced relapse (177.65 ± 61.21 SFUs/10 6 cells vs. 49.33 ± 17.60 SFUs/10 6 cells, P = 0.0403, Fig. 4). Patients with early-stage HCC had T cells that could recognize several tumour antigens, and these specific T cells represent protective immunity.
Moreover, SALL4-and NY-ESO-1-specific T cell responses in patients with a low tumour burden (53.86 ± 18.3 SFUs/10 6 cells and 16.88 ± 8.03 SFUs/10 6 cells, respectively) were significantly higher than those in patients with a high tumour burden (25.59 ± 14.46 SFUs/ 10 6 cells and 0, respectively; P = 0.0102 and P = 0.0461, respectively; Fig. 5A). SALL4-and NY-ESO-1-specific T cell responses were significantly higher in patients with solitary lesions (61 ± 19.1 SFUs/10 6 cells and 15.07 ± 7.22 SFUs/10 6 cells, respectively) than in those with multiple lesions (9.58 ± 3.54 SFUs/10 6 cells, P = 0.0425 and 0, P = 0.0464, respectively; Fig. 5B). These differences were also present between patients with and without vascular invasion/metastasis (VI/M) (Fig. 5C). In contrast, the AFPspecific T cell response showed correlations with different tumour characteristics opposite to those of the CTA/SALL4-specific T cell response. In particular, in relation to the number of tumour lesions, the AFP-specific T cell response was significantly stronger in patients with multiple lesions (201.75 ± 78.65 SFUs/10 6 cells) than that in patients with solitary tumour lesions (13.35 ± 4.42 SFUs/10 6 cells, P = 0.0072; Fig. 5D). In addition, a clearly higher AFP-specific T cell response was found in the advanced stage (146.18 ± 58.75 SFUs/ 10 6 cells) compared with the early stage (18.88 ± 10.01 SFUs/10 6 cells; P = 0.035; Fig. 5D), which indicated that the AFP-specific T cell response might be a signature of tumour status in the advanced stage. However, we did not find any association between the quantity of serum AFP and the magnitude of the AFP-specific T cell response (data not shown). The AFP-specific T cell immune response could possibly be used as a supplement to serum AFP detection. Patients whose AFP-specific T cell responses are positive or high should be followed up with closely to allow for the early detection of tumours.
Functional analysis of T cells with the progression of HCC
The restriction of CTA-and TAA-specific T cell responses to CD4+ or CD8+ T cells was analyzed using in vitro culture of T-cell lines stimulated by peptide pools, and the cytokine secretion and degranulation marker CD107a of the MAGE-A3-specific T cell line were evaluated by A higher proportion of CD4+ T reactive cells was observed, especially CTA/SALL4-specific responses (Fig. 6A). An example ICS is shown in Fig. 6B (The gating strategy was show in Fig. S2). Interestingly, the MAGE-A3-specific T cell population, especially the CD8+ T cell population, was dominated by cells that presented only one functional molecule evaluated (Fig. 6B/C). We also analyzed the T cell phenotype of patients with the AFP-specific T cell response (Fig. 6D/E) (The gating strategy was show in Fig. S3); again, AFP-specific T cell responses were dominated by cells producing one functional molecule evaluated in both the CD4+ and CD8+ T cell populations. Overall, we observed that CTA/SALL4-specific T cell responses are dominated by CD4+ rather than CD8+ T cell responses, with approximately 30% of these cells presenting more than one functional molecule evaluated; in contrast, very few AFP-specific CD4+ T cells present more than one of the functional molecules (5%).
Discussion
In this study, we found CTA-and TAA-specific T cell responses only in patients with HCC and not in those with LC or CHB. Furthermore, CTA-and TAA-specific T cell responses were detected in 67.24% of HCC patients, which is higher than the rate of serum AFP positivity among patients with HCC [31]. Importantly, we found different hierarchies of CTA-and TAAspecific T cell responses at different stages of HCC. The SALL4-specific T cell response was the strongest, followed by MAGE-A3, NY-ESO-1, MAGE-A1 and SSX2, in early-stage HCC patients, whereas the AFPspecific T cell response was the highest in advanced HCC patients. Two opposite correlations between T cell response and the progression of HCC were identified. This phenomenon indicates the divergence of tumour antigens and that changes in the predominant tumour antigen-specific T cell response in patients with different HCC stages are common. Strong and broad CTA-& SALL4-specific, but not AFP-specific, T cell responses were observed in patients with HCC that was early-stage or less aggressive. A strong relationship between the CTA-and SALL4-specific T cell response and earlystage HCC was identified, suggesting a potential protective role of this T cell response in the partial control of cancer development. Moreover, the association between a high CTA-and SALL4-specific T cell response and a low relapse rate of HCC at the 1-year follow-up further supports the potential protective role of the CTA-and SALL4-specific T cell response in early-stage HCC. Studies on SALL4 showed that the expression of SALL4 was correlated with the malignancy of HCC and suggested a poor prognosis among HCC patients [32,33]. In this study, patients with early-stage HCC had a stronger SALL4-specific T cell immune response than those with advanced-stage HCC. When HCC progresses to a certain extent, tumour cells expressing SALL4 escape recognition and killing by T cells, and the T cell repertoire changes dramatically at different stages of HCC. Indeed, further study of the relationship between SALL4 expression and the specific T cell response should be performed.
These data, in line with the specific expression of CTA and SALL4 but not AFP in HCC but not normal liver, indicate the potentially important role of the CTA-and SALL4-specific T cell response in the early stage of HCC and in the control of HCC recurrence. Our study further confirmed that the expression of CTA and SALL4 in HCC tumours might promote antitumour immune surveillance and facilitate postoperative recovery [18]. Although CTA DNA is found in the late stage of HCC [34], there is a complex series of processes from gene expression to functional antigen production that can elicit an appropriate protective immune response. For example, the activity of the antigen, the immunogenicity, the expression of MHC molecule, the affinity between the MHC molecule and the antigen, and the ability of the T cell to recognize the antigen and to play a protection role against the antigen-expressing tumour cells. Recently, immunotherapy for HCC [9,10] has not only provided more powerful evidence to support the role of immune system in controlling HCC progression but has also indicated that our understanding of T cell immune responses to HCC is insufficient, especially our understanding of diverse T cell immune statuses in different stages of HCC. Furthermore, a successfully generated MAGE-A3-specific short-term T cell line showed that specific cytokine-secreting T cells were restricted by CD4+ T cells, not CD8+ T cells. These CD8+ T cells could proliferate but remained impaired with respect to functions and were therefore undetectable by intracellular cytokine staining. Functional T cells can coexist with tumours with persisting antigens if the expression level of tumour antigens or the frequency of the T cells encountering tumour antigens is low [35,36]. The study of Junliang Fu et al. demonstrated that CD4+ cytotoxic T cells correlated with the survival outcomes of HCC patients [37]. Additionally, further research is needed to reveal the true nature of tumour antigen-specific T cells.
It is known that approximately 50% of HCCs secrete AFP [38,39], which is not only an oncofoetal antigen and diagnostic marker for liver cancer [40] but also an independent risk predictor associated with pathological grade, progression, and survival outcome [41]. In this study, the AFP-specific T cell response was not only found to be specific to HCC patients but was also highly distributed in patients with advanced-stage HCC, which implied that the AFP-specific T cell response might be a signature of tumour status in the advanced stage. The AFP-specific T cell response is common in patients with advanced-stage disease, which indicates the interaction between the protective role of the host T cell immune response to control the progression of HCC and the various mutations or antigenic drift from tumour cells to escape immune killing, immunoediting and immune surveillance [5,42,43]. ICS with a short-term T cell line showed that the AFP-specific T cell response was predominantly restricted by CD8+ T cells, which, as responsive T cells, is a signature of malignant tumour status. We observed that most somatic mutations were tolerated and accumulated neutrally, confirming that mutations generating neoantigens with high immunogenicity are rare in HCC or were already immuneeliminated [42]. The cell-mediated cytotoxicity of specific T cells is impaired early in the cells' fate [36] in the presence of persistent antigens. Similar to intertumoral heterogeneity, the tumour-antigen-specific T cell response also displayed distinctions among patients, possibly due to different immunogenic stimuli or levels of immune escape. Of course, whether the function of AFP-specific T lymphocytes can be efficiently activated in vivo to target AFPexpressing tumour cells needs to be further explored.
Accordingly, we propose that tumour cells mutate and escape killing by immune cells during the progression of HCC. As tumour cells that express antigens and are recognized and killed by immune cells can be eliminated due to a lack of survival advantages, different antigen recognition spectra appear between patients with HCC in the early and advanced stages. The T-cell immune responses in earlystage HCC patients show diversity, while those with advanced-stage HCC predominantly display AFP-specific T-cell immune responses and few other T-cells that cannot recognize tumour cells as a result of tumour evolution. Survival is achieved only by avoiding recognition and killing by immune cells. In the advanced stage of HCC, AFP-specific T cells become dominant and the main driving force of antitumour immunity. However, the survival ability of tumour cells at this stage has exceeded the killing ability of this antitumour immunity, and solitary AFP-specific T cells cannot control tumour progression. "While the priest climbs a post, the devil climbs ten." As the main driver of the immune response, AFP-specific T cells are insufficient to control and eliminate the continued growth of the malignant cells, and the ultimate result is tumour progression.
Conclusions
The results of this study indicated that the CTA-and TAA-specific T cell response was only present in patients with HCC. Different hierarchies of CTA-and TAA-specific T cell responses were found at different stages of HCC. The SALL4-specific T cell response was the strongest response, followed by MAGE-A3, NY-ESO-1, MAGE-A1 and SSX2, in patients with earlystage HCC, whereas the AFP-specific T cell response was the highest in patients with advanced-stage HCC. Furthermore, strong and broad CTA-& SALL4-specific, but not AFP-specific, T cell responses were observed in patients with HCC that was early-stage, less aggressive or with a low relapse rate at the 1-year follow-up. The application of immunotherapy in the early stage of HCC may benefit patients more. | 5,931.8 | 2021-09-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
On Thermal-Pulse-Driven Plasma Flows in Coronal Funnels as Observed by the Hinode/EUV Imaging Spectrometer (EIS)
Using one-arcsecond-slit-scan observations from the Hinode/EUVImagingSpectrometer (EIS) on 5 February 2007, we find the plasma outflows in the open and expanding coronal funnels at the eastern boundary of AR 10940. The Doppler-velocity map of Fe xii 195.120 Å shows the diffuse closed-loop system to be mostly red-shifted. The open arches (funnels) at the eastern boundary of AR exhibit blue-shifts with a maximum speed of about 10 – 15 km s−1. This implies outflowing plasma through these magnetic structures. In support of these observations, we perform a 2D numerical simulation of the expanding coronal funnels by solving the set of ideal MHD equations in appropriate VAL-III C initial temperature conditions using the FLASH code. We implement a rarefied and hotter region at the footpoint of the model funnel, which results in the evolution of slow plasma perturbations propagating outward in the form of plasma flows. We conclude that the heating, which may result from magnetic reconnection, can trigger the observed plasma outflows in such coronal funnels. This can transport mass into the higher corona, giving rise to the formation of the nascent solar wind.
Introduction
The solar wind is the supersonic outflow of fully ionized gas from the solar corona streaming along the magnetic-field lines. It is well established that the polar coronal hole is the source of fast solar wind (Hassler et al., 1999;Wilhelm et al., 2000;Tu et al., 2005), while the slow solar wind originates from the boundary of active regions and along the streamers in the equatorial corona (Habbal et al., 1997;Sakao, Kano, and Narukage, 2007). Hassler et al. (1999) have found that the plasma can be supplied from the chromospheric heights in the network boundaries to the solar wind in polar coronal holes. More precise estimation of the formation of nascent wind in coronal funnels between 5 -20 Mm in coronal holes has been carried out by Tu et al. (2005). The bases of the polar coronal holes are mostly the sources of fast solar wind. It has also been suggested that plasma outflows observed at the edges of active regions are the source of the slow solar wind. Active-region arches, which can extend outward as the rays in the outer corona, may channel it (Slemzin et al., 2013). In addition to the large-scale origin, it has also been found that the small-scale outflows at the coronal hole boundaries (CHBs) can serve as the source of the slow solar wind (Subramanian, Madjarska, and Doyle, 2010). Recently, Yang et al. (2013) have presented a numerical model to describe the process of magnetic reconnection between moving magnetic features (MMFs) and the pre-existing ambient magnetic field that drives an anemone jet with inverted y-shape base and associated plasma blobs. They have found that an increase in the thermal pressure at the base of the jet is also driven by the reconnection, which induces a train of slow-mode shocks propagating upward resulting in plasma upflows. Their findings contribute to the formation of jets and small-scale flows in the quiet-Sun corona where MMFs undergo into the low atmospheric reconnection.
Two outstanding issues, however, remain unsettled: i) what are the drivers of these winds in the outer corona? and ii) what are the source regions and what drivers enable the mass supply to the lower solar atmosphere (chromosphere-TR, and inner corona)? There exist several studies advocating the role of Alfvén waves as a possible answer to the first question. The ion-cyclotron waves at kinetic scales are visualized as one of the possible candidates to provide momentum and heat the outer coronal winds both in theory and observations (e.g. Ofman and Davila, 1995;Tu and Marsch, 1997;Tu et al., 2005;Suzuki and Inutsuka, 2005;Dwivedi and Srivastava, 2006;Jian et al., 2009, and references cited therein). The second question is crucial at present. The consensus of solar-wind research, however, is unclear as to the origin of the mass supply to the supersonic wind. Tian et al. (2010) have found upflows in the open-field lines of coronal holes starting in the solar transition region and interpreted this as an evidence of the fast solar wind in the polar coronal holes. As far as the slow solar wind is concerned, the outflows at the boundaries of active regions can contribute at larger spatio-temporal scales to the mass supply as an expansion of the loops lying over these active regions (Harra et al., 2008). It has been shown recently that collimated jet eruptions can also contribute to the formation of the solar wind . The magnetic-field topology of structures such as jets (e.g. spray surges) may not contribute to the solar wind (Uddin et al., 2012). The contributions from the confined ejecta in the solarwind formation depend mainly on the local magnetic-field topology and plasma conditions. The question remains on whether the mass supply to the slow solar wind comes from the lower atmosphere expanding along the curved coronal fields. The question is what are the potential physical drivers? It has been found that the open-field lines at the boundary of active regions reconnect periodically with closed-field lines to guide the plasma motion in the form of solar wind (Harra et al., 2008). Similar examples are reported at the boundary of the coronal holes at small spatial scales as the source of slow solar wind (Subramanian, Madjarska, and Doyle, 2010). Therefore, an alternative option may be magnetic reconnection as a potential mechanism in the formation of slow solar wind. Apart from magnetic reconnection, the wave-heating scenario can shed light on the slow solar-wind source regions. Schmidt and Ofman (2011) have found that the energy stored in the slow magnetoacoustic waves propagating toward the higher atmosphere within expanding loops. This may be a potential candidate for the acceleration and formation of the slow solar wind. Wave activity at the bases of the fast (polar coronal holes) and slow (equatorial corona) solar wind can be important to power the energized plasma at greater heights up to the corona where it can be triggered supersonically in interplanetary space (e.g. Harrison, Hood, and Pike, 2002;Dwivedi andSrivastava 2006, 2008;De Pontieu et al., 2007;McIntosh et al., 2011;McIntosh, 2012, and references cited therein). Thus, there seems to be compelling evidence for the role of magnetic reconnection and wave phenomena in the solar-wind source region.
In the present article, we report the evidence of the outflowing magnetic arches acting as coronal funnels at the eastern boundary of an AR 10940 loop system observed on 5 February 2007. These coronal funnels seem to open up in the higher atmosphere to transport the outflowing plasma. Their footpoints are rooted in the boundary of the active region. They are the most likely heated regions that result in the activation of the outflowing plasma. We present a 2D MHD simulation of the open and expanding-funnel-type model atmosphere in which a rarefied and hot region is implemented near the footpoint that exhibits plasma perturbations similar to the observations. In Section 2 we summarize the observational results. We present the numerical model in Section 3. Discussion and conclusions are given in the last section.
Observational Results
The active region AR 10940 was observed by a one-arcsecond-slit scan of EUV Imaging Spectrometer (EIS: Culhane et al., 2006) onboard the Hinode spacecraft on 5 February 2007. The EIS is an imaging spectrometer of which 40-and 266-arcsecond slots are used for the image analyses using the light curves and emission per pixel. The one-and twoarcsecond slits are utilized for spectral and Doppler analyses using spectral-line profiles. EIS observes in two modes: i) scan; ii) sit-n-stare. The EIS observes high-resolution spectra in two wavelength intervals: 170 -211 Å and 246 -292 Å using, respectively, its Short-Wavelength (SW) and Long-Wavelength (LW) CCDs. The spectral resolution of the EIS is 0.0223 Å per pixel. The analyzed observations were taken on 5 February 2007 and the data-set contains spectra of various lines formed at chromospheric, transition region (TR), and coronal temperatures. The scanning observation started at 12:14:12 UT and ended at 13:31:21 UT on 5 February 2007. The scanning steps were without any offset in the region containing a coronal active region and its eastern boundary with open and expanding magnetic arches where we are interested in the present investigation. This provides us with an opportunity to understand the plasma activity along the open-field regions at the eastern boundary of the active region in between the diffused loop systems that reach up to a higher height in the corona (Figure 1). We refer to these structures as "coronal funnels". Such flow regions are heated at their base exhibiting outflows. To understand the approximate magnetic-field geometry associated with AR 10940 and its surrounding region, we perform a potential field source surface (PFSS) extrapolation. Fan lines over the Solar and Heliospheric Observatory (SOHO)/Michelson Doppler Imager (MDI) observations are shown in Figure 2. It is clear that the core loops of the active region are bipolar and connect the central opposite magnetic polarities (white loops shown by the black arrow). Another set of closed-field lines connect to the east-side weak negative polarity and the central positive In order to obtain the velocity structures in such coronal funnels, we select the strongest EIS line [Fe XII 195.12 Å] in our study. We aim for the understanding of the impulsively generated plasma outflows in such funnels and associated physical processes. The slit started the scanning of the polar coronal hole with (X cen , Y cen ) ≈ (799.087 arcsecond, −19.185333 arcsecond). The observational windows acquired on the CCDs are 128 pixels high along the slit, and 111 pixels wide in the horizontal direction where spectra have a spectral resolution of 0.0222 Å per pixel.
We apply the standard EIS data-reduction procedures and calibration files/routines to the data obtained from the EUV-telescope, which is the raw (zeroth-level) data. The subroutines are found in the sswidl software tree under the IDL environment (www.darts.isas.jaxa.jp/pub/ solar/ssw/hinode/eis/). These standard subroutines reduce the dark-current subtraction, cosmic-ray removal, flat-field correction, hot pixels, warm pixels, and bad/missing pixels. The data are saved in the level-1 data file, and the associated errors are saved in the error file. We choose the clean and strong line Fe XII 195.12 Å to examine the spatial variations of the intensity and Doppler velocity in the observations scanned over AR 10940 on 5 February 2007. We co-align the Fe XII 195.12 Å map with respect to the long-wavelength CCD observations of He II 256.86 Å by considering it as a reference image and by estimating its offset. The orbital and slit-tilt are also corrected in the data using the standard method described in the EIS-software notes. We perform a double Gaussian fit for the removal of the weak blending of Fe XII 195.18 Å line as per the procedure described by Young et al. (2009), which is also outlined in the EIS Software Note 17. We constrain the weakly blended line 195.18 Å to have the same width as the line at 195.12 Å and to fix an offset of +0.06 Å relative to it. It is to be noted that we perform the fitting on 2 × 2 pixel 2 binned data to enhance the signal-to-noise ratio and also to be obtain a reasonable fit. Figure 1 displays the intensity (left) and Doppler-velocity (right) maps of the observed AR and open arches at its eastern boundary. The intensity map shows that the AR is made up of the core diffused loop systems lying at the lower heights in the plane perpendicular to the line-of-sight. In actuality, however, they may be tilted. These low-lying core loops exhibit high emission and downflows. At the eastern boundary, we have identified the open and expanding arches that exhibit coronal-funnel behavior to transport mass and energy into the higher corona. These expanding funnels may be the part of more quiescent and largescale loop systems opening higher up in the corona. It is seen in the enlarged Doppler map (see Figure 3) that four such regions are at least identified as blue-shifted and outflowing regions. As already noted, these could be the legs of higher and large-scale loops, which form the building blocks of the coronal funnels at relatively lower heights of the corona. They exhibit the slow and subsonic plasma outflows mostly spreading along their field lines with a maximum speed of about 10 -15 km s −1 . Figure 3 shows the selection of path 1 -4 (top panel). The first two paths are drawn over the large-scale open and magnetic-field arches (lines) extending towards QS -CH region in the North-East of this AR (not shown here). These two regions serve as the open and expanding funnel regions. Paths 3 and 4 are drawn to the expanding and blue-shifted regions that can also serve as coronal funnels. However, they are actually the lower parts of some core-loop systems. On this account, we have selected two coronal funnels (1 and 2), while the others (3 and 4) serve as funnels but are probably associated at larger heights with the curved-loop magnetic field. The middle and bottom panels (left to right), respectively, show the variation of projected L.O.S. Doppler velocity along paths 1 -4 with height in these funnels. For funnels 1 and 2, the footpoints exhibit stronger outflows with a maximum speed of 15 km s −1 . At heights beyond 10 Mm, the outflow speed weakens in these funnels. This signifies the start of the outflows due to heating near the footpoints of these funnels; the outflows weaken with the height as we move away from the heating source. The other two funnels (3 and 4) are likely the legs of the diffused core loops, exhibiting the outflows at certain lower heights with a maximum speed of 10 km s −1 . This indicates different locations of the heating. It is also noted that the outflows diminish at lower heights compared to the corresponding ones in funnels 1 and 2 in the form of open arches. This is because of their association with the curved loops at greater heights where plasma is trapped and flows downward in order to maintain a new equilibrium. It is also noted that the flow structure is more gentle in the open funnels 1 and 2. However, it is greater at the footpoint and decreases with height. In funnels 3 and 4, which are the lower parts of curved loops, the generation of the flow is rather impulsive. The outflow starts at a certain height above the loop's footpoint. It increases up to a certain distance and decreases thereafter. This shows that impulsive heating is at work near the loop footpoint, which causes enhanced upflows up to a certain distance. The downflowing plasma from the upper part of the loop may counteract with the upflows. The red-shifted apex of the core-loop system is clearly evident in Figures 1 and 3. Funnels 3 and 4 are the lower parts of this loop system.
In the next section, we outline the details of the 2D numerical simulation of such observed expanding coronal funnels and their plasma dynamics. We solve a set of ideal MHD equations in the appropriate VAL-III C initial temperature conditions and model atmosphere using the FLASH code.
Model Equations
Our model system starts from a gravitationally stratified solar atmosphere, which can be described by the ideal two-dimensional (2D) MHD equations: Here , V , B, p = k B m T , T , γ = 5/3, g = (0, 0, −g) with its value g = 274 m s −2 , m, and k B are the mass density, flow velocity, magnetic field, gas pressure, temperature, adiabatic index, gravitational acceleration, mean particle mass, and Boltzmann's constant, respectively. It should be noted that we do not consider radiative cooling and thermal conduction in our present model for the sake of simplicity. We simulate only the dynamics of the plasma outflows to compare them with that of the observed coronal funnels.
Initial Conditions
We assume that the solar atmosphere is at rest [V e = 0] in equilibrium with a current-free magnetic field [∇ × B e = 0]. As a result, the magnetic field is force-free: The divergence-free constraint is satisfied automatically if the magnetic field is specified by the magnetic-flux function [A(x, y)] as B e = ∇ × (Aẑ).
Here the subscript "e" corresponds to equilibrium quantities. We set a curved magnetic field by choosing where S is the strength of a magnetic pole and (a, b) = (0, −10) Mm is its position. For such a choice of (a, b), the magnetic-field vectors are weakly curved to represent the expanding coronal funnels. As a result of Equation (5) the pressure gradient is balanced by the gravity force: −∇p e + e g = 0.
With the ideal-gas law and the y-component of Equation (8) where is the pressure scale height, and p 0 denotes the gas pressure at the reference level that we choose in the solar corona at y r = 10 Mm. We take an equilibrium temperature profile [T e (y)] (see top-left panel in Figure 4) for the solar atmosphere derived from the VAL-C atmospheric model of Vernazza, Avrett, and Loeser (1981) that is smoothly extended to the solar corona.
The transition region is located at y ≈ 2.7 Mm. There is an extended solar corona above the transition region, with the temperature minimum level located at y ≈ 0.9 Mm below the solar chromosphere. Having specified T e (y) (see Figure 4, left-top panel) with Equation (9), we obtain the corresponding gas-pressure and mass-density profiles.
Numerical Scheme and Computational Grid
Equations (1) -(4) are solved numerically using the FLASH code (Lee and Deane, 2009;Lee, 2013). This code uses a second-order, unsplit Godunov solver (Godunov, 1959) with various slope limiters and Riemann solvers, as well as Adaptive Mesh Refinement (AMR) Lee 2011, 2012). We set the simulation box of (−7, 7) Mm × (2.6, 51.6) Mm along the x-and y-directions ( Figure 5). The lower boundary, where we apply the heating pulse, is set at x 0 = 0 Mm, y 0 = 2.6 Mm. We set and hold fixed all of the plasma quantities at all boundaries of the simulation region to their equilibrium values, which are given by Equations (6) and (9). As the magnetic field is curved and plasma is stratified gravitationally, open boundaries would not be a perfect choice with regard to the fixed boundary conditions. We have verified this experimentally. As the FLASH code uses a third-order accurate Godunov-type method, a characteristic method is already built into this procedure. However, the Riemann problem at the boundaries corresponds to open boundaries, resulting in numerically induced reflections from these boundaries. On the other hand, the fixed boundaries lead to negligibly small numerical reflections. Therefore, we adopt this approach in the numerical simulations. In addition, we modify the equilibrium mass density and gas pressure at the bottom boundary as where and Here A and A p are the amplitudes of the perturbations, (x 0 , y 0 ) are their initial position, and w x , w y denote its widths along the x-and y-directions, respectively. The symbol τ denotes the growth time of these perturbations. We set and hold fixed A = 7, A p = 2, x 0 = 0 Mm, y 0 = 2.6 Mm, w x = 3 Mm, w y = 0.5 Mm, and τ = 10 seconds. These fixed boundary conditions perform much better than transparent boundaries, leading to only negligibly small numerical reflections of the wave signals from these boundaries.
In our modeling, we use an AMR grid with a minimum (maximum) level of refinement set to 3 (8) (see Figure 5). The refinement strategy is based on controlling numerical errors in mass density, which results in an excellent resolution of steep spatial profiles and greatly reduces numerical diffusion at these locations.
A standard procedure to check the magnitude of a numerically induced flow is to run the code for the equilibrium alone, without implementing any perturbation. We have verified that the numerically induced flow is of the order of 2 km s −1 in the solar corona, and the transition region is not affected by the spatial resolution which is about 20 km around the transition region. This resolution is much smaller than the width of the transition region (≈ 200 km) and the pressure scale height near the bottom, which is about 0.5 -1 Mm. We have also performed grid-convergence studies by increasing the spatial resolution by a factor of two at the transition region. As the numerical results have been found essentially similar results for the finer and coarser grid, we have limited our analysis to the latter.
Results of the Numerical Simulation and Comparison with Observations
The results of the numerical simulations and their comparison with the observed plasma outflows are summarized as follows: Figure 6 (first column) displays the velocity vectors plotted over the total velocity maps for t = 50, 100,150,200,250,300,350,400,450 seconds. The diverging magneticfield lines of the model funnel is also over-plotted in these snapshots. It is clear from the t = 50 seconds snapshot that the implemented heating at the footpoint, just below the transition region, results in the alteration of the ambient plasma pressure, and the plasma starts flowing upward with a typical average speed of 40 km s −1 near the footpoint at this time. The outflow velocity is maximum near the heated region at the footpoint of the model funnel. This is higher than the observed line-of-sight outflow velocities (10 -16 km s −1 ) in various coronal funnels above their footpoints (see Figure 3). The simulated outflow velocities greatly depend on the initial conditions of the model funnel and magnitude of the heating pulse. The observed trend of the flows in funnels 1 and 2 (see Figure 3) and the same derived from the model match each other (see Figures 6 and 7). At higher altitudes, above the heating location in these funnels, the outflow weakens which qualitatively matches with the results obtained from our model (see Figures 6 and 7). Our model fits better the plasmaflow conditions in the open coronal funnels (e.g. funnels 1 and 2) where the gentle flows start and expand due to their footpoint heating. The flow becomes steady at each height after 300 seconds in the model funnel when the heated plasma reaches the height of 10 Mm. This also indicates that the heating pulse is at work for a certain duration, and after some time the generated flows reach a new equilibrium. The physical scenario is in agreement with the open coronal funnels 1 and 2 (see Figures 3 and 7). Comparison of these two scenarios supports the outflow of the plasma due to heating. The heating causes thermal flows guided in the magnetic-field lines of the open coronal funnels, while it subsides away from the source. The physical behavior of the velocity field and its spatial distribution as observed by Hinode/EIS along each funnel are consistent with the velocity field at a particular temporal span of the numerical simulation when the outflowing plasma rises to a maximum height of ≈ 10 Mm. For the funnels that are the lower parts of the curved-loop system (funnels 3 and 4; Figure 3), the energy release might occur where the blue-shift is enhanced at a certain height above the footpoint impulsively. Therefore, the locations in those observed funnels are identical to the energy-release site of the modeled funnel where outflows start as a result of heating. On the contrary, it seems that impulsive heating in funnels 3 and 4 causes the increment in the outflows up to a certain height and thereafter is balanced by the downflowing and trapped plasma from the loop apex. Figure 6, second and third columns, respectively, display the temperature and density maps for t = 50, 100, 200, and 300 seconds. It is clear that during the heating, the plasma maintained at inner coronal/TR temperatures (sub-MK and 1.0 MK; the hot plasma envelops the cool one) and with somewhat higher density, starts flowing from the footpoint of the model funnel toward greater heights. The plasma is denser near the footpoint. It flows along the open funnels toward higher heights. Thermal perturbations created the slow and subsonic flows of the plasma; therefore, it reaches up to only the lower heights. This is a situation similar to that of the transition region within the funnel being pushed upward due to the evolution of the thermal perturbation underneath. Sub-mega-Kelvin and denser plasma from lower solar atmosphere move up and is enveloped by the hot coronal plasma. The denser plasma maintained at the TR and inner coronal temperatures is visible up to a height of 10 Mm in the model funnel. This is consistent with the observations. Only the lower parts of the coronal arches are associated with enhanced fluxes and thus with higher densities, while their higher parts are less intense and denser regions (see Figure 1, left panel).
Discussion and Conclusions
We have presented observations of outflowing coronal arches (coronal funnels 1 and 2) lying at the eastern boundary of the AR 10940 observed on 5 February 2007, 12:15 -13:31 UT. The scanning observations show that these arches open up in the nearby quiet-Sun corona and exhibit plasma outflows maintained at coronal/TR temperature of around 1.0 MK. They serve as the expanding coronal funnels from which the plasma moves to the higher corona and serves as a source of the slow solar wind (Harra et al., 2008). The plasma outflows may be generated in such open-field regions because of the low atmospheric reconnections between the open-and closed-field lines (Subramanian, Madjarska, and Doyle, 2010). Episodic heating mechanisms are now well observed and interpreted as the drivers of outflowing plasma in the curved coronal loops (Klimchuk, 2006;Del Zanna, 2008;Brooks and Warren, 2009). The steady heating may generate the hot-plasma upflows in the corona (Tripathi et al., 2012).
There has so far been little effort made to model the plasma outflows in the corona and that too in an entirely different context of the large-scale evolution of coronal magnetic field. Murray et al. (2010) have modeled in 3D the origin and driver of coronal outflows, and found that outflows are the result of the expansion of an active region during its development. Harra et al. (2012) have modeled the AR coronal outflows as a consequence of compression during the creation and annihilation of the magnetic-field lines.
However, our objective of the present investigation is to implement and test our 2D numerical-simulation model of the localized expanding coronal funnels with Hinode/EIS observations of the outflowing open field arches (i.e. funnels) at the boundary of AR 10940. We solve the set of ideal MHD equations in the appropriate VAL-III C initial temperature conditions and model atmosphere using the FLASH code. The key ingredient of our model is the implementation of realistic ambient solar atmosphere, e.g. realistic temperature, presence of the TR, implementation of expanding coronal fields, stratified atmosphere in the initial equilibrium, which significantly affect such plasma dynamics. We have implemented a rarefied and hotter region at the footpoint of the model funnel that triggers the evolution of the slow and subsonic plasma perturbations propagating outward in the form of plasma flows similar to the observed dynamics. We implemented the localized heating below the transition region at the footpoint of the funnel. The plasma is considered rarefied in the horizontal direction, mimicking the structured, open, and expanding magnetic funnels. The heated plasma evolves and exhibits the plasma perturbations as has been observed as outlined in the Hinode/EIS observations. The outflows start at the base of the funnel that further weakens with the height, which is suggestive of the plasma dynamics due to heating near the footpoint. A similar physical scenario is observed in the selected outflowing magnetic arches mimicking the coronal funnels 1 and 2 at the eastern boundary of AR 10940.
We conclude that the implemented episodic heating can excite plasma outflows in expanding coronal funnels residing at boundary of solar active regions as well as in the quiet-Sun. These slow, subsonic plasma outflows may not be launched at higher altitudes in the corona. We have examined the presence of hot and denser plasma up to 10 Mm height as triggered by thermal perturbations in our model. Observations also show that the plasma only rises significantly in the lower parts of these funnels up to inner coronal heights with significant intensity and velocity distributions. However, even if such flows reach up to the inner coronal heights of 10 Mm in these funnels, they may also contribute to the mass supply to the slow solar wind. Therefore, our model and observations invoke the dynamics of the plasma in the localized coronal funnels, which may be important candidates to transport mass and energy into the inner corona. However, more observational studies should be performed with new spectroscopic data (e.g. Interface Region Imaging Spectrograph (IRIS)) to compare with our proposed 2D model, specifically under the physical conditions of different types of localized flux tubes in the solar atmosphere, which can serve as plasma outflowing regions due to episodic heating. | 6,812.4 | 2014-08-21T00:00:00.000 | [
"Physics"
] |
Concentration of Immunoglobulins in Microfiltration Permeates of Skim Milk: Impact of Transmembrane Pressure and Temperature on the IgG Transmission Using Different Ceramic Membrane Types and Pore Sizes
The use of bioactive bovine milk immunoglobulins (Ig) has been found to be an alternative treatment for certain human gastrointestinal diseases. Some methodologies have been developed with bovine colostrum. These are considered in laboratory scale and are bound to high cost and limited availability of the raw material. The main challenge remains in obtaining high amounts of active IgG from an available source as mature cow milk by the means of industrial processes. Microfiltration (MF) was chosen as a process variant, which enables a gentle and effective concentration of the Ig fractions (ca. 0.06% in raw milk) while reducing casein and lactose at the same time. Different microfiltration membranes (ceramic standard and gradient), pore sizes (0.14–0.8 µm), transmembrane pressures (0.5–2.5 bar), and temperatures (10, 50 °C) were investigated. The transmission of immunoglobulin G (IgG) and casein during the filtration of raw skim milk (<0.1% fat) was evaluated during batch filtration using a single channel pilot plant. The transmission levels of IgG (~160 kDa) were measured to be at the same level as the reference major whey protein β-Lg (~18 kDa) at all evaluated pore sizes and process parameters despite the large difference in molecular mass of both fractions. Ceramic gradient membranes with a pore sizes of 0.14 µm showed IgG-transmission rates between 45% to 65% while reducing the casein fraction below 1% in the permeates. Contrary to the expectations, a lower pore size of 0.14 µm yielded fluxes up to 35% higher than 0.2 µm MF membranes. It was found that low transmembrane pressures benefit the Ig transmission. Upscaling the presented results to a continuous MF membrane process offers new possibilities for the production of immunoglobulin enriched supplements with well-known processing equipment for large scale milk protein fractionation.
Introduction
The natural function of the major bovine immunoglobulin G (IgG) class is to agglutinate pathogens and enhance the complement system. It, therefore, protects the calf from pathogens [1]. This function renders immunoglobulins (Ig) an interesting material for various food supplements and pharmaceutical applications. Such products are used to support the immune system, respectively, which may be used to treat gastrointestinal human diseases after immunizing the cow with the relevant antigen [1,2]. Typically, these products are obtained from colostrum and milk, which is secreted by the cow directly with a pilot cream separator type MM 1254 D (GEA Westfalia Group GmbH, Oelde, Germany) at 8.000 g and 50 • C to obtain a fat content below 0.1% in the skim milk. The skim milk was either cooled to 4 • C with a plate heat exchanger and processed the following day or was subjected directly to the same microfiltration unit as described by Kühnl et al. [17]. The microfiltration was operated in a crossflow mode with a closed loop using ceramic ISOFLUX ® and standard (TAMI Industries, Nyons, France) membranes (support and selective layer titanium dioxide) both with nominal cutoffs of 0.14 µm, 0.2 µm, 0.45 µm, or 0.8 µm. The membrane featured an area of 0.35 m 2 , 1178 mm length, 25 mm diameter, 23 channels, and 3.5 mm equivalent hydraulic diameter per channel.
Operating Conditions during Microfiltration
Prior to filtration, the membrane was cleaned and conditioned with 0.5% Ultrasil 14 caustic solution (Ecolab, Düsseldorf, Germany) at 60 • C for 20 min. The preheated membrane was extensively flushed with softened water and cooled down to a filtration temperature of 50 • C ± 1 • C or 10 • C ± 1 • C The filtrations were performed in batch mode for 90 min with 30 L of skim milk of which the first 5 L of retentate were discarded to avoid a dilution by rinsing water. Sampling of retentate and permeate from the feed tank was performed, respectively. The permeate pipeline was performed after 0 min, 10 min, 20 min, 30 min, 45 min, 60 min, and 90 min. Samples were analyzed for β-Lg, α-La, BSA and IgG, and casein, as described in Section 2.3. At the same time points, the permeate flux (J) was measured gravimetrically and converted to liters by dividing the values by the density at the equivalent temperature. The MF-process was operated at a transmembrane pressure ∆p TM = 1 bar. If not stated otherwise, wall shear stress τ w = 150 Pa ± 5 Pa (equivalent to 2 bar pressure drop). The ∆p ™ was calculated according to Equation (1) with p inlet and p outlet as the pressures at the membrane inlet and outlet, respectively. The pressure on the permeate side p permeate was adjusted with a manual valve to achieve low ∆p TM . P = p inlet + p outlet 2 − p permeate (1) The transmission (P) was calculated, according to Equation (2), as the ratio of the concentration of the equivalent component in the permeate (C per ) and retentate (C ret ).
After an experiment, the plant was flushed with water and cleaned in a three-step procedure (alkaline-acid-alkaline), which was described by Kühnl et al. [17]. The cleaning efficacy was validated by measuring the water flux at fixed conditions.
Measurement of Immunoglobulin G with Reversed-Phase High-Performance Liquid Chromatography
The quantitative determination of native IgG, β-Lg, α-La, and blood serum albumin (BSA) in one run was conducted by reversed-phase high-performance liquid chromatography (RP-HPLC) with an elution profile of acetonitrile and water in the mobile phase, according to Kessler and Beyer (1991) [24] with the gradient and column modifications described by Toro-Sierra et al. (2013) [25]. Calibration was carried out using purified bovine IgG, β-Lg, α-La, and blood serum albumin (BSA) (Sigma, Steinheim, Germany). Before analysis, the pH of all samples was adjusted to 4.6 with 1 M HCl in order to precipitate casein micelles and denatured whey proteins. The supernatant of all samples was filtered through 0.45 µm syringe filters and injected into the RP-HPLC.
IgG Transmission as a Function of Time and Pore Size
The aim of the microfiltration when used for casein fractionation is to completely retain the casein micelles with a spread in diameter between 50-400 nm (d 50,3 = 180-200 nm) while maximizing the transmission of the whey proteins ranging from 14 kDa for α-La up to IgG (146-163 kDa) or more for other immunoglobulin fractions. In this way, the ratio of casein to whey proteins is modified from 80:20 (w/w) as present in raw bovine milk towards higher casein concentrations e.g., 90:10 or 95:5 in the retentate. In milk protein fractionation by microfiltration, the retained proteins form a deposit layer, which influences the filtration performance in terms of flux and protein transmission [11,16]. Therefore, an accurate prediction of the Ig transmission by comparing the nominal pore size of the MF membrane and the molecular size is not possible. Figure 1 shows the IgG (A) and β-Lg (B) as well as casein transmission as a function of time. The flux is presented in Figure 2A,B, which summarizes the data as a function of the different pore sizes at steady state conditions.
IgG Transmission as a Function of Time and Pore Size
The aim of the microfiltration when used for casein fractionation is to completely retain the casein micelles with a spread in diameter between 50-400 nm (d50,3 = 180-200 nm) while maximizing the transmission of the whey proteins ranging from 14 kDa for α-La up to IgG (146-163 kDa) or more for other immunoglobulin fractions. In this way, the ratio of casein to whey proteins is modified from 80:20 (w/w) as present in raw bovine milk towards higher casein concentrations e.g., 90:10 or 95:5 in the retentate. In milk protein fractionation by microfiltration, the retained proteins form a deposit layer, which influences the filtration performance in terms of flux and protein transmission [11,16]. Therefore, an accurate prediction of the Ig transmission by comparing the nominal pore size of the MF membrane and the molecular size is not possible. Figure 1 shows the IgG (A) and β-Lg (B) as well as casein transmission as a function of time. The flux is presented in Figure 2A,B, which summarizes the data as a function of the different pore sizes at steady state conditions. The transmission levels of IgG ( Figure 1A) were found to be at the same level as the reference major whey protein β-Lg (18 kDa) ( Figure 1B) at all evaluated pore sizes despite the large difference in the molecular size of both fractions. A reason for the comparatively good transmission behavior of IgG might be its flexibility and shape. IgG consists of two heavy chains and two light chains, which are connected via disulphide bonds (Scheme 1). The constant region (CH1) and variable region (VH) of the heavy chain form together with the constant (CL) and variable region (VL) of the light chain in the fragment antigen-binding (Fab). The constant regions CH2 and CH3 of the two heavy chains form the fragment crystallizable (Fc) [3,28,29]. The Fab and Fc region are connected by a hinge region with no secondary structure (Scheme 1A). From a functional perspective, this gives the molecule a high flexibility so that the two Fab regions can interact with different epitopes of antigens. From a filtration perspective, this flexibility might contribute to the fact that the proteins can pass the deposit layer and membrane-like globular proteins, which is 10 times lower in molecular sizes such as β-Lg. Scheme 1. Three dimensional (A) and schematic (B) structure of IgG. Heavy chains (green), light chains (blue), disulphide bonds (yellow), constant (CH1), and variable region (VH) of the heavy chain, the constant (CL) and variable region (VL) of the light chain, fragment antigen binding (Fab), and the fragment crystallizable (Fc). Three-dimensional structure generated with UCSF Chimera [30] using pdb code 1HZH from the RCSB Protein Data Bank [31]. Schematic structure drawn with AutoCAD and modified from References [28,29].
There was an initial flux decline for membranes with nominal pore sizes of 0.14 µm, 0.2 µm, and 0.8 µm (Figure 2A), which can be attributed to initial fouling during the transition from water to milk [32]. After approximately 30 min, the steady state was reached. Strikingly, the flux increased by ca. 30% when changing to the membrane with the pore size of 0.2 µm to 0.14 µm. This lower flux of the 0.2 µm membrane might be attributed to pore blocking. The casein micelles feature a particle size distribution range of 50-400 nm with a mean of 180-200 nm. This means the average particle size of The transmission levels of IgG ( Figure 1A) were found to be at the same level as the reference major whey protein β-Lg (18 kDa) ( Figure 1B) at all evaluated pore sizes despite the large difference in the molecular size of both fractions. A reason for the comparatively good transmission behavior of IgG might be its flexibility and shape. IgG consists of two heavy chains and two light chains, which are connected via disulphide bonds (Scheme 1). The constant region (CH1) and variable region (VH) of the heavy chain form together with the constant (CL) and variable region (VL) of the light chain in the fragment antigen-binding (Fab). The constant regions CH2 and CH3 of the two heavy chains form the fragment crystallizable (Fc) [3,28,29]. The Fab and Fc region are connected by a hinge region with no secondary structure (Scheme 1A). From a functional perspective, this gives the molecule a high flexibility so that the two Fab regions can interact with different epitopes of antigens. From a filtration perspective, this flexibility might contribute to the fact that the proteins can pass the deposit layer and membrane-like globular proteins, which is 10 times lower in molecular sizes such as β-Lg. The transmission levels of IgG ( Figure 1A) were found to be at the same level as the reference major whey protein β-Lg (18 kDa) ( Figure 1B) at all evaluated pore sizes despite the large difference in the molecular size of both fractions. A reason for the comparatively good transmission behavior of IgG might be its flexibility and shape. IgG consists of two heavy chains and two light chains, which are connected via disulphide bonds (Scheme 1). The constant region (CH1) and variable region (VH) of the heavy chain form together with the constant (CL) and variable region (VL) of the light chain in the fragment antigen-binding (Fab). The constant regions CH2 and CH3 of the two heavy chains form the fragment crystallizable (Fc) [3,28,29]. The Fab and Fc region are connected by a hinge region with no secondary structure (Scheme 1A). From a functional perspective, this gives the molecule a high flexibility so that the two Fab regions can interact with different epitopes of antigens. From a filtration perspective, this flexibility might contribute to the fact that the proteins can pass the deposit layer and membrane-like globular proteins, which is 10 times lower in molecular sizes such as β-Lg. Scheme 1. Three dimensional (A) and schematic (B) structure of IgG. Heavy chains (green), light chains (blue), disulphide bonds (yellow), constant (CH1), and variable region (VH) of the heavy chain, the constant (CL) and variable region (VL) of the light chain, fragment antigen binding (Fab), and the fragment crystallizable (Fc). Three-dimensional structure generated with UCSF Chimera [30] using pdb code 1HZH from the RCSB Protein Data Bank [31]. Schematic structure drawn with AutoCAD and modified from References [28,29].
There was an initial flux decline for membranes with nominal pore sizes of 0.14 µm, 0.2 µm, and 0.8 µm (Figure 2A), which can be attributed to initial fouling during the transition from water to milk [32]. After approximately 30 min, the steady state was reached. Strikingly, the flux increased by ca. 30% when changing to the membrane with the pore size of 0.2 µm to 0.14 µm. This lower flux of the 0.2 µm membrane might be attributed to pore blocking. The casein micelles feature a particle size distribution range of 50-400 nm with a mean of 180-200 nm. This means the average particle size of Scheme 1. Three dimensional (A) and schematic (B) structure of IgG. Heavy chains (green), light chains (blue), disulphide bonds (yellow), constant (CH1), and variable region (VH) of the heavy chain, the constant (CL) and variable region (VL) of the light chain, fragment antigen binding (Fab), and the fragment crystallizable (Fc). Three-dimensional structure generated with UCSF Chimera [30] using pdb code 1HZH from the RCSB Protein Data Bank [31]. Schematic structure drawn with AutoCAD and modified from References [28,29].
There was an initial flux decline for membranes with nominal pore sizes of 0.14 µm, 0.2 µm, and 0.8 µm (Figure 2A), which can be attributed to initial fouling during the transition from water to milk [32]. After approximately 30 min, the steady state was reached. Strikingly, the flux increased by ca. 30% when changing to the membrane with the pore size of 0.2 µm to 0.14 µm. This lower flux of the 0.2 µm membrane might be attributed to pore blocking. The casein micelles feature a particle size distribution range of 50-400 nm with a mean of 180-200 nm. This means the average particle size of the casein micelles matches the nominal pore size of the 0.2 µm membrane, which might lead to the blocking of the membrane pores rather than forming a deposit layer on top of the membrane, which can be assumed for the 0.14 µm membrane [33]. Initial pore blocking might also be responsible for the initial low flux for the 0.45 µm membrane, which increased by 30% along the filtration time.
The ceramic gradient membranes with pore sizes of 0.14 µm and 0.2 µm showed IgG-transmission rates over 60% while reducing the casein fraction below 1% (0.14 µm) and 4 % (0.2 µm) in the permeates, respectively ( Figure 2B). For the casein fraction, similar amounts of 1.4% were reported using a pore size of 0.2 µm in the uniform transmembrane pressure mode (UTP) while, at a pore size of 0.1 µm, the casein transmission was negligible [21]. Even though 4% does not seem to be much, due to the different ratio of casein micelles of (80%) to whey proteins (20%) in skim milk, already minor protein transmission led to relatively large impurities in the MF-permeate. Even though typical pore sizes of ceramic membranes for casein whey protein fractionation are between 0.05 and 0.2 µm [34], larger pore sizes were tested because the IgG transmission was expected to be at a lower level at the smaller pore sizes. However, even though the IgG transmission was above 90% for the pore sizes of 0.45 µm and 0.8 µm, the high casein transmission of >75% renders these membranes unsuitable for this fractionation task.
Impact of Membrane Type on IgG Transmission and Flux
The used gradient membranes (Figures 1 and 2) possess a thickness gradient of the selective layer along the membrane, which creates a longitudinal decreasing the membrane resistance (Scheme 2). This means that the higher ∆p TM at the membrane inlet compared to the outlet is compensated by a higher membrane resistance at the inlet and a lower resistance at the outlet, which means the permeate volumetric flow is constant along the entire membrane element. By decreasing the membrane resistance corresponding to the decreasing ∆p TM along the membrane, the same flux (isoflux) is generated with the objective to improve the overall filtration performance. However, the standard membrane (Scheme 3) does not have a resistance gradient. Therefore, there is a flux decline along the membrane. Moreover, the higher ∆p TM at the inlet provokes a more intense deposit layer formation at the inlet, which effects both the flux and protein transmission [16]. However, the retentate pressure drop along the membrane depends on the material, the length of the membrane, the hydrodynamic diameter, the density of the fluid, and the fluid viscosity. As for a given temperature, the density and the membrane characteristic are constant. The pressure drop is directly proportional to the flow velocity. Therefore, the fixed membrane resistance gradient requires a defined pressure drop or crossflow velocity, respectively. Therefore, it renders the operation of the filtration plant inflexible. Figure 3 shows the protein transmission and flux as a function of the pore size at steady state conditions using standard membranes. The transmission of IgG was 28% for the 0.14 µm pore size compared to 60% by using the equivalent gradient membrane at the same process conditions. The reason for the inferior performance could be an inhomogeneous deposit (as indicated in Scheme 3). Even though the initial overall permeate fluxes with both membranes were comparable (138 L m −2 h −1 versus 131 L m −2 h −1 , see Supplementary Material Figure S2), during the transition from water to milk, the flux at the membrane inlet should be higher for the standard membrane due to the missing additional membrane resistance R m . Therefore, the convective transport of particles to the membrane surface is higher, which in turn leads to a more intense deposit layer formation especially at the inlet of the membrane. This negatively effects the overall protein transmission. The flux was similar for both membranes. For the other pore sizes, the casein transmission was too high in order to allow the fractionation of IgG and casein. In conclusion, the separation of IgG and casein with a pore size of 0.14 µm using a conventional membrane is feasible. Nevertheless, due to their better performance, the gradient membranes were used for further experiments. formation at the inlet, which effects both the flux and protein transmission [16]. However, the retentate pressure drop along the membrane depends on the material, the length of the membrane, the hydrodynamic diameter, the density of the fluid, and the fluid viscosity. As for a given temperature, the density and the membrane characteristic are constant. The pressure drop is directly proportional to the flow velocity. Therefore, the fixed membrane resistance gradient requires a defined pressure drop or crossflow velocity, respectively. Therefore, it renders the operation of the filtration plant inflexible. Scheme 2. Schematic description of the used gradient membrane. Concept of TAMI Industries, modified from [35]. Flux J, transmembrane pressure ΔpTM, dynamic viscosity η, membrane resistance RM, and fouling resistance RF. The indices in and out refer to the membrane inlet and membrane outlet.
Scheme 2.
Schematic description of the used gradient membrane. Concept of TAMI Industries, modified from [35]. Flux J, transmembrane pressure ∆p TM , dynamic viscosity η, membrane resistance R M , and fouling resistance R F . The indices in and out refer to the membrane inlet and membrane outlet. Figure 3 shows the protein transmission and flux as a function of the pore size at steady state conditions using standard membranes. The transmission of IgG was 28% for the 0.14 µm pore size compared to 60% by using the equivalent gradient membrane at the same process conditions. The reason for the inferior performance could be an inhomogeneous deposit (as indicated in Scheme 3). Even though the initial overall permeate fluxes with both membranes were comparable (138 L m −2 h −1 versus 131 L m −2 h −1 , see Supplementary Material Figure S2), during the transition from water to milk, the flux at the membrane inlet should be higher for the standard membrane due to the missing additional membrane resistance Rm. Therefore, the convective transport of particles to the membrane surface is higher, which in turn leads to a more intense deposit layer formation especially at the inlet of the membrane. This negatively effects the overall protein transmission. The flux was similar for both membranes. For the other pore sizes, the casein transmission was too high in order to allow the fractionation of IgG and casein. In conclusion, the separation of IgG and casein with a pore size of 0.14 µm using a conventional membrane is feasible. Nevertheless, due to their better performance, the gradient membranes were used for further experiments. Figure 3 shows the protein transmission and flux as a function of the pore size at steady state conditions using standard membranes. The transmission of IgG was 28% for the 0.14 µm pore size compared to 60% by using the equivalent gradient membrane at the same process conditions. The reason for the inferior performance could be an inhomogeneous deposit (as indicated in Scheme 3). Even though the initial overall permeate fluxes with both membranes were comparable (138 L m −2 h −1 versus 131 L m −2 h −1 , see Supplementary Material Figure S2), during the transition from water to milk, the flux at the membrane inlet should be higher for the standard membrane due to the missing additional membrane resistance Rm. Therefore, the convective transport of particles to the membrane surface is higher, which in turn leads to a more intense deposit layer formation especially at the inlet of the membrane. This negatively effects the overall protein transmission. The flux was similar for both membranes. For the other pore sizes, the casein transmission was too high in order to allow the fractionation of IgG and casein. In conclusion, the separation of IgG and casein with a pore size of 0.14 µm using a conventional membrane is feasible. Nevertheless, due to their better performance, the gradient membranes were used for further experiments.
Impact of Transmembrane Pressure on IgG Transmissionand Flux at Different Pore Sizes
It is well known that the transmembrane pressure has a major impact on the filtration performance [36]. Figure 4 shows the flux and the protein transmission as a function of the nominal
Impact of Transmembrane Pressure on IgG Transmissionand Flux at Different Pore Sizes
It is well known that the transmembrane pressure has a major impact on the filtration performance [36]. Figure 4 shows the flux and the protein transmission as a function of the nominal pore size. The flux difference between ∆p TM = 1 bar and ∆p TM = 2 bar was below 10% for the pore sizes 0.14 µm, 0.2 µm, and 0.45 µm, which indicates that the limiting flux was already reached at 1 bar of the transmembrane pressure. Even though the applied transmembrane pressure did not have a great impact on the flux, the IgG transmission decreased to 50% when increasing the pressure from ∆p TM = 1 bar to ∆p TM = 2 bar, which is in agreement with previously reported data for the major whey proteins [16,37,38]. The decrease can be attributed to the higher compactness of the deposit layer and, therefore, a reduction of the porosity of the fouling layer takes place, which in turn results in a reduction of protein transmission. pore size. The flux difference between ΔpTM = 1 bar and ΔpTM = 2 bar was below 10% for the pore sizes 0.14 µm, 0.2 µm, and 0.45 µm, which indicates that the limiting flux was already reached at 1 bar of the transmembrane pressure. Even though the applied transmembrane pressure did not have a great impact on the flux, the IgG transmission decreased to 50% when increasing the pressure from ΔpTM = 1 bar to ΔpTM = 2 bar, which is in agreement with previously reported data for the major whey proteins [16,37,38]. The decrease can be attributed to the higher compactness of the deposit layer and, therefore, a reduction of the porosity of the fouling layer takes place, which in turn results in a reduction of protein transmission. The impact of the transmembrane pressure on IgG and β-Lg transmission was studied in more detail for the chosen pore size of 0.14 µm (Figure 5). For a better comparison, the protein transmission of the second largest whey protein fraction α-La as well as blood serum albumin (BSA) as a reference for a larger whey protein was studied. The transmission of BSA was around 10% to 25% depending on the applied pressure, which is in the range of previously reported data [36]. Even though the molecular mass (66.4 kDa) [39] and mean hydrodynamic diameter (6.4 nm) [23] of BSA are lower than IgG with (146-163 kDa) [3] and 10.4 nm [23], the transmission of IgG (45-55%) was significantly higher than that of BSA (10-20%). This substantiates the hypothesis formulated in chapter 3.1 that the size must be responsible for comparatively good transmission of IgG, which should be studied in more detail. For the other whey proteins, the transmission was similar and approximately constant with 40% for β-Lg and 45-50% for α-La at different transmembrane pressure levels (1.5 bar, 2.0 bar, and 2.5 bar). For the lower applied transmembrane pressure of 1.0 bar and 0.6 bar, the transmission was higher. This again can be attributed to the deposit layer, which is less intense and compact at the lower pressure due to the lower convective transport of particles to the membrane. The impact of the transmembrane pressure on IgG and β-Lg transmission was studied in more detail for the chosen pore size of 0.14 µm (Figure 5). For a better comparison, the protein transmission of the second largest whey protein fraction α-La as well as blood serum albumin (BSA) as a reference for a larger whey protein was studied. The transmission of BSA was around 10% to 25% depending on the applied pressure, which is in the range of previously reported data [36]. Even though the molecular mass (66.4 kDa) [39] and mean hydrodynamic diameter (6.4 nm) [23] of BSA are lower than IgG with (146-163 kDa) [3] and 10.4 nm [23], the transmission of IgG (45-55%) was significantly higher than that of BSA (10-20%). This substantiates the hypothesis formulated in chapter 3.1 that the size must be responsible for comparatively good transmission of IgG, which should be studied in more detail. For the other whey proteins, the transmission was similar and approximately constant with 40% for β-Lg and 45-50% for α-La at different transmembrane pressure levels (1.5 bar, 2.0 bar, and 2.5 bar). For the lower applied transmembrane pressure of 1.0 bar and 0.6 bar, the transmission was higher. This again can be attributed to the deposit layer, which is less intense and compact at the lower pressure due to the lower convective transport of particles to the membrane.
Impact of Temperature on IgG Transmission and Flux
For microbiological reasons, milk protein fractionation with microfiltration is carried out at either cold temperatures around 10 °C [37,40], i.e., below the growth optimum of microorganism in milk or at 50-55 °C [12,37,41], i.e., above the microbiological growth optimum. Especially for ceramic membranes, higher temperatures are preferred due to the lower viscosity of the retentate and the inherent advantages during long production runs. Figure 6 shows the transmission of IgG and casein at 10 °C and 50 °C at ΔpTM = 1 bar (A), respectively. ΔpTM = 2 bar (B) as a function of the pore size. The casein transmission was similar at both temperatures. However, it should be noted that, during extended filtration times whereupon the permeate is collected and concentrated with an ultrafiltration unit, there might be an accumulation of β-casein at 10 °C in the whey [42,43]. The IgG transmission was 20% to 30% lower at 10 °C compared to 50 °C at the same operating conditions. A reason for the lower transmission could be due to the lower flow velocity in the laminar boundary layer of the membrane where fouling takes place. The thickness of this boundary can be estimated according to Equation (3) [44].
At 10 °C, the kinematic viscosity of skim milk is about two times higher (1.48 × 10 −6 m 2 s −2 versus 7.84 × 10 −7 m 2 s −2 at 50 °C), which means the laminar boundary layer is approximately two times thicker (1.02 × 10 −5 m versus 1.95 × 10 −5 m). This, in turn, leads to a less turbulent flow profile directly above the membrane, which again effects the deposit layer and protein transmission [45]. The flux was at 39 L m −1 h −1 (ΔpTM = 1 bar), respectively, and 44 L m −1 h −1 (ΔpTM = 2 bar) for the pore size of 0.14 µm. Therefore, 35% or 43%, respectively, of the flux at 50 °C can be attributed to the higher viscosity and density at the lower temperature [37,45]. In addition, the same effect of the decreasing flux with increasing pore size from 0.14 µm to 0.2 µm and 0.45 µm was observed at 10 °C. In summary, the operating temperature of 10 °C is feasible for the fractionation of IgG and casein using ceramic membranes. Even though the low temperature will extend the filtration time, respectively, in the required filtration area, it could be necessary to use a lower temperature than 50 °C over extended filtration times since bovine antibodies are known to be heat sensitive [8].
Impact of Temperature on IgG Transmission and Flux
For microbiological reasons, milk protein fractionation with microfiltration is carried out at either cold temperatures around 10 • C [37,40], i.e., below the growth optimum of microorganism in milk or at 50-55 • C [12,37,41], i.e., above the microbiological growth optimum. Especially for ceramic membranes, higher temperatures are preferred due to the lower viscosity of the retentate and the inherent advantages during long production runs. Figure 6 shows the transmission of IgG and casein at 10 • C and 50 • C at ∆p TM = 1 bar (A), respectively. ∆p TM = 2 bar (B) as a function of the pore size. The casein transmission was similar at both temperatures. However, it should be noted that, during extended filtration times whereupon the permeate is collected and concentrated with an ultrafiltration unit, there might be an accumulation of β-casein at 10 • C in the whey [42,43]. The IgG transmission was 20% to 30% lower at 10 • C compared to 50 • C at the same operating conditions. A reason for the lower transmission could be due to the lower flow velocity in the laminar boundary layer of the membrane where fouling takes place. The thickness of this boundary can be estimated according to Equation (3) [44].
At 10 • C, the kinematic viscosity of skim milk is about two times higher (1.48 × 10 −6 m 2 s −2 versus 7.84 × 10 −7 m 2 s −2 at 50 • C), which means the laminar boundary layer is approximately two times thicker (1.02 × 10 −5 m versus 1.95 × 10 −5 m). This, in turn, leads to a less turbulent flow profile directly above the membrane, which again effects the deposit layer and protein transmission [45]. The flux was at 39 L m −1 h −1 (∆p TM = 1 bar), respectively, and 44 L m −1 h −1 (∆p TM = 2 bar) for the pore size of 0.14 µm. Therefore, 35% or 43%, respectively, of the flux at 50 • C can be attributed to the higher viscosity and density at the lower temperature [37,45]. In addition, the same effect of the decreasing flux with increasing pore size from 0.14 µm to 0.2 µm and 0.45 µm was observed at 10 • C. In summary, the operating temperature of 10 • C is feasible for the fractionation of IgG and casein using ceramic membranes. Even though the low temperature will extend the filtration time, respectively, in the required filtration area, it could be necessary to use a lower temperature than 50 • C over extended filtration times since bovine antibodies are known to be heat sensitive [8].
Conclusions
The aim of this work was to determine whether it is possible to use ceramic microfiltration for the fractionation of casein micelles and bovine IgG and, if so, which pore size and operating conditions should be used. Membranes with pore sizes of 0.2 µm, 0.45 µm, and 0.8 µm, irrespective of membrane type (standard or gradient) and operating parameters such as transmembrane pressure (1 bar or 2 bar) and temperature (10 °C or 50 °C), were inapplicable for the fractionation of IgG and casein due to casein transmission. In contrast, it was found that ceramic gradient membranes with a pore sizes of 0.14 µm featured IgG-transmission rates between 45% to 62% depending on the process conditions while reducing the casein fraction below 1% in the permeates. The transmission of IgG taken as a model protein for immunological active proteins in whey was observed to be similar than that of the much smaller globular major whey protein fractions β-Lg and α-La while the intermediatesized and ellipsoidal BSA showed a lower transmission. Transferring the observed results to a continuous process offers new possibilities for production of immunoglobulin enriched supplements using industrial filtration equipment on a large scale.
Conclusions
The aim of this work was to determine whether it is possible to use ceramic microfiltration for the fractionation of casein micelles and bovine IgG and, if so, which pore size and operating conditions should be used. Membranes with pore sizes of 0.2 µm, 0.45 µm, and 0.8 µm, irrespective of membrane type (standard or gradient) and operating parameters such as transmembrane pressure (1 bar or 2 bar) and temperature (10 • C or 50 • C), were inapplicable for the fractionation of IgG and casein due to casein transmission. In contrast, it was found that ceramic gradient membranes with a pore sizes of 0.14 µm featured IgG-transmission rates between 45% to 62% depending on the process conditions while reducing the casein fraction below 1% in the permeates. The transmission of IgG taken as a model protein for immunological active proteins in whey was observed to be similar than that of the much smaller globular major whey protein fractions β-Lg and α-La while the intermediate-sized and ellipsoidal BSA showed a lower transmission. Transferring the observed results to a continuous process offers new possibilities for production of immunoglobulin enriched supplements using industrial filtration equipment on a large scale. | 8,359.4 | 2018-06-28T00:00:00.000 | [
"Materials Science"
] |
Parameter-Adaptive Event-Triggered Sliding Mode Control for a Mobile Robot
Mobile robots have played a vital role in the transportation industries, service robotics, and autonomous vehicles over the past decades. The development of robust tracking controllers has made mobile robots a powerful tool that can replace humans in industrial work. However, most of the traditional controller updates are time-based and triggered at every predetermined time interval, which requires high communication bandwidth. Therefore, an event-triggered control scheme is essential to release the redundant data transmission. This paper presents a novel parameter-adaptive event-trigger sliding mode to control a two-wheeled mobile robot. The adaptive control scheme ensures that the mobile robot system can be controlled accurately without the knowledge of physical parameters. Meanwhile, the event trigger sliding approach guarantees the system robustness and reduces resource usage. A simulation in MATLAB and an experiment are carried out to validate the efficiency of the proposed controller.
Introduction
In recent years, the rapid development of autonomous systems has given rise to the application of mobile robots in many field applications. The massive surge in the implementation of mobile robots is thanks to mobility, reliability, and the fact that they can operate without any assistance from human operators. The term "mobile robots" is not exclusively limited to any particular locomotion designs; there are numerous types of mobile robots on the ground, underwater, or in the air [1][2][3][4][5]. However, one of the most extensively researched mobile robots is the two-wheeled differential drive robot, which is in demand for many tasks such as transportation in factory storage, restaurants, and hospitals; and in servicing and guiding people in museums and airports [6][7][8][9].
In order for mobile robots to operate automatically and reliably, it is necessary to derive a robust controller for trajectory tracking. There have been many works on the tracking control for two-wheeled mobile robots throughout the years, in both kinematic and dynamic aspects [10][11][12][13][14][15]. In order to address the problem of external disturbances, an adaptive-based controller for the dynamic model has been proposed in some literature [16][17][18]. Imitation learning is also a viable solution to overcome the instability caused by the disturbances [19][20][21]. To further improve the control performance, Fukao et al. [22] implemented an adaptive tracking controller for both the kinematic (1) A novel event-triggered sliding mode controller for the kinematic model of a mobile robot is designed to reduce the computational work of the microcontroller. (2) An adaptive scheme is combined with the event-triggered sliding mode controller to estimate the parameters of the mobile robot. This ensures that the control perfor- mance of the proposed control is kept robust in the presence of uncertainties and unknown parameters. (3) Simulation and experiment results in different scenarios are given to validate the proposed controller.
Parameter-Adaptive Sliding Mode Controller
Consider a well-known kinematic model for a mobile robot, as shown in Figure 1a [45]. The mobile robot is required to track the trajectory generated by a virtual robot, as shown in Figure 1b.
where ω r and ω l are the angular velocities of the right and left wheel; r is the wheel radius; W is the distance between two wheels; x c , y c , and θ c are the positions of the mobile robot. (2) An adaptive scheme is combined with the event-triggered sliding mode controller to estimate the parameters of the mobile robot. This ensures that the control performance of the proposed control is kept robust in the presence of uncertainties and unknown parameters. (3) Simulation and experiment results in different scenarios are given to validate the proposed controller.
Parameter-Adaptive Sliding Mode Controller
Consider a well-known kinematic model for a mobile robot, as shown in Figure 1a [45]. The mobile robot is required to track the trajectory generated by a virtual robot, as shown in Figure 1b.
where and are the angular velocities of the right and left wheel; is the wheel radius; is the distance between two wheels; , , and are the positions of the mobile robot.
The relationship between , and , is expressed in the following form: where and are the linear and angular velocities of the mobile robot, respectively. By defining the tracking errors between the current pose and the reference trajectories as ≜ − , ≜ − , ≜ − in the global coordinates where , and are the positions of the reference mobile robot, the tracking errors with respect to the robot coordinate are given as follows: where , , are the tracking errors between the robot pose and the reference trajectories in the robot coordinate.
The dynamic of the tracking errors is given as follows: The relationship between v, ω and ω r , ω l is expressed in the following form: where v and ω are the linear and angular velocities of the mobile robot, respectively. By defining the tracking errors between the current pose and the reference trajectories as e x x c − x r , e y y c − y r , e θ θ c − θ r in the global coordinates where x r , y r and θ r are the positions of the reference mobile robot, the tracking errors with respect to the robot coordinate are given as follows: where e 1 , e 2 , e 3 are the tracking errors between the robot pose and the reference trajectories in the robot coordinate. The dynamic of the tracking errors is given as follows: where v r and ω r are, respectively, the reference linear and angular velocities. Equation (4) can be rewritten in an affine form as follows: . e = f(e) + g(e)u v (5) where Equation (2) shows that the parameters of r and W are required to be known accurately for achieving the control input u = ω r ω l T of the mobile robot. However, these parameters are difficult to measure accurately in practice. In reality, continuous usage of the mobile robot can lead to the wearing of the wheel-tire; this might cause the wheel radius to change. Alternatively, the measurement may not be accurate due to the lack of a precision measurement instrument. Therefore, an adaptive-based controller is required to estimate r and W. By defining a 1 r , b W 2r and using Equation (2), the control input u can be calculated based on the estimation of a and b as follows: whereâ andb are, respectively, the estimated parameters of a, b, and the terms a From Equation (6), the control input u v for Equation (5) can be calculated as follows: Substituting Equation (6) into Equation (4) and writing it in the affine form, it can be obtained as . e = f(e) + (g(e) + g(e))u v (8) where In order to guarantee the convergence of all the tracking errors e = e 1 e 2 e 3 T , let us define the sliding surface for the system of Equation (8) as follows: Taking the first time derivative of the sliding surface, it yields: . s = f(e) + (g(e) + g * (e))u v (10) where g(e) Assume that there exists a positive constant µ so that it satisfies b ≥ µ > 0. Theorem 1. Consider the system in Equation (8). Supposing that Assumption 1 is satisfied, the following control input and parameter adaptation law will lead to the convergence of the sliding surfaces s to zero while the parameter estimation's errors a and b remain within the boundaries. .â where k, γ 1 , γ 2 are positive constants, g(e) −1 is the pseudo-inverse of the non-square matrix g(e), and the function f b is defined as follows: Proof of Theorem 1. Consider a candidate Lyapunov function as follows: Taking the first-time derivative of the Lyapunov function of Equation (14) and using Equation (10) yields . V e, a, b = s T f(e) + (g(e) + g * (e))u v + 1 γ 1 a a . a + 1 ω(s 1 (e 2 − e 1 ) + s 2 (e 2 + e 1 ) + s 3 (e 2 + e 1 − 1)) Substituting Equations (11) and (12) into Equation (15) yields: Case 2: Ifb ≤ µ referring to Equation (13) and Assumption 1, it can be deduced that b − b ≤b − µ ≤ 0; therefore one can conclude that . V ≤ 0. The second order derivative of the Lyapunov function is shown as: ..
Consequently, it can be seen that s, a, b are bounded in both cases. Equations (17) and (18) also show that .. V is bounded. By applying Barbalat's lemma, it can be proved that e → 0 as t → ∞ .
Parameter-Adaptive Event-Triggered Sliding Mode Controller
In this section, an event-triggered controller is proposed to drive the dynamic error of Equation (8) to zero. The event-triggered control strategy only generates the control input if a certain condition is satisfied, leading to a non-constant sampling period.
Consider the control input of Equation (11) expressed in the discrete time as follows: where t i is the discrete time instants with i ∈ Z + . Let us define the event-triggered error in a certain interval t ∈ [t i , t i+1 ] as ε(t) e(t) − e(t i ). If this error passes a given threshold, the control input value will be updated. Otherwise, it will remain constant during the given interval.
Assumption 3. Assume that f(e(t))
is a Lipschitz function with respect to its argument such that ||f(e(t)) − δf(e(t i )) || ≤ L ||e(t) − e(t i ) ||, where L is a positive constant. s in Equation (10) . Consequently, the control input of Equation (19) can substitute into Equation (21) to obtain: .
To analyze the system stability, the candidate Lyapunov function is taken as in Equation (14) V e, a, b = 1 2 Taking the first-time derivative of Equation (23) and utilizing Equation (22), it yields: Let us define V s T f(e(t)) − δf(e(t i )) − s T δksign(s(t i )) for further analysis. By applying Assumption 3 and the norm property, V can be rewritten as: Combining Equations (24) and (25), and utilizing Equation (12), the following expression is obtained: Hence, by using the triggering rule in Equation (20), one can ensure that . V e, a, b ≤ 0. This completes the proof of Theorem 2.
In the design of event-trigger control, there is a possibility that the Zeno phenomenon will occur, which means an infinity of events trigger in a finite-time interval [50]. In order to avoid the Zeno phenomenon, the inter-event time T i = t − t i should be proved to have a lower bound [51,52].
Simulation Scenario
In this section, simulations are carried out in two different trajectories in MATLAB to evaluate the effectiveness of the proposed controller. The simulation parameters are shown Table 1. In addition, a block diagram describing the proposed controller implemented on the mobile robot is shown in Figure 2. The simulations are carried out in two different scenarios to evaluate the performance of the proposed controller. Each scenario follows the same control structure from Figure 2; the only differences are the reference trajectory and the initial values. The bound value for is chosen as 6 < < 14.
Experimental Scenario
The experiment is performed on a commercialized mobile robot named Pioneer 3-DX (P3-DX) (Figure 3). Encoders are installed on DC motors to perceive the rotational speed of the wheels. The reference trajectories are generated by a virtual robot implemented on a personal computer (PC). The PC also implements the proposed controller by The simulations are carried out in two different scenarios to evaluate the performance of the proposed controller. Each scenario follows the same control structure from Figure 2; the only differences are the reference trajectory and the initial values. The bound value for a is chosen as 6 < a < 14.
Scenario 1
The reference trajectory in this scenario is a circle with a radius of 1m. The simulation time is 300 s. The initial positions and values for the estimated parameters of the mobile robot are x = 0.6 m and y = 0 m;â = 8 andb = 1. The controller gains are selected as k = 0.8, γ 1 = 5, γ 2 = 5.
Scenario 2
In this scenario, the reference trajectory is an ellipse with the value of the major axis a ellipse = 1 m and minor axis b ellipse = 0.9 m. The simulation time is 300 s, which means that the mobile robot moves ten laps of the ellipse, taking 30 s for each lap. The initial positions and values for the estimated parameters of the mobile robot are x = 0.4 m and y = 0 m;â = 8 andb = 3. The controller gains are chosen as k = 0.8, γ 1 = 15, γ 2 = 15.
Experimental Scenario
The experiment is performed on a commercialized mobile robot named Pioneer 3-DX (P3-DX) ( Figure 3). Encoders are installed on DC motors to perceive the rotational speed of the wheels. The reference trajectories are generated by a virtual robot implemented on a personal computer (PC). The PC also implements the proposed controller by retrieving the perception of the encoders from a microcontroller board to establish the control input, which is sent to the microcontroller board. The proposed controller's control input becomes the reference input of a low-level speed controller for DC motors, implemented on the microcontroller board and known as inner loop control, which ensures the velocities of two wheels tracking to the control input of the proposed controller. Currently, the connection between the PC and the microcontroller board is serial communication with RS232 protocol, meaning that the baud rate is limited to 115,200 kbps. Further, wireless communication using either Long Range Radio (LoRa) technology or a well-known Wi-Fi ESP8266 also limits the bandwidth. Thus, the traveling interval and the data packet size of the communication should be reduced to save bandwidth. The specifications of P3-DX are shown in Table 2, below. A brief structure diagram of the mobile robot is also shown in Figure 4. known Wi-Fi ESP8266 also limits the bandwidth. Thus, the traveling interval and the data packet size of the communication should be reduced to save bandwidth. The specifications of P3-DX are shown in Table 2, below. A brief structure diagram of the mobile robot is also shown in Figure 4.
(a) (b) known Wi-Fi ESP8266 also limits the bandwidth. Thus, the traveling interval and the data packet size of the communication should be reduced to save bandwidth. The specifications of P3-DX are shown in Table 2, below. A brief structure diagram of the mobile robot is also shown in Figure 4.
Event-Triggered Scenario
The parameters for the experiment are the same as those shown in Table 1. The mobile robot is required to move in a circular trajectory in 600 s, which means twenty laps of the circle with a radius of 0.5 m. The initial positions and estimated parameters of the mobile robot are x = 0.4 m and y = 0 m;â = 13 andb = 1. The controller gains are selected as k = 0.8, γ 1 = 2, γ 2 = 1.5.
Time-Triggered Scenario
For an intuitive comparison, an experiment utilizing a time-triggered scheme instead of the event-triggered scheme is performed. Regarding the experiment conditions, the same test conditions of the event-triggered scenario are applied, except for the initial value of the estimated parameterâ which isâ = 1 in this case.
Results and Discussion
In general, the simulation and experimental results show that the response of the mobile robot under the control of the proposed controller has tracked the reference trajectory well for all scenarios, as shown in Figure 5. However, the spending time of the mobile robot to track the reference trajectories depends on the initial positions of the mobile robot. It is noted that the trajectory tracking of the mobile robot was more difficult at the vertices of the ellipse, shown in Figure 5b, than those of the circle, as shown in Figure 5a. The explanation for this phenomenon is that the ellipse trajectory possesses abrupt changes in velocities; however, the proposed controller is derived from a kinematic model, and such a controller can only perform well for a trajectory with almost linear velocities. Regarding the experimental results, it can be seen that the event-triggered scenario (Figure 5c) has a larger error boundary and a slower convergence rate than the time-triggered scenario (Figure 5d). This deviation is attributed to the continuous update of the control signal of the time-based update as opposed to the event-based update.
From Figure 6, it can be seen that the errors of X and Y in all scenarios are bounded despite existing fluctuations. The convergence rate can also be accelerated by choosing a higher gain k. However, the chattering phenomenon will worsen over time and ultimately degrade the control performance. In Figure 6a, the error of X reaches a peak at around 28 s. The same situation can also be seen in Figure 6b,c, albeit to a lesser extent. This can be attributed to the fact that during the first lap (0-30 s), the trajectory tracking of the mobile robot is still considered to be in the transient phase where the controller is adapting to the trajectory, and overshoot may occur as a result. Additionally, the use of pseudo-inverse in the matrix calculation of Equation (11) may cause some abnormalities depending on the simulation platform in some cases. As for the event-triggered and timetriggered experiments, the chattering effect is less prominent in the latter. Furthermore, the overshoots in transient time are lower in both the X and Y errors for the time-triggered scenario than those of the event-triggered scenario.
The angular error evolutions are depicted in Figure 7. Overall, all errors converge to zero after the first 30 s. It can be seen that the overshoot in the transient time of the case shown in Figure 7a is lower than those of the cases shown in Figure 7c,d. This can be explained by the fact that the simulation did not account for the weights of the PC and battery installed on the mobile robot. This leads to an increase in the inertia of moment in the actual robot, causing large overshoot. Moreover, the choice of the initial positions of the mobile robot affects the overshoot as well as the convergence time of the response. For instance, a larger initial value ofb for the case of the ellipse trajectory, shown in Figure 7b, can cause a more considerable overshoot than that of the circular trajectory, shown in Figure 7a; however, the convergence time of the former is faster. explanation for this phenomenon is that the ellipse trajectory possesses abrupt changes in velocities; however, the proposed controller is derived from a kinematic model, and such a controller can only perform well for a trajectory with almost linear velocities. Regarding the experimental results, it can be seen that the event-triggered scenario (Figure 5c) has a larger error boundary and a slower convergence rate than the time-triggered scenario (Figure 5d). This deviation is attributed to the continuous update of the control signal of the time-based update as opposed to the event-based update. From Figure 6, it can be seen that the errors of and in all scenarios are bounded despite existing fluctuations. The convergence rate can also be accelerated by choosing a higher gain . However, the chattering phenomenon will worsen over time and ultimately degrade the control performance. In Figure 6a, the error of reaches a peak at around 28 s. The same situation can also be seen in Figure 6b,c, albeit to a lesser extent. This can be attributed to the fact that during the first lap (0-30 s), the trajectory tracking of the mobile robot is still considered to be in the transient phase where the controller is adapting to the trajectory, and overshoot may occur as a result. Additionally, the use of pseudo-inverse in the matrix calculation of Equation (11) may cause some abnormalities depending on the simulation platform in some cases. As for the event-triggered and timetriggered experiments, the chattering effect is less prominent in the latter. Furthermore, the overshoots in transient time are lower in both the and errors for the time-triggered scenario than those of the event-triggered scenario. In general, the chattering phenomenon is much more prominent for the cases shown in Figure 8a,b than for the cases shown in Figure 8c, d, due to the amplitude of the reference velocities. In order to reduce the chattering effect, a saturation function can be implemented in place of the sign function or by implementing an advanced sliding mode controller such as high-order sliding mode and super-twisting sliding mode. Again, the velocities of the mobile robot under the control of the event-triggered scheme (Figure 8c) chatter more heavily than those controlled by the time-triggered scheme (Figure 8d). One reason is that the time interval between each update of the control input in the event-triggered case is higher, which causes a more significant error between each update and amplifies the chattering effect.
of the mobile robot is still considered to be in the transient phase where the controller is adapting to the trajectory, and overshoot may occur as a result. Additionally, the use of pseudo-inverse in the matrix calculation of Equation (11) may cause some abnormalities depending on the simulation platform in some cases. As for the event-triggered and timetriggered experiments, the chattering effect is less prominent in the latter. Furthermore, the overshoots in transient time are lower in both the and errors for the time-triggered scenario than those of the event-triggered scenario. The angular error evolutions are depicted in Figure 7. Overall, all errors converge to zero after the first 30 s. It can be seen that the overshoot in the transient time of the case shown in Figure 7a is lower than those of the cases shown in Figure 7c,d. This can be explained by the fact that the simulation did not account for the weights of the PC and battery installed on the mobile robot. This leads to an increase in the inertia of moment in the actual robot, causing large overshoot. Moreover, the choice of the initial positions of the mobile robot affects the overshoot as well as the convergence time of the response. For instance, a larger initial value of for the case of the ellipse trajectory, shown in Figure 7b, can cause a more considerable overshoot than that of the circular trajectory, shown in Figure 7a; however, the convergence time of the former is faster. On the subject of estimating the parameters, the goal of the adaptive parameter scheme is to ensure that the estimated parameters are bounded and stable. In both the simulation and experimental results, the estimated parameters remained bounded, as shown in Figure 9. The responses of the parameters in the transient time are affected by the choice of the gains γ 1 and γ 2 ; instability for the estimated parameters will occur if the gains are chosen too high. explained by the fact that the simulation did not account for the weights of the PC and battery installed on the mobile robot. This leads to an increase in the inertia of moment in the actual robot, causing large overshoot. Moreover, the choice of the initial positions of the mobile robot affects the overshoot as well as the convergence time of the response. For instance, a larger initial value of for the case of the ellipse trajectory, shown in Figure 7b, can cause a more considerable overshoot than that of the circular trajectory, shown in Figure 7a; however, the convergence time of the former is faster. As for the triggering time, Figure 10 shows the time interval when the event is triggered, which is represented by pulse signals. Overall, the performances of the simulations and the experiment are nearly identical because the choice of the triggering threshold is the same. In the simulation, with the sampling time of 0.01 s, it can be calculated that the control input has to be updated 30,000 times, as shown in Figures 10a and 10b, respectively. However, the implementation of the event-triggered strategy has reduced the times needed for updating to 12,050 and 17,070 times, which is significantly lower (accounting for 40% and 56.9% of the original). For the event-triggered experimental result, the control input for the event-triggered strategy is generated and updated 41,280 times in comparison with 60,000 times of the traditional controller, accounting for 68.8% of the total update intervals. This can be considered the advantage of the event-triggered strategy because it helped save computing resources.
The inter-event time denoted in plus signs for scenario 1, scenario 2, and the experiment is also shown in Figure 11. The efficiency of saving computing resources can be investigated through the inter-event time, with a high value signifying the system is not required to update for a long time, thus reducing frequent control computation. In general, the lowest inter-event time is 0.01 s for all cases. The inter-event time can reach up to 0.06 s and 0.1 s, as shown in Figures 11a and 11b, respectively. In the event-triggered experiment shown in Figure 11c, 0.12 s is the highest inter-event time but it only occurs once, whereas the averages are 0.03 s, 0.02 s, and 0.01 s. In contrast, the inter-event time for the time-triggered experiment is always 0.01 s, as shown in Figure 11d, as the control input is updated at a fixed time interval. mented in place of the sign function or by implementing an advanced sliding mode controller such as high-order sliding mode and super-twisting sliding mode. Again, the velocities of the mobile robot under the control of the event-triggered scheme (Figure 8c) chatter more heavily than those controlled by the time-triggered scheme (Figure 8d). One reason is that the time interval between each update of the control input in the eventtriggered case is higher, which causes a more significant error between each update and amplifies the chattering effect. To improve the inter-event time and reduce the number of triggered events, we can increase the threshold value for the triggering rule at the cost of precision. If the threshold value is sufficiently large, the control signals may not be updated for a long time, leading to the system's instability as the controller cannot react quickly enough to drive the states' errors to zero. On the contrary, a lower threshold means the computations of the control signals are more frequent, thus the states' errors may not have any difficulty in converging to zero.
On the subject of estimating the parameters, the goal of the adaptive parameter scheme is to ensure that the estimated parameters are bounded and stable. In both the simulation and experimental results, the estimated parameters remained bounded, as shown in Figure 9. The responses of the parameters in the transient time are affected by the choice of the gains and ; instability for the estimated parameters will occur if the gains are chosen too high. As for the triggering time, Figure 10 shows the time interval when the event is triggered, which is represented by pulse signals. Overall, the performances of the simulations and the experiment are nearly identical because the choice of the triggering threshold is the same. In the simulation, with the sampling time of 0.01 s, it can be calculated that the control input has to be updated 30,000 times, as shown in Figure 10a and Figure 10b, respectively. However, the implementation of the event-triggered strategy has reduced the times needed for updating to 12,050 and 17,070 times, which is significantly lower (accounting for 40% and 56.9% of the original). For the event-triggered experimental result, the control input for the event-triggered strategy is generated and updated 41,280 times in comparison with 60,000 times of the traditional controller, accounting for 68.8% of the total update intervals. This can be considered the advantage of the event-triggered strategy because it helped save computing resources. The inter-event time denoted in plus signs for scenario 1, scenario 2, and the experiment is also shown in Figure 11. The efficiency of saving computing resources can be investigated through the inter-event time, with a high value signifying the system is not required to update for a long time, thus reducing frequent control computation. In general, the lowest inter-event time is 0.01 s for all cases. The inter-event time can reach up to 0.06 s and 0.1 s, as shown in Figure 11a and Figure 11b, respectively. In the event-triggered experiment shown in Figure 11c, 0.12 s is the highest inter-event time but it only occurs once, whereas the averages are 0.03 s, 0.02 s, and 0.01 s. In contrast, the inter-event time for the time-triggered experiment is always 0.01 s, as shown in Figure 11d, as the control input is updated at a fixed time interval. required to update for a long time, thus reducing frequent control computation. In general, the lowest inter-event time is 0.01 s for all cases. The inter-event time can reach up to 0.06 s and 0.1 s, as shown in Figure 11a and Figure 11b, respectively. In the event-triggered experiment shown in Figure 11c, 0.12 s is the highest inter-event time but it only occurs once, whereas the averages are 0.03 s, 0.02 s, and 0.01 s. In contrast, the inter-event time for the time-triggered experiment is always 0.01 s, as shown in Figure 11d, as the control input is updated at a fixed time interval. To improve the inter-event time and reduce the number of triggered events, we can increase the threshold value for the triggering rule at the cost of precision. If the threshold value is sufficiently large, the control signals may not be updated for a long time, leading to the system's instability as the controller cannot react quickly enough to drive the states' errors to zero. On the contrary, a lower threshold means the computations of the control signals are more frequent, thus the states' errors may not have any difficulty in converging to zero.
Conclusions
This paper presented a parameter-adaptive event-triggered sliding mode controller for a mobile robot. While the sliding mode controller drives the state variables to zero, the event-triggered scheme reduces the time needed for the microcontroller to calculate control input, thus saving resources. To further improve the accuracy of the controller, the adaptive method is also utilized to estimate the unknown physical parameters of the mobile robot. Simulations and experiments are performed to verify the effectiveness of the proposed controller in terms of errors and stability. Despite the tracking error convergence to zero, the control system is affected by the chattering phenomenon, and the convergence rate is still not optimized. In future work, these issues will be considered by utilizing an improved version of the sliding mode algorithm, such as a super-twisting algorithm. Additionally, the implementation of the proposed controller for a network control scheme using a standard messaging protocol in the application layer of the TCP/IP model, such as Message Queuing Telemetry Transport (MQTT), will be an interesting topic.
Conclusions
This paper presented a parameter-adaptive event-triggered sliding mode controller for a mobile robot. While the sliding mode controller drives the state variables to zero, the event-triggered scheme reduces the time needed for the microcontroller to calculate control input, thus saving resources. To further improve the accuracy of the controller, the adaptive method is also utilized to estimate the unknown physical parameters of the mobile robot. Simulations and experiments are performed to verify the effectiveness of the proposed controller in terms of errors and stability. Despite the tracking error convergence to zero, the control system is affected by the chattering phenomenon, and the convergence rate is still not optimized. In future work, these issues will be considered by utilizing an improved version of the sliding mode algorithm, such as a super-twisting algorithm. Additionally, the implementation of the proposed controller for a network control scheme using a standard messaging protocol in the application layer of the TCP/IP model, such as Message Queuing Telemetry Transport (MQTT), will be an interesting topic. | 7,526.4 | 2022-08-02T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Optimal allocation to treatments in a sequential multiple assignment randomized trial
One of the main questions in the design of a trial is how many subjects should be assigned to each treatment condition. Previous research has shown that equal randomization is not necessarily the best choice. We study the optimal allocation for a novel trial design, the sequential multiple assignment randomized trial, where subjects receive a sequence of treatments across various stages. A subject's randomization probabilities to treatments in the next stage depend on whether he or she responded to treatment in the current stage. We consider a prototypical sequential multiple assignment randomized trial design with two stages. Within such a design, many pairwise comparisons of treatment sequences can be made, and a multiple-objective optimal design strategy is proposed to consider all such comparisons simultaneously. The optimal design is sought under either a fixed total sample size or a fixed budget. A Shiny App is made available to find the optimal allocations and to evaluate the efficiency of competing designs. As the optimal design depends on the response rates to first-stage treatments, maximin optimal design methodology is used to find robust optimal designs. The proposed methodology is illustrated using a sequential multiple assignment randomized trial example on weight loss management.
Introduction
In many randomized controlled trials, participants are equally allocated to intervention arms. Such a design is consistent with the view of clinical equipoise that must exist before the start of the trial. 1 However, it may be preferable to allocate more participants to one arm than to another, for instance, when variances and/or costs vary across the treatment arms, [1][2][3][4][5] or when outcomes are categorical rather than quantitative. [6][7][8][9][10] The derivation of the optimal allocation of units to treatment conditions has not only been done for individually randomized trials, but also for more complex trial designs such as cluster-randomized trials, [11][12][13][14][15][16] and trials with partially nested data. [17][18][19] From a statistical point of view, it is more efficient to assign more subjects to the condition with the lowest costs and highest variance. Other, more practical, reasons to use unequal allocation over equal allocation include resource constraints, administrative, political or ethical concerns or when the aim is to gain experience from an intervention and to study its feasibility. 5,20 The focus of these references is on trials where subjects are randomized to either one single treatment or a combination of treatments, but do not change their assigned treatments during the course of the trial. This is a drawback since in real research practice some subjects may benefit more from one treatment and others more from another. Adaptive treatment strategies (ATSs), which are also called dynamic treatment regimens or adaptive interventions, are more flexible in the sense that they allow changing treatments over time. [21][22][23][24] An ATS individualizes treatments to subjects via decision rules that adjust the type, intensity, dosage or delivery of a treatment and specify when, whether and how to proceed at certain critical clinical decisions. For instance, those subjects for whom their assigned treatment turns out to be beneficial may continue the same treatment, while those others may be assigned to another treatment. The use of sequential treatments is often necessary because of: (i) heterogeneous treatment outcomes across subjects, (ii) change in treatment goals over time, (iii) the need to balance potential risks and benefits or (iv) to reduce costs when intensive treatment is not necessary. 25,26 Also, the use of sequential treatments implies multiple clinical decisions to be taken throughout the course of the study. These clinical decisions are formalized through ATSs.
Based on the number of treatments and treatment switches, various competing ATSs may be developed and they may be compared to one another in a so-called sequential multiple assignment randomized trial (SMART). 25,27 SMARTs are multi-stage randomized trial designs that are used to inform on the development of multiple ATSs embedded in it. The use of SMART designs allows researchers to evaluate the timing, sequencing and adaptive selection of treatments by using randomization and developing the best sequence(s) of treatments that lead to the optimal outcomes in the long term. In SMARTs, participants are allowed to switch through multiple stages, where each stage corresponds to a clinical decision, and subjects may be randomized at each stage. Sequenced randomization ensures that at each decision point the groups of participants assigned to the intervention options are balanced in terms of patient characteristics. This adds flexibility, allowing participants to remain on those treatments that are having an effect and giving the possibility to switch away to patients being treated with less effective options. This has made SMART designs appealing in a broad variety of health care, behavioural and psychological settings.
Multiple ATSs are embedded in a SMART and the main question in the design phase of a SMART is how many subjects should be assigned to each ATS, and whether an unequal allocation is better than an equal allocation. Some recent papers studied the relation between sample size and power for SMART designs, 25,28-34 but did not study the optimal allocation of units to treatment sequences and the loss of efficiency of using equal rather than unequal allocation.
The aim of this paper is to derive optimal allocations of units for a prototypical SMART design. This is a two-stage design where all units are randomized to two treatment conditions in the first stage. Those who respond to their assigned treatment are not re-randomized in the second stage, while those who do not respond are re-randomized to two secondstage treatments. This design was considered earlier by NeCamp et al. 32 in the setting of a cluster-randomized trial. In our contribution, we focus on individual randomization. We focus on sample sizes to be used when comparing two ATSs that start with different first-stage treatments. Four of such pairwise comparisons can be made in their prototypical SMART design, and one comparison may be of more importance than another. We therefore use multiple-objective optimal design methodology to consider all comparisons simultaneously, while taking into account their relative importance. 35 Multiple-objective optimal designs are useful when the study has multiple and conflicting objectives, such multiple pairwise comparisons of marginal means of ATSs in a SMART. It combines these objectives in one optimality criterion and tries to seek a design that is highly efficient for each of these criteria. We provide a Shiny App to calculate the optimal allocation of units and to evaluate the efficiency of the design with equal allocation. We demonstrate our optimal design methodology on the basis of a SMART example that compares two different treatments, nutrition (NUT) and physical activity (PHY), for weight loss management. Our focus is on SMARTs with a quantitative outcome with individual randomization. In other words, we do not focus on cluster-randomized SMARTs or other complex SMART designs with clustered data.
The remainder of our contribution is organized as follows. Section 'Prototypical SMART design' further discusses the prototypical SMART design and its embedded ATSs. Furthermore, this section introduces the example of weight loss management. Section 'Derivation of the optimal design' derives the optimal allocation of units for studies in which either the total sample size or the budget is fixed. In the latter case, we consider the realistic situation where costs may vary across treatment conditions. The optimal allocation turns out to depend on the subjects' probabilities to respond to their first-stage treatment. We therefore also focus on maximin optimal designs that are robust to incorrect prior estimates of these probabilities. Furthermore, Section 'Derivation of the optimal design' introduces the Shiny App that we developed for finding the optimal design. Section 'A SMART example' demonstrates our optimal design methodology on the basis of the weight loss example. It shows how the optimal design is influenced by the costs per treatment, proportion of responders to firststage treatments and the relative importance of the four pairwise comparisons. Section 'Discussion' summarizes our findings, discusses limitations of this contribution and gives directions for future research.
Prototypical SMART design
Before we focus on the prototypical SMART, we rehearse some general ingredients for arbitrary SMART (see for instance Ertefaie et al., 36 but using different notation). The observed covariates and treatment assignment at stage k are denoted O k and X k , respectively, and O k and X k denote the covariate and treatment histories up to and including stage k. Within a SMART multiple ATSs are embedded; these are denoted d i , i = 1, . . . , I. An ATS is basically a treatment trajectory and denoted by a vector of counterfactual treatment assignments for a given individual j. If the SMART has two stages, then d i = (X 1 , X R 2 , X NR 2 ), where X R 2 is the treatment assignment in the second stage had the subject responded, and X NR 2 is the treatment assignment in the second stage had he or she not responded. So, for a subject who responds, X NR 2 is not observed, and for a subject who does not respond X R 2 , hence d i is called a vector of counterfactual treatments. The observed treatment history only includes the treatments a subject has actually been assigned to X 2 = (X 1 , X 2 ). At the end of each stage k, a tailoring variable is measured which determines if a subject has responded to the treatment in that stage or not. In other words, this variable determines which treatment the subject is assigned to in the subsequent stage. At the end of the study (i.e. at the end of the final stage) the continuous outcome variable Y j is measured on each subject. These outcomes are then used to compare different ATSs to one another.
The prototypical SMART design is visualized in Figure 1. This design has been used in various research fields; published examples of its use in the treatment and long-term management of many chronic conditions include weight loss, 26,37,38 substance abuse, 39,40 cancer research, 41,42 adolescent depression, 43 adolescent conduct problems, 44 suicide, 45 and attention-deficit/hyperactivity disorder. 46 The prototypical SMART is a two-stage design with two first-stage treatments A and B; the proportions randomized to these treatments are denoted p 1 and 1 − p 1 , respectively. After some amount of time it is determined which subjects respond to their first-stage treatment, depending on some criterion such as a sufficient amount of weight loss or smoking cessation. The response rates to first-stage treatments A and B are equal to γ 1 and γ 2 , respectively. Those subjects who respond to their first-stage treatment are not further randomized, but receive second-stage treatment C or F, depending on their first-stage treatment. This may be the same as the first-stage treatment, but may also be another treatment or discontinuation of treatment with or without further monitoring. Those subjects who do not respond to their first-stage Figure 1. A scheme of the prototypical sequential multiple assignment randomized trial (SMART) design from NeCamp et al. 32 Circled 'R' denotes randomization at each stage. p 1 and (1 − p 1 ) are, respectively, the proportions of subjects receiving first-stage treatments A and B. p 2 and (1 − p 2 ) are, respectively, the proportions of subjects receiving second-stage treatments D and E for non-responders starting with first-stage treatment A. p 3 and (1 − p 3 ) are, respectively, the proportions of subjects receiving second-stage treatments G and H for non-responders starting with first-stage treatment B. γ 1 and γ 2 indicate, respectively, response rates for the first-stage treatments A and B. treatment are further randomized. Non-responders who received first-stage treatment A are randomized to second-stage treatments D and E, with proportions p 2 and 1 − p 2 , respectively. Such a second-stage treatment may be an intensified version of the first-stage treatment A, treatment A augmented with another treatment (which may be first-stage treatment B), first-stage treatment B, or an entirely different treatment. In the same manner, non-responders who received first-stage treatment B are randomized to two second-stage treatments G and H. This design includes eight different treatment conditions, where some of the second-stage treatments may be the same as the first-stage treatments or a combination of them.
Four ATSs are embedded in the prototypical SMART design, see Table 1. For instance, the first ATS, denoted d 1 , assigns all subjects to first-stage treatment A. Responders receive second-stage treatment C while non-responders receive second-stage treatment D.
The primary analysis goal of a SMART design is usually one of the following: (i) comparing first-stage intervention options; (ii) comparing second-stage intervention options; (iii) comparing two or more embedded ATSs in the study starting with the same first-stage intervention option or (iv) comparing two or more embedded ATSs in the study starting with different first-stage intervention options. 31 In the derivation of our optimal design, we focus on embedded ATSs that start with different first-stage treatments, which is a common primary aim in SMARTs. 32
Example: weight loss management
Bariatric surgery is an effective treatment for obese patients to lose weight. Given its costs, potentially harmful side effects and the risk of death, patients in the Netherlands are only considered eligible if they can demonstrate they have previously attempted other means to lose weight. Two treatments are an increase in PHY and a change in NUT. Figure 2 visualises the example SMART design. All patients are first randomized to either PHY or NUT. Then, at the end of the first stage, subjects are categorized as responders or non-responders, according to some predefined definition of response, for example, a threshold for weight loss after a given period of time. Non-responders are then re-randomized to second-stage treatments, regardless of their treatment in the first-stage. They either switch to the other treatment or pursue with a combination of both treatments (NUT + PHY) in the second stage. Responders are not re-randomized and pursue with their first-stage treatment. This example is visualized in Figure 2. Four different ATSs are embedded within this prototypical SMART design: . The superscript R refers to second-stage treatment assigned to responders, while the superscript NR denotes second-stage treatment assigned to non-responders.
The SMART design of this example is a simplification of the prototypical SMART design in the sense that just two treatments are involved. Responders continue with their first-stage treatment, while non-responders are randomized to the other treatment or a combination of both treatments. This specific SMART design was previously used for, among others, the treatment of anxiety disorder, 25 obsessive-compulsive disorder 47 and chronic pain. 48 Derivation of the optimal design Introduction For a given ATS d i , i = 1, . . . , 4, let Y j , j = 1, . . . , N d i be the continuous primary outcome of interest for the jth subject as measured at the end of stage 2, with N d i denoting the number of subjects whose treatment trajectories are Table 1. The four ATSs embedded in the prototypical SMART design.
ATS label
First-stage treatment Status at the end of first-stage Second-stage treatment Responder F Non-responder H ATS: adaptive treatment strategy; SMART: sequential multiple assignment randomized trial. consistent with the ATS d i . Y j is supposed to have E(Y j ) = μ i and Var(Y j ) = σ 2 , for all j = 1, . . . , N d i . We assume common variance σ 2 across all four ATSs. The target parameter μ i , the marginal mean outcome expected under ATS d i , depends on the proportion of responders to first-stage treatment in ATS d i in the population. It is estimated by a weighted average of the observed outcomes of subjects whose treatment trajectories are consistent with d i . 31 The weights follow from the fact that there is a structural imbalance between responders and non-responders: the nonresponders are re-randomized but the responders are not. For instance, for ATS d 1 , responders have a probability of p 1 of receiving the treatment sequence they actually received, and their subject-specific weights are W j = 1 / p 1 . For nonresponders, this probability is p 1 p 2 and hence their weight is W j = 1 / p 1 p 2 . Here p 1 is the randomization probability to treatment A in the first-stage and p 2 is the randomization probability to treatment C in the second-stage. The weights are the inverse of the probabilities, hence the weighting is called inverse probability weighting. By using these weights, the relative contribution of the responders and non-responders in the calculation of the weighted mean outcome in ATS d 1 is the same as when this ATS had not been embedded in a SMART. In other words, since the ATS is embedded in a SMART, the non-responders have a higher weight than the responders to account for the fact that some of them are randomized to treatment E, rather than treatment D. This is a generalization from the work of Ghosh et al. 31 in the sense that we allow the proportions p 1 and p 2 to be unequal to 0.5. For the other ATSs, subject-specific weights can be obtained in a similar way.
The weighted mean for the continuous primary outcome of interest for ATS d i is equal to The expected value of this weighted mean is given by Equation (2) shows that the weighted mean is an unbiased estimator of the marginal mean. The variance of the weighted mean is equal to For each ATS d i , the variance of the weighted mean is computed using the subject-specific weights. First, the expected number of people in the trial whose treatment trajectories are consistent with d i is computed for each ATS. For d 1 , this is equal to with the first term on the right side representing the expected number of responders and the second being the expected number of non-responders. The proportions p 1 and p 2 are defined as above, while N is the total sample size of the SMART and γ 1 is the response rate to first-stage treatment A. Following from (4), we obtain The variance for the weighted mean Y d 1 , for ATS d 1 , is obtained by plugging (5) and (6) into (3): The right side of (7) consists of two factors. The first is the common variance of a mean, while the second is used to account for the fact that subjects may be re-randomized. This second factor is a function of the response rate γ 1 to first-stage treatment A.
Using their respective subject-specific weights, formulae for the variance of the weighted mean Y d i for the other ATSs are obtained in a similar way; these are shown in Table 2.
We consider pairwise comparisons of ATSs that start with different first-stage treatments. The expected difference in weighted means of two such ATSs d i and d i ′ (with i = 1 or 2 and i ′ = 3 or 4) is μ i − μ i ′ with the corresponding variance we assume that weighted means of ATSs that start with different first-stage treatments are independent. This assumption holds as long as outcomes of subjects from ATSs that start different first-stage treatments are independent Table 2. Variance for the weighted mean Y di for the four adaptive treatment strategies (ATSs) embedded.
Considering the ATSs embedded in our example, four possible pairwise comparisons exist, with corresponding variances: , with Y d i being the weighted mean for the continuous primary outcome variable of interest for the ATS d i , i = 1, . . . , 4. Formulae for the variance of these comparisons can be derived by plugging in the variances of the single ATSs as reported in Table 2.
The optimal design ξ * is minimized. Each objective has its own optimal design. For instance, the optimal design for Φ 13 is ξ * 13 =(0.5, 1, 1), which implies both first-stage treatments have randomization probability 0.5, all non-responders in first-stage treatment A receive second-stage treatment D, and all nonresponders to first-stage treatment B receive second-stage treatment G. The optimal designs for the other objectives are ξ * 14 =(0.5, 1, 0), ξ * 23 =(0.5, 0, 1) and ξ * 24 =(0.5, 0, 0). The optimal design for one objective does not only hold for the other single objectives, but it may also perform poorly. 49 For that reason, a multiple-objective optimal design is used, so that all of the four pairwise comparisons are taken into account simultaneously. We do so by using a weighted sum of the four objectives, where weights are to be chosen by the user. The use of weights allows placing more emphasis on the one objective than another, subject to the researcher's interests and the goals of the study. A constraint is put on the weights such that their sum is equal to 1. The optimal design problem becomes a multiple-objective optimal design problem. The aim is to minimize the optimality criterion with λ ii ′ being the weight assigned to the respective objective Φ ii ′ . The corresponding optimal design is a so-called compound-optimal design.
Optimal design under a fixed total sample size In this scenario, the optimal design is sought under a fixed total sample size N. This is a realistic scenario when studying treatments for a rare disease or condition, but it can also be used when resource constraints allow recruiting a fixed number of subjects. It is assumed that a priori estimates of the response rates γ 1 and γ 2 are available. The optimal design minimizes the objective in (9); it is found by taking the gradient of (9) with respect to p 1 , p 2 and p 3 . The optimal proportions for the second-stage treatments are given by and It is worth noting that the optimal second-stage proportions p * 2 and p * 3 do not depend on the response rates γ 1 and γ 2 , or on the total sample size N, but only on the choice of the weights. In particular, p * 2 increases as λ 13 and/or λ 14 increase. This is obvious since objectives Φ 13 and Φ 14 are comparisons that include treatment D, and more efficient comparisons can be made if more subjects are assigned to this treatment. Similarly, p * 3 increases when λ 13 and/or λ 23 increase. This is also obvious since objectives Φ 13 and Φ 23 are comparisons that include treatment G, and more efficient comparisons can be made if more subjects are assigned to this treatment.
The optimal randomization probability for the first-stage treatment A takes on a more complicated form: where p * 1 depends on both γ 1 and γ 2 , and on the optimal proportions p * 2 and p * 3 , while it does not depend on N. A detailed derivation of the optimal design is given in the online supplement.
Optimal design under a fixed budget
In this scenario, we consider a budgetary constraint: the total costs C for treating subjects should not exceed the budget B. The costs are calculated as where c A are the costs per subject in treatment A and N A are the number of subjects who receive treatment A, and similarly for the other treatments B to H. The costs may vary across subjects and are assumed to be known beforehand. The sample sizes are stochastic since they depend on the proportions p 1 , p 2 and p 3 and response rates γ 1 and γ 2 . In the derivation of the optimal design, we use their expected values. For the first-stage treatments, we have for the second-stage treatments C, D and E, we have and for the second-stage treatments F, G and H, we have For a given budget, the total sample size N that can be used decreases when the costs increase. This implies that a design is not only determined by the proportions but also by the total sample size: ξ = ( p 1 , p 2 , p 3 , N ). The optimal design is found in a numerical manner through a domain search algorithm, see the online supplement for more details.
Robust optimal design
The optimal design depends on the response rates γ 1 and γ 2 , hence the optimal design is locally optimal. These parameters are often unknown in the design stage of a SMART and an educated a priori guess based on expert opinions or findings in the literature should be used. There is, however, no guarantee that such a guess is correct and robust optimal design methodology may be used to protect against a loss of efficiency due to a misspecification of the response rates. We use maximin optimal design methodology 50 to allow specification of intervals, rather than point estimates, of the two response rates. The maximin optimal design ξ MMD maximizes the minimal relative efficiency (RE) among all designs in the design space Ω. In other words, it selects the best of the worst-case scenarios. The maximin optimal design can be found using the following three steps: 1. Define the parameter space for the response rates and the design space Ω for the proportions. For instance, the first response rate γ 1 may be between 0.2 and 0.3 and the second response rate γ 2 may be between 0.35 and 0.45. The design space is Ω = (0 ≤ p 1 ≤ 1, 0 ≤ p 2 ≤ 1, 0 ≤ p 3 ≤ 1). 2. For each possible combination of the two response rates in the parameter space, compute the locally optimal design ξ LOD . Then compute the RE of each design ξ in Ω compared with the locally optimal design: RE = Φ(ξ LOD ) / Φ(ξ). 3. For each design in Ω, find its smallest RE value within the parameter space. Then, select the design that has the highest minimum RE across all designs in the design space. This is the maximin optimal design ξ MMD and its minimum RE is called the maximin value.
This procedure yields the design which is most robust to a misspecification of the response rates and it can be used when working under a fixed budget or under a fixed total sample size.
Statistical power for the optimal design
Once the optimal allocation to treatments has been derived, it makes sense to determine how much power the study has for each of the four pairwise comparisons of ATSs 51 . The following steps should be taken in such a power analysis: 1. Calculate the variance Var( Y d i ) for each of the four ATSs in the SMART. For the case of a fixed total sample size this can be done easily by plugging in the optimal proportions p * 1 , p * 2 and p * 3 and total sample size N into the equations of Table 2. For the case of a fixed budget, first, the total sample size N has to be calculated from the budget, costs and optimal proportions. This can be done on the basis of equations (15) to (18), as is further explained in the online supplement.
For each of the four pairwise comparisons of ATSs: calculate Var
3. For each of the four pairwise comparisons of ATSs, get a prior estimate of the expected difference in marginal means μ i − μ i ′ . A prior estimate may be obtained from the literature or an expert's expectations. As an alternative, one may use the minimal relevant effect size, that is, the smallest effect size that is considered to be relevant. 4. For each of the four pairwise comparisons of ATSs, select the type I error rate α and decide whether a one-sided or two-sided test has to be performed. 5. For each of the four pairwise comparisons of ATSs, calculate the power. For a one-sided alternative use the following equation: are the variances of the two ATSs to be compared, z 1−α is the (1 − α)th quantile of the standard normal distribution and z −1 is the inverse of the standard normal distribution. For a two-sided alternative, α has to be replaced by α / 2.
Shiny app
We developed a Shiny app 52 to facilitate finding the optimal design; it is available from https://andreamorciano.shinyapps. io/OptimalSMART/. It calculates locally optimal designs for a fixed total sample size as well as a fixed budget. In the first case, the user should specify the total sample size, in the latter case, he or she should specify the costs per treatment along with the budget. Furthermore, an a priori estimate of the two response rates should be specified to find the locally optimal design. The numerical algorithm that finds the optimal design for the budgetary constraint has a precision of 0.00002 for the optimal proportions. The Shiny app can also be used to find the maximin optimal design. It that case intervals [γ 1 − 0.05, γ 1 + 0.05] and [γ 2 − 0.05, γ 2 + 0.05] are considered around the user-specified values γ 1 and γ 2 . These intervals are continuous; in our algorithm, we use a step size of 0.01 to discretize these intervals, while a step size of 0.05 is used for the response rates. In the case the reader is interested in using a different step size, he/she can contact the first author.
A SMART example Introduction
We apply the optimal design methodology to the example of the weight loss management study of Figure 2. Participants are randomized to two first-stage treatments: PHY and NUT. A response is defined as a (absolute or relative) loss in body weight that exceeds a user-selected threshold value. We use three sets of a priori guesses for the two response rates of the two first-stage treatments: (γ 1 , γ 2 ) = (0.15, 0.25), (γ 1 , γ 2 ) = (0.25, 0.40) and (γ 1 , γ 2 ) = (0.40, 0.55). In each case, we choose a larger value for NUT than for PHY, as previous research has demonstrated that PHY produces smaller bodyweight loss than diet (NUT). 53 For the first set of response rates, the definition of a response is most stringent, resulting in the smallest response rates, and for the third it is most lenient, resulting in the highest response rates.
We consider three sets of weights for the multiple-objective optimal design (9). The first considers each comparison to be of equal importance, which implies that equal weights are used: (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.25, 0.25, 0.25, 0.25). The second puts more emphasis on those comparisons where second-stage treatments are either PHY or NUT, but not a combination of the two. In this case, researchers are mainly interested in the comparison between d 1 = (PHY, PHY R , NUT NR ) and d 3 = (NUT, NUT R , PHY NR ) rather than the other ones. Designs with a single second-stage treatment are less expensive, they may be easier to implement by the researchers and easier to adhere to by the participants. As an illustration we use (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.70, 0.10, 0.10, 0.10). The third set of weights puts more emphasis on those second-stage treatments that are a combination of NUT and PHY, for instance, because there is a believe combined treatment is more effective. In that case the main focus is on the comparison between d 2 = (PHY, PHY R , (NUT + PHY) NR ) and d 4 = (NUT, NUT R , (NUT + PHY) NR ). As an illustration we use (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.10, 0.10, 0.10, 0.70).
For this specific example, we developed another version of our Shiny app; this is available at https://andreamorciano. shinyapps.io/OptimalSMART2/.
Locally optimal design under a fixed total sample size For each combination of (γ 1 , γ 2 ) and (λ 13 , λ 14 , λ 23 , λ 24 ), the optimal design is given in Table 3, along with the RE of the balanced design (where p 1 = p 2 = p 3 = 0.50) as compared to the optimal design. We observe the optimal design hardly depends on the response rates, but it does depend on the weights. For each set of weights, the optimal design dictates (about) equal randomization to first-stage treatments. For the first set of weights, the optimal design is (almost) equal to the balanced design and the RE of the balanced design is 1. For the second set of weights more than half (two-thirds) of participants are randomized to single second-stage treatments. This is obvious because the chosen weights put more emphasis on the comparison of single second-stage treatments. For the third set of weights, less than half (one-third) of participants are randomized to single second-stage treatments. This is also obvious because the chosen weights put more emphasis on the comparison of combined second-stage treatments. The optimal proportions p * 2 and p * 3 for the second set of weights are the complement of those for the third set of weights. In all cases, the RE of the balanced design is above 0.9, which implies it performs rather well as compared to the optimal design.
The results do not necessarily apply to other combinations of weights and response rates, so a researcher who is planning a SMART is advised to use our Shiny app to derive the optimal design for the trial at hand, and to do a sensitivity analysis to study how the optimal design is influenced using by various realistic combinations of weights and response rates.
Locally optimal design under a fixed budget
To find the optimal design under a budgetary constraint, the costs for both treatments and the budget need to be defined. We assume both stages are of equal length, so the costs do not vary across stages. The costs for combined treatment are the sum of the costs for both single treatments. We consider two sets of costs for NUT (C N ) and PHY (C P ): (C N , C P ) = (300, 50) and (C N , C P ) = (300, 300). Let us assume the costs are expressed in euros and the length of each stage is one month. The costs for NUT are a reasonable amount to buy healthy food for one participant per month in the Netherlands. The costs for PHY in the first set cover a subscription to the local gym for one month, those in the second set also include personal training by a fitness coach. Furthermore, the budget is B = 100, 000. For the response rates and the weights, we consider the same sets of values as in Section 'Locally optimal design under a fixed total sample size'.
For (C N , C P ) = (300, 50), the optimal proportion p * 1 is somewhat above 0.5, which implies that in the first stage more subjects are randomized to the least expensive treatment PHY than to the more expensive treatment NUT. The optimal proportion p * 1 hardly depends on the chosen weights, but it slightly increases with increasing response rates. Higher response rates imply more subjects receive the same treatment in stage 2 as they did in stage 1. It is therefore advantageous to already randomize more subjects to the least expensive treatment PHY in stage 1, so that more subjects receive this treatment in stage 2 as well. For (C N , C P ) = (300, 300), both first-stage treatments are equally expensive and the optimal proportion p * 1 is (about) 0.5. It hardly depends on the chosen weights and the response rates. The optimal proportions p * 2 and p * 3 hardly depend on the response rates but they do depend on the chosen weights. For the first set of weights, (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.25, 0.25, 0.25, 0.25), somewhat more subjects are randomized to the single second-stage treatments NUT or PHY than to the combined second-stage treatment PHY + NUT. This is obvious since single second-stage treatments are less expensive than combined treatments. For the second set of weights, (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.70, 0.10, 0.10, 0.10), even more subjects are randomized to single second-stage treatments than for the first set of weights. This is also obvious because the second set of weights puts more emphasis on the comparison of those ATSs with single second-stage treatments. For the third set of weights, (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.10, 0.10, 0.10, 0.70), more subjects are randomized to combined second-stage treatments than Table 3. Locally optimal design: optimal proportions for first-stage (p * 1 ) and second-stage (p * 2 , p * 3 ) treatments for three different sets of weights (λ 13 , λ 23 , λ 14 , λ 24 ) for the multiple-objective optimal design, and for three different sets of response rates (γ 1 , γ 2 ). The relative efficiency (RE) of the balanced design is also provided. The optimal proportions are derived under a fixed total sample size. to single second-stage treatments, which is also obvious because this set of weights puts more emphasis on the comparison of ATSs with combined second-stage treatments. The optimal total sample size N * depends on the combination of costs (C N , C P ). As is obvious, fewer subjects can be included for C P = 300 than for C P = 50. Furthermore, N * depends on the weights: most subjects can be included for the second set of weights and fewest for the third set of weights. For the second set of weights, more subjects are randomized to the least expensive single second-stage treatments, hence a larger total number of subjects can be included. Finally, more subjects can be included when the response rates increase. Subjects who respond to treatment are not re-randomized, hence they receive a single treatment in the second-stage. Single treatments are less expensive than combined treatments, hence more subjects can be included.
The RE of the balanced design slightly depends on the response rates. It is also related to the weights. The RE is highest for the first set of weights, since the optimal proportions are nearest to those of the balanced design. Slightly lower relative efficiencies are found for the third set of weights, but these relative efficiencies are still above 0.9. The lowest relative efficiencies are observed for the second set of weights as the optimal proportions deviate most from those of the balanced design. The lowest RE is RE = 0.85, which implies that the balanced design requires 100%[1 − (1 / 0.85)] = 17% more subjects than the optimal design.
Robust optimal design
The optimal designs that were presented in subsections 'Locally optimal design under a fixed total sample size' and 'Locally optimal design under a fixed budget' are locally optimal since they depend on the response rates γ 1 and γ 2 . Such response rates are often unknown in the design phase of a SMART and an educated a priori guess must be given. There is, however, no guarantee such a guess is correct, and an incorrect guess may result in a suboptimal design. This problem may be overcome by using robust optimal design methodology; here we use the maximin optimal design methodology as described in section 'Robust optimal design'.
Tables 5 and 6 in the online supplement show maximin optimal designs using the same sets of weights and combinations of costs as in Tables 3 and 4 Tables 3 and 4. A comparison of Table 3 and Table 5 of Supplemental material, and Table 4 and Table 6 of Supplemental material shows the locally optimal designs and maximin optimal designs are (almost) identical for the chosen sets of weights, response rates and costs. As a result, the minimal RE of the balanced design as given in Tables 5 and 6 of Supplemental material is almost equal to that of the RE of the balanced design in Tables 3 and 4. This result is not surprising since in Sections 'Locally optimal design under a fixed total sample size' and 'Locally optimal design under a fixed budget', it was shown that the optimal design hardly depends on the response rates. Of course, this finding does not necessarily hold for all combinations of responses rates, weights and costs. The user is therefore encouraged to apply maximin optimal design methodology in the case the response rates are likely to be misspecified. Table 4. Locally optimal design: optimal proportions for first-stage (p * 1 ) and second-stage (p * 2 , p * 3 ) treatments for three different sets of weights (λ 13 , λ 23 , λ 14 , λ 24 ) for the multiple-objective optimal design and for three different sets of response rates (γ 1 , γ 2 ). The relative efficiency (RE) of the balanced design is also provided. The optimal proportions are derived under a fixed budget with C = 100, 000 and for two different sets of costs (C P , C N ).
Discussion
Considering our example of a prototypical SMART design, we derived the optimal design ξ * =( p * 1 , p * 2 , p * 3 ) both under a fixed sample size and budget constraint. Under a fixed sample size, we found that the optimal probability in the first-stage p * 1 is mostly influenced by the weights chosen for the multiple-objective optimal design, while it is only slightly influenced by the response rates. On the other hand, second-stage optimal probabilities are only influenced by the choice of the weights. When considering the second set of weights (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.70, 0.10, 0.10, 0.10) or the third set, (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.10, 0.10, 0.10, 0.70), which, respectively, put more emphasis on the use of single and combined treatments, the optimal design ξ * performs better than the balanced design ξ b = (0.50, 0.50, 0.50), although the latter still achieves a RE above 0.90. When equal weights are used, ξ * and ξ b perform almost identically in terms of RE. Under a fixed budget, the optimal proportions are influenced also by the cost of treatments, besides the aforementioned weights and response rates. When including cost of treatments into account, the performance in terms of RE of the optimal design ξ * , with respect to ξ b , improves. The reason might be that unequal allocation of patients to intervention options seems to work better under a fixed budget than under a fixed sample size, as was also previously stated in the literature. 2,3 It is especially advised to use the optimal design rather than the balanced design when the second set of weights, (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.70, 0.10, 0.10, 0.10), is used. For this set, ξ b may have a RE as low as 0.86. When using equal weights for the multiple-objective optimal design, ξ b achieves a RE with respect to ξ * above 0.95. When using the third set of weights, (λ 13 , λ 14 , λ 23 , λ 24 ) = (0.10, 0.10, 0.10, 0.70), ξ b achieves a RE above 0.90.
It should be mentioned that the optimal designs are locally optimal, as they depend on the two unknown response rates γ 1 and γ 2 . One way to address this issue is using maximin optimal design methodology. In our example, the maximin optimal designs are quite similar to the locally optimal designs. In other words, the locally optimal designs are rather robust with respect to mild misspecification of the response rates. However, this finding does not always hold and it is advocated to derive a maximin optimal design if there is uncertainty about the a priori guesses of the response rates.
We derived our optimal design under the assumption that outcomes of subjects in ATSs that start with different firststage treatments are independent of each other, resulting in a zero correlation between weighted mean outcomes of ATSs starting with different first-stage treatments. There are situations in which this assumption may be violated. Consider for instance the situation in our weight loss example where just a limited number of personal trainers is available. It may then occur, a personal trainer trains subjects from ATSs starting with different first-stage treatments. In such a case, the outcomes of subjects who have been trained by the same personal trainer become dependent because of the trainer's skills, enthusiasm, experience, etc. In such a case, the assumption of independence is violated and hence our optimal design is not applicable. Such a problem can be easily solved by letting each personal trainer only train subjects from ATSs that start with the same first-stage treatment.
One limitation of this study is that it does not take clustered data structures into account, while such data may also occur in SMARTs. 54,55 Clustered data occur, among others, in cluster-randomized trials and multicentre trials. In such studies not only the total number of subjects in each treatment sequence needs to be determined, but also the number of clusters and cluster size. 56 The optimal design will depend on the intraclass correlation coefficient, which measures the degree of dependence of outcomes within the same cluster.
Another limitation of this study is that formulae and methodology only apply to the prototypical SMART designs in Figures 1 and 2. Based on the number of treatments, stages and randomizations, different SMART designs can be developed, of which many examples exist in the literature 57,58 and online. 59 It would be necessary to study optimal designs for such other types of SMART designs.
To our knowledge, this is the first paper that studies optimal allocation to treatments in SMARTs. Our Shiny App allows researchers in the fields of biomedical, health and social sciences to derive the optimal design for their SMART and to calculate the efficiency of a balanced design. We hope that this paper will further contribute to the development and implementation of SMARTs. | 10,928.4 | 2021-09-23T00:00:00.000 | [
"Mathematics"
] |
The Role of Transposable Elements in the Origin and Evolution of MicroRNAs in Human
MicroRNAs (miRNAs) are crucial regulators of gene expression at the post-transcriptional level in eukaryotes via targeting gene 3'-untranslated regions. Transposable elements (TEs) are considered as natural origins of some miRNAs. However, what miRNAs are and how these miRNAs originate and evolve from TEs remain unclear. We identified 409 TE-derived miRNAs (386 overlapped with TEs and 23 un-overlapped with TEs) which are derived from TEs in human. This indicates that the TEs play important roles in origin of miRNAs in human. In addition, we found that the proportions of miRNAs derived from TEs (MDTEs) in human are more than other vertebrates especially non-mammal vertebrates. Furthermore, we classified MDTEs into three types and found that TE head or tail sequences along with adjacent genomic sequences contribute to generation of human miRNAs. Our current study will improve the understanding of origin and evolution of human miRNAs.
Introduction
Transposable elements (TEs) as important components of many genomes are able to mobilize and replicate in the host genomes [1]. There are two kinds of these elements: retrotransposons, and DNA transposons [2]. The retrotransposons can be further classified into three categories: long terminal repeat (LTR), long interspersed nuclear element (LINE) and short interspersed nuclear element (SINE). TEs were claimed to be an evolutionary force and were found to be related to epigenetic regulatory mechanisms [3][4][5]. In addition, TE sequences are able to provide TF binding sites during gene expression, and change regulatory networks of gene expression [6,7].
The discovery of small RNA led to an emerging of prospect for uncovering the functions of TEs. Various small RNAs have been discovered, such as microRNAs (miRNAs), short interfering RNAs, piwi interacting RNAs and so on [8][9][10]. MiRNA is the first discovered small RNA [10]. MiRNAs are a class of short non-coding RNAs (approximately 22 nt) that are cleaved from longer (approximately 70 to 90 nt) precursor miRNAs (pre-miRNAs) [11,12]. In animals, most miRNAs regulate gene expression by targeting mRNA-specific regions (known as miRNA-target sites) via partially complementary manner [13,14]. These target sites are mainly located in 3'-untranslated regions (3'-UTRs) and complement with miRNAs via sequences of approximately 7 nt at the 5' ends of miRNAs (known as 'seed' regions) [15,16]. MiRNAs regulate gene expression by degrading mRNAs or repressing mRNA translation via recognizing their target sites [17]. Although many algorithms were developed to predict the target sites of miRNAs, the mechanism of miRNA recognition of target sites is not fully understood [15,[18][19][20]. Understanding the origin of miRNAs and their target sites will improve the rationale for developing algorithms to optimizing the prediction of miRNA-target genes.
TEs were claimed to provide a natural mechanism for the origin of new miRNAs and the targets of some miRNAs [21][22][23][24][25]. For instance, mir-28, mir-95 and mir-151 are derived from LINE-2 TEs, and mir-548 family is derived from Made1 TEs [21][22][23]. Alu elements of TEs could be targeted by almost 30 human miRNAs [25]. The hsa-mir-566 was found to be derived from Alu and 80% of its predicted target sites were claimed to be derived from TEs and related to Alu element [23]. However, it remains largely unknown what and how miRNAs originated from TEs in human.
In the current study, we provide evidences to show that TEs are important sources for the origin of miRNAs in human. Our results uncover the evolution of miRNAs derived from TEs in human and provide an insight into the mechanism of the origin of miRNAs.
Materials and Methods
To identify the miRNAs derived from TEs (MDTEs), pre-miRNAs and their associated data were collected and analyzed by following steps: Firstly, 6845 pre-miRNAs with chromosomal locations of eight vertebrates (Danio rerio, Xenopus tropicalis, Gallus gallus, Bos taurus, Mus musculus, Macaca mulatta, Pan troglodytes and Homo sapiens) were obtained from miRBase v20 [26]. The pre-miRNAs and their adjacent upstream and downstream 4,000 bp sequences were downloaded using the BioMart tool from Ensembl genome database (Release 68) [27].
Secondly, the pre-miRNAs and their adjacent sequences were used as the query sequences to identify TEs. TEs which locate on the query sequences were identified using the RepeatMasker program based on the repeatmasker libraries 20140131 [28,29]. The Wu-blast program was used as the searching engine of RepeatMasker and the parameter-s was set to improve the accuracy of identification. The locations of TEs on query sequences were extracted from the ".out" file which is one of the output files of RepeatMasker.
Finally, the locations of TEs were compared with those of pre-miRNAs on query sequences. If a pre-miRNA overlapped with a TE on the query sequence, this miRNA was defined to be a MDTE. The proportions of overlap between pre-miRNAs and TE sequences were calculated for classification of MDTE types.
To identify a human MDTE which lose its sequence feature of TE and has homologous relations of miRNAs among human and seven other vertebrates were analyzed and human MDTEs without TE sequence feature were identified if a human miRNA showing non-overlapping with a TE but has homologies from several other vertebrates overlapping with same TE sequence.
To address whether different TE families have equal contributions to the origin of MDTEs, the proportions of MDTEs generated from different TE families were calculated and compared with the proportions of TE families in human genome. The proportions of MDTEs generated from different TE families were calculated following the procedure described above. The proportions of TE families in human genome were obtained from the published genome sequencing data [30]. Pearson's Chi-squared test was used to evaluate the significance of differences and P< 0.01 indicates that different TE families have different contributions to the origin of MDTEs significantly.
Identification of miRNAs derived from TEs in human and seven other vertebrates
The MDTEs were identified from human and seven other vertebrates (Danio rerio, Xenopus tropicalis, Gallus gallus, Bos taurus, Mus musculus, Macaca mulatta and Pan troglodytes). Surprisingly, non MDTEs were found in Xenopus tropicalis. Proportions of MDTEs in miRNAs increased with the evolution of vertebrates and the proportion in human was more than those in other analyzed vertebrates (Fig 1). Meanwhile, it was observed the proportions of MDTEs in miRNAs bear little relevance to the proportions of TEs in genomes. For example, although more than one-third of the genomes are made up by TEs in Danio rerio and Xenopus tropicalis [31,32], the proportions of MDTEs in miRNAs are less than 5%. In comparison, TE sequences constitute 9% of genome in Gallus gallus, but 6.98% of miRNAs were MDTEs [33]. The MDTEs account for 19.84% of miRNAs in Homo sapiens and TE sequences make up 44.83% of its genome [30]. This observation might be due to the significant differences between the components of TEs in Danio rerio and Xenopus tropicalis and those in human and other mammals. This argument was supported, at least in part, by the observation that the major TEs are DNA types in Danio rerio and Xenopus tropicalis compared to retrotransposable elements in mammals [30][31][32]34]. Given the contribution of TEs to miRNAs were negligible in Drosophila [35], MDTEs mainly present in genomes of human and other mammals. Information of MDTEs in human and seven other vertebrates was summarized and listed in Table 1.
When MDTEs were undergone homology analysis among Danio rerio, Gallus gallus and mammals, no homology of MDTEs among them were found. Fourteen MDTEs are conserved among five mammals and forty-seven MDTEs are conserved among primates. This finding implies that MDTEs are species-specific due to the difference of TEs among species.
Analysis of MDTEs in human
To further investigate the pattern of MDTEs in human, 1872 miRNA gene sequences of Homo sapiens collected from the miRBase v20 were mapped to the human genome and analyzed. In total, 386 MDTEs which completely or partly overlap with TEs show unique relationships to their related TEs. It can be demonstrated via observing the origin of multi-copy MDTEs or MDTE families. Each copy in multi-copy MDTEs or each member of a MDTE family was found to be originated from the same TE. For example, six copies of hsa-mir-3118 in human genome are all partly derived from LINE/L1PA13 and a large miRNA group, hsa-mir-548, is derived from DNA/MADE1 element. Taken together, our findings suggested that a MDTE and its homologies are derived from the same TE. S1 Table lists detailed information of all MDTEs.
When multi-copy MDTEs were excluded, 338 unique MDTEs (UMDTEs) were identified and can be classified into three types (Fig 2A): Type I UMDTEs derived from inverted TE sequences, Type II UMDTEs with sequences partly overlap with TE sequences that are not inverted, and Type III UMDTEs with sequences wholly overlap with TE sequences.
MiRNAs have been identified in various organisms with rapidly increasing number in databases. In humans, except for multi-copies, approximately 19.84% (338/1704) of miRNAs overlapping with TEs are regarded as UMDTEs. Inverted TEs have been claimed as an important configuration of miRNA origin [21,22]. Consistently, 11.24% of UMDTEs were found to be derived from inverted TEs, 36.98% of the UMDTEs were found to be derived from whole TEs ( Fig 2B). It might be due to the abundance of similar fragments and palindromic structure in TEs [4,36]. These fragments provide the potential to form the hairpin structure of miRNAs.
Type II MDTEs are derived from TEs via two patterns in human
In UMDTEs, 51.78% miRNA belongs to Type II MDTEs which partly overlap with TEs. Type II MDTEs was found to be generated by two patterns: Pattern I in which MDTEs loss their TE sequence features from whole TE sequences, and Pattern II in which MDTEs with a part of the pre-miRNA are derived from the head or tail of TEs (Fig 4). In Pattern I, it is evident to observe MDTEs in miRNA homologies or multi-copy miRNAs obviously. The overlap between TEs and MDTEs in pattern I is reduced from 100% to 30% or even less (Table 2).
Compared with Pattern I MDTEs, which account for 77.14% of Type IIMDTEs, TEs form one arm of the pre-miRNAs in Pattern II MDTEs. In this condition, the TEs were inserted in the proximity of appropriate sequences that are similar to the complementary sequences of the TE head or tail to form the hairpin structure of pre-miRNAs, such as hsa-mir-326, hsa-mir-421 and hsa-mir-619 (S2 Table). For Pattern II MDTEs, the mature miRNAs are derived from not only the internal portion of TEs but also non-TE sequences that were complementary with the head or tail of TEs.
Two origin mechanisms of MDTEs can be found across three types of MDTEs. In Type II and Type III MDTEs, some miRNAs were generated via being taken into genomes by TEs and passed on to new species after species differentiation. In Type I and Type II MDTEs, some miRNAs are generated by TEs from current genome.
Identification and characterization of human MDTEs without TE sequence features
About 19.84% miRNAs wholly or partly overlap with TE sequences in human, but the origin of other miRNAs is not very clear. To identify those MDTEs losing their sequence features of TEs, the miRNAs which do not overlap with TEs in human were analyzed and compared with their homologies in other vertebrates. Twenty-three miRNAs were identified as human MDTEs which do not overlap with TEs, while their homologies either wholly or partly overlap with same TE sequence in other species (S3 Table). Although these MDTEs just account for 1.35% of all miRNAs in human, it implies that more miRNAs than expected may be derived from TE sequences in vertebrates.
Conclusion
In summary, we found that TE is an important origin source of human miRNAs. MiRNAs can be brought into genomes during the insertion of TEs or generated by TE sequences via particular mechanisms in current genome. When MDTEs fixed in the genome, sequence features of TE of MDTEs might be lost during the evolution. The observation that some MDTEs partly overlap with TE sequences and some MDTEs do not overlap with TEs implies that there are more MDTEs in genomes of vertebrates than what we previously believed. Our findings provide an insight into the origin and evolution of miRNAs.
Supporting Information S1
Author Contributions
Conceived and designed the experiments: SQ FM. Analyzed the data: SQ PJ XZ. Wrote the paper: SQ LMC FM. | 2,860.2 | 2015-06-26T00:00:00.000 | [
"Biology"
] |
Speeding-Up Elliptic Curve Cryptography Algorithms
: In recent decades there has been an increasing interest in Elliptic curve cryptography (ECC) and, especially, the Elliptic Curve Digital Signature Algorithm (ECDSA) in practice. The rather recent developments of emergent technologies, such as blockchain and the Internet of Things (IoT), have motivated researchers and developers to construct new cryptographic hardware accelerators for ECDSA. Different types of optimizations (either platform dependent or algorithmic) were presented in the literature. In this context, we turn our attention to ECC and propose a new method for generating ECDSA moduli with a predetermined portion that allows one to double the speed of Barrett’s algorithm. Moreover, we take advantage of the advancements in the Artificial Intelligence (AI) field and bring forward an AI-based approach that enhances Schoof’s algorithm for finding the number of points on an elliptic curve in terms of implementation efficiency. Our results represent algorithmic speed-ups exceeding the current paradigm as we are also preoccupied by other particular security environments meeting the needs of governmental organizations.
Introduction
Elliptic curve cryptographic (ECC) was initially proposed in [1,2] as an alternative to the already established public key cryptographic schemes.As a side note, the credit for the first use of elliptic curves in a cryptology related context is given to Lenstra for his factorization algorithm [3].ECC has received an increasing amount of attention in time not only for the high level provable security offered, but especially due to a desired property concerning the implementation efficiency: the cryptographic keys are significantly shorter compared to, e.g., the case of RSA [4].
For more than a decade now, ECC has become a central piece for the Blockchain technology.To be more specific, the Elliptic Curve Digital Signature Algorithm (ECDSA) [5] is widely adopted in the construction of cryptocurrency and, implicitly, blockchains.Thus, there has been a justified hype with respect to efficient implementations of ECDSA and other ECC schemes in recent years, especially using Field Programmable Gate Arrays (FPGAs) [6][7][8][9][10][11][12][13][14].Hence, FPGA-based hardware accelerators represent the main applied research topic when dealing with (permissioned) blockchains.To underline the importance of the subject, we have to mention that the main FPGA technology producer, Xilinx [15], organized a competition in 2021 [16], which encouraged R&D representants to propose, among other topics, blockchain-related projects [17].
Motivated by all the above and following the work of Géraud et al. from [32], this paper aims at building the foundation for new techniques of optimizing the implementation of ECDSA.As in the case of most public-key cryptosystems, the basic arithmetic operation used in ECC is the modular reduction.[32] describes a method allowing to double the speed of Barrett's algorithm [33] by using specific RSA moduli with a predetermined portion.The result is then applied in order to generate DSA [34] parameters.As an extension, our article presents a technique for generating ECDSA moduli with a predetermined portion that allows one to double the speed of Barrett's algorithm, a widely adopted efficient technique for performing modular reduction in a costly reduced manner.We also provide the reader with mathematical proof of our algorithm.
Moreover, we target a more general type of optimization suitable not only for ECDSA, but for various ECC algorithms.Thus, we propose an artificial intelligence (AI) based approach that enhances the speed of Schoof's algorithm for finding the number of points on an elliptic curve [35].Schoof's method is the first deterministic polynomial time algorithm for counting points on elliptic curves defined over finite fields.The result represented, undoubtedly, a breakthrough in terms of designing ECC algorithms.
While the first result we propose is rather particular and can be applied for a certain digital signature scheme (ECDSA), the latter can be of general interest in terms of ECC algorithms.We underline that the AI-optimized variant of Schoof's algorithm is rather a proof of concept for a future series of results, with respect to this research direction.
Nonetheless, our methods can also be combined with already established algorithmic improvements to obtain even better implementation timings.
Specific Supplementary Motivation
The common practice when using ECC is to have specific curves [5,36] rather than choose them every time when the algorithm is run in order to ease computations (by applying dedicated formulae for point addition and scalar multiplication).Nonetheless, speeding-up ECDSA in the general case can be advantageous, e.g., either for cryptographic implementations, requiring a higher level of security than the standard one, or simply for proprietary cryptographic algorithms (of course, given that a step consisting of checking the security of the generated curve is performed.).Such needs are customary, especially for governmental organizations.Moreover, in the aforementioned example, implementations for resource constrained devices and cryptographic hardware may be of great interest.Thus, when proposing our results, we do not seek to compare them with existing targeted ECC implementations in terms of speed, as we consider performing an initial costly step: the parameter generation.
In addition to the above, we believe that our AI-based strategy will benefit from the rapid advances in the field of AI in the near future.
Structure of the Paper
In Section 2, we introduce the notations and briefly describe Barrett's algorithm for modular reduction, Schoof's algorithm for point counting on elliptic curves, and ECDSA.The main results are discussed in Section 3, namely the algorithm for generating the ECDSA parameters and the method for finding the number of points of an elliptic curve.Details regarding the straightforward, unoptimized implementations of the previously mentioned algorithms are presented in Section 4. We conclude and provide the reader with future work ideas in Section 5.Moreover, we recall ECDSA in Appendix A and Schoof's algorithm in Appendix B.
Notations
Throughout this paper, we denote by NextPrime(r) the smallest prime p, such that p ≥ r. #S represents the cardinality of the set S. We let P be the bit-length of p, such that P = ||p||.The value of P is fixed from now onwards.The binary shift-to-the-right of x by y positions is further denoted by x y.
Barrett's Algorithm
Let d and m be integer numbers.Barrett's algorithm (Algorithm 1) only uses two bit-shifts and one multiplication to produce an approximate value of the quotient obtained when d is divided by m.This approximation is denoted by c 3 and it satisfies the following inequality which means that the whole loop is not repeated more than two times.The bit-lengths of d and m are represented by D and M. Algorithm 1 also requires a quantity that is denoted by L and which represents the maximal bit-length of the numbers that can be reduced.Barrett's algorithm works as long as the condition D ≤ L is satisfied.In most cases, these constants can be chosen such that D = L = 2M, provided that the reduction is performed after every operation.The constant k is computed only once, since it does not depend on the value of d.Further details regarding Algorithm 1 can be found in [33].
The elliptic curves considered are of the form y 2 = x 3 + ax + b and are defined over a finite field F p , where p is prime.An important result, which will be used throughout this paper, is the following theorem.
Theorem 1 (Hasse).The number of points n of an elliptic curve defined over a finite field of size p satisfies the inequality |n − p − 1| ≤ 2 √ p.
In [35], Schoof published the first deterministic and polynomial-time algorithm that computes the order of an elliptic curve, which is defined over a finite field.This algorithm starts off by using Theorem 1, which provides an interval of possible values for the order of the elliptic curve.That specific interval has the width 4 √ p.
Since the order can be written as #E(F p ) = p + 1 − t, where t is the trace of the Frobenius endomorphism [37], the problem of finding the order reduces to that of finding the value of t.The next step involves computing the value of t modulo for some primes, such that their product is greater than 4 √ p. Finally, the Chinese Remainder Theorem [37] produces the value of t, which is needed for finding the order.The details of Schoof's algorithm are included in Appendix A as Algorithm A4.
ECDSA
ECDSA [5] is a digital signature scheme based on cyclic groups of elliptic curves defined over finite fields.Its security relies on the Elliptic Curve Discrete Logarithm Problem [38].Details about setting-up the parameters of ECDSA, generating a signature and verifying it are included in Appendix A.
Double-Speed Barrett for ECDSA
For the setup of ECDSA, two prime numbers p and n are required: p represents the size of the finite field and n is the order of the group E(F p ), since we are only considering the case when the order of this group is prime.Note that both the multiplications performed in Algorithm 1 are multiplications by constants, namely k and n.
Our aim is to generate the primes p and n such that their leading bits do not have to be computed.Moreover, we want this to happen also for the associated constants k p = 2 L p and k n = 2 L n .The idea of Algorithm 2 is that if we choose the prime p in a convenient way, then we can control the most significant bits of n.
Input: P, the bit-length of the prime p, which has to be even and large Output: (p, a, b, n), the parameters needed for ECDSA Then p, n, k p and k n satisfy the following inequalities: 4. Proof.
1.
Using the inequality r < 2 U and Line 6 of Algorithm 2, we obtain that p < Q.
2.
From Theorem 1 we have that p For the left side of the inequality, i.e., 2 P−1 < n, we obtain that Thus, For the right side of the inequality, i.e., n Using Line 2 of Algorithm 2, we can deduce that Hence,
4.
Similarly, using Line 2 of Algorithm 2, we obtain that Remark 1.In Line 6 of Algorithm 2, we do not allow the distance between α and the next prime p to be too large.Additionally, we choose α in a specific way so that we can control p's size.This implies that the probability of the first U bits of p being different than the first U bits of α is negligible.We performed 10 5 experiments with the value P = 256, and the success rate was 100%.Example 1.This example illustrates Algorithm 2. For the values P = 256, L = 512, and U = 128, we get:
Enhancing Schoof's Algorithm Using AI
Our aim is to modify Schoof's algorithm by replacing Hasse's interval with another one containing the order, such that the width of the new interval is smaller.
In order to obtain such a result, we can use a neural network that takes input triplets of the form (p i /2 P , a i /2 P , b i /2 P ) and returns as output elements ŷi given by ŷi where we denote by ni that the estimate of the actual order n i .Here we use the sigmoid activation function for the output layer to ensure that the output is in the appropriate range.The labels used for training the neural network are written as The elements of the training, validation, and test sets will be written in the form (p * i /2 P , a * i /2 P , b * i /2 P , y * i ), where instead of * , we will have the superscripts tr, v, and t, respectively.These three sets will have the cardinal numbers equal to N tr , N v , and N t , respectively.
At training time, we choose to use as loss function the mean squared logarithmic error, since we want this to work well for large primes.This function is given by Let us denote by the average distance between the actual order and the estimate of the order, which is computed on the validation set, Our approach is a probabilistic one, since we need to assume that the order n satisfies the inequality n − 2 < n < n + 2 . ( This leads to the following result involving t, Notice that in the above inequality, we have doubled in order to increase the probability that our assumption is true.Hence, if we manage to determine the value of t 0 ≡ t (mod 4 ), then we can find t by replacing t 0 in the formula t = (p + 1 − n) − 2 + t 0 , and thus we know the order of the group.The benefit of using the estimate given by the neural network is that Schoof's algorithm can be applied for an interval of width equal to 4 instead of one of width equal to 4 √ p.This means that if the neural network is good at estimating the order, i.e., < √ p, then this approach will be faster than the standard one.
Remark 2. Since our algorithm is probabilistic, after obtaining a value of , which is significantly lower than √ p, instead of assuming that n lies in the interval ( n − 2 , n + 2 ), we can assume that it lies in an interval of greater width, for example ( n − 4 , n + 4 ).By doing this, we are able to increase the success probability of the algorithm.
Remark 3. The difference between Schoof's algorithm and our proposed technique is that we choose the set of primes from Line 1 of Appendix B, such that the product of the elements is greater than 4 .All the steps that follow remain unchanged.
GitHub Implementation
We refer the reader to [39] for the source code representing the implementation of our proposed results.
Note that for simplicity, we overlooked the initial part of Algorithm 2 (i.e., from Line 1 to Line 6) in our implementation.
Implementation Results
We ran the code for our algorithm on a standard laptop using Ubuntu 20.04.5 LTS OS, with the following specifications: Intel Core i3-1005G1 with 2 cores and 8 Gigabytes of RAM.The programming language we used for implementing our algorithms was Python, and the AI library we chose was TensorFlow.
AI-Based Speed-Up
Our AI-based technique can speed up the search for an elliptic curve of prime order.This speed-up depends initially on the model architecture and then on the accuracy of the AI-model.Within the current section, we report our (proof of concept) results.
To achieve our proof of concept goal, in our implementation we initially considered primes p of length 32 bits.Thus, we generated 60,000 elliptic curves of the form (p, a, b, n) by means of Schoof's algorithm.Based on these examples, we trained, validated, and tested the neural network model we chose.This network was composed of 7 dense hidden layers with the number of units decreasing from 512 to 8. Note that decreasing the number of units, as stated before, is a straightforward technique used in AI algorithms.The reason we decided to have 7 hidden layers was to obtain the best compromise in terms of error rate and code optimization (especially with respect to time complexity).We provide the reader with a graphical representation of the relationship between the number of neural network layers and the error rate of our proposed algorithm in Figure 1.Note that the error rate stabilizes at 7%, starting with the use of 7 layers.Hence, using the previously described neural network, we managed to replace Hasse's interval with another interval, with the width approximately 15% smaller than the original one.In this case, the probability that the order n satisfies Equation (1) was 93%, which was also the success rate of our probabilistic algorithm.The obtained probability was computed by finding the number of testing examples, which satisfied Equation (1).
We provide the reader with a graphical representation of the relationship between the number of neural network layers and the reduced Hasse interval of our proposed algorithm in Figure 2. Note that the percent by which the width of our reduced Hasse's interval is smaller than the original one stabilizes right after 7 layers at 16%.Note that the next value considered after 7 is 9 given that the AI models work better in the case of an odd number of layers.Thus, given the results presented in Figures 1 and 2, we chose to use 7 layers in our implementation as a trade-off between accuracy and time complexity.Table 1 shows that the difference between the timing of the 7 layers implementation version and the 9 layers version is significant (the latter is 55% slower) and clearly sustains our choice of parameters.Due to the fact that our proposed result is a particular algorithmic improvement, we compare it in terms of efficiency with the original Schoof algorithm (see Remark 3) and provide the reader with precise timings in Table 2.The average timings we report are given for 32, 48, and 64 bits prime numbers.It is straightforward for any other publicly available implementation optimization to be applied in our case too, and, thus, we can obtain the best timings.Based on the results in [32], we showed that using Barrett-compatible ECDSA parameters doubles the speed of Barrett's algorithm when performing the modular reductions required for generating (Algorithm A2) and verifying (Algorithm A3) ECDSA signatures.Thus, in such an optimized ECDSA implementation, the steps including modular reduction are performed two times faster than in a standard ECDSA implementation.
ECDSA Related Works Comparison
The authors of [13] report the fastest implementation of the ECDSA verification algorithm in FPGA compared to the already established results in the literature so far.The majority of papers presenting hardware optimizations for blockchain applications are only discussing the verification algorithm of ECDSA.Nonetheless, we are interested in optimizing the complete ECDSA scheme as our proposed speed-ups can also be applied for the signature generation algorithm (already stated in Section 4.2.2),not only for the verification algorithm.
Given that our FPGA implementation is work in progress, at this point, our target is to make a software implementation comparison of both the signing and the verification ECDSA algorithms.Thus, we considered the fastest (lightweight) ECDSA implementation available online [40] and modified it to include our proposed optimization from Section 3.1.The average time differences we obtained after 100 runs are presented in Table 3.Note that in the implementation at [40], the modular reduction steps are performed in a straightforward manner as opposed to our proposed double-speed Barrett optimization.Hence, the speed-up is obvious both in theory (see Section 4.2.2) and in practice (see Table 3).
Conclusions and Future Work
We briefly described Barrett's algorithm for modular reduction, Schoof's algorithm for point counting on elliptic curves, and ECDSA as an example for applying our proposed speed-ups.We presented as main results an algorithm for generating implementationfriendly ECDSA parameters and a method for finding the number of points of an elliptic curve, representing an enhancement of Schoof's algorithm.We also gave details regarding the unoptimized implementations of the previously mentioned algorithms.
Future Work
We consider that timing comparisons between our enhanced Schoof algorithm and already established implementations of SEA [41] represent an interesting idea to be tackled in the near future.
Next, a valuable idea is looking into more sophisticated AI optimizations for the mathematical computations inside Schoof's algorithm.
Another interesting research direction is the implementantion of ECDSA in cryptographic hardware while using our proposed optimizations, together with a complexity analysis of other implementations in the literature.We are currently working on such an approach using FPGA-based equipment.
Figure 1 .
Figure 1.The relationship between the number of neural network layers and the error rate of our proposed algorithm.
Figure 2 .
Figure 2. The relationship between the number of neural network layers and the reduced Hasse interval.
Table 1 .
Timing comparison between the implementation of our proposed algorithm using 7 and 9 layers, respectively.
Table 2 .
Timing comparison between the implementation of our proposed algorithm and the original Schoof algorithm.
Table 3 .
Timing comparison between a lightweight ECDSA implementation and our enhanced version of it. | 4,670.4 | 2022-10-07T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Preparing Proteoforms of Therapeutic Proteins for Top-Down Mass Spectrometry
A characteristic of many proteoforms, derived from a single gene, is their similarity regarding the composition of atoms, making their analysis very challenging. Many overexpressed recombinant proteins are strongly associated with this problem, especially recombinant therapeutic glycoproteins from large-scale productions. In contrast to small molecule drugs, which consist of a single defined molecule, therapeutic protein preparations are heterogenous mixtures of dozens or even hundreds of very similar species. With mass spectrometry, currently high-quality spectra of intact proteoforms can be obtained only, if the complexity of the mixture of individual proteoform-ions, entering the gas phase at the same time is low. Thus, prior to mass spectrometric analysis, an effective separation is required for getting fractions with a low number of individual proteoforms. This is especially true not only for recombinant therapeutic proteins, because of their huge heterogeneity, but also relevant for top-down proteomics. Purification of proteoforms is the bottleneck in analyzing intact proteoforms with mass spectrometry. This review is focusing on the current state of the art, especially of liquid chromatography for preparing proteoforms for mass spectrometric top-down analysis. The topic of therapeutic proteins has been chosen, because this group of proteins is most challenging regarding their proteoform analysis.
Introduction
The analysis of proteoforms, often also termed protein species or isoforms, is the next level in proteomics. The first comprehensive definition of this subgroup of proteins was published by Jungblut et al. [1] and Schlüter et al. [2], using the term "protein species". In 2013, Smith and Kelleher [3] introduced the term "proteoform", which today is widely accepted in the community of proteomics experts. The concept of "proteoform" is nearly identical with the concept of "protein species". The only difference is that the proteoform concept is gene-centric and the proteinspecies-concept is chemistry-centric.
For developing methods for comprehensive analysis of proteoforms, the group of therapeutic proteins is a suitable training area. Therapeutic proteins are known to be rich in the number of proteoforms. Although a therapeutic protein product is containing only trace amounts of impurities like host cell proteins, which are difficult to detect because of their very low concentration, the analysis of their proteoforms is very challenging because of their large number, their similarity and their low concentration compared to the main proteoform.
Analysis of proteoforms: challenges
The most common method in proteomics is the bottom-up or shotgun approach. It relies on the proteolytic cleavage of proteins by proteases like trypsin. The resulting peptide mixture is subjected to liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis. Proteins are identified from the LC-MS/MS data by comparing the peptide fragment spectra against in-silico fragment spectra generated from a protein database [4]. As a rule of thumb, a protein is claimed to be identified, if at least two unique peptides are identified representing parts of the sequence. Thus, often a sequence coverage of 100% is not obtained. Consequently, if this is the case, it can be only stated that a product or several products (proteoforms) of a defined gene has been identified. No information about the identity of the underlying proteoform is obtained. It can even be assumed that the identified tryptic peptides may be products of several different proteoforms. For the characterization of a therapeutic protein, bottom-up proteomics is a standard method. The signals in the LC-MS chromatograms represent tryptic peptides of all proteoforms of the therapeutic protein. A defined tryptic peptide, which is present in all proteoforms, will form one single monoisotopic signal. Its signal intensity represents the sum of this peptide from the different species. The presence of an individual proteoform only can be detected, if this proteoform will yield a tryptic peptide, a defined phosphor-peptide, which is unique for this proteoform. However, it cannot be excluded, that there are several proteoforms containing that peptide. As a result, bottom-up proteomics is helpful for getting LC-MS chromatograms which can be used as fingerprints of a therapeutic protein, but will give no information about the number and composition of proteoforms within the therapeutic protein product. The detection of a low abundant proteoform is especially difficult, since a unique tryptic peptide of such a proteoform is present in a low amount and thereby the signal in a bottom-up proteomics LC-MS chromatogram will have a low intensity. Thus, if the detection of different proteoforms is of interest, top-down mass spectrometry (TDMS) is the method of choice, because it utilizes the intact proteoform for analysis instead of proteolytic peptides.
For performing a TDMS analysis, a purified individual intact proteoform is transferred into the MS. From the MS spectrum of the intact ions, the molecular weight can be determined. Various techniques are available for fragmentation of the intact proteoform such as HCD, CID, ETD, ETHcD, ECD, UVPD and IRMPD, yielding different types for fragments, which complement each other [5]. After fragmentation, the proteoform can be identified by interpreting the fragment spectrum. There are several software tools available for analyzing the TDMS intact data [6][7][8]. The review of Schaffer et al. is recommended as an introduction into TDMS [9]. Robust protocols for mass analysis of intact proteins with TDMS were recently published by Donnelly et al. [10]. TDMS is requiring sample mixtures of low complexity for obtaining high quality spectra of proteoforms. Aebersold et al. estimated the number of proteoforms being present in the human organism in the range of approximately a billion [11]. Thus, very efficient purification steps prior to the TDMS are required to tackle the huge number of individual proteoforms in cells and tissues of body fluids. Beside the excessive number of individual proteoforms, their dynamic range is a further challenge.
Analysis of proteoforms of recombinant therapeutic proteins: challenges
Similar challenges are associated with recombinant therapeutic proteins. The importance of therapeutic proteins has been continually increasing over the past years [12,13]. Currently, several types of therapeutic proteins [14] are available in the market including monoclonal antibodies (mAbs), erythropoietin (EPO), insulin, human growth hormone and many more. Therapeutic proteins market is dominated by the monoclonal antibodies with sales of approximately $123 billion in 2017 and will be seen increasing with the upcoming biosimilar market [13]. Therapeutic proteins possess several advantages over small molecule drugs due to their higher specificity towards drug targets, which are in most cases also proteins [15]. This makes therapeutic proteins able to target specific key steps in disease pathology [16].
This group of man-made proteins has presumably a significantly higher number of proteoforms per gene than proteoforms per gene in vivo, causing a huge number of proteoforms within a single recombinant therapeutic protein (rTP) product. The heterogeneity is developing during the production of an rTP mainly in the upstream processing. The first event increasing the heterogeneity is alternative splicing [17][18][19]. The second critical step is the protein biosynthesis at the ribosomes, in which errors can occur. Proteolytic cleavage may happen at any stage after the protein has left the ribosome, not only within the host cell, but also extracellularly, if host cell proteases have not been removed by purification of the target protein.
Many therapeutic proteins like conventional monoclonal antibodies or erythropoietin [20] are posttranslationally modified by glycans. Especially, the glycan chains are adding an additional factor multiplying the heterogeneity of proteoforms. An example of a therapeutic glycoprotein is Etanercept, which is decorated with Oand N-glycans. Commercial preparations of Etanercept used as drugs show a very high degree of complexity [21]. It can be assumed that therapeutic fusion proteins applied to patients like etanercept are containing even hundreds of species, which differ in their exact composition of atoms. In addition to glycans, all other forms of posttranslational modifications are possible, depending on the nature of the protein and the type of the host cells and the upstream parameters.
Why is the heterogeneity of recombinant therapeutic proteins much higher than the heterogeneity of gene products in-vivo? Host cells used for the production of recombinant therapeutic proteins are optimized to synthesize a large excess of recombinant proteins [22]. However, increasing the expression of proteins does not usually correlate to increase in the correctly processed bioactive form of the recombinant proteins [22]. Consequently, the probability is increasing, that these overexpressed recombinant proteins are underlying errors during synthesis, side reactions of enzymes and spontaneous chemical reactions. As a result, the number of recombinant species, which have a low quality, is much higher than in a native cell in an intact organism [23]. It was reported that overexpressing recombinant therapeutic proteins is also accompanied by an increase in high molecular weight aggregates and misfolded forms [24]. Thus, it can be assumed that the cellular systems, which usually remove low-quality or incorrectly processed proteins, are swamped by these inadequate proteins [25] and thereby these species will not be processed in the cell or be eliminated. Beside the enzymatic reactions mainly taking place in the upstream-processing, chemical reactions which modify the recombinant therapeutic proteins, can occur during the whole production process including even the final product fill and finish or storage [26,27]. A very common reaction is the oxidation of methionine, which can happen on nearly every stage of the production and can affect the efficacy of the product.
Is any risk associated with the large number of species? Fortunately, severe side effects associated with species, which are not exactly identical with the target protein, have been reported very seldomly. An unfortunate case with dramatic consequences for a few patients was reported from Seidl et al. [28]. In this case, tungsten ions, a contamination which got into the glass vials during the production of the vials, induced the dimerization of erythropoietin. As a result, a few patients developed autoantibodies against erythropoietin, thereby destroying the remaining cells in these patients, which were producing the native hormone. Since a therapy with erythropoietin was not possible any more, these patients had to get blood transfusions for survival. Non-human glycan structures bound to therapeutic proteins, which can occur when producing them in mouse cells, can induce hypersensitivity reactions [29,30].
More common than severe side effects is the phenomenon that , showing even small differences in their composition of atoms compared with the target species, make the species less potent than the target species. For example, deamidation, causing a + 1 Da shift of the molecular weight, can decrease the efficacy of a therapeutic protein [31], as observed with recombinant human interleukin (rhIL)-15 [32]. Deamidation converts asparagine or glutamine to aspartic acid or glutamic acid, respectively. As a result, the polar, uncharged amides are changed into negatively charged carboxylic acids, impacting protein surface-charge density and surface hydrophobicity, thereby explaining the change of the efficacy of a therapeutic protein. Deamidation of asparagine can occur spontaneously at physiological pH of 7.4 [32]. A further important modification of proteins is the disulfide bond (S-S), which is formed by the oxidation of thiol groups (SH) between two cysteine residues resulting in a covalent bond [33], which is decreasing the molecular weight of a protein by 2 Da. Disulfide-bonds have an impact on protein stability as well as on activities [33]. Du et al. stated that during the manufacturing process, extensive reduction of antibodies has been observed after harvest operation or Protein A affinity chromatography and multiple process parameters correlate to the extent of the reduction [34]. The topic "disulfide bonds of therapeutic proteins" is in depth discussed by Lakbub et al. [35].
More details about sources and effects of microheterogeneity are described in the excellent reviews of Beyer [36] and Ambrogelly [37].
How large are the differences of the individual proteoforms of a therapeutic protein? Proteofroms can vary in all chemical properties known, such as size, isoelectric points (pI) [38] and hydrophobicity [39]. The pIs of recombinant erythropoietin varies from pH 3.5-6 [38,40]. Therapeutic proteins are characterized by the presence of size variants arising from the manufacturing process or storage conditions when exposed to chemical, physical or conformational stress [41]. These size variants may include the N terminus clipped proteins, truncated forms, fragments representing sub molecular weight species or improperly assembled therapeutic proteins. The formation of dimers or multimers, in which more than two monomers are forming a complex, is a problem, which many therapeutic proteins are associated with [42]. Such aggregates can induce adverse immune responses in patients [43]. The proteoforms of recombinant erythropoietin are varying within a range of 4-6 kDa [20]. Beside these larger differences in size, the composition of atoms of many proteoforms derived from one single gene can be very similar within subtypes of proteoforms such as the family of acidic proteoforms. As a result, the separation of charge variants by ion exchange usually is successful but the composition within a single fraction might not only contain one single but also multiple proteoforms [44].
Separation of proteoforms of therapeutic proteins with liquid chromatography
Liquid chromatography (LC) is the most common for purification and fractionation of therapeutic proteins [37]. The proteoforms are either separated by sizeexclusion (SEC), making use of different path lengths through chromatographic particles related to the size of the proteins, or by adsorption chromatography. The latter is applying the principle of separation of molecules by their different velocities during crossing a column filled with chromatographic particles. The velocities are proportional to the affinities of the molecules towards the stationary phase of the stationary phase. Depending on the chemistry of the functional groups of the stationary phase, different forms of liquid chromatography are possible based on adsorption to the stationary phase, highlighted in bold in Table 1. Table 1 is giving an overview about the different types of separation methods and their frequency of application with a focus on therapeutic proteins and in addition with respect to proteoforms. The numbers of column 2 compared with column 3 clearly show that the topic of proteoforms is not yet addressed very often. The selected reviews will give deeper insights into the different separation methods.
Affinity chromatography using chromatographic material derivatized with protein-A is the most common and effective method for the purification of recombinant monoclonal antibodies [45]. For the separation of proteoforms of recombinant monoclonal antibodies, it is not very relevant.
Ion exchange chromatography (IEX): charge variants of therapeutic proteins such as acidic or basic species can be separated with ion exchange chromatography (IEX) [46]. IEX of proteins can be performed with oppositely charged ionic group on the stationary phase as either anion exchange or cation exchange chromatography. Elution buffers are decreasing electrostatic interactions of the proteins with IEX material thereby decreasing the affinity of the protein towards the stationary phase. Elution can be either pH or salt based [47]. Salt-based elution is used for IEX with ultra violet (UV) online detection. Coupling IEX directly with MS is only possible if the elution buffer system is volatile [48]. Acidic species are often related to PTM's like sialic acid or deamidation on asparagine, while basic variants are formed by aspartate isomerization, succinimide formation, variants of C terminal lysine and N terminal glutamine [49]. IEX is giving relative quantitative information about charge variants which can be important for the qualification of manufacturing batches [50].
Hydroxyapatite-chromatography (HAP) is based on a material consisting of the crystals of calcium hydroxyapatite, described by the formula Ca 5 (PO 4 ) 3 (OH). HAP can be described as mixed-mode chromatography. The Ca 2+ −ions can act via electrostatic interactions as anion-exchanger. Also, metal coordination bonds of carboxylic groups can be formed with the Ca 2+ −ions. With the anionic phosphate groups of HAP, positive-charged molecules will be adsorbed by electrostatic interactions. Phosphate-, chloride-ion-, and calcium-ion-gradients are common as well as multi-component gradients [39]. Therefore, finding appropriate eluents is more difficult than with anion-exchange chromatography. However, screening systematically appropriate parameters of eluent systems should offer the chance to separate proteoforms. As indicated in Table 1, HAP is not very often applied for the chromatography of therapeutic proteins, which may be associated with the fact that it is more complex to find optimal elution systems.
Hydrophilic interaction chromatography (HILIC) is making use of high affinities of polar and hydrophilic molecules to hydrophilic stationary phase [51,52]. Usually the sample application buffer has a high content (>80%) of an organic solvent like acetonitrile. Thus, it is working well for glycans. However, proteins under these conditions may precipitate. If proteoforms will not precipitate, HILIC is an interesting alternative for other forms of adsorption chromatography, especially, if precipitation proteoforms will be removed from the proteoforms of interest.
Hydrophobic interaction chromatography (HIC) is yet another method which can be used for separating different proteoforms of a therapeutic protein. These separations rely on the varying hydrophobicity profiles due to change in conformation of the protein HIC separations use reverse salt gradients and can operate in nondenaturing mode [53]. HIC was presented as a reliable method for monitoring oxidation of tryptophan residues in complement determining region (CDR) of recombinant mAbs [54]. HIC is effective in resolving the proteoforms of antibody drug conjugates varying in drug to antibody ratio [55]. Charge variants coeluting with IEX can be resolved with HIC in the second dimension of separation. Douglas and colleagues demonstrated the separation of carboxy terminal variants, isomerization variants with HIC which could not be resolved at the IEX level [56]. Quantitative information of the succinimide variants was given by HIC with TSKgel butyl-NPR column [57]. Similar application can be also found in detection of impaired disulfide bonding. Typical HIC buffers like ammonium sulfate are requiring desalting of the proteins prior to the MS [58]. Recently, direct coupling of HIC with MS for detailed characterization of mAbs was demonstrated by applying a volatile ammonium acetate buffer [53].
Immobilized metal-affinity-chromatography (IMAC) is widely used for enriching recombinant proteins with histidine tags from a protein extract from host cells. For production of therapeutic proteins, IMAC is not very often used, because metal ions are bleeding into the product. Metal ions like nickel or copper are critical for patients. For the separation of subgroups of proteoforms for analytical purposes also IMAC is an option.
Mixed mode chromatography (MM) is performed with stationary phases which consist of at least two different functional groups [59], like hydroxy apatite (see above). Consequently, a MM material offers two or more types of chromatography. HAP is combining anion exchange (AEX), cation exchange (CEX) and IMAC. Also, with SEC mixed mode chromatography is possible, as described by Schlüter et al. [60]. In that study the electrostatic interaction induced by anionic sugars, which are part of a dextran polymer, were used to separate vanillylmandelic acid, glycine and phenylalanine from each other with a SEC column, which is usually applied for the separation of proteins in the range of 10-100 kDa. Mixed mode chromatography is not very often described for the chromatography of therapeutic proteins ( Table 1), but it has a huge potential for the separation of proteoforms. For successful separations a rational screening of appropriate parameters is recommended.
Size exclusion chromatography (SEC) is a gold standard for monitoring the presence of aggregates of therapeutic proteins. SEC uses porous stationary phase material wherein the size variants are separated based on the differential access to the pores of the SEC material resulting in different path lengths in relationship to the size [61,62]. SEC is effectively separating low molecular weight and high molecular weight species in mAbs [63]. SEC has found many applications like stability testing [64], quality control during manufacturing [65], in depth characterization of antibody-drug-conjugates (ADC's) [66] and assessing aggregate content in biosimilarity studies [67]. However, resolution of SEC is rather poor to clearly distinguish individual size variants. Non-specific adsorption to the SEC material can result in peak broadening thereby decreasing resolution. This problem may be minimized by use of organic modifiers in mobile phase or adjusting the pH in relation to the pI of therapeutic protein [61]. Advances in the chemistries of stationary phases incorporating very small core-shell particles or the use of sub-micron particles are improving the resolution of SEC columns [61].
Reversed phase liquid chromatography (RPLC) mainly exploits the differences in hydrophobic properties of molecules for their separation. Sample application onto RPLC columns is performed with eluents having a high content of water, supporting a high affinity of the molecules in the sample towards the stationary phase, which is hydrophobic. Elution is achieved with gradients increasing the concentration of organic solvents in the eluent. Coupling RPLC with the high sensitivity detectors can provide qualitative and quantitative information of the cleaved, modified proteoforms along with main form [68]. Ambrogelly et al. reported RPLC as a method giving a first-hand check of the product quality to help in optimizing the purification strategy [69]. When coupled to high resolution mass spectrometric detection, RPLC also allows distinction of the major glycoforms. More than a decade ago, Dillion presented RPLC not only for determining the intact mAb glycosylation profile but only with the use of high temperature and organic solvents with high eluotropic strength coefficients [70]. Many advancement to conventional RPLC columns have come up in recent times to improve the separation of large therapeutic proteins at milder conditions [71].
The major concern in the use of RPLC for protein separations is the presence of organic solvents, which may precipitate proteins. Since precipitation will occur on the column, it is very difficult to recognize. In the case of proteoforms, it can be assumed that some may be more prone to precipitation than others. As a result, the chromatogram, in which signals from some but not all proteoforms are present, may be misinterpreted since the chromatogram is giving no information about the proteoforms which got lost by precipitation. TDMS protocols often apply RPLC for the analysis of proteoforms, because those species, which elute, are present in a liquid, which is optimal for electrospray ionization (ESI). Because of the problem with precipitation of proteins in RPLC in all TDMS approaches the question is how representative the TDMS chromatogram is regarding the original composition of proteoforms or vice versa how many proteoforms got lost during RPLC.
Elution modes of liquid chromatography: beside the different types of stationary phases, different elution modes are existing, which have an impact on the separation of molecules, namely isocratic elution, gradient elution (GE) and displacement elution (DE). DE is typically using the same sample application buffer and adsorption chromatography materials as gradient elution. In contrast to GE, DE is not using a salt gradient with an increasing concentration of a salt having a low affinity towards the stationary phase, but the elution buffer of DE is consisting of the sample application buffer, into which the displacer is added. The displacer ideally should have an affinity to the stationary phase higher than any of the sample components. After the sample application onto the column is finished, the eluent containing the displacer is immediately pumped onto the column. At the beginning, the displacer molecules are binding strongly to the top of the column, thereby displacing the sample component with the highest affinity. These sample components then displace the sample components with a lower affinity and so on. By this process, bands are formed moving down the column, driven by the displacer. The DE is finished, as soon as the displacer has saturated the stationary phase of the column completely. Within a band a high purity of the component is achieved [72]. DE has been shown to be suitable for separation of complex mixture of tryptic peptides [73][74][75] and proteins [76][77][78][79]. One of the characteristics of DE is that DE has a different selectivity compared with GE [77]. This is one important argument for using DE for the separation of proteoforms. Thus, it is not surprising, that DE has been applied to the separation of proteoforms of therapeutic proteins successfully [46,[80][81][82][83][84].
Rational screening of parameters of liquid chromatography is recommended for optimal results of the separation of proteoforms. The first method describing multi-parallel high-throughput screening for parameters of liquid chromatography Preparing Proteoforms of Therapeutic Proteins for Top-Down Mass Spectrometry DOI: http://dx.doi.org /10.5772/intechopen.89644 was published 2002 by the group of Cramer [85]. In this case, the authors screened for displacers for ion-exchange systems. In the following year the group reported a multi-parallel high-throughput screening for displacers based on batch chromatography [86]. Thiemann et al. published a similar approach termed proteinpurification parameter screening system (PPS), which was not focusing on the identification of appropriate displacers but more general on any kind of parameters for adsorption chromatography, independent of the elution mode [87]. The PPS was successfully applied for purification and identification of an angiotensin-II generating enzyme [88], and for screening for parameters for optimal displacement chromatography of proteins [78,79]. Rational screening was also used for developing a displacement chromatography of proteoforms of a recombinant protein with HIC [89].
Separation of proteoforms of therapeutic proteins with capillary electrophoresis
Compared with liquid chromatography, capillary electrophoresis (CE) offers better resolving power. CE techniques such as capillary zone electrophoresis (CZE), capillary gel electrophoresis (CGE) and capillary isoelectric focusing (CIEF) have been adapted for the separation and characterization of proteins [90,91]. These are basic techniques routinely used for quality control [91]. With CGE, the size of proteins is characterized, while in CIEF, proteins are separated according to their isoelectric point (pI). CIEF is using pH gradients formed by carrier ampholytes in a capillary [92]. It is important to note that pH plays a major role in CZE and should be well maintained [93]. Considerable protein adsorption must be considered when performing CIEF and CZE. The interaction of the analytes with the surface of the capillary may compromise the resolution, peak widths and shapes when using conventional bare fused-silica capillaries. Minimizing adsorption can be done by using better coating material or using reagents that reduce adsorption [94]. A penetrated surface layer protein A from bacteria was reported as capillary coating. The coating could be used for over 100 injections without loss of separation performance [95]. Another study reported that adsorption still happened when using LPA-coated capillary [96].
CZE and CIEF are more often used for separations of charge variants induced by C-terminal lysine truncation, N-terminal pyroglutamate formation, sialylation and deamidation [97].
The direct coupling of CE with MS is technically challenging regarding the CE-MS interface [98]. A study demonstrated a successful attempt to directly couple CIEF with mass spectrometry for characterization of transtuzumab, bevacizumab, cetuzimab and infliximab by optimizing the reagent, liquid composition and enhanced sample mixture by glycerol to reduce non-CIEF electrophoretic mobility and band broadening [99]. A CZE method was developed for the intact analysis of recombinant human interferon-β1 (rhIFN-β1). The charged species due to deamidation and sialylation were sufficiently separated. In contrast to dynamic polymeric coatings, such as polybrene or hydroxypropyl-methylcellulose, they covalently coated the bare-fused silica capillary with cross-linked polyethyleneimine (CPEI) to get positively charged surface, thus reducing the possibility of protein interaction with the coating. They then coupled this CZE to ESI-MS/MS and identified 138 proteoforms, of which, 55 were quantified.
For the in-depth characterization of the composition of proteoforms of a therapeutic protein CE online-coupled to MS is a good option, if prior to the CE, the mixture of proteoforms has already been fractionated by LC using separation mechanisms orthogonal to the CE separation mechanism.
Conclusion
A huge progress has been made in the field of TDMS, allowing the identification and comprehensive analysis of the composition of atoms of proteoforms, especially if they are smaller than 30 kDa. TDMS analysis of larger proteoforms still is more challenging. However, until today the most critical point is the purification of a proteoform towards near homogeneity or at least the significant reduction of complexity of the sample, which is desorbed and ionized into a tandem mass spectrometer for TDMS. A low complexity of the composition of a protein mixture entering the MS still is mandatory for getting high quality spectra. Thus, efficient separation methods are needed for obtaining fractions with low complexity. For developing strategies for separating proteoforms, therapeutic proteins are well suited, however challenging because of their heterogeneity. In depth separation of the proteoforms of a therapeutic protein requires the combination of fractionation techniques based on orthogonal mechanisms. In addition, the combination of gradient chromatography and displacement chromatography will add further opportunities for successful separations.
Conflict of interest
The authors declare no conflict of interest. | 6,547.8 | 2019-10-28T00:00:00.000 | [
"Biology"
] |
Some natural aqueous extracts of plants as green inhibitor for carbon steel corrosion in 0.5 M sulfuric acid
ABSTRACT The inhibiting impact of natural aqueous extracts of some plants such as curcumin, parsley and cassia bark extracts for the corrosion of carbon steel (C-steel) in 0.5 M H2SO4 solution was inspected utilizing some techniques such as galvanostatic and potentiodynamic anodic polarization and weight loss measurements. Outcomes indicated that the percentage inhibition efficiency increases with increasing the concentration of the extract due to its horizontal adsorption on the C-steel surface. The process of adsorption is followed by the Temkin isotherm. These natural extracts acted as pitting corrosion inhibitors by shifting the pitting potential to more noble values. The sequence of inhibition efficiency of the natural extracts decreases in the following order: cassia bark extract > parsley extract > curcumin extract. This arrangement is related to the molecular size of the major components of the three natural extracts used. GRAPHICAL ABSTRACT
Introduction
Sulfuric acid is used in many industrial applications, for example pickling and chemical cleaning of steel. Unluckily, the acidic solution caused the corrosion damage. The addition of the inhibitors is one of the effective methods used to protect the steel from the corrosive acid attack. Most of the inhibitors used are inorganic or organic compounds containing heteroatoms (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16). These compounds are more efficient for the protection of steel from corrosion damage by its adsorption on the steel surface. Unfortunately, most of the synthetic compounds are toxic and cause damage to public health and the environment. Therefore, most researchers now tend to use some natural extracts for some plants to inhibit the corrosion attack of some metals and alloys in aqueous solutions (17)(18)(19)(20)(21)(22)(23)(24)(25)(26).
In the previous work, aqueous extract of the leaves of henna (Lawsonia) was studied as an inhibitor for the corrosion of carbon steel (C-steel) in acidic solutions (27). The inhibiting action of the extract is discussed in view of adsorption of the complex formed between metal cations and Lawsonia molecules on the steel surface. Also, Guar gum (28) and some natural oils such as rosemary (29), parsley, lettuce and radish oils (30) are used as corrosion inhibitors for C-steel in aqueous solutions.
The purpose of this manuscript is to find an environmental-friendly, nontoxic, inexpensive and harmless to human health, efficient inhibitor created from the aqueous extract of some plants such as curcumin, parsley and cassia bark extracts to inhibit the corrosion of carbon steel in 0.5 M H 2 SO 4 solution. The study is conducted by some techniques such as galvanostatic polarization, potentiodynamic anodic polarization at scanning rates of 1 and 50 mV/s and weight loss measurements.
Electrochemical measurements
The experiments were performed with C-steel of type L-52 used in Egyptian petroleum pipelines and has the following chemical composition (wt-%): 0.14 C, 0.6 Mn, 0.05 S, 0.04 P and the rest is Fe.
A cylindrical rod entrenched in Araldite with exposed surface area of 1 cm 2 was used for the electrochemical measurements such as galvanostatic and potentiodynamic polarization measurements. The exposed area was refined with different emery papers starting from coarser to finer, followed by degreasing with acetone and finally washed with distilled water twice, just before insertion in the electrolytic cell. The experiments were performed at the 23 ± 1°C using an air thermostat. The cell used in the electrochemical measurements contains three electrodes, C-steel as the working electrode, saturated calomel reference electrode (SCE) and a platinum foil auxiliary electrode.
The galvanostatic and potentiodynamic polarization at scan rates of 50 and 1 mV/s experiments were carried out using a PS remote potentiostat with PS6 software for calculation of some corrosion parameters.
Weight loss measurements
The weight loss measurements were accomplished in large test tubes suspended in a thermostated water bath. Each tube was open to air. In each experiment, 50 ml of the test solution was used. The test species were cut into 1.0 × 2.0 × 0.3 cm elements. Treatment of carbon steel coupons such as those mentioned in the electrochemical measurements. The cleaned C-steel coupons were weighed before and after immersion in 50 ml of the test solution for a period up to 8 h. The average weight loss for each identical experiments was taken and expressed in mg cm −2 .
Natural extracts
The natural extracts were obtained from a natural product company, Cairo, Egypt.
The main components in the three natural extracts are given in Table 1.
Results and discussion
3.1. Galvanostatic polarization Figure 1 clarifies the impact of different concentrations of the cassia bark extract on the galvanostatic anodic and cathodic polarization curves of C-steel electrode in 0.5 M H 2 SO 4 solutions. Analogous curves were also obtained for the parsley extract and curcumin extract, but it is not shown here. Some corrosion parameters were anodic (b a ) and cathodic (b c ) Tafel constants, corrosion potential (E corr ), corrosion current density (i corr. ), surface coverage (θ) and the percentage inhibition efficiency (%IE). The corrosion parameters were calculated from the intercept of the anodic and cathodic Tafel lines and are present in Table 2. The percentage inhibition efficiency (%IE) and surface coverage (θ) were calculated using equations (1) and (2): where I uninh. and I nh are the corrosion current densities in uninhibited and inhibited solution, respectively. It is evident that from Figure 1 and Table 2, as the concentrations of the extract of curcumin, parsley and cassia bark increase, the polarization curves shift toward more negative potential and lower current density values. The values of b a and b c are diverse a bit, indicating the inhibition impact of these extracts by its adsorption on the C-steel due to blocking adsorption mechanism (31). Also, these natural extracts are categorized as a mixed type inhibitor. The values of E corr are slightly shifted toward negative direction, the values of i corr are lowered and the values of θ and consequently %IE increase. These results confirm the inhibitory effect of these extracts. The efficiency of inhibition of natural extracts is reduced in the following order: cassia bark extract . parsley extract . curcumin extract.
Potentiodynamic anodic polarization
The potentiodynamic anodic polarization curves of C-steel electrode in 0.5 M H 2 SO 4 containing different concentrations of cassia bark extract at a scanning rate of 50 mV/s is presented in Figure 2. Analogous curves were also obtained for the parsley extract and curcumin extract, but are not shown here.
It is obvious from this figure that there is only one anodic peak (A) followed by passive region before O 2 evolution.
Peak (A) was explicated due to the active dissolution of Fe into Fe +2 ion according to a previous mechanism (32). At certain potential, the current drops to low values, indicating the onset of passivity. The electrode is considered now to be covered by a passivating oxide film of mainly ferric oxide (γFe 2 O 3 ). This oxide can be created in the film covering the electrode surface by the oxidation of Fe 2+ ions according to Abdallah and Megahed (33): As the concentration of the extract increases, the passive film increases as shown in Figure 2. As the potential become positive, the current rises again due to the evolution of oxygen according to The values of peak current density (i P ) and the peak potential (E p ) are calculated and listed in Table 3.
The percentage inhibition efficiency (% IE) was calculated from the following equation (34): where I p(inh) and I p(uninh) are the peak current densities in the presence and absence of inhibitors. Inspection of the curves of Figure 2 and Table 3 indicates that, as the concentration of natural extracts increases, the values of E p are shifted to more positive values and I p to lower values and the values of %IE increases. This indicates an increased resistance to the active dissolution C-steel.
The values of % IE of the natural extracts decrease in the following order: cassia bark extract . parsley extract . curcumin extract. Figure 3 represents the effect of the addition of different concentrations of the natural cassia bark extract on the potentiodynamic anodic polarization curves of C-steel electrode in 0.5 M H 2 SO 4 containing 0.5 M NaCl as pitting corrosion agent at a scanning rate of 1 mV/s. Analogous curves were also obtained for the parsley extract and curcumin extract, but are not shown here. It was found that the pitting potential of the C-steel electrode shifts to more positive (noble) values with increasing the concentration of these natural extracts. This indicates an increased resistance to pitting attack (35,36). Figure 4 represents the relationship between pitting potential and the logarithm of the molar concentration of the added compounds. Straight lines were obtained and the following conclusions can be drawn:
Natural extract as pitting corrosion inhibitors
The increase of inhibitor concentration causes the shift of the pitting potential into more positive values in accordance with the following equation (28,37): where a 2 and b 2 are constants which depend on both the composition of additives and the nature of the electrode. Inhibition afforded at the same concentrations of the natural extracts decreases in the following order: cassia bark extract . parsley extract . curcumin extract.
Weight loss measurements
The effect of increasing concentrations of the natural cassia bark extract on the weight loss of C-steel electrode in 0.5 M H 2 SO 4 solution is presented in Figure 5. The same curves were obtained for the parsley extract and curcumin extract, but are not shown. As shown from these figures, it is obvious that by increasing the concentration of the natural extract, the weight loss of C-steel is decreased. This means that the presence of these extracts retards the corrosion of C-steel in 0.5 M H 2 SO 4 solution. The linear relationship obtained in Figure 5 indicates the absence of insoluble surface film during corrosion. In the absence of any surface films, the inhibitors are first adsorbed onto the metal surface and thereafter affect corrosion.
The corrosion rate values (k) (mg cm −2 min −1 ) were calculated from equation (7): Corrosion rate (k) = loss in weight (mg cm −2 ) time (min) . (7) The inhibition efficiency (%IE) and the surface coverage (θ) of the natural extract were calculated from the following equations: where, K inh and K uninh are the corrosion rate in devoid of and containing the natural extracts, respectively. The values of k, θ and %IE for various concentrations of three natural extracts are given in Table 4. The obtained data from Table 4 reveal that the inhibition efficiency of these natural extracts is arranged in the following order: cassia bark extract > parsley extract > curcumin extract.
Adsorption isotherms and explanation of inhibition
The inhibition of general and localized pitting corrosion of C-steel in 0.5 M H 2 SO 4 solution by some natural extracts, e.g. curcumin extract, parsley extract and cassia bark extract, were inspected. The primary step of the inhibitory action of these aqueous extract toward the corrosion of Csteel in 0.5 M H 2 SO 4 solution is usually by the adsorption of these extracts on the steel surface The adsorption of the natural extracts on the steel surface is considered as an alternative adsorption process between the natural extract compounds in the aqueous solution (Ext aq ) and the water molecules adsorbed on the C-steel surface (H 2 O) where x is the ratio of the number of water molecules substituted by one molecule of extract adsorbate. The adsorption process depends on the chemical structure of the extract and the presence of the active group in it, the number of adsorption active centers in the molecule and their charge density, molecular size, mode of adsorption, the nature of the metal surface used, temperate, type of the corrosive acidic solution, and the potential of the metal-solution interface Trials were made to insert the θ values to some adsorption isotherm such as Temkin, Frumkin, Frundich and Langmuir isotherms. In the present work, the adsorption process obeys the Temkin's adsorption isotherm (38), according to the below equation: where K is the equilibrium constant of the adsorption process, a is the molecule interaction parameter and C is the inhibitor concentration in the bulk solution. Figure 6 clarifies the relation between θ and log C for C-steel in the presence of natural extracts. Straight lines were obtained indicating that the adsorption of these natural extracts obeys Temkin's adsorption isotherm. The natural extracts block the reaction sites on the surface of C-steel samples by adsorption and reduce the available area for further corrosion reaction.
The values of %IE obtained by galvanostatic and potentiodynamic anodic polarization and weight loss techniques indicate that the extent of inhibition efficiency of the natural extracts toward the corrosion of C-steel in 0.5 M H 2 SO 4 solution obeys the following order: cassia bark extract . parsley extract . curcumin extract.
The inhibiting vigor of natural extracts could be interpreted by strong blocking adsorption on the C-steel surface due to the high molecular weight of these natural extracts. We expect horizontal adsorption of its components of the natural extracts on the C-steel surface. The adsorbed layer acts as a barrier between the C-steel surface and corrosive H 2 SO 4 solution leading to a decrease in the corrosion rate. This difference in the inhibition efficiencies could be explained on the basis of the molecular size. The three natural extracts had a high molecular size and this led to facilitate the adsorption process and hence increase the surface coverage.
The values of %IE which were evaluated for the three natural extracts used toward the corrosion of carbon steel in 0.5 M H 2 SO 4 solution using different techniques show an agreement and conformity of the experimental results. However, there is a small difference in the values obtained from the different techniques. This observed discrepancy could be attributed to the differences of the experimental condition.
Conclusions
1. Curcumin, parsley and cassia bark extracts inhibit corrosion of C-steel in 0.5 M H 2 SO 4 solution. 2. The inhibition efficiency of the three extracts used increases with increasing the concentration of the extract. 3. The polarization curves proved that the natural extracts act as a mixed inhibitor. 4. The inhibition was explained in view of its horizontal adsorption on the surface of the steel. 5. The process of adsorption follows Temkin's isotherm. 6. The order of the inhibition efficiency depends on the molecular size of the major component of three natural extract used. 7. The natural extracts inhibit the pitting corrosion of C-steel by shifting the pitting potential to more noble values.
Disclosure statement
No potential conflict of interest was reported by the authors. | 3,441.2 | 2018-04-12T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
Perovskite neural trees
Trees are used by animals, humans and machines to classify information and make decisions. Natural tree structures displayed by synapses of the brain involves potentiation and depression capable of branching and is essential for survival and learning. Demonstration of such features in synthetic matter is challenging due to the need to host a complex energy landscape capable of learning, memory and electrical interrogation. We report experimental realization of tree-like conductance states at room temperature in strongly correlated perovskite nickelates by modulating proton distribution under high speed electric pulses. This demonstration represents physical realization of ultrametric trees, a concept from number theory applied to the study of spin glasses in physics that inspired early neural network theory dating almost forty years ago. We apply the tree-like memory features in spiking neural networks to demonstrate high fidelity object recognition, and in future can open new directions for neuromorphic computing and artificial intelligence.
were applied to a pristine perovskite nickelate device, and no resistance change was observed, indicating that the resistance change observed in the hydrogen doped devices is due to the proton motion under electrical pulses. Figure 6. Schematic of procedure for tree branch generation and measured experimental data. (a) Branch 1 is generated by applying consecutive constant electric field pulses to the device. To generated branch 2, the device is first reset to the original resistance state by applying an electrical pulse of the opposite polarity, as shown in (b); then consecutive pulses with the same pulse field as branch 1 is applied, and the device resistance change follows the same path to reach point A, at which point larger consecutive constant pulses are applied to generate a new branch 2. Similarly, multiple branches can be generated. Representative experimental data collected from our nickelate device is shown in the bottom figure. (b) A single reset pulse with opposite polarity (0.03 V/nm, 1 ms) is used to reset the device back to the original state from a programmed state. Figure 7. Controlled synaptic weight updating. Controlled weight updating can be observed in the nickelate devices under consecutive pulses with multiple pulse widths (pulse field -0.027 V/nm). After ~175 pulses, the resistance change (Rn+1-Rn) is less than 0.15%. The saturation behavior of the synaptic strength for consecutive e-field pulses of same magnitude is similar to what is observed in biological synapses and is considered to be a crucial feature to maintain stability of neural circuits in the brain. 1,2
Supplementary Figure 8. Experimental data showing multiple generations of the tree branch structure.
By increasing the stimulation pulse field and/or pulse width, the resistance branch can be generated over multiple generations, indicating sophisticated neural tree structures can be made possible with the perovskite nickelate devices. This is due to the synergistic combined effects of (a) sensitive dependence of the channel resistance to proton distribution due to charge localization as well as the (b) ability to control the migration of protons at near-atomic scale via electric fields. Figure 9. Algorithmic simulation of nickelate device characteristics for object recognition. The experimentally obtained resistance curves are normalized between 0 to 1 for algorithmic interpretation of resistance as synaptic weight to be used in the neural network. This normalized curve is compared to weight change curves given by equation (1) for different u values. The input voltages which causes the resistance change in device are also appropriately scaled to the input for ∆w curves for curve fitting. Blue curve represents the device resistance change (experiment) and simulated curve used for digit recognition is shown in red. . A 10 mm × 10 mm NdNiO3 thin film was connected to a working electrode and fixed onto a sample stage. 0.01 M PBS electrolyte was added on the film dropwise until fully covering the surface of the film. A Kapton film was then used to cover the electrolyte to avoid spillage during measurement. A Pt wire and customized Ag/AgCl electrode were also immersed in the electrolyte as counter and reference electrode. After each pulse treatment, the X-ray absorption spectroscopy signals were collected in-situ. (b) Schematic figure of how multiple pulses were applied on the nickelate film during potentiation (-500 mV, 30 s, 10 pulses) and depression (+500 mV, 30 s, 10 pulses). (c) Evolution of electrical resistance ratio (R/Ro×100%) during the potentiation and depression process. After the application of 10× potentiation pulses, the electrical resistance of film increased, suggesting the formation of insulating phase upon proton and electron uptake. When the bias with opposite polarity was applied, the resistance of NdNiO3 film decreased. (d) (e) and (f) A set of conducting atomic force microscopy (CAFM) images of NdNiO3 before and after potentiation/depression pulses treatment. (d) for pristine sample, (e) for sample after potentiation and (f) for sample after depression. (g) The normalized pre-edge hump area evolution (A/Apristine ×100%) during in-situ treatment (arrow to the left), and the energy shift (vs. pristine NdNiO3) of the white line peak of XANES spectra (arrow to the right). Upon potentiation, protons from the electrolyte were taken by the NdNiO3 film and the Ni valence changed near-surface from Ni 3+ to Ni 2+ , leading to decrease of the pre-edge hump area as well as negative shift of the white line peak of XANES spectra. Upon bias application of reverse polarity, Ni 2+ changed back to Ni 3+ and the opposite trend was observed. (h) Raw data of the in-situ Ni Kedge XANES spectra. The box with dash line is the pre-edge region. (i) Zoom-in figure of pre-edge area of in-situ XANES spectra. This independent set of experiments conducted at the APS enables us to understand, verify and calibrate the XAS curves from pristine versus doped regions of the film, and aids in further understanding the nano-probe XAS experiments which requires great care in setting up and sample alignment.
Supplementary Note 1. Network Architecture
The network architecture used in this work is a two-layer spiking neural network, as shown in Supplementary Figure 15. Each neuron is assigned to each pixel of the input image. Depending on the pixel intensity value, the neuron outputs a Poisson distributed spike train. The duration of Poisson distributed spike train in our simulation was 350ms. One millisecond corresponds to one single time-step for the simulations. Therefore, 350ms duration is equal to 350 time-steps. Excitatory layer receives spikes from input layer and depending on the neuron model the membrane potential of these neurons changes. Excitatory layer neurons are connected to input layer via synapses. These synapses propagate the spikes from input layer to excitatory layer and also changes its strength depending on the synapse learning rule. Each inhibitory neuron receives connection from one neuron from excitatory layer and connects back to all other excitatory neurons. The number of inhibitory neurons is equal to number of excitatory neurons. The purpose of this layer is to provide lateral inhibition for competitive learning.
Neuron Model
Leaky-integrate-and-fire (LIF) neuron model is used as the excitatory neuron. The differential equation form of this neuron model is as following, τ is membrane potential decay constant, Vmem is membrane potential of neuron, Erest is resting membrane potential, conductance values for excitatory and inhibitory synapses are ge and gi, Eexe and Einh are equilibrium potential of excitatory and inhibitory synapses.
A dynamic conductance change model was used for synapses. i.e., when a pre-synaptic neuron fires, the synaptic conductance instantaneously changes according to their strengths and then decays exponentially with a time constant. 3 So, if pre-synaptic neuron is inhibitory in nature and spikes, then the conductance gi of synapse is updated. To have direct control over membrane potential, ge of excitatory neuron is kept zero. Whenever a pre-synaptic spike occurs, if pre-synaptic neuron is excitatory then membrane potential of post-synaptic neuron is updated directly and if pre-synaptic neuron is inhibitory, then gi is updated. τgi is the time constant of inhibitory post-synaptic potential.
As the membrane potential of neuron increases with the incoming spikes, it generates spikes if the membrane potential reaches its threshold value Vthresh and becomes in-active for certain time period trefrac i.e, membrane potential is reset to the resting potential Vrest and neuron's membrane potential does not change during this time trefrac. These neuron dynamics are illustrated in Supplementary Figure 16.
Synapse learning
Spike time dependent plasticity (STDP) is used as learning rule for synapses. The synapses maintain two parameters, first is its weight (strength) and second is the spike trace. Spike trace keeps track of spiking activity of a pre-synaptic neuron. Value of trace is updated by 1 whenever there is presynaptic spike and it decays exponentially.
xpre is the pre-synaptic trace and xtar is the threshold of trace. When xpre is greater than xtar , it will cause potentiation and xpre less than xtar will cause depression. wmax is maximum weight which can be attained by synapse and 'η' is learning rate.
(wmax − w) u factor ensures that the amount of change in synapse weight saturates and it approaches wmax, thereby acting as weight controlling factor. The exponent u controls the rate of saturation as the weight change occurs. Higher u corresponds to slow weight saturation i.e, higher u forces small changes to synapse weight, therefore it takes longer to reach wmax.
Training:
Spiking neural network (SNN) is trained on the Modified National Institute of Standards and Technology database (MNIST) dataset. 4 The MNIST dataset has a collection of total 70000 grey scale images of single digits. Each image is 2828 pixel data. 60000 images are used for training the network and 10000 for testing the network's prediction accuracy. Spike time dependent plasticity rule (STDP) rule is used as unsupervised learning rule. Each training image is shown to the network for 350 timesteps. Each timestep represents 1ms, therefore total 350ms. The input image pixel values are converted to Poisson spike train in the input layer. These spike trains are then propagated to excitatory layer neurons through synapses. The potential of excitatory neurons (Vmem) increases as they receive spikes and once the potential reaches threshold (Vthreshold) the neuron spikes. The weight of all the synapses connected to the neuron which spiked are updated. This update happens using STDP learning rule which in turn uses the pre-synaptic neuron's spiking activity trace xpre. If the pre-synaptic neuron trace is greater than xtar, then the synapse is potentiated, else synaptic depression occurs. Synaptic weights are always updated when a post-neuron spikes i.e, whenever an excitatory neuron spikes. The inhibitory neuron receives spikes from an excitatory neuron and connects back to all other neurons in excitatory layer. This is done to encourage competitive learning between neurons. So whenever an excitatory neuron's spiking activity increases, it causes inhibitory neuron to spike and these inhibitory spikes reduces the membrane potential of other neurons, thus causing a decrease in spiking activity of other neurons.
Testing:
At the end of training, each neuron is assigned a tag of a digit i.e, each excitatory neuron now represents a digit. So if the number of spikes generated by a particular neuron is more than all the other neurons when a testing digit is shown to the network, then the tag represents the digit that is recognized. The processes of identifying the digits to be assigned to each excitatory neuron is started when we are 5000 images away from completing training of the network. During this period, number of spikes of each neuron for each image is stored. After every 500 images, all the neurons are assigned digits and this occurs for the final 5000 images. For every 500 images shown, the spiking rate of each neuron for each digit is sampled and the neuron with maximum spiking rate or group of neurons whose spiking rate is above a certain threshold are assigned the digit tag. For example, to assign digit 9 to excitatory neurons, for the 500 images shown, number of images with digit 9 are sampled and the spiking activity of neurons belonging to those images is averaged to obtain spiking rate of each neuron. Now, among these neurons, those of which are above a threshold spiking rate for digit 9 are assigned a tag of digit 9. Recurring assignments are done over last 5000 images so that a generalized digit tag is assigned to each excitatory neuron. 10000 test images provided by MNIST dataset was used to determine the accuracy of the network. When we test the network, the learning is frozen i.e, no synaptic weight updates are performed when we pass an image to the network. The testing image is input to the network and excitatory neuron which spikes highest number of times is identified and the digit tag belonging to that neuron indicates the recognized digit.
Supplementary Note 2. Perovskite Ultrametric Trees and Spin Glasses
Here, we explain the apparent connection between the experimental voltage-resistance curves reported in the manuscript and magnetization-temperature curves reported for spin glasses. The notion of a spin glass originated with the study of the low temperature state of substitutional magnetic alloys, with finite concentrations of magnetic ions in non-magnetic hosts. [5][6][7][8] In general, spin glasses are models characterized by disorder and frustration. Disorder implies that interactions between different states of the system are random. Frustration usually means that conflicting interactions compete with each other and consequently the system doesn't settle on a single equilibrium state satisfying all constraints, but rather a multitude of equilibrium states. In experiments, tree states in spin glasses have been typically accessed by changing the global temperature (heating-cooling cycles) and the corresponding experimental data have been reported at very low temperatures in the 1 -20 Kelvin range, such as for CuMn and CoCl2 systems. [9][10][11][12] In the context of the present study, by inserting impurity dopants in the form of hydrogen and subjecting the resulting material to a bias voltage, one obtains a regime exhibiting certain characteristics that is prototypical of a spin glass. Here, the voltage will play the role of the "temperature" and the resistance of the material will define a "state" of the system. The data measured from our nickelates (see the figure below) forms a tree whose branching ratio is given by K = 3: This defines an ultrametric topology on the space of statesa characteristic feature of spin glasses in the following way. Fix N, the number of pulses, and let ΣN be the collection of all states corresponding to N. These correspond to the extremities (the leaves) of the branches at level N. For any two states α,β ∈ ΣN, let A denote the closest common ancestor (see Supplementary Figure 17 for an illustration) and let NA be the corresponding number of pulses. The overlap RN(α,β) is given by RN(α,β) = NA/N.
The lower one must go to find this ancestor, the smaller the overlap. The distance dN(α,β) between α and β is then defined as dN(α,β) = 1 − RN(α,β), which can be viewed as the normalized depth of the common ancestor A. It has the property that for any three states, α,β,γ in ΣN, at least two of the distances dN(α,β), dN(β,γ), dN(α,γ) are equal. In the case when exactly two distances are equal, the third is shorter. For instance, in Figure 4, these distances are equal to each other since they all share the same common ancestor A. The space (ΣN,dN) satisfying this property is said to be ultrametric. 5 This allows a hierarchical structure on the state space by grouping all states within a certain distance into a single cluster. Then it is straightforward to see that ultrametricity implies that these clusters partition the space with no overlapping among different clusters. The theoretical development of spin glass phase and their potential use in neural networks has a long history and is an active area of research. 6,13 The experiments reported in this paper presents a physical realization of such models at room temperature allowing a hierarchical structure of arbitrary level N and branching ratio K that can be accessed electrically in solid state devices at ambient conditions and in a reversible manner. | 3,729.6 | 2020-05-07T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Evaluating and Prioritizing Barriers for Sustainable E-Learning Using Analytic Hierarchy Process-Group Decision Making
: E-Learning is a popular computer-based teaching–learning system that has been rapidly gaining global attention during and post COVID-19. The leaping changes in digital technology have enabled E-Learning to become more effective in recent years. It offers freedom from restrictions caused by geographical boundaries and provides time flexibility in the teaching–learning process. Apart from its numerous advantages, the success of E-Learning depends upon many critical success factors (CSFs) and barriers. If the barriers that lie in the way of successful E-Learning implementation are not addressed diligently, it will limit E-Learning success. It has been revealed through past research that these barriers are serious threats that need immediate attention in their redressal. This paper attempts to reveal sixteen barriers under four different dimensions by going through a comprehensive review of the literature and engaging decision makers. Furthermore, it uses the Analytic Hierarchy Process-Group Decision Making (AHP-GDM) methodology to evaluate and prioritize them. The results obtained show that barriers related to the Institutional Management Dimension (BIMD), Infrastructure and Technological Dimension (BITD), Student Dimension (BSD), and Instructor Dimension (BID) pose the greatest challenges in the successful implementation of E-Learning. The AHP-GDM methodologies reveal the comparative relationship among these barriers as BIMD > BITD > BSD > BID and further quantify their negative effects as 46.35%, 29.88%, 12.30%, and 11.4%, respectively, on successful E-Learning systems (‘>’ indicates comparative challenges).
Introduction
The world has changed since COVID-19, the impact of the pandemic is expediting several trends in global education, business, and other domains. To deal with this situation, the United Nations Educational Scientific and Cultural Organization (UNESCO) recommended the use of distance/digital learning, which is more commonly referred to as E-Learning. E-Learning provides teaching-learning by using electronic media through the Internet as online learning. Online learning may be regarded as a kind of distance learning. Mobile learning provides a fun mode for teaching-learning through a smartphone. Digital learning provides teaching-learning through the use of information and communication technology (ICT), such as computers, smartphones, and the internet. All the modes enhance knowledge, performance, and ease of life when used efficiently. However, there is prevailing need for an E-Learning system for the sustainability of education [1].
1.
To provide a comprehensive review of the literature to identify barriers to successful E-Learning.
2.
To evaluate, prioritize, and rank the E-Learning barriers with the help of Analytic Hierarchy Process-Group Decision Making (AHP-GDM).
The main contribution of the present research is to provide a comprehensive review of the literature on barriers that hinder successful E-Learning implementation. The identified potential barriers to E-Learning are evaluated using MCDM-based research methodology. The present research provides an evaluation of identified barriers and their influence on the successful E-Learning system. The present research will help the university management, instructors, students, and other related stakeholders.
Various sections on the state-of-the-art literature review associated with Multiple Criteria Decision Making (MCDM) research methodologies, a framework for barriers related to E-Learning, the AHP methodology, the use of AHP-GDM in the present research, and resulting conclusions were documented. Section 2 shows the present study framework for prioritizing E-Learning barriers. It also documents the state-of-art literature review associated with MCDM-based research methodologies and barriers related to E-Learning. Section 3 illustrates the AHP-GDM methodology. Section 4 illustrates the case for using AHP-GDM methodology for ranking E-Learning barrier factors. Section 5 documents the discussions on the result of the ranking of barriers to E-Learning. Finally, this research study ends with the conclusions and future work in Section 6.
MCDM-Based Research-Methodologies-Related Work
In addition to exploring the barriers to E-Learning, a literature review on multiple models is performed and reported in this section. Several researchers have implemented a methodology that focused on MCDM. Gupta et al. [54] implemented the AHP model to define and check the quality of the E-Learning system. They deployed the weight and rank E-learning quality requirements that are vital for stakeholders to observe, such as educators, accreditation bodies, and institutional management. They found that the E-Learning system is successful irrespective of the type of educational system i.e., part-time, full-time, and distance education program.
Jeong and Yeo [55] used AHP's comparison function to identify and evaluate the use of multimedia-based E-Learning content and created a model using multimedia factors. This work extracted nine different criteria for the model presentation from previous research studies. Yigit, Isik, and Ince [56] applied AHP to their study. The web-based software SDUNESA uses AHP's parameters for the selection of learning objectives. They are defined and explained under computer education priorities. The AHP's parameters save time in searching for a large database for learning objectives. The AHP-based research methodology has been frequently used in the analysis of various aspects of E-Learning.
The strategic development of E-Learning systems is used with decision making on the most appropriate types of E-Learning systems at various levels. Mohammed, Kasim, and Shaharanee [57] surveyed a panel of 95 respondents consisting of administrative and academic personnel and graduates in Malaysia. They were asked to assess and rate the performance of five identified E-Learning systems. They concluded that strategic readiness for E-Learning implementation was the most important criterion among the human resources, specific information, and Communications Technology (ICT) infrastructure, basic ICT infrastructure, and legal and formal readiness for E-Learning implementation. They also found that a flipped classroom is the most suitable E-Learning approach. Their study helps the school administration to meet its requirement of establishing a new approach to select suitable alternatives of teaching-learning.
Based on the literature review, AHP-GDM research methodologies have been applied in analyses when different factors and aspects are available in decision making. Nevertheless, incorporating technological innovation into education is not without obstacles. Even in developed countries and cities, in day-to-day teaching and learning, there are obstacles when using technology [40]. Consolidating E-Learning into conventional education is a reasonably troublesome task that may run into various sorts of confusion and troubles. These issues are considered as obstacles or barriers to coordinating E-Learning with traditional educational fields, making it difficult for E-Learning to succeed [16]. If one recognizes these barriers and hindrances, one can pay particular attention to them when using the E-Learning system and make the E-Learning application robust, foolproof, and successful [9,51].
Al-Azawei et al. [21], Al Gamdi and Samarji [16], and Stoffregen et al. [20] found different barriers related to the infrastructure and technological dimensions. Many researchers also discovered that different barriers related to the institutional management dimension are also significant for successful E-Learning [47,52]. To identify the barriers to successful E-Learning, a systematic review of the literature was carried out. The various barriers are identified and grouped into four dimensions, namely student, instructor, infrastructure and technology, and institutional management. These barriers are further subcategorized into 16 factors.
The Identification of Barriers Related to Different E-Learning Dimensions
Based on the literature review and successive framework, the following four different barrier dimensions are identified.
Student Barriers
The critical move from traditional teaching and learning to E-Learning has immensely increased because of its various advantages. It has become crucial as students are limited to their homes and do not have access to university classrooms, laboratories, and libraries.
Even though E-Learning has received much acclaim, a human instructor can never be replaced by a computer system. Students play a significant role in the E-Learning system. In E-Learning, instructors and students are at a distance from each other, so various challenges reduce their eagerness to use the E-Learning system [34]. Different barrier factors are found in the literature for student dimensions, such as lack of information and communication technology, lack of ICT skills [17,18,22,30], lack of E-Learning knowledge [31], lack of English language proficiency [16], and lack of motivation [30]. Assareh and Bidokht [34] carried out a study at Jerash University on 230 male and 170 female students. They discussed the effectiveness of online learning and whether enjoyment from online learning was a major barrier from a student's perspective. Based on the series of semistructured interviews with E-Learning experts from Tanzanian Higher Learning Institutions (HLIs), it was found that the major five barriers are poor infrastructure; financial constraints; inadequate support; lack of E-Learning knowledge; and teachers resistance to change [31]. A 214-question questionnaire-based study revealed that the lack of adequate English language proficiency was a barrier from a student's perspective [16].
Instructor Barriers
The instructor is considered a significant dimension of E-Learning. Some instructors are less familiar with E-Learning. They require proper training in utilizing technology for web-based online courses. Some even have less confidence in utilizing ICT technologies in education [30,40]. In this unusual situation of COVID-19, instructors must be given the necessary training and should possess skills for using E-Learning tools. This training is being given to instructors through E-Learning now. When they gain proper knowledge, their level of confidence may increase to use the new electronic devices and deliver education with the help of web-based technology. Some studies have also found that instructors' attitude changes positively after receiving the required training in hardware, software, and organization of delivering the material in the E-Learning system [47].
Different research studies found various E-Learning barriers related to the instructor dimension, for example, lack of ICT skills [17,38]. Many instructors are at the early stage of their career and do not possess enough ICT skills, making them handicapped in using the E-Learning system. It is important to note that training should be provided to instructors as well as to students to make them ICT proficient so that they can derive maximum benefits from the E-Learning system [59].
Lack of E-Learning knowledge translates into insufficient experience in operating the E-Learning system. Al-Azawei et al. [21], in their study, identified barriers such as inadequate training programs, lack of technical support, ICTs, and E-Learning illiteracy and lack of awareness, interest, and motivation toward E-Learning technology. In their findings, they concluded many instructors lack skill and experience in using the Learning Management System (LMS). Because of this deficiency, it becomes difficult for them to engage in class using the LMS system. Some instructors find it difficult to share teaching materials and manage other teaching-learning activities such as taking attendance, online exams, grading tests and assignments, and monitoring students. Based on the quantitative and qualitative research methodology, it appears that many senior faculties find it difficult to use the LMS system [59]. Hence, they prefer not to change their teaching methods from traditional classroom teaching to E-Learning systems. In E-Learning, the human dimension plays a significant role and influences its outcome. Therefore, universities must strive to take the human dimension into consideration while removing the barriers to the E-Learning system. Based on the Nigerian Higher Education Institutes (HEIS) case studies [42] and a study based on a series of semistructured interviews with Tanzanian HLIs [31], resistance to change is one of the barriers present in the senior faculties, who prefer traditional classroom teaching to E-Learning. It is difficult for them to adopt a new system. Since they are well-conversant with traditional teaching-learning methods, they do not see the need or the urgency of using E-Learning. Thus, they are reluctant to change to a new E-learning system. Lack of time to develop E-courses was revealed during a 214question questionnaire-based study [16], where an integrative review was conducted over three months by an interinstitutional research team [19]. It is a common barrier and faced by both young and senior instructors. Course materials must be developed systematically to suit the LMS system requirement so that students can easily download, study, and understand them. Most instructors do not have time to develop E-courses due to their additional teaching hours, extracurricular, and administrative activities.
Many instructors are not motivated to take more initiatives to adopt the E-learning system because of insufficient skill, time, salary structure, lack of promotions, experience, and training [21].
Infrastructure and Technological Barriers
Lack of a robust and proper infrastructure and updated technology is considered as the most challenging barrier dimension for successful E-Learning implementation [38,40]. This includes having the latest hardware and software programs [30], a high-speed connection to the Internet, an uninterrupted power supply, a regular and capable maintenance system, and general support from the administration of the Internet provider organization [60]. The shortage of these components may generally cause a big disappointment in the E-Learning system [47,60]. Other studies reported that hardware equipment, the capacity of data transfer of the E-Learning system, and proper software for the system vastly influence the success of E-Learning implementation [19].
Institutional Management Barriers
Lack of institutional management care and commitment is an important barrier [38]. The management of an institution may differ in its strategic decision making to implement an E-learning system because of a lack of technical knowledge and trust in E-Learning. The investment priority may shift or differ [60]. Other studies have also identified different barriers of institutional and management support dimensions, such as lack of proper policies and support, lack of financial support, and lack of proper training for all users [17].
Overview of AHP-GDM Research Methodology
The present research study uses the AHP-GDM methodology in evaluating and prioritizing the E-Learning barriers by considering the case study which is illustrated in the following section. The AHP-GDM provides the pairwise comparison judgment of Decision Makers (DMs) in crisp form and eases comparison and fine-tuning of the results. The detailed steps are further described as follows.
AHP-GDM Methodology
AHP-GDM was developed by Saaty [61] for the systematic decision support technique. The AHP helps in dealing with composite, unstructured, and multiple criteria-based problems for decision making. Many research studies have applied the AHP in a wide variety of decision areas [62][63][64].
In AHP, expertise and understanding are used to formulate the final opinion. For the pairwise comparison, the opinion of the expert is considered. If a single DM provides an opinion in the AHP, it can biased it and lead to misguiding judgment. GDM may be used to eliminate this bias. Several DMs may be utilized for pairwise comparison which cannot be biased. The Geometric Mean (GM) can be calculated by synthesizing the decisions made by different DMs. The resulting final synthesized decision matrix is more reliable than the single DM. Table 2 depicts Saaty's scale, which is intended for the pairwise comparison matrix.
Step 2
The GM of each of the rows for both the decision matrix and pairwise comparison matrices are calculated. The priority vector (P.V.) values follow the normalization of each GM value.
Step 3
An overall summation of the product of the sum of each vector column for both the decision matrix and pairwise comparison matrices containing the P.V. values of each row is carried out to obtain the principal eigenvalue (λ max ), i.e., where C j is the sum of each column vector.
Step 4
The Inconsistency Index (I.I.) of each pairwise comparison matrix is checked using Equation (3): Here, the number of matrix elements is denoted by n.
Step 5
Random Index (R.I.) [61] is determined for each of the square matrices using Equation (4). R.I. calculated values are also available as a ready reckoner documented in Table 3. 3.1.6.
Step 6 The Inconsistency Ratio (I.R.) for each of the square matrices is obtained by dividing I.I. value with R.I. values. A further revision of the matrix elements is needed if the inconsistency ratio is >10%.
Step 7
A pairwise comparison matrices (A i , i = 1, 2, . . . , n) Table 2 is used to assign a weight to these matrices. The P.V. values, principal eigenvalues, I.I., and I.R. are calculated using the logic which is shown in steps 2-6.
Case Illustration Using AHP-GDM Methodology for Ranking E-Learning Barrier Factors
After constructing the structure depicted in Figure 2, the next step was to determine the relative contribution of each barrier factor with their respective dimensions. Three DMs were identified with more than six years of teaching experience in E-Learning. One DM had E-Learning experience of teaching engineering courses that involved theory and practice, whereas the other two DMs were from the computer science and science faculty. The DM carried out the pairwise comparison to ascertain the weight. There were three observers who played a significant role in the AHP-GDM methodology. The pairwise comparisons made by expert DMs were reviewed. The various AHP-GDM requirements, for instance, use of Saaty's scale and consistency in decision making, were ensured. Once the observers were satisfied, they okayed the pairwise table for further analysis. Under AHP, the expert's knowledge is used to formulate a final opinion. Before considering the final opinion, the pairwise judgment was critically observed by three observers for their final acceptance. The expert's judgment and observer's responsibility, as shown in Table 4, play a significant role in establishing final relationships among E-Learning barriers. In AHP, GDM may be used with different DMs for pairwise comparison. Saaty's scale, as shown in Table 2, is used for the pairwise comparison matrix. Consequently, various DMs pairwise decisions are synthesized using the GM method.
The decision matrix obtained after synthesizing provides more accuracy than that obtained by a single DM. Table 5 shows the pairwise comparison matrix of all the barrier dimensions provided by each expert DM. Here, λ max refers to the maximum eigenvalue. The CR is a consistency ratio and RI is a random index, whereas CI shows the consistency index.
After constructing the pairwise comparison tables, they were synthesized using geometric mean (GM). Table 6 shows the synthesized values of the pairwise comparison for the main barrier dimensions based on the judgment provided by DMs. In the next step, the consistency level of the pairwise comparisons was checked with the help of the CI and RI. Table 3 shows the RI values for different matrix sizes. The multiple values provided by different DMs may be aggregated in a single value in the relevant matrix. Thus, the weighted GM method was applied to aggregate the judgment of all three DMs. All three DMs are senior faculties with university teaching experience of more than ten years. In addition to this, DMs also possess an E-Learning teaching experience of more than five years. One of the DMs has E-Learning teaching experience in engineering courses, whereas the remaining two DMs have E-Learning teaching experience in computer science courses. Similarly, the aggregation process synthesized all the value of E-Learning dimensions and barriers. Table 7 shows the aggregated synthesized values of the dimensions.
In the final step, the barriers from all dimensions were ranked based on their global weights that were determined as a relative contribution. Finally, two different types, i.e., "local weights" and "global weights" were found. "Local weights" refer to the synthesizing value of the preceding hierarchical tier, while "global weights" are the synthesizing value of the top hierarchical level referred to as the goal.
To obtain the final ranking of the barriers, AHP associated the priority weights of the dimension with the comparison ratings for the factor to find the local and global ranking [65]. This is performed by the following equation: Global weights = ∑ (local weight for dimension i × local weight for factor j with respect to dimension i) Table 7. Synthesizing of barriers of dimensions after aggregation. Table 8 shows the aggregation of the barriers in all 4 dimensions, respectively, calculated the same way. The final calculated weights and rankings are presented in Table 9.
Results and Discussion
The main objectives of the present research were to identify and prioritize the barriers to successful implementation of an E-learning system based on the AHP-GDM methodology.
The various dimension weights are obtained and shown in Figure 3. Among the four dimensions, i.e., BIMD, BITD, BID, and BSD, the BIMD was found to be the most hindering to the E-learning system. The other barriers found in decreasing order are BITD, BSD, and BID, in order of their hindrance to E-learning systems. The relationship ship BIMD > BITD > BSD > BID was obtained at the end of the results. The ' >' indicates more influence than the counter barriers. The relationship of BIMD > BITD > BSD > BID barriers was quantified in percentage weights as 46.35 > 29.88 > 12.30 > 11.47. The final weight of subfactors derived for 16 subfactors also influences the E-learning system. It is observed from Figure 4 that there are three major factors: IMFS, IIFS, and ITSS. These three constitute 60% of the total influence as compared with the remaining one.
The main stakeholders of an E-learning system to deliver teaching-learning in an effective way are institutional management, instructors, and students. The results obtained clearly indicate that the support of institutional management is crucial in supporting the E-Learning system. Failing to support the implementation of the E-learning strategy poses the biggest hurdle. The instructors are on the delivery end, whereas the students are on the receiving end, thus, they influence the E-learning system. The results show that the influence of the instructor and students is equally influential on the E-Learning system. Lack of experience in the E-Learning teaching-learning system influences the perceptions towards the adoption of E-Learning, hence instructors and students both provide threats to successful E-learning [11]. The necessary trained manpower may also influence the perception of students towards E-Learning as well. Training enhances the confidence level of instructors in conducting online classes in a comfortable mood. Based on the obtained results, it can be concluded that, in an E-Learning system, there are three stakeholders. BIMD, BID, and BSD constitute about 70% of the negative impact on the E-Learning system. If they are controlled, the 70% impact of such barriers that may be hindering the success of the E-Learning system may be removed.
The barrier of infrastructure and technology is the third worst barrier influencing the E-Learning system. It may be considered as the backbone of the E-learning system. In the absence of infrastructure, the necessary computer hardware and software support required to deliver the effective teaching-learning is hindered. The latest version of computer hardware along with up-to-date software helps in accomplishing teaching and learning in an effective manner. The time consumed in transferring the knowledge is largely reduced on an updated computer as compared with an older computer system. The latest software not only increases the software usage speed but also enhance the users' experience. Thus, the students and instructors are motivated to engage themselves more in an effective manner. The Internet communication technology (ICT) has undergone a dynamic revolution and changed the computer world to take the giant steps towards the digital age. Digital technology has helped to enhanced computing speed. Finally, our studies conclude that the more barriers there are, the more difficult it is to achieve E-learning objectives, which is in line with the results of Kaymak and Horzum (2022) [12]. The technological barriers significantly affect the learners' willingness to opt for online learning, which is also supported by past study [66].
Managerial implications and limitations:
The result of the present research identifies various barriers to the E-Learning system in KSA and are also similar to the findings in study [67]. The results of this study can assist policymakers for higher education studies, university authorities, and government ministries and related stakeholders. It can also help with designing and evolving new E-Learning courses so that they can be used successfully. Because of the prevailing COVID-19 pandemic, many universities in KSA still prefer the E-Learning system, hence the present study will help them to remove all the E-Learning barriers to make the system successful. The barriers of BIMD must be removed by the management to make a positive and encouraging environment in the universities. The Kingdom of Saudi Arabia (KSA) is attempting to enhance the infrastructure for E-learning. The management must devise sound systems to introduce E-Learning courses to gain the advantage of E-Learning. Instructors and supporting staff must undergo periodical training on a regular basis to gain the advantage of changing technology. The present study holds some limitations.
Conclusions and Future Work
The E-Learning system is subjected to various barriers and dimensions as they play an important role in a successful E-Learning system. It is important to analyze E-Learning barriers and their effect on teaching and learning outcomes. It is generally the management whose objectives, missions, and vision take a university to the lead role. Institutions also play a vital role in connecting teachers to their students through the E-Learning platform. The communication gap, due to institutional inefficiency, may widen further and make it difficult for students to reach educators. The overloading of courses for instructors may create a hurdle in the interaction with students and peers. Institutions also need to adopt sound policies and practices to enhance student welfare schemes. Lacking strategic decision making and failing to provide state-of-the-art infrastructure slows down the use of the E-Learning system. E-Learning barriers may also change based on social, financial, and regional circumstances. Studying the barriers to E-Learning adoption is significant but challenging as well. The present research of evaluating the E-Learning barriers is based on the studies that have been carried out in KSA, and hence may be applicable to similar geographic conditions and cultures around the globe.
The present study is based on the judgmental decisions of DMs, so there may be a subjective bias. We adopted GDM to reduce the risk of subjective bias. The subjective bias can still be reduced by employing more DMs. Thus, an expert team with more DMs can be included in assessing and prioritizing E-Learning barriers through MCDM. Fuzzy-based MCMD can be used to reduce ambiguity and judgmental bias in decision making. Future research may also include more barriers based on empirical studies, as it will be helpful to reveal new significant barriers. In-depth analysis of such identified barriers can further be supported through structure equation modeling (SEM). | 5,994.8 | 2022-07-22T00:00:00.000 | [
"Computer Science"
] |
A Frog Peptide Ameliorates Skin Photoaging Through Scavenging Reactive Oxygen Species
Although many bioactive peptides have been identified from the frog skins, their protective effects and the molecular mechanisms against skin photodamage are still poorly understood. In this study, a novel 20-residue peptide (antioxidin-NV, GWANTLKNVAGGLCKMTGAA) was characterized from the skin of plateau frog Nanorana ventripunctata. Antioxidin-NV obviously decreased skin erythema, thickness and wrinkle formation induced by Ultraviolet (UV) B exposure in hairless mice. In UVB-irradiated keratinocytes (HaCaT cells) and hairless mice, it effectively inhibited DNA damage through reducing p-Histone H2A.X (γH2AX) expression, alleviated cell apoptosis by decreasing the expression of apoptosis-specific protein (cleaved caspase 3), and reduced interleukin-6 (IL-6) production via blocking UVB-activated Toll-like receptor 4 (TLR4)/p38/JNK/NF-κB signaling. In UVB-irradiated human skin fibroblasts (HSF cells) and hairless mice, it effectively restored HSF cells survival rate, and rescued α-SMA accumulation and collagen (especially type I collagen) production by restoring transforming growth factor-β1 (TGF-β1)/Smad2 signaling. We found that antioxidin-NV directly and rapidly scavenged intracellular and mitochondrial ROS in HaCaT cells upon UVB irradiation, and quickly eliminated the artificial free radicals, 2, 2′-azinobis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS+). Taken together, antioxidin-NV directly and rapidly scavenged excessive ROS upon UVB irradiation, subsequently alleviated UVB-induced DNA damage, cell apoptosis, and inflammatory response, thus protecting against UVB-induced skin photoaging. These properties makes antioxidin-NV an excellent candidate for the development of novel anti-photoaging agent.
INTRODUCTION
As the outmost layer of the body, skin is subjected to biotic and abiotic insults such as microorganism infection and radiation injury. Skin tissues can sense environmental regulation of local and overall internal environmental homeostasis through the cutaneous neuro-endocrine system (Slominski et al., 2012). Some stress factors have been shown to affect different cell signaling and biochemical pathways in the skin, for example, ultraviolet (UV) not only triggers mechanisms that protect the integrity of the skin and regulate the overall internal environmental balance, but also triggers skin pathology (aging, cancer, autoimmune reactions) (Slominski et al., 2018). UV radiation causes excessive reactive oxygen species (ROS) formation from UV absorption by non-DNA chromophores in cells (Portugal et al., 2007;Rinnerthaler et al., 2015;Baek and Lee, 2016). The highly reactive molecules are able to damage virtually all categories of cellular constituents including proteins, carbohydrate, lipids, and DNA (Baek and Lee, 2016). The overproduction and/or mismanagement of ROS may result in oxidative stress, which have been implicated in a large variety of skin disorders and skin diseases, such as UV irradiation damages, skin inflammation, bacterial skin infections, and skin cancer (Portugal et al., 2007;Pham-Huy et al., 2008;Godic et al., 2014).
Skin possesses efficient defense mechanisms against oxidative stress under normal conditions, mainly based on the antioxidants. There are two known groups of antioxidant agents, antioxidant enzymes and non-enzymatic low molecular weight antioxidants (LMWAs) (Portugal et al., 2007;Rinnerthaler et al., 2015). The first group is composed of gene-encoded proteins such as superoxide dismutase (SOD), catalase, glutathione peroxidase. The second group is composed of organic small molecules such as glutathione (GSH), carotene, polyphenols, uric acid, CoQ10, vitamin C, vitamin E. No geneencoded LMWA has been reported until we characterized antioxidant peptides (AOPs) with various structures from the skin secretions of two frog species, Rana pleuraden (Yang et al., 2009) and Odorrana livida (Liu et al., 2010). Since then, the frogskin AOPs have been identified by other researchers from different species (Lu et al., 2010;Yu et al., 2015;Barbosa et al., 2018;Niu et al., 2018;Cao et al., 2019;Demori et al., 2019). These data confirm that amphibian skins have a common peptide antioxidant system to cope with the increasing oxidative stress. These frog-skin-derived AOPs are different from antioxidant enzymes and LMWAs. They possess gene-encoded origins as antioxidant enzymes do, but they show no enzyme activity. Instead they function as direct free radical scavengers like LMWAs. The AOPs can rapidly and constantly eliminate the free radicals of ABTS + and/or DPPH that generated by the commercial radical initiators in vitro (Yang et al., 2009;Liu et al., 2010;Lu et al., 2010;Yu et al., 2015;Niu et al., 2018;Cao et al., 2019). There is limited understanding of their protective functions and the mechanisms of action against skin injuries caused by ROS in vivo. Currently, only two frogskin AOPs with potential skin protective effects in vivo have been described (Qin et al., 2018;Yin et al., 2019). One is antioxidin-RL, which was identified from the frog Odorrana livida (Yang et al., 2009); the other is OA-VI12, which was isolated from O. andersonii (Cao et al., 2019). They prevented UVB irradiationinduced photoaging in mice, but the detailed mechanisms of the two AOPs remain to be fully understood.
The frogs have developed an excellent chemical defense system composed of various defensive peptides to maintain skin integrity and functionality (Xu and Lai, 2015;Demori et al., 2019). In our previous work, we have characterized a wound healing-promoting peptide, cathelicidin-NV, from the frog skin of N. ventripunctata. The peptide effectively accelerated cutaneous wound healing in mice with mechanical injury (Wu et al., 2018). N. ventripunctata lives in high altitude (3120-4100 m) where there is low temperature, long sunshine duration and strong ultraviolet radiation. Their naked skins are susceptible to external insults in the harsh environments, especially UV radiation. Based on the wavelength, UV can be classified into three types: UVA (320-400 nm), UVB (280-320 nm), and UVC (100-280 nm). UV irradiation, especially UVB, has the twofold effect of regulating the brain and central neuroendocrine system to rebalance the internal environment (Slominski et al., 2018), and triggering the overproduction of ROS, leading to photo-induced skin damage, skin diseases and even skin cancer (Rinnerthaler et al., 2015). To cope with the increasing oxidative stress, N. ventripunctata should possess potent free radical scavenging and radio-protective effect for their survival. Therefore, it is rational to hypothesize that N. ventripunctata may also have antioxidant peptide(s) in their skins to protect from the free radicals injury. In this work, we are interested to characterize the peptide antioxidant system from N. ventripunctata. Additionally, we try to investigate the potential mechanisms underlying the protective effects of AOP against UVB-induced skin photodamage in hairless mice.
N. ventripunctata Sample
Skin secretions of N. ventripunctata (n 30; weight range 20-25 g) were collected as previously reported (Wu et al., 2018). Frogs were stimulated with volatilized anhydrous ether immersed in absorbent cotton, and their skin surface was seen to exude secretions. Skin secretions were washed with 0.1 M phosphate buffer (PBS), (pH 6.0, containing 1% protease inhibitor cocktail, Sigma, United States). The collected solutions containing skin secretions were quickly centrifuged (10, 000 × g for 10 min) and the supernatants were lyophilized.
Peptide Purification
The peptide purification procedures were performed according to the method described in our previous work (Wu et al., 2018). An aliquot (1 g) of lyophilized skin secretion was dissolved in 10 ml PBS and centrifuged at 5, 000 × g for 10 min. The supernatant was applied to a Sephadex G-50 (Superfine, Amersham Biosciences) gel filtration column (2.6 cm diameter, 100 cm length) equilibrated with 0.1 M PBS for preliminary separation. Elution was performed with the same buffer, collecting fractions of 3.0 ml. The absorbance of the eluted fractions were monitored at 280 nm. The anti-photoaging activity in mice was tested as described below. The fraction containing antiphotoaging activity was further purified by a C 18 reversedphase high performance liquid chromatography (RP-HPLC, Gemini C 18 column, 5 μm particle size, 110 Å pore size, 250 mm length, 4.6 mm diameter) column. The elution is performed using a linear gradient of 0-80% acetonitrile containing 0.1% (v/v) trifluoroacetic acid in 0.1% (v/v) trifluoroacetic acid/water over 60 min as illustrated in Supplementary Figure S1B. UV-absorbing peaks were collected, lyophilized, and assayed for anti-photoaging activity. Peaks with anti-photoaging activity were collected and lyophilized for a second HPLC purification procedure using the same condition as illustrated in Supplementary Figure S1C.
Primary Structural Analysis
N-terminal sequence of the purified peptide was determined by Edman degradation on an Applied Biosystems pulsed liquidphase sequencer (model ABI 491). Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) was used to identify the purity of the isolated peptide. AXIMA CFR mass spectrometer (Kratos Analytical) was analyzed in linear and positive ion mode using an acceleration voltage of 20 kV and an accumulating time of single scanning of 50 s.
cDNA Cloning
The experiment was performed according to the method described in our previous work (Wu et al., 2018). Total RNA was extracted from the skin of N. ventripunctata using RNeasy Protect Mini Kit (QIAGEN, Germany) according to the manufacturer's instructions. An in-fusion SMARTer ™ directional cDNA library construction kit was used for cDNA synthesis. The synthesized cDNA was used as template for PCR to screen the cDNAs encoding the purified peptide (antioxidin-NV). According to the sequence determined by Edman degradation, an antisense degenerate primer (antioxidin-NV-R1) was designed and coupled with a 5′ PCR primer (the adaptor sequence of 3′ PCR primer provided in the kit) to screen the 5′ fragment of cDNA encoding antioxidin-NV. Then, a sense primer (antioxidin-NV-F1) was designed according to the 5′ fragment and coupled with 3′ PCR primer from the kit to screen the full-length cDNAs. The PCR conditions were, 2 min at 95°C, and 30 cycles of 10 s at 92°C, 30 s at 50°C, 40 s at 72°C followed by 10 min extension at 72°C. The PCR products were cloned into pGEM ® -T easy vector (Promega, Madison, WI, United States). DNA sequencing was performed on an Applied Biosystems DNA sequencer, model ABI PRISM 377. Primers used in this research are listed in the supplementary material Supplementary Table S1.
Peptide Synthesis
Antioxidant-NV (GWANTLKNVAGGLCKMTGAA) and the scrambled version of antioxidin-NV called sNV (LTAGMAWNAKGKACTVGLGN), were synthesized by the peptide synthesizer Synpeptide Co. Ltd (Shanghai, China). The synthetic peptides were purified and then analyzed by HPLC and MALDI-TOF MS to confirm that the purity was higher than 98%.
ABTS + Scavenging
Free radical scavenging activity was determined by measuring reduction of radical 2, 2′-azinobis (3-ethylbenzothiazoline-6sulfonic acid) (ABTS + ) according to manufacture instruction of the kit GMS10114.4 (Genmed Scientifics INC, Shanghai, China). The total formation of products (i.e. the reduced form of ABTS and the purple antioxidin-NV modification) and the total consumption of ABTS radical were determined by linear regression analysis. The concentrations of ABTS and ABTS free radical were calculated by using ε 340 4.8 × 10 4 M −1 cm −1 and ε 415 3.6 × 10 4 M −1 cm −1 , respectively (Yu et al., 2015). The purple antioxidin-NV modification was monitored at A 550 .
Cytotoxicity and Hemolysis
Cytotoxicity against human skin fibroblasts (HSFs) (KCB 200537, Kunming Cell Bank, Chinese Academy of Sciences) and human HaCaT keratinocytes (KCB200442YJ, Kunming Cell Bank, Chinese Academy of Sciences) was determined by the MTT assay. Antioxidin-NV dissolved in serum-free DMEM medium was added to cells in 96-well plates (2 × 10 4 cells/well), and the serum-free DMEM medium without antioxidin-NV was used as control. After incubation for 24 h, 20 μl of MTT solution (5 mg/ ml) was added to each well, and the cells were further incubated for 4 h. Finally, cells were dissolved in 200 μl of Me 2 SO 4 , and the absorbance at 570 nm was measured. Rabbit erythrocyte suspensions were incubated with antioxidin-NV and then the absorbance of supernatant was measured at 540 nm. 1% (v/v) Triton X-100 and PBS were used as positive and negative controls, respectively (Mu et al., 2017).
Determination of Intracellular and Mitochondrial ROS Production
The level of intracellular ROS generation was detected using 2′, 7′-dichlorodihydrofluorescein diacetate (DCFH-DA) with an Ex/ Em of 504/529 nm. After 24 h later with UVB irradiation and sample treatment, cells were stained with 30 μM 2′, 7′-DCFH-DA (Sigma, United States) for 30 min at 37°C in a CO 2 incubator. The cells were then analyzed by flow cytometry (FACSCaliburTM, Becton-Dickinson, CA, United States) and an inverted fluorescence microscope (Zeiss, Germany). Mitochondrial ROS production with an Ex/Em of 585/590 nm was detected using mitochondrial reactive oxygen ROS kit (CA1310, Solarbio, China) as described by the manufacturer's instructions.
UVB Irradiation and Antioxidin-NV Treatment in Cells
UVB irradiation and sample treatment were performed according to a method previously reported (Hwang et al., 2013a;Hwang et al., 2013b). When HaCaT or HSF cells were cultured in six-well culture plates (2 × 10 6 cells/well) and reached over 80% coverage, cells were pretreated with serum-free DMEM for 12-h incubation, then cells were washed twice with phosphate buffered saline (PBS). The cells with thin layers of PBS were exposed to UVB lamps (JT8-Y20W, Philips, Netherlands) in the wavelength range of 280-320 nm and their irradiation intensity was measured with a UVB irradiometer (Shanghai Sigma High Technology Co. Ltd, Shanghai, China), controlling the total irradiation dose at 80 mJ/ cm 2 . After UVB irradiation, the cells were washed with warm PBS three times. The cells were immediately treated with antioxidin-NV (10, 20, and 40 μg/ml) or vitamin C (40 μg/ml, SCR, China) in serum-free medium conditions for 24 h. Control cells were maintained in the same culture conditions without UVB exposure.
DNA Fragmentation Analysis
DNA fragmentation was assayed by agarose gel electrophoresis. HaCaT cells were seeded in six-well plates and cultured as described above. After 24 h later with UVB irradiation and sample treatment, HaCaT cells DNA were extracted for DNA fragmentation analysis. Cellular DNA was extracted using cell genomic DNA extraction kit (Solarbio, China) as described by the manufacturer's instructions. The DNA samples were mixed with the 6× loading buffer (TaKaRa, Japan) and stained with nucleic acid dye (ZEESAN, China), and then used 10 μl for 1% agarose gel electrophoresis and observed under UV light imaging system (Bio-Rad ChemiDoc ™ XRS, United States).
Western Blot Analysis
After 24 h later with UVB irradiation and sample treatment, the cells were washed twice with ice-cold PBS and lysed with RIPA lysis buffer (Beyotime, China). The proteins were extracted for western blot analysis according to our previously described method (Wu et al., 2018). The concentration of protein was determined by the Bradford protein assay. Then the cellular proteins were separated on a 12% SDS-PAGE gel and electro blotted onto a polyvinylidene difluoride membrane. Primary antibodies against γH2AX, JNK, p38 MAPK, IkBα, NF-κB p65, caspase-3, cleaved caspase-3, Smad2 (1:2000; CST, United States), and β-actin (1:5000, Santa Cruz Biotechnology, United States) were used in western blot analysis.
Skin tissues in UVB-irradiated hairless mice were taken for tissue immunofluorescence staining. Primary antibodies against cleaved caspase 3, collage I (1:400, CST, United States) were used in tissue immunofluorescence analysis.
Apoptosis in Flow Cytometry
Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) double staining was used to measure percentile of apoptosis in HaCaT cells. After 24 h later with UVB irradiation and sample treatment, the cells were re-suspended in 500 μl of 1× binding buffer and mixed with Annexin V-FITC/PI (Cat number APOAF, Sigma, United States). After incubation for 30 min, the cells were measured by Accuri C6 flow cytometry (Accuri, Ann Arbor, United States).
Cytokine and Chemokine Measurements
After 24 h later with UVB irradiation and sample treatment, culture supernatants were collected and assessed for transforming growth factor-β1 (TGF-β1) and IL-6 using ELISA kits (DAKAWE, Beijing, China).
Photo-aged skin tissue weighing 100 mg plus ice-cold PBS was fully ground into a 10% (m/v) tissue suspension. The suspension was processed by an ultrasonic disruptor (Saifei, China), centrifuged at 4°C for 10 min (3500 g/min), and the supernatant was collected. The supernatant was used to assay the level of TGF-β1 and IL-6 using ELISA kits (DAKAWE, Beijing, China).
Experimental Animals and Ethics Statement
Adult N. ventripunctata (n 30; weight range 20-25 g) was collected from Shangri-La, Yunnan province of China. Adult male SKH-1 hairless mice were purchased from Labreal Laboratories and housed in the pathogen-free facility. At the termination of the study, mice were sacrificed by cervical dislocation under CO 2 anesthesia in accordance with the guidelines from the Care and Use of Medical Laboratory Animals (Ministry of Health, People's Republic of China). All the animal study was reviewed and approved by the Institutional Animal Care and Use Ethics Committee of Kunming Medical University (IACUC approval number: KMMU2020063). All the animal experiments described in this study were conducted at Kunming Medical University.
Hairless Mouse Model of Photoaged Skin and Antioxidin-NV Treatment
Adult male SKH-1 hairless mice (n 30, 6-8 weeks old, 20-30 g, Labreal Laboratories) were used. The mice were housed for at least 7 days prior to the experiments in a ventilated and temperaturecontrolled room and had access to water ad libitum. ASS-03AB UV phototherapy light source (Shanghai Sigma High Technology Co. Ltd, Shanghai, China) was used for UVB irradiation (wavelength 280-320 nm). The mice were randomized into five treatment groups (six mice per group): (Slominski et al., 2012) Sham (mice were covered with PBS); (Slominski et al., 2018) PBS (mice were covered with PBS after UVB exposure); (Portugal et al., 2007) NV (mice were covered with antioxidin-NV dissolved in PBS after UVB exposure); (Rinnerthaler et al., 2015) VC (mice were covered with vitamin C dissolved in PBS after UVB exposure, vitamin C, recognized as an antioxidant, is often used to prevent lightinduced skin aging, therefore, vitamin C was selected as the Frontiers in Pharmacology | www.frontiersin.org January 2022 | Volume 12 | Article 761011 positive control) and (Baek and Lee, 2016) sNV(the scrambled version of antioxidin-NV, mice were covered with sNV dissolved in PBS after UVB exposure). In the PBS, NV, VC and sNV group, mice were directly exposed to UVB radiation, then were treated with PBS, antioxidin-NV, vitamin C, or sNV (100 μl, 200 μg/ml) to the back, respectively. Mice were exposed to UVB radiation at 100 mJ/cm 2 (one minimal erythematal dose 100 mJ/cm 2 ) five times during the first week and then to 200 mJ/cm 2 three times a week for 12 weeks thereafter. After sacrifice, some of the skin tissues were snap frozen in liquid nitrogen and stored at −80°C, and others were fixed in formalin and embedded in paraffin for immunohistochemistry.
Histological Analysis
The tissues were fixed in 10% formalin. Then, the tissues were sectioned using a microtome and stained with hematoxylin and eosin (H&E) for histological analysis. The pathology slides were read in blindness, and the images were recorded.
Masson Stain
The paraffin-embedded skin specimens were measured using Masson's trichrome stain kit (Solarbio, China). The slides were stained with Bouin's Fluid and Weigert's iron hematoxylin working solution. Furthermore, the slides were differentiated in phosphomolybdic-phosphotungstic acid solution and stained with aniline blue solution. Finally, the slides were read in blindness, and the images were recorded.
Immunohistochemistry (IHC) Analysis
The paraffin-embedded tissue sections were dried, deparaffinized, and rehydrated. Following a microwave pretreatment in citrate buffer (pH 6.0), the slides were immersed in 3% hydrogen peroxide for 20 min to block the activity of endogenous peroxidase. After extensive washing with PBS, the slides were incubated with γH2AX (1:480; CST, United States), Cleaved Caspase 3 antibody (1:200; CST, United States), Collagen I antibody (1:200; abcam, United Kingdom) or α-SMA antibody (1:100; abcam, United Kingdom) overnight at 4°C. The sections were then incubated with the secondary antibody for 1 h at room temperature, and the slides were developed using the UltraVision Quanto HRP detection kit (Thermo Scientific, United States). Finally, the slides were counterstained using hematoxylin. The slides were read in blindness, and the images were recorded.
Statistical Analysis
Statistical differences were determined using Student's t-tests or oneway ANOVA provided by GraphPad Prism software. Results are shown as mean ± SD from three independent experiments. A p value less than 0.05 was considered as statistically significant difference.
Isolation and Characterization of Antioxidin-NV
As shown in Supplementary Figure S1A, the skin secretions of N. ventripunctata were divided into five fractions after Sephadex G-50 gel filtration. The fraction containing anti-photoaging activity was pooled and subjected to a C 18 RP-HPLC column for further purification (Supplementary Figures S1B,C). The purified peptide was designated as antioxidin-NV (Supplementary Figure S1C). After Edman degradation, the amino acid sequence of antioxidin-NV was identified as GWANTLKNVAGGLCKMTGAA. MALDI-TOF MS analysis indicated that antioxidin-NV had a measured molecular mass of 1963.70 Da (Supplementary Figure S2), matching well with the calculated molecular mass of 1963.30 Da. The cDNA clone encoding the precursor of antioxidin-NV was sequenced from the skin cDNA library of N. ventripunctata (GenBank accession number: MW114946). As shown in Figure 1, the deduced amino acid sequence of antioxidin-NV is completely consistent with that sequenced by Edman degradation. It is composed of 72 amino acid residues, including a predicted signal peptide (24 amino acid residue), an acidic peptide region (28 amino acid residue) that ends in a typical trypsin-like proteases processing site (-Lys 51 Arg 52 -), followed by a mature peptide (20 amino acid).
Antioxidin-NV Rapidly Eliminated Artificial ABTS + Radicals and Scavenged Intracellular/Mitochondrial ROS ABTS + free radical scavenging kinetics, owing to its relative stability, easy measurement, good reproducibility, ABTS + radicals are commonly used to evaluate antioxidant capacity (Yang et al., 2009). We confirmed the antioxidant activity of antioxidin-NV by assessing its ability to scavenge ABTS + free radical. The assay is based on decolorization by monitoring absorbance decreases at the characteristic wavelength of 734 nm. As illustrated in Figure 2A, antioxidin-NV could rapidly scavenge ABTS + in a dose-dependent manner. It could get rid of ABTS + immediately when it contacted with ABTS + . At the concentration of 80 μg/ml, antioxidin-NV scavenged 96% ABTS + within 1 min, and scavenged nearly 99% ABTS + within 8 min. Even the concentration down to 5 μg/ml, 40% ABTS + was scavenged within 4 min by antioxidin-NV.
Then, we were interested to assay whether antioxidin-NV directly clear the ROS induced by UVB irradiation in HaCaT cells. As an indicator of ROS production, DCFH-DA fluorescence intensity was measured by flow cytometry. A progressive increment of intracellular ROS level was observed in the UVB-irradiated HaCaT cells, and the addition of antioxidin-NV significantly decreased the intracellular ROS level in HaCaT cells upon UVB irradiation ( Figures 2B,C). The scavenging efficacy of UVB-induced intracellular ROS is comparable to the ROS inhibitor, N-acetyl-L-cysteine (NAC) (Figures 2B,C). Furthermore, antioxidin-NV effectively cleared the ROS in mitochondria induced by UVB irradiation ( Figure 2D). The data indicate that antioxidin-NV had a strong ability to scavenge ROS induced by UVB irradiation, suggesting a strong antioxidant activity of antioxidin-NV.
Antioxidin-NV Suppressed UVB-Induced Skin Photoaging in Hairless Mice
UV-induced skin photoaging leads to the accumulation of intracellular ROS (Zhang et al., 2017), especially the stronger Frontiers in Pharmacology | www.frontiersin.org January 2022 | Volume 12 | Article 761011 5 biological effect of UVB (Diffey, 2002). To evaluate the antiphtoaging activity of antioxidin-NV, we established a UVBinduced skin photoaging mouse model to assay whether topical application of antioxidin-NV can inhibit skin photoaging in mice. As illustrated in Figure 3A, UVB irradiation obviously induced skin photoaging in hairless mice, but topical application of antioxidin-NV significantly suppressed UVB-induced skin photoaging in hairless mice with reduced skin erythema, hyperplasia, wrinkling, and roughness compared to PBS-treated mice. H&E staining of the dorsal skin showed that UVB-irradiation resulted in a reduction of the thickness of epidermal layers, but topical application of antioxidin-NV FIGURE 1 | The cDNA sequence of antioxidin-NV precursor. Deduced amino acid sequence is shown below the cDNA sequence. The putative signal peptide is italicized and the amino acid sequence of mature peptide is underlined and bold. The stop codon is indicated by an asterisk. Amino acid numbers or nucleotide numbers are shown after the sequences. significantly reversed this reduction ( Figures 3B,C). To our surprise, antioxidin-NV showed a better therapeutic efficacy against UVB-induced skin photoaging than vitamin C (VC, positive control) ( Figures 3A-C). The scrambled antioxidin-NV (sNV, isotype control) had no significant therapeutic effects on UVB-induced skin photoaging, indicating that the therapeutic efficacy of antioxidin-NV against UVB-induced skin photoaging is due to its unique amino acid sequence ( Figures 3A-C). Besides, antioxidin-NV did not exhibit cytotoxicity and hemolytic activity at an absolutely high concentration (Supplementary Figure S3), and no adverse effect on the body weight, general health or behavior of the mice were observed for the topical antioxidin-NV treatment, implying antioxidin-NV had low side effects.
Antioxidin-NV Inhibited UVB-Induced DNA Damage in HaCaT Cells and Hairless Mice by Reducing p-Histone H2A.X (γH2AX) Expression
Skin photoaging is closely associated with DNA damage (Zhang et al., 2017), and keratinocytes are the cells distributed in the outer layer of skin which can be directly irradiated by UVB. So we analyzed whether antioxidin-NV can suppress UVB-induced DNA damage. Agarose gel electrophoresis showed that UVB irradiation produced a typical ladder with clearly increased intensity of DNA fragmentation in HaCaT cells, and antioxidin-NV significantly reduced its formation in a dosedependent manner ( Figure 4A). Western blot analysis and immunofluorescence staining further showed that UVB irradiation resulted in a significant increment of p-Histone H2A.X (γH2AX) expression, a marker protein for DNA damage. However, antioxidin-NV significantly reduced γH2AX expression in HaCaT cells induced by UVB irradiation in a dosedependent manner (Figures 4B,C). Furthermore, IHC analysis also showed that UVB irradiation significantly increased γH2AX expression in hairless mice, but topical application of antioxidin-NV reduced UVB-induced γH2AX expression ( Figure 4D).
Antioxidin-NV Inhibited UVB-Induced Cell Apoptosis in HaCaT Cells and Hairless Mice
Cell apoptosis is a critical pathological process of skin photoaging (Liu et al., 2021). The therapeutic effect of antioxidin-NV against UVB-induced apoptosis was assayed by flow cytometry. As illustrated in Figure 5A, UVB exposure resulted in the FIGURE 3 | Topical application of antioxidin-NV significantly suppressed UVB-induced skin photoaging in hairless mice, obviously decreased skin erythema, thickness and wrinkle formation. (A) Images of a representative mouse from each group taken after 12 weeks are shown. 200 mJ/cm 2 UVB radiation and vehicle, antioxidin-NV, VC or sNV (the scrambled version of antioxidin-NV) were used to treat the back skin of mice for 12 weeks. The red dotted line indicated the UVB-induced skin photoaging: decreased skin erythema, coarse wrinkling, rough texture and thickening. (B) skin tissues were taken and paraffin blocks were cut into 4 μm thick sections for HE staining, and the white line indicated the thickness of the epidermis. Scale bar 50 μm. (C) Epidermal thicknesses in each group of mice were measured and analyzed. Data are presented as mean ± SD (n 6). ns, no significance, **p < 0.01.
Frontiers in Pharmacology | www.frontiersin.org January 2022 | Volume 12 | Article 761011 apoptosis of HaCaT cells (43.26%), while antioxidin-NV (40 μg/ ml) treatment reduced UVB-induced HaCaT cell apoptosis at both the early and late stages (14.44%). Immunofluorescence and western blot analysis showed that antioxidin-NV significantly reduced the expression of apoptotic protein cleaved caspase 3 (a marker protein for apoptosis) in the UVB-irradiated HaCaT cells in a dose-dependent manner ( Figures 5B,C). We further analyzed whether antioxidin-NV can suppress UVB-induced cell apoptosis in hairless mice. Immunofluorescence, IHC and western blot analysis showed that UVB irradiation markedly increased the expression of cleaved caspase-3 in the skin of hairless mice, indicating that UVB irradiation significantly resulted in cell apoptosis in the skin of mice (Figures 6A-C). But antioxidin-NV significantly inhibited the expression of cleaved caspase-3 in the skin of hairless mice induced by UVB irradiation, suggesting that it could inhibit UVB-induced cell apoptosis in the skin of hairless mice (Figures 6A-C).
Antioxidin-NV Inhibited UVB-Induced Inflammatory Response in HaCaT Cells and Hairless Mice by Attenuating UVB-Activated TLR4/p38/JNK/NF-κB Signaling
Inflammation was found to enhance the epidermal hyperproliferative response to UVB and play a crucial role in promoting skin photoaging (Pillai et al., 2005). As illustrated in Figure 7A, UVB irradiation obviously increased the secretion of IL-6 in UVB-exposed HaCaT cells, but antioxidin-NV effiectively suppressed the secretion of IL-6 in a dose-dependent manner. Furthermore, UVB irradiation increased IL-6 production in the skins of hairless mice, but the topical application of antioxidin- NV significantly decreased the secretion of IL-6 compared to PBS treatment ( Figure 7B).
MAPKs and NF-κB signaling are known to be important signal transduction pathways activated by UVB irradiation (Subedi et al., 2017). Therefore, western blot analysis was performed to further explore the effect of antioxidin-NV on MAPK and NF-κB signaling pathway in HaCaT cells and skin tissues. As illustrated in Figures 8A,B, UVB irradiation markedly increased JNK, p38, IkBα and p65 phosphorylation in HaCaT cells, but antioxidin-NV significantly decreased UVBinduced JNK, p38, IkBα and p65 phosphorylation in a concentration-dependent manner. The same results were observed in UVB-irradiated hairless mice, antioxidin-NV also significantly decreased JNK, p38, IkBα and p65 phosphorylation ( Figures 8C,D). Antioxidin-NV Rescued Collagen Production in UVB-Irradiated Hairless Mice by Rescuing a-SMA Accumulation and Restoring TGF-β1/Smad2 Signaling HSFs, which can synthesize and maintain the extracellular matrix of skin and reduce skin photoaging, are a very critical cell type in skin photoaging (Tobin, 2017). Given the above observation that antioxidin-NV significantly suppressed UVB-induced skin photoaging in hairless mice, we further explored the potential effect of antioxidin-NV on HSF cells survival rate and the accumulation of alpha smooth muscle actin (a-SMA) following UVB irradiation in vitro and in vivo. As illustrated in Figure 9A, UVB irradiation directly inhibited HSF survival rate, but antioxidin-NV obviously restored HSF cells survival rate post UVB irradiation in a concentration-dependent manner. The expression of α-SMA, a marker protein of HSF cells was examined in the skin tissue of UVBirradiated mice using IHC staining. As illustrated in Figure 9B, UVB irradiation obviously reduced the expression of α-SMA in the skin of UVB-exposed mice, but a higher accumulation of a-SMA positive staining was observed in NV treatment when compared to PBS.
Collagen is derived from HSF cells and plays important role in maintaining the elasticity of skin (Lee et al., 2012). Considering that antioxidin-NV has strong ability to restore the accumulation of a-SMA in hairless mice following UVB irradiation, and a-SMA is also a marker of myofibroblast which has a higher capacity to synthesize collagen (Nakyai et al., 2018). We further investigated whether antioxidin-NV can promote collagen production in UVBirradiated hairless mice. Masson's trichrome staining was used to evaluate the presence and distribution of collagen. As shown in Figure 9C, the collagen fibers of mice without UVB irradiation (sham) were dense and regular, while the collagen fibers of mice became less dense and more erratically arranged after UVB irradiation, but antioxidin-NV treatment markedly increased the abundance and density of collagen fibers in UVB-irradiated skins compared to PBS treatment. Type I collagen, which is the major component of collagen fibrils, is the most abundant structural protein in the skin (Makrantonaki and Zouboulis, 2007). Therefore, we further explored the potential effect of antioxidin-NV on type I collagen expression in the skin. Collagen I expression level in the skin tissue was examined using IHC staining ( Figure 9D), immunofluorescence ( Figure 9E) and western blot Figure 9F). The results showed that UVB irradiation markedly resulted in a reduction of collagen I deposition in the skin of hairless mice, while antioxidin-NV treatment significantly rescued collagen I production in hairless mice post UVB irradiation ( Figures 9D-F).
Transforming growth factor-β (TGF-β) is an important cytokine that promotes collagen production (Mouw et al., 2014). In addition, the TGF-β/Smad pathway also promotes the differentiation of myofibroblasts (Guo et al., 2009). To determine whether antioxidin-NV affect TGF-β secretion in HSF cells upon UVB irradiation, we measured the effect of antioxidin-NV on TGF-β1 production in HSF cells using ELISA and western blot, respectively. UVB irradiation obviously suppressed TGF-β1 production in HSF cells, but antioxidin-NV treatment significantly increased TGF-β1 production in UVB-irradiated HSF compared with PBS treatment in a dose-dependent manner ( Figures 10A,B). Furthermore, UVB exposure also suppressed TGF-β1 production in the skin of hairless mice, while topical application of antioxidin-NV significantly increased TGF-β1 production in the skin of UVBirradiated hairless mice compared with PBS treatment ( Figures 10C,D). Furthermore, Smad proteins, including Smad2, are essential components of downstream TGF-β signaling. As illustrated in Figure 10E, UVB irradiation markedly reduced Smad2 phosphorylation in HSF cells, but antioxidin-NV obviously increased Smad2 phosphorylation in UVB-irradiated HSF cells compared to PBS-treated cells in a dose-dependent manner, suggesting that antioxidin-NV rescued collagen production in hairless mice upon UVB exposure through restoring TGF-β1/ Smad2 signaling.
DISCUSSION
Skins are a major target of oxidative stress because of ROS that originate from both endogenous and exogenous sources. Ultraviolet radiation is the most important environmental factor in the development of skin aging that is accompanied by a gradual loss of function, physiological integrity and the ability to cope with internal and external stressors (Bocheva et al., 2019). UVB, in particular, induces biological effects that are 1000 times stronger than UVA (Diffey, 2002). Antioxidant supplementations might be an effective therapeutical strategy to restore skin homeostasis (Portugal et al., 2007;Pham-Huy et al., 2008;Godic et al., 2014). Among vertebrates, skins of amphibian display excellent radio-protective abilities and represent a resource for prospective antioxidant peptides. As a step towards understanding amphibian's radio-protective ability and identifying novel anti-photoaging peptides, we address this issue and have characterized a potential antiphotoaging peptide (antioxidin-NV) from N. ventripunctata skin in this work. The structural organization of antioxidin-NV precursor is similar to amphibian antimicrobial peptide precursors, comprising a highly conserved signal peptide and acidic spacer peptide followed by a variable mature peptide. UVB irradiation causes overproduction of reactive oxygen species (ROS) in the skin, which results in oxidative damage of proteins and nucleic acids, leading to DNA damage, inflammation and apoptosis (Portugal et al., 2007;Baek and Lee, 2016). Our results revealed that topical application of antioxidin-NV greatly suppressed UVB-induced skin erythema, thickness and wrinkle formation in hairless mice, suggesting the peptide has strong therapeutic effects against UVB-induced damage. It has been shown that UVB radiation causes DNA damage such as cyclobutane pyrimidine dimers and six to four pyrimidine-pyrimidone photoproducts (Heffernan et al., 2009), and then the damage induces phosphorylation of the Ser-139 residue of the histone variant H2AX, forming γH2AX. γH2AX is a sensitive molecular marker of DNA damage, and accumulates at the site of damage (Maréchal and Zou, 2013). We observed that UVB induced fragmentation of DNA in HaCaT cells and accumulation of γH2AX signals in the cells and in vivo, while our results suggest that antioxidin-NV is beneficial in the prevention of UVB-induced DNA damage in vivo and in vitro. The effects of antioxidin-NV on DNA damage related signaling pathways need to be further investigated to highlight its protective mechanism in vitro and in vivo. . The results were quantified by Image J. The densitometry of phosphorylated JNK, p38, IkBα and p65 were normalized to total JNK, p38, IkBα and p65, and graphed as the mean ± SD (n 3). NV-10, 20, 40 indicated the concentrations of 10, 20, 40 μg/ml respectively. ns, no significance, *p < 0.05, **p < 0.01.
Frontiers in Pharmacology | www.frontiersin.org January 2022 | Volume 12 | Article 761011 Mitochondria are considered the most important source of endogenous ROS in the cell (Gniadecki et al., 2000;Zorov et al., 2014). Excessive ROS leads to oxidative stress that is associated with the mitochondrial uncoupling respiration, formation of the mitochondrial permeability transition pore, and mitochondrial dysfunction (Tiwari et al., 2002;Salimi et al., 2019). Mitochondrial dysfunction and oxidative stress are responsible for the induction or activation of the mitochondrial pathway of apoptosis (Maity et al., 2009). Activation of effector caspases is believed to be the final step in the apoptosis pathways. Among the effector capases, caspase 3 plays a critical role in the execution of apoptosis, because it is required for oligonucleosomal DNA fragmentation and promotes the activation of other effector caspases (Rehm et al., 2002). In this study, because antioxidin-NV significantly suppressed intracellular and mitochondria ROS generation, we hypothesized that antioxidin-NV should possess the anti-apoptotic effect. As expected, antioxidin-NV ameliorated UVB-induced apoptosis and inhibited the expression of apoptosis-specific protein, cleaved caspase 3 in HaCaT cells and skin tissues. Therefore, our results showed that antioxidin-NV could prevent the activation of the mitochondrial pathway of apoptosis by scavenging ROS in vitro and in vivo. This mechanism differs from that of other agents against skin photoaging by modulating the Nrf2-dependent antioxidant responses (Chaiprasongsuk et al., 2019) or oxidative stress (Zhang et al., 2017).
Inflammation enhances the epidermal hyperproliferative response to UVB and increases production of ROS and cytokines, accelerating the aging process (Pillai et al., 2005). IL-6, a cytokine produced by various cells, such as HSF and HaCaT cells, is a signal molecule that mediates the inflammatory response (Wu et al., 2013) and is also known to be associated with ROS caused by UV radiation. In our study, antioxidin-NV significantly decreased UVB-increased IL-6 secretion in vitro and in vivo. Furthermore, after the treatment by TLR4 inhibitor MTS510, IL-6 downregulation induced by antioxidin-NV was completely inhibited. These results indicated that antioxidin-NV inhibited UVBinduced IL-6 expression by blocking TLR4-mediated inflammatory responses, then further decreased the epidermal hyperproliferative response to UVB. UVB-induced ROS production activates MAPKs and NF-κB signaling pathways, which further induce the inflammation and apoptosis in cells and cause skin aging (Subedi et al., 2017). In the present study, antioxidin-NV inhibited UVB-induced MAPK and NF-κB signaling pathway. Antioxidin-NV significantly decreased JNK, p38, IkBα and NF-κB p65 phosphorylation. This demonstrated that JNK, p38, IkBa, and NF-κB p65 signaling pathways were involved in antioxidin-NV-mediated downregulation of inflammatory cytokine production upon UVB irradiation, and they may orchestrate in regulating these cytokines expression and inhibiting skin photoaging process. According to these data, we concluded that antioxidin-NV reduced UVB-induced inflammatory response in HaCaT cells and hairless mice by attenuating UVBactivated TLR4/p38/JNK/NF-κB signaling.
Skin photoaging involves a complex interplay of primarily HaCaT, HSF cells and their associated extracellular matrix. HSF cells are very important factors in skin photoaging, because they can synthesize and maintain the extracellular matrix of skin and reduce skin photoaging (Lee et al., 2012). Antioxidin-NV restored the survival rate of HSF cells upon UVB irradiation in a concentration-dependent manner in vitro. Additionally, antioxidin-NV significantly restored UVB-reduced α-SMA Western blot showed effects of antioxidin-NV on TGF-β signaling pathways and relative activation analysis. The results were quantified by Image J. The densitometry of phosphorylated Smad2 were normalized to Smad2, and graphed as the mean ± SD (n 3). NV-10, 20, 40, VC-40 indicated the concentrations of 10, 20, 40 μg/ml respectively. ns, no significance, *p < 0.05, **p < 0.01. expression in vivo. The TGF-β pathway regulates aspects of cell growth and extracellular matrix synthesis, including collagen synthesis by dermal HSF cells. TGF-β1, a multifunctional cytokine belonged to TGF-β family members, is a key factor in collagen synthesis, which promotes the expression of collagen and type-I procollagen and inhibits the expression of MMP-1 (Kopecki et al., 2007). Our results indicated that antioxidin-NV increased TGF-β1 secretion in vitro and in vivo. Antioxidin-NV significantly increased UVB-inhibited TGF-β1 secretion in a dose-dependent manner in HaCaT cells. Antioxidin-NV also significantly upregulated TGF-β1 levels in UVB-induced skin tissues in vivo. Furthermore, Smad proteins, including Smad2, are key regulators in TGF-β signaling pathways and they are essential components of downstream TGF-β signaling. Antioxidin-NV activated phosphorylation of Smad2 to increase TGF-β1 secretion in vitro and in vivo. Collagen derived from HSF cells is one of the main building blocks of skin (Lee et al., 2012). Type I collagen, the major component of collagen fibrils, is the most abundant structural protein in the skin (Makrantonaki and Zouboulis, 2007). Antioxidin-NV significantly rescued the collagen and type I collagen production in HSF cells of the skin tissues after UVB irradiation, and type I collagen rescued by antioxidin-NV is critical for maintaining the elasticity of skin upon UVB irradiation. Therefore, antioxidin-NV rescued collagen production in UVB-irradiated hairless mice by restoring TGF-β1/Smad2 signaling.
In a recent work, two small peptides named FW-1 (FWPLI-NH 2 ) and FW-2 (FWPMI-NH 2 ) were isolated from the skin secretion of Hyla annectans. FW-1 and FW-2 directly inhibited UVB-induced tumor necrosis factor-α (TNF-α) and IL-6 secretion. The authors described that FW-1 and FW-2mediated downregulation of TNF-α and IL-6 secretion through modulating the UV-induced stress signaling pathways such as MAPKs and NF-κB. Besides, the authors described that FW-1 and -2 displayed antioxidant effects in the skins of mice by reducing UVB-induced ROS production through an unknown mechanism (Liu et al., 2021). In our work, we found that antioxidin-NV directly scavenged free radicals such as ROS and ABTS + . Both H. annectans in the recent work and N. ventripunctata in our work live in the southwestern plateau area of China. This plateau area possesses long duration of sunshine, and suffers strong ultraviolet radiation, which make the naked skin of frogs evolve an effective antioxidant system to scavenge free radicals induced by light radiation. Accordingly, series of peptides have been identified with antioxidant activity from the skin of frogs lived in this plateau area (Yang et al., 2009;Liu et al., 2010), but the previous studies did not investigate whether these peptides have anti-photoaging activity. While our work definitely indicated that antioxidin-NV-mediated reduction of free radical accumulation led to the reduction of DNA damage, apoptosis, and inflammation upon UVB radiation, thereby providing protection against UVB-induced skin pohto-aging. Our study supplement the radio-protective mechanism of frogs lived in the southwestern plateau area of China, and prove the feasibility to identify effective anti-photoaging peptide from the frogs lived this plateau area.
In conclusion, antioxidin-NV identified from N. ventripunctata skin is a bioactive/effector compound with potential anti-photoaging ability. It shows strong antioxidant activities by scaveging intracellular and mitochondrial ROS accumulation upon ultraviolet radiation. As subsequent results, it inhibits UVB-induced DNA damage, apoptosis, and inflammation. Our results suggest that the therapeutic effect of antioxidant-NV on UV-induced photoaging was mediated through the alleviation of the oxidative stress-induced process of skin photoaging. Thus, antioxidant-NV may serve as a potent candidate for the prevention and therapy of photoaging.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by The animal study was reviewed and approved by the Institutional Animal Care and Use Ethics Committee of Kunming Medical University (IACUC approval number: KMMU2020063), Yunnan, China. Written informed consent was obtained from the owners for the participation of their animals in this study. | 9,767.8 | 2022-01-19T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Biology"
] |
Fabrication and characterization of ZnO/Se1-xTex solar cells
Selenium (Se) element is a promising light-harvesting material for solar cells because of the large absorption coefficient and prominent photoconductivity. However, the efficiency of Se solar cells has been stagnated for a long time owing to the suboptimal bandgap (> 1.8 eV) and the lack of a proper electron transport layer. In this work, we tune the bandgap of the absorber to the optimal value of Shockley–Queisser limit (1.36 eV) by alloying 30% Te with 70% Se. Simultaneously, ZnO electron transport layer is selected because of the proper band alignment, and the mild reaction at ZnO/Se0.7Te0.3 interface guarantees a good-quality heterojunction. Finally, a superior efficiency of 1.85% is achieved on ZnO/Se0.7Te0.3 solar cells. Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1007/s12200-022-00040-5.
Se solar cells, based on indium tin oxide (ITO)/TiO 2 /Se/ Au device structure, reached an impressive efficiency of 5% in 1985 [15], but the progress was slow for the following 30 years [16,[18][19][20]. Si advanced so rapidly that Se has not received much attention for a long time [21,22]. Until 2017, Todorov et al. created a record efficiency of 6.5% by optimizing the functional layer thickness and adopting a MoO x hole transport layer [17]. It is notable that the bandgap of Se is out of the optimal range of S-Q limit (1-1.5 eV), so it would lead to an inadequate use of sunlight and thereby a low photocurrent. Tellurium (Te), as the congener of Se, has a narrow bandgap of 0.33 eV [23] and the same hexagonal crystal structure [24], thus, it is possible to continuously tune the bandgap of Se 1−x Te x to the optimal S-Q bandgap of 1.36 eV. Se and Te are two less-studied photovoltaic materials and stand out for their simple composition, high carrier mobility, good air stability, high photoconductivity and thermoelectric response [25,26]. They are also the significant components of transition metal dichalcogenides (TMDCs), which are widely applied in high-performance field-effect transistors (FETs) [27,28], optoelectronic devices [29], and thermoelectric devices.
Owing to the tunable photoconductivity and optical response of Se 1−x Te x , it has been used for solar cells [30], short-wave infrared photodetectors [31] and semiconductor core optical fibers [32]. In 2019, Hadar et al. investigated Se 1−x Te x films for PV application [30], but the efficiency of the alloy solar cells is less than 3%, only half of the pure Se solar cells. Therefore, it is important to choose the Se 1−x Te x film with a suitable component and bandgap. In addition, the current Se and Se 1−x Te x solar cells commonly adopt TiO 2 as an electron transport layer (ETL) [15,16,20]. Unfortunately the inertness of TiO 2 surface cannot be bonded with Se 1−x Te x tightly and may potentially give rise to an inferior interface with poor adhesion. ZnO surface is more reactive than TiO 2 , and ZnO has a higher electron mobility (> 150 cm 2 /(V·s)) [33] and lower fabrication temperature [34]. Therefore, ZnO is a preferred alternative compared to TiO 2 .
In this work, we optimized the component and bandgap of Se 1−x Te x absorber and adopted the active ZnO ETL to assemble solar cell devices. First, we alloyed Se films with Te at certain molar ratios (x = [Te] = 0.2, 0.3, 0.4, 0.5) and tuned the bandgap from 1.53 to 1.13 eV. Based on the S-Q limit, we chose the Se 0.7 Te 0.3 film with a bandgap of 1.36 eV for the target absorber material. Then, combining the band alignment and surface reactivity, ZnO ETL was selected to construct ITO/ZnO/Se 1−x Te x /Au solar cells. Factually, theoretical thermodynamic calculation confirmed that ZnO can react with Se, and the Zn 2+ exposed at (111) polar surface of ZnO fabricated by magnetron sputtering under oxygen-poor condition (O:Ar = 1:99) is more conducive to the formation of Zn-Se bonds at the ZnO/Se interface. Therefore, it can help to form a strong-adhesion interface and obtain satisfactory device performance. Finally, we achieved a superior efficiency of 1.85% on ITO/ZnO/Se 0.7 Te 0.3 /Au solar cell.
Film and device preparation
For the preparation of Se 1−x Te x raw materials, a certain proportion (x = [Te] = 0.2, 0.3, 0.4, 0.5) of Se and Te powder (99.999% purity, Aladdin) were sealed in a quartz tube, then heated at 560 °C in a muffle furnace for 24 h, and slowly cooled to room temperature with a cooling rate of 22 °C/h. For device preparation, the ITO glass (Kaivo, Zhuhai, China) with the square resistance of 6-8 Ω/sq was used as the substrate. The ITO substrates had been cleaned using a detergent, isopropanol, ethyl alcohol and DI water rinsing in sequence. Then 1 μm Se 1−x Te x films were deposited by thermal evaporation (Kurt J. Lesker, ~ 5 × 10 −3 Pa), and annealed at 200 °C for 2 min on a heating stage in the glove box. Subsequently, ZnO films (180 nm thickness) were prepared by magnetron sputtering (JCP500, Technol Science; O:Ar = 1:99 atmosphere). Finally, Au electrodes (0.09 cm 2 area, 100 nm thickness) were evaporated by the resistance evaporation thin-film system (Beijing Technol Science) under a vacuum pressure of 5 × 10 −3 Pa.
Film characterization
The morphologies and energy dispersive spectroscopy (EDS) characterization of Se 0.7 Te 0.3 films were checked by scanning electron microscopy (SEM, GeminiSEM, Zeiss, without Pt coating). The X-ray diffraction (XRD) with Cu Kα radiation (Empyrean, PANalytical B.V.) was carried out to determine the component and orientation of Se 0.7 Te 0.3 and ZnO films. The morphologies of the Se 1−x Te x and ZnO films were observed by the atomic force microscope (AFM, SPM9700, Shimadzu). The optical transmittance of Se 1−x Te x film was recorded by UV-Vis spectrophotometer (Perkin Elmer Instruments, Lambda 950 using integrating sphere). Ultraviolet photoelectron spectroscopy (UPS, AXIS-ULTRA DLD-600 W, Kratos) was used to confirm the energy level positions of Se 0.7 Te 0.3 . The Hall coefficient and carrier concentration were obtained via a Hall measurement system (Ecopia HMS5500). The X-ray photoelectron spectroscopy (XPS, AXIS-ULTRA DLD-600 W) was used to characterize the interface between Se 1−x Te x and ZnO).
Device characterization
The device performance was characterized by a digital source meter (Keithley2400) under simulated AM 1.5G solar (Oriel 94023A, light intensity of 1000 mW/cm 2 calibrated with a standard silicon cell). external quantum efficiency (EQE) measurements were carried out using a 300 W xenon lamp of Newport (Oriel, 69911) as a light source and a Newport oriel cornerstone TM 130 1/8 Monochromator (Oriel, model 74004) to split light into monochromatic waves. Capacitance-voltage (C-V) and drive-level capacitance profiling (DLCP) measurement was carried out with Keithley 4200-CVU module at a frequency of 70 kHz. Temperature dependent admittance spectral (AS, Agilent E4980A LCR meter) was used for temperature-dependent AS and conductivity measurements, and samples were put in a liquid nitrogen cryostat (Janis VPF-100). The temperature was controlled by a temperature controller (Lakeshore 325) and ranged from 80 to 320 K at a step of 10 K. When the setting temperature was stable, AS and current-voltage (I-V) measurements were performed using an impedance analyzer (Agilent E4980A LCR meter) and a semiconductor device parameter analyzer (Agilent B1500A), respectively.
Results and discussion
A certain proportion of Se and Te powder were mixed evenly to form Se 1−x Te x (x = 0.2, 0.3, 0.4, 0.5) blocks (Additional file 1: Fig. S1). Then the Se 1−x Te x films were deposited at room temperature by thermal evaporation (Fig. 1a) using Se 1−x Te x powder ground from the blocks. The as-deposited films were amorphous (Additional file 1: Fig. S2a), so a postannealing process was required. The film with intermediate component x = [Te] = 0.3 was selected to study the annealing temperature from 150 to 250 °C. The film annealed at 250 °C for 2 min was thermally decomposed because of the high saturated vapor pressure at this temperature (Additional file 1: Fig. S3d), and the film annealed at 150 °C for 2 min is incompletely crystallized (Additional file 1: Fig. S3a). When the annealing temperature was 200 °C for 2 min, the film showed flat surface, densely arranged grains and high crystallinity (Fig. 1b, c), which meets the requirements of highefficiency solar cells. Allowing for the high vapor pressure of Se, Se may escape from Se 1−x Te x films during the annealing process, giving rise to the deviation from the target component. The EDS indicated that the measured Se:(Se + Te) composition of Se 0.7 Te 0.3 film is 0.699 (Additional file 1: Fig. S4), consistent with the feeding ratio of 0.7. Therefore, the annealing process is reasonable. Subsequently, all Se 1−x Te x films were crystallized at 200 °C for 2 min.
The XRD patterns of annealed Se 1−x Te x films with 2θ ranging from 10° to 60° are shown in Additional file 1: Fig. S2b, and the zoom-in diffraction peak of (102) is depicted in Fig. 1d. The (102) peaks shift to a small degree with the increase of x in accordance with the Bragg's Law [35] and the content of Te was calculated as expected (Additional file 1: Table S1). The morphologies of the Se 1−x Te x films before and after annealing were observed by AFM, which exhibits larger grains and stronger crystallinity after annealing (Additional file 1: Fig. S5). With the increase of Te content, both the grain size of Se 1−x Te x films, and the full width at half maxima (FWHM) of (102) peak gradually decrease (Additional file 1: Fig. S5 and Table S1). This indicates a decrease in crystallinity. The transmittance and reflectance spectra were measured on an UV-Vis spectrophotometer to determine the bandgaps of the crystallized Se 1−x Te x films (Additional file 1: Fig. S6). Using Tauc method [36], the bandgaps of Se 1−x Te x films with x = 0.2, 0.3, 0.4, 0.5 are fitted as 1.53, 1.36, 1.25 and 1.13 eV, respectively (Fig. 1e). The bandgap has a linear relationship with x ( Fig. 1f), which well satisfies the Vegard's law (Eq. (1)) [37], where E g(Se) = 1.83 eV is the bandgap of Se, and E g(Te) = 0.33 eV is the bandgap of Te. Among them, Se 0.7 Te 0.3 film with a bandgap of 1.36 eV has more potential according to the S−Q limit.
The position of energy levels, conduction type and carrier density are important to design the solar cell structure. Ultraviolet photoelectron spectroscopy (UPS) of Se 0.7 Te 0.3 film demonstrated that the valence band maximum (VBM) and conduction band minimum (CBM) of the annealed Se 0.7 Te 0.3 film are − 5.31 and − 3.95 eV, respectively (Additional file 1: Fig. S7). The detailed calculation process to obtain the VBM and CBM is shown in Additional file 1:. The positive Hall coefficient (R H , Additional file 1: An n-type ELT is needed to construct a heterojunction with p-type Se 0.7 Te 0.3 film. Here, we selected the n-type ZnO because of its higher electron mobility and lower synthesis temperature than the commonly used TiO 2 . Gibbs free energy calculation (Eq. (2), Table 1) [38] shows ZnO can slightly react with Se 1−x Te x during 200 °C annealing, but TiO cannot. ZnO and TiO 2 were further compared experimentally and ZnO showed a better performance as shown in Additional file 1: Fig. S8.
where Δ r G ⊖ m , Δ r H ⊖ m and Δ r S ⊖ m mean the changes of Gibbs free energy, enthalpy and entropy, respectively, and T is the temperature. The parameters and results of the calculation procedure are shown in Additional file 1: Tables S3 and S4 [39][40][41]. The existence of ZnSe is proven by the XPS measurement (Additional file 1: Fig. S9d, e). The ZnSe transition layer can enhance the adhesion between Se 1−x Te x and ZnO substrate and benefit the low-defectivity ZnO/Se 1−x Te x heterojunction interface (see Additional file 1 for experimental details). The finally designed device structure is shown in Fig. 2a, where ITO and gold with high work function are chosen as front and back electrodes, respectively.
ZnO prepared by magnetron sputtering, shows a wide bandgap of 3.22 eV as depicted in Additional file 1: Fig. S10a, thus it does not limit the absorption efficiency of the absorber at visible band. In addition, the smooth, uniform and compact surface of ZnO (1.712 nm roughness and ~ 80 nm grain size, Additional file 1: Fig. S11) is conducive to the subsequent fabrication of Se 1−x Te x absorbers and gold electrodes (see Additional file 1 for detailed descriptions). The XRD of ZnO films shows that the preferred orientation is polar (111) facet (Additional file 1: Fig. S10b). According to the first-principle calculation [42], the Zn-terminal (111) facet has lower energy than the O-terminal facet. Therefore, our ZnO film is conducive to the formation of a thin Zn-Se transition layer at the interface with Se 1−x Te x film. Combining the energy band of ZnO [43], ZnSe [44], and Se 0.7 Te 0.3 , the band alignment is shown in Fig. 2b, which demonstrates no transport barrier for photogenerated carriers. The cross-section SEM image of the device (Fig. 2c) displays a decent interface. The thickness of ZnO and Se 1−x Te x were 180 and 1000 nm, respectively, but the expected ZnSe was too thin to be observed by crosssection SEM.
The device performance was characterized by a digital source meter under simulated AM 1. Table 2, with x increases, the open-circuit voltage (V OC ) of Se 1−x Te x solar cells decreases as expected, but the short-circuit current (J SC ) does not always increase due to the current loss at long wavelengths according to Additional file 1: Fig. S12c. In addition, the fill factor (FF) of Se 1−x Te x solar cells is rather low because of the cliff at the interface and the leakage according to the small shunt resistance (R sh ) as shown in Additional file 1: Table S5. Then Se 1−x Te x solar cells with x = 0.2, 0.3, 0.4 and 0.5 showed Table 2). Among them, Se 0.7 Te 0.3 solar cell stood out with a better balance between V OC and J SC . Thus, we mainly focused on the Se 0.7 Te 0.3 device and analyzed its air stability, defect properties and recombination mechanism, for the sake of providing guidance for the further performance optimization.
For the air stability, we found that the unencapsulated Se 0.7 Te 0.3 solar cells demonstrated an improved efficiency from 0.81% to 1.25% after 1-month storage in ambient conditions ( Fig. 3a and Table 3), as well as the other Se 1−x Te x device (Additional file 1: Figs. S12b, S13b and Table S5). After 9 months, the efficiency of Se 0.7 Te 0.3 device further increased to 1.85% (Fig. 3a and Table 3), a similar phenomenon was also observed by Todorov et al. [17] To analyze the degree of defect recombination of aged Se 0.7 Te 0.3 devices, the quality factor (A) fitting [45] and Hall effect measurement were conducted. By fitting the dark J-V curve [5,46,47], the A (1.34-1.41) of Se 0.7 Te 0.3 device, after 9-month aging was obtained, smaller than that (1.56) after 1-month aging (Fig. 3b). Through Hall effect measurement, the carrier concentration (p) of Se 0.7 Te 0.3 film after 1-month was 1.88 × 10 14 cm −3 (Additional file 1: Table S2), while the p after 9-month aging was too small to be measured. The smaller A and Hall effect results illustrate the lower defect recombination (see Additional file 1 for more analysis about A). The mechanism of defect reduction in Se 0.7 Te 0.3 film can be explained by the low diffusion barrier (0.16 eV) of Se (or Te) vacancy along Se-Se (or Te-Te) chains as shown in Additional file 1: Fig. S14 [48]. It means that the defects in Se 0.7 Te 0.3 can reduce by the way of a self-healing process, resulting in better device performance.
Although Se 0.7 Te 0.3 has great potential compared to Se, the device performance is inferior to the pure Se solar cells at the current stage. Inspired by Cao's work [49], a multijunction Se 1−x Te x -based solar cell, with the gradient distribution of the absorbers in each sub-cell to absorb the full solar spectrum, will optimize the efficiency in the future. But for now, we are focusing on the performance improvements of Se 1−x Te x single-junction solar cell. Therefore, a series of device physical characterizations were applied to understand the loss mechanism in our devices. According to the external quantum efficiency (EQE) spectra, the absorption edge of Se 0.7 Te 0.3 solar cells is red shifted compared with pure Se solar cells (Fig. 3c). The full spectrum integral J SC of Se 0.7 Te 0.3 solar cells is 9.9 mA/cm 2 , close to the J SC from the J-V curve. However, the collection efficiency of photogenerated carriers at long wavelengths is weak, which is always attributed to the short carrier diffusion length or nonradiative recombination centers in Se 0.7 Te 0.3 absorber. The width of the depletion region (x d ) of Se solar cells is 260 nm (Additional file 1: Fig. S15a) and the carrier diffusion length (L d ) is 480 nm (Additional file 1: Fig. S15c). Thus, the optimal thickness of Se films is 740 nm, so the absorber should be thinner to reduce the carrier recombination loss. To explore the V OC loss mechanism in the Se 0.7 Te 0.3 solar cells, we conducted the device physical characterizations to analyze the recombination loss through A and the light intensity dependent V OC . The A (1-2) of the device implied that the main recombination mechanism in Se 0.7 Te 0.3 solar cells is interface recombination. The J-V curves of the device were measured at different light intensities from 1 to 100 mW/cm 2 . Figure 3d shows that the V OC and logarithm light intensity have a linear relationship in accordance with Eq. (3) [50].
while the J SC and light intensity satisfy the power law in accordance with Eq. (4) [50].
where I, m, k B , q, and α represent the light intensity, a constant, Boltzmann constant, elementary charge and logarithmic coefficient, respectively. The extracted m and α are 1.7 and 0.9, respectively. When m is larger than 1 and α is smaller than 1, the device performance is governed by the defect-related nonradiative recombination. V OC deficit (defined by (E g − V OC )/q) of Se 0.7 Te 0.3 solar cell is 1.04 eV. It is known that the radiation recombination loss at room temperature is less than 0.3 V [51], much smaller than the real V OC loss in our devices. Hence, the nonradiative recombination loss (1.04 − 0.3 = 0.74 eV) dominates 72% of total V OC loss. To sum up, the performance of Se 0.7 Te 0.3 device is governed by the ZnO/Se 0.7 Te 0.3 interface recombination, and it can be minimized by interface energy band engineering or increasing the doping concentration of the absorber.
Next, we further identified the interface defect information by C-V, DLCP and AS measurement. The C-V and DLCP curves are shown in Additional file 1: Fig. S16. To acquire the defect concentration, an abrupt heterojunction model was used to fit the experimental data. The capacitance and voltage satisfied the following relationship (Eq. (5)) [52].
where V bi , A, ε and N A represent for a built-in electric field, electrode area, permittivity and doping concentration, respectively. The intercept of the linear fitting (Additional file 1: Fig. S16a) on the x-axis represents the builtin potential (V bi = 0.377 V), which is close to the V OC of 0.348 V. The small V bi results from the small Fermi energy level difference between ZnO (− 4.32 eV) and Se 0.7 Te 0.3 (− 4.73 eV). Therefore, it is important to increase the free hole density of Se 0.7 Te 0.3 in the future. The doping density calculated through C-V and DLCP measurement are N A,CV = 1.65 × 10 16 cm −3 and N A,DLCP = 1.06 × 10 16 cm −3 , respectively. Interface defects can be calculated by the difference between N A,DLCP and N A,CV (Fig. 4a), the interface defect concentration of the device is 5.9 × 10 15 cm −3 , which acts as non-radiative recombination centers, and hence affects the charge extraction. Interfacial defects may derive from interfacial Se or Te vacancies and the ZnO/Se 1−x Te x lattice mismatch.
Temperature dependent AS measurement was further performed to study the defect depth and defect density of state (4) J SC ∝ I , . According to the AS and differential capacitance spectra (Additional file 1: Fig. S17), there is a defect signal at the frequency range from 10 2 to 10 4 Hz and at the temperature range from 180 to 240 K. The defect depth (E d ) can be calculated by Arrhenius formula (Eq. (6)) [53], where ƒ is the frequency, ξ is a constant without physical meaning. As shown in Fig. 4b, the fitted E d is 0.017 eV. To further confirm defect depth obtained with AS, we measured the temperature dependent dark I-V curves from 80 to 320 K (Additional file 1: Fig. S18) and calculated defect depth E a by Eq. (7) [54].
where σ means conductivity and σ 0 is a constant without physical meaning. As shown in Fig. 4c where ƒ is the frequency and x d is the width of depletion region. The DOSt of Se 0.7 Te 0.3 film is shown in Fig. 4d. The concentration by integrating the defect DOSt was 1.23 × 10 15 cm −3 , which is two orders of magnitude higher than that of the traditional high-efficiency CdTe and Cu 2 (In,Ga)(S,Se) 2 thin film solar cells [56]. More effort should be done to reduce the interface and bulk defects in the future.
Conclusion
In conclusion, ZnO/Se 1−x Te x solar cells were fabricated in a full vacuum environment at low temperature (less than 200 °C). We found that the Zn 2+ exposed surface of ZnO ETL would bond with Se to form a high-quality ZnO/ Se 1−x Te x heterojunction interface during the post-annealing process. We then tuned the bandgaps of Se 1−x Te x to the optimal value of S-Q limit (1.36 eV) by alloying 30% Te with 70% Se. Consequently, a superior efficiency of 1.85% was achieved based on ITO/ZnO/Se 0.7 Te 0.3 /Au device. The analysis of the recombination mechanism of the Se 0.7 Te 0.3 device implied that the defects of ZnO/Se 0.7 Te 0.3 interface and Se 0.7 Te 0.3 thin film may limit the device efficiency. Our results confirmed that the construction of efficient ZnO/ Se 0.7 Te 0.3 is feasible and represented an important advance for the realization of stable, efficient and green Se 1−x Te x solar cells.
Author contributions JZ carried out the film preparation, device design and performance analysis of Se 1−x Te x solar cells, and drafted the manuscript. JT supervised the topic selection of the manuscript, and CC supervised the writing and polishing of the manuscript. Other authors participated in the analysis and discussion of the experimental phenomena. All authors read and approved the final manuscript.
Declarations
Competing interests The authors declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 5,689.6 | 2022-09-08T00:00:00.000 | [
"Materials Science"
] |
Characterization of Fines Produced by Degradation of Polymetallic Nodules from the Clarion–Clipperton Zone
: The discharge of fluid–particle mixture tailings can cause serious disturbance to the marine environment in deep-sea mining of polymetallic nodules. Unrecovered nodule fines are one of the key components of the tailings, but little information has been gained on their properties. Here, we report major, trace, and rare earth element compositions of <63 μ m particles produced by the experimental degradation of two types of polymetallic nodules from the Clarion–Clipperton Zone. Compared to the bulk nodules, the fines produced are enriched in Al, K, and Fe and depleted in Mn, Co, Ni, As, Mo, and Cd. The deviation from the bulk composition of original nodules is particularly pronounced in the finer fraction of particles. With X-ray diffraction patterns showing a general increase in silicate and aluminosilicates in the fines, the observed trends indicate a signifi-cant contribution of sediment particles released from the pores and cracks of nodules. Not only the amount but also the composition of nodule fines is expected to significantly differ depending on the minimum recovery size of particles at the mining vessel.
Introduction
The economic potential of polymetallic nodules (also called ferromanganese nodules) has attracted attention for more than half a century [1], but actual commercial mining has yet to take place. While the technical issues hampering their exploitation have been partially resolved over time, environmental issues have emerged [2][3][4]. The assessment of environmental risks is now a prerequisite for a potential polymetallic nodule mining practice. This is especially true since the nodule fields with high economic value are mostly found in areas beyond national jurisdiction [5] and thus subject to the International Seabed Authority (ISA) regulations.
A mining system design for polymetallic nodules consists of the following parts in general: a miner at the seafloor, a mining platform/vessel near the sea surface, a riser in between, and a discharge system at some depth in the water column [6]. It is envisaged that the biggest environmental impact will be caused by the miner, and that the mining platform and the riser will result in relatively minor disturbances [7]. Disturbance from the discharge system is another important yet poorly understood factor that may harm broad areas from the surface to the seafloor, both inside and outside the mining block [8,9]. A mixture of bottom water, sediment, benthic biota, and unrecovered nodule fragments will be released back to the ocean after separation processes at the surface platform [10,11]. One preferable strategy to avoid the most damaging outcome is to re-lease them below the oxygen-minimum zone and the thermocline [12], but this is certainly insufficient to alleviate growing concerns on the environmental impacts of tailings discharge.
Unrecovered fines of polymetallic nodules will constitute an important part of solids in the discharge. They will be particularly important for the nodule mining system with a hydraulic pump, which is the most used lifting method at present for prospecting [13,14]. Turbulence in the lifting pipe of several kilometers length will facilitate numerous collisions of friable nodules, resulting in particle failure through impact fragmentation, chipping, attrition, and/or abrasion [15,16]. Both environmental impact and ore loss can be minimized if all the produced particles are recovered before discharge, but a thorough separation is expensive and inefficient. In order to maintain economic feasibility of the mining, particles smaller than a certain size are bound to be released with seawater. The size of particles that cannot be efficiently recovered may differ depending on the design of the separation process, but it is expected that at least particles smaller than 8 μm will be technically difficult to recover [17].
To date, only a handful of studies have examined the degradation of polymetallic nodules during a mining operation (e.g., [17][18][19][20][21][22][23]). Little information indeed has been provided on the particles less than few tens of micrometers in diameter, despite the fact that those small ones are actually more likely to be unrecovered and released into the ocean. Most previous studies focused only on the size reduction of nodule fragments and paid little attention to their chemistry or mineralogy. Considering that polymetallic nodules are both physically and chemically heterogeneous with numerous microlayers, inclusions, and impurities [24,25], the composition of the nodule fines produced can be largely different from that of the bulk nodules.
Focusing exclusively on the silt-and clay-sized (<63 μm) fractions, we examine the particles produced by an experimental degradation of polymetallic nodules. The primary purpose of this study is to understand how the degraded particles differ from the original nodules in composition. Special attention is paid to the elements of economic interest (Cu, Ni, Co, Mn, Zn, Mo, and rare earth elements (REEs)) and environmental concern (the aforementioned elements plus As, Cd, and Pb). The improved understanding on the fines will provide useful information for future environmental impact assessments and the design of environmentally acceptable mining systems.
Experimental Setup and Procedures
Current understanding on the degradation of the polymetallic nodules during the mining process has been gained from laboratory experiments for the most part, because in situ experiments were mostly unavailable due to high cost and regulatory constraints. Accordingly, several studies have used lift pumps reduced to a laboratory scale that mimic on-site facilities [19,22], and others took a more abstracted approach by using conventional instruments such as vibrating mill, rotary mill, and ball mill [20,21]. In this study, we opted to use a planetary ball mill (Retsch PM 100), since it is particularly suitable for the investigation of small-sized particles. Particle-wall and particle-particle interactions were reproduced by a rotating agate jar filled with polymetallic nodules and water but without any ball charge. A constant sun wheel speed of 100 rpm was used. Both frictional and impact forces act on nodules under this setting.
Two types of polymetallic nodules collected from different localities in the Korea Deep Ocean Study (KODOS) area for nodule exploration in the Clarion-Clipperton Zone (CCZ) were used for the experiment (Figure 1) [26]. Type 1 nodules are generally 9-12 cm in diameter and have discoidal shapes. Type 2 nodules are generally 3-6 cm in diameter and have sub-spherical shapes and smoother surface. Type 1 nodules are usually more porous and fragile, while Type 2 nodules have a relatively dense internal structure (Figure 2). These two contrasting types of nodules were selected to examine nodules formed by different processes (diagenetic vs. hydrogenetic), although all CCZ nodules are strictly mixed types consisting of both diagenetic and hydrogenetic layers. For each type, about 3 kg of nodules were crushed to a size smaller than 2 cm in diameter before the experiment, as it is a common practice for miners to crush nodules before vertical transport in order to prevent clogging of the pipes [27,28]. Five samples each were randomly taken from a gently mixed pile of crushed nodules and were completely ground for the bulk analysis; the rest of the pile was used for the degradation experiment. Degradation in a planetary ball mill was carried out for 15, 25, and 35 min for each nodule type, and all the batches were run in triplicate. For each run, crushed nodules with a dry weight of 25 g were prepared in a 250 mL agate jar, which was filled with distilled water to a total of 250 g. When the milling was over, the contents in the agate jar were wet-sieved through a 63 μm sieve using distilled water. The coarse particles left in the sieve corresponded to the fraction that will be easily separated by on-board processing and thus were removed from the further analysis after dry weighing. A small part of the well-mixed suspension containing <63 μm particles was taken for particle size distribution analysis. The rest of the suspension went through further size separation for bulk chemical analysis and X-ray diffraction (XRD) analysis. Size separation into 32-63 μm, 16-32 μm, 8-16 μm, and <8 μm fractions was carried out by using Stoke's Law, where the settling velocity of a particle is inversely proportional to the square of its diameter. The settling time was calculated assuming a wet density of 1.9 g/cm 3 . Withdrawal of supernatant after suspension in distilled water in a 50 ml settling tube was repeated three times per each separation step.
Analytical Methods
The particle size distribution of the <63 μm fraction of each sample was analyzed using the laser diffraction method with a Malvern Mastersizer 2000. Samples were pretreated with 2% Calgon solution and were dispersed in an ultrasonic bath for 5 minutes just before the measurement to prevent agglomeration.
For elemental analysis, 20-100 mg of freeze-dried samples were each digested by a mixture of 3 mL hydrochloric acid (HCl) and 0.5 ml hydrofluoric acid (HF) in a tightly closed Teflon vessel at 185 °C for 36 hours. The vessel was cooled, opened, heated at 185 °C until dryness, and a mixture of 1 ml perchloric acid (HClO4) and 3 mL 6N nitric acid (HNO3) was added. Then, the mixture was heated until dryness, treated with 2 mL 6N nitric acid HNO3, heated again until dryness, and finally diluted with 2% HNO3.
The major element and trace element (including rare earths) compositions of the digested samples were each analyzed by using an inductive coupled plasma-optical emission spectrometry (ICP-OES, Optima 3000DV, PerkinElmer) and an inductive coupled plasma-mass spectrometry (ICP-MS, PlasmaTrace, VG elemental), respectively, which were both housed at the Korea Institute of Ocean Science and Technology in Busan, Republic of Korea. One Geological Survey of Japan standard (JMn-1) and two United States Geological Survey standards (NOD-A-1 and NOD-P-1) were used as certified reference materials. Analytical results for three standard materials were generally within ± 5% of reference values for each element.
The mineralogy of the bulk nodules and nodule fines were analyzed with a PANalytical X'pert Pro X-ray diffractometer housed at the Korea Institute of Ocean Science and Technology, using Cu-Kα radiation generated at 40 kV and 20mA. The powdered samples were scanned from 3° to 70° with a step size of 0.02° and a measuring speed of 1° per minute at room temperature.
Particle Size Distribution
Size distributions of <63 μm particles from polymetallic nodules show a gradual increase in proportion of finer particles over time (Figure 3), indicating progressive production of the fines during the experiment. Volume fractions of <8 μm particles among total <63 μm particles produced by initial crushing of Type 1 and Type 2 nodules were 24.5% and 19.5%, respectively. They increased to 33.8% and to 25.4% after 15 min of degradation in a planetary ball mill, and after 35 min, they increased further to 38.7% and to 27.8%, respectively. Median sizes (in volume basis) of total <63 μm particles derived from Type 1 and Type 2 nodules were initially 22.5 and 25.8 μm, but they decreased to 13.3 and 16.8 μm, respectively, after 35 min of degradation.
Chemistry
The average bulk composition of Type 1 and Type 2 nodules is presented in Table 1, 2. As expected, the high Mn/Fe ratio (≈6.4) and high Ni and Cu contents of Type 1 nodules suggest their dominantly diagenetic origin, whereas the low Mn/Fe ratio (≈1.2) and high Co and Ce contents of Type 2 nodules indicate their dominantly hydrogenetic origin [5,29].
The chemical composition of the <63 μm particles from Type 1 nodules is given in Table 1. Compared to the bulk nodules, Fe, Al, K, Ti, and Zr are relatively abundant in the fines, whereas Na, Mn, Co, Ni, Cu, Zn, As, Mo, and Cd are generally deficient (Figure 4a-c). The deviation from the bulk nodule composition is usually most pronounced in the finest fraction (<8 μm; red circles in Figure 4). The Mn, Ni, and Cu contents of the 32-63 μm fraction are about half of those of the bulk nodules, but the <8 μm fraction contains only about one-fifth of those of the bulk nodules. P, Ca, Pb, and REEs show opposite patterns between different size fractions; they are generally abundant in the 32-63 μm fraction and deficient in the <8 μm fraction compared to the bulk nodules.
Causes of Compositional Variation
Our investigation on experimentally produced fines of polymetallic nodules shows that their elemental composition differs widely from that of original nodules. Many elements exhibited similar partitioning trends in both nodule types: Al, K, and Fe are relatively enriched, whereas Mn, Co, Ni, As, Mo, and Cd are depleted in the fines (Figure 4). The enrichment or depletion of each element is usually most noticeable in the finest fraction of particles (< 8 μm). The Mn and Al content in Type 1 nodules, which were 28.6% and 2.0%, respectively, changed to ≈6.4% and ≈6.7% in the <8 μm fines, respectively. The Mn and Al content in Type 2 nodules, which were 16.4% and 1.4%, respectively, changed to ≈9.0% and ≈4.5% in the <8 μm fines, respectively.
The above results suggest the selective incorporation of certain mineral phases in the < 63 μm fraction. Indeed, the XRD patterns indicate the general increase in silicates and aluminosilicates in the fines ( Figure 5). The most straightforward explanation for these observations is that sediment particles within the pores and cracks of nodules are released to form part of the fines. Abyssal sediments in the KODOS area as well as other nodule exploration areas in the CCZ have average grain size of about 3-30 μm [31][32][33][34][35]. Thus, it is reasonable to expect that sediment particles encapsulated in the nodules are mostly silts and clays as well. Once released, they will be concentrated in the < 63 μm fraction in higher proportion than were in the bulk nodules.
Surface sediments in the KODOS area where the nodule samples were collected are dominated by quartz, feldspar, illite, smectite, and biogenic silica [26]. Sediments within the nodules may have somewhat different mineralogical composition, but a study from the UK license area in the CCZ have shown that quartz, feldspar, phillipsite, and various clay minerals make up a sizable portion in the bulk mineralogy of nodules [25]. The concentration of such minerals can explain the enrichment of Al and K in the nodule fines. It also effectively explains the greater decrease observed for Type 1 nodule fines in the abundance of most trace metals (e.g., Mn, Co, Ni, Cu; Figure 4), because more porous and fragile Type 1 nodules will likely release a greater amount of encapsulated sediments compared to the Type 2 nodules (Figure 3). Whether this can be generalized as a universal characteristic of dominantly diagenetic nodules in the CCZ needs further evaluation, but dendritic structures that frequently encapsulate sediment particles are typical for diagenetic layers, which precipitate in the sediment pores [5,24,36].
While the concentration of detrital minerals in the <63 μm fraction seems to be responsible for a large part of compositional variability, it is possible that ferromanganese parts of nodules may also enter the fines unevenly. In particular, the enrichment of Fe in the nodule fines is noteworthy. Surface sediments in the KODOS area contain about 4 wt % of Fe, which is similar to the average Fe content of Type 1 nodules and much lower than that of Type 2 nodules (Tables 1 and 2). Yet the fines generated from both nodule types show consistently higher Fe content compared to the bulk nodules, suggesting that a certain Fe-rich phase is also concentrated in the <63 μm fraction. In the case of Type 1 nodules, this seems to be because the hydrogenetic parts of the nodules are preferentially incorporated into the fines. The observation that the relative abundance of Co in fines is not as low as those of Mn, Ni, and Cu is in line with this interpretation (Figure 4a-c). The mechanism underlying this selective incorporation is unclear, but one possibility lies in the secondary fillings of pore spaces within nodules. The idea is supported by the report that pore-filling materials of polymetallic nodules from the German license area in the CCZ mostly have a chemical composition that is typical for hydrogenetic precipitation with low Mn/Fe ratios [24,37]. In the case of Type 2 nodules, we suspect the preferential incorporation of iron (oxyhydr)oxides (e.g., ferrihydrite, goethite) over δ-MnO2, but a more extensive investigation is required to answer this question properly, which is beyond the scope of this paper.
As detrital minerals become abundant, most of the potentially toxic elements such as copper, zinc, and cadmium show decreased concentration in the finer fractions. However, this is not exactly the case for lead. A lead content similar to or higher than that of the bulk nodules is observed for different size fractions of fines (Tables 1 and 2), which might be related to the dissolution and readsorption behavior of lead [38]. Together with the arsenic content that varies in a somewhat disorganized fashion, this highlights the need for a much more in-depth investigation of heavy metals in the nodule fines.
Implications on the Nodule Mining
The fines produced by the physical degradation of nodules and the sediments accidentally taken together with nodules have long been regarded as the two major components of the solids in discharge. A few studies have quantitatively assessed their amount. Ozturgut et al. (1981) [18], based on the on-site test results of the DOMES (Deep Ocean Mining Environmental Study) Project, estimated that the abraded nodule fragments and the entrained sediments in discharge will each amount to 5% and 20% of the total mass of the nodules produced, respectively. Yamazaki et al. (1991) [19] proposed 7% and 13% for the same parameters based on their laboratory experiments. Oebius et al. (2001) [7] estimated a much lesser amount: 1-2% and ≈4% of the nodule mass for the fines and the sediments, respectively. The large discrepancy between these previous estimates implies that the actual amount of mine tailings can vary greatly depending on the engineering system.
With the development of various technologies for polymetallic nodule mining (see [39] for examples), some types of miners are now expected to entrain minimal amounts of sediment. However, based on our results, it is inferred that not only sediments around the nodules but also sediments "inside" the nodules will contribute significantly to the discharge. The important difference between the two is that the former arises from incomplete screening in the nodule collection process, whereas the latter is mainly generated in the lifting process. In other words, the latter is strictly a part of the nodule fines. This means that even if a well-designed miner effectively rejects the sediments and collects only the nodules from the seafloor, the discharged materials can still contain a large amount of sediment particles later released from the nodules.
We also emphasize the importance of the recovery size of particles at the mining vessel for the reliable assessment of environmental impacts of the nodule fines. Many studies have already pointed out that upon discharge, the grain size distribution of tailings will be the most important factor that will largely determine the temporal and spatial scales of their impact [6,40,41]. Our results add that the composition of the fines as well as their amount will depend to a great extent on the recovery size of the particles. For example, about three times as many fines are to be discharged into the ocean if only >64 μm particles are retained, compared to the case of recovering all >8 μm particles produced in our experiment (Figure 3). At the same time, the discharged materials will contain up to several times higher concentrations of different toxic elements (Tables 1 and 2).
As of February 2021, 18 contracts for the exploration of polymetallic nodules, 16 of which are for exploration in the CCZ, have been granted by the ISA. The ISA is currently drafting regulations on the commercial exploitation of deep-sea mineral resources [42]. The latest Draft Regulations on Exploitation of Mineral Resources in the Area [43], in addition to the provisions on general obligations relating to the marine environment, specifically provide that discharge should not be made except where permitted in accordance with "The assessment framework for Mining Discharges as set out in the Guidelines", which brings the need to develop relevant guidelines in parallel to the regulations [44]. Insufficient scientific knowledge is one of the key challenges in developing the regulations and associated guidelines [42,45,46], and the results of our study contribute to a better understanding of the degradation products of nodules. Further scien-tific research on various aspects of the mining discharge are encouraged in order to fill large knowledge gaps on the road to the sustainable exploitation of polymetallic nodules.
Conclusions
The chemistry and mineralogy of the fines produced by the degradation of polymetallic nodules differ from those of the original nodules from which they derived, and they also differ by particle size. This clearly indicates that both the amount and the properties of the tailings to be discharged into the ocean will be heavily affected by the minimum recovery size of particles. We suggest that the sediment particles inside the pore space of nodules later released during the degradation are majorly responsible for the observed compositional variation. Their contribution to the discharge is expected be particularly important when mining diagenetic nodules. The possible difference in composition between sediments inside and outside the nodules and the (re)adsorption behavior of toxic heavy metals in the fines during the degradation should be further explored. | 4,964.4 | 2021-02-15T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
An Exploration of Blockchain-based Traceability in Food Supply Chains: On the Benefits of Distributed Digital Records from Farm to Fork
Blockchain is a novel distributed database technology that could solve some issues of traditional traceability systems, such as cost of adoption and vulnerabilities to hacking and data tampering. This study aims to gain insights on the benefits of applying blockchain technology for traceability in food supply chains through literature review and an investigation of five companies that are experimenting with blockchain-based food traceability. Our findings suggest that, upon implementation and contribution by all supply chain participants, blockchain-based traceability can provide cost-savings, reduced response time to food scandals and food-borne illness outbreaks, improved security and accuracy, better compliance with government regulations, and thus increase consumer trust. Companies are increasingly taking for the safety of the food they sell, rather than risk their brand on a large recall.
Introduction
In recent years, various food scandals have damaged consumer trust in the food industry across the world (Sarpong, 2014;Garaus & Treiblmaier, 2021). In 2011, China witnessed a massive pork mislabeling scandal along with food fraud, which led to recalling donkey meat products that included fox meat (Kamath, 2018). In 2013, several meat suppliers in Europe replaced lamb and beef with horsemeat, which affected 4.5 million processed products, equaling 1,000 tons of food (Kamath, 2018). In 2017, papayas in the US market were linked to a multi-state outbreak of Salmonella (Kamath, 2018). Meanwhile four million Canadians are affected by domestically acquired foodborne illnesses each year, which resulted from food contamination (Astill et al., 2019).
Both food companies and consumers would benefit from faster response times to food scandals and outbreaks of foodborne illnesses (Aung & Chang, 2014;Astill et al., 2019). Typically, food incidents are slow in being handled due to low transparency and inefficient batch sorting, which leads to an inability to trace food items in the supply chain (Sarpong, 2014;He et al., 2018). Further, the complexity and dynamics of modern food supply chains, along with large distances between supply chain entities, make it an ongoing challenge to ensure food safety and quality (He et al., 2018;Behnke & Jansson, 2020). Hence, traceability has become paramount in global food supply chains because consumers expect higher levels of reliability and safety (Casino et al., 2019;Behnke & Janssen, 2020;Tayal et al., 2021).
"Traceability" refers to the ability to track an item in the supply chain from producer to user, enabled by rapid access to relevant and reliable information (Bhatt et al., There are growing internal and external pressures for traceability in food supply chains due to food scandals. Traceability refers to tracking food from the consumer back to the farm and vice versa for quality control and management. However, many traceability solutions have failed to meet the needs of supply chain stakeholders. Blockchain is a novel distributed database technology that could solve some issues of traditional traceability systems, such as cost of adoption and vulnerabilities to hacking and data tampering. This study aims to gain insights on the benefits of applying blockchain technology for traceability in food supply chains through literature review and an investigation of five companies that are experimenting with blockchain-based food traceability. Our findings suggest that, upon implementation and contribution by all supply chain participants, blockchain-based traceability can provide costsavings, reduced response time to food scandals and food-borne illness outbreaks, improved security and accuracy, better compliance with government regulations, and thus increase consumer trust. 2013; Xiong et al., 2020). It helps to ensure food safety and quality, as food is a perishable product and foodborne illnesses can originate from mishandling anywhere in a supply chain (Yon & Woo, 2018). That said, retailers are often inundated with data, while suppliers are reluctant to waste valuable transport time completing checklists and audits (Sarpong, 2014). Hence, automated data gathering and storage might be preferable to human data entry practices, and a distributed system solution with the option of data mining could be a more feasible solution than relying on a single centralized database (Bhatt et al., 2013;Bumblauskas et al., 2020;van Hilten et al., 2020).
As a meta-technology, blockchain allows for improved traceability in food supply chains (Kramer et al., 2021). Being built on a decentralized and distributed database (Vu et al., 2021), blockchain enhances transparency, accountability, trust, and traceability in supply chains (Kim & Laskowski, 2017;Gurtu & Johny, 2019;Behnke & Jansson, 2020). Kshetri (2018) also argues that it contributes to cost, quality, speed, dependability, risk reduction, sustainability, and flexibility goals. Nonetheless, the adoption of blockchain technology in food supply chain management is still in its infancy, thus allowing us only a limited understanding of its potential (Treiblmaier, 2018;Müßigmann et al., 2020;Lim et al., 2021). More research is needed on the benefits and challenges of blockchain-based traceability in food supply chains.
This study aims to provide insights mainly about the benefits of blockchain-based traceability in food supply chains. In so doing, the article first reviews recent literature on blockchain technology and traceability in supply chain management, and then discusses five industry cases on blockchain-based traceability in food supply chains. The insights derived from the cases contribute to our extant body of knowledge on the application of blockchain in the supply chain management field, by outlining how blockchain helps to improve food product traceability. The article concludes with implications for practice and suggestions about potential future research avenues.
Literature Review
The impacts of blockchain on supply chain management Blockchains use a common shared ledger that records transactions made by users (Mansfield-Devine, 2017; Casado-Vara et al., 2018; Kamilaris et al., 2019). A sequential list of timestamped records gets spread among a network of users whose machines are all running the blockchain protocol, in order to be validated by the nodes (Mansfield-Devine, 2017; Kamilaris et al., 2019;Wang et al., 2019). "Blocks" form a linked chain of hashed information, and each block must refer to the preceding block to be valid (Tapscott & Tapscott, 2017). This distributed approach is more secure than earlier technology allowed because it uses cryptography (Casado-Vara et al., 2018;Chang & Chen, 2020;Wang et al., 2020), and more trustworthy because the structure permanently time-stamps and stores the information in blocks, preventing anyone from altering the ledger (Lemieux, 2016;Ying et al., 2018;Behnke & Janssen, 2020).
Indeed, key characteristics of blockchain-based systems include security, reliability, transparency, and immutability (Wang et al., 2020). In a blockchain system, no central authority controls or maintains the network. Instead, the network is maintained by the participating nodes, while updating information in the database requires the consensus of ledger community participants (Ying et al., 2018;Pournader et al., 2020).
Conversely, when using a centralized database, someone must act as a trusted authority (Mansfield-Devine, 2017). This central authority actor, however, for a variety of reasons, may turn out to have a somewhat or very limited view of an entire supply chain, which thus hinders collaboration, delays information processing, and increases the risk of data corruption, as data flows through intermediaries (Apte & Petrovsky, 2016;Mukri, 2018). Thus, a traditional pre-or nonblockchain system is more vulnerable to corruption, hacking, data leaking, contractual disputes, tampering, and fraud (Azzi et al., 2019;Min, 2019;Chen et al., 2021). This makes blockchains for supply chain management a proverbial "game changer", meaning a foundational technological disruption to both global and local current supply chain systems.
An Exploration of Blockchain-based Traceability in Food Supply Chains: On the Benefits of Distributed Digital Records from Farm to Fork
However, industrial applications tend to use "permissioned" systems that allow authorizing only selected users to join a network and controlling user permissions for safety or necessary business privacy purposes (Behnke & Janssen, 2020). Permissioned blockchain systems in the business-to-business context build on business-technology frameworks like Hyperledger (Behnke & Janssen, 2020), which enable permissioned users to have duplicated transactional records, as well as permission access to monitor the movement and progress of supply chain flows (Chang & Chen, 2020).
The transparency of blockchain systems can help establish the authenticity of transactions (Mansfield-Devine, 2017), while removing intermediaries from the old systems can enable transactions to become faster between supply chain actors (van Hilten et al., 2020). In this vein, distributed ledger technology allows supply chain partners to reduce or eliminate transaction costs. It may also allow them to use untrusted external resources, as easily as they currently use trusted internal resources (Tapscott & Tapscott, 2017). Further, blockchain technology improves supply chain dependability by exerting increased pressure on supply chain partners to be more responsible and accountable for their actions (Kshetri, 2018). As a result, both the improved connectivity among supply chain partners and the increased visibility of information flows can offer consumers more detailed information about the origin of products (Casado-Vara et al., 2018). In food supply chains, knowing the origin of products means improved food safety (Casino et al., 2019).
The importance of traceability in food supply chains
The growing public attention to food quality and safety have led to developing food traceability systems (Dabbene et al., 2014;Chen, 2015;Astill et al., 2019). "Traceability" signifies the ability to track a product and its history through a supply chain from harvest through transport, storage, processing, distribution, and retail (Moe, 1998;Kamilaris et al., 2019). This requires significant information sharing about product history, specification, and location, among a network of others (Kumar et al., 2017). Of note, traceability can be classified according to the direction in which information is recalled in a food chain (Aung & Chang, 2014). Similarly but distinctly, "tracking" refers to the ability to follow-up the downstream path of a product along a supply chain, while "tracing" refers to the ability to determine the origin of a product and its ingredients, using records held upstream in the supply chain (Dabbene et al., 2014;Behnke & Janssen, 2020).
Traceability
necessitates the engagement of stakeholders along an entire food supply chain (Dabbene et al., 2014). Since traceability systems can yield huge volumes of data, automated data collection, storage, and accessibility become critical (Chen, 2015). According to Dabbene et al. (2014), such automation uses machine-readable optical labels (QR codes) and radio frequency identification devices (RFID) to enhance the precision and reliability of identifying traced units. Tracing focuses on "batches" (products with the same "best before" date and batch number), "trade units" (boxes of products with the same batch numbers, sent along a supply chain), or "truck units" (pallets of products with different batch numbers, for distribution or storage purposes) (Behnke & Janssen, 2020).
With reliable information, traceability can improve food safety through timely identification of food sources and by providing better information about the causes of potential food contamination (Astill et al., 2019;. Ene (2013) noted that the objectives of food supply chain traceability include: 1) contributing to food safety by enabling the identification of outbreak or hazard sources, managing safety alerts, and withdrawing contaminated or dangerous products; 2) providing reliable information to users by guaranteeing product authenticity, and that certain production practices have been followed; and 3) improving overall product quality and processes by identifying sources of noncompliance, while enhancing product flows and stock management.
According to Opara (2003), six key elements of traceability constitute the food supply chain traceability system: 1. Product traceability: physical location of a product at any stage in the supply chain, inventory management, product recall, type of product traceability, and type of food to be traced.
2. Process traceability: type and sequence of activities affecting the product (cause, location, time; chemical, physical, environmental, and atmospheric factors), compliance standards and regulations with governmental entities, and collaboration among food supply chain entities.
An Exploration of Blockchain-based Traceability in Food Supply Chains: On the Benefits of Distributed Digital Records from Farm to Fork 3. Genetic traceability: genetic product constitution, type and origin of ingredients, information on planting materials (seed, stem cuttings, tuber) to create the original product.
4. Input traceability: type and origin of inputs such as fertilizers, chemical sprays, livestock, feed, additives, and chemicals for preservation.
5. Disease and pest traceability: involving the epidemiology of pests, bacteria, viruses, and emerging pathogens, which may contaminate food.
6. Measurement traceability: measurement standards, length, depth, precision to trace, quality control, and type of traceability.
In general, supply chain partners have both internal and external traceability requirements. Internal traceability includes, for example, sharing logistic data, inventory data, contracts, prices, and organic product certification links, while external traceability refers to, for example, providing food origin information and farmer data to consumers (Yon & Woo, 2018;van Hilten et al., 2020;Xiong et al., 2020). Thus, we see consumers calling for food safety, while farmers wish traceability systems that can aid them in crop management that increases their profits (Xiong et al., 2020;Chen et al., 2021). An increasing need therefore exists to provide traceability from "farm to fork", whereas the current costs of putting traceability systems into place are a major barrier for most supply chain actors (Aung & Chang, 2014;Casino et al., 2019). That said, if the benefits of food traceability come to be seen as outweighing the costs involved, then blockchain-based systems may indeed be a gamechanger in this respect.
Blockchain-based traceability in food supply chains
According to Paliwal et al. (2020), improved traceability is one of the key benefits of applying distributed ledger technology. Other benefits of adopting blockchainbased food traceability involve data interoperability, cost reduction, transparency, auditability, integrity and authenticity, as well as improved data accuracy, data management, and prediction through data analytics in food logistics (Casino et al., 2019;Pournader et al., 2020). Further, blockchain-enabled food traceability allows for improved cybersecurity and reduced food fraud, by using strong cryptography (Wang et al., 2020) and by identifying counterfeiting, dilatation, and adulteration, in support of better food security and safety (Etemadi et al., 2021;Garaus & Treiblmaier, 2021;Tayal et al., 2021).
Within a blockchain system, information is tied to each individual product, creating a digital record that proves its provenance, compliance, authenticity, and quality (Bumblauskas et al., 2020). Blockchain systems not only carry information on each transaction, but also associated metadata (origin, contracts, process steps, environmental variations, microbial records) that can be used to connect items across the entire supply chain (Pearson et al., 2019;Wang et al., 2020). Some of the data are collected via sensor networks tracking location, time, temperature, and humidity levels, and are reported on the blockchain in real-time (Grecuccio et al., 2020).
Traceability based on such real-time, reliable, and accurate data can increase accountability in a food supply chain, improve shelf life, help prevent food loss, and increase consumer trust in the brand (Kayikci et al., 2020;Shahbazi & Byun, 2021).
Methodology
We selected five companies that have recently experimented or are experimenting with blockchainbased food traceability as case studies to further investigate the benefits of using blockchain technology for traceability in food supply chains. Chang and Chen (2020) argue that the case study method is a highly informative approach to study blockchains in supply chain management. Our case study data were collected in 2018, and include Walmart, Provenance, Carrefour, Foodchain, and Ripe.io. No specific criteria for choosing the cases were used, besides that they needed to address a blockchain-enabled food supply chain management pilot. Data on the cases were found in scholarly and practitioner literatures on innovation management and food business, and we also used online sources such as industry magazines, blogs, news articles, and corporate websites to collect further information.
We utilized a content analysis method for our data collected from the five cases. We examined and analyzed the case data based on traceability elements that were inferred from the literature review (Opara, 2003).
Specifically, we looked for information about how the companies involved in each case applied blockchain applications for solving their food supply chain traceability problems according to common traceability elements. Then, we performed a descriptive analysis, which included creating brief case descriptions, and a to their supply chains. Their goals are to track tuna caught by fishermen with verified and sustainable claims, including traceability and compliance to standards at the origin and along the chain, as well as preventing the "double spend" of product certificates and identification tags. Provenance chose to first understand the key supply chain problems in tuna fishing and then assess the technology opportunities in Indonesia, the largest tuna producing country in Southeast Asia.
Some of the problems were human rights abuses, overfishing, fraud, and illegal, unreported, and unregulated fishing. The firm made use of a hybrid blockchain solution, allowing them to trace the source of tuna in minutes, rather than days or weeks as had been usual previously. In the pilot, fishermen sent SMS messages to register their catch on the Provenance blockchain. Information on the origin and supply chain journey of the fish could be accessed and verified by consumers using their smartphones. In this vein, Provenance could provide a robust proof of compliance to standards by government authorities at the origin and along the entire food supply chain.
Case 3: Carrefour -tracing of chickens, cheese, milk, oranges, and salmon Carrefour is a European retailer experimenting with food supply chain traceability through blockchain technology. The pilot involved IBM to create a food trust platform aimed at providing better transparency, traceability, and efficiency in food supply chains from farm to fork. Carrefour aimed to track free-range chickens, eggs, cheese, milk, oranges, tomatoes, salmon and ground beef steak, among others, with an objective of implementing a global food traceability standard across all links of its supply chain.
Carrefour's solution is based on Ethereum. It helped them to accurately record events along the supply, processing, packaging, and distribution chain. However, for tomatoes and eggs, they began experimenting with Hyperledger Fabric, because it includes the concept of information "channels", which are equivalent to having multiple separate blockchains at the same time. In other words, the firm can have one channel per product line. Carrefour's perception is that this facilitates the multiplication of different blockchains on a single common core. They consider this as a major enabler of industrializing blockchains. experiments and summarized key insights from data. These insights highlight how the case companies applied blockchain technology to solve food supply chain traceability problems, as well as what the perceived or pursued benefits of establishing a blockchain-based traceability system were.
Findings
The following sections provide brief case descriptions to understand the context of each case. Thereafter, we summarize key insights from the cases in a table.
Case 1: Walmart -pork and mango pilots with IBM In 2016, Walmart launched two pilots using IBM's Hyperledger-based blockchain solution to trace the origin of sliced mangoes sold in North America and pork sold in China. Walmart chose IBM's solution as it was not recreating an existing supply chain, but rather leveraging emerging technologies to enhance supply chain traceability. Walmart had to establish trust through its traceability system due to various recent outbreaks of foodborne illnesses, while the resulting traceability included numerous stages from food production through food consumption. The length, depth, and precision of the food supply chain included farm and slaughterhouse tracking, and store tracking with Walmart's distribution center.
Blockchain technology helped Walmart create greater transparency, veracity, and trust in its food information, so that its supply chain partners could act immediately if a problem arose. Also, they found that cooperation with government entities was crucial. The supply chain entities were able to record, trace, and verify the authenticity and quality of their products throughout the product lifecycle, across multiple different authorities. Audits, identification numbers, and safetyprotocols were logged in real-time and stored as ecertificates. Notably, Walmart's blockchain enabled tracing at the item level, not just batch level. This allowed officials to determine the origin of a specific mango in just two seconds. Addressing several vulnerabilities in the food supply chain, Walmart's pilots went beyond technology to gain people's trust and confidence in food.
Case 2: Provenance -tracking tuna on the blockchain
Provenance is a UK-based firm behind a digital platform that enables retailers to bring integrity and transparency
An Exploration of Blockchain-based Traceability in Food Supply Chains: On the Benefits of Distributed Digital Records from Farm to Fork
Case 4: Foodchain -creating stories from farm to fork Foodchain S.p.A. is an Italian start-up company with a blockchain-based traceability service. The company strives to use blockchain technology to gain a competitive advantage in food supply chain transparency and traceability. The first phase was identifying and registering raw materials and producers in the blockchain. Thereafter, each food item was recorded on a blockchain using a "smart label", such as a unique QR code. The entire process was monitored, while quality control of the product was tracked in realtime and shareable between all stakeholders of the food supply chain through computer or smartphone.
Given the immutability of data stored on a blockchain ledger, Foodchain S.p.A. believes that it will help food brands to increase trust and loyalty among their customers. In other words, the company's QR codes allow consumers to access the full and immutable story of a food product and learn about all the steps made by the product before landing on their table. Thus, Foodchain enables the monitoring of the entire food supply chain, which aids in improving food quality control and traceability. Their blockchain implementation is private and permissioned, built on Ethereum, but the company has also launched its own public, permissionless blockchain infrastructure called Quadrans.
Case 5: Ripe.io -The internet of tomatoes
Ripe.io is a blockchain start-up company that showcases the value of distributed ledger technology in agriculture by collecting data throughout the entire food supply chain. Its pilot project was called the "Internet of Tomatoes", in which Ripe.io used a blockchain to compile a wealth of data from the farm and apply it to growing better tomatoes. It allowed data to be recorded of every single tomato produced by growers and share that information with the supply chain and consumers using blockchain technology. The objective of Ripe.io was to enable data transparency and traceability from farm to fork, by providing information on an individual tomato, including not only its origin with a farm and producer, but also its sweetness, texture, size, variety, nutritional value, how it was grown, and its ripening record.
For this purpose, Ripe.io collected data from each tomato produced by given growers, and shared the information with restaurant purchasers of tomatoes. Using blockchain technology allowed them to monitor every detail, such as temperature, humidity, and colour, and store the information digitally and securely. Ripe.io is attempting to create a system that can help firms save money through efficiency gains and remove adulterated food quickly and efficiently. Also, blockchain-based traceability allows retailers and authorities to trace and track every item in real time for more accurate monitoring and prediction of shipping and delivery.
Summary of key insights from the cases
Summing up the findings on blockchain-based traceability and its benefits in our five use cases, most value came from cost savings and reduced time for tracing food items through a food supply chain. Due to this, food data were digitally stored on a blockchain, and time to access information about a specific food product only took minutes, compared to weeks in previouslyused traditional traceability systems. The new system helped the companies studied in our cases to achieve cost savings, as well as time savings when solving food crises. Table 1 summarizes the key insights gathered from our use cases.
Another key benefit of operating with a shared distributed ledger is automatically achieving compliance with government standards. Prior to having a blockchain-based food traceability system, compliance with government requirements were often challenging due to disparate record-keeping and paperbased documents. Blockchain solved this problem by digitally and securely storing all compliance-based documents, thus eliminating the need for any paper documents. In the case of Walmart, it became easy for all supply chain entities to comply with government standards. Hence, the blockchain system helped firms to achieve better quality control over food, making it possible to trace the product from farm to fork, which in turn helped them to build increased trust with consumers as supply chain operations and management became more transparent.
Some traceability elements, such as product and process traceability appear to be common across the cases studied. For example, case companies attempted to trace individual items and promote enhanced coordination between supply chain entities to achieve better control over the supply chain. That said, only Carrefour covered all six traceability elements in its offering. Specifically, the category of "disease and pest traceability" was not seen consistently across the cases, as only Carrefour put special effort on it. In fact, Carrefour attempted to predict not only pathogens, but also allergens through the traceability system, which would help in disease and pest traceability. Allergens are not discussed in the previous literature as a traceability element.
An Exploration of Blockchain-based Traceability in Food Supply Chains: On the Benefits of Distributed Digital Records from Farm to Fork
The insights from our study also highlight that simply comprehending blockchain technology and how it creates ledger communities for supply chains is important because comprehension is the key to implementing an efficient DLT-based food traceability system.
Understanding the advantages (and disadvantages) of public, private and hybrid blockchains helps firms to implement and choose the technology specific to their needs. Except for Foodchain, all firms we studied were leaning to implement a hybrid blockchain solution, due to its flexible modular architecture and enhanced security that includes permissioning. Backend modularity of blockchain systems saves the cost of entirely replacing the existing supply chain, so that the new system can be incorporated on top of and together with the existing supply chain itself. Finally, due to their high data accuracy, companies such as Provenance that traced tuna fish and Ripe.io that traced tomatoes were benefitted far more by blockchain-based traceability systems compared with traditional pre-blockchain systems. Information related to an individual tuna fish or single tomato, rather than merely being faced with information about the whole batch it was in, or having to deal with another unit, was obtained rapidly, making more efficient the tracing of its origin.
Discussion and Conclusion
This article aimed at contributing to the field of supply chain management innovation by investigating the benefits of blockchain-based traceability in food supply chains. While blockchain technology has begun to demonstrate how it can transform industries and enhance business model innovation (Zhao et al., 2016;Tandon et al., 2021), it also constitutes a managerial challenge for incumbents (Beck & Muller-Bloch, 2017). To more fully leverage the potential of blockchain technology, engagement is needed throughout the supply chain. Blockchain-based traceability provides value only if all supply chain partners adopt and actively contribute to it (Gurtu & Johny, 2019). Thus, adoption of blockchain technology may be hindered by various issues involving usage by personnel, technical aspects, education, policies, and local regulatory frameworks (Kamilaris et al., 2019).
Contribution to theory
One of the overall findings of our study was that research involving blockchain-based applications in supply chain management is still emerging. There is a growing need for more scholarly studies on the topic. Also, common practices in blockchain-enabled food traceability systems have often not yet been operationalized, as companies are still experimenting and implementing what they have been learning from individual pilot projects. That said, our results contribute to the widening body of literature on blockchain-based traceability in several ways. In particular, the traceability elements identified by Opara (2003) provided a feasible framework to analyze cases of firms experimenting with blockchain-based traceability in the food supply chain context. However, our findings go further, for example noting the traceability of allergens, which was not discussed in Opara's (2003) framework, likewise recognizing that blockchain enables a more detailed approach to data traceability than was previously possible.
Traceability is important in preventing and responding to food crises such as food contamination. We agree with Dabbene et al. (2014) that blockchain-based solutions can be used effectively for food traceability because of their ability to better address length, depth, and precision in supply chains. Internal traceability attributes such as lot number, pack date, and order number, which have already been used, can now be recorded on a blockchain digitally and dynamically at each stage of the food supply chain. On the other hand, blockchain solves a social problem, in addition to a technical problem (Kamath, 2018). We agree with Azzi et al. (2019), Gurtu and Johny (2019) and Behnke and Janssen (2020) that by adopting blockchain technology, firms can create more reliable, transparent, and secure traceability systems, which contributes to food safety and quality, and thus to consumer trust, provided that all food supply chain entities contribute to the system.
The results also confirm that a hybrid blockchain may provide robustness and cost-savings in traceability due to its modularity benefits (Tapscott & Tapscott, 2017). Such a system will not require replacing or reconstructing the entire supply chain, but rather allows for leveraging already available technology such as QR An Exploration of Blockchain-based Traceability in Food Supply Chains: On the Benefits of Distributed Digital Records from Farm to Fork codes (Yoo & Won, 2018). This will bring value to food businesses that do not have to face the unbearable costs of reconstructing their whole supply chain to accommodate a new technology that is supposed to save them time and money. Given the successful implementation of a blockchain-based traceability system, food supply chain entities can rapidly and accurately record, authenticate, and ensure the status of an individual food product, tracking its movement and quality throughout the product lifecycle. We argue that such as system can provide benefits to all stakeholders in a food supply chain, by helping them to produce and gain more detailed data analysis reports.
Implications to practice
This study also provides managers in the food industry with some recommendations. First, blockchain technology is increasingly demonstrating its potential for providing greater transparency, veracity, and trust in food traceability. With it involved, supply chain partners can act immediately if problems such as food scandals appear. We therefore encourage managers in food companies to experiment with blockchain technology as potentially a way to gain competitive advantage, better comply with regulations, and respond to rising consumer concerns surrounding food safety and quality.
Second, building and managing a blockchain-based food traceability system should be done in collaboration with governments to meet international compliance standards and cultivate societal knowledge about food safety. Such a system for any society will attempt to solve the problem of documentation and compliance with local and global regulatory systems involving food supply chains. This can be achieved by recording supply chain-relevant government data such as standards, regulatory guidelines, and corporate registries on a permissioned public blockchain, and comparing them with data and metadata from each supply chain transaction. This would provide secure and trustable compliance for government agencies related to food supply, agriculture, health, infrastructure, natural resources, economy, employment, and others.
Third, experimenting with food-oriented blockchain pilots may result in companies seeking to implement the system more broadly for their food supply chains. While Behnke and Janssen (2020) list scalability as one of technical hindrances for blockchain systems, they also argue that current blockchain-based food traceability pilots indicate that scaling can, and indeed will eventually be reached. We therefore suggest that leveraging blockchain technology can help companies that deal with food to identify vulnerabilities in their current food supply chains. This would allow managers of food businesses to better gain the trust of people in regard to their food products, as those vulnerabilities are reduced or removed through a distributed ledger system. Thus, food brand managers should start building stories about their respective brands that engage all supply chain entities, and which can be supported by real-time information obtained from their food supply chain through a blockchain-based traceability system.
Limitations and future research avenues
Limitations to our study are at least two-fold. First, blockchain-based applications are still emerging in the market. We were only able to explore five cases in a specific area of food traceability involving supply chains. Further, each of those cases is recent or involved still in ongoing experimentations. Thus, this paper provided insights on the early experiences and evidence available at the current time involving blockchains in food supply chains. Thus, future research would benefit from analyzing a larger number of cases and focusing on more mature blockchain-based solutions. In particular, the link between blockchain-based tracing and specific broader social sustainability benefits for food should be examined, as also suggested in other recent studies (Paliwal et al., 2020;Lim et al., 2021;Vu et al., 2021).
Second, our case analyses were based on publicly available data, such as academic and practitioneroriented articles, reports, news, blogs, and corporate websites. Future research would benefit from first-hand investigation of blockchain-based companies currently conducting food traceability system experiments and risk management practices (see Shahbazi & Byun, 2021), as well as exploring the various perceptions currently held about the benefits of blockchain by supply chain entities at different stages from farm to fork. This could be done either through interviews and surveys of various targeted stakeholders, or through action research by scholars participating in designing solution architecture for blockchain-based traceability systems. Mervi Rajahonka, DSc (Econ), works as an RDI Advisor at the Small Business Center (SBC) at South-Eastern Finland University of Applied Sciences XAMK, Finland, and she is an Adjunct Research Professor at Carleton University in Ottawa, Canada. She has been working at SBC for about 10 years, participating in numerous EU-funded projects. She earned her doctoral degree in Logistics from the Department of Information and Service Economy at Aalto University School of Business in Helsinki, Finland. She also holds a Master's degree in Technology from Helsinki University of Technology and a Master's degree in Law from the University of Helsinki. Her research interests include business models, service modularity, and service innovations. Her research has been published in a number of journals in the areas of logistics, services, and operations management. | 7,855.4 | 2021-07-06T00:00:00.000 | [
"Computer Science"
] |
Rapid and e ffi cient LC-MS/MS diagnosis of inherited metabolic disorders: a semi-automated work fl ow for analysis of organic acids, acylglycines, and acylcarnitines in urine
Objectives: The analysis of organic acids in urine is an important part of the diagnosis of inherited metabolic disorders (IMDs), for which gas chromatography coupled with mass spectrometry is still predominantly used. Methods: Ultra-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay for urinary organic acids, acylcarnitines and acylglycines was developed and validated. Sample preparation consists only of dilution and the addition of internal standards. Raw data processing is quick and easy using selective scheduled multiple reaction monitoring mode. A robust standardised value calculation as a data transformation together with advanced automatic visualisation tools are applied for easy evaluation of complex data. Results: The developed method covers 146 biomarkers consisting of organic acids (n=99), acylglycines (n=15) and acylcarnitines (n=32) including all clinically important isomeric compounds present. Linearity with r 2 >0.98 for 118 analytes, inter-day accuracy between 80 and 120 % and imprecision under 15 % for 120 analytes were achieved. Over 2 years, more than 800 urine samples from children tested for IMDs were analysed. The work fl ow was evaluated on 93 patient samples and ERNDIM External Quality Assurance samples involving a total of 34 di ff erent IMDs. Conclusions: The established LC-MS/MS work fl ow
Introduction
Organic acidurias are a heterogeneous group of inherited metabolic disorders (IMDs) resulting in the accumulation of organic acids (OA) in body fluids.The accumulation of OA, caused by a deficiency of an enzyme or transport protein in one of the cellular pathways [1,2], disrupts the metabolic homeostasis (predominantly acid-base balance) leading to metabolic acidosis, ketosis, and other metabolic consequences [3,4].The typical clinical presentation of these disorders appears within the first weeks of life of the newborn and may present metabolic difficulties that develop further into hypotonia, failure to thrive, poor feeding, vomiting, Barbora Piskláková and Jaroslava Friedecká contributed equally to this work and should be considered first authors.lethargy, and developmental delay [3,4].Early correct diagnosis and supportive treatment can prevent the most severe manifestations of the disease, thereby improving the quality and length of the child's life [5].
OA are characterised as polar substances containing one or more carboxyl or other (hydroxy-, keto-, side chain) functional groups.In some cases of organic acidurias, toxic acyl-CoAs accumulated in the body are eliminated by the kidneys as OA and conjugated with glycine/carnitine in the mitochondria as part of the detoxification mechanism.The resulting conjugates, acylglycines (AG) and acylcarnitines (AC) are eliminated from the body in the urine along with other OA.These are key metabolites in the diagnosis of organic acidurias [6].
The analysis of OA has been a domain of gas chromatography coupled with mass spectrometry (GC-MS) used IMD diagnosis for many years [7,8].For clinical use, however, this is a time-consuming and laborious technique and is prone to matrix effects and other issues due to the necessity of analyte extraction and derivatization.In addition, due to complex sample preparation, loss of analytes may occur.Even the data evaluation itself is challenging and protracted, requiring experienced personnel.Therefore, in recent years, this technique has been replaced by liquid chromatography with mass spectrometry (LC-MS), including [9][10][11] or excluding sample derivatization [12][13][14].In addition, the analysis of OA in urine can also be performed using nuclear magnetic resonance spectroscopy [15,16], also used in the screening of IMDs [17][18][19][20] however it is not widely applied due to limited sensitivity.
The objective of this work was to develop and validate semi-automated workflow for rapid and efficient diagnosis of inherited metabolic disorders based on ultra-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis of a wide metabolic spectrum of OA, AG and AC in urine as an advanced alternative to GC-MS.
Materials and methods
Overall workflow including analytical strategy of the development and validation of the method is schematically represented in Figure 1.
Chemicals
Water, acetonitrile, formic acid, and methanol were purchased in LC-MS grade from Honeywell Riedel-de Haën (Seelze, Germany).Sigmatrix Urine Diluent was retrieved from Sigma-Aldrich (St. Louis, USA).A list of all chemical standards (99 OA, 15 AG, 30 AC) used, including internal standards (22 IS), is provided in Supplementary Table 1.
Preparation of stock solutions and internal standard mixture
All OA, AG, and AC standards were prepared as 10 mmol/L stock solutions in LC-MS water and stored at −20 °C.IS mixture was prepared by mixing IS stock solutions into a 10 mL flask as follows: 22 μL of 224 mmol/L lactic acid-13 C 3 , 50 μL of 41.3 mmol/L methyl-2 H 3 -malonic acid, 10 μL of 100 μmol/L isovaleryl-2 H 9 -carnitine, 200 μL of 10 mmol/L homovanillic acid-13 C 6 , 18 O, 11.5 μL of 10 mmol/L hexanoylglycine-13 C 2 , 15 N and 1 mL of 100 μmol/L orotic acid-15 N 2 .The 10 mL volume was topped up with LC-MS grade water.The IS mixture was stored in aliquots at −20 °C.
Biological material
All procedures followed were in accordance with the tenets of the Helsinki Declaration (as revised in 2013) and were approved by the ethical committee of the Faculty of Medicine and Dentistry, Palacky University Olomouc, and University Hospital Olomouc (licence number: 66-19).To date, more than 800 clinical samples have been analysed using this method in the Laboratory of Inherited Metabolic Disorders, Department of Clinical Biochemistry, University Hospital Olomouc, Czechia.Samples from IMD-positive patients (n=93) come from patients (n=46) and residual urine samples of External Quality Assurance (EQA) ERNDIM qualitative schemes -Diagnostic Proficiency Testing (DPT) scheme (n=28) and Qualitative Organic Acid in Urine (QLOU) scheme (n=19).A list of all patient samples is provided in Supplementary Table 5.
For analytical validation procedures, blank surrogate urine matrix samples (n=38) of healthy controls were used.Pooled urine samples were prepared by mixing 38 healthy controls.These samples were diluted to a creatinine concentration of 2 mmol/L and for analysis, equal aliquots of samples were mixed.Pooled urine was stored in aliquots at −20 °C.Basic clinical information about the samples is given in Supplementary Table 7.
Preparation of calibration curve standards and quality control samples
The calibration range for all analytes was determined according to their physiological concentrations in human urine found in the literature [21,22] or according to the expected concentration when no concentration data were found.The ten-point calibration curve was prepared by binary dilution series in LC-MS water and stored at −20 °C.A calibrator (50 μL), 50 μL of synthetic urine Sigmatrix Urine Diluent (SUD), and 10 μL of IS mixture were mixed into the glass vials with an insert.The highest (Cal10) and the lowest (Cal1) calibration points for each compound are provided in Supplementary Table 2.
Quality control (QC) samples were prepared at three levels, i.e.High (HQC), Medium (MQC), and Low (LQC).HQC was set at 75 % of the calibration range, MQC was set at 25 % of the calibration range and LQC was set at the lower level of physiological concentration of an appropriate analyte.QC samples were prepared in 11 mixtures and stored at −20 °C.QC concentration levels are provided in Supplementary Table 3.
LC-MS/MS analysis
The analysis was performed on the HPLC instrument Exion LC (SCIEX, Framingham, MA, USA) using Acquity UPLC HSS T3 C18 column (1.8 μm, 100 × 2.1 mm) (Waters, Milford, MA, USA) and a mass spectrometer QTRAP 6500+ (SCIEX, Framingham, MA, USA).Eluent A contained 0.5 % formic acid in water and eluent B contained 100 % acetonitrile.The flow rate was set at 0.37 μL/min, sample injection at 1 μL and the run time was 26 min.The autosampler was set at 5 °C and the column temperature was 30 °C.The gradient profile for this HPLC solvent run was as follows: t=0-2 min 100 % A; t=2-9 min 80 % A; t=9-17 min 5 % A; t=17-20 min 5 % A; t=21-26 min 100 % A. The analysis was performed in the scheduled multiple reaction monitoring (MRM) mode under polarity switching (simultaneously in positive and negative polarity) with an ion spray voltage=−4,500.0V, +5,500.0V, temperature=450.0°C, curtain gas=35.0arb, collision gas=medium, ion source gas 1 and 2=50.0 arb using nitrogen as collision gas.The instrument was operated in the MRM mode in which two selective transitions are chosen for each compound during quantification.
Conditions of detection in the mass spectrometer (MS) were optimised for all analytes separately.Standards of OA, AG and AC were diluted in the mobile phases A and B (1:1, v/v) to final concentrations of 10, 3, and 1 μmol/L, respectively.Each standard solution was directly injected into the MS by the syringe pump and specific MRM transitions (mass/charge ratio of first and third quadrupole), collision energy of the second quadrupole and declustering and exit cell potentials were optimised in both positive and negative polarities.For compounds that are not commercially available (2-methylacetoacetate, 3-hydroxybutyrylcarnitine, 3-hydroxyisovalerylcarnitine and hawkinsin), MS parameters were found and optimised from patient urine samples.First, MS spectra were found online [21], and preliminary MRM transitions were generated.Subsequently, the MS2 spectrum of the identified chromatographic peak of the compound was measured, and then MRM transitions were optimised.
For the optimization of separation conditions and parameters of MS detection, standards of all compounds with concentrations of 100 μmol/L OA, 10 μmol/L AC, and 30 μmol/L AG were prepared and analysed separately.Isomeric compounds were analysed both separately and in a mixture of related isomers for unequivocal identification of each isomer.MS detection conditions were selected based on the highest S/N value for a given MRM transition separately in the Analyst 1.7 software (SCIEX, Framingham, MA, USA).Identification of the isomers was done according to their separation or, if possible, by finding a specific MRM transition.
Method validation
The method was validated according to European Medicines Agency (EMA) and Food and Drug Administration (FDA) validation guidelines Piskláková et al.: LC-MS diagnosis of inherited metabolic disorders [23,24], and the following parameters were evaluated: linearity, accuracy, imprecision, matrix effect and carry over.The methods and procedures for the analytical validation process are given in the Supplementary Text.
Patient diagnosis workflow
The identification and diagnosis of patients were made by a robust standardised (RS) value calculation [25][26][27].For this purpose, a group of initial 200 samples from healthy controls was selected as a control group for computation RS reference values.Subsequently, RS values were calculated for patient samples according to the Eq. ( 3), where X ′ is a robust standardised value, x i is a value of the ith observation, X is a median of the healthy control population and Q 3 − Q 1 the interquartile range of the healthy control population.
RS values of healthy controls with their age and sex are provided in Supplementary Table 8.Calculated quantiles of RS values for healthy controls are given in Supplementary Table 9. IMD network for over 80 disorders was created in Cytoscape software.RS values of patients for each biomarker were imported to the Cytoscape software and the level change of the respective biomarker was visually monitored.
Data analysis and statistics
For MS data acquisition and processing, Analyst ® 1.7 and SCIEX OS 2.0 software (SCIEX, Framingham, MA, USA) were used.The peak area of the analyte relative to the corresponding IS were used to robustly transform the data for all urine samples from healthy controls, EQA, and patients, as well as for long-term routine use in diagnostic practice.IS of each analyte are listed in Supplementary Table 2. Data analysis including RS values calculation and visualization was carried out in MS Excel 365 (Redmond, Washington, USA) and GraphPad Prism 9.3.1 (GraphPad Software, San Diego, CA, USA).IMD network was made in Cytoscape software 3.8.2[28] (Bethesda, MD, USA).
LC-MS/MS method development and optimization
Overall, 146 analytes as biomarkers of IMDs consisting of 99 OA, 15 AG, 32 AC, and 22 stable isotope IS were included in the developed method.All isomeric compounds were successfully separated on the HSS T3 column due to optimisation of mobile phase composition.All mass spectrometric method parameters are provided in Supplementary Table 4 and the separation of all compounds divided into 4 classes for better display is shown in Figure 2.
Analytical validation
Analytical validation was performed on standards of 140 analytes and the following parameters were determined: linearity, accuracy, imprecision, matrix effect and carry over.The results of the analytical validation (including graphical representation) and its discussion are presented in Supplementary Text and Supplementary Figures 3 and 4.
Patient diagnosis workflow
For diagnostic purposes, the calculation of RS values is fully automated and requires only copying area ratio values of analytes.Multianalyte scatterplots in GraphPad Prism are also done automatically involving the import of the RS values into the data table.Next, the operator can display an individual or multiple patients at once.Based on outlying values of disease-specific biomarkers it was possible to diagnose samples with 34 IMDs listed in Supplementary Table 5 along with RS values.The workflow also includes plotting IMD network, where IMDs are divided into separate groups according to affected metabolism (amino acids, branched-chain amino acids, urea cycle, carbohydrates, fatty acids, mitochondrial disorders, peroxisomal disorders, and others).This network was created in Cytoscape freeware, which is an interactive tool highly user adjustable.The original Cytoscape file with all data is available in Supplementary File 1.The representative graphical presentation of 5 individual patient samples (primary hyperoxaluria type 2 (OMIM#260000), succinic semialdehyde dehydrogenase deficiency (OMIM#271980), 2-hydroxyglutaric aciduria (OMIM#600721), medium-chain acyl-CoA dehydrogenase deficiency (OMIM#201450) and methylmalonic aciduria (OMIM#251000)) are shown in Supplementary Figure 1B-F.A blank IMD network is shown in Supplementary Figure 1A.IMD abbreviations used in the network are in accordance with the official OMIM nomenclature [29].
As an example, the results of the RS values graphical plotting of a patient suffering from glutaric aciduria type II (GA2; OMIM#231680) (Patient 14; 1.5 years, female) are presented in Figure 3. Due to multiple acyl-CoA dehydrogenase deficiency, patients excrete increased amounts of not only glutarate but also other OA and their conjugates.However, abnormalities in OA levels may be present only when patients are under stress [30], which can make GC-MS-based diagnosis difficult.In this sample, pathology was found not only at the level of OA (ethylmalonate, adipate, suberate, sebacate) but also AG (glutarylglycine, hexanoylglycine, suberylglycine) and AC (butyrylcarnitine, glutarylcarnitine, hexanoylcarnitine, octanoylcarninite) as confirmatory biomarkers.Further, for a better orientation in affected pathways, the metabolic network is depicted in Figure 4.
Long-term usage
This method has been used in our laboratory for more than 2 years in the routine diagnostic process.It has become the method of the first choice and indispensable for acute cases such as metabolic crises.The previously used GC-MS has been side-lined for control purposes only.It suffers from difficult sample preparation, loss of analytes due to variable extraction and derivatization efficiencies, contamination of GC-MS instrument by derivatisation reagents and timeconsuming data evaluation requiring highly qualified personnel.Without multi-algorithmic peak detection, runby-run and peak-by-peak evaluation are common, making this approach completely unsuitable for patients requiring urgent care diagnosis.The advantage of our new robust method is the ease of pre-selected peak integration using Sciex OS software and the subsequent RS values calculation.Sample analysis can be performed immediately after blank analysis, making this approach suitable for laboratories with urgent emergency tasks.In our experience, one sample result can be obtained within 1 h of sample delivery.
Mass spectrometry proved to be a sufficiently stable platform, as confirmed by the coefficient of variation (CV) of the selected analytes (n=23) included in the ERNDIM Internal Quality Control System at 2 levels.Median and interquartile range (IQR) of CV for Level 1 and Level 2 are 16.2 and 19.8 %, with IQR of 14.9-20.4% and 18.9-21.3%, respectively.Consequently, frequent maintenance of the MS is not required, which is a great advantage in its routine use in a clinical laboratory.With respect to the robustness of retention times (RTs), the method provides average stability of 1.6 % (median) for all analytes when used routinely over 1 year (Supplementary Figure 5A).Within batch average reproducibility was 0.4 % (median; Supplementary Figure 5B).
Although this method was developed for urine analysis, in our laboratory it has been also applied to dry blood spot (DBS) samples in the context of differentiating true and false positivity of isovaleric acidemia (IVA) due to Pivinorm treatment of pregnant women (Figure 5).Pivaloyloxymethyl ester of mecillinam as the main component of the drug is converted into pivaloylcarnitine interfering with isovalerylcarnitine.Both carnitines are baseline separated by the method and semiquantitation is performed directly from the newborn screening sample containing stable labelled internal standard.
External quality assessment
EQA samples -ERNDIM DPT and QLOU (Heidelberg) in the years 2021-2022 have been analysed as a part of the routine operation.The diagnosis was performed by this platform with a 100 % success rate.A list of samples from ERNDIM DPT and QLOU and diseases diagnosed by the developed LC-MS/MS approach are provided in Supplementary Table 6.Furthermore, the RS values plots of each ERNDIM sample with the biomarkers on which the diagnosis was based are shown below in Figure 6.
Discussion
In the routine diagnosis of IMD, due to acute clinical manifestations, organic acidurias are one of the important groups of diseases monitored in all laboratories.The ability to provide a rapid laboratory response with maximum coverage is very important.Compared to GC-MS, which is currently exclusively used, here we present an advanced semi-automated workflow based on LC-MS/MS that allows both rapid analysis with minimal sample preparation and additionally provides complementary information at the level of OA together with the corresponding AG and AC.In our experience, this makes this method a very clinically powerful tool for the diagnosis of a wide range of IMDs.
Recently, there has been widespread use of LC-MS/MS for IMD diagnosis.Körver-Keularts et al. [12] established liquid chromatography-quadrupole-time of flight-mass spectrometry (LC-QTOF/MS) method which allows the diagnosis of 32 IMDs in urine without the lengthy extraction and derivatization step.The above approach was further extended [31] to 78 IMDs, which, however, consisted in expanding the number of biomarkers but not in improving the diagnosis of organic acidurias.Another LC-QTOF/MS method [32] focusing on plasma samples allowed the diagnosis of 42 IMDs using up to 340 known IMD-related metabolites.Considering the laborious sample preparation and data processing, it is questionable whether this approach can be used for rapid clinical diagnostics.The CLAM-2030 automated sample pretreatment system directly coupled to LC-QTOF/MS appears promising and has been demonstrated on 9 acidurias [11].Nevertheless, this system is time consuming and financially demanding and sample derivatization and incubation are required.A recently published work [14] showed a rapid analysis of 5 common serum and urinary OA.Despite being a fast quantitative method, it is limited to measuring a very small profile, although the authors mention the possibility of extending it to more OA.
Important parameter of the developed workflow is the requirement for a very small sample volume, which can be crucial, especially in neonates and infants.It allows analysis even from a few tens of μL of urine compared to GC-MS where the standard requirement is around 2 mL.Very often we encountered creatinine levels lower than 0.5 mmol/L in young children, which was easily solved in our approach by adjusting the sample preparation and compensating with a higher LC-MS injection.
The big advantage of the developed workflow lies also in straightforward data evaluation.Although the method has been subjected to a complete validation to verify the analytical parameters of all analytes, relative quantification is used in routine operation, where the peak areas of the analytes are related to the corresponding internal standards.Due to the high number of 140 analytes, it is in principle impossible to achieve absolute quantification based on calibration standards in the long term.For internal quality control purposes, commercially available ERNDIM QC materials can then preferably be used.To achieve maximum analytical quality, it is then essential to maintain the longterm consistency of internal standards.The use of the automated robust scaling method has proven to be a fast, convenient, and reliable tool for IMD diagnosis.Data transformation using z-scores is not fully new in data interpretation [12,31,33].However, for urine metabolites, robust data transformation method was used due to the large scale and variability across all measured analytes and the skewed distribution of the data including outliers.To calculate RS values, medians and interquartile range (IQR) as a robust scale estimator was used instead of median absolute deviation (MAD), which is less effective for nonsymmetric distributions with present outliers [25][26][27].As an outcome, the data are visualised using a multianalyte scatter plot and IMD network in GraphPad Prism and Cytoscape software, respectively.The result of the analysis is carried out within 1 h after the sample delivery.Routine raw data processing involving integration of the peaks of 12 samples in a batch (10 urine and 2 IQC samples), transfer to a spreadsheet, conversion to RS values, upload to statistical and visualization software, subsequent printing and initial evaluation of the results takes a total of 90 min.This calculation is valid for experienced personnel familiar with IMD and the software used.In addition, the method can be easily expanded by adding more relevant biomarkers for other IMDs, e.g.MS and LC conditions were optimised for hawkinsin (specific for hawkinsinuria, OMIM#140350) from an EQA sample during its routine laboratory operation.In the future, the method is likely to be extended to include other new or known relevant IMD biomarkers.In addition, the method allows the separation of all 4 isomers of carnitine C5 (i.e.pivaloyl-, 2-methylbutyryl-, isovaleryl-and valeryl-), which is used in the expanded newborn screening as a second-tier method to detect false positive IVA from treatment with the antibiotic Pivinorm.The use of the same sample eliminates subsequent requests for an additional patient sample and thus the burden on the patient and parents themselves.Some of the findings in EQA samples in Figure 6 may not be directly related to the disease, but are associated with metabolic ketoacidosis, or supportive treatment (carnitine, amino acids, vitamins or cofactors supplementation, etc.).That is the case of sample QLOU-DH-2021-A (mitochondrial shortchain enoyl-CoA hydratase 1 deficiency, OMIM#616277).Here, higher concentrations of 2-hydroxyisovalerate and 2-hydroxyisocaproate were also present, referring to lactic acidosis [34].This is also supported by the increased lactate excretion.Increased concentrations of certain AC were also observed in this sample, probably due to carnitine supplementation.On the other hand, it is common to encounter samples that do not have elevated levels of typical biomarkers.For instance, sample DPT-2021-F (tyrosinemia type I, OMIM#276700) was found to have increased only 4-hydroxyphenylpyruvate and 4-hydroxyphenyllactate.However, aminoacidopathies are investigated by amino acid analysis, for which this method is not suitable.Furthermore, sample QLOU-DH-2021-F with medium-chain acyl-CoA dehydrogenase deficiency (MCADD, OMIM#204450) did not show pathogenicity at the level of OA, but only at the level of AC and AG.This just demonstrates the importance of comprehensive testing in the diagnosis of IMD.
Although the use of the advanced LC-MS/MS method is very convenient, it has some pitfalls as described below.In terms of analyte structure, compounds containing the amino group are not retained on the reversed-phase column.Oxoacids have poor peak shape (as depicted in Figure 2C), low sensitivity and poor validation parameters due to known low stability in aqueous solutions.Thus, many oxoacids can only be detected at elevated pathological conditions as shown in Supplementary Figure 2.These drawbacks of analysing oxoacids can be also seen in their reproducibility of RT.Further, primary hyperoxaluria can be detected based on secondary biomarkers (glycerate and glycolate) as shown in Supplementary Figure 1B.Oxalate as a primary biomarker is not retained on the reversed-phase column due to its low molecular weight and high polarity.One limitation of the study is that the effect of age on the reference limits was not taken into account.This could, for example, affect the assessment of a mild increase of methylmalonic acid in the case of vitamin B12 deficiency.Based on the literature sources, major differences in the definition of age groups can be found and this will be further addressed as part of method improvements.
In conclusion, a robust LC-MS/MS approach enabling rapid diagnosis of a broad spectrum of more than 80 IMDs at the level of OA, AG and AC (146 analytes) in urine has been developed and validated.Compared to routinely used GC-MS, the LC-MS/MS method offers fast and easy sample preparation, low sample consumption, higher coverage of metabolites significantly improving the diagnostic process, comfortable data processing with clear identity of analytes and advanced result visualization and statistical evaluation.This approach has been in use at our institution for 2 years and has been successfully tested on EQA samples in addition to routine samples.As the method is cost-effective, fast, and simple, it has a high potential for implementation as the firstchoice method in other laboratories for IMD diagnosis and finally to replace the mostly used GC-MS for urinary OA analysis.
Figure 1 :
Figure 1: Schematic representation of the workflow and validation.
Figure 3 :
Figure 3: Plotted results of RS values (x-axis) of measured analytes (y-axis) for patient with glutaric aciduria type II (Patient 14) represented by red dots.Grey dots represent values of healthy controls (n=200) (A).The cut-out zoom demonstrates a closer distribution of healthy controls (B).
Figure 4 :
Figure 4: IMD network with visualisation of the patient with glutaric aciduria type II (Patient 14).Organic acidurias are divided into separate groups according to affected metabolism.Biomarkers (end nodes with biomarker names) are connected to a particular disease (rectangle node with green title) and expressed by size and colours according to the calculated RS values.For better display and clarity of the image, the cut-off value of 20 was set for biomarker labelling.IMD abbreviations are in accordance with the official OMIM nomenclature.
Figure 5 :
Figure 5: Chromatographic separation of C5 carnitines in DBS sample.Extracted chromatogram of DBS analysis of a patient suspected of IVA with a high amount of interfering pivaloylcarnitine (blue).Internal standard isovaleryl-2 H 9 -L-carnitine (red) coeluates at the same time as isovalerylcarnitine.
Figure 6 :
Figure 6: Graphical plotting of RS values (x-axis) of biomarkers (y-axis) used to elucidate the diagnosis.The results are shown for samples of the ERNDIM DPT and QLOU (Heidelberg) schemes from the years 2021-2022 which were analysed during standard routine operation. | 5,972.4 | 2023-05-19T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Discrimination of Minced Mutton Adulteration Based on Sized-Adaptive Online NIRS Information and 2D Conventional Neural Network
Single-probe near-infrared spectroscopy (NIRS) usually uses different spectral information for modelling, but there are few reports about its influence on model performance. Based on sized-adaptive online NIRS information and the 2D conventional neural network (CNN), minced samples of pure mutton, pork, duck, and adulterated mutton with pork/duck were classified in this study. The influence of spectral information, convolution kernel sizes, and classifiers on model performance was separately explored. The results showed that spectral information had a great influence on model accuracy, of which the maximum difference could reach up to 12.06% for the same validation set. The convolution kernel sizes and classifiers had little effect on model accuracy but had significant influence on classification speed. For all datasets, the accuracy of the CNN model with mean spectral information per direction, extreme learning machine (ELM) classifier, and 7 × 7 convolution kernel was higher than 99.56%. Considering the rapidity and practicality, this study provides a fast and accurate method for online classification of adulterated mutton.
Introduction
Mutton is very popular because of its delicious taste and rich nutrition. However, due to the relatively high price, mutton has become the adulteration object to some illegal merchants [1]. In China, adulterated mutton refers to mutton mixed with low-price meat, such as pork or duck, without declaration [1][2][3]. Food adulteration not only harms consumer rights and interests, but also affects the order of the food market [4][5][6]. In particular, pork adulterated in mutton will undermine ethical beliefs for some ethnic groups. Traditional detection techniques of food adulteration are usually based on deoxyribonucleic acid [7], polymerase chain reaction (PCR) [8], and chromatography [9]. However, these methods have some shortcomings, such as being time-consuming, laborious, or costly. Compared with the above methods, near-infrared spectroscopy (NIRS) has the great potential of being fast, nondestructive, and environment-friendly. Although it is not cheap at the moment, low-cost and miniaturised spectrometers are still being developed with the progress of science and technology, and the NIRS technology has great application potential. In recent years, NIRS has been applied to detect the quality and adulteration of meat [10][11][12][13][14][15].
To collect more representative spectral information by using the single-probe NIRS system, some studies reported that their samples had been scanned many times or collected the multipoint spectra from different directions. Barragan et al. [16] collected four spectra of each sample by scanning four times with a single-probe and used mean spectral information to build the model for authentication of barley-finished beef. Alamprese et al. [17] recorded two spectra of each sample by scanning two times for the identification and quantification of minced beef adulteration with turkey meat. Boiret et al. [18] established a detection model for active components in tablets by using the mean multipoint spectral information in two orthogonal directions. Moreover, Duan et al. [19] proved that the spectral information from different regions of interest could affect the model performance by using hyperspectral imaging technology. Although the NIRS system can collect the multipoint spectral information in different ways, there are no reports about the influence of spectral information on the model performance for meat quality detection. To obtain the multipoint spectral information using the single-probe NIRS and explore the influence of different spectral information on the classification model of minced mutton adulteration, it is necessary to develop an online NIRS system, which can adaptively collect multipoint spectra for each sample, and use different spectral information to establish the classification model of minced mutton adulteration.
As one of the representative deep learning algorithms, the convolutional neural network (CNN) can directly extract representative features from the original data, which avoids the cumbersome operation of the traditional method that requires multiple data preprocessing. In recent years, it had been gradually used in the modelling of NIRS due to its excellent ability of spectral feature extraction, high accuracy, and strong robustness [20][21][22]. To adapt to the relevant operation requirements of the convolution layer, the spectral data vector of each sample was transformed into a two-dimensional (2D) spectral information matrix by constructing a spectral information matrix. Padarian et al. [23] used CNN to extract the deep features contained in 2D spectral information matrixes to predict the soil property and proved that CNN was an effective tool for modelling. However, there are some studies that reported that the size of the convolution kernel and the type of classifier could affect the performance of the CNN model. Chen et al. [24] studied the influence of the size of the convolution kernel on the CNN model and found that the coefficient of determination (R2) for the calibration increased with the size of the convolution kernel. Li et al. [25] optimised CNN models by comparison of different kernel sizes and achieved better classification accuracy with a large convolution kernel size. Su et al. [26] used the classifiers of SVM (support vector machine), LSVM, and Softmax for the identification of wheat leaves, and found that LSVM had the highest classification accuracy and lowest iteration times. Sharma et al. [27] compared the CNN-Softmax, CNN-ELM, and CNN-SVM model for fire detection, and the results showed that the classification accuracy of the CNN-ELM (extreme learning machine) model was 2.7-7.1% higher than that of CNN-Softmax. The above research showed that CNN was an effective qualitative classification model, but the sizes of the convolutional kernel and classifiers had influence on its performance. There are also few studies that have used the CNN model combined with different classifiers to detect food adulteration. Therefore, it is meaningful to establish the classification model of adulterated mutton by using CNN combined with different classifiers, and to explore the influence of different convolution kernels and classifiers on the model performance.
In order to explore the influence of different spectral information and model parameters on NIRS classification of minced mutton adulteration based on CNN, the following was carried out in this study: (1) an online NIRS system was developed, which could adaptively collect the spectra of four points in one run according to the size of the sample; (2) samples of pure mutton, pure pork, pure duck, and adulterated mutton (minced mutton mixed with 10-20-30-40-50% (w/w) pork/duck) were prepared, and the spectral information of samples from four different directions (45 • interval between adjacent directions) were collected; (3) the mean spectral information per direction and of four directions were obtained, and the 1D spectral data of samples were converted to a 2D spectral information matrix; (4) the CNN models with different classifiers (Softmax, ELM, and SVM) were established and compared based on different spectral information, and the influence of different spectral information, convolution kernel sizes, and classifiers on models was explored. This study has certain significance for safeguarding the rights and interests of consumers and promoting the healthy development of the mutton industry.
Sample Preparation
In this study, samples of mutton, pork, and duck were purchased from the local supermarket in Shihezi city, Xinjiang autonomous region. The samples were sent to the laboratory in an insulation box with ice (the temperature was between 0 and 5 • C) and then stored in a refrigerator (the temperature was between 0 and 4 • C). Because the white fascia and fat of meat will bring great influence to the spectrum of point acquisition, the visible fat and skin of the meat was removed before sample preparation to reduce the interference in classification. For preparation of pure meat and adulterated meat samples, the trimmed meat was weighted by an electronic scale (YingHeng, China), and they were minced and mixed by a meat-mincing machine (Joyoung, JYS-A900, China) for 30 s. Then, 30 ± 1 g of minced meat was compacted in a round petri dish (60 mm in diameter × 15 mm in depth × 10 mm in thickness) to 10 mm in thickness while keeping its surface smooth, to ensure the homogenisation of the sample. In order to improve the generalisation performance of the classification model, five kinds of samples were prepared. According to 35 samples of each adulterated proportion (10%, 20%, 30%, 40%, and 50%), 350 (2 × 5 × 35) adulterated mutton samples with duck/pork were prepared, and 35 pure mutton, 35 pure duck, and 35 pure pork samples were also obtained.
Online NIRS System
The online NIRS system developed in this study mainly includes two pairs of photoelectric sensors, a conveyor, a microprocessor (STM32F103C8T6, STMicroelectronics Inc Geneva, Switzerland,), a NIR spectrometer (900-2500 nm, NIRQuest512, Ocean Optics Inc., Dunedin, USA), two halogen lamps (MR11 20W, Philips Inc., Amsterdam, The Netherlands), a single optical fibre probe (QP400-1-vis-nir, Ocean Optics Inc., Dunedin, USA), a PC machine, and a self-designed software. Its structural schematic diagram is shown in Figure 1a. The speed of the conveyor was set as 8 cm/s according to the preliminary research results of our laboratory and the existing reports on online detection [28][29][30]. The halogen lamps were installed on both sides of the dark box, and the installation angle was set to 30 • to the horizontal plane. The vertical distance between the probe and the sample surface was 1.5 cm. The interval time of 4 collection points of each sample was determined by the time interval of the rising and falling edge of the signal from photoelectric sensor 1 ( Figure 1a). Photoelectric sensor 2 was used to detect the arrival signal of the sample and send it to the control software, to drive the spectrometer to collect the spectral data of the sample. Due to the interference of light sources on the photoelectric sensor signal, two pairs of 5 V laser sensors with a beam of 650 nm were applied in this system and a conical sleeve was adopted to reduce the influence of light sources, as shown in Figure 1b. The acquisition software was designed in Qt Creator 4.9.1 (Qt Company Ltd., Finland) by using C++, which was based on the OmniDriver 2.56 (Ocean Optics Inc., Dunedin, FL, USA). The software could complete the parameter configuration of the spectrometer, including integration time, smoothness and scanning times, communication with the microcontroller, and data processing. The online NIRS system developed in this Figure 1. Schematic diagrams of the (a) online NIRS system, (b) method for avoiding the interference, and (c) data acquisition. The input current is converted into an optical signal and emitted by a laser photoelectric switch. The receiver detects the target object according to the intensity of the received light or whether there is no light. A conical sleeve is installed on the receiver of the laser photoelectric switch, which is mainly used to avoid the interference of the light emitted by the light source to the receiver. The acquisition software was designed in Qt Creator 4.9.1 (Qt Company Ltd., Finland) by using C++, which was based on the OmniDriver 2.56 (Ocean Optics Inc., Dunedin, FL, USA). The software could complete the parameter configuration of the spectrometer, including integration time, smoothness and scanning times, communication with the microcontroller, and data processing. The online NIRS system developed in this study could adaptively collect four spectra of each sample according to the sample size in the run direction.
Spectral Data Acquisition and Dataset Partition
Before the acquisition of spectra, the NIRS system needed to be preheated about 30 min. In this study, the integral time, smoothness, and scanning times of the NIRS acquisition system were set to 40 ms, 5 times, and 10 times, respectively. The whiteboard (USRS-99-010, Labsphere Inc., Sutton, NH, USA) was used to perform white calibration of the spectrometer, and the light source was turned off for black calibration. The reflection spectra of four points in the run direction of the sample were automatically collected according to the sample size. To study the influence of the spectral information on the classification model, spectra of four directions (1, 2, 3, and 4) at 45 • intervals were collected for each sample, as shown in Figure 2. The obtained spectral data were saved as a CSV file for further data processing. A total of 455 samples were used for collecting the spectral information. First, the spectra of four points per direction (including direction 1, 2, 3, and 4) were averaged to obtain the mean spectral information per direction, and a total of 1820 spectra were obtained for 455 samples. Based on the mean spectral information of the four directions, a total of 455 samples were divided into 341 samples for the calibration set and 114 samples for the validation set by the joint x-y distances algorithm, which was first proposed by Galvao et al. [31], and its principle is to calculate the distance between samples by using two variables, label value and spectrum, to ensure the maximum distribution of samples, to effectively cover the multidimensional vector space, to increase the difference and representativeness between samples, and to improve the stability of the model. On this basis, 1820 mean spectra per direction were divided into 1364 spectra for the calibration set and 456 spectra for the validation set. Two kinds of spectral information were used to establish models, and two validation sets from different spectral information were used to test the classification performance.
Model Establishment and Evaluation
In this study, CNN models with the classifiers of Softmax, ELM, and SVM were established on the basis of a 2D spectral information matrix. The best model for classification of adulterated mutton was selected by comparing the performance of the models. The schematic diagram of the data process is shown in Figure 3.
Model Establishment and Evaluation
In this study, CNN models with the classifiers of Softmax, ELM, and SVM were established on the basis of a 2D spectral information matrix. The best model for classification of adulterated mutton was selected by comparing the performance of the models. The schematic diagram of the data process is shown in Figure 3.
Model Establishment and Evaluation
In this study, CNN models with the classifiers of Softmax, ELM, and SVM were established on the basis of a 2D spectral information matrix. The best model for classification of adulterated mutton was selected by comparing the performance of the models. The schematic diagram of the data process is shown in Figure 3.
Modelling Methods
The CNN model usually consists of an input layer, hidden layer, and output layer, and the hidden layer contains a convolution layer, pooling layer, and fully connected layer. The main function of the convolution layer is to extract features of input data [32]. The output data are obtained by multiplying the convolution kernel with the corresponding element values in the coincidence region of input data and adding an offset, and its convolution operation is described as Formula (1): where is the output of the convolution layer; i is the serial number of the convolution layer; k is the serial number of the convolution kernel; N is the number of channels of input data; W and B are the weight and offset, respectively; ⊗ represents the convolution operation.
After convolution, the output data are input to the next layer by the nonlinear transformation activation function. The most commonly used activation function is rectified linear unit (ReLu), which is described as Equations (2) and (3) [33]:
Modelling Methods
The CNN model usually consists of an input layer, hidden layer, and output layer, and the hidden layer contains a convolution layer, pooling layer, and fully connected layer. The main function of the convolution layer is to extract features of input data [32]. The output data are obtained by multiplying the convolution kernel with the corresponding element values in the coincidence region of input data and adding an offset, and its convolution operation is described as Formula (1): is the output of the convolution layer; i is the serial number of the convolution layer; k is the serial number of the convolution kernel; N is the number of channels of input data; W and B are the weight and offset, respectively; ⊗ represents the convolution operation.
After convolution, the output data are input to the next layer by the nonlinear transformation activation function. The most commonly used activation function is rectified linear unit (ReLu), which is described as Equations (2) and (3) [33]: The function of the pooling layer is reducing dimensions and selecting representative features from input data [34]. The pooling kernel can be set to different sizes, and the output result of the pooling layer is calculated by moving the pooling kernel in the region of the input data matrix according to the stride.
In general, the CNN is composed of two or more fully connected layers, and the neurons of fully connected layers are completely connected between the two layers. Twodimensional features from the last pooling layer are flattened. Then, it is regarded as the input of the full connection layer for the further feature extraction [35]. By calculating in fully connected layers, a one-dimensional vector is output, and each value of the vectors represents the quantitative value of classifications.
The CNN model in this study contained 3 convolution layers, 3 pooling layers, and 2 fully connected layers. The size of the input layer was 230 × 230 × 1 (width × height × channel). Three convolution kernels of different sizes (3 × 3, 5 × 5, and 7 × 7) were set in the convolution layer to discuss its influences on the models, the padding method was the same padding, the size of step was 1, and the activation function was ReLu. The maximum pooling method of 2 × 2 was used to compress images and extract features. To improve the generalisation ability of the whole network and prevent over fitting, dropout layers were added between adjacent convolution layers, and the drop rate was set to 0.3. In the CNN model, the Adam was selected as the optimiser, the mean square error (MSE) was used to measure the loss value, and the learning rate was set to 0.001. In this study, the number of iterations was set to 500, and the CNN was implemented in Python 3.7.6 using Keras library and the Tensorflow 2.3.0 backend.
Classifiers
The classifiers of Softmax, ELM, and SVM were used in this study. A common classifier used in the CNN model is Softmax, which calculates the probability value of each output channel and selects the channel with the highest probability value as the result of classification. As a kind of supervised machine learning algorithm, SVM can classify samples to the maximum extent by mapping nonlinear data to a high-dimensional space [35]. As a fast-learning algorithm, ELM can initialise input weights and offsets randomly, which has the characteristics of strong generalisation ability and fast calculation speed [35]. To select a fast and accurate model that is suitable for online detection of adulterated mutton, the classification performance of the CNN model with different classifiers was compared. Parameters of different classifiers were optimised to obtain better model performance. For the SVM classifier, the radical basis function (RBF) was set to the kernel function, and parameters (gamma and cost) were searched by the optimisation method of the genetic algorithm (GA). The activation function of the ELM classifier was sigmoid, and 10-200 neurons at 5 intervals were used to search the best number of neurons for hidden layers. When the neurons were set as 175, the best performance of the ELM classifier could be obtained.
Model Evaluation
In the classification model of this paper, the samples were classified into five categories: pure mutton, pure pork, pure duck, adulterated mutton with pork, and adulterated mutton with duck samples. To evaluate the classification ability of models, the classification accuracy (Acc) was applied. In order to obtain a fast classification model, the prediction time was used to measure the efficiency of different classifiers.
Spectral Data Analysis
To avoid the uninformative bands existing in the NIR spectral data, the range of 1038-2475 nm was selected for modelling, and the number of spectral channels was 230. The Savitzky-Golay (SG) with a five-point filter was applied to smooth the spectra. Figure 4 shows the mean reflectance spectra of representative samples. Figure 4a shows that the spectral trends of mutton, duck, and pork are similar, but there are obvious differences in spectral reflectance because of their different chemical composition [1]. The mean spectrum of pork samples has higher values of reflection than mutton or duck samples. Figure 4b,c show that spectral reflection curves of all samples with different adulteration percentages have similar trends, and the reflectance changes with different adulteration proportions. It is clear that wavelengths around 1260, 1520, 1650, and 1840 nm have significant absorption peaks in the spectral curves of pure meat and adulterated mutton, which are related to the absorption bands of water, fat, and protein in the meat samples. In particular, three absorption peaks at 1260, 1840, and 1650 nm are mainly related to the C-H second overtone and the C-H 2 stretch first overtone in fat [36], and the absorption peak at 1520 nm is closely related to the N-H stretching second and first overtones in protein [37]. In addition, the range of 1700-2475 nm has a low reflectance value, because there are some absorption bands mainly caused by the combined overtones of molecular groupings such as O-H, N-H, and C-H [38]. mutton, which are related to the absorption bands of water, fat, and protein in the meat samples. In particular, three absorption peaks at 1260, 1840, and 1650 nm are mainly related to the C-H second overtone and the C-H2 stretch first overtone in fat [36], and the absorption peak at 1520 nm is closely related to the N-H stretching second and first overtones in protein [37]. In addition, the range of 1700-2475 nm has a low reflectance value, because there are some absorption bands mainly caused by the combined overtones of molecular groupings such as O-H, N-H, and C-H [38]. To fully utilise the capacity of the CNN model, the 1D spectra were normalised to the range of 0-1 and converted into a 2D spectral information matrix according to Equation (4).
where x is the spectral vector after normalisation; is the transposition matrix of x; S is the matrix of a 2D spectral information matrix.
As the 2D spectral information matrixes converted from the 1D spectral data were single-channel grey images, jet colour was applied to improve the visual recognition ability of spectral information matrix features. Figure 5 shows the mean 2D spectral information matrixes of representative samples. It can be seen that the brightness in the upper left region is larger and decreases along the lower right region. Some differences could be found for different meat. Figure 5a shows that there are brightness values of 0.8-1 at (25,25), and two brightness values of 0.4-0.6 at (50, 25) and (25,50) in the mutton spectral information matrixes, while duck and pork samples have no obvious brightness values in the same region. Meanwhile, it can be seen from Figure 5b,c that the spectral information matrixes of adulterated mutton have similar brightness distribu- To fully utilise the capacity of the CNN model, the 1D spectra were normalised to the range of 0-1 and converted into a 2D spectral information matrix according to Equation (4).
where x is the spectral vector after normalisation; x T is the transposition matrix of x; S is the matrix of a 2D spectral information matrix.
As the 2D spectral information matrixes converted from the 1D spectral data were single-channel grey images, jet colour was applied to improve the visual recognition ability of spectral information matrix features. Figure 5 shows the mean 2D spectral information matrixes of representative samples. It can be seen that the brightness in the upper left region is larger and decreases along the lower right region. Some differences could be found for different meat. Figure 5a shows that there are brightness values of 0.8-1 at (25,25), and two brightness values of 0.4-0.6 at (50, 25) and (25,50) in the mutton spectral information matrixes, while duck and pork samples have no obvious brightness values in the same region. Meanwhile, it can be seen from Figure 5b,c that the spectral information matrixes of adulterated mutton have similar brightness distributions, but there are still slight differences between the spectral information matrixes of samples with different proportions and different types of adulterated mutton. Based on the above differences, CNN can extract different depth features and combine different classifiers to classify different meat.
Model Establishment and Evaluation
The CNN models with different classifiers (Softmax, SVM, and ELM) and convolution kernel sizes (3 × 3, 5 × 5, and 7 × 7) based on two kinds of different spectral information were established, and the model performance was tested with two validation sets from different spectral information. Validation set 1 was the validation set corresponding to the mean spectral information of the four directions; validation set 2 was the validation set corresponding to the mean spectral information per direction. The influence of different mean spectral information, convolution kernel sizes, and classifiers on models was explored, and the optimal models were selected based on the accuracy of the validation set. The high accuracy of the validation set indicated the strong classification ability of the model. When the accuracy of validation sets was the same, the optimal models were selected according to the accuracy of the cross-validation sets, and the high accuracy of the cross-validation set indicated that the model had good stability.
tions, but there are still slight differences between the spectral information matrixes of samples with different proportions and different types of adulterated mutton. Based on the above differences, CNN can extract different depth features and combine different classifiers to classify different meat.
Mutton
Pork Duck
Model Establishment and Evaluation
The CNN models with different classifiers (Softmax, SVM, and ELM) and convolution kernel sizes (3 × 3, 5 × 5, and 7 × 7) based on two kinds of different spectral information were established, and the model performance was tested with two validation sets from different spectral information. Validation set 1 was the validation set corresponding to the mean spectral information of the four directions; validation set 2 was the validation set corresponding to the mean spectral information per direction. The influence of different mean spectral information, convolution kernel sizes, and classifiers on models was explored, and the optimal models were selected based on the accuracy of the validation set. The high accuracy of the validation set indicated the strong classification ability of the model. When the accuracy of validation sets was the same, the optimal models were selected according to the accuracy of the cross-validation sets, and the high accuracy of the cross-validation set indicated that the model had good stability.
CNN-Softmax Models Establishment Based on Different Spectral Information
The results of CNN-Softmax models are shown in Figure 6. It could be found that the size of the convolution kernel had little effect on the performance of the models. When the mean spectral information per direction was used to establish the model, the accuracy of all datasets was higher than 97%. By comparing the results of validation sets, the performance of the model with the convolution kernel of 3 × 3 was slightly lower than those with the other two convolution kernels. When the sizes of the convolution kernel were set as 5 × 5 and 7 × 7, the accuracy of validation set 1 and 2 for the two models was the same. By comparing the accuracy of cross-validation sets for the two models, it could be found that the model had the highest accuracy when the size of the convolution kernel was set as 5 × 5, of which the accuracy for validation set 1, validation set 2, and the cross-validation set was 100.00%, 98.25%, and 99.56%, respectively. Similarly, it could be obtained that the accuracy of the model with the size of the convolution kernel of 5 × 5 was highest when the mean spectral information of the four directions was used for modelling, and the accuracy of validation set 1, validation set 2, and the cross-validation set was 99.12%, 91.86%, and 98.25%, respectively. tion kernel was set as 5 × 5, of which the accuracy for validation set 1, validation set 2, and the cross-validation set was 100.00%, 98.25%, and 99.56%, respectively. Similarly, it could be obtained that the accuracy of the model with the size of the convolution kernel of 5 × 5 was highest when the mean spectral information of the four directions was used for modelling, and the accuracy of validation set 1, validation set 2, and the cross-validation set was 99.12%, 91.86%, and 98.25%, respectively.
(a) (b) Figure 6. Results of CNN-Softmax models with different convolution kernel sizes based on mean spectral information (a) per direction and of (b) four directions. Figure 7 shows the results of CNN-SVM models. When the mean spectral information per direction was used for modelling, the accuracy of calibration sets and cross-validation sets for all models was 100.00%. It was found that the classification performance of the model with the 3 × 3 convolution kernel sizes was lowest by comparing the accuracy of validation sets, and when the sizes of the convolution kernel were set as 5 × 5 and 7 × 7, the accuracy of the cross-validation set and validation set 1 had the same results. However, it could be found that the accuracy of the model with the convolution kernel of 7 × 7 was highest by comparing the accuracy of validation set 2, and its accuracy of validation set 1, validation set 2, and the cross-validation set was 98.45%, 98.68%, and 100.00%, respectively. Similarly, the classification accuracy of the model with the convolution kernel size of 5 × 5 based on the mean spectral information of the four directions was highest, and the accuracy of validation set 1, validation set 2, and the cross-validation set was 99.12%, 90.79%, and 99.71%, respectively. Figure 7 shows the results of CNN-SVM models. When the mean spectral information per direction was used for modelling, the accuracy of calibration sets and cross-validation sets for all models was 100.00%. It was found that the classification performance of the model with the 3 × 3 convolution kernel sizes was lowest by comparing the accuracy of validation sets, and when the sizes of the convolution kernel were set as 5 × 5 and 7 × 7, the accuracy of the cross-validation set and validation set 1 had the same results. However, it could be found that the accuracy of the model with the convolution kernel of 7 × 7 was highest by comparing the accuracy of validation set 2, and its accuracy of validation set 1, validation set 2, and the cross-validation set was 98.45%, 98.68%, and 100.00%, respectively. Similarly, the classification accuracy of the model with the convolution kernel size of 5 × 5 based on the mean spectral information of the four directions was highest, and the accuracy of validation set 1, validation set 2, and the cross-validation set was 99.12%, 90.79%, and 99.71%, respectively. Figure 8 shows the results of CNN-ELM models. When the mean spectral information per direction was used to establish the classification model, the accuracy of all models was higher than 98.00%, and the accuracy of validation set 1 for all models was 100.00%. When the size of the convolution kernel was set as 3 × 3, the model performance of validation set 2 was slightly lower than the models with the other two sizes of the convolution kernel. When the size of the convolution kernel was set as 5 × 5 and 7 × 7, the accuracies of the two validation sets for all models were the same; however, it could be found that the model had the highest accuracy when the size of the convolution Figure 8 shows the results of CNN-ELM models. When the mean spectral information per direction was used to establish the classification model, the accuracy of all models was higher than 98.00%, and the accuracy of validation set 1 for all models was 100.00%. When the size of the convolution kernel was set as 3 × 3, the model performance of validation set 2 was slightly lower than the models with the other two sizes of the convolution kernel. When the size of the convolution kernel was set as 5 × 5 and 7 × 7, the accuracies of the two validation sets for all models were the same; however, it could be found that the model had the highest accuracy when the size of the convolution kernel was set as 7 × 7 by comparing the accuracy of the cross-validation set, and the accuracy of validation set 1, validation set 2, and the cross validation set was 100.00%, 99.56%, and 99.93%, respectively. When modelling with the mean spectral information of the four directions, the classification accuracy was highest when the size of the convolution kernel was set as 5 × 5, and the accuracy of validation set 1, validation set 2, and the cross-validation set was 97.37%, 90.57%, and 100.00%, respectively. Figure 8 shows the results of CNN-ELM models. When the mean spectral information per direction was used to establish the classification model, the accuracy of all models was higher than 98.00%, and the accuracy of validation set 1 for all models was 100.00%. When the size of the convolution kernel was set as 3 × 3, the model performance of validation set 2 was slightly lower than the models with the other two sizes of the convolution kernel. When the size of the convolution kernel was set as 5 × 5 and 7 × 7, the accuracies of the two validation sets for all models were the same; however, it could be found that the model had the highest accuracy when the size of the convolution kernel was set as 7 × 7 by comparing the accuracy of the cross-validation set, and the accuracy of validation set 1, validation set 2, and the cross validation set was 100.00%, 99.56%, and 99.93%, respectively. When modelling with the mean spectral information of the four directions, the classification accuracy was highest when the size of the convolution kernel was set as 5 × 5, and the accuracy of validation set 1, validation set 2, and the cross-validation set was 97.37%, 90.57%, and 100.00%, respectively.
Comparison of the Classification Performance of CNN Models with Different Classifiers
According to the above results, the model established with the mean spectral information per direction was better than the models established with that of the four directions. When two validation sets were used to test the models established by the mean spectral information per direction and of the four directions, it could be found that the models based on the mean spectral information per direction had a higher stability and accuracy than the models based on that of the four directions, and for the same validation set, the maximum difference of accuracy could reach 7.68% for CNN-Softmax, 8.33% for CNN-SVM, and 12.06% for the CNN-ELM model. The results indicated that convolution kernel sizes had little effect on the performance of CNN models, but spectral information had great influence on that, and the models based on the mean spectral information per direction performed better than the models based on that of the four directions. This may be because different average scales have different abilities to retain key information. More effective information is contained in the mean spectra information per direction, so the model has a good effect.
To explore the influence of different classifiers on the models, the optimal models established by different classifiers based on the mean spectral information per direction were compared. The results are shown in Table 1. Table 1 shows that the accuracy of validation sets for all models based on mean spectral information per direction with the best kernel size was higher than 98.00%. By comparing the accuracy of validation sets 1 and 2, the performance of the CNN-ELM model was better than that of the CNN-Softmax or CNN-SVM model. In order to evaluate the efficiency of the model, the prediction time for validation set 2 of models was calculated. It can be found that the prediction time of the CNN-ELM model was 0.02 s, which is much shorter than those of the other two models. This was because the parameters calculated by a single hidden layer of the CNN-ELM model were less than those calculated by the fully connected layer of the CNN-Softmax model [31]. The above results indicated that classifiers had little effect on model accuracy, but had significant influence on classification speed, and CNN-ELM was more suitable for adulterated mutton detection compared with the CNN-Softmax or CNN-SVM model.
Conclusions
An online NIRS system based on a photoelectric sensor was developed, which could adaptively collect the spectral information of four points in one run according to the size of the sample. Based on the mean spectral information per direction and of four directions, the CNN models with different classifiers were established to classify samples of pure mutton, pork, duck, and adulterated mutton with pork/duck. The influence of spectral information, the convolution kernel size, and the classifier on model performance was explored. The results showed that the spectral information (mean spectra information per direction and average spectra information of the four directions) had great influence on the classification performance of the model. When the models were tested with two validation sets from different spectral information of the same samples, the models based on the mean spectral information per direction had a higher stability and accuracy than the models based on that of the four directions, and the maximum difference of accuracy could reach 12.06% for the same validation set. In addition, the size of the convolution kernel (3,5,7) had little effect on model performance, and classifiers (Softmax, SVM, and ELM) had little effect on model accuracy, but had significant influence on classification speed. The accuracy of all datasets of the CNN-ELM model with spectral information per direction on the basis of the convolution kernel of 7 × 7 was all higher than 99.56%. Considering the rapidity and practicality of online detection, it could meet the fast and accurate online classification of adulterated mutton. The results provide a theoretical basis and technical support for rapid evaluation of adulteration mutton. | 8,912.4 | 2022-09-23T00:00:00.000 | [
"Computer Science"
] |
Evaluation of a reduced mechanism for turbulent premixed combustion
Abstract In this study, 3D direct numerical simulations of a multi-component fuel consisting of CO , H 2 , H 2 O , CO 2 and CH4 reacting with air are performed. A freely propagating turbulent premixed stoichiometric flame is simulated for both low and high turbulence conditions i.e., the rms values of turbulent velocity fluctuations normalised by the laminar flame speed are of order 1 and 10. A skeletal mechanism involving 49 reactions and 15 species, and a 5-step reduced mechanism with 9 species, are used in order to evaluate the performance of the reduced mechanism under turbulent conditions. The 5-step mechanism incurs significantly lower computational expenses compared to the skeletal mechanism. The majority of species mean mass fractions and mean reaction rates computed using these two mechanisms are in good agreement with one another. The mean progress variable and heat release rate variations across the flame brush are also recovered by the reduced mechanism. No major differences are observed in flame response to curvature or strain effects induced by turbulence, although some differences are observed in instantaneous flame structure. These differences are studied using a correlation coefficient and detailed analysis suggests that this comes from the fluctuating heat release induced effects in the case with higher turbulence level. Further considerations based on instantaneous reaction rate and local displacement speed are discussed to evaluate the suitability of the reduced mechanism.
Introduction
Natural hydrocarbon based fuel resources such as methane are finite, and are becoming increasingly more expensive to extract often requiring off-shore drilling at great depths. At the same time emission regulations are becoming stricter, due to increasing levels of CO 2 in the atmosphere. In light of these developments, low calorific value fuels such as Coke Oven Gas (COG), Blast Furnace Gas (BFG), and those coming from bio-gasifiers, are becoming increasingly popular as alternative fuels for power generation using industrial gas-turbines [1]. These are typically multi-component fuels, involving CO; H 2 ; H 2 O; CH 4 ; CO 2 ; O 2 and N 2 , with their compositions varying greatly depending on the production process [2][3][4].
The design of combustors operating efficiently, and in an environmentally-friendly manner to burn such fuels in turbulent flows is challenging. An integral part of the modern design process involves computational fluid dynamics (CFD) simulations of turbulent reactive flows. Three-dimensional direct simulations of turbulent reactive flows of practical interest, are still expensive despite the development of faster and efficient computers. This is primarily due to two issues: (1) accurate description of the flow field requires to resolve the smallest dissipative scales, i.e., the Kolmogorov scale g k , which requires an extremely prohibitive fine numerical grid, and (2) accurate description of the chemistry, requires the use of a very large detailed reaction set. A detailed reaction set usually involves more than hundreds of reactions and tens of species, even for a simple fuel such as CH 4 , and the requirement for multi-component fuels is even larger. Furthermore, the time scales associated with each species can be very disparate, thus requiring the use of an extremely small time-step. All of the above factors, make such simulations impractical even on the fastest super-computer available to date.
Reynolds averaged Navier-Stokes (RANS) and Large Eddy Simulations (LES) approaches tackle the first issue on numerical and computational requirements. The second issue on the required chemical complexity, can be tackled in a variety of ways, using tabulated chemistry approaches [5][6][7][8][9][10], and chemistry reduction involving quasi steady state assumptions (QSSA), combined with partial equilibrium assumptions [11][12][13][14][15][16][17][18][19]. In schemes with QSSA, the computational effort is reduced considerably by introducing steady-state and partial equilibrium assumptions for particular species and reactions respectively. This reduces the number of species to be carried in simulations and the stiffness of the system by removing species with relatively short lifetimes.
Usually, reduced mechanisms obtained using these approximations, are validated against laminar one-dimensional measurements such as the flame speed and ignition delay time. Following this validation procedure, such reduced mechanisms have been used in past Direct Numerical Simulation (DNS) studies [20][21][22][23][24][25][26][27], to gain insight for combustion sub-model development. This step entails a major assumption: that the reduced mechanism retains the same flame front structure and turbulence-flame interaction thereby yielding the same statistics as one would obtain using a detailed or a skeletal mechanism. This may or may not be correct and has not been validated yet in three dimensions, since most of the DNS studies in the past used either a single irreversible reaction or reduced chemical kinetics in three dimensional turbulence. Skeletal chemical kinetic mechanisms on the other hand were predominantly used in two dimensional simulations only, due to the high computational demand for three-dimensional simulations with detailed chemical complexity.
These investigations have been reviewed in many past studies [28][29][30][31], helping us to understand the role of chemical detail in turbulent combustion simulations. For example, it was shown in the 2D DNS of Baum et al. [32,33] that the responses of heat release rate and flamelet speed to curvature and tangential strain rate induced by turbulence on hydrogen-air premixed flames, were substantially different when a single or skeletal chemistry was used, although the general statistics such as the probability density function (pdf) of curvature did not vary much. The role of simulation dimensions was examined in detail in [34] where 2D simulations yielded much broader displacement speed pdfs in comparison with 3D simulations, with the discrepancies being proportional to the turbulence level u rms =s l , where s l is the laminar flame speed. While the 3D simulations revealed the displacement speed to be strongly negatively correlated with curvature, the 2D data showed a much weaker correlation [34]. Since the displacement speed strongly depends on the flow field and mixture transport properties, it is expected that the type of chemical mechanism used will also affect this correlation and the respective pdfs through turbulence-chemistry interaction. This is particularly important from a modelling point of view since the displacement speed is involved in the Gequation and FSD modelling approaches. Furthermore, preferential diffusion effects of light species, are not described when a 1-step chemistry is used. The comparison of LES results with experimental data to assess the accuracy of reduced chemistry models [35], suffers from many additional assumptions introduced for the sub-grid scale combustion modelling. As a result, the exact influence of the chemical model employed, cannot be isolated unambiguously.
DNS studies are ideal to isolate the influence of chemical kinetics modelling on the flame structure and turbulence-chemistry interaction, and to test the performance of a particular chemical scheme for turbulent combustion. However, in the past, DNS studies of premixed combustion in simple canonical configurations with skeletal chemistry and archetypical configurations with reduced chemistry were predominantly used to gain insights on turbulence-chemistry interaction and model validation. A review of these studies in [28] suggests that three-dimensional DNS with adequate detail of chemical kinetics will be required to make general strides on the development of combustion sub-models for optimal design of future engines and fuels. The DNS of combusting flows in archetypical configurations with detailed chemistry and molecular transport for multi-component fuels is expected to be beyond the reach of even exa-scale computing. The use of skeletal or reduced mechanisms seems a plausible choice at this time.
Obviously, a reduced mechanism is preferred for computational reasons. However, the reduced mechanism must retain the essential features of flame structure, the relative role of various fuel species and important radicals, and their interactions with turbulence. The former aspects are usually verified using laminar flame measure-ments and quantities computed using detailed or skeletal chemistry, as noted earlier. The turbulence-flame interaction aspects are usually presumed to hold. In this study, an attempt has been made to verify the ability of a reduced mechanism to capture the turbulence-flame interaction and flame front structure compared to a skeletal mechanism. This is achieved by performing 3D DNS of turbulent premixed combustion of a multi-component fuel mixture in a canonical configuration. The combustion chemistry is modelled using an extensively validated skeletal and 5-step reduced mechanism for multi-component fuel mixtures [36]. The details of these two mechanisms are given later in Section 2.1. The specific objectives of this study are (1) to compare the spatial distribution of heat release rate and species mass fractions in turbulent premixed flames computed using the skeletal and reduced mechanisms, (2) to study the respective statistics of mass fractions, reaction rates etc. obtained using these two mechanisms and (3) to examine the flame statistics, specifically pdfs of flame curvature, displacement speed, tangential strain rate, stretch rate and generalised flame surface density (FSD) which is closely related to the scalar dissipation rate of the reaction progress variable. These quantities are involved in combustion modelling based on flamelets approach.
The rest of the paper is organized as follows. The mathematical background and the numerical implementation are presented in Section 2 along with the computational parameters and the chemical schemes used. The results are presented and discussed in Section 3, and conclusions are drawn in the final section.
Governing equations and numerical method
The direct numerical simulations have been conducted using the SENGA2 code [37] which is a fully compressible code. The equations solved are those for the conservation of instantaneous mass, momentum, energy, and species mass fractions. These equations are written respectively as @q @t þ @qu k @x k ¼ 0; ð1Þ using common nomenclature. The symbol a denotes a species identifier. Further details of the above equations can be found in [37]. The thermal conductivity of the mixture, k, is calculated using a relationship in [38], which is given as where C p is the specific heat capacity at constant pressure for the mixture. The model parameters are A k ¼ 2:6246 Â 10 À5 kg m À1 s À1 and r ¼ 0:6859. The dynamic viscosity, l, of the mixture is calculated by assuming a constant Prandtl number, Pr, through From laminar unstrained flame calculations Pr ¼ 0:7. The diffusion velocities for species are calculated using Fick's law: and the mass diffusivity, D a , for species a is calculated using [38]: where Le a is the Lewis number of species a, which is taken to be constant but different for each species. These values calculated by taking the average Le a for each species across the laminar unstrained flame are shown in Table 1.
The chemical reaction rate of species a; _ x a , is modelled using Arrhenious rate kinetics for 49 reactions involved in a skeletal mechanism [36] given in Table 2. This mechanism involves 15 species listed in Table 1 and it is suitable for combustion of low-calorific value multi-component fuel mixtures. This mechanism was validated in an earlier study [36] for a range of thermo-chemical conditions including laminar flame speeds and ignition delay times.
Development of reduced chemistry
The 5-step reduced mechanism was developed in [36] from the skeletal mechanism in Table 2, using the CARM software [39], which has the ability to directly generate source codes for the steady-state species reaction rates. By removing certain intermediate species from the skeletal mechanism, the computational effort is reduced as the number of non-steady state species to be carried in the simulations is decreased. For a restricted regime of interest, many intermediate species can be removed from the system without losing the solution accuracy. Such a reduced mechanism with QSSA, in the absence of transport phenomena, can be described as follows.
For non-QSS species: For QSS species: The QSSA is applicable to an intermediate species when its production rate, _ x jp , is nearly equal in magnitude to the destruction rate, _ x jd , resulting in a very small net change in concentration. Concentrations of QSS species are solved by the non-linear algebraic system described in Eq. (10) without any truncation and are identified using a relative error based on their production and destruction rates, whereas non-QSS species concentrations are resolved in the usual manner using Eq. (9). Computational time saving results from a further decrease in system size from N s;skeletal to N s;reduced . Furthermore, the stiffness of the system is also decreased as species with small life times are removed using a targeted search algorithm (TSA) of Tham et al. [40]. Numerical solutions of the zero-dimensional Perfectly-Stirred Reactor (PSR) with the 49-reaction skeletal mechanism in Table 2 were used as input to CARM.
This reduced mechanism is written as: and the rate expressions for the nine species involved in the above five steps are given in [36]. The supplementary material of this article provides a CHEMKIN compatible subroutine to calculate these reaction rates.
Flow configuration and boundary conditions
A sketch of the computational domain is given in Fig. 1. The computational domain is discretized in space using a structured Cartesian mesh. Each of the spatial derivatives in the conservation equations is evaluated at each mesh point using a 10th order central finite difference scheme for all interior points that are five or more points away from a non-periodic boundary. The order of this scheme is gradually reduced as the boundary is approached. Time advancement of the solution is carried out using a low-storage explicit five-stage fourth-order Runge-Kutta method [41]. The time-step size is 15 ns for all cases, determined by the acoustic CFL condition.
A freely propagating multi-component fuel premixed flame is simulated. At the inlet u in ¼ u þ u 0 where u ¼ ð u; 0; 0Þ is the constant mean inlet velocity and u 0 is the fluctuating turbulence velocity. These fluctuations are calculated from a pre-computed cold flow simulation using periodic boundary conditions and a Batchelor-Townsend energy spectrum. These pre-computed velocity fluctuations are then saved and added to mean flow at the inlet at every time step. A scanning plane runs through the saved velocity field and an interpolation scheme is used to update the inlet boundary.
Periodic boundary conditions are applied in the homogeneous (y; z) directions. Subsonic constant density reflecting inflow boundary conditions are applied at the inflow boundary, and partially-reflecting boundary conditions at the outflow boundary, based on characteristics analysis [42,43], later extended to the NSCBC formulation [44]. Transverse convective terms are also included [45,46], in order to correctly estimate the wave amplitude variations at both the inflow and outflow boundaries. This was found to be an essential component to ensure numerical stability, especially for the highest turbulence level considered in this study.
Mixture conditions
The scalar field is initialised using a steady-state laminar flame solution obtained using the PREMIX code of the CHEMKIN package [47,48]. The fuel mixture is at T r ¼ 800 K and 1 atm. It is composed of CO; H 2 ; H 2 O; CO 2 and CH 4 , and mole fraction percentages of these species are given in Table 3. This composition is typical of a BFG mixture [1], or a low hydrogen content syngas mixture [2][3][4]. At these conditions the laminar flame speed is s l ¼ 2:5 m=s and the flame thickness is d l ¼ 0:75 mm, where d l ¼ ðT p À T r Þ= maxðjdT=dxjÞ, and T p is the product temperature. Table 4 lists the turbulence parameters used for the DNS: u rms is the root mean square value of fluctuating velocity, with an integral length scale l int on the reactant side. The turbulence Reynolds number is Re ¼ u rms l int =m r , where m r is the kinematic viscosity of the reactant mixture. The Damköhler number is Da ¼ ðl int =u rms Þ= ðd=s l Þ, the Karlovitz number is Ka ¼ ðd=g k Þ 2 , and the Zeldovich thickness is defined as d ¼ m r =s l . Figure 2 shows the locations of these conditions in the combustion diagram [49]. In order to isolate the effect of the turbulence level, the integral length scale is kept approximately the same. According to the classical combustion diagram [49], these conditions correspond to the thin reaction zones regime. The simulations were run for 9.76 and 2.56 flame times, t fl ¼ d l =s l , corresponding to about 34 and 32 eddy turn-over times, t e ¼ l int =u rms , for cases A and B respectively.
Computational requirements
The computational domain is L x ¼ 14 mm and L y ¼ L z ¼ 7 mm with the corresponding number of gridpoints N x ¼ 768 and N y ¼ N z ¼ 384 for cases A and B. For the 5-step reduced mechanism the same physical domain size and resolution is used for case B. The same physical domain size but with a smaller numerical res- olution for the 5-step mechanism is used for case A having N x ¼ 432 and N y ¼ N z ¼ 216. The minimum reaction zone thickness among all species present namely CH 4 dictates the numerical resolution for case A, while the Kolmogorov length scale dictates the resolution for case B. It was observed that at least 20 points were required inside this minimum reaction zone thickness to ensure numerical stability for the skeletal mechanism, whereas for the 5-step reduced mechanism 10 grid points were sufficient to ensure this. The simulations were run on the UKs super-computer facility HECTOR. The computational details such as total memory requirements, number of cores used, output interval frequency, t out , total number of sampled data sets, N tot , and time step size, D t , are given in Table 5.
Post-processing method
The global flame behaviour is analysed through the calculation of the consumption speed defined as: where A is the total area in the homogeneous direction and the integral is taken over the volume V of the computational domain. The DNS data have been post-processed using the same spatial differencing schemes as used in the DNS. Averaging is done both in space (in the homogeneous y; z directions) and time, and by combining adjacent spatial points in order to increase the statistical accuracy. Five neighbouring points (symmetrically about i for interior points) are combined after ensuring that the statistics such as the x-wise averages and the pdfs of c are not unduly affected. The average value of a quantity V at point i in the x direction is calculated using where N p ¼ 5. The i À 3 indicates that for points well away from the boundaries the averaging is symmetric about point i, using the 5 neighbouring grid points. Due care is taken at the boundaries. For case A, time averaging is performed between 3.5 and 5.6 flame times, and for case B time averaging is performed between 1 and 2 flame times. During these two intervals the flames in both cases seem to be in a statistically stationary state at least as far as s c is concerned as shown in Fig. 3. Conditional averages are taken over the entire volume in bins of c, and time-averaged over the above time intervals. The flame surface except where stated otherwise is defined as the temperature iso-surface using c ¼ ðT À T r Þ=ðT p À T r Þ ¼ c à ¼ 0:32, corresponding to the location of maximum heat release in the unstrained planar laminar flame. This choice is justified for this study because the discrepancies in statistics between the two mechanisms are observed to be maximum for c close to c à . Furthermore, mass fraction based progress variable definitions were shown in [50] to vary substantially among different reactant species. The ith component of a unit normal vector to the flame surface is given by where v 2 ¼ ð@c=@x i Þð@c=@x i Þ, and the generalized fine-grained FSD is R ¼ v. The flame stretch U is given by [51] where a t is the tangential strain rate, K m is the surface curvature, and s d the displacement speed. The displacement speed is calculated for all points on the flame surface using where the heat release rate is _ x a . The flame surface quantities are normalised as follows: Probability density functions of displacement speed, curvature, tangential strain and stretch are extracted from the flame surface calculated using the samples collected over the entire sampling period as for the mean quantities. These quantities are analysed to address the third objective of this study, but not all of these quantities are shown in this paper.
Comparison of spatial correlations
Figs. 4 and 5 show the instantaneous heat release rate in x-y planes for cases A and B respectively. Slices are shown for four different z þ spanning the entire length of the physical domain in the z direction. The top row shows the results using the skeletal mecha- nism, and the bottom row is for the reduced mechanism. The heat release rate _ Q in both figures is normalised using the maximum heat release rate in the laminar case for the skeletal mechanism, in order to highlight differences with the reduced mechanism.
For case A the general shape of the flame front is captured well by the reduced mechanism for all z. In both cases heat release is observed to peak in regions with negative curvature (convex towards the products), indicating that the same physical behaviour is recovered. Some differences are observed with respect to the maximum heat release rate values obtained, with the reduced mechanism reaching slightly higher maximum heat release values. Furthermore, heat release regions behind the ''main'' flame like the one for the third location, although captured with the reduced mechanism, are found to burn faster. For case B, the differences between the two mechanisms are more pronounced: the skeletal mechanism shows a more patchy and distributed flame front, giving an overall thicker flame. The reduced mechanism on the other hand has a thinner flame front with a more continuous heat release zone. The difference though in the maximum heat release rate between the two mechanisms is reduced in comparison with case A, implying that turbulence is more dominant than chemical kinetics to the flame evolution.
The two-dimensional spatial cross-correlation function, r, can be used to better quantify the difference between the two mechanisms for a given x þ -y þ plane. This can be calculated for each z þ using where i; j and k are indices for x þ ; y þ and z þ directions respectively. The superscripts s and r stand for the skeletal and reduced mechanisms respectively. The over-bar indicates a two-dimensional average of a quantity V in the corresponding x þ -y þ plane. The crosscorrelation, rðz k Þ, is also time-averaged as discussed in the previous section and is calculated for the heat release rate and species mass fractions. The results are shown in Fig. 6 for cases A and B respectively. The correlation for the heat release rate is high for case A across all z, with the minimum falling only slightly below 0.8, a result consistent with the visual comparison seen in Fig. 4. For case B, the heat release correlation is not as strong and is found to drop to about 0.6 in the middle of the domain which is also in agreement with Fig. 4 In order to help elucidate the effect of the turbulence on the spatial correlations, Fig. 7 shows the correlation coefficients for the unstrained laminar flame. The correlation coefficients in the laminar case are all high contrary to the turbulent cases, reaching values larger than 0.9 both for the heat release and the species mass fractions. This result signifies the importance of using three-dimensional DNS data for validating the performance of a reduced mechanism, in contrast to laminar one-dimensional validations. Thus, turbulence reduces the spatial correlation coefficients through the influences of flame stretch and turbulencechemistry interactions.
The 5-step mechanism was developed using numerical solutions of the Perfectly Stirred Reactor (PSR) to ease the numerical implementation, as input to CARM [39]. A species a was identified as being in steady-state if: where _ x ap and _ x ad are the species production and destruction rates respectively, and the error e was taken to be less than 1%. Thus, are the poorer correlation coefficients observed in Fig. 6 in the turbulent case a result of the failure of the above QSS assumption related to the species rates? In order to establish the validity of this in the turbulent case, one can compute the maximum species rate-related QSSA error e a from the skeletal chemistry DNS data using: where the denominator is chosen based on the local maximum, max loc , between the species production and destruction rates, while the outer global maximum, max gl , is taken over the entire volume of the domain. Furthermore, in order to ensure that there are significant production or destruction rates for species a at the spatial point where the error is calculated, the denominator is subject to the following conditions. If the local production rate is larger than the local destruction rate i.e., ð _ x ap À _ x ad Þ P 0, then If the local destruction rate is higher i.e., for ð _ x ad À _ x ap Þ > 0, then Figure 8 shows the instantaneous e a as obtained from the DNS using Eqs. (18)-(20) for species 1-14 in the skeletal mechanism (see Table 1), for cases A and B. A similar trend was observed at dif-ferent time-steps. The laminar flame results are also shown as grey bars. It is important to remind ourselves at this point that species 8, 9, 10, 11, 13 and 14 i.e., OH; HO 2 ; HCO; O; CH 3 and CH 2 O (see Table 1) are put in steady-state while developing the reduced mechanism [36]. Figure 8 shows that the errors for the laminar flame are larger than the 1% limit set in the PSR computations. Despite this, the use of the reduced mechanism is justified as the correlations in Fig. 7 are high. For cases A and B the error is reduced in comparison to the laminar flame for HCO only, and increased for OH; HO 2 ; O; CH 3 and CH 2 O. Of these species, the steady-state assumptions introduced for OH; O; CH 3 and CH 2 O are expected to primarily affect the CH 4 correlations, since these species readily interact with CH 4 through reactions 41-49 of Table 2. The spatial mass fraction correlations of CH 4 however in Fig. 6 are as high as in the laminar case, implying that CH 4 is relatively insensitive to the QSSA for the aforementioned species.
In order to understand how the rate-related QSSA error is affected by the turbulence, the local error, i.e., without the global maximum operation in Eq. (18) can be analysed. Figures 9-11 show e OH ; e HO 2 and e O against c for case B. These species readily react with H 2 O and H 2 O 2 in the majority of the reactions listed in Table 2, and are thus expected to impart the most influence on the spatial correlations for these species. Also shown in grey continuous lines is the QSSA error conditionally averaged in bins of c and time-averaged as explained in Section 3.1. The grey dashed line shows the laminar flame result to elucidate the turbulence effect. In the laminar case, the QSSA error peaks at c ' 0:5 for OH, c ' 0:01 for HO 2 and c ' 0:3 for O. The local QSSA error for the turbulent case, on the other hand, peaks for all of these species at much lower c values. This is expected since the turbulence is stronger at low c values thus imparting most influence on species rates. As previously stated, the reduced mechanism was developed using PSR solutions as input to CARM, and as a result transport effects (convection and diffusion) are excluded. turbulence for low c values invalidates the QSSA through enhanced (turbulent) transport. On the burnt side the QSSA error is generally less since the turbulence is weaker. In particular, the conditional error is less than the laminar flame error for c P 0:2 for OH and O, and for c P 0:1 for HO 2 . Also, for all of these species the majority of points fall below the laminar flame result implying that QSSA holds in the turbulent cases also. It is shown in the following sections that the mass fraction of H 2 O is over-estimated in a mean sense at relatively larger c values. Since the conditional QSSA errors in Figs. 9-11 are larger than the laminar flame errors at relatively lower c values, the low correlation observed for H 2 O at high c values cannot be a result of the rate-related QSSA.
Comparison of mean profiles
In this section the mean profiles of important species mass fractions and net rates, heat release rate, and progress variable across the flame brush are examined, to test the performance of the 5step reduced mechanism. The results are shown in Figs. 12-16. As noted earlier, the quantities are normalised using the maximum of laminar flame value for the skeletal mechanism. Figure 12 shows that the mean mass fraction of H is slightly under-estimated by the reduced mechanism as one moves towards the products, and close examination of Fig. 14 reveals that this is owing to the slight under-estimation of the H production rate over the same region. Nevertheless, taking into account that H is a highly diffusive species, the overall agreement with the skeletal mechanism is good. The CO mean mass fraction which is the main fuel constituent is well captured and similar results were found for the species O 2 ; CO 2 and CH 4 . The mean mass fractions of H 2 O and H 2 O 2 are over-estimated for both turbulence levels and the same was observed for the mean mass fraction of H 2 , which explains the lower spatial correlations observed for these species in the previous section. Careful examination of Fig. 13 reveals that H 2 O and H 2 O 2 are over-estimated in the unstrained laminar case also. H 2 O in particular is over-estimated mainly in the product side while H 2 O 2 is over-estimated across the entire flame brush. Careful examination of Fig. 15 reveals that the over-estimation of the H 2 O mass fraction is owing to the over-estimation of its production rate in the same region. Similar arguments apply for H 2 O 2 also, the only difference being that its consumption rate is instead over-estimated in the product side, which helps to explain why the reduced mechanism's estimation for the H 2 O 2 mass fraction approaches that of the skeletal mechanism for large x þ .
As discussed in [36], these effects may be partly alleviated through the adjustment of the activation energy of one of the most dominant reactions, namely the chain-branching reaction O þ H 2 ¼ H þ OH. Increasing the activation energy of this reaction would reduce the production rates of the OH and H radicals. This in turn would have a direct effect on the production rates and mass fractions of H 2 O, H 2 , and H 2 O 2 , since they readily interact with OH and H radicals. However, at the same time the flame speed would also decrease, since less OH radicals would be available for CO oxidation through the main heat releasing reaction OH þ CO ¼ H þ CO 2 . Since there is no way to pre-estimate the increase factor for the activation energy, this has to be based on one-dimensional laminar flame data. Thus, as it was indicated in [36], an increase of the activation energy of the reaction O þ H 2 ¼ H þ OH by 27.5%, reproduces the correct flame speed despite the small over-estimation of the aforementioned species mass fractions. Furthermore, species such as CO 2 (not shown here) relating to atmospheric pollution are estimated with excellent accuracy. Also, despite the discrepancies observed for the mean mass fractions of H 2 O and H 2 O 2 , as one may see from Fig. 16 the reduced mechanism captures very well the mean progress variable and heat release rate variation across the entire flame brush. For both turbulence levels the reduced mechanism predicts the heat release rate to drop and spread out due to flame thickening and consistent with the laminar flame result, the maximum heat release rate occurs around c ¼ 0:32 also.
Comparison of flame front structure
The 5-step reduced mechanism was found in the previous section to give an overall good agreement with the majority of species Figs. 17-20 show conditional averages in bins of c for the species mass fractions and heat release rate for the highest turbulence level, i.e., case B. Similar results were obtained for case A and thus they are not shown here. The continuous black lines show the results for the skeletal mechanism and the dashed black lines show the results for the reduced mechanism. Also the laminar flame result is shown using grey lines to elucidate the effect of the turbulence. Figure 17 shows that the conditionally averaged mass fractions of H and CO are well captured by the reduced mechanism and a similar good agreement was also found for the conditional averages of O 2 ; CO 2 and CH 4 . These results imply that the distribution of the aforementioned species over the temperature field calculated with the reduced mechanism is similar to that using the skeletal mechanism. Figure 18 shows that the conditional average of the H 2 O mass fraction is estimated equally well as the H mass fraction. Nevertheless, for high c which are expected to occur for large x, the conditionally averaged mass fraction of H 2 O is slightly over-estimated. This happens both for the laminar and the turbulent cases which explains the associated over-estimation of its mean spatial value for large x. Careful examination of the reactions involving H 2 O shows that one of the most important reactions affecting H 2 O concentration is the reaction OH þ H 2 ¼ H þ H 2 O. Furthermore, this reaction was found to become more important as one moves towards the products side and actually produces H 2 O for large c [36,50,52,53]. Fig. 19 shows that the conditionally averaged mass fraction of H 2 is also over-estimated, and for all c even in the laminar case.
Both the H 2 O and the H 2 over-estimation for the laminar flame are interlinked: the reaction O þ H 2 ¼ H þ OH affects the H 2 concentration significantly and throughout the flame brush. The QSSA for O and OH, causes higher reverse rates through this step, reducing the H 2 consumption, and increasing its concentration throughout. At the same time, the QSSA combined with the higher levels of H 2 enhances the forward rate of the reaction OH þ H 2 ¼ H þ H 2 O at large c, producing more H 2 O, and causing an over-estimation in its concentration in this region. However, as explained in Section 3.2, despite this fact the correlations in the laminar flame remain high. Furthermore, the H 2 O over-estimation occurs at large c and as shown in Section 3.2 the QSSA (for the species expected to affect the most the H 2 O concentration) in the turbulent case, holds on average better in this region than in the laminar flame. Hence the lower spatial correlations for H 2 O observed in Fig. 6 in the turbulent case, are owing primarily to the different scalar-turbulence interaction rather than failure of the QSSA. Fig. 18 also shows the conditional average of H 2 O 2 , which in comparison to H 2 O peaks at lower c values. This explains the much lower spatial correlations observed for H 2 O 2 in Figs. 6 and 7, since this species exhibits strong mass fraction gradients in regions of intense turbulence. As noted previously while discussing Figs. 9-11, the QSSA errors peak in the regions of low c values. One would therefore expect, perhaps, the H 2 O 2 over-estimation seen in Fig. 18 to be associated with failure of the QSSA. However, when compared to the laminar flame result, the over-estimation using the reduced mechanism, for the turbulent case at low c is approximately of the same magnitude, implying that this is not resulting from the QSSA. Another important observation which may help to justify the above point is as follows: for c P 0:2, the difference with the skeletal mechanism result is small for the laminar case but it is large for the turbulent case. Figures 9-11 show that in the same range, i.e., for c P 0:2, the conditional QSSA error for OH; HO 2 and O is actually smaller than the laminar case despite the increased discrepancies in Fig. 18. Thus, the over-estimation of the H 2 O 2 mass fraction for the reduced mechanism in the turbulent case cannot be correlated to the failure of the QSSA, at least for the conditions investigated in this study. Fig. 20 shows the conditionally averaged heat release rate. The values obtained using the 5-step reduced mechanism agree well those obtained using skeletal mechanism for small c values. The reduced mechanism slightly over-estimates the conditionally averaged heat release rate for large c values. This is consistent with the results shown in Fig. 5. Since most of the heat release comes from the enthalpy of formation of H 2 O whose mass fraction as previously discussed is over-estimated for large c, this causes the associated slight over-estimation of the heat release rate in the same region of c space. Nevertheless, it is shown in the following section that the flame surface statistics obtained with the skeletal mechanism are recovered using the reduced mechanism.
Comparison of flame surface statistics
Figs. 21-23 show pdfs of the displacement speed, flame stretch and generalized FSD. These quantities are obtained on c ¼ 0:32 isosurface where the heat release rate peaks in the laminar flame. The pdfs of curvature and tangential strain rate using the reduced mechanism were indistinguishable with the skeletal mechanism results and thus they are not shown here. As one may see from Fig. 21, the reduced mechanism produces almost identical displacement speed pdfs with the skeletal mechanism both for the low and high turbulence levels. This implies that the flame structure computed using the reduced mechanism has the same dependency on strain and curvature effects as with that computed using the skeletal mechanism. Thus, the flame stretch is also almost identical for the reduced and skeletal mechanisms as one can observe in the flame stretch pdf shown in Fig. 22. The pdf of generalized fine-grained FSD shown in Fig. 23 suggests some differences. This quantity has a higher mean value for both turbulence levels when the reduced mechanism is used. Since R gen ¼ v, it is a measure of the flame front thickness and thus the reduced mechanism has a slightly thinner flame front. This implies that flame statistics obtained using the reduced mechanism is less sensitive to the turbulence level as observed in previous studies [32,33,54]. The quantitative difference, however, is found to be less than 12%. Fig. 24 shows scatter plots for the heat release rate and displacement speed against curvature for case B. Similar results were observed for case A. The black dots are for the skeletal mechanism and the grey dots for the reduced mechanism. The expected physical behaviour, i.e., heat release rate and displacement speed correlate strongly with curvature and their peak values occurring in negatively curved regions, is recovered by the reduced mechanism for both turbulence levels. The results for both mechanisms are found to be nearly identical in regions of positive curvature. For negatively curved regions, the reduced mechanism slightly overestimates _ Q þ and s þ d , which is consistent with the visual pictures shown in Figs. 4 and 5. Fig. 25 shows the corresponding scatter plots for correlation with tangential strain rate. Although some correlation is observed for the heat release rate, with generally positively strained regions showing a higher heat release rate, the influence of curvature is much more dominant. The correlation between s þ d and a þ t is not as strong as for the heat release rate. The displacement speed is observed to reach a maximum for a þ t > 0. For large strain rates, the displacement speed drops down to the laminar flame speed and shows the same dependency as for negatively strained regions. All of these effects are well captured by the reduced mechanism for both turbulence levels indicating that it is an acceptable substitute for the skeletal mechanism even for turbulent combustion.
Further consideration
From the above discussion in Sections 3.2-3.5, it is clear that the 5-step reduced mechanism can reproduce the flame statistics well. However, one can argue that the quantities studied in these sections are averaged and thus the averaging process can mask the effects of errors associated to QSSA, which form the premises for developing the reduced mechanism. There are two ways to assess this point in a broad perspective, one way is to have two different reduced mechanisms with significantly different reduction errors and a second method is to carefully study the instantaneous quantities, such as the flame front locations and heat release rate obtained using the reduced and skeletal mechanisms. Any chemistry reduction method will aim to minimise the reduction error which is related to a careful selection of species obeying QSSA within a carefully selected tolerance limit so that the reduced set can reproduce flame observables such as laminar burning velocity, flame thickness etc., and ignition delay times, which are either measured or computed using a skeletal/comprehensive mechanism. As noted earlier in Section 2.2, the 5-step reduced mechanism considered here was shown to reproduce these observables for a wide range of thermochemical conditions in [36]. It is also important to recognise that the first method will be helpful to answer a question such as, what is the sensitivity of the reduction error on the turbulent flame observable when a reduced chemistry is employed? Since the focus of this study is to assess the performance of the reduced 5-steps for turbulent flames invariably involving flame stretch, corrugations and contortions, the second method is used here for further assessment.
The flame front position at time t, evolved from an instant t 1 , is given by Thus, the temporal evolution of flame front position is controlled by a fine balance among the mean convection, turbulent fluctuating velocity, and the self propagation of the flame front denoted by the displacement speed s d . The former two quantities are governed predominantly by fluid dynamics, which is kept to be the same for both skeletal and reduced mechanism cases. The displacement speed is dictated by chemical reaction rate and the molecular diffusive fluxes as noted in Eq. (15), which are strongly coupled in premixed flames. Thus, influences of chemical kinetics, modelled using skeletal or reduced mechanism, on the propagation of the flame front, its instantaneous location and interaction with turbu-lence are of leading order irrespective of u rms =s l values. Thus, any difference one may observe in x f between flames simulated using the reduced and skeletal mechanisms must originate from chemical reaction rate given by chemical kinetics and this difference is also affected by mutual interaction of turbulence and chemistry. The results in Figs. 4 and 5 show that the flame front position and, its corrugations and contortions resulting from turbulence-flame interaction are not very different when the 5-step reduced mechanism is used instead of the skeletal mechanism. To gain a closer understanding of this behaviour, the pdfs of s þ d and _ x þ c , where _ x þ c is the progress variable reaction rate, using the samples collected along the flame front (c ¼ 0:32 iso-surface) are shown in Fig. 26. The results are shown for two different times, t þ t=t fl ¼ 4 and 4.5 for case A and 1.6 and 1.8 for case B. These times, normalised by the flame time, can easily be translated in terms of initial eddy turnover time, t e , since t þ ¼t Da, wheret ¼ t=t e . Thus, the times shown in Fig. 26 correspond to a difference of about 0:1t e . These times, sufficiently separated but close enough, are chosen carefully to avoid any bias from chemistry reduction error or insufficient turbulence-chemistry interaction. The later point is ensured by selectinĝ t > 1, which is widely accepted to be sufficient to realise a fully developed interaction between turbulence and chemistry. It is clear in Fig. 26 that the reduced chemistry represents turbulence-chemistry interaction and flame stretch effects well compared to the skeletal mechanism. It is worth to note that the values of u rms =s l are of Oð1Þ in case A and Oð10Þ in case B. Thus, it is apparent that the 5-step reduced chemistry considered here can represent both instantaneous flame front features and flame related statistics well over a good range of turbulence level.
Despite these insights and good performances observed, one must recognise that the results reported in this study are specific to the turbulence and mixture conditions tested here. The reduced mechanism is observed to perform relatively better for the lower turbulence level considered in this study. This implies that this mechanism performs better for larger Da number flames. Its performance for the lower Da number flame, case B, is somewhat reduced and more tests with Da < 1 flames are required to make further assessment. Since the QSS assumptions hold better in the PSR limit which aided the reduced mechanism development, it is expected that there could be a regime between the PSR and flamelet combustion limits where this mechanism may have some difficulties. Thus, further DNS at higher turbulence levels is required to establish an upper limit for the applicability of this reduced mechanism. Also, additional DNS in non-premixed combustion with extinction and re-ignition, would be useful in evaluating the performance of this reduced mechanism over a broader range of conditions.
Conclusions
A direct numerical simulation was used to examine the performance of a 5-step reduced mechanism for combustion of multicomponent fuel mixtures. The DNS database includes two sets of simulations at low and high turbulence conditions, u rms =s l $ Oð1Þ and Oð10Þ, performed using both skeletal and reduced mechanisms developed in [36]. In this study, premixed combustion of a multicomponent fuel mixture consisting of CO; H 2 ; H 2 O; CH 4 and CO 2 is simulated. Three dimensional homogeneous turbulence enters through one end of the computational domain and leaves from the other end after interacting with a premixed flame. The performance of the reduced mechanism is evaluated by (1) comparing cross correlations of instantaneous heat release rate and species mass fractions obtained using both the skeletal and reduced mechanisms, (2) using physical space analysis of average mass fractions and reaction rates of various species, (3) analysing flame structures in progress variable space, (4) studying scatter plots and pdfs of flame surface variables and (5) comparing normalised instantaneous displacement speed and reaction rate distributions on the flame surface.
The mean mass fractions and reaction rates of H; O 2 ; CO, CO 2 , and CH 4 computed using the reduced mechanism are found to be in agreement with those from the skeletal mechanism. A similar behaviour is observed for the correlation coefficients, see Eq. (16), for these species. The reduced mechanism slightly over-estimates the average mass fractions of H 2 O; H 2 , and H 2 O 2 inside the flame brush, particularly in regions where the heat release rate peaks. The spatial cross correlation for these species is observed to be weak. This is shown to be related to the over-estimation of their reaction rates inside the flame brush. The validity of the raterelated QSS assumptions for the steady-state species in turbulent flames is also examined. It is found that the rate-related QSSA for HCO holds better for both turbulent cases than in the unstrained laminar flame. The rate-related QSS errors on the other hand, for OH; HO 2 ; O; CH 3 and CH 2 O, are found not to hold as well as in the laminar case and this error peaks mainly for low c values where turbulence is stronger. Despite this no direct link between the poorer spatial correlations for H 2 O; H 2 , and H 2 O 2 , and the increased rate-related QSS errors of the steady-state species is observed. This implies that the discrepancies in the instantaneous mass fractions of H 2 O, H 2 , and H 2 O 2 are rather a manifestation of the differences in turbulence-chemistry interaction for the reduced mechanism cases resulting from turbulent diffusion and convection induced by fluctuating heat release rate.
The instantaneous contours of heat release rate are found to be very similar for the low turbulence case with high correlation coefficients. For the high turbulence case the heat release correlation is reduced and significant differences are observed in the instantaneous heat release rate contours. Nevertheless, the spatial variation of mean heat release rate and progress variable for the reduced mechanism agrees well with the skeletal mechanism results. Furthermore, the pdfs of displacement speed, curvature, tangential strain, stretch and generalized FSD are found to be nearly identical. The scatter plots of heat release rate and displacement speed against curvature and tangential strain rate are observed to be similar for the reduced and skeletal mechanism. Also, the difference in distributions of instantaneous displacement speed and heat release rate on the flame surface is observed to be small.
These results suggest that the 5-step reduced mechanism developed in [36], is a reasonable substitute for the skeletal mechanism even for the turbulent flames, at least for the conditions considered in this study. These results also suggest that the performance of the reduced mechanism is improved for the low turbulence level compared to the high turbulence level considered here. Since premixed combustion is considered for this study, direct simulations of other combustion modes such as non-premixed combustion under a wide range of flow conditions causing extinction and re-ignition events need to be performed to assess the range of applicability of this reduced mechanism. This is a subject for further study. Use of the reduced mechanism is shown to reduce the computational burdens considerably. Furthermore, the ability of the CARM [39] software to directly produce source codes for the species reaction rates makes it seamless and straightforward to include the reduced mechanism in direct numerical or large eddy simulations. Thus, a wide range of turbulence and thermo-chemical conditions can be covered for DNS of multi-component fuel combustion in archetypical configurations, which are akin to practical devices, by using the 5-step reduced mechanism.
HECToR, the UK's national high-performance computing service, which is provided by UoE HPCx Ltd at the University of Edinburgh, Cray Inc and NAG Ltd, and funded by the Office of Science and Technology through EPSRC's High End Computing Programme.
A copy of CHEMKIN compatible reaction rate subroutine from CARM for the reduced mechanism can be obtained by writing to<EMAIL_ADDRESS>or<EMAIL_ADDRESS>Also available as supplementary material to this article. | 12,027.4 | 2014-12-01T00:00:00.000 | [
"Engineering",
"Chemistry"
] |
Experimental Study on Mechanical Strength of Diesel- Contaminated Red Clay Solidified with Lime and Fly Ash
Diesel-polluted soil is unstable and easy to migrate with environmental changes and causes secondary pollution. In this paper, 0# diesel is used as the pollutant, and lime fly ash is selected as the solidifying material. This paper selects four curing ages of 7D, 14D, 21D, and 28D and four pollution concentrations of 0%, 5%, 10%, and 15%. 20%, 25%, 30%, and 35% four moisture content variables were used to conduct an unconfined compression test, direct shear test, and scanning electron microscope test on diesel-contaminated red clay. The results show that the curing age significantly affects the curing effect, and the curing age of 21D is the optimal age. The mechanical properties of the cured soil were the best at the optimum age and when the pollution concentration was 5%. The mechanical properties of the solidified soil with a moisture content of 30% are the best at the optimal age and the same pollution concentration. Additionally, the scanning electron microscope data indicate that when the pollution concentration increases, the cement created by the interaction of lime, fly ash, and pozzolan increasingly forms. The “oil film” generated by diesel oil seeping into the soil is bound and unable to fill the soil’s pores, hence reducing the soil’s strength.
Introduction
With the growth of China's economy, there are more and more engineering projects. Du et al. [1] pointed out that the annual oil production in China has exceeded 1:8 × 10 11 kg, and the oilfield area covers an area of about 3:2 × 10 5 km 2 . Thus, as the industry develops and demand for diesel fuel for vehicle usage increases, diesel fuel consumption and transportation often result in leakage, as shown in the 2013 Beihai diesel tank leaking event in Guangxi and the 2014 Guangxi Nanning diesel tank rollover disaster. A significant quantity of diesel oil seeps into the red clay roadbed and foundation, causing variable degrees of soil pollution. Dieselcontaminated soil is inherently unstable and rapidly migrates in response to environmental changes, resulting in secondary contamination. As a result, efficient soil remediation of diesel contamination is a pressing issue. The remediation methods of diesel-contaminated soil are mainly divided into physical methods (physical separation method, steam extraction method, thermal decomposition, electrolysis, etc.), chemical methods (chemical reduction, chemical leaching, soil performance improvement, and remediation technology, etc.), and biological methods (bioaugmentation method, biological culture method, bacterial injection method, etc.) [2]. Diesel has complex components and various structural properties, which is extremely easy to cause secondary pollution. A single repair method often has an insignificant repair effect and high cost. The research on the restoration of heavy metal ioncontaminated soil has formed a system, but it is still in the exploratory stage to restore oil-contaminated soil and its secondary utilization.
He et al. [3] pointed out that the compressive strength of oily soil treated with lime and fly ash first increased and then decreased with the number of dry-wetting cycles. Shah et al. [4] showed that the geotechnical properties of petroleumcontaminated soil were improved after treating petroleum-contaminated soil with different stabilizers such as lime, fly ash, and cement alone or as admixtures. Stabilizers improve soil geotechnical properties through cation exchange, agglomeration, and pozzolanic action. Adding 10% lime, 5% fly ash, and 5% cement to the contaminated soil works best. Kogbara and Al-Tabbaa [5] mixed one part of slaked lime with four parts of slag, one part of cement, and nine parts of slag cement in diesel-contaminated sandy soil. Theresults show that cement and lime-activated GGBS can effectively reduce theleaching of pollutants in polluted soil. Al-Rawas et al. [6] used cement and cement bypass dust as stabilizers to effectively improve the properties of oilcontaminated soils and provide a safe and effective solution for practical construction applications. Portelinha et al. [7] pointed out that diesel pollution affects soil water holding capacity and unsaturated hydraulic conductivity, forming a curved shape similar to clay materials. Bian et al. [8] evaluated the shear characteristics of oil-contaminated soil by resistivity and pointed out that under constant compaction and saturation, the shear strength of oil-contaminated soil decreased with the increase in resistivity.
Zhou et al. [9] used a direct shear test, variable head penetration test, and compression test to point out that with the increase in diesel content, the cohesion of oily soil first increased and then decreased, and the internal friction angle first changed slightly and then increased. The compressibility and permeability of oil-contaminated soil first decreased and then increased with the increase in diesel content. Zheng et al. [10] found through the unconfined compressive strength test that it decreases with the increase in oil content, but when the water content is low, the soil strength increases instead; Chen et al. [11] found through the indoor quick shear test that the bonding force between sand particles is small, the influence of nondielectric oil on the bonding force between soil particles is much smaller than that of water, and the effect of crude oil and diesel oil on the shear strength of unsaturated sand is not significant; Li [12] pointed out that the permeability coefficient of diesel-contaminated loam is about 97% lower than that of clean loam when the oil content is 8%; He et al. [13] simulated the dry-wet cycle indoors and confirmed the immobilization of oil-contaminated soil by lime fly ash with the help of an unconfined compression test. It has high compressive strength; Zha et al. [14] and other studies pointed out that the use of fly ash and a small amount of lime mixture can effectively improve the engineering properties of expansive soil, reduce the expansion and shrinkage of expansive soil, and improve the strength of expansive soil. Li et al. [15] have confirmed that lime fly ash can effectively improve the mechanical properties of oil-contaminated saline soil. Han et al. [16] point out that under a CNS boundary condition, the shear stress for single-joint and double-joint specimens increases slowly with the increase in shear displacement. Song et al. [17] propose that calcium oxide enhances the strength of Zn 2+ -contaminated soil because calcium oxide will react with SiO 2 , Al 2 O 3 , and Fe 2 O 3 in the red clay to produce C-S C-A-H. Song et al. [18] compared the red clay before and after pollution; it is found that under the same axial strain, the damage variable increases with the increase in confining pressure. Song et al. [19] pointed out that with the increase in the number of wetting and drying cycles, the soil particles' connection is closed, the soil porosity decreases, and the strength increases.
In environmental and geotechnical engineering, the solidification treatment of diesel-contaminated red clay is a study area that cannot be overlooked. Therefore, in this paper, Guilin red clay is used as the test material, lime with a strong curing effect and fly ash with strong adsorption are selected, and through the unconfined compression test, direct shear test, and scanning electron microscope test, the effect of different pollution concentrations, different mechanical properties, and microstructure of cosolidified diesel-contaminated red clay with water content and different curing ages was investigated.
Experimental Materials
2.1.1. Diesel. The pollutant used in this test is 0 # diesel, taken from the Sinopec gas station in Guilin City. The color of the 0 # diesel is light yellow with light green luster, slightly soluble in water, with good fluidity but greater viscosity than water and substantial volatility; it has a special pungent odor. The relative density of the diesel used is 0.857. The viscosity coefficient is 3.56-4.05 mPa·s, and the freezing point is -25.82°C.
Red
Clay. The soil used in the test was taken from a foundation pit in Lingui District, Guilin City, which belongs to the subregion of Guofeng Plain in the structural erosion landform area, with many low mountains and hills. The soil layer structure can generally be divided into three layers. The first layer is plain fill (Q 4 ml ), mainly composed of cohesive soil, crushed stone, and schist, with a loose structure. The layer thickness is 0.40-2.00 m, and the average thickness is 1.01 m. The second layer is plastic secondary red clay (Q 3 al+pl ), which is uniform in soil quality, smooth in section, and slightly glossy. The layer thickness is 0.20-5.70 m, and the average thickness is 1.65 m. The third layer is pebble gravel soil (Q 3 al+pl ), mainly gray-brown pebble gravel, the layer thickness is 0.50-2.50 m, and the average thickness is 0.85 m. The red clay used in this experiment was taken from the second layer at a depth of 3-5 m. The red clay was retrieved, air-dried, crushed, passed through a 2 mm sieve, stored in a moistureproof plastic bucket, sealed for use, and subjected to geotechnical tests. The properties of its basic physical properties and parameters are shown in Table 1.
Fly
Ash. The fly ash was purchased from Gongyi Longze Water Purification Material Co., Ltd. It is mainly composed of coal ash and slag, with strong adsorption properties, the specific gravity is between 1.95 and 2.36, and the dry density is 450 kg/m 3 to 700 kg/m 3 . The specific surface area is between 220 kg/m 3 and 588 kg/m 3 , and the main components are SiO 2 , Al 2 O 3 , and Fe 2 O 3 . Fly ash can undergo recrystallization, ion adsorption and exchange, carbonation, and pozzolanic reaction with lime, forming a "reticular" and "rod-like" structure in the soil, which 2 Geofluids significantly improves the mechanical properties of the solidified soil.
Lime.
Lime is a common solidified material purchased from Xilong Science Co., Ltd. as bottled quicklime, with white or gray lumps, granules, or powder. The calcium oxide content is greater than or equal to 98%. CaO reacts with water to form Ca(OH) 2 , of which OHcan decompose Si-O and Al-O bonds in the glass body of fly ash, which can fully stimulate the activity of fly ash. At the same time, it provides Ca 2+ for the formation of hydraulic gels in the hydration reaction.
Test Method.
The naturally air-dried red clay was crushed and passed through a 2 mm sieve. Calculate the mass of air-dried soil, water, diesel oil, and lime fly ash required to prepare the soil sample (diesel and solidified material are mixed according to the mass percentage of the dry soil). The procedure for the test is shown below. The first step is to equally distribute the weighted diesel oil into the soil, cover it with plastic wrap, and allow it to sit for 12 hours so that the oil molecules may infiltrate the soil particles. The air-dried soil preparation sample is sprayed with distilled water in the second stage. In the third step, the contaminated soil and curing agent were combined and stirred equally and sealed for 24 hours in the third phase, according to the prescribed dose of the curing agent. During this time, the soil samples were flipped over often to ensure that all of the components were well mixed. According to the "Geotechnical Test Method Standards" (GB/ T50123-2019) [20], a standard vertebral triaxial sample with a diameter of 39.1 mm and a height of 80 mm and a reshaped ring knife sample with a diameter of 61.8 mm and a height of 20 mm were prepared by the static pressure method for unconfined compressive strength and direct shear tests. The unconfined compressive strength test and direct shear test were carried out after curing for 7D, 14D, 21D, and 28D under standard curing conditions (curing temperature is 20 ± 2°C, humidity ≥ 95%); dry density of the sample is 1.40 g/cm 3 .
Since diesel oil is a nonaqueous liquid, it will not dissolve with the pore water in the soil after infiltrating it, forming a "diesel pore liquid." Therefore, fly ash with strong adsorption and alkalinity is selected, which can strongly combine with oily substances through intermolecular attraction and chemical chain, and is irreversible. Lime is a common solidified material for treating polluted soil, and lime-solidified soil has high shear strength and compressive strength. The combination of lime and fly ash is used to solidify dieselpolluted soil. The two have a pozzolanic reaction and solidification reaction, forming a cementitious compound to fill in the soil particles, which has a good solidification effect and improves the soil's mechanical strength. After many attempts, the moisture content was set to 20%, 25%, 30%, and 35%. The oil content was set to 0%, 5%, 10%, and 15%. The curing age is set as 7D, 14D, 21D, and 28D. The curing material is 20% fly ash+12% lime.
Characteristics of Unconfined Compressive
Strength of Cured Diesel-Contaminated Red Clay [20,21]. According to the geotechnical test method standard, the maximum axial stress is taken as the unconfined compressive strength without side limitation. If the maximum axial stress is not apparent, the corresponding stress of the axial strain of 15% is taken as the unconfined compressive strength. From Table 1, it can be seen that the optimal moisture content of the soil used in this test is 30%. Therefore, the unconfined compression strength selected the moisture content of 30% and the pollution concentration of 0%, 5%, 10%, and 15% and explored the unconfined compression strength of the same-dosage curing agent at the curing age of 7D, 14D, 21D, and 28D. The unconfined compression strength of different curing ages obtained by the test is shown in Figure 1. It can be seen from Figure 1 that the unconfined compressive strength of lime-fly ash-solidified diesel-polluted soil increases first and then decreases with the increase in curing age, and the strength reaches its peak when the curing age is 21D. Compared with the unconfined compressive strength with a curing age of 7D and a pollution concentration of 10%, the unconfined compressive strength increased by 159%, 166%, 134%, and 144%, respectively.
With the increase in age, the failure strain corresponding to the ultimate strength of the solidified diesel-contaminated red clay increased first and then decreased (2.48%, 2.61%, 2.31%, and 1.64%, respectively). Analysis of the reasons shows that the pozzolanic reaction between lime and fly ash has fully occurred, and this reaction is a slowdeveloping and time-consuming process.
With the increase in curing age, the active silica in fly ash and the Ca 2+ ionized from lime forms C-S-C and C-A-H and condenses on the surface of soil particles, increasing the effective contact area between soil particles, thereby improving soil strength. At the same time, after the Ca(OH) 2 lime hydration, the SiO 2 forms sol, and Al 2 O 3 on the surface of the fly ash glass body dissolves slowly and gradually reacts with Ca(OH) 2 to form calcium silicate, and calcium aluminosilicate and other compounds are filled in the soil The 3 Geofluids connection strength between the particles and the soil particles is enhanced. The unconfined compressive strength of the solidified soil is improved.
Influence of Pollution Concentration on the Unlimited Compressive Strength of Cured Diesel-Contaminated Soil.
To explore the influence of diesel pollution concentration on the unconditioned compressive strength of cured soil, the unconfined compressive strength of cured soil under the optimal conditions of 21D curing age and moisture content of 30% was selected to explore the unconventional compressive strength of cured soil under different diesel pollution concentrations. The unconstrained compressive strength of the diesel fuel under different concentrations obtained by the test is shown in Figure 2.
It can be seen from Figure 2 that as the pollution concentration increased from 5% to 15%, the unconventional compressive strength of the cured soil decreased with the increase in the pollution concentration, decreasing by 15.5% and 19.4%, respectively. The stress-strain curve of the cured soil under different concentrations is mainly divided into four stages: the elastic stage-the stress-strain curve of this section is linear, and the cured soil sample is inelastic deformation; the yield stage-the stress-strain curve of this section is close to a straight line, and the solidified soil sample begins to undergo plastic deformation; the strengthening stage-the stress of this section slowly increases until the peak, and the solidified soil sample will be destroyed; and the failure stage-the stressstrain curve of this section shows a downward trend until the cured soil sample is destroyed.
Geofluids
The main reason is that after the diesel molecules penetrate the soil, a layer of "-film" will be formed on the surface of the soil oil as the pollution concentration increases, and the "oil-film" gradually increases and thickens. Because diesel is hydrophobic, the "oil-film" will hinder the infiltration of water, resulting in gelatinous particles generated by the reaction of the cured material unable to fill the soil particles, so that the unconfined compressive strength decreases with the increase in pollution concentration.
Influence of Moisture Content on the Unlimited Compressive Strength of Cured Diesel-Polluted Soil.
To explore the influence of moisture content on the unconfined compressive strength of cured soil, the unconditioned compressive strength of cured soil under different moisture content was selected with a curing age of 21D, a pollution concentration of 5%, and a moisture content of 20%, 25%, 30%, and 35%. The unconstrained compressive strength under different diesel pollution concentrations obtained by the test is shown in Figure 3.
It can be seen from Figure 3 that the unconfined compressive strength of the solidified soil increases and then decreases with the increase in water content, and with the increase in moisture content, the destructive strain of the cured soil shows a gradual increase trend, and the failure strain of 20%, 25%, 30%, and 35% is 1.38%, 1.88%, 2.47%, and 3.07%, respectively. The primary explanation for this might be that when moisture content rises, the percentage of water molecules and oil molecules in the soil increases, whereas the fraction of water molecules and cementitious gels formed by cured materials increases.
Most of the components in the pores of the cured soil are water molecules, which reduce the curing effect, so the compressive strength of the soil is reduced. At the same time, the diesel infiltration into the soil will produce a cohesive effect on the soil particles. Because the quantity of water molecules between soil particles increasingly exceeds the number of oil molecules, the bonding effect steadily diminishes. The unconfined compressive strength of the cured soil decreases with the increase in moisture content.
Shear Strength Characteristics of Solidified
Diesel-Polluted Soil Figure 4. It can be seen from Figure 4 and Table 2 that under the condition of the same pollution concentration, with the increase in the curing age, the shear strength of the cured soil under the same vertical load is gradually increased compared with that of the cured soil with a curing age of 7D and a pollution concentration of 10%, the vertical load is 100 kPa, and the shear strength is increased by 318%; the vertical load is 200 kPa, and the shear strength is increased by 228%; the vertical load is 300 kPa, and the shear strength is increased by 190%; the vertical load is 400 kPa, and the shear strength is increased by 173% at the vertical load of 400 kPa.
Compared with the cured soil with a curing age of 7D and a pollution concentration of 10%, the cohesion of cured soil is increased by 507%, and the internal friction angle is increased by 54%. Several factors contributed to this, including low vertical loads and short curing ages. Lime hydration reaction-produced Ca(OH) 2 could not ultimately promote 6 Geofluids fly ash activity, and calcium silicate and other gels created by volcanic-ash interaction were less active. Some of the products are blocked by the "oil-film" formed on the soil's surface by diesel infiltration into the soil and cannot be filled into the soil particles, and there are pores between the soil particles. The shear strength of the soil is reduced. With the increase in the curing age, the curing reaction is sufficient, and a large number of gels such as calcium silicate are generated, covering the "oil-film." When the vertical load increases, the product will be pressed into the "oil-film" to fill the soil particles, so that the pores between the soil particles become less, thereby improving the shear strength of the cured soil.
Influence of Pollution Concentration on the Shear
Strength of Cured Diesel-Contaminated Soil. The shear test of the cured soil sample with a curing age of 21D was selected to obtain the shear strength of cured dieselcontaminated red clay under different pollution concentrations, as shown in Figure 5.
Geofluids
It can be seen from Figure 5 that at the same curing age and the same moisture content, the shear strength of the contaminated soil with a combined curing diesel pollution concentration of 5% of the lime fly ash has the best shear resistance compared with that of the cured soil with a pollution concentration of 15%; the shear strength increases by 40% at the vertical load of 100 kPa; the shear strength increases by 44% under the vertical load of 200 kPa; the shear strength increases by 32% under the vertical load of 300 kPa; and the shear strength increases by 9% at the vertical load of 400 kPa. Compared with the cured soil cohesion of 10% at the same curing age, the cohesion of the cured soil decreased by 34%, and the internal friction angle decreased by 2.5%.
On the other hand, the contamination concentration was negatively correlated with the curing effect. After diesel infiltration into the soil, analysis of the reasons will be covered with a layer of "oil-film" on the surface of the soil. When the pollution concentration is low, the "oil-film" cannot be covered entirely on the soil. The "oil-film" adsorbed by the curing reaction is less generated by the gel condensation; most of the products fill into the pores between the soil particles; with the increase in pollution concentration, the "oil-film" coverage area increases; most of the products are adsorbed and cannot be filled into the soil particles to provide strength for the solidified soil. Therefore, as the pollution concentration increases, the shear strength of the cured soil decreases.
Effect of Moisture Content on Shear Strength of Solidified
Diesel-Polluted Soil. The shear test of the cured soil sample with a curing age of 21D was selected to obtain the shear strength of cured diesel-contaminated red clay at different moisture content rates.
It can be seen from Figure 6 that under the same curing age and the same pollution concentration, the shear resistance of contaminated soil with a combined curing diesel Furthermore, the moisture content is negatively correlated with the curing effect. Analyze the reasons; when the moisture content is low, the soil particles are filled with glue gel generated by water molecules and curing reaction, the moisture in the soil can provide strength, and the shear strength of the cured soil is provided by the combination of water molecules and the cement condensate generated by the curing reaction; with the increase in moisture content, the water in the soil is excessive, not providing strength; the shear strength of the cured soil is mainly provided by the gel condensate generated by the curing reaction, and the pore water mainly plays a lubricating role. Therefore, the shear strength of cured soil at low moisture content is higher than that of cured soil at high moisture content. At the same time, the higher moisture content will reduce the adsorption of the cement, resulting in the cohesion of the cured soil decreasing with the increase in the moisture content.
Influence of Pollution Concentration on the Microscopic Characteristics of Solidified Diesel-Polluted Soil
A healed soil sample with optimum curing age of 21D and moisture content of 30% was chosen for scanning electron microscopy tests based on comparisons of the microscopic properties under various pollution levels. The scanning electron microscopy (SEM) test uses Hitachi Corporation's S-4800 field emission scanning electron microscope. To obtain more explicit microscopic morphology, the central part of the soil sample is gold-injected to increase electrical conductivity. Figure 7 shows the microscopic morphology of cured soils at different concentrations at 1000 times and 5000 times magnification.
Comparing the microscopic morphological photos of lime fly ash combined with curing contaminated diesel soil under different pollution concentrations, when the pollution concentration is 0%, the curing reaction generates a large number of filamentous, needle-like, and flake gel crystals. With the adsorption effect of fly ash, the fly ash is closely connected with soil particles, mainly through point-topoint or point-to-surface connections. Some of the products are stacked on top of the soil particles in flakes, and the number of loose and tiny soil particles in the soil body is small. When the pollution concentration is 5%, the diesel molecules penetrate the soil, the fly ash adsorbs the diesel into the soil particles, the arrangement is uneven, there are still more pores, and the gel crystals generated by multiple curing reactions form a smaller-scale mesh unit structure (see Figure 7(d)), which improves the strength of the cured soil.
When the pollution concentration reaches 10%, the number of oil molecules rises, and the gel crystal forms a bigger size agglomeration that is more consistently structured but still has huge holes. More granular debris is adsorbed by diesel to the surface of the soil particles, and a large number of diesel molecules form a honeycomb structure and bond with the agglomerate. When the pollution concentration is 15%, large-scale agglomerates form aggregates, and the aggregates are primarily in contact with 9 Geofluids surfaces, with fewer pores and larger particles. The increase in pollution concentration increases the thickness of the "oil-film" formed by diesel infiltration into the soil, and the adhesion to large particle aggregates is more substantial so that the cured products cannot be filled into the soil particles and the soil strength cannot be improved. The microscopic
Conclusion
This paper is aimed at solving the problem of diesel pollution red clay under different influencing factors and uses lime fly ash as a curing agent. Based on the indoor geotechnical tests, conclusions can be drawn as follows: (1) The diesel-contaminated red clay cured with lime and fly ash has higher compressive strength and shear strength. When the content of the curing agent is certain, the curing age of 21D is the optimal age. Compared with the curing age of 7D, the unconfined compression strength of the same moisture content and the same pollution concentration increases by 159%, 166%, 134%, and 144%, respectively. Compared with the vertical loads of 100 kPa, 200 kPa, 300 kPa, and 400 kPa under the same moisture content and pollution concentration during the 7D curing period, the shear strength increases by 40%, 44%, 32%, and 9%, respectively (2) The failure mode of solidified diesel-contaminated soil is strain softening. When the pollution concentration increased from 5% to 15%, the unconfined compressive strength of solidified soil decreased with the increase in pollution concentration, which decreased by 15.5% and 19.4%, respectively. As the pollution concentration increases, the viscosity of diesel oil slows down the rate of ash reflection. On the other hand, the "oil-film" formed by the infiltration of diesel oil into the soil particles gradually thickens, which hinders the filling of the gels generated by the curing reaction into the soil. Therefore, the mechanical strength of contaminated soil cannot be improved (3) The main reason for improving the strength of the solidified contaminated soil is that the cement particles formed by the reaction of multiple fly ash and lime pozzolans are connected to form a network structure filled into the soil particles, and the particles are closely arranged to form tight connections. As the pollution concentration increases, the dieselbonded cement particles cannot enter the soil particles, and the soil strength decreases. The consistency of the microstructural characteristics and the change of the macroscopic mechanical strength of the solidified diesel-polluted soil were confirmed
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted. | 6,567.4 | 2022-06-20T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Effectiveness of mesenchymal stem cells cultured under hypoxia to increase the fertility rate in rats (Rattus norvegicus)
Abstract Background and Aim: Mesenchymal stem cells (MSCs) transplanted into the testes of rats with testicular failure can help rescue fertility. However, the low viability of transplanted MSCs limits the success of this treatment. This study aimed to determine the effectiveness of MSCs cultured under hypoxia to increase the fertility rate in rats (Rattus norvegicus). Materials and Methods: Bone marrow-derived MSCs (200 million cells/rat) were transplanted into male rat models with induced infertility (10 rats/treatment group) after 4 days of culture in 21% O2 (normoxia) and 1% O2 (hypoxia). Ten fertile and 10 untreated infertile rats served as controls. In the infertile male rats that had been fasted from food for 5 days, the fasting condition induced malnutrition and then resulted in testicular failure. Results: The results indicated that the MSCs cultured under hypoxic conditions were more effective than those cultured in normoxic conditions as a treatment for testicular failure in infertile male rats based on the increased number of cells expressing p63 as a quiescent cell marker and ETV5 as a transcription factor expressed in Sertoli and germ cells. Furthermore, the structure of the seminiferous tubules, which contain spermatogonia, primary and secondary spermatocytes, and spermatid, Sertoli, and Leydig cells, was improved in infertile male rats treated with the MSCs cultured under hypoxic conditions. Conclusion: The testicular transplantation of MSCs cultured under hypoxic conditions was an effective treatment for testicular failure in rats.
Introduction
Mesenchymal stem cell (MSC) transplantation using rabbit [1] and rat [2] bone marrow and later rat [3] and rabbit [4] adipose tissue was shown to be effective in rebuilding the tissues supporting the endogenous stem cells, allowing them to multiply and mature into sperm cells. The viability of the transplanted MSCs from the bone marrow [2,5], adipose tissue [3], or umbilical cord blood [6] is low; hence, this treatment has limited efficacy. The reduced viability of MSCs is thought to be caused by a normoxia culture with a high oxygen concentration (O 2 >20%). Cell senescence [7], cell apoptosis [8], and gene mutation [9] can all be caused by normoxia culture. As a result, the low viability of MSCs restricts the success of cell transplant therapy. It is hypothesized that the efficiency of MSC transplantation is influenced by apoptosis [10][11][12][13][14][15]. To achieve therapeutic impact, substantial doses of MSCs are necessary. Several researchers are attempting to acquire adequate dosage without the use of boosters, hence reducing the influence on rising expenses. Due to these issues, the effectiveness of the treatment remains unclear. Thus, alternative investigations are still required to learn more about the efficacy of more relevant treatments.
Other studies have indicated the critical significance of stem cell cultivation under hypoxic conditions as follows: To keep transplanted MSCs alive and adaptable, they were grown in a hypoxic environment (1-3% O 2 ) [2,4]. This condition was induced by cells in the quiescent state [4,16,17], allowing the cells to live longer [18,19]. Hypoxia-inducible factor 2 (HIF2), a critical regulator of progenitor stem cell function, may influence the expression of p63 as a definite marker in quiescent cells. Quiescent MSCs are self-renewing stem cells that remain in gap 0 and do not cycle (i.e., gap 1/synthesis/gap 2/mitosis) [20] or in undifferentiated states [21]. However, there is still a high potential for cell renewal [22]. At the indifferent stem stage, self-renewal is a symptom of the biological process and defense mechanism [23]. The homing signal based on the vascular endothelial growth factor (VEGF) expression following transplantation is critical for culture hypoxia-conditioned rat MSCs (1% O 2 concentration) [2]. VEGF is a component of the stem cell extracellular matrix that helps Copyright: Safitri and Purnobasuki. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/ by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http:// creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. maintain a favorable milieu for stem cells to survive after transplantation.
Scientific evidence on the effectiveness of MSCs cultured under hypoxia for testicular failure is still lacking. Therefore, we determined whether HIF2 (with HIF2 alpha [HIF2α] monoclonal antibody [ep190b] as a marker) regulated the transplantation of MSCs in the form of quiescent MSCs (with p63/TP73L monoclonal antibody as a marker) derived from rat bone marrow and whether HIF2 played a role crucial for spermatogonial stem cells (SSCs). Infertile males with testis failure can be treated with transplantation of MSCs cultured under hypoxic conditions. The findings of this study are relevant in the area of male reproductive health.
This study aimed to determine the effectiveness of MSCs cultured under hypoxia to increase the fertility rate in rats (Rattus norvegicus).
Ethical approval
The study was approved by Animal Care and Use Committee (No: 239-KE; Komisi Etik Penelitian) of the Faculty of Veterinary Medicine, Universitas Airlangga, Surabaya, Indonesia.
Study period and location
The study was conducted from March 2018 to 2020 at Department of Veterinary Science, Faculty of Veterinary Medicine, Universitas Airlangga, Surabaya, East Java, Indonesia.
Stem cell isolation
Stem cells were harvested from the bone marrow through aspiration of the femur, tibia, and ulna [24] of 3-month-old rats (Rattus norvegicus) [25]. The aspirate was placed in heparinized tubes (Z181099, Sigma-Aldrich ® , Burlington, Massachusetts, USA) and stored at 4°C to be transported to the laboratory [26].
Stem cell culture
The aspirate from the rat bone marrow was transferred into 15 mL sterile tubes (SIAL0790-500EA, Sigma centrifuge tubes, Sigma-Aldrich ® ), rinsed twice with 5 mL sterile phosphate-buffered saline (PBS), (MFCD00131855, Sigma-Aldrich ® ), and filled up to a total volume of 10 mL. The diluted sample was added with the same volume of Ficoll (Biowest, Nuaillé, France) in a separate 15 mL tube. Centrifugation was performed for 15 min at room temperature (37°C) at 1600 rpm. After centrifugation, the cells were collected from the Ficoll-PBS interface using a sterile Pasteur pipette (Corning™ C7095BNMR, Thermo Fisher Scientific, Waltham, MA, USA) and transferred into a 15 mL tube. The cells were resuspended in PBS up to a total volume of 15 mL. The tube was gently inverted and shaken (CLS6791 Sigma, Corning LSE Benchtop Shaking Incubator with Platform, Sigma-Aldrich ® ) 5 times to homogenize the suspension.
The suspension was centrifuged again for 10 min. The supernatant and floating cells were discarded, and the cell pellet was resuspended in 6 mL of alpha-modified essential medium (α-MEM) (M0894; Sigma-Aldrich). Mononucleated cells were placed on a plate in 10 cm 2 with approximately 2×10 7 cells and incubated at 37°C in a humidified atmosphere with 5% CO 2 (BioSpherix, Florida, USA) for 24 h to let the cells adhere (sticking in a plate). After 24 h, the media and non-adherent cells were discarded. The adherent cells were rinsed twice with 5 mL PBS, and 10 mL fresh α-MEM medium was then added into the dish, which was returned into the incubator. The culture was observed daily under an inverted microscope. The medium was changed every 4 days, preceded by a rinse with 10 mL PBS, after which 10 mL fresh α-MEM medium was added. The culture was continued until approximately 75-80% confluence was achieved. After confluence, the cells were passaged into several other dishes to cultivate subcultures [26]. Passaging was performed three times, and then, the cells were assigned to two conditions: Hypoxic precondition treatments of 1% in a hypoxia chamber (BioSpherix, Florida, USA) inside a 5% CO 2 incubator and another treatment using 21% O 2 concentration (normoxia) over 4 days. The MSCs were observed under a microscope before being transferred into the testes.
Infertility rat model
This study used male rats with testicular failure that had been fasted from food for 5 days but provided with drinking water ad libitum [2,27]. The fasting condition for 5 days induced malnutrition and then resulted in testicular failure. The malnutrition condition caused the adrenal cortex to function suboptimally in producing dehydroepiandrosterone (DHEA). Low levels of DHEA in the blood can cause fatigue and decrease sperm concentration. DHEA is the most potent precursor of steroid hormones, such as testosterone, which is produced by the renal adrenal cortex [28] and Leydig cells of the testis [29]. Low testosterone production can lead to decrease spermatogenesis and, thus, testicular failure. The animal models used in this study were healthy 8-10-week-old Wistar strain male rats (R. norvegicus) with a body weight of 250-300 g. The rats were placed in individual plastic cages in the experimental animal laboratory at the Faculty of Veterinary Medicine, Universitas Airlangga.
MSC transplantation methods
The MSCs were transplanted into male rats with testicular failure, which were then compared with the negative and positive control rats. The T1 group consisted of 10 infertile male rats that were transplanted with stem cells cultured under normoxia (21% O 2 concentration) for 4 days with a dose of 200 million cells/ rat [1]. The T2 group consisted of 10 infertile male rats that were transplanted with stem cells cultured under hypoxia (1% O 2 concentration) for 4 days with a dose of 200 million cells/rat [1]. The positive (fertile) control group was composed of 10 normal male rats (fertile) injected with 0.1 mL PBS. The negative Available at www.veterinaryworld.org/Vol.14/November-2021/28.pdf (infertile) control group consisted of 10 infertile male rats injected with 0.1 mL PBS.
The testes of the male rats were surgically excised after 54 days (one cycle of spermatogenesis) [30] to collect testicular tissue. The improvement in the testicular tissue was observed by histopathological preparations using hematoxylin and eosin (H&E) stain (B8438, Sigma-Aldrich ® ). Immunohistochemical (IHC) observation was performed to determine the expression of p63 (with p63/TP73L monoclonal antibody) as a marker of quiescent cells, HIF2α (with HIF2α monoclonal antibody [ep190b]) as a marker that is crucial for endogenous cells (SSCs, Sertoli cells, etc.), and ETV5 (with ETV5 monoclonal antibody) as a marker for the transcriptional factor to improve testicular failure and infertility. In vitro fertilization between the ovum and sperm cells was performed to observe the fertility of the male rats.
Histopathological assessment
Histopathological examination of the testicular tissues for the presence of Sertoli cells, Leydig cells, spermatogonia, spermatocytes, and primary and secondary and spermatid cells was performed on the testicular tissues that were fixed with 10% formalin. The testes were then dehydrated through a series of increasing alcohol concentrations, cleared with xylol, and embedded in paraffin. Thin sections mounted on slides were processed for H&E staining [31].
Histopathological examination was performed using a light microscope with a magnification of 200×. Five fields of view were assessed for each slide. Observations and identification of the spermatogonia and Sertoli and Leydig cells and regeneration of seminiferous tubules were based on the existing histological description [27].
IHC observation
IHC observation was performed to determine the expressions of p63, HIF2α, and ETV5. The samples were prepared for histopathological examination of the testicular tissue. After deparaffinization of the preparation (paraffin block) with xylene 3 times each for 3 min, rehydration of the preparation with 100%, 95%, and 70% ethanol each for 2, 2, and 1 min and finally with water for 1 min was performed. Subsequently, the preparation was soaked in peroxidase blocking solution at 37°C for 10 min and then incubated in pre-diluted blocking serum at 25°C for 10 min.
Subsequently, the preparation was incubated with a secondary antibody (conjugated to horseradish peroxidase) at 25°C for 10 min, washed with PBS for 5 min, and then incubated again with peroxidase at 25°C for 10 min. Next, the preparation was washed with PBS for 5 min and then incubated with chromogen diaminobenzidine at 25°C for 10 min.
Furthermore, the preparation was incubated with H&E for 3 min and washed with water. Finally, the preparation was cleaned, dropped with mounting media, and closed with a coverslip. Then, the expressions of p63, HIF2α, and ETV5 (brown color) were observed on the cells using a light microscope with 400×. Five fields of view (one tubule/field view) were assessed for each slide through the scoring system. The following IHC scoring system [32] was used: IHC score = A×B, where A denotes the wide percentage of expressions and B is the intensity of the chromogen color (Table-1).
Medium preparation, sperm and oocyte collection, and in vitro fertilization
The media, M16 (MR-016 EMD Millipore, Sigma-Aldrich Inc., Darmstadt, Germany) and PBS (10010031 PBS, pH 7.4; Thermo Fisher Scientific), were manufactured in accordance with the established procedure for producing these two media. Before use for in vitro fertilization, a droplet medium was prepared in a Petri dish (Sterilin™ 100 mm Petri Dishes, Thermo Fisher Scientific) with a volume of 50 µL as a washing medium and 25 µL as an in vitro culture medium. The droplet medium was then incubated for 3 h in a 5% CO 2 incubator at 37°C before being used for in vitro fertilization [33].
Sperm cells were collected from male rats after being sacrificed by dislocation of the fourth cervical spine and then disinfected with 70% alcohol. A Y-shaped incision was made in the abdomen, the stomach contents were removed, and the left testicle was pulled out. The fats were separated, and then part of the cauda epididymis, the mature sperm storage, was taken. The obtained cauda epididymis was washed with PBS twice, cut into small pieces to free IHC=Immunohistochemical the spermatozoa, placed on M16, and then incubated in a 5% CO 2 incubator at 37°C [34].
Before the oocytes were collected, PMSG and hCG hormones were injected subcutaneously to stimulate superovulation: On the 0 th h on the 1 st day, female rats were injected with 5 IU of 0.1 mL PMSG to stimulate the process of folliculogenesis and left to stand for 48 h. The female rats were injected with 5 IU of 0.1 mL hCG 48 h after the PMSG was injected and mated directly with a single vasectomized male to stimulate ovulation. After 17 h, a vaginal plug was used to confirm that the female had mated. Then, the oocytes were flushed. The female rats were killed by dislocation of the fourth cervical spine. A Y-shaped incision was then made on the abdomen, and the uterus was removed and separated from the fallopian tube section and then rinsed with M16. The oocytes from the ampulla of the fallopian tube were flushed using an inverted microscope [35].
Fertility rate observations
The fertility of spermatozoa was determined by examining in vitro fertilization. Semen analysis was conducted according to the guidelines of the World Health Organization. Semen was processed over a two-layer discontinuous density gradient, formed by a top layer of 40% (v/v) PureSperm (Nidacon Lab AB, Gothenburg, Sweden) and a lower layer of 80% (v/v) PureSperm, by centrifugation at 1 500 g for 15 min at 37°C. The pellet was resuspended in 3 µL SAGE fertilization medium with 5% HSA and spun down at 200 g for 10 min at 37°C [36]. The oocyte and sperm were both placed in the Petri dishes containing M16 drops under a mineral oil overlay and incubated in 5% CO 2 incubators at 37°C for 5 h for in vitro fertilization [37]. Fertilization was confirmed by the presence of the second polar body of oocyte [36] through an inverted microscope (Nikon Eclipse TE 2000S; Nikon, Tokyo, Japan) at 400× with Hoffman modulation optics. In rats, the first polar body is known to degenerate.
The embryo quality was evaluated on days 2-3. On days 2-3, embryo development was assessed, including blastomere number, size, regularity, presence, and percentage of fragmentation. All embryos were graded on a scale of 1-5, with 1 being the best. Grade 1 embryos had symmetrical blastomeres of equal size with no cytoplasmic fragmentation. Grade 2 embryos had blastomeres of equal size and minor cytoplasmic fragmentation covering <10% of the embryo surface. Grade 3 embryos had even or uneven blastomeres and minor cytoplasmic fragmentation covering 10-25% of the embryo surface. Grade 4 embryos had blastomeres of equal or unequal size and moderate to high cytoplasmic fragmentation covering 25-50% of the embryo surface. Grade 5 embryos contained few blastomeres of any size and severe fragmentation covering >50% of the volume of the embryo. Embryos with good quality were defined as those with six to eight equal-sized cells and <10% fragmentation on day 3 [36]. The fertility rate of sperm formula in rats was calculated as follows: Number of good-quality embryos/the number of mature oocytes × 100% [35].
Statistical analysis
The expressions of p63, HIF2α, and ETV5 and the fertility rate of the sperm cells were statistically analyzed using the SPSS software (v. 17 for Windows XP; SPSS Inc., Chicago, IL, USA) with a 99% confidence level (α=0.01) and 0.05 significant difference (p<0.05). The comparative steps for hypothesis testing were as follows: Normality data test, Kolmogorov-Smirnov test, homogeneity of variance test, analyses of variance, and Tukey's HSD post hoc test with 5% least significant difference.
Results
Data were collected from 40 male rats, which were divided into four treatment groups: Normal males in the positive (fertile) control, infertile males treated with PBS in the negative (infertile) control, infertile males transplanted with stem cells cultured under normoxia (21% O 2 concentration) for 4 days in the first treatment (T1) group, and infertile males transplanted with stem cells cultured under hypoxia (1% O 2 concentration) for 4 days in the second treatment (T2) group. The results indicated that transplantation with MSCs from hypoxic precondition culture improves testicular function by decreasing the extent of damage and increasing fertility. The expression levels of p63, HIF2α, and ETV5 increased, as were the regeneration of the testicular tissue, as described in more detail in the following, such as intact tubular seminiferous tissue; formation of Sertoli cells, Leydig cells, spermatogonia, spermatocytes, and primary-secondary and spermatid cells; and improvement of the fertility rate of sperm in vitro.
Expression of p63
The average score of the p63 expression in the T2 group was 6.0 b ±0.50, although the score was lower than that in the positive (fertile) control group, in which p63 was expressed (9.60 a ±0.44), but the score was still much higher than that in the T1 (2.5 c ±0.25) and negative (infertile) control groups, in which p63 was not expressed (0.41 d ±0.22) ( Figure-1 and Table-2).
Expression of HIF2α
The average score of the HIF2α expression in the T2 group was 7.0 a ±0.75. This was the highest among those in the other groups, T1 (3.9 b ±0.44), positive (fertile) control (0.6 c ±0.34), and negative (infertile) control (0.3 c ±0.33) ( Figure-2 and Table-2). An increase in the score of the HIF2α expression could occur if they were given a low O 2 concentration.
Expression of ETV5
The average score of the ETV5 expression in the T2 group was 7.2 b ±0.34, although the score was lower in the positive (fertile) control group (10.5 a ±0.25), but the score was still higher than that in the T1 (2.2 c ±0.15) Available at www.veterinaryworld.org/Vol.14/November-2021/28.pdf and negative (infertile) control groups, in which ETV5 was not expressed (0.3 d ±0.23) ( Figure-3 and Table-2).
Regeneration of testicular tissue
Microscopic examinations of five different fields of view revealed that the T2 group experienced repaired testicular tissue. The improvements were identified based on the regeneration of Sertoli cells, Leydig cells, spermatogonia, spermatocytes, primary-secondary and spermatid cells, and seminiferous tubules. An overview of these improvements could be compared with the positive (fertile) control, which did not experience testicular degeneration. The T2 group remained under the normal condition ( Figures-4a and d, Table-3). The T1 group did not exhibit testicular tissue repair, indicating that intact seminiferous tubules were not observed, and spermatogonia, Sertoli cells, and Leydig cells were degenerated. This amount of damage was comparable to that of the negative (infertile) control ( Figures-4b and c, Table-3).
The number of different types of cells was counted based on the characteristics of each cell, as in the following: Spermatogonium: It has a round shape and is located near the basement membrane, and the nucleus has an oval shape with fine chromatin and a thin nuclear membrane. Available at www.veterinaryworld.org/Vol.14/November-2021/28.pdf Primary spermatocyte: It has the largest size among gamete cells, with heterochromatin in the nucleus, and is located between the basal membrane and the tubular lumen.
Secondary spermatocyte: It is rarely observed in the seminiferous tubules because it quickly divides into spermatids.
Spermatid: It has a round shape and is smaller than spermatocytes, and the nucleus is round, pale, and bright.
Sertoli cell: It has a slim, irregular shape, and the base attaches to the basement membrane of the seminiferous tubules, having one nucleus located at the center.
Leydig/interstitial cell: It is located in the loose connective tissue between the tubules; this is a large cell, polygonal in shape, and the nucleus is clearly visible and also polygonal in shape.
Improvement of the fertility rate of the sperm
The T2 group exhibited a significant improvement in the fertility rates compared with the T1 group, although the rate did not reach that of the positive (fertile) control group (Table-4).
Discussion
In this study, we determined whether MSCs exposed to hypoxic conditions could repair testicular function more effectively than MSC cultured under normoxic conditions. Hypoxia, in this study, was adjusted to the normal and physiological conditions in which the stem cells were in situ (in vivo). Therefore, an ex situ (in vitro) study was conducted by inducing hypoxic conditions so that the conditions of the stem cells cultured are the same as the in situ physiological conditions (in vivo). In in vivo conditions, the MSCs are in the form of quiescent.
The induction of quiescent MSCs through p63 quiescent expression increased the life expectancy of stem cells by maintaining their viability and the adaptive condition for stem cell transplantation. The role of hypoxia in maintaining quiescence stem cells begins with the induction of HIF2α. Subsequently, HIF2α activated the gene pluripotency after it had been preceded by an initial adaptation time by HIF1α. The pluripotency of these stem cells may prolong the lifetime of quiescent cells so that the function of stem cells (stemness) is maintained. Furthermore, after transplantation, it mobilized the endogenous stem cells toward the defect area (testicular tissue). The process of mobilization can occur through several ways: (a) Induction of proteolysis (protein degradation) from the microenvironment of bone marrow, such as induction of pharmacological agents (granulocyte-colony stimulating factor or cyclophosphamide) or induction of quiescent stem cells that were transplanted (p63 marker); (b) blockade of CXCR4 or VLA-4 by specific blocking molecules, such as Available at www.veterinaryworld.org/Vol.14/November-2021/28.pdf [27]. The mobilization of endogenous stem cells will further move the stem cells to the testicular tissue, resulting in spermatogenesis processes and rescue of testicular failure and infertility repair. This study demonstrated that the MSCs from the hypoxic precondition culture were effective for therapy in male rats with testicular failure and infertility based on the increased of p63 expression as a quiescent cell marker as a crucial for progenitor of stem cell function and ETV5 expression as a transcriptional factor for regeneration of testicular tissue and improvement of the fertility of sperm.
The regenerative efforts of stem cells were differentiated by the decreased expression of p63 compared with that in the positive (fertile) control group. This study demonstrated that the score of p63 in the T2 group was lower than that in the positive (fertile) control group but was still better than those in the T1 and negative (infertile) control groups, which is slightly expressed ( Figure-1 and Table-1).
The p63 gene can maintain the viability of stem cells and regenerate stem cells from various tissue cells, which are known as ringmasters. In the previous study [4], the absence of p63 was showed a decreased of proliferation ability of cells, indicating that p63 is a key function in increasing the division of stem cells because the p63 gene directly promotes and control the stem cell environment and maintain undifferentiation.
Hypoxic precondition led to HIF1α release; therefore, so that not bound by the van Hippel-Lindau factor as a factor inhibited for HIF1α action furthermore [38]. Furthermore, HIF1α would be bound to HIF1β so that complex HIF1 was formed [9]. The HIF1α+HIF1β bond occurred in specific DNA sequences known as the hypoxia response element (HRE) 5'-TACGC-3'. Complex HIF1α and HIF1β bonds on HRE occurred at the start of exposure to hypoxia [39], thus causing cell cycle arrest and gene expression [40]. This inhibited the p21 expression, resulting in cell cycle inactivation and resistance to senescence and exhausting cells [7]. This is thought to slow down the proliferation of cultured stem cells; thus, quiescent cells can still be maintained [41].
Long-term maintenance of quiescent cells was also thought to be influenced by cultivation time-dependent hypoxic preconditions. After 48 h under low O 2 tension, the role of HIF1α would be replaced by that of HIF2α with different target genes [39]. The target genes in in vitro culture were expected to induce the expression of pluripotency genes [42], such as OCT4, SOX2, NANOG [9,43], and REX-1 [23]. A hypoxic precondition is an effort to change the ability of multipotent stem cells to become pluripotent.
The mean identification of the HIF2α score in the T2 group was the highest among those of the other groups. In vitro cultures, low oxygen tension (hypoxia), and cultivation time-dependent administration of oxygen induce expressions of pluripotency genes [37], such as OCT4, SOX2 [1,44], REX-1 [44], and NANOG [45]. Pluripotency genes are activated by HIF2α [46] after preceding initial adaptation time by HIF1α [9]. The pluripotency of these stem cells can retain quiescent cells; therefore, the function of stem cells is maintained. The quiescent cells with p63 as a marker, which is regulated by HIF2α from hypoxic precondition culture with 1% O 2 concentration for 4 days, are crucial for conducive niche in vivo, so that after transplantation, the stem cells could be transdifferentiated through spermatogenesis in the seminiferous tubules of the testis.
Furthermore, stem cells from the hypoxic precondition culture were found to be effective based on ETV5 formation in the testicular tissue, with an average score of 2.95 b ±0.50. The ETV5 as a marker of SSC function that can increase and improve the testicular environment and support endogenous stem cells, so that stem cells can be mobilized to the testicular tissue that has failed, resulting in improvement and rescue for fertility.
In this study, IHC methods were employed to identify ETV5. The score of the ETV5 expression in the T2 group was approximately 3. Although this score was below that of the positive (fertile) control group, it was still well above those of the T1 and negative (infertile) control groups, which was slightly expressed ( Figure-3 and Table-1). Previous research demonstrated that bone marrow-derived MSCs are adult stem cells that quickly grow and differentiate into cells that are needed in response to the presence of defects [40].
Regeneration of the testicular tissue was identified as an intact seminiferous tubule tissue; formation of Sertoli cells, Leydig cells, spermatogonia, spermatocytes, and primary and secondary and spermatid cells. The viability of stem cells that differentiate into cells is necessary. In infertile conditions, the degenerative testicular tissue can be regenerated if the stem cells are viable. If they are not viable, then the testicular tissue will remain degenerated.
The survival of stem cells in the animal model of degenerative tissue, such as testicular failure, is beyond the scope of the therapeutic effect of MSC treatment. In addition, poor survival following cell transplantation is a crucial factor [5]. This study demonstrated that stem cells from the hypoxic precondition culture survived based on the effectiveness of therapy in male rats with testicular failure and infertility through the regeneration of their testes, which can be observed using IHC methods, H&E staining, and a light microscope [31]. Testicular tissue repair was confirmed by the regeneration of the seminiferous tubules, based on observing of seminiferous tubules become intact and compact again [27].
In this study, light microscopy examination revealed that testicular tissue repair occurred in the T2 group. The overview of the testicular tissue repair can be compared with that in the positive (fertile) control group, which did not experience testicular degeneration and remained in the normal condition ( Figure-4 and Table-2), whereas the T1 group did not exhibit improvement in their testicular tissue. The tissue damage was comparable with that in the negative (infertile) control group with testicular degeneration ( Figure-4 and Table-2).
Conclusion
Transplantation of MSCs cultured under hypoxic conditions is an effective treatment for testicular failure in a rat infertility model. | 6,689.2 | 2021-11-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Phenomenological Inferences on the Kinetics of a Mechanically Activated Knoevenagel Condensation: Understanding the “Snowball” Kinetic Effect in Ball Milling
We focus on understanding the kinetics of a mechanically activated Knoevenagel condensation conducted in a ball mill, that is characterized by sigmoidal kinetics and the formation of a rubber-like cohesive intermediate state coating the milling ball. The previously described experimental findings are explained using a phenomenological kinetic model. It is assumed that reactants transform into products already at the very first collision of the ball with the wall of the jar. The portion of reactants that are transformed into products during each oscillation is taken to be a fraction of the amount of material that is trapped between the ball and the wall of the jar. This quantity is greater when the reaction mixture transforms from its initial powder form to the rubber-like cohesive coating on the ball. Further, the amount of reactants processed in each collision varies proportionally with the total area of the layer coating the ball. The total area of this coating layer is predicted to vary with the third power of time, thus accounting for the observed dramatic increase of the reaction rate. Supporting experiments, performed using a polyvinyl acetate adhesive as a nonreactive but cohesive material, confirm that the coating around the ball grows with the third power of time.
Introduction
Mechanical processing by ball milling (BM) is a well established methodology in materials science [1,2]. Typically used to mix granular solids, reduce their particle size, refine their microstructure, and enhance their chemical reactivity [1,2], it has also been applied to chemical synthesis [3][4][5][6][7]. Physical and chemical transformations are activated and driven by the transfer of mechanical energy to the particulate. This activation takes place every time the particulate gets trapped between the surfaces of the milling tools (the ball and the jar walls) as they collide with each other [1][2][3][4][5][6][7]. Under the effect of the intense mechanical stresses generated during each collision, the granular solids undergo severe mechanical deformation [1][2][3][4][5][6][7]. Eventually, deformation results in the effective mixing of the solid phases on the microscopic scale, which is accompanied by the formation of extended interfaces between the reactants and the consequent enhancement of chemical reactivity [1][2][3][4][5][6][7].
The scenario mentioned above is particularly suitable to explain the transformations involving organic substances. Highly deformable, they readily undergo the shearing, compression, and folding processes that local mechanical stresses induce during collisions. The resulting intimate mixing affects The scenario mentioned above is particularly suitable to explain the transformations involving organic substances. Highly deformable, they readily undergo the shearing, compression, and folding processes that local mechanical stresses induce during collisions. The resulting intimate mixing affects the conditions under which chemical reactions occur [3][4][5][6][7] and can occasionally favour the formation of products different from those of thermally activated transformations [3][4][5][6][7].
In general, a close relationship between chemical reactivity and mechanical characteristics of the processed solids can be expected [1,2]. Accordingly, rheological behaviour of the particulate is expected to significantly affect the transformation kinetics. The recent kinetic study of a mechanically induced Knoevenagel condensation provided clear evidence for the importance of rheological effects [8].
The particular Knoevenagel condensation investigated was the model reaction between vanillin and barbituric acid shown in Scheme 1: Scheme 1. The mechanochemical Knoevenagel condensation of vanillin and barbituric acid.
The reaction was successfully carried out under mechanical activation conditions for the first time in 2003 [9]. Since then, it has been investigated systematically under various milling conditions [10][11][12]. Experimental evidence clearly shows that the Knoevenagel condensation proceeds quantitatively in planetary and mixer ball mills, at a rate that is strongly influenced by milling parameters such as milling frequency, number of balls, and powder charge [10][11][12].
While these results confirm the appealing BM capability of activating organic transformations in the absence of solvent phases, additional evidence has been recently given concerning the role that the properties of processed substances can have on the transformation rate and the overall kinetic behaviour. In particular, it has been shown that the mechanical processing can induce a change in the physical form of the reaction mixture from a dry, loose powder to a cohesive, rubber-like state coating the ball [8]. This change takes place exactly during the sigmoidal increase in the transformation rate, thus suggesting that the rheological properties of reactants and products can affect the rate of the reaction, resulting in kinetics that differ dramatically from the simple first-order kinetics of the same reaction in solution [8].
Although such changes in the rheological characteristics have been reported in only a few cases [8,13,14], it is possible that they are quite general during mechanochemical reactions [15]. However, there is currently no mechanistic interpretation of how these rheological changes occur. In this work, we propose a phenomenological interpretation of the kinetic data collected in a previous investigation [8]. Our interpretation is based on a kinetic model that takes account of the statistical nature of BM. We support our kinetic analysis with an experimental investigation of the formation of cohesive rubber-like states in a suitably selected model system. In the following, we first recapitulate the conditions under which the Knoevenagel condensation between vanillin and barbituric acid was studied in the previous work [8]. We then describe the methods used here to perform and support the kinetic analysis.
Results
The experimental and theoretical work carried out within the framework of the present investigation aims at explaining the kinetics of the mechanically activated Knoevenagel condensation between vanillin and barbituric acid described in the literature [8]. For convenience, the original kinetic dataset is reproduced in Figure 1. The reaction was successfully carried out under mechanical activation conditions for the first time in 2003 [9]. Since then, it has been investigated systematically under various milling conditions [10][11][12]. Experimental evidence clearly shows that the Knoevenagel condensation proceeds quantitatively in planetary and mixer ball mills, at a rate that is strongly influenced by milling parameters such as milling frequency, number of balls, and powder charge [10][11][12].
While these results confirm the appealing BM capability of activating organic transformations in the absence of solvent phases, additional evidence has been recently given concerning the role that the properties of processed substances can have on the transformation rate and the overall kinetic behaviour. In particular, it has been shown that the mechanical processing can induce a change in the physical form of the reaction mixture from a dry, loose powder to a cohesive, rubber-like state coating the ball [8]. This change takes place exactly during the sigmoidal increase in the transformation rate, thus suggesting that the rheological properties of reactants and products can affect the rate of the reaction, resulting in kinetics that differ dramatically from the simple first-order kinetics of the same reaction in solution [8].
Although such changes in the rheological characteristics have been reported in only a few cases [8,13,14], it is possible that they are quite general during mechanochemical reactions [15]. However, there is currently no mechanistic interpretation of how these rheological changes occur. In this work, we propose a phenomenological interpretation of the kinetic data collected in a previous investigation [8]. Our interpretation is based on a kinetic model that takes account of the statistical nature of BM. We support our kinetic analysis with an experimental investigation of the formation of cohesive rubber-like states in a suitably selected model system. In the following, we first recapitulate the conditions under which the Knoevenagel condensation between vanillin and barbituric acid was studied in the previous work [8]. We then describe the methods used here to perform and support the kinetic analysis.
Results
The experimental and theoretical work carried out within the framework of the present investigation aims at explaining the kinetics of the mechanically activated Knoevenagel condensation between vanillin and barbituric acid described in the literature [8]. For convenience, the original kinetic dataset is reproduced in Figure 1. The chemical conversion follows a sigmoidal trend characterized by a significant increase of the reaction rate approximately between 28 and 32 min of mechanical processing. The kinetic curve is quite different from the one observed for the same reaction carried out in solution, which corresponds to a simple first-order kinetics.
Hutchings et al. already noted that the sigmoidal kinetics can be related to "… the dramatic changes in the physical form of the reaction mixture as the reaction progressed. Specifically, the reaction mixture changed from a dry, free-flowing powder during the induction period to a cohesive rubber-like state that formed a robust coating around the ball during the sigmoidal increase in reaction rate, after which it returned again to a free-flowing powder…" [8]. A picture of the coated ball is given in Figure 2. We show that such observation, in combination with a simplified kinetic modelling, can satisfactorily explain the experimental findings.
The Phenomenological Kinetic Model
Discussed in detail elsewhere [16][17][18], the model takes into account the fundamental features of BM. Accordingly, it assumes that: (a) Only a small amount of powder is processed during each collision; (b) The powder is trapped between the impacting milling tools during each collision with approximately stochastic dynamics; (c) For any component of the powder mixture, the amount involved in each collision corresponds to the average composition of the powder charge; (d) The chemical composition of the processed powder remains uniform during the entire mechanical treatment. The chemical conversion follows a sigmoidal trend characterized by a significant increase of the reaction rate approximately between 28 and 32 min of mechanical processing. The kinetic curve is quite different from the one observed for the same reaction carried out in solution, which corresponds to a simple first-order kinetics.
Hutchings et al. already noted that the sigmoidal kinetics can be related to " . . . the dramatic changes in the physical form of the reaction mixture as the reaction progressed. Specifically, the reaction mixture changed from a dry, free-flowing powder during the induction period to a cohesive rubber-like state that formed a robust coating around the ball during the sigmoidal increase in reaction rate, after which it returned again to a free-flowing powder . . . " [8]. A picture of the coated ball is given in Figure 2. The chemical conversion follows a sigmoidal trend characterized by a significant increase of the reaction rate approximately between 28 and 32 min of mechanical processing. The kinetic curve is quite different from the one observed for the same reaction carried out in solution, which corresponds to a simple first-order kinetics.
Hutchings et al. already noted that the sigmoidal kinetics can be related to "… the dramatic changes in the physical form of the reaction mixture as the reaction progressed. Specifically, the reaction mixture changed from a dry, free-flowing powder during the induction period to a cohesive rubber-like state that formed a robust coating around the ball during the sigmoidal increase in reaction rate, after which it returned again to a free-flowing powder…" [8]. A picture of the coated ball is given in Figure 2. We show that such observation, in combination with a simplified kinetic modelling, can satisfactorily explain the experimental findings.
The Phenomenological Kinetic Model
Discussed in detail elsewhere [16][17][18], the model takes into account the fundamental features of BM. Accordingly, it assumes that: (a) Only a small amount of powder is processed during each collision; (b) The powder is trapped between the impacting milling tools during each collision with approximately stochastic dynamics; (c) For any component of the powder mixture, the amount involved in each collision corresponds to the average composition of the powder charge; (d) The chemical composition of the processed powder remains uniform during the entire mechanical treatment. We show that such observation, in combination with a simplified kinetic modelling, can satisfactorily explain the experimental findings.
The Phenomenological Kinetic Model
Discussed in detail elsewhere [16][17][18], the model takes into account the fundamental features of BM. Accordingly, it assumes that: Additionally, it is reasonable to assume that (i) the mechanical stresses generated during each collision result in critical loading conditions (CLCs) only in a subvolume V * of the trapped powder, and that (ii) such subvolume remains approximately constant during the entire mechanical treatment. CLCs can be regarded as the mechanical loading conditions that generate mechanical stresses intense enough to cause the mechanical deformation required to activate a given transformation process. Thus, V * represents the volume of effectively processed powder. Both CLCs and V * can vary from process to process.
For a given transformation, the volume fraction of effectively processed powders is κ = V * /V, where V is the total volume of powder inside the vial. Ideally, the powder charge can be divided into equal volume elements V * with the same probability of being processed in a given collision.
Once the first collision has occurred, the powder charge can be divided into two volume fractions, χ 0 (1) and χ 1 (1), of powders processed respectively zero and one times. These two fractions are equal to 1 − κ and κ. As the second collision takes place, another volume fraction κ gets processed. Based on the model assumption that the powder charge remains always uniform, the volume fractions effectively processed zero, one, and two times become equal to The process continues as the number of collisions, n, increases. The volume fraction χ 0 (n) of powder not yet processed after n collisions can be expressed as This simply indicates that, during each collision, χ 0 (n) decreases by the fraction κ χ 0 (n). The corresponding expression for the fraction χ i (n) of the powder processed i times after n collisions is more complicated, as its variation is governed by two different contributions. During the n-th collision, a fraction of the powder that has been processed i times, namely χ i (n − 1), is processed for the (i + 1)-th time. This fraction contributes to χ i+1 (n), thus it must be subtracted from χ i (n − 1). On the other hand, the n-th collision will process a fraction of χ i−1 (n − 1), equal to κ χ i−1 (n − 1), for the i-th time, thus this amount must be added to χ i (n − 1). Therefore, after the n-th collision, the volume fraction of powder effectively processed i times is equal to As long as the volume fraction κ is small, i.e., κ << 1, which is the case in typical BM experiments, the discrete Equations (1) and (2) can be written in the continuous forms whereas Equation (4) is solved by Equations (5) and (6), which satisfy the condition ∞ i=0 χ i (n) = 1, can be regarded as the fundamental equations that describe the kinetics of ball mill-activated transformations. A kinetic curve with a simple exponential character can be obtained by assuming that the final product forms when the powder is processed above CLCs for the first time. Under this assumption, the volume fraction χ p (n) of the product phase can be expressed as A sigmoid curve is obtained if the formation of the product requires that the powder undergoes CLCs twice. In this case, If m CLCs are required, the resulting equation is more complicated, but retains the sigmoid shape and the volume fraction of the product can be expressed as Similarly, if m CLCs are required to form an intermediate that evolves after p additional collisions, into a final product, the volume fraction of the intermediate can be expressed as In all cases, the volume fraction of powder processed effectively during a collision, κ, can be regarded as a measure of the rate of the gradual mechanochemical transformation.
All of the model equations can be referred to mass fractions if density of reactants, intermediates if present, and products are taken into due account. In addition, under the assumption that the number of collisions, n, is proportional to time, t, all of the model equations reported above can be expressed as a function of time.
Analysis of Kinetic Data
The capability of mechanical processing by BM of inducing the Knoevenagel reaction raises a crucial question for its kinetic analysis. Is the reaction activated by the very first impact? Or, perhaps, a given fraction of powder must be hit twice to undergo the chemical transformation? At present, we do not have direct evidence concerning the point. Under these circumstances we make the simplest possible assumption that allows kinetic modelling. Accordingly, we assume that the Knoevenagel condensation takes place, in a small fraction of the powder trapped between ball and vial, when the powder has undergone CLCs once. If the rate coefficient, k, accounting for the transformation rate remained constant, we could equal it to κ and try to use the equivalent of Equation (7) referred to time to describe the reaction kinetics. However, a constant k does not seem to be the case for the observed transformation.
The rate coefficient k measures the volume fraction processed effectively during a single collision. Therefore, it is proportional to the volume of material subjected to CLCs. In turn, the volume of material subjected to CLCs can be expected to scale with the amount of material that can be effectively trapped between the surfaces of milling tools during the collision.
The efficiency of trapping can vary greatly depending on the nature of the processed material. Trapping loose powder is quite difficult because of the intrinsic dynamics of the granular body inside the reactor. Thus, it can be expected, and it is also generally observed [16][17][18], that only a very small fraction of the powder charge is effectively processed in individual collisions. In contrast, a greater trapping efficiency can be expected when the processed material coats the milling tools. Indeed, a significant fraction of the whole powder charge inside the reactor will be certainly loaded by milling tools under such circumstances.
According to the experimental findings regarding the Knoevenagel condensation, the material inside the reactor forms a coating around the milling ball. Such coating forms gradually as a consequence of the ball impacting and rolling inside the reactor and we can suppose that it makes k increase with time. In particular, we can expect that the coating forms at a rate proportional to the total surface that can be coated. It follows that the coating rate can be expressed as where v is the volume of material forming the coating layer around the milling ball and r is the radius of the coated ball. Now, we can suppose that the rate of radius increase remains constant. This means that dr dt ∼ c. Thus, where r 0 is the radius of the uncoated ball. Once we substitute Equation (13) in Equation (11) and integrate Equation (11), the result is It follows that the volume of the coated ball is If we assume that the rate coefficient k is proportional to the volume v ball of the coated ball, then where V is the total volume of powder inside the reactor and A is a proportionality constant. Thus, to a first, rough approximation, we can expect that k exhibits a cubic dependence on time.
It is also worth noting that experimental findings suggest that k increases definitely only after about 20 min of mechanical processing. Therefore, it seems reasonable to assume that the material starts coating the ball only after an induction period t 0 .
The above-mentioned observations can be summarized as follows: Here, k 0 is the apparent rate constant of the transformation for times shorter than the induction period t 0 , when the material is still unable to stick to the ball surface and form a stable coating. For times longer than t 0 , the apparent rate constant k is affected by the additional contribution related to coating.
The variation of k with time does not allow any analytical description of the kinetic curve. However, it is still possible to write the equations as a function of time and solve them numerically. (3) and (4), Equations (19) and (20) provide a general description of the transformation kinetics. A best-fitting procedure allows us to find the expression of the kinetic curve and the spectrum of k(t). values associated.
The hypothesis that the reaction activation requires that powder undergoes CLCs only once allows a very good fitting of the experimental data. As shown in Figure 1, the resulting kinetic curve perfectly interpolates the experimental points. Such kinetic curve is obtained allowing k(t) to vary in time as shown in Figure 3. It can be seen that, starting from a relatively low value, k(t) progressively increases up to about 0.8 min −1 . Now, it is worth noting that Eq. 18 satisfactorily describes the variation of . The best-fitted curve almost overlaps the estimates, providing significant support to the theoretical model proposed. In particular, it seems definitely reasonable to expect that cohesive states develop gradually under the effect of adhesive forces between the coated ball and the material present in the vial. The mechanism is analogous to that governing the formation of snowballs. Accordingly, the rate of adhesion to the coating depends on the amount of material involved in the coating itself and the adhesion process proceeds autocatalytically until all the material inside the vial has been collected or other competing disaggregation processes intervene to limit the growth of the coating layer.
The satisfactory best-fitting allowed obtaining reliable estimates for the different unknown quantities in Eq. 18. In particular, it suggests an induction period, , about 27 min long and, for times shorter than 27 min (not shown for clarity in Figure 3), an initial value, , of about 0.01 min −1 for the rate coefficient. The rate of radius increase, , is approximately equal to 0.1 mm min −1 , a value that properly accounts for the rapid formation of the coating layer.
The results discussed heretofore suggest that the kinetic modelling can suitably explain the experimental findings concerning the Knoevenagel condensation between vanillin and barbituric acid carried out under mechanical activation conditions [8]. Specifically, it seems that the simplest possible kinetic assumption that the reaction occurs in a fraction of the processed material already at the first impact, combined with the hypothesis that the rate of coating formation is proportional to the total surface that can be coated, leads to kinetic equations able to reproduce the experimental data quite satisfactorily.
However, there are at least two issues that deserve further investigation in the attempt of gaining a deeper insight into the behaviour of the processed material and validating, as far as possible, the kinetic model proposed. On the one hand, it is worth investigating whether or not cohesive, rubber-like states can form under the effects of a mechanical action different from the one imparted to milling balls by the Retsch MM400 shaker-type ball mill used in previous work [8]. On the other, it is highly desirable to obtain direct evidence concerning the formation kinetics of the coating layer around the milling ball in order to support the hypothesis underlying Eqs. 14-16.
To this aim, we expressly designed and carried out experiments involving a different ball mill and a different adhesive material. Specifically, in view of the number of preliminary runs to be done under different conditions, we used polyvinyl acetate glue. In addition, we equipped the ball mill with the sensors needed to monitor the milling conditions and obtain in situ indirect information on the rheological behaviour of the material inside the vial. Details are given in the following. Now, it is worth noting that Equation (18) satisfactorily describes the variation of k(t). The best-fitted curve almost overlaps the k(t) estimates, providing significant support to the theoretical model proposed. In particular, it seems definitely reasonable to expect that cohesive states develop gradually under the effect of adhesive forces between the coated ball and the material present in the vial. The mechanism is analogous to that governing the formation of snowballs. Accordingly, the rate of adhesion to the coating depends on the amount of material involved in the coating itself and the adhesion process proceeds autocatalytically until all the material inside the vial has been collected or other competing disaggregation processes intervene to limit the growth of the coating layer.
The satisfactory best-fitting allowed obtaining reliable estimates for the different unknown quantities in Equation (18). In particular, it suggests an induction period, t 0 , about 27 min long and, for times shorter than 27 min (not shown for clarity in Figure 3), an initial value, k 0 , of about 0.01 min −1 for the rate coefficient. The rate of radius increase, c, is approximately equal to 0.1 mm min −1 , a value that properly accounts for the rapid formation of the coating layer.
The results discussed heretofore suggest that the kinetic modelling can suitably explain the experimental findings concerning the Knoevenagel condensation between vanillin and barbituric acid carried out under mechanical activation conditions [8]. Specifically, it seems that the simplest possible kinetic assumption that the reaction occurs in a fraction of the processed material already at the first impact, combined with the hypothesis that the rate of coating formation is proportional to the total surface that can be coated, leads to kinetic equations able to reproduce the experimental data quite satisfactorily.
However, there are at least two issues that deserve further investigation in the attempt of gaining a deeper insight into the behaviour of the processed material and validating, as far as possible, the kinetic model proposed. On the one hand, it is worth investigating whether or not cohesive, rubber-like states can form under the effects of a mechanical action different from the one imparted to milling balls by the Retsch MM400 shaker-type ball mill used in previous work [8]. On the other, it is highly desirable to obtain direct evidence concerning the formation kinetics of the coating layer around the milling ball in order to support the hypothesis underlying Equations (14)- (16).
To this aim, we expressly designed and carried out experiments involving a different ball mill and a different adhesive material. Specifically, in view of the number of preliminary runs to be done under different conditions, we used polyvinyl acetate glue. In addition, we equipped the ball mill with the sensors needed to monitor the milling conditions and obtain in situ indirect information on the rheological behaviour of the material inside the vial. Details are given in the following.
Supporting Experiments
Experiments were performed using a SPEX Mixer/Mill 8000. It was chosen because of the different mechanical action with respect to the Retsch MM400 ball mill. While the latter makes the vials oscillate on the horizontal plane with a simple harmonic motion, the SPEX Mixer/Mill 8000 swings the vial along a three-dimensional trajectory that combines a vertical harmonic motion with synchronous oscillations on the horizontal plane. It follows that the vial movement results in a more efficient stirring of powder and a wider volume spanned by the milling ball. In addition, we also used a SPEX vial with a flat base, which makes ball trajectories inside the vial more complicated and impacts on the vial walls more energetically.
The use of a SPEX Mixer/Mill 8000 has another advantage. The milling dynamics have been thoroughly studied under the most diverse experimental conditions [19][20][21], which provided significant help in the planning of experiments and the interpretation of their results. Following previous work [19], a single milling ball was used. Furthermore, the ball mill was equipped with a piezoelectric transducer in order to monitor the occurrence of impacts between the ball and the vial walls. The sensor was placed on the bottom end of the vial and connected to a computer to record the sequence of electric signals generated by impacts. The polymer, initially in the liquid phase, was added as a partially dried powder.
A short sequence of signals generated by the piezoelectric sensor is shown in Figure 4. Signals are regularly spaced, thus indicating that the ball underwent regular dynamics. Two impacts per vial cycle were detected, which corresponds to an impact frequency of about 23.2 Hz. swings the vial along a three-dimensional trajectory that combines a vertical harmonic motion with synchronous oscillations on the horizontal plane. It follows that the vial movement results in a more efficient stirring of powder and a wider volume spanned by the milling ball. In addition, we also used a SPEX vial with a flat base, which makes ball trajectories inside the vial more complicated and impacts on the vial walls more energetically. The use of a SPEX Mixer/Mill 8000 has another advantage. The milling dynamics have been thoroughly studied under the most diverse experimental conditions [19][20][21], which provided significant help in the planning of experiments and the interpretation of their results. Following previous work [19], a single milling ball was used. Furthermore, the ball mill was equipped with a piezoelectric transducer in order to monitor the occurrence of impacts between the ball and the vial walls. The sensor was placed on the bottom end of the vial and connected to a computer to record the sequence of electric signals generated by impacts. The polymer, initially in the liquid phase, was added as a partially dried powder.
A short sequence of signals generated by the piezoelectric sensor is shown in Figure 4. Signals are regularly spaced, thus indicating that the ball underwent regular dynamics. Two impacts per vial cycle were detected, which corresponds to an impact frequency of about 23.2 Hz. We recorded long sequences of signals to gain information on the ball dynamics on long time-scales. Although more complex analyses can be performed, the distance between two consecutive signals and the intensity of signals allow sufficient insight in this respect. The former gives information on the periodicity of the ball motion, with relatively short values pointing out the occurrence of irregular trajectories with multiple rebounds on the vial base and cylindrical wall. The latter measures the energy dissipation at collisions, with a decrease of signal intensity for less energetic collisions.
The distance between two consecutive signals, ∆, is shown in Figure 5a as a function of time, . We recorded long sequences of signals to gain information on the ball dynamics on long time-scales. Although more complex analyses can be performed, the distance between two consecutive signals and the intensity of signals allow sufficient insight in this respect. The former gives information on the periodicity of the ball motion, with relatively short values pointing out the occurrence of irregular trajectories with multiple rebounds on the vial base and cylindrical wall. The latter measures the energy dissipation at collisions, with a decrease of signal intensity for less energetic collisions.
The distance between two consecutive signals, ∆, is shown in Figure 5a as a function of time, t. It can be seen that ∆ remains approximately constant during the first 6 min of BM. Then, it decreases slowly for other 13 min, after which it keeps the constant value of 23.2 Hz, twice the milling frequency. Overall, data indicate that the ball never undergoes chaotic dynamics. However, after about 20 min, there is a clear transition to an extremely regular behaviour. Based on the impact frequency as well as on previous work on other systems [19][20][21], it can be inferred that the ball has reached the most regular possible dynamics. Accordingly, it simply travels between the opposite bases undergoing almost perfectly inelastic collisions.
The relative intensity of signals, ℎ, normalized to the maximum ℎ value observed, is plotted in Figure 5b. Two distinct sequences of ℎ values were detected, simply due to the position of the piezoelectric sensor. Being placed externally on the bottom end of the vial, it recorded with higher intensity the impacts occurring on the bottom base. Those occurring on the vial cap were much less intense, also because of the O-ring that, while assuring a good sealing of the vial, dampened any vibration. Stronger and weaker signals exhibited a similar variation with time. They initially kept an approximately constant intensity. Then, the intensity decreased progressively, and significantly, until a new constant value was reached.
The variation of ℎ is substantially synchronous with the variation of the distance ∆ between consecutive impacts. This strongly suggests that something happened to the ball after about 6 min of mechanical processing, as evident from the direct inspection of the ball after 3 min and after 24 min of milling. Before the transition in the ball dynamics occurred, the ball was still quite clean. Conversely, once the ball had reached the new dynamics, the ball was completely coated with material as shown by the picture in Figure 6. It can be seen that ∆ remains approximately constant during the first 6 min of BM. Then, it decreases slowly for other 13 min, after which it keeps the constant value of 23.2 Hz, twice the milling frequency. Overall, data indicate that the ball never undergoes chaotic dynamics. However, after about 20 min, there is a clear transition to an extremely regular behaviour. Based on the impact frequency as well as on previous work on other systems [19][20][21], it can be inferred that the ball has reached the most regular possible dynamics. Accordingly, it simply travels between the opposite bases undergoing almost perfectly inelastic collisions.
The relative intensity of signals, h, normalized to the maximum h value observed, is plotted in Figure 5b. Two distinct sequences of h values were detected, simply due to the position of the piezoelectric sensor. Being placed externally on the bottom end of the vial, it recorded with higher intensity the impacts occurring on the bottom base. Those occurring on the vial cap were much less intense, also because of the O-ring that, while assuring a good sealing of the vial, dampened any vibration. Stronger and weaker signals exhibited a similar variation with time. They initially kept an approximately constant intensity. Then, the intensity decreased progressively, and significantly, until a new constant value was reached.
The variation of h is substantially synchronous with the variation of the distance ∆ between consecutive impacts. This strongly suggests that something happened to the ball after about 6 min of mechanical processing, as evident from the direct inspection of the ball after 3 min and after 24 min of milling. Before the transition in the ball dynamics occurred, the ball was still quite clean. Conversely, once the ball had reached the new dynamics, the ball was completely coated with material as shown by the picture in Figure 6. Based on the evidence mentioned above, we opened regularly the vial to weigh the ball during the transition period starting from 5 min of milling. We repeated the experiments three times to suitably account for experimental uncertainties. At the beginning of BM, the material inside the jar kept its powder form, although it showed tendency to aggregate. As the coating layer around the ball began to form, part of the powder adhered to the rubber-like cohesive state. Generally, it could be recognized and easily separated from the coated ball. In the later BM stages, the material not yet involved in the formation of the cohesive layer around the ball also formed irregular coatings on the vial. Eventually, all the material was consumed in the coating layer. The results are shown in Figure 7, where the mass of the coated ball, , is plotted as a function of time, . Data arranged according to a concave up, increasing curve. Assuming that the mechanical processing does not change the density of the adhesive material, so that it keeps constant throughout the milling, Equation 14 can be readily utilized for expressing the mass of the coated ball. It can be rewritten as 4 3 where and are the densities of adhesive and ball, equal to 1.1 and 7.85 g cm −3 respectively. We used Eq. 21 to interpolate the experimental data in Figure 7. It can be seen that it works quite well. Thus, it can be reasonably deduced that the coating around the ball grows proportionally to the total surface of the coated layer and that the thickness of the coated layer varies linearly with time. Based on the evidence mentioned above, we opened regularly the vial to weigh the ball during the transition period starting from 5 min of milling. We repeated the experiments three times to suitably account for experimental uncertainties. At the beginning of BM, the material inside the jar kept its powder form, although it showed tendency to aggregate. As the coating layer around the ball began to form, part of the powder adhered to the rubber-like cohesive state. Generally, it could be recognized and easily separated from the coated ball. In the later BM stages, the material not yet involved in the formation of the cohesive layer around the ball also formed irregular coatings on the vial. Eventually, all the material was consumed in the coating layer. The results are shown in Figure 7, where the mass of the coated ball, m, is plotted as a function of time, t. Data arranged according to a concave up, increasing curve. Based on the evidence mentioned above, we opened regularly the vial to weigh the ball during the transition period starting from 5 min of milling. We repeated the experiments three times to suitably account for experimental uncertainties. At the beginning of BM, the material inside the jar kept its powder form, although it showed tendency to aggregate. As the coating layer around the ball began to form, part of the powder adhered to the rubber-like cohesive state. Generally, it could be recognized and easily separated from the coated ball. In the later BM stages, the material not yet involved in the formation of the cohesive layer around the ball also formed irregular coatings on the vial. Eventually, all the material was consumed in the coating layer. The results are shown in Figure 7, where the mass of the coated ball, , is plotted as a function of time, . Data arranged according to a concave up, increasing curve. Assuming that the mechanical processing does not change the density of the adhesive material, so that it keeps constant throughout the milling, Equation 14 can be readily utilized for expressing the mass of the coated ball. It can be rewritten as 4 3
Discussion
where and are the densities of adhesive and ball, equal to 1.1 and 7.85 g cm −3 respectively. We used Eq. 21 to interpolate the experimental data in Figure 7. It can be seen that it works quite well. Thus, it can be reasonably deduced that the coating around the ball grows proportionally to the total surface of the coated layer and that the thickness of the coated layer varies linearly with time. Assuming that the mechanical processing does not change the density of the adhesive material, so that it keeps constant throughout the milling, Equation (14) can be readily utilized for expressing the mass of the coated ball. It can be rewritten as
Discussion
where ρ a and ρ b are the densities of adhesive and ball, equal to 1.1 and 7.85 g cm −3 respectively. We used Equation (21) to interpolate the experimental data in Figure 7. It can be seen that it works quite well. Thus, it can be reasonably deduced that the coating around the ball grows proportionally to the total surface of the coated layer and that the thickness of the coated layer varies linearly with time.
Discussion
The results presented heretofore provide significant insight into the main factors and processes underlying the kinetics of the mechanically activated Knoevenagel condensation between vanillin and barbituric acid investigated in previous work [8], and help clarify the interplay between the degree of chemical conversion and the rheological properties of the substances subjected to BM.
The first aspect to note concerns the shape of the kinetic curve that is suggested by experimental points shown in Figure 1. As already observed by Hutchings et al. [8], the chemical conversion is initially relatively slow and only subsequently its rate undergoes a marked increase that can be related to a modification of the rheological properties of the processed chemicals. The kinetic analysis we have carried out throws new light on both these issues.
First, it is worth noting that the kinetic model describes satisfactorily the initial portion of the kinetic curve without the need for a time dependent rate coefficient k. Accordingly, experimental data can be best-fitted by Equation (7) up to a milling time of about 27 min, which corresponds to the induction period t 0 . Therefore, the initial portion of the kinetic curve that best fits the data in Figure 1 has the simple exponential character resulting from the assumption that the final product forms when the powder is processed above CLCs for the first time. In this regard, it is worth remembering that the kinetic model relates the chemical conversion under BM conditions to individual impacts, but makes no microscopic hypothesis on the kinetic processes and mechanisms that rule the transformation during the impact. It follows that the shape of the kinetic curve is not determined by chemistry, but mostly by the number of times the powder has to be processed above CLCs to trigger the chemical reaction.
Second, the increase in transformation rate observed in the second stage of the Knoevenagel condensation has been correctly ascribed to the modification of the rheological behaviour of the processed substances and, in particular, to the formation of rubber-like, cohesive states around the milling ball. There exists a simple phenomenological explanation for the fact that the formation of a coating layer around the milling ball gives rise to a rate increase of chemical conversion. The amount of material processed above CLCs is, somehow, proportional to the amount of material that the ball traps during individual impacts. In turn, the latter quantity depends, somehow, on the size of the milling ball. In the presence of adhesion processes, a layer of adhesive material coats the ball and its thickness increases with time, making the coated ball bigger and bigger. It follows that the coated ball can trap an ever-increasing amount of material and the resulting behaviour mimics an autocatalytic chemical process.
In the absence of a proper kinetic modelling, it is quite hard to go beyond the phenomenological explanation mentioned above. However, we used a suitable conceptual framework to describe the fundamentals of BM and our kinetic model can be used to quantitatively analyse the kinetics of the Knoevenagel condensation between vanillin and barbituric acid studied in previous work [8]. Briefly, we utilized a general form of model equations, namely Equations (19) and (20), involving a time-dependent rate coefficient and we performed a direct inversion procedure to best fit the experimental data in Figure 1 and extract the best-fitted values of the time-dependent rate coefficient k(t). Plotted in Figure 3, the obtained best-fitted values clearly show that k(t) increases with time, providing a first, generic support to the hypothesis that the apparent autocatalytic behaviour is simply the result of an increased trapping efficiency of the coated ball. A trivial data analysis suggests that k(t) exhibits a cubic dependence on time. As shown by Equations (15) and (16), this is exactly the time dependence expected for k(t) based on the assumption that the amount of material trapped by the ball during individual impacts is proportional to the ball size. Moreover, Equation (18) is able to best fit the spectrum of k(t) values. Thus, we have indirect information on the formation rate of the rubber-like cohesive states. It appears that the layer thickness of the adhesive material grows at about 0.1 mm min -1 , which is a reasonable value for the adhesion processes observed in previous work concerning the Knoevenagel condensation [8].
Starting from the results of the kinetic analysis discussed so far, we have gone a step further. In particular, we carried out new experiments to verify the occurrence of adhesion processes in a ball mill with a different mechanical action, ascertain their time dependence and investigate the underlying kinetics in situ. To this aim, we processed suitable amounts of polyvinyl acetate glue in a SPEX Mixer/Mill 8000 in the presence of a single milling ball and at a relatively low milling frequency. The experimental findings give clear indications regarding both the reliability of our methodological approach to monitor the milling dynamics and the rate of the adhesion processes. In particular, a piezoelectric transducer on the bottom end of the vial suffices to point out the occurrence of a transition in the impact conditions during the BM of the polyvinyl acetate glue. The gradual modification of the impact conditions involves a regularization of the ball trajectories between the opposite vial bases and can be ascribed to the formation of a coating layer around the milling ball. We measured the thickness of the coating layer and we found that it increases with time according to the third power of time. This means that the rate of coating formation is proportional to the coating surface and that the coating thickness increases linearly with time. Therefore, the experimental findings justify a posteriori the hypothesis made initially to carry out the kinetic analysis. Overall, this provides direct support to the kinetic analysis carried out on the data describing the kinetics of the Knoevenagel condensation and it suggests that our approach can be extended to similar cases.
Materials and Methods
For convenience, we report the experimental conditions that, in previous work [8], allowed observing the formation of cohesive, rubber-like states during the Knoevenagel condensation between vanillin and barbituric acid. Chemicals with >98 % purity were purchased from Sigma Aldrich UK and were used as received, unless indicated. BM experiments were carried out using a Retsch MM400 mixer mill. Vanillin (0.29 g, 1.9 mmol), barbituric acid (0.24 g, 1.9 mmol), the required amount of water, and a steel grinding ball (13.6 g) were added to a 25 cm 3 stainless steel vial. The vial was shaken at 25 Hz for the required amount of time. The product was a powder, yellow if the reaction was incomplete, orange if complete [8].
The new experiments aiming at explaining the formation of rubber-like cohesive states were carried out using polyvinyl acetate in the form of commercial wood glue. Preliminary tests at room temperature showed that the liquid exhibited relatively low viscosity. Viscosity progressively increased as the glue hardened, which took place on the time scale of about 2 h.
We added 5 mL of the liquid adhesive to the hardened steel vial of a SPEX Mixer/Mill 8000 together with a stainless steel ball with diameter of 0.62 cm and mass of about 7.8 g. Once sealed, the vial was clamped on the mill and equipped with a piezoelectric transducer on the bottom end to detect the collisions between ball and container. Then, the mill was operated at 11.6 Hz, which is the lowest milling frequency that the mill can bear to keep the vial swing regular.
We interrupted the milling and opened the vial at regular time intervals to take the ball coated with adhesive material, and weighed it using a laboratory precision balance able to read four decimal places. Every time, vial and ball were perfectly cleaned, a further 5 mL of adhesive was put inside the vial, and the milling restarted up to the subsequent time interval selected.
Conclusions
The use of a kinetic model able to take into due account the intrinsic statistical nature of BM allows the phenomenological interpretation of the sigmoidal kinetics exhibited by the Knoevenagel condensation carried out under mechanical activation conditions. The observed kinetics are satisfactorily described assuming that reactants subjected to CLCs at least once transform into products and that the apparent rate constant increases with time after an induction period. The increase of the apparent rate constant with time is connected with the formation of a stable coating around the ball. As the processed material sticks to the ball, the amount of reactants subjected to the necessary CLCs per collision increases. The kinetic analysis suggests that the apparent rate constant varies with the third power of time and support experiments, performed using a polyvinyl acetate adhesive, confirm such time dependence. Specifically, the mass coating the ball increases according to third-power kinetics. This supports the kinetic analysis carried out and provides a general conceptual framework to investigate similar case studies. | 11,256 | 2019-10-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
All-digital wavefront sensing for structured light beams
We present a new all-digital technique to extract the wavefront of a structured light beam. Our method employs non-homogeneous polarization optics together with dynamic, digital holograms written to a spatial light modulator to measure the phase relationship between orthogonal polarization states in real-time, thereby accessing the wavefront information. Importantly, we show how this can be applied to measuring the wavefront of propagating light fields, over extended distances, without any moving components. We illustrate the versatility of the tool by measuring propagating optical vortices, Bessel, Airy and speckle fields. The comparison of the extracted and programmed wavefronts yields excellent agreement. © 2014 Optical Society of America OCIS codes: (140.3295) Laser beam characterization; (010.7350) Wave-front sensing; (120.5050) Phase measurement; (090.1995) Digital holography; (050.4865) Optical vortices. References and links 1. B. Hermann, E. J. Fernndez, A. Unterhuber, H. Sattmann, A. F. Fercher, W. Drexler, P. M. Prieto, and P. Artal, “Adaptive optics ultrahigh-resolution optical coherence tomography,” Opt. Lett. 29, 2142–2144 (2004). 2. A. Roorda, F. Romero-Borja, I. William Donnelly, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10, 405–412 (2002). 3. M. A. A. Neil, R. Juakaitis, M. J. Booth, T. Wilson, T. Tanaka, and S. Kawata, “Adaptive aberration correction in a twophoton microscope,” J. Microsc. 200, 105–108 (2000). 4. M. Rueckel, J. A. Mack-Bucher, and W. Denk, “Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing,” Proc. Natl. Acad. Sci. 103, 17137–17142 (2006). 5. M. Booth, M. Neil, and T. Wilson, “Aberration correction for confocal imaging in refractive-index-mismatched media,” J. Microsc. 192, 90–98 (1998). 6. J. C. Ricklin and F. M. Davidson, “Atmospheric turbulence effects on a partially coherent Gaussian beam: implications for free-space laser communication,” J. Opt. Soc. Am. A 19, 1794–1802 (2002). 7. F. Roddier, M. Séchaud, G. Rousset, P.-Y. Madec, M. Northcott, J.-L. Beuzit, F. Rigaut, J. Beckers, D. Sandler, P. Lena, and O. Lai, Adaptive Optics in Astronomy (Cambridge University, 1999). 8. M. Paurisse, M. Hanna, F. Druon, and P. Georges, “Wavefront control of a multicore ytterbium-doped pulse fiber amplifier by digital holography,” Opt. Lett. 35, 1428–1430 (2010). 9. R. Navarro and E. Moreno-Barriuso, “Laser ray-tracing method for optical testing,” Opt. Lett. 24, 951–953 (1999). 10. S. R. Chamot, C. Dainty, and S. Esposito, “Adaptive optics for ophthalmic applications using a pyramid wavefront sensor,” Opt. Express 14, 518–526 (2006). 11. M. P. Rimmer and J. C. Wyant, “Evaluation of large aberrationsusing a lateral-shear interferometer having variable shear,” Appl. Opt. 14, 142–150 (1975). #211246 $15.00 USD Received 1 May 2014; revised 23 May 2014; accepted 23 May 2014; published 30 May 2014 (C) 2014 OSA 2 June 2014 | Vol. 22, No. 11 | DOI:10.1364/OE.22.014031 | OPTICS EXPRESS 14031 12. S. Velghe, J. Primot, N. Guérineau, M. Cohen, and B. Wattellier, “Wave-front reconstruction from multidirectional phase derivatives generated by multilateral shearing interferometers,” Opt. Lett. 30, 245–247 (2005). 13. J. Millerd, N. Brock, J. Hayes, M. North Morris, M. Novak and J. Wyant, “Pixelated phase-mask dynamic interferometer,” Proc. SPIE 5531, 304–314 (2004). 14. M. North Morris, J. Millerd, N. Brock, J. Hayes, B. Saif, “Dynamic Phase-Shifting Electronic Speckle Pattern Interferometer,” Proc. SPIE 5869, 58691B (2005). 15. R. G. Lane and M. Tallon, “Wave-front reconstruction using a Shack-Hartmann sensor,” Appl. Opt. 31, 6902– 6908 (1992). 16. C. Schulze, D. Naidoo, D. Flamm, O. A. Schmidt, A. Forbes, and M. Duparré, “Wavefront reconstruction by modal decomposition,” Opt. Express 20, 19714–19725 (2012). 17. C. Schulze, A. Dudley, D. Flamm, M. Duparré, and A. Forbes, “Reconstruction of laser beam wavefronts based on mode analysis,” App. Opt. 52(21), 5312–5317 (2013). 18. G. G. Stokes, ”On the composition and resolution of streams of polarized light from different sources,” Trans. Cambridge Philos. Soc., 96, 399 (1852). 19. G. G. Stokes, Mathematical and Physical Papers (Cambridge University, 1922). 20. I. Freund, ”Poincaré vortices,” Opt. Lett. 26, 1996–1998 (2001). 21. V. Denisenko, A. Minovich, A. Desyatnikov, W. Krolikowski, M. Soskin, and Y. Kivshar, ”Mapping phases of singular scalar light fields,” Opt. Lett. 33, 89–91 (2008). 22. I. Freund, A. I. Mokhun, M. S. Soskin, O. V. Angelsky, and I. I. Mokhun, ”Stokes singularity relations,” Opt. Lett. 27, 545–547 (2002). 23. S. Vyas, Y. Kozawa, and S. Sato, ”Polarization singularities in superposition of vector beams,” Opt. Express 21(7), 8972–8986 (2013). 24. H. Yan and B. Lü, ”Spectral Stokes singularities of stochastic electromagnetic beams,” Opt. Lett. 34(13), 1933– 1935 (2009). 25. H. Yan and B. Lü, ”Propagation of spectral Stokes singularities of stochastic electromagnetic beams through an astigmatic lens,” J. Opt. Soc. Am. B 27(3), 375–381 (2010). 26. Y. Luo and B. Lü, ”Spectral Stokes singularities of partially coherent radially polarized beams focused by a high numerical aperture objective,” J. Opt. 12, 115703(2010). 27. O. Korotkova and E. Wolf, ”Generalized Stokes parameters of random electromagnetic beams,” Opt. Lett. 30(2), 198–200 (2005). 28. D. Andrews, Structured Light and Its Applications (Academic, 2011). 29. A. Forbes, Laser Beam Propagation: Generation and Propagation of Customized Light (CRC, 2013). 30. Y. Li, J. Kim, and M. J. Escuti, “Orbital angular momentum generation and mode transformation with high efficiency using forked polarization gratings,” Appl. Opt. 51, 8236–8245 (2012). 31. A. Dudley, Y. Li, T. Mhlanga, M. Escuti, and A. Forbes, “Generating and measuring nondiffracting vector Bessel beams,” Opt. Lett. 38(17), 3429–3432 (2013). 32. D. Goldstein, Polarized Light (Marcel Dekker, 2004). 33. C. Schulze, D. Flamm, M. Duparré, and A. Forbes, “Beam-quality measurements using a spatial light modulator,” Opt. Lett., 37(22), 4687–4689 (2012). 34. G. Thalhammer, R. W. Bowman, G. D. Love, M. J. Padgett, and M. Ritsch-Marte, “Speeding up liquid crystal SLMs using overdrive with phase change reduction,” Opt. Express 21(2), 1779–1797 (2013). 35. D. Flamm , C. Schulze, D. Naidoo, S. Schröter, A. Forbes, and M. Duparré, “All-digital holographic tool for mode excitation and analysis in optical fibers,” J. Lightwave Technol. 31(7), 1023–1032 (2013). 36. D. Flamm, O. A. Schmidt, C. Schulze, J. Borchardt, T. Kaiser, S. Schröter, and M. Duparré, “Measuring the spatial polarization distribution of multimode beams emerging from passive step-index large-mode area fibers,” Opt. Lett. 35, 3429–3431 (2010). 37. Q. Cui, M. Li, Z.Yu, “Influence of topological charges on random wandering of optical vortex propagating through turbulent atmosphere,” Opt. Comm. 329, 10–14 (2014).
Introduction
Optical aberrations are inevitable in nearly all optical systems, leading to the quest for efficient and precise measurement techniques of the phase or wavefront of an optical field.Accurate wavefront estimation is significant in areas such as ophthalmology [1,2], microscopy [3][4][5], free-space communication [6] and astronomy [7].Laser material processing also relies heavily on wavefront contol as unwanted aberrations can hinder beam quality necessary for cutting and drilling [8].Some conventional and state-of-the-art methods to extract the wavefront of an optical field exist, ranging from ray tracing [9], pyramid sensors [10], interferometers [11][12][13][14], the Shack-Hartmann sensor [15], to the use of correlation filters [16] via modal decomposition [17].However, these techniques are often over-complicated and some of them are unable to detect phase singularities due to an absence of light at the singularity.
To extract the individual phase singularities present in a sample beam, Stokes polarimetry [18,19] can be performed once the linearly polarized (y−axis) sample beam is interfered with a x−polarized reference beam [20].This approach, to exploit the amplitude and phase relationship between orthogonal states of polarization, has been implemented on scalar fields to resolve the topology of neutral pairs of closely positioned phase singularities in speckle fields [21].Apart from implementing Stokes polarimetry to investigate phase singularities, it can be used to study polarization singularities in coherent beams [22,23] and stochastic electromagnetic beams [24,25] as well as partially coherent radially polarized beams [26] by using the spectral Stokes parameters [27].Although these techniques can reconstruct closely spaced singularities, they require the manual adjustment of various optical components for the extraction of the Stokes parameters.Moreover, these techniques are designed to measure the wavefront at a fixed plane.Consequently, no study to date has measured wavefronts of propagating fields over extended distances in real-time.
In this work we control the dynamic and geometric phase of light to construct an adjustmentfree scheme for the real-time measurement of propagating wavefronts.We devise a novel approach that employs non-homogeneous polarization optics together with digital holograms encoded on a spatial light modulator (SLM).Since these holograms are dynamic, we can demonstrate for the first time Stokes polarimetry in real-time on propagating beams.We illustrate the robustness of our technique by measuring the wavefront of a variety of static and propagating structured light beams [28,29] such as vortex, Bessel, Airy and speckle fields.We demonstrate that we can reconstruct wavefronts with very high fidelity and resolution which allows us to observe the movement of optical vortices during propagation.Our approach is likely to be useful for the characterization of structured and vector light fields.
Theory and concept
While the theory behind Stokes polarimetry is well known, we briefly outline it here for the benefit of the reader.Concurrently, we will illustrate how novel additions to the standard Stokes measurements, namely (1) the inclusion of non-homogeneous polarization optics in the form of a polarization grating (PG) [30,31] and (2) encoding the SLM to mimic homogeneous polarization optics (such as a quarter-waveplate), can transform this manual-based procedure to an all-digital one.A further departure point is that the SLM can also be used to propagate the field under study, so that the wavefront measurements can be done over extended propagation distances without any moving parts.
First, consider the incoherent superposition between two optical fields of orthogonal polarization states, e.g.horizontal and vertical, representing a vector field where: are the horizontally and vertically polarized components, respectively.x (ŷ) are unit vectors parallel (perpendicular) to the propagation axis.|U ↔ (r, φ )| and |U (r, φ )| are amplitudes, and δ ↔ (r, φ ) and δ (r, φ ) are phases of the horizontal and vertical components, respectively.Let's define the phase difference between the horizontal and vertical components as δ (r, φ ) = δ ↔ (r, φ ) − δ (r, φ ).The vector light field (of Eq. ( 1)) has a spatially inhomogeneous state of polarization given by the equation: illustrating that the state of polarization is different at every spatial point (r, φ ).In general, the state of polarization at a spatial point (r, φ ) can be described by a polarization ellipse, where the angle of orientation of the polarization ellipse is given by δ (r, φ ).
We can assign one of the polarization states (vertical) to represent the field of interest, i.e. the field whose wavefront, δ (r, φ ), we wish to reconstruct, while the other orthogonal polarization state (horizontal) can represent a reference field with a known (or flat) wavefront, for example: Here each component consists of a common Gaussian field, while the vertical component has an additional phase term [exp(iδ (r, φ ))].Most liquid crystal on silicon (LCOS) SLMs only diffract the vertical component of an incident field into the off-axis, diffraction orders which possess the encoded phase profile.The horizontal component remains impervious to the encoded phase profile in the on-axis, undiffracted order.This can be referred to as "diffraction-inefficiency".We exploit the diffraction-inefficiency to experimentally realise the fields such as those in Eq. ( 4).A linearly polarized Gaussian beam illuminating the liquid crystal display (LCD) of a SLM encoded with the phase profile, exp(iδ (r, φ )), will modify the phase of only the vertical component by δ (r, φ ), while the horizontal component remains unchanged as depicted in Eq. ( 4).Consequently, the incoherent mixture of the two components after the SLM can be described in general by Eq. ( 1) where the weightings of the two modes can be controlled by adjusting the orientation of the polarization state of the field illuminating the LCD via the use of a polarizer.The extraction of the phase difference between the horizontal and vertical components (δ (r, φ )) can be achieved via Stokes polarimetry which necessitates four separate intensity profile measurements [32]: The Stokes parameters are directly related to the polarization ellipse; in terms of the Stokes parameters, the angle of orientation and the ellipticity of the polarization ellipse at every spatial point (r, φ ) are given, respectively, by the equations [32]: The salient feature of Eq. ( 6) is that the phase difference δ (r, φ ) between the horizontal and vertical components of a vector light field at a point (r, φ ) is completely described by the ellipticity of the corresponding polarization ellipse at that point.In turn, the ellipticity of the polarization ellipse is completely described by the Stokes parameters S 3 (r, φ ) and S 2 (r, φ ) which are easily measured via intensity measurements.Therefore, the phase difference δ (r, φ ) between the horizontal and vertical components is easily measured by measuring the Stokes parameters S 3 (r, φ ) and S 2 (r, φ ).A similar measurement has been done in [21] by implementing an interferometer.The two intensity profiles, I 45 • and I 135 • , pertaining to S 2 can be measured behind a polarizer at angular orientations of 45 • and 135 • [as depicted in Fig. 1(a)] and those pertaining to S 3 (I R and I L ) by introducing a preceding quarter-waveplate [as depicted in Fig. 1(b)].Thus four measurements are required, and to date have been performed manually on static fields.These four manual measurements can be reduced to two digital measurements, by replacing the quarter-waveplate and polarizer in Fig. 1(b) with a PG which acts as a beam-splitter for right-and left-circular polarization, hence extracting I R and I L in a single measurement as shown in Fig. 1(d).Furthermore to measure the remaining two intensity profiles, I 45 • and I 135 • , a quarter-waveplate with its fast-axis set at 45 • needs to be introduced before the PG to project I 45 • and I 135 • into the two detectable ports: I R and I L , respectively.Since a quarter-wave plate induces a quarter-wavelength phase shift on an incident beam thus converting linearly polarized light to circular and visa versa, this can be achieved by encoding the SLM with an additional π/2 phase term, as illustrated in Fig. 1(c).Previously, adjustments of the polarizer and quarterwaveplate would need to be made while acquiring measurements [Figs 1(a Apart from implementing this technique to extract the wavefront of unknown optical fields at a single plane, its use can be extended to investigate the field's wavefront at multiple planes along the beam's propagation.Traditionally this would be achieved by moving the wavefront sensing device transversely along the beam's propagation axis [illustrated in Fig. 2(a)].Not only does the alignment become problematic (e.g., in the case of monitoring the trajectories of phase singularities), but it is also a time consuming procedure if many propagation steps are required.In a more advantageous approach we digitally simulate free-space propagation on the SLM [Fig.2(b)].This digital propagation technique [33] manipulates the spatial frequency spectrum of a beam of interest by first Fourier transforming the beam at the plane of the LCD with a lens and second by simultaneously displaying the phase term exp(ik z z) on the SLM while implementing the inverse Fourier transform with a lens to the plane of the detector, Merging these two digital methods (wavefront-sensing and propagation) together provides a tool that does not require any information of the optical field under investigation as well as no moving optical components.
Experimental Methodology
To measure the wavefront of structured light beams we used an experimental setup as outlined in Fig. 3(a).A helium-neon laser (power ∼ 10 mW, λ ∼ 633 nm) was expanded and collimated [ f (L 1 ) = 20 mm and f (L 2 ) = 100 mm] to illuminate the LCD of a reflective SLM (Pluto Holoeye, 1920 × 1080 pixels, 8 μm pixel pitch, calibrated for λ ∼ 633 nm) preceeded by a polarizer (P) to set the amplitudes of the two orthogonal states depicted in Eq. ( 1).The LCD was encoded with phase-only azimuthal [Fig.3(b)], conical, cubic or random gray-level holograms to generate various structured light fields e.g.vortex, Airy, Bessel or speckle fields, respectively.
The fields generated at the plane of the LCD were either Fourier-transformed (lens L 3 ) or relayimaged (lenses L 4 and L 5 ) onto a CCD detector (PointGrey fire-wire CCD, 1600×1200 pixels), preceded by a PG, where the Stokes measurements, S 2 and S 3 , were recorded.The PG which was placed ∼ 5 mm before the CCD detector, was manufactured by implementing a polarization holography setup where the sample was exposed with the interference of two plane waves each of opposite circular polarization.Since most phase-only holograms produce their corresponding fields in only the near-field (NF) or only the far-field (FF) and not both simultaneously, the two imaging systems (Fourier and relay) allowed the user to easily switch between the two and measure the wavefront of a variety of structured light beams.The Fourier-transforming imaging system also allowed the user to digitally simulate free-space propagation on the LCD (as defined in Eq. ( 7)), providing a means to extract the wavefront at multiple planes along the beam's propagation in real-time.
Since the LCD can be dynamically addressed, we first displayed the hologram required to create our field of interest [e.g.Fig. 3(b)] followed by the same hologram encoded with an additional π/2 phase term [e.g.Fig. 3(c)].For each of the two holograms, the corresponding intensity profiles (I 45 • , I 135 • and I R , I L ) were recorded to determine the two Stokes parameters, S 2 and S 3 , respectively necessary for reconstructing the wavefront as defined in Eq. ( 6) and visualized theoretically in Fig. 3(d) and experimentally in Fig. 4. Similarly, this measurement was computed multiple times to extract the wavefront as the field was propagated by sequentially encoding the the phase term, exp(ik z z), for various values of z while the necessary Stokes measurements were recorded.
The intensity profile of the field under investigation (i.e. the beam whose wavefront was being extracted) was viewed by encoding a blazed grating over the hologram [e.g.Fig. 3(e)] with the PG removed from the setup.The desired, first diffraction order (with the reference, zero diffraction order removed) was then viewed on the CCD detector [e.g.Fig. 3(f)].The propagation of the intensity profile was also monitored by sequentially encoding the phase term, exp(ik z z), for various values of z together with the blazed grating.
Results and Discussion
The wavefronts of near-field, higher-order optical vortices [Fig.5(a)] and Bessel beams [Fig.5(d)] were measured via the digital Stokes polarimetry and are depicted in Fig. 5.It is evident that there is extremely good agreement between the experimentally extracted wavefronts, Fig. 5(b) [(e)], and the theoretically programmed phase profiles, Fig. 5(c) [(f)], for the vortex (Bessel) beams of azimuthal indices = −3 to +3.Apart from identifying unit and higherorder phase singularities, this technique can also distinguish the handedness of the azimuthal phase profile [illustrated by the white arrows in Fig. 5(b)].Closer inspection of the measured, higher-order (| | > 1) phase singularities reveals a separation into unit-charged singularities, denoted by the inserts in Figs 5(b) and 5(e).Even though the wavefront of the zero-order vortex displays no curvature [less than a 1% and evident from the uniform, red wavefront in Fig. 5(b)], the splitting of the higher-order vortices is attributed to their fundamental instability.The evolution of the wavefront for a Gaussian and vortex beam ( = +2) as they propagate was investigated and selected frames are shown in Figs 7(a) and 7(b), respectively.Unlike the previous results, these measurements were made in the far-field of the LCD plane [as illustrated in Fig. 3(a)] for the execution of the virtual propagation described in Eq. (7).One draw-back to performing Stokes polarimetry on the far-field mode is that the mode size is drastically smaller [by a factor of f (L 3 )] than that in the near-field, decreasing the resolution of the measured wavefronts.As the field propagates its wavefront becomes curved [evident in the frames of Fig. 7(a)] in agreement with the encoded phase profile [exp(ik z z)] which causes the two singularities in the vortex beam to move further away from one another, consequently appearing as if they are spirally around the beam axis [Fig.7
Current Capabilities and Future Improvements
Although our wavefront sensor implements custom optics such as the PG, it offers many advantages.Firstly, our approach can be used to measure the wavefront of propagating light fields, over extended distances, with a single static device [evident in the video clips: (Media 1) and (Media 2)].Since the necessary holograms can be addressed to the SLM at very high refresh rates [34], we are capable of extracting real-time measurements within a time-frame of a few milliseconds.To further increase this time frame, multiple holograms could be addressed via multiplexing [35] to allow the extraction of multiple measurement parameters in a single acquisition.We also propose that our approach could be combined with a current wavefront extraction technique (also based on SLM technology) known as modal decomposition [16,17,36].This will allow one to decompose arbitary laser modes to yield an all-digital wavefront extraction technique for unknown beams.
Another appeal of our wavefront sensing technique is that the CCD detector defines the resolution of the reconstructed wavefront.Even though the PG (positioned before the CCD) splits the beam, projecting each diffraction order to a detector area of half the original spatial resolution, our approach offers a spatial resolution of 10 6 sample points as opposed to devices based on lenslet arrays which currently only offer a resolution of 10 4 sample points.Furthermore, this technique does not necessitate that we restrict ourselves to a single detector thus offering an ad- ditional improvement in spatial resolution.With this high spatial resolution we can identify the position and topology of closely spaced phase singularities in random speckle fields [evident in Fig. 6(d)].This approach will allow one to experimentally verify current numerical studies of phase singularities propagating through atmospheric turbulence [37].
Conclusion
In conclusion, we have presented a new approach to conventional Stokes polarimetry that results in an all-digital, adjustment-free measurement of the wavefront of propagating structured light beams.As the digital holograms are dynamical addressed, the technique can be executed at refresh rates of 250 Hz to yield real-time wavefront measurements at various planes along the beam's propagation.We have successfully demonstrated this tool on beams that are usually difficult to analyse with traditional wavefront sensing techniques, namely, Airy, Bessel, vortex and speckle fields, while observing dynamic changes in their wavefronts during propagation.Since we are able to reconstruct wavefronts with high spatial resolution and fidelity, we suggest our diagnostic could be a versatile tool in numerous areas such as studies into the creation and annihilation of phase singularities, microscopy and free space communication.
) and 1(b)].However in our approach the necessary optics (PG and SLM) remain static and only a phase change is programmed on the SLM to mimic the required effect of a quarter-waveplate [Figs1(c) and 1(d)].
Fig. 1 .
Fig. 1.A comparison between (a) and (b) the standard, manual Stokes polarimetry and (c) and (d) our corresponding digital method for extracting S 2 and S 3 , respectively.The example illustrated here is an incoherent superposition of two Gaussian fields of orthogonal polarization where the vertical component has an additional azimuthal phase of exp(i3φ ).P: polarizer; λ /4: quater-waveplate; LCD: liquid crystal display of the SLM; and PG: polarization grating.Accompanying polarizer, quater-waveplate and LCD settings are given.
Fig. 2 .
Fig. 2. A comparison between (a) the standard, manual movement of a detector along a beam's propagation and (b) the digital propagation method.The example illustrated here is a Gaussian beam.WS: wavefront-sensing technique consisting of a LCD and PG; D: detector; and L: lens.Accompanying intensity profiles and phase patterns encoded on the LCD for particular propagation distances (z) are given as inserts.
Fig. 4 .
Fig. 4. Experimentally measured intensity profiles (a) I 45 • , I 135 • and (b) I R , I L (for a vortex beam of = +1) used to calculate the Stokes parameters (c) S 2 and (d) S 3 , respectively needed for the extraction of the (e) [(f)] measured (theoretical) wavefront.
Fig. 5 .
Fig. 5. Experimentally measured intensity profiles of (a) vortex beams and (d) Bessel beams.Some of the gray-level holograms used are given as inserts.Corresponding (b) [(e)] experimentally measured and (c) [(f)] theoretically calculated wavefronts for near-field vortex (Bessel) beams.The correlation between the measured and theoretical wavefronts varies from a minimum of 0.914 to a maximum of 0.988.Corresponding azimuthal indices are given in the top right corner.The white arrows highlight the handedness of the azimuthal phase.Inserts depict a magnified view of the singularities.
Fig. 6 .
Fig. 6.Experimentally measured intensity profiles of an (a) Airy beam and (b) speckle field.The gray-level holograms used are given as inserts.Corresponding experimentally measured (Exp.) and theoretically calculated (Th.) wavefronts for a near-field (c) Airy beam and (d) speckle field.
Fig. 7 .
Fig. 7. Experimentally measured wavefronts for a (a) Gaussian and (b) vortex beam ( = +2) at different propagation planes.Corresponding propagation distances are given in the bottom right corner.The full data (video) can be viewed in (a) (Media 1) and (b) (Media 2). | 5,863.2 | 2014-06-02T00:00:00.000 | [
"Engineering",
"Physics"
] |
Machine Learning Model as a Useful Tool for Prediction of Thyroid Nodules Histology, Aggressiveness and Treatment-Related Complications
Thyroid nodules are very common, 5–15% of which are malignant. Despite the low mortality rate of well-differentiated thyroid cancer, some variants may behave aggressively, making nodule differentiation mandatory. Ultrasound and fine-needle aspiration biopsy are simple, safe, cost-effective and accurate diagnostic tools, but have some potential limits. Recently, machine learning (ML) approaches have been successfully applied to healthcare datasets to predict the outcomes of surgical procedures. The aim of this work is the application of ML to predict tumor histology (HIS), aggressiveness and post-surgical complications in thyroid patients. This retrospective study was conducted at the ENT Division of Eastern Piedmont University, Novara (Italy), and reported data about 1218 patients who underwent surgery between January 2006 and December 2018. For each patient, general information, HIS and outcomes are reported. For each prediction task, we trained ML models on pre-surgery features alone as well as on both pre- and post-surgery data. The ML pipeline included data cleaning, oversampling to deal with unbalanced datasets and exploration of hyper-parameter space for random forest models, testing their stability and ranking feature importance. The main results are (i) the construction of a rich, hand-curated, open dataset including pre- and post-surgery features (ii) the development of accurate yet explainable ML models. Results highlight pre-screening as the most important feature to predict HIS and aggressiveness, and that, in our population, having an out-of-range (Low) fT3 dosage at pre-operative examination is strongly associated with a higher aggressiveness of the disease. Our work shows how ML models can find patterns in thyroid patient data and could support clinicians to refine diagnostic tools and improve their accuracy.
Introduction
Thyroid carcinomas are the most common endocrine cancers and are usually associated with good survival.Their incidence and mortality trends have been identified as being consistent with over-diagnosis, and several recent efforts have been made to mitigate this problem [1].Despite the usual good prognosis, some variants may appear more aggressive than other, influencing the mortality rate.The aggressive behavior has been ascribed to the histologic subtype and/or to the clinic-pathologic features, an issue that remains controversial [2].
Potential "aggressive variables" for consideration include the specific histology (welldifferentiated thyroid cancer versus poorly differentiated thyroid cancer), molecular profile, size and location of distant metastases (pulmonary metastases versus bone metastases versus brain metastases), functional status of the metastases (RAI avid versus 18FDG-PET avid) and effectiveness of initial therapy (completeness of resection, effectiveness of RAI, external beam radiation or other systemic therapies) [3].
Fine needle aspiration cytology (FNAC) is a simple, safe, cost-effective and accurate diagnostic tool for the initial screening of patients with thyroid nodules, but the recent literature data has shown some possible limits [4].False negatives are not so rare and should be related to sampling error (the size and number of nodules lead to heterogeneity and unsampled areas), while the majority of false-positive diagnoses are related to interpretative errors due, for example, to overlapping cytological features in adenomatous hyperplasia, thyroiditis and cystic lesions.
For these reasons, it is of fundamental importance to match the FNAC result with a series of other clinical and anamnestic data, in order to obtain adequate diagnostic sensitivity and specificities.In this regard, the American Thyroid Association recommended that serum thyrotropin (TSH) should be measured during the initial evaluation of a patient with a thyroid nodule; FNAC is the procedure of choice in the evaluation of thyroid nodules, and it is recommended for nodules > or = 1 cm in greatest dimension with high or intermediate suspicion sonographic pattern.Sonographic patterns with an estimated high risk of malignancy are solid hypoechoic nodules or solid hypoechoic components of a partially cystic nodule with one or more of the following features: irregular margins (infiltrative, micro-lobulated), microcalcifications, a taller-than-wide shape, rim calcifications with small extrusive soft tissue component, evidence of extra thyroidal extension (ETE).Sonographic patterns with estimated intermediate risk of malignancy are hypoechoic solid nodules with smooth margins and without microcalcifications, ETE, or a taller-than-wide shape [3].
Therefore, what the doctor must do when visiting a patient suffering from a thyroid nodule is combine all these variables and formulate his own suspicion of risk.If this process were infallible and repeatable, with high sensitivity and specificity, few misdiagnoses would be made.
For these purposes (containing over-diagnosis, predicting aggressive variants and refining diagnostic tools), a machine learning (ML) approach could offer the opportunity to stratify patients in risk classes and consequently to perform a more accurate diagnosis and therapy.
Recently, ML approaches have been successfully applied to healthcare datasets.However, these models often behave as black boxes and do not allow for clinical interpretation of results [5][6][7][8].
Contemporary clinical trials have shown that an artificial intelligence model's performance matched that of experienced radiologists and pathologists [8].Elliott Range DD et al. reported that the performance of the ML in predicting thyroid malignancy with FNAC is comparable to the performance of an expert cytopathologist, suggesting that matching ML and medical diagnoses can offer better performance than either alone [6].
The aim of this work is the application of ML to predict tumor histology (HIS), aggressiveness and post-surgical complications in a population of consecutive patients who underwent thyroid surgery in a single center during a 13-year period.
Data Collection
This retrospective study was conducted at a single academic center between January 2006 and December 2018 and received approval by the ethics committee of Maggiore Hospital (CEI 133/2022).We reviewed data about 1218 patients who underwent surgery at the ENT Division of Eastern Piedmont University, Novara (Italy).Informed consent was obtained from all subjects involved in the study.Data analysis includes only primitive thyroid disease.Thyroidectomy performed during total laryngectomy, parathyroidectomy or other major surgery are not included.Patients were excluded if their medical records were not available or missing.For each patient, the following data were collected: general information (sex, age, anthropometric data), clinical history (smoke, alcohol, radiation, comorbidity), thyroidal specific diseases (and their treatment), surgical options (type of resection, days of hospitalization, complications).Partial thyroidectomy "PT" (e.g., resection of one lobe +/− isthmus) was performed just in case of known or suspected monolateral benign disease.Total Thyroidectomy (TT), Near total thyroidectomy (NTT) and Sub-Total thyroidectomy (STT) were indicated in case of malignancy or symptomatic bilateral benign disease.In all patients, an external median cervical approach allowed the surgical excision with the purpose of identifying and preserving recurrent laryngeal nerves (RLN) and parathyroid glands.Peri-operative management includes the placement of a drainage tube (Jackson Pratt drainage with inner diameter about 2.2 mm), intravenous antibiotic prophylaxis (Cefamandole 2 gr) and accurate hemostasis with bipolar forceps, absorbable sutures and absorbable hemostatic devices like fibrillary or hemostatic sponges.All surgical complications were recorded, both general (hemorrhage, hematoma, other neck swelling, infection of surgical site) and specific to thyroidal surgery (hypocalcemia, recurrent laryngeal nerve palsy and less common external branches of the superior laryngeal nerve injury, esophageal lesion, tracheal perforation, subcutaneous emphysema, thoracic duct injury, cervical sympathetic nerve chain lesion).Serum calcium levels at 6, 24, 48, 72 and 96 h after surgery were recorded, as well as clinical signs and symptoms of hypocalcemia.Transient hypocalcemia (tHypoCa) was defined as serum calcium level at discharge < or = 8.0 mg/dL with the necessity of calcium supplementation for less than 6 months after surgery.Transient recurrent laryngeal nerve palsy (tRLNP) is defined as hypomobility or paralysis of one or both vocal folds lasting less than 6 months after surgery.Hypocalcemia and inferior laryngeal nerve palsy are considered as permanent if still present 6 months after surgery.
The diagnosis (benign or malignant) and aggressiveness of the tumors were determined by pathological evaluation of thyroidectomy specimens.In particular, aggressiveness was assessed according to the American Thyroid Association (ATA) 2015 risk stratification system for differentiating thyroid carcinoma [3].
Data Cleaning
Features with more than one-third of values missing have been discarded.For remaining features, missing data for numeric variables was handled using median imputation.Missing categorical data was assigned a value of "Unknown".In order to reduce data sparsity, we added binary variables (YES/NO) to resume several multiclass categorical or numeric features.For example, we generated a derivate feature, "DIAGNOSTIC PRE", including ultra-sound (US) data, fine-needle ago-biopsy (FNAB) results and clinical presentation, according to American Thyroid Academy (ATA) guidelines, in order to divide thyroid nodules into suspected ("YES") or not ("NO") for malignancy.We classified as "affected by malignancy" a patient whose specimen contained thyroid cancer cells.Lymph nodes were positive if confirmed by histo-pathological exam.
Other examples of derivate features are: "ECO YES/NO" on the basis of suspicious US patterns (micro-calcification, vascularization, irregular margin, solid composition, hypoechogenicity, elongated shape) or "AGGRESSIVENESS YES/NO" according to the presence/absence of ATA suggested criteria (aggressive pathological subtypes like tall cell, columnar cell or hobnail, extra-thyroidal extension, lymph node involvement and distant metastasis).The final dataset is composed by n = 1218 patients, described by 95 features, divided into pre-surgery (46) and post-surgery (49) based on the fact that they described characteristics well known before surgery or revealed after the procedure.For example, SEX, AGE, cytological results, and US features were included as pre-surgery variables; capsular invasion, malignancy, complication and serum post-operative blood calcium levels were included as post-surgery characteristics.
Prediction Tasks
We aim to predict two main events: (i) the tumor histology and its aggressiveness, including T and N YES/NO variables, (ii) complications, including transient hypocalcemia and duration of post-operative recovery.Since predicting the exact duration of the postoperative recovery would be unfeasible, we aim to predict if the post-operative recovery will be longer than three days (we always attached a drainage tube, and the third day usually corresponded with drainage removal time).Each event prediction is estimated by using two different sets of features: all features and only pre-surgery features.Given that post-surgery features are highly informative of the surgery outcome, we expect prediction accuracy to drop when using only pre-surgery features.For each prediction task, we exclude variables strictly related to target prediction, e.g., we exclude all calcium measurements after surgery when predicting transient hypocalcemia.
Class Re-Balancing
Classes are naturally unbalanced in several prediction tasks, especially for complications such as hypocalcemia, since few patients usually experience it.For example, only 30 patients out of 1200 experienced permanent complications related to vocal cords, 72 patients experienced permanent hypocalcemia and 37 patients had bleeding.Since most machine learning algorithms perform badly with such strongly unbalanced classes, we will apply the Synthetic Minority Oversampling Technique (SMOTE) to oversample the minor class in the training data.The challenge is represented by a minority class that has typically very little data and is often the focus of attention.One approach for handling imbalance is to generate extra data from the minority class, to overcome its shortage of data.Figure 1 shows a t-SNE visualization of the dataset to illustrate how SMOTE works.The minority class-patients with malign tumor histology (represented in blue)-is oversampled by generating synthetic data points.These synthetic patients have features close, in feature space, to the ones of real patients with malign tumor histology.
Model Training
We split the dataset using a 75:25 distribution.Models have been trained with 75% of data, and tested with the remaining 25%.Note that we split data into training and test sets before applying SMOTE to avoid overfitting.Indeed, if SMOTE is applied before the train-
Model Training
We split the dataset using a 75:25 distribution.Models have been trained with 75% of data, and tested with the remaining 25%.Note that we split data into training and test sets before applying SMOTE to avoid overfitting.Indeed, if SMOTE is applied before the train-test split, some synthetic data points in the test set may be generated from real data points in the training set, yielding a data leak from train to test set and thus overfitting.We compared three off-the-shelf classifiers as provided by scikit-learn: Random Forest, Multilayer Perceptron and k-Nearest Neighbors.Since the performances of these models were similar, we focused on explainable models to understand feature importance.We thus trained the Random Forest model using 3-fold cross-validation.The hyper-parameters were tuned via cross-validated grid search over the number of trees and a maximum tree depth.
Results
Since the test sets can be strongly unbalanced, prediction tasks are evaluated by using balanced accuracy, defined as the average of recall obtained for each class.Furthermore, given the small size of the data, we check the stability of the classifier by splitting the data into train and test sets 10 times, and then computing the standard deviation σ of the balanced accuracy.
Tumor Histology, Aggressiveness and T/N
The prediction of tumor histology and aggressiveness is accurate (more than 90%), with small drop (4-5%) when using only pre-surgery feats.Prediction of T/N is very accurate (more than 95%) with all features, while we observe a 10% drop in accuracy and some false negatives when using only pre-surgery feats.Pre-surgical features are suggested to be incomplete in predicting cases of occult metastasis in the recurrent level and capsular rupture even in the presence of small thyroid nodes (on which depend the T and N stages).
Transient Hypocalcemia, Complications and Post-Surgery Recovery
Prediction of transient hypocalcemia and complications are nearly accurate (more than 80%), with no drop when using only pre-surgery feats.This means that post-surgery features are not predictive of complications.We observe some false negatives.
See Figures 5 and 6.Prediction of the duration of post-surgery recovery is accurate (more than 90%) with all features, while we observe a 10% drop in accuracy and few false negatives and positives when using only pre-surgery feats.
Feature Importance (Figures 2-6)
We show all plots, ranking most important features.In each plot, for each feature, each point represents a patient.The color of each point represents the value of the feature for this patient: low (high) value corresponds to blue (red).The importance of the feature in the prediction task is represented by the Shapley Additive Explanations (SHAP) value on the x-axis: patients with positive (negative) impact on the prediction task stay on the right (left) side.
The mentioned variables are defined in Appendix A. For example, in Figure 6, top row, sex is the most important feature for predicting complications, when using both all features and only pre-surgery features.Male patients have a low value for the sex feature, equal to 1 (blue), while females are equal to 2 (red).Male patients have a positive impact (they are on the right side) on the prediction of complications.That is, male patients are more likely to have complications (one of permanent hypocalcemia, bleeding or vocal cords permanent disfunction).
Another example is cytology.Patients who had cytology screening are represented in red, while patients who did not have cytology are represented in blue.All red points in the Figure 2, patients who had cytology, have positive impact on prediction of malignant histology.Therefore, patients who underwent cytology screening showed more frequent malignant tumors.
Tumor Histology, Aggressiveness and T/N
The prediction of tumor histology and aggressiveness is accurate (more than 90%), with small drop (4-5%) when using only pre-surgery feats.Prediction of T/N is very accurate (more than 95%) with all features, while we observe a 10% drop in accuracy and some false negatives when using only pre-surgery feats.Pre-surgical features are suggested to be incomplete in predicting cases of occult metastasis in the recurrent level and capsular rupture even in the presence of small thyroid nodes (on which depend the T and N stages).
Transient Hypocalcemia, Complications and Post-Surgery Recovery
Prediction of transient hypocalcemia and complications are nearly accurate (mo than 80%), with no drop when using only pre-surgery feats.This means that post-surger features are not predictive of complications.We observe some false negatives.
See Figures 5 and 6.Prediction of the duration of post-surgery recovery is accurate (more than 90%) wit all features, while we observe a 10% drop in accuracy and few false negatives and posi tives when using only pre-surgery feats.2-6)
Confusion Matrix
In Figure 7 is represented the confusion matrix that show the performance of our algorithm.Each row of the matrix represents the instances in a predicted class, while each column represents the instances in an actual class.The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e., commonly mislabeling one as another).
Pre-surgery variables showed globally high sensibility and specificity in histology prediction, whereas, concerning nodal involvement and hypocalcemia prediction, specificity of such variables significantly decreased.
Discussion
The present study confirmed that ML models can successfully help clinicians to improve diagnostic accuracy.First, a data set was drawn up, in as much detail as possible, to better describe patients with thyroid disease.Subsequently the characteristics of the population were divided into pre-and post-surgical, in order to identify new characteristics which, combined with those already known in the literature, can increase the diagnostic accuracy with the goal to maximize resection of malignant nodules and mostly minimize resection of benign nodules.
In the recent literature, some authors, like Guo et al., purposed a robust prediction model on 2423 patients, based on blood parameters (lymphocytes, platelets count, neutrophils, RDW and RDW-CV, PTH and alkaline phosphatase), mixed with BRAFV600E mutation research and clinical features such as gender and age.The obtained results (AUC of 0.874-95%CI, 0.841, 0.906) seem to show a high value in diagnosing benign and malignant thyroid tumors; the limitations relate to the fact that the population belongs to a single region and the absence of correlation with clinical or radiological data [9].
Other previous studies [5] suggested to analyze ultrasound data with an ML approach.As reported by Ha et al., many studies using the ML technique in thyroid imaging have developed Computer-Aided Diagnosis (CAD) systems based on US features, such as composition, shape, margin, echogenicity and calcifications, and have demonstrated their potential in thyroid cancer diagnosis [10].
Zaho et al. presented their personal results which indicate that an approach based on the knowledge of experienced radiologists and the ML classifier can significantly outperform the radiomics approaches and the current biopsy guideline method in terms of diagnosing thyroid nodules and reducing the unnecessary FNAB rate of thyroid nodules.Due to the retrospective nature of the study, the authors encourage further multicenter and prospective studies with long-term follow-up in order to validate such promising results.The ML method has significant potential for enhancing the ability of radiologists to determine the optimal clinical management of thyroid nodules [11].
In a recent review by Ludwig et al., 930 papers published from 2018 to 2022 were analyzed, in order to focus AI innovations in the field of ultrasonography and microscopic diagnosis of thyroid nodules.The authors suggest significant benefits of using CAD systems in diagnosing thyroid nodules, especially for less experienced radiologists, contributing to significantly reducing the inessential FNAB; nevertheless, the benefit of AI in assisting more experienced clinicians still remains an unmet issue [12].
Considering the cytopathology point of view, in 2023, Wong et al. published an update on the current status of ML applied to pathology diagnosis: the recent development of machine learning algorithms will enable cytopathologists to focus their attention on the regions of interest (ROIs), allowing more accurate and faster interpretations.
Future ML algorithms may integrate cytopathology, radiology and clinical information, creating an even more powerful and promising tool in thyroid cancer diagnostic [13].
Among the most recent studies that involve ML and clinical data in order to improve the diagnosis of thyroid cancer, that by Xi et al. analyzed 724 patients with 1232 nodules, creating a data set with age, gender, blood thyroid function examination, ultrasound findings (9 characteristics), laterality and histological results; the authors confirmed that already-known data, such as calcification, large size, cystic composition and enriched blood flow at US, are strong indicators of malignancy.Moreover, the unilaterality seems to be the worst prognostic factor; they obviously concluded that a larger ML model, involving different studies, could be a high-quality dataset for further improvements in predicting thyroid nodule malignancies [14].
As shown in Table 1, the pre-surgical features are the most accurate in predicting histology, aggressiveness, staging and the onset of complications related to surgical treatment.If, in predicting histology, the pre-surgical variables involved are those known in the literature (especially FNAB, ultrasound-derived data and thyroid function), it is interesting to observe that in our population, having an out-of-range (LOW) fT3 dosage at pre-operative examination is strongly associated with a higher aggressiveness of the disease (Figure 3); this trend seems to be confirmed using all variables.This could prove to be very important, considering that, nowadays, tumor aggressiveness, especially of papillary histology, is explicitly based on histological characteristics [8].Another interesting occurrence, as shown in Figure 5; Figure 6, seems to be that hypertensive patients have a higher incidence rate of transient hypocalcemia and of complications in general (including hemorrhage and recurrent laryngeal nerve palsy).This finding could be related to an uneasy intra-operative control of blood pressure and a higher risk of bleeding, making dissection more difficult.
To the best of our knowledge, the current study presents an original ML model that could be used to evaluate all features describing patients with thyroidal disease, highlighting some clinical variables, that could be related to more aggressive cancer or possible complicated surgery.
Our study has some limitations: First of all, the entire population refers to a single center and is mostly representative of a single region.Moreover, the retrospective nature of the analysis is obviously influenced by some missing data not recorded at surgery time; a prospective dataset collection will significantly improve the strength of the research.
It would also be interesting to compare the predictive model of this ML model with the sensitivity of the assessment that the clinic derives from the combination of the data at his disposal (which of the two has the greater sensitivity in predicting histology, complications, etc.).
Conclusions
ML algorithms analyzing pre-surgical features may provide a cost-effective and rapid point-of-care addition to the armamentarium of the endocrine surgeon.
Future studies, including prospective and multicentric analyses, mixing clinical, laboratory and US data, are needed to understand the potential clinical implications of the ML approach in this field.
Figure 1 .
Figure 1.T-distributed Stochastic Neighbor Embedding (t-SNE) visualization of data points by considering all variables (top row) or only pre-surgery variables (bottom row), for the original dataset (left column) and data oversampled by SMOTE algorithm, (right column).Blue (red) data points represent patients with malign (benign) tumor histology.
Figure 1 .
Figure 1.T-distributed Stochastic Neighbor Embedding (t-SNE) visualization of data points by considering all variables (top row) or only pre-surgery variables (bottom row), for the original dataset (left column) and data oversampled by SMOTE algorithm, (right column).Blue (red) data points represent patients with malign (benign) tumor histology.
Figure 2 .
Figure 2. Top 10 most important features for the prediction of tumor HISTOLOGY.Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 2 .
Figure 2. Top 10 most important features for the prediction of tumor HISTOLOGY.Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 3 .
Figure 3. Top 10 most important features for the prediction of tumor AGGRESSIVENESS.Classifi are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 3 .
Figure 3. Top 10 most important features for the prediction of tumor AGGRESSIVENESS.Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 4 .
Figure 4. Top 10 most important features for the prediction of variable N (lymph nodes metastasis Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 4 .
Figure 4. Top 10 most important features for the prediction of variable N (lymph nodes metastasis).Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 5 .
Figure 5. Top 10 most important features for the prediction of TRANSIENT HYPOCALCEMIA Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 5 .
Figure 5. Top 10 most important features for the prediction of TRANSIENT HYPOCALCEMIA.Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 6 .
Figure 6.Top 10 most important features for the prediction of COMPLICATIONS (one of permanen hypocalcemia, bleeding or vocal cords permanent disfunction).Classifiers are trained over all var ables (top) or only pre-surgery variables (bottom).
Figure 6 .
Figure 6.Top 10 most important features for the prediction of COMPLICATIONS (one of permanent hypocalcemia, bleeding or vocal cords permanent disfunction).Classifiers are trained over all variables (top) or only pre-surgery variables (bottom).
Figure 7 .
Figure 7. Confusion matrix for the prediction of tumor histology (left column), N (central column) and transient hypocalcemia (right column).Classifier are trained over all variables (top row) or only pre-surgery variables (below row).
Table 1 .
Balance accuracy for different prediction tasks. | 5,671.4 | 2023-11-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Delaware Reincorporation and the Double-Exit Puzzle: Evidence from Post-Initial Public Offering Acquisitions
: Initial public offerings and mergers and acquisitions represent important opportunities for investors to exit and harvest their entrepreneurial success. Some firms are acquired shortly after their initial public offerings. This exit strategy is known as a double exit. In addition, issuing firms may choose to reincorporate in Delaware during their IPOs. In this study, we use hand-collected data from 1993 to 2020 to investigate whether and to what extent Delaware reincorporation may affect the M&As in the post-IPO stage. We use a Cox proportional hazard model to test the relation between Delaware reincorporation and the likelihood of being acquired for our sample IPOs. Recognizing that Delaware reincorporation is not a random decision, we adopt a Heckman switching regression method to estimate the relation between Delaware reincorporation and takeover premiums and announcement returns. We report that IPO firms choosing to reincorporate in Delaware experience a higher likelihood of being acquired compared to those IPO firms choosing to remain incorporated in their home states. We further document that IPO firms choosing to reincorporate in Delaware receive lower premiums in acquisitions, and experience lower abnormal returns on announcements.
Introduction
Mergers and acquisitions (M&As) represent an important exit strategy for many entrepreneurial firms so that investors can recoup their investments along with the returns, if any.Equally important are initial public offerings (IPOs), which allow the entrepreneurs and their investors to cash out and liquidate at least a portion of their equity stake in the issuing firms.Funds exiting from entrepreneurial firms may pursue other investment opportunities in the private sector.In this sense, exit strategies are important for both investors and entrepreneurs.Nonetheless, it has been a puzzle that around 30% of IPO firms are acquired shortly after they become public.In other words, investors and entrepreneurs first exit the entrepreneurial firms through IPO and subsequently exit through M&As, which is also known as a double-exit strategy (Dai et al. 2005).Given that both IPOs and M&As involve substantial flotation costs and transaction costs, it is crucial to gain deep insights on entrepreneurial firms' decisions to exit, especially through the double-exit strategy.
However, empirical evidence is still scant about the double-exit strategy, including its antecedents, motives, and consequences.In this study, we ask the research question as to whether and to what extent the reincorporation decisions of IPO firms may affect their subsequent exit through acquisitions.A US firm can incorporate in a state to become a legal person in that state even if it does not have any business in that state.As a result, the corporate law in the incorporated state applies to that firm.Moreover, firms in the U.S. are free to reincorporate in any other state (Heron and Lewellen 1998) to change their legal domiciles.Although US firms can choose to incorporate in any of the states, they typically make a binary choice of incorporating either in their home states or Delaware (Daines 2002).Delaware provides a takeover-friendly law environment with fully defined codes and better experience in settling corporate cases (Daines 2001).Therefore, issuing firms may strategically reincorporate to Delaware, and such reincorporation decisions at the IPO stage may convey information on firms' intention to choose the double-exit route (Song et al. 2021).Our study intends to fill this void in the literature and shed further light on whether and to what extent Delaware reincorporation may affect the likelihood of being acquired for issuing firms choosing to reincorporate to Delaware.Furthermore, issuing firms' decisions to reincorporate to Delaware present an opportunity for academics and practitioners to understand the costs and benefits associated with the strategic decision.
In this study, we download IPO prospectuses from SEC EDGAR and manually collect information on firm incorporation and reincorporation decisions.We construct a sample of 1153 IPOs from 1993 to 2015, out of which 426 issuing firms reincorporated to Delaware right before or after their IPOs.We document that the likelihood of being acquired increases by 55% for firms reincorporated to Delaware compared to stay-at-home-state issuing firms.We posit that, although Delaware reincorporation facilitates the exit of issuing firms through acquisitions in the post-IPO period, issuing firms experience lower valuation and return in the acquisition.Our estimation reveals that reincorporated issuing firms would have received a higher takeover premium by 48.4% and experienced a higher cumulative abnormal return (CAR) on the announcement by 20.2% had they chosen to stay in their home states.
The remainder of this paper is organized as follows.Section 2 reviews the relevant literature and develops the research questions.Section 3 details our data, sampling procedure, and empirical method.Section 4 reports our empirical results.Section 5 summarizes and concludes.
Institutional Background of Incorporation and Reincorporation Decisions
In the U.S., firms are free to choose their states of incorporation.Moreover, firms tend to make a binary choice in terms of where to incorporate.In other words, most U.S. firms either incorporate in Delaware or their home states, with Delaware being able to attract more than 50% of publicly traded firms (Bebchuk and Cohen 2003).Earlier research proposes two competing hypotheses to explain the incorporation decisions of U.S. firms, whereas the empirical evidence remains mixed.The "Race to the top" hypothesis argues that the competition between states would benefit shareholders due to improved corporate laws that maximize the firm's value (Romano 1985).The "Race to the Bottom" hypothesis, similar to entrenchment theory, predicts that states providing pro-management corporate laws, such as antitakeover statutes, would harm shareholders' value (Cary 1973;Bebchuk 1992).More recently, Daines (2002) argues that some firms prefer in-state incorporation so that they are able to obtain favorable tax or operational treatments by displaying "loyal citizenship".Bebchuk and Cohen (2003) propose the extra-cost pull hypothesis, emphasizing the extra cost associated with incorporation in other states.Although the filing fees and franchise tax are not substantial for most publicly traded firms, they are nonnegligible for entrepreneurial firms.
Firms in the U.S. are also free to change their state of domicile at any time without altering their current operation (Waisman et al. 2009).Nonetheless, reincorporated firms are subject to the corporate law of new states of incorporation even though they may not have any business in those states.Therefore, the conventional view regarding the incorporation choice is a "pure" question for the legal regime.It is natural for firms to choose their home state for incorporation because of lower legal costs and stronger local influence.As firms grow, they weigh more on corporate laws offered by the states of incorporation.Given that firms are making a binary choice, they have to balance the benefits and costs associated with their reincorporation decisions.
Reincorporation is not uncommon for U.S. firms.Using industrial manuals, Moody's OTC manuals, and the disclosure database, Heron and Lewellen (1998) find 1004 firms reincorporated between 1980 and 1992.According to the financial market and regulations, firms may choose to change their state of incorporation at the time of IPO (IPO reincorporation) or after IPO (midstream reincorporation).The existing literature largely focuses on midstream reincorporation and investigates the agency costs between entrenched managers and shareholders (Peterson 1988;Heron and Lewellen 1998).Note that new issuing firms have more degrees of freedom to reincorporate because such decisions do not require votes (Ferris et al. 2006).Nonetheless, evidence is scant on how and why IPO firms make reincorporation decisions.To fill the void in the literature, we focus on IPO reincorporation to shed further light on this important corporate decision.
Initial Public Offerings and the Double-Exit Puzzle
Celikyurt et al. ( 2010) study an unexplored motive for IPOs: firms go public to make acquisitions.Equally importantly, Dai et al. (2005) find that VC-backed firms that went public between May 1996 and December 2000 are more likely to be acquired shortly after going public.Initial public offerings represent an important exit strategy, and so do mergers and acquisitions.M&As tend to have profound implications for both the target and acquirers (Kellner 2024).Issuing firms being acquired shortly after their IPOs is referred to as a double-exit puzzle because this exit strategy involves higher transaction costs and more uncertainties.In this study, our intention is not to explore the motives for firms choosing a double-exit strategy.Rather, we build on the literature on Delaware incorporation and the literature on M&As, and we investigate whether and to what extent Delaware incorporation may affect the issuing firms' exit through acquisitions in the post-IPO period.
We posit that Delaware reincorporation facilitates the exit of issuing firms through acquisitions in the post-IPO period.Daines (2001) reports that Delaware firms are more likely to attract takeover bids than firms incorporated in other states because of the mild Delaware antitakeover law environment.In other words, issuing firms choosing to reincorporate in Delaware are more likely to receive bidding offers.Moreover, Delaware is the only state with a specialized chancery court to resolve corporate law disputes (Waisman et al. 2009;Ni 2020), and merger deals in Delaware are known to be quick, efficient, and business-friendly (Fuerst and Geiger 2003).Therefore, we propose the following hypothesis: H1.Compared with stay-at-home state issuing firms, IPO firms choosing Delaware reincorporation have a higher likelihood of being acquired in the post-IPO stage.
On the other hand, we argue that a tradeoff exists between the quickness and easiness of exit through acquisition and the valuation of the transactions.The existing literature has explored the tradeoff between firms' valuation and their takeover likelihood.The "bargaining power hypothesis" claims that a target with strong takeover defenses will extract more analyst coverage in a negotiated acquisition (Subramanian 2003;Bainbridge 2002;Gordon 2002).The "bonding hypothesis" (Johnson et al. 2015) argues that issuing firms adopt a strong takeover defense during IPO to reduce the takeover likelihood so that they can protect their long-term business relationships.Therefore, building on the existing literature, we propose the following hypotheses: H2.Compared with stay-at-home-state issuing firms, IPO firms choosing Delaware reincorporation experience lower takeover premiums in their post-IPO acquisitions.
H3.
Compared with stay-at-home-state issuing firms, IPO firms choosing Delaware reincorporation experience lower announcement returns in their post-IPO acquisitions.
Sample Selection
We obtain information for IPOs in the U.S. from Refinitiv's Securities Data Company (SDC) Platinum New Issues database.Following the convention, we require our sample IPO firms to be U.S. issuers with an offer price of no less than USD 5. We exclude offer types of unit issues, real estate investment trusts (REITs), closed-end funds, American Depositary Receipts (ADRs), and other non-common share types.We further exclude financial firms and utility firms because they operate in highly regulated industries.
We manually collect reincorporation information for our sample IPOs from the IPO prospectus in the SEC EDGAR database.Our study focuses on the issuing firms reincorporated to Delaware.In particular, we identify an IPO firm which was originally incorporated in a non-Delaware state and decided to change its state of incorporation to Delaware in a time window from six months preceding its IPO to six months after its IPO.For example, eBay incorporated in California in May 1996 and reincorporated into Delaware in April 1998.Less than six months later, eBay went public on 24 September 1998, and was traded on the Nasdaq exchange.For another instance, Ignyta went public on 13 March 2014, and reincorporated from Nevada to Delaware on 12 June 2014.
Our sampling procedure yields a sample of 446 issuing firms that reincorporated to Delaware in the period between 1993 and 2015.We define stay-at-home-state firms as firms that remain incorporated in non-Delaware states.We exclude those issuing firms that are incorporated in Delaware from their onset.We also eliminated the cases of reincorporation events not related to the going public process as we define it.During the same time frame, we are able to identify 727 IPOs as our matching sample and control group.
We chose our sample period from 1993 to 2015 because we need to track the mergers and acquisitions of our sample IPOs within the 5-year window after their going public.We obtain information on M&As from Refinitiv's Securities Data Company (SDC) Platinum Mergers and Acquisitions database.We use Compustat Capital IQ database to collect financial information for our sample IPOs.Since the focus is on firms' M&A activities in the post-IPO stage, we follow Celikyurt et al. (2010) to use a five-year window after the IPO issue date.We only study the control contest takeover, for which the bidder seeks to own at least 50% of the target firms.While the target firms are U.S. public firms, the bidders can have different public statuses or countries of incorporation.We can use the following example to illustrate the timeline.After filing an IPO on 12 October 2006, FCStone reincorporated from Iowa to Delaware on 6 December 2006, and went public on 16 March 2007.The acquirer, International Assets Holding Corporation, announced that they were seeking to purchase 100% shares of FCStone on 2 July 2009, and the deal was completed on 30 September 2009.
Our final sample contains 1153 IPO firms, among which 426 chose to reincorporate to Delaware and 727 of which stayed in their original states of incorporation.Within five years after their IPOs, 149 reincorporated firms received 163 takeover bids, whereas 249 stay-at-home-state IPO firms received 273 takeover bids.In total, 315 of the 436 bids were completed.
Measures of M&A Activities
To test whether and to what extent Delaware reincorporation may affect post-IPO M&A activities of issuing firms as targets, we construct four dependent variables: the likelihood of being acquired, takeover premium, cumulative abnormal announcement returns, and deal completeness.
We construct a dummy variable which is equal to one if a firm receives a control contest bid within a five-year window after its IPO, and zero otherwise.This dummy variable considers bids with all types of completion statuses, including deals marked as "completed", "withdrawn", "pending", or "intended".Additionally, we have a dummy variable to test the likelihood of the firm being acquired within the next fiscal year.The deal completeness is defined as a binary variable equal to one if the deal is completed and equal to zero otherwise.
We have two other measures to capture the performance of the targets in acquisitions.Following Officer (2003), we compute the premium using the "component" data, which is the aggregate amount of each form of payment to the target firm, and then scale the aggregate number by the market value of the target.Furthermore, we use the standard event study method to gauge the five-day cumulative abnormal returns (i.e., CAR [−2, 2]) for the targets on the announcement.In the unreported version, we use several other event windows to calculate the robustness of our findings.
Control Variables
Note that we focus on those post-IPO M&As in which our sample IPOs are the targets of the takeover.In our regression analysis, we include three sets of variables capturing various aspects of our sample firms' new issuances, financial information, and M&A deal characteristics.We normalized all the nominal variables in 2015 constant dollars (USD). Specifically, for the first set of variables capturing IPO characteristics, we use Underpricing to gauge the information asymmetry at the IPO stage, and measure Underpricing as the percentage change in stock closing price on the first trading day relative to the IPO offer price.Overhang is the ratio of the number of shares retained to the total number sold at the IPO, which captures the dilution of the equity stake by the original shareholders of the issuing firms (Dolvin and Jordan 2008).Many Internet companies which went public in the late 1990s are reported to be associated with aggressive acquisition strategies (Schultz and Zaman 2001).We further add a dummy variable Tech firm to capture the technique attributes of our sample IPOs using the 33 tech industries based on 4-digit SIC codes defined by Loughran and Ritter (2004).We use indicators of Internet companies and Tech-IPOs to capture the industrial attributes of our sample IPOs (Junkunc and Eckhardt 2009;Ofek and Richardson 2003).We intend to see whether the timing of the IPO may affect the exit strategy, and, as such, follow Ritter and Welch (2002) by including a Bubble dummy as one for IPOs in 1999 and 2000, and zero otherwise.We gauge Age at the IPO as the year difference between a firm's IPO year and its founding year, which is obtained from Professor Jay R. Ritter's website.In addition, following Cremers et al. (2008), we include an indicator for "relationship-intensive industries" that tend to have longer-term relationships between the corporation and stakeholders, such as employees, customers, suppliers, and the local community.Issuing firms which intend to maintain such relationship-specific investment may choose to adopt antitakeover provisions at the IPO stage (Johnson et al. 2015).
The second set of variables measures the financial aspect of our sample IPOs.Following Eckbo (2010), 42 days were chosen to avoid run-up issues for the target firm's valuation.We control for the Market value of equity, Free cash flow (FCF), and Leverage 42 days ahead of the acquisition announcement.We also control for the Market value of equity (target), since the study by Officer (2003) found that deal premium, deal completeness, and CAR for both the target and bidder are all negatively affected by the market value of equity (target).Jensen's free cash flow hypothesis (1986) predicts that firms could be potential takeover targets when they have a large FCF but choose not to pay out to shareholders.His theory also relates high leverage to possible takeover for the reason that once the firm reaches a threshold of debt level, it cannot continue to exist in its old form to generate benefits.The reincorporated and stay-at-home-state firms in our sample have different leverage profiles.Therefore, we control target firms' FCF and leverage for all regressions and expect leverage to affect the takeover likelihood positively.
The third set of variables is related to M&A transactions.To be specific, we include six indicators to capture whether a particular transaction is a tender offer, a cash offer, a stock offer, a friendly takeover, a private bid, or a horizontal acquisition (2-digit SIC), respectively (Masulis et al. 2007;Walters et al. 2007).
Instrument Variables
Firms do not make a random choice to reincorporate to Delaware during IPO.To address this self-selection, we adopt the endogenous switching regression in our study.Our first step is to predict firms' reincorporation likelihood by a probit model, using three variables that may affect firms' reincorporation decisions.The first two indicators measure if the firm is advised by a national law firm and whether this law firm has previous reincorporation advising experience.The third instrument variable is the antitakeover status for the firm's home state.
Firms choose their lawyers long before they go public, which implies that the law firm identity and characteristics are good instrument candidates for this endogeneity problem (Johnson et al. 2015).Daines (2002) finds that the national law firm has a significant influence on the firm's decision of the incorporating state.Local law firms without national identity are more inclined to advise firms to stay incorporated in their home state due to their familiarity with the local state corporate laws.On the other side, national law firms have better knowledge of Delaware corporate laws and are more capable of advising the reincorporation process to Delaware.We define the indicator of the National law firm as a law firm if it has led IPOs in more than four different states, which is the 90% percentile in our sample.The second measure of law firm identity is the law firm's previous experience with reincorporation.Similarly, we construct a Law firm experience dummy, which equals one if the IPO firm is advised by a law firm that has advised any reincorporation in the past two years before the IPO and zero otherwise.Our last instrument variable is a state Antitakeover status index which is based on the legal environment of the firm's original home state before reincorporation.Subramanian (2002) and Bebchuk and Cohen (2003) find that state-level antitakeover statutes positively relate to incorporation likelihood.We use a 0−6 scale index to control for the law environment, with a higher index representing a more pro-manager law environment.
Descriptive Statistics and Univariate Tests
Table 1 reports the descriptive statistics for our sample IPOs from 1993-2015 and our sample post-IPO M&A transactions from 1993-2020.Panel A reports firm characteristics during IPO, and panel B reports IPO deal characteristics.Here, 37% of the 1153 firms in our sample chose to reincorporate into Delaware during IPO.Close to half of the firms were VC-backed technology firms, and the average firm's age at IPO was 15.1 years.Furthermore, 50% of issuing firms were associated with national law firms, while 32% of all firms were advised by law firms with previous reincorporation experience.The average of the antitakeover index is 2.34, and the value of this index for Delaware is 1.
Panel C provides the firm characteristics before M&A, and panel D details the M&A deal characteristics.The average time to the first bid after IPO is 30.84 months, and the average time to complete a control contest (closing speed) is 99.9 calendar days, which is close to the 64.6 trading days to complete the deal reported in the literature survey work by Betton et al. (2008).The mean (median) of the takeover premium in our samples is 49% (34%).The average 5-day cumulative abnormal announcement return for the target (the bidder) is 26% (−2%).
Table 2 reports the results of univariate tests for variables across the subsamples of Delaware-incorporated IPOs and stay-at-home-state IPOs.As evidenced by panel A, 398 of the 1153 firms in our sample engaged 436 M&A deals within five years after their IPOs.Reincorporated firms are less likely to hire local law firms and have a home-state promanagement law environment before reincorporation.In panel B of Table 2, we see that the overall reincorporated firms have higher offer prices, underpricing, IPO proceeds, retain more shares (Overhang), and are more likely to go public during bubble years.Panels C and D compare M&A deal characteristics and firm financial conditions before M&A.We find that reincorporated firms have a lower leverage ratio, receive a higher takeover premium, are more likely to receive a cash offer, take less time to receive the first control contest bid, and complete the deal faster.
Multivariate Analysis of Post-IPO M&A Activity
This section reports the empirical result in investigating whether and to what extent Delaware reincorporation may affect issuing firms' post-IPO M&A activities.
The Likelihood of Post-IPO M&A Engagement
In this section, we adopt a Cox proportional hazard model to investigate the relation between Delaware reincorporation and the likelihood of our sample IPOs being acquired after their IPOs.With the proceeds raised in the IPO, firms may expand their business by launching cash-consuming investment projects or restructuring their debt and equity ratio, which causes considerable variations in their financial conditions on a yearly basis.Although we assume a constant hazard rate, we recognize that the hazard rate (likelihood) of engaging in M&A is likely to vary over time.Therefore, we control the reincorporation effect for each fiscal year and test our hypothesis for the firm's likelihood of being acquired in the next fiscal year.
Table 3 presents the results of Cox proportional hazard regression to test Hypothesis 1.The Cox model can be expressed by the hazard function denoted by h(t).We use the model in Equation 1to simultaneously evaluate the effects of various factors on survival.In other words, we are able to investigate how specified factors influence the rate of a particular event happening (e.g., engaging in M&As in the post-IPO stage) at a particular point in time, which is commonly referred to as the hazard rate.
In model 1, we add one set of control variables related to M&A deal characteristics.We further control Underpricing and other IPO deal characteristics variables in model 2. In model 3, we include a set of variables to capture various aspects of firm financial conditions.Across different model specifications, we document that the decision to reincorporate to Delaware during IPO is associated with a significantly higher probability of being acquired within the first five years of IPO.Therefore, Hypothesis 1 is supported by the findings.In addition, we gauge the economic significance of our findings by using the coefficient of reincorporation (i.e., 0.44) in model 3. We report a hazard ratio of 1.55 (e 0.44 ), which reveals that Delaware reincorporation is associated with a 55% increase in the likelihood of being acquired.
Regarding other control variables, our findings are generally in line with existing literature.For example, Tender offer and Attitude are positively related to the likelihood of issuing firms to receive acquisition bids (Betton et al. 2008).Consistent with the free cash flow hypothesis (Jensen 1986), Leverage has a significant positive effect on the likelihood of being taken over.
Takeover Premium and Short-Term Cumulative Abnormal Return
To test Hypotheses 2 and 3, we examine the effects of Delaware reincorporation on the takeover premiums and announcement returns in this section and report our results in Table 4.In particular, we adopt a switching regression method to address the possible self-selection bias on reincorporation decisions.We study this effect on reincorporation and stay-at-home-state firms separately in our switching regression method.Table 4 provides the coefficient of each independent variable in the OLS regression and switching regressions for the analysis of takeover premium.The OLS regression results indicate that the reincorporation is significantly negatively associated with the takeover premiums (Dospinescu and Dospinescu 2019).We then perform a Heckman (1979) analysis correcting for self-selection in the subsamples of reincorporated issuing firms and stay-at-home-state issuing firms.In the first stage, we model issuing firms' choice regarding reincorporation, and calculate the Mills ratios for both subsamples from the first-stage probit regression.In the second stage, we include the invers Mills ratios as an additional control, and we run regression analyses separately for both subsamples.We only report the second-stage regression results in Table 4.In addition, we measure the market reaction to the takeover deal announcement using the short-term cumulative abnormal return (CAR) with a 5-day event window.The reincorporation effect on the CAR for the target firm, presented in columns 4-6 of Table 4, shows a similar pattern to the study of takeover premiums.Specifically, reincorporated issuing firms experience lower CARs on the announcement of acquisitions in the post-IPO stage.Therefore, the results reported in Table 4 lend strong support to Hypotheses 2 and 3.
In Table 5, we summarize the actual versus hypothetical outcomes for reincorporated firms and stay-at-home-state firms.We use the regression results reported in Table 4 to calculate the hypothetical outcomes had issuing firms chosen the other strategy regarding reincorporation at their IPOs.We find that issuing firms choosing to reincorporate to Delaware would have experienced higher takeover premiums had they chosen not to do so, as shown in Panel A. The premium difference, 48.4%, is economically and statistically significant.On the contrary, stay-at-home-state firms would have received a 6.4% lower premium if they had decided to reincorporate into Delaware during their IPOs.Similarly, the target's CAR would be larger if reincorporated firms had decided to stay in their home state during the IPO.When all control variables are applied, the reduction in CAR for reincorporated firms is 20.2%, and the value is significant at the 5% level.On the other hand, CAR for stay-at-home-state firms would be 16.4% lower if they had made the decision to reincorporate into Delaware during the IPO.Taken as a whole, the results reported in Tables 3-5 support Hypotheses 1-3, and indicate that there is a tradeoff between the easiness and quickness of exit through acquisitions and lower firm valuation in the transaction.
Robustness Checks
In this section, we conduct a series of tests to ensure the robustness of our findings.Researchers tend to use different windows to calculate the takeover premium.Following Cremers and Sepe (2015), we use one week before the announcement date as the preannouncement day to measure the premium.However, Boone and Mulherin (2007) use four weeks before the announcement date to calculate the premium to avoid near-term information leakage.We also use the approach proposed by Boone and Mulherin (2007) and replicate it in our analysis.The adoption of this alternative measure does not change our findings in a material way.Likewise, we use different event windows, namely [−1, 1], [−3, 3], and [−4, 4], to calculate CARs and document consistent results.
In addition, we focus only on venture-backed IPOs because venture capitalists are likely to influence corporate decisions on reincorporation and are equally likely to consider the double-exit strategy through acquisitions after IPOs.Around 46% of the 1153 IPO firms in our sample are backed by venture capital.Out of the 436 takeover deals, 232 had the target firms backed by venture capital before IPO.Using subsamples of IPOs backed by venture capital, we find that the positive effect of Delaware reincorporation on the likelihood of post-IPO acquisitions is even stronger.As evidenced in Table 6, we document a consistently negative relation between Delaware reincorporation and premium as well as CAR.
Discussion
In this study, we posit that Delaware reincorporation is positively associated with the likelihood of our sample IPOs being acquired after their going public (H1).We further propose that there is a tradeoff between the easiness and quickness of M&A transactions and the investor returns in terms of takeover premiums (H2) and announcement returns (H3).In Table 3, our findings based a Cox proportional hazard model provide strong support to Hypothesis 1.We further test the relation between Delaware reincorporation and takeover premiums as well as announcement returns in Tables 4 and 5, respectively.In line with our expectation, issuing firms choosing to reincorporate in Delaware experience lower takeover premiums and lower announcement returns in the post-IPO M&A transactions.Therefore, Hypotheses 2 and 3 are supported.
Summary and Conclusions
This article builds on two streams of research on Delaware incorporation and exit strategies for entrepreneurial firms.Specifically, US firms can freely choose to incorporate in any states to become legal persons of that state, and they can reincorporate to other states as their legal domiciles (Daines 2001(Daines , 2002)).In addition, US entrepreneurial firms make important decisions as to their strategies to exit.Some entrepreneurial firms choose to undergo the IPO process and then are acquired shortly after their IPOs.This exit strategy is known as a double-exit strategy.We extend these two lines of research by investigating the relation between Delaware reincorporation and the M&As of newly public firms in their post-IPO stage.
Our evidence reveals that Delaware reincorporation is associated with a higher likelihood of being taken over at the post-IPO stage.Nonetheless, firms choosing to reincorporate to Delaware during their IPOs experience lower takeover premiums, CARs on the announcement, and deal completeness.The findings in this paper indicate that issuing firms choosing to reincorporate to Delaware make a clear tradeoff between the easiness and quickness of acquisition and the lower returns for selling shareholders.
We believe that our study contributes to the literature in several important ways.Firstly, to the best of our knowledge, this is the first study focusing on reincorporation decisions at the IPO stage.Existing research mostly focuses on reincorporation decisions for firms that have already been traded in public market (Cumming and MacIntosh 2002;Heron and Lewellen 1998;Peterson 1988).In contrast, we document that the number of reincorporation events associated with the firm's going public decision is non-trivial.Such decisions may represent important strategic considerations for the issuing firms' operation and financing activities in the post-IPO stage.Secondly, our research sheds further light on the double-exit puzzle.The majority of existing research treats IPOs and M&As as two alternative exit strategies (Amor and Kooli 2019;Bayar and Chemmanur 2006).Our study extends this line of research by focusing on the interplay between the IPO market and M&A market.Although our intention is not to investigate the underlying motives for firms to exit through acquisitions shortly after their IPOs, we report evidence that Delaware reincorporation facilitates the double-exit strategy.Thirdly, our study adds to the literature on mergers and acquisitions related to the events of new equity issuances (Boeh and Dunbar 2021).Firms constantly explore various ways to grow, either internally or externally.Our research highlights the valuation implication on selling equity stakes after raising capital through IPOs.Particularly, we use a switching regression method to correct the self-selection bias, and report that the faster exit through acquisitions after IPOs is associated with significant cost, and that issuing firms which exit through this strategy experience lower returns.
The findings reported in our paper also have important managerial implications for practitioners.Managers of issuing firms can make informed decisions regarding reincorporation decisions at the IPO stage.In the event that they consider exit through M&As after their IPOs, our study facilitates their decision-making process by stressing the lower valuation of exiting through M&As.This article is equally important for legal firms and consulting agencies to gain better insights on the consequences of reincorporation decisions and subsequent options to exit in the post-IPO stage.
Our research is not without limitations.For example, we limit our sample to those issuing firms originally incorporated in non-Delaware states so that we are able to investigate the treatment effect with a proper control group.We are, thus, unable to investigate the subsample of firms which choose to incorporate in Delaware at their onset.Furthermore, we lack data as to which party initiates the reincorporation decision in the IPO process.The incumbent managers, venture capitalists, law firms, and investment banks are all likely to propose the decision to change legal domiciles.Nonetheless, this study suggests a few directions for future research.For example, given that changing legal domiciles means changing the applicable corporate law, researchers can examine the wealth effects of litigation in corporate laws following the reincorporation of newly public firms (Badawi and Chen 2017).It is also plausible to investigate whether Delaware reincorporation is associated with better information disclosure and high analyst coverage (Stewart 2023).
Table 2 .
Comparison between reincorporated firms and stay-at-home-state firms.
Table 3 .
Delaware reincorporation and likelihood of being acquired in the post-IPO stage.
This table presents the estimation result of the Cox proportional hazard regression.The dependent variable is a binary variable equaling 1 if firms are acquired in the next fiscal year for models 1 to 4. Standard errors are in parentheses.The symbols ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Table 4 .
Reincorporation effects on takeover premium and target cumulative abnormal return (CAR).This table presents estimation results for ordinary least square regression and switching regression, for which only the second-stage results are given.Standard errors are in parentheses.The symbols ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.All regressions control for year and industry fixed effects.
Table 5 .
Actual versus hypothetical premium and abnormal return.
Table 6 .
Robustness checks for the subsample of venture-backed firms.This table reports switching regression results conducted for venture-backed firms.The means of the actual takeover premium, cumulative abnormal return, and the deal completeness are compared to their hypothetical counterparts for the reincorporated firms (Panel A) and stay-at-home-state firms (Panel B).The difference in means between actual and hypothetical is reported with significance.The values are based on switching regression models, which contain all control variables.The symbols *** denotes significance at the 1% level. | 8,005.2 | 2024-04-26T00:00:00.000 | [
"Business",
"Economics"
] |
Mission-minded pastoral theology and the notion of God’s power: Maturity through vulnerability
when pastoral theology and care practices engage with views regarding God’s power. The rigorous literature study contributed constructive insights for missional churches that value God’s justice and dignity to all.
Introduction
The emerging post-pandemic era offers many churches and Christian movements a fresh chance to recommit itself to its core calling of being intentionally missional (Bendor-Samuel 2020). Faith communities across the globe have been disrupted. Now they are challenged to find their sole fulfilment in the triune God and to let all their strategies be edified by sound missional theology. Such theology is needed to substitute the often performance-driven ambitions to salvage congregations (Guder 2015). This article aims to contribute to constructive missional theology by grappling with two related issues, namely God's power and Christian maturity. Later, what is implied by this concept of maturity, will be clarified. Suffice to say, as a general introductory remark, that maturity in this article is specifically focused on growing or maturing in Christian faith. Therefore, it is linked to the biblical practices of spiritual formation.
Wonsuk Ma and Kenneth Ross (eds. 2013) indicate extensively how world Christianity is expanding, especially within Pentecostal and Charismatic movements, which are discovering new vision and energy for mission through a strong emphasis on believers' spiritual life. However, it is telling that they (Wonsuk Ma & Kenneth Ross eds. 2013:232) conclude: In these dynamic This article contributed to constructive missional theology by grappling with the issues of God's power and Christian maturity. This is important, because we live in a wounded world fraught with injustice, violations of human dignity, and power abuse. Pastors and other caregivers are called to be involved with caring for people to promote more just societies where human dignity flourishes. Such caring practices form a crucial and practical part of the human embodiment of God's loving mission in our wounded living spaces. The main research question that was addressed, is as follows: How can pastoral theology be directed by a missional understanding of the church so that it can become clear how caring practices embody the missio Dei? Considering this question, this article explicated what kind of theology is appropriate when mission-minded Christian caregivers want to interpret God's power in a fractured world. The author indicated -through a careful and methodical literature study -that the recovery of a missional, trinitarian understanding of God, offers fresh resources for reconceptualising spiritual formation, focusing on mission as authentic discipleship. While unravelling this article's main argument, it was deemed of paramount importance to harmonise our ideas about the power of God, because such notions (about God's power) often dictate how we act within human relationships, which are also replete with power. It was concluded that preference needs to be given to viewing 'power as love' (vulnerability) rather than 'power as force ' (control). Eventually the analysis of relevant literature resources indicated that pastoral care done in congregations, does not merely find its end goal in strengthening believers to grow into maturity in Christ, but also fosters our missional calling as sent disciples of the triune God. In addition, Christian faith maturity was found to be essential if pastoral theology and pastoral care practices -mostly performed by pastors or other caregivers in faith communities -aim to promote justice and human dignity as integral part of the missio Dei. movements, theological maturity and missiological depth are 'conspicuous by their absence'. Why did they come to this conclusion? And further, for what purpose is the notion of Christian maturity significant at all, especially within the academic field of mission studies?
Traditionally, the primary focus of church missionary activities falls on evangelisation for the salvation of people who receive its reconciling message. However, that is not the goal, but rather just the start of what the sharing of the gospel, as integral activity of the missio Dei, implies. Balia and Kim (2010) therefore rightly emphasise the holistic nature of salvation, by asserting the following: As God does not give us a partial salvation, we cannot limit evangelism only to the spiritual realm. Rather, we must acknowledge that evangelism proclaims good news for every part of our life, society and creation. (p. 211) Furthermore, it would be an error to downsize missionary activities to reaching the unreached, because after conversion follows the pilgrimage of discipleship and all the implications of a Christian's journeys of missional spiritual formation (Gibson 2022;ed. Zscheile 2012). The Pauline letters clearly remind us of a fundamental interpenetration of mission and spiritual formation (Berding 2013). According to Dallas Willard (2021:75), spiritual formation is 'the process where one moves and is moved from self-worship to Christ-centred self-denial as a general condition of life in God's present and eternal kingdom'. Biblically speaking, this road of growing into Christ has its own telos, namely reaching maturity. On this road, discipleship needs to be understood -in line with Stephen Bevans (2022) -as a call to mission, rather than 'a static concept of church membership or cozy relationship with Jesus'. Transformed discipleship includes hospitality, creation care, confronting injustice as well as 'being in solidarity with the world's margins' (Bevans 2022:120;see also Keum 2013:15). In essence, such discipleship involves a (life-long) process of radical change in individual and collective attitudes, behaviours and values (ed. Jukko 2022:298).
The formative role of missionminded pastoral theology and care amid human vulnerabilities
From a disciplinary perspective, this article lies in the nexus between missiology and pastoral theology. It aims to discern how pastoral theology should be directed by a missional understanding of the church, because contemporary mission studies, as Kim and Fitchett-Climenhaga assert (2022:5), 'increasingly overlaps with the discipline of practical theology' (within which I deem pastoral theology as a subdiscipline). Church-based pastoral theology and care play a formative role in the lives of Christian disciples. Not only through the care offered by ordained pastors, but also via the mutual care of Christians building up one another within the faith community (Van der Watt 2018). These caring practices are integral in the process of concretely embodying the missio Dei. Dean Flemming (2015:xxi) affirms this sentiment by stating: 'Christian nurture and formation are also missional, not least because they enable and equip Christian communities to engage in the restoring mission of God'.
Considering this, pastoral care done in congregations does not merely find its end goal in strengthening believers to grow into maturity in Christ, but also fosters our missional calling as sent disciples of the triune God of the Bible, sharing (in) the life of God as Father, Son and Spirit (Goheen 2018). Pastoral care practices are part of God's mission as healing and wholeness, aligned within a missional ecclesiology (see Keum 2013:19).
The significance of this (intentional) trinitarian formulation of God can cryptically be motivated as follows: Numerous mission gatherings with global representation have integrated a trinitarian emphasis into their formulations on mission since the second half of the 20th century. 1 One leading missiologist, Darrel Guder (2009:68), thus formulates it as follows: 'To understand the Trinity rightly is to participate in the enabled action of witness which carries out the mission of God the Father, the Son, and the Holy Spirit'. This type of interpretation, that is, drawing practical conclusions for the church's mission, based on the inner life of the (immanent) Trinity, is a-typical in Reformed theological thought. We should be cautious not to misuse the doctrine of the Trinity in ideological ways to attribute divine sanction to ideas for our own preferred cause (see Van den Brink 2003:237).
Nevertheless, there is a wide 'spread' of Reformed theological thought that directs us in the way we should think about the triune character of God (see Smit 2009). In the end, it is important to realise that God is not a problem to be solved, but a divine Person to know. And thus, the concept and the life of the Trinity is indeed relevant to our lives as Christians as we participate in the missio Dei. Famous missiologist Lesslie Newbigin, wrote a book titled, Trinitarian faith and today 's mission (1964), that explicates his thoughts on this theme.
For the purpose of this article's main argument, it is therefore important to note that since the time of the Early Church, the apostolic mission was not narrowly focused on the saving of souls and their integration into communities of the saved. Instead, Guder (2007:256) stresses, 'It was the formation of witnessing communities whose purpose was to continue the witness that brought them into existence'. Guder asserts that we are invited into this formational growth via God's redemptive mission for the purpose of 'walking worthily' (based on key Pauline passages). In essence then, building up the body of Christ to reach maturity, should also serve its going out, that is, the church as a trinitarian, sent (apostolic) community for the sake of the world (missio ecclesiae; see Goheen 2000:224).
Grappling with God's power in patriarchal societies scarred by power abuse 2
The world in which the church exists, is a place where the power of God and the power of people operate simultaneously. Our world is filled with natural and humaninduced disasters, ranging from tsunamis and pandemics to domestic violence and collective trauma fuelled by genocides. In these crises, various powers -divine and human -are at work and it needs to be discerned and described theologically as far as possible. The afore-mentioned disasters intensify humanity's existential realities such as despair, loss and suffering. Such experiences challenge us to grow more resilient and to foster more tolerant faith-based care networks.
Missional ministries, including church-based pastoral care practices, have a prophetic responsibility to discern the powers that dominate our societies and should aim to combat inequalities and diminish vulnerabilities, (among others), for example in terms of gender and race (see eds. Johnson & Zurlo 2020:5). By fulfilling this prophetic role, pastoral care practices pursue ways in which evil can be resisted and transformed by Christian faith communities, instead of trying to rationally make sense of it (see e.g. Swinton 2007). This is a mature approach to dealing with the realities of sin and suffering. Caring responds to more than just individuals coming for help. It also aims to re-arrange the power dynamics within social systems and communities through advocacy, resistance, building coalition, et cetera (Graham 2002:275).
Historical realities such as colonial and imperial rule, have been heavily influenced by and simultaneously given shape to inequalities in human relations. These realities also determine -and sometimes are or were determined by -our theological understanding of the power of God. Subsequently, our understanding of God, that is our God-image, is crucial for interpreting how we use or abuse power in our live as Christians today, and for discerning how to combat power abuse through prophetic pastoral care practices (Louw 2007:92-96).
Now the important question surfaces: What kind of theology is appropriate when mission-minded Christian caregivers want to interpret God's power in a fractured world? How can a balanced theological perspective on the power of God enable pastoral care practitioners -who operate from a missional perspective -to mitigate the vulnerabilities caused by injustice and unequal distributions of power? These questions direct the main inquiry of this article. The biblical 2.Although it exceeds the scope of this article, I acknowledge that the issue of the power of God overlaps with many related issues in a variety of theological fields of study. For instance, within the field of Systematic Theology, it can be connected to the doctrines of the Trinity, providence as well as the image of God (Imago Dei). Different theological traditions, for example liberal process theology (see Case-Winters 1990), evangelical, neo-orthodox, feminist or liberation theologies, approach the issue from different angles. In addition, feminist scholars (cf. Ramshaw 1995;Reuther 1983;Schüssler-Fiorenza 1983) have indicated the links between the domination of women or people from other race groups or cultures, or the subjugation of the earth itself, and patriarchal views of God.
witness is clear: God is powerful. But how? What 'kind' of power does God possess? Grappling with understanding the power of God involves (among others) various interconnected themes such as God's omnipotence and how it relates to the deeply intricate and enmeshed philosophicaltheological theme of theodicy, which also disputes the idea of a loving and just God amid the existence of evil and suffering, human agency, et cetera (see Peckham 2018).
The recovery of a missional understanding of God, based on a trinitarian theology, offers fresh resources for reconceptualising spiritual formation today -not as a theology from above that fosters a universal, abstract idea of idealised mission, but instead on mission as authentic discipleship (Kim & Fitchett-Climenhaga 2022:12). Within this process of theological rediscovery, I argue that a balanced view on God's power, as revealed in the suffering and resurrection of Christ and the empowering work of God's Spirit, is crucial. It includes intentionally choosing to view 'power as love' (vulnerability), instead of 'power as force' (control). Such perspectives on power profess the veracity of the unfailing love (hesed) of a compassionate God, whose presence gives us eschatological hope and the courage to be (without dominating). God's power fosters maturity, and thereby promotes justice, human dignity, and loving service (diakonia) 3 in prophetic ways (see ed. Jukko 2022:304). It follows that God's power should be theologically discerned, not merely as 'power over', but rather as 'power with/in', 4 via the empowerment of the Holy Spirit (Zch 4:6; Migliore 2004:56).
Historical perspectives on power within the church and its mission
According to Lesslie Newbigin (1995), mission is proclaiming the kingdom of the Father, sharing the life of the Son and bearing the witness of the Spirit. God's mission emanates from the power of God, the glorious almighty Father, through the Lord Jesus Christ's resurrection (Eph 1:17-21), executed in the power of the Holy Spirit (Ac 1:8). The gospel is indeed the power of God that brings salvation to everyone who believes (Rm 1:16 The impact of unequal power relations can be found in all types of relationships and in all areas of life. Whether it is men over women, or in different forms of age, race, tribe, or class supremacy. Hence, within human relationships, feminist approaches have emphasised the intertwined dimensions of 'power-over', 'power-to' and 'power-with' (Ehrensperger 2007:34).
Sin has found expression throughout history, both within and beyond the biblical narrative, in lamentable examples of colonial or patriarchal rule and, among others, in race-and gender-based violence, marginalisation and oppression. Most of us still live in male-dominated, patriarchal societies. Spanish sociologist, Manuel Castells (1997:134), contends that patriarchalism is a 'founding structure of all contemporary societies'. According to Chittister (1998:25), patriarchy is built upon four inter-related fundamental realities: domination, hierarchy, dualism, and essential inequality. These realities make power abuse evident in our broken societies' socio-cultural institutions and in education, economy, politics, family, and religion (Okoli & Okwuosa 2020:1). The important point to remember here is that our misdirected theological views of God's power have the potential to mislead us toward the institutionalisation of absolutistic patriarchal values that can distort power relationships in faith communities. This, subsequently, can lead and has led to the oppression of women or people of other racial groups. An example is the apartheid theology, or the ideology of the Dutch Reformed Church in South Africa before 1990.
Historically, the Christian establishment in the West was deeply influenced by the purposes of empires. The church was infiltrated by Constantine imperialism since the 3rd and 4th centuries AD. Human notions of power and authority were openly applied to God. As John Hall (1993:108) succinctly argues: 'Powerful people demand powerful deitiesand get them!' Imperial rule, in turn, gravitated towards and simultaneously gave shape to inequalities in human relations. For this reason, Godly power was, and still is, often interpreted in terms of governance (strength) rather than in terms of care and compassion (vulnerability) (Fiddes 2000:62-29;Louw 2020).
Because of the Christian hierarchy's complicity in the maintenance of the Roman state, Drake (2000:23) rightly asks the poignant question: 'How did a religion whose central tenet is to suffer, rather than do harm -to "turn the other check" -come to accept the coercive power of the state as its reasonable due?' The church succumbed to sustaining the power of an authoritative regime, namely the Roman Empire. 'Money and political power were now at the disposal of the church and paved the way for its expansion' (Vähäkangas 2022:689). A particular understanding of both the power of the church and the power of God, fed this imperialistic thinking, often leading to power abuse and even violence.
Mennonite Church historian, Alan Kreider (2016:296), in his fascinating book, The patient ferment of the Early Church, concludes that contemporary Christians in the West (and possibly further) live with a post-Enlightenment and post-Christendom heritage, wherein the following assumption seems self-evident: 'that in its essence Christianity is violent, and that Christian mission -however loving its professed intentions -is essentially an exercise in imperialism'. No wonder Kreider pleads for a return to the habitus of meek patience, so vivid in the lives and writings of Early Church pioneer Christians like Cyprian, Origen, et cetera who not only spoke great things, but lived them … and who further did not -like others after them in the Constantinian era (including Augustine's so-called missional revolution) -rely on the intolerant power of the empire or state to vindicate their Christian views by coercion or force.
In addition to the above-mentioned, it can be argued that the God-images of many Christian churches in the West were strongly influenced by Greek philosophical thought, and hence portrayed God as distant, wholly impassable, and impassionate. 5 Placher (1994) contends: Perhaps the strangest event in the intellectual history of the West was the identification of the biblical God with Aristotle's unmoved mover or some other picture, derived from Greek philosophy, of God as impassable and unchanging … much of the Christian tradition does seem to have portrayed God as unaffected and unaffectable. (pp. 3-26; see also Placher 1999:192) This portrayal of God eventually also 'prepared the way' to a modernistic anthropology where power, autonomy, and independence, that is, masculine values in patriarchal societies, became the ideological structure of many societies. Therefore, Koopman (2004:190-200) suggests that healthy (gender) relations today should be based on an anthropology of vulnerability, relationality, and dependence. That does not imply that all power and autonomy should be deemed universally dominating. However, the argument here is that a mission-minded pastoral approach intentionally opposes an anthropology of destructive autonomy and self-serving power. The reason being that such views of humans -often originating from unbalanced images of God and God's power -lie at the centre of many of the patriarchal traditions that often leads to the oppression of women, violence, and loss of human dignity.
God's omnipotence revisited from a biblical theological perspective
The Apostles' creed boldly starts with the confession, 'I believe in God, the Father almighty'. Belief in an all-powerful (omnipotent) God is indeed a core part of Christian faith. But the meaning of God's omnipotence is not necessarily evident to all who confess their belief. The almightiness of God does not mean God can do anything at all (Hall 1986:159). Biblical theology teaches us that God's power cannot act contrary to God's goodness, justice, and reason. God cannot lie or pervert 5.This understanding will obviously be viewed very differently in contexts in which the church is socio-politically weak (e.g. in Japan, where Christians are in the minority of about one of the entire population).
justice, because such acts are clearly contradictory with the Christian understanding of God's nature. Thus, the belief in the 'Father almighty' and the notion of God's omnipotence, must be aligned to a proper Christian understanding of God's nature and character (ed. McGrath 2011:209-212). God's almightiness is revealed by God's true character, overflowing with unfailing love and reliability (hesed). In this, we vividly see God's majestic sovereignty and glory. The Bible testifies throughout of God's gracious solidarity with our deepest human plight by the unbreakable bond of God's covenantal love. Thus, God's almightiness is also linked to (social) justice and righteousness (cf. Dt 10:17-19; Louw 2020).
In a wounded world, various meaningful perspectives on God's compassion can guide our reinterpretation of God's omnipotence. John Stott (2006:326) rhetorically asks how, in the real world of pain, one could 'worship a God who was immune to it?'. He (Stott 2006:323) contends that the God who is love, subjects himself to suffering, making himself 'vulnerable to pain', but without thereby weakening his sovereign omnipotence. Kazoh Kitamori (1965) Jürgen Moltmann (quoted in ed. McGrath 2018:35) even argues that a God incapable of suffering is a loveless being (like the God of Aristotle), but he qualifies the suffering: 'God does not suffer like his creature, because his being is incomplete. He loves from the fullness of his being and suffers because of his full and free love'.
In the above-mentioned views, a trinitarian understanding of divine providence is defined by the power of love -visible in the ministry, cross, and resurrection of Jesus and not by some profane notion of God as pure almightiness. Daniel Migliore (2004) asserts strikingly, regarding the power of the triune God: [I]s not raw omnipotence but the power of suffering, liberating, reconciling love. An emphasis on God as Trinity gives providence a different face. The God who creates and preserves the world is not a despotic ruler but our 'Father in heaven'; not a distant God but a God who becomes one of us and accompanies us as the incarnate, crucified, risen Lord; not an ineffective God but one who rules all things by Word and Spirit rather than by the power of coercion. The life and work of Christ presents his body, the church, with an ongoing challenge of doing missions 'under the cross', as the International Missionary Council recognised anew at its gathering in Willingen in 1952. Kirsteen Kim (2010) refers to this event by stating: Repenting of the triumphalist attitudes of the past, it was recognised that, like Christ himself, the church's mission is in weakness not in power (2 Cor 12:9), and that the way of Christ is through suffering before victory (Rm 6:4). So the sacrificial, self-giving nature of the incarnation (Phlp 2:5-11) was recognized again. (pp. 6-7) Jesus denounces the way of earthly rulers or kingdoms that reign by the sword. Jesus himself is killed by such enemies to whom he has shown nothing but love. All these events appear to indicate weakness. However, Paul claims the cross is 'the power of God and the wisdom of God' (1 Cor 1:24 Scripture reveals a pervasive emphasis on God's mighty power (Ps 24:7-8; Eph 1, etc.); therefore, we cannot merely ignore the notion of a God who has 'power over' others, just because we prefer to view God in an alternative way. In addition, we should not restrict our view of God's power only to what God does, or has done in history, because God does not exhaust his power in his works of creation and providence. Hence, understanding the relation between God's power and God's providence is crucial. God's majestic creation and God's sustaining providence therein (Rm 1:20), may lead many Christians to think of God's power as a type of brute strength that can overpower any obstacle by sheer force.
In this regard, Paul Helm (1993) argues: It is tempting to think of God as a Herculean figure, able to outlift and out-throw and outrun all his opponents. Such a theology would be one of physical or metaphysical power; whatever his enemies can do God can do it better or more efficiently than they … But Scripture does not teach that the doctrine of providence follows from divine power in this fashion … there is also a sense in which the providence reveals the weakness of God, and in which the providential purposes of God are furthered by that weakness. (p. 224) In this vein, Hendrikus Berkhof's (1985) theological logic when dealing with the issue of God's almightiness, is worth noting. Basically, Berkhof attempts to hold a symmetrical view of God's transcendence and God's immanence, viewing the latter as basis for the former. Hence, God's almighty power should always be interpreted via the lens of God's love, which is vividly displayed on the cross of Christ. The quality of God's power is different from human categories of power. Berkhof (1985:124-140) therefore uses the phrase 'weerlose overmacht', that is, vulnerable almightiness, which signifies a combination of God's almightiness and his identification with us (see also Placher 1999:204-205).
In short, God identifies with our suffering in the world. Such an identification makes evident the power of God, which includes God's vulnerable solidarity and compassion. God's power cannot be divorced from compassion and responsibility (Migliore 2004:83). It is the power of resurrection and transformation which brings new life out of the suffering and evil of the world (Inbody 1997:140). Through the Holy Spirit, God's power is the power to create new life (Ezk 31:10), to cure, and to rebuild rather than the power to impose, that is, to control. All in all, God's power is indeed defined by his overflowing love -defined by the Spirit of love (Rm 5:5) -to bring about justice, freedom, and well-being (Kim 2012:134-135). Now the important question needs to be answered: How do these insights into God's power relate to the issue of maturity in spiritual formation as part of mission-minded pastoral care?
Cultivating faith maturity through vulnerability as core aim of missionminded spiritual formation Maturity defined Conradie (2016:5) defines maturation as 'the process of becoming mature, the emergence of individual and behavioural characteristics through growth processes over time'. The goal of reaching maturity is widely accepted across cultural disparities. Such a maturation process can be actualised in various spheres of life, for instance as an individual, or as an institution. The process is also multidimensional and can, for example, be hindered by injustices. Regarding individual maturation, bodily or emotional dimensions can (among others) be distinguished. Conradie (2016:6) emphasises that emotional maturity is more complex to analyse, 'because it is never complete'.
As was stated in the introduction, maturity in this article is specifically focused on growing, or maturing in Christian faith. Therefore, it is linked to the practices of spiritual formation. From a biblical perspective, isolated 'maturity', bent in on the self, is undesirable (1 Cor 10:24). Growing toward maturity, from a Christian perspective, implies growing toward the awareness that human existence is inherently relational. Mature Christians therefore realise that they do not belong only to themselves, but also to others, be they immediate (human) others or the ultimate Other (God).
The apostle Paul says in Philippians 3:12: 'I do not claim that I have already succeeded or have already become perfect. I keep striving to win the prize for which Christ Jesus has already won me to himself' Indeed, Christian believers are all still pilgrims on this journey of spiritual formation, which Christ himself has initiated. An important observation to note at this point is that spiritual or faith maturity does not grow in isolation. Conradie (2016:6) aptly highlights the important fact, that an 'individual cannot reach maturity without being part of a network of relationships that becomes mature in love'. Conradie's remark rings true in the individual vs. communal sense of, for instance Christian spiritual formation. But that is not all. In addition, it is essential to realise we cannot and should not isolate our spiritual growth from our emotional growth. The premise is that faith maturity implies that we should also become more emotionally mature people.
However, we know that emotional maturity does not always come to fruition as we grow in our faith. For instance, in the North American context, spiritual maturity has become an acute issue. In his book titled, From here to maturity, Thomas Bergler (2014) argues for surmounting what he defines as 'the juvenilization of American Christianity'. Bergler focuses on churches' youth ministries, but asserts that the dilemma of immaturity is not merely a problem to be solved by adolescents, parents, or youth ministers. The real problem is the glorification of this immature state that is pervasive not only in culture at large, but also in (American) churches. Even adult believers adapted the Christian faith to adolescent preferences, morphing it into a Christianised version of adolescent narcissism. Bergler's antidote (2014) is found in the spiritual nurturing of young people by creating an intergenerational culture of growth for them -mutually, vis-à-vis adultstowards spiritual maturity. This mutual, dynamic process includes observation, learning, teaching, and sharing. It takes place in a broader context of human relationality, because one cannot mature spiritually without both being nurtured by and contributing to the growth of others (Bergler 2014:111-112). Bergler (2014) further emphasises that reaching spiritual maturity is not an unattainable magical process equal to distant perfection. Lastly, he (Bergler 2014:38) laments the fact that 'juvenile spirituality' fails to prepare people for suffering as it is depicted in the Bible: '[M]ature Christians persevere in love, even through hard times'. We know that suffering can generate patience and endurance (Rm 5:3-5). It can release one from being overly self-focused and make one aware of the needs of others. It can also develop maturity and a sense of meaning that enables you to transcend the boundaries set by your present circumstances.
Maturity actualised in the daily lives of Christian disciples living in a wounded world
The goal of Christian maturity is the fruit of a life lived by the Spirit of God, who empowers Christian believers with a living hope (1 Pt 1). Pastoral caregivers keep this goal in mind when they aim to facilitate spiritual health and, in the process, help fostering a mature faith. They are called to equip the faith community to this end. This process is called spiritual formation, and it forms part of mission as authentic discipleship, which involves a (life-long) process of transformation in individual and collective attitudes, behaviours, and values. Church-based pastoral caregivers' core structure for support and counsel in this process -as part of the missio Dei -is the liberating truth of the cross and resurrection of Christ.
But what might 'maturity' mean for Christians today? God promises that he will accomplish his good purpose of maturing us and making us more like his Son ( Further, the unnamed author of the letter to the Hebrews, addresses Christians' struggling with pressures to compromise their faith. Therefore, he exhorts and encourages them instead to 'go on to maturity' (Heb 6:1). The word 'mature' (teleios) belongs to a family of words in the New Testament which convey the notion of wholeness. It can also mean 'perfect' or 'complete', without lacking anything, for example a full life. It becomes clear that the road to maturity is not one of upward mobility, but of sacrificing the self and of servanthood (Phlp 2). Followers of Christ are called to a 'foolish' way of denying one's own desires for self-fulfilment. The way of self-denial is different from 'worldly' wisdom. This kenotic way does not cling to its power, but instead shares it to establish the reign of God (shalom; Mt 5).
A Christological view on maturity, highlights the fact that Jesus is both the author of our faith and the one who matures and perfects it (Heb 12:2). Indeed, our Lord is the One who became mature through his sufferings (Heb 2:10; 5:8-9). In Jesus' sermon on the mount (Mt 5:48), he says: 'Be perfect as your heavenly Father is perfect' (i.e. mature). In the previous verses (5:17-47) Jesus describes the qualities of spiritual maturity. A pneumatological perspective on maturity reveals the reality that mature Christians are shaped by the power of the Holy Spirit to produce a character which displays the fruit of the Spirit (Gl 5:22-25) -both as individual Christians and as part of the body of Christ. The Spirit of Christ gives us spiritual gifts to equip us for God's work (Rm 12:6-8; 1 Cor 12:1-11), and to make us more like Christ himself (2 Cor 3:18). Mature Christian women and men are committed to serve one another with the fruit that comes from the Spirit, because -as The Cape Town Commitment (II-F-3) advocates -'we should not quench the Spirit by despising the ministry of any'.
How then, does spiritual maturity relate to the issue of power? Significantly, keeping in mind previous discussions on power, the characteristics of Christian maturity (discussed here above) represent the opposite of unilateral power and its abuse. Christian maturity, based on a healthy spirituality, implies a constant awareness of the presence of power and its mechanisms in our lives. Consequently, mature Christians seek forms of shared power -'power with' -without veiling or amplifying existing differences in power. They recognise their own desire for power as a 'daily vice' (Pollefeyt, quoted in ed. Dillen 2014:xix).
Mission-minded pastoral caregivers embody and witness to the truth that our worth, power and dignity as equal image bearers, reside in receiving and sharing God's unfailing love. The more we practice the sharing of God's love on our journey as disciples, the more we grow mature in Christ. Ultimately, our human identity is not equivalent to our achievements, success, or power. Instead, our Christian identity flows from the capacity and choice for hospitable and loving relationships.
Conclusion
From all the arguments presented in this article thus far, and in line with the article's main research question and theme, it follows that mature Christians' faith content should reflect a meaningful understanding of God, specifically God's power, which enables a fullness of life for all. Pastoral caregivers are called to lead believers to a mature faith that embodies a congruency between their beliefs about God, and their words and deeds, coram Deo. Essentially, such an integrated, mature faith's basic characteristics are service, compassion and empathy that is sharing God's power through love (diakonia). Hence, I contend that Christian caregivers should discern God's power theologically -not merely as 'power over', but rather as 'power with or within'. Such discernment will help them to foster mature disciples of Christ -through the lifelong process of spiritual formation -who can in turn lead apostolic communities toward justice and dignity for the sake of God's kingdom in the world.
Pastoral caregivers (lay and also ordained) guide Christian believers on their journey of spiritual formation to grow into a loving, mature body focused on Christ, its head (Eph 4:14-16). In this vein, pastoral theology and pastoral care -as part of the long history of the care of souls (cura animarum) in Catholic and Protestant church settings -plays a formative role in the lives of Christian believers.
During the process of faith formation and growth to maturity in Christ, mission-minded pastoral caregivers need to heed to the reality that power is everywhere. It is a complex phenomenon. Everyone has the opportunity for power, and the possibility of exercising power. No one is completely powerless. However, although everyone has the access to power, not everyone possesses the same (amount of) power (Reynaert, in ed. Dillen 2014:3-16). Mature Christians takes power into account by constantly recognising and staying aware of it, and by seeking for forms of shared powerpower with -without blurring or amplifying power differences in human relations.
Pastoral caregivers are called to intentionally strive to use power in a kenotic way by radically following Jesus' example in Philippians 2:5-11 responsibly. The actual transforming power flows from Christ through the work of his Holy Spirit and not via our human-made patriarchal power. God has all the power necessary to be God, and graciously makes this unique divine power -in love -available in all necessities and circumstances (Graham 2002:276).
Our power relationships and how we deal with it as Christians, are regulated by our views of who God is and what God's power entails.
It was indicated throughout how a balanced theological view of God's power can benefit pastoral care practitionerswho operate from a missional perspective -to mitigate the vulnerabilities caused by injustice and unequal distributions of power. I conclude with the assertion that the veracity of the unfailing love (hesed) of a compassionate God has been made manifest in the cross and resurrection of Jesus Christ who's presence, through his Spirit, gives us ultimate (eschatological) hope. Christ's cross (representing God's vulnerable identification) and resurrection (representing God's powerful transformation; Rv 11:15-19), even now anticipates a new quality of being human. This new qualitygiven by the empowering Spirit of God -is orientated toward discipleship and maturity in Christ, which should intentionally promote human dignity, justice and vulnerable courage in our wounded world. | 8,514.8 | 2023-04-26T00:00:00.000 | [
"Philosophy"
] |
Numerical study of the lean premixed PRECCINSTA burner with hydrogen enrichment
Hydrogen combustion is one of the most promising solution to achieve a global decarbonization in power production and transports. Pure hydrogen combustion is far from becoming a standard but, during the energy transition, hydrogen co-firing can be a feasible and economically attractive shortterm measure. The use of hydrogen blending gives rise to several issues related to flashback, NOx emissions and thermo-acoustic instabilities. To improve the understanding of the e↵ect of hydrogen enrichment, herein a numerical analysis of lean premixed hydrogen enriched flames is performed by means of 3D unsteady CFD simulations. The numerical model has been assessed against experimental results for both cold and reacting flows in terms of velocity profile (average) and flame shape (mean OH* radical fields). The burner under investigation is the swirl stabilized PRECCINSTA studied at the Deutsches Zentrum für Luft-und Raumfahrt (DLR). The DLR’s researchers have shown the e↵ect of hydrogen addition on the flame topology and combustion instabilities at various operating conditions in terms of thermal power, equivalence ratio and H2 volume fraction. Simulations are in good accordance with experimental data both in terms of velocity and temperature profiles. The numerical model provides a qualitative estimation of the flame shape.
Introduction
In the past decades, natural gas power generation represented one of the main solutions for the coal-to-gas transition. The replacement of coal with natural gas avoided 95 Mt/year of CO2 emissions [1] but, nowadays, this is no longer sufficient. In 2014, the European Commission proposed a reduction of Greenhouse Gases emissions by 40% with respect to the 1990 levels until 2030 [2].
Green hydrogen combustion can represent a viable solution to reach the decarbonization of power generation industry since it is a carbon-free energy vector for combustion and energy-storing. Many researchers are currently studying the e↵ect of hydrogen addition in natural gas-fuelled turbine with the aim of reaching 100% hydrogen combustion until 2030 [3].
Retrofit solutions for existing gas turbines are currently being developed and they can play a crucial role to move the first steps into hydrogen combustion technology. In fact, with minor changes to the actual combustor design, co-firing of hydrogen up to 30% (by volume fraction) can be achieved, resulting in 11% of carbon reduction [4]. On-field experience could lead to further development of current gas turbine technology allowing up to 100% hydrogen firing. Most of existing gas turbines can be retrofitted to either partially or fully burn hydrogen with small modifications, avoiding large capital spending and extending lifetime of existing plants [5].
Even though, green hydrogen is a carbon-free high-energy content vector, its employment in gas turbine combustion gives rise to several issues. Hydrogen flame speed is up to 10 times faster than natural gas, resulting in a higher risk of flashback and consequently damage to the hardware. Hydrogen addition modifies the thermo-acoustic behavior of the combustion system locally, increasing flame temperature, which could lead to higher NOx emissions. Higher auto-ignition risk due to lower ignition delay time should also be accounted. However, hydrogen properties a↵ect also positively the combustion process. Hydrogen addition widens the flammability limits of the fuel, allowing leaner combustion (and thus lower adiabatic flame temperature) with lesser risk of blowout and reduction of NOx emissions.
With the aim to improve the understanding of hydrogen-enriched methane turbulent combustion, a premixed burner tested at the Institut für Verbrennungstechnik, Deutsches Zentrum für Luft-und Raumfahrt (DLR) has been investigated by means of numerical simulations. The burner under study is the well-known PRECCINSTA burner, which has been widely studied both experimentally [6,7] and numerically [8,13]. The burner features a swirled stabilized flame, produced by twelve radial vanes downstream of a plenum. In addition to the swirler, a central hub is used to stabilize the flame and hold it attached to the burner. The combustion chamber has a squared cross section, ending into an exhaust duct.
Experiments conducted at DLR showed the e↵ect of hydrogen addition in terms of flame shape, sound pressure and peak frequencies for various operating conditions. In this work, 3D unsteady RANS simulations have been performed for both cold and reacting flows. The numerical model has been assessed against experimental results. First, a 3D unstructured mesh of the burner has been created and a grid independence study (for the cold flow) has been performed. Several combustion models (Species Transport and Partially Premixed) have been used to simulate the coventional methane-air combustion. Finally, a methane-hydrogen fuel blend (20% H 2 by volume) has been simulated at fixed equivalence ratio and thermal power. Flame shapes in terms of OH * molar concentrations have been compared against experimental OH * fields obtained by chemiluminescence imaging.
Case study
The geometry investigated in this work reproduces the premixed swirl-stabilized PRECCIN-STA burner (see Fig. 1). The air enters the plenum trough a 25 mm diameter inlet section and then passes into the swirler, composed by 12 radial vanes. The fuel is injected through 1 mm orifices in the swirl vanes. Here, the fuel-air premixing takes place and the flow enters the combustion chamber through a burner nozzle with an exit diameter of D = 27.85 mm and a conical inner blu↵ body. The combustion chamber has a squared cross-section of 85 ÷ 85 mm 2 and a height of 114 mm and ends with a conical surface followed by an exhaust duct with 40 mm inner diameter. This burner has been experimentally studied at DLR [14]. In the cited study, flame shapes for various configuration are observed using OH * line of sight chemiluminescence imaging, a common indicator of heat release. Data were collected for various fuel/air equivalence ratios (0.70, 0.85, 1.05), H 2 volume fraction (0 to 40 % in 5% increments) and thermal power (10,
Conservation equation
Numerical investigations have been conducted in the ANSYS Fluent ® environment in order to simulate the experimental work. Herein, unsteady 3D RANS equation have been employed. Continuity and momentum equations can be written in 3D Cartesian coordinates as [15]: Here ⇢ is the density, p is the pressure, u is the velocity, and the subscript i, j, k are the three Cartesian coordinates. The terms on the right hand side of Eq. (2) are respectively the pressure gradient, the divergence of the viscous stress tensor and the divergence of the Reynolds stresses tensor.
The energy equation can be written as: where e is the energy, k the thermal conductivity and σ i, j the divergence of the total stresses tensor.
Turbulence modeling
The model used for the turbulence closure of the U-RANS equations is the Realizable k − ✏ model [16]. The k-✏ model [17] is a semi-empirical model based on model transport equations for the turbulence kinetic energy (k) and its dissipation rate (✏). The k-✏ models have been widely used in last years to simulate combustion processes where the wall treatment is not a primary concern, thanks to their robustness, economy, and reasonable accuracy [18]. The Realizable k − ✏ model provides superior performance for flows involving rotation and recirculation [15]. Other turbulence models has been used (i.e. Transition-SST, Reynold Stress, DES-RANS models) but no significant variations in the results have been detected.
Partially premixed combustion model
Modeling turbulent combustion is one of the most difficult challenges for computational fluid dynamics. Combustion model can be divided into two macro categories: premixed and nonpremixed (or di↵usion) combustion.
Premixed combustion models assume that fuel and oxidizer enter the computational domain already perfectly premixed at molecular level. The combustion occurs as a thin flame front that propagates from the burnt gases (products) to the fresh unburnt gases (reactants). The flame front propagation is modeled by solving a transport equation for the densityweighted mean reaction progress variable c, which ranges from 0 in the fresh gases to 1 in the burnt gases.
In non-premixed combustion fuel and oxidizer are not mixed before and they enter the computational domain in distinct streams. Mixing must bring reactants in the reaction zone fast enough for combustion to proceed. Under the assumptions of constant thermodynamic pressure and low Mach numbers, heat capacities equal and constant for all species, Lewis numbers all equal to unity [9], the thermochemistry can be reduced to a single parameter: the mixture fraction f , defined as where Z i is the mass fraction of the generic element i. The subscript ox denotes the value at the oxidizer stream inlet and the subscript f uel denotes the value at the fuel stream inlet. Mixture fraction indicates how much of the total mixture comes from the fuel stream. Under certain hypothesis [9], the mixture fraction method allows to break down the problem into two parts: the mixing problem, consisting in calculating the f -field, and the flame structure problem, consisting in linking the temperature, fuel mass fraction and oxidizer mass fraction to f . The termochemistry is pre-calculated through a reaction mechanism, the interaction between chemistry and turbulence is modeled using the mixture fraction variance and the informations on temperature, density and species are linked to the mixture fraction f through a Probability Density Function (PDF). The laminar flame speed is calculated through a piecewise-linear polynomial function of f realized to fit results obtained in detailed simulations from previous works [19].
While in theory these two models are sharply distinguished, in practical application they often tends to overlap. In the case study, fuel and oxidizer enter the combustion chamber from two di↵erent streams, but the mixing occurs upstream of the combustion chamber. In this work, the Partially Premixed model has been used: it is a combination of the two previous models that solves a transport equation for both the mean reaction progress variable c (to determine the position of the flame front ) and the mean mixture fraction f (and its variance f 0 ). Turbulent flame speed has been calculated through the Zimont model [10][11][12].
Chemistry can be modeled as being in chemical equilibrium (Equilibrium model), being near chemical equilibrium (Steady Di↵usion Flamelet model), or far from chemical equilibrium (Unsteady Laminar Flamelet model).
With the Chemical Equilibrium model, fuel properties are calculated through nonadiabatic equilibrium calculation and they only depends on mean mixture fraction, mixture fraction variance and enthalpy levels. Kinetic e↵ects are not accounted since a reaction mechanism is not present in the model.
The idea behind the steady laminar flamelet approach is the modeling of a turbulent flame brush as an ensemble of discrete, steady laminar flames, called flamelets. The individual flamelets are assumed to have the same structure as laminar flames in simple configurations, and are obtained by experiments or calculations. These laminar flamelets are then embedded in a turbulent flame using statistical PDF methods. This formulation takes into account local turbulence e↵ects via strain rates. Thus, the results are not only dependent on the local mixture of fuel and oxidizer and enthalpy levels, but also on the local turbulence level. Laminar flamelet approach allows realistic chemical kinetic e↵ects modeling with considerable computational savings.
Combustion can be modeled as adiabatic or non-adiabatic: adiabatic formulation is a simpler model that involves a two-dimensional look up table but does not allow the modeling of some type of reacting systems (like multiple fuel inlets systems).
In this work, a Partially Premixed model with both Steady Di↵usion Flamelet and Chemical Equilibrium modeling of thermochemistry is used. The Steady Di↵usion Flamelet model model proved to be the most reliable for this case study.
Numerical model
The cold and reacting flow have been studied by means of 3D unsteady RANS simulations.
Boundary conditions for the cold flow and the methane-air combustion have been collected and deduced from [8]. For the cold flow, a mass flow inlet with G a = 12 g/s condition has been imposed on the inlet section. A pressure-outlet condition with atmospheric pressure has been imposed at the outlet and the walls are considered adiabatic and with the no-slip condition. It must be noted that the atmospheric chamber downstream of the exhaust tube is added to push the outlet boundary condition as far as possible from the zone where the mixing and the combustion occurs to avoid the influence of the boundary condition on the results. The methane reacting case is characterized by the same air mass flow rate (G a = 12 g/s), an equivalence ratio of 0.75, an and a thermal power of 27 kW. Boundary conditions for hydrogen combustion have been inherited by the operating condition reported in [14]. The φ = 0.70, H 2 volume fraction = 20%, P th = 20 kW, case has been chosen as reference case for the numerical simulations.
The Realizable k − ✏ model has been used for the turbulence closure of the U-RANS simulations for both cold and reacting flow. The combustion for both methane and hydrogen was modeled with the Partially Premixed combustion model. A time step of ∆t = 2 ⇥ 10 −5 s was used for all the unsteady simulations. Solution convergence was determined when the residuals was less than 10 −3 for all the variables.
Grid independence study
Before starting the reacting flow simulations, a grid sensitivity analysis has been conducted on two grids by means of unsteady cold flow RANS simulations.
Unstructured meshes with two di↵erent grid densities have been created by means of Pointwise ® (see Table 1 and Fig. 2). The fine grid has been realized through a refinement of the coarse grid, especially in the mixing zone (between the burner nozzle and the blu↵ body) and in the first half of the combustion chamber (see Fig. 3).
These grids have been compared to the grid used in [8]. In particular, mean axial and tangential velocity profiles for various sections in the combustion chamber have been compared. It must be noted that in the reference work, LES simulations have been performed thus the grid used is much finer (around 3 millions cells) than the grids created in this study.
Results show good accordance with the reference case (see Figs. 4 and 5). The two grids show a similar trends and the di↵erences on the velocity profile are negligible. However, the fine grid has been chosen because the additional time required for the simulation is negligible and the fine grid may resolve better the mixing process in the reacting flow simulations. The mechanism used to simulate the reaction kinetics for hydrogen combustion is the renowned GriMech 3.0, which provides detailed chemistry, featuring 325 reactions and 53 species [20]. The GriMech mechanism has been already used to study hydrogen-methane flames [21,22] and it has been validated for a wide range of global fuel equivalence ratios and of hydrogen contents in fuel blends. In order to evaluate the OH * molar concentration, the reaction mechanism to model the formation and quenching of OH * used by [23], consisting of 12 reactions (2 for formation and 10 for quenching), has been added to the base reaction mechanism.
Methane combustion
A campaign of RANS simulations has been performed to investigate the Partially Premixed combustion model and to find the best fitting configuration. This study has been conducted on a methane-air case to compare the results with a reference case simulated by mean of LES simulation, on the same geometry configuration [8].
The first case is a chemical equilibrium, adiabatic simulation. The boundary condition are the ones used in [8] and described in Section 3.4. As stated in Section 3.3, chemical Fig. 6. Here, mean temperature profiles for two sections in the combustion chamber have been compared. Both models predicts a lower temperature at high radial distance and over-predict it near the symmetry axis. Steady Flamelet model fits better the reference results near to the minima of the profiles. Di↵erent inlets for the fuel are usually an issue for adiabatic models. For this reason, a simulation with steady laminar flamelet model with non-adiabatic formulation has been performed. As it can be seen from Fig. 6, the non adiabatic model fits better the reference results, especially for high radial position. The model under predicts the temperature profiles (mean error is 7.8%) but, considering that the reference case results are performed on a much more finer grid and by means of LES simulations, the accuracy is acceptable.
Hydrogen addition
Once the combination of combustion and turbulence models has been chosen (i.e., Partially Premixed Combustion, GriMech 3.0 reaction mechanism with OH * sub-mechanism [23], Realizable k−✏ turbulence model), a simulation with a blended fuel of methane and hydrogen has been carried out. The boundary conditions have been adjusted to match the ones used in the reference experimental work [14], i.e., thermal power 20 kW, equivalence ratio 0.70 and H 2 volume fraction 20%. The flame shape assessment has been performed comparing the mean OH * molar concentration with the experimental line-of-sight OH * chemiluminescence imaging reported in [14]. As stated in [24], OH * emission intensity is proportional to OH * concentration, so the intensity distribution of OH * chemiluminsecence can be described qualitatively by the OH * concentration distribution.
In order to post process and compare the experimental flame shape against numerical results, a Matlab ® script has been written. Through this script, the OH * fields picture reported in [14] and in Fig. 7a, has been converted into an indexed gray-scale image (with 256 levels), returned as a numeric array of the same dimensions of the input gray-scale image and compared with the mean OH * molar concentration profile monitored by Fluent ® (normalized by the maximum global value) for several sections of the combustion chamber.
Results for the case with hydrogen addition are shown in Fig. 7. It can be seen that the numerical and experimental profile have the same trend, with a plateau in the central zone and two peaks around 10 and 20 cm away from the symmetry axis of the combustion chamber. Experimental and numeric peaks positions are close near the chamber inlet while they tend to move away as the distance from the chamber inlet increases.
Conclusions
Unsteady RANS simulations have been used to investigate the a↵ect of hydrogen addition on the flame shape of a premixed swirled burner and the results were compared to experimental Figure 7. a) Flame shape by OH * chemiluminescence imaging from [14]; b) Numerical flame shape by OH * molar concentration; c) -f) OH * normalized molar concentration comparisons data. Mean flow and temperature profile were studied for cold and reacting flow respectively. A comparison between experimental OH * chemiluminescence imaging and numerical OH * molar concentration fields have been performed in order to evaluate the flame shape. Results are in good agreement with the experimental data, considering that the experimental flame shape (Fig. 7a) is obtained using time-resolved OH * chemiluminescence imaging so it is likely to capture the radiation emitted in the whole combustion chamber while the OH * molar concentration field (Fig. 7b) is evaluated on a single section (parallel to the symmetry axis). While there is an o↵set, the results showed in Figs. 7c and 7d, exhibit a good accordance with the experimental data. Future studies will be focused on an improvement of the OH * reaction mechanism and on the post-processing techniques used to link OH * molar concentration and heat release. Nevertheless, unsteady RANS proved to be a reliable technique for preliminary turbulent premixed combustion analyses. | 4,492.6 | 2021-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Mirror Symmetry and Fano Manifolds
We consider mirror symmetry for Fano manifolds, and describe how one can recover the classification of 3-dimensional Fano manifolds from the study of their mirrors. We sketch a program to classify 4-dimensional Fano manifolds using these ideas.
Introduction
We give a sketch of mirror symmetry for Fano manifolds and we outline a program to classify Fano 4-folds using mirror symmetry. As motivation, we describe how one can recover the classification of Fano 3-folds from the study of their mirrors. A glance at the table of contents will give a good idea of the topics covered. We take a stripped-down view of mirror symmetry that originated in the work of Golyshev [Gol07] and that can also be found in [Prz07].
Local systems
A local system of rank r on a (topological) manifold B is a locally constant sheaf V of r-dimensional Q-vector spaces. To give a local system is equivalent to give its monodromy representation ρ : π 1 (B, x) → Aut V x ∼ = GL r (Q) where x ∈ B. We write r = rk V. The central theme of this note is the detailed comparison of two different ways that local systems arise in mathematics.
All local systems in this note: (a) support-at least conjecturally-an additional structure such as a (polarised) variation of (pure) Hodge structure, or a structure of an l-adic sheaf over a base B defined over a number field. 1 ; and (b) have an integral structure, for instance they are local systems of free Z-modules. In particular we assume throughout that V is polarised, i.e. that it carries a nondegenerate symmetric or antisymmetric bilinear form ψ : V ⊗ V → Q.
Let C be a compact topological surface, S ⊂ C a finite set, and V a local system on U = C \ S. We denote by x ∈ U a point and by j : U = C \ S ֒→ C the natural (open) inclusion. If s ∈ S and γ s ∈ π 1 (U, x) is a loop around s, then we write T s = ρ(γ s ) ∈ Aut V x for the monodromy transformation; T s is defined only up to conjugation, but this will be unimportant in what follows.
Definition 2.1. The ramification of V is: If V as above is a local system on U = C \ S, and the genus of C is g, then, by Euler-Poincaré, rf V + (2g − 2) rk V = −χ(C, j ⋆ V). If V is nontrivial irreducible, then H 0 (C, j ⋆ V) = V π1(U,x) x = (0) and, dually, also H 2 (C, j ⋆ V) = (0). Thus, if C = P 1 and V is nontrivial irreducible, then: We call the quantity rf V − 2 rk V the ramification index of V. Even from a purely topological perspective, local systems with ramification index zero seem special. As far as we know, to date there has been no systematic study of l-adic sheaves on P 1 of ramification zero.
Local systems from Laurent polynomials
Local systems arise classically in algebraic geometry as the cohomology groups of the fibers of a morphism f : X → B.
The classical period of a Laurent polynomial. We discuss the special case where f : (C × ) n → C is a Laurent polynomial in n variables, that is, an element of the polynomial ring C[x 1 , x −1 1 , . . . , x n , x −1 n ] where x 1 , . . . , x n are the standard co-ordinates on (C × ) n .
Definition 3.1. Let f : (C × ) n → C be a Laurent polynomial. The classical period of f is: Theorem 3.2. The classical period satisfies an ordinary differential equation L · π f (t) ≡ 0, where L ∈ C t, D is a polynomial differential operator and D = t d dt .
Proof. In short: our period π f (t) is a specialisation of integrals which are solutions of the differential systems introduced in [GZK89], for which we recommend the survey [Sti07]. We next explain this in greater detail. Let P ⊂ Z n be the Newton polytope of f and denote by m 0 , . . . , m N ∈ P ∩ Z n the lattice points in P . If P does not contain the origin then the classical period is constant and there is nothing to prove, so we assume that m 0 = 0. Write: Reparametrizing t if necessary, we reduce to the case where a 0 = 0. Denote by ι : Z n ֒→ Z n+1 the affine embedding "at height 1": u i x mi is the generic Laurent polynomial with Newton polytope P , then it is well-known [Bat94,Sti07] that the period: dx n x n satisfies the holonomic differential system 2 gkz(A, c) where c = (−1, 0, . . . , 0) [Sti07, §2.5]. To get the operator L, restrict the coefficients to u i = a i for i > 0, change the variable u 0 to t = −1/u 0 , and note that π f (t) = u 0 Φ g (u 0 , a 1 , . . . , a n ).
2 That is, the system of differential equations: More precisely the period satisfies the extended GKZ system of [HKTY95, §3.3] or, equivalently, the better behaved GKZ system of [BH]. In the important case when P is a reflexive polytope, the standard GKZ is the same as the better behaved GKZ. The rank of the local system of solutions of the better behaved system is always the normalised volume Vol P .
Definition 3.3. The Picard-Fuchs operator L f ∈ C t, D is the operator: such that L f · π f ≡ 0, where k is taken to be as small as possible and, once k is fixed, we choose L f so that deg p k is as small as possible. This defines L f uniquely up to multiplication by a constant. We say that the order ord L f of L f is k, and the degree deg L f is the maximum of deg p 0 , deg p 1 , . . . , deg p k .
It is clear from what we said above that ord L f ≤ Vol P .
Remark 3.4. The local system Sol L f is an irreducible summand of the polarised variation of Hodge structure gr W n−1 R n−1 f ! Z (C × ) n . By [Del71,Thm 4.5], L f has regular singularities.
How to compute the Picard-Fuchs operator and the ramification. Consider the period sequence (c m ) m≥0 , where c m = coeff 1 (f m ). Expanding π f (t) as a power series in t and applying the residue theorem n times yields: In practice, to compute L f one uses knowledge of the first few terms of the period sequence and linear algebra to guess the recursion relation; note that the computation of c m , say for 1 ≤ m ≤ 600, is very expensive. Given L f , one can compute rf(Sol L f ) algorithmically using elementary Fuchsian theory.
Example 3.5. If f (x, y) = x + y + x −1 y −1 , then: The coefficients satisfy the recursion relation: and, by what we said, this is equivalent to: Studying this ODE, one finds that the ramification defect rf(Sol L f ) − 2 rk(Sol L f ) is zero.
Local systems from quantum cohomology
Local systems also arise in the study of quantum cohomology, as solutions of the regularised quantum differential equation. When X is a Fano manifold, the space of solutions of the regularised quantum differential equation for X defines a local system on P 1 \ S.
Fano manifolds. Recall that a complex projective manifold X of complex dimension n is called Fano if the anticanonical line bundle −K X = ∧ n T X is ample. If n = 2, X is called a del Pezzo surface. It is well-known that a del Pezzo surface is isomorphic to P 1 × P 1 or the blow up of P 2 in ≤ 8 general points: thus, there are 10 deformation families of Fano manifolds in two dimensions. There are 105 deformation families of 3-dimensional Fano manifolds: 17 families with b 2 = 1 and 88 families with b 2 ≥ 2 [Isk77, Isk78, Tak89, MM04]. We state a theorem of Mori that plays a crucial role in what follows: Theorem 4.1. Let X be a Fano manifold. Denote by NE X ⊂ H 2 (X; R) the Mori cone of X: that is, the convex cone generated by (classes of ) algebraic curves C ⊂ X. Then NE X is a rational polyhedral cone.
The quantum period of a Fano manifold. When X is Fano, denote by X 0,k,m the moduli space of stable morphisms f : Here we are mainly interested in X 0,1,m and the evaluation morphism at the marked point: ev : X 0,1,m → X Denote by ψ the first Chern class of the universal cotangent line bundle on X 0,1,m , that is, the relative dualising sheaf ω π of the forgetful morphism π : X 0,1,m → X 0,0,m .
Definition 4.2. The quantum period of X is the power series G X (t) = m≥0 p m t m where p 0 = 1, p 1 = 0, and p m = X0,1,m ψ m−2 ev ⋆ (pt) for m ≥ 2. The sequence (p m ) m≥0 is the quantum period sequence.
Theorem 4.3. The quantum period of a Fano manifold X satisfies a ordinary differential equation Q · G X (t) ≡ 0, where Q ∈ Z t, D is a polynomial differential operator and D = t d dt . Proof. In short: our quantum period G X (t) is a specialisation of one component of the small J-function. The result then follows from general properties of quantum cohomology going back to Dijkgraaf. We next recall the relevant facts from the theory of quantum cohomology 3 and explain this in greater detail.
In what follows we denote by X 0,k,β the moduli space of stable morphisms of class β ∈ NE X ∩ H 2 (X, Z). Recall that the small quantum product a * b of even degree cohomology classes a, b ∈ H • (X; C) is defined by the following formula, which is to hold for all c ∈ H • (X; C): where (a, b) = X a∪b is the Poincaré pairing, q β lies in the group ring C[H 2 (X; Z)] 4 , and: a, b, c 0,3,β = is the 3-point correlator. The structure of the small quantum product is equivalent to an integrable algebraic connection ∇ on: • the trivial bundle with fiber the even part H ev (X; C) of H • (X; C), over In other words T is the torus with character group Hom groups (T, C × ) = H 2 (X; Z), co-character group Hom groups (C × , T) = H 2 (X; Z), and group of C-valued points The fact that this connection is algebraic globally on T (in fact, the coefficients of the connection are polynomials) follows from the fact that quantum cohomology is graded and that −K X > 0 on NE X. The fact that the connection is integrable (flat) is a fundamental property of quantum cohomology: it follows from the WDVV equations. Integrability means that the action of Lie Recall that the small J-function of X is: where J β = ev ⋆ 1 1−ψ , ev : X 0,1,β → X is the evaluation map at the marked point, and we expand 1 1−ψ as a power series in ψ. It is well-known that J X (q) is a solution of the quantum D-module and therefore it tautologically satisfies an 4 In general we should work with the subgroup H 2 (X) alg ⊂ H 2 (X); here and in the rest of the paragraph we use the fact that if X is Fano manifold then H 2 (X) = H alg 2 (X). 5 Here O and M are sheaves of D-modules in the analytic topology on T, and Hom is the sheaf of homomorphisms.
algebraic PDE. Note that J X (q) is cohomology-valued but it makes sense to take its degree-zero component J 0 X (q) ∈ H 0 (X; C); we can regard J 0 X (q) as a C-valued function, because H 0 (X; C) is canonically generated by the identity class 1.
Finally, the anticanonical class −K X ∈ H 2 (X; Z) is a co-character of T, that is, −K X gives a group homomorphism which we denote κ : where t is the co-ordinate function on C × , the discussion above makes it clear that G X (t) satisfies an algebraic ODE.
Definition 4.4. The quantum differential operator of X is the operator Q X ∈ Z t, D of lowest order, as in Definition 3.3, such that Q X · G X (t) ≡ 0.
How to compute Q X . In practice one starts by fixing a basis {T a } of H ev (X; Z) with T 0 = 1 the identity class. Let M = M (t) be the matrix of quantum multiplication by −K X in this basis, written as a function on C × by composing with κ : C × → T. Next consider the differential equation on C × : Then the first column of Ψ is J X • κ(t); the first entry of the first column is our quantum period G X (t). Next we consider the system: The column s 0 is annihilated by the differential operator Q X = D 3 − 27t 3 , and so Computing G X using the quantum Lefschetz theorem. We explain how to calculate the quantum period of a Fano complete intersection in a toric manifold using the quantum Lefschetz theorem of Kim, Lee, and Coates-Givental. For us, a toric variety is a GIT quotient X = C r / / χ (C × ) b where (C × ) b acts via the composition of a group homomorphism ρ : (C × ) b → (C × ) r with the canonical action of (C × ) r on C r . The group homomorphism ρ is given dually by a b × r integral matrix: that we call the weight data of the toric variety X. The weight data alone do not determine X: it is necessary to choose a stability condition, i.e. a (C × ) b -linearized line bundle L on C r . This choice is equivalent to the choice of a character χ ∈ Z b of (C × ) b ; denoting by L χ the corresponding line bundle, we have: Having made this choice, the set of stable points is: The set of χ ∈ Z b for which U s (χ) is non-empty generates a rational polyhedral cone in R b equipped with a partition into locally closed rational polyhedral chambers defined by requiring that U s (χ) depends only on the chamber containing χ.
We always choose χ in the interior of a chamber of maximal dimension, and then define X = U s (χ)/(C × ) b . Under the identification Z b = H 2 (X; Z) = Pic(X) the chamber containing χ is identified with the ample cone Amp X; in this way too we regard the columns D i of the weight data D as elements of H 2 (X). The appropriate Euler sequence shows that −K X = r i=1 D i . Theorem 4.6. [Giv98] Let X be a toric Fano manifold. Then Theorem 4.7. Let F be a Fano toric manifold and let L 1 , . . . , L c be nef line bundles on F such that A = −(K F + c i=1 L i ) ∈ Amp F. Let X be a smooth complete intersection of codimension c in X, defined by the equation and let a 1 be such that F X = 1 + a 1 t + O(t 2 ). Then G X (t) = exp(−a 1 t)F X (t).
The regularised quantum period and mirror symmetry. The operator Q X has a pole of order 2 (an irregular singularity) at ∞, thus it cannot directly be compared with L f . This suggests the following definitions: Definition 4.8. The regularised quantum period is the Fourier-Laplace transform G X (t) = (m!)p m t m of the quantum period G X (t). The regularised quantum differential operator of X is the operator Q X ∈ Z t, D of lowest order, as in Definition 3.3, such that Q X · G X (t) ≡ 0.
Definition 4.9. The Laurent polynomial f is mirror-dual to the Fano manifold With this definition a Fano manifold has infinitely many mirrors if it has any at all. The relationship between different mirrors of del Pezzo surfaces is investigated in [GU10,CG12], where it is shown that the different mirror Laurent polynomials f are related by cluster transformations, and together define a global function on a cluster variety.
Extremal local systems and extremal Laurent polynomials
Which local systems arise from the quantum cohomology of Fano manifolds? Golyshev first made the observation that there are effective bounds on the ramification of the regularised quantum local system V = Sol Q X of a Fano manifold X.
Definition 5.1. [Gol] A local system V on C = P 1 \S is extremal if it is irreducible, nontrivial, and rf V = 2 rk V. A Laurent polynomial f is extremal if the local system Sol L f of solutions of the ODE L f · () ≡ 0 is extremal. We write ELP for "extremal Laurent polynomial".
The regularised quantum local system of any 3-dimensional Fano manifold is extremal. We believe that extremal motivic sheaves and Laurent polynomials are interesting in their own right. It would be nice to work out a topological classification of integral polarised extremal local systems.
Example 5.2. Consider a semistable rational elliptic surface f : X → C. In general f has 12 singular fibers. Beauville [Bea82] classified surfaces with the smallest possible number, 4, of singular fibers. On each of these X, it is easy to find an open set (C × ) 2 ∼ = U ⊂ X such that f | U is an extremal Laurent polynomial.
Examples in low dimensions
We describe two classes of Laurent polynomials: Minkowski polynomials (MPs) and Hodge-Tate polynomials. (For simplicity we describe these only when the number of variables involved is 2 or 3.) MPs are especially nice because: (a) they are, experimentally and conjecturally, of low ramification; and (b) any 3dimensional Fano manifold with very ample tangent bundle is mirror-dual to a MP.
The Minkowski ansatz. Let P be a lattice polytope. Then P ∩ Z n generates an affine lattice whose underlying lattice we denote by Lattice(P ).
Definition 6.1. A lattice polytope P is admissible if the relative interior of P contains no lattice points. A lattice polytope P ⊂ R n is reflexive if the following two conditions hold: (a) Int P ∩ Z n = {0}; (b) the polar polytope: is a lattice polytope.
Definition 6.2. Let Q ⊂ R n be a lattice polytope. A lattice Minkowski decomposition of Q is a decomposition of Q as a Minkowski sum Q = R + S of lattice polytopes R, S such that Lattice(Q) = Lattice(R) + Lattice(S).
Fix a reflexive polytope P ⊂ R n of dimension ≤ 3. We describe a recipe, the Minkowski ansatz, to write down Laurent polynomials: with Newt(f ) = P . We need to explain how to choose the coefficients a m . In all cases we take a 0 = 0; this is a normalisation choice that corresponds to the fact that p 1 = 0. If F ⊂ P is a face of P , the face term corresponding to F is the Laurent polynomial: If P is a reflexive polygon then we just need to specify the edge terms. If E = [µ, µ + eν] is an edge of P , where ν is primitive, we take the corresponding term to be f E = x µ (1 + x ν ) e . If P is a reflexive 3-tope, then we treat the edges as just said. It remains to specify the face terms f F . First, lattice Minkowski decompose each face into irreducibles: We say that such a decomposition is admissible if all F i are admissible. Assuming that each face of P has an admissible decomposition, fix such a decomposition: then we take the face term to be: f F = f Fi where f Fi is given by putting coefficients on the edges of F i exactly as above. Note that the Minkowksi ansatz can associate to a reflexive 3-tope P more than one Laurent polynomial (if one or more faces of P admit more than one admissible decomposition), or exactly one Laurent polynomial (if every face of P admits a unique admissible decomposition), or no Laurent polynomial (if some face of P admits no admissible decomposition).
MP in 2 variables. There are 16 reflexive polygons and each supports exactly one MP. This gives 16 MPs but only 8 distinct (classical) period sequences. These are the quantum period sequences of the del Pezzo surfaces of degree ≥ 3, that is, of the del Pezzo surfaces with very ample anti-canonical bundle. The 8 period sequences are extremal with two exceptions: the first we already met in Example 3.6 (the mirror of F 1 ), and the other is: Example 6.3 (the mirror of dP 7 ). f (x, y) = x + y + x −1 + y −1 + x −1 y −1 . Here: − 2t 4 (D + 1)(669D + 970) − 731t 5 (D + 1)(D + 2) and the ramification defect rf(Sol L f ) − 2 rk(Sol L f ) is equal to 1.
MP in 3 variables. In 3 variables, we have (http://www.fanosearch.net): • there are 4,319 reflexive 3-topes [KS98]; • they have 344 distinct facets, and these have 79 lattice Minkowski irreducible pieces; • of these, the admissible ones are A n -triangles for 1 ≤ n ≤ 8; • MPs supported on reflexive 3-topes give rise to only 165 (classical) period sequences. They are all extremal. (This is the polytope with id 519664 in the GRDB database of toric canonical Fano 3-folds [BK].) The pentagonal facet has two Minkowski decompositions, and hence the polytope supports two Minkowski polynomials: The classical periods associated to f 1 and f 2 begin as: π 1 (t) = 1 + 6t 2 + 90t 4 + 1860t 6 + 44730t 8 + 1172556t 10 + · · · π 2 (t) = 1 + 4t 2 + 60t 4 + 1120t 6 + 24220t 8 + 567504t 10 + · · · and the corresponding Picard-Fuchs operators are: Hodge-Tate polynomials. Let f be a Laurent polynomial in 3 variables with Newton polytope P , let F be a facet of P , and let f F be the corresponding face term. Let X F be the toric variety corresponding to the polygon F . The equation f F = 0 defines a curve in X F . If f is a MP then each such curve is of genus zero, thus MPs are Hodge-Tate in the following sense.
Definition 6.5. A 3-variable Laurent polynomial f with Newton polytope P is Hodge-Tate if for all facets F ⊂ P , the curve f F = 0 has geometric genus zero.
One might hope that Hodge-Tate polynomials are of low ramification.
Example 6.6. Consider the pictured polygon. This is one of the smallest faces for which the Minkowski ansatz has nothing to say. Consider the Laurent polynomial with this Newton polygon given by f = y(x −1 + 2 + x) + y −1 + a. For generic a (the completion of) f = 0 is a curve of geometric genus 1; it becomes of genus 0 exactly when a ∈ {−4, 0, 4}. Let us take a = 4 and use this as a new "puzzle piece" for assembling a Laurent polynomial.
Consider the 3-dimensional reflexive polytope with id 547363 in the GRDB database of toric canonical Fano 3-folds [BK]. This polytope has four faces: two smooth triangles, one A 2 -triangle, and one face isomorphic to the polygon shown above. The corresponding Laurent polynomial is: It has period sequence: 1, 0, 8, 0, 120, 0, 2240, 0, 47320, 0, . . . and Picard-Fuchs operator: The Laurent polynomial F is Hodge-Tate but is not a MP. It is extremal, and is of manifold type in the sense of §7, but is not mirror-dual to any 3-dimensional Fano manifold.
Minkowski polynomials and Fano 3-folds
Recall that in 3 variables there are 165 Minkowski (classical) period sequences and, correspondingly, 165 Picard-Fuchs operators. We write L f = t k P k (D) where is a polynomial in D, and denote by L f (0) = P 0 (D) the operator at t = 0. It turns out that, if L f is one of the 165 Minkowski Picard-Fuchs operators, then L f (0) splits as a product of linear factors over the rationals. We say that L f is of manifold type if all the roots are integers; otherwise we say that L f is of orbifold type. Exactly 98 of the Minkowski Picard-Fuchs operators are of manifold type and we have verified, by direct computation of invariants on both sides, that they mirror the 98 deformation families of 3-dimensional Fano manifolds X such that −K X is very ample. It will be interesting to see if the Minkowski Picard-Fuchs operators of orbifold type mirror Fano orbifolds.
It is natural to ask what invariants of a Fano manifold X can be computed from the knowledge of the differential operator Q X alone. This is a subtle question [EHX97,ES06], but in the case of 3-folds we have good numerical evidence for the following: Hope 7.1 (Galkin, Golyshev, Iritani, van Straten). Let X be a 3-dimensional Fano manifold and let J X (t) and J 0 X (t) be as defined above (in the proof of Theorem 4.3). Then: where the limit is taken as t tends to +∞ along the real axis. The characteristic class Γ(T X ) is defined in [KKP08,Iri09].
We briefly mention a promising line of thought. Consider a 3-fold toric Gorenstein canonical singularity X σ , so that σ = R + (ι F ) where F ⊂ Z 2 is a a lattice polygon and ι : Z 2 → Z 3 is an affine embedding at height one as above. According to [Alt97], deformation components of the singularity correspond to Minkowski decompositions of F . This suggests that Minkowski polynomials f with Newt f = P may correspond to smoothing components of the singular toric Fano 3-fold X with fan polytope P . It would be nice to make this precise, and to interpret the Minkowski polynomials in terms of holomorphic disk counts in the framework of Hori, Gross-Siebert, or Kontsevich-Soibelman.
Fano 4-folds?
In 4 dimensions, there are over 473 million reflexive polytopes. Building on the Kreuzer-Skarke classification [KS00], we are now in the process of making a database of facets and of computing their lattice Minkowski decompositions. We plan to classify: Minkowski polynomials (and more general low ramification Laurent polynomials) in 4 variables; their period sequences; and their Picard-Fuchs operators. This will give a list of candidate families of Fano 4-folds, and we aim to: compute the (conjectural) invariants of these Fano 4-folds assuming that they exist; and construct the Fano explicitly in many cases. Eventually, we hope turn this story into a classification theory. | 6,626.2 | 2012-05-04T00:00:00.000 | [
"Mathematics"
] |
Evolution in Software Product Lines: Defining and Modelling for Management
Evolution in Software Product Line (SPL) is claimed when there are changes in the requirements, product structure or the technology being used. Currently, many different approaches have been proposed on how to manage SPL assets and some also address how evolution affects these assets. However, the usefulness, effectiveness and applicability of these approaches are unclear, as there is no clear consensus on what an asset is. In this work, we plan to reduce complexity in SPL evolution management. For this goal, the difficulty is defining and modeling SPL evolution and we expect to propose a flexible way to manage it. However, a large variety of artifacts is considered in SPL evolution studies, but feature models are by far the most researched ones. Feature models are widely used to represent SPLs and have been greatly developed in the Feature-Oriented Reuse Method (FORM). Consequently, in our previous works, after observed that this method has a loose structure since it does not provide guidance to reuse and rigorously analyze its assets, we have extended FORM to FORM/BCS (the Feature Oriented Reuse Method with Business Component Semantics) by enveloping its assets among which feature models with business component semantics. The contribution and the novelty of this work is that, by highlighting formally the concept of software asset and revisiting feature business components, to add new information when analyzing a domain, such as clashing actions. conflicts or undesired interactions between existing features in a product line and new features due to evolution of the product line can be manage in a flexible way.
Introduction
The Software Product Line Engineering (SPLE) [1] is an approach that aims at creating individual software applications based on a core platform, while reducing the time-to-market and the cost of development [2]. Many SPLE-related issues have been addressed both by researchers and practitioners, such as variability management, product derivation, reusability, etc. According to authors in [3], a Software Product Line (SPL) is "a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way". The main benefit of defining a SPL is that the reuse of all assets can be systematically organized [4].
There are two distinct phases in SPL definition: domain engineering and application engineering. The domain engineering phase starts with domain analysis, where domain knowledge is used to identify common and variable features, and these features are then realized during domain design and implementation. Application engineering focuses on product creation, first by identifying customer needs, which are then used to guide product derivation. In this way, the cost of developing and maintaining core assets is spread across all the products in a SPL, and is not specific to each separate product [5]. Note that the domain knowledge, asset realization, product configuration, etc., can all evolve over time [6].
The concept of evolution [7,8] is intrinsic to software, since customer requirements and needs change over time, so software must evolve to remain useful [9]. However, the software evolution process is quite challenging since a fragile balance must be maintained: software quality must be preserved but software structure tends to degrade over time. The following challenges have been identified [10,6] in the case of SPL evolution: 1) there are different types of assets, which are defined at different levels of abstraction and variability; 2) there is a high number of interdependencies between assets; 3) a SPL usually has a longer life-span than a single product; and 4) a SPL is larger and more complex than its individual products. Currently, many different approaches have been proposed on how to manage SPL assets and some also address how evolution affects these assets. However, the usefulness, effectiveness and applicability of these approaches are unclear, as there is no clear consensus on what an asset is. In this work, our research method consist of highlighting formally the concept of software asset and revisiting feature business components, to add new information when analyzing a domain, such as clashing actions so that we can manage evolution in a flexible way. The base is feature models [11] which are widely used to present commonality and variability (C & V) information of a product line compactly (see Figure1. for example). We have extended Feature models in the Feature Oriented Reuse Method with Business Component Semantics (FORM/BCS) [12,13,14,15,16,17]. Each product in the product line is derived from a selection of a valid combination of features [18] -a process known as product configuration [19,20]. . presents an example of enterprise software for tertiary institutions of an anonymous country. The product line, referred to as National Educational Management Product Line (NatEduMgtPl), was initiated by the Ministry of Higher Education in that country. The vision of the product line is to provide software products to state universities, other higher institutions, and Enterprise Resource Planning (ERP) vendors. The educational institutions in the country implement the BMD (Bachelor, Master and Doctorate) system -which make their core operations largely the same-hence a product line.
The remainder of the paper is organized as follows. Section 2 details out research design, method, instrument and analysis technique. Section 3 highlights formally software assets and revisits FORM/BCS feature business components. Section 4 defines and models evolution in Software product Line so that we can see how evolution affects feature business components. Section 5 presents related work and section 6 concludes the work and gives perspectives.
Research design, method, instrument and analysis technique
Many different approaches have been proposed on how to manage SPL assets and some also address how evolution affects these assets. However, the usefulness, effectiveness and applicability of these approaches are unclear, as there is no clear consensus on what an asset is.
In this regard, we think that the first concern on evolution in SPL is to establish a clear vision on concepts and then processes. The envy to clarify software assets encourages us to first highlight formally this concept. To avoid lack of understanding and ambiguities, we specify the description of software assets using Z notation.
Secondly, knowing that the management of software product line evolution is complex and this evolution is due to requirements and needs change, we revisit the specification of feature business components proposed in the FORM/BCS method [12,13,14,15,16,17], as it is the first software asset produced when analyzing the domain, to anticipate evolution very early. In this revision, we enrich features business components with new information such as clashing actions so that we can manage evolution in a flexible way. In the proposed analysis technique, for feature business components, the analyst must find and give, if it's possible, a clash action for all actions in that asset. These clashing actions advice on conflicts and undesired interactions between features and the analyst can avoid or correct them when new features and adaptation points due to evolution appear in user's requirements and needs.
We know that SPL is actually a continue process and we cannot think about all possible variant, but, by this contribution, we want to improve the flexibility of that process.
Software Assets
A software asset is composed of a set of software products derived from different activities of the life cycle. Specifically: requirements, architecture definition, analysis model, design model, code, test programs, test reports.
The different products which compose a software asset are in fact the representation of that asset at different level of abstraction (need, analysis, design, realization, texts). When the software asset is reused, each of these software assets can then be reused in the corresponding step (before, during and after coding). Specifically, test Masterdegree
Doctoratedegree Payment Registration
Differ-Learning Live-Learning programs are strongly reusable. The person who desires evaluate a software asset for reuse can take existing test programs to enforce the software asset in his own environment. It is important not to limit reuse at code level, but exploit all software assets.
Reusable software assets must be provided with necessary information for their reuse (the software asset description, also call « meta-information » [21]). This additional information allows facilitate software asset manipulation during his life cycle. It is in particular following elements: Classification information which allows facilitate corresponding software assets research, description of software asset which allows to understand rapidly functions and main features of the software asset, documentation of the software asset which allows to understand how enforce and customize the software asset, information related to tests and software asset qualification to facilitate his evaluation by a potential reuse stakeholder, information about software asset origin and property to obtain support or complementary information.
All these characteristics are summarised in the specification below using Z notation. This schema in Table 1 shows that a software asset is made up of two types of information: the body (containing effectively reuse software assets) and description (containing information allowing reuse process support). Information of qualification and classification correspond respectively to the qualification process and the classification process.
This model also brings to light the imbrications of software assets, and the fact that, beside composition relations, software assets can have others types of links illustrating, for example, the fact that a software asset uses an other software asset. That means, a software asset needs, to run, functionalities of another software asset. The software asset reuser must then decide if he also reuses associated software assets or he is able to provide himself an equivalent implementation. Typically, a vertical software asset, if it has an important granularity, will lean probably on component techniques (for example graphical objects or a middleware).
Software Asset Description
The description of a software asset gives its intention, the engineering activity the descriptor plans to perform, its target, the concerned business and the environment that is the context. The above Z notation schema specifies software asset description. Details on the following concepts: Domain, Process, Business Activity, Context & Context-awareness can be found in [16].
Software Asset Bodies
A body of a software asset is composed of software products effectively reuse. These software products can be analysis models, design models, source codes, user documentation, runnable codes, test reports, test scenarios, test programs. The following schema models software asset bodies. If we use the feature oriented reuse method with business component semantics, the body will be a feature realization if we are in the analysis stage, a conceptual realization, a process realization or a module realization if we are in the design stage.
Evolution in Software Product Lines
Feature models are widely used to represent SPLs and have been extended in the Feature Oriented Reuse Method with Business Component Semantics (FORM/BCS). Software product line evolution is the necessity to have in that product line new features, variability points or the death of old ones. This continuous phenomenon is due to changes in the requirements, product structure and newly emerging technologies. The integration of new features or variability points can creates conflicts or undesired interactions between them. For example when you add new features, they can enter in conflict with old ones. Let us take an example in libraries, if you want to ensure a sufficient availability of books and previously you authorize long term loans, the two features will be in conflict. Equally, when you remove an old feature, if this feature is used by another one, you will create inconsistency. That why the management of this situation is complex. To study evolution in SPLs, we first look at the feature business component which is a software asset in which the body is a feature realization [12] to see how we can improve the specification of his constituents that are his description and body. Knowing that processes are essentials in the description of feature business components, we start by revising their specification.
Processes with clashing actions
Evolution can occur in requirements or in new technologies and the first thing to observe is that, when a new variation point appears, to take it into consideration, we must guaranty that it don't create conflict with an existing feature or an undesired interaction between features in the product line. We think that, to avoid these conflicts, it is useful to anticipate them when analyzing a domain. We introduce then new information such as clashing actions when modeling processes.
Specifying clashing tasks in business activities
To manage evolution in software product lines, it is important to decompose business activities so that we can detect antagonist tasks between them. Antagonist tasks are tasks which cannot be performed together. A business activity has a set of "mandatory" tasks, a set of "optional" tasks, a set of "alternative" tasks, a set of "or" tasks and a set of "clashing" tasks. It can be primitive or not. The following schema specifies business activities for the management of evolutions. When the context is clear we write:
Business tasks
The decomposition of tasks allows detecting antagonist tasks. A business task has a set of "mandatory" operations, a set of "optional" operations, a set of "alternative" operations, a set of "or" operations and a set of "clashing" operations. It can be primitive or not. The following schema specifies business tasks for the management of evolutions.
Evolution mechanism
The specification of processes (sub section 2.1.2) shows that a process can be seen as a set of business activities. A non primitive business activity has decomposition. This decomposition groups the set of his "mandatory" tasks, the of his "optional" task, the of his "alternative" tasks and the set of his "or" tasks. A business activity has also a set of "clashing" tasks. A clashing task of a business activity is a task which cannot run with the tasks in his decomposition.
In a software product line, evolution is the apparition of a new variation point or the disappearing of an old one. A new variation point in feature business component as specified in the Feature Oriented Reuse Method with Business Component Semantics is a new feature with his variation points. A feature corresponds to a business activity [12]. To consider a new variation point, we must check if this new variation point doesn't create a clash with the existing ones.
Each new adaptation point has a parent feature and the evolution process of a feature business component consists of inserting the new feature as part of his parent.
From there, we define the two following functions which are essential in our evolution mechanism: is_clashed and insert.
Given a feature business component fbc and his new feature adaptation point nap, the function is_clashed returns "false" if for each feature in the solution part of fbc, the activity of nap is not in conflict with the activity of f.
Related Works
Stability is one of the most important properties of software. It is defined as "The capacity of the software product to avoid unexpected effects from modification of the software" [22]. Many product line approaches assume that activities in domain and application engineering can take a fairly stable product line for granted. However, real-world product lines inevitably and continuously evolve. Managing evolution is thus success-critical, particularly in model-based approaches to ensure consistency after changes to meta-models, models, and actual artifacts. In [23,24], several authors have stressed the importance of approaches for product line evolution to avoid the erosion of a product line, i.e., the deviation from the product line model up to the point where key properties no longer hold. Several approaches have been proposed for managing the evolution of software product lines [4], ranging from verification techniques to ensure consistent evolution, to model-based frameworks dedicated to the evolution of feature-based variability models [25]. For example, an interesting research thread proposes evolution templates for co-evolving a variability model and related software artifacts [26,27,28].
A model-driven product line approach that focuses on the issue of domain evolution and product line architectures is described in [29]. Authors discuss several challenges for the evolution of model-driven software product line architectures and present their solution for supporting evolution with automated domain model transformations. Such transformations could also be useful in our context to realize the update rules to support the evolution of the variability models in SPLs when applying model-driven techniques.
Another example is the work in [30], who present tool support for the evolution of software product lines based on the grow-and-prune model. They support identifying and refactoring code that has been created by copy and paste and which might be moved from product level to product line level. Refactoring of a SPL is not the scope of our work which, for the moment, is not situated at the code level. However, the work and tool are useful to support refactoring the SPL code.
A SPL evolution approach that preserves the original behaviour of evolving product lines, i.e., products that could be generated before evolution can still be generated after the evolution, is proposed in [31]. This of course is only possible if restricting the removal of certain needed features, which makes the process easier but also constitutes a limitation of this approach.
To keep a configuration consistent with a feature model even after evolution of the latter, in [32] authors present an approach that automatically evolves the configuration with respect to the changes performed in the model while also taking into consideration the possible cardinalities. Such an approach is useful.
Hyper feature models are introduced in [33]. These models are capable of versioning the features and their constraints to maintain evolution traceability over time and guarantee the compatibility of one version of a feature with versions of another one. Feature traceability is thus a central concern in SPL evolution approaches, and has been shown to be essential in a feature-oriented project [34]. In [35], authors was largely inspired by this earlier work on evolving software product lines, and extended this work by considering runtime management of such evolution.
Ideas developed in this contribution enter in pioneer works on feature orientation and come from our previous articles [12,13]. The specificity of our approach is that, by putting inside feature business components, information able to guide evolution, we give intrinsic ability, which is since his genesis, to software product lines to evolve smoothly.
Conclusions and Future Research
Real-world product lines inevitably and continuously evolve, then we cannot avoid the necessity of evolution in a software product line. The scientific community tries to manage evolution in software product lines but faces some difficulties link to the definition and modeling of this phenomenon in software product lines. We think that this situation is due in a large part to the fact that there is no consensus on what a software asset is. In this article after defining formally what a software asset is, we have study evolution in the first software product line asset of the feature oriented reuse method with business component semantics, the feature business component. The result is that, we find and introduce new properties in the definition of processes such as clashing actions. These new fields have allowed defining news functions for the management of evolution. This work increases the ability of software product lines to evolve in a flexible way. We plan to study erosion of a software product line which is the deviation from the product line model up to the point where key properties no longer hold. | 4,506.6 | 2022-05-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
ARA: accurate, reliable and active histopathological image classification framework with Bayesian deep learning
Machine learning algorithms hold the promise to effectively automate the analysis of histopathological images that are routinely generated in clinical practice. Any machine learning method used in the clinical diagnostic process has to be extremely accurate and, ideally, provide a measure of uncertainty for its predictions. Such accurate and reliable classifiers need enough labelled data for training, which requires time-consuming and costly manual annotation by pathologists. Thus, it is critical to minimise the amount of data needed to reach the desired accuracy by maximising the efficiency of training. We propose an accurate, reliable and active (ARA) image classification framework and introduce a new Bayesian Convolutional Neural Network (ARA-CNN) for classifying histopathological images of colorectal cancer. The model achieves exceptional classification accuracy, outperforming other models trained on the same dataset. The network outputs an uncertainty measurement for each tested image. We show that uncertainty measures can be used to detect mislabelled training samples and can be employed in an efficient active learning workflow. Using a variational dropout-based entropy measure of uncertainty in the workflow speeds up the learning process by roughly 45%. Finally, we utilise our model to segment whole-slide images of colorectal tissue and compute segmentation-based spatial statistics.
Network architecture
The ARA-CNN network accepts RGB images of size (128, 128, 3) as its input (Fig. S1A), where the values represent respectively: vertical resolution, horizontal resolution and the number of colour channels.The images from the training dataset were downsized to these dimensions.Input values are propagated to the first part of the network called stem (Fig. S1B).The stem contains a convolutional layer consisting of 64 filters, with filter size of (7, 7) and stride size of (4,4).This is directly followed by max pooling with window size of (2, 2), and identically sized strides.The output from this part is of size (16,16,64), where the values are: reduced width, reduced height and the number of filters.These operations decrease the spatial dimensions by a factor of 8, which in turn significantly reduces memory usage and can be considered an adaptation of network topology to a relatively simple texture structure of the input 1 .
The stem is followed by the first block (Fig. S1C).The main aim of this part is to learn and extract initial discriminative low-level image features.It consists of 4 residual sections, where the input to each block is transformed by a convolutional layer with 64 filters -each sized (3, 3) and with stride of size (1,1).The result of this convolution is added back to the input, which creates a residual connection.The final section of this block is followed by an average pooling with window size of (2, 2).This makes the output of this part of the network to be shaped (8, 8, 64).The next part of the model is the second block (Fig. S1D), which learns and extracts the final discriminative features -this time they are more high-level and abstract.Its structure is the same as that of the first block.After the final average pooling with window size (2, 2), the output from this part is of size (4,4,64).
The model has two outputs in total: the auxiliary output (Fig. S1E) and the main output (Fig. S1F).The main purposes of the former are to provide a better training signal to the stem and the first block (by making the features more discriminative) and to deal with the vanishing gradient problem 2 during training.The output from the first block is transformed by global average pooling.Next, it is transformed by a fully-connected layer with 32 filters and dropout with rate of 0.5 (explained in Dropout, see main text).Finally, it is fed to the fully-connected output layer with a softmax activation function.The final output is used for making the actual predictions.After the second block, the data is transformed by exactly the same set of transformations as in the auxiliary output -global average pooling, then a fully-connected layer with dropout, followed by a final output layer.
If not stated otherwise, each convolutional filter has dilation and stride set to 1. Additionally, each layer in the network (except the outputs) connects to a Batch Normalization layer.When deep learning models are trained, the distribution of inputs for each layer changes as a result of modified parameters in preceding layers, which slows down the whole process.Batch Normalisation combats this by normalising layer inputs for each training mini-batch.This enables the use of higher learning rates and significantly speeds up the training.Batch Normalisation also acts as a regulariser and reduces overfitting.The activation function used throughout the model is Leaky ReLU 3 : where the parameter α is set to 0.1 and x is a weighted sum of inputs to a network unit.
Tissue slide segmentation
In histological image analysis, the labelling of image patches is only the first step in the process of segmentation.To get a full overview of a tissue slide, it is necessary to see how image patches of different classes are placed in relation to each other and to measure their relative abundance.In particular, it is interesting to determine the neighbourhood of tumour cells.For example, the tumour being infiltrated by immune cells may be a marker of good prognosis.Kather et al. 4 showed a simple segmentation approach using standard classification methods.We present a recreation of their procedure using the ARA-CNN model (see Image segmentation in Methods).An example segmentation of five full tissue slides from the Kather et al. 4 dataset is presented in Fig. S2.The segmentation can obviously be improved -the approach with stitching image patches is after all quite rudimentary.However, it can be good enough to see the aforementioned spatial relationships.As a basic spatial statistic, for each slide we generated a summary of tissue class distribution (Fig. S2).Histograms such as these can be used as a filter to find images for further consideration (e.g.those with high tumour concentration) in an automated diagnosis system.
Figure S1
. Structure of the ARA-CNN model.The network takes as input RGB images with dimensions of 128x128 pixels.They are passed to the stem, which contains a convolutional layer responsible for reducing the spatial dimensions of the input.This is followed by the first block and the second block, responsible for learning low-level and high-level image features respectively.Both of these blocks consist of four residual sections, with each of these sections containing a convolutional layer and a residual connection.The model has two outputs overall -an auxiliary output from the first block and a final output from the second block.Both of them use the softmax activation function.
Figure S2 .
Figure S2.Segmentation.Segmentation of five large tissue slides from the colorectal cancer dataset.The leftmost column presents the original WSIs, the second one shows the segmentation done with our classification algorithm, while the third one is a visualisation of the Tumour class probability (the lighter the segment, the more probable that there is a tumour there).The last column contains a class distribution histogram -each bar represents the percentage of a given class in the segmented image. | 1,624 | 2019-06-03T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
A Linear seesaw model with hidden gauge symmetry
We propose a natural realization of linear seesaw model with hidden gauge symmetry in which $SU(2)_L$ triplet fermions, one extra Higgs singlet, doublet and quartet scalar are introduced. Small neutrino mass can be realized by two suppression factors that are small vacuum expectation value of quartet scalar and inverse of Dirac mass for triplet. After formulating neutrino mass matrix, we discuss collider phenomenology of the model focusing on signals from exotic charged particles production at the LHC.
I. INTRODUCTION
One of the big mystery in the standard model (SM) of particle physics is the mass spectrum and flavor structure of fermions. In particular, existence of physics beyond the SM is required from at least two non-zero neutrino masses for its generating mechanism. Moreover, the neutrino mass indicates a hint of structure of new physics as it should explain the smallness of the mass. Actually many mechanisms to generate neutrino mass are proposed such as canonical seesaw model [1][2][3][4], inverse seesaw model [5,6], linear seesaw model [6][7][8], etc. Note here that mass hierarchies in the neutral mass matrix are always assumed in order to get sizable neutrino mass. Thus appropriate explanations about these hierarchies are also one of the important tasks in our models and there exist several explanations [9,10]. In light of the motivation, one interesting scenario is to generate neutrino mass using the exotic fields which are large SU(2) L multiplets like quartet, quintet or septet [11][12][13][14][15][16][17], since we can suppress neutrino mass by small vacuum expectation value (VEV) of a large multiplet scalar and/or restricted structure of interactions including large multiplet fields. Furthermore, this kind of scenario would induce interesting phenomenology at collider experiments, since a large multiplet field contain multi-charged particles such as doubly-charged scalar/fermion.
In this letter we propose a natural realization of linear seesaw model with hidden gauge symmetry in which SU(2) L triplet fermions, one extra Higgs singlet, doublet and quartet scalar are introduced. Interestingly, tiny neutrino mass is realized by two suppression effects; inverse of Dirac mass for the triplet fermion and small VEV of the quartet scalar which is required by the constraint from ρ-parameter, where the quartet VEV is induced in a similar way to the Higgs triplet model [18]. We formulate neutrino mass matrix and estimate typical size of Yukawa coupling constants associated with triplet fermion and SM leptons.
Then we discuss collider phenomenology of our scenario focusing on production of exotic charged particles at the large hadron collider (LHC). Particularly interesting signals come from Yukawa interaction associated with triplet fermion, quartet scalar and SM lepton which represent a specific signature of our model. This letter is organized as follows. In Sec. II, we introduce our model, and formulate Higgs sector, neutral gauge sector, neutrino sector, and relevant interactions. In Sec.III, we discuss collider phenomenologies of exotic charged particles considering specific signature in our model. Finally we devote the summary of our results and the conclusion in Sec.IV.
lower index a is the number of family that runs over 1-3.
II. MODEL SETUP AND CONSTRAINTS
In this section, we formulate our model introducing hidden gauge symmetry U(1) H .
At first, we add three families of SU (2) where H is expected to be the SM-like Higgs. All the field contents and their charge assignments are summarized in Table I.
We write the singlet and doublet scalar fields by The quartet scalar H 4 with hypercharge Y = −1/2 is represented as where subscripts for singly charged component distinguish two independent fields, and (H 4 ) ijk is the symmetric tensor notation with SU(2) L index {i, j, k} taking 1 or 2, de- indicates symmetric indices under exchange among them. Note also that neutral component is written by where two charged components are distinguished as independent fermions 1 . The mass of Σ is given by Dirac type: where we have omitted flavor index. Note that Majorana mass term of the triplet fermions is forbidden by U(1) H symmetry and type-III seesaw mechanism is absent in our setup.
The relevant Yukawa Lagrangian under these symmetries is given by 2 whereH ≡ iσ 2 H, and upper indices (a, b) = 1-3 are the number of families, and y ℓ and M Σ can be diagonal matrix without loss of generality due to the redefinitions of the fermions.
Here, we explicitly write our Lagrangian in terms of each components; From these Yukawa couplings, we obtain mass matrices defined by where m D and δ D contribute to neutrino mass matrix as we discuss below. In our model we assign lepton number 1 to Σ L,R and the term with y L breaks lepton number conservation.
A. Scalar sector
The scalar potential of our model is 1 We can also write Σ as symmetry tensor form Σ 11 = Σ + , Σ 12 = Σ 21 = Σ 0 / √ 2 and Σ ′− = Σ 22 . 2 Since the structure of quark sector is exactly same as the one in the SM, we neglect it hereafter.
where V 4 indicates trivial four point interaction terms. The parameters in V 4 are assumed to satisfy constraints from unitarity and perturbativity, and we do not discuss them in our analysis since it is not closely related to neutrino mass generation and collider physics. The non-trivial scalar potential terms are given by where SU(2) L indices are implicitly contracted to be gauge invariant in the first term. These non-trivial terms forbid dangerous massless Goldstone bosons (GBs) that would be induced from H 1,4 after symmetry breaking. The VEVs of the scalar fields are obtained by imposing the condition ∂V/∂v H,1,4,Φ = 0, where we assume M 2 4 > 0 in the potential. Then v 4 is roughly given by This VEV is restricted by the ρ-parameter which is given by where the experimental value is ρ = 1.0004 +0.0003 −0.0004 at 2σ confidence level [19]. On the other hand, we also require v ≡ v 2 where cos α(sin α) correspond to mixing among neutral scalars in two Higgs doublet, and After U(1) H symmetry breaking, CP-odd component of singlet scalar Φ is absorbed by Z ′ boson as Nambu-Goldstone boson (NGB) while CP-even component is physically neutral scalar boson. Under small mixing assumption, this CP-even scalar boson does not provide any interesting phenomenologies and we will not discuss it hereafter.
In this model, we have massive Z ′ boson from spontaneous breaking of U(1) H gauge symmetry. Here we assume Z ′ mass is mostly induced by the VEV of singlet scalar Φ such that where g H is the gauge coupling constant associated with U(1) H . Since SM particles are not charged under the U(1) H , Z ′ is hidden gauge boson and it is difficult to directly produce it at collider experiments. Thus we will not discuss Z ′ boson physics further in this paper.
C. Neutrino sector
After the spontaneous symmetry breaking, neutral fermion mass matrix with 9×9 is given by Then the active neutrino mass matrix can approximately be found as where δ D << M Σ is expected. Let us estimate the neutrino mass order. If m D ≈ O(0.01) is expected to find the sizable neutrino masses; m ν ∼ 10 −10 GeV.
Moreover, in terms of Yukawa coupling constant and VEVs, we can write Taking M Σ = 1000 GeV, v 4 = 1 GeV and v 1 100 GeV, we can realize m ν 10 −10 GeV with y L ∼ y R 10 −4 which is similar magnitude to those in generating SM charged leptons.
The neutrino mass matrix is diagonalized by One of the elegant ways to reproduce the current neutrino oscillation data [19] is to apply the Casas-Ibarra parametrization [20], and find the following Here A is an arbitrary 3 by 3 anti-symmetric matrix with complex value; A + A T = 0. Note here that all the components of m D should not exceed 100 GeV, once perturbative limit of y R is taken to be 1.
Non-unitarity:
Constraint of non-unitarity should always be taken into account in case of larger neutral mass matrix whose components are greater than three by three, since experimental neutrino oscillation results suggest nearly unitary. In case of the linear seesaw, when non-unitarity matrix U ′ M N S is defined, one can typically parametrize it by the following form: where F ≡ (M T Σ ) −1 m D is a hermitian matrix, and U ′ M N S represents the deviation from the unitarity. Considering several experimental bounds [21], one finds the following constraints [22]: Here, we show a benchmark point to satisfy the neutrino oscillation data [23], non-unitarity constraints, and perturbativity y R 1, within our parameter choices.
where we assumed real elements of A and the normal neutrino mass ordering with vanishing neutrino mass for the lightest neutrino, for simplicity. Even though one analyzes it with a general framework, reproducing neutrino oscillation data with these constraints can easily be achieved due to enough input parameters.
D. Lepton flavor violations (LFVs) and charged-lepton mass matrix
Since y R is expected to be small from the previous discussion, we focus on the Yukawa term y L that gives rise to µ → eγ processes at one-loop level, which is the most stringent constraint from the MEG experiment [24]; therefore, its branching ratio is given by B(µ → eγ) ≤ 4.2 × 10 −13 . While our branching ratio is given by where we assume all the masses in the components of Σ and H 4 to be degenerate, [19], and Comparing our branching ratio with the experimental one, one finds the following bounds on Yukawa couplings: where we fix M Σ = 600 GeV and m ϕ =1000 GeV.
Charged-lepton mass matrix: Next, we discuss the charged-lepton mixing that is also restricted by the current experimental data. Similar to the case of LFVs, we neglect the contribution to y R because it is sufficiently small. Furthermore, we assume the mass matrices m D and M Σ to be diagonal for simplicity. Then, one finds the charged-lepton fermion mass matrix as The mass matrix is diagonalized by transformation (e L(R) , E L(R) ) → V † L(R) (e L(R) , E L(R) ).
III. COLLIDER PHENOMENOLOGY OF THE MODEL
In this section, we discuss production of exotic particles in the model at the LHC. Signals of our exotic particles are explored by estimating production cross section and formulating branching ratios. In particular we focus on charged particles in quartet scalar and triplet fermions since they induce specific signature of the model.
A. Production cross sections
The components of quartet scalar and triplet fermions can be produced by electroweak interaction at the LHC. For quartet scalar H 4 , gauge interactions are derived from kinetic term: where g is the gauge coupling for SU(2) L , e is electromagnetic coupling constant, s W (c W ) = sin θ W (cos θ W ) with the Weinberg angle θ W , and (H 4 ) m indicates the component of H 4 which has the eigenvalue of diagonal SU(2) L generator T 3 given by m; For triplet fermion, we explicitly write gauge interactions such that The exotic charged particles can be produced via interactions with the SM gauge bosons.
The production cross sections are estimated using CalcHEP [29] with the CTEQ6 parton distribution functions (PDFs) [30]. In Fig. 1 we show cross sections for pair production of exotic charged particles at the LHC 13 TeV. We find that production cross section for Σ ± (Σ ′± ) pair production is larger than those for charged scalars in quartet when the mass scale is same. For O(1) TeV mass of exotic fermions, we can obtain production cross section ∼ 1 fb which can give sizable number of event at the LHC.
B. Decay branching ratios of exotic particles
Here, we consider decay processes of exotic charged particles and estimate their branching ratios (BRs).
Firstly, we consider decay of charged scalar bosons from quartet. Partial decay width for the processes including Σs in final state are given by Then we derive partial decay widths for two gauge boson final states such that In Fig. 2, we show the BRs for ϕ ++ and ϕ + 1 as a function of the Yukawa coupling y L where we assume only one element of y L ab dominates for simplicity, and we fixed some parameters such as v 4 = 1 GeV, M Σ = 600 GeV and M 4 = 1000 GeV. We find that BR for two massive gauge boson mode is dominant if the Yukawa coupling is y L 0.1, where W + γ mode in decay of ϕ + 1 is negligible since it is found to be always less than ∼ 10 −5 . The BRs for ϕ + 2 and ϕ 0 have similar behavior where ϕ 0 R can decay into ZZ while ϕ 0 I cannot decay into two gauge boson.
The exotic fermions decay into SM lepton and scalar bosons through Yukawa interaction in Eqs.
C. Signals at the LHC
Here we discuss signals of our model at the LHC focusing on charged particles in quartet scalar and triplet fermion. The charged scalar boson from H 4 dominantly decays into two SM gauge bosons since Yukawa coupling constant y L tends to be much smaller than ∼ 0.1 to obtain active neutrino mass consistent with observations. Thus signal processes will be Then W ± and Z bosons further decay into either jets or leptons. For such a signal, detailed discussions are found in, for example, refs. [16,[26][27][28]. Then we focus on signals from exotic charged fermion production hereafter.
We consider two cases of mass relation in considering the charged fermion Σ ± (Σ ′± ); (A) Table. II with their fractions obtained from product of BRs for each particle.
Here we assume M 4 < M Σ , y R ≪ y L 0.1 and v 4 = 1 GeV and we do not distinguish For Σ + Σ − production, we obtain the largest number of events from ννW + W − ZZ final state. When W ± and Z bosons from one of Σ ± decay into leptons and the other gauge bosons decay into jets signal event up to the detector level is given by where j indicates jet and E T is missing transverse energy. Thus our signal is multi-lepton with jets and missing transverse energy. For M Σ = 600 GeV, the number of events without kinematical cut can be estimated as ∼ 8, taking integrated luminosity as 300 fb −1 . Although the number of event is not large, we would find the signal since the number of SM background (BG) events is expected to be small. In addition, we can partially reconstruct mass of Σ ± from ℓ ± ℓ + ℓ − E T . In the high-luminosity LHC (HL-LHC) experiments, we can obtain more events and more parameter region will be explored.
For Σ ′+ Σ ′− production, the most clear signal would come from the final state When W ± bosons from one Σ ′± decay into leptons and the other gauge bosons decay into jets signal event up to the detector level is given by For M Σ = 600 GeV, we obtain the production cross section σ(pp → Σ ′+ Σ ′− ) ≃ 7.6 fb as shown in Fig. 1. The products of production cross section and BRs is then σ(pp → process; for example we obtain cross section for pp → W + W − ℓ + ℓ − → ℓ + ℓ + ℓ − ℓ − νν as 0.18 fb in the SM estimated by MADGRAPH5 [31]. This signature is clearer than the previous case, since the number of events is larger and final state includes three same sign leptons. In this case, we can partially reconstruct mass of Σ ′± from ℓ ∓ ℓ ± ℓ ± E T . However it is not trivial to select three charged leptons to reconstruct the mass from four charged leptons in the final state and we need to perform detailed analysis. In addition, we need to impose appropriate tagging and kinematical cuts to reduce BG events for getting sufficient significance; for example jet tagging will be useful to reduce W + W − ℓ + ℓ − BG and cuts regarding angles among charged leptons can be used to choose charged leptons as decay products of one Σ ′± .
Furthermore detector level simulation is required to take into account detector efficiency in order to obtain realistic number of events at the experiments. The detailed simulation study including BG events and kinematical cuts are beyond our scope of this paper, since the final states contain many particles and the analysis will be very complicated.
IV. SUMMARY AND DISCUSSIONS
We have constructed a model with hidden U(1) gauge symmetry which can naturally realize linear seesaw mechanism by introducing SU(2) L triplet Dirac fermion and quartet scalar fields. Then an induced active neutrino mass is suppressed by two factors; small VEV of quartet scalar and inverse of TeV scale Dirac mass for triplet fermion, where small quartet VEV is also required by the ρ-parameter constraint. Furthermore, small VEV of the quartet is naturally realized by mechanism similar to Higgs triplet model.
We have formulated active neutrino mass matrix with linear seesaw mechanism which is given by Yukawa coupling constants associated with interactions among triplet fermion, Then we have discussed collider phenomenology of the model focusing on production of exotic charged particles. Specific signatures of our model are obtained, when an exotic charged fermion decays into SM lepton and a scalar boson from quartet which dominantly decay into SM gauge bosons. Then our signals are multi-leptons with jets and missing transverse energy. We have found that the number of signal events is O(10) at the LHC 13 TeV with integrated luminosity of 300 fb −1 , when triplet fermion mass is ∼ 600 GeV.
Although the number of events is not large, we may observe the signal since the number of SM background events should be also small for multi-lepton final state. In the HL-LHC experiments, we can obtain larger number of events and larger parameter region can be explored. More detailed simulation study is left as a future work. | 4,323.2 | 2018-06-19T00:00:00.000 | [
"Physics"
] |
Development of a Room Temperature SAW Methane Gas Sensor Incorporating a Supramolecular Cryptophane A Coating
A new room temperature supra-molecular cryptophane A (CrypA)-coated surface acoustic wave (SAW) sensor for sensing methane gas is presented. The sensor is composed of differential resonator-oscillators, a supra-molecular CrypA coated along the acoustic propagation path, and a frequency signal acquisition module (FSAM). A two-port SAW resonator configuration with low insertion loss, single resonation mode, and high quality factor was designed on a temperature-compensated ST-X quartz substrate, and as the feedback of the differntial oscillators. Prior to development, the coupling of modes (COM) simulation was conducted to predict the device performance. The supramolecular CrypA was synthesized from vanillyl alcohol using a double trimerisation method and deposited onto the SAW propagation path of the sensing resonators via different film deposition methods. Experiential results indicate the CrypA-coated sensor made using a dropping method exhibits higher sensor response compared to the unit prepared by the spinning approach because of the obviously larger surface roughness. Fast response and excellent repeatability were observed in gas sensing experiments, and the estimated detection limit and measured sensitivity are ~0.05% and ~204 Hz/%, respectively.
Introduction
Mining accidents with heavy casualties caused by methane gas explosions occur frequently, leading to huge economic losses. The most effective way to respond to such an issue is the early detection and monitoring of methane accumulation in mines or landfills. The current approaches for sensing methane gas include gas chromatography, electrochemical, optical, and semiconductor technologies. These techniques differ substantially in their approaches, and each have their own advantages and disadvantages [1][2][3][4]. Gas chromatography can perform an accurate quantitative analysis on methane gas, but, it is expensive and unsuitable for in situ monitoring which is essential in most cases [1]. Electrochemical methane gas sensors face greater challenges in real world applications because of the inertness of the methane molecule and their slow response [2]. The main challenge for optical methane sensors is that it is hard to find a suitable light source in the infrared range, and range, and they also suffer from complicate sensor configurations and humidity interference [3][4][5]. As for the semiconductor methane gas sensors, their high operation temperature makes them unsuitable in mine environments due to the risk of explosions [6][7][8].
Therefore, the development of smart sensors with high sensitivity, fast response, excellent stability, and capable of room temperature operation is a necessary link for methane gas sensing and monitoring. The birth of the so-called surface acoustic wave (SAW) sensor technology opens up a new way for methane gas sensing. By means of sensitive materials with specific selectivity, SAW sensors have some excellent features such as high-sensitivity, ambient-temperature operation, simple packaging requirements, low cost, fast response, small size, and large dynamic range [9,10]. A typical SAW-based gas sensor configuration is composed of a differential oscillator array, and some sensitive materials deposited in sensing channels on the SAW propagation path of the SAW devices. The adsorption of the gas molecules to be analyzed by the sensitive interface modulates the SAW propagation properties. The corresponding change in velocity is read out by recording the frequency signal, which is directly proportional to the gas concentration. Obviously, the SAW device only plays a "quantification" role, while a qualitative assessment of the analyte is completed by suitable sensitive films. Unfortunately, it is not easy to find a good sensitive material candidate for sensing methane until the supramolecular species cryptophane A (CrypA) was synthesized and confirmed to show excellent selectivity to methane gas. As the smallest of the cryptophane family, Cryp A was utilized as the sensitive interface and exhibits an amazing affinity towards methane gas molecules by supramolecular interactions between the host and methane molecules [11,12], which arises from size complementarity and efficient van der Waals interactions. Recently, a quartz crystal microbalance (QCM)-based methane gas sensor configuration was presented employing CrypA as the sensitive interface [13,14], and superior selectivity, a fast response and a low detection limit of 0.05% were achieved at room temperature. However, the QCM-based methane gas sensor easily reaches saturation when the applied methane concentration is over 0.2%, that this makes it difficult to meet the actual requirements of underground methane gas alarms (the alarm point methane gas concentration is usually 1%).
Hence, to address this issue, the main contribution in this work is to develop a new methane gas sensor incorporating SAW technology and CrypA films, which try to provide fast and accurate measurements in a larger dynamic range. The proposed sensor configuration consists of differential resonator-oscillators, and a sensitive coating on the sensing device surface, and a frequency signal acquisition module (FSAM), as shown in Figure 1. A two-port SAW resonator is designed on a temperature-compensated ST-X quartz substrate as the feedback of the differential oscillator. Lower insertion loss, high quality factor, and single resonation mode were achieved in the SAW devices developed by the photolithographic technique. For sensing methane gas, CrypA was synthesized from vanillyl alcohol and deposited onto the surface of the sensing SAW device. Different film deposition methods were applied for the CrypA coating to achieve higher sensor responses. The proposed SAW sensor was exposed to various concentrations of methane gas at room temperature, and the resulting performance, measured in terms of sensitivity, detection limit, and repeatability, was characterized experimentally.
Technique Realization
This section describes the realization of the physical structure of the CrypA-coated SAW methane sensor.
Two-Port SAW Resonator
A two-port SAW resonator configuration was reproducibly developed on a temperaturecompensated ST-X quartz substrate as the oscillation feedback, as shown in Figure 1. 1600 Å Al-strip was deposited onto ST-X quartz wafer by using the photolithographic process. A thin SiO 2 with 200 Å thickness is deposited on the device surface to protect the electrodes in the CrypA deposition by using plasma enhanced chemical vapor deposition (PECVD), and the SiO 2 coating is amorphous and porous to increase the sensing contact area, which is benefitial for the interaction between the CrypA and methane gas. The design parameters of the SAW device are listed in Table 1. Prior to the SAW device development, the coupling of modes (COM) model was used for performance simulation. Using the typical admittance matrix solution for whole device, rYs " r y 11 y 12 y 21 y 22 s, the frequency response S 21 of the SAW device can be deduced by [15]: S 21 "´2 y 12 ? G in G out pG in`y11 qpG out`y22 q´y 12 y 21 (1) Here, G in and G out are the input and output impedance, respectively. Using Equation (1), the SAW resonator with 1600 Å thick Al-strip was simulated utilizing the corresponding structure parameters listed in Table 1. The simulated frequency response of the SAW resonator is depicted in Figure 2. Low insertion loss of 4.5 dB, high quality factor of~3500 and single resonation mode were achieved. Then, the fabricated SAW resonator ( Figure 2) was characterized by using a network analyzer, and in comparison with the simulation. The measured result agrees well with the simulation. The resonant frequency of the developed SAW device is measured as 299.4 MHz.
Technique Realization
This section describes the realization of the physical structure of the CrypA-coated SAW methane sensor.
Two-Port SAW Resonator
A two-port SAW resonator configuration was reproducibly developed on a temperaturecompensated ST-X quartz substrate as the oscillation feedback, as shown in Figure 1. 1600 Å Al-strip was deposited onto ST-X quartz wafer by using the photolithographic process. A thin SiO2 with 200 Å thickness is deposited on the device surface to protect the electrodes in the CrypA deposition by using plasma enhanced chemical vapor deposition (PECVD), and the SiO2 coating is amorphous and porous to increase the sensing contact area, which is benefitial for the interaction between the CrypA and methane gas. The design parameters of the SAW device are listed in Table 1. Prior to the SAW device development, the coupling of modes (COM) model was used for performance simulation.
Here, Gin and Gout are the input and output impedance, respectively. Using Equation (1), the SAW resonator with 1600 Å thick Al-strip was simulated utilizing the corresponding structure parameters listed in Table. 1. The simulated frequency response of the SAW resonator is depicted in Figure 2. Low insertion loss of 4.5 dB, high quality factor of ~3500 and single resonation mode were achieved. Then, the fabricated SAW resonator ( Figure 2) was characterized by using a network analyzer, and in comparison with the simulation. The measured result agrees well with the simulation. The resonant frequency of the developed SAW device is measured as 299.4 MHz.
Differential Resoantor-Oscillator
The fabricated SAW sensor chip was loaded into a standard metal base (seen in the inset of Figure 2). As the feedback in the oscillation path, all transducers of the SAW resonators were connected by each oscillator circuit which was composed of an amplifier (BGA2817), phase shifter, mixer (UPC2758) and LPF and so on, as shown in Figure 3. The output of the each oscillator was mixed to obtain a different frequency in the MHz range and reduce the influence of the thermal expansion of the piezoelectric substrate. The mixed oscillation frequency signal was picked by a FPGA-based FSAM, and plotted in real-time by a self-made interface display program. Usually, the frequency stability of the oscillator affects significantly the detection limit of the gas sensor. Therefore, an experiment was conducted to measure the frequency stability of the developed differential resonator-oscillator at room temperature (20˝C) controlled by an incubator. To improve the frequency stability, the oscillation was conducted at the frequency point corresponding to the lowest insertion loss by a strategically phase modulation [15]. The measured short-term (in seconds) frequency stability of the oscillator without a sensor coating is˘1.5 Hz/s, and the medium term (in hours) frequency stability is better than˘25/h at the equilibrium status ( Figure 4). One of the reasons for the frequency fluctuations is the environmental temperature which can only be stabilized to within˘0.5˝C.
Differential Resoantor-Oscillator
The fabricated SAW sensor chip was loaded into a standard metal base (seen in the inset of Figure 2). As the feedback in the oscillation path, all transducers of the SAW resonators were connected by each oscillator circuit which was composed of an amplifier (BGA2817), phase shifter, mixer (UPC2758) and LPF and so on, as shown in Figure 3. The output of the each oscillator was mixed to obtain a different frequency in the MHz range and reduce the influence of the thermal expansion of the piezoelectric substrate. The mixed oscillation frequency signal was picked by a FPGA-based FSAM, and plotted in real-time by a self-made interface display program. Usually, the frequency stability of the oscillator affects significantly the detection limit of the gas sensor. Therefore, an experiment was conducted to measure the frequency stability of the developed differential resonator-oscillator at room temperature (20 °C) controlled by an incubator. To improve the frequency stability, the oscillation was conducted at the frequency point corresponding to the lowest insertion loss by a strategically phase modulation [15]. The measured short-term (in seconds) frequency stability of the oscillator without a sensor coating is ±1.5 Hz/s, and the medium term (in hours) frequency stability is better than ±25/h at the equilibrium status ( Figure 4). One of the reasons for the frequency fluctuations is the environmental temperature which can only be stabilized to within ±0.5 °C.
Differential Resoantor-Oscillator
The fabricated SAW sensor chip was loaded into a standard metal base (seen in the inset of Figure 2). As the feedback in the oscillation path, all transducers of the SAW resonators were connected by each oscillator circuit which was composed of an amplifier (BGA2817), phase shifter, mixer (UPC2758) and LPF and so on, as shown in Figure 3. The output of the each oscillator was mixed to obtain a different frequency in the MHz range and reduce the influence of the thermal expansion of the piezoelectric substrate. The mixed oscillation frequency signal was picked by a FPGA-based FSAM, and plotted in real-time by a self-made interface display program. Usually, the frequency stability of the oscillator affects significantly the detection limit of the gas sensor. Therefore, an experiment was conducted to measure the frequency stability of the developed differential resonator-oscillator at room temperature (20 °C) controlled by an incubator. To improve the frequency stability, the oscillation was conducted at the frequency point corresponding to the lowest insertion loss by a strategically phase modulation [15]. The measured short-term (in seconds) frequency stability of the oscillator without a sensor coating is ±1.5 Hz/s, and the medium term (in hours) frequency stability is better than ±25/h at the equilibrium status ( Figure 4). One of the reasons for the frequency fluctuations is the environmental temperature which can only be stabilized to within ±0.5 °C.
Synthesis and Characterization of CrypA
The synthesis of CrypA is depicted in Scheme 1, which was followed from the well-known two-step method [16]. Then, the synthesized CrypA was characterized by 1
Synthesis and Characterization of CrypA
The synthesis of CrypA is depicted in Scheme 1, which was followed from the well-known twostep method [16]. Then, the synthesized CrypA was characterized by 1
CrypA Deposition
Usually, the CrypA molecules are easily gathered and crystallized on the piezoelectric substrate surface, so the CrypA molecules are first dissolved in tetrahydrofuran (THF) prior to deposition on the sensing SAW device surface. The detailed composition of the CrypA-solution is that 3.0 mg CrypA, 0.3 mg polyvinyl chloride (PVC) and 0.6 mg dioctyl sebacate were dissolved in 2 mL THF. The polyvinyl chloride was used to improve the adhesion of CrypA. To determine the more effective way for CrypA formation, drop-coating and spin-coating were tested. Before the CrypA deposition, the SiO2 surface of the sensing SAW device was cleaned of any contaminants by a routine cleaning procedure involving rinsing in piranha solution (H2SO4 -H2O2 = 3:1 v/v), a DI water rinse and drying by N2. For drop-coating, 0.3 μL CrypA-solution was dropped on the cleaned SiO2 film surface between the IDTs of the sensing resonator, and then cured at 80 °C for 40 min in an oven. For spincoating, 100 μL solutions were spin at 2000 rpm for 30 s on a ST-quartz wafer with SAW resonator patterns. The wafer was also cured at 80 °C for 40 min and then diced into individual SAW resonators.
Then, the surface topography of CrypA-coated sensor chips utilizing different CrypA coating methods was characterized by the atomic force microscope (AFM), as shown in Figure 6. As a rough estimate, the thicknesses of the CrypA coating deposited by spin-coating and drop-coating are ~600 Scheme 1. Synthesis of CrypA.
Synthesis and Characterization of CrypA
The synthesis of CrypA is depicted in Scheme 1, which was followed from the well-known twostep method [16]. Then, the synthesized CrypA was characterized by 1
CrypA Deposition
Usually, the CrypA molecules are easily gathered and crystallized on the piezoelectric substrate surface, so the CrypA molecules are first dissolved in tetrahydrofuran (THF) prior to deposition on the sensing SAW device surface. The detailed composition of the CrypA-solution is that 3.0 mg CrypA, 0.3 mg polyvinyl chloride (PVC) and 0.6 mg dioctyl sebacate were dissolved in 2 mL THF. The polyvinyl chloride was used to improve the adhesion of CrypA. To determine the more effective way for CrypA formation, drop-coating and spin-coating were tested. Before the CrypA deposition, the SiO2 surface of the sensing SAW device was cleaned of any contaminants by a routine cleaning procedure involving rinsing in piranha solution (H2SO4 -H2O2 = 3:1 v/v), a DI water rinse and drying by N2. For drop-coating, 0.3 μL CrypA-solution was dropped on the cleaned SiO2 film surface between the IDTs of the sensing resonator, and then cured at 80 °C for 40 min in an oven. For spincoating, 100 μL solutions were spin at 2000 rpm for 30 s on a ST-quartz wafer with SAW resonator patterns. The wafer was also cured at 80 °C for 40 min and then diced into individual SAW resonators.
Then, the surface topography of CrypA-coated sensor chips utilizing different CrypA coating methods was characterized by the atomic force microscope (AFM), as shown in Figure 6. As a rough estimate, the thicknesses of the CrypA coating deposited by spin-coating and drop-coating are ~600
CrypA Deposition
Usually, the CrypA molecules are easily gathered and crystallized on the piezoelectric substrate surface, so the CrypA molecules are first dissolved in tetrahydrofuran (THF) prior to deposition on the sensing SAW device surface. The detailed composition of the CrypA-solution is that 3.0 mg CrypA, 0.3 mg polyvinyl chloride (PVC) and 0.6 mg dioctyl sebacate were dissolved in 2 mL THF. The polyvinyl chloride was used to improve the adhesion of CrypA. To determine the more effective way for CrypA formation, drop-coating and spin-coating were tested. Before the CrypA deposition, the SiO 2 surface of the sensing SAW device was cleaned of any contaminants by a routine cleaning procedure involving rinsing in Piranha solution (V(H 2 SO 4 ):V(H 2 O 2 ) = 3:1), a DI water rinse and drying by N 2 . For drop-coating, 0.3 µL CrypA-solution was dropped on the cleaned SiO 2 film surface between the IDTs of the sensing resonator, and then cured at 80˝C for 40 min in an oven. For spin-coating, 100 µL solutions were spin at 2000 rpm for 30 s on a ST-quartz wafer with SAW resonator patterns. The wafer was also cured at 80˝C for 40 min and then diced into individual SAW resonators.
Then, the surface topography of CrypA-coated sensor chips utilizing different CrypA coating methods was characterized by the atomic force microscope (AFM), as shown in Figure 6. As a rough estimate, the thicknesses of the CrypA coating deposited by spin-coating and drop-coating are~600 nm Sensors 2016, 16, 73 6 of 10 and~200 nm, respectively. It is also obviously that the CrypA film formed by drop-coating has an obviously larger rougher surface with many fluctuations and bubbles, whereas the film surface coated with spin-coating is much smoother. nm and ~200 nm, respectively. It is also obviously that the CrypA film formed by drop-coating has an obviously larger rougher surface with many fluctuations and bubbles, whereas the film surface coated with spin-coating is much smoother.
Gas Sensor Experimental Setup
The developed CrypA-coated sensing chip and reference chip were placed in a surface nickelplated Aluminum gas chamber with volume of 500 mL (Figure 7a), and connected to corresponding oscillation circuit, respectively. The experimental set up in Figure 7b was utilized to characterize the sensor responses towards methane gas at room temperature. The SAW sensor was exposed to N2 and CH4 gas alternately via the gas path, as shown in Figure 7b. To study the humidity effects, air was used as the diluents, and the relative humidity (RH) was controlled by a streaming standard humidity generator (RST-GX-2, Beijing Naisisa New Technology Development Corp, Beijing, China.). The sensor signal were collected at 60 points per minute by the FSAM, and plotted by the personal computer in real time. (a)
Gas Sensor Experimental Setup
The developed CrypA-coated sensing chip and reference chip were placed in a surface nickel-plated Aluminum gas chamber with volume of 500 mL (Figure 7a), and connected to corresponding oscillation circuit, respectively. The experimental set up in Figure 7b was utilized to characterize the sensor responses towards methane gas at room temperature. The SAW sensor was exposed to N 2 and CH 4 gas alternately via the gas path, as shown in Figure 7b. To study the humidity effects, air was used as the diluents, and the relative humidity (RH) was controlled by a streaming standard humidity generator (RST-GX-2, Beijing Naisisa New Technology Development Corp, Beijing, China.). The sensor signal were collected at 60 points per minute by the FSAM, and plotted by the personal computer in real time. nm and ~200 nm, respectively. It is also obviously that the CrypA film formed by drop-coating has an obviously larger rougher surface with many fluctuations and bubbles, whereas the film surface coated with spin-coating is much smoother.
Gas Sensor Experimental Setup
The developed CrypA-coated sensing chip and reference chip were placed in a surface nickelplated Aluminum gas chamber with volume of 500 mL (Figure 7a), and connected to corresponding oscillation circuit, respectively. The experimental set up in Figure 7b was utilized to characterize the sensor responses towards methane gas at room temperature. The SAW sensor was exposed to N2 and CH4 gas alternately via the gas path, as shown in Figure 7b. To study the humidity effects, air was used as the diluents, and the relative humidity (RH) was controlled by a streaming standard humidity generator (RST-GX-2, Beijing Naisisa New Technology Development Corp, Beijing, China.). The sensor signal were collected at 60 points per minute by the FSAM, and plotted by the personal computer in real time.
Sensor Performance Evaluation
First, the repeatability of the developed CrypA-coated SAW sensor was evaluated. Figure 8 showed a response profile obtained from three consecutive 18 min on-off exposures to 5% of CH4 in pure N2 at 20 °C using the developed sensors with CrypA deposited by different coating approaches. ~1 kHz of frequency response was achieved from the sensor with CrypA coated by drop-coating method (Figure 8a). It can also be noted that three gas exposures are in good reproducible run. The gathered frequency signal showed a rapid rise upon exposure to CH4 and reaches approximately the
Sensor Performance Evaluation
First, the repeatability of the developed CrypA-coated SAW sensor was evaluated. Figure 8 showed a response profile obtained from three consecutive 18 min on-off exposures to 5% of CH 4 in pure N 2 at 20˝C using the developed sensors with CrypA deposited by different coating approaches.
1 kHz of frequency response was achieved from the sensor with CrypA coated by drop-coating method (Figure 8a). It can also be noted that three gas exposures are in good reproducible run. The gathered frequency signal showed a rapid rise upon exposure to CH 4 and reaches approximately the equilibrium (saturation) value in 12 s. When the gas was removed by N 2 injection, the sensor response returned to its initial baseline within 20 s. It means the 90% response time of~12 s and recovery time of 20 s with good repeatability were obtained at room temperature. These promising results indicated that this sensor exhibits fast response and excellent repeatability in response to CH 4 . The sensor response towards 5% CH 4 was also conducted from the sensor with CrypA coated by spinning-coating, as shown in Figure 8b, only~100 Hz of frequency response was observed, much smaller than that of drop-coating CrypA sensor. Considering the CrypA film topography characteristics mentioned above, the experimental results indicate that the surface roughness of CrypA coating influences greatly on the sensor response. Sensitive film with larger rougher surface provides larger surface-to-volume ratio, thus more CrypA molecules are able to contact and absorb methane gas, resulting in higher sensor response.
Sensor Performance Evaluation
First, the repeatability of the developed CrypA-coated SAW sensor was evaluated. Figure 8 showed a response profile obtained from three consecutive 18 min on-off exposures to 5% of CH4 in pure N2 at 20 °C using the developed sensors with CrypA deposited by different coating approaches. ~1 kHz of frequency response was achieved from the sensor with CrypA coated by drop-coating method (Figure 8a). It can also be noted that three gas exposures are in good reproducible run. The gathered frequency signal showed a rapid rise upon exposure to CH4 and reaches approximately the equilibrium (saturation) value in 12 s. When the gas was removed by N2 injection, the sensor response returned to its initial baseline within 20 s. It means the 90% response time of ~12 s and recovery time of ~20 s with good repeatability were obtained at room temperature. These promising results indicated that this sensor exhibits fast response and excellent repeatability in response to CH4. The sensor response towards 5% CH4 was also conducted from the sensor with CrypA coated by spinning-coating, as shown in Figure 8b, only ~100 Hz of frequency response was observed, much smaller than that of drop-coating CrypA sensor. Considering the CrypA film topography characteristics mentioned above, the experimental results indicate that the surface roughness of CrypA coating influences greatly on the sensor response. Sensitive film with larger rougher surface provides larger surface-to-volume ratio, thus more CrypA molecules are able to contact and absorb methane gas, resulting in higher sensor response. Then, we exposed the CrypA-coated sensors by using the drop-coating to CH4 with different concentrations to characterize their sensitivity. Figure 9 shows the real-time response of the CrypA sensor to low (Figure 9a) and high (Figure 9b) CH4 concentrations at room temperature (20 °C). And the maximum frequency response was plotted vs. the corresponding CH4 concentrations as shown in Figure 10. It is obviously that as the gas concentration increased, the sensor frequency signal also increased with approximately linearity. The sensitivity in CH4 concentration range of 0.2%~5% was Then, we exposed the CrypA-coated sensors by using the drop-coating to CH 4 with different concentrations to characterize their sensitivity. Figure 9 shows the real-time response of the CrypA sensor to low ( Figure 9a) and high (Figure 9b) CH 4 concentrations at room temperature (20˝C). And the maximum frequency response was plotted vs. the corresponding CH 4 concentrations as shown in Figure 10. It is obviously that as the gas concentration increased, the sensor frequency signal also increased with approximately linearity. The sensitivity in CH 4 concentration range of 0.2%~5% was evaluated as~204 Hz/% with well linearity of 0.99827. Also, from Figure 9b, the sensor response of 50 Hz occurs at CH 4 concentration of 0.2%. It means lower detection limit will be expected because the present oscillator exhibits excellent short-term frequency stability of˘1.5 Hz/s. Hence, based on the International Union of Pure and Applied Chemistry (IUPAC) (Zurich, Switzerland), the detection limit of the developed sensor can be estimated to less than 0.05%, the same rank to the reported CrypA-coated QCM sensor but the dynamic range is larger more [9]. The measure results indicate that the presented CrypA-coated SAW sensor was very promising for CH 4 detection and monitor in homes, industries and mines.
Then, we exposed the CrypA-coated sensors by using the drop-coating to CH4 with different concentrations to characterize their sensitivity. Figure 9 shows the real-time response of the CrypA sensor to low ( Figure 9a) and high (Figure 9b) CH4 concentrations at room temperature (20 °C). And the maximum frequency response was plotted vs. the corresponding CH4 concentrations as shown in Figure 10. It is obviously that as the gas concentration increased, the sensor frequency signal also increased with approximately linearity. The sensitivity in CH4 concentration range of 0.2%~5% was evaluated as ~204 Hz/% with well linearity of 0.99827. Also, from Figure 9b, the sensor response of ~50 Hz occurs at CH4 concentration of 0.2%. It means lower detection limit will be expected because the present oscillator exhibits excellent short-term frequency stability of ±1.5 Hz/s. Hence, based on the International Union of Pure and Applied Chemistry (IUPAC) (Zurich, Switzerland), the detection limit of the developed sensor can be estimated to less than 0.05%, the same rank to the reported CrypA-coated QCM sensor but the dynamic range is larger more [9]. The measure results indicate that the presented CrypA-coated SAW sensor was very promising for CH4 detection and monitor in homes, industries and mines.
Humidity Effect Evaluation
Usually, the gas sensors are usually deployed in the ambient dynamic environment that may affect significantly the sensor performance [17]. Therefore, it is necessary to study the humidity effect on these sensors. Figure 11 illustrated the crossed humidity sensitivity of the developed SAW sensor, it clearly indicates that the sensor signal increased as the RH increased. The response to a constant methane concentration of 5% but different humidity levels of 0%, 20%, and 30% was 1015 Hz, 1258 Hz, and 1381 Hz, respectively. Obviously, the humidity affected the sensor performance significantly. The reason for the humidity effect is the adsorption of water molecules in CrypA while methane
Humidity Effect Evaluation
Usually, the gas sensors are usually deployed in the ambient dynamic environment that may affect significantly the sensor performance [17]. Therefore, it is necessary to study the humidity effect Sensors 2016, 16, 73 9 of 10 on these sensors. Figure 11 illustrated the crossed humidity sensitivity of the developed SAW sensor, it clearly indicates that the sensor signal increased as the RH increased. The response to a constant methane concentration of 5% but different humidity levels of 0%, 20%, and 30% was 1015 Hz, 1258 Hz, and 1381 Hz, respectively. Obviously, the humidity affected the sensor performance significantly. The reason for the humidity effect is the adsorption of water molecules in CrypA while methane sensing. As described in Reference [10], the humidity effect on sensor response can be alleviated in different ways like calibration by creating a special database relating to different humidity level, or utilizing a polymer coating on the reference device to estimate the humidity effect by difference method.
Humidity Effect Evaluation
Usually, the gas sensors are usually deployed in the ambient dynamic environment that may affect significantly the sensor performance [17]. Therefore, it is necessary to study the humidity effect on these sensors. Figure 11 illustrated the crossed humidity sensitivity of the developed SAW sensor, it clearly indicates that the sensor signal increased as the RH increased. The response to a constant methane concentration of 5% but different humidity levels of 0%, 20%, and 30% was 1015 Hz, 1258 Hz, and 1381 Hz, respectively. Obviously, the humidity affected the sensor performance significantly. The reason for the humidity effect is the adsorption of water molecules in CrypA while methane sensing. As described in Reference [10], the humidity effect on sensor response can be alleviated in different ways like calibration by creating a special database relating to different humidity level, or utilizing a polymer coating on the reference device to estimate the humidity effect by difference method.
Conclusions
A room temperature SAW sensor incorporating a CrypA coating was developed for sensing methane gas. The two-port SAW resonsators with low insertion loss, high quality factor, and single resonation mode were fabricated on a temperature-compensated ST-X quartz substrate as the feedback element in oscillation path. The synthesized Supramolecular CrypA was deposited onto the sensing SAW device as the sensitive material for sensing methane gas. Different CrypA film deposition approaches were conducted to achieve higher sensor response. The sensor responses to methane were evaluated experimentally. Fast response and excellent repeatability were observed, and the estimated detection limit and sensitivity are ~0.05% and ~204 Hz/%, respectively. | 7,130.2 | 2016-01-01T00:00:00.000 | [
"Physics"
] |
LOW COST WIRELESS WEATHER MONITORING SYSTEM
Http://www.ijetmr.com©International Journal of Engineering Technologies and Management Research [35-39] LOW COST WIRELESS WEATHER MONITORING SYSTEM Kirankumar G.Sutar *1 *1 Lecturer, Department of Electronics & Telecommunication, Bharati Vidyapeeth’s Institute of Technology, Palus, (MS), INDIA Abstract: Weather monitoring holds great importance and have uses in several areas ranging from keeping track of agricultural field weather conditions to industrial conditions monitoring. Weather monitoring plays an important role in human life, so the collection of information about weather changes is very important. This paper describes a weather monitoring system which enables the monitoring of weather parameters like Temperature, Humidity and Light intensity. Sensor module includes the sensors like temperature, humidity and light sensor. The system is developed using ZigBee wireless module. The measured weather parameters are Temperature, Humidity and Light intensity. The developed system is cost effective, compact and portable.
INTRODUCTION
In an industry during certain hazards it will be very difficult to monitor the parameter through wires and analog devices such as transducers. To overcome this problem we use wireless device to monitor the parameters so that we can take certain steps even in worst case. Few years back the use of wireless device was very less, but due the rapid development in technology, now-adays, we use maximum of our data transfer through wireless like Wi-Fi, Bluetooth, Wi-Max, etc. A wireless weather monitoring system which enables to monitor the weather parameter in an industry or anywhere can be designed by using ZigBee technology. The parameters can be displayed on the PC's screen.
This paper focuses on the use of multiple sensors; several sensors that are able to continuously read some parameters that indicate the weather conditions such as temperature, humidity and light intensity. As the monitoring is intended to be carried out in a remote area with limited access, signal or data from the sensor unit will then be transmitted wirelessly to the base monitoring system. technology, it is found that ZigBee technology is most reliable and suitable for indoor as well as outdoor sensor network. ZigBee is a communication standard for use in the wireless sensor network defined by the ZigBee Alliance that adopting the IEEE 802.15.4 standard for its reliable communication. It fulfills the requirement for a low cost, easy to use, minimal power consumption and reliable data communication between sensor nodes. It provides a transmission speed typically 250 kbps over a range of 10 to 100 meters and can be configured in star, mesh or peer-to-peer topologies.
The development of Graphical User Interface (GUI) for the monitoring purposes at the base monitoring station is another main component. The GUI should be able to display the parameters being monitored.
2.1.WIRELESS SENSOR NETWORK (WSN)
WSN can operate in a wide range of environments and provide advantages in cost, size, power, flexibility and distributed intelligence, compared to wired ones. Monitoring applications have been developed in medicine, agriculture, environment, military, machine/building, toys, motion tracking and many other fields. Architectures for sensor networks have been changing greatly over the last 50 years, from the analogue 4-20 mA designs to the bus and network topology of today. Bus architectures reduce wiring and required communication bandwidth. Wireless sensors further decrease wiring needs, providing new opportunities for distributed intelligence architectures. For field bus architecture, the risk of cutting the bus that connects all the sensors persists. WSN eliminates all the problems arising from wires in the system. This is the most important advantage of using such technology for monitoring. A WSN is a system comprised of radio frequency (RF) transceivers, sensors, microcontrollers and power sources.
Currently two standard technologies are available for WSN: ZigBee and Bluetooth. Both operate within the Industrial Scientific and Medical (ISM) band of 2.4 GHz, which provides license free operations, huge spectrum allocation and worldwide compatibility.
For applications where higher data rates are important, Bluetooth clearly has the advantage since it can support a wider range of traffic types than ZigBee. However, the power consumption in a sensor network is of primary importance and it should be extremely low. Bluetooth is probably the closest peer to WSNs, but its power consumption has been of secondary importance in its design. Bluetooth is therefore not suitable for applications that require ultra-low power consumption; turning on and off consumes a great deal of energy. In contrast, the ZigBee protocol places primary importance on power management; it was developed for low power consumption and years of battery life. Bluetooth devices have lower battery life compared to ZigBee, as a result of the processing and protocol management overhead which is required for ad hoc networking. Also, ZigBee provides higher network flexibility than Bluetooth, allowing different topologies. ZigBee allows a larger number of nodesmore than 65,000 Sensorsaccording to specification.
2.2.SYSTEM ARCHITECTURE
The system contains two parts. One is transmitter part and another one is receiver part. The transmitter part consists of weather sensors, microcontroller and Zigbee and the receiver part consist of a PC interfaced with Zigbee through PC serial port. The system monitors temperature, humidity and light intensity with the help of respective sensors. The data from the sensors are collected by the micro controller and transmitted to the receiver section through wireless medium. All the parameters are viewed by the PC using program in the receiver side.
Microcontroller
The AT89C52 is a low-power, high-performance CMOS 8-bit microcomputer with 8K bytes of Flash programmable and erasable read only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industrystandard 80C51 and 80C52 instruction set and pin out. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C52 is a powerful microcomputer which provides a highly-flexible and cost-effective solution to many embedded control applications.
ZigBee Network
ZigBee is a specification for a suite of high level communication protocols using small, low power digital radios based on an IEEE 802 standard for personal area network. The technology defined by the ZigBee specifications is intended to be simpler and less expensive than other WPANs such as Bluetooth. ZigBee is targeted at radio frequency applications that require a low data rate, long battery life and secure networking.
Sensors
A sensor is a device that measures a physical quantity and converts it into a signal which can read by an observer or by an instrument.
A. Temperature Sensor
National semiconductor's LM35 IC has been used for sensing the temperature. It is an integrated circuit sensor that can be used to measure temperature with an electrical output proportional to the temperature. The temperature can be measured more accurately with it than using thermistor.
B. Humidity sensor
Humidity sensor works on the principle of relative humidity and gives the output in the form of voltage. This analog voltage provides the information about the percentage relative humidity present in the environment. A miniature sensor consisting of a RH sensitive material deposited on a ceramic substrate. The AC resistance (impedance) of the sensor decreases as relative humidity increases.
C. Light Dependent Resistor (LDR)
Light Dependent Resistor (LDR), is light sensitive device most often used to indicate the presence or absence of light, or to measure the light intensity. In the dark, resistance is very high, sometimes up to 1MΩ, but when the LDR sensor is exposed to light, the resistance drops dramatically, even down to a few ohms, depending on the light intensity. LDRs have a sensitivity that varies with the wavelength of the light applied.
2.3.SYSTEM IMPLEMENTATION
The system is designed and the sensors are fixed to measure the weather parameters. The base station (receiver part) consists of a same Zigbee module programmed as a coordinator that receives the data sent from the sensor node (transmitter part) wirelessly. Data received from the sensor node is sent to the computer using the RS 232 protocol and data received is displayed using the built GUI on the base monitoring station.
2.4.SOFTWARE DETAILS
Software is an integral part of any control system; it interacts with hardware to carry out different functions. In the given problem the software can be divided into following subparts: KEIL µVision3 software has been used.
RESULTS AND DISCUSSIONS
The goal was to design and develop a low cost Microcontroller based Wireless Weather Monitoring System. To achieve this hardware was developed with compatible software in KEIL so that the above mentioned parameters can be monitored. The hardware with compatible software is of simple design, cost effective and accurate. It can be observed that the temperature sensor shows a good level of stability as well as accuracy. The humidity sensor and LDR of system also shows a very good accuracy.
CONCLUSION
The system has been successfully implemented. The implemented system is successful in measuring the Temperature, Humidity and Light intensity. Real-time data can be seen from a GUI window in Personal Computer. | 2,072 | 2020-01-29T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Transporters at the Interface between Cytosolic and Mitochondrial Amino Acid Metabolism
Mitochondria are central organelles that coordinate a vast array of metabolic and biologic functions important for cellular health. Amino acids are intricately linked to the bioenergetic, biosynthetic, and homeostatic function of the mitochondrion and require specific transporters to facilitate their import, export, and exchange across the inner mitochondrial membrane. Here we review key cellular metabolic outputs of eukaryotic mitochondrial amino acid metabolism and discuss both known and unknown transporters involved. Furthermore, we discuss how utilization of compartmentalized amino acid metabolism functions in disease and physiological contexts. We examine how improved methods to study mitochondrial metabolism, define organelle metabolite composition, and visualize cellular gradients allow for a more comprehensive understanding of how transporters facilitate compartmentalized metabolism.
Introduction
The compartmentalization of metabolic pathways into one or more subcellular organelles is, except for rare cases, a fundamental characteristic of eukaryotic organisms. Metabolic compartmentalization allows for specific pools of enzymes, substrates, and cofactors to be maintained within each organelle, providing unique subcellular conditions to fulfill specialized biochemical functions. The mitochondrion is one such organelle, originating as symbiotic α-proteobacteria that co-evolved within a proto-eukaryote host [1,2]. Many changes to proto-mitochondrial functions have evolved since the initial endosymbiosis occurred [3], complicating our understanding of the metabolic reasoning behind the symbiotic relationship; however, present day mitochondria are complex organelles that participate in broad and critical cellular and biochemical roles. Mitochondria, in addition to canonical ATP generation, play an important biosynthetic role; and amino acid metabolism is intricately linked to this functional output. Notably, amino acids do not freely diffuse across the inner mitochondrial membrane and require specific transport proteins to facilitate their exchange.
Other oncogenes have been further shown to mediate pro-cancer effects through upregulation of glutaminolysis. In pancreatic ductal adenocarcinoma cells (PDAC), oncogenic KRAS was shown to shift glutamine metabolism by upregulating cytoplasmic GOT1 and downregulating GLUD1, stimulating a pathway in which glutamine-derived aspartate from the mitochondria is used as a metabolite to generate cytosolic OAA [11]. Subsequent activity of malate dehydrogenase (MDH) and cytosolic malic enzyme (ME1) supply PDAC cells with reduced pyridine nucleotides necessary for redox homeostasis [11]. In a separate study, PDAC cells cultured in acidic conditions also exhibited an increased dependence on glutamine for redox homeostasis and anaplerosis [31]. In colorectal cancer (CRC), mutations in the PIK3CA gene, which encodes for the p100α subunit of PI3K, lead to upregulation of GPT2 and reliance on glutamine-derived TCA intermediates to sustain growth [32]. Additionally, the liver receptor homolog 1 (LRH1) has been implicated as a transcription factor, which drives tumor formation via effects on glutamine metabolism in hepatocellular carcinoma (HCC) [33]. Overall, the upregulation of glutaminolysis in cancer cells is near ubiquitous and achieved through many different oncogenic effectors.
Inhibiting aberrant glutaminolysis has largely focused on targeting mitochondrial glutaminase activity. BPTES and the more soluble and bio-available CB-839 selectively inhibit GLS1 and have been investigated as anti-neoplastic agents in several contexts [34][35][36][37][38]. However, certain cancer types (e.g., pancreatic, lung) demonstrate contradicting sensitivity to GLS inhibition in vitro and in vivo, suggesting that tumors may be more glutaminolysis independent in vivo than modeled in culture [39,40]. Further, these studies highlight the plasticity of glutamine and glutamate metabolism and suggest that cells may autonomously re-route metabolic flux to supply glutamate through other means ( Figure 1). Glutamate is produced during the synthesis of purine and pyrimidine nucleobases and the glycosylation subunit N-acetyl-glucosamine (GlcNAc). Purine and pyrimidine synthesis utilize the γ-nitrogen of glutamine to generate 5-phospho-β-D-ribosylamine (PRA) and carbamoyl phosphate (CP) by phosphoribosyl pyrophosphate amidotransferase (PPAT) and the carbamoyl phosphate synthetase (CPS) domain of the CAD complex, respectively [41][42][43]. Cytosolic glutamine is also a substrate for glutamine-fructose 6-phosphate aminotransferase (GFPT1), which is used to produce glutamate and glucosamine 6-phosphate (GlcN6P) a precursor for O-linked N-acetylglucosaminylation [44]. Furthermore, cytosolic asparagine synthesis by asparagine synthetase (ASNS) yields glutamate as well. Given the numerous glutamate supply routes, efforts to target the glutamine demands in cancer have broadened to identify antagonists targeting more than one glutamine-dependent enzyme simultaneously. 6-diazo-5-oxo-L-norleucine (DON) was developed decades ago as a potential antineoplastic agent for its inhibitory activity against many glutamine-dependent enzymes, including glutaminase and glutamine amidotransferases [45]. However, gastrointestinal (GI) toxicity in the majority of patients receiving DON limited its clinical use [46]. More recently, pro-drug forms of DON have been developed with enhanced delivery properties to either the brain or tumors and reduced GI toxicity, which has reinvigorated interest in using glutamine antagonists as antitumor agents [47][48][49][50][51]. An alternative strategy is limiting cancer cell access to glutamine by inhibiting transporter-dependent uptake. Two of the most well documented glutamine transporters are from the SLC1A and SLC38A solute carrier (SLC) families, and the more promiscuous Na + /Cl − -dependent SLC6A14/ATB 0,+ transporter can also play a role in importing glutamine [52][53][54]. SLC1A5/ASCT2 inhibitors have been identified and exhibit promising anti-tumor properties in preclinical models [55][56][57][58][59]. Furthermore, targeting secondary glutamine transporters (e.g., SLC38A2, SLC6A14) genetically or pharmacologically (e.g., α-methyltryptophan) significantly suppresses amino acid homeostasis and tumor growth in pancreatic cancer [60,61].
Aspartate
Aspartate is an acidic non-essential amino acid that can be acquired by either de novo synthesis and/or import from external sources. However, circulating levels of aspartate in physiological conditions are low (~10 µM) and maintained by liver aspartate transaminases; thus, synthesis likely provides the majority of cellular aspartate in most contexts [4]. Biosynthesis of aspartate is carried out via aspartate aminotransferases (glutamic-oxaloacetic transaminases) in the cytosol (GOT1) and in the mitochondrial matrix (GOT2), which as discussed above utilize glutamate as the amino-nitrogen source. Aspartate has many biosynthetic fates within the cell (e.g., proteins, nucleotides, and amino acids) and also serves as an exchange factor for the aspartate-glutamate carrier (AGC1/AGC2), an es-sential component of the malate-aspartate-shuttle (MAS) (Figure 2). MAS is responsible for transferring electrons from cytosolic NADH to mitochondrial NADH, as reducing equivalents (e.g., NAD(P)H) cannot directly cross the inner mitochondrial membrane. However, recent studies identified that SLC25A51 and SLC25A52 facilitate mitochondrial NAD + transport [62][63][64]. Subsequent activity of the MAS and/or UCP2 is required to export aspartate into the cytosol where it can be used as a proteinogenic source and/or a precursor for arginine and asparagine synthesis [65,66].
Metabolites 2021, 11, 112 5 of 28 matrix (GOT2), which as discussed above utilize glutamate as the amino-nitrogen source. Aspartate has many biosynthetic fates within the cell (e.g., proteins, nucleotides, and amino acids) and also serves as an exchange factor for the aspartate-glutamate carrier (AGC1/AGC2), an essential component of the malate-aspartate-shuttle (MAS) (Figure 2). MAS is responsible for transferring electrons from cytosolic NADH to mitochondrial NADH, as reducing equivalents (e.g., NAD(P)H) cannot directly cross the inner mitochondrial membrane. However, recent studies identified that SLC25A51 and SLC25A52 facilitate mitochondrial NAD + transport [62][63][64]. Subsequent activity of the MAS and/or UCP2 is required to export aspartate into the cytosol where it can be used as a proteinogenic source and/or a precursor for arginine and asparagine synthesis [65,66].
Figure 2.
Biochemical pathways and transporters involving aspartate (Asp) and related intermediates. Aspartate is transported by the plasma membrane transporter SLC1A3, which also transports glutamate. Aspartate is synthesized by glutamic-oxaloacetic transaminases (GOT) present in the cytosol (GOT1) or mitochondria (GOT2). Mitochondrial efflux of aspartate mainly occurs through SLC25A12 or SLC25A13, which counter-exchange glutamate and are critical components of the malate-aspartate-shuttle (MAS), and UCP2. Cytosolic aspartate is used as a substrate for asparagine and arginine synthesis via asparagine synthetase (ASNS) and argininosuccinate synthase (ASS1) and as a substrate for nucleotide biosynthesis, contributing carbon and nitrogen to purine and pyrimidines (marked in red). Cytosolic asparagine is used as an exchange factor for several amino acids through an unknown plasma membrane transporter. AA, amino acid; AcCoA, acetyl-coenzyme A; αKG, α-ketoglutarate; Asn, asparagine; Asp, aspartate; FH, fumarate hydratase; Gln, glutamine; Glu, glutamate; GSH, reduced glutathione; GSSG, oxidized glutathione; Mal, malate; Oac, oxaloacetate; Pyr, pyruvate; SDH, succinate dehydrogenase; UCP2, uncoupling protein 2.
In many contexts, aspartate is predominantly synthesized by mitochondrial GOT2 and is suggested to be one output of mitochondrial electron transport chain (ETC) activity in proliferating cells [11,[67][68][69][70][71][72]. Although ATP is another major output of ETC activity, proliferating cells with sufficient access to glucose can switch to aerobic glycolysis to largely satisfy these requirements [67]. Aspartate serves a biosynthetic role, acting as a nitrogen donor for adenine synthesis and a carbon backbone via orotate for pyrimidine Figure 2. Biochemical pathways and transporters involving aspartate (Asp) and related intermediates. Aspartate is transported by the plasma membrane transporter SLC1A3, which also transports glutamate. Aspartate is synthesized by glutamic-oxaloacetic transaminases (GOT) present in the cytosol (GOT1) or mitochondria (GOT2). Mitochondrial efflux of aspartate mainly occurs through SLC25A12 or SLC25A13, which counter-exchange glutamate and are critical components of the malate-aspartate-shuttle (MAS), and UCP2. Cytosolic aspartate is used as a substrate for asparagine and arginine synthesis via asparagine synthetase (ASNS) and argininosuccinate synthase (ASS1) and as a substrate for nucleotide biosynthesis, contributing carbon and nitrogen to purine and pyrimidines (marked in red). Cytosolic asparagine is used as an exchange factor for several amino acids through an unknown plasma membrane transporter. AA, amino acid; AcCoA, acetyl-coenzyme A; αKG, α-ketoglutarate; Asn, asparagine; Asp, aspartate; FH, fumarate hydratase; Gln, glutamine; Glu, glutamate; GSH, reduced glutathione; GSSG, oxidized glutathione; Mal, malate; Oac, oxaloacetate; Pyr, pyruvate; SDH, succinate dehydrogenase; UCP2, uncoupling protein 2.
In many contexts, aspartate is predominantly synthesized by mitochondrial GOT2 and is suggested to be one output of mitochondrial electron transport chain (ETC) activity in proliferating cells [11,[67][68][69][70][71][72]. Although ATP is another major output of ETC activity, proliferating cells with sufficient access to glucose can switch to aerobic glycolysis to largely satisfy these requirements [67]. Aspartate serves a biosynthetic role, acting as a nitrogen donor for adenine synthesis and a carbon backbone via orotate for pyrimidine synthesis. The availability of aspartate has been suggested to be limiting for the proliferation of certain cancers. Sullivan et al. utilized a guinea pig asparaginase (gpASNase1) to supply tumors with asparagine-derived aspartate and observed enhanced tumor growth in HCT116 and AL1376 colorectal and murine PDAC cell lines, respectively [73]. Interestingly, gpASNas1 had little to no effect on the human AsPC1 tumor growth. Similarly, some cancer cells utilize a plasma membrane glutamate and aspartate transporter, SLC1A3, to provide aspartate in conditions where de novo synthesis is restricted by ETC inhibition [74]. Environmental acquisition of aspartate by SLC1A3 has also been implicated in hypoxic microenvironments or in response to glutamine restriction [72,74,75]. Hypoxia reportedly suppresses mitochondrial aspartate biosynthesis via HIF1α-dependent down-regulation of GOT1 and GOT2 in Von Hippel-Lindau (VHL)-deficient renal carcinoma cells [76]. However, pancreatic cancer cells have been shown to sustain aspartate biosynthetic fluxes in oxygen tensions as low as 0.1% O 2 through activity of Complex III+IV containing respiratory supercomplexes, which are suggested to promote efficient respiration in limiting oxygen environments [77]. Notably, maximal HIF stabilization occurs in~1% O 2 , well above tensions where oxygen becomes limiting for mitochondrial respiration [78]. Although glutaminolysis provides cells with the majority of carbon necessary to synthesize aspartate, in cancer subtypes driven by TCA cycle deficiencies (e.g., SDH-or FH-deficiency), pyruvate carboxylase activity can divert glucose-derived pyruvate to supply oxaloacetate necessary for this anabolic function [79][80][81][82][83][84]. This shift to PC-dependent aspartate synthesis was also observed in PDAC tumors in vivo and in breast and lung cancer cell lines exposed to hypoxic oxygen tensions [77,85]. Taken together, aspartate is a critical anabolic metabolite necessary to supply nucleotides for proliferating cancer cells; however, its synthesis from glutamineand/or glucose-derived pathways are complex and highly dependent on the environmental context and nutrient availability.
Cytosolic aspartate is utilized by ASNS and ASS1 for asparagine and arginine biosynthesis, respectively ( Figure 2). The production of these amino acids supports protein translation, but also play indirect roles for proliferation. For example, asparagine acts as an amino acid exchange factor to facilitate the influx of other amino acids (e.g., serine, threonine) necessary to regulate mammalian target of rapamycin complex 1 (mTORC1) activity and proliferation [86]. Arginine is a major source of cellular nitric oxide (NO), through activity of iNOS, or is catabolized by arginase as the final step of the urea cycle. Arginine, and other basic amino acids (e.g., lysine, ornithine), can also transport into the mitochondria by SLC25A29 [87]. Expression of SLC25A29 was shown to be elevated in several cancer cell lines and important for NO production by a mitochondrial NOS [88]. Activity of extrahepatic arginase 2 (ARG2) also regulates mitochondrial NO production [87,89,90]. Notably, several cancers down-regulate the activity of these pathways via silencing of ASS1 and/or ASNS expression, creating a dependence (auxotrophy) for environmental and/or stromal acquisition of these amino acids [91][92][93][94]. Silencing of ASS1 and/or ASNS may provide a selective advantage for cancer cells, allowing for diversion of aspartate towards other anabolic pathways such as nucleotide biosynthesis. Importantly, activity of the MAS and/or other mitochondrial aspartate transporters (e.g., UCP2) represent a key step for regulating compartmental availability of this critical amino acid [65,66].
Serine, Glycine and Alanine
The metabolism of small neutral amino acids serine, glycine, and alanine occurs in both the cytosol and mitochondria and has implications for physiology and human diseases ( Figure 3). Serine consists of a simple hydroxymethyl side chain and is either taken up or synthesized de novo from the glycolytic intermediate 3-phosphoglycerate by three cytosolic enzymes, phosphoglycerate dehydrogenase (PHGDH), phosphoserine aminotransferase (PSAT1), and phosphoserine phosphatase (PSPH). Activity of serine synthesis is important for normal development, as deletion of Phgdh causes embryonic lethality in part due to neurological defects [95]. Serine, specifically D-serine, is thought to be a critical excitatory neurotransmitter acting as a co-agonist of the N-methyl D-aspartate (NMDA) receptor on neurons, and depletion due to deficient synthesis or racemase activity likely leads to catastrophic neurotoxicity [96]. Abnormal D-serine levels in the brain are thought to contribute to neurodegenerative disorders such as Alzheimer's disease and schizophrenia [97][98][99][100]. Expression of serine biosynthesis enzymes are highly regulated by several factors, including NRF2-ATF4 [101,102], c-Myc [103], and hypoxia inducible factors [104]. Many of these transcriptional regulators are altered in cancer and contribute to increased serine synthesis flux, but PHGDH expression was also found to be amplified through copy number gain of a genomic region on chromosome 1p12 in a subset of breast and melanoma [105,106]. PHGDH expression has been demonstrated to support tumor growth specifically in low serine environments, such as cerebrospinal fluid where concentrations are significantly lower than plasma, and dietary restriction of serine and glycine reduces tumor growth in preclinical cancer models and enhances activity of mitochondrial inhibitors [107][108][109][110][111]. Furthermore, a subset of PDAC cells downregulate serine synthesis enzymes, and neuronal supply has been shown to supply cancer cells with serine specifically in null environments [112]. As PHGDH is thought to be the rate-limiting step of serine synthesis and important for the proliferation of PHGDH-amplified cancer cell lines, several inhibitors have been developed [113][114][115]. Notably, serine synthesis and uptake can occur in parallel with catabolism, depending on the context and cell intrinsic demand for serine and/or its many catabolic outputs.
Serine is utilized for glycine synthesis as well as ceramide and sphingolipid synthesis, nucleotide synthesis, folate-mediated one carbon metabolism (FOCM), S-adenosyl methionine regeneration for methylation reactions, and transsulfuration for cysteine biosynthesis ( Figure 3). Glycine synthesis requires the cytosolic and mitochondrial enzymes serine hydroxymethyltransferase (SHMT1/2) producing one-carbon units in the form of 5,10methylene-tetrahydrofolate (5,10-meTHF). Subsequent activity of methylenetetrahydrofolate dehydrogenases (MTHFD1/1L/2) releases formate, which can be transported across mitochondrial membranes. The resulting metabolic cycle acts as a shuttle for NAD(P)H reducing equivalents and one-carbon units required for thymidine and purine synthesis. The directionality of the FOCM metabolic cycle operates predominantly oxidatively in the mitochondria but can reverse to maintain one-carbon supply for nucleotide synthesis in cases where mitochondrial isoforms are deleted [116][117][118]. High activity and dependence on SHMT1/2 for proliferation has been demonstrated in multiple cancer contexts, and inhibitors targeting these enzymes have been developed [105,106,119,120]. In replete environments, serine catabolism by SHMT2 can occur in excess and release of glycine and formate, termed "formate overflow", has been reported for several cancer and nontransformed cell lines and in mice [121]. Through this mechanism, serine catabolism acts as a significant source of both ATP and NAD(P)H, and flux through this pathway was demonstrated to support oxidative mitochondrial metabolism [122]. In response to pharmacological inhibition of respiration expected to increase mitochondrial NADH/NAD + , mitochondrial serine catabolism is sustained, whereas other enzymatic NADH sources (e.g., pyruvate dehydrogenase) were feedback inhibited [123]. Thus, serine metabolism is highly complex and can provide cells with glycine, one-carbon units, NAD(P)H, and ATP depending on the context and cellular demand [124].
Glycine can also contribute one-carbon units through mitochondrial activity of the glycine cleavage system (GCS), which releases CO 2 , NH 3 , and 5,10-meTHF ( Figure 3). Notably, activity of GCS in many cancer cell lines was found to be low relative to serine catabolism [125]. However, in the context of cancer cell lines with high mitochondrial SHMT2 flux, activity of GCS is necessary to clear excess glycine and prevent a build-up of the toxic byproducts aminoacetone and methylglyoxal derived from the interconversion of glycine and threonine [126]. Methylglyoxal was found to accumulate in non-small cell lung cancers relative to normal tissue, and sequestration of toxic methylglyoxal requires glutathione (GSH) and activity of glyoxalase (GLO1) to prevent cellular damage [127]. Activity of the GCS has been shown to be important for the maintenance of stem cell pluripotency through epigenetic regulation [128]. Extracellular glycine can also be used for SHMT-dependent serine synthesis but requires an exogenous source of formate [125].
In addition to FOCM, glycine is also a precursor required for the synthesis of GSH and δ-aminolevulinic acid (δ-ALA; mitochondrial) necessary for heme biosynthesis. Synthesis of glutathione occurs in the cytosol, requiring mitochondrial export of glycine and GSH import into intracellular organelles including the mitochondria, which contains~10-15% of cellular GSH at a similar concentration to the cytosol [129]. Maintenance of GSH pools is important for regulating the activity of proteins sensitive to post-translational oxidation of cysteine residues (e.g., PTP1B) [130][131][132]. Insight into the activity and downstream role of serine and glycine metabolism can be gained from examination of extracellular uptake and secretion; however, cytosolic-mitochondrial exchange is equally important and requires a number of plasma membrane and mitochondrial transporters [121,133].
Alanine synthesis requires cytosolic and/or mitochondrial glutamic-pyruvic transaminases (GPT1/2). Physiological synthesis occurs in skeletal muscle from pyruvate and glutamate derived from glycolysis and BCAA catabolism, respectively. Alanine secreted by muscles provides the carbon necessary for gluconeogenesis in the liver, which in turn provides glucose back to muscles and sequesters the nitrogen produced from alanine catabolism as urea [134][135][136][137]. The resulting glucose/alanine cycle, referred to as the "Cahill cycle", is an important organ crosstalk relevant during normal physiology, exercise, fasting, and disease [138]. Dysregulation of this cycle has been proposed to occur in cancer patients, whereby increased protein turnover and/or muscle breakdown ("cachexia") releases alanine, BCAAs, and other amino acids for hepatic gluconeogenesis [139][140][141]. Elevated hepatic alanine-to-glucose conversion was measured in lung and other cancer patients, but plasma alanine levels remained mostly stable [142][143][144][145]. Hepatocytes express both cytosolic (GPT1) and mitochondrial (GPT2) isoforms required for de novo alanine synthesis and catabolism. However, biochemical parameters and studies suggest that GPT1 (Km, ala = 34 mM) and GPT2 (Km, ala = 2 mM) exhibit preference towards alanine anabolism and catabolism, respectively, although this was highly dependent on the method used to ascertain directionality [146][147][148][149][150]. Recent evidence suggests that pancreatic cancer cells have a high demand for alanine, and scavenge alanine from stromal sources (e.g., activated stellate cells) [60,151]. The majority of human pancreatic cancer cells selectively express GPT2, at both the transcript and protein level, suggesting that alanine metabolism occurs mainly in the mitochondria [60]. Notably, mitochondrial alanine catabolism by GPT2 requires activity of a mitochondrial alanine transporter, which was functionally identified decades ago but remains unknown [147]. Low expression of cytosolic GPT1, which catalyzed alanine synthesis from pyruvate in hepatocytes, and alanine uptake was suggested to provide pancreatic cancer cells with the capacity to retain pyruvate in the cytosol and support aerobic glycolysis [60,152]. In contrast, naïve T lymphocytes require alanine for activation as neither GPT1 nor GPT2 are expressed at sufficient levels [153]. Alanine production from pyruvate was found to be important for the metastasis of breast cancer cells, providing a source of α-ketoglutarate used for collagen hydroxylation and extracellular matrix (ECM) remodeling [154]. Importantly, it has been suggested that transport across the plasma membrane may be the main ratelimiting step of alanine metabolism at extracellular concentrations <1 mM [147,155,156]. Normal plasma levels of alanine are~0.2-0.4 mM, but elevated levels (~1 mM) have been measured intratumorally, suggesting altered alanine metabolism and availability in cancer and tumor-associated stromal cells [157,158]. Notably, SLC38A2/SNAT2 was identified to be the main concentrative alanine transporter utilized by pancreatic cancer cells and targeting SLC38A2 was sufficient to suppress alanine uptake by pancreatic cancer cells and cause significant re-wiring of compartmentalized pyruvate metabolism [60]. Taken together, these studies suggest that perturbing alanine metabolism in cancer is possible by altering plasma membrane transport, and mitochondrial alanine transport may be a key player in glucose-pyruvate-alanine metabolism by skeletal muscle, hepatocytes, and cancer cells.
Branched-Chain Amino Acids
Branched chain amino acids (BCAAs) include leucine, isoleucine, and valine and are derived from dietary sources. Because of their essentiality in mammals, BCAA transport and sensing in addition to catabolic mechanisms of acquisition (e.g., autophagy, macropinocytosis) has attracted much interest. Cellular uptake of BCAAs is mainly facilitated by the L-type amino acid transporter (SLC7A5/LAT1), which requires dimerization with SLC3A2/CD98 to function. It also transports aromatic amino acids (e.g., tyrosine, phenylalanine) ( Figure 4) [159,160]. Notably, LAT1 is sodium-independent and relies on other amino acids to serve as exchange factors to facilitate net BCAA import [159]. Leucine is well characterized to influence mTORC1 signaling, which is aberrantly activated across many cancer types [161]. Leucine can activate mTORC1 signaling through direct sensing by Sestrin2 and disruption of the Sestrin2-Gator2 interaction, triggering a signaling cascade through downstream effectors (e.g., eukaryotic translation initiation factor 4E binding protein 1, p70-S6 kinase, ULK1) [161][162][163][164]. These signals coordinate proliferation through activity of autophagy and protein, lipid, and nucleotide synthesis.
Because of the abundant expression of SLC1A5/ASCT2 and LAT1, it has been suggested that ASCT2-dependent glutamine uptake may serve as the exchange factor for BCAA influx by LAT1. However, ASCT2 is dispensable for the proliferation and mTORC1 signaling in many cancer lines, and ASCT2 functions primarily as an exchanger unable to concentrate glutamine sufficiently to drive LAT1 activity [55,165,166]. Thus, secondary active glutamine transporters (e.g., SNAT1/SLC38A1, SNAT2/SLC38A2, SLC6A14/ATB 0,+ ) are more likely to contribute to glutamine concentration for LAT1-mediated exchange. However, deletion of SLC38A2 in pancreatic cancer failed to impact either BCAA or glutamine uptake flux despite significantly decreasing intracellular glutamine levels [60]. Rather, transporter cooperativity between glutamine and BCAA transporters may be more important for level maintenance. Indeed, LAT1 knockout results in a~90% decrease in leucine transport in hepatocellular carcinoma cells but fails to illicit proliferative defects, and knockdown or inhibition of LAT1 did not negatively impact mTORC1 re-activation following EAA stimulation [165,167]. Furthermore, knockout of SLC3A2/CD98 abolished 90% of leucine uptake by LAT1 in colon adenocarcinoma cells, but proliferative defects and activation of the GCN2-linked amino acid stress response were not observed [168]. Thus, plasma membrane transport of BCAA and/or glutamine may not be limiting or is highly dependent on the cellular context, and minimal transport capacity may be sufficient to satisfy the biosynthetic and catabolic demands for these amino acids. In contrast, LAT1 was significantly upregulated in an Apc fl/fl ; LSL-Kras G12D/+ ; Villin CreER mouse model of colorectal cancer, and targeted deletion of Slc7a5 resulted in delayed tumorigenesis and improved survival [169]. Furthermore, JPH203, a small molecule inhibitor of LAT1, has shown significant pre-clinical efficacy in colorectal cancer and T-cell lymphoblastic lymphoma/leukemia and was well-tolerated in a Phase I study in patients with advanced solid tumors [170][171][172]. Other transport systems can facilitate BCAA uptake, including the Na+-dependent SLC6A19/B0AT1, which may contribute to differing sensitivity in response to LAT1-deletion [173,174]. Inhibitors targeting SLC6A19/B0AT1 have been developed using in silico and high-throughput screening approaches [175,176].
Aside from being used for protein synthesis, BCAAs can contribute to anabolic and bioenergetic outputs important for human physiology and dysregulated activity is attributed to multiple diseases (reviewed in [161,177,178]) (Figure 4). Through catalytic activity of highly reversible branched chain aminotransferases (BCAT1/2) localized within the cytosol (BCAT1) or mitochondrial matrix (BCAT2), BCAA catabolism provides cells with amino-nitrogen for glutamate synthesis as well as branched chain ketoacids (BCKAs) (e.g., α-ketoisocaproic, KIC; α-ketoisovaleric, KIV; α-keto-β-methylvaleric, KMV) that contribute to acyl-CoA synthesis, lipogenesis, and TCA cycle metabolism. While BCAT2 is ubiquitously expressed, BCAT1 is selectively expressed in the brain, ovary, and placenta [179]. BCAT1 is commonly up-regulated in many different cancer lines, such as human glioblastoma, breast cancer, and non-small cell lung carcinoma (NSCLC), while BCAT2 seems more important for pancreatic cancer [180]. Furthermore, elevated plasma BCAA levels are associated with several diseases, including cardiovascular disease, pancreatic cancer, and breast cancer [139,181,182]. In the mitochondria, BCKAs can undergo irreversible decarboxylation by the branched chain α-ketoacid dehydrogenase (BCKDH) complex, which consists of three subunits (E1, E2, and E3). Activity of BCKDH is negatively regulated by the phosphorylation status of the E1 subunit. BCKDH kinase (BCKDK) and the Mg 2+ /Mn 2+ -dependent 1 K protein phosphatase (PPM1K) coordinate the activity of BCKA oxidation. Activity of PPM1K was shown to positively regulate BCAA catabolism important for leukemogenesis [177,183]. Furthermore, defective BCKA oxidation drives the inborn error of metabolism maple syrup urine disease (MSUD), and dysregulated BCKDH activity is also attributed to several human diseases (e.g., diabetes, cancer) [184].
pyruvate metabolism [60]. Taken together, these studies suggest that perturbing alanine metabolism in cancer is possible by altering plasma membrane transport, and mitochondrial alanine transport may be a key player in glucose-pyruvate-alanine metabolism by skeletal muscle, hepatocytes, and cancer cells.
Acyl-CoA products of BCAA oxidation (e.g., acetyl-CoA, propionyl-CoA, succinyl-CoA) have the potential to contribute carbon for oxidative TCA cycle activity and/or lipogenesis, suggesting that BCAA may serve as an important fuel source for proliferative cells. In addition, acetyl-CoA derived from leucine can provide direct proliferative signals through acetylation of Raptor via EP300, which in turn negatively regulates autophagosome formation and activates mTORC1 signaling [185,186]. Whether this represents a major metabolic contribution, particularly to the TCA cycle, depends highly on the context. The metabolic contribution of BCAA-derived acyl-CoA has been extensively characterized in mutant Kras-driven tumors (e.g., pancreatic, lung) given the correlation between elevated plasma levels and disease progression [139]. In acute myeloid leukemia (AML), human pancreatic cancer, and colorectal cancer cells, as well as in LSL-Kras G12D/+ ; Trp53 flox/flox -driven lung and pancreatic tumors, 13 C-labeled BCAAs contributed minimally to mitochondrial TCA cycle intermediates irrespective of which BCAT1/2 isoform is expressed in each context [180,[187][188][189][190]. In contrast, cancer-associated fibroblasts derived from human pancreatic tumors showed higher BCAA oxidation flux than pancreatic cancer cells, and BCKAs secreted from CAFs were incorporated into the TCA cycle in human pancreatic cancer cells through subsequent oxidation [191]. Similarly, 13 C-KIC, derived from leucine catabolism, was shown to be oxidized by tumors in a rat glioma model using hyperpolarized nuclear magnetic resonance (NMR) spectroscopy [192]. Transport of BCKAs across the plasma membrane is mainly facilitated by monocarboxylate transporters, MCT1/SLC16A1 and MCT4/SLC16A4, allowing cells to share pools of circulating BCKAs to convert to BCAAs if needed [193][194][195][196]. In adipocytes, BCAAs represent a major anaplerotic and lipogenic source. Acetyl-CoA or propionyl-CoA is utilized for even-or odd-chain fatty acid synthesis and, in addition to succinyl-CoA, contributes significantly to TCA cycle intermediates (e.g., citrate) [197][198][199]. Adipose tissue can also utilize BCAA catabolism to generate mono-methylated branched-chain fatty acids through promiscuous activity of carnitine acetyltransferase (CRAT) and fatty acid synthase (FASN) [200]. Notably, the methylmalonyl-CoA mutase required to convert propionyl-CoA to succinyl-CoA is B12-dependent, and odd-chain fatty acids and methylmalonic acid (MMA) accumulate in adipocytes only when cultured in media deficient in cobalamin (e.g., DMEM) [199]. In a recent study, increased MMA levels in circulation correlate with increasing age, and MMA was found to promote an epithelial-mesenchymal transition (EMT)-like phenotype and contribute to increased tumorigenesis [201].
Mitochondrial Amino Acid Carriers
Many of the metabolic fates of the above discussed amino acids center in and around the mitochondria, and mitochondrial transporters likely play a critical role in facilitating the activity of amino acid metabolism (Figures 1-4). Eukaryotic mitochondria comprise an outer and inner membrane that separate the internal matrix from the cytosol. The two mitochondrial membranes form complex substructures that include cristae and contact sites between membranes and with other organelles, all of which can impact mitochondrial function [202][203][204]. The outer mitochondrial membrane (OMM) is highly permissive up tõ 5 kDa, and translocases are employed to import mitochondrial-targeted proteins across both inner and outer membranes. However, the inner mitochondrial membrane (IMM) is impermeable to most small molecules, similar to other cellular membranes, allowing the mitochondrial matrix to maintain a distinct metabolite composition compared to the surrounding cytosol. Specific mitochondrial transporters are required to facilitate exchange of ions and metabolites; such as adenine nucleotides, amino acids, acyl-carnitines, and small organic acids. The 53 membered SLC25 family represents the largest component of mitochondrial transporters. Other transmembrane protein families, such as the sideroflexin family (SFXN), the mitochondrial pyruvate carrier (MPC1/2), certain ATP-binding cassette transporter (ABCB) isoforms, and splice variants of other solute carriers (SLCs), also contribute to mitochondrial transport. Excellent reviews of our current knowledge of mitochondrial transporters can be found elsewhere [205][206][207]. Recent progress has been made on the identification of mitochondrial amino acid carriers, including those that transport serine (SFXN1/3), glutamine (mitochondrial targeted SLC1A5 variant), and branched chain amino acids (SLC25A44) [208][209][210].
Despite the important role that serine plays for nucleotide, glycine, and one-carbon metabolism and the compartmentalization of these pathways; the transporter(s) involved in its transport into the mitochondria have only recently been identified. Kory et al. identified that sideroflexin 1 (SFXN1) and other SFXN homologs act as inner mitochondrial membrane-localized serine transporters [209]. To identify the mitochondrial serine transporter, Kory et al. utilized a functional genetic screening approach in cells lacking the cytosolic arm of FOCM, creating an increased reliance on mitochondrial serine transport for proliferation. Functionally, SFXN1 was important for glycine pool maintenance and folate charging, owing to defective oxidative mitochondrial serine-dependent FOCM activity. SFXN1-null cells were not auxotrophic for glycine, suggesting that other sideroflexin homologs, of which there are five, may relay some compensatory activity. Through subsequent functional genetic screening in SFXN1-null cells, the authors found that SFXN3 was a likely candidate for redundant mitochondrial serine transport. In vitro liposome reconstitution of SFXN1 and stable isotope tracing suggest that SFXN1 is capable of importing serine and other small neutral amino acids, including alanine, cysteine, and glycine. This study fills an important gap in our knowledge of mitochondrial serine transport and highlights the power of functional genetic screening, stable-isotope tracing, and metabolomics to characterize transporter function in relevant contexts. Given the redundant function of some sideroflexin homologs, complete suppression of mitochondrial transport may require inhibition of multiple targets to treatment of aberrant serine metabolism in diseases like cancer. Mitochondrial glycine import and/or export may also play an important role in facilitative FOCM and purine and glutathione biosynthesis. SLC25A38, and its yeast homolog Hem25, was recently characterized as a mitochondrial glycine transporter [211]. Mutations in SLC25A38 give rise to congenital sideroblastic anemia, caused by a defect in heme biosynthesis [212]. Notably, SHMT2 activity could, in theory, provide mitochondrial glycine for heme biosynthesis; however, the authors found that Shm1 and Shm2 (yeast homologs of SHMT1 and SHMT2, respectively) did not significantly contribute to heme synthesis [211]. Notably, it is not clear whether SLC25A38 also facilitates mitochondrial glycine export, which may be important for purine and glutathione synthesis in nutrient-limited environments.
The anabolic and bioenergetic outputs of glutaminolysis require activity of a mitochondrial glutamine transporter that was known to exist but only recently identified [213]. Yoo et al. identified a variant of the plasma membrane transporter SLC1A5 localized to the mitochondrial inner membrane capable of importing glutamine (SLC1A5_var) [208]. To identify this candidate, the authors hypothesized that a mitochondrial glutamine transporter would share structural homology to its plasma membrane equivalent (pm-SLC1A5). A shorter SLC1A5_var that lacked exon 1 of pmSLC1A5, exposing a predicted mitochondrial targeted sequence, was hypothesized to be a candidate mitochondrial glutamine transporter. Mitochondrial localization and glutamine transport activity of SLC1A5_var was confirmed by immunofluorescence co-localization, subcellular fractionation, metabolomics, and stable isotope tracing experiments in cells or isolated mitochondria lacking SLC1A5_var. Notably, SLC1A5_var expression was positively regulated in response to hypoxia (1% O 2 ) and hypoxia mimetics (e.g., deferoxamine, cobalt chloride) through a HIF2α-dependent transcriptional mechanism. Although glutaminolysis represents a major carbon source for TCA cycle intermediates in normal conditions, hypoxia and/or mitochondrial dysfunction leads to significant rewiring of glutamine metabolism through reductive carboxylation pathways to support lipogenic flux [214][215][216][217][218]. However, many pancreatic cancer cell lines are capable of sustaining oxidative TCA cycling even at 0.1% O 2 , and Yoo et al. demonstrate that SLC1A5_var activity in pancreatic cancer cells is important for ATP generation from glutamine in hypoxia [77,208]. SLC1A5_var activity promoted glutathione production and ROS scavenging in response to oxygen limitation and was important for gemcitabine resistance mechanisms in cancer cells. Overall, mitochondrial glutamine transport by SLC1A5_var represents an interesting therapeutic strategy for limiting the glutamine demands of cancer cells.
As highlighted above, BCAA catabolism bridges across cytosolic and mitochondrial compartments with transamination and oxidation requiring activity of cytosolic/mitochondrial BCAT1/2 and IMM-localized BCKDH. Recently, Yoneshiro et al. identified that BCAAs serve as important substrates for brown adipose tissue (BAT) metabolism and found that SLC25A44 acts as a key component required for mitochondrial BCAA transport and utilization [210]. Following cold-exposure, plasma levels of valine alone or all three BCAAs decreased in high BAT-containing male adults or obese mice, respectively [210]. 13 C-labeled leucine contributed significantly to TCA intermediates in human brown adipocytes following noradrenaline treatment, suggesting that mitochondrial oxidation contributes to BCAA clearance in BAT. Furthermore, BAT selectively expresses the mitochondrial BCAT2, not cytosolic BCAT1, thus requiring mitochondrial import. To identify the mitochondrial BCAA transporter, Yoneshiro et al. quantified transcript levels of SLC25 family members and identified several transporters, including uncharacterized SLC25A39 and SLC25A44, of which only SLC25A44 was induced following cold exposure. Functional loss-and gain-of-function experiments and liposomal reconstitution experiments confirmed that SLC25A44 functions as a BCAA transporter required for mitochondrial import required by BAT for thermogenesis and BCAA clearance. BCKAs are also transported across the inner mitochondrial membrane for use as potential acyl-CoA sources. Mitochondrial BCKA transport is facilitated by monocarboxylate transporter 1 (MCT1/SLC16A1), although MCT2/SLC16A2 has also been implicated in certain contexts (e.g., normal brain, breast cancer cell lines) [219,220].
In the past few years, significant headway has been made into a more comprehensive understanding of mitochondrial amino acid transporter identity. These studies highlight the diversity of approaches that can be used to identify mitochondrial amino acid transporter function. Cytosolic and mitochondrial amino acid exchange facilitated by mitochondrial transporters is critical for redox shuttle activity (e.g., MAS, FOCM). With the recent identification of key mitochondrial transporters required for amino acid and NAD+ exchange, including SLC25A51 and SLC25A52, we now have the tools necessary to dissect how amino acid redox shuttle activity and/or direct NAD + import influence compartmentalized redox homeostasis [62][63][64][65][66]209,211]. Several amino acid transporters are not yet known, including those for asparagine, tryptophan, alanine, methionine, phenylalanine, tyrosine, cysteine, and proline [206]. While we have known for decades that certain amino acids are metabolized by isolated mitochondria (e.g., proline; Figure 1) [221,222], recent techniques that enable better quantification of mitochondrial metabolism, transport, and metabolite composition will catalyze a deeper understanding of whether mitochondrial transport occurs and which transporters are involved.
Approaches to Quantify Mitochondrial Metabolism and Transport
Our understanding of metabolite composition within mitochondria and other organelles is mainly derived from our understanding of the metabolic enzymes, transporters, and pathways localized within. Approaches to define the mitochondrial proteome include proteomics analysis of isolated mitochondria, fluorophore-tagging, immunofluorescence, and computational prediction of protein targeting. While reliance of any single approach offers caveats and potential false discovery, cross-referencing of multiple studies provides more accurate prediction of localization. For this reason, MitoMiner (v4.0, 2018) and Mito-Carta (v3, 2020) integrate multiple data types and apply machine learning algorithms to provide comprehensive publicly available databases of the mitochondrial proteome [223,224]. Recently, Chen et al. manually curated a list of 346 possible mitochondrial metabolites, referred to as the "MITObolome", from MitoCarta (v1, 2008) cross-referenced from a list of mitochondrial transporters and enzymes and their substrates extracted from KEGG, which formed the basis for targeted absolute quantification of~100 metabolites from mitochondria isolated from HeLa cells using a rapid immuno-capture approach [225][226][227]. In contrast, untargeted, "top-down" metabolomic profiling methods have also been used to characterize mitochondrial metabolite composition using traditional differential centrifugation (DC) isolation. For example, Roede et al. used a combination of anion exchange and reverse phase liquid chromatography coupled to mass spectrometry to identify >2100 metabolic features in isolated mitochondria [228]. While there is no consensus of mitochondrial metabolite composition, these studies provide insight into transporter requirements for metabolites not synthesized within the mitochondria. Given the robustness of tagging outward-facing organelle-localized proteins for immuno-capture, several groups have applied this strategy to rapidly isolate lysosomes [229], peroxisomes [230], synaptic vesicles [231], and melanosomes [232] for metabolomic and/or proteomic characterization. Future efforts to rapidly fractionate intracellular compartments whilst preserving metabolite composition will add to our growing understanding of metabolic compartmentalization in relevant contexts and in vivo [233].
Alternative strategies have also been applied to understand mitochondrial metabolic compartmentalization, including selective permeabilization of the plasma membrane. Digitonin selectively permeabilizes the plasma membrane through interaction with cholesterol and pore formation, and other permeabilization agents have also been applied for a similar aim (e.g., saponin, recombinant perfringolysin O) [234,235]. Selective permeabilization has been used to separate mitochondria and cytoplasm for decades [236], and recent efforts to optimize this methodology have resulted in new approaches to measure and/or estimate compartmentalized metabolic flux. Nonnenmacher et al. used 13 C-labeled pyruvate and glutamine in digitonin-permeabilized A549 cells to quantify how mitochondrial utilization of two major fuels is affected by pharmacological and genetic perturbations [237]. Similarly, but in mitochondria isolated by DC from skeletal muscle and cultured muscle cells, Gravel et al. applied stable-isotope tracing using 13 C-pyruvate and unlabeled malate to quantify TCA cycle activity in response to pharmacological ETC inhibition [238]. Optimized digitonin permeabilization enabled rapid cytosolic separation from intracellular organelles, that include mitochondria, in as short as 25 s with 90% purity [239]. While sacrificing purity for speed, Lee et al. were able to computationally predict flux distribution and directionality across metabolic pathways localized in both the cytosol and mitochondria (e.g., isocitrate dehydrogenase 1/2/3) [239]. In general, speed and purity are major concerns when isolating mitochondria for downstream metabolite profiling as certain metabolites exhibit high turnover rates that may affect their levels during isolation and postextraction (e.g., pyruvate, ATP, NADH), convoluting biological interpretation [240,241]. For targeted pathway analysis, inhibitors that prevent enzymatic conversion during purification have successfully been used to characterize lactate metabolism by mitochondrial LDH and may be important to isolate transport activity [242,243].
Notably, many of these approaches can be used to predict and quantify mitochondrial amino acid metabolism and transport. For example, stable-isotope tracing in plasma membrane permeabilized conditions or prior to mitochondrial isolation provides a quantitative means of measuring rates of amino acid transport and catabolism. In addition, high purity mitochondrial isolation and proteomic characterization provides a detailed "menu" of transporters expressed in a particular cellular or environmental context. Classical molecular approaches, including proteoliposome reconstitution, will also continue to be invaluable in characterizing the function and functional regulation of specific transporters [244][245][246][247][248][249][250][251].
Conclusions
Amino acid metabolism is complex and regulated by compartmentalization into distinct subcellular organelles, transporter-mediated exchange, and cellular demands. Despite playing a critical role in regulating metabolic activity, mitochondrial transporters are poorly characterized. Advances in genetic and analytical techniques will shed light into this important class of metabolic regulators.
Author Contributions: K.G.H., A.S.J. and S.J.P. were involved in all stages, including writingoriginal draft preparation, writing-reviewing and editing, and figure preparation. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Acknowledgments:
The authors would like to Kate Hollinshead for critical review and suggestions.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,392.6 | 2021-02-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Experimental characterization of an ultrafast Thomson scattering x-ray source with three-dimensional time and frequency-domain analysis
060702-1 We present a detailed comparison of the measured characteristics of Thomson backscattered x rays produced at the Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time and space dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase spaces of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x rays produced from the interaction are presented and shown to agree well with the simulations.
I. INTRODUCTION
The use of short laser pulses to generate high intensity, ultrashort x-ray pulses enables exciting new experimental capabilities, such as femtosecond pump-probe experiments used to temporally resolve structural dynamics of high-Z materials on atomic (femtosecond) time scales [1,2].The most promising methods for generating tunable, very high brightness electromagnetic radiation at short wavelengths ( < 1 A) and pulse durations ( < 1 ps) rely on either coherent radiation produced by x-ray free-electron lasers, such as the planned Linac Coherent Light Source [3], or incoherent production through relativistic Thomson scattering, which has previously been employed for pioneering time-resolved diffraction measurements at LBNL [4,5], and is currently being investigated at several laboratories around the world [6 -11].Additionally, a growing number of research groups worldwide are exploring different x-ray production mechanisms such as ultrafast, laser driven K sources [12] and electron bunch slicing in synchrotrons [13].While coherent radiation sources generate higher power and narrower spectral bandwidths when compared to incoherent scattering, the potential of a Thomson source for high peak brightness within a relatively compact and affordable system makes it an attractive alternative for many applications.
The capability to accurately predict the spatial, spectral, and temporal characteristics of Thomson backscattered x rays is crucial for both the design of successful Thomson x-ray sources, as well as future experiments and applications utilizing such sources.While the theory of Thomson backscattered radiation is well known and has been documented extensively [14 -20], there remains a need to have a complete three-dimensional (3D) timeresolved computational capability for the full determination of the temporally and spatially resolved spectra and intensity distributions produced from a Thomson interaction for arbitrary geometries.In particular, this capability will greatly benefit the understanding of the production of x-ray pulses with chirped (or time correlated) spectra [21][22][23][24], potentially enabling significant improvement in the time resolution of femtosecond pump-probe experiments.With this motivation, a newly developed fully 3D time and frequency-domain code used for calculations of Thomson scattering of a short, intense laser pulse with a relativistic electron bunch has been developed at Lawrence Livermore National Laboratory.The details of this code, and the underlying theory, are presented in a companion paper [25].
In this paper, we present a comparison of timeintegrated spatial and spectral intensity measurements performed at the Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures (PLEIADES) Thomson x-ray source to x-ray production simulations produced with the 3D time and frequency-domain code.This benchmarking proves important for both verifying and understanding the characteristics of the x-ray beam produced at the PLEIADES facility, as well as for verifying the validity of the theory used to develop the computer code.The x rays are produced by the collision of a 0.3 nC, 50 -60 MeV electron bunch with a terawatt (TW) PHYSICAL REVIEW SPECIAL TOPICS -ACCELERATORS AND BEAMS, VOLUME 7, 060702 (2004) 060702-1 1098-4402=04=7(6)=060702(12)$22.50 2004 The American Physical Society 060702-1 800 nm laser pulse to produce peak x-ray energies in the 60 -70 keV range.In this paper, electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the predicted x-ray characteristics based on these measurements are determined.In addition, timeintegrated measurements of the x rays produced from the interaction are presented and shown to agree well with the simulations.
II. 3D TIME AND FREQUENCY-DOMAIN THEORY
In this section, a brief overview of the theory behind the time and frequency-domain code is presented.For a single photon colliding with a plane wave, the number of scattered photons per unit solid angle, per unit frequency, per unit time is given by where is the Thomson cross section, n is the photon density at the position r e of the electron, !0 is the incident electromagnetic wave (or photon) frequency, k 0 is the incident wave vector, e is the velocity of the electron normalized to the speed of light c, !s is the scattered photon frequency, and g is the geometry dependent relativistic doppler up-shift of the scattered photon colliding with a plane wave, given by where is the angle of the scattered photon direction from the direction of the electron in the lab frame.Under the paraxial approximation [26,27], the spatial density of photons in the incoming laser beam can be represented by the expression where N is the total number of photons in the bunch, z R w 2 0 = 0 is the Rayleigh range of the laser focus, and we have utilized a coordinate system (x L , y L , z L ) such that the z L axis is antiparallel to the laser beam direction.
Equation (1) gives the complete spectral, spatial, and temporal properties of the scattered x-ray beam for the case of a single electron colliding with a plane wave and represents the basis of the 3D time and frequency-domain code.To account for the laser pulse bandwidth, as well as the distribution of perpendicular components of the wave vector k ?due to laser focusing, Eq. ( 1) is integrated over both the time-dependent laser bandwidth, represented by and the perpendicular wave vector components, which, assuming a radially symmetric focus, can be represented by the distribution function In Eq. ( 5), w 0 is the 1=e 2 intensity radius of the laser pulse, and perfect coherence has been assumed.In Eq. ( 4), z L ct represents the longitudinal position within the laser pulse, while !0 and !0 represent the center frequency and 1=e 2 spectral bandwidth at a given .For the case of a bandwidth-limited pulse duration (i.e., no dependence), the spectral bandwidth is given by the Fourier limit relation, where t 0 is the 1=e 2 temporal width and a Gaussian distribution has been assumed.The total scattered photon spectral density flux for a single electron can then be obtained by performing the integration where N e represents the total number of scattered photons for a single electron.The integration over !0 is performed analytically to obtain where dN 1 =ddt is the spectrally integrated photon density flux for the case of a single electron colliding with a plane wave and is equal to the right-hand side of Eq. ( 1) without the presence of the delta function.The presence of the laser bandwidth simply results in a corresponding width in the Doppler shifted x-ray spectrum, while the effect of k ? is to shift the maximum scattered photon energy due to the slight change in the relative incident angle between the electron and the incident wave vector, which leads to the spreading of the observed spectrum in a given direction.Finally, to account for the effects of the electron beam energy spread and emittance (i.e., spread in the direction of different electrons), the electron beam can be represented by a series of macroparticles, and the results from each electron in a bunch are summed resulting in where q e is the charge represented by the macroparticle, and e and e represent the scattering angles of the photons in a rotated lab frame (x e , y e , z e ) such that the z e axis is collinear with each electron and can be expressed in nonrotated coordinates through a simple rotation.The primary broadening mechanism is typically due to the electron beam divergence, which results in a shift in the center direction of the scattered x-ray distribution for each electron, resulting in a corresponding spread in x-ray energies at any given observation point.In addition, if the electron beam divergence is significant in comparison to the characteristic 1= angular radiation width from a single electron, the overall divergence of the x-ray beam will also be dependent on the details of the transverse electron beam phase space.Equation ( 9) is valid provided the scattered x-ray beam can be considered to be incoherent, which is true provided the dimension of the electron bunch is much larger than the scattered x-ray wavelength.
The expression for d=d in the lab frame is a relatively complicated expression for arbitrary interaction geometries and laser polarizations, but can be approximated in the high limit to be [25] 1 ÿ 4 2 2 1 2 2 2 sin 2 ; (10) where r 0 is the classical electron radius, and are the scattering angles of the x rays with respect to the electron direction, and it is assumed that the laser is polarized in the plane where =2.A complete derivation of the general form of the cross section can be found in the companion paper [25], where a more complete discussion on the general properties of Thomson radiation can also be found.
Equation (9) represents the basic algorithm for the 3D time and frequency-domain code.It is both fully three dimensional, taking into account effects from the laser and electron beam 6D phase space at the interaction, and completely time and frequency resolved, allowing computation of the temporally and spatially dependent spectra for arbitrary interaction geometries.The background motion of the electron through the laser pulse is assumed to be ballistic, and the temporal information of the x-ray pulse is determined by calculating the time of flight of the scattered photon to a detector at a specified distance to the interaction at each time step in the simulation.Spatial information of the scattered x-ray pulses is determined by performing this calculation for several different observation directions specified by and .The assumptions inherent in the 3D time and frequency-domain code include (i) the normalized vector potential of the incident laser pulse, eA=mc, is much less than 1, (ii) the incident photon energy in the electron rest frame is much less than the electron rest mass (i.e., h! 0 0 mc 2 ), and (iii) the scattered x-ray wavelength is much shorter than the size of the electron bunch (i.e., incoherent scattering).
III. BENCHMARKING THE CODE
To benchmark the code, x-ray data taken from the PLEIADES Thomson x-ray source at LLNL [11] is compared to simulated results, given the measured electron beam and laser beam parameters.PLEIADES is a high brightness 10 -100 keV Thomson x-ray source designed for single-shot diffraction and radiography experiments in high-Z materials.The facility includes an S-band rf photoinjector, a 100 MeV S-band accelerator, and a TWclass, 800 nm Ti:sapphire, chirped-pulse amplification (CPA) [28] laser system capable of delivering up to 500 mJ of energy in a 50 fs Fourier transform-limited pulse.The 81.557 MHz Kerr-lens mode-locked oscillator at the front end of the laser system serves as the master clock of the experiment.A single pulse from the oscillator is selected and amplified by the CPA and two four pass amplifiers to produce the drive pulse for the Thomson interaction.A second seed pulse is selected from the oscillator and transported to a separate CPA and frequency tripler to produce the drive pulse for the photocathode rf gun.The rf photoinjector used to produce the electron beam for PLEIADES is based on a 1.6-cell standing-wave geometry.The 266 nm UV laser profile at the gun cathode is a 2 mm diameter apertured Gaussian with a 3 ps rms temporal pulse length.The electron charge is typically about 250 pC, and the energy of the electron bunch out of the gun is about 3.5 MeV.The beam generated by the photoinjector is then accelerated to energies ranging between 20 and 100 MeV by four 2.5 m, SLAC-type traveling-wave accelerating sections.The experiment operates at a 10 Hz repetition rate.
Figure 1 shows a schematic of the interaction region.The electron beam is incident from the right and is focused by a quadrupole triplet with a focal length of about 25 cm.The laser beam, incident from the left and polarized in the plane of the image, is focused with an off-axis parabola with a focal length of 1.5 m.The focus is folded by a 0.5 in.BK7 flat placed directly in the beam line.The F number of the focus is about 25, and the measured M 2 of the laser is 1.6, leading to a 1=e 2 intensity radius of 36 m.The x rays produced in the interaction travel in the direction of the electron beam, while the electron beam itself is bent by a dipole magnet and transported to a shielded beam dump.The x rays pass through the laser turning optic (0.5 in.BK7 flat) and are detected by a CsI(Tl) scintillator fiber coupled to a 16 bit charge-coupled device (CCD) camera with a 3:1 taper.The CCD array has a pixel count of 1340 1300, with a pixel size of 20 20 m.This corresponds to an effective size of 60 60 m at the scintillator, yielding a total detection area of about 63 cm 2 .With the CCD placed 1.82 m from the interaction, this corresponds to a detectable solid angle of about 43 43 mrad 2 (0.019 sr).
The CCD was calibrated with a 60 keV 241 Am source, yielding a sensitivity of 7.4 counts=photon, or correspondingly, 0.12 counts=keV at 60 keV. Figure 2(a) shows a typical x-ray profile detected with the x-ray CCD.This image was produced by the collision of a 57 MeV, 0.25 nC electron bunch focused to an rms spot size of about 50 m.The peak x-ray energy is about 77 keV.The laser pulse contained about 400 mJ of energy with a center wavelength near 810 nm and was focused to an rms spot size of about 18 m, or 36 m 1=e 2 intensity radius.Figure 2(b) shows the intensity of the x-ray spot integrated along the y (vertical) axis vs the divergence angle along the x (horizontal) axis, where was calculated from the transverse position on the CCD scintillator and the distance of the CCD from the interaction point.It is seen that the intensity profile is Gaussian in shape.Because of the broadening of the x-ray intensity profile induced by the divergence of the electron beam, as well as the attenuation through the laser turning optic of the low energy x rays that comprise most of the energy in the wings of the angular profile, the measured intensity profile differs from the theoretical Lorentzian intensity profile for a single electron.An integration of the Gaussian profile results in an estimated total count number in the CCD image of 4:5 10 6 .A very crude estimate of the total energy in the pulse can be reached by simply dividing this number by the single wavelength calibrated sensitivity (shown in Table I) yielding 3:7 10 7 keV.As discussed in Sec.III B, however, an accurate estimate will require full knowledge of the spectrum of the x-ray pulse.
In this section, two comparisons between measurements of the x-ray beam characteristics and theoretical predictions from the time and frequency-domain code are presented.First, we compare the measured x-ray beam profile shown in Fig. 2 to the simulated profile, where the measured electron beam characteristics were used to determine the input parameters of the code.Next, the simulated and measured effect of a K filter on the intensity profile are compared as a means of testing the spatially correlated spectral content predicted by the calculations.It is shown that in both cases, good agreement between measurement and theory is observed.
A. Electron beam diagnostics and reconstruction
The sensitivity of the scattered x-ray spectrum to the divergence of the electron beam necessitates an accurate understanding of the electron beam parameters at the interaction region to successfully model the detected x-ray beam characteristics.For the case we consider here, the electron beam divergence is comparable to the characteristic 1= angular width of the x-ray distribution for the case of a single electron, implying that to accurately predict the angular intensity profile, an accurate knowledge of the transverse phase space of the electron beam at the interaction will be required.
In order to determine the electron beam emittance, quadrupole scans were performed upstream of the interaction.In addition, the electron beam energy and energy spread were determined by measuring the dispersionlimited spot size of the electron beam around the dipole bend after the interaction point.The quadrupole scan was performed with a quadrupole magnet placed about 2.0 m upstream of the interaction.The results of this measurement revealed a somewhat asymmetric beam, with a vertical emittance significantly larger than the horizontal emittance.This ultimately results in a larger vertical divergence of the electron beam at the interaction location.This asymmetry likely originates during the transport and acceleration through the accelerator due to slight misalignments of both the electron beam direction and the linac solenoid axis with respect to the rf axis of the linac sections.Efforts to identify and correct the specific cause of the asymmetry are ongoing.
In principle, the beam parameters determined from the quadrupole scan can be used to calculate the propagation properties through the remainder of the beam line using the known optimized beam line element settings to determine the horizontal and vertical divergences of the electron beam at the interaction.However, errors in the measurement, inaccuracies in the beam line component calibrations, and aberrations in the final focus optics can lead to multiplied inaccuracies in the beam parameters calculated at the interaction point.It is more desirable, therefore, to directly measure the beam parameters at the interaction point.To accommodate this need, the spot size was also obtained by imaging the optical transition radiation produced from an optically finished aluminum cube placed at the interaction point, while the divergence was inferred from measuring the spot size with a yttrium-aluminum garnet (YAG) scintillator around the dipole bend downstream of the interaction point.Because of the relatively large divergence (a few mrad) and the relatively small energy spread (about 0.2%), the dispersion around the bend can be neglected in determining the divergence from this method.The results of these measurements for the case under consideration is shown in Fig. 3, while a summary of the electron beam parameters is shown in Table II.It should be noted that in Table II the emittance values were determined from the quadrupole scan upstream of the interaction, while the rms spot size and divergence at the interaction were measured independently.While these three values are consistent to within roughly a factor of 2, in both cases the measured emittance is lower than the emittance suggested by the spot size and divergence measurements.This suggests the possibility of emittance growth during the beam transport and final focus, possibly caused by spherical or chromatic aberrations.The bunch length of the electron beam is roughly 3 ps rms as determined by PARMELA simulations of the electron beam production and acceleration, which is consistent with measurements performed with an Imacon 500 Series streak camera of the optical transition radiation produced by the electron bunch.This second method for determining the electron beam divergence has the additional advantage that rotation of the elliptical electron beam focus about the orthogonal horizontal and vertical axes associated with the quadrupole focus dimensions can be resolved, yielding a more accurate model of the true electron beam phase space.It is seen from the measurement shown in Fig. 3 that the focus is elliptical in nature, which is consistent with the asymmetric emittances measured with the quadrupole scan.However, the ellipse is rotated about 20 from the horizontal and vertical orthogonal axes, which would not have been apparent from the quadrupole scan results alone.In addition, with the direct measurement of the electron beam parameters, no assumption about the distribution functions describing the electron beam is necessary, allowing the inclusion of effects resulting from non-Gaussian distributions into the simulation of the x-ray beam production.
To input the true electron beam parameters into the 3D time and frequency-domain code, a distribution of macroparticles is reconstructed from the electron beam measurements.If Gaussian distributions can be assumed, it is sufficient for the macroparticle distribution to be derived from a 6D Gaussian distribution representing the measured rms beam properties shown in Table II f x xf y yf z zf x 0 x 0 f y 0 y 0 f ; (11) where the rms parameters are listed in Table II, and it is assumed that the electron beam is at a waist at t 0, with the propagation of each particle in time assumed to be ballistic, and each particle's velocity determined from x 0 , y 0 , and .This assumption will be valid provided the plasma oscillation period (1=! p ) of the electron beam is much longer than its transit time through the laser pulse, which implies that space-charge effects can be neglected.It is also assumed that there is no correlation between the different components of the distribution, which will automatically be true for x 0 , y 0 , x, and y provided the beam is at a waist.On the other hand, correlations between time and energy may very well be present in an actual electron beam, though for the timeintegrated measurements and simulations presented in this paper, these will not be relevant.In general, however, the 3D time and frequency-domain code is well suited to studying the effects of such correlations provided a suitable macroparticle distribution, whether reconstructed from measurements or taken from particle dynamics simulations, is utilized.In Eq. ( 11), the x 0 , y 0 , x, and y axes have been chosen to line up with the minor and major axes, respectively, of the elliptical distribution.This can be achieved for arbitrary orientations by a simple rotation of coordinates.To model non-Gaussian distributions, a superposition of Gaussian distributions can be implemented, such that for a given coordinate, the distribution can be expressed as where In particular, the measured vertical (y) divergence of the electron beam (Fig. 3) was found to be poorly approximated by a Gaussian distribution.However, a superposition of three Gaussians with appropriate relative amplitudes, angular widths, and average value offsets very closely approximates the measured divergence.This is seen in Fig. 4, which shows very close agreement between the measured distribution function of dy=dz and the analytic expression that was used to generate the macroparticle distribution.
B. Comparison of simulated to measured x-ray intensity profile
The reconstructed electron beam model can now be inserted into the 3D time and frequency-domain code to calculate the expected x-ray intensity profile produced in the experiment.To accurately predict the angular size (divergence) and shape of the x-ray beam, the transmission and detection efficiency of the x rays produced at the interaction will have to be considered.Both the transmission of the x rays though the laser turning optic (Fig. 1) and the response of the CsI scintillator to the incident x rays have a strong spectral dependence over the spectral range of the source, resulting in a strong dependence of the detected intensity profile on the spectral content of the x-ray pulse.Figure 5 shows both the transmission vs wavelength through the BK7 flat, where the increased width of the flat due to the 40 incident angle has been taken into account, as well as the interaction probability vs wavelength for the 145 m thick CsI scintillator.Both of these curves must be used in determining the theoretically expected x-ray profile by applying the appropriate where P t !s is the probability that a photon of frequency !s will be transmitted to the detector, P d !s is the probability that the photon is detected, and N D denotes the total number of detected photons.The theoretical intensity profile can then be obtained by integrating Eq. ( 14) over t and !s for each macroparticle and summing over all the macroparticles in the electron bunch distribution.For the case under consideration, the angular FWHM of the electron distribution (approximately equal to 2.35 0 x ) is comparable to the FWHM of the intensity distribution for a single electron, approximately equal to 1=.Thus, it is expected that the electron bunch will have a significant effect on the x-ray intensity profile.This is illustrated in Fig. 6, which shows the calculated x-ray profile for the measured electron bunch colliding with a laser pulse polarized in both the horizontal (corresponding to the experimental case) and the vertical planes.In the case of the vertically polarized laser pulse, the elongation of the x-ray profile in the horizontal dimension is largely counteracted by the larger focus angle of the electron bunch in the vertical dimension, resulting in a fairly symmetric looking profile.On the other hand, with the laser pulse polarized in the horizontal dimension, the elongation effects of the polarization and the electron beam focus add, resulting in a predicted x-ray profile much like the measured profile (shown in Fig. 2).
A direct comparison of the measured and theoretical intensity profiles is shown in Fig. 7, displaying excellent agreement between the measured and calculated profiles.These were obtained by taking a line out along the major and minor axes of the measured (Fig. 2) and simulated [Fig.6(a)] intensity profiles.The slight differences between the theoretical and the measured widths can possibly be explained by errors in the electron beam emittance and energy determination, affecting both the width and the spectrum of the x-ray pulse.In addition, if the electron beam focus is sufficiently far from the interaction, from either unoptimized quadrupole settings and/ or timing errors between the electron and laser pulses, only the smaller convergence angles will be sampled by the laser pulse, resulting in less broadening of the x-ray profile due to the electron beam divergence.
C. K-edge absorption observation
To further test the validity of the 3D time and frequency-domain code, the effects of K absorption near the peak scattered x-ray wavelength on the measured intensity profile were observed and compared to that predicted by simulation.This was performed by placing a 125 m Ta foil in the path of the x-ray beam.The transmission characteristics for the Ta foil are shown in Fig. 8.If the electron beam energy is tuned such that the on-axis scattered photon energy is near the K absorption edge (K edge), the observed intensity profile will be very sensitive to the produced spectrum.
We consider two cases of K-edge absorption.In one, the electron beam energy was tuned to 55 MeV, resulting in a peak on-axis photon energy of 73.1 keV, and in the PRST-AB 7 EXPERIMENTAL CHARACTERIZATION OF AN . . .
(2004)
060702-8 060702-8 other, the electron energy was 57 MeV, resulting in a peak on-axis photon energy of 78 keV.To resolve the K-edge absorption in the x-ray intensity profile, the electron beam transport and final focus optics were tuned to a smaller electron beam convergence angle at the interaction, which results in a larger electron beam spot size, but increases the correlation between the x-ray wavelength and the observation angle.For both electron beam energies, the rms divergence of the electron beam in the horizontal and vertical dimensions were 0.9 mrad and 1.3 mrad, respectively.In the first case, the peak photon energy on axis is only slightly above the K-edge energy of 68 keV, resulting in a large degree of attenuation in the center portion of the detected intensity profile.A measurement of a 100 shot integrated intensity profile for this case is shown in 9(a).In the second case, the on-axis photon energy is sufficiently far above the K-edge such that most photons are transmitted through the foil, while photons slightly off axis are attenuated.This results in a detected intensity profile roughly described by a dark ring surrounding a bright center [Fig.10(a)].Simulations of both cases are shown in Figs.9(b) and 10(b).It was found that excellent agreement with the measured profile was obtained provided the electron beam energy in the simulation was about 1 MeV higher than the measured electron beam energy.This slight discrepancy is most likely explained by systematic errors in the electron beam energy measurement due to alignment and field calibration errors of the dipole magnet.In this respect, the x-ray measurements in conjunction with the detailed information provided by the 3D time and frequency-domain code can provide an important check of the electron beam diagnostics.The use of Thomson scattered x rays as a tool for diagnosing electron beam properties has previously been implemented by Leemans et al. [16].
IV. X-RAY BEAM FLUX AND BRIGHTNESS DETERMINATION
The good agreement between the theoretical and the measured intensity profiles and K-edge transmission characteristics provides good evidence that the spatially correlated spectral content of the Thomson scattered x rays calculated by the 3D time and frequency-domain code is accurate.Hence, we can confidently utilize this information in extrapolating from the integrated intensity of the CCD image to infer the number of photons produced in the interaction, as well as x-ray flux and spectral brightness of the source.
Because of the spectrally dependent transmission of the x-ray beam to the detector, P t !, and the detection sensitivity, !, the energy detected by the CCD will depend greatly on the spectral content of the x-ray beam.Theoretically, the total number of counts in the integrated CCD image will be given by where the detection sensitivity, !, in units of counts=unit photon energy, can be expressed in terms of the sensitivity at the calibrated photon energy !c : Equation ( 15) is simply Eq. ( 14 Note that Eq. ( 17) is a relatively fast calculation to perform, since it depends only on the time integration of the product of the electron and photon flux multiplied by the total Thomson cross section, with no need to integrate over solid angles or scattered frequency.From the computation of the ratio of Eq. ( 17) to Eq. ( 15), which will be defined as , the total photon dose in the x-ray pulse at the interaction point can be easily determined from the integrated CCD image such that where P e cj k k 0 ÿ e j T q e e R n r e t; tdt Likewise, the total energy in the x-ray pulse can be expressed as where is the average photon energy in the x-ray pulse.For the case under consideration, these calculations result in 0:98 photons=count.For the measured x-ray intensity profile shown in Fig. 2, this results in a measured total photon dose of N T 4:4 10 6 , with an average photon energy given by h h! s i 37:2 keV.The properties of the measured x-ray beam are summarized in Table III.
Finally, the peak brightness of the measured x-ray pulse can be obtained from the time-dependent spectra determined from the 3D time and frequency-domain code in conjunction with the measured x-ray dose.The time duration of the x-ray pulse closely mimics the duration of the electron bunch due to the head collision geometry, resulting in an x-ray pulse duration of about 3 ps rms, and a peak photon flux of about 6 10 18 photons=s.The source size is the result of the convolution of the electron and laser spot sizes at the interaction and is roughly equal to 18 m rms.The calculated time-dependent spectrum is shown in Fig. 11, where a peak spectral brightness of about 5 10 15 photons=s mm 2 mrad 2 0:1%b:w: is predicted.
Note that while N CCD is the only directly measured quantity in Eq. ( 17), both the values for and h h! s i are relatively insensitive to the details of the electron beam and laser beam distributions.This is due to the fact that the quantities are integrated over all wavelengths and observation angles, tending to wash out the effects of the spectral broadening in any given observation TABLE III.Measured x-ray beam characteristics.Source size and divergence parameters are the root mean square intensity values.The source size is inferred from measurements of the laser spot size and electron spot size.The spectral information is inferred from simulations of the Thomson interaction based on measurements of the electron beam characteristics, and the photon count and divergence is inferred from the CCD image of the x-ray beam (Fig. 2 direction.To a high degree of accuracy, these quantities can be simply calculated from the single electron scattering cross section and angular spectral dependence described by Eqs. ( 1) and ( 2), where !0 is now the average laser frequency, and is the average electron beam Lorentz factor.Thus, it can be concluded that in the regime being considered here, and h h! s i are primarily functions of the overall electron beam energy, laser wavelength, and interaction geometry, all of which can be experimentally determined with a high degree of confidence.This, along with the excellent agreement between the simulated and measured x-ray beam characteristics, suggests that the parameters listed in Table III are accurate estimates of the true x-ray beam properties.
V. CONCLUSIONS
Measured electron and x-ray beam parameters have been used to benchmark a newly developed threedimensional time and frequency-domain code designed to provide complete 3D time-resolved computational capability for the full determination of the temporal and spatially resolved spectra and intensity distributions produced from a Thomson interaction of arbitrary geometry.This capability is crucial for both the design of successful Thomson x-ray sources, as well as future experiments and applications utilizing such sources.The measured intensity profile of the x-ray beam produced from the collision of a 3 ps, 55 MeV electron beam with a 50 fs, 800 nm laser pulse was found to agree very well with that predicted from simulations with the new code after the inclusion of the spectrally dependent transmission and the detection efficiency of the x rays.In addition, the spatial structure of the measured x-ray intensity profile induced by K absorption through a Ta foil agreed very well with simulations.The input parameters of the simulation were determined by careful measurements of the electron beam energy, energy spread, spot size, and divergence at the interaction point.Thus, it can be inferred that time-integrated spatially correlated spectra predicted by the code is accurate.The simulated x-ray spectrum and overall detection efficiency predicted by the code was utilized to estimate the total x-ray dose in the measured beam to be about 4 10 6 photons.The peak flux and peak spectral brightness of the measured beam were determined to be 6 10 18 photons=s and 5 10 15 photons=s mm 2 mrad 2 0:1%b:w: respectively.Finally, we note that a full description of the theory behind the 3D time and frequency-domain code, as well as detailed calculations of femtosecond x-ray pulse production through Thomson scattering, is presented in a companion paper.
FIG. 2 .
FIG. 2. (Color) Single shot x-ray beam image detected with the x-ray CCD (a), and y integrated background subtracted intensity profile vs the divergence angle in x for the same image (b).
FIG. 1 .
FIG. 1. (Color) Schematic of the interaction region for the PLEIADES Thomson x-ray source.
FIG. 3 .
FIG. 3. Optical transition radiation image of the electron beam spot at the interaction (a), and an image from a YAG scintillator taken 0.8 m downstream of the interaction point used to determine the divergence of the electron beam (b).
FIG. 5 .
FIG. 5. Interaction probability vs x-ray energy for the CCD scintillator (a), and the transmission probability through the laser turning optic (b).
FIG. 4 .
FIG. 4. Reconstructed macroparticle representation of the electron beam divergence (a), and (b) comparison of the measured x integrated electron beam divergence in y (squares) to the distribution function used to construct the macroparticle distribution.
FIG. 6 .
FIG.6.(Color) Theoretical intensity profiles determined from the measured laser and electron beam parameters for the case of an x polarized incident laser pulse (a), which corresponds to the measured case (Fig.2), and a y polarized laser pulse (b).
FIG. 10 FIG. 9 .
FIG. 10. (Color) Measured (a) and simulated (b) intensity profile for x-rays transmitted through a 0.125 mm Ta foil.The measured electron beam energy was 57 MeV, while the energy used in the simulation was 58 MeV.
FIG. 11
FIG. 11. (Color) On axis time-dependent x-ray spectrum of the experimentally produced x-ray beam as determined by the 3D time and frequency-domain code.
TABLE I .
X-ray CCD specifications.
TABLE II .
Measured electron beam parameters at the interaction point, including the normalized rms emittance, the rms spot size, divergence, energy, and energy spread.The electron bunch length was determined from PARMELA simulations of the electron beam production and transport. | 8,572.2 | 2004-01-27T00:00:00.000 | [
"Physics"
] |
Localization of Outdoor Mobile Robots Using Curb Features in Urban Road Environments
Urban road environments that have pavement and curb are characterized as semistructured road environments. In semistructured road environments, the curb provides useful information for robot navigation. In this paper, we present a practical localization method for outdoor mobile robots using the curb features in semistructured road environments. The curb features are especially useful in urban environment, where the GPS failures take place frequently. A curb extraction is conducted on the basis of the Kernel Fisher Discriminant Analysis (KFDA) to minimize false detection. We adopt the Extended Kalman Filter (EKF) to combine the curb information with odometry and Differential Global Positioning System (DGPS). The uncertainty models for the sensors are quantitatively analyzed to provide a practical solution.
Introduction
Outdoor environments have irregular shapes and changes in geometry and illumination due to the weather condition.Therefore, environmental uncertainty is relatively high.There are numerous studies for autonomous navigation of mobile robot in outdoor environments.Typical examples are the autonomous vehicles that were developed through DARPA Grand/Urban Challenges [1,2].Most of the vehicles were equipped with a variety of high cost sensors in order to overcome various uncertainties.
The aim of this work is to develop a practical localization method for outdoor mobile robots.In particular, this study focuses on surveillance robots in urban road environments.A localization method using a small number of sensors is proposed instead of using multiple high cost sensors.
The fusion of a global positioning system (GPS) and inertial measurement unit (IMU) has been widely used for the outdoor localization of mobile robots [3,4].However, it is difficult to ensure the accurate pose estimation in dense urban environment, where GPS signal is degraded by the multipath errors and satellite blockage.Therefore, the use of environmental features has been studied to enhance the precision of the estimated robot pose in dense urban environment [5,6].In [7], a 3D representation of the local environment is used to detect obstruction of the GPS signals, which is blocked by buildings.In [8,9], the extracted line features of buildings are used to estimate the robot position.However, the available information regarding buildings is sparse in many places, and the slow update rate limits the performance of localization.In [10], the road centerline is extracted to correct the lateral position of the vehicle in mountainous forested paths.However, the correction of the heading error is not considered by the extracted road centerline.
Generally, urban road environments are paved, and the curbs act as the boundaries of the roads.Therefore, urban road environments are characterized as semistructured road environments.In semistructured road environments, the curb provides useful information for robot navigation.Therefore, the curb features have been widely used for navigation strategies and localization methods.In [11], Wijesoma et al. propose a method based on EKF for detection and tracking of the curbs.The range and angle of the curbs are obtained from an LRF measurement.However, quantitative performance analysis of the curb detection was not clearly shown.In [12,13], the curb on one side of the road is extracted using a vertical LRF for vehicle localization.The lateral error of 2 Mathematical Problems in Engineering a vehicle is reduced by map matching approach using the extracted curb point.However, the correction of the heading angle is not considered, because there is no angle information of the curb.
A method for traversable region detection using road features such as road surface, curbs, and obstacles is proposed in our previous works [14][15][16].The curb features are derived by the geometric features of the road to obtain curb candidates.In order to extract the correct curb among the curb candidates, a validation gate is derived from principal component analysis (PCA).The curb extraction is performed successfully in a road environment.However, there was a fundamental limitation on PCA.The validation gate does not consider the classification of the curb and noncurb data.Consequently, false detection where noncurb data are extracted as curbs may occur.The precision of estimated pose is decreased, when the false detection data are used for localization.Therefore, it is important to reduce the false detection rate.
The contribution of this paper can be summarized by two schemes.The first contribution is the robust curb extraction scheme by using a single Laser Range Finder (LRF).In order to reduce the number of false detection of the curbs, the classification of the curb data and the noncurb data is conducted by using Kernel Fisher Discriminant Analysis (KFDA) in [17][18][19].On the basis of our previous works, geometrical features of the curb are defined.The second contribution is the integrated localization scheme using curb features on the basis of quantitative sensor uncertainty models.In particular, the uncertainty model for the extracted curb is quantitatively determined from experiments.Extended Kalman Filter (EKF) is exploited to combine the curb features with odometry and Differential Global Positioning System (DGPS) information.The extracted curbs enable accurate estimation of the pose of the robot even when the temporal GPS blackout takes place.
The remainder of this paper is organized as follows.A method for curb extraction is presented in Section 2. Section 3 describes the localization method for an outdoor mobile robot using the extracted curb.It also introduces the uncertainty models for the sensor measurements.The experimental results of the proposed method are presented in Section 4. Finally, the conclusion of this research is presented in Section 5. 2.2.Road Feature Detection.A road feature detection scheme was proposed in our previous works [14,15].The previous works on road feature detection are briefly reviewed.The first step for the road feature detection is the road surface extraction.In order to identify the road surface, the LRF measurements in expected road region are selected as candidates for the road surface .Multiple line segments are constructed by combining consecutive data points of .The angle of the road surface is parallel with the -axis in the ideal case.Therefore, the range data are saved as road surface points if the angle of the line segment is within threshold .The road surface is extracted as a line by using the least square method.The extracted road surface provides and from the robot.The overall algorithm can be summarized in Algorithm 1.
Road Feature Extraction
Figure 2 shows the ideal model of a semistructured road environment.The geometrical features of the road can be used to extract the curb features.The curb has the following four attributes.
(Att 1) The angular difference between the curb orientation ( ) and road surface angle ( ) is close to 90 ∘ .(Att 2) The gap between the horizon distance of the road surface ( ) and the value of the curb point (B or C) is close to 0. (Att 3) The angular difference between the left curb ( ĀB) and the right curb ( CD) is close to 0. (Att 4) The difference between the road width and the gap between two curbs is close to 0. It is assumed that the road width is known.
It is commonly assumed that the robot navigates parallel to the curb.It is reasonable to assume that the robot is moving along the road without significant change of orientation in most cases.Moreover, the vertical surfaces of the curb are perpendicular to the road surface.When we scan the road environments using the LRF, the road features are composed of straight lines with different orientations.In order to distinguish different line segments that correspond to the road surface, the curbs, and the sidewalk, it is helpful to assume that the robot's heading direction points forward along the road.The road width is assumed to be known by given map information.
The first attribute is used to select the curb candidates.The line segments that satisfy the following condition are selected as the curb candidates: (1) ← the LRF measurement.
(3) ← The number of the LRF measurement points.(4) ← Expected range of the road surface points (5) for ← 1 to (6) if , ∈ , then (7) ← save as candidate for the road surface.(8) ← angle of candidate for the road surface.(9) ← The number of candidates for the road surface.(10) end if (11) end for (12) The configuration of a robot and installation of a LRF. where Once the curb candidates are determined, attributes 2, 3, and 4 are used to extract the correct curb out of the curb candidates.The attribute values 2 , 3 , and 4 that correspond to each attribute are numerically calculated for the pair of right and left curb candidates as follows: is the gap between the right and left curb points ( = C, − C, ).When a curb exists on only one side, 2 and 3 are computed from the curb candidates for one side. 4 is assumed to be 0. The data vector of the curb candidates is given as follows: (3)
Curb Extraction Using Kernel Fisher Discriminant Analysis (KFDA).
KFDA is applied to extract the correct curb from the curb candidates.KFDA aims to find a discriminant function for optimal data classification.Therefore, the discriminant function can be used to classify the curb candidates as curb and noncurb class.The curb extraction is conducted by the following procedure.First, the training data with class information are selected.The discriminant function is derived from the offline computation of the training data.
When the discriminant function is obtained, classification of the curb candidates is carried out by the discriminant function in real time.
The discriminant function is defined as a vector to maximize the following object function in the kernel feature space F. Consider the following: The "between-class variance matrix" Φ and the "withinclass variance matrix" Φ are defined as follows: with ℓ is the number of total training samples, and ℓ is the number of class samples.
Training data { 1 , 2 , . . ., ℓ } consist of data with obvious class information.Each sample x training, ∈ R 3 belongs to one of two classes that include curb and noncurb class.In order to obtain equivalent effects on classification, the components of each sample should be normalized.Consider the following: Here, () and () are the mean and standard deviation of each attribute, respectively.The training data that are mapped to the kernel feature space are represented by an ℓ × ℓ matrix .The Gaussian kernel is used for mapping the data onto kernel feature space F. Consider the following: is the control parameter that needs to be tuned to improve classification performance [19].If is too small, the overfitting problem may take place.Although the classification accuracy with respect to the training data increases, the classification performance with respect to the test data becomes poor under the occurrence of the overfitting.If is too large, the underfitting problem may take place.The classification performance will not be satisfactory at all cases when the underfitting takes place.Therefore, should be carefully selected.In this paper, is manually tuned under the consideration of experimental performances.
The discriminant function that maximizes the object function in (4) is derived by an eigenvalue problem.In order to project the training data onto a one-dimensional (1D) solution space, the eigenvector that corresponds to the largest eigenvalue is defined as the discriminant function for data classification.The discriminant function is given by an ℓ×1 vector.Consider the following: The classification of the training data is conducted by an inner product of the data matrix and the discriminant function .The projected training data training are distributed in the 1D solution space.Consider the following: The class of test data can be predicted by using the discriminant function .The test data denote the curb candidates.The test data x test is projected to the 1D solution space, as shown in the following equation: The class properties of the training data are used to predict the class of the test data test .The Mahalanobis distances between each class and the test data are computed by the following equation: where Φ and Φ denote the mean and standard deviation of the class distribution, respectively.The class of the test data is determined as the class with the smallest Mahalanobis distance.When the test data are classified as the curb class, they are extracted as the curb.
Outdoor Localization Using Curb Feature
3.1.System Design.This paper adopts EKF to estimate the robot pose using curb features.EKF is a well-known method for mobile robot localization and sensor fusion [20][21][22].When the initial pose of the robot and adequate observations are provided, the pose of the robot can be estimated by correcting the odometry error.The EKF process consists of a prediction step and an update step in sampling time .DGPS and an LRF are used to correct odometry errors.Figure 3 shows a block diagram of the localization process.
Measurement Uncertainty Model.
The GPS error mainly occurs due to the following two factors.One is the pseudorange errors caused by systematic factors.Another is the geometric constellation of satellites.In this paper, DGPS is used to minimize the pseudorange errors.The uncertainty model for DGPS is computed under the consideration of the "dilution of precision" in relation to the geometric constellation of satellites and the pseudorange error or so-called "userequivalent range error" [23].Consider the following: The error covariance R DGPS, is given as (12).The elements of the covariance matrix are provided by the DGPS receiver in real time.
Several studies were proposed to define the error covariance of line features [24,25].However, the uncertainty models are appropriate for a static condition or a low-speed driving condition on a flat surface.However, the outdoor mobile robots are usually driven in the uneven terrain.The measurement noise of the extracted curb occurs due to the wobble of the robot.Therefore, the uncertainty model for the curb needs to be defined by experiments under resultant driving conditions.
In order to define the error covariance R curb , we consider the noise model of the extracted curb as shown in Figure 4.The information of the extracted curb contains estimation errors in the range and angle.Therefore, the angle and range measurements for the extracted curb are composed of the "true" angle and the "true" range , along with the estimation errors.Consider the following: where and are the estimation errors for a curb and have random variances 2 and 2 , respectively.The ground truth of the curb locations can be measured by an additional LRF, which is attached to the side of the robot.Because the additional LRF is directly facing the curb, the range data from the additional LRF provide reliable geometric information of the curb during robot's movement.The measurement of the curb from the additional LRF is more accurate than the forward-pointing tilted LRF, because the number of range data points that correspond to the curb is much larger.The curb is represented as a straight line by the application of the least square method.The ground truth implies the relative range and orientation of the curb.The estimation errors are computed for the extracted curbs.Therefore, the error covariance of the curb is defined by the distribution of the estimation errors.By using a large number of measurements of the estimation errors and , the error covariance is calculated as follows: The covariance matrices are experimentally defined as constant values for the left and right curbs.
Extended Kalman Filter Localization Using Curb Feature.
The odometry data from the wheel encoders are used to predict the robot pose.By using the incremental distance Δ −1 and orientation Δ −1 , the predicted robot pose is given by the following equation: Extracted curb (from 1st estimated robot perspective) is the predicted robot pose at sampling time .When the initial pose of the robot is given, the robot pose is represented by , , and in a global coordinate frame.The uncertainty of the predicted robot pose consistently increases because of the accumulative errors of the odometry.
Actual curb (curb line map)
In order to correct the odometry error, the first measurement correction is performed using DGPS measurements.The update frequency is set to 1 Hz on the basis of the DGPS measurement frequency.When the available position measurement is provided, the observation vector is given in global coordinates.Consider the following: The observation model for the current state is described as follows: The measurement Jacobian matrix H is given as follows: The second measurement correction is conducted by using the curb features.The observation vector for the extracted curb is given by (19).When the curbs are extracted on both sides, there are two measurement vectors for the left and right curbs as shown in Figure 5.The sequence of the second measurement correction is from the right curb to the left curb.The update rate is 5 Hz if the curbs are continuously extracted.
The robot pose is corrected by comparing the curb with the map.The extracted curb is matched with the th line of the curb map.The observation model for the th line and the robot pose at time is given by the following equation: The Jacobian matrix H is defined as follows: The consistency of EKF relies on the observation model.If an erroneous sensor observation is provided, the system does not provide a consistent result.Therefore, the outliers that lie outside of the uncertainty bounds should be rejected.A normalized innovation squared (NIS) test is implemented in order to confirm the consistency of the filter.NIS value has a 2 distribution with respect to degrees of freedom.Consider the following: where S is the innovation matrix which is defined as The NIS value for valid measurements should be within the threshold of the 2 distribution.However, the erroneous measurements are discarded when the NIS value lies outside the threshold boundary.A threshold value that corresponds to a 95% confidence region is used for outlier rejection.
Experimental Results
4.1.Experimental Setup. Figure 6 shows the sensor system attached to the mobile robot.The outdoor mobile robot is a Pioneer P3-AT, a commercial outdoor platform of MobileRobots.The wheel encoders attached to the driving motors provide odometry information.SICK LMS-100 was used to detect the curb.The LRF was tilted by 5 ∘ toward the ground.A Novatel DGPS system was used to measure the global position.The robot was equipped with a rover antenna, and a base station was located on the roof of a nearby building.The "true" value for the curb and the ground truth were measured using an additional LRF (HOKUYO URG-04lx) attached to the side of the robot.
The experiment was performed in Korea University in Seoul, Korea, as shown in Figure 7.The robot was driven manually along the yellow dashed line from "S" to "G".The curbs along the experimental path are represented as solid green lines.The target environment has semistructured geometry that is composed of paved roads and curbs in most of the region.Furthermore, there are tall buildings and tunnels that degrade the DGPS precision.The travel distance along the experimental path was 775 m, and average speed of the robot was 0.5 m/s.
Curb Extraction Results
. The curb extraction process in a semistructured road environment is shown in Figure 8. Figure 8(a) shows the road environment where the experiment was performed.The red dotted line represents the scan area of the tilted LRF. Figure 8(b) shows the extracted road surface on the basis of the scanned LRF data in the road environment.When the road surface is extracted, the line segments that satisfy attribute 1 in Section 2.2 are selected as the curb candidates.Figure 8(c) shows the line segments that correspond to the curb candidates.The first and the second candidates for each side of the curbs are represented by blue and green lines, respectively.There are two candidates on each side.Therefore, four pairs of candidates can be extracted as curbs: (R1, L1), (R1, L2), (R2, L1), and (R2, L2).The class prediction results for each pair of curb candidates are listed in Table 1.The pairs of candidates (R1, L1) and (R1, L2) are classified as the curb class.Finally, candidate (R1, L1) is extracted as the curbs as shown in Figure 8(d), which has the smallest Mahalanobis distance with respect to the curb class.If the LRF data that correspond to the vertical surface are detected, the proposed method will perform successfully despite the small and less distinct curbs.
The curb extraction was performed while the robot navigates through the experimental path.Figure 9 shows the curb mapping result from the curb extraction.The curb map is shown in the right bottom in Figure 9.The extracted curbs are denoted by the magenta dots.The results indicate that most of the curbs along the experimental path were successfully extracted.In order to demonstrate the robustness of the proposed method, comparison of classification performance between our previous method and the proposed method is summarized in Table 2.The accuracy of the curb extraction with PCA was 91.3% for 4751 scanned datasets.The accuracy was improved to 98.6% with the proposed KFDA.A confusion matrix is presented below.The conventional method and the proposed method show 97.1% and 98.2% of true positive rate, respectively.The curb extraction is performed successfully by the application of both methods.However, the most important factor in the curb extraction is to reduce the false detection rate.False detection is that the noncurb data are misclassified as the curb data.The accuracy of the estimated pose can be decreased, when the false curbs are used for localization.The false detection rate was 27.4% with PCA.The false detection rate was reduced to 0.4% with the proposed KFDA.The result implies that most of the noncurb data were classified as noncurb data correctly.In particular, measurement errors were 2.1 m on average in area F. The standard deviation that was measured in area F was 6.4 m on average.The result implies that the regional properties of DGPS were obtained by measuring the DGPS uncertainty in real time.However, the precision of the localization only by DGPS can be decreased when the robot navigates near the buildings.The curb estimation errors are considered in order to define the error covariance.The most accurate result for the quantitative curb uncertainty is obtained by measuring the estimation errors in experimental environments.The estimation errors were measured while the robot navigates through the road with curbs.Ground truth for the curb was provided by an additional LRF that is attached to the side of the robot as shown in Figure 11.Ground truth of the curb was provided by line feature from the additional LRF data.Experiments were conducted in order to measure the estimation errors prior to the localization experiment.The following results show the estimation errors for the range and angle of the extracted curb.
Figure 12 shows the histogram of the curb estimation errors for each side.The number of the estimation error measurements is approximately 5000.The covariance matrix of the curb was computed from the distributions of estimation errors.The elements of error covariance for each side of the curb are listed in Table 3.
The covariance representation for the extracted curbs is shown in Figure 13.The extracted curbs are represented by blue lines, and the error bounds for the curbs are represented by green dotted lines.The resultant curbs exist within the error bounds with 95% confidence.truth was computed using the additional LRF attached to the side of the robot.
Figure 15 shows the lateral errors of the localization results.The position errors when using the fusion of the odometry and DGPS are shown in Figure 15(a).The lateral errors in each area were usually greater than 1 m.In particular, the maximum error in area F was 5 m.In contrast, the localization result which was corrected by odometry, DGPS, and curb information shows that the lateral errors were within 0.6 m across the entire area, as shown in Figure 15(b).Therefore, the proposed method shows robust performance in terms of lateral position estimation errors despite the large DGPS errors.The heading errors of the localization results are represented in Figure 16.The result that was corrected using only the DGPS data is shown in Figure 16(a).The heading errors were greater than 1 ∘ on average.The variance was larger than 2 ∘ .However, when the curb information is used for correction, the heading errors were remarkably decreased.Furthermore, the errors rarely exceed 3 ∘ in the entire area.
In experiments, the proposed method showed precise and robust performance for the lateral and heading errors, despite the large DGPS errors and a temporal blackout.
Conclusion
This paper presents a localization method for outdoor robots using curb features in semistructured road environments.
A reliable curb extraction scheme is proposed to classify the curb candidates as curb and noncurb classes.Most of the curbs in an experimental path are extracted with high accuracy.An EKF-based localization is also proposed to combine the extracted curbs with odometry and DGPS measurements.The uncertainty models of the sensors are defined by experiments to provide a practical solution for localization.From experimental results, the robustness of the proposed method is demonstrated in real road experiments.The curb features can correct significantly the lateral position and heading errors in dense area, where the DGPS signal gets degraded by buildings.
2. 1 .
System Configuration.Figure1shows the configuration of the mobile robot and the LRF coordinate configuration.A single onboard LRF with a tilting angle of is used to extract road features such as the road surface and curbs.The following list shows the nomenclature for road feature extraction.The variables are described with respect to the robot coordinate frame: : road width; : nominal tilt angle of LRF; : angle between the detected road surface points and -axis; : horizontal look-ahead distance from the robot to the road surface points; ( C, , C, ): coordinates of the right curb edge (point C); C, : angle between the right curb and -axis; ( C, , C, ): coordinates of the left curb edge (point B); C, : angle between the left curb and -axis.
Figure 2 :
Figure 2: Ideal model of semiconstructed environment (blue dotted line: LRF data, red line: extracted road surface).
Figure 3 :Figure 4 :
Figure 3: A block diagram of localization process.
Figure 5 :
Figure 5: Extracted curb features and a line map with respect to the global coordinate frame.
4. 3 .
Uncertainty Measurement Results.The DGPS measurements are represented by red dots along the experimental path in Figure 10(a).The areas with large DGPS errors are represented by A-F. Figure 10(b) shows the standard deviation of the DGPS measurements.The standard deviation of the DGPS is about 2 m in open area.There were temporal blackouts in areas B, D, and E due to satellite blockage.
4. 4 .
Outdoor Localization Results.The localization results are shown in Figure 14.The proposed method was compared with the conventional framework for outdoor localization that combines DGPS measurement with odometry.The blue dash-dot line represents the localization results corrected only by the DGPS measurements.The estimated paths show some difference with respect to the reference path (yellow dotted line) in area A-F due to the DGPS errors.The magenta line represents the localization results corrected by DGPS and the extracted curb.When the curb information is applied, the results show that the robot position well matches the reference path, even if the DGPS errors are large or a blackout occurs (area A-F).It is clear that the curb information plays a dominant role.The following part presents the localization errors in area A-F, as shown in Figure 14.The localization errors were compared with the conventional framework that combines DGPS measurement with odometry.The ground
Figure 12 :Figure 13 :
Figure 12: Distribution of estimation errors for (a) left and (b) right curbs.
Figure 14 :
Figure 14: Localization results in semistructured road environment.
Figure 15 :Figure 16 :
Figure 15: Lateral errors (a) corrected by DGPS and (b) corrected by DGPS and extracted curb.
Table 1 :
Class prediction for pairs of curb candidates.
Table 2 :
Classification results including confusion matrix.
Table 3 :
Error covariance of extracted curb. | 6,367.2 | 2014-04-08T00:00:00.000 | [
"Computer Science"
] |
The histone acetyltransferase GCN5 and the transcriptional coactivator ADA2b affect trichome initiation in Arabidopsis thaliana
Figure 1: Relationships between players that affect trichome initiation in Arabidopsis thaliana. Gibberellin (GA) binds to the receptor GID1 and forms an association with the DELLA repressors, which themselves act to block trichome formation. The GA-GID1-DELLA complex interacts with SLEEPY1 (SLY1). SLY1 is a component of a complex that polyubiqutinylates the DELLAs, resulting in their proteasome-mediated degradation. SPINDLY (SPY) and SECRET AGENT (SEC) have been shown to covalently modify the DELLAs, with opposite effects on activity. Our recent data supports a role for the histone acetyltransferase GCN5 and associated factor ADA2b in inhibiting trichome formation. These chromatin modifiers may work through stimulating expression of one of the DELLA proteins. Our data also indicate that GCN5 and ADA2b block expression of SEC and that ADA2b affects SPY, suggesting additional input from these epigenetic factors in regulation of the trichome initiation pathway.
: Relationships between players that affect trichome initiation in Arabidopsis thaliana. Gibberellin (GA) binds to the receptor GID1 and forms an association with the DELLA repressors, which themselves act to block trichome formation. The GA-GID1-DELLA complex interacts with SLEEPY1 (SLY1). SLY1 is a component of a complex that polyubiqutinylates the DELLAs, resulting in their proteasome-mediated degradation. SPINDLY (SPY) and SECRET AGENT (SEC) have been shown to covalently modify the DELLAs, with opposite effects on activity. Our recent data supports a role for the histone acetyltransferase GCN5 and associated factor ADA2b in inhibiting trichome formation. These chromatin modifiers may work through stimulating expression of one of the DELLA proteins. Our data also indicate that GCN5 and ADA2b block expression of SEC and that ADA2b affects SPY, suggesting additional input from these epigenetic factors in regulation of the trichome initiation pathway.
Description
This integrations article considers data from Kotak et al. 2019 andTrachtman et al. 2019. Our interest is in understanding the place of chromatin modifiers in developmental control pathways, working in concert with or in parallel to established transcriptional factor networks, hormonal signaling, etc.
Previously we have described how the histone acetyltransferase GCN5 and its associated transcriptional coactivator ADA2b affect trichome morphogenesis (Kotak et al. 2018). We have now shown that these epigenetic factors also play a role in specification of trichome cell fate, as evidenced by an increase in trichome number and/or density in disruption mutant backgrounds (Kotak et al. 2019). Wang et al. 2019 also recently showed an increase in trichome density on the first pair of true leaves in several gcn5 mutant backgrounds. Therefore, GCN5 and ADA2b act to limit trichome initiation (Fig. 1) from a field of epidermal leaf cells, a well-described developmental process involving many genes (Pesch and Hulskamp 2009) In addition, a number of phytohormones affect trichome development (reviewed in Fambrini and Pugliesi 2019). We focused on pathways connected to gibberellin (GA) signaling, which has been shown to stimulate trichome development in Arabidopsis (Chien and Sussex, 1996). Mutations in two members of the DELLA repressor protein family, RGA and GAI, can rescue the glaborous phenotype of ga1-3 mutants, indicating a role for these proteins in repressing trichome formation (Dill and Sun, 2001). The DELLAs themselves are negatively regulated by GA in concert with SLY1, which induces degradation via the proteasome pathway (Dill 2004) and by SEC, which has been shown to covalently modify RGA (Zentella et al. 2016). SPY also covalently modifies the DELLAs, in a way that promotes their activity (Zentella et al. 2017, Fig. 1).
With disruption of GCN5 or ADA2b, GAI expression in rosette leaves is slightly decreased while there is no detectable change in RGA expression (Trachtman et al. 2019;Vlachonasios et al. 2003). Decreased expression of a DELLA repressor would be consistent with the increased number and density of rosette leaf trichomes observed in gcn5 and ada2b disruption mutant backgrounds (Kotak et al.2019, Wang et al. 2019, Fig. 1). However, given the modest observed effects on expression of GAI only, it seems unlikely that this change alone fully explains GCN5's and ADA2b's roles in trichome initiation. Gan et al. (2007) have reported that GAI also has a role in limiting trichome branching so the effects on GAI may also relate to trichome morphogenesis.
We also show that disruption of GCN5 or ADA2b leads to an increased expression of SEC (Trachtman et al. 2019, Fig. 1). This effect is consistent with increased trichome number in gcn5 and ada2b disruption mutants. Our data suggest a role for ADA2b in limiting expression of SPY, which would be expected to decrease trichome number. This transcriptional effect on SPY may relate to the decrease in absolute number of trichomes observed only in the second true leaf in an ada2b-1 mutant background (Kotak et al. 2019). This finding could be further explored by conducting expression analysis in the first and second true leaves separately. It should also be noted that since qRT-PCR experiments do not directly measure functional protein levels, there could be translational or post-translational effects to consider.
While data from gcn5-1 and gcn5-6 is generally consistent, somewhat different effects (as seen with SLY1; Trachtman et al. 2019) may expected due to the nature of these lesions in the GCN5 locus and/or the genetic backgrounds (Ws vs Col ecotypes, respectively). It is also important to note that we examined mature rosette leaves, as it is technically difficult to isolate trichomes for this analysis, especially from the mutant plants. This may result in variability seen in some experiments (e.g. SEC in gcn5-1; Trachtman et al. 2019).
This work places epigenetic players in a developmental pathway alongside other transcriptional regulators. To uncover more details about how these factors act and interact, chromatin immunoprecipitation could be used to assess changes in histone acetylation state or potentially GCN5 and ADA2b binding at the GAI, SEC, and SPY promoter, to determine if these loci are direct targets of these chromatin modifiers. | 1,365 | 2019-10-17T00:00:00.000 | [
"Biology"
] |
Pregnant Women Infected with Pandemic H1N1pdm2009 Influenza Virus Displayed Overproduction of Peripheral Blood CD69+ Lymphocytes and Increased Levels of Serum Cytokines
The first pandemic of the 21st century occurred in 2009 and was caused by the H1N1pdm influenza A virus. Severe cases of H1N1pdm infection in adults are characterized by sustained immune activation, whereas pregnant women are prone to more severe forms of influenza, with increased morbi-mortality. During the H1N1pdm09 pandemic, few studies assessed the immune status of infected pregnant women. The objective of this study was to evaluate the behavior of several immune markers in 13 H1N1pdm2009 virus-infected pregnant (PH1N1) women, in comparison to pregnant women with an influenza-like illness (ILI), healthy pregnant women (HP) and healthy non-pregnant women (HW). The blood leukocyte phenotypes and the serological cytokine and chemokine concentrations of the blood leukocytes, as measured by flow cytometry, showed that the CD69+ cell counts in the T and B-lymphocytes were significantly higher in the PH1N1 group. We found that pro-inflammatory (TNF-α, IL-1β, IL-6) and anti-inflammatory (IL-10) cytokines and some chemokines (CXCL8, CXCL10), which are typically at lower levels during pregnancy, were substantially increased in the women in the ILI group. Our findings suggest that CD69 overexpression in blood lymphocytes and elevated levels of serum cytokines might be potential markers for the discrimination of H1N1 disease from other influenza-like illnesses in pregnant women.
Introduction
By August 2009, nearly 277,000 cases of H1N1pdm09 viral infection had been reported, and at least 3,205 deaths were documented globally. In Mexico, 85 deaths were reported, of which sixteen percent were of pregnant women [1][2][3][4][5]. Several studies have demonstrated that pregnant women are at greater risk of hospitalization, admission to an intensive care unit, death, and other severe outcomes related to H1N1pdm2009 viral infection [6][7][8].
Pregnancy is an altered immune state with increased susceptibility to infectious diseases [9,10]. The systemic immunity status, at the cellular and cytokine levels, has been poorly studied in pregnant women infected with the H1N1pdm2009 virus.
A massive cytokine response resulting from the sustained activation of blood stream leukocytes after infection has been suggested to be the major pathogenic mechanism of the H1N1pdm2009 virus [11,12]. The production of specific cytokines such as the tumor necrosis factor (TNF)-a, interleukin (IL)-1, IL-6 and IL-10, and chemokine IL-8, are induced by viral infection, as demonstrated by an analysis of the sera of pandemic influenzainfected patients [13][14][15].
Influenza virus types share the ability to activate T and B lymphocytes in a polyclonal manner, stimulating nonspecific T and B cell responses such as inflammatory cytokine production [11,12,[16][17][18]. CD69 is a marker of early activation on the membrane surface of hematopoietic cells, including T cells, B cells and monocytes, and it correlates with their ability to induce cell responses [19][20][21][22].
The goal of this study was to examine the cellular phenotypes of blood lymphocytes and representative serum cytokines and chemokines during acute H1N1pdm2009 virus infection in pregnant women and in pregnant women with influenza-like illnesses, compared with those of healthy pregnant and nonpregnant women. We found that CD69 on T lymphocytes and the TNF-a, IL-1b, IL-6 and IL-10 sera cytokines as well as CXCL8 were increased in H1N1pdm2009 virus-infected women. Our findings suggest that CD69 overexpression in blood lymphocytes and elevated levels of serum cytokines/chemokines might be used as markers for the discrimination of H1N1 disease from influenzalike illnesses in pregnant women.
Patients and sample collection
This research work was jointly conducted by The National Institute of Perinatology (INPer) and the Medical Research Unit on Immunochemistry (UIMIQ), Specialties Hospital, National Medical Center ''Siglo XXI''. Both Institutional Ethics Committees approved the study (Research projects INPer: 212250-06191 and IMSS: R-2009-785-104). Fifty-four women were enrolled in the study after signed informed consent was obtained.
The study group was stratified into four subgroups as follows: 1) confirmed H1N1pdm2009 virus-infected pregnant women (PH1N1, n = 13); 2) pregnant women with flu-like illness (ILI, n = 11); 3) healthy pregnant women (HP, n = 12); and 4) healthy non-pregnant women (HW, n = 18). H1N1pdm2009 viral infection was confirmed by specific real-time reverse transcriptionpolymerase chain reaction (rRT-PCR) using in-house designed primers that were crosschecked in accordance with US guidelines. The analysis was performed by the Institute for Epidemiologic Diagnosis and Reference (InDRE) in Mexico City.
The participants had a previous clinical evaluation, and women from the PH1N1 and ILI groups showed the following signs or symptoms: cough, fever, sore throat, rhinorrhea, headache, myalgia, joint pain, dyspnea, conjunctivitis, sore back, diarrhea, asthenia, nausea, and/or vomiting. After a patient agreed to participate in the study, healthcare personnel collected blood specimens in silicone-coated and heparinized tubes (BD Vacutainer, N.J, USA) samples were processed immediately after collection.
Cell surface markers assessment
Fifty microliters of whole blood was mixed with the following fluorochrome-conjugated antibodies: anti-CD3/FITC (IM2635 Immunotech), anti-CD19/PerCP (MHCD1931/Invitrogen), anti-CD14/PE-Cy7 (557742/BD), anti-CD62L/APC (MHCD62L05/ Invitrogen), anti-CD69/PECy5 (555532BD) and anti-CD86/PE (MHCD8604 Invitrogen). Appropriate isotype controls for each antibody were used. After 15 min in the dark at room temperature (RT), 500 mL of working FACS Lysing solution (Becton-Dickinson, CA, USA) was added, and the reaction was incubated for 10 min at RT. The cell suspensions were washed once with a 1-mL fraction of phosphate-buffered saline by centrifugation at 9006g for 5 min at RT and then analyzed. Ten thousand total single events were acquired in a FACS Aria I flow cytometer (BD, biosciences USA) equipped with the FACSDiva 6.1.3 software (BD PharMingen). The analysis was performed using the Infinicyt Analytical Software (Cytognos). The percentages of CD62L, CD69 and CD86 expressing cells were determined from the CD3, CD19 or CD14-gated positive events from the SSC vs CD3, CD19 or CD14 respective dot plots.
Determination of cytokine and chemokine levels
The serum concentrations of cytokines (TNF-a, IL-1b, IL-6, IL-12p70 and IL-10) and chemokines (CXCL8/IL-8, CXCL9/MIG, CCL2/MCP-1 and CXCL10/IP-10) were determined using a cytometric bead array (CBA) kit (BD PharMingen, San Diego, CA, USA), according to the manufacturer's instructions. The data analysis was performed using the GraphPad Prism software (GraphPad Software, San Diego, CA, USA). Log-transformed data were used to construct standard curves fitted to 10 discrete points using a 4-parameter logistic model. The concentrations in the test samples were calculated using interpolations of their corresponding standard curves.
Statistical analysis
The differences in the clinical manifestations between the PH1N1 and ILI groups were compared using the Xi 2 test, and the differences between the study groups were compared using the Kruskal-Wallis test with Dunn's multiple comparison post-test using the GraphPad Software.
Characteristics of patient groups
The patient samples were analyzed during the second wave of the pandemic (January-October 2010). Table 1 includes the demographic and obstetric characteristics of the recruited women among the groups.
The ILI and PH1N1 women frequently showed symptoms of cough, headache, fever, rhinorrhea and/or odynophagia; however, we did not observe differences in the frequency of these symptoms, although dyspnea was more frequent in the PH1N1 group ( Table 2). All of the presumed influenza-infected pregnant patients received the WHO Guidelines-recommended oseltamivir regimens, which consisted of 300 mg bid for 5 days. The arterial blood gases were tested in the PH1N1 patients; six of these patients (46.1%) showed hypoxemia (PO 2 ,70 mmHg), and four (30.7%) had respiratory alkalosis. Fifty percent of the PH1N1 women were ambulatory, and the remaining patients were hospitalized, most frequently with a diagnosis of pneumonia. The HP, ILI and PH1N1 groups were similar in terms of age, weeks of gestation, number of pregnancies or the presence of underlying diseases. The average hospitalization time was 5.4+/20.9 days for achieving full recovery in all the patients.
Perinatal complications were more frequent in the hospitalized patients than they were in the ambulatory PH1N1 women. Preterm labor, preterm birth or intrauterine growth restriction were observed in six (37.5%) patients. No perinatal complications were documented in the ILI patients. The PH1N1 women showed a 20-fold greater risk for obstetric complications (OR = 20, 95% CI 1.5 to 57.2) and an 8-fold greater risk for neonatal complications (OR = 8, 95% CI 1.5 to 15.5) compared with the HP group. Obstetric complications were more frequent in the hospitalized patients than in the ambulatory patients (3 vs 0, p, 0.05).
Peripheral blood leukocytes
Similar percentages of lymphocytes, monocytes and granulocytes among the total leukocytes were observed among the HW, PW, ILI and PH1N1 women. However, the percentages of T cells were significantly higher (p,0.001) and those of B cells were lower in the PH1N1 than in the HW, HP and ILI groups, specifically for T lymphocytes ( Early activation markers on CD3, CD19 and CD14 positive cells A significantly higher (p,0.001) percentage of CD69+ CD3+ cells were observed for the PH1N1 group than for the control groups (51.5+/236.0 vs. 0.4+/20.2, 0.4+/20.5, 0.6+/20.5 for HW, HP and ILI); however, these differences were not observed for the CD69+ cells among the CD19+ or CD14+ cells (Figure 1). In addition, no differences in the percentages of CD62L+ or CD86+ cells among the CD3+ or CD19+ lymphocytes among the groups were found (Table 3).
In the healthy pregnant women the percentages of CD86+ CD14+ cells significantly diminished (p,0.01) compared to those of healthy women (14.7+/225.6 vs 64.8+/216%). However, pregnant women with an ILI condition reached similar levels of CD86+ cells in CD14+ cells as that of healthy women (Table 3).
CXCL8 and CXCL10 chemokine concentrations
The serum concentrations of the CXCL9 and CCL2 chemokines appear to be unaffected by pregnancy and tend to be poorly induced by influenza-like or H1N1pdm2009 viral infection (Figure 3a and 3b). The CXCL8/IL-8 concentration was higher (p,0.01) in the women in the PH1N1 group than it was in the HP women (188.8+/2226.8 vs 35.0+/224.4 pg/mL) (Figure 3c). In addition, we observed that the concentrations of CXCL10/IP-10 were higher (p,0.05) in the women in the ILI group than in the HP women (422.3+/2368.8 vs 155.3+/2106.7 pg/mL) and were poorly induced (p.0.05) in the PH1N1 group (302.9+/2248.6 vs 155.3+/2106.7 pg/mL) (Figure 3d).
Discussion
Pregnant women are at a greater risk for developing complications associated with influenza viral infection [6]. During the 2009 H1N1 influenza A pandemic, pregnant women were approximately 4-5 times more likely to develop severe disease compared with non-pregnant individuals in the general population. We analyzed the cellular activation phenotypes and serum cytokine concentrations to characterize the systemic inflammatory condition in this risk group.
Cellular and humoral inflammatory mediators have been implicated in the development of complications during localized and systemic infections, including influenza [23][24][25][26][27][28]. According to a previous report, H1N1 influenza-infected pregnant women are more likely to develop perinatal complications [25].
CD69 expression on the lymphocyte surface has been identified as an early activation marker in vitro and in vivo, and CD69 exerts regulatory functions that affect the differentiation and synthesis of cytokines, resulting in the modulation of the inflammatory response. Additionally, CD69 has been used as an early immune marker for human viral infections and chronic inflammatory diseases [29][30][31][32]; it could be expressed constitutively in monocytes, and their activation results in increases of reactive oxygen species [19]. We observed an increased proportion of CD69+ cells only in the T lymphocytes in H1N1pdm2009 virusinfected pregnant women compared with influenza-like illnessinfected pregnant women. This elevated proportion of T cells in the PH1N1 group was accompanied by a strong inflammatory environment in the serum, correlating with an increase in complications and most likely caused by the ''cytokine storm'' of H1N1pmd2009 and other lethal influenza viruses observed here and by other investigations involving non-pregnant subjects [33][34][35][36][37]. This higher proportion of CD69 in T lymphocytes could be derived from the indirect action of inflammatory cytokines; the higher proportion of CD69 could be a strategy to control a potentially overwhelming inflammatory response. It has been reported that IL-1b, IFN-c or TNF-a-activated endothelium induces CD69 expression on T lymphocytes, most likely as a regulatory mechanism to control the differentiation and production of cytokines in these cells [38]. A CD69 deficiency enhances the inflammatory phenomenon in cases of infective diseases in which the inflammatory environment is essential for aggravating the infection [29,39]. The interaction of the endothelium with lymphocytes in the context of H1N1 infection has been poorly explored, and endothelial-myeloid cell interactions were originally proposed [40,41]. We suggest that CD69 expression on T lymphocytes is a consequence of the highly inflammatory environment, resulting in limiting the cytokine storm and preventing clinical complications in pregnant women, including promoting fetal loss.
CD86 is an activation marker and co-stimulatory molecule that is strongly expressed by mature antigen presenting cells (APC) such as B cells, monocytes, dendritic cells or macrophages [42,43]. We found that a low percentage of monocytes express CD86 as a result of pregnancy, in contrast to the initial CD86 expression of monocytes in ILI in pregnancy; however H1N1pdm2009 viral infection in pregnant women did not reach the same CD86 percentage as those reached by ILI. It has been reported that the H1N1pdm2009 virus could impair the adaptive immunological response by the induction of Programmed Death-Ligand 1 (PD-L1, B7-H1, CD274) molecules, which inhibit antigen-preventing cell maturation and the T lymphocyte response, which could play a role in the context of immunity tolerance, particularly in pregnancy [44].
We observed that the serum concentrations of TNF-a, IL-1b, IL-6, and CXCL8 are diminished during pregnancy in healthy women, in accordance with an immunotolerant state [45]. We found that pandemic influenza infection in pregnant women resulted in remarkable increases of serum inflammatory cytokines, even higher than those observed in the ILI group. According to other reports, this inflammatory environment could explain the increase in abnormal pregnancy outcomes [6,[46][47][48]. In addition to the serum inflammatory milieu, higher concentrations of antiinflammatory cytokine IL-10 were found in the women in the PH1N1 group. This observation has been reported by others using experimental models or in studies of non-pregnant patients with severe pneumonia caused by H1N1 influenza infection [49]. To the best of our knowledge, this study is the first report of the inflammatory effects of H1N1 in pregnant women. The IL-10 increase could be a biological presentation similar to that of the compensatory anti-inflammatory response syndrome (CARS) in sepsis, in which the peripheral blood inflammatory milieu could be accompanied by anti-inflammatory molecules. This syndrome typically leads to a state of susceptibility to secondary infections [50][51][52] and is likely the cause of the increasing rates of secondary infections among influenza-infected pregnant women [53][54][55].
While serological levels of CXCL10/IP-10 were significantly higher (p,0.05) in ILI women than healthy pregnant women, was not significant between PH1N1 women and healthy pregnant women, indicatively that this chemokine is not restricted to pandemic H1N1pdm2009 virus as previously been reported by other, is more closely related to the recruitment and activation of T cells to mediate adaptive responses in tissues by any type of respiratory infection [56][57][58][59].
These results provide evidence for a differential inflammatory response against H1N1pdm2009 viral infection in pregnancy, which could explain how the clinical course differs from those of other influenza-like diseases in pregnant women. We suggest that CD69 could be used as a marker to discriminate between pandemic influenza and other respiratory diseases. The particular inflammatory milieu could explain the H1N1-associated adverse clinical outcomes that are more frequent in infected pregnant women. The effect of the increased inflammatory response of H1N1pdm2009 viral infections on perinatal complications or on the fetal or neonatal immune response in pregnancy should be explored. | 3,547.6 | 2014-09-25T00:00:00.000 | [
"Medicine",
"Biology"
] |
NQO1 and NQO2 Regulation of Humoral Immunity and Autoimmunity*
NAD(P)H:quinone oxidoreductase 1 (NQO1) and NRH:quinone oxidoreductase 2 (NQO2) are cytosolic enzymes that catalyze metabolic reduction of quinones and derivatives. NQO1-null and NQO2-null mice were generated that showed decreased lymphocytes in peripheral blood, myeloid hyperplasia, and increased sensitivity to skin carcinogenesis. In this report, we investigated the in vivo role of NQO1 and NQO2 in immune response and autoimmunity. Both NQO1-null and NQO2-null mice showed decreased B-cells in blood, lower germinal center response, altered B cell homing, and impaired primary and secondary immune responses. NQO1-null and NQO2-null mice also showed susceptibility to autoimmune disease as revealed by decreased apoptosis in thymocytes and pre-disposition to collagen-induced arthritis. Further experiments showed accumulation of NADH and NRH, cofactors for NQO1 and NQO2, indicating altered intracellular redox status. The studies also demonstrated decreased expression and lack of activation of immune-related factor NF-κB. Microarray analysis showed altered chemokines and chemokine receptors. These results suggest that the loss of NQO1 and NQO2 leads to altered intracellular redox status, decreased expression and activation of NF-κB, and altered chemokines. The results led to the conclusion that NQO1 and NQO2 are endogenous factors in the regulation of immune response and autoimmunity.
The roles of genetic factors in immune deficiency and autoimmune diseases have been recognized for decades (17). B cells play a key role in regulation of immune system. B cells produce antibodies, provide support to other mononuclear cells, and contribute directly to inflammatory pathways (18). Impaired B cell production, maturation, homing, and activation are known to lead to defective immune response (19). Dysfunctional immune response and impaired apoptosis in T cells have been implicated in many immunological abnormalities including autoimmune lymphoproliferative syndrome (18,20,21).
In this report, we investigated the in vivo role of NQO1 and NQO2 in immune response and autoimmunity. Both NQO1null mice as well as NQO2-null mice demonstrated lower B cells in peripheral blood, decreased germinal center response, altered B cell homing, and impaired antibody responses. NQO1-null and NQO2-null mice also showed increased susceptibility to autoimmune disease as revealed by decreased apoptosis in thymocyte and predisposition to collagen-induced arthritis. The loss of NQO1 and NQO2 led to accumulation of NADH and NRH that altered intracellular redox status. The studies also demonstrated decreased expression and lack of activation of NF-B and altered chemokines and chemokine receptors. These results suggest that the loss of NQO1 and NQO2 leads to altered intracellular redox status that results in decreased expression and lack of activation of NF-B. This leads to B-cell deficiency and alterations in the homing of B cells and impaired humoral immune response and autoimmunity. labeled anti-CD19, 2.5 l of PE-labeled anti-CD4, and 2.5 of l of CyChrom-labeled anti-CD8 antibodies gently vortexed and incubated on ice in the dark for 30 min. Red blood corpuscles were hemolyzed and fixed using Coulter Q-prep and analyzed using a Coulter EPICS XL-MCL flow cytometer. Femurs were cut at both ends. Bone marrow was flushed with sterile cold PBS. After two PBS washes, the cells were suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. Spleen cells were suspended in cold PBS using the rough surface of glass slides. Red blood cells were lysed using red blood cell lysis buffer containing 15.5 mM NH4Cl, 1 mM KHCO3, and 0.001 mM EDTA. Cells were suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. Thymocytes were obtained from the thymus and suspended in cold PBS using the rough surface of glass slides. Cells were then suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. 100 l of cell suspensions (bone marrow, spleen, or thymus) was added to the appropriate antibodies (1 l of annexin V-FITC, 2.5 l of PE-labeled anti-CD19, 2.5 l of FITC-labeled anti-CD43, 2.5 l of FITC-labeled anti-CD25, 2.5 l of FITC-labeled anti-IgD, 2.5 l of FITC-labeled anti-CD19, 2.5 l of PE-labeled anti-CD4, and 2.5 l of CyChrom-labeled anti-CD8 antibodies). Samples were gently vortexed and incubated on ice, in the dark, for 30 min. Samples were then fixed in 4% paraformaldehyde and then analyzed using Coulter EPICS XL-MCL flow cytometer.
Thymidine Incorporation Assay of Proliferation in the Bone Marrow and Spleen Cells-The wild-type, NQO1-null, and NQO2-null mice were sacrificed, and their femurs and spleen were obtained. The bones were cut, and marrow was flushed out gently with RPMI 10% fetal bovine serum with antibiotics. Cell samples were cultured in triplicates in 96-well plates for 48 h. 1 Ci of [H 3 ]thymidine was added to each well. Twentyfour hours later, cells were harvested on a glass fiber filter mat using Tomtec Harvester 96. Incorporated thymidine was measured by scintillation counter. Cell suspensions were prepared from the spleens in RPMI 10% fetal bovine serum with antibiotics. Cell samples were cultured in triplicates in 96-well plates. 0.5 g of Con A was added to each well. Forty-eight hours later, 1 Ci of [H 3 ]thymidine was added to each well. Twenty-four hours later, cells were harvested, and incorporated thymidine was measured.
Evaluation of Germinal Center Response-Eight-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with alum-precipitated 50 g of 4-hydroxy-3nitrophenyl acetate (NP) conjugated to chicken ␥-globulin (CGG). Twelve days after immunization, spleens were obtained. Each spleen was split in half. One half was frozen for histology (22). Sections were cut and then probed by immunohistochemistry using antibodies against germinal center B cells marker GL-7 by procedures as described (22). The other half was suspended in cold PBS using the rough surface of glass slides. Red blood cells were lysed using red blood cell lysis buffer containing 15.5 mM NH 4 Cl, 1 mM KHCO 3 , and 0.001 mM EDTA. Cells were suspended in cold PBS in 10 ϫ 10 6 cells/ml. 100 l of the spleen cell suspension was then added to 2.5 l of FITC-labeled anti-B220 and PE-labeled anti-GL-7. Samples were gently vortexed and incubated on ice, in the dark, for 30 min. Samples were then fixed in 4% paraformaldehyde and then analyzed using Coulter EPICS XL-MCL flow cytometer.
Primary and Secondary Immune Response Assessment-Eightweek-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 g of NP conjugated to CGG as described (22). Twelve days after immunization, spleen and bone marrow were obtained for primary immune response analysis. For another set of mice, 8 weeks after primary immunization, mice were injected intraperitoneally with 20 g of NP-CGG. Assays for serum immunoglobulin levels were performed at 0, 5, and 10 days after secondary immunization. On day 12, mice were sacrificed and analyzed for antibody-forming cells (AFC). NP-specific antibody-forming cells were quantitated by ELISPOT assay using nitrocellulose filters coated with NP-BSA 5:1 and 25:1. Labeled anti-IgM Ab and anti-IgG Ab were used to visualize NP-specific AFC.
Collagen-induced Arthritis Models-Eight-week-old wildtype, NQO1-null, and NQO2-null mice were immunized with chicken collagen II to induce arthritis as described (23). Incidences of arthritis were recorded. Clinical arthritis scores were calculated using a scale from 0 to 3 for each paw with a maximum score of 12 per mouse. Anti-collagen antibody titers were measured on days 21 and 42. On day 42, mice were euthanized. Sections in the paws were examined microscopically. NAD(P)H:NAD(P) and NRH:NR Ratio in Bone Marrow, Spleen, and Thymus-The following procedure was used to collect and process the tissue for determination of NAD(P): NAD(P)H and NRH:NR ratio because of the sensitivity of these molecules. The bone marrow, spleen, and thymus were surgically removed while the mice were under anesthesia to avoid changes in the levels of the pyridine nucleotides. The bone marrow was then instantly placed in liquid nitrogen (24). While frozen, the ends of the bone were cut using a surgical blade, and while the marrow was thawing, it was flushed by using the solution containing 200 mM KCN, 1 mM bathophenanthroline, and 60 mM KOH. The pyridines were extracted with chloroform and analyzed from these tissues by HPLC and procedures as described previously (4,25).
Electrophoretic Mobility Shift Assay-One million bone marrow cells were obtained from wild-type and NQO1-null mice and treated with 10 g/ml lipopolysaccharide (LPS). The nuclear extract was prepared, and an electrophoretic mobility shift assay was performed by procedures as described previously (26). The NF-B binding oligonucleotide sequence used was 5Ј-TTGTTACAAGGGACTTTCCGCTGGGGACTTTC-CAGGCAGGCGTGG-3Ј.
Microarray Analysis-We used Affymetrix GeneChip mouse expression set 430 and RNA from untreated mice bone marrow for microarray analysis. Three samples for each genotype (wild type, NQO1-null, and NQO2-null) were analyzed. Each sample included a pool of bone marrow cells from five mice. RNA samples were prepared from bone marrow cells using a Qiagen RNeasy kit. The excellent quality of the RNA samples was confirmed with an Agilent 2100 Bioanalyzer. Our core facility analyzed the samples and provided data to us. We used dChip 1.3 and GeneSpring software to analyze data. We categorized alterations in gene expression in several groups according to gene function using dChip 1.3. These categories included apoptotic genes such as p53, Bax, Bcl-2, caspase 2, caspase 3, caspase 8, apoptosis inhibitor 6, and C/EBP; interleukins, chemokines, and their receptors such as CXCR4, CCL9, CCR1, CXCL12, interferon ␥ receptor, interferon ␥-induced GTPase, interleukin 7, interleukin 10, and interleukin 10 receptor; and transcription regulation genes, DNA damage response genes, and DNA replication and metabolism genes. The results for chemokines and chemokine receptors are presented as fold increase or decrease.
Immunological Phenotype of NQO1-null and NQO2-null
Mice-Previously, we generated NQO1-null mice deficient in NQO1-and NQO2-null mice deficient in NQO2 (6, 27). An analysis of blood, bone marrow, and spleen from wild-type, NQO1-null, and NQO2-null mice was performed. Flow cytometry analysis of blood lymphocytes showed a decrease in the number of CD19ϩ B cells and an increase in CD4ϩ T cells (Fig. 1, A and B). However, NQO1-null and NQO2-null mice showed higher numbers of CD19ϩ cells in the bone marrow ( Fig. 1C; compare with A and B). To trace different stages of B cell development in the bone marrow, we analyzed bone marrow cells for CD19 and CD43 (for pro-B cells), CD19 and CD25 (for pre-B cells), and CD19 and IgD (for mature B cells). There was no difference between wild-type, NQO1-null, and NQO2null mice in the number of B cell progenitors, pro-B and pre-B cells (Fig. 1C). However, there was a significant increase in the Mice were euthanized, and bone marrow was collected from femurs. After two PBS washes, the cells were suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. To cells in separate tubes, we added PE-labeled anti-CD19 antibody; annexin V-FITC and PE-labeled anti-CD19 antibody; FITClabeled anti-CD43 and PE-labeled anti-CD19 antibody, FITC-labeled anti-CD25 and PE-labeled anti-CD19 antibody, FITC-labeled anti-IgD and PE-labeled anti-CD19 antibody. Assays for the determination of B cells, apoptosis in B cells, and B cell development were essentially performed as described by the manufacturer and measured using a Coulter EPICS XL-MCL flow cytometer. E and F, flow cytometry analysis of the spleen. Spleens were obtained from mice after euthanization. Spleen cells were suspended in cold PBS. Red blood cells were lysed. Cells were suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. 100 l of cell suspension was added to FITC-labeled anti-CD19, PE-labeled anti-CD4, and CyChrom-labeled anti-CD8 antibodies (E ) and to annexin V-FITC and PE-labeled anti-CD19 antibody (F ). Samples were gently vortexed and incubated on ice, in the dark, for 30 min. Assays for the determination of B cells, apoptosis in B cells, CD4ϩ T cells (helper), and CD8ϩ T cell (cytotoxic) were essentially performed as described by the manufacturer and measured using Coulter EPICS XL-MCL flow cytometer. G, proliferation of bone marrow and spleen cells. The mice were sacrificed, and their femurs and spleens were removed. Bone marrow and spleen cells were exposed to 1 Ci of [H 3 ]thymidine. Twenty-four hours later, cells were harvested, and incorporated thymidine was measured. OCTOBER 13, 2006 • VOLUME 281 • NUMBER 41 number of mature IgDϩ B cells in the bone marrow of NQO1null and NQO2-null mice (Fig. 1C). Apoptosis in bone marrow B cells was lower in NQO1-null and NQO2-null mice as compared with wild-type mice (Fig. 1D). Flow cytometry analysis of spleen lymphocytes showed no difference between wild-type, NQO1-null, and NQO2-null mice (Fig. 1E). In contrast to lower apoptosis in bone marrow B cells, the knock-out mice showed no difference in apoptosis of spleen B cells as compared with wild type (Fig. 1F).
NQO1 and NQO2 Regulate Immune Response and Autoimmunity
We then measured proliferation of bone marrow cells and spleen T cells using thymidine incorporation assay. The results are shown in Fig. 1G. NQO1-null bone marrow cells showed significantly higher proliferation rate than that of wild type ( p Ͻ 0.01). NQO2-null bone marrow does not proliferate significantly different from that of wild type. NQO1-null spleen cells showed significantly higher proliferation rate than that of wild type ( p Ͻ 0.001). NQO2-null spleen T cells proliferate slightly faster than that of wild type ( p Ͻ 0.05).
We observed that NQO1-null mice have bigger spleen than wild-type mice. We weighed the spleens of 10 wild-type, NQO1-null, and NQO2-null mice. NQO1-null mice spleen is slightly but significantly bigger than that of wild-type mice (wild type, 105 Ϯ 8, and NQO1-null, 134 Ϯ 10 mg; p Ͻ 0.05). No significant difference between NQO2-null and wild type in spleen weight was observed.
We analyzed the spleen cells from wild-type, NQO1-null, and NQO2-null mice for marginal zone B cells by flow cytometry after staining for marginal zone B cell markers CD23 and CD21 ( Fig. 2A). CD23ϩCD21ϩ marginal zone B cells showed a significant decrease in the spleen of NQO1-null mice ( Fig. 2A). CD23ϩCD21ϩ marginal zone B cells were also lower in NQO2-null mice as compared with wild type (Fig. 2A). In same experiment, CD21ϩ cells were found Spleens were obtained from wild-type, NQO1-null, and NQO2-null mice after euthanization. Spleen cells were suspended in cold PBS. Red blood cells were lysed. Cells were suspended in cold PBS to a concentration of 10 ϫ 10 6 cells/ml. 100 l of cell suspension was added to FITC-labeled anti-CD21 and PE-labeled anti-CD23 antibodies. Samples were gently vortexed and incubated on ice, in the dark, for 30 min. Assays for the determination of CD21ϩ and CD23ϩ splenic B cells were essentially performed as described by the manufacturer and measured using Coulter EPICS XL-MCL flow cytometer. B and C, germinal center (GC) response. Eight-weekold wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 g of NP conjugated to CGG. Twelve days after immunization, spleens were obtained. Germinal center response was evaluated by flow cytometry analysis for B cell marker B220 and germinal center marker Gl-7 and immunohistochemistry analysis using antibodies against germinal center marker GL-7.
increased in NQO1-null mice as compared with wild-type mice. NQO2-null mice also demonstrated an increase in CD21ϩ cells. However, the increase was significantly lower than NQO1-null mice. The significance of the increase in CD21ϩ cells remains unknown.
Humoral Immune Response in NQO1-null and NQO2-null Mice-To assess the humoral immune response in NQO1-null mice and NQO2-null mice, we started by evaluating the germinal center response. Eight-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 g of alum-precipitated NP conjugated to CGG (NP-CGG). Twelve days after immunization, spleen cells and tissue sections were analyzed for germinal center response. Both flow cytometry and immunohistochemistry analysis showed significantly lower germinal center response in NQO1-null mice ( p Ͻ 0.01) (Fig. 2, B and C). Some decrease, but not statistically significant, was observed in NQO2-null mice (Fig. 2, B and C).
For more thorough assessment of humoral immune response, we measured primary and secondary antigen-specific antibody response in wild-type, NQO1-null, and NQO2-null mice. Eight-FIGURE 3. Primary and secondary humoral immune response. A and C, primary humoral immune response. Eight-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with NP conjugated to CGG. Twelve days after immunization, spleen and bone marrow were obtained for primary immune response analysis. NP-specific AFC were quantitated by ELISPOT assay using nitrocellulose filters coated with NP-BSA 5:1 and 25:1. Labeled anti-IgM Ab and anti-IgG Ab were used to visualize NP-specific AFC. B and D, secondary humoral immune response. Seven-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 g of NP conjugated to CGG. Eight weeks after primary immunization, mice were injected intraperitoneally with 20 g of NP-CGG. On day 12, mice were sacrificed and analyzed for AFC. NP-specific AFC were quantitated by ELISPOT assay using nitrocellulose filters coated with NP-BSA 5:1 and 25:1. Labeled anti-IgM Ab and anti-IgG Ab were used to visualize NP-specific AFC. . Serum analysis for antigen-specific IgG after primary and secondary immunization. Eight-week-old wild-type, NQO1-null, and NQO2null mice were injected intraperitoneally with 50 g of NP conjugated to CGG. Eight weeks after primary immunization, mice were injected intraperitoneally with 20 g of NP-CGG. Assays for serum immunoglobulin levels were performed at 0, 5, and 10 days after secondary immunization. OCTOBER 13, 2006 • VOLUME 281 • NUMBER 41
JOURNAL OF BIOLOGICAL CHEMISTRY 30921
week-old wild-type mice were immunized with NP-CGG as described earlier. Twelve days after immunization, spleen and bone marrow were obtained for primary immune response analysis. For another set of mice, 8 weeks after primary immunization, mice were injected intraperitoneally with 20 g of soluble NP-CGG. Twelve days later, mice were sacrificed and analyzed for secondary immune response. Assessment of immune response was done by measuring the number of AFC. Labeled anti-IgM antibody and anti-IgG1 antibody were used to visualize NP-specific AFC. Both NQO1-null and NQO2-null mice (especially NQO1-null) showed weaker primary and secondary immune response (Fig. 3). Serum levels of NP-specific IgG were measured on days 0, 5, and 10 after the secondary immunization. NP-specific IgG were lower in NQO1-null and NQO2-null mice (Fig. 4).
Autoimmunity in NQO1-null and NQO2-null Mice-Decreased apoptosis in the thymus and bone marrow cells of NQO1-null and NQO2-null mice and increased spleen T cell proliferation in NQO1-null mice pointed to the possibility of higher susceptibility to autoimmune disease. We used the collagen-induced arthritis model to assess the susceptibility of NQO1-null and NQO2-null mice to autoimmunity. We found that NQO1-null mice developed arthritis earlier and for a longer duration than wild-type mice (Fig. 5A). Arthritis in NQO1null mice was more severe than that in wild-type mice. This is reflected as a higher arthritis clinical score (Fig. 5B). There was no significant difference between NQO2-null and wild-type mice in the onset and severity of arthritis (Fig. 5, A and B). However, arthritis lasted longer in NQO2-null mice as compared with wild type (Fig. 5, A and B).
Alteration in Intracellular Redox Status, Decreased Expression, and Activation of NF-B and Altered
Chemokines and Chemokine Receptors-The analysis of bone marrow NADH, NAD, NRH, and NR showed a significant increase in NADH:NAD ratio in NQO1-null and NRH:NR ratio in NQO2-null mice (Fig. 6A). Similar analysis also showed a significant increase in NADPH:NADP ratio in NQO1-null mice as compared with wild-type mice (data not shown). We used electrophoretic mobility shift assay to investigate the expression and LPS activation of NF-B in wildtype, NQO1-null, and NQO2-null mice. The results are presented in Fig. 6B. Decreased NF-B binding to DNA was observed with bone marrow nuclear extract from NQO1null mice as compared with wild-type mice (compare lanes 3 and 5). LPS treatment demonstrated a significant increase in NF-B binding to DNA in wild-type mice (compare lanes 3 and 4). The LPS-mediated activation of NF-B binding was more or less not observed in NQO1-null mice (compare lanes 5 and 6). The decreased binding of NF-B and lack of LPS activation of NF-B binding to DNA was also observed in NQO2-null mice (data not shown). However, the magnitude of difference was lower in NQO2-null than in NQO1null mice. The results on the increase in NADH:NAD and NRH:NR ratios and lower NF-B binding and lack of LPS activation of NF-B binding to DNA were also observed in spleen and thymus of NQO1-null and NQO2-null mice (data not shown). Microarray analysis of bone marrow from untreated wild-type, NQO1-null, and NQO2-null mice were performed. The microarray analysis revealed alterations in chemokines and chemokine receptors associated with the loss of NQO1 and NQO2 in respective null mice (Fig. 6C). Specially noted were chemokine (CXC motif) ligand 12, receptor 4, and receptor 1. Interestingly, the alterations noted were of lower FIGURE 5. Collagen-induced arthritis model and apoptosis in thymocytes. A and B, collagen-induced arthritis. wild-type, NQO1-null, and NQO2-null mice were injected intradermally at the base of the tail with 200 g of chicken collagen II emulsified in complete Freund's adjuvant. Twenty-one days later, mice were injected with 100 g of collagen II in complete Freund's adjuvant. Mice were observed for signs of arthritis. Incidence of arthritis was recorded (A). Clinical arthritis scores were calculated using a scale from 0 to 3 for each paw with a maximum score of 12 per mouse (B). C and D, flow cytometry analysis of the thymus. Thymocytes were obtained from wild-type, NQO1-null, and NQO2-null mice after euthanization. Thymocytes were suspended in cold PBS. Cells were then suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. 100 l of cell suspension was added to 2.5 l of PE-labeled anti-CD4 and 2.5 l of CyChrom-labeled anti-CD8 antibodies (C) and 1 l of annexin V-FITC (D). Samples were gently vortexed and incubated on ice, in the dark, for 30 min. CD4ϩCD8Ϫ T cells, CD8ϩCD4Ϫ T cell, CD4ϩCD8ϩ T cells, and apoptosis in thymocytes were measured using Coulter EPICS XL-MCL flow cytometer. magnitude in NQO2-null mice as compared with NQO1-null mice.
DISCUSSION
The experiments in this study, for the first time, establish a physiological role of NQO1 and NQO2 in control of immune response and autoimmunity. The NQO1-null and NQO2-null mice showed impaired humoral immune response. The magnitude of impairment in humoral immune response was of higher magnitude in NQO1-null mice than NQO2-null mice. Phenotypic analysis of NQO1-null and NQO2-null mice showed a decrease in the number of B cells in the blood and an increase in the bone marrow, whereas no change in B cell number was observed in the spleen. Further investigations revealed that mature B cells and not pro-and pre-B cells increased in bone marrow of NQO1-null and NQO2-null mice. Since the maturation of B cells to IgD positive mature B cells takes place in the peripheral lymphoid organs, spleen, and lymph nodes (not in the bone marrow), the increase in IgD positive mature B cells in the bone marrow most likely resulted due to the increased homing of mature B cells into the bone marrow. Microarray analysis of the bone marrow showed increased chemokine (CXC motif) receptor 4 (CXCr4) and chemokine (CXC motif) ligand 12 (CXCL12), 20% in NQO1-null and 5% in NQO2-null mice. CXCr4 and CXCL12 are the chemokine receptor and ligand involved in the homing of B cells to the bone marrow (28).
NQO1-null and NQO2-null mice (especially NQO1-null) showed weaker primary and secondary antibody response. This was demonstrated by the lower number of NP-specific AFC in the spleen and bone marrow and lower serum NP-specific IgG after primary and secondary immunization with NP-CGG. Germinal center response was also weaker, especially in NQO1-null mice. All these observations led to the conclusion that the loss of NQO1 or NQO2 (NQO1 more than NQO2) resulted in impaired humoral immune response, suggesting that NQO1 and NQO2 are significant endogenous factors in regulation and proper functioning of immune response.
T helper cells are major perpetrators in autoimmunity (29). The decreased apoptosis has been linked to autoimmunity (30). Mutations in genes that regulate apoptosis, such as Fas, FasL, caspase 10, and caspase 8, result in higher susceptibility to autoimmune diseases (20,21,31,33,34). Flow cytometric analysis of thymus showed lower apoptosis in NQO1-null and NQO2-null thymocytes as compared with wild-type mice. The thymus is a main place for T cell tolerance. T cell tolerance includes elimination of autoreactive T cells (negative selection), mostly by apoptosis (29). Lower apoptosis in the thymus of NQO1-null and NQO2-null mice might compromise T cell tolerance. This might have allowed autoreactive T cells to escape negative selection and thus increased susceptibility to develop autoimmune disease. These findings together pointed toward the possibility of increased susceptibility in NQO1-null or NQO2-null mice to develop autoimmunity. Indeed, NQO1-null mice developed arthritis earlier and for a longer duration than wild-type mice in collagen-induced arthritis model. NQO2-null mice did not show significant difference in autoimmunity from wild-type mice in the same experiment. The higher sensitivity of NQO1null mice to induce arthritis could be due to impaired T cell tolerance in the lymphoid organs as a result of decreased apoptosis and increased proliferation of T cells. This is supported by the lower apoptosis seen in the thymus and bone marrow cells of NQO1-null mice and the increased proliferation in bone marrow cells and splenic T cells. The decreased apoptosis and increased proliferation in the lymphoid organs of NQO2-null mice were milder than that in NQO1-null mice.
The above observations raised an interesting question regarding the mechanism of NQO1 and NQO2 regulation of immune response and autoimmunity. The loss of NQO1 and NQO2 led to alterations in intracellular redox status. This was due to accumulation of reduced NADH in NQO1-null mice and NRH in NQO2-null mice. Alterations in the redox status of the cells presumably changed transcription and/or modifica-FIGURE 6. NQO1 relationship to immune response and autoimmunity. A, NAD(P)H:NAD(P) ratio in bone marrow of wild-type, NQO1-null, and NQO2-null mice. Femurs were surgically removed, and pyridines were extracted with chloroform and analyzed by HPLC procedures as described under "Materials and Methods." The data are shown only for NADH/NAD. B, electrophoretic mobility shift assay. One million bone marrow cells were untreated or treated with 10 g/ml LPS for 30 min. Nuclear extracts were prepared, and NF-B binding was analyzed by electrophoretic mobility shift assay. Only the shifted bands are shown. C, microarray analysis. RNA from wild-type, NQO1-null, and NQO2-null mice bone marrow were used to perform microarray analysis. The differences in selected chemokines and receptors are listed as compared with wild-type mice.
tion of factors including the loss of expression and lack of LPS activation of NF-B and alterations in chemokines (including CXCr4 and CXCL12). The redox modulation of NF-B and chemokine/receptor is reported earlier (35). LPS has been shown to cause apoptosis in B cells by activating NF-B (36), CD4ϩCD8ϩ thymocytes, and lymphoid organs (32). Failure of activation of NF-B might have contributed to reduced apoptosis in thymocytes in NQO1-null and NQO2-null mice. The alterations in intracellular redox status combined with lower expression and lack of activation of NF-B might have altered the homing of B cells and reduced antibody responses. The changes in B cells were translated to decreased primary and secondary immune response. Decreased apoptosis and increased proliferation of thymocytes contributed to autoimmunity in NQO1-null mice.
The results on impaired immune response and autoimmunity in NQO1-null and NQO2-null mice are extremely significant and have major impact on human health. This is since 2-4% of human individuals are homozygous for C 3 T mutation in the NQO1 gene, leading to proline to serine substitution, and totally lack the NQO1 protein (10,11). In addition, greater than 20% individuals are heterozygous and carry one mutated NQO1 allele. These individuals lack 50% NQO1 protein. The NQO1 homozygous, and heterozygous mutant individuals are expected to have impaired immune response and are at risk for autoimmune diseases. This is the first study toward genotyping human individuals for lack of NQO1 and problems associated with impaired immune response and autoimmunity.
In conclusion, NQO1 and NQO2 are important endogenous factors in regulation of immune response and autoimmunity. The loss of NQO1 or NQO2 (especially NQO1) results in impaired humoral immune response and higher susceptibility to autoimmune diseases. The alterations in intracellular redox status due to the loss of NQO1 and NQO2 presumably led to changes in expression and induction of factors including NF-B, chemokines, and chemokine receptors. These changes resulted in altered B cell homing, reduced B cell response, and decreased apoptosis of thymocytes. These changes led to compromised immune response and autoimmunity. The detailed mechanisms of the role of NQO1 and NQO2 role in regulation of immune response and autoimmunity await future investigations. | 6,570.2 | 2006-10-13T00:00:00.000 | [
"Biology",
"Medicine"
] |
Priority Enabled Grant-Free Access With Dynamic Slot Allocation for Heterogeneous mMTC Traffic in 5G NR Networks
Although grant-based mechanisms have been a predominant approach for wireless access for years, the additional latency required for initial handshake message exchange and the extra control overhead for packet transmissions have stimulated the emergence of grant-free (GF) transmission. GF access provides a promising mechanism for carrying low and moderate traffic with small data and fits especially well for massive machine type communications (mMTC) applications. Despite a surge of interest in GF access, how to handle heterogeneous mMTC traffic based on GF mechanisms has not been investigated in depth. In this paper, we propose a priority enabled GF access scheme which performs dynamic slot allocation in each 5G new radio subframe to devices with different priority levels on a subframe-by-subframe basis. While high priority traffic has access privilege for slot occupancy, the remaining slots in the same subframe will be allocated to low priority traffic. To evaluate the performance of the proposed scheme, we develop a two-dimensional Markov chain model which integrates these two types of traffic via a pseudo-aggregated process. Furthermore, the model is validated through simulations and the performance of the scheme is evaluated both analytically and by simulations and compared with two other GF access schemes.
S IMULTANEOUS packet transmissions over the same radio resource cause performance deterioration for wireless access due to potential collisions among transmissions from competing devices. In fourth generation (4G) cellular networks, i.e., long term evolution-advanced (LTE-A), this problem was primarily addressed using grant-based (GB) communications. For GB channel access, a device follows a four-step handshake procedure for initial access with an evolved nodeB (eNB) by first transmitting a preamble before it obtains a grant for its data packet transmission. Once access is granted by the eNB, a data packet can be successfully transmitted without collision under ideal channel conditions. The initial preamble transmission, however, is still subject to collision(s) and could require multiple transmissions depending on traffic load and the availability of preamble resources at the eNB.
In LTE-A, the time required for initial four-step handshaking, which occurs prior to a data transmission, is in the order of 15 ms [1]. This is not a major concern since many 4G applications do not have stringent low latency requirements. In emerging fifth generation (5G) networks specified by the 3rd generation partnership project (3GPP), however, a variety of applications necessitate novel approaches for ultra-reliable low latency communications (URLLC) and massive machine type communications (mMTC). For small data transmissions which are common for mMTC traffic, the amount of control overhead required before an actual data transmission in GB schemes is too high with respect to the actual data to be transmitted and the handshake procedure lasts too long [2].
Although GB initial access is still kept as a legacy mechanism in 5G new radio (NR) networks, to perform such a four-step initial access procedure requires extra delay and protocol overhead [3], [4]. As an alternative to reduce overall latency, another category of mechanisms for data transmission, known as grant-free (GF), configured grant, or without grant, has emerged [4], [5]. Different from the GB principle, devices in GF communications transmit their data packets together with (or without using specific) control messages directly to a 5G NR nodeB (gNB) in available GF slots without requiring the initial access procedure. In other words, no dedicated preamble transmission for granting access and allocating radio resources is required for GF communications before starting a data packet transmission [3]. The benefits brought by this principle in terms of shortened delay and reduced protocol overhead make GF mechanisms attractive for various applications with URLLC/mMTC requirements and small data packets [1].
For periodic or deterministic traffic, a gNB can allocate dedicated slots to devices for their data transmissions. However, such a mechanism will lead to resource underutilization and This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ long delay when traffic load is low or sporadic which is the case for many mMTC applications. Due to the unpredictability of sporadic traffic arrival patterns, it is beneficial to apply a random access protocol for GF data transmissions based on the principles of ALOHA or slotted ALOHA [6]. Furthermore, GF transmissions are generally recommended for small data transmission with a low or moderate level of traffic arrivals [7], [8].
A. Related Work 1) GF Communications: While GF is a more popular terminology favored by the research community, similar mechanisms are commonly referred to as configured grant or without grant in 3GPP specifications [4], [5], [9], [10]. In brief, existing GF based transmission schemes can be classified into four major categories, as summarized below. (i) GF reactive: A device needs to send its GF transmission and wait for an acknowledgment (ACK) or a negative ACK (NACK) from the gNB. If no ACK is received within the ACK timeout, or a NACK is received, the same packet will be retransmitted up to a retry limit; (ii) GF reactive with power boost: In order to increase successful reception probability, the transmit power of each retransmission could be higher than that of the previous unsuccessful transmission; (iii) K repetitions without feedback: A device transmits proactively K > 1 replicas of the same data packet across different GF slots in the same subframe [9]; and (iv) K repetitions with feedback: Similar to (iii), but it requires feedback from a gNB regarding its transmission status. Accordingly, a device will stop its transmission attempt once an ACK is received. Furthermore, the 3GPP states clearly that at least an uplink transmission scheme without grant is supported for URLLC and an uplink transmission scheme without grant is targeted to be supported for mMTC [5].
On the other hand, recent academic efforts foresee the feasibility of facilitating multi-packet reception by applying more advanced technologies for instance non-orthogonal multiple access (NOMA) and multiple-input multiple-output (MIMO) to GF transmissions. By treating collisions as interference through successive joint decoding or successive interference cancellation (SIC), [11] derived expressions for outage probability and throughput for GF-NOMA transmissions. In [12], a semi-GF scheme which provides dedicated GB access for one user while facilitating the other users with GF opportunistic access was proposed. Another recent work investigated the suitability of applying non-orthogonal sequences for abbreviating preamble collisions for GF transmissions and concluded that such sequences did not necessarily lead to better performance than the orthogonal ones [13]. In general, GF access exhibits the characteristic of slotted ALOHA-alike access mechanisms as presented below.
2) Slotted, Framed Slotted, and SIC-Enabled Slotted ALOHA for MTC Access: Depending on multi-packet reception is enabled or not, numerous variants of ALOHA-alike protocols, including framed slotted ALOHA (FSA), multi-channel slotted ALOHA, and SIC-enabled slotted ALOHA play an important role for medium access in mMTC [2], [14].
Based on the requirements for mMTC applications and design principles, FSA can be operated with either fixed or flexible frame length [15]. On the other hand, channels in multi-channel slotted ALOHA regard to different kinds of orthogonal resources such as codes or preambles which are used in the same, for instance time slot, during the initial access procedure. Using different orthogonal resources, multiple devices can access to a common channel simultaneously [1]. However, the amount of resources is still limited. For random access of mMTC traffic without multi-packet reception capability, a collision happens if two or more devices select the same preamble for their initial access or transmit their packets simultaneously in the same slot. More recent work intends to resolve collision following the principle of SIC through coded slotted ALOHA, e.g., in the form of frameless ALOHA [16].
Furthermore, priority oriented schemes in FSA have been studied previously. In [17], a pseudo-Bayesian ALOHA algorithm with mixed priorities was proposed. Similar to the pseudo-Bayesian ALOHA scheme presented in [18], the algorithm proposed in [17] allows multiple independent Poisson traffic streams compete for a slot or a batch of slots in a frame each with an assigned transmission probability. Following the idea on resource sharing, an adaptive framed pseudo-Bayesian ALOHA algorithm was proposed in [19].
Considering that the subframe length in NR is constant as 1 ms regardless of the adopted NR numerology [20], we adopt a fixed subframe length for our scheme design. Furthermore, since no dedicated preamble for initiating access and resource allocation is needed for GF transmissions, the access scheme proposed below in Sec. III allows devices transmit their packets directly to the associated gNB in the allocated GF slots. The scheme is designed upon the FSA principle but is based on the NR frame structure to be presented in Subsec. II-A.
B. Contributions
So far, little work has been done considering GF access for heterogeneous mMTC traffic. In this paper, we consider heterogeneous GF traffic arrivals with different reliability and/or latency requirements and propose a novel GF based access and data transmission scheme with dynamic slot allocation (DSA) in each NR subframe. Hereafter, the scheme is referred to as DSA-GF which stands for DSA for GF based access for heterogeneous traffic. Targeting at providing better performance to high priority traffic (HPT), the scheme accommodates the remaining slots in the same subframe to low priority traffic (LPT) so that higher total slot utilization is achieved.
In contrast to most existing work which generally neglected slot based GF transmissions and slot utilization, this paper targets at 5G NR numerology as the basis for our scheme design and intends to maximize slot utilization for heterogeneous traffic integrated with priority enabled access. Through dynamic slot allocation, the dependence of two types of GF traffic is handled and modeled through a pseudo-aggregated process where both traffic types share available slots in each subframe and slot allocation to HPT is independent of that of LPT.
In brief, the main contributions of this paper are summarized as follows.
• Based on the NR frame structure, a novel GF based data transmission scheme, DSA-GF, which considers arrivals of heterogeneous GF traffic is proposed. The scheme performs traffic estimation, access control, and dynamic slot allocation on a subframe-by-subframe basis. Based on our scheme, both HPT access privilege and LPT resource preservation are achieved and they are bound together smoothly in each subframe. • To evaluate the performance of the proposed scheme, a two-dimensional (2D) Markov chain model, in which a pseudo-aggregated process is defined to link two types of GF traffic by considering their coherence for slot allocation in a common subframe, has been developed. For a network with the same configuration, the number of states in our model is much less than what is needed in conventional Markov models. • Extensive discrete-event based simulations have been performed to validate the preciseness of the developed model and assess the performance of the DSA-GF scheme. Through performance assessment under various HPT/LPT traffic variations and comparison with two other GF schemes, the effectiveness of the proposed scheme is further demonstrated. In a nutshell, the uniqueness and novelty of our paper are reflected by the fact that this work is anchored at a niche with an intersection among 5G NR numerology, traffic estimation based dynamic slot allocation, proper handling of heterogeneous traffic considering the performance of both HPT and LPT, and pseudo-aggregated 2D Markov chain modeling for heterogeneous traffic. To the best of our knowledge, this is the first attempt which is dedicated for 5G NR numerology based GF transmission with dynamic slot allocation at the subframe level for heterogeneous traffic, combined with a Markov model with a significantly reduced state space bridging both types of traffic together for performance evaluation.
Furthermore, it is worth mentioning that the 3GPP has newly decided to discontinue NOMA as a work-item for 5G NR but leave it as a study-item for beyond 5G [21]. Under such a circumstance, the importance of investigating viable GF schemes based on the existing NR frame structure remains significant and it becomes even an imperative task as such schemes may serve as the basis or at least references for NOMA based GF scheme design.
The remainder of the paper is organized as follows. Sec. II provides preliminaries on NR numerology and presents the network scenario. In Sec. III, the proposed scheme is explained in details. Then we develop a 2D Markov model in Sec. IV to analyze its performance. Thereafter, Sec. V illustrates the numerical results. Finally, the paper is concluded in Sec. VI.
II. PRELIMINARIES, SCENARIO AND ASSUMPTIONS
This section presents the NR frame structure which forms the basis for our scheme design and outlines the scenario.
A. 5G NR Frame Structure and Numerologies
With 15 kHz as a baseline for subcarriers as used in 4G, 5G NR defines five numerologies based on subcarrier spacing Δf = 2 β * 15 kHz, where β = 0, 1, . . . , 4 is the numerology index, with different slot duration lengths downwards from 1 ms to 62.5 μs [20], [22]. As depicted in Fig. 1, the per frame duration in NR is still 10 ms, and the same as in LTE-A, one frame consists of 10 subframes each with 1 ms duration. Moreover, one NR subframe may have one (for β = 0) or multiple (up to 16) slots depending on the value of the numerology index β.
Depending on the size of a packet, one or multiple orthogonal frequency division multiplexing (OFDM) symbols out of the available 14 symbols within a slot can be utilized by GF traffic [9], [10]. Considering that GF transmissions are targeted at small data packets in mMTC networks [24], we assume in this study that a packet with the size of less than 14 OFDM symbols is sufficient for one GF packet transmission. The remaining symbol(s) within the same slot can be allocated to other data traffic (for instance GB transmissions) and control information exchange as NR allows flexible uplink and downlink scheduling at a symbol level within one NR slot [20]. As such, all slots in a subframe can be utilized for GF data transmissions.
B. Scenario and Traffic Arrivals
Consider a scenario where an NR cell covers a large number of mMTC devices. Although both GF and GB devices may coexist, this study focuses only on GF data transmissions. More specially, GF data transmissions considered in this study are performed in each subframe following the DSA-GF scheme presented in the next section. A device is regarded as active if it has one packet ready to transmit. The transmission of a device is regarded as successful if no other device transmits in the same slot and it is confirmed through an ACK message provided at the end of each subframe. If two or more devices transmit in the same slot, a collision occurs and all involved transmissions are considered to be failed. If a device does not obtain a transmission opportunity due to the constraint of the permission probability in the current subframe or its transmission in the current subframe collided, it will try again in the next subframe based on a new permission probability broadcast by the gNB right before the next subframe begins.
Although the total number of mMTC devices covered by a cell could be huge [25], they generate typically sporadic traffic with small packet sizes. Therefore, the number of arrivals per subframe, i.e., within 1 ms, is rather limited. To reflect this, we adopt a combination of number of devices and activation probability as an indicator to represent offered traffic.
Without loss of generality, we consider numerology β = 3 as an example in most figures and descriptions in this paper. Later on in Subsec. V-F, we further demonstrate the applicability of the scheme to two other numerologies, i.e., β = 2 and β = 4 which have 4 and 16 slots per subframe respectively.
Two categories of traffic arrivals are considered, known as HPT and LPT respectively. While HPT requires superior performance, LPT can tolerate longer access delay and higher packet loss. For slot allocation in each subframe, HPT has access privilege over its counterpart, i.e., LPT. In the considered cell covered by one gNB, there are a finite number of HPT and LPT devices, denoted by M x with x = 1 for HPT and x = 2 for LPT, respectively. The arrival process for both categories follows a Bernoulli process. That is, each device generates one data packet per subframe with activation probability a x . This assumption means that each device has at most one packet ready to transmit at each subframe. Furthermore, we assume that the ACK message transmission from the gNB is always successful. No channel impairment is considered in this study and propagation delay is regarded to be negligible compared with access delay.
III. PROPOSED TRANSMISSION SCHEME FOR GF TRAFFIC
The DSA-GF scheme focuses on the NR frame structure and features the flavors of both 4G and 5G access mechanisms such as access class barring and unified access control [1], [23], imposing different permission probabilities to heterogeneous types of traffic. It is operated on a subframe-by-subframe basis. First of all, an observation-based slot allocation algorithm assigns an optimal number of slots to serve HPT transmissions in order to achieve maximum throughput, low access delay and reduced packet loss probability. In the meantime, the algorithm takes into account the performance of LPT through slot preservation to LPT in order to avoid starvation of LPT. To do so, the maximal number of slots to be allocated to HPT is restricted to the total number of slots per subframe minus one (for β = 2 and 3) or two (for β = 4). Then the remaining slots in the same subframe will be allocated to LPT. For a given subframe, the more slots allocated to HPT, the less slots assigned to LPT.
For a given numerology, the total number of slots per subframe, denoted by U , is a constant and it is decided by the NR frame structure presented above. Inspired by the pseudo-Bayesian broadcast algorithm for slotted ALOHA proposed in [18], we develop a novel random access scheme for NR based GF transmissions as presented below. While the algorithm in [18] targeted at slotted ALOHA with a single slot, the protocol designed in this paper is tailored to operations where multiple slots together form one subframe, taking into account the NR frame structure. A list of notations used in this paper and their explanations can be found in Tab. I.
A. Transmission Principles of DSA-GF
At the beginning of each subframe, the gNB provides to all devices through a broadcast message with the permission probability for each type of traffic, denoted as p x where x = 1 for HPT and x = 2 for LPT respectively. With probability p x , each active device randomly selects one of the allocated slots to type x within the current subframe to transmit its packet. With probability 1 − p x , the device postpones its transmission to the next subframe. The permission probability is updated for each subframe based on two ingredients.
For each type, the gNB first observes each slot of the current subframe and counts the number of holes h (a slot that is not occupied by any transmission(s) is referred to as a hole), successes s (a slot with a single packet transmission), and collisions c (a slot with more than one packet transmissions). Then, it proceeds to estimate the number of packets involved in the transmissions of the current subframe.
Second, the gNB estimates the new arrivals of type x during the current subframe, which together with the backlogged devices will attempt to transmit their packets with an updated permission probability in the next subframe. Backlogged devices are those devices that postpone their transmission in the current subframe due to the imposed permission probability plus those devices that were involved in collisions. Furthermore, active devices comprise both backlogged devices and new arrivals. In the next subframe, all active devices will attempt to transmit following the permission probability for each traffic type (details are given in the next subsection).
In DSA-GF, new arrivals follow the immediate first transmission (IFT) principle. By IFT, it is meant that any just-arrived packet in the current subframe will be potentially transmitted in the next immediately available subframe according to the updated permission probability provided by the gNB. Upon the successful reception of a packet transmission, immediate feedback is performed. The operation of the DSA-GF scheme is illustrated in Fig. 2.
B. Detailed Access Procedure for Heterogeneous Traffic
Within each subframe, there are a total number of U slots shared by both streams, one from each type of devices.
Let m x denote the number of slots allocated to the HPT (x = 1) and LPT (x = 2) flows respectively. We have m 1 + m 2 = U . The same notations for subscripts apply to other expressions throughout the context. Denote by u 1,min (u 1,max ) the minimum (maximum) number of slots that can be allocated to HPT at any subframe, in such a way that our scheme reserves at least one slot per subframe for LPT so that no starvation happens to LPT regardless of HPT traffic intensity. In what follows, we present how m x and p x are updated from subframe to subframe.
During each subframe, the gNB observes what happened in each slot. Let (h, s, c) x,t denote the number of holes, successes and collided slots, respectively, observed for type x during subframe t. Obviously, we have h x,t + s x,t + c x,t = m x,t with m 1,t + m 2,t = U . Furthermore, letλ x,t be the estimation of new arrivals assessed by the gNB, i.e., the estimated number of devices that have generated a packet during subframe t. Then, the (m, p) x,t → (m, p) x,t+1 update is performed according to the three steps presented below.
Step 1: Update the estimated number of active devices for HPT and LPT, at the end of subframe t.
• First, for x = 1, 2, based on the observations (h, s, c) x,t and the estimated number of active devices at the beginning of subframe t, w x,t , the gNB estimates the number of backlogged devices at the end of this subframe t,ŵ x,t+1 . For that purpose, we extend Rivest's pseudo-Bayesian broadcast control algorithm [18] to data transmissions with multiple slots in each subframe so that In this expression, 1.3922 × c x,t represents an increment in the estimated number of collided packets which will attempt to transmit again following the rule given in Step 3, and h x,t + s x,t represent the idle slots plus successful transmissions that will not retransmit in the next subframe. • Second, for x = 1, 2, the gNB estimates the number of new arrivals during subframe t,λ x,t . Considering a network in the steady state where the offered traffic and the carried traffic reach an equilibrium, we setλ x,t equal to s x,t , the number of successes at subframe t. A more elaborated estimator of the arrival process is however out of the focus of this paper. • Third, the total number of active devices at the end of subframe t ready for transmission at subframe t+1 is the sum ofŵ x,t+1 andλ x,t . Since w x,t+1 cannot be negative, we set w x,t+1 = max(ŵ x,t+1 , 0) +λ x,t . Note that w x,t+1 can be any non-negative real number.
Step 2: Update the number of slots to be allocated in the next subframe t + 1.
• To give higher priority to HPT, we first allocate m 1,t+1 = max u 1,min , min( w 1,t+1 , u 1,max ) and then configure m 2,t+1 = U − m 1,t+1 . That is, the capacity that is not assigned to HPT will be allocated to LPT. In the above expression, a ceiling function is introduced considering that m x,t+1 is an integer number such as u x,min ≤ m x,t+1 ≤ u x,max .
Step 3:Update the permission probabilities for subframe t+1 for each type of traffic.
That is, when the estimated number of active devices, w x,t+1 , is greater than the number of allocated slots, m x,t+1 , the assigned permission probability becomes less than 1. Otherwise, it is 1. Note that for each type of traffic the same permission probability applies to all active devices.
IV. DISCRETE-TIME MARKOV MODEL FOR DSA-GF
To evaluate the performance of the proposed DSA-GF scheme for heterogeneous GF traffic, we develop a 2D Markov model which integrates HPT and LPT through a pseudo-aggregated process. During each subframe, every device generates one data packet with probability a x according to a Bernoulli process. For packet buffering, a packet rejection mechanism is adopted meaning that a packet is rejected when it arrives at a device and finds the buffer full [26].
A. Building a Discrete-Time Markov Model
Thanks to the memoryless property of the arrival processes, we can build a discrete-time Markov chain for the presented DSA-GF scheme. For this purpose, let us observe the system at the border of two consecutive subframes, e.g., at the time instant when subframe t−1 ends (or subframe t begins), where t ∈ Z (Z is the set of integer numbers). Subframe by subframe, these time instants are regarded as the transition instants in the developed Markov model defined by a set of three random variables for each type of traffic. For traffic type x where x = 1 (HPT) or x = 2 (LPT), let W x,t be the random variable (r.v.) representing the number of active devices estimated by the gNB at transition instant t, U x,t be the r.v representing the number of slots in subframe t allocated to traffic type x, and N x,t be the r.v. representing the actual number of active devices (new arrivals plus backlogged devices) ready for transmission in subframe t.
The transition probabilities of the Markov chain, in a compact format, are as follow.
where (W , U , N ) It is worth mentioning that the Markov chain defined in (1) entails high complexity. In what follows, we opt a lightweight and consistent procedure which consists of the following three phases. 1) Subsec. IV-B performs the analysis of HPT since its behavior is independent of that of LPT; 2) Subsec. IV-C builds a pseudo-aggregated process which takes into account the correlation or dependence between these two types of traffic for slot allocation in the same subframe; and 3) Subsec. IV-D presents the performance of LPT.
B. The Analysis of High Priority Traffic 1) Modeling the HPT Process: Consider that a total number of M 1 devices generate data packets according to a Bernoulli process with probability a 1 . Clearly, a Markov chain can be built at the transition instants as defined above. Using (1) and omitting the random variables related to the notations of LPT, we have the corresponding transition probabilities, i.e., In (2), the following short notations are used: (w, m, n) 1,t ≡ (μ, u, i) and (w, m, n) 1,t+1 ≡ (ν, v, j). For convenience, we restrict the values of w 1,t t ∈ N to natural numbers (notice that, according to Step 1 of the DSA-GF scheme, w 1,t can be any real number). Such a restriction makes it possible to enumerate or list the states of the Markov chain. Since there exists a deterministic relationship between u = m 1,t and μ = w 1,t , i.e., u = max u 1,min , min(μ, u 1,max ) , only two random variables, W 1,t and N 1,t , are sufficient to fully describe this Markov chain. In other words, the Markov chain with three sets of r.v. defined in (1) shrinks to a 2D model. Accordingly, the short notation of P μ,i;ν,j in (2) represents the set of corresponding transition probabilities from subframe t to subframe t + 1. For expression clarity in the rest of this subsection, subscript "1" which is meant for HPT is intentionally omitted unless it is explicitly necessary.
Let us first derive the explicit expressions for P μ,i;ν,j starting with the transition μ → ν. Based on the observations of slots for traffic type 1 during subframe t, i.e., (h, s, c) t where h t +s t +c t = u, the gNB uses a function f ((h, s, c) to estimate the number of backlogged devices, i.e.,ŵ t+1 = μ + f ((h, s, c) t ) (see Step 1: First presented in Subsec. III-B). After that and following Step 1: Second and Step 1: Third, the estimated number of new arrivals during subframe t is taken into account, such that the estimated number of devices active at the beginning of subframe t + 1 is ν = max(ŵ t+1 , 0) +λ t .
Although μ = w t is set to an integer number, in general, neitherŵ t+1 norλ t is an integer number. As ν = w t+1 is also set to be an integer number, we introduce the 'ceil' operation such that, ((h, s, c) Note that the updated probability p t+1 applies to all active devices at subframe t + 1 and it is restricted to be a fraction of two integer numbers. Second, we evaluate the transition probability i → j referred to in (2). For that purpose, we consider in the first step the departure process, i.e., for packets that successfully finished their transmissions during the actual subframe t. At the beginning of subframe t, each of the i active devices chooses to transmit with permission probability p t or to postpone its transmission with probability 1 − p t , respectively. Then, the probability that z out of i active devices (0 ≤ z ≤ i) transmit in subframe t follows a binomial distribution, st,ct denote the probability that z packets (active devices) intend to access over u slots of subframe t resulting in s t successful transmissions and c t collided slots. For any packet transmission, each of the z active devices chooses, with equal probability, one of the u slots of subframe t. Jointly considering these two sequential and independent actions, we obtain the probability that within subframe t, s t out of i active devices succeed in the transmission of its own packet whereas the other i−s t devices were involved in collisions or deferred their transmissions. Analytically, it is expressed as, In (4), R z,u st,ct can be evaluated using, for instance, the recursions given at [28].
In the second step, we take into account the number of devices that will be active at the transition instant at the end of subframe t. Since the arrival of packets comes from M 1 sources each one with activation probability a 1 , the arrival process follows a binomial distribution. Jointly considering the departure and arrival processes, which are independent of each other, we have, where A M1−i+s j−i+s (a 1 ) follows the binomial distribution, as A k l (a 1 ) = B k l (a 1 ) = k l a l 1 (1 − a 1 ) k−l . The set Ω 1 defined in (5) represents the set of (h, s, c) t values observed in subframe t that satisfy the following two conditions, Then, the solution in the steady state regime is given by the stochastic row vector π (πe = 1) which can be obtained from the linear equation π = πP with π = {π μ,i }, P = {P μ,i;ν,j }.
Here, e is a column vector of all 1's, π μ,i is the probability that at the start of an arbitrary subframe the number of active devices estimated by the gNB is μ and the actual number of active devices is i.
2) Throughput, Access Delay, and Packet Loss Probability for HPT: Based on π, we derive below expressions for the performance of HPT in terms of four parameters as defined below.
Firstly, the mean value of the number of successfully transmitted packets within one subframe, defined as throughput per subframe, is obtained according to, In (7), the set Δ 1 shown as (μ, i) ∈ Δ 1 contains all possible values in μ and i and the set C shown as c t ∈ C contains all possible collided slots such that h t + s t + c t = u. Observe that the relationship between μ = w t and u = m t is given in (6). The second equality is obtained after some algebraic operations and the details are omitted for the sake of brevity. Instead, a short clue is outlined as follows. Using DSA-GF, the expected number of successful transmissions when i active devices access to a set of u slots with permission probability p t = min(u/μ, 1) is given by, Then, the last equality in (7) is a weighted sum of (8) with probabilities π μ,i . To give further insights on HPT performance in terms of resource utilization, how long a packet has to stay in a buffer, and how likely a packet may get lost, we define three other parameters. The mean value of the number of successfully transmitted packets within one slot, i.e., throughput per slot, which represents resource utilization is obtained based on (7), Thirdly, access delay in this study, d sf 1 , is defined as the mean sojourn time a packet stays in a buffer until it is successfully transmitted. Using Little's formula, the average number of customers in our steady state system (which is the mean number of active devices at the beginning of an arbitrary subframe, obtained by (μ,i)∈Δ1 iπ μ,i ) equals to d sf 1 multiplied by the average successful rate (i.e., the average number of successful transmissions per subframe, γ sf 1 ). Therefore, we have The fourth performance parameter, packet loss probability, is defined as the ratio of the rejected, i.e., offered minus carried traffic, to the offered traffic. For HPT, it is expressed as
C. Linking HPT and LPT With a Pseudo-Aggregated Process
Based on the 2D Markov chain that models the HPT behavior, denoted as X, we construct a tailored pseudo-aggregated process that links the HPT process with the LPT process.
Let F be the set of integers {u 1,min , …, u 1,max }. Based on the initial Markov chain X with known values on E, we associate the pseudo-aggregated Markov chain Y with potential values on F , defined by: Y t = m ⇐⇒ X t ∈ F (m) for all values of t ∈ Z. Observe that, due to the mapping procedure, the pseudo-aggregated process includes the statistics of the number of slots allocated to HPT devices in the same subframe. Then, the transition probabilities of the pseudo-aggregated Markov chain Y are given as follows, where u = max u 1,min , min(μ, u 1,max ) and v = max u 1,min , min(ν, u 1,max ) . Clearly, the probabilitiesP u,v for u 1,min ≤ u, v ≤ u 1,max constitute the Markov chain that counts the number of slots per subframe allocated to HPT devices. The Markov chain defined by (12) preserves the mean values (sojourn times in each set of state) of the original process, but in general higher statistical moments of these two processes are different from each other.
By solving the linear equationπ =πP withπ = {π u } andP = {P u,v }, the stochastic vectorπ (πe = 1) is obtained. Accordingly, the statistics of the r.v. number of slots allocated per frame for HPT can be easily obtained. This pseudo-aggregated Markov chain provides a link between HPT and LPT. This link will be used to analyze the performance of LPT as presented next.
D. The Analysis of Low Priority Traffic 1) Modeling the LPT Process:
The analysis of LPT can be derived in a similar and parallel way as its HPT counterpart. The main difference is that the number of slots per subframe allocated to LPT is dictated by the dynamic behavior of the HPT occurring in the same subframe. A link between both types of traffic is established based on the pseudo-aggregated process defined above, hence simplifying the analysis of LPT. Intuitively, this approach could loose the "synchronization" or the existing coupling between HPT and LPT. However, the rationale behind our analysis lies on the fact that this approach largely captures the behavior of LPT, which utilizes the remaining capacity, i.e., a number of slots in the same subframe that are not allocated to the HPT transmissions.
Correspondingly, in a parallel way to (2) and omitting the r.v. of HPT, we have In (13), the same notations as in (2) have been introduced, but now it is referred to LPT, i.e., (w, m, n) 2,t ≡ (μ, u, i) and (w, m, n) 2, t+1 ≡ (ν, v, j). However, the difference between (2) and (13) is that in the LPT case the transitions w 2,t ≡ μ → w 2,t+1 ≡ ν and m 2,t ≡ u → m 2,t+1 ≡ v evolve independently of each other, whereas the second transition is dictated by the behavior of the HPT process. Accordingly, in contrast to the HPT process which is represented by 2 random variables, 3 random variables are needed to identify the Markov chain of the LPT process. The evaluation of (13) is similar to the counterpart model of HPT. First, for the transition (μ, i) → (ν, j), we consider the packets that have been successfully transmitted, i.e., the departure process (see (4)).
where R z,u st,ct has the same meaning as in (4). Furthermore, in parallel to (5), we have the following expression for LPT.
where A M2−i+s j−i+s (a 2 ) follows the binomial distribution similar to the one presented in (5) but for LPT. In (15), a set, Ω 2 , which is defined as (h, s, c) t ∈ Ω 2 , is the set of values that satisfy the following two conditions, To gain clarity in the rest of this paragraph, we have recovered the notations with subscripts in m x,t and w x,t where x = 1 for HPT and x = 2 for LPT, respectively. Consider that, at subframe t, we have m 2,t = U − m 1,t . In the next subframe t + 1, the gNB will allocate m 2,t+1 = U − m 1,t+1 with probabilityP m1,t,m1,t+1 given by (12), i.e., by the transition probabilities of the pseudo-aggregated Markov chain. In other words, the number of slots per subframe allocated to LPT by the gNB in the next subframe t + 1 only depends on the transitions m 1,t → m 1,t+1 of HPT. We highlight this fact with the inequality of (16). Then, the equivalent expression of (3) for LPT devices becomes, ν = max(w 2,t + f ((h, s, c) 2,t ), 0) +λ 2,t ; v = m 2,t+1 = U − m 1,t+1 ; By combining (15) with the transition probabilities (12) of the pseudo-aggregated Markov chain, we obtain the transition probabilities corresponding to the Markov chain for LPT, Note that P μ,i;ν,j in (18) refers to (15), i.e., it is meant for LPT and it differs from (5) which refers to HPT. Through (18), we claim that 1) the product of both probabilities reflects the 'independence' in the treatment of both types of traffic; and 2) the correlation or dependence between HPT and LPT is taken into account with the transition probabilities (12) of the pseudo-aggregated Markov chain.
The steady state regime for LPT is given by the stochastic row vector π (πe = 1) derived by solving the linear equation π = πP with π = {π μ,u,i } and P = {P μ,u,i;ν,v,j }. Then, π μ,u,i is the steady state probability at the start of an arbitrary subframe of the number of active devices estimated by the gNB being μ, the number of active devices being i, and the slot allocated to the LPT flows being u.
2) Throughput, Access Delay, and Packet Loss Probability for LPT: Similar to the HPT case, we assess the performance of LPT with respect to the same four parameters defined above. In particular, the throughput per subframe for LPT is obtained as follows The second equality in (19) is derived in the same way as what is deduced in (7). In a similar way as for (9), the expression of the throughput per slot for LPT is obtained as Similar to (10), the access delay for LPT is obtained by Lastly, similar to the expression in (11), the packet loss probability for LPT is defined as V. SIMULATIONS AND NUMERICAL RESULTS This section presents the numerical results obtained from both the analytical model and discrete-event based simulations. The proposed DSA-GF scheme has been implemented based on a custom-built simulator constructed in MATLAB which mimics the behavior of the scheme. Extensive simulations are performed under various configurations. The results with respect to the four performance parameters defined in Sec. IV, i.e., throughput per subframe/slot (in terms of number of packets per subframe/slot), access delay for the successfully transmitted packets, and packet loss probability, are presented below. Two other GF access schemes, known as complete sharing and GF reactive (see Subsec. V-E), have also been implemented and the performance of these three schemes is compared with each other therein. The applicability of DSA-GF to two other numerologies is validated in Subsec. V-F.
A. Simulation Setup and Model Validation
Consider an NR cell with all three GF schemes, i.e., DSA-GF, complete sharing, and GF reactive, enabled. As mentioned earlier in Subsec. II-B, although the total number of mMTC devices covered by the cell could be large, the number of devices attempting for channel access at a given subframe is considered to be rather limited. In this study, we consider that the device population for each type varies from 10 up to 100, coupled with different activation probabilities. The offered traffic intensities are represented by M 1 a 1 and M 2 a 2 (in terms of packets per subframe) for HPT and LPT, respectively. Except Subsec. V-F which considers numerology β = 2 and β = 4, we adopt β = 3 for our performance evaluations in all simulations presented below. Note that no matter there are U = 4, 8, or 16 slots in each subframe, all of them are available for GF transmissions (discussed in Subsec. II-A). For these simulations, we set u 2,min ≥ 1. That is, u 1,max ≤ U − 1. The other parameters like u 1,min are configured in favor of HPT performance with the concrete values shown in each figure caption or the corresponding explanations. For all simulation results presented below, we report the average values obtained from multiple runs of simulations.
The accuracy of the developed Markov model is verified through extensive simulations. Under all network configurations, the analytical and simulation results coincide with each other so tightly that the curves obtained from these two methods are largely overlapping. As such, the accuracy of the developed Markov model is validated. As two examples, we plot separately in Figs. 4 and 5 the curves obtained from both analysis and simulations. For the sake of illustration clarity, we do not plot both sets of results in other figures.
B. HPT Performance With Variable Device Population
As explained earlier, the performance of HPT is independent of that of LPT. Accordingly, we evaluate the performance of HPT by varying the number of HPT devices M 1 and the activation probability a 1 while keeping the offered traffic constant as M 1 a 1 = 1. Keep in mind that the actual number of slots allocated to HPT per subframe is governed by the DSA-GF scheme where both u 1,min and u 1,max are tunable parameters but they do not vary on a subframe or frame basis. The performance of HPT in terms of throughput per slot and per subframe is illustrated in Fig. 3 where u 1,min = 1 and u 1,max = 4, 5, 6, or 7 respectively. It is clear that the achieved throughput per slot for these configured u 1,min and u 1,max values is slightly higher than the maximum throughput for slotted ALOHA, i.e., 1/e ≈ 0.3679, which is obtained with an infinite population. This is because the number of devices in our simulations is finite. For instance, for a fixed value of M 1 = (10, 20, 30, . . . , 70, 80), the resulting successful probability takes the values as (0.3874, 0.3774, 0.3741, . . ., 0.3705, 0.3701) respectively, indicating a slightly lower successful probability which approaches the throughput per slot for slotted ALOHA as M 1 increases.
On the other hand, we observe that, as M 1 becomes larger, 1) the achieved throughput per subframe increases monotonically towards a maximum value and 2) these values are much higher than that of the throughput per slot. For 1), note first that when a collision occurs, the corresponding packet remains pending to the next subframe. This behavior contributes to an increased number of packets awaiting to be transmitted. Furthermore, the devices that succeeded in the current subframe will also generate with probability a 1 one packet ready for transmission in the next subframe. The net effect is that, when M 1 increases, the mean value of the number of backlogged packets increases slightly and more slots will be allocated to HPT, resulting in thus higher throughput per subframe. For 2), it is because multiple slots within the same subframe are utilized by HPT devices. For example, when (u 1,min , u 1,max ) = (1, 4), (1, 5), (1, 6), or (1, 7) and M 1 = 30, there are on average 2.5555, 2.5842, 2.5939, or 2.5969 number of slots allocated to HPT respectively. Indeed, this result is in accordance with the relationship between per subframe and per slot throughput expressed in (9).
Furthermore, the obtained access delay and packet loss probability performance is depicted in Figs. 4 and 5 respectively. With a larger device population, DSA-GF needs more subframes to accommodate HPT packets, leading to an increasing trend for access delay. On the other hand, the achieved access delay decreases significantly with a larger u 1,max due to the fact that more slots are available for HPT. With a larger u 1,max value, a competing device obtains a higher probability of selecting a unique slot for successful transmission, resulting in a lower delay. In Fig. 5, it is shown that a larger u 1,max leads to a lower loss probability. With a lager number of HPT devices, the activation probability decreases in order to maintain constant offered traffic. Hence, the impact of buffer limitation is reduced. Correspondingly, the packet loss probability decreases with an increasing M 1 . Moreover, one may notice a decreasing gap between two adjacent curves in these two figures with a larger u 1,max . This is because the performance acceleration rate declines when u 1,max increases.
C. LPT Performance With Variable Offered Traffic
To evaluate the performance of LPT devices, we vary the offered traffic load by LPT devices M 2 a 2 given that M 1 a 1 = 1 with M 1 = 100 and (u 1,min , u 1,max ) = (5,7). Under such traffic conditions, the average number of slots allocated to HPT is m 1,t ≈ 5.0115. Accordingly, LPT obtains m 2,t ≈ 2.9885 slots on average. Figs. 6-8 illustrate the performance in terms of the four parameters defined above. Fig. 6 illustrates the obtained throughput per subframe/slot for LPT as M 2 a 2 increases. Initially, the throughput per subframe increases linearly with M 2 a 2 and gradually the behavior reaches a stable limit when the network approaches saturation. A similar trend is observed for the behavior of throughput per slot. The reason is as follows. Since our scheme follows the principle of ALOHA, the highest throughput that can be achieved is m 2,t /e = 2.9985/2.7183 = 1.1031. Therefore, as long as M 2 a 2 < 1.1031, LPT will exhibit a linear throughput response corresponding to the offered LPT traffic load. The more we increase the offered traffic M 2 a 2 , the closer we are approaching to the theoretical limit. When M 2 a 2 approaches the value of 1.1031, the curve starts to bend and in an asymptotic way it reaches the maximum throughput value. Fig. 7 reveals the access delay for successful LPT packet transmissions. When the LPT traffic load increases, a higher number of collisions occur, causing packets to wait for a longer period of time in the buffer. Accordingly, the average delay increases. Recall that devices are equipped with a buffer of unit size. When a new packet arrives and finds the buffer full, it is rejected. This is indeed the implementation of the packet rejection mechanism [26]. It causes a higher packet loss probability when the offered LPT traffic increases, as shown in Fig. 8.
Although a loss probability higher than 1% is out of interest, it is worth studying the asymptotic behavior of the loss probability for LPT, i.e., when a 2 → 1. Under the principle of blocking a new packet when the buffer is occupied, the asymptotic loss probability can be expressed as the fraction (M 2 − m 2,t e −1 )/M 2 . It becomes 0.9725, 0.9843, and 0.9989 for M 2 = 40, 70, and 100 devices, respectively. These values match perfectly the results provided by the Markov model. Furthermore, due to the introduction of a single size buffer, the asymptotic behavior of the delay performance can be derived as follows. For a given number of LPT devices M 2 , when a 2 → 1 (which is the condition for saturation), all M 2 buffers are full, each with one packet ready for transmission at the beginning of each subframe. Since the mean number of successful transmissions per subframe is given by m 2,t e −1 , the mean number of subframes that a given packet has to wait in its buffer is given by M 2 e/m 2,t . Following the same illustrative example given above with M 2 = 40, 70, and 100 LPT devices, the obtained access delay becomes 36.3832, 63.6706, and 90.9581 subframes, respectively. The same as above, these results are in precise agreement with the ones obtained from the Markov model, as expressed in (21).
D. Impact of Offered HPT Traffic Load on HPT/LPT Performance
To assess the impact of offered HPT traffic load on the performance of both HPT and LPT, we perform two sets of simulations, with a combination of constant or variable traffic loads for HPT or LPT respectively. As already discussed above, the performance of HPT remains constant throughout the whole range of the M 2 a 2 variations. In other words, these results confirm that HPT's performance remains intact regardless of the variations of the injected LPT traffic load.
On the other hand, LPT's performance will be dominated by HPT traffic intensity since LPT can only occupy the remaining slots in the same subframe that are not allocated to HPT packets. As shown in Fig. 9, while the HPT throughput per subframe increases linearly as M 1 a 1 increases (until the saturation point), the LPT throughput per subframe has to sacrifice its performance. With respect to the performance of DSA-GF in terms of access delay and packet loss probability shown in Figs. 10-11, it is convincing that HPT achieves better performance than LPT does.
E. Performance Comparison With Complete Sharing and GF Reactive
First of all, note that no traffic classification is introduced in these two reference schemes. Before presenting the results, let us outline briefly the principles of these two schemes as follows. 1) Complete sharing works similarly as the proposed scheme. However, the slot allocation and data transmission process in complete sharing does not enable any priorities. Instead of treating HPT and LPT separately, a single class of arrivals will compete for access in all available slots in each subframe. The packet transmission probability is dynamically adjusted on a subframe-by-subframe basis following the same pseudo-Bayesian estimation process. 2) The GF reactive scheme discussed in Subsec. I-A. No permission probability exists in this scheme, i.e., a failed transmission attempt shall for sure try again in the next subframe. To avoid the situation that an 'unlucky' packet could attempt to transmit forever, a retry limit of 10 is configured in our simulations for GF reactive. In this study, we do not include any proactive GF scheme due to the consideration that high collision could occur for GF proactive with K > 1 since two or more packet replicas from the same device will compete for slot access inside the same subframe in GF proactive schemes.
The numerical results obtained from the three studied schemes are compared in Figs. 12-16 where GF-R and CS in the legends stand for GF reactive and complete sharing, respectively. With respect to the achieved throughput per subframe, the values obtained from all three schemes (for DSA-GF, it is meant for the sum of HPT and LPT throughput) are very close to each other (the curves for throughput per slot for GF-R and CS are indeed overlapping). This is because the offered traffic in all cases is high enough so that the highest slot utilization has been reached. Thanks to the privilege given to HPT with (u 1,min , u 1,max ) = (5, 7), the throughput per subframe for HPT exhibits the highest values, at the cost of reduced LPT throughput.
When it turns to access delay, one may observe a similar trend. That is, HPT achieves the lowest delay across the whole range of device populations, obtained after a small sacrifice of LPT's delay. On the other hand, the reason that GF reactive reaches lower access delay than complete sharing does is that more access opportunities are given to GF reactive devices due to the fact that there is no permission probability as well as the constraint of the retry limit.
Let us now compare the performance in terms of packet loss probability for those three schemes. It is convincing that HPT under the DSA-GF scheme achieves the lowest packet loss probability thanks to its access privilege. This result reveals once again the benefit brought by introducing priority for dynamic slot allocation. On the other hand, when comparing the packet loss probabilities for complete sharing and GF reactive, the results meet our intuition that complete sharing performs better. This is because complete sharing imposes access control via a permission probability when collisions are detected in the previous subframe, thus limiting the number of competing devices in the current subframe. Given that the number of slots in each subframe is fixed, the lesser the number of competing devices, the lower the packet loss.
F. Applicability of DSA-GF to Numerology β = 2 and β = 4 Considering that the subframe duration is fixed for all numerologies as 1 ms, we keep the offered traffic per subframe constant, however, with different combinations of device populations and activation probabilities. More specifically, for β = 4, we configure four sets of device populations as M 1 = M 2 = 40, 60, 80, and 100 for HPT and LPT, each set coupled with an activation probability of a 1 = a 2 = 1/20, 1/30, 1/40, and 1/50 respectively. In this way, the offered traffic per subframe equals to M x a x = 2 for each type of traffic, i.e., 2 packets/subframe. For β = 3 and 2, devices are split into 2 and 4 groups respectively due to resource allocation explained in the next paragraph. Accordingly, we have M x a x = 1 and 0.5 per subframe for β = 3 and 2, since there are 2 and 4 parallel subframes respectively. Detailed configurations on M x and a x for each numerology can be found in Figs. 14-17.
To accommodate the offered traffic, the gNB can either allocate one subframe for β = 4, or two parallel subframes over the frequency domain for β = 3. This configuration is reasonable since the subcarrier spacing in β = 4 is twice as much as in β = 3. Following the same logic, there will be four parallel subframes over the frequency domain when β = 2 is adopted. As such, the total number of slots in all three numerologies is the same as 16 slots, however, grouped into 4, 8, or 16 slots per subframe for β = 2, 3, or 4 respectively. Accordingly, we configure the tunable parameters as (u 1,min , u 1,max ) = (2, 3), (4,6), and (6, 12) respectively.
In Figs. 14-17, the performance of the DSA-GF scheme is illustrated as a histogram for the four sets of device population and activation probability configurations respectively. As shown in Fig. 14, the achieved throughput per slot remains almost constant in all three numerologies. On the other hand, the throughput per subframe is doubled when a higher-level numerology is adopted. As expected, the achieved throughput per subframe reaches 0.4865, 0.9736, and 1.9522 for β = 2, 3, and 4 respectively. This is due to the fact that more slots per subframe are available with a higher-level numerology. When observing the achieved packet loss probability in Fig. 17, it is evident that a higher-level numerology leads to lower packet loss. This is because more slots are aggregated in one subframe as resources for devices to share when a higher-level numerology is adopted. In addition, for a fixed numerology, the probability of packet loss decreases as M 2 increases, since a 2 reduces when M 2 a 2 is constant.
Finally, let us compare the performance of HPT and LPT. Although the offered HPT and LPT traffic is the same, the achieved throughput per subframe for HPT is slightly higher than that of LPT. As a consequence, better performance has been achieved for HPT than for LPT in terms of access delay and packet loss probability. This benefit is brought by the priority enabled DSA-GF adaptive algorithm which performs well in all studied numerologies and network configurations. As shown in Fig. 15, the access delay for HPT is slightly decreasing with a higher-level numerology whereas (much) higher delays are experienced by LPT. The same trend applies to the packet loss probability performance as well, as illustrated in Fig. 17. Obviously, better performance for HPT can be achieved by increasing the values of u 1,min and u 1,max , at the expenses of slight penalties for the performance of LPT.
G. Further Discussions
The DSA-GF scheme considers the distinctive characteristics of two co-existing traffic types in a 5G NR network. Although it is unavoidable to sacrifice the performance of LPT in order to ensure the high performance of HPT, serious access congestion for LPT can be avoided or minimized through proper parameter configurations. In general, there is a tradeoff between the performance of these two traffic classes when deciding the values for u 1,min and u 1,max .
Furthermore, u 1,min and u 1,max are two configurable parameters. Their values are considered to be pre-configured based on gNB's observations as well as service requirements and do not change over a short term (i.e., neither on a subframeby-subframe nor on a frame-by-frame basis).
VI. CONCLUSION AND FUTURE WORK
This paper presents a priority enabled GF access and data transmission scheme which enables dynamic slot allocation for heterogeneous GF traffic in 5G NR networks. Based on the NR frame structure, the proposed scheme grants access privilege for slot occupancy to high priority traffic based on traffic estimation and the observed transmission status and allocates the remaining slots in each subframe to low priority traffic. While the performance of high priority traffic is guaranteed through proper configuration of relevant parameters, low priority traffic also enjoys satisfactory performance. Furthermore, the precedence of high priority traffic and the dependence between two heterogeneous traffic classes are captured through a Markov model which derives a pseudo-aggregated process to bridge the aforementioned dependency. Through both analysis and simulations, we demonstrate the elegance and effectiveness of the scheme with respect to four performance parameters, i.e., throughput per subframe and per slot, access delay, and packet loss probability, as well as its applicability. To achieve optimal performance, proper parameter tuning is needed based on network setup and traffic conditions. How to adjust u 1,min and u 1,max configurations periodically, e.g., in the order of seconds, over a long term, or reactively depending on real-time traffic measurements, and how to deal with estimation error are left as our future work. | 14,187.6 | 2021-05-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Generating a Metal-responsive Transcriptional Regulator to Test What Confers Metal Sensing in Cells*
Background: Metal-specific transcription has been correlated with the relative properties of a cells' set of metal sensors. Results: A one-residue substitution enabled a DNA-binding formaldehyde sensor to detect Zn(II) and cobalt. Conclusion: Weaker DNA affinity combined with tighter Zn(II) affinity enabled Zn(II) sensing with a smaller coupling free energy. Significance: Relative affinity determined the best sensor in the set for Zn(II) but not for cobalt.
Metal-sensing and DNA-binding transcriptional regulators are central to the machinery that optimizes buffered metal concentrations inside cells to enable correct protein metallation (1,2). In general, the tighter the K metal of a metal sensor, the lower the [buffered metal] (1). Fresh experimental approaches are needed to test hypotheses about the mechanisms determining which metal(s) each sensor detects. Uncertainty also remains about the nature of the exchangeable pools of different metals, including the major ligands and the precise buffered metal concentrations, and how these vary under different environmental conditions or between organisms.
Metal sensors tend to bind divalent metals with an order of affinity that matches the Irving Williams series, regardless of which metal(s) they detect in a cell (1)(2)(3). This raises questions about how a sub-set of sensors can detect the weaker binding metals in vivo (4 -6). One facet of the solution is that the kinetics of access to different metals can vary from sensor to sensor, for example due to interactions with specific donor molecules, including metallochaperones (1, 6 -8). Another part of the solution is that the allosteric mechanism connecting metal binding to DNA binding can be metal-selective (9 -12). Thus, a weaker binding metal can nonetheless be more effective at triggering the conformational changes that alter gene expression (10,13). For metal-dependent de-repressors and co-repressors the coupling free energy, ⌬G C metal-sensor⅐DNA , is typically larger for more effective metals (9). Unexpectedly, here we see how a metal can also become effective without increasing ⌬G C metal-sensor⅐DNA , if K DNA of the apo-form of a de-repressor is suitably weakened, to confer two mechanistic advantages in favor of Zn(II)-detection. Contrary to general dogma, here ⌬G C Zn(II)-sensor⅐DNA is smaller in the Zn(II)-sensing mutant relative to the nonsensing wild type protein.
In the course of a collaborative program to characterize the complement of metal sensors from Salmonella enterica serovar typhimurium strain SL1344 (hereafter referred to as Salmonella), we identified two genes encoding proteins with sequence similarity to members of the CsoR/RcnR family of DNA-binding and metal-responsive transcriptional de-repressors (14 -17). These are now shown to be Salmonella homo-logues of RcnR and FrmR. RcnR in Escherichia coli responds to cobalt and nickel, whereas CsoR, first discovered in Mycobacterium tuberculosis, responds to Cu(I) (15)(16)(17). Related metal sensors characterized from other bacteria detect the same metals (18 -27). Additionally, two homologues have been identified that respond to effectors other than metals, namely CstR from Staphylococcus aureus, which detects persulfide, plus E. coli FrmR (28 -30). CsoR forms a three helix bundle that assembles into tetramers (15). The sensory Cu(I) site exploits a conserved Cys-thiolate from the N-terminal end of helix ␣2 of one subunit in combination with an HXXXC motif from within helix ␣2Ј of a second subunit (Fig. 1A) (15). Three ligands in similar locations (with HXXXC replaced by HXXXH), along with additional ones from the N-terminal region of helix ␣1Ј, are recruited to the sensory metal site of RcnR (Fig. 1B) (17,31,32). A single residue variant of E. coli RcnR (H3E) also responds to Zn(II) (31).
In a global screen to discover the consequences of the readthrough of amber stop codons, E. coli FrmR (which has such a stop) emerged as the transcriptional repressor of the frmRAB operon (30). FrmA has formaldehyde dehydrogenase activity, and the operon was subsequently shown to respond to exogenous formaldehyde (30,33). This operon is de-repressed during anaerobic respiration using trimethylamine N-oxide as the terminal electron acceptor where endogenous formaldehyde is generated as a by-product of trimethylamine N-oxide demethylation (34). CO-releasing molecules and chloride treatments also trigger expression of the frm operon (35,36). There are no published studies of the Salmonella FrmR homologue. At least two potential metal ligands are retained in Salmonella FrmR, namely Cys at the N terminus of helix ␣2 but HXXXE (rather than HXXXH of paralogous Salmonella RcnR) at helix ␣2Ј (Fig. 1, A and B). Despite sequence similarity between FrmR and other CsoR/RcnR family members, whether or not (any) (15,17). C, schematic representation of the frmRA operon (to scale) from Salmonella indicating the frmRA promoter, which includes a candidate FrmR-binding site and forms one strand of frmRAPro.
FrmRs de-repress gene expression in response to metals remains untested.
Recent studies have shown that relative affinity, relative allostery, and relative access determine the ability of metal sensors to respond selectively in vivo (1). This is exemplified by comparing metal affinities (K metal ) and metal-responsive allostery (⌬G C metal-sensor⅐DNA ) among multiple metal sensors, and for multiple metals in Synechocystis PCC 6803 (1,6,11,18). Thus, InrS responds to nickel in vivo and has a K Ni(II) that is substantially tighter than K Ni(II) of cobalt-sensing CoaR and Zn(II)-sensing ZiaR or Zur (a representative from each family of metal sensors that is present in this organism) (18). Provided the distribution of Ni(II) follows thermodynamic equilibrium predictions, as the [Ni(II)] rises, InrS will be the first to respond, de-repressing expression of nrsD (encoding a Ni(II)-efflux protein) and preventing [Ni(II)] from approaching K Ni(II) of the other sensors (18). For Zn(II), K Zn(II) of nickel-sensing InrS is similar to Zn(II)-sensing ZiaR, but crucially the allosteric mechanism of ZiaR is more responsive to Zn(II) compared with InrS (⌬G C Zn(II)-ZiaR⅐DNA Ͼ ⌬G C Zn(II)-InrS⅐DNA ) (11). Thus ZiaR will require a lower fractional Zn(II) occupancy than InrS to de-repress its target gene ziaA (encoding a Zn(II)-efflux ATPase). In this manner, ZiaR can prevent [Zn(II)] from exceeding the threshold where occupancy of DNA by InrS becomes sufficiently low for aberrant expression of nrsD to occur (11). In contrast, cobalt sensing does not correlate with relative affinity, and CoaR has the weakest K Co(II) in the set of sensors (6). There is evidence that the cobalt effector may be preferentially channelled to CoaR, and thus relative access has been invoked as the explanation for selective detection of cobalt (6). In summary, it is hypothesized that the sensor, which is triggered by a metal, is simply the most responsive within a cells' set of sensors, based upon relative affinity, relative allostery, and relative access (1). This hypothesis is now tested via a mutation conferring gain-of-metal sensing.
Here, the Co(II), Zn(II)-, and Cu(I)-binding affinities of Salmonella FrmR are determined and compared with equivalent data for the cognate sensors of these metals, namely Salmonella homologues of RcnR, Zn(II)-sensing ZntR and Zur, and Cu(I)sensing CueR. FrmR is found not to sense metals within cells, yet an E64H substitution (creating a Salmonella RcnR-like helix ␣2Ј HXXXH motif) gains responsiveness to cobalt and Zn(II) in vivo. By comparing the biochemical properties of Salmonella FrmR with FrmRE64H, and then relating these parameters to endogenous sensors for cobalt, Zn(II), and Cu(I), the relative properties that, in combination, enable metal sensing are identified.
Experimental Procedures
Bacterial Strains and DNA Manipulations-S. enterica sv. typhimurium strain SL1344 was used as wild type, and strain LB5010a was used as a restriction-deficient modification-proficient host for DNA manipulations. Both were a gift from J. S. Cavet (University of Manchester). E. coli strain DH5␣ was used for routine cloning. Bacteria were cultured with shaking at 37°C in Luria-Bertani (LB) medium or M9 minimal medium (37), supplemented with thiamine (0.001% w/v) and L-histidine (20 g ml Ϫ1 ). Carbenicillin (100 g ml Ϫ1 ), kanamycin (50 g ml Ϫ1 ), and/or chloramphenicol (10 g ml Ϫ1 ) were added where appropriate. Cells were transformed to antibiotic resistance as described (37,38). All generated plasmid constructs were checked by sequence analysis. Primers are listed in supplemental Table S1.
Generation of Salmonella Deletion Mutants-Deletion derivatives of strain LB5010a were obtained using the Red method (38) using plasmid pKD3 and primers 1 and 2 for frmR or primers 3 and 4 for gshA. Mutagenesis was performed using strain LB5010a and selection of mutants achieved using LB medium supplemented with chloramphenicol. Mutations were subsequently moved to SL1344 or derivatives using P22 phage transduction and validated by PCR using primers 5 and 6 for frmR or primers 7 and 8 for gshA. The antibiotic resistance cassette from the ⌬frmR::cat locus was removed using the helper plasmid pCP20 carrying the FLP recombinase.
Generation of Promoter-lacZ Fusion Constructs and -Galactosidase Assays-P frmRA or P frmRA -frmR was amplified from SL1344 genomic DNA using primer 9 and either primer 10 (for P frmRA ) or 11 (for P frmRA -frmR) and ligated into pGEM-T. Sitedirected mutagenesis to generate P frmRA -frmRE64H and P frmRA -frmR DOWN was conducted via the QuikChange protocol (Stratagene) using pGEM-P frmRA -frmR as template and primers 12-23. Codon optimization of the frmRE64H coding region to generate P frmRA -frmRE64H UP (supplemental Table S2) was achieved using GeneArt Gene Synthesis (Life Technologies, Inc.) and optimization for Salmonella typhimurium. The rcnR-P rcnA region was amplified from SL1344 genomic DNA using primers 24 and 25. Digested fragments were cloned into the SmaI/BamHI site of pRS415 (39). P zntA cloned into pRS415 was provided by J. S. Cavet (University of Manchester). The resulting constructs were introduced into strain LB5010a prior to strain SL1344. -Galactosidase assays were performed as described (40) in triplicate on at least three separate occasions. Overnight cultures were grown in M9 minimal medium, diluted 1:50 in fresh medium supplemented with maximum noninhibitory concentrations (MNIC 3 ; defined as the maximum concentration which inhibited growth by ϳ 10%) of metals, formaldehyde, EDTA, or N,N,NЈ,NЈ-tetrakis(2-pyridylmethyl)ethylenediamine (TPEN), and grown to mid-logarithmic phase prior to assays. For time course experiments, cells were grown to early logarithmic phase and statically cooled to 25°C for 20 min followed by a 2-h incubation in the presence of metal or formaldehyde. The metal salts used were MnCl 2 , C 6 ), and washed with 10 column volumes of the same buffer. ZntR and Zur were treated the same way, except using 5 mM NaCl, 1 mM EDTA, 5 mM DTT, 10 mM HEPES, pH 7.0, for ZntR, and 100 mM NaCl, 5 mM DTT, 1 mM EDTA, 10 mM HEPES pH 7.8 for Zur. FrmR, FrmRE64H, ZntR, and Zur were eluted in a single step using respective binding buffers plus 500 mM NaCl. RcnR was diluted to 100 mM NaCl, 10 mM EDTA, 10 mM DTT, 10 mM HEPES, pH 7.0, and applied to an equilibrated 5-ml HiTrap SP column (GE Healthcare), washed in the same buffer plus 200 mM NaCl, and eluted in 300 mM NaCl. CueR was expressed and purified as described previously (41). Anaerobic protein stocks were prepared by applying purified protein to a pre-equilibrated 1-ml HiTrap heparin column (diluting FrmR, FrmRE64H, ZntR, and Zur as described above and without dilution of RcnR for ZntR, Zur, RcnR, and CueR, respectively. It was noted that the absorbance spectra of FrmRE64H differed from FrmR (by exhibiting a shoulder at ϳ300 nm), except in two early preparations, and these were not used further. Reduced thiol and metal content were assayed as described previously (18), and all anaerobic protein samples (maintained in an anaerobic chamber) were Ն90% reduced and Ն95% metal-free, with the exception of Zur which contained ϳ1 M eq of Zn(II) (per monomer) as purified. All in vitro experiments were carried out under anaerobic conditions using Chelex-treated and N 2 -purged buffers as described previously (18). UV-visible Absorption Spectroscopy-Experiments were carried out in 100 mM NaCl, 400 mM KCl, and 10 mM HEPES, pH 7.0, for FrmR, FrmRE64H, ZntR, Zur, and CueR, with inclusion of 5% (v/v) glycerol for RcnR. Concentrations of metal stocks (CoCl 2 , NiCl 2 , CuCl, and ZnCl 2 ) were verified by ICP-MS. CuCl was prepared as described previously and confirmed to be Ͼ95% Cu(I) by titration against bathocuproine sulfonate (BCS) (42). CoCl 2 , NiCl 2 , or CuCl (Ͼ95% Cu(I)) were titrated into protein, or ZnCl 2 was titrated into protein pre-equilibrated with CoCl 2 , and the absorbance spectra were recorded at equilibrium using a 35 UV-visible spectrophotometer (Perkin-Elmer Life Sciences). Precipitation of ZntR was observed with further ZnCl 2 additions to Co(II)-ZntR than those shown.
Protein-Metal Migration by Size-exclusion Chromatography-FrmR, FrmRE64H, or Zur were incubated (60 min) with an excess of ZnCl 2 , CuCl (Ͼ95% Cu(I)) or EDTA (as stated) in 100 mM NaCl, 400 mM KCl, and 10 mM HEPES, pH 7.0, and an aliquot (0.5 ml) was resolved by size-exclusion chromatography (PD-10 Sephadex G25, GE Healthcare) in the same buffer conditions. Fractions (0.5 ml) were analyzed for metal by ICP-MS and protein by the Bradford assay using known concentrations of FrmR, FrmRE64H, or Zur as standards. Failure to recover all of the copper during experimentation with FrmR or FrmRE64H suggests (at least) some competition from and copper binding by the Sephadex matrix.
Protein-Chelator-Cu(I) Competitions-Experiments were carried out in 100 mM NaCl, 400 mM KCl, and 10 mM HEPES, pH 7.0. CuCl (Ͼ 95% Cu(I)) was titrated into a mixed solution of protein and BCA, and the absorbance at 562 nm was recorded at equilibrium. Data were fit to the models described in the figure legends and Table 1 footnotes using Dynafit to determine Cu(I)-binding constants (43).  2Cu(I) ϭ 1.58 ϫ 10 17 M Ϫ2 at pH 7.0 for BCA (48). For BCS, the absorbance at 483 nm was recorded following titration with CuCl (to generate a calibration curve) or following preincubation of BCS with CuCl (10 min) and addition of CueR. The absorbance at 483 nm was monitored to equilibrium. K Cu(I) of the tightest site of CueR was calculated using Equation 1 (48), CueR is expected to be a dimer with two metal-binding sites (49) that bind Cu(I) with negative cooperativity (41) (48). Fluorescence Spectroscopy-Experiments were carried out in 100 mM NaCl, 400 mM KCl, 10 mM HEPES, pH 7.0. ZnCl 2 was titrated into ZntR, and fluorescence emission spectra ( ex ϭ 280 nm, em ϭ 303 nm, and T ϭ 20°C) were recorded at equilibrium using a Cary Eclipse fluorescence spectrophotometer. Precipitation was observed with addition of more than 1.1 M eq ZnCl 2 .
Protein Quantification by Liquid Chromatography-Tandem Mass Spectrometry-Cellular lysates were prepared from logarithmic cultures grown in M9 minimal medium. Cell number was determined by enumeration on LB agar plates. Harvested cells were resuspended in 40 mM NaCl, 160 mM KCl, 10 mM EDTA, 10 mM DTT, 10 mM HEPES, pH 7.8, with addition of protease inhibitor mixture (Sigma), and post-sonication, the soluble cell lysate was syringe-filtered (0.45-m pore size), snap-frozen in liquid N 2 , stored at Ϫ80°C, and thawed on ice before use. Total protein was determined by the Bradford assay, using BSA as a standard. Purified stocks of FrmR or FrmRE64H were quantified by amino acid analysis (Proteomics Core Facility, University of California), stored at Ϫ80°C, and thawed on ice before dilution in PBS to 0.6 mg ml Ϫ1 . For standard curves, proteins were further diluted in soluble cell lysates from ⌬frmR cells to generate standard curve concentrations of 5, 10, 50, 250, 425, and 500 ng of 100 l Ϫ1 (which defined the limits for quantification). Aliquots were stored at Ϫ80°C. Working internal standards were prepared by dilution of labeled peptides ([ 13 C 6 , 15 N 4 ]arginine residues) GQVEALER[ 13 C 6 , 15 N 4 ], DEL-VSGETTPDQR[ 13 C 6 , 15 N 4 ], and DHLVSGETTPDQR[ 13 C 6 , 15 N 4 ] (Thermo Fisher) in 15% (v/v) acetonitrile with 0.1% (v/v) formic acid solution to obtain final concentrations of 313 fmol l Ϫ1 of each peptide. For experimental samples (soluble lysates from ⌬frmR cells containing the P frmRA -frmR construct and variants) and standard curve samples, 100 l was precipitated using 300 l of methanol (mixing at 2000 rpm for 1 min) before centrifugation (900 ϫ g, 5 min, room temperature). Pellets were suspended in 400 l of 200 mM NH 4 HCO 3 in 10% (v/v) methanol (mixing at 2000 rpm for 10 min) and 10 l internal standard added. Pellet digestion was performed with 10 l of trypsin (14 mg ml Ϫ1 ) and mixing (1000 rpm, 37°C, 16 h) and stopped with 10 l of 15% (v/v) formic acid. The digested samples were centrifuged (6000 ϫ g for 5 min at room temperature) to remove particulate material. Solvent was removed from clarified supernatants (50 -100 l) using a centrifugal evaporator (Thermo Scientific SpeedVac system). Samples were separated by gradient elution at 0.3 ml min Ϫ1 using a Zorbax Eclipse Plus C18 column (2.1 ϫ 150 mm, 3.5-m particles; Agilent Technologies) at 30°C. Mobile phase A and B consisted of 0.1% (v/v) formic acid in water and 0.1% (v/v) formic acid in acetonitrile, respectively. Aliquots (20 l) were applied to a 6500 triple quadrupole mass spectrometer (AB Sciex) operating in positive ionization mode. Acquisition methods used the following parameters: 5500 V ion spray voltage; 25 p.s.i. curtain gas; 60 p.s.i. source gas; 550°C interface heating temperature; 40 V declustering potential; 26 V collision energy; and 27 V collision cell exit potential. Scheduled multiple reaction monitoring was carried out with a 90-s multiple reaction monitoring detection window and 1.00-s target scan time. A quadratic 1/x 2 weighted regression model was used to perform standard calibration.
The coefficient of determination (R 2 ) was Ͼ0.990 for GQV-EALER in all validation runs.
Determining Intracellular [Glutathione]-Intracellular glutathione was measured using a glutathione assay kit (Sigma) according to the manufacturer's instructions. Cellular lysates were prepared from overnight cultures grown in M9 minimal medium, diluted 1:50 in fresh medium, and grown to early logarithmic phase, statically cooled to 25°C for 20 min, followed by 30-min incubation in the absence or presence of MNIC ZnSO 4 . Viable cells were enumerated on LB agar, and [glutathione] was calculated using a cell volume of 1 fl.
Fluorescence Anisotropy-Complementary single-stranded oligonucleotides 38 (hexachlorofluorescein-labeled) and 39 (containing the identified FrmR-binding site and flanking nucleotides, Fig. 1C) were annealed by heating 10 or 200 M of each strand in 10 mM HEPES, pH 7.0, 150 mM NaCl to 95°C, and cooled to room temperature overnight. For protein-DNA stoichiometry experiments, the fluorescently labeled and annealed probe (designated frmRAPro) was diluted to 2.5 M in 10 mM HEPES, pH 7.0, 60 mM NaCl, 240 mM KCl, and 5 mM EDTA and titrated with FrmR or FrmRE64H prepared in 100 mM NaCl, 400 mM KCl, 10 mM HEPES, pH 7.0, and 5 mM EDTA. For K DNA determination, frmRAPro was diluted to 10 nM, with addition of 5 mM EDTA or 5 M ZnCl 2 as required. FrmR or FrmRE64H was prepared as above with inclusion of 5 mM EDTA or 1.2 M eq of ZnCl 2 or CuCl (Ͼ95% Cu(I)) as appropriate. Changes in anisotropy (⌬r obs ) were measured using a modified Cary Eclipse fluorescence spectrophotometer (Agilent Technologies) fitted with polarizing filters ( ex ϭ 530 nm, em ϭ 570 nm, averaging time ϭ 20 s, replicates ϭ 5, and T ϭ 25°C) as described previously (11). Upon each addition, the cuvette was allowed to equilibrate for 5 min before recording data. Data were fit to the model described in the figure legends and Table 2 footnotes using Dynafit (43). For experiments with Cu(I)-or Zn(II)-FrmR or FrmRE64H, where DNA binding did not saturate, the average fitted ⌬r obs maximum value from apoprotein experiments was used in the script. The coupling free energy ⌬G C , linking DNA binding to metal binding, was calculated as described previously (11) using the following: ⌬G C ϭ ϪRTlnK C , where r ϭ 8.314 J K Ϫ1 mol Ϫ1 (gas constant), T ϭ 298.15 K (temperature at which experiment was conducted), and K C ϭ K DNA metal-protein /K DNA apoprotein (9). Mean ⌬G C values (and standard deviations) were calculated from the full set of (equally weighted) possible pairwise permutations of K C .
Fractional Occupancy Models-Fractional occupancy of the tightest metal-binding site of a sensor with metal as a function of buffered [metal], was determined using the following: () ϭ [metal] buffered /(K metal ϩ [metal] buffered ). K metal ϭ K D (tightest site) of sensor for metal, experimentally determined (K metal sensor ) ( Table 1) (48). For FrmR (and variants), K metal was additionally calculated for the DNA-bound form (K metal sensor⅐DNA ) from the coupling constant (K C ) (Fig. 10E). The concentration of apoand Zn(II)-protein at a given [Zn(II)] was calculated using the number of tetramers per cell (FrmR and variants; Fig. 9K), and a cell volume of 1 fl. Fractional DNA occupancies with apo-and Zn(II)-protein over a range of protein concentrations were modeled using Dynafit (43) (1:1 binding of tetramer/DNA; assuming the binding of one tetramer conferred repression) with K DNA (from Table 2) and [P frmRA ] as fixed parameters (sample Dynafit script is also shown in the supplemental material). [P frmRA ] was calculated assuming 15 copies cell Ϫ1 (due to the presence on low copy number reporter plasmid) and a cell volume of 1 fl. The response was set at 1/[P frmRA ]. The fractional occupancy of P frmRA with apo-and Zn(II)-protein was summed to give fractional occupancy of P frmRA at any given buffered [Zn(II)].
Results
CsoR/RcnR-like Repressor FrmR Solely Detects Formaldehyde and Not Metals-Despite similarity between FrmR and metalsensing transcriptional de-repressors, exposing Salmonella cultures to maximum noninhibitory concentrations of MnCl 2 , C 6 H 5 FeO 7 , CoCl 2 , NiSO 4 , CuSO 4 or ZnSO 4 does not de-repress expression from P frmRA -frmR fused to lacZ in ⌬frmR cells ( Fig. 2A). Exposure of cells to MNIC of formaldehyde does derepress expression from P frmRA -frmR ( Fig. 2A). The formaldehyde response was lost, and basal expression was elevated in cells harboring a similar construct (P frmRA ) devoid of frmR (Fig. 2, B and C). Thus, in common with E. coli FrmR (30), the Salmonella homologue represses expression from the frmRA operator-promoter with repression alleviated by formaldehyde, and here we show that repression by Salmonella FrmR is not alleviated by metals.
Substitution of FrmR Glu-64 for an RcnR Metal-Ligand Confers Zn(II) and Cobalt Detection in Cells-Replacement of FrmR residue 64 (glutamate) with histidine (a metal-ligand in RcnR; Fig. 1, A and B) generates a metal-sensing variant of FrmR (Fig. 2D). Repression is alleviated by CoCl 2 and ZnSO 4 in ⌬frmR cells containing P frmRA -frmRE64H (but not P frmRA -frmR) fused to lacZ (Fig. 2, D and E). MnCl 2 , C 6 H 5 FeO 7 , NiSO 4 , and CuSO 4 did not affect expression from P frmRA -frmRE64H (or P frmRA -frmR), although formaldehyde responsiveness was retained. Notably, metal-responsive family members RcnR and CsoR do respond to nickel and copper (15)(16)(17). In summary, a single residue change that mimics the metal-sensing site of RcnR is sufficient to create a detector of cellular Zn(II) and cobalt.
FrmRE64H and FrmR Both Bind Co(II), Cu(I), and Zn(II)-It was anticipated that the introduced histidine residue created a metal-binding site in FrmR. However, titration of FrmRE64H or FrmR with Co(II) results in the appearance of spectral features in the region of 330 nm, indicative of S3 Co(II) ligand-tometal charge transfer (LMCT) bands consistent with Co(II) binding to both proteins (Fig. 3, A and E). For FrmR and FrmRE64H, the intensities of the feature at saturation ϳ0.9 ϫ 10 3 M Ϫ1 cm Ϫ1 are consistent with a single thiolate ligand (50). The intensities of a second set of Co(II)-dependent features in the region of 600 nm, indicative of d-d transitions (50), suggest tetrahedral coordination geometry. Binding curves are linear up to 1 eq of Co(II), implying K Co(II) is too tight to estimate by this method (Fig. 3, A and E, insets). Cu(I)-dependent features similarly indicate tight binding of at least 1 eq of metal, and 1 eq of Cu(I) binds sufficiently tightly to co-migrate with either protein during size exclusion chromatography (Fig. 3, B, C, F, and G). One equivalent of Zn(II) (which is spectrally silent) also co-migrates with each protein during size exclusion chromatography (Fig. 3, D and H). Preliminary Ni(II)-binding experiments with FrmRE64H were ambiguous, but because no in vivo nickel response had been detected for FrmRE64H, Ni(II) affinities were not pursued.
Determination of K Zn(II) , K Co(II) , and K Cu(I) for FrmRE64H and FrmR-As the FrmRE64H variant, but not FrmR, responds to Zn(II) and cobalt in cells, it was anticipated that this substitution had succeeded in tightening the affinity for these metals. The chromophores mag fura-2 and quin-2 form 1:1 complexes with Zn(II) and undergo concomitant changes in absorbance upon metal binding, which can be used to monitor competition with proteins and hence to estimate protein K Zn(II) (11, 13, 44 -48, 51). Titration of 10.1 M or 12.2 M mag fura-2 with Zn(II) in the presence of FrmRE64H (18.8 M, monomer) or FrmR (20.4 M, monomer), respectively, gave negligible change in absorbance up to 0.5-0.75 eq of Zn(II) per protein monomer, implying competition with the chromophore for metal (Fig. 4, A and B). At these protein concentrations, CsoR/RcnR family members exist as tetramers with four metal-binding sites per tetramer, and with some evidence of negative cooperativity between sites (12,17,18,52). A 1:1 stoichiometry equating to four Zn(II) per tetramer was observed for both FrmRE64H and FrmR (Fig. 3, D and H), but the fourth sites are too weak to compete with mag fura-2 (hence competition is complete after addition of ϳ24.2 and ϳ27.5 M Zn(II) (Fig. 4, A and B)). Data were fit to models describing tight binding of 3 M eq of Zn(II)/ tetramer, with dashed lines representing simulated curves describing K Zn1-3 10-fold tighter or 10-fold weaker than the calculated affinity (Fig. 4, A and B). For both proteins, this suggests K Zn1-3 at or approaching the tighter limit of the assay using mag fura-2 (K Zn(II) mag fura-2 ϭ 2.0 ϫ 10 Ϫ8 M). Competitions were therefore conducted with 13.4 or 14.1 M quin-2 (K Zn(II) quin-2 ϭ 3.7 ϫ 10 Ϫ12 M) and FrmRE64H (42.7 M, monomer) or FrmR (39.9 M, monomer), respectively (Fig. 4, C and D). Again, data were fit to models describing binding of 3 M eq of Zn(II)/tetramer (as expected, the fourth sites did not show competition with quin-2) with dashed lines in Fig. 4, C and D, describing simulated curves for K Zn1-3 10-fold tighter or 10-fold weaker than the calculated affinity of the proteins. Mean values of K Zn1-3 2.33 (Ϯ0.3) ϫ 10 Ϫ11 M and 1.7 (Ϯ0.7) ϫ 10 Ϫ10 M for FrmRE64H and FrmR, respectively, are thus within the range of this assay (Fig. 4, C and D, and Table 1).
Cuprous affinities of both proteins were determined using BCA ( 2 ϭ 10 17.2 M Ϫ2 (48)) revealing competition in each case for 2 M eq of Cu(I) per monomer, but with greater competition and hence tighter affinity for FrmRE64H than FrmR (Table 1 and Fig. 4, I and J). The data were fit to models describing binding of eight Cu(I) ions per tetramer (see Table 1 footnotes for details), which for FrmR depart from simulated curves describing binding of the tightest two Cu(I) ions (K Cu1-2 ) 10-fold tighter and 10-fold weaker than the fitted value (Fig. 4I), giving FrmR K Cu1-2 ϭ 4.9 (Ϯ1.6) ϫ 10 Ϫ15 M (Table 1). In contrast, K Cu1-2 for FrmRE64H is too tight to measure by this assay. However, FrmRE64H does not significantly compete with 10 M BCS ( 2 ϭ 10 19.8 M Ϫ2 ) (48), with saturation of the BCS 2 Cu(I) complex observed at ϳ5 M CuCl (Fig. 4K). These data imply that FrmRE64H K Cu1-2 can only marginally depart from the value estimated using BCA (K Cu1-2 ϳ5 ϫ 10 Ϫ16 M) ( Table 1). It is noted that the final absorbance for the BCS 2 Cu(I) complex in the presence of protein was lower than predicted from its known extinction coefficient; hence, the possibility of a ternary complex cannot be ruled out. In summary, the two metals that FrmRE64H now detects, Co(II) and Zn(II), bind approximately an order of magnitude more tightly than to FrmR (Table 1).
Cognate K metal of Salmonella Zn(II), Cobalt, and Cu(I) Sensors ZntR, Zur, RcnR, and CueR-If metal sensing is dictated by relative affinity within the set of Salmonella metal sensors, the affinity of FrmRE64H for Zn(II) and Co(II) would need to become comparable with cellular sensors for these metals. Conversely, Cu(I) affinity would need to remain weaker than Cu(I)-sensing CueR making Cu(I) still undetectable (40,41,54). The Salmonella sensors for Zn(II) and Co(II) are confirmed here as ZntR, Zur, and RcnR (Fig. 5) (55,56). Expression is induced from P zntA and P rcnA in wild type cells exposed to MNIC ZnSO 4 and CoCl 2 , respectively (Fig. 5, A and B). Notably minimal media for this strain (SL1344) require histidine that may influence Ni(II) availability. Titration of ZntR with Co(II), as a spectral probe for Zn(II)-binding sites, generated features diagnostic for LMCTs and d-d transitions consistent with ϳ3 thiolate-Co(II) bonds per ZntR monomer and tetrahedral coordination geometry (Fig. 5C) (50). These features saturate at ϳ1 eq of Co(II) and are bleached by addition of ϳ1 eq of Zn(II) (Fig. 5D). Zn(II) (ϳ1 eq) also quenched ZntR auto-fluorescence (Fig. 5E). Salmonella ZntR is expected to be a dimer based on similarity to the E. coli homologue (49), implying a stoichiometry of two Zn(II) ions per dimer. Titrations of 18.6 M quin-2 and ZntR (16.0 M, monomer) with Zn(II) were fit to models describing detectable binding of two distinguishable Zn(II) ions per dimer (K Zn1 and K Zn2 ); the estimated mean values are shown in Table 1 (Fig. 6A). The optimized curves depart from simulated curves describing K Zn1 or K Zn2 10-fold tighter or 10-fold weaker than their fitted values, although K Zn2 does approach the simulated curve describing K Zn2 as 10-fold weaker (Fig. 6A). It remains possible that a higher Zn(II) stoichiometry may be achieved for Salmonella ZntR under some conditions (as observed for E. coli ZntR (49)). Importantly, we show here that ZntR binds only two Zn(II) ions per dimer with sufficient affinity to compete with quin-2.
Zur from Salmonella, and in common with other bacteria (57-60), contains a structural Zn(II) ion that remains associated with the protein (20 M, monomer) in the presence of excess (1 mM) EDTA (Fig. 5F). In the absence of EDTA, at least one further equivalent of Zn(II) binds sufficiently tightly to comigrate with the protein during size exclusion chromatography (Fig. 5F). Titration of apo-Zur (Zn(II)-saturated at the structural site) with Co(II) generated features diagnostic for LMCTs and d-d transitions consistent with two to three coordinating thiol groups, which saturate between 1.5 to 2 eq of Co(II) per monomer (Fig. 5G) (50). These features are bleached with addition of 1.5 to 2 eq of Zn(II) (Fig. 5H). Zur family members exist as dimers (57)(58)(59), and here data show there are at least three exchangeable sites per dimer that are accessible to both Co(II) and Zn(II). A total of 35.5 M Zn(II) is required to fully saturate Zur (11.7 M, monomer) and mag fura-2 (12.1 M), consistent with two monomer equivalents ((2 ϫ 11.7 M) ϩ 12.1 M ϭ 35.5 M) of exchangeable Zn(II) binding to Zur (І four sites per dimer) with sufficient affinity to show some competition with mag fura-2. Of these, an estimated three sites per dimer com-pletely withhold Zn(II) from mag fura-2 (Fig. 5I). The data in Fig. 5I were fit to a model describing four exchangeable sites per Zur dimer with dashed lines representing simulated curves describing K Zn4 10-fold tighter and 10-fold weaker than the fitted K Zn4 value, and a tighter limit for K Zn4 was estimated from replicate titrations (Table 1). To estimate K Zn1-2 and K Zn3 , quin-2 (9.6 M) and Zur (13.7 M, monomer) were titrated with Zn(II) and fit to models describing competition from 1.5 eq of Zn(II) per monomer (exchangeable sites 1-3 per dimer, but not site 4) with mean values for K Zn1-2 and K Zn3 shown in Table 1 (Fig. 6B). The optimized curve departs from simulated curves describing K Zn1-2 as 10-fold tighter or 10-fold weaker than the fitted value. K Zn3 departs from a simulated curve describing K Zn3 as 10-fold tighter, but it approaches a simulated curve describing K Zn3 as 10-fold weaker (Fig. 6B).
Titration of RcnR with Ni(II) or Co(II) generated spectral features that saturated at 1 eq of metal (Fig. 5, J and K). Ni(II)-RcnR demonstrated features Ͻ300 nm and weak d-d transitions consistent with a six coordinate octahedral Ni(II)-binding site, as seen for E. coli RcnR (17). An additional Co(II)-dependent feature at 314 nm appeared with time (Fig. 5L). Co(II)-dependent fluorescence quenching of fura-2 (13.2 M) in the presence of RcnR (18.4 M, monomer) was fit to a model describing competition from three sites per RcnR tetramer with two sites (K Co1-2 ) tighter than the third (K Co3 ) (Fig. 6C). The optimized curve departs from simulated curves describing K Co1-2 as 10-fold tighter or 10-fold weaker and K Co3 as 10-fold tighter than the respective fitted values. Mean values (generated from multiple titrations) for K Co1-2 and a range for K Co3 are shown in Table 1.
Sensor
Metal a Data were fit to a model describing Co(II) binding with equal affinity to four sites (K Co1-4 ) on an FrmR or FrmRE64H tetramer, determined by competition with BisTris (n ϭ 3). A weaker limit is defined for FrmRE64H. b Data were fit to a model describing Zn(II) binding with equal affinity to the first three sites (K Zn1-3 ) on an FrmR or FrmRE64H tetramer, determined by competition with quin-2 (n ϭ 3). c Data were fit to a model describing Cu(I) binding with equal affinity to the first two sites (K Cu1-2 ), with equal affinity to sites 3 and 4 (K Cu3-4 ), and with equal affinity to sites 5-8 (K Cu5-8 ) on an FrmR tetramer (with K Cu1-2 Ͻ K Cu3-4 Ͻ K Cu5-8 ), determined by competition with BCA (n ϭ 4). A tighter limit is defined for FrmR K Cu5-8 . d Fit to a model describing Co(II) binding to the first site (K Co1 ) on an FrmRE64H tetramer, determined by competition with fura-2 (n ϭ 3). e Data were fit to a model describing Cu(I) binding with equal affinity to the first two sites (K Cu1-2 ), with equal affinity to sites 3 and 4 (K Cu3-4 ), with equal affinity to sites 5 and 6 (K Cu5-6 ), and with equal affinity to sites 7 and 8 (K Cu7-8 ) on an FrmRE64H tetramer (with K Cu1-2 Ͻ K Cu3-4 Ͻ K Cu5-6 Ͻ K Cu7-8 ), determined by competition with BCA (n ϭ 4). A tighter limit is defined for FrmRE64H K Cu7-8 . f Approximation reflects the fact that sites 1 and 2 on an FrmRE64H tetramer outcompete BCA for Cu(I) but fail to compete with BCS (although the formation of a ternary complex cannot be ruled out). g Data were fit to a model describing Zn(II) binding to three sites (K Zn1-2 and K Zn3 ) on a Zur dimer (with the structural site already filled) with equal affinity to the first two sites, (K Zn1-2 ) and K Zn1-2 Ͻ K Zn3 , determined by competition with quin-2 (n ϭ 3). h Data were fit to a model describing Zn(II) binding to the fourth site (K Zn4 ) on a Zur dimer (with the structural site already filled), determined by competition with mag fura-2 (n ϭ 3). Only a tighter limit can be determined. i Data were fit to a model describing Zn(II) binding to two sites (K Zn1 and K Zn2 ) on a ZntR dimer (K Zn1 Ͻ K Zn2 ), determined by competition with quin-2 (n ϭ 3). j Data were fit to a model describing Co(II) binding to three sites (K Co1-2 and K Co3 ) on an RcnR tetramer with equal affinity to the first two sites (K Co1-2 ) and K Co1-2 Ͻ K Co3 , determined by competition with fura-2 (n ϭ 3). k Range represents the fact that RcnR exhibits linear absorbance features upon titration with Co(II) to 1 M eq per monomer, but site 3 does not sufficiently complete with fura-2 for Co(II). l Data were determined by competition with BCS (n ϭ 6) and describing binding of Cu(I) to the first site (K Cu1 ) on a CueR dimer.
Salmonella CueR out-competes a 10-fold molar excess of BCS (41), and here a 100-fold and then a 75-fold excess of BCS (the latter in Fig. 6D) were used to estimate K Cu1 (Table 1). In summary, the tightest exchangeable sites of the endogenous metal sensors are tighter for their cognate metals than either FrmR or FrmRE64H, in every case (Table 1). However, the dif- Solid line describes competition from ZntR for 1 M eq of Zn(II) per monomer (two independent sites per dimer; K Zn1 and K Zn2 ). Dashed lines describe K Zn1 10-fold tighter and 10-fold weaker than fitted K Zn1 (K Zn2 fixed to fitted K Zn2 ). Dotted lines describe K Zn2 10-fold tighter and 10-fold weaker than fitted K Zn2 (K Zn1 fixed to fitted K Zn1 ). B, representative (n ϭ 3) quin-2 absorbance upon titration of quin-2 (9.6 M) and Zur (13.7 M, monomer) with ZnCl 2 . Solid line describes competition from Zur for 1.5 M eq of Zn(II) per monomer (three sites per dimer with two independent binding events: K Zn1-2 , and K Zn3 ). Dashed lines describe K Zn1-2 10-fold tighter and 10-fold weaker than fitted K Zn1-2 (K Zn3 fixed to fitted K Zn3 ). Dotted lines describe K Zn3 10-fold tighter and 10-fold weaker than the fitted K Zn3 (K Zn1-2 fixed to fitted K Zn1-2 ). C, representative (n ϭ 3) fura-2 fluorescence emission upon titration of fura-2 (13.2 M) and RcnR (18.4 M, monomer) with CoCl 2 . Solid line describes competition from RcnR for 0.75 M eq of Co(II) per monomer (three sites per tetramer, with two independent binding events: K Co1-2 and K Co3 ). Dashed lines describe K Co1-2 10-fold tighter and 10-fold weaker than the fitted K Co1-2 (K Co3 fixed to fitted K Co3 ). Dotted lines describe K Co3 10-fold tighter and 10-fold weaker than the fitted K Co3 (K Co1-2 fixed to fitted K Co1-2 ). ference in K Zn(II) between FrmRE64H and cognate Zn(II) sensors is the smallest.
Cognate Metal Sensors Out-compete FrmR for Metal-To confirm, or otherwise, that FrmR K metal is weaker than CueR K Cu(I) , ZntR K Zn(II) , and RcnR K Co(II) , pairwise competitions were conducted for the tightest metal-binding site in which metallated FrmR was incubated with apo-forms of the respective sensors. Cu(I)-FrmR co-migrates with copper following heparin affinity chromatography (Fig. 7A). However, after mixing Cu(I)-FrmR with apo-CueR (which can be differentially resolved), copper migrates with CueR (Fig. 7A). Likewise after mixing Zn(II)-FrmR with apo-ZntR, Zn(II) predominantly migrates (using different fractionation buffers to those in Fig. 7A) with ZntR (Ͼ 90% of control) (Fig. 7B). Diagnostic spectral features (d-d transitions) that discern Co(II)-FrmR, with tetrahedral binding geometry, from Co(II)-RcnR, with octahedral binding geometry, are lost upon addition of apo-RcnR to Co(II)-FrmR (Fig. 7C). Thus, in every case the cognate sensor out-competes FrmR confirming that FrmR K metal is weaker. Relative (to the cognate sensors) metal affinity could account for why wild type FrmR does not respond to metals within cells. Fig. 8A compares the calculated fractional occupancies of the tightest exchangeable sites (from K metal in Table 1) of FrmR and FrmRE64H for Zn(II), Cu(I), and Co(II) with the respective cognate Salmonella sensors, as a function of metal concentration. To detect Cu(I), FrmRE64H would require intracellularly buffered Cu(I) concentrations to rise ϳ3 orders of magnitude higher than necessary for detection by CueR, which could explain why FrmRE64H remains unresponsive to Cu(I). In contrast, partial Zn(II) occupancy of FrmRE64H will occur at Zn(II) concentrations below those required to saturate ZntR (Fig. 8A). Thus, theoretically, a 10-fold increase in K Zn(II) of FrmRE64H relative to FrmR may be sufficient to enable some Zn(II) detection within the cell.
Glutathione Enhances Metal Detection by FrmRE64H and RcnR-In addition to responding to Zn(II), the FrmRE64H variant also responds to cellular cobalt (Fig. 2D), yet K Co(II) for FrmRE64H is ϳ500-fold weaker than the endogenous cobalt sensor RcnR (Fig. 8A and Table 1). An ϳ10-fold increase in K Co(II) alone cannot readily explain why this variant of FrmR has become responsive to cobalt. Recent studies of the complement of metal sensors from a cyanobacterium concluded that the detection of Zn(II) and nickel matched predictions based upon equilibrium thermodynamics, but this was untrue for cobalt (6,11,18). In that system, a substantial kinetic component was invoked for the preferential distribution of cobalt to the cobalt sensor and away from sensors for other metals (6).
The possibility that glutathione is required for the detection of cobalt (and Zn(II)) by FrmRE64H was investigated in ⌬frmR/ ⌬gshA cells containing P frmRA -frmRE64H fused to lacZ (Fig. 8, B and C). Cells lacking glutathione showed a negligible response to either metal. Previous studies of Zn(II) sensors have found that the low molecular weight thiol, bacillithiol, competes for metal thus reducing responses (61). ZntR-mediated expression in response to Zn(II) from the zntA promoter shows negligible difference in ⌬gshA cells compared with wild type (Fig. 8D). However, in common with regulation by FrmRE64H, the response of RcnR to cobalt was also reduced, but not lost, in cells missing glutathione (Fig. 8E). Thus, glutathione aids the Table 1. B, -galactosidase activity in ⌬frmR (filled circles) or ⌬frmR/⌬gshA (open circles) containing P frmRA -frmRE64H following exposure (2 h) of logarithmic cells to CoCl 2 . C, as B but with ZnSO 4 instead of CoCl 2 . D, -galactosidase activity in Salmonella (wild type, defined earlier) (filled circles) or ⌬gshA (open circles) containing P zntA grown as described in C. E, -galactosidase activity in wild type (filled circles) or ⌬gshA (open circles) containing rcnR-P rcnR following growth in conditions described in B. detection of cobalt by two different sensors but has varied effects on Zn(II) sensing.
Basal Repression by FrmRE64H Is Less than by FrmR-The tightening of K Zn(II) (and K Co(II) ) is modest suggesting that additional factors might contribute to the gain-of-metal detection by FrmRE64H (Figs. 2D and 8A and Table 1). It was noted that basal expression from the frmRA promoter is greater in cells containing FrmRE64H compared with wild type FrmR (Fig. 2, D and E). Expression remains elevated in cultures treated with EDTA or the Zn(II) chelator TPEN, implying that this is not a response to basal levels of intracellular metal (Fig. 9, A and B). As a control, ZntR-mediated -galactosidase expression from the zntA promoter does decline upon equivalent treatment with EDTA or TPEN (Fig. 9, C and D). Furthermore, because metal responsiveness from P frmRA -frmRE64H is affected by glutathione (Fig. 8, B and C), glutathione levels were measured but found not to be significantly altered between ⌬frmR cells expressing P frmRA -frmR or P frmRA -frmRE64H in either the presence (3.8 (Ϯ0.5) and 4.5 (Ϯ0.6) mM, respectively) or absence (4.4 (Ϯ0.8) and 3.3 (Ϯ0.4) mM, respectively) of added Zn(II).
Codon Optimization or De-optimization Alters FrmRE64H or FrmR Cell Ϫ1 but Does Not Switch Metal Perception-Loss of repression by FrmRE64H compared with FrmR could in theory be due to reduced protein abundance, for example due to impaired stability of the mutant protein. To test this suggestion, constructs were generated in which FrmRE64H codons were optimized for efficient translation (62,63), designated P frmRA -frmRE64H UP . Conversely, FrmR expression was de-optimized by introduction of rare arginine codons (62,63), designated P frmRA -frmR DOWN . This approach was chosen to alter abundance of the proteins while preserving the transcriptional architecture. Basal expression was enhanced in cells containing frmR DOWN and reduced in cells containing frmRE64H UP relative to the respective controls and yielding matched levels of basal lacZ expression by frmR DOWN versus frmRE64H UP (Fig. 9E). Moreover, the numbers of FrmRE64H and FrmR tetramers per cell, as determined by quantitative mass spectrometry, were indeed increased and decreased, respectively, in cells harboring the codon-altered variants (Fig. 9, G-K). Cells containing any of the variants, frmRE64H, frmRE64H UP , frmR, frmR DOWN , all showed enhanced expression following exposure to MNIC of formaldehyde, but crucially only the strains expressing FrmRE64H responded to Zn(II) and cobalt (Fig. 9F). Notably, the abundance of FrmRE64H is no less than FrmR (Fig. 9K), and an alternative explanation is needed for elevated basal expression in cells containing FrmRE64H.
⌬G C Zn(II)-FrmRE64H⅐DNA Is Less than ⌬G C Zn(II)-FrmR⅐DNA with Apo-FrmRE64H K DNA Being Weaker-Fluorescence anisotropy was used to monitor interactions between either FrmRE64H or FrmR and a fluorescently labeled double-stranded DNA fragment of the target operator-promoter, frmRAPro (Fig. 1C). DNA-protein stoichiometry was first determined by monitoring DNA binding to a relatively high concentration of frmRAPro (2.5 M) with saturation observed at ϳ20 M FrmRE64H or FrmR (monomer) consistent with binding of two tetramers (Fig. 10, A and B). A limiting concentration of frm-RAPro (10 nM) was subsequently titrated with apo-or Zn(II)saturated FrmRE64H or FrmR in the presence of 5 mM EDTA or 5 M Zn(II), respectively, and anisotropy data were fitted to models describing the binding of two nondissociable proteintetramers per DNA molecule (Fig. 10, C and D). The calculated FIGURE 9. Basal expression from P frmRA -frmRE64H is higher than P frmRA -frmR. -Galactosidase activity in ⌬frmR containing P frmRA -frmR (filled circles) or P frmRA -frmRE64H (open circles) following growth to early exponential phase in the presence of EDTA (A) or TPEN (B). C, expression from P zntA in wild type Salmonella, grown as described in A, or D, as described in B. E, expression in ⌬frmR containing P frmRA -frmR (white bars), P frmRA -frmR DOWN (dashed white bars), P frmRA -frmRE64H (gray bars), or P frmRA -frmRE64H UP (dashed gray bars) following growth to early exponential phase, and F, following exposure (2 h) to Zn(II), Co(II) or formaldehyde, or untreated control. G-J, multiple reaction monitoring, quantitative MS of cell extracts. Representative (n ϭ 3) extracted LC-MS chromatograms of ion transitions detected in ⌬frmR containing P frmRA -frmR (G), P frmRA -frmR DOWN (H), P frmRA -frmRE64H (I), or P frmRA -frmRE64H UP (J). Transitions 451. 24 DNA binding affinities (n Ն 3) are shown in Table 2. K DNA was similarly determined for Cu(I)-FrmR (Table 2), but weak K Co(II) precluded equivalent K DNA estimations for Co(II)-saturated proteins ( Table 1). Metal binding weakens DNA binding, but unexpectedly this is true of wild type FrmR as well as FrmRE64H.
The degree to which metal binding allosterically inhibits DNA binding has previously been expressed as the coupling free energy (⌬G C ) calculated from the ratio of K DNA of apo-and holo-proteins and using a standard thermodynamic function (see Refs. 9,11,64 and the footnotes to Table 2). This yields Zn(II)-FrmR ⌬G C ϭ ϩ2.03 (Ϯ0.08) kcal mol Ϫ1 (⌬G C Zn(II)-FrmR⅐DNA ) and Zn(II)-FrmRE64H ⌬G C ϭ ϩ1.24 (Ϯ0.16) kcal mol Ϫ1 (⌬G C Zn(II)-FrmRE64H⅐DNA ) ( Table 2). Unexpectedly, this approach revealed that Zn(II) is less, not more, allosterically effective when binding to FrmRE64H than to FrmR, with the former having the smaller coupling free energy. However, inspection of the DNA binding curves (Fig. 10, C and D), and K DNA values ( Table 2), reveals that this results from apo-FrmRE64H having a weaker DNA affinity than apo-FrmR. These data explain the loss of basal repression by FrmRE64H. Importantly, despite a lesser ⌬G C , because K DNA of Zn(II)-FrmRE64H is not tighter than Zn(II)-FrmR (Table 2), at equivalent Zn(II) saturation DNA occupancy by FrmRE64H will still be less than FrmR, in effect rendering FrmRE64H more sensitive to de-repression. Moreover, assuming a closed system, coupled thermodynamic equilibria infer that any effect of metal binding on K DNA is reciprocated in an effect of DNA binding on K Zn(II) (Fig. 10E) (9,61,65,66). Thus a smaller ⌬G C Zn(II)-FrmRE64H⅐DNA means an even tighter K Zn(II) for DNAbound FrmRE64H relative to FrmR. The inferred K Zn(II) sensor⅐DNA (on-DNA) is 5.3 ϫ 10 Ϫ9 and 1.9 ϫ 10 Ϫ10 M for FrmR and FrmRE64H, respectively. A weaker K DNA thereby contributes in two ways to the mechanism enabling metal perception by the FrmRE64H variant, and overall, a tighter K Zn(II) plus a weaker K DNA act in combination to confer Zn(II) sensing.
Discussion
Substitution of one amino acid has created a metal sensor from the formaldehyde-responsive, DNA-binding transcriptional de-repressor FrmR (Fig. 2). Contrasting the biochemical properties of these two proteins (FrmR and FrmRE64H), along with endogenous Salmonella metal sensors (Figs. 3-7, 8A, and 10, A and B, and Tables 1 and 2), demonstrates what is required for metal sensing within cells. These data test (by gain-of-func- Data were fit to a model describing a 2:1 protein tetramer (nondissociable):DNA stoichiometry (binding with equal affinity), and lines represent simulated curves produced from the average K DNA determined across the experimental replicas shown. E, coupled thermodynamic equilibria (assuming a closed system) describing the relationship between FrmR tetramer (P), Zn(II) (Z), and P frmRA (D) (9,65,66). The coupling constant (K C ) is determined from the ratio K 4 /K 3 tion) theories that have been developed from correlations between the biochemical properties of various metal sensor proteins and the metals they detect (1,2,6,9,11,18). The single residue change in FrmRE64H tightens K Zn(II) by ϳ10-fold and weakens apo-K DNA by ϳ10-fold, and in combination these changes to metal binding and DNA binding make Zn(II) sensitivity comparable with endogenous Zn(II) sensors ZntR and Zur (Figs. 2, 8A, and 10, A and B). In common with recent studies of cobalt detection in other cells (6,67), relative access (a major kinetic contribution) is invoked to explain the gain of cobalt detection by FrmRE64H, a response that is assisted by glutathione (Fig. 8).
Unexpectedly, the native FrmR protein binds Co(II), Zn(II), and Cu(I) (Fig. 3 and 4). Moreover, Zn(II) and Cu(I) are shown by fluorescence anisotropy to be allosterically effective and able to weaken K DNA , thereby raising questions about why native FrmR does not normally de-repress gene expression in response to these metal ions ( Fig. 10C and Table 2). Crucially, by characterizing Salmonella ZntR and RcnR (Fig. 5, A-E and J-L), and by measuring cognate K metal of Salmonella Zn(II)sensing ZntR and Zur, Cu(I)-sensing CueR, and cobalt-sensing RcnR (Figs. 5, F-I, and 6, and Table 1), it becomes evident that in each case the respective metal affinity of FrmR is substantially weaker than each cognate sensor and it cannot compete (Figs. 7 and 8A). Values for K Zn(II) , K Co(II) , and K Cu(I) for Salmonella Zur, ZntR, RcnR, and CueR determined here are comparable with analogous sensors from some other organisms (Figs. 5 and 6) (1,11,17,28,64,68). The ability of FrmR to respond to metals in vitro but not within cells (Figs. 2, A and E, and 10C), coupled with relative K metal values (Table 1), provides another line of evidence that metal sensing within cells is a combined product of a set of sensors (1). The best sensor in the set is the one that responds to each element (1,11). In each case K metal for FrmR is substantially weaker than the respective K metal for the best in the set of sensors in Salmonella (Fig. 8A), and so it does not respond.
The E64H substitution was intended to create a metal-binding site more analogous to RcnR and indeed K Co(II) plus K Zn(II) and K Cu(I) are all tighter by ϳ10-fold compared with FrmR ( Fig. 4 and Table 1), but they all remain weaker than the respective cognate metal sensor (Table 1 and Figs. 6 and 8A). Nonetheless, for Zn(II) the affinity of FrmRE64H approaches that of known Zn(II) sensors such that there is overlap in fractional metal occupancy curves as a function of [Zn(II)] (Fig. 8A). The free energy of coupling of Zn(II) binding to DNA binding for FrmRE64H also changes relative to FrmR (Fig. 10, C and D, and Table 2). However, the change is the opposite of what might be predicted (1,9,11,64), with Zn(II) appearing to be less, not more, allosterically effective in the mutant protein (⌬G C Zn(II)-FrmRE64H⅐DNA Ͻ ⌬G C Zn(II)-FrmR⅐DNA ). Importantly, these values incorporate a much weaker K DNA for apo-FrmRE64H (Fig. 10D), which lowers overall promoter occupancy enhancing sensitivity to de-repression. Moreover, if regulation is dominated by metal binding to the DNA-protein complex to promote DNA dissociation, then the lesser ⌬G C of FrmRE64H infers an even tighter K Zn(II) (assuming a closed system (9, 61, 65, 66)) of the active DNA-bound species relative to FrmR (Fig. 10E).
Unlike for Zn(II), the enhanced K metal of FrmRE64H does not approach that of cognate sensors for Cu(I) or cobalt (Fig. 8A). Thus, relative affinity is consistent with the continued inability of FrmRE64H to detect Cu(I). However, the gain of cobalt sensing by FrmRE64H is enigmatic. In studies of the model cyanobacterium, Synechocystis PCC 6803, the detection of nickel and Zn(II) correlated with relative affinity and relative allostery within the set of sensors, but the detection of cobalt was attributed to relative access (1,6,11,18). Somehow, the cobalt effector was preferentially available to the cobalt sensor CoaR relative to sensors for other metals. Thus, although Zn(II) sensors ZiaR and Zur had tighter affinities for Co(II) than CoaR and both were (allosterically) responsive to Co(II) in vitro, neither ZiaR nor Zur responded to cobalt in the cell, whereas CoaR with weaker K Co(II) responded (6). Unlike Synechocystis CoaR, because FrmR has not evolved to detect cobalt, it is difficult to understand why cobalt should be channeled to FrmRE64H (Fig. 8A and Table 1). FrmR and cobalt-sensing RcnR do share common ancestry, and so interaction with a cobalt donor could perhaps be an evolutionary relic. Glutathione complexes are components of the buffered cellular pools for a number of metals (69). Because the substrates for formaldehyde dehydrogenase, FrmA, which is regulated by FrmR, are S-(hydroxymethyl) glutathione and S-nitrosoglutathione, it is also formally possible that FrmR can respond to glutathione adducts (33,70). Here, we see that cobalt and Zn(II) sensing by FrmRE64H is somehow assisted by glutathione (Fig. 8, B and C). This is opposite to what has previously been observed in the detection of cellular Zn(II) in other systems where the glutathione-substitute, bacillithiol, competes with Zn(II) sensors (61), and here we see a negligible effect of glutathione on Zn(II) sensing by ZntR (Fig. 8D). Whether glutathione aids the detection of cobalt due to cobalt binding and trafficking or due to redox effects on the oxidation state of cobalt or its ligands remains to be established.
Basal repression by FrmRE64H is less than FrmR, and this is explained by weaker K DNA of apo-FrmRE64H (Figs. 9 and 10, C and D). In pursuing the explanation for this phenotype, the abundance of both proteins was adjusted by optimizing or deoptimizing codons, an approach that preserves the transcriptional architecture. These changes were confirmed to increase and decrease the number of copies of FrmRE64H and FrmR per cell, respectively, with concomitant gain and loss of repression leading to matched levels of basal expression (Fig. 9, E-K). However, the magnitude of these changes in protein abundance alone was insufficient to switch FrmR into a metal sensor or to stop FrmRE64H from responding to Zn(II) or cobalt (Fig. 9F). Nonetheless, in theory, a change in relative protein abundance could alter metal competition with other sensors by mass action, and relative abundance should be added to the list of relative properties (affinity, allostery, and access) that determine which sensor is the best in the set to respond to a metal.
By how much do tighter K Zn(II) and weaker K DNA values of the apoprotein enhance the sensitivity of FrmRE64H to Zn(II), alone and in combination? By using the parameters set out in Tables 1 and 2, plus Fig. 9K, it has become possible to estimate fractional occupancy of the frmRA operator-promoter with repressor, either FrmR or FrmRE64H, as a function of [Zn(II)] (refer to "Experimental Procedures," supplemental material, and Fig. 10, F-I). First, a weaker K DNA of apo-FrmRE64H causes operator-promoter occupancy to be less than FrmR even in the absence of elevated Zn(II) (Fig. 10F), which explains the small but detectable (Figs. 2 and 9), basal de-repression. Individually, the determined tighter K Zn(II) or weaker apo-K DNA alone enhance the sensitivity of FrmRE64H to [Zn(II)] by ϳ1 order of magnitude (dotted and dashed lines on Fig. 10F), although in combination they increase sensitivity by ϳ2 orders of magnitude. If regulation is dominated by metal binding to (and promoting dissociation of) DNA-bound protein, then the inferred (weaker) K Zn(II) of the DNA-adduct becomes the relevant parameter (Fig. 10E). Under this regime (which assumes a closed system), the weaker K DNA of apo-FrmRE64H lessens ⌬G C Zn(II)-FrmRE64H⅐DNA and infers a tighter K Zn(II) of DNAbound FrmRE64H (on DNA) relative to FrmR. This, in combination with the measured tighter K Zn(II) , enhances sensitivity to [Zn(II)] by ϳ3 orders of magnitude (Fig. 10H).
There is ambiguity about the buffered concentrations of metals in cells (1). These values are important because relative metal availability influences metal occupancy by metalloproteins (71,72). Plausible limits on cellular buffered [Zn(II)] are defined by FrmRE64H, FrmR, FrmRE64H UP , and FrmR DOWN (Fig. 10, F-I). In the absence of elevated exogenous Zn(II), for FrmRE64H to fully repress, the buffered [Zn(II)] must be held below 10 Ϫ11 M, even if the inferred weaker (on DNA) K Zn(II) is applied to all molecules (Fig. 10H). This low (sub-nanomolar) value suggests that metalloproteins acquire competitive metals such as Zn(II) when there is no hydrated metal pool. These estimates of the buffered concentration of Zn(II) are consistent with the hypothesis that metalloproteins acquire Zn(II) via associative ligand exchange from a polydisperse buffer (1), rather than a hydrated pool of ions. This represents an associative cell biology of Zn(II).
Whether or not a significant pool of hydrated ions contributes to the metallation and hence regulation of FrmRE64H (and by inference other metal sensors) remains unresolved (73). One view is that metal sensors respond to hydrated ions at ϳ10 Ϫ9 M once the buffer is saturated (73). For FrmR DOWN to be unresponsive when cells are challenged with elevated exogenous Zn(II), the buffered [Zn(II)] must remain somewhere below 10 Ϫ8 M, even if the inferred weaker (on DNA) value for K Zn(II) is assigned to all FrmR molecules (Fig. 10I). This limit drops to 10 Ϫ10 M, if the determined (off DNA) K Zn(II) is used (Fig. 10G). Conversely, for FrmRE64H to respond, the buffered [Zn(II)] need only exceed 10 Ϫ11 M using the inferred weaker (on DNA) K Zn(II) (Fig. 10H). This places the intracellular [Zn(II)], at which FrmRE64H responds, somewhere within the range 10 Ϫ11 to 10 Ϫ8 M.
A long term aspiration is to gather analogous K metal , K DNA apoprotein , and K DNA metal-protein values, for cognate and noncognate metals, plus protein abundance for a cells' complement of metal sensors. In this manner, comparative models of sensor occupancy with metal (as in Fig. 8A) could be refined to more sophisticated and comparative models of promoter occupancy by repressors, as shown in Fig. 10, F-I. In turn, this should render transcriptional responses to metals predictable. In closing, the (subtle) biochemical changes, which in combination enable FrmRE64H to detect a sub-set of metals, support a view that (modest) differences in the relative properties of a cells' complement of sensors dictate which sensor is the best in the set to detect each metal inside cells. | 14,016.8 | 2015-06-24T00:00:00.000 | [
"Chemistry"
] |
Locating and Classifying Defects with Artificial Neural Networks
Locating defects and classifying them by their size was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). Postulated void of three different sizes (1x1 mm, 2x2 mm and 2x1 mm) were introduced in a bar with and without a notch. The size of a defect and its localization in a bar change its natural frequencies. Accordingly, synthetic data was generated with the finite element method. A parametric analysis was carried out. Only one defect was taken into account and the first five natural frequencies were calculated. 495 cases were evaluated. All the input data was classified in three groups. Each one has 165 cases and corresponds to one of the three defects mentioned above. 395 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. This procedure was followed in the cases of the plain bar and a bar with a notch. In the next stage of this work, the ANN output was optimized with ANFIS. The accuracy of the localization and classifications of the defects was improved.
Introduction
One of the main problems related with numerical methods is the convergence of the results to the solution.In the case of localization of defects, ANN is useful for this purpose.It has been used in conjunction with the Finite Element Method, following an Inverse Computation Analysis.Defects are commonly localized evaluating the mechanical response of a structure.It depends on its geometrical characteristics.Marwala and Hunt [1] established that the vibration characteristics of a body can be used for this purpose.Input data can be obtained with frequency response or modal analysis.Nonetheless, and in accordance with their conclusions, best results are obtained when this input data is based on the natural frequencies.
In the open literature, there are reported cases in which the vibration response is used in this indirect approach.Defects are identified with the Finite Element Method or experimental analysis in conjunction with ANN.As an example, the characterization of horizontal cracks in beams of isotropic materials can be mentioned.The results show that crack length can be estimated considering the dynamic response [2].Following this approach, horizontal cracks in beams are detected with the dynamic response after a transverse load is applied.The experimental results are used with ANN [3].Regarding the evaluation of delamination in composites made of carbon and epoxi, the Finite Strip Method with ANN are used [4].
Neuro fuzzy logic has been used in the classification of defects.An application in welds, can be found in [5].Through ultrasonic time-of-flight diffraction, a data set is created, then it is analyzed with three methods: ANN, fuzzy logic classifier and neuro fuzzy system.The last one seems to be the best.Also, this technique has been used in the improvement of the tracking for radar/infrared system [6].These results show that the algorithm can effectively adjust the system and has a capability in resisting uncertain information.Other application is the classification of buried pipe defects [7].In this case, the proposed neuro fuzzy model is tested versus other five different methods.The neuro fuzzy classifier performs classification accuracies around 90% on real concrete pipe images.
In preliminary work, a reduction of the accuracy of the results was observed, when a defect in a bar is localized and classified with a single ANN.For this purpose, an inverse computational analysis was followed.The first five natural frequencies were the input data.This synthetic data was obtained with the Finite Element Method.This analysis was complemented with ANN.Considering the ideas mentioned above, the purpose of this work is the improvement of such accuracy with a neuro fuzzy analysis.
As it was mentioned before, ANN gives a good localization of the defects when the natural frequencies are used to generate the input data.However, the accuracy of the evaluation is reduced when classification is involved.Neuro fuzzy analysis can be used for the optimization of the solution.For the purpose of this work, it is proposed to use both methods.A single ANN will be used in the localization of classification of defects.Its results will be optimized with a neuro-fuzzy system.The consistency of the procedure will be analyzed by introducing some changes in the geometry of the component analyzed.
Statement of the problem
The purpose of this neuro-fuzzy analysis is the localization and classification of defects in the bars shown in Fig. 1.In both cases, the thickness is 1 mm, therefore plane stress conditions may be assumed.It is considered that the dynamic response depends on the bar dimensions and the localization of the defect.Consequently, synthetic data, based on the analysis of natural frequencies, was generated when a defect was located in the shadowed area (20 mm x 8 mm).This area is located 51 mm away from the left end.Three types of defects were considered.Their dimensions are 1x1mm, 2x2 mm and 2x1mm.This arrangement was selected in such way that the voids may be considered as internal defects.In order to evaluate the robustness of the neuro fuzzy model, two cases were considered: (1) a plain bar and (2) a bar with a notch of 3x4 mm.In the last case, the effect of the geometrical discontinuities on the results was analyzed.
Numerical Analysis
Finite Element Analysis.A parametric analysis was done in order to create the synthetic data.In this way, the first five natural frequencies were calculated when only one defect is located in the shadowed area of Fig. 1.This was done with ANSYS 10.0 code.495 cases were run.This data was classified in three groups.Each one corresponds to the type of postulated defects (1x1mm, 2x2 mm and 2x1mm).165 cases are in each group.
The finite element mesh has 1507 SOLID45 elements.It was developed with the automatic mesh generation algorithm.The bar was clamped at its right end.Besides, the modulus of elasticity of the material, Poisson ratio and density are 200 GPa, 0.3 and 7800 kg/m 3 respectively.ANN Analysis.The ANN analysis was done with MATLAB 6.5.As it was mentioned before, the input data was developed with the first five natural frequencies.The output data are the coordinates X and Y of the defects and a number that classified the type of defect.The origin of the coordinate system is localized at the bottom left corner of the bar.In this way, all the points in the bar have positive coordinates.All the data was normalized.395 cases were taken randomly in order to train the proposed Neural Network and the accuracy of the localization and classification of the defects were assessed with the other 100, which were left.In the case of the plain bar, the ANN that gave best results has five layers (25-20-15-5-3).All the layers have log neurons.Training was done with backpropagation, following the TRAINSCG procedure.50000 epochs were required.
One has to keep in mind that this network gives continuous results.Consequently, the criteria of the defect classification are 0.5, 0 and 1 for 1x1, 2x2 and 2x1, respectively.The tolerance range that was considered for the first case is 0.45 to 0.55.For the second defect, it was 0-0.10 and for the third type of defect, it was 0.9-1.0. the analysis of the notched bar, an ANN of four layers (25-18-11-3) with log neurons was used.It was trained with 100000 epochs.TRAINSCG algorithm was used.The classification criterion is the following: 0, 0.5 and 1 for 2x2mm, 1x1mm and 2x1mm respectively.A similar tolerance range was used (0-0.1,0.45-0.55 and 0-9-1.0).Fuzzy Analysis.A neuro fuzzy system is proposed to improve the results previously obtained with the ANN analysis.The neuro fuzzy technique provides a method for the development of models which relate input and output data.In this procedure, a network with membership functions is proposed and the parameters of such functions are calculated with the training data.This learning method works similarly to that of neural networks.In this work, the fuzzy logic tool box of MATLAB 6.5 was used, following ANFIS procedure.The optimization of the results was done with a network, which has four linear membership functions.Their type is gbellmt.In all cases, trainning was done with 40 epochs.
Analysis of the results
Plain Bar Analysis with an ANN. the evaluation of the results related with the localization of defects, an accuracy range of ±5% was considered.Under this condition, 71 X coordinates were localized and all the Y coordinates were localized.Besides, 99 defects were classified.As an illustration purpose, ten points of the assessment data were taken randomly.Fig. 3 compares the estimated and the real coordinates.The classification of the defects is shown in Fig. 4.
Plain Bar Analysis with ANFIS.In order to improve the evaluation of the X coordinate, the ANN output was fed to the adaptive neuro fuzzy interference system (ANFIS).When a ±5% accuracy range was considered, 87 X coordinates and all Y coordinates were localized.All the defects were classified.The analysis of the absolute errors obtained in the estimation of X coordinate with ANN, shows that maximum value obtained is 10.98 mm.Two cases are around this figure.This situation is improved with the neuro fuzzy analysis, because this parameter is reduced (6.96mm).Only one case was found around this figure.In general terms, the localization error, using ANN, is less than 7 mm.In relation with neuro fuzzy analysis, the average error is lower than 4.2 mm.Regarding Y coordinate, the maximum absolute error is 0.047 mm.One has to keep in mind that the longest side of the zone, in which the defects are located, is parallel to the X axis.For this reason, a bigger absolute error in X direction is expected.The absolute error of ten defects taken randomly is shown in Fig. 5.
120
Advances in Experimental Mechanics VI Fig. 5. Absolute error for ten points taken randomly (plain bar) Notched Bar Analysis with an ANN.Initially, the obtained results were evaluated with an accuracy range of ±5%.Under this condition, 59 X coordinates were localized.In the case of Y coordinates, 32 of them were localized.Moreover, 82 were classified.When the range of accuracy is increased (±10%), 85 X coordinates and 52 Y coordinates were localized.Besides 82 defects were classified.
Notched Bar Analysis with ANFIS.Initially, the results obtained were evaluated within an accuracy range of ±5%.86 X coordinates were located.In the case of Y coordinates, 61 of them were localized.This situation is improved when accuracy range is increased (±10%).In this way, X coordinate and 83 Y coordinates were localized.All the defects were correctly classified.The analysis of the absolute errors shows that 8.8 mm is the maximum value in the estimation of X coordinate with ANN.Three coordinates are within this range.Alternatively, 6.26 mm was the maximum error in the neuro fuzzy analysis.Four points were around this value.
Mechanics and Materials Vols. 13-14
Regarding Y coordinates, the maximum error is 3.011 mm in the ANN analysis.This is the case of one point.This situation is enhanced in the neuro fuzzy analysis.0.52 mm was the maximum error.Eight points are around this value.Fig. 8 illustrates the absolute for ten points randomly.
Fig. 8. Absolute error for ten points taken randomly
Conclusions
In the development of this work, the results obtained in the localization and classification of defects with ANN, were compared with those obtained with ANFIS.In all cases, neuro fuzzy analysis gave better results.Besides, as geometry of the dominion of interest becomes more complex, the accuracy of the results is reduced.This situation was observed when a notch was Regarding the ANN analysis, diverse attempts were tried.For this purpose, the number of layers and neurons were varied.Nonetheless, the accuracy of the prediction of localization and classification was not improved substantially.This evaluation depends on different factors such as the input data, the architecture of the ANN and the training algorithm among others.In a previous work [8], ANN has been used for the localization of defects in a similar problem.The input data was the dynamic strains measured at the boundary of the bar.Although several attempts were tried, the improvement of the accuracy was similar as it was observed in this work.This process is heuristic and it is not possible to establish clear outlines at this respect.
Neuro fuzzy technique improves substantially the solution.The introduction of the notch in the bar causes a less perturbation in the numerical results.An advantage of this neuro fuzzy procedure is this improvement can be done without difficulty.
The input data required in this hybrid analysis can be obtained by experimental means or generated by a numerical method.In the first instance, frequency response has to be evaluated.With the computing infrastructure at hand, the input data is obtained and processed easily.Care should be taken in the identification of some noise in this data.Regarding the synthetic data, diverse numerical methods can be used.In this case, the validation of the data is important.
Finally this kind of analysis provides new guide lines in the analysis of complex geometries with some noise perturbing the numerical and experimental data. | 2,947 | 2008-07-01T00:00:00.000 | [
"Engineering",
"Computer Science",
"Materials Science"
] |
Four new operations related to composition and their reformulated Zagreb Index
The first reformulated Zagreb indexEM1(G) of a simple graphG is defined as the sum of the terms (du+ dv− 2) over all edges uv of G. In 2017, Sarala et al. [3] introduced four new operations(F product) of graphs. In this paper, we study the first reformulated Zagreb index for the F -product of some special well-known graphs such as subdivision and total graph.
Introduction
For vertex u ∈ V (G), the degree of the vertex u in G, denoted by d G (u), is the number of edges incident with u in G.A topological index of a graph is a parameter related to the graph; it does not depend on labeling or pictorial representation of the graph.In theoretical chemistry, molecular structure descriptors (also called topological indices) are used for modeling physicochemical, pharmacologic, toxicologic, biological and other properties of chemical compounds [5].Several types of such indices exist, especially those based on vertex and edge distances.One of the most intensively studied topological indices is the Wiener index.Two of these topological indices are known under various names, the most commonly used ones are the first and second Zagreb indices.
The Zagreb indices have been introduced more than thirty years ago by Gutman and Trinajestić [6].They are defined as M 1 (G) = (d G (u) + d G (v)).The Zagreb indices are found to have appilications in QSPR and QSAR studies as well, see [4].For the survey on theory and application of Zagreb indices see [9].Feng et al. [7] have given a sharp bounds for the Zagreb indices of graphs with a given matching number.Khalifeh et al. [8] have obtained the Zagreb indices of the Cartesian product, composition, join, disjunction, and symmetric difference of graphs.Furtula and Gutman in [16] recently investigated this index and named this index as forgotten topological index or F -index and showed that the predictive ability of this index is almost similar to that of first Zagreb index and for the entropy and acetic factor, both of them yield correlation coefficients greater than 0.95.The F -index of a graph G is defined as Recently, Shirdel et al. [15] introduced a variant of the first Zagreb index called hyper-Zagreb index.The hyper-Zagreb index of G is denoted by HM (G) and defined as 2 .In [15], the hyper-Zagreb indices of the Cartesian product, composition, join and disjunction of graphs are obtained.The hyper Zagreb indices of some classes of chemical graphs are obtained in [11,13,14].Pattabiraman and Vijayaragavan have obtained the hyper-Zagreb indices of some special classes of graphs [20].Some upper and lower bounds on hyper-Zagreb index for a connected graph are obtained by Falahati-Nezhad and Azari [19].
Milicević et al. [23] in 2004 reformulated the Zagreb indices in terms of edge-degrees instead of vertex-degrees d(e) 2 , where d(e) denotes the degree of the edge e in G, which is defined by d(e) = d(u) + d(v) − 2 with e = uv.The use of these descriptors in QSPR study was also discussed in their report [23].Reformulated Zagreb index, particularly its upper/lower bounds has attracted recently theat tention of many mathematicians and computer scientists, see [21,22,23,24,25].In this paper, we study the first reformulated Zagreb index for the F -product of some special well-known graphs such as subdivision and total graph.
Main Results
The Cartesian product, G 2 H, of the graphs G and H has the vertex set For a connected graph G, there are four related graphs as follows: (i) The subdivision graph S(G) is the graph obtained from G by replacing each edge of G by a path of length two.
(ii) R(G) is obtained from G by adding a new vertex corresponding to each edge of G, then joining each new vertex to the end vertices of the corresponding edge.(iv) The total graph T (G) has as its vertices the edges and vertices of G. Adjacency in T (G) is defined as adjacency or incidence for the corresponding elements of G, see Figure 1.Eliasi and Taeri [2] introduced the following four operations of the graphs G 1 and G 2 based on the Cartesian product of these graphs.
Let F be one of the symbols S, R, Q, or T. The F -sum G + F H is a graph with the set of vertices V (G + F H) = (V (G) ∪ E(G)) × V (H) and two vertices (g 1 , h 1 ) and (g 2 , h 2 ) of G + F H are adjacent if and only if g 1 = g 2 and h 1 h 2 ∈ E(H) or h 1 = h 2 and g 1 g 2 ∈ F (G), see Figure 2. The Zagreb indices of the F -sum of graphs are obtained by Deng et al. [17].The F -index of four operations on some special graphs are computed by Ghobadi and Ghorbaninejad [18].Eliasi and Taeri [2] have obtained the Wiener index of four new sums of graphs.
In this sequence, Sarala et al. [3] introduced the following four operations of the graphs G 1 and G 2 based on the composition of these graphs.
Let F be one of the symbols S, R, First we compute the first reformulated Zagreb index of the graph Theorem 2.1.Let G i be a graph with n i vertices and m i edges, i = 1, 2.
Proof.Let {x 1 , x 2 , . . ., x n 1 } and {y 1 , y 2 , . . ., y n 2 } be the vertex sets of G 1 and G 2 , respectively.From the definition of first reformulated Zagreb index and the structure of the graph G 1 [G 2 ] S , we have where A 1 and A 2 are the sums of the terms, in order.We shall calculate A 1 and A 2 of (1) separately.First we calculate the sum www.ijc.or.idFor each vertex Thus From the definitions of first and hyper-Zagreb indices, we obtain: Next we find the value of the sum A 2 .
x 1 and e are incident in G 1 .
By the definitions of F -index and first Zagreb index, we get Adding A 1 and A 2 , we obtain the required result.
Next we obtain the first reformulated Zagreb index of the graph Theorem 2.2.Let G i be a graph with n i vertices and m i edges, i = 1, 2.
Proof.By the definition of first reformulated Zagreb index and the structure of where A 1 and A 2 are the sums of the terms, in order.
We shall obtain the value of A 1 and A 2 of (2) separately. where and From A 2 and A 2 , we have ).Using (2) and the sums A 1 ,A 2 we obtain the desired result.
Next we find the first reformulated
Theorem 2.3.Let G i be a graph with n i vertices and m i edges, i = 1, 2.
Proof.By the definition of first reformulated Zagreb index, where A 1 and A 2 are the sums of the terms, in order.
We shall calculate A 1 and A 2 of 4 separately. www.ijc.or.idwhere One can see that for a vertex where X i and X j are vertices of From the sums A 2 and A 2 , we have Adding A 1 and A 2 , we get the desired result.
Finally, we obtain the first formulated Zagreb index of G 1 [G 2 ] T .Theorem 2.4.Let G i be a graph with n i vertices and m i edges, i = 1, 2.
Proof.By the definition of first formulated Zagreb index, where A 1 to A 4 are the sums of the terms, in order.We shall calculate A 1 to A 4 of 5 separately.A similar arguments of A 1 and A 2 in Theorem 2.2, we have and x 1 x 2 ∈E(T (G 1 )) x 1 x 2 ∈E(T (G 1 )) x 1 x 2 ∈E(T (G 1 )) A similar argument of A 2 in Theorem 2.3 , we obtain Adding the sums A 1 to A 4 , we get the desired result.
u∈V (G) d G (u) 2 and M 2 (G) = uv∈E(G) d G (u)d G (v).Note that the www.ijc.or.id
Four
new operations related to composition and ... | K. Pattabiraman and A. Santhakumar first Zagreb index may also written as M 1 (G) = uv∈E(G)
(
iii) Q(G) is obtained from G by inserting a new vertex into each edge of G, then joining with edges those pairs of new vertices on adjacent edges of G.
Figure 3 .
Sarala et al. [3] have obtained the Zagreb indices of F -product of graphs.www.ijc.or.id
Four
new operations related to composition and ... | K. Pattabiraman and A. Santhakumar
Figure 2 .
Figure 2. The graph G, H, and its G + F H.
Four
new operations related to composition and ... | K. Pattabiraman and A. Santhakumar
Figure 3 .
Figure 3.The graph G, H, and its G[H] T .
Four
new operations related to composition and ... | K.Pattabiraman and A. Santhakumar | 2,147.6 | 2018-06-12T00:00:00.000 | [
"Mathematics"
] |
Estimating related words computationally using language model from the Mahabharata - an Indian epic
'Mahabharata' is the most popular among many Indian pieces of literature referred to in many domains for completely different purposes. This text itself is having various dimension and aspects which is useful for the human being in their personal life and professional life. This Indian Epic is originally written in the Sanskrit Language. Now in the era of Natural Language Processing, Artificial Intelligence, Machine Learning, and Human-Computer interaction this text can be processed according to the domain requirement. It is interesting to process this text and get useful insights from Mahabharata. The limitation of the humans while analyzing Mahabharata is that they always have a sentiment aspect towards the story narrated by the author. Apart from that, the human cannot memorize statistical or computational details, like which two words are frequently coming in one sentence? What is the average length of the sentences across the whole literature? Which word is the most popular word across the text, what are the lemmas of the words used across the sentences? Thus, in this paper, we propose an NLP pipeline to get some statistical and computational insights along with the most relevant word searching method from the largest epic 'Mahabharata'. We stacked the different text-processing approaches to articulate the best results which can be further used in the various domain where Mahabharata needs to be referred.
I. INTRODUCTION
Natural Language processing is the cutting-edge technology equipped with efficient tools and techniques to deal with unstructured text data. Using NLP pipeline techniques, a large amount of text can be processed very quickly and accurately. The most important point of processing the fictional text using NLP is that the text will be analyzed without adding any sentiments to it. 'Mahabharata' is the story orally often narrated and recreated across the world in different forms. Thus, humans have sentiments attached to them by default. So, to get the computational details about Mahabharata we used the elements of the NLP pipeline to answer the following questions which do not have any sentiment aspect attached with it. 1) How rich is 'Mahabharata' in terms of words? 2) Does the sentence length of 'Mahabharata' distribute normally across the whole literature? Apart from this, we are addressing the problem that how can we find the most related word from such a large text without reading it. In this paper, we are approaching the NLP Pipeline followed by the language model which is searching for the most related words from the large text.
II. LITERATURE REVIEW
Mahabharata is a treasure of life lessons. To make this treasure of life lessons understandable for common people, it is important to translate it into the local languages used by people in daily life. The first translation of Mahabharata was written in the Persian Language entitled 'Razmnameh' on the order of Mughal Emperor Akbar in the 18th Century Later on, this is followed by English, Hindi, and other regional Languages [1]. The literature is narrating the phenomena and story which is lived by more than 200 people [2] which has been redacted between 400BCE and 400CE [3]. we can see the glimpse of various events that occurred in past across India and even across the globe [4]. Among these chunks, the city Bishnupur in West Bengal, India is famous for its terracotta temples. These temple's walls are carved with terracotta panels describing various events from 'Mahabharata'. These images are captured and used as a 3D image dataset known as BHID (Bishnupur Heritage Image Dataset) for various computer vision applications. BHID is a dataset containing a total of 4233 images which is in the public domain and is considered a central resource for research related to digital heritage [5]. The story of Mahabharata is retold in various art forms like plays, short stories, paintings, poems such as 'Kiratarjunyam' to make people understand the right ethics that not to make difference between 'High-man' and 'Lowman' where 'Lord Shiva' himself described as 'Kirat' [6] and translated books in various Indian languages. Though the Orality affects the translation, according to paper [7] the translation of literature may be treated as an independent text because "A study of translation is a study of language" and Mahabharata is retold in various Indian languages which can be a free translation or a literal translation. Here the difference between free and literal translation comes up in the picture because of the orality. Between these all forms of art, a unique art called 'Wayang (leather puppets)' is famous for recreating the Mahabharata story in Bali -Indonesia. Sudiatmika (et al. 2021) and fellow researchers have classified the 'Mahabharata Events' presented in this art form. They used the R-CNN algorithm to achieve the recognization of events and the characters such as 'Wayang Arjuna' and 'Wayang Yudhistira' [4]. People are always interested in hearing or watching fictional or fantasy stories. Thus, stories inside Mahabharata are always being attractive for creative people. This Epic is even inspiring for the technologist to create various taxonomies for the fictional domain (TiFI) [8] and launch 'ENTYFI' -the first technique for typing entities in fictional text. This 5-steps technique is useful to generate supervise fiction typing, supervised real-world typing and unsupervised typing [9]. A large number of events and characters in the epic is also useful for Ontology (a knowledge representation structure). In the current scenario, the web resources are more explored for ontology enrichment rather than the questionanswer-pair (QA-pair). Authors in paper [10] applied such QA-pair on the 'Mahabharata Domain' to convert them into potential triples (subject, predicate, and object) and identify the triples which are new, more precise, and related to the domain for ontology enrichment in literature. During an ACM conference on 'Data and Application Security and Privacy (CODASPY) in a panel session prof. Rakesh Varma compared the data security issues with the Mahabharata War. He mentioned that in this world of data we are facing an untold war where attackers are motivated and working more sophisticatedly rather than we are fragmented with our data [11]. Apart from the angles like literature, Security, Technology, Digital Heritage Research, text generation, or literature of translations, Mahabharata is referred for the analysis of the 'Ludo Game' played on an android device. The analysis of different nine games concluded that the face of the dice is not equally distributed. Thus, the dice is biased and the dice algorithm is designed in such a way to make the game closer for an exciting experience for the users. Authors in [12] took the context from the history as well that the ludo game is inspired from that game called 'Pachishi' which is similar to the game played in Mahabharata called 'Chaupar'. While looking at the various aspect of Mahabharata, excluding the psychological aspects is not possible. Authors in paper [13] has explored the evidence for the most fundamental metaphor used for the mind -"The Mind is a Container" in Indian Epic 'Mahabharata' and 'Ramayana' plus the Greek Epic poem 'Homer and Hesiod' to traverse the cognitive phenomena in the epic literature. This study provides many uncommon aspects of our mental life. The description of the concept of the mind container is elaborated on the base of the epics by (a) Ascription, Location, the content of the mind container, (b) Scope of mind container concerning consciousness and memory, (c) control over the content and (d) functions of the mind container.
Mahabharata Wiki Article is featured in the 100 most viewed Wikipedia article list. It is easy to give the context of the literature to people who belong to different domains. Thus, it is important to have a computational, analytical, and sentimental analysis of the text to get meaningful insights [14] In [15], the authors have derived interesting insights from the English translation of Mahabharata [2] by applying Pre-processing, POS tagging, Co-occurrence analysis, sentiment analysis of text and characters, and emotional analysis. The Insights which are given about the character and phenomena are versatile enough to use in different domains. According to [16] paper, The important characters of the epic Arjuna and Bheema had a common struggle and they trust each other abilities more intensely, this is also derived by [15] in the sentimental analysis across the text that "Arjuna and Bheema faced more negativity around them". In paper [16], the author brings the concept of considering the human values while designing the AI Agents. The similarity between humans and AI agents is positively correlated to the trust factor. Thus, inspired from the story of 'Challenging the powerful kings like Jarasandha and Chitrasena by arjuna and Bheema' can help AI-Agent developers to involve Value similarity in the outline of AI-Agent development. Apart from the technology development, the treatise has relevance to the modern society and is helpful to derive management lessons such as Strategic Management, Creation and relation with powerful friends and Allies, Effective Leadership Style, Successful Team Building, Shared goal and Ownership of the Goal, Commitment to the Goal, Role Clarity, Understanding the ground realities and Empowering Women [17]. The most important part of this Epic is 'Bhagavad Gita' said during Bhishma Parva also gives lessons of intrapersonal skills like Self-development, sublimation/management of the physical dimensions, sublimation/management of the psychological dimensions, Deontology, desire management, anger management, mind management, Emotional Stability, Fear Management, self-motivation, Empathy, and social welfare [18]. This epic gives the zoom version of the art of concentration with the lifespan of Arjuna. Different events can lead us to derive the factors which can be considered for the concentration like Enthusiasm, Dedication, Aptitude, Emotional or Physical state, and Environment [19]. The Epic context is shaping the thinking of society over the centuries. And this is reflected in our modern literature for children and adults. The stories derived from the epic show the disability as a curse or sin, but the modern literature shows the positivity and power of the disability. It portrays the usefulness of disabled people to society. In the context of Mahabharata, the approach towards the disability may fall under the bucket of "Don'ts" [20].
III. METHODOLOGY
This paper aims to carve the non-semantic, statistical, and computational insights along with finding the most relevant words from the largest Indian Epic 'Mahabharata'. Figure-1 shows the NLP pipeline, which is defined to get robust results on the text. During this experiment "The Mahabharata of Krishna Dwaipayana Vyasa -The English translation (1886-1889) by Kesri Mohan Ganguli" is used in '.EPUB' format as a dataset.
A. .EPUB file conversation into data structure
The '.EPUB' -(electronic publication) format is a very popular format of the e-book in digital documentation. This format is not only useful to read e-books using multiple devices such as android/mac mobiles, tablets, laptops, or desktops but these files are also useful for text processing. The EPUB format is released as an archive file built on the XHTML method. The tag format of XHTML can be flattened into any data structure which is readable by machine language. Here the whole e-book is converted into a python list Data structure.
As shown in above figure 2, the 'Mahabharata' e-book is divided into sequential data structure. While conversation the 'New line' is converted into ' n' and page break is converted into ' o'. apart from this, we do have some unwanted elements such as comma (,) semicolon (;) and apostrophe 's' ('s).
B. Text Cleaning
The Mahabharata story contains many punctuations which are important to understand the sentiments for humans, but it not useful for the machine. Text cleaning is addressing the problem to handle unwanted elements. Using python library 're' (Regular Expression) and 'string' the redundant elements such as comma (,) semicolon (;) and apostrophe 's' ('s) are removed from the whole text and the text is now stored a unit string. In the general case, full-stop (.) is also removed during the text cleaning of the data-set but this process required fullstop (.) while performing the next step of the pipeline called tokenization. the reason behind keeping the full stop is to define the end of the sentences. After tokenization, we can find the number of words occupied in each sentence which can be identified as a word distribution pattern.
C. Tokenization
The concept of dividing the text document into small snippets is known as tokenization. The tokenization can be applied in two different ways on the text document: (a) Sentence tokenization and (b) word tokenization. These can generate a bunch of sentences, words, phrases, tokens, or symbols [21]. Usually, Tokenization is applied as a primary and conventional text-preprocessing step in an NLP pipeline.
In the text-preprocessing of the Mahabharata, we used the 'Natural Language toolkit -sent tokenize()' method to divide the whole text into sentences. The whole Mahabharata is divided into 1,30,700 sentences with variable lengths. The length distribution is described in figure 4.
As shown in figure 4, most of the sentences have a length between 20 to 70 words. And very few sentences are having a length of less than 20 and some outliers do have higher lengths like the sentence on the 121306 index is having a length of 1850 words.The text is not only divided into chunks of sentences but also into unit words to add more granularity into text preprocessing. This is achieved using the technique called word tokenization. Here we used the 'Natural Language Toolkit -word-tokenize()' method which divides the whole Mahabharata text into 27,49,461 uni-grams (only one word).
D. Text Normalization
The human written text includes Function Words and Content words. These text data, specifically the fictional text is a combination of all the grammatical ups and downs. Thus, these data do have high randomness. to reduce the randomness of the text and maintain the significant meaning of the text, the text normalization can be performed on the whole text.
On the Mahabharata text, we are applying two popular techniques Stemming and Lemmatization. These tasks are followed in the NLP pipeline to transform the fictional text into the standard form of the language. Both these tasks are followed by 'Removing Stop Words' on the text.
In Mahabharata text, many words do not have critical significance but are used with high frequency throughout the whole epic to form the correct grammar. these words are not useful to improve the performance of any language model and they will also take some computational time in the further analysis process. These words do not have any information in terms of sentiment analysis as well. So, it is advisable to remove stop words (words like a, an, the, are, have, etc. ) along with the text normalization tasks.
1) Stemming with Stop Words:
One word having the same semantic meaning can be written in many formats with human language. Stemming is a technique that removes the affixes and suffixes attached to the word and tries to bring out the stem word or root word from the text. Among popular stemming techniques like Lancaster stemmer, Porter Stemmer, and Snowball stemmer, we used Porter stemmer to get the root words of the whole Mahabharata text.
2) Lemmatisation with stop words: The process of Lemmatisation is designed with the same purpose which is addressed by stemming. Lemmatisation is also used to cutting down the words to their root word. However, in Lemmatisation, the inflection of the word is not just broken-off, but it uses the concept of lexical knowledge. Using this converts the words into base form. Thus, it holds the sentiments of the text more strongly. Here we used 'wordnetlemmatizer' to achieve this task. The selection between stemming and Lemmatisation can be done based on the database on which the Language model is going to be built. The Mahabharata is a fictional text, and to extract features from this large epic, a strong sentimental hold on the text is required. Thus, based on the comparison of stemming and Lemmatisation we decide to build the language model on lemmatized text
E. The Language Model
The second objective of this paper is to find similar words from the Mahabharata fictional text. So basically, we are targeting to implement a model which can process as illustrated in figure 6.
Here we have a large amount of fictional text which can be considered as unannotated data for training a model. Thus, according to [21] word2vec is well liked model to be applied on data which do not have any adulteration.Word2vac is a combination of two different algorithms applied together on corpus. These two algorithms are known as CBOW (Continuous Bag of Words) and Skip-Gram. This model is developed with three different layers. (a) Input Layer, (b) Single Hidden Layer and (c) Output Layer. The input layer is consisted with set of neurons which is having shape of the total number of words in the vocabulary. This vocabulary is specifically built according to the corpus. In this paper our corpus is the book "Mahabharata" and the vocabulary created The magnitude of a single hidden layer is equal to the dimensionality of the result word vector. Here we trained a word2vec model to get the 100-dimension resultant vector. So, the size of the hidden layer is 100 dimensional. And the output layer is having the same magnitude as the input layer. Considering 'V' words in the vocabulary (where V=25794) and 'N' is the dimension of the resultant vector (where N=100). Thus, the connections from the input layer to the hidden layer can be constituted by the WI matrix having the shape of V × N. Here each row and column represents each word of vocabulary and the dimension of the resultant vector respectively. Likewise, the connections from the hidden layer to the output layer can be constituted by a WO matrix having the shape of N × V. Here each row and column represents the dimension of the resultant vector and each word of vocabulary respectively.
Considering the above sample corpus (figure 8), the vocabulary created based on this corpus can be represented as follows: The sample corpus vocabulary has 33 words. This vocabulary is considering each unique word given in the sample corpus. So, there are 33 input neurons and 33 output neu-rons. We have 100 neurons in the hidden layer. Thus, our connections neurons between the input layer to the hidden layer can be represented as WI(33 ×100) and the connection neurons between the hidden layer to the output layer can be represented as WO(100 × 33). Now before we train the word2vec model these matrices are initialized with small random numbers. Now looking at the corpus, if we want that word2vec model finds the relationship between the words "durvasa" and "vow"; the word "durvasa" is known as context, and "vow" is known as the target. Now, these inputs can be multiplied with the randomly initialized WI(33 ×100) matrix tending towards the hidden layer, and then the output at the hidden layer will be multiplied with WO(100 × 33)matrix while tending towards the output layer. The target of this model is to compute probabilities for words at the output layer. This is achieved in word2vec as it implements the softmax function.
The idea behind using word2vec is, this model is used to represent the words by a vector of numbers. In our case, we provide the target word as input to the model. It will compute the cosign similarities between all other words available in the vocabulary and send it back as output with top n words.
The sample target words with their similar words along with the cosign similarity between target word and context word is shown
V. CONCLUSION
In this paper, the NLP-based experiment on the Mahabharata is carried out on a basic level. We trained the word2vec model on the corpus to get the most similar words from the text itself. The reason behind selecting word2vec is to deal easily with the high-dimensional word vectors. In this paper, we can reach the basic aspects like uni-gram vocabulary, sentence distribution, 100-dimensional vector representation, and word similarities.
VI. FUTURE SCOPE
Though in the current scenario we do have similar models like 'Glove' and 'Fast Text' which we are targeting to apply and compare the results. The comparison of these models will bring a robust argument that which model is giving the best result on fictional text. Apart from the text-similarity, we are protagonist. These observations can be matched with human behavior with the help of a specific questionnaire based on organizational behavior. This can provide a profile of a person and production capacity in his/her working environment. Thus, the results of this paper can be mapped with future research to identify the professional perspective of a human personality based on the Mahabharata. | 4,816 | 2023-05-09T00:00:00.000 | [
"Computer Science"
] |
Study of Slow Sand Filtration in Removing Total Coliforms and E.Coli
This study was aimed to evaluate the performance of SSF in removing bacteria (Total Coliforms and E. Coli) in regard to grain size distribution and grain shape intermittently. Two methodological approaches used in this reasearch were literature review and laboratory work. Bacteria removal was analyzed considering two different filter media (Rhine sand-spherical shape and Lava sand-angular shape) with three different grain size distributions. The best performance was attained by filter column F4 which consisted of Lava sand and had the configuration C2 (d10 = 0.07 mm; Cu = 4.2). This filter column achieved 4.7log-units removal of Total Coliforms and 5.0log-units removal of E. coli. The results show that a smaller grain size and an angular shape of sand grain lead to an increase in bacteria removal.
INTRODUCTION
Water is the most essential resource for everyone in the world, especially for drinking water and sanitation. Nowadays, many people still do not have a safe and sustainable access neither to drinking water nor to sanitation (WHO & UNICEF, 2010). Indonesia is one of the Southeast Asian Countries and has some regions where the water cannot be easily accessed by people. One of these regions is Gunung Kidul District, Yogyakarta Special Province. This region is located in the southern part of Yogyakarta Special Province. Most of Gunung Kidul District areas are situated in a karst landform zone. Gunung Kidul karst area occupies 65% of the Western Gunung Sewu (Thousand Hills) karst (Haryono & Day, 2004). The special feature of karst formation leads people living in this area suffer from water scarcity, mainly during the dry season. Karst formations consist of carbonate and gypsum rock that have a high solubility rate and also high infiltration rate. As a result, this region undergoes an extreme water shortage especially during the dry seasons. This situation could be enhanced by pumping up the water from underground to the surface (Nestmann et al., 2011). Since the water quality is not good, the water in the surface should be treated first before its distribution. Slow Sand Filtration (SSF) was selected as the most appropriate technology considering several factors including regulatory requirements, cost, and operation. The aim of this study is to evaluate the performance of SSF regarding bacteria removal considering some variables such as grain size distribution and grain shape. Furthermore, to achieve the main objective of this research, specific objectives need to be accomplished and are defined as follows: 1. To ascertain whether SSF might be also used to remove (reduce) the bacteria content from the raw water.
2. To analyze the effect of grain size distribution and grain shape on filter performance in regard to bacteria removal.
RESEARCH METHODS
Laboratory tests were conducted in filter columns. The experimental setup consisted of 5 columns with a diameter of 5.2 cm and a height of 120 cm containing a layer of sand supported by a layer of gravel. Each column has a valve at the bottom outlet connected with a hose and a clamp. The clamp was applied in each column in order to easily be able to control the filtration velocity at the outlet.
The bacteria removal was analyzed based on three sand filter configurations within two different sand types and under intermittent operation mode. These three configurations of sand filter were C1 (d10 = 0.13 mm, Cu = 3.7), C2 (d10 = 0.07 mm, Cu = 4.2), and configuration of control filter (d10 = 0.2 mm, Cu = 2.1). in the beginning of this experiment, all filter columns were operated under HLR 0.1 -0.2 m/h and at the end, those columns were operated with HLR 0.03 m/h. These sands with their configurations were poured into five filter columns. A gravel layer (5 cm) was placed at the bottom of each column, supporting the sand bed (50 cm), in order to avoid the sand flowing out of the filter that can be seen in Figure 1. An additional gravel layer (5 cm) was placed upon the surface of the sand bed to reduce or avoid the disturbance of surface layer of sand when the water was introduced into the column. The columns were constructed as follows: filter columns F1 and F2 followed the configuration C1, filter columns F3 and F4 followed the configuration C2, and filter column F5 was filled with sand directly from the nature. Filter columns F1, F3, and F5 were filled with Rhine sand while filter columns F2 and F4 were filled with Lava sand. During column construction, the sand characteristics i.e. porosity, permeability, and sand surface area were calculated by some equations.
Porosity Permeability
Sand surface area where n is porosity, V void or Vv is volume or the pore space, and V total or V is the total volume of sample, γd is the dry unit weight, Gs is the specific gravity, γw is the water unit weight, Ws is the dry sample weight, D is the diameter of the filter column, and H is the sand bed height. For permeability, a is the cross-sectional area of the standpipe, A is the cross-sectional area of the specimen, L is the length of the specimen, h o is the elevation above the datum of water in the standpipe at the beginning of the experiment (t = 0), and hf is the elevation above the datum of water in the standpipe at time t. For Sand surface area, As is specific surface area (m 2 /m 3 ), ds is the specific grain diameter, and p is porosity, A is the total sand surface area (m 2 ), d is the inner diameter of the filter column, and l is the bed depth.
Intermittent operation mode was applied in this experiment by feeding the columns with the raw water once in a day and five days in a week. By doing so, pause periods of a minimum of 24 hours could be achieved in this time interval. Water samples were taken from influent and effluent so that the microbiological test could be conducted to measure the bacteria concentration in the water.
Evaluating the bacteria removal is one of the main objectives in this resaerch. To evaluate the bacteria removal, the concentration of bacteria from influent and effluent in every time feeding must be known. The colilert-18 was used to measure the bacteria concentration from influent and effluent water. The colilert-18 measurement was done every afternoon so that the result could be log reduction = log(cfu/100ml)inflog(cfu/100ml)eff
a. Effect of grain size distribution on bacteria removal
The overall performance of these columns regarding bacteria removal was good achieving 1.6 -4.7 log-units or 97.7 -99.998% removal of total coliforms and 1.6 -5 log-units or 97.6 -99.999% removal of E. coli. The best performance with the consistent result was attained by filter column F4 which consisted of Lava sand and had the configuration C2 (d10 = 0.07 mm and Cu = 4.2). Ausland et al. (2002), Langenbach et al. (2009), Bellamy et al. (1985a and Bellamy et al. (1985b) found out that a decrease in grain size leads to an increase in treatment efficiency. From the results obtained in this study, these two indicator bacteria seem to follow the same trend, i.e. the highest bacteria removal corresponded to the finest grain size. The difference of bacteria removal becomes evident when configuration C1 and C2 are compared as can be seen in Figure 2. Stevik et al. (2004) explained that adsorption is the most important mechanism in retaining bacteria compared to straining. An increase in sand surface area leads to an increase in adsorption spots on sand and biofilm attached to the sand grains. The results achieved in this study showed that indeed, finer sand or smaller grain sizes present a larger sand surface area compared to coarse sand and therefore provide more adhesion or adsorption spots.
b. Effect of sand type (grain shape) on bacteria removal
The results showed that not only the grain size distribution but also sand type (grain shape) played an important role in removing bacteria by slow sand filtration. filter columns for both total coliform and E. coli was the same) and the sand type considering that both filter columns have the same grain size distribution (d10 and Cu). This comparison was made according to the configurations used in this experiment. There were two configuration applied i.e. configuration C1 (coarser; d10 = 0.13 mm, Cu = 3.7) and configuration C2 (finer; d10 = 0.07 mm, Cu = 4.2).
As can be seen in the figures above (Figure 3 and 4), the same pattern was noticed in all comparisons. Lava sand appears to give a better result in removing both total coliforms and E. coli for both configurations. Lava sand was able to reduce the concentration of bacteria in the effluent reaching 2.5 -4.7 log-units removal of total coliforms and 2.5 -5 log-units removal of E. coli.
Lava sand performed better than Rhine sand. It was likely caused by the different grain shape between Lava sand (angular shape) and Rhine sand (circular shape). Barton et al. (2007) stated that the more angular the sand grains, the more particles in wider range of size could be strained out in the filter. The results obtained seem to be consistent to that reported by Barton et al. (2007). The materials used have different shape, sizes, and chemical composition due to their origin. These different characteristics of materials could affect contaminant removal. Barton et al. (2007) indicated that the interstitial straining of particles decreased as the circularity coefficient increased. The highest interstitial straining was achieved by crushed limestone and the smallest one was spherical material from the river. It showed that the angular material caught the impurities more efficiently than the rounded material thus this angular material enhanced the filter performance in removing waterborne contaminants. Moreover, Stevik et al. (1999) reported that in
c. Effect of influent water quality (bacteria concentration)
The best bacteria removal was achieved by filter column F4. It was probably caused by a higher concentration of indicator bacteria in its influent water as can be seen in Table 1 (20% of wastewater and 80% of tap water). This high number of bacteria seemed to improve the adsorption mechanisms that enhance bacteria removal. Stevik et al. (2004) mentioned that the increase of bacteria concentration leads to improve the collisions between bacteria and media surface and subsequently increase the likelihood of adhesion process within the media. The rate of adsorption increases linearly with the bacteria concentration (Stevik et al., 2004).
CONCLUSIONS AND SUGGESTIONS a. Conclusions
The results of these experiments were summarized in terms of the effectiveness of bacteria removal under three configurations of grain size distributions and two different sand types which has different grain shape. From the results obtained it can be concluded as follows: 1. SSF is a reliable process to improve microbiological quality of water.
2. A smaller grain size leads to an increase in bacteria removal.
3. A higher bacteria removal was achieved by the filter columns filled with the Lava sand (angular shape) compared to that with Rhine sand (spherical shape).
4. This high number of bacteria in the influent tended to improve the adsorption mechanisms that enhance bacteria removal.
b. Suggestions
To improve the findings of this present study, some suggestions are as follows: 1. The modification of experimental setup should be carried out, i.e. using a pump in the filter outlet in order to get a more precise and constant filtration rate. Within this constant filtration rate, the results would become more reliable to be used as a basis to design a pilot plant of slow sand filtration.
2. To ascertain the same concentration of bacteria in the influent, the method of microbiological culture should be done before the experiments are carried out. This method aims to multiply microbiological organisms by growing them up in predetermined culture media under the controlled laboratory conditions. By doing so, the precise measurement of filter efficiency on bacteria removal could be attained.
3. The further experiment relating to sand properties should be conducted in order to get a trustworthy analysis in particle shape.
4. The subsequent experiments in terms of the development of biofilm or biolayer on the sand surface or within the sand bed should be carried out to assure the effect of intermittent operation to the development of biofilm or biolayer.
5. The following study should be conducted to know the clogging time or the running period of sand filter.
6. The research of slow sand filtration using local material regarding bacteria removal and turbidity should be carried out to acquire the real implementation of this study in the case area (Gunung Kidul).
Volume 6 Nomor 2 Juni 2014
Jurnal Sains dan Teknologi Lingkungan 115 7. The chemical properties of both sand filter and raw water should be considered in the subsequent study.
8. Filter column F4 (Lava sand, d10 = 0.07 mm and Cu = 4.2) performed the best bacteria removal in this experiment achieving 4.7 log-removal of total coliforms and 5.0 log-removal of E. coli. However, with this filter configuration, an excellent performance can be achieved under specific condition i.e. a HLR 0.03 m/h. According to this condition, it would be not anymore economically feasible due to the need of larger area. It is probably good as a decentralized SSF or a household SSF.
9. Filter column F5 (Rhine sand, d10 = 0.2 mm and Cu = 2.1) with the configuration in the range of recommendation and HLR 0.1 -0.2 m/h, performed a better bacteria removal than filter columns F1, F2, and F3. It achieved 2.7 log-removal of both total coliforms and E.coli. This filter characteristic or configuration is very good to be applied as a centralized SSF. | 3,204.6 | 2014-09-13T00:00:00.000 | [
"Engineering"
] |
Higher dualisations of linearised gravity and the $A_1^{+++}$ algebra
The non-linear realisation based on $A_1^{+++}$ is known to describe gravity in terms of both the graviton and the dual graviton. We extend this analysis at the linearised level to find the equations of motion for the first higher dual description of gravity that it contains. We also give a systematic method for finding the additional fields beyond those in the non-linear realisation that are required to construct actions for all of the possible dual descriptions of gravity in the non-linear realisation. We show that these additional fields are closely correlated with the second fundamental representation of $A_1^{+++}\,$.
Introduction
It was shown that the conjectured [1,2] non-linear realisation of the semi-direct product E 11 ⋉ℓ 1 of E 11 with its vector representation contains the fields and the equations of motion of every maximal supergravity theory [3,4]. For a review, see [5]. As such, it contains the metric of gravity and the three form in eleven dimensions and there are very good reasons to believe that these are the only degrees of freedom that the non-linear realisation possesses [6,7]. However, the non-linear realisation contains an infinite number of fields, of which only a few are the usual fields of the maximal supergravity theories.
It was conjectured in [8] and proven in [9] that many of these remaining fields represent equivalent descriptions of the degrees of freedom of the maximal supergravity theories. For example, in E 11 at levels 1, 4, 7, . . . , we find the fields A a 1 a 2 a 3 , A a 1 ...a 9 ,b 1 b 2 b 3 , A a 1 ...a 9 ,b 1 ...b 9 ,c 1 c 2 c 3 , and so on, which are related by an infinite set of duality relations. This ensures that the only degrees of freedom are those which are usually contained in the first field, the three form.
However, any of these fields can be used to give an equivalent formulation of these degrees of freedom. At levels 2, 5, 8, . . . , the story is similar except that the the block of three indices a 1 a 2 a 3 in each field is replaced by a block of six indices a 1 . . . a 6 . Then, at levels 0, 3, 6, 9, . . . , we find fields associated with gravity. Indeed, at level zero, we find the usual description of gravity with the field h ab . At level three, we find the field A a 1 ...a 8 ,b which was proposed to provide a dual description of gravity, while at level six we have A a 1 ...a 9 ,b 1 ...b 8 ,c , at level nine we find A a 1 ...a 9 ,b 1 ...b 9 ,c 1 ...c 8 ,d , . . . , and so on. These fields also provide alternative descriptions of gravity and all the fields are related by a set of duality relations which ensure that the theory only propagates a single graviton. In fact, there are other fields in the non-linear realisation and some of these are required to account for the gauged supergravities.
It is useful to give an account of the history of the dual graviton field. It was first observed by Curtright that the field A a 1 a 2 ,b = A [a 1 a 2 ],b could describe pure gravity in five dimensions [10].
It was then proposed that the field A a 1 ...a D−3 ,b may describe pure gravity in D dimensions [11].
In order the show that the field A a 1 ...a 8 ,b at level three in the non-linear realisation of E 11 ⋉ℓ 1 did indeed describe gravity, a parent action in D dimensions was given in [1,12]. By first linearising the parent action of [1], then varying the result with respect to one field or the other and finally substituting inside the linearised parent action, we obtain either the Fierz-Pauli action in the form where local Lorentz invariance holds, i.e. in terms of the field h ab that is neither symmetric nor antisymmetric, or we obtain an action in terms of the dual field A a 1 ...a D−3 ,b . This result was fully explained and also extended to higher spin fields in reference [13]. As shown in [1,12], the parent action of [1] also led to duality relations between the two fields. In this way, it was clear that the dual graviton field A a 1 ...a D−3 ,b really did provide an equivalent formulation of gravity at the linearised level. Further connections were also established in [13] between [1], [10] and [11].
These developments are reviewed at the beginning of Section 3.
It was also conjectured that the non-linear realisation of the semi-direct product A +++ D−3 ⋉ ℓ 1 of the very-extended algebra A +++ D−3 with its vector representation, contains pure gravity in D dimensions [14]. Following early preparatory work in references [15] and [16], this was indeed shown to be the case in four and eleven dimensions [17] and [18] respectively. In four dimensions, at the lowest level, this non-linear realisation contains the usual field of gravity h ab . At higher levels -indicated by numbers in brackets after each field -in addition to other fields, it contains h a b (0) ; A (ab) (1) ; A a 1 a 2 ,(b 1 b 2 ) (2) ; A a 1 a 2 ,b 1 b 2 ,(c 1 c 2 ) (3) ; A a 1 a 2 ,b 1 b 2 ,c 1 c 2 ,(d 1 d 2 ) (4) ; . . . (1.1) where groups of indices are antisymmetric unless otherwise indicated by round brackets ( · · · ) in which case they are symmetric. We interpret these fields as being related to dual descriptions of gravity. The field at level one is called the dual graviton. We then find the first higher dual graviton at level two, the second higher dual graviton at level three, and so on. The equations of motion at the full non-linear level, as well as the duality relations, were found for h a b and A (ab) in four dimensions [17] and in eleven dimensions [18].
The non-linear realisations of E 11 ⋉ ℓ 1 and A +++ D−3 ⋉ ℓ 1 lead to an infinite number of duality relations which can then be used to derive the equations of motion of the fields. These field equations are constructed from fields that are irreducible representations of A D−1 and, as a result, they have more and more space-time derivatives for the fields at higher and higher levels. The equations of motion require only the fields in the non-linear realisation, and they correctly describe the relevant degrees of freedom. In [19] and [20], equations of this type which describe the irreducible representations of the Poincaré group were given precisely. As shown in [20] and reviewed in [21], one can also integrate these equations to find equations of motion that are second order in space-time derivatives provided that one makes a particular gauge choice that leads to the Labastida [22,23] gauge transformations for arbitrary mixed-symmetry fields where the gauge parameters obey trace constraints. In fact, the duality relations derived from the non-linear realisation only hold modulo gauge transformations which can, as a matter of principle, be deduced from the non-linear realisation. See, for example, [4] or the review [24].
However, one must also introduce extra fields in order to have duality relations that hold as equations of motion in the usual sense and not just as equivalence relations [25].
A parent action containing the fields A a 1 a 2 a 3 , A a 1 ... a 9 ,b 1 b 2 b 3 , . . . occurring at levels 1, 4, . . . in the E 11 ⋉ ℓ 1 non-linear realisation, also containing certain extra fields, was worked out in [25] along the lines of [9,26]. Depending on which field was eliminated, one found an action only in terms of one field or the other. In this way, the authors of [25] found an action for the latter field which we call the first higher dual of the three from. The higher level fields were also discussed in [25], as were the infinite chain of duality relations and analogous results for the six form. Hence, using parent actions, one could find the additional fields required in order to write down an action, or duality relations, for the higher dual fields.
A similar strategy had previously been suggested for pure gravity in [9]. The method of parent actions was used to produce, for the first time, an infinite number of higher dual action principles, thereby proving the conjecture established in [8] on the equivalent dual descriptions of gravity. These parent actions involve extra fields in comparison to those that appear in the non-linear realisation of A +++ 1 ⋉ ℓ 1 .
In this paper, we further pursue the approach set forth in [9] to higher dual descriptions of gravity, focusing on four space-time dimensions for the sake of concreteness. We provide an explicit procedure for constructing the parent actions that relate the different higher dual formulations of gravity. Using these parent actions, one can directly obtain action principles for each subsequent higher dual graviton. We find extra fields on top of those already in the non-linear realisation of A +++ 1 ⋉ ℓ 1 . These extra fields are required to formulate actions for the dual fields as well as the duality relations between dual fields at adjacent levels.
We will compare the type of additional fields required to form higher dual actions with those contained in the adjoint representation and the second fundamental representation, denoted l 2 , of A +++ Following earlier results [15,16], the non-linear realisation of the semi-direct product A +++ 1 ⋉ ℓ 1 of A +++ 1 with its vector representation was computed at low levels in [17]. This calculation will be reviewed in this section. The Dynkin diagram for A +++ While no complete description of the generators of any such Kac-Moody algebra exists, they can still be analysed by decomposing them with respect to certain subalgebras. Deleting node 4 from the Dynkin diagram of A +++ 1 allows us to analyse the algebra in terms of its decomposition into GL(4) [15]. The resulting generators can be classified in terms of a level which, in this case, is the number of up minus down GL(4) indices divided by two. The positive low level generators R α are given, alongside the level zero generator, by The number in brackets corresponds to the level of the generators and the subscripts enumerate the generators when there is more than one with the same index structure. Groups of indices are antisymmetric except when shown to be symmetrised using round brackets. generators can be found in [17].
The generators in the vector representation of the A +++ 1 are denoted by L A and, when decomposed into representations of GL(4), the low level generators found in [17] are given by where, as before, groups of indices are antisymmetric except for those in round brackets which are symmetric. Subscripts denote different generators when the multiplicity is greater than one.
Generators in the vector representation commute and their commutators with the generators of A +++ 1 are given in [17].
The construction of the equations of motion follows the same pattern as that for E 11 ⋉ ℓ 1 .
See [5,24] for reviews. For the non-linear realisation based on A +++ 1 ⋉ℓ 1 in [17], we start from the group element of A +++ 1 ⋉ ℓ 1 denoted by g = g L g A , where g A and g L are group elements that are constructed in terms of non-negative level generators of the adjoint and vector representations, respectively, of A +++ 1 . They take the form Therefore, the theory is populated by a set of fields A α which contains the graviton h a b , the dual graviton A ab , the first higher dual graviton A ab,cd , and so on. We see from the list of generators in (2.1) that we have the generator R a 1 a 2 ,b 1 b 2 ,c 1 c 2 ,(d 1 d 2 ) at level four which results in the second higher dual graviton A a 1 a 2 ,b 1 b 2 ,c 1 c 2 ,(d 1 d 2 ) . Indeed, the pattern continues so that one finds such fields at every level. This leads to an infinite tower of dual formulations of pure gravity with fields that depend on the generalised coordinates z A = {x a , z a , z abc , z ab,c , . . .} .
The non-linear realisation is invariant under rigid transformations g 0 ∈ A +++ . This means that generic group elements g = g L g A are invariant under g → g 0 g and g → gh , (2.5) where g 0 is a rigid (i.e. constant) group element and h is a local transformation which can be used to set the coefficients of all negative level generators in g A to zero [27]. The equations of motion are just those that are invariant under these transformations and, as for E 11 , they are essentially unique.
The dynamics of the non-linear realisation is often constructed using Maurer-Cartan forms Here, E Π A can be thought of as a vierbein on the generalised space-time. Its lowest component is the gravitational vierbein given by e µ a = (exp(h)) µ a . The G Π,α are the components of the Maurer-Cartan form where the index Π is a world-volume (derivative) index and α is an index in the adjoint representation.
The low level Maurer-Cartan forms in the A +++ 1 direction are given by They are found as the coefficients of K a b , R bc and R a 1 a 2 ,bc in the Maurer-Cartan form, where ∂ Π is the derivative with respect to the coordinates z Π . The dynamics is actually constructed where "· · · " corresponds to terms arising when higher contributions to the vierbein E Π A are taken into account. These contributions contain derivatives with respect to the higher level coordinates. Working with G A,α has the advantage that it only transforms under I c (A +++ 1 ) .
Rather than deriving the equations of motion, one can derive a set of duality relations from which the equations of motion can be deduced, as explained in [3,5,24]. The duality relations between low level fields at the lowest level of generalised space-time derivatives are [17] where ω a,b 1 b 2 is the usual expression for the spin connection in terms of the vierbein which is given in terms of the low level Maurer-Cartan forms by with G a,bc = e a µ e b ν ∂ µ e νc .
Equation (2.9) relates the graviton field h a b appearing at level zero to the dual graviton field A ab at level one, while equation (2.10) is a duality relation between A ab at level one and the first higher dual graviton field A a 1 a 2 ,bc appearing at level two.
By combining equations (2.9) and (2.10), one derives a duality relation between the graviton and the first higher dual graviton: The above duality relations only hold modulo certain gauge transformations, as indicated by the symbol "= ", so they really are equivalence relations. This is explained in [3,4,24,27].
In order to further manipulate the above duality relations, one needs to know what the gauge transformations are. We will obtain them in the next section. As explained in [1], the duality relation (2.9) may be turned into a usual equation by adding an antisymmetric component to the symmetric dual graviton A a 1 a 2 . This 2-form field will later be found inside the second fundamental representation of A +++
In what follows we are only concerned with the linearised theory and so we drop, in particular, the det e factors.
Gauge transformations
It was proposed in [28] that a theory constructed from a non-linear realisation of g +++ ⋉ ℓ 1 , where g +++ is any very-extended Kac-Moody algebra and ℓ 1 is its vector (first fundamental) representation, is invariant under a particular set of gauge transformations whose parameters are in a one-to-one correspondence with the spectrum of ℓ 1 . For the linearised theory where base and fiber indices are identified, these gauge transformations take the form In this equation, C −1 α,β is the inverse of the Cartan-Killing metric C α,β for g +++ . The matrix (D β ) E F is that for the vector representation and, in particular, it occurs in the commutator (2.14) In addition, we have used the partial derivative in the linearised theory ∂ F = ∂ ∂z F . The gauge parameters Λ A correspond to elements in the vector representation.
Hence, in order to evaluate the gauge transformations, we require the inverse Cartan-Killing matrix and the analogous matrix for ℓ 1 at the corresponding level. The gauge transformations for the graviton and the dual graviton in the non-linear realisation of A +++ 1 ⋉ ℓ 1 were computed in [17] and we now extend these previous results to the main object of study in this paper: the first higher dual graviton.
We begin with the computation of the Cartan-Killing form which is determined by requiring that it is invariant. For our current purposes this means that it should satisfy where (· , ·) is the symmetric non-degenerate bilinear form on A +++ 1 that generalises the Killing form for finite-dimensional semi-simple Lie algebras. One finds that . Taking the previous results from [17], we find that the Cartan-Killing metric up to the level of the first higher dual graviton is given by where the basis is ordered to match the scalar products of Note that the only non-zero entries of C α,β are found when the levels of α and β sum to zero.
The inverse Cartan-Killing metric is given by The vector representation appears in the commutators of (2.14) which were given in [17] at low levels. Omitting the commutators with K a b , they are given by To study the vector representation at the level of the first higher dual graviton, we need to compute certain commutators at higher levels. One finds that where the last expression should be taken so that is is anti-symmetric in a 1 and a 2 .
Using the inverse Cartan-Killing metric (2.18) and reading off the analogous matrix for ℓ 1 from equations (2.19)-(2.21), we find that the gauge transformations with gauge parameters are given, for the fields at low levels, by The parameters satisfy Λ a 1 a 2 a 3 = Λ (a 1 a 2 a 3 ) and Λ a 1 a 2 ,b = Λ [a 1 a 2 ],b with the irreducibility condition Λ [a 1 a 2 ,b] = 0. In these equations, we have not written the gauge transformations that involve derivatives with respect to the higher level coordinates.
Linearised equations of motion
The duality relations given in (2.9)-(2.12) only hold modulo certain transformations which arise from the gauge transformations for the fields involved in the duality relations. As we computed these in the previous section, we can now compute the resulting transformations up to which the duality relations hold. Having done this, we can then compute the equations of motion from the duality relations at the linearised level.
We first consider the duality relation between gravity and dual gravity in (2.9). Using the gauge transformation (2.24) we find that, at the linearised level, it takes the form In deriving this result, we have used local Lorentz symmetry to symmetrise the h ab field, and so we obtain the variation δh ab = ∂ (a ξ b) . Had we not done this Lorentz gauge fixing, then the first term on the right-hand-side of (2.26) would have been replaced by a Lorentz transformation.
We have removed the dot above the equals sign in (2.25) since it holds as a usual equation.
To find the equations of motion, we have to eliminate the gauge transformations from the duality relations by taking derivatives and, at the same time, eliminating one of the two fields involved. In the case of (2.25), we can eliminate the gauge parameter ξ b 1 b 2 by taking an exterior derivative of E a,b 1 b 2 which produces By contracting a 1 with b 1 , we find that the term involving the dual graviton vanishes due to the fact that we have anti-symmetrised derivatives and A ab is symmetric. Thus, we find that which is the equation of motion for linearised gravity.
We can also write (2.27) as and so it vanishes when we contract it with η a 1 c 1 . As a result, we find that which we recognise as the equation of motion for the dual graviton at the linearised level, which agrees with the results of [17] where the full non-linear equation of motion was found and its linearised version was also given.
We will now carry out the same procedure for the duality relation involving the dual graviton and the first higher dual graviton (2.10). Using the gauge transformations in (2.23) and (2.24), we find that the duality relation becomes By taking two derivatives, we find that the gauge parameters disappear. We obtain which can also be written as The first term vanishes if we sum over e 1 and b 1 and also e 2 and b 2 . From this, we obtain which is indeed the correct equation of motion for the first higher dual graviton in four spacetime dimensions [19].
Clearly, the duality equation (2.12) between the graviton and the first higher dual graviton will also lead to the same equations since it can be deduced from the above duality relations.
However, it is instructive to treat this in the same way. Using the gauge transformation (2.24), we find that this duality relation is given by The gauge parameter ξ a is then eliminated by taking a derivative as follows: Contracting a 1 and b 1 allows us to discard the first term as it is the equation of motion for linearised gravity. We are left with the equation Then, taking one more derivative, we can eliminate the last gauge parameter to arrive at This is the correct equation of motion for the first higher dual graviton at the linearised level that we have also found in (2.34).
The equations of motion (2.28), (2.30) and (2.34) for the graviton, the dual graviton, and the first higher dual graviton, with their respective symmetry types , , and , are tracelessness equations that may be written in the form where Tr ij denotes a trace over columns i and j in a given Young diagram, and where we have introduced the curvature tensors for each field. They are given explicitly by As we have explained above, the duality relations only hold modulo gauge transformations, although the equations of motion derived from them hold exactly. One of the points of this paper is to obtain the extra fields that are required to have duality relations that also hold exactly. We will find evidence that these extra fields are contained in the second fundamental representation of A +++ 1 , denoted ℓ 2 . The content of this representation can be deduced by enlarging the A +++ 1 algebra by attaching an additional node to the node labelled 2 in the A +++ 1 Dynkin diagram, and then by taking only the generators of this enlarged algebra that have level one with respect to this new node. One may then deduce the commutation relations between generators in the adjoint and ℓ 2 representations of A +++ 1 by using the fact that the level is preserved and that the Jacobi identities must hold. One can then add new fields corresponding to the ℓ 2 generators and deduce their A +++ 1 transformations from their commutation relations.
As the role of the new fields is to soak up the gauge transformations in the duality relations, the next step must be to propose their gauge transformations. This involves writing down the variation of the ℓ 2 fields in terms of the derivative, which belongs to ℓ 1 , acting on the gauge parameters that also belong to the ℓ 1 representation. This transformation can be deduced using level matching and group theory. Given these transformations, one can then finally obtain new duality relations which hold as exact equations, at least in principle, and in detail at low levels.
We leave this calculation to a future paper.
Higher dualisations of linearised gravity
In this section we give an action principle in four dimensions for the first higher dual graviton A ab,cd whose equations of motion and gauge transformations were obtained from the non-linear realisation of A +++ 1 ⋉ ℓ 1 in the previous section. We will only be concerned with free dynamics and we will build the action principle for the dual field A ab,cd ≡ A [ab],cd ≡ A ab,(cd) using the off-shell dualisation procedure proposed in [9]. In that paper, a field-theoretical interpretation was given for an infinite subset of E 11 generators that transform in the GL(11)-irreducible representations whose Young tableaux are given in column notation as In what follows, we first recall the basic ideas behind the parent action procedure to derive dual actions for linearised gravity in any dimension D , and then we will direct our attention to the four-dimensional case for which A +++ 1 is the relevant Kac-Moody algebra.
From the graviton to the dual graviton
Off-shell dualisation of linearised gravity h a b around D-dimensional Minkowski space-time was initiated in [1] and [12]. This was investigated further in [13] where the authors made contact with the Curtright action [10] and generalised this duality to higher-spin fields with spin s > 2 .
Although the analysis of [1] began with the fully non-linear Einstein-Hilbert action, it is only for its linearisation that one can make the dual graviton and all of its higher dual generalisations appear off-shell [9]. Following the original idea [1], consider the second order Einstein-Hilbert action based on the vielbein e µ a : Indeed, the field equation of Y ab;c can solved for Y ab;c in terms of Ω(e) which yields the parent action linearised around Minkowski spacetime, where e µ a = δ µ a + h µ a , reads where Ω ab,c (h) := 2 ∂ [a h b]c and the field h ab has no symmetry on its two indices. The equation of motion for h ab yields The Poincaré lemma implies that the dual field This new field is completely antisymmetric in its first D −3 indices but it has no definite GL(D) symmetry otherwise: Inserting this back into the linearisation of (3.6) produces a consistent quadratic action S[C] that describes linearised gravity by construction. Note that the field h ab acted as a Lagrange multiplier for the constraint (3.7). It is not an auxiliary field like Y a 1 ...a D−2 ;b is, but the dual action obtained by substituting (3.8) inside the parent action (3.6) is classically equivalent to the original linearised Einstein-Hilbert action. The reader might want to see [29] for more comments on this issue.
Until now, the dual field C a 1 ...a D−3 ;b as defined in (3.8) However, one may check [12,13] that, after inserting (3.8) into (3.6), the resulting action S[C] is invariant under a shift symmetry inherited from the local Lorentz symmetry that we call the dual graviton [1,12,13]. In the antisymmetric convention for Young tableaux, the GL(11) irreducibility condition of the dual graviton is the over-antisymmetrisation identity It is important to stress the fact that the dynamics of linearised gravity around Minkowski space-time, as given by the variational principle based on the original Fierz-Pauli action, can equivalently be described from the dual action principle S[A a 1 ...a D−3 ,b ] given in [13]. The reason is that both the Fierz-Pauli action and the dual action appear upon elimination of different fields from the same parent action. Moreover, as explained in [13], the dual graviton in four dimensions is a symmetric field A ab = A (ab) and the dual action S[A ab ] reproduces the standard Fierz-Pauli action. In D = 4, one concludes that "Fierz-Pauli is dual to Fierz-Pauli" [13].
In the next part, we review the dualisation procedure first explained in [9], which takes the and produces a dual action featuring the first higher dual graviton c as well as an extra field that cannot be eliminated from the action. In four dimensions, the first higher dual graviton A a 1 a 2 ,bc corresponds to the A +++ 1 generator R a 1 a 2 ,(bc) at level 2. Therefore, this approach makes direct contact with the previous section where the non-linear realisation of A +++ 1 ⋉ ℓ 1 was reviewed. The extra fields that enter each higher dual action principle will then be shown to be closely correlated with the ℓ 2 representation of A +++ 1 .
Although they are not needed in order to write down self-duality equations, they are necessary for the off-shell formulation of various generations of higher dual graviton fields.
3.2 The first higher dual graviton in four dimensions Action principle. As explained in [13], around Minkowski spacetime of dimension D = 4, the dual graviton A a 1 ...a D−3 ,b is a symmetric rank-2 tensor A ab ≡ A (ab) and the dual action is just the Fierz-Pauli action given as follows, up to boundary terms that we neglect: We stress that the curl Ω ab,c (A) := 2 ∂ [a A b]c is not featured in this formulation of the Fierz-Pauli action. Instead, it features the full gradient G a;bc (A) := ∂ a A bc without any antisymmetrisation over indices. As proposed in [9], we define the following parent action S[G a;bc , D ab; cd ] : featuring the two independent fields G a;bc = G a;(bc) and D ab; (cd) . The latter of these two fields is defined up to a gauge transformation it is easy to see that the parent action (3.12) is invariant under the combined transformations On the one hand, one can vary the parent action (3.12) with respect to the GL(4)-reducible field D da;bc that acts as a Lagrange multiplier for the constraint ∂ [a G d];bc = 0 . This constraint is identically solved by for some symmetric tensor A bc . Substituting G a;bc (A) for G a;bc inside the parent action (3.12) reproduces the original Fierz-Pauli action (3.11).
On the other hand, in the parent action (3.12), the independent field G a;bc can be considered to be an auxiliary field. Its equation of motion c)e (3.18) can be solved algebraically to express G a;bc in terms of D ab; cd as follows: Upon substituting this expression for G a;bc into the parent action (3.12), we obtain the following alternative description of linearised gravity around four-dimensional Minkowski spacetime: This action is invariant under the gauge transformation (3.16). We emphasise that (3.20) describes the same free graviton dynamics as the Fierz-Pauli action (3.11). The reason is that both action principles arise from the same parent action S[G, D] when it is extremised with respect to one field or the other. Note that the spectrum of fields is in one-to-one correspondence with those that are obtained by taking the tensor product of a 2-form with a symmetric rank-2 tensor. This is depicted in terms of Young tableaux as follows: In what follows, we decompose the dual field D ab; cd into GL (4) (3.24) Hodge dualising X ab; cd and Z a; c on their lower indices produces the GL(4)-irreducible fields These fields satisfy GL(4) irreducibility conditions in the antisymmetric convention for Young tableaux. That is to say, they satisfy the over-antisymmetrisation identities:
This action is invariant under the gauge transformations
where the gauge parameters λ abc and µ ab,c are GL(4)-irreducible: , µ (ab,c) ≡ 0 . In terms of this equivalent representation for the gauge parameter, one has Notice that λ abc and m ab,c ∼ µ ab,c match the gauge parameters Λ a 1 a 2 a 3 and Λ a 1 a 2 ,b at level 2 in the ℓ 1 representation of A +++ 1 given in (2.22). Up to trivial gauge parameter redefinitions, the transformation law of A ab,cd with respect to m ab,c and λ abc fully agrees with (2.24).
Off-shell dualisation is a different approach to the A +++ is closely related to extra fields that appear during off-shell dualisation.
Therefore, we expect to obtain (3.33) and (3.34) from the non-linear realisation by modifying it in a suitable way that incorporates the ℓ 2 representation.
The gauge parameters λ abc and µ ab,c arise from the decomposition Θ abc; de = 2 ε abci (−λ dei + µ de,i ) , (3.35) so that the Θ abc; de part of the gauge transformation in (3.16) reads We stress that this dual action principle (3.28)-(3.30) describes equivalent dynamics to the Fierz-Pauli action principle. Namely, it propagates a single graviton in four-dimensional Minkowski spacetime. It is an alternative off-shell description of linearised gravity and we will further analyse this action principle in Section 4. In that section, in order to make contact with the Labastida formulation for a gauge field with the symmetries of the first higher-dual graviton, we need to change convention for Young tableau. We refer to Appendix A for this change of convention.
Note that the field content of the theory {A ab,cd , Z abc,d } is in one-to-one correspondence with the set of Young diagrams obtained in the tensor product This depicts the set of GL(4)-irreducible tensors that are contained in the reducible tensor D ab;cd := 1 2 ε abij D ij; cd . (3.38) In Section 4, we will build two gauge invariant curvature tensors that do not vanish on-shell.
Anticipating this (more technical) result, the curvature for the gauge field A ab,cd starts like where ". . ." is used to denote terms that involve the field Z abc,d and where it is understood that indices with the same letters are antisymmetrised. For example, . Then, in that same section, we show that the field equations for A ab,cd are equivalent to As demonstrated in [19,20], this form of field equation is precisely what one should have for a mixed-symmetric gauge field A ab,cd that propagates non-trivially in four dimensional Minkowski spacetime. It is of higher-derivative type for a gauge field with more than two columns in its Young tableau representation, but a partial gauge-fixing procedure was found in [20] that brings such higher-derivative field equations down to the two-derivative equations (for bosonic fields) postulated in [22,23]. with respect to its GL(4) subalgebra (see Table 2).
Field theoretical analysis at higher levels
After having discussed, in great detail, off-shell dualisation from the dual graviton A ab = A (ab) to the first higher dual graviton A ab,cd , we may now proceed to the next step in the off-shell dualisation procedure. Recall that the dualisation procedure at level one transformed our set of fields from a symmetric tensor A ab to the GL(4)-reducible field D ab; cd whose Hodge dual This demonstrates how to label Young tableaux either by the heights of their columns as in (cde,f ) ≡ 0 . This decomposition into GL(4)-irreducible components becomes where · · · denotes projection onto irreducible components. Indices a 1 a 2 are antisymmetric and indices c 1 c 2 c 3 are symmetric. The GL(4) irreducibility conditions are together with the tracelessness constraints Projecting onto the symmetry of the final four indices, we find with inverse formulas Previously, in order to dualise A ab off-shell, we decomposed D ab; cd into traceless components {X, Z} , whose Hodge duals are the irreducible components of the Hodge dual D ij;cd of D ab; cd .
In order to make contact with the E -theory literature expressed using fields and generators in the antisymmetric convention, we will do something similar to {X, Y, Z, W } . Hodge dualising all of them on their first blocks of indices creates GL(4)-irreducible fields in the symmetric convention, which may then be written in the antisymmetric convention with fields labelled {A , Y , Z, W } . The full calculation is given in Appendix A, from which we find Off-shell dualisation from the dual graviton to the first higher dual graviton is given by where D A denotes one round of off-shell dualisation applied only to the dual graviton at the previous level. At the next level, dualising only the first higher dual graviton A a 1 a 2 ,bc produces a new action D 2 A (S[A ab ]) = D A (S[A ab,cd , Z abc,d ]) with the following set of fields The pattern is starting to become clear now. Label groups of k symmetric and k antisymmetric indices by a(k) and a[k], respectively. For example, After dualising the (n − 1) th higher dual graviton A a 1 [2],a 2 [2], ... ,a n−1 [2],c (2) , the set of independent fields will contain the n th higher dual graviton , ... ,a n−1 [2],a n [2],c(2) ∼ a 1 1 a 2 1 · · · a n 1 c 1 c 2 a 1 2 a 2 2 · · · a n which generalises (3.43). Ultimately, at the n th level of higher dualisation, the action will be given in terms of the following set of independent fields: In parallel with the A (n) family of dual graviton fields, the Z (n) family of extra fields starts to appear at the first level of higher dualisation when Z abc,d enters the action, while the Y (n) and W (n) families both enter the action at the second level. With index structure explicit, we see that there is only A ab ∼ A (0) [1,1] at level zero, i.e. at the level of the usual dual graviton. , [4,1,1] , P [4,2] , Q (2) [3,3] , (3.76) or equivalently as , [4,2,1,1] , P , (3.83) or equivalently as Let's take inventory. At levels one and two, dualising every field at every level, we have whereas the third level is visualised as The third higher dual graviton which consists of the fields A (4) , Y (4) , Z (4) and W (4) . In addition to this, the extra fields at the third level of higher dualisation can also be dualised off-shell, and they produce the following fields at the fourth level of higher dualisation: and respective multiplicities in the order presented above (1 , 4 , 3 , 12 , 12 , 3 , 8 , 2 , 24 , 6 , 6 , 12 , 10 , 9 , 6) .
The number of families of fields will clearly continue to increase with further dualisation.
Finally, note that W (n) is the same as A (n−2) with four extra antisymmetric indices, so they are Hodge dual. Dualising every field at every stage but only keeping track of the A (n) and W (n) families, we know that off-shell dualisation of A (n) produces both A (n+1) and W (n+1) = * A (n−1) .
At the n th level of higher dualisation, we have one copy of A (n) and at least one copy of the k th Hodge dual A (n−2k) for every positive integer k such that 0 ≤ n − 2k ≤ n . When n is even, we find that our set of independent fields contains A (0) , A (2) , A (4) , . . . , A (n) as a subset.
Similarly, when n is odd, we find that it contains A (1) , A with respect to its A 3 subalgebra. The structure of A +++ 1 is studied at each level by looking at the representation content (i.e. the weight space) of the A 3 subalgebra. Any generic A 3 weight can be expressed as
Contact with
where λ i is the i th fundamental weight of A 3 . This weight may also be written as λ = [p 1 , p 2 , p 3 ] and we may depict this weight (and its corresponding representation) by a Young diagram with p 3 columns of height 1, p 2 columns of height 2, and p 1 columns of height 3. With this, we have roots so that the new simple root α * is included.
In previous sections, we have found the extra fields appearing in the action principles and duality relations at low levels. Now we are finally ready to show, level-by-level, that off-shell dualisation produces a set of extra fields that is closely correlated with the ℓ 2 representation.
In particular, at the n th level of higher dualisation, we count fields that appear in the adjoint representation at level n + 1 and in the ℓ 2 representation at level n. This will then be compared against the set of extra fields that we obtain by off-shell dualising every field at every level.
In Table 3, the 'adj' and 'ℓ 2 ' columns contain the field multiplicities in the adjoint and ℓ 2 representations, respectively, and the 'total' column gives their sum. The 'maximal off-shell' column tells us how many of each field is found by off-shell dualising every field at every level.
Lastly, the 'net' column is the 'maximal off-shell' column minus the 'total' column. It tells us if we have too many, too few, or the right amount of fields with maximal off-shell dualisation.
From gravity to dual gravity. Recall the dualisation of gravity in Section 3.1 which gave us the dual graviton with GL(4)-irreducible symmetry type Y[D − 3, 1] and an extra (D − 2)-form which may be gauged away with a shift symmetry [1]. In four dimensions, the decomposition gives us a symmetric rank-2 field A ab and a 2-form field. The symmetric field is the familiar dual graviton a 0 := A (0) [1,1] in the adjoint representation of A +++ 1 at level 1 in Table 1, and the [2] is found in the ℓ 2 representation at level 0 in Table 2.
In contrast to what happens later, we do not dualise the extra 2-form field as it can be shifted away.
The weight [0, 1, 2] in the adjoint representation at level 2 (see Table 1) corresponds to the first higher dual graviton field a 1 := A (1) [2,1,1] , and the weight [1, 0, 1] in the ℓ 2 representation at level 1 (see Table 2) corresponds to the extra field b := Z (1) [3,1] required to build a consistent dual action principle. So, at the first level of higher dualisation, we see the complete correspondence between the fields produced during off-shell dualisation and the generators of the adjoint and There is a perfect match at this level. In Table 3, we see that b appears zero times in the adjoint at level 2 and once at level 1 in ℓ 2 . It is also required exactly once at this off-shell dualisation at this level, hence the zero in the 'net' column. Note that, although they both have the same A 3 weight [1, 0, 1], the two fields b and d 8 in ℓ 2 at levels 1 and 3, respectively, are not compared or counted together since they appear at different levels.
In fact, d 8 should be viewed as a Y[4, 3, 1] field.
The second level of higher dualisation. Taking the first higher dual graviton A [3,2,1] and W (2) [4, 1,1] . This is the minimal off-shell dualisation of linearised gravity. In addition, we may dualise the extra field from the Since it does not appear in the adjoint, c 1 appears only once in our tables. Moreover, c 1 appears precisely once in maximal off-shell dualisation at this level, so there is a perfect match for c 1 .
Moving onto the next field, we see in Table 3 that c 2 := Z (2) [3,2,1] appears once in the adjoint and once in the ℓ 2 , and it also appears twice in maximal off-shell dualisation: once from the dualisation of a 1 and once more from the dualisation of b . Another perfect match. The reader might like to check that c 4 := P (2) [4,2] gives yet another match. Unfortunately, we do not find a match for every extra field. For example, c 3 := W (2) [4,1,1] does not appear in the adjoint, and it appears once in the ℓ 2 . However, two of them are required for maximal off-shell dualisation. In other words, although ℓ 2 contains enough for the minimal off-shell description, we go slightly over when we dualise every field at every level. It is even more peculiar with c 5 := Q (2) [3,3] since it is not contained in the tables at all, yet it is needed for There is a perfect match for the GL(D) types of fields required but, as for multiplicities, we appear to be lacking a small number of fields in the tables. One possible solution could be to dualise some extra fields but not all of them. By carefully selecting which fields to dualise, this would provide an off-shell description of gravity that does not exceed the field content of the adjoint and ℓ 2 representations of A +++ 1 but nonetheless, within this restriction, as many fields as possible are dualised. This lies somewhere between the minimal and maximal off-shell dualisations, and we call it the optimal off-shell dualisation of linearised gravity.
The third level of higher dualisation. Moving onto the next level, we find that maximal off-shell dualisation produces the fields in (3.84) with Young tableaux (3.87). This set of fields contains the third higher dual graviton a 3 := A [2,2,2, 1,1] in the adjoint at level 4, and a number of extra fields that are obtained by dualising the set independent fields {a 2 , c 1 , . . . , c 5 } from the previous level. As before, looking at Table 3, we find that some fields are a perfect match and some are not. In fact, for almost all of the extra fields introduced at this level, we have a surplus of fields in the maximal off-shell description compared with the fields that are available from the adjoint and ℓ 2 representations. With the exception of the rogue scalar field d 9 := O (3) [4,4] , the Young tableaux for the extra fields perfectly match those of the spectrum of ℓ 2 at level 3. Rogue scalars like this one are found in the maximal off-shell description at all odd levels of higher dualisation greater than or equal to this level.
Optimal off-shell dualisation. In order to understand the differences between these higher dualisation schemes, it is useful to give examples at low levels. All three of them coincide at the first level of higher dualisation where the dual graviton A ab is dualised to give A ab,cd and so optimal off-shell dualisation is attained by choosing any one of these six fields not to dualise at the previous level. This is important: optimal off-shell dualisation is, in general, not unique.
Then again, we do not yet know what will happen at higher levels. It is possible that these various pathways to optimal off-shell dualisation may converge at higher levels. It would be interesting to draw the graph of optimal pathways at higher levels and to study its topology. root (0, 0, n, n + 1) whose squared length is equal to 2. That is, the n th higher dual graviton appears in the adjoint at level n + 1.
However, even at low levels, this will turn out not to produce the entire spectrum of ℓ 2 and, indeed, less than half of the spectrum of ℓ 2 at level 3 is found if we only dualise the A (n) fields.
The A +++ 1 algebra has been shown to contain the minimal off-shell dualisation of linearised gravity in four dimensions. Of course, extra fields may also be dualised off-shell, but dualising too many of them leads to field multiplicities that exceed those provided by the adjoint and Maximal off-shell dualisation contains too many fields, but it is quite interesting nonetheless. Despite some discrepancies in the multiplicities of Table 3, the correct Young tableaux shapes appear in this maximal scheme. More work is needed to fully understand the role of the rogue scalar O [4,4] at the third level of higher dualisation, and the other scalars at odd higher levels of dualisation. The situation at the fourth level of maximal off-shell dualisation is more severe with fields that have a surplus as high as +7. However, ignoring multiplicities, the Young tableaux at this level in the maximal off-shell description perfectly matches the spectrum of ℓ 2 at level 4.
In this section, we have identified a possible solution to the tricky problem of mismatched multiplicities at each level: optimal off-shell dualisation. In this scheme, one carefully chooses which extra fields to dualise so that, at each level of higher dualisation, the set of extra fields is contained in the relevant representations of A +++ 1 with multiplicities that do not exceed those in the 'total' column in Table 3. In this section, at the first level of higher dualisation, we show that the fields A ab,cd and Z abc,d may be repackaged into new fields: A ab and A ab,cd with the respective symmetry types of A (0) = A ab and A (1) = A ab,cd . We will show that the gauge transformation laws of these two fields are almost identical to those of the Fierz-Pauli field for A ab and the Labastida gauge field [23] with symmetry type Y(3, 1) = Y[2, 1, 1] for A ab,cd , with additional terms that entangle the two gauge transformation laws. In order to make contact with the Labastida formalism where mixed-symmetry fields are given in the symmetric convention for Young tableaux, we will also use this convention for the first higher dual graviton in this section. However, it should be noted that this convention is not used in the context of E -theory.
The graviton tower action at low levels
The equivalent formulation we will present for the action of the first higher dual graviton in terms of A ab,cd and A ab has the advantage of showing more explicitly the number of degrees of freedom through an on-shell duality relation between the gauge invariant curvature tensors of the two fields, as is usual in this context [11,19,34].
Change of variables. Recall that the fields appearing in the action (3.24) were X ab; ij and Z a; e , the latter being the Hodge dual of Z bcd,e , see (3.25 From the independent field variables X ab; ij and Z a; e we introduce the two-form field that transforms like while we recall from Section 3.2 that the field φ abc, d transforms like We define the Y(3, 1)-type gauge field φ abc,d := φ abc,d + 3 4 η (ab U c)d (4.4) that transforms like where λ abc := λ abc − 1 4 η (ab τ c) , µ ab,c := µ ab,c + 1 6 η ab τ c − η c(a τ b) . (4.6) The advantage of this change of variable is that the newly defined gauge parameters λ abc and µ ab,c have the same trace: This also implies that, among the three linearly independent gauge fields { φ abc,d , U ab , Z (a;b) }, We now express the action (3.24) in terms of the independent fields X ab; cd , U ab and f ab := Z (a;b) : Substituting this into the parent Lagrangian, we obtain the following dual Lagrangian which is given, up to a total derivative, by This action is invariant under δf ab = 1 4 ε a cde ∂ c µ bd,e + ε b cde ∂ c µ ad,e + ∂ (a ǫ b) − 1 4 η ab ∂ c ǫ c , (4.14) Finally, we combine the scalar field S with the traceless symmetric field f ab := Z (a;b) to get We therefore obtain a new action S[ A ab , A ab,cd ] which takes the form and X a 1 a 2 ; are understood throughout. This allows us to write the repackaged first higher dual graviton A ab,cd in a variety of useful ways. For example, it was convenient to write the above action in terms of A ab and X ab; cd . It is invariant under the following intertwined gauge transformations: Trivially, one would need to make use of (4.18) before checking gauge invariance under (4.21).
The λ abc and µ ab,c parts of the gauge transformations for φ abc,d coincide with the Labstida gauge transformations for a gauge field of type Y(3, 1). In particular, the two gauge parameters are constrained to have equal trace. The ǫ a part of the gauge transformations for the (traceful) symmetric rank-two tensor A ab corresponds to linearised diffeomorphisms. However, notice that A ab also transforms with the µ ab,c gauge parameter, and that φ abc,d transforms with the gauge parameter ǫ a . As we have seen in [30] for higher dualisation of gauge fields, we find that the action contains fields that resemble the original dual graviton A ab and the first higher dual graviton φ abc,d with entangled gauge transformation laws. By the construction of our dual action using the parent action procedure, we know that the on-shell degrees of freedom are only those of a single massless spin-2 field around four-dimensional Minkowksi spacetime.
Nevertheless, we will rederive this fact from the field equations. It is clear that the dual action is more than just the sum of the Fierz-Pauli and Labastida actions.
It is also important to remember that this repackaging approach seeks to drastically redefine our fields for reasons that will become clear towards the end of this section. As a result, gauge transformations (4.20) and (4.21) are not expected to resemble the gauge transformations for A ab and A ab,cd that were found in Section 2.2 and Section 3.2.
Field equations. The equations of motion for the fields A ab and φ abc,d are given by We find that the left-hand side of the field equation for A ab is related to the trace of K ma,nb : where K ab := η mn K am,bn and K := η ab K ab . Obviously, on-shell, we have K ab ≈ 0 which is to be compared with the field equation (2.28) in Section 2.3. This is analogous to the Ricci-flat equation in linearised gravity.
We also have the following gauge-invariant quantity with three derivatives: Therefore, on-shell, we find the following relation that will be instrumental in showing that the degrees of freedom are those of a single graviton: On-shell duality relation. We find that the gauge-invariant tensor G ab,cd; e is related to K ab,cd and the left-hand-sides of the equations of motion in the following way: These equations are important in several respects. The tensors K ab,cd and G abc,de,f g can be called the field strengths for the repackaged dual graviton and first higher dual graviton, resepctively. Indeed, they do not vanish on-shell and they are gauge invariant. This field equation completes those found in Section 2.3.
With these field equations, we have found a strong parallel with the analogous equations derived in Section 2.3. However, since no action principle was considered in that section, each field strength was a function of a single field. Instead, in the off-shell formulation found in the present section that requires the extra field Z abc,d to be repackaged, equations necessarily entangle both fields due to the nature of the gauge transformation laws.
We conjecture that this dualisation and repackaging procedure creates an increasingly tall tower of new repackaged dual gravitons whose gauge transformation laws are intertwined. In particular, for a given tower with highest level N, the field φ (n) at level n ≤ N should transform as a Labastida gauge field of symmetry type Y[2, . . . , 2, 1, 1] . Its gauge transformation law should contain terms that entangle it with the repackaged fields at every level lower than n .
Moreover, if n < N , then its gauge transformation law will also be entangled with that of the repackaged field at level n + 1 . It may even be possible to redefine fields so that the repackaged dual graviton at level n is entangled only with those at level n + 1 and n − 1 .
For the graviton tower action S[ A ab , A ab,cd ] in (4.17), the extra field Z abc,d was completely hidden by the specific field and gauge parameter redefinitions used to construct A ab . However, this may only be possible at low levels, so we cannot yet exclude the possibility that some extra fields may still be present in the graviton tower actions at higher levels.
The non-linear realisation contains an infinite number of dualisations of gravity. It consists of an infinite set of duality relations, the first of which involves only the graviton and the dual graviton. This relation was worked out at the full non-linear level in [17,18]. In Section 2, we have used the non-linear realisation to work out the linearised equations of motion for the first higher dual graviton.
While on the other hand, in [9] it was shown that pure linearised gravity could be described by any member of an infinite family of action principles, each involving more and more fields.
Some of these fields were shown in [9] to have a direct connection with the adjoint representation of the very-extended A +++ D−3 algebra, while other fields received no interpretation at that time. In the present paper, where we focus on D = 4 for the sake of concreteness, we showed that the aforementioned fields are all associated with generators in the ℓ 2 representation of A +++ 1 in the sense that there exist generators in ℓ 2 that have the same GL(4) types. We have carried out this match up to level four and, while there is a striking agreement at low levels, some of the multiplicities differ for the extra fields.
We also constructed, at the level of the first higher dual graviton, a new action principle featuring two fields A ab and A ab,cd with the GL(4) symmetry types Y[1, 1] and Y[2, 1, 1] of the dual graviton and the first higher dual graviton, respectively. The gauge transformations of these two fields are those of the dual graviton and the corresponding Labastida field, along with extra terms that entangle the two fields. Remarkably, the field equations can be obtained from a duality relation between the gauge invariant curvatures of these repackaged fields, which further demonstrates that our original action only propagates a single graviton.
That the field equations can be encapsulated in a set of duality relations is in full agreement with the method of obtaining the field equations in the non-linear realisation of A +++ 1 ⋉ ℓ 1 .
In a future work in preparation, we will extend our analysis to pure gravity in five dimensions where the relevant algebra is A +++ 2 . We will also consider pure gravity and the bosonic sector of maximal supergravity in eleven dimensions. It is well-known that the relevant Kac-Moody algebras for these theories are A +++ 8 and E 11 , respectively. There, we will also show how their ℓ 2 representations are related to the set of off-shell fields entering higher dual action principles.
It will be important to modify the coset space used to construct the non-linear realisation for these algebras in order to incorporate ℓ 2 . Consequently, this will account for the extra fields that were thought to be missing from E -theory until now.
Finally, it would be interesting to make a contact with [35] where the importance of the ℓ 2 representation of E 11 was noticed in a similar context. It is not yet clear to us that there is a connection since their equations of motion are obtained from the E 11 pseudo -Lagrangian by a variational principle supplemented by extra duality relations that are not derived by variation.
More specifically, variations with respect to constrained fields (which carry a section constraint index) vanish only when these extra duality relations are imposed. In contrast, our off-shell dualisation approach produces equations of motion and duality relations that are all obtained by varying dual actions. Nothing external needs to be imposed here. Another line of research is to investigate the possible non-linear extensions of the higher dual actions considered here.
B Representations of A +++ 1 at the next level | 14,462 | 2022-08-24T00:00:00.000 | [
"Physics"
] |
EPLIN: a fundamental actin regulator in cancer metastasis?
Treatment of malignant disease is of paramount importance in modern medicine. In 2012, it was estimated that 162,000 people died from cancer in the UK which illustrates a fundamental problem. Traditional treatments for cancer have various drawbacks, and this creates a considerable need for specific, molecular targets to overcome cancer spread. Epithelial protein lost in neoplasm (EPLIN) is an actin-associated molecule which has been implicated in the development and progression of various cancers including breast, prostate, oesophageal and lung where EPLIN expression is frequently lost as the cancer progresses. EPLIN is important in the regulation of actin dynamics and has multiple associations at epithelial cells junctions. Thus, EPLIN loss in cancer may have significant effects on cancer cell migration and invasion, increasing metastatic potential. Overexpression of EPLIN has proved to be an effective tool for manipulating cancerous traits such as reducing cell growth and cell motility and rendering cells less invasive illustrating the therapeutic potential of EPLIN. Here, we review the current state of knowledge of EPLIN, highlighting EPLIN involvement in regulating cytoskeletal dynamics, signalling pathways and implications in cancer and metastasis.
Introduction
The incidence of cancer is slowly rising and has become a global burden. A fundamental reason why cancer is such a problem is because of its ability to spread, invade surrounding tissue and potentially form secondary cancers at distinct sites around the body by metastasis. Cancer hallmarks include uncontrolled cell growth and evasion of cell death, and this ultimately can lead to tumour formation. According to the World Health Organisation (WHO), 8.2 million people died from cancer in 2012 worldwide [1]. In the UK alone, mortality rates reached 162,000 annual deaths [2]. This illustrates a considerable need for better treatment, diagnosis and management of the disease. Epithelial protein lost in neoplasm (EPLIN) is a molecule involved in regulation of the actin cytoskeleton and has been implicated in the development and progression of various cancer types, displaying frequent downregulation or loss in cancer, creating a potential for prognostic targeting and as a tumour suppressor. This current review discusses EPLIN's role in actin dynamics and in the pathophysiology of cancer development and progression.
Epithelial protein lost in neoplasm
EPLIN is a cytoskeletal, actin-binding protein encoded by the LIMA1 gene. EPLIN was initially identified in oral cancers for its differential expression between normal oral epithelial cells and human papilloma virus (HPV)-immortalised oral epithelial cells [3]. EPLIN exists as two distinct isoforms, a 600 amino acid EPLINα isoform and a larger 759 amino acid EPLINβ isoform, generated from an alternative pre-mRNA splicing event (see Fig. 1) [4]. The EPLINα isoform has been implicated in the progression of various cancers, and this was initially recognised in oral cancer, breast, prostate and xenograft tumours where EPLIN expression was either downregulated or completely abolished [4]. The amino acid sequence of EPLIN is characterised by a single centrally located LIM domain which supposedly aids structural selfdimerisation and contains subdomains for zinc binding (see Fig. 2) [4,5]. LIM-domain-containing proteins are frequently present in molecules responsible for cytoskeletal organisation, such as the focal adhesion phosphoprotein, paxillin [6]. EPLIN is important in the regulation of actin dynamics and aids actin filament bundle assembly, and the amino terminal of the EPLIN protein structure is essential for this localisation to the actin cytoskeleton [7]. The EPLIN genomic structure consists of 11 exons and ten introns, with exons 1-3 only present in EPLINβ and EPLINα utilising exons 4-11 of LIMA1 for transcription [5]. The EPLIN gene has two separate promoter regions; the EPLINβ promoter is near the start of the gene in exon 1, whilst EPLINα initiates ∼50 kb downstream near the end of intron 3, prior to exon 4 and at amino acid position 161 in the EPLINβ protein (see Fig. 1) [5]. Sequence analysis has revealed that EPLIN is conserved across species with EPLINα and EPLINβ isoforms present in mouse, displaying 77 and 75 % identity similarity for human EPLINα and EPLINβ, respectively (see Fig. 3) [8]. A role for EPLIN has also been suggested in muscle development in pigs, where EPLIN displayed a temporal expression pattern with only the EPLINα isoform present in developing skeletal muscle [9]. Since the discovery of EPLIN, our lab has shown that aberrant EPLIN expression is associated with the progression of various cancer types including breast, oesophageal, pulmonary and prostate cancer. EPLINα levels decrease as the cancer progresses and becomes more advanced, giving EPLINα potential to provide prognostic value, and overexpression analysis suggests that EPLINα is a putative tumour suppressor [10][11][12][13][14]. The described loss of EPLIN in cancer has functional implications on the actin cytoskeleton and may contribute to enhanced metastatic potential of cancer cells.
The epithelial protein lost in neoplasm interactome: regulation in actin dynamics
EPLIN has a number of functional partners (see Fig. 4/ Table 1), and the globular protein actin is central to the function of EPLIN. EPLIN has two functional actin-binding sites which flank the central LIM domain, and it is this binding capacity that engenders actin cross linking and actin filament bundle assembly [15]. A fibrillar pattern is displayed by both isoforms, and expression of EPLINα enhances the size and number of actin filament stress fibres and can also inhibit membrane ruffling via the signalling GTPase, Rac1 [15]. EPLIN therefore directly interacts with actin which suggests a possible role for EPLIN in cell migration, adhesion and cell morphology. Actin is an abundant, multifunctional protein responsible for cell migration in eukaryotic cells. Actin is part of the cytoskeletal network which consists of microtubules, microfilaments and intermediate filaments which are vital for cellular functions. Actin exists as monomers (G-actin) and Fig. 1 Schematic diagram of the LIMA1 genomic structure and EPLIN structural isoforms. The LIMA1 gene consists of 11 exons and ten introns. EPLINα differs from EPLINβ at the amino terminus where an additional 160 amino acids are present in EPLINβ. Shown below EPLINβ is the 52amino acid centrally located LIM domain common to both EPLIN isoforms. Adapted from [4] filamentous polymers (F-actin) and is important for physiological functions including cell locomotion, cytokinesis, maintenance of cell shape and muscle contraction [24]. Transcription of the LIMA1 gene is suggested to be primarily controlled by monomeric G actin, with the actin-MAL-SRF signalling pathway regulating EPLIN production [25]. Maul et al. [15] illustrated that EPLINα has three significant features: EPLINα has at least one binding site for actin and can cross link and bundle actin filaments, EPLINα stabilises actin filaments in vitro, and EPLINα inhibits branching nucleation of actin filaments by the Arp 2/3 complex [15]. Therefore, this suggests that EPLIN may orchestrate actin filament dynamics by stabilising actin cytoskeletal networks [15]. Additionally, EPLIN has been shown to form part of an actin-remodelling complex composing of EPLIN, β-actin, γ-actin and gelsolin which co-localises at the plasma membrane to the tumour suppressor, phosphatase and tensin homolog (PTEN) [26]. PTEN is a well-established tumour suppressor molecule, so this asks the question of whether the interaction of PTEN and the actin-remodelling complex is itself an element suppressing the development of neoplastic tissue and whether any interruption of these complexes may promote cancer progression. Based on these findings, EPLIN accommodates actin to accomplish various actin-related cellular processes including cell motility and migration and cell junctional adhesion [17]. There is increasing evidence to suggest that EPLIN regulates actin structures in cooperation with the signal transduction adaptor protein, paxillin. When EPLIN is overexpressed, paxillin exhibits an increased staining pattern for both human endothelial cells line (HECV) and PC-3 cells [12,13]. This colocalisation pattern is also observed in cultured human mesangial cells at focal adhesion sites, and coimmunoprecipitation results confirm an association between the two molecules [16]. EPLIN and paxillin may form a complex and potentially stabilise focal adhesions to co-ordinate actin dynamics in a complimentary manner. Given EPLIN's role in actin dynamics, it is strongly implicated in cellular processes including cell migration and invasion, and thus, downregulation or loss of EPLIN expression in cancer may likely affect the metastatic potential of cancer cells. Actin and EPLIN are located in epithelial cells at the adherens junction (AJ) and contribute to functional cellular adhesion between adjacent cells.
The adherens junction
The AJ is a type of anchoring junction found predominantly in epithelial cells, also referred to as the zonula adherens, which functionally link the actin cytoskeleton together in cells via Areas of amino acids that are conserved across species are highlighted. The region shown is amino side of the EPLINβ protein, where EPLINα originates at amino acid (AA) p. 161. ClustalW generated using BioEdit Biological Sequence Alignment software linker molecules. EPLIN is an actin-binding protein, which functions to bundle actin filaments; therefore, EPLIN presence is required at AJ along with filamentous actin. AJ contains various protein complexes along with EPLIN, and these include cadherins, catenins and p120 catenins (see Fig. 5) [28]. Within the AJ, cadherins and catenins associate together to form the cadherin-catenin complex and EPLIN provides a direct physical link for this complex to the actin cytoskeleton [17]. The cadherin-catenin complex is composed of Ecadherin, β-catenin and α-catenin, with E-cadherin positioned between adjacent cells and the catenins positioned in the cytoplasmic space of each cell [29]. E-cadherin binds directly to β-catenin which sequentially binds α-catenin, generating the cadherin-catenin complex (see Fig. 5) [30]. α-Catenin is a crucial player at the AJ and was initially recognised as responsible for providing the bridge between the cadherin-catenin Paxillin May form a complex with EPLIN to co-ordinate actin dynamics. IHC of PCa tissue vs normal reveals that EPLIN overexpression influences paxillin expression and localisation. Co-localisation, co-precipitation and an in situ proximal ligation assay revealed direct association between the two molecules in cultured human mesangial cells. [12,13,16] α-Catenin Immunoprecipitation and GST pull-down assays reveal that EPLIN interacts with α-catenin, forming a cadherin-βcatenin-α-catenin-EPLIN complex. [17]
Supervillin
In vivo co-localisation studies and in vitro GST pull-down assays reveal that EPLIN interacts with the peripheral membrane protein, supervillain. [18] PINCH-1 Pull-down assays reveal that endogenous EPLIN co-immunoprecipitates with endogenous PINCH-1 in keratinocytes. [19] ERK ERK phosphorylates EPLIN and decreases EPLIN affinity to F-actin promoting cell migration. Inhibition of ERK abolishes EPLIN expression and reduces tumour-suppressive ability of EPLIN.
[10, 13, 20] DNp73 In melanoma cells, both EPLIN isoforms are inhibited by DNp73, and this drives a more invasive phenotype. [21] SATB2 EPLIN is differentially regulated by the DNA-binding protein, SATB2. SATB2 regulates the actin cytoskeleton via EPLIN association. When SATB2 is knocked out, osteosarcoma cells show reduced migration and are less invasive, and this is mediated by EPLIN. [22] Cav-1 EPLIN regulates the lipid raft tumour-suppressive protein, Cav-1. Co-immunoprecipitation and mass spectroscopy analysis revealed that EPLIN and Cav-1 bind to each other in normal and RasV12 cells. [23] complex and actin [28,31]. This principle, however, has come under scrutiny, as direct in vitro binding between α-catenin and actin has never been detected [32,33]. Cavey and coworkers [34] realised that α-catenin is not essential for Ecadherin stability, as complexes of α-catenin and E-cadherin were detected in RNAi α-catenin embryos [31]. Therefore, it is apparent that cadherin-actin interaction is regulated not only by α-catenin but by a number of actin-binding proteins that are associated with α-catenin, including EPLIN [30,35]. Abe and Takeichi [17] demonstrated, by immunoprecipitation and GST pull-down assays, that both EPLIN isoforms directly interact with the α-catenin VH3 plus C-terminal region to generate a cadherin-catenin-EPLIN-actin complex at cell junctions [17]. When EPLIN is depleted, the bridge to Factin was unable to form due to loss of organisation of the apical actin belt, with punctate accumulation of E-cadherin at cell junctional points [17]. This illustrates the importance of EPLIN in producing functional epithelial junctions. Additional molecules involved in regulating the AJ include the membrane cytoskeletal protein, vinculin, and the actin filamentbinding protein, afadin [36,37]. Recent work from Taguchi et al. [35] illustrated that EPLIN and vinculin may collaborate together in AJ formation via binding α-catenin either together or individually and cooperatively aid junctional adhesion [35]. The cooperation of EPLIN and vinculin in cellular adhesion is also evident in endothelial cells. At the endothelial AJ, the endothelial E-cadherin homolog, VE-cadherin, interacts with βand γ-catenins, which sequentially bind α-catenin and EPLIN in an analogous fashion to epithelial cells [38]. This allows the recruitment of vinculin and ultimately promotes strengthening of inter-endothelial junctions [38]. The authors propose a role for EPLIN in tension dissemination at the endothelial AJ in a mechanosensory mechanism [38]. The machinery from actomyosin exerts tension through EPLIN, which causes α-catenin to adopt a more accessible conformation, revealing a vinculin-binding site and allowing vinculin recruitment and actin association at endothelial cell-cell AJ [39]. This mechanotransduction mechanism consisting of EPLIN and α-catenin suggests that the endothelial AJ is regulated in a spatial and temporal fashion [39]. Finally, EPLIN also appears to be important for attachment to F-actin in endothelial cells; when EPLIN expression is downregulated, the organisation of F-actin is considerably disrupted, leading to multiple holes in the actin cytoskeleton [38]. Altogether, this suggests that the AJs of epithelial and endothelial cells are orchestrated by various actin-binding, α-catenin-associated molecules and are dynamically regulated, with EPLIN having a critical role in cell adhesion, creating further implications of EPLIN loss in cancer.
Epithelial protein lost in neoplasm-a key player in cell division?
Cell division is the splitting of one cell into two, where biological information is passed onto daughter cells. For this process to successfully occur, various proteins need to functionally regulate the division and these include Rho GTPases, cyclin-dependant kinases, integrins, cdc42, focal adhesion Adapted from [27] kinases, myosin and the globular protein actin [40]. With this in mind, an actin-binding protein like EPLIN may potentially have a regulatory role in cell division. This has been recently shown using HeLa cells where EPLIN depletion resulted in large numbers of multinucleated cells, signifying cytokinesis failure during cell division [41]. In successful mitotic division, actin and myosin II accumulate at the cleavage furrow during cytokinesis and EPLIN loss compromised each protein's ability to efficiently do this [41]. EPLIN appears to be important for the accumulation of other mitotic regulatory proteins including the GTPases RhoA and cdc42, where EPLIN depletion resulted in either a significantly reduced concentration of RhoA or a misplaced location of cdc42 at the cleavage furrow [41]. EPLIN aids this successful cell division in conjunction with a number of regulatory proteins including supervillin and the oncogenic kinesin, KIF14, suggesting a complex network of regulatory proteins at the cleavage furrow [18]. Altogether, this suggests that EPLIN may have an integral role in cytokinesis and loss may lead to aneuploidy and genomic instability of daughter cells [41]. Therefore, EPLIN is crucial to coordinate actin and myosin dynamics throughout cell division and loss in cancer cells could have downstream effects on successful cytokinesis, increasing their tendency to form a cancer [41].
Post-translational modification
Extracellular signal-regulated kinase (ERK) is a member of the mitogen-activated protein kinase (MAPK) family and is important in the regulation of actin organisation by phosphorylating various proteins including paxillin, focal adhesion kinase (FAK) and other protein kinases and nuclear transcription factors to co-ordinate cellular processes [42,43]. ERK is implicated in cellular events including cell migration and may facilitate this by phosphorylation of actin-bundling proteins like EPLIN [20]. The protein structure of EPLIN has multiple putative phosphorylation sites (see Fig. 6), and it has been shown that ERK phosphorylates EPLIN at Ser360, Ser602 and Ser692 in vitro and in vivo [20]. Phosphorylation at the carboxy terminal of EPLIN decreases affinity to F-actin and thus provokes a reorganisation of the actin cytoskeleton, enhancing cell migration [20]. This implicates ERK in actin organisation and cell motility with EPLIN being a critical substrate for phosphorylation [20]. A recent study by Zhang et al. [44] illustrated that this ERK-mediated phosphorylation of EPLIN is itself regulated by epidermal growth factor (EGF) and revealed how targeting this signalling cascade can be manipulated to reduce epithelial-mesenchymal transition (EMT) and, thus, prostate cancer invasiveness [44]. ERK also plays a role in targeting EPLIN to focal adhesions and effects the interaction with paxillin; activation of the MEK-ERK pathway both reduced localisation of EPLIN to sites of focal adhesions and abolished paxillin interaction [16]. These data suggest that ERK is functionally related to EPLIN and provides a critical regulatory role for appropriate actin dynamics and may have implications in cancer progression.
The role of epithelial protein lost in neoplasm in cancer
Cancer progression involves various cellular, morphological and molecular alterations which result in a transformed cellular phenotype, ultimately having the potential to invade surrounding tissue and disseminate throughout the body. Cancer treatment options remain largely unspecific and create various undesired side effects. Therefore, elucidating a molecular target for treating cancer, in addition to understanding the mechanism of cancer development, is crucial. EPLIN first received attention for its involvement in cancer in 1999 where EPLIN downregulation was described in various cancer cell lines [4]. Altogether, low levels of EPLIN transcript were found in 8/8 oral cancer cell lines, 5/6 breast cancer cell lines and 4/4 prostate cancer cell lines [4]. Using PC-3 and DU-145 prostate cancer cell lines, EPLIN expression was significantly reduced compared to primary prostate epithelial cells (PrEC), whereas the prostate specific antigen (PSA) positive LNCaP and LAPC4 prostate cancer cell lines failed to express EPLINα at all [4]. This notion of EPLIN loss is also seen in breast cancer where EPLIN expression in cell lines BT-20, SKBr-3, MCF-7, T-47D and MDA-MB-231 was either reduced or completely lost [4]. Lastly, the authors demonstrated EPLIN as a putative tumour suppressor molecule, where overexpression of EPLINα caused a reduction in cancer cell growth [4]. Interestingly, when EPLINα was depleted in breast cancer cell lines, EPLINβ either remained consistent or actually increased [4]. This illustrates the potential cancer protective effects that the EPLINα isoform may exert in various cancer cell systems. EPLIN overexpression has also proved effective in altering the growth phenotype and morphology in additional cell systems including anchorage-independent NIH3T3 transformed cells [7]. Using a soft agar assay and utilising the activated Cdc42 or the chimeric nuclear oncogene EWS/Fli-1 to transform NIH3T3 cells, EPLIN overexpression resulted in a ∼80 % decrease in colony formation for Cdc42 transformed cells, with a similar growth decrease in EWS/Fli-1 transformed cells [7]. Interestingly, EPLIN displayed heterogeneous staining throughout Ras cells rather than localisation to the actin cytoskeleton [7]. This implies that oncogenic transformation affects the EPLIN/actin architecture, and thus, the localisation of EPLIN to the actin cytoskeleton may be important to exert its suppressive ability [7].
Prostate cancer
There is increasing evidence to suggest that EPLIN is implicated in the development of prostate cancer and the process of EMT. EMT is a process where polarised epithelial cells are downregulated and subjected to biochemical and morphological changes. Epithelial cells can become transformed to a mesenchymal cell phenotype, losing their cell polarity and cell adhesion at cellular junctions [45]. The converted mesenchymal cell phenotype has a reorganised cytoskeleton and experiences alterations in cell signalling which engenders enhanced migratory and invasiveness capabilities and increased resistance to apoptosis [45,46]. During these cellular changes, the actin molecular architecture is disrupted and protein complexes like the cadherin-catenin complex and epithelial markers are lost [47]. EPLIN is associated with the cadherin-catenin complex and contributes to functional cytoskeletal dynamics [17]. Zhang and co-workers [47] used biochemical and functional approaches to demonstrate that EPLIN is a negative regulator of EMT and invasiveness in prostate cancer (PCa) cells. EPLIN was significantly decreased in cells of more mesenchymal morphology (known as the androgen refractory cancer of the prostate (ARCaPM) cell lineage model), suggesting that EPLIN downregulation is directly implicated in EMT, along with the cadherin-catenin complex [47]. Depletion of EPLIN also provokes various [20] is indicated. Phosphorylation status predicted using NetPhos 2.0 software other morphological changes including disassembly of AJ, increased migratory and invasive potential of cells in vitro, activation of β-catenin signalling, increased expression of vimentin, increased chemoresistance and decreased expression and nuclear translocation of E-cadherin [47]. Lastly, the authors used immunohistochemistry to show that EPLIN downregulation is correlated with cancer progression in multiple cancer models including lymph node metastasis in PCa, where EPLIN expression was significantly reduced [47]. Altogether, this illustrates that EPLIN may be involved in the regulation of EMT and PCa progression and loss of EPLIN can lead to diverse downstream cellular effects. EPLIN in PCa has also been recently evaluated by our laboratory using the classical PCa cell line, PC-3. By immunohistochemistry (IHC), EPLIN displayed a significant decrease in staining for tumour cells in comparison to normal (see Fig. 7a) [12]. This is accompanied by quantification of staining intensity within the cohort, where lower levels of EPLIN staining are associated with cancerous and higher-tumour-grade samples (see Fig. 7c, d) [12]. Overexpression analysis of EPLINα resulted in decreased growth rate in tumour cells in vitro, along with reduced invasiveness and cell adhesion to the extracellular matrix (ECM) [12]. Mice injected with PC-3 cells overexpressing EPLINα developed tumours at a markedly decreased rate in comparison to control [12]. Furthermore, cells overexpressing EPLINα displayed a greater staining pattern for the focal adhesion targeting protein, paxillin, implying that EPLIN may also be present at these plaques [4,12]. Altogether, these results demonstrate the potential of EPLIN for monitoring PCa progression and how it can be manipulated to suppress PCa via impeding cancerous traits.
Breast cancer
Our lab has recently evaluated EPLIN involvement in cancer progression in a number of model systems [10][11][12][13][14]. By comparing EPLINα IHC staining in normal vs tumour cells in breast cancer progression, EPLINα was found to be (2) IHC analyses of normal and cancerous clinical prostate sections revealed a greatly reduced staining pattern of EPLIN in tumour samples. Overexpression of EPLIN in PC-3 cells negatively impacted cell growth in vitro and in vivo, and were less invasive and had reduced adhesion to the ECM. (3) EPLIN is implicated in the process of EMT, and IHC analysis revealed that EPLIN loss is correlated with prostate cancer progression, with a significant reduction of EPLIN expression in tissues with lymph node metastases compared to primary tumours and normal prostate tissues.
(1) [4] (2) [12] (3) [10,47] Breast cancer (1) [4] (2) [10] (3) [47] Oesophageal cancer Q-PCR analyses revealed lower levels of EPLINα transcript in tumour tissues compared to normal. Higher-tumourgrade samples had lower EPLIN transcript. Patients who died of the cancer had significantly lower levels of EPLIN transcript. Patients with local advanced T stage cancer (T2-T4) and patients with lymphatic metastasis had lower levels of EPLINα transcript. Overexpression analysis caused cells to be less invasive and to have a reduced growth rate in vitro and in vivo. [11] Pulmonary cancer Q-PCR analyses revealed reduced levels of EPLINα transcript in tumour samples compared to normal. Tissues of a higher TNM stage and where there was nodal involvement also had lower EPLIN transcript. Overexpression analysis revealed a reduction of cell growth and motility in the SKMES-1 cell line. [14] Colorectal cancer IHC analyses revealed that EPLIN is significantly reduced in lymph node metastatic tumours compared to primary tumours in colorectal cancer. [47] SCCHN IHC analyses revealed a reduction of EPLIN staining of cancerous tissue with lymph node metastasis compared to primary tumours. [47] Oral cancer Northern analyses determined that EPLIN expression in 8/8 oral cancer cell lines is reduced compared to control G3PDH. [4] substantially weaker in tumour cells than in normal epithelial cells (see Fig. 7b) [10]. This correlated with lower EPLINα transcript in tumour samples compared to normal samples (see Fig. 7e) with lower EPLIN levels being associated with higher tumour grade, a poorer patient prognosis and reduced overall survival rates (see Fig. 7f-h) [10]. IHC analyses in breast cancer from additional research groups also show EPLIN loss as the tumour becomes more aggressive, specifically comparing EPLIN immunointensity of primary tumours vs tumours with lymph node metastases [47]. Lastly, in vitro and in vivo overexpression analysis of EPLIN highlighted significant reductions in cell growth and cell invasion using transfected Fig. 8 Proposed EPLIN signalling pathways and implications for loss in cancer. When cancer is not present, EPLIN associates with the actin cytoskeleton linking the cadherin-catenin complex to F-actin via interaction with α-catenin. The signal transduction protein, paxillin, interacts with EPLIN in the cytoplasm, and this complex likely stabilises actin dynamics. ERK phosphorylates EPLIN regulating cell motility and migration. When cancer is present and EPLIN is lost, the actin cytoskeleton becomes less organised and this induces membrane ruffling. Paxillin targeting is likely lost reducing focal adhesion between the cadherincatenin complex and actin. These molecular, cellular and morphological consequences may result in increased metastatic potential including enhanced cell migration and motility. Signalling pathways summarised from [12,16,20,47]. Image generated using Pathway Builder 2.0 software breast cancer cell lines, and also, highly significant reductions in tumour size were observed in nude mice inoculated with EPLIN-α-transfected vs control MDA-MB-231 breast cancer cells [10].
Further pathological implications
Clinical implications for EPLIN also include oesophageal and pulmonary cancer (Table 2). Q-PCR analysis displayed reduced expression of EPLINα in an oesophageal cancer cohort for both cancerous tissue and cancer cell models [11]. With regard to tumour histological grade, tumour-node-metastasis (TNM) status, nodal status and survival status, EPLINα transcript generally decreased with levels significantly lower in patients who ultimately died from the cancer, suggesting that EPLINα is implicated in oesophageal cancer progression and may give an indication for cancer prognosis [11]. Overexpression analysis of EPLINα in the KYSE150 cell line resulted in decreased growth and invasiveness compared to normal, suggesting that EPLINα has tumour-suppressive ability by regulating cellular aggressiveness in oesophageal cancer [11]. In a pulmonary cancer cohort also conducted by our lab, Q-PCR analysis showed a reduction of EPLINα expression in tumour vs normal samples, where EPLIN was also reduced in later TNM stages and cancers with lymph node involvement [14].
Using the SKMES-1 cell line, overexpression of EPLINα via transfection in the pEF6 expression vector inhibited cell growth and cell motility [14]. In addition to the apparent molecular loss of EPLIN in various cancers, EPLIN also appears to be reduced at the protein level for colorectal cancer and squamous cell carcinoma of the head and neck (SCCHN), where IHC analysis revealed that EPLIN is decreased in cancers with lymph node metastases vs primary tumours [47]. Collectively, these studies suggest EPLIN may be a clinical indicator for cancer progression in addition to providing further evidence of a tumour-suppressive role for EPLINα in the regulation of cancer progression.
Finally, in addition to the implication of EPLIN in the spread and progression of cancer, a recent publication provides a link between EPLIN and renal diseases where patients with either membranoproliferative glomerulonephritis (MPGN) or IgA nephropathy had a decreased expression profile for EPLIN via IHC analysis [16]. This advocates the idea that EPLIN may be involved in the pathology of various disease states.
Angiogenesis
Angiogenesis is the formation of new blood vessels from preexisting vessels and is essential for wound healing and normal growth and development. The angiogenic process is frequently utilised by cancer cells, by a means of metastasis, to reach secondary sites around the body and develop secondary tumours. Angiogenesis is therefore a critical factor when targeting cancer therapies. EPLINα demonstrates a suppressive role in angiogenesis, where overexpression analysis in the HECV endothelial cell line resulted in a reduced capacity to generate tubular structures in a Matrigel tubule formation assay when compared to vector controls [13]. This regulatory effect was also apparent in vivo where mice injected with HECV cells overexpressing EPLINα in conjunction with cancer cells developed tumours significantly slower than controls [13]. Forced expression also appears to exert an effect on cell matrix adhesion and migration capabilities in this cell line where cells overexpressing EPLINα both migrated at a significantly slower rate and were significantly less able to adhere to the Matrigel basement membrane [13]. This suppressive role in angiogenesis illustrates that EPLINα has potentially various regulatory mechanisms for reducing cancer metastasis and could be an effective target for cancer therapy.
Conclusions and outlook
The interaction of EPLIN and actin has provided an excellent model for investigating multiple aspects of cancer progression over the last decade. The discovery of EPLIN led to a subtle paradigm shift in structural view and organisation of cytoskeletal dynamics, with the acknowledgment that various actinrelated molecules contribute to multiple dynamic processes underlying cellular migration and invasion. There is an established link between EPLIN and cancer progression with frequent downregulation of EPLIN in more aggressive cell lines, reduced staining in cancerous tissue samples and reduced growth potentials when there is forced expression of the EPLINα isoform in vitro and in vivo. EPLIN is functionally linked to molecules like actin and paxillin and has been implicated in a number of potential pathways to enhance metastatic potential (outlined in Fig. 8). However, the precise mechanistic action of EPLIN and, subsequently, how EPLIN loss contributes to the development of cancer remain elusive. Mechanistic investigations will therefore be crucial to elucidate the full importance of EPLIN in cancer pathophysiology. | 6,727 | 2015-09-09T00:00:00.000 | [
"Biology",
"Medicine"
] |
Real-Time Human Motion Capture Driven by a Wireless Sensor Network
. The motion of a real object model is reconstructed through measurements of the position, direction, and angle of moving objects in 3D space in a process called “motion capture.” With the development of inertial sensing technology, motion capture systems that are based on inertial sensing have become a research hot spot. However, the solution of motion attitude remains a challenge that restricts the rapid development of motion capture systems. In this study, a human motion capture system based on inertial sensors is developed, and the real-time movement of a human model controlled by real people’s movement is achieved. According to the features of the system of human motion capture and reappearance, a hierarchical modeling approach based on a 3D human body model is proposed. The method collects articular movement data on the basis of rigid body dynamics through a miniature sensor network, controls the human skeleton model, and reproduces human posture according to the features of human articular movement. Finally, the feasibility of the system is validated by testing of system properties via capture of continuous dynamic movement. Experiment results show that the scheme utilizes a real-time sensor network-driven human skeleton model to achieve the accurate reproduction of human motion state. The system also has good application value.
Introduction
With the development of computer science, 3D display technology has become increasingly mature; human motion reconstruction technology has broad application prospects in areas such as game, film and television design [1,2], medical science [3,4], and sports training [5,6].In the field of game design, many major sports games, such as "3Dbasketball" and "NBA, " have combined human motion capture with virtual reality technology, so objects have become truly vivid and lifelike.In the film industry, many films, such as "Avatar" and "The Adventures of Tintin, " predominantly use human motion capture and virtual reality technology.In the field of medical science, the technology is also widely used in orthopedics, surgeries, physical therapy, and other areas to help doctors fully understand patients' condition through a comparison of the motion attitude of patients before and after rehabilitation.In the field of sports training field, realtime monitoring of athletes' training state is achieved through the display of a real-time tracking interface in athletes who wear motion capture equipment, such as wearable sensor vests [7].
At present, the common motion capture systems are based on either optical or inertial sensors.In general, optical systems are used for large game production, film shooting, and other applications that require high accuracy.However, they have high demands on the environment and are also expensive.Because of their relatively short history, motion capture systems that are based on inertial sensors have yet to be fully developed.Given their low cost and close connection with the development of information technology features, however, such systems have become a new research field in human-machine interaction.
Motion capture, as the name implies, is "the reconstruction and simulation of the test object's motion behavior." In scientific jargon, the model can be repeated to test the movement of an object through acquisition of the motion data of the test object and mapping of these data onto the motion 2 International Journal of Computer Games Technology capture focuses on the interaction process between humans and computers.How to interact, the use of any medium, how to facilitate object interaction, building the interactive time, and other issues are important to consider, because these issues determine the merits of the system.A number of challenges were confronted by scholars who focused on interactive algorithms and real-time system performance.Exceptions are world-renowned companies, such as "3dSuit" and "Xsens", which have developed excellent-performing motion capture systems.Based on trade secrets and other issues, however, the core of the algorithm remains a major issue that affects not only the accuracy of the motion capture system but also its real-time properties.Basing on existing algorithms, this study analyzes in depth the data captured by sensors and establishes a data-driven algorithm model from sensors to human models.We successfully facilitate the interaction between human and model.The real-time properties of the system are also considerably improved compared with those of the traditional system.
The selection model is also closely linked with the merits of the system.The performance of different models may also be different.At present, the human body model is mainly categorized into the rod model, solid model, surface model, and multi-level model [8].The rod model selects a limited rigid segment linking with a joint that lacks fidelity; the solid model simulates the structure of the human body through simple, solid graphics that requires a large amount of calculation and has poor stability; the multi-level model includes the skeleton, muscle, and skin layers that have high complexity and require more computation; and the surface model is composed of skeleton and skin layers that are easy to realize and require a small amount of calculation [9].In this study, we propose a method based on surface model theory to construct a 3D human skeleton model.
The process and results of the experiment are described in this study.In Section 2, starting from rigid body dynamics, we analyze the structure and design the hardware system.In Section 3, the human motion data based on the polymerization mechanism is analyzed, and the attitude of the reconstruction process is described according to the algorithm.In Section 4, the experimental results are presented, and theoretical verification is conducted.In Section 5, the experiment is discussed further.
Related Work
Motion capture has a relatively long history, as reflected in the old Chinese saying "Set up a pole and see its shadow"; the meaning of this sentence is to judge the movement of the pole based on the shadow through solar light irradiation; in fact, the process is a way to capture the attitude of an object, but such a process still considerably depends on the original light imaging.With advancements in science and motion capture technology, the development from original light imaging to an optical and inertial sensing level-based system represents a major leap forward.Optical motion capture usually obtains the trajectory of human motion data through high-speed cameras that shoot continuous image sequences of the human body; the target unit is required to be equipped with the relevant features of the spot to complete the monitoring and tracking processes.A previous study [10] described optical motion capture in detail.Inertial motion capture involves human motion capture data obtained from the subject who wears limb sensor nodes on the target unit.As a result, the motion model achieves motion reconstruction.In the present study, we focus on the problem of inertial motion capture.
2.1.Attitude Algorithm.In a motion capture system based on inertial sensing, the human body structure can be simplified to a number of rigid bodies connected by a hinge joint that consists of a more rigid body structure through real-time estimates of each rigid body posture.The real-time movement of the human body is then described, so parts of the body pose estimation serve as the basis of motion capture based on inertial sensing.
Attitude estimation of the human limb is primarily achieved by placement of the sensor in the surface of the rigid body.Miniature inertial sensors are found in a threeaxis accelerometer, three-axis gyroscope, and three-axis magnetometer that can be considered as gravity vector under static or low speed; three-axis accelerometer can be used as an inclinometer to determine the relative direction of the horizontal plane.The magnetometer measures the magnetic field vector under the sensor coordinate systems, and the principle followed is similar to that of a compass, which can be used to determine the rotation around the vertical axis.The gyroscope is used to measure angular velocity, and the angle can be obtained through the angular velocity integral.Acceleration and the magnetic field are complementary to the angular velocity, so the data of accelerometer and magnetometer can be used to eliminate the drift by integration of the angular velocity.
However, the acceleration and the magnetic field strength may be disturbed in different situations.When accelerometers measure motion acceleration and gravitational acceleration, motion acceleration is negligible in high-speed situations.Ferromagnetic materials or electronic products can disturb the surrounding magnetic field and form magnetic vector bias.These two issues have a major effect on attitude estimation.
Many domestic and foreign scholars have proposed different methods to address these problems.For instance, Luinge et al. [11] drifted the angle to the horizontal only by using microgyroscopes and accelerometers but did not eliminate the rotational drift on the vertical direction.Foxlin [12] described two navigation systems based on inertial and magnetic sensors that were used for commercial purpose.Roetenberg et al. [13,14] proposed a compensation Kalman filter algorithm, in which he integrated a compensation model on the basis of the error model to offset the effect of changes in the magnetic field to the attitude estimation.However, the algorithm model calculation was complicated, and high-speed movement acceleration interference was not considered.Bachmann [15] proposed a Kalman filter algorithm based on quaternion, and he estimated the direction by using the low-frequency part of acceleration and magnetic field intensity; he also measured attitude with the angular velocity at high speed.Yun and Bachmann [16] used the QUEST algorithm to obtain a quaternion and thus estimate the attitude with acceleration and magnetic field vectors.Further integration with the angular velocity was conducted, and a linearized observation equation was used; however, a higher complexity than that of the QUEST algorithm was achieved.Later, Yun et al. used the factor quaternion algorithm (FQA) [17] to replace QUEST and, as a result, the computational complexity of the algorithm was reduced.However, the removal of the QUEST algorithm still did not address the problem on the magnetic field and the interference of acceleration.
We attempted to reduce the impact of the magnetic field and the acceleration of attitude on the basis of various postures in the reference fusion algorithm.
Motion Model Construction Method.
In human motion capture techniques, the human body model is a graphical description for the human form by abstraction of the human body to simulate or reproduce the action of the human model; with this approach, the purpose of this study can be achieved with the use of human motion tracking data acquired from the device.The goal of the inertial sensing motion capture system is to utilize motion sensors for the collection of data on the human body in order to drive the established human model and thus achieve an approximate simulation of human motion.The establishment of a human motion model that is in line with the characteristics of human behavior is therefore important for the vivid simulation of human action.
According to the data collection methods and the direction of motion analysis, the construction methods of the human model in the human motion capture system are usually different.Generally, however, the human motion model can be classified as 2D or 3D based on different spatial properties.
A 2D human model refers to the calibration of human motion and a description of the movement of the limbs.This model can be further classified into the 2D sticks and 2D regional models.
The 2D sticks model uses geometry to represent the skeletal structure of the body.For example, a straight line can be used to represent bones, whereas points represent joints.Karaulova et al. [18] developed a stratified human skeleton model with straight lines and points in a human body motion tracking system on the basis of monocular video sequences.The highly simplified description of the human body in the model makes it suitable for video-based motion capture sequences.However, the lack of representation of the human form makes the model lose its verisimilitude.The 2D regional model refers to the use of a 2D region to represent a certain part of the body in the analysis of motion capture based on image sequence.Leung and Yang [19] used the strip area to represent a part of the human body in constructing a human body model in a human body contour marking system.The 2D regional model generally applies to the extraction of the characteristics of a region in terms of human motion in graphic sequence.However, this method causes considerable information loss in the process of image processing.
3D human body model calibration refers to the human body limbs in human motion in 3D space and to the description of gestures.The model can generally be categorized into the 3D geometric model and the 3D mesh model.
The 3D geometry representation method refers to the use of some basic geometric figures to complete the construction of the human body model in 3D space.Wachter and Nagel [20] established a 3D human body model by using an elliptical cone in human motion capture based on monocular video sequences.Remondino and Roditakis [21] used lines and points to represent human body bone joints and the human skeletal system, respectively, as well as the ellipse to represent the 3D human body model in a single image or in monocular video sequence human motion reconstruction.
The 3D mesh model is defined according to the surface characteristics of the human body.It builds on the model of the human body through many facets and grids.Using this method to establish a model of the human body results in a lifelike outcome because the method can vividly describe outer human characteristics.Sminchisescu [22] proposed a construction method by using an elliptic grid for the human body model to establish a human model and facilitate motion reconstruction based on monocular video sequences.Theobalt et al. [23] designed a human-level model represented by triangular meshes to strengthen the outline of human motion capture.The goal of the inertial sensing motion capture system is to capture the attitude of each limb through inertial sensors and use the motion data to drive the human model to achieve motion tracking.Establishing a 3D model of the human body is therefore necessary.
Although the 3D mesh model can vividly describe the contour features of the muscle and skin, the muscle deformation will occur when the body is in motion, and this phenomenon increases the difficulty of graphic reconstruction.To address this situation, this study constructs a 3D human skeleton model that completely matches the human body biomechanical characteristics of the human body model.We also use kinematics theory to restrain and limit the development of the skeletal model and thus achieve accurate limb posture features during human movement.
Attitude Calculations
In this section, we describe the solution to the attitude of human motion through sensors.Initially, we select the types of sensors and consider the means of communication between hardware.Then, we calculate the attitude of a single node by combining sensor data.Finally, we calculate the attitude from a single node to the entire body according to the movement mechanism of the human skeleton.
Data Collection and Communication Network.
The inertial sensor-based human motion capture system examined in this study contains an attitude acquisition module, an information processing module, and an attitude reconstruction module.The attitude acquisition module combines the mpu-9150 (integrated accelerometer, gyroscope, and magnetometer) microsensor with nrf-51822 (BLE 4.0 ultralow-power module), mpu-9150, and nrf-51822, as can be seen in Figures 1 and 2. The module forms a small sensor network between each node.We also set two sink nodes to receive the transmission data from the subnode.The sensor network configuration is shown in Figure 3.To communicate with the host computer terminal, we use Bluetooth technology; the transmission distance is about 10 meters.
Single Node Attitude Solution.
In the process of human motion, each joint wears a sensor node, and the attitude calculation algorithm of each node consists of the following steps.
Step 1 (sensor calibration).For the gyroscope, the main work is to remove zero offset, which is to calculate the zero bias through many times' weighted average and make the initial value minus zero offset.For the accelerometer, at first, we set low-pass filter to eliminate jitter effects, and then using the weighted average filter to eliminate jitter effect further.The calibration of the magnetometer involves two aspects: hard offset and soft offset.Hard offset is mainly caused by the hard magnetic materials.It shows that the scatter diagram of random rotation magnetometer readings is a spherical shell away from the origin.The center of the spherical shell is the vector value of the hard offset.In this system, we obtain the hard offset through a large number of samplings and calculation of the minimum residual value.Soft offset is mainly caused by soft magnetic materials.It shows that the scatter diagram of random rotation magnetometer readings is an ellipsoidal shell centered at the origin.The value of the soft offset can be represented in a matrix; through matrix solution, the soft offset can be calculated.The calibration of the magnetometer is therefore achieved.To reduce the noiseinduced jitter, a low-pass filter must be set to filter the data of the accelerometer and magnetometers.
Step 2 (the angular velocity integral).We measure angular velocity information in the process of human motion by using a gyroscope.With integral operation on the angular velocity, the angle component in the process of motion can be obtained.Using a first-order high-pass filter followed by a first-order Taylor expansion method, we finally obtain initial quaternion information of the human body in the process of movement.
Step 3 (solution attitude of the accelerometer and magnetometer).We use FQA algorithm to transform the acceleration value and the magnetic field information into the attitude value.The advantages are as follows.First, the magnetometer data have an effect on the heading angle only (Yaw).Second, using the algorithm significantly reduces the effect of the magnetic field on the attitude computation.Third, the amount of calculation is considerably reduced.The FAQ is also limited, but the use of an ideal environment means no linear acceleration exists, and if it does, it is only under the effect of the magnetic field.This condition means that the FAQ results in little disturbance in linear acceleration and magnetic field interference.
We conducted a verification experiment.We placed the inertial sensor model in static state and observed the attitude calculated by the FAQ algorithm.Then, we repeat making a ferromagnetic material away and close to the sensor module and continue to observe the changes in the moment waveform.Figure 4(a) presents the attitude angle calculated by the FAQ algorithm when little magnetic interference exists.Figure 4(b) presents the attitude angle calculated by the FAQ algorithm when large magnetic interference exists.From the chart, we can see that in the case of nonlinear acceleration, the roll angle and the pitch angle calculated by the FAQ algorithm have no distortion with or without magnetic interference.Magnetic interference only has an effect on the Yaw angle.
Step 4 (sensor data fusion).We fuse the data of the sensors.In summary, in the case of linear acceleration or a large magnetic field, only the attitude information obtained through the angular velocity integral works.With the uniqueness of the representation of the direction cosine matrix considered, the opposite case sets the weight and average value for the attitude value obtained from the two preceding methods.In slow motion, the two move cooperatively; during strenuous exercise, only the angular velocity integral works.
The above process can be summarized as a complementary filter.The single node attitude solution process is shown in Figure 5.By using the attitude calculation of a single node with the above algorithm, we can obtain the attitude data of each node.In the actual process, we commonly use the rotation matrix to represent the rotation of a 3D object.Many methods can be used to express the rotation matrix.The most common ones are the Euler angle and quaternion.However, the Euler angle involves the gimbals problem, so we use the quaternion to represent the rotation matrix in the actual process.To understand the process of the experiment, we converted the quaternion to the Euler angle.Then, we choose the palm as the wearing node and observe the effect of the curve motion.
(a) We place the palm on the level direction; this is step 1.After a smooth, palm repeats the movement: vertically bends down to a certain angle and returns to the horizontal state, this process is considered as step 2. After step 2, the palm is bent vertically up to a certain angle and then is returned to the horizontal position; this is step 3.The movement process is shown in Figure 6(a), and the attitude motion curve is shown in Figure 6(b).
(b) We place the palm to be stable on the level direction; this is step 1.The palm is rotated right to a certain angle along the arm axis and then is returned to the horizontal position; this process is considered as step 2. After step 2, the palm is rotated left to a certain angle along the arm axis and then is returned to the horizontal position; this is step 3.The movement process is shown in Figure 7(a), and the attitude motion curve is shown in Figure 7(b).
(c) We place the palm on the level direction; this is step 1.
The palm is bent right to a certain angle along the arm axis and then is returned to the horizontal position; this process is considered as step 2. After step 2, the palm is bent left to a certain angle along the arm axis and then is returned to the horizontal position; this is step 3.The movement process is shown in Figure 8(a), and the attitude motion curve is shown in Figure 8(b).
With the motion curve, the designed algorithm of the experiment solves the interference problem of the magnetic field and the acceleration in the attitude computation.The attitude calculation accuracy can meet the requirements.
The Body Attitude Solution.
The cooperative motion of various body limbs is involved in the process of human motion tracking.Analysis of the human body kinematics principle indicates that the motion between various nodes involves related links.Therefore, calculating the attitude of the entire body is necessary.
Posture Initialization Calibration.
In the process of using sensor nodes to capture human motion, three coordinate transformations are involved: inertial sensor coordinates, geomagnetic coordinates, and human skeleton coordinates.The inertial sensor coordinate is set as the calibrated coordinate system for the chip itself.The establishment of geomagnetic coordinates is due to the fact that the data on each axle of the magnetometer determine the magnetic field distribution detected by each axis.However, according to the distribution of the magnetic field of earth, the world coordinate system can be defined as the geomagnetic coordinates (the origin is the center of the earth, the -axis is along the equatorial plane pointing outward, the -axis is along the equatorial plane with a right orientation, and the -axis points to the north).The human skeleton coordinates will be defined in the following article.During the experiment, we must initialize the pose calibration every time in the testing process; that is, calibrate the motion of the joint body of Step 1 Step 2 Step 3 the experimental individual and obtain the rotation matrix of any part of the human skeletal coordinate system from the sensor coordinates of this bone because of the difference between the wearing position and the individual.We take the limb system as an example.According to the sensor node programming, the lower limb needs three sensor nodes, which are foot , leg , and thigh , respectively, to conduct the initialization calibration.Figure 9 shows the lower limb structure diagram.The calibration process is as follows.
X-axis Y-axis Z-axis
(1) Lower limb toward the lateral stretch: with the static action at this time, the acceleration of gravities and can be obtained from a lower leg sensor node and foot node, and then the rotation matrix of of the leg and of of the foot can be calibrated: . ( On the basis of step 1, turn the lower limbs forward about 90 ∘ from rear of your body, then turn about 90 ∘ backward from front of your body.From the dynamic posture, in the lower limb movement process, we can obtain the output angular velocity readings and and the reverse readings and , which correspond to the thigh, leg, and foot sensor nodes.Therefore, the rotation matrix of of the foot and of of the leg can be calibrated as follows: ( The rotation matrix for the foot and the leg can be further obtained as follows: (2) The lower limbs shall be upright initially, then extend them right ahead about 90 ∘ , with feet being perpendicular to the horizontal plane.We then obtain the thigh angular International Journal of Computer Games Technology velocity , so the thigh rotation matrix of can be calibrated as follows: (3) The lower limbs shall be upright initially, then lift the lower limbs backward at a constant speed until they are nearly perpendicular to the trunk, that is, return to the state in Step (1).According to this position, we can obtain the acceleration of gravity in the thigh when it is in the upright position.When the leg is raised to the front, we can obtain the acceleration of gravity in the thigh at the moment, which in turn obtains the thigh rotation matrix of at the moment: At the same time, we obtain the rotation matrix :
Real-Time Calibration Attitude in the Human Motion
Process.obtain an accurate position of the human skeleton model with the use of inertial sensors to measure the data of human motion, we need to calibrate the relative position between the attitude measurement unit and the human body movement between moving limbs.The calibration process aims to determine the rotation matrix between the inertial sensor coordinate system and the measured body coordinate system and between two movements of the limbs.From the perspective of rigid body dynamics, the two connected segment bones are considered a rigid structure.For example, the No- − 1 and No- bones are connected, and then the attitude solution is determined.The rigid structure of the connected skeleton is shown in Figure 10.Suppose −1 is the rotation matrix of the sensor coordinate system that corresponds to the − 1 bones compared with the geomagnetic coordinates. −1 is the rotation matrix of the − 1 bones with respect to their layout of the sensor coordinate system. −1 can be calculated through the initial data measured by the sensor, and −1 can be obtained in the initialization calibration process, which can calculate the attitude matrix −1 of the bone in its coordinates: The rotation matrix From formula (7) and ( 8), we can obtain Suppose −1 is the transformation matrix from No-−1 bone to No- bone of the rigid body.From the coordinate nature of the conversion matrix, we can obtain the following: No-m bone
X-axis Y-axis Z-axis
No-m − 1 bone where −1 is the translation matrix from No- − 1 bone to No- bone, and each element of −1 is the true 3D data of the No- bone.In this study, the translation vector from the sensor coordinate system to the body coordinate system, as well as the coordinate translational component from the geomagnetic to the sensor node coordinate, is the upward component obtained from the acceleration value integration.
The transformation matrix multiplication process proposed by Craig [24] indicates that the transformation matrix in the No- bone corresponding to the first bone that can be calculated: According to the position of the first sensor node, the transformation matrix of the No- bone can be calculated.In the experiment, however, we can measure and calculate the spatial coordinates , , and of the pelvic area in the model: where denotes the vector in body and is the corresponding limb length; the recurrence relations are as follows: The vector of each limb can be obtained by order recursion.Therefore, the attitude and coordinate of the No- bone can be computed according to the root node coordinates; virtual human motion reappearance can finally be implemented.We can obtain the vector of each limb, followed by recursion.Therefore, the attitude and coordinate of the No- bone can be calculated according to the coordinate of the root node.Finally, reproducing the motion attitude of the virtual human is realized.
Model Construction
The human body is a complex life system, and its exercise system consists of more than 200 bones, more than 600 muscles, and countless nerves commanding these bones and muscles; in this regard, specifying each piece of bone in the experiment is impossible [25,26].Basing on the idea of the surface model and in view of the human body posture data collected with the microsensor, we propose a method to construct a 3D human skeleton model.
Hierarchical Human Body Model Based on Constraints.
In the literature [27], the automatic estimation of the hierarchical topological structure human skeleton model was described; a detailed induction to the motion of the skeleton on the basis of the biomechanics principle was performed.In the present study, we took the topological structure of the model and divided the human body into 16 bones and 14 joints.Bone segments are connected by joints.We defined the relationship between the joints as the relationship between parent and son.Parent node movement will lead to relevant subnode movement.In the definition of the hierarchical structure in the human body, the pelvic node is the center of gravity of the human body and is also the reference node of the entire body.The pelvic node extends upward and downward and traverses the entire skeleton.The skeleton 11(a).Each joint of the human body presents angle constraints, so Baerlocher [28] presented a clear definition: the joints in different bones of the body are categorized as rotary, hinge, and universal joints.Rotary joints have only one direction of freedom, and their rotation range is limited to a certain extent.Hinge joints have one degree of freedom, and they can twist and varicose around other joints.Universal joints have two degrees of freedom, and they have a structure similar to that of a ball groove.They can move around the ball groove structure.Table 1 shows the types and scope of activities of human body joints.
The Coordinate System of the Human Skeleton.
This study presents a coordinate system of the human skeleton based on joint tree structure through analysis of the human skeletal hierarchy and the principle of 3D animation structure.The pelvic joints are taken as the reference points, and the coordinate system of other skeleton is calibrated according to the forward kinematics principle.The skeletal coordinate system after calibration is shown in Figure 11(b).
The 3D Models of Motion
Capture.Based on the above relationship, we designed a skeletal motion model consistent with human motion by loading a human skeleton file based on Microsoft Visual Studio2012 platform and OpenGL development library environment.Furthermore, we can adjust the skeleton size and save the motion data of each bone.Finally, we designed the terrain, so that the effect is realistic.Figure 12 describes the 3D human skeletal motion tracking system.
Results and Discussion
To validate the 3D human model that simulates real human motion, we compared the key frame curve extracted from the real human motion video image [29] with a gesture curve when the skeleton simulates relative movement.
In the experiment, we took the up limb system as an example and shot the video sequence of continuous movement in the process of bending of human arms.Through image processing, we extracted the key frames curve of the elbow bending angle during the arm bending process and compared it with the elbow bend angle data curve of the real human body simulating the corresponding action.According to their fitting degree, we can judge the accuracy of the model.
We designed an arm motion tracking verification platform based on the algorithm described in this study.In the course of the experiment, we dressed the sensor node in the elbow and tracked the real-time motion of the elbow.The image sequence is based on the plane, so we tracked the bending angle of the arm in the direction of the vertical plane and recorded the motion process with a high-speed video camera.By contrast, in the visual motion video images and 3D motion effects, we can evaluate the motion tracking effect.By extracting the blending angle through the arm flexion extension in the video sequence and by comparing the output angle of the sensors, we can evaluate the effect of the system according to the data point.Figure 13 shows the comparison of the effect of motion tracking and the video image, and Figure 14 shows the comparison of the extracted output from the sequence of video frames and the actual output angle of the sensors.Figure 14 depicts the fact that two curve fittings can basically meet the requirements; the angle error is less than 1 degree.Therefore, the model has some reference value.To evaluate the performance of the entire system, actual human testing was performed.We tested the motion tracking effect vis-a-vis the real motion of the human body by placement of sensor nodes on all specified parts of the human body.We took a photo on the effect of motion tracking in the experiment with a high-speed camera and conducted some real-time motion capture.
(1) With the arm motion process taken as an example, when the system has stabilized after wearing during the experiment, the right arm does the movement, as shown in Figure 15(a).The system can accurately track the motion trajectories of the arm.Figure 15(b) shows the system dynamic trajectory tracking of the arm.(2) In the preceding process, the motion capture of a single arm is described.However, we need multiple motion joint investigations to assess the reliability of the entire system.In this step, we conducted the motion capture test on the cooperative motion of two arms.With the waving motion in life taken as an example, Figure 16 describes the capture process in a real human hand waving action.(3) Finally, with the entire human body movement taken as an example, we captured the motion from still standing to turning and kicking.With human soft tissue jitter and other relevant errors considered, the model can accurately track the real-time motion of the human body through the dynamic tracking of actual human bodies.
Conclusions
In this paper, starting from the movement mechanism of the human body skeleton, we constructed a new attitude calculation method based on inertial sensing theory.We also verified the attitude calculation method through the movement of hand joints.Then, we established a 3D human motion model according to hierarchical modeling theory based on computer graphics.Finally, we investigated the performance of the system by taking photos of the effect of motion capture with a high-speed camera.The experimental results show that the scheme can effectively realize the realtime motion tracking of the 3D model of the human body.However, the system still has some limitations even if it can capture human motion effectively.In the process of communication, the motion data are packed together and transmitted to the host computer, but time loss inevitably occurs in the processes of transmission and attitude calculation.Our calculation demonstrates that the transmission of the entire data package to the host computer takes about 19 ms, but the refresh time for the system interface has a time delay of about 1 ms.Therefore, the refresh time of the entire system needs about 20 ms.In the real world, however, if the refresh rate is higher than 40 ms, the human eye can easily detect the delay.The 20 ms delay is thus acceptable for the human eye despite some hysteresis.Of course, the accuracy of the sensor also affects the performance of the system.For example, the sensors of motion capture systems, such as "3Dsuit" and "Xsens MVN, " are of high precision, high sensitivity, and fast reaction speed, but these systems are relatively expensive.Taking cost into consideration, our experiment selected the mpu-9150 sensor whose sensibility has a certain gap with that of the special inertial sensor.When the sensor is in high-speed motion, the response will be somewhat hysteretic, but this does not happen if the sensor is in low-speed motion.Therefore, the key to addressing such a problem is to choose sensitive inertial sensors and to further improve the algorithm.However, the model selection also needs further improvement.The capture effect of the system can meet the development requirements of some small interactive games.
Figure 3 :
Figure 3: Configuration of the sensor network.
Figure 4 :Figure 5 :
Figure 4: (a) Attitude calculated by the FAQ algorithm when little magnetic interference exists in the stationary state.(b) Attitude calculated by the FAQ algorithm when large magnetic interference exists in the stationary state.
Figure 6 :
Figure 6: (a) The movement process.(b) The attitude curve when the palm is bent vertically down and up.
Figure 7 :
Figure 7: (a) The movement process.(b) The attitude curve when the palm is rotated left and right along the arm axis.
Figure 8 :
Figure 8: (a) The movement process.(b) The attitude curve when the palm is bent left and right along the arm axis.
Figure 10 :
Figure 10: Rigid structure of the connected skeleton.
Figure 11 :
Figure 11: (a) Tree structure of the human hierarchy.(b) Coordinate system of the human skeleton.
Figure 16 :
Figure 16: Tracking effect when the human body does waving movement.
Figure 17(a) shows the real human motion, and Figure 17(b) shows a set of the motion capture effects.
Figure 17 :
Figure 17: (a) Real human movement when a human does the motion from turning to kicking.(b) Tracking effect when a human does the motion from turning to kicking.
Table 1 :
Description of human body joints. | 8,612.8 | 2015-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Equal-time kinetic equations in a rotational field
We investigate quantum kinetic theory for a massive fermion system under a rotational field. From the Dirac equation in curved space we derive the complete set of kinetic equations for the spin components of the covariant and equal-time Wigner functions. While the particles are no longer on a mass shell in general case due to the rotation-spin coupling, there are always only two independent components, which can be taken as the number and spin densities. With the help from the off-shell constraint we obtain the closed transport equations for the two independent components in classical limit and at quantum level. The classical rotation-orbital coupling controls the dynamical evolution of the number density, but the quantum rotation-spin coupling explicitly changes the spin density.
We investigate quantum kinetic theory for a massive fermion system under a rotational field. From the Dirac equation in curved space we derive the complete set of kinetic equations for the spin components of the covariant and equal-time Wigner functions. While the particles are no longer on a mass shell in general case due to the rotation-spin coupling, there are always only two independent components, which can be taken as the number and spin densities. With the help from the off-shell constraint we obtain the closed transport equations for the two independent components in classical limit and at quantum level. The classical rotation-orbital coupling controls the dynamical evolution of the number density, but the quantum rotation-spin coupling explicitly changes the spin density.
I. INDRODUCTION
From the lattice simulations [1] of quantum chromodynamics (QCD), it is widely accepted that there exists a phase transition from a hadron gas to a quark-gluon plasma at high temperature. The experimental efforts of high energy nuclear collisions at Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) have provided many sensitive signatures [2] of the new state of matter created in the early stage of the collisions. Considering the very short lifetime of the collision zone, the highly excited quark-gluon system spends a considerable fraction of its life in a non-equilibrium state, and the dynamical tool to treat dissipative processes in nuclear collisions and the approach to hydrodynamic evolution is in principle quantum transport theory. In classical transport theory, all the physical currents are connected with the number distribution function f . The quantum mechanical analogue of f is the Wigner function W which is a 4 × 4 matrix in spin space [3]. A relativistic and gauge covariant kinetic theory for quarks and gluons has been derived, both in a classical framework [4] and as a quantum kinetic theory [5] based on the Wigner functions defined in covariant [3] and equal-time [6] phase spaces. Many applications to the quark-gluon plasma, such as linear color response, color correlations and collective plasma oscillations [7,8], have been discussed in this framework using a semi-classical expansion of the quantum transport theory.
The study on QCD phase transitions at finite temperature and density is recently extended to including electromagnetic fields, since the strongest fields in nature is believed to be generated in nuclear collisions at RHIC and LHC energies [9,10]. Under such strong electromagnetic fields some anomalous phenomena for massless quarks, such as chiral magnetic effect [11,12], are experimentally discovered in non-central nuclear collisions [13,14]. Since the created fields drop down very fast and appear only in the very beginning of the collisions, most of the theoretical investigations is in the frame of quantum kinetic theory [15][16][17][18][19][20][21]. Apart from the electromagnetic fields, the strongest rotational field in nature can also be produced in nuclear collisions. The maximum magnitude is expected to be about 0.1m π in noncentral Au+Au collisions at RHIC energy [22,23]. A direct consequence of such strong rotation is the polarization of final state hadrons [24] through spin-orbital coupling at quark level. Different from the electromagnetic fields which rapidly decay in time, the angular momentum conservation during the evolution of the collision will lead to a more visible rotational effect on the final state. The other advantage of rotational effect is that it becomes more strong in intermediate nuclear collisions where there might be new physics related to high baryon density. There have been a lot of theoretical investigations on the rotational effect and spin polarization in high energy nuclear collisions [25][26][27][28][29][30][31][32][33]. In this paper, we aim to set up a quantum kinetic theory in a rotational field.
There are two editions for the quantum kinetic theory in the frame of Wigner function. One is the covariant version [3] for the Wigner function W (x, p) defined in 8-dimensional phase space, and the other is the equal-time version [6] for the Wigner function W 0 (x, p) in 7-dimensional phase space. The advantage of the former is the explicit covariance under a Lorentz transformation, and the latter is directly related to the physical distributions defined in equal-time phase space. Of course, W 0 is not manifestly Lorentz covariant. In both the covariant and equal-time formalisms, an important aspect of the kinetic theory is that, the complex kinetic equation can be split up into a constraint and a transport equation, where the former is a quantum extension of the classical mass-shell condition, and the latter is a generalization of the Vlasov-Boltzmann equation. The complementarity of these two ingredients is essential for a physical understanding of quantum kinetic theory [34][35][36]. In this paper, we focus on a complete description of the equal-time Wigner function in a rotational field, by considering the coupled constraint and transport equations. We will derive the classical and quantum transport equations for the two independent spin components, namely the number density and spin density, by using the semi-classical expansion of the kinetic equations.
The vortical field ω of a system can be either generated self-consistently by the curl of the medium velocity ω = ∇×v or considered as an external field, depending on the particles we describe in kinetic equations. For light quarks which are constituents of the medium, the quark vorticity is just the rotation of the medium, but for heavy flavors which are considered as a probe of the medium, the vorticity in kinetic equations can be treated as an external field. We will consider in this paper the latter and neglect the collision terms among particles, in order to focus on the coupling between particles and the external rotational field. This version of the mean field approximation, which treats the particles quantum mechanically, but uses the classical approximation for the field, is widely used in electrodynamic kinetic theory [6,[15][16][17][18][19][20][21].
The paper is organized as follows. In section II we obtain in curved space the Dirac equation and its non-relativistic limit under a rotational field which is the basis to derive kinetic equations in the frame of Wigner function. We then calculate the kinetic equations for the covariant Wigner function W (x, p) and its spin components in section III. By taking the energy integration of the covariant kinetic equations we obtain the constraint and transport equations for the spin components of the equal-time Wigner function W 0 (x, p) in section IV. In section V we semi-classically solve the equal-time equations. We will focus on the coupling between the particle spin and the rotational field. We summarize the work in section VI.
II. DIRAC AND SCHRÖDINGER EQUATION IN ROTATIONAL FIELD
The starting point to derive a relativistic or non-relativistic kinetic theory for quarks in Wigner function formalism is the Dirac equation or Schrödinger equation. The system under a rotational field can be equivalently regarded as a system at rest in a rotating frame, as has been discussed in Ref. [37] where the rotation of a quark system enhances the chiral symmetry restoration strongly and Ref. [38] where the covariant kinetic theory for chiral fermions in external electromagnetic field is extended to curved space systematically. To avoid confusion, we use in the following the indices {µ, ν, λ, σ} and {α, β, γ, δ} to separately describe Lorentz vectors and tensors in curved and flat space, known respectively as coordinate and non-coordinate basis [39].
The Lagrangian density for fermions under mean-field approximation in non-coordinate basis has the following form where √ −g is related to the coordinate we choose. Considering that, in coordinate basis the tangent space T p M and cotangent space T * p M are expanded in ∂ µ and dx µ , the coordinate transformation between the two spaces can be expressed asê where {ê α } is required to be orthonormal with respect to g (= g µν dx µ ⊗ dx ν ), which means the relation g(ê α ,ê β ) = e µ α e ν β g µν = η αβ or inversely g µν = e α µ e β ν η αβ . With the requirement of local Lorentz invariance, the Lagrangian density in coordinate basis becomes with the affine connection Γ α β µ = η βγ e α ν (∂ µ e ν γ + e σ γ Γ ν µσ ). We now consider a system under rotation with a constant vorticity denoted by ω. The local velocity of this rotating frame is given by v = ω × x, and the space-time metric is written as where we have introduced a specific tetrad [37], and v = ω × r is the velocity of the coordinate transformation. It is worth noticing that, the choice of the tetrad is not unique, since the degrees of freedom of a n-dimensional metric is (n + 1)n/2 and of the tetrad n 2 . After plunging the chosen tetrad into the Lagrangian, we obtain . Under the choice of the space-time metric (4), the higher orders of the rotational field, namely the terms ∼ ω 2 and ω 3 , vanish automatically, and only the linear term ∼ ω appears in the Lagrangian density. However, since the velocity of the coordinate transformation v = ω × r has been taken in the non-relativistic form, the Lagrangian (6) is valid only for small ω, with |ωx| ≪ 1. From the structure of the Lagrangian, the rotational field ω serves as a chemical potential coupled to the total angular momentum J = x ×p + s which is conserved during the evolution of the system.
With the known Lagrangian density it is easy to derive the Dirac equation for quarks in the rotational field, The corresponding Schrödinger equation can be obtained by considering the non-relativistic limit of the Dirac equation in a standard way. Considering the stationary solution of the Dirac equation, ψ(x) = ψ(x)e −iEt , the stationary wave function ψ(x) satisfies the equation To move to the familiar non-relativistic expression, we separate the quark energy into the mass and the kinetic energy, E = m + ǫ, write the stationary wave function in a two-component form, ψ(x) = (φ(x), χ(x)) T , and take the Pauli- with the total angular momentum J = x ×p + σ/2 in its two-dimensional form. From the second equation, the small component χ can be expressed as to the first order in 1/m. Substituting it into the first equation leads to the Schrödinger equation for the large component φ, which is the same as obtained by using non-relativistic Galilean transformation [40].
To the second order in 1/m, the small component χ becomes Taking the commutation relations between x i andp j and between σ i and σ j and employing the above Schrödinger equation to the first order in 1/m, we obtain the Scgrödinger equation to the second order, the only relativistic correction is to the kinetic energy.
III. COVARIANT KINETIC EQUATIONS
The core ingredient to describe the transport phenomena of a non-equilibrium system is the distribution function in phase space. Wigner function is the quantum analogue to the classical distribution function, and has been widely adopted in the investigation of quantum transport phenomena, such as spin polarization [20,27] in heavy ion collisions and pair creation in QED systems [6,34,41]. The covariant Wigner function W (x, p) for fermions is defined as the ensemble average of the Wigner operator in vacuum state, and the Wigner operator is the four-dimensional Fourier transform of the covariant density matrix [3], with x ± = x ± y/2, where the gauge link is defined as with the gauge field A µ . Since we focus in this paper on the rotational effect, we neglect the gauge field and in turn the gauge link.
The covariant kinetic equation in the Wigner function formalism is derived by calculating the first-order derivatives of the density matrix and using the Dirac equations for the fields ψ andψ. After a straightforward calculation, we obtain the equation of motion for the Wigner function in phase space which is equivalent to the equation of motion for the field in coordinate space, where the extended momentum and derivative operators in phase space are defined as with the orbital angular momentum l = x × p. Note that, the rotational effect changes only the particle energy from p 0 to p 0 + π 0 and time derivative from ∂ t to d t , and the vector momentum p and space derivative ∇ are not modified.
In comparison with nuclear collisions at extremely high energy, the rotational effect will become more important in heavy ion collisions at intermediate energy where the baryon density becomes high. Aiming at a kinetic theory in such case, we have included here the baryon chemical potential µ B which shifts the particle energy. In order to semi-classically solve the kinetic equations below, we have displayed the −dependence explicitly. It is clear that, the highest order quantum correction in the operators comes from the term ∼ 2 . Very different from the classical distribution which is a scalar function, the Wigner function in quantum case is a 4 × 4 matrix in spin space, it includes in general case 16 independent components. Because of their characteristic properties under Lorentz transformations, it is convenient to choose the 16 matrices 1, iγ 5 , γ µ , γ µ γ 5 , σ µν /2 as the basis for an expansion of the Wigner function in spin space, All the components Γ α = {F, P, V µ , A µ , S µν } are real functions, since the basis elements transform under hermitian conjugation like the Wigner function itself, W + = γ 0 W γ 0 . They can thus be interpreted as phase-space densities. Their physical meaning becomes clear in the equal-time formalism which will be discussed below. The expansion (18) decomposes the kinetic equation into 5 coupled Lorentz covariant equations for the 5 spinor components Γ α . Since these components are real and the operators P µ and D µ are self-adjoint, one can separate the real and imaginary parts of these 5 complex equations, and These equations are firstly derived by Vasak, Gyulassy and Eltz [42] for a QED system and are recently used to describe the chiral magnetic effect of a quark system in electromagnetic fields [15][16][17][18][19][20]. These equations can further be divided into two groups. Those equations with terms multiplied by p 0 hidden in P 0 form the constraint group which links the Wigner function W and its first order energy moment p 0 W , and the others form the transport group which describes the evolution of W in phase space. These will be discussed in more detail in the equal-time formalism. Similar to the Klein-Gordon equation for the wave function ψ(x) which describes the plane-wave solution of the Dirac equation satisfying the on-shell condition p 2 −m 2 = 0, we can obtain the phase-space version of the Klein-Gordon equation for the Wigner function W (x, p) by acting the kinetic equation (16) with the operator γ µ K µ + /2γ 5 γ µ ω µ +m, which leads to We will see in the following that, this equation controls whether the particle is on the mass shell.
IV. EQUAL-TIME KINETIC EQUATIONS
From the definition (14), it is easy to see that the covariant Wigner function at a fixed time is related to the fields at all times. Therefore, the covariant kinetic equations in general case cannot be solved as an initial value problem, and we should go to the equal-time formalism of the kinetic theory, by doing energy integration of the covariant equations [34]. The equal-time Wigner function is defined as with x ± = x ± y/2. It is clear that, the equal-time Wigner function is not Lorentz covariant, and the two Wigner functions are related to each other through the energy integration, This indicates that, the equal-time Wigner function is the zeroth order energy moment of the covariant Wigner function. This is the reason why we label the equal-time Wigner function using the subscript 0. In general case, particles moving in a medium are not on the mass shell, and the covariant Wigner function is equivalent to the collection of all the energy moments [36] W n (x, p) = dp 0 p n 0 W (x, p)γ 0 with n = 0, 1, 2, .... Only in the quasi-particle approximation where particles are on the shell and the covariant Wigner function satisfies the on-shell condition W (x, p)(p 2 − m 2 ) = 0, the two Wigner functions are equivalent to each other. Similar to the covariant scenario, the equal-time Wigner function is decomposed into 8 components in spin space, the equal-time components f i (x, p) and g i (x, p) (i = 0, 1, 2, 3) are the zeroth order energy moments of the corresponding covariant components Γ α (x, p).
By taking p 0 −integration of the covariant equations (19) and (20), one obtains two groups of equal-time kinetic equations, and 2 dp 0 The kinetic equations (26) and (27) form, respectively, the transport and constraint groups. The former is an extension of the Boltzmann equation, it describes the phase-space evolution of the 8 equal-time distributions in a rotational field. The latter is an extension of the on-shell condition f (x, p)(p 2 − m 2 ) = 0 associated to the Boltzmann equation. Since particles are generally not on the mass shell, the off-shell constraints cannot be neglected arbitrarily, and only the two groups together form a complete description of the quantum system. This is firstly pointed out by Zhuang and Heinz for a QED system [34][35][36].
The constraints play a tremendous role in calculating some of the physical distributions. Let's consider the energy density as an example. From the energy-moment tensor, the energy distribution in phase space is the first-order energy moment of the covariant component V 0 , Without the constraints (27) which links the zeroth-and first-order energy moments, there is no way to calculate the energy distribution in the frame of kinetic theory. With the help from the constraints (27), ǫ is a combination of the equal-time spin components, where the components f 1 , f 3 , g 0 and g 1 are controlled by the transport equations (26).
V. SEMI-CLASSICAL EXPANSION
The equal-time kinetic equations can directly be solved for some non-perturbative problems like pair production in electromagnetic fields [6,34,41]. As a systematical way the semi-classical expansion is widely used in covariant [15][16][17] and equal-time [6,[34][35][36]43] kinetic theories for massive [19] and massless [15][16][17] fermions. We discuss in this section the semi-classical expansion of the equal-time kinetic equations (26) and (27). Considering the fact that, the rotational field appears only up to the second order in , the kinetic equations at zeroth, first and second order of contain already all the quantum effects from the rotation on the phase-space distributions.
We take the expansion for the covariant and equal-time wigner functions W (x, p) and W 0 (x, p) and the operator Π µ , Note that the other operator D µ contains only the classical part. We first consider the Klein-Gordon equation (21) at the zeroth order in , This is just the on-shell condition for classical particles with energy, with ǫ p = m 2 + p 2 . Different from the kinetic theory for QED where the electromagnetic fields do not affect the free-particle shell [15][16][17][18][19][20]34], the rotational field here changes the shell from ǫ p to E p due to the interaction of the orbital angular momentum with the rotational field. The reason is clear: the electromagnetic fields E and B are derivatives of the gauge potential but ω appears directly in the effective gauge potential ω × x [40]. The derivative leads to the appearance of E and B at least at the first order in , but ω starts to contribute at the zeroth order. Considering the two elementary solutions of the classical Wigner function, corresponding to the positive and negative energies, the constraint equations (27) reduce the number of independent spin components from 8 to 2. The independent components can be chosen to be f 0 and g 0 , and the others can be expressed in terms of them explicitly, It is now the point to understand the physics of the spin components at quasi-particle level. Expressing the charge current and total angular momentum tensor in terms of the equal-time Wigner function, it is clear that the independent components f 0 and g 0 are, respectively, the particle number density and spin density, and g 1 is the number current density [6,34]. Taking the classical relation f 1 = p/|p| · g 0 for massless fermions, f 1 can be interpreted as the helicity density. The components f 3 and f 2 describe the contribution from spontaneous chiral and U A (1) symmetry breaking of the system to the particle mass [43]. From the non-relativistic limit g 3 → g 0 and the comparison of the term −m/(2m)σ · ω in the Schrödinger equation (11) in rotational field for particles with effective charge m with the term −e/(2m)σ · B in the Schrödinger equation in QED for particles with charge e, g 3 which is known as the magnetic moment density [6,44] in electromagnetic fields can be understood as the rotational moment density. Considering the classical relation g 2 = p × g 0 /m, g 2 describes the spin property in the direction perpendicular to the particle momentum. Using the above classical relations, the energy density in quasi-particle approximation is simply expressed in terms of the number distributions with positive and negative energy, Since any derivative is multiplied by a factor of , the classical limit of the transport equations (26) cannot describe the phase-space evolution of the classical components but shows again some of the relations appeared already in the classical constraints (35). To describe the dynamical evolution of the equal-time Wigner function, we should go to the first order of the transport equations (26), Eliminating the first-order components f The two equations are both in the Vlasov from. The particle velocity appeared in the free-streaming terms is modified by the rotation induced linear velocity x × ω, and the classical part of the rotational potential −ω · l in the Dirac equation leads to a mean-field force (Coriolis force) −∇(−ω · l) = −ω × p. For the spin density g 0 , there is an extra term ω × g 0 indicating the spin-rotation interaction, similar to the term B × g 0 in spinor QED. From the transport equations we obtain the equations of motion of the system, Considering positive energy, the total force acting on the particles contains both the Coriolis force and centrifugal force. In order to investigate spin induced anomalous phenomena in a rotational field, one needs to go beyond the classical limit and derive quantum transport equations. To this end, we consider the Klein-Gordon equation (21) again to see if quantum particles are still on a mass shell . At the first order in , the whole operator acting on the Wigner function becomes which is γ-matrix dependent. Therefore, there is no longer a common mass shell for all the spin components. To confirm this conclusion, we try the quasi-particle solution of the first-order constraint equations with an on-shell condition W (1) (x, p)(p 2 0 − E 2 p ) = 0. Note that, if the quasi-particle E p does exist, it should be different from the classical energy E p due to the modification by quantum fluctuations. However, the procedure fails. Massive particles cannot be on the shell when quantum effect is included. The case here is very different from the chiral limit where massless particles are always on a shell at any order of [17]. The second procedure we try is the spin-dependent on-shell condition Γ (1) α (x, p)(p 2 0 − E 2 pα ) = 0. This procedure fails again. Neither a common on-shell nor a componentdependent on-shell can be the solution of the constraint equations for massive fermions [19]. The quantum effects in a general kinetic theory are essentially reflected in two aspects, one is the spin, and the other is the off-shell constraint.
Without the on-shell condition, the constraint equations (27) at the first order in becomes with the energy shifts To close the equal-time constraint equations (42) which are related to the covariant components through the energy shifts (43), we need to consider the semi-classical expansion of the covariant kinetic equations (19) and (20). At classical level, the vector component is proportional to Π (0) µ , and both the vector and axial-vector are on the mass shell, where f (x, p) and g µ (x, p) are arbitrary Lorentz scalar and vector distributions. After a straightforward but a little bit tedious algebra, the vector and axial-vector at first order in can be decomposed into where δ ′ means the derivative of the δ-function.
Taking together the first order transport and constraint equations (37) and (42) for the equal-time components and (45) for the covariant components, we determine uniquely the energy shifts 0 , There are here three kinds of quantum corrections. The first one is a direct analogy to the classical relations shown in (35), by simply replacing the classical components f 0 . The second one comes from the derivative of the classical components, remembering that a derivative in kinetic equations is always accompanied by a factor of . The third correction is from the interaction with the external field which appears only in the rotational moment g 3 .
The dynamical evolution of the equal-time Wigner function W 0 (x, p) at the first order in is controlled by the transport equations (26) at the second order in , By eliminating the second-order components and taking into account the classical and first-order kinetic equations (35), (37) and (42), we obtain finally the transport equations for the two independent quantum distribution functions, namely the number density f While the number density satisfies the same transport equation as the classical one, the coupling between the two independent components leads to a new term on the right-hand side of the quantum transport equation for the spin density.
Following the way we used to derive transport equations in classical case and to the first order in , it is not a problem to obtain transport equations for the second-order components of the Wigner function. As has been mentioned above, the rotational field appears only up to the second order of in the kinetic equations, there should be no more new information when going beyond the second order.
VI. SUMMARY AND OUTLOOK
We investigated the quantum kinetic theory for a massive fermion system under a rotational field in Wigner function formalism. We derived the two groups of kinetic equations in covariant and equal-time versions, one is the constraint group which describes the off-shell effect in quantum case, and the other is the transport group which is the quantum analogy to the classical Boltzmann equation. For the structure of a quantum kinetic theory, the off-shell constraint is essentially important. It provides the physical interpretation for all the equal-time spin components, reduces the number of independent distribution functions, and closes the transport equations for the number density and spin density at classical level and quantum level.
The interaction with the external rotational field through total angular momentum changes significantly the transport properties of the particles. The classical rotation-orbital coupling controls the dynamical evolution of the number distribution. It adds a linear velocity x × ω to the particle velocity, and the induced Coriolis force p × ω behaves as a mean field force acting on the particles. Apart from the classical coupling, the quantum rotation-spin coupling changes the spin distribution but does not affect the number distribution. While the two distributions are independent in classical limit, the number density influences the spin density at quantum level.
There are still some questions we need to discuss in the future. One is the application of the obtained transport equations for the number and spin distributions. Since an equal-time transport equation can be solved as an initial value problem, the two transport equations can be used to describe the evolution of heavy quarks in high energy nuclear collisions to see the rotational effect on their propagation in hot medium. The collision terms should be included in a complete kinetic theory. The collisions among particles control the approaching of a system from nonequilibrium to equilibrium and will bring in the self-generated vorticity. The finite size effect is also important for a system under rotation. For a constant rotation, to guarantee the law of causality, the size of the system is under the constraint of R max ω ≤ 1. This means that, a rotational system should be finite, and therefore, one should consider the finite size effect on the Wigner function. | 6,944 | 2021-01-19T00:00:00.000 | [
"Physics"
] |
Higher-order topological insulators, topological pumps and the quantum Hall effect in high dimensions
Topological insulators are materials with spectral bands associated with an integer-valued index, manifesting through quantized bulk phenomena and robust boundary effects. In this Rapid Communication, we demonstrate that higher-order topological insulators are descendants from a high-dimensional chiral semimetal. Specifically, we apply dimensional reduction to an ancestor four-dimensional Chern insulator, and obtain two-dimensional (2D) second-order topological insulators when the former becomes chiral. Correspondingly, we derive the quantized charge accumulation at the corners of the 2D descendants and relate it to the topological index—the second Chern number—of the ancestor model. Our results provide a clear connection between the boundary states of higher-order topological insulators and topological pumps—the latter being dynamical realizations of the quantum Hall effect in high dimensions.
A paradigmatic example of a TI is the Chern insulator (CI), appearing in even dimensions. CIs exhibit quantized bulk transport responses [18,19] proportional to the topological indices (Chern numbers) characterizing their spectra. Interestingly, using dimensional reduction, a CI in d dimensions is mapped to a family of models in d − m dimensions, dubbed the "descendant pump family". A periodic and adiabatic scan over the descendant pump family similarly results in quantized bulk transport that is proportional to the Chern numbers of the ancestor CI. Archetypical examples of such mapping are the [two-dimensional to one-dimensional (2D → 1D)] reduction of a 2D CI to Thouless's 1D topological pump [20][21][22][23][24] and the [four-dimensional (4D) → 2D] reduction of a 4D CI to 2D topological pumps [18,25,26], see Table I co-dimension h cross the gap, i.e., states localized in h dimensions but extended in the remaining. Crucially, under dimensional reduction, the boundary states of the CI are mapped to the boundaries of the descendant pump family. For example, the Chern numbers of a 4D CI imply boundary states with codimensions h = 1 and h = 2 that, after dimensional reduction, are mapped to the edges and corners of the descendant 2D pump family [26].
Although CIs and topological pumps share equivalent Chern numbers, a relationship between topological indices in different classes and dimensions is obtained by imposing symmetry constraints when performing dimensional reduction. For example, the [4D → three-dimensional (3D)] reduction of a 4D CI yields 3D Z 2 TIs when time-reversal (T.R.) symmetry is imposed [27], see Table I. This allows for the derivation of a Z 2 index, characterizing the descendant 3D family, directly from the second Chern number of the ancestor 4D model. Correspondingly, the surface states of the 3D Z 2 TI descend from the h = 1 boundary states of the 4D ancestor model, i.e., the boundary states related to the second Chern number.
In this Rapid Communication, we reveal a connection among higher-order TIs, descendant pump families, and highdimensional CIs. Specifically, we show that 2D second-order TIs are the 2D descendants of a 4D chiral semimetal. We do so by first defining an ancestor 4D CI with well-defined first and second Chern numbers and then applying (4D → 2D) reduction to obtain the descendant 2D pump family [18,25,26]. We, then, show that the topological index, associated bulk responses, and the corresponding boundary phenomena of the 4D CI directly imply the properties of second-order TIs in the limit where the former becomes chiral. Respectively, we show that the descendant 2D family is divided into regions in parameter space separated by (bulk-or edge-) gap closures; these regions are distinguished by the appearance of midgap 0D states, localized at the corners. We derive the quantization of corner charge by extending the formula of Jackiw-Rebbi [42] to the 2D descendants and connecting it to the second Chern flux of the ancestor 4D Hamiltonian. Finally, using dimensional reduction, we generate various higher-order TIs solely via flux insertions through different planes of the ancestor model, leading to a simple procedure for finding new models for material realizations.
We consider a tight-binding model describing spinless charged particles moving on a 4D hypercubic lattice in the presence of a magnetic field [see Fig. 1
(a)],
where e μ is a lattice unit vector in direction μ, t μ is the nearest-neighbor hopping amplitude, t μν is the next-nearestneighbor hopping amplitude, and the magnetic flux is incorporated using Peierls' substitution [45]. The third term in Eq. (1) denotes the threading of b flux through each xy plaquette, Finally, a potentialV (m) = (−1) m x +m y V 0 c † m c m with V 0 as a constant (similarly when t z = 0 = t w ) gaps the spectrum, and the 4D model is a CI with well-defined first and second Chern numbers.
The chosen gauge in Eq. (2) leaves the HamiltonianĤ 4D invariant under lattice translations in the z and w directions. We can, therefore, write Eq. (1) in terms of the quasimomenta [43]. The inset shows the spectrum for t z /t x = 0. (e) The area S enclosing the interface between a nontrivial 1D TI and the vacuum (top) and the domain wall configuration in the k z -parameter space that describes it (bottom). In addition, the charge density ρ at half-filling is sketched. and where k ν is treated as an external parameter [46]. The descendant Hamiltonianĥ μ (k ν ) defines a 1D chain with a unit cell with two degrees of freedom, identical to the Su-Schrieffer-Heeger (SSH) chain [47]. The topological index pertaining to the 1D SSH model is the bulk dipole moment P μ (k ν ) (also known as polarization), which can be calculated using the Wilson loop formalism [32,48]. As a function of k ν , the bulk dipole P μ (k ν ) mod 1 takes two values of 0 and 1/2 with the quantization imposed by chiral symmetry; this ensures the existence of gap closing points in the (k μ , k ν )-parameter space, i.e., at the 2D Dirac cones [cf. Fig. 1(b)]. An on-site potential (−1) m μ V 0 c † m c m (or similarly a nonzero hopping t ν [43] in the Creutz model) breaks chiral symmetry and gaps the spectrum. As a consequence, the (k μ , k ν )-parameter space acquires a well-defined first Chern number c 1 = 2π 0 dk ν ∂ k ν P μ (k ν ), given by the integral of the change in dipole over the entire descendant 1D familyĥ μ (k ν ), see Fig. 1(c) [21,24,49]. Subsequently, the adiabatic evolution ofĥ μ [k ν (t )] along k ν dynamically realizes the 2D quantum Hall effect, dubbed topological pumping [10,21,24], see Table I.
A nonvanishing bulk dipole P μ (k ν ) = 0 has corresponding boundary effects where 0D subgap states appear at the material's boundary, see Fig. 1(d) [49]. Using the formula of Jackiw-Rebbi [42], we calculate the charge q S accumulated in a region S enclosing the boundary [see Fig. 1(e)] by linearizing the dynamics around zero energy to obtain the low-energy Hamiltonianĥ(k ν ) = |k μ | d · σ where d = {v(k ν )k μ , μ 1 (k ν ), μ 0 (k ν )} is a real-valued vector [50], σ = {σ y , σ x , σ z } are three anticommuting matrices {σ i , σ j } = 2δ i j , and is a cutoff energy scale. The accumulated charge q S is whered = d |d| andS = S[− , ] is the integration domain. The interface between two insulators in real space is equivalent to a domain wall in the k ν -parameter space [49], see Fig. 1(e). Thus, Eq. (7) is the first Chern (Berry) flux attached to the region defined byS in the 2D Brillouin zone (BZ) of the ancestor model. Importantly,S does not cover the entire 2D BZ of the ancestor model. Hence, in the limit where chiral symmetry is restored, |q S | becomes quantized to two values of 1/2 or 0, that correspond to encircling or not encircling a singularity with ±1/2 Berry flux. As a consequence, the 1D familyĥ μ (k ν ) is divided into a trivial and a nontrivial region with q S = 0 or 1/2, respectively, in accord with the value of the bulk dipole q S = S ∂ r μ P μ dr μ , cf. Fig. 1(c). This is known as the bulk-boundary correspondence of 1D Z TIs, i.e., the relation between the quantized bulk dipole and the charge at the boundary [50]. Note that Eq. (7) also describes the accumulation of nontopological charge at the boundary between two insulators (Tamm states), arising from surface polarizability [51].
In similitude to the (2D → 1D) reduction above, the main goal of this Rapid Communication is to demonstrate that the 4D Hamiltonian (1) leads to 2D second-order TIs. A (4D → 2D) reduction of Eq. (1) yields the 2D , describing a square lattice on the xy plane made out of SSH chains [cf. Eq. (5)] in both the x and the y directions and where each xy plaquette is threaded by a magnetic flux b, see Fig. 2(a). Assuming b = π , the topological invariant of the resulting 2D descendantĥ π xy (k) is associated with a quantized bulk quadrupole moment Q xy (either 0 or ± 1 2 ), which can be calculated using nested Wilson loops [32,38,52], see Fig. 2(b). Breaking chiral symmetry with an on-site potentialV (m) [similarly with hopping t z = 0 = t w in Eq. (1)] results in a 4D CI with a well-defined second Chern number [53] Fig. 2(b). Hence, an adiabatic evolution ofĥ π xy (k) over a closed surface in thek-parameter space dynamically realizes The charge density ofĥ π xy (k) at half-filling andk = (0, 0) has ±1/2 charge deviation at the corners. In (b)-(d), we used t x = t, t x /t y = 1, and t xz /t x = t yw /t y = 0.45. In (b), we used t z = t w = 0.001t to minimally open the gap. the 4D quantum Hall effect, dubbed 2D topological pumping [18,25,26], see Table I.
The bulk responses of the descendant 2D pump family have associated boundary phenomena [26] where: (i) 1D edge states, i.e., states of co-dimension h = 1, appear in the spectrum, and (ii) 0D corner states, i.e., states of co-dimension h = 2, disperse as a function ofk, see Figs. 2(c) and 2(d). We generalize Eq. (7) and derive [53] the charge accumulation q C in a region C enclosing the corner of the 2D material [cf. Fig. 2 4 } are five anticommuting matrices { μ , ν } = 2δ μν , and is a cutoff energy scale. We obtain
(a)] with low-energy Hamiltonianĥ
whered = d |d| andC = C × [− , ] 2 is the integration domain. Since the corner of the material can be expressed as the intersection of two edges, i.e., two domain walls in thekparameter space [cf. Fig. 2(a)], q C is equivalent to the second Chern flux attached to the region defined byC in the 4D BZ of the ancestor Hamiltonian. Note thatC does not cover the entire 4D BZ of the ancestor CI. Hence, in the limit where chiral symmetry is restored, |q C | becomes quantized to 1/2 or 0, corresponding to encircling or not encircling a 4D singularity with ± 1 2 second Chern flux. For the 2D descendantĥ π xy (k), we find that thek-parameter space is divided into trivial regions with |q C | = 0 and nontrivial regions with |q C | = 1/2; the latter exhibiting zero-energy states localized at the corners, in accord with the value of the bulk quadrupole q C = C ∂ r x ∂ r y Q xy d 2 r, cf. Figs. 2(b)-2(d).
The connection between charge accumulation at the corner and the second Chern flux is a key outcome of this Rapid Communication. In general, such charges can arise due to bulk properties as well as due to boundary effects. The charge accumulation q C in a finite macroscopic 2D material interfaced with another material, e.g, the vacuum, can be calculated using the electric multipole moments [32], where ρ bulk = −∇ · P + 1 2 ∂ r μ ∂ r ν Q μν are the contributions due to the bulk dipole P and quadrupole Q xy densities, ρ ∂ =ˆ n · P| ∂ −n μ ∂ r ν Q μν | ∂ are the contributions due to a "free" edge dipoleˆ n · P| ∂ and quadrupolen μ ∂ r ν Q μν | ∂ densities, and ρ ∂∂ = 1 2n α μn β ν Q μν is the contribution due to a point charge created by a free quadrupole density at the intersection of two edges with normal vectorsˆ n α andˆ n β . Hence, a nontrivial value of q C can originate from a combination of bulk and surface terms [53].
For the 2D Hamiltonianĥ π xy (k), the only nonvanishing contributions to the corner charge q C arise from a quantized bulk quadrupole Q xy . On the other hand, starting from the 4D ancestor Hamiltonian (1) with b = 0 we obtain, using (4D → 2D) reduction, a 2D descendantĥ 0 xy (k) composed of coupled SSH chains along the x and y directions, see Fig. 3(a). This modelĥ 0 xy (k) has zero bulk quadrupole Q xy but nonzero edge dipoles P| ∂ that result in two distinct phases with q C = 0 or 1/2; the latter having zero-energy states localized at the corners that merge into the bulk or edge spectrum at gap closing points [see Figs. 3(b) and 3(c)]. As a third example, we start from Eq. (1) with b = 0 and thread a π flux through each xw plaquette. This leads, using (4D → 2D) reduction, to a 2D descendant family denoted byĥ 0,π xy (k), describing SSH chainsĥ x (k z ) along the x direction, coupled to alternating SSH chainsĥ y (k w + π x) in the y direction, see Fig. 3(d). The charge q C is now a combination of bulk and surface terms that sum to quantized values 0 or 1/2, see Fig. 3(e). The spectrum is separated by bulk-or edge-gap closing points into regions characterized by the appearance of zero-energy states localized at the corners, see Fig. 3(f). In all three cases, the corner charge accumulation is associated with a nontrivial value of the second Chern flux [53].
The demonstrated relationship between the 4D chiral semimetal (1) and the 2D second-order TIs offers a plethora of generalizations. Namely, a wide variety of 4D ancestor models can be constructed where various planes are threaded with 2π/q fluxes (where q is an even integer), and different directions are dimensionally reduced. An even q is crucial for obtaining a low-energy theory corresponding to decoupled Dirac cones [20], thus, defining regions in the BZ that are separated by gap closures. Moreover, our results readily extend to 3D, explaining the appearance of hinge modes and relating corner states to a six-dimensional CI and its third Chern number [19,30]. These charges arise from a combination of Fig. 2(c). In (b), (c), (e), and (f), we have used t x = t, t y /t x = 1/10, t xz /t x = t yw /t y = 0.45, and an on-site staggered mass V 0 = 0.001t. octapole, quadrupole, and dipole moments. Equivalently, our procedure generates multiple topological pump realizations where charge transport is proportional to the modulation of the bulk dipole, quadrupole, and octapole moments. This naturally explains the appearance of surface, hinge, and corner modes (cf. Fig. 2(b) and Ref. [18]).
In this Rapid Communication, we reveal a connection among the physics of high-order TIs, topological pumps, and Chern insulators using dimensional reduction. This allows us to define a topological index associated with the charge accumulation at the corners, leading to a simple unifying understanding of standard TIs and higher-order TIs. It engenders a single ancestor high-dimensional insulator and uses dimensional reduction as a tool to find new higher-order TIs, each with its own low-dimensional description. Comparing our invariant with the electric multipole expansion, we establish that corner charges arise from the combination of bulk and surface multipole moments. | 3,824.8 | 2020-06-04T00:00:00.000 | [
"Physics"
] |
Biomolecular and Genetic Prognostic Factors That Can Facilitate Fertility-Sparing Treatment (FST) Decision Making in Early Stage Endometrial Cancer (ES-EC): A Systematic Review
Endometrial cancer occurs in up to 29% of women before 40 years of age. Seventy percent of these patients are nulliparous at the time. Decision making regarding fertility preservation in early stage endometrial cancer (ES-EC) is, therefore, a big challenge since the decision between the risk of cancer progression and a chance to parenthood needs to be made. Sixty-two percent of women with complete remission of ES-EC after fertility-sparing treatment (FST) report to have a pregnancy wish which, if not for FST, they would not be able to fulfil. The aim of this review was to identify and summarise the currently established biomolecular and genetic prognostic factors that can facilitate decision making for FST in ES-EC. A comprehensive search strategy was carried out across four databases; Cochrane, Embase, MEDLINE, and PubMed; they were searched between March 1946 and 22nd December 2022. Thirty-four studies were included in this study which was conducted in line with the PRISMA criteria checklist. The final 34 articles encompassed 9165 patients. The studies were assessed using the Critical Appraisal Skills Program (CASP). PTEN and POLE alterations we found to be good prognostic factors of ES-EC, favouring FST. MSI, CTNNB1, and K-RAS alterations were found to be fair prognostic factors of ES-EC, favouring FST but carrying a risk of recurrence. PIK3CA, HER2, ARID1A, P53, L1CAM, and FGFR2 were found to be poor prognostic factors of ES-EC and therefore do not favour FST. Clinical trials with bigger cohorts are needed to further validate the fair genetic prognostic factors. Using the aforementioned good and poor genetic prognostic factors, we can make more confident decisions on FST in ES-EC.
Introduction
According to the World Health Organization, endometrial carcinoma (EC) is the most common gynaecological cancer in Europe [1]. The incidence of EC is approximately 15,000 newly diagnosed women each year, of which 4% are at reproductive age [2]. In low-risk diseases, total hysterectomy and bilateral salpingo-oophorectomy provide patients with up to 93% chance of cure [3]. However, temporal preservation of the uterus in early stage endometrial cancer (ES-EC) is an available option for women who have a strong will to preserve fertility and achieve spontaneous pregnancy. This raises primarily medical but also ethical and social dilemmas since fertility-sparing treatment (FST) consists of compromising radical care in the effort of allowing them to reproduce. Sometimes, these Int. J. Mol. Sci. 2022, 23, 2653 2 of 20 patients will need to preserve their fertility by assisted reproductive technology (ART) [4]. They would also freeze their oocytes via a vitrification system, usually using a GnRh antagonist as a trigger to freeze all the gametes in ovarian stimulation [5][6][7]. Type 1, low-grade EC (G1, G2), is considered an ES-EC. It is currently the only histological type of EC that can be addressed with a fertility-sparing approach. For FST to be possible, myometrium and lymph-vascular space must not be involved adnexal invasion should not be seen. Preliminary evidence in disease progression and life expectancy in patients following temporal uterine preservation for ES-EC are encouraging and appear to be an acceptable management option. In a recent systematic review by Schuurman et al., 2021 [8] 62.6% of patients with complete remission on FST were reported to have a pregnancy wish. Among these patients with complete remission, 36.9% became pregnant. Nevertheless, the long-term outcome, survival rate, and quality of life in these patients are not yet prospectively investigated.
Most EC cases are sporadic, and only 10% of them are considered familiar. For that small percentage of usually younger individuals, tissue genetics and biomolecular markers are vital prognostic factors for the extent of EC progression, and it is, therefore, important to consider them prior to the FST decision. A diagnostic classification of the tumour based on molecular biology was provided by The Cancer Genome Atlas Research Network (CGARN) in 2013. CGARN prompted a growing interest in risk factor stratification of patients based on molecular biology and genetics of the tumour [9]. Some of these prognostic factors, microsatellite instability (MSI), mismatch repair genes (MMR), polymerase epsilon (POLE), tumour protein 53 (TP53), human epidermal growth factor receptor 2 (HER2), Kirsten rat sarcoma viral oncogene homolog (KRAS) and phosphatase, and tensin homolog (PTEN) mutations are already well established in the clinical setting and management of patients. Further research in recent years continued to support these as prognostic factors and gave way for the novel discovery of additional genetic and biomolecular markers with promising results.
There are several reasons why the identification and validation of prognostic factors is important in oncology [10]. By determining which genes or biomolecular factors are prognostic of outcomes, we gain insights into the physiology as well as pathology of the disease. Secondary, appropriate treatment modalities can be established either through genetically targeted treatments or through treatment personalised to the patient. Prognostic factors can also be used in the design, conduct, and analysis of clinical trials as well as preventatively for patients' families and informatively regarding their own risk of recurrence or death [11].
The aim of this review is to identify and summarise the currently established biomolecular and genetic prognostic factors that can facilitate the decision for FST in cases of ES-EC. Markers that designate bad prognosis, metastasis, and early recurrency could be used to deny FST. On the other hand, markers that demonstrate a good prognosis can help clinicians in decision-making for the management of patients wishing to preserve fertility. The secondary outcome was setting an initial path towards establishing guidelines for FST management in patients with ES-EC. In doing so, without forgetting that an individual approach is mandatory as each patient's characteristics and expectations regarding motherhood differ, it is equally important for the psychological impact of these gynaecological diseases to be considered [12].
Endometrioid Endometrial Cancer (EECs)
Bokhman et al., 1987 identified 70-80% of EC as EECs. EECs are linked to unopposed estrogen stimulation in young postmenopausal women [13]. Jongen et al., 2009 [14] conclude that while patients with estrogen receptor-alpha positive tumours have better overall survival, the absence of progesterone receptor-A is also an independent prognostic factor for disease-free survival and disease relapse. Specifically, the expression of estrogen receptor-A and the ratio of progesterone receptor-A and -B were associated with lower grade tumours as well as both shorter disease-free survival and shorter overall survival.
EECs can be further differentiated according to clinical and histopathological variables but also according to the activation and inactivation of certain genes. Studies showed that EECs with K-RAS, HER2, and b-Catenin gain function as well as microsatellite and PTEN loss-of-function alterations [15][16][17]. Other commonly mutated genes in ES-EC include FGFR2, ARID1A, CTNNB1, PIK3CA, and PIK3R1 [18][19][20]. However, the benefit of the classification of the aforementioned genes in premenopausal women who are interested in FST is unclear [21].
Established Genetic and Biomolecular Markers as Prognostic Factors
Four prognostic categories were established through CGARN. POLE ultra-mutated, MSI hypermutated low copy-number abnormalities, and high copy-number abnormalities. Each group is characterised by specific mutations and a different prognosis (Figure 1). conclude that while patients with estrogen receptor-alpha positive tumours have better overall survival, the absence of progesterone receptor-A is also an independent prognostic factor for disease-free survival and disease relapse. Specifically, the expression of estrogen receptor-A and the ratio of progesterone receptor-A and -B were associated with lower grade tumours as well as both shorter disease-free survival and shorter overall survival. EECs can be further differentiated according to clinical and histopathological variables but also according to the activation and inactivation of certain genes. Studies showed that EECs with K-RAS, HER2, and b-Catenin gain function as well as microsatellite and PTEN loss-of-function alterations [15][16][17]. Other commonly mutated genes in ES-EC include FGFR2, ARID1A, CTNNB1, PIK3CA, and PIK3R1 [18][19][20]. However, the benefit of the classification of the aforementioned genes in premenopausal women who are interested in FST is unclear [21].
Established Genetic and Biomolecular Markers as Prognostic Factors
Four prognostic categories were established through CGARN. POLE ultra-mutated, MSI hypermutated low copy-number abnormalities, and high copy-number abnormalities. Each group is characterised by specific mutations and a different prognosis ( Figure 1). The POLE ultra-mutated category has the most favourable prognosis. It is currently associated with longer progression-free survival and correlated with PTEN, PIK3R1, PIK3CA, FBXW7, KRAS, and TP53 mutated genes [17]. MSI and MMR groups are portrayed to have an intermediate prognosis. The specific mutated genes involved with this group are PTEN, KRAS, and ARID1A [22]. The MSI group represents altered mechanisms of MMR genes (MLH1, MSH2, MSH6, or PMS2), of which their inactivation leads to MSI accumulations [23]. Finally, we have the low copy number group. This group has an intermediate prognosis and is associated with CTNNB1 and PTEN gene alterations [22]. The high copy-number group is linked to high-grade EC and, more specifically, the serous histotype; therefore, we are not going to discuss this further. A more detailed analysis is presented in Table 1 and will be described further below. Figure 2 portray a visual representation of the functions of each gene described and the pathological consequences of their mutations. The POLE ultra-mutated category has the most favourable prognosis. It is currently associated with longer progression-free survival and correlated with PTEN, PIK3R1, PIK3CA, FBXW7, KRAS, and TP53 mutated genes [17]. MSI and MMR groups are portrayed to have an intermediate prognosis. The specific mutated genes involved with this group are PTEN, KRAS, and ARID1A [22]. The MSI group represents altered mechanisms of MMR genes (MLH1, MSH2, MSH6, or PMS2), of which their inactivation leads to MSI accumulations [23]. Finally, we have the low copy number group. This group has an intermediate prognosis and is associated with CTNNB1 and PTEN gene alterations [22]. The high copy-number group is linked to high-grade EC and, more specifically, the serous histotype; therefore, we are not going to discuss this further. A more detailed analysis is presented in Table 1 and will be described further below. Figure 2 portray a visual representation of the functions of each gene described and the pathological consequences of their mutations.
PTEN
The phosphatase and tensin homolog (PTEN) mutations occur early in the neoplastic process of ES-EC reported in 57−83% of cases and represent the most common genetic mutation reported [48]. PTEN gene alteration is located at chromosome 10q23 and behaves as a tumour suppressor gene. It encodes for both a lipid and a protein phosphatase, inducing cell cycle arrest at the G1/S checkpoint. Additionally, it inhibits the growth-factor-stimulated mitogen-activated protein kinase (MAPK) signalling pathway. Subsequently, it affects focal adhesion formation, cellular differentiation, and proliferation as well as cell spread, migration, inflammatory responses, and apoptosis [49]. Salvesen et al., 2004 [24] demonstrated a significant (p = 0.05) association between PTEN expression loss and metastatic disease. PTEN mutation in exon 5 and 8 was also significantly correlated with ES-EC, low grade, young age, and favourable prognosis. Additionally, PTEN alterations were associated with microsatellite instability (MSI), decreased hMLH1 expression, hMLH1 inactivation, and hMLH1 methylation [24].
PTEN negatively regulates the downstream pathway of phosphatidylinositol 3-kinase (PI3K), suppressing cellular growth, proliferation, and survival [28,38]. The dominant acti-vation event in the PI3K pathway appears to be PTEN protein loss. The PI3K/AKT/mTOR pathway is the most deregulated signalling pathway and is affected in more than 80% of ES-EC [28,38] [51] demonstrated that PIK3R1 (p85α) mutations occur at a higher rate in EC than in any other tumour lineage. PIK3CA and PIK3R1 mutations are independently linked to favourable survival, although PIK3CA is also linked to recurrency in ES-EC [39,52]. On the other hand, PIK3R1 was deemed an unfavourable prognostic factor for ES-EC [25]. Furthermore, Hayes et al., 2006 [39] identified PIK3CA as a marker for disease invasion, confirming Samuel et al.'s findings.
POLE
POLE mutations occur in 7-12% of ECs [29]. POLE is responsible for the regulation of glycolysis and cytokine secretion and therefore affects cell metabolism and immune response. Imboden et al., 2019 [52] identified that patients with POLE-mutated tumours were significantly younger. In this study, patients with POLE mutation appeared to also be nulliparous and to have a history of smoking. The tumours themselves are portrayed to be aneuploidy more frequently. As for prognosis, these patients appeared to have significantly better results and particularly excellent prognoses in cases with hotspot mutations. POLE is deemed a good prognostic factor regarding overall survival and has favourable outcomes and therefore is a good indicator for FST [35][36][37][38]. Haruma et al., 2018 [30] demonstrated that POLE mutations in EC are associated with a reduced risk of progression-free survival and distant metastases. This was also demonstrated in combination with MSI features (implicating MMRd) [31,32].
EGFR and HER2
Epidermal growth factor receptor (EGFR) is a family of receptors (HER1, HER2, and HER4) that are frequently implicated in EC due to their strong association with the PI3K/AKT and RAS/RAF/MEK pathways [53]. EGFR was found in 43-67% of patients with EC and was also linked with shortened disease-free and overall survival [53]. More specifically, HER2 gene amplification and receptor overexpression was demonstrated in EC. High HER2 expression is an independent prognostic factor influencing progression-free and overall survival in ES-EC [40,41]. Over-expression of HER2 was more common among more aggressive cancers with a significantly worse prognosis [41]. After a median follow-up of 50 months, there were 43 (25.4%) recurrences, of which the majority of recurrences were in the HER2-positive cohort (50.0%). HER2 is also linked to an increased chance of recurrence. However, Morrison et al., 2006 [40] reported HER2 to play a minor role in ES-EC, which is more common in the clinical setting.
Among the EGFR family of receptors, HER-3 expression was not identified in EC [54]. The EGFR family was identified in 39.7% of patients, with HER4 being the majority of expressions (49.2%) and HER2 at 41.3%. However, HER-2 was reported as a more significant prognostic factor in ES-EC than HER4, and it was also associated with high MLH1 expression [55].
CTNNB1
CTNNB1 mutation is found in 20-40% of cases of ES-EC. Imboden et al., 2020 [26] report this mutation to be up to 50% in FIGO I, grades 1 and 2, which is almost double that of PTEN mutations (27%). It is a vital component of E-cadherin, which is responsible for cell adhesion and is closely associated with the Wnt signalling pathway. Activation of the Wnt pathway contributes to the progression of tumours, abnormal proliferation, and gene expression. CTNNB1 mutations were also reported in dual mutation co-operativity such as with PTEN loss and also accompanied by a KRAS mutation [34]. However, CTNNB1 mutations were less commonly present in MSI positive tumours than other genetic alterations [56] Myers et al. revealed that patients with CTNNB1 mutation have a risk of recurrence which is nine times higher than in those without mutation [57]. Moreover, Kurnit et al., 2017 [34] found the presence of CTNNB1 mutations at ES-EC to be associated with higher rates of disease recurrence, lower rates of deep myometrial invasion, and lymphatic/vascular space invasion. Additionally, Stelloo et al., 2016 [35] characterise CTNNB1 mutations as a more aggressive subset in ES-EC, with 35% of intermediate features in high-risk patients.
KRAS
KRAS is an inactivated oncogene involved in signal transduction by communicating with several cell membrane receptors, including EGFR [58]. Current evidence correlates KRAS mutations with the down-regulation of the MAPK and PI3K/AKT pathways as well as the up-regulation of endometrial cell oestrogen receptors, leading to excessive cell proliferation and carcinogenesis [15,59] and increased cell proliferation and apoptosis.
KRAS has consistently appeared in several studies and is reported to have a relatively high prevalence in ES-EC. KRAS mutations were detected in 10-30% of EC and 6-16% of cases of endometrial hyperplasia [60,61]. Byron et al., 2012 reported KRAS mutations to be as high as 19% in ES-EC [54]. Furthermore, KRAS mutations were found to be linked with MSI-positive EC. On the other hand, no association was found with FGFR2 and CTNNB1 mutations [45,56].
KRAS mutation can cause hypermethylation changes in genome expression. More specifically, Muraki et al., 2019 found hypermethylation of the MLH1 promoter in 40% of ES-EC cases, which can cause concurrent loss of function in DNA repair proteins [36,62]. Wang et al., 2012 [42] reported KRAS to have significant effects on the recurrence of EC individually as well as when combined with PIK3CA [25]. KRAS mutations were also associated with longer disease-free survival.
FGFR2
FGFR2 mutations were reported independently in EC [45]. Studies demonstrate a similar link between somatic oncogenic FGFR2 mutations in EC as with cervical cancer. Dutt et al., 2008 [46] reported FGFR2 somatic mutations in 12% of EC samples. Furthermore, it was found that patients with FGFR2 mutation had shorter progression-free survival and EC-specific survival [63]. Gatius et al., 2011 [45] showed that ES-EC has a higher expression of FGFR2 than non-endometrioid EC. In the early stages of the disease, FGFR2 mutations were correlated with shorter disease-free and overall survival [62]. However, Pollock et al., 2007 [47] identified no association between FGFR2 mutation and overall or disease-free survival in ES-EC.
FGFR2 immunostaining was statistically significantly associated with estrogen and progesterone receptors and inversely associated with PTEN expression. Additionally, FGFR2 mutations coexisted with mutations in PTEN, PIK3CA, and CTNNB-1. FGFR2 mutations were also significantly more common in MSI-positive tumours than CTNNB1 mutations and appeared to have shorter disease-free survival [45].
ARID1A
ARID1A was reported in 19-44% of ECs [42]. ARID1A (AT-rich interactive domain 1A) is located on chromosome 1p36.11 and encodes ARD1A protein which is a vital component of the SWI/SNF (switch/sucrose non-fermenting) complex. This complex is responsible for regulating proliferation, DNA repair, differentiation, and tumour suppression [64]. Two studies identified ARD1A as a tumour suppressor gene linked to gynaecological diseases and, more specifically, EC [42,65]. Since then, studies reported an increased number of mutations in the ARID1A gene in 26-37% of EC-EC. Werner et al., 2013 [42] reported that loss of ARID1A is associated with an early event in the process of the carcinogenesis of endometrioid carcinomas and with deep myometrial infiltration. The ARID1A T-rich interactive domain family was also associated with MSI as frequently as 23.1% [9] 2.2.8. P53 Suppressor P53 protein (TP53), encoded by the P53 gene, is highly involved in the cell cycle, differentiation, and apoptosis. P53 mutations lead to rapid tumour progression and invasion, which is associated with poor ES-EC prognosis [63]. Mutation of the P53 gene was found in 10-20% of EC, while TP53 overexpression was present in 20-30%, being a very common abnormality in several human cancers [27,66].
P53 was described to have poor results in mortality and was linked to both recurrence and metastasis independently as well as when combined with L1CAM [27,43,44,66,67]. L1CAM is an X-linked genetic mutation located on the Xq28 gene. It encodes for the L1 protein, which spans the cell membrane of nerve cells and allows for neighbouring cell adhesion. L1 protein also plays a role in migration, the organisation of neurons, and axon outgrowth [68]. Additionally, Kommos et al., 2018 [43] identified L1CAM expression to be present in 80% of P53 abnormal tumours. The PORTEC trial found that positive L1CAM expression in stage I EC patients had a significant correlation with distant recurrence and overall survival [69]. L1CAM expression is, therefore, significantly, but not universally, associated with mutant P53. It may be strong enough for clinical implementation as a prognostic marker in combination with P53 as well as a promising therapeutic target [70].
Discussion
This is the first thorough systematic review that identifies and summarises the most well-established biomolecular and genetic prognostic factors that facilitate FST decisionmaking in cases of ES-EC. The classification of biomolecular and genetic prognostic factors as 'good', 'fair,' and 'poor' is an important aspect in the management of patients who wish to risk ES-EC progression for a chance in motherhood. Markers that designate bad prognosis, metastasis, and early recurrency could be used to deny FST, and on the contrary, markers that demonstrate a good prognosis can help clinicians in the management of patients wishing to preserve their fertility. Markers that are fair prognostic factors require further discussion with the patient, discussing the risks. Our review summarises and describes these results and their significance but also indicates the current gaps of knowledge in this field of research.
Fertility Sparing Treatment (FST)
The recurrence rate was reported to be as high as 35%, and patients were advised on the importance of pursuing pregnancy soon after remission and hysterectomy soon after family planning completion [71][72][73]. Kim et al., 2009 [74] reported the recurrence rate after FST to be 38.9% in ES-EC, which is much higher than 5.5% and 5.5% in combined histology EC and G2 EC, respectively. Alternatively, endometrial intraepithelial neoplasia was shown to have a much higher recurrence rate (50%) after FST in a 3 months interval follow-up of endometrial sampling by hysteroscopy. Moreover, Kim et al., 2009 [74] showed that after a median follow-up of 40.7 months, 12 patients (66.7%) preserved their uterus and 8 patients (53.3%) became pregnant with a total of 14 successful pregnancies among patients trying to become pregnant in both groups. Thirty-three percent of patients were reported to have stable disease, and 66.7% had a complete response rate, of which 25% relapsed [75]. Other studies report a relatively high number of foetal losses at 31.3% but also a live birth rate of 72% after FST for EC [76]. However, the limited number of studies describing obstetric outcomes can influence these numbers. It is important to note that there are also clinicopathological factors that can affect FST, such as polycystic ovarian syndrome (PCOS), obesity, diabetes, anovulation, exogenous oestrogen exposure, nulliparity, amenorrhea, and irregular menstruation [11,77,78]. More specifically, in patients with PCOS and reproductive failure, metformin administration and vitamin D supplementation with inositol successfully improved ovulation restoration [79][80][81]. This was especially true in pregnancy where, due to insulin resistance, patients tend to develop gestational diabetes [82].
Eligibility Criteria for FST
When considering a conservative management approach in ES-EC, we should consider the clinical and pathological characteristics of the tumour in order to select the appropriate medical intervention. A conservative management approach could be considered in patients < 40 years old who intend to preserve fertility and plan to conceive as soon as possible after remission. They should have no contraindications for medical treatment and a histological diagnosis of grade I EC; histotype: endometrioid with positive hormone receptor (type I), tumour diameter < 2.0 cm, stage IA without myometrial and adnexal involvement, negative lymph-vascular space invasion, and diffuse immunohistochemical expression of progesterone receptors on endometrial biopsy. These are the patients who are considered to be at "low risk" [83]. Furthermore, according to the Gynecologic Oncology Group (GOG) and Federation International of Gynecologic and Obstetrics (FIGO), the most important prognostic factors for lymph node metastasis in patients with EC were the grade of tumour and the depth of myometrial invasion with the risk of involvement less than 1% and excellent 5 year progression-free survival of 95% if the tumour is grade 1 with overall survival of 90%. In the absence of these risk factors, a conservative approach to surgical staging is feasible, safe, and not associated with an increase in cancer-related mortality [84,85].
PTEN
The use of PTEN alteration as a prognostic factor is still controversial. Studies found PTEN mutations to be associated with favourable clinical and pathologic characteristics, while PTEN promoter methylation and PTEN loss of function were linked with poor prognosis and metastatic disease [19,24,86]. On the one hand, it is suggested that PTEN may be a tumour cell regulator for invasion and metastasis, but on the other hand, it is suggested that PTEN inactivation by mutation is an early event in endometrial tumourigenesis and therefore not linked to the metastatic progression of the disease [24,48,86]. PTEN alterations were linked to advanced disease in cancers other than EC but rarely presented in gynaecological cancers other than EC [24,87].
Studies showed that PTEN mutations occur in the earliest stages of EC and frequently coexist with other mutations [22,46]. PTEN loss of function is one of the most frequently identified mutations in ES-EC and negatively affects the regulation of the PI3K-AKT. In fact, PTEN and PI3K/Akt/mTOR signalling pathways were associated with poor prognosis [88][89][90]. Interestingly, these mutations were reported in more cases of EECs (75%) than in non-EECs (43%) [91].
To conclude, PTEN loss is overall a good prognostic factor for ES-EC. However, these findings are inconsistent among the literature, and therefore large clinical trials are required to examine its effectiveness as a prognostic factor for FST in patients with ES-EC. Additionally, the accurate and fast identification of this mutation is vital given the narrow time window that the clinicians have to decide patient eligibility for FST in order to achieve optimal therapeutic benefit. Djordjevic et al., 2012 [91] demonstrated that PTEN immunohistochemistry is a more effective tool in detecting the majority of cases with PTEN loss of function in a quick and cost-effective manner compared to PTEN sequencing.
MSI and MMR
MMR genes work with the DNA repair system to promote genetic stability. The MMR system plays a core role in carcinogenic mechanisms of ES-EC. These actions cause oncogene mutation, inactivation of tumour suppressor genes, and oncogene activation leading to chaotic cell proliferation and, consequently, carcinogenesis.
Mutations during DNA replication and defects in the MMR genes result in MSI. MSIs have a predictive value for the efficacy of immune checkpoint inhibitors in metastatic tumours regardless of primary tissue origin. This was initially discovered in HNPCC patients along with MMR mutations [92].
MSI is a useful biomarker for identifying patients who have a good prognosis [30]. However, results on MSI as a prognostic factor are inconsistent. It is linked to recurrence but not to metastasis and overall survival. We can therefore argue that MSI is a fair prognostic factor for FST [30]. This inconsistency is based on a number of studies showing insignificant results rather than contradictory data on the effect of MSI in ES-EC. Testing for MMR status/MSI in ES-EC is of vital importance, and it is also identified in patients who are at higher risk for human non-polyposis colorectal cancer (HNPCC/Lynch Syndrome) [17]. Testing for EC in patients with HNPCC is therefore advised, even though currently there is limited evidence on the benefits of HNPCC-associated EC screening.
HNPCC-associated EC cases lack additional mutations, suggesting that in the MMRd context, few additional molecular changes lead from pre-invasive lesions to carcinoma [93]. For these patients, hysterectomy and bilateral salpingo-oophorectomy might be a more appropriate treatment method than FST as a preventative measure for both endometrial and ovarian cancer. This should preferably be before the age of 40 years [94].
Travaglino et al. also showed Dusp6, a MAPK signalling pathway marker, to be an indicator of good response to treatment along with the deficiency of MMR [88]. This was also true when combined with PTEN. The combination of PTEN involvement, MMRd, and Dusp6 deficiency were proven to be important prognostic factors in conservative treatment failure.
However, evidence from MMRd studies is inconsistent. MMR status is suggested to be a predictive biomarker for FST. It is described as not associated with disease progression, but at the same time, it is linked to have a relatively high rate of recurrence. Chung et al., 2021 identified 9 patients with MMRd among a cohort of 54 (17%). Four of these patients (44%) underwent immediate hysterectomy because of FST failure, and three patients (33%) presented with an upstaged diagnosis after hysterectomy [33]. However, this study had a relatively small cohort, and for validity, larger-scale studies are required.
In conclusion, the International Society of Gynecological Pathology recommended using MMR-immunohistochemistry (IHC) testing for both MMR status and MSI in all EC samples, irrespective of patient age [95]. Using IHC, the expression of four MMR proteins MLH1, PMS2, MSH6, and MSH2, are assessed, and additionally, PMS2 and MSH6 antibodies can also be assessed [96].
POLE
The POLE gene encodes the major catalytic subunit of DNA polymerase-ε [51]. Polymerase-ε is thought to function in strand synthesis [13]. POLE mutations improve the prognosis of EC by regulating cellular metabolism through AMF/AMFR signal transduction [31]. Li et al. identified both AMF/PGI and AMFR/gp78 to have higher expression in POLE mutants [31]. Comprehensive low expression of POLE and high expression of AMFR/gp78 showed a positive correlation with patient survival time. Phosphoglucose isomerase (PGI) is a glycolytic enzyme involved in the gluconeogenesis-glycolysis pathways. It is an extracellular cytokine as well as an autocrine motility factor (AMF). Therefore, AMF/PGI plays a dual role as a phosphor-glucose isomerase that catalyses the interconversion of glucose-6-phosphate and fructose-6-phosphatein glycol metabolism when it is effective as a cytokine [31,53,97]. Furthermore, POLE mutations are also linked to ultrahigh mutation rates and frequent activation of WNT/CTNNB1 signalling. Li et al., 2019 [31] showed that the presence of POLE mutations in the early clinical stage (I + II) and low histologic grade (G1) EC had a favourable prognosis. The main reason suggested was the somatic POLE ultra-mutation which causes an abundance of antigenic neoepitopes that triggers an anti-tumour immune response [98]. Stello et al., 2016 [35] also described favourable features in 50% of POLE-mutant EC in the absence of MSI and CTNNB1 mutations. Haruma et al., 2018 [30] demonstrated that ECs POLE mutations are associated with a reduced risk of recurrence and distant metastases. This was also demonstrated in combination with MSI features (implicating dMMR). Despite POLE mutations being considered a good prognostic factor, favouring FST, Veneris et al. [29] analysed a case where recurrence was observed, concluding that POLE-mutated EC has high tumour mutation burden, tumour neoantigen production, and tumour-infiltrating T cells. In addition, Van Gool et al. reported POLE mutations in 7-12% of EC and demonstrated that POLE-mutant ECs have an increased lymphocytic infiltrate in comparison to POLE wild-type/MSI-high and POLE wild-type/MSS subgroups [70]. However, these studies include a low number of participants with complex EC histology.
Imboden et al., 2019 [52] concluded that the POLE-mutated EC definition needs further specification to achieve a more accurate report on survival prognosis. This requires the inclusion of clinicopathologic characteristic variants of uncertain significance such as parity, BMI status, and smoking status. POLE-mutated EC was linked, though not statistically significant, to nulliparous women with lower BMI and often current or past smokers.
Therefore, despite the fact that POLE mutations can significantly improve ES-EC prognosis, in order to be eligible for FST, additional genetic alterations specific to the patient's characteristics need to be considered.
EGFR, HER2
HER2 is linked to poor overall survival in EC. Morrison et al., 2006 [40] reported a median overall survival of 5.2 years for patients with overexpression of HER2, 3.5 years for patients with expression of HER2, and 13 years for patients who did not express HER2 on their cancers. Okuda et al., 2010 [19] reported HER2 overexpression to be more frequent in non-EECs and suggested that HER2 overexpression in ES-EC characterises late progression and differentiation events. A small cohort study also confirmed increased expression of EGFR and HER2 overexpression [53]. Lastly, Erickson et al., 2020 [41] reported HER2positive tumours to have worse progression-free survival, recurrence, and overall survival, after a median follow-up of 50 months in 169 stage I uterine serous carcinomas.
EGFR receptors have a vital role in the carcinogenesis of EC. More specifically, HER2 and HER4 overexpression is linked to more aggressive, high-grade ECs and indicate a poor prognosis. In ES-EC, the evidence is limited, but EGFR and HER2 are so far consistently considered poor prognostic factors for ES-EC. FST is therefore not recommended for these patients as more harm than good might be seen from delaying treatment [41].
CTNNB1
Patients with CTNNB1 somatic gene mutation appear at high incidence in ES-EC. The accumulation of beta-catenin was inversely correlated with the patient's age [89]. In this setting, the CTNNB1 mutation has a major role as a molecular classifier of EC, especially in young patients [33]. Following FST with progesterone, the expression of β-catenin was significantly increased in patients with disease progression [90].
When it comes to recurrency, CTNNB1 is described as a fair prognostic factor in ES-EC, even though the results are not always significant. Additionally, Kurnit et al., 2017 [34] found CTNNB1 mutations to be risk factors for disease recurrence even in presumed lowrisk patients. These patients are usually of a younger age, at an early stage of the disease, and have a low incidence of lymphatic vascular space. Although Hu et al., 2019 [90] did not identify it as a marker on recurrence, they describe that due to its high prevalence in ES-EC, it can play a role in pathogenesis and early treatment.
Further research on CTNNB1 mutation-related ES-EC and its prognosis after FST are essential in identifying its clinical relevance in decision making.
KRAS
Although evidence is limited and occasionally conflicting, there is a clear trend in the literature showing that KRAS plays a role early in EC progression, especially when the disease originates from hyperplastic endometrium [99]. Cote et al., 2012 [37] exclusively studied African American patients and significantly associated KRAS as a mutation that commonly occurs in ES-EC. Additionally, there were no observed mutations in other histological EC tumours such as serous, clear cell, or mucinous tumour types arguing that KRAS mutations may not have metastatic potential. However, so far, metastasis in KRAS mutations has not been studied [37].
KRAS is so far defined as a good prognostic indicator in mortality but has proven to have a high prevalence in recurrency. Data on metastasis is not yet available in literature, and KRAS can therefore be used as a fair prognostic factor in ES-EC favouring FST [59]. However, the results are relatively inconsistent, and the cohorts are too small to be able to validate their significance in the literature [25,37].
FGFR2
Although KRAS and FGFR2 mutations share similar activation of the MAPK pathway, Byron et al., 2012 [62] and Jeske et al., 2014 [63] suggest very different roles in tumour biology. Furthermore, Jeske deemed a significantly higher relative risk of failure and shorter progression-free survival when known clinicopathological factors such as age, stage, and grade when taken into consideration.
Jeske and colleagues showed FGFR2 mutation as more prevalent among advanced age (≥70 years) patients. FGFR2 mutations were also consistent with more aggressive disease and were more common in patients with more advanced stage III/IV, although this did not reach statistical significance [63].
The identification of activating mutations in FGFR2 in ES-EC is of direct clinical relevance, and further studies are required to identify the relationship with recurrence, metastasis, and overall survival.
ARID1A
ARID1A RNA expression is significantly correlated with ARID1A protein loss. Thus, loss of ARID1A appears to be an early event in the carcinogenesis of endometrioid uterine carcinomas, and the association with deep myometrial infiltration may suggest importance for invasiveness. Werner et al., 2013 [42] identified ARID1A loss to be associated with younger patients and diploid tumour cells, suggesting ARID1A loss relationship with less aggressive EC. Lastly, the evidence in the literature is inconsistent, and there is no relationship status between ARID1A loss and disease progression or overall survival.
P53
TP53 mutation is described as one of the most important molecular factors which predict prognosis in ES-EC and is associated with an unfavourable outcome [19,56]. Levine et al. reported fewer TP53 mutations in ES-EC compared to more frequent mutations (PTEN, CTNNB1, PIK3CA, ARID1A, KRAS). Sherman et al., 1995 [16] debated that in endometrial intraepithelial carcinoma, as well as in transformation and dedifferentiation of other neoplasms, P53 protein expression plays a significant role. Therefore, L1CAM status presence among TP53 mutations can be a significant prognostic factor for worse disease-specific survival. This was described consistently over the literature, and we can therefore define TP53, as well as its combination with L1CAM, as poor prognostic factors for ES-EC and subsequently for FST.
Fertility Sparring Treatment in Current Clinical Practice
The European Society of Gynecological Oncology (ESGO) confirm that FST is a safe option for patients with ES-EC (Stage1A with endometrial histological type and grade 1 EC) [100].
A Swedish nationwide population-based cohort study identified that natural fertility was maintained after FSS in all patients with 11% of women giving birth to healthy children, all delivered at full-term [101]. Additionally, complete and partial response to progestinbased FST was identified to be up to 83% in patients with ES-EC. Relapse was diagnosed in 20% of these patients, with the total number of pregnancies at 43% and total live births at 30% [102]. In a systematic review and meta-analysis by Gallos et al., 2012 28% of women with ES-EC progressed to have live births following FST. Similarly, Cappelletti et al., 2021 suggested that progestin-based FST is viable for women with well-differentiated, clinicalstage 1A, EEC. Even though only one out of five women were estimated to achieve a live birth, the use of prognostic factors can improve both patient treatment selection and reproductive outcomes [103].
Limitations
To date, there are limited articles available on the oncological safety of FST as well as a limited number of studies specific to ES-EC. The papers that are specific do not always look into individual genetic mutations but in groups of several genetic mutations. Some of these genetic alterations are already established in clinical practice, but a lot of them are novel and not reliably tested through clinical trials. The literature mainly consists of small cohorts, retrospective case series, and animal studies. Animal tissue studies were not deemed appropriate for this study and were therefore excluded.
The genes tested through large cohort studies were compared to cancers other than ES-EC and therefore affected by genetic factors which may not be solely impactful in ES-EC. Additionally, the cohort of patients tested in ES-EC is smaller compared to the cohorts used to test genetic mutations in other cancers such as ovarian and colorectal cancer. Follow-up of these patients is often short, and incidence of pregnancy and pregnancy outcomes are inadequately reported. Consequently, FST uncertainty in patients with ES-EC is high and prognostic factors favouring FST for women with Stage IA Grade 1,2 EC (ES-EC) are limited.
Inclusion Criteria
The articles were screened to check that they met the following criteria: early stage, low-grade, endometrial cancer patients, reporting recurrence, metastasis, overall survival, obstetric outcomes, or progression-free survival, reporting prognostic genetic or biomolecular markers, and in the English language. All studies on animals or studies involving ex situ tissues were excluded. This yielded 29 results across the 4 databases. This was reduced to 26 after duplicates were removed. The titles and abstracts were screened by the first 2 authors (P.T. and S.D.), and 20 potentially relevant articles were found. Of the remaining studies, the full manuscripts were reviewed by the first 2 authors to ascertain whether the inclusion criteria were met. If there was disagreement between the first 2 authors regarding a study, the matter was referred to the most senior author (V.T.). Consequently, 18 articles were selected for inclusion in the review. The bibliography of these manuscripts was then independently screened by the first 2 authors, searching for any other potentially relevant studies. Twenty further studies were found by this method, bringing the final total to thirty-eight studies, of which thirty-four were finally unanimously agreed to be included (Figure 3). This systematic review was registered with PROSPERO (CRD42022312003) and is in line with the PRISMA criteria checklist [51]. marker*.mp.'') This yielded 29 results across the 4 databases. This was reduced to 26 after duplicates were removed. The titles and abstracts were screened by the first 2 authors (P.T. and S.D.), and 20 potentially relevant articles were found. Of the remaining studies, the full manuscripts were reviewed by the first 2 authors to ascertain whether the inclusion criteria were met. If there was disagreement between the first 2 authors regarding a study, the matter was referred to the most senior author (V.T.). Consequently, 18 articles were selected for inclusion in the review. The bibliography of these manuscripts was then independently screened by the first 2 authors, searching for any other potentially relevant studies. Twenty further studies were found by this method, bringing the final total to thirty-eight studies, of which thirty-four were finally unanimously agreed to be included ( Figure 3). This systematic review was registered with PROSPERO (CRD42022312003) and is in line with the PRISMA criteria checklist [51].
Data Extraction
Once the studies were selected, the manuscripts were reviewed independently by the first 2 authors. The primary objective was to collect genetic and biomolecular prognostic factors of ES-EC. The secondary outcome was to identify which of these molecular mechanisms had an impact on functional and clinical outcomes at the end of a follow-up period. Outcomes were measured in accordance with recurrence, metastasis, overall survival, progression-free survival, or obstetric outcomes. Patient demographics, the number of tissues tested, genetic identification technique, adverse events, treatment failure, and details of concomitant therapies were also recorded when available. The final 34 articles encompassed 9165 patients (Table 2).
Methodological Quality Assessment
Two authors (P.T. and S.D.) independently assessed the methodological quality of each study using the Critical Appraisal Skills Program (CASP) to increase the rigour of this review. This allowed for a structured approach in assessing the results and their clinical relevance. The following domains were assessed to see whether the criteria were "Clearly met" (+) or "Clearly not met" (−). "Cannot tell" (?) was used to describe cases in which the authors were not able to assess whether criteria were met. If there was disagreement between the authors, then the senior author (V.T.) was consulted, and disagreement was resolved by consensus.
Conclusions
PTEN, PIK3CA, KRAS, FGFR2, CTNNB1, MSI, and ARIDIA mutations are linked to good five-year survival (85%) for EC, and TP53 is labelled as a poor prognostic factor with 55% five-year survival for EC [78]. However, this data is not ES-EC specific. At the reproductive stage, where ES-EC is the most common clinical presentation of EC, the data is still inconsistent and not universally agreed upon. After recurrence rate, risk of metastasis, and mortality were considered; PTEN and POLE alterations were found to be good prognostic factors of ES-EC, favouring FST. MSI, CTNNB1, and K-RAS alterations were found to be fair prognostic factors of ES-EC, favouring FST, but have a higher risk of recurrence. PIK3CA, HER2, ARID1A, P53, L1CAM, and FGFR2 were found to be poor prognostic factors of ES-EC and, therefore, not favouring FST ( Figure 2). However, in the decision-making process, patients' clinicopathological characteristics have to be taken into consideration. Interestingly, currently there are numerous ongoing clinical trials that investigate different types of FST (NCT01594879, NCT03241914, NCT02990728, NCT03463252, NCT03538704, NCT04362046) but there are no current clinical trials focusing on patient treatment selection using genetic prognostic factors. In the future, larger clinical trials and studies with bigger cohorts will be needed to confidently choose FST for the treatment of ES-EC from favourable prognostic factors. | 9,952.6 | 2022-02-28T00:00:00.000 | [
"Biology"
] |
Usformer: A small network for left atrium segmentation of 3D LGE MRI
Left atrial (LA) fibrosis plays a vital role as a mediator in the progression of atrial fibrillation. 3D late gadolinium-enhancement (LGE) MRI has been proven effective in identifying LA fibrosis. Image analysis of 3D LA LGE involves manual segmentation of the LA wall, which is both lengthy and challenging. Automated segmentation poses challenges owing to the diverse intensities in data from various vendors, the limited contrast between LA and surrounding tissues, and the intricate anatomical structures of the LA. Current approaches relying on 3D networks are computationally intensive since 3D LGE MRIs and the networks are large. Regarding this issue, most researchers came up with two-stage methods: initially identifying the LA center using a scaled-down version of the MRIs and subsequently cropping the full-resolution MRIs around the LA center for final segmentation. We propose a lightweight transformer-based 3D architecture, Usformer, designed to precisely segment LA volume in a single stage, eliminating error propagation associated with suboptimal two-stage training. The transposed attention facilitates capturing the global context in large 3D volumes without significant computation requirements. Usformer outperforms the state-of-the-art supervised learning methods in terms of accuracy and speed. First, with the smallest Hausdorff Distance (HD) and Average Symmetric Surface Distance (ASSD), it achieved a dice score of 93.1% and 92.0% in the 2018 Atrial Segmentation Challenge and our local institutional dataset, respectively. Second, the number of parameters and computation complexity are largely reduced by 2.8x and 3.8x, respectively. Moreover, Usformer does not require a large dataset. When only 16 labeled MRI scans are used for training, Usformer achieves a 92.1% dice score in the challenge dataset. The proposed Usformer delineates the boundaries of the LA wall relatively accurately, which may assist in the clinical translation of LA LGE for planning catheter ablation of atrial fibrillation.
Left atrial (LA) fibrosis plays a vital role as a mediator in the progression of atrial fibrillation.3D late gadolinium-enhancement (LGE) MRI has been proven effective in identifying LA fibrosis.Image analysis of 3D LA LGE involves manual segmentation of the LA wall, which is both lengthy and challenging.Automated segmentation poses challenges owing to the diverse intensities in data from various vendors, the limited contrast between LA and surrounding tissues, and the intricate anatomical structures of the LA.Current approaches relying on 3D networks are computationally intensive since 3D LGE MRIs and the networks are large.Regarding this issue, most researchers came up with two-stage methods: initially identifying the LA center using a scaled-down version of the MRIs and subsequently cropping the full-resolution MRIs around the LA center for final segmentation.We propose a lightweight transformer-based 3D architecture, Usformer, designed to precisely segment LA volume in a single stage, eliminating error propagation associated with suboptimal two-stage training.The transposed attention facilitates capturing the global context in large 3D volumes without significant computation requirements.Usformer outperforms the state-of-the-art supervised learning methods in terms of accuracy and speed.First, with the smallest Hausdorff Distance (HD) and Average Symmetric Surface Distance (ASSD), it achieved a dice score of 93.1% and 92.0% in the 2018 Atrial Segmentation Challenge and our local institutional dataset, respectively.Second, the number of parameters and computation complexity are largely reduced by 2.8x and 3.8x, respectively.Moreover, Usformer does not require a large dataset.When only 16 labeled MRI scans are used for training, Usformer achieves a 92.1% dice score in the challenge dataset.The proposed Usformer delineates the boundaries of the LA wall relatively accurately, which may assist in the clinical translation of LA LGE for planning catheter ablation of atrial fibrillation.
Introduction
The development of atrial fibrillation (AF) is strongly linked to the presence of left atrial (LA) fibrosis [1,14].The accurate assessment of LA fibrosis using 3D late gadolinium-enhanced (LGE) MRI is indispensable for informed clinical diagnosis and treatment planning [2,14,20,30].However, the current method of labor-intensive manual segmentation introduces noteworthy variability.Therefore, the pursuit of automatic and highly accurate LA segmentation is of great interest for clinical adoption [12,18,28].However, this endeavor encounters challenges due to the intricate nature of LA shapes, patient-specific variations in shapes and sizes, as well as issues of low contrast and background noise [12,19].
Convolutional neural networks (CNNs) have demonstrated a high level of effectiveness across various applications, like pixel-wise detection of defects with complex and varied shapes [17,24,27,38].The application of CNNs in LA segmentation is also promising.For example, during the 2018 Atrial Segmentation Challenge, 15 CNN-based methods surpassed the performance of the two traditional atlas-based methods by approximately 7% in dice score [44].Among these, the methods based on the U-Net [31] model demonstrated the best performance.As a popular self-configuring UNet-based framework, nnU-Net [10] has also demonstrated great performance in the LA segmentation task [34].The skip connections incorporated into U-Net serve a dual purpose: not only do they recover spatial information for detailed segmentation, but they also effectively address the potential issue of vanishing gradients during training.
Approaches for left atrial (LA) segmentation using CNNs can be categorized into three main types: 2D, single 3D, and two-stage 3D methods, as illustrated in Fig. 1.In 2D approaches, each slice of a 3D scan is segmented independently along the out-of-plane axis, and the outcomes from each slice are aggregated to generate the final 3D prediction [3,35,40,43].For example, GCW-UNet, a 2D U-Net modification developed by Wong et al. [40], obtained a noteworthy dice score of 93.57% in the 2018 Atrial Segmentation Challenge dataset.During the segmentation of individual slices, the model takes in three Gaussian-blurred images, each featuring different degrees of blurring.The inclusion of a channel weight module and Gaussian blurring in GCW-UNet allows for the comprehensive capture of both intricate details and the overall contours of the left atrium (LA).In Bian et al.'s research [3], ResNet [32] was incorporated with dilated convolution and integrated with PSPNet [49].The inclusion of spatial pyramid pooling merged features at various scales, contributing to improved precision in boundary delineation.Despite the computational efficiency of 2D methods, they might neglect the correlation among adjacent slices in a 3D scan, possibly resulting in inaccuracies in boundary delineation.
On the contrary, 3D techniques involve the direct segmentation of the entire 3D LGE MRI, taking into account the correlation among adjacent slices.Nevertheless, current 3D methodologies exhibit inefficiencies related to both time and memory usage, primarily because of the considerable size of 3D scans.In the 2018 Atrial Segmentation Challenge, the 4th-ranking model is a single 3D CNN, proposed by Vesal et al. [37].Their approach involved the use of dilated convolution to expand the receptive field and residual connections to gather features from different layers.However, it's noteworthy that this model is the largest in the challenge, containing 104 million parameters-50 times larger than the smallest one.
In an attempt to alleviate the computational and memory demands, many scholars have shifted their focus to implementing two-stage methodologies [13,42,45].Initially, the center of LA is determined by analyzing a down-scaled representation of the LGE MRIs.Subsequently, a fixed zone encompassing this identified center is extracted as the region of interest (ROI).The subsequent step focuses on the segmentation of the LA within this specified ROI.Training two V-Net-based networks with identical architectures but distinct functions, Xia et al. [42] addressed coarse and fine segmentation of the left atrium (LA).The first network's role is to determine the coordinates of the LA center through coarse segmentation, while the second network, utilized in the subsequent stage, focuses on achieving finer segmentation.Rather than utilizing coarse segmentation, Jamart et al. [13] initially implemented a 2D V-net [25] to regress the coordinates of the LA center.Nonetheless, a challenge arises with two-stage methodologies: training both networks simultaneously is intricate, leading to the potential propagation of errors from the initial network to the second one.
Supervised learning methods mentioned earlier often demand a substantial amount of labeled data.But, in some situations, only a limited amount of densely annotated data is available due to the labor-intensive and time-consuming process of delineating the left atrium (LA) boundary.In such cases, semi-supervised learning (SSL)-based methods have been proposed as an alternative to leverage the abundance of unlabeled data to improve LA segmentation [8,21,23,48].Models in SSL are trained using a combination of limited labeled data and a larger set of unlabeled data.For instance, CA-Net [48] achieves a dice score of 90.09% even when trained with only 16 labeled data and 64 unlabeled data on the 2018 left atrial segmentation challenge.The CA-Net framework incorporates a discriminator that estimates the probability of unlabeled data being treated as labeled data.This mechanism enables the effective utilization of unlabeled data to enhance segmentation performance.However, the accuracy that semi-supervised methods can achieve is much lower than supervised learning methods trained with large-scale datasets, which necessitates a supervised learning method requiring fewer data.Some previous methods struggle to exploit long-range relations among the pixels in the image and 3D volume.To enlarge the receptive field, CNNs need to increase the kernels' size or the depth of the network, which, however, increases the networks' complexity and requires more training data to avoid overfitting.Different from CNNs, transformer architecture obtains long-range relations with the assistance of self-attention mechanism [5,22,41].In the case of medical image segmentation, transformers have been applied to a wide variety of tasks, such as cell instance segmentation [29] or brain tumor segmentation [39] with promising performance.However, they have huge computation complexity and a large number of parameters.UNeXt [36], a UNet-like architecture using shifted multi-layer perceptions, is proposed to reduce the computation burden and prediction time.But when UNeXt is tailored for the LA segmentation task in a 2D or 3D manner, the accuracy is sacrificed due to shifted multi-layer perceptions.
To address the limitations of the aforementioned methods, we introduce Usformer, 1 a small 3D transformer-based model aiming to achieve accurate segmentation of LA in just one stage.Within the upper layers, inter-slice correlations are captured by employing 3D convolutions.In the lower layers, the application of transposed attention allows for the extraction of long-range interactions within 3D volumes, with a marked decrease in computational demands compared to regular attention mechanisms.Usformer is validated in the 2018 atrial segmentation challenge [44] and our local institutional NU dataset.It outperforms the state-of-the-art supervised and semi-supervised methods in accuracy, computation complexity, and robustness.Moreover, Usformer does not require a large-scale dataset.The key contributions of this paper can be outlined as follows: • A postprocessing-free end-to-end network, Usformer, is proposed for accurate left atrium segmentation, which prevents error propagation caused by sub-optimal two-stage training.It has the potential to aid in the clinical translation of 3D LA LGE for planning the ablation of atrial fibrillation.• A transposed attention module is adopted in Usformer to alleviate the computational burden.Although the standard transformer attention module captures the global context, the burden increases quadratically with the size of the 3D input.Thus through transposed attention, Usformer enables capturing the global context and the correlation among the surrounding slices without increasing the complexity of the model as much.• Usformer capability is validated in two datasets: the public Atrial Segmentation Challenge Dataset and our local institutional dataset.In both datasets, Uformer outperforms current state-of-the-art methods.Moreover, we demonstrate that Uformer does not require a large dataset.We train it on only 16 densely labeled samples and show that it outperforms other semi-supervised learning methods in accuracy, computation complexity, and robustness.
Subsequent sections are organized as described below: Section 2 provides in-depth insights into the proposed network, offering information about its architecture, attention mechanism, and loss function.Datasets and implement details are described in Section 3. The outcomes of the experiments and corresponding analyses are presented in Section 4. Concluding remarks and future directions are discussed in Section 5.
Methods
UNet-based methods, while effective for medical image analysis, often struggle to capture global context over the entire image or volume.However, the proposed Usformer addresses this limitation through its transposed attention mechanism.Usformer's architecture and how the transposed attention mechanism works are elaborated upon in this section.A combination of dice loss and binary cross-entropy loss forms the loss function used in Usformer training, which is also described in this section.
The network architecture
Our proposed model, Usformer, is depicted in Fig. 2. Like the classical U-Net architecture, the encoder and decoder networks of Usformer are on the left and right sides, respectively.Extracting high-level features from the input volume, the encoder network steadily reduces the size of the feature maps while the decoder network progressively reconstructs these features to generate segmentation maps at the original size.Spatial accuracy is improved through the incorporation of skip connections, which establish connections between high-level and low-level features.Despite its merits, the U-Net architecture faces limitations such as a limited receptive field and an inability to capture crucial global information, which plays an essential role in semantic segmentation. 1The code is available at https://github .com/HuiLin0220 /Usformer .git.Addressing this constraint involves incorporating transformer blocks into the encoder, allowing them to capture the global context through their self-attention mechanism.Therefore, the Usformer encoder is designed with three convolutional stages, followed by two transformer stages.Within each transformer stage, there is one transformer block, succeeded by a convolutional layer and either a max pooling or upsampling layer.Within each transformer block, a transposed attention module and a feed-forward network are present, with the feed-forward network consisting of fully connected layers that typically include non-linear activation functions for introducing non-linearity.As mentioned in Section 2.2, the computation cost grows with the number of input voxels.Thus, to keep the computational cost of the attention down, transformer blocks are put after three convolutional stages to decrease the input size.This architecture also allows each feature vector to encode higher-level information.
A probability map is generated as the segmentation output, illustrating the likelihood of each pixel belonging to the LA.Pixels exceeding a predetermined threshold probability are classified as part of the LA.Threshold values will be explored in Section 3.2.
Attention mechanism
The yellow box denotes the transposed attention module [47] in the transformer block.Details are displayed in Fig. 3. Through bias-free convolutional layers, Query (), Key (), and Value ( ) are derived from a layer-normalized input of size Ĥ × Ŵ × Ẑ × Ĉ.The dimensions in the X, Y, and Z directions are denoted by Ĥ, Ŵ , Ẑ, respectively, with representing the count of input voxels, equal to Ĥ × Ŵ × Ẑ.Then, the matrix undergoes transposition to maintain the size of the attention map created by and at Ĉ × Ĉ instead of × .Hence, the computation of the output from transposed attention is as follows: In the given equations, , , ∈ ℝ × Ĉ serve as three representations of the input within the transposed attention module.The attention scores obtained from the product ⊺ are transformed into a probability distribution through the softmax function (⋅).
The softmax function takes the raw scores and converts them into probabilities, ensuring that each score becomes a value between 0 and 1, and the entire set of scores sums to 1.The complexity of computing ⊺ is in the order of ( Ĉ2 ).Attention computation in a regular self-attention module [5] follows the equation (, , ) = ( ⊺ ) .With a complexity of ( 2 Ĉ), the computation of ⊺ is considerable.But transposed attention proves to be significantly more computationally efficient, given the constraints Ĉ ≪ and ( Ĉ2 ) ≪ ( 2 Ĉ).
Loss function
Equation (2) serves the purpose of determining the total segmentation loss , achieved through a weighted combination of dice loss and binary cross-entropy loss (BCE), as outlined in [4,6].The BCE cross-entropy loss assigns equal importance to the loss of all pixels.However, the considerable class imbalance between LA and the background hinders the effective contributions of LA pixels to the training process.In contrast, the dice loss , being one of the area-based metrics, remains steady irrespective of the background's size, providing a resolution to the class imbalance problem in the LA segmentation dataset [11].However, relying solely on the dice loss can introduce instability in the training process when the foreground is small, as slight changes can disproportionately impact the dice loss.Thus, it is crucial to include both losses in to ensure a stable and efficient training procedure. is given by: Fig. 3. Transformer attention module, where the matrix K is transposed to significantly decrease computation complexity.The output of the transposed attention is calculated by Equation (1).Ĥ × Ŵ × Ẑ represent the input size, and the variable represents the total number of voxels present in the input, which is calculated as Ĥ × Ŵ × Ẑ, much larger than the channel number Ĉ.The computation complexity of the transposed module is ( 2 Ĉ), much smaller than the conventional module's ( Ĉ2 ).
In this expression, denotes the ground truth, where ∈ {0, 1}, and Ŷ represents the model output, with Ŷ falling in the range of [0, 1].In this context, 0 denotes the background, while 1 denotes the left atrium (LA).The weight of BCE loss, will be explored in Section 3.2.
Experiments
The proposed Usformer is implemented and validated in two datasets, i.e., the commonly used 2018 Atrial Segmentation Challenge dataset [44] and our local institutional NU dataset.Three state-of-the-art supervised learning methods mentioned in Section 1, i.e., the nnU-Net framework [10], UNeXt [36], and TMS-Net [35] are implemented as baselines.The codes of the methods in the 2018 challenge are not publically available, but their published results are compared with Usformer.The 3D dice score, Hausdorff Distance (HD), and Average Symmetric Surface Distance (ASSD) are applied to evaluate the model accuracy.The number of Floating Point Operators (FLOPs) and parameters are applied to evaluate the computation complexity of networks.Moreover, Usformer is also compared with the latest semi-supervised learning methods trained with only a small portion of labeled data.Experimental details and results are discussed in the following sections.
Datasets
Two datasets are utilized to validate the effectiveness of our method: the 2018 Atrial Segmentation Challenge dataset [44] and our local institutional dataset, which are introduced in detail in this subsection.
2018 Atrial Segmentation Challenge Dataset 2 From individuals diagnosed with atrial fibrillation, this dataset contains a total of 154 3D MRI scans.The data were provided by multiple centers but were mostly from The University of Utah.Researchers engaged in the study of LA segmentation commonly utilize this dataset.As listed in Table 1, the image acquisition matrix is 288×288×44 or 320×320×44 pixels with a spatial resolution of 1.25×1.25×2.5 mm 3 and then interpreted by a factor of 2 to 576×576×88 or 2 https://www .cardiacatlas.org/atriaseg2018 -challenge /atria -seg -data/.The manual segmentation of the LA cavity was carried out with consensus by three trained raters for both datasets.This segmentation included structures such as the mitral valve (MV), left atrial appendage (LAA), and pulmonary vein (PV) sleeves.The LA endocardial surface border was meticulously annotated through manual tracing of the PV and LA blood pool.The PV sleeves were limited to a maximum extension of 10 mm from the endocardial surface [26].Although the criterion for manual segmentation remained the same, the tasks were conducted by different individuals utilizing two different software platforms.The inherent potential for inconsistency in the two datasets is unavoidable.Fig. 4 presents example 3D LGE MRIs in both two datasets with manual segmentations denoted in orange.Manual segmentation was carried out for each LGE MRI, performed slice by slice from the axial view (IJ-plane), and the resulting segmentations were assembled in the K direction to generate the 3D LA geometry.The automated segmentation of LGE scans for the left atrium faces the following challenges: (1) Class imbalance emerges because the left atrium constitutes a minor portion of the overall volume.
(2) Indistinct boundaries contribute to the challenge of differentiating the left atrium from neighboring tissues.
(3) Reduced image quality poses a challenge in identifying the left atrium.The evaluation of image quality in the 2018 Atrial Segmentation Challenge [44] through the Signal-to-Noise Ratio (SNR) demonstrated that fewer than 15% of the MRI data met the criteria for high quality.
(4) The intricate structure of the anatomy, including slender and lengthy components such as the mitral valve (MV), left atrial appendage (LAA), and pulmonary vein (PV), is a common source of segmentation errors.
(5) The diverse shapes and sizes observed among patients pose a challenge in creating a generalized model for LA segmentation.(6) Unlike the challenge dataset, the NU dataset presents a greater diversity in imaging orientations and spatial resolution, which is a different challenge.
To mitigate the impact of randomness in the training process, we conducted three random splits for each method.We then calculated the mean and variance of the test results, which are summarized in Tables 2 and 3.By performing multiple splits and reporting the aggregate statistics, we provide a more reliable estimate of the performance of each method and offer insights into the consistency and stability of the results.
Implementation details
3D dice score, Hausdorff Distance (HD), and Average Symmetric Surface Distance (ASSD) [15,33,44] are utilized in our paper for model assessment and comparison.Following the 2018 challenge, the dice score is taken as the main metric, and HD and ASSD provide a more comprehensive evaluation of LA segmentation accuracy.The Dice score evaluates the alignment of the segmented LA with the actual LA, while HD and ASSD assess boundary accuracy and spatial dissimilarity.They are formulated using Equations ( 3), (4), and (5).
Model accuracy is assessed by calculating the average across all scans in the testing set.Moreover, the networks' computational complexity is assessed by considering the number of model parameters and Floating Point Operators (FLOPs). )
𝐷𝑖𝑐𝑒 = 2𝑇 𝑃 2𝑇 𝑃 + 𝐹 𝑁 + 𝐹 𝑃
where and denote surfaces of prediction and ground truth volumes, respectively.p and g are surface voxels in and .(⋅) represents the distance between two voxels. represents the number of voxels in the corresponding volume.
In our work, conducting all experiments, we employed a workstation that housed a single NVIDIA A100-PCI GPU card with a memory capacity of 40 GB, a 2.0 GHZ AMD EPYC 7702P CPU, 503 GB of RAM, and running Linux 3.10.0.The Usformer model was trained using the Stochastic Gradient Descent (SGD) optimizer for a total of 200 epochs.Usformer utilized the cosine annealing learning rate schedule, starting with an initial learning rate of 0.001.The cosine annealing learning rate schedule has a smoother learning rate curve, which can contribute to more stable and reliable convergence during training.
To enhance generalizability and prevent overfitting, data augmentation techniques were implemented.A 50% probability was used to apply data augmentation, which included scaling, rotation, and translation to each IJ plane.The I and J axes experienced random selection of the scaling factor, rotation angle, and translation within the intervals (0.5, 1.5), (−25 • , 25 • ), and (−10, 10) pixels, respectively.Our experimental results demonstrated a 2.1% improvement in the 3D dice score through the application of data augmentation techniques, emphasizing its role in enhancing generalizability and preventing overfitting.
Demonstrating robust performance, our proposed method shows a mere 0.01% difference in the 3D dice score when the threshold (detailed in Section 2.1) is modified within the range of 0.1 to 0.9.The threshold value selected for our work is 0.5.
To determine the weight of BCE loss, , Usformer was trained with varying from the value list [0, 0.1, 0.5, 0.9, 1, 10, 100].The best 3D dice score with Usformer is realized when is 1 in our analysis.Thus, is set to 1 in the following experiments.
Comparative results
State-of-the-art supervised learning-based methods to compare with the proposed Usformer include the top 5 methods [3,9,37,42,45] from the challenge in terms of dice score and the four latest methods unrelated to the challenge [10,15,35,36], as listed in Table 3.Among them, the codes of nnU-Net framework [10], UNeXt [36], and TMS-Net [35] are publicly accessible.Therefore, we implemented these three methods with hyperparameter settings mentioned in their papers on the challenge and NU datasets, listing the results we obtained in Table 2 and Table 3.For a fair comparison, all four models are implemented in the same workstation.Fig. 5 presents a boxplot of their comparative results along with the p-values in the analysis of significant differences.However, it is difficult to replicate the results of the other six methods without public codes.Therefore, we list their published results [15,44] in Table 2.
Table 2
Analysis of the proposed and advanced supervised learning approaches on the challenge dataset's testing set.Displayed in the first four rows are our experiments, while the remaining rows show the results provided by the authors (The codes of rows 5-10 are not publicly accessible).Rows 5-9 are the Top 5 methods in the challenge concerning the 3D dice score, and their results are disclosed in [44].Fig. 5. Significant difference analysis between our proposed Usformer and the other three baselines, i.e., nnU-Net [10], UNeXt [36], and TMS-Net [35] concerning the 3D dice score in both challenge and NU datasets.
Method
By comparing Table 2 and Table 3, it is worth noting that each method exhibits worse performance on the NU dataset, primarily due to the presence of diverse imaging orientations and varying spatial resolutions within the dataset, as listed in Table 1.Despite these challenges, Usformer still achieves promising performance on LA segmentation.
As presented in Table 2 and Table 3, our proposed Usformer outperforms nnU-Net and UNeXt in higher robustness, much fewer parameters, much less number of computation, lower HD and ASSD with similar or higher 3D dice scores.As depicted in Fig. 5, the differences in dice score between Usformer and nnU-Net are not statistically significant ( = 0.95, and = 0.67) on the challenge and NU dataset.Although not statistically significant, Usformer achieves higher accuracy 0.4% than nnU-Net on the NU dataset.Compared to UNeXt, Usformer achieves 2.5% higher 3D dice scores on the challenge dataset ( < 0.001) and 3.5% higher scores on the NU dataset ( < 0.02).Compared to TMS-Net, Usformer achieves 1.9% higher 3D dice scores on the challenge dataset ( < 0.001) and 2.5% higher scores on the NU dataset ( < 0.01).Moreover, Usformer has the lowest standard deviation among all the methods on Fig. 6. Results of LA segmentation in the axial view by Usformer, nnU-Net [10], UNeXt [36], and TMS-Net [35].Cases are randomly selected from the challenge and NU datasets, respectively.Each visualization includes the 2D dice score, denoted in the top left corner.Red and green delineate the contours of manual and predicted segmentation.Arrows highlight regions where Usformer exhibits notably superior performance in comparison to the other two baselines.Viewing this figure in color is advised in the printed edition.both datasets.Two compelling points are that Usformer has a very low number of parameters and computations.In terms of overall parameter count, Usformer (5.8M) is significantly less than nnU-Net (44.7M) and UNeXt (26.5M).The number of Floating Point Operators (FLOPs) is the metric used for assessing the computation.Usformer has the least GFLOPs of 522.9 compared to nnU-Net's 2003.6 and UNeXt's 603.3.We also conducted comparisons of models based on training and prediction times, as listed in Table 2. To enhance accuracy, Usformer makes a slight trade-off in speed when compared to TMS-Net.But Usformer still demonstrates rapid training and prediction capabilities, delivering each prediction within 10 seconds-a quality well-suited for clinical applications.
Displayed in Fig. 6 are randomly selected examples of LA segmentation results for nnU-Net, UNeXt, TMS-Net, and Usformer.The challenge dataset is depicted in the first two rows, while the NU dataset is illustrated in the last two rows.Our proposed approach exhibits a high level of precision in delineating LA segments, not only at small sizes (highlighted by the yellow arrow in the third row), but also with complex shapes (highlighted by the yellow arrow in the last row).Even though the boundary between LA and RA is unclear (highlighted by the yellow arrow in the first row), Usformer delineates the boundary accurately.Furthermore, Our approach surpasses nnU-Net, UNeXt, and TMS-Net, achieving markedly higher 2D dice scores.The segmentation outcomes exhibit significantly improved proximity to the manual segmentation, as highlighted by all the yellow arrows.
Usformer's efficiency is improved with the integration of the transposed attention module, which reduces computational complexity and facilitates the understanding of global information.This guarantees Usformer's potential for efficient, accurate, and robust LA segmentation, as demonstrated in Table 2, Table 3, and Fig. 6.
Notably, as for the first two cases in Fig. 6, Usformer's improvement in 2D dice is larger than the improvement in 3D dice.There are two main reasons.First, a model's performance varies across different individual patients and slices, as shown in Figs. 5 and 6.In some cases, Usformer performs better than other models, and vice versa.Second, the 2D dice score tends to change more than the 3D dice score with the same number of pixels' changes since the denominator of the 2D dice is much smaller.Therefore, both in the 2018 challenge and our work, the average of 3D dice is utilized as one of the metrics of accuracy rather than the average 2D Dice.
Error analysis
For each dataset, three cases are selected from each testing set for 3D and 2D visualization, representing the worst, median, and best performances in terms of the proposed method's 3D dice score, as shown in Figs.7 and 8. Fig. 7 visualizes the surface distance between manual segmentation and prediction by our proposed Usformer.LA segmentation results in the axial perspective are depicted in Fig. 8. Figs.7 and 8 indicate that our proposed method exhibits favorable outcomes in left atrium segmentation, even in the face of substantial variations in the shapes observed across patients.The LA shapes are complex, but the overall predictions demonstrate a smooth and accurate outcome.With respect to the surface distance, the error is small and lies within the tolerance range.The proposed method showcases its proficiency in outlining left atrial (LA) segments in the last two rows of Fig. 8, effectively handling complex shapes and challenges arising from low contrast with the surroundings.
As shown in Figs.7 and 8, the main errors are on the MV (highlighted by arrow (1)) and the PV (highlighted by arrow (2)).The errors in the MV can be attributed to the unclear boundary between LA and LV and the flat shape labeled by observers.As pointed out by arrows (1) in Fig. 8, the mitral valve (MV) identified by the observers as a flat plane was predicted by the proposed method as a circle, resulting in numerous false positives.The area containing errors has poor contrast, and observers may segment the region with significant variability, leading to confusion for the network.The errors observed in the PV are primarily attributed to its elongated, slender, and diverse shapes.Observers might segment the PVs with varying lengths, contributing to confusion for the network.91.3 CA-Net [48] 90.1 UA-MT [46] 88.9 SCC [21] 89.8 SASSNet [16] 89.3 LG-ER-MT [7] 89.6 DTC [23] 89.4 : Best result.: Second-best result.
Dataset scale
Usformer was trained using different amounts of training cases and tested on the same testing set to explore how the amount of training samples affects the performance of LA segmentation.Fig. 9 presents the trends on the challenge and NU datasets.A certain number of cases from the training set was randomly selected for training each time.We repeated each experiment three times and reported the average of three tested results.
Usformer has good performance even though it just used 16 training cases.As shown in Fig. 9, Usformer reached a dice score of 92.1% even though it was trained only with 16 labeled scans in the challenge dataset.Compared with the challenge dataset, Usformer requires a larger scale in dealing with the NU dataset due to its more varied imaging orientations and spacings.To reach a score of 91.6%, Usformer requires 68 labeled scans from the NU dataset.The latest semi-supervised methods only use the training set of the challenge data and divide it into two sets, 80 scans for training and 20 for testing.They were trained on the 16 labeled and 64 unlabeled scans from their training set.For comparison purposes, we trained Usformer using 16 scans randomly selected from their training set and tested on the same testing set they disclosed online.We repeated the experiment three times and took the average of three tested results.As listed in Table 4, it is obvious that Usformer outperforms the latest semi-supervised methods by 1.1% in terms of dice score while only 16 annotations are available for training.
Discussion and conclusions
Accurate segmentation of the left atrium (LA) is crucial for the assessment of LA fibrosis, aiding in informed clinical diagnosis and treatment planning.The study introduces Usformer, a network characterized by its small size, speed, and accuracy in the left atrium (LA) segmentation.Three significant contributions are outlined.Firstly, the implementation of an end-to-end framework eliminates error propagation inherent in two-stage methods.Secondly, the incorporation of transposed attention within Transformer blocks enables the learning of long-range dependencies among voxels in large 3D volumes without bringing a high computational cost.As detailed in Section 1, the 3D CNN-based method proposed by Vesal et al. [37] carries a substantial computational and memory burden, amounting to 18 times the parameter count of our Usformer.Lastly, the reduced complexity of our model allows it to train with a reduced dataset while still achieving promising performance in LA segmentation (see Section 4).
Our method proves effective even in the presence of challenging image quality in LGE MRI, demonstrating promising outcomes on both the NU dataset and the public 2018 Atrial Segmentation Challenge set, achieving average 3D dice scores of 93.1% and 92.0%, respectively.The number of parameters and computation complexity of Usformer, respectively, are reduced by 2.8x and 3.8x over the state-of-the-art nnU-Net.Moreover, Usformer outperforms the latest semi-supervised learning method, CA-Net, by 2.3% in terms of dice score when trained with only 16 labeled MRI scans.
It is unclear how well the presented method can adapt to modalities beyond LGE.To investigate the model's generalization capabilities, we plan to expand our dataset by collecting samples from various modalities, machines, and centers.Moreover, to gain a more complete insight into LA anatomy, future research could investigate the combination of various imaging modalities, including computed tomography or alternative MRI sequences.
To conclude, the proposed small Usformer delineates the left atrium in LGE MRI scans with high accuracy and low computation memory.It introduces a versatile and viable choice, minimizing the expenses associated with manual segmentation.The proposed network is expected to demonstrate success in various segmentation challenges.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1 .
Fig. 1.The proposed Usformer belonging to single 3D methods captures the inter-slice correlation not included in the 2D methods and avoids error propagation introduced in two-stage methods.
Fig. 2 .
Fig. 2.The architecture of Usformer.It is designed for end-to-end left atrium segmentation from 3D LGE MRIs.In the final two stages, the U-Net architecture integrates transformer blocks represented by the orange boxes.The transposed block includes both a transposed attention module (shown in Fig.3) and a feed-forward network made up of fully connected layers. × × represents the size of a 3D LGE scan.All feature maps are 3D volumes instead of 2D images.For additional insights into Usformer, please turn to Section 2.
Fig. 4 .
Fig. 4. Example 3D LGE MRIs in the challenge and NU datasets with manual segmentations denoted in orange.Each slice of the LGE MRI scans underwent manual segmentation, and the resulting results were aggregated to construct a 3D model of the left atrium.Viewing this figure in color is advised in the printed edition for optimal visualization.640×640×88 pixels.The imaging orientation (IJK) is not disclosed on the dataset website.But based on a thorough review of all the slices in the dataset, it appears that the imaging orientation is axial.The challenge's initial training set is randomly divided, with a 4:1 ratio for training and validation.The testing set remains unchanged and corresponds to the original challenge dataset.Our local (Northwestern University [NU]) Dataset This dataset comprises 178 3D MRI scans provided by Northwestern University.As listed in Table 1, the image acquisition matrix is 192×192×52, 192×192×48 or 224×224×52 pixels with varied spatial resolutions, like 0.75×0.75×2.0mm 3 , 1.5×1.5×2.2 mm 3 , etc.The imaging orientation (IJK) of the NU dataset is oblique coronal.The original dataset is randomly split into 114 for training, 29 for validation, and 35 for testing.The manual segmentation of the LA cavity was carried out with consensus by three trained raters for both datasets.This segmentation included structures such as the mitral valve (MV), left atrial appendage (LAA), and pulmonary vein (PV) sleeves.The LA endocardial surface border was meticulously annotated through manual tracing of the PV and LA blood pool.The PV sleeves were limited to a maximum extension of 10 mm from the endocardial surface[26].Although the criterion for manual segmentation remained the same, the tasks were conducted by different individuals utilizing two different software platforms.The inherent potential for inconsistency in the two datasets is unavoidable.Fig.4presents example 3D LGE MRIs in both two datasets with manual segmentations denoted in orange.Manual segmentation was carried out for each LGE MRI, performed slice by slice from the axial view (IJ-plane), and the resulting segmentations were assembled in the K direction to generate the 3D LA geometry.The automated segmentation of LGE scans for the left atrium faces the following challenges:(1) Class imbalance emerges because the left atrium constitutes a minor portion of the overall volume.(2)Indistinct boundaries contribute to the challenge of differentiating the left atrium from neighboring tissues.(3)Reduced image quality poses a challenge in identifying the left atrium.The evaluation of image quality in the 2018 Atrial Segmentation Challenge[44] through the Signal-to-Noise Ratio (SNR) demonstrated that fewer than 15% of the MRI data met the criteria for high quality.(4)The intricate structure of the anatomy, including slender and lengthy components such as the mitral valve (MV), left atrial appendage (LAA), and pulmonary vein (PV), is a common source of segmentation errors.(5)The diverse shapes and sizes observed among patients pose a challenge in creating a generalized model for LA segmentation.(6) Unlike the challenge dataset, the NU dataset presents a greater diversity in imaging orientations and spatial resolution, which is a different challenge.To mitigate the impact of randomness in the training process, we conducted three random splits for each method.We then calculated the mean and variance of the test results, which are summarized in Tables2 and 3.By performing multiple splits and reporting the aggregate statistics, we provide a more reliable estimate of the performance of each method and offer insights into the consistency and stability of the results.
Fig. 7 .
Fig.7.Three-dimensional representation of the best, median, and worst left atrium segmentation implemented by our method regarding the 3D dice score.The first and second columns are from the challenge and NU datasets, respectively.Distance from the manual segmentation to the prediction is indicated by the color of the surface.For improved visualization, the surface distances are rescaled within the range of 0 to 10 mm.Arrows (1) and (2) highlight the errors in MV and PV, respectively.Viewing this figure in color is advised in the printed edition.
Fig. 8 .
Fig. 8. Results of LA segmentation in the axial view by our Usformer on the challenge and NU datasets.The three rows display cases with the worst, median, and best performances by Usformer, as measured by the 3D dice score.Three slices of each example case are presented.The 2D dice score is indicated in the top left corner of each visualization.Red and green delineate the contours of manual and predicted segmentation.Viewing this figure in color is advised in the printed edition.
Fig. 9 .
Fig. 9.The performance of Usformer trained with different numbers of cases in the challenge and NU datasets.
Table 1
Differences of 3DLGE MRIs in the challenge and NU datasets.
/ / : Number of scans used for training/validation; : Hausdorff Distance; : Average Symmetric Surface Distance. : Number of parameters; : Number of floating point operators; : Training time; : Prediction time.: Best results; : Second-best results.
Table 3
Analysis of the proposed and cutting-edge approaches on the NU dataset.
Table 4
Comparative results with the latest semi-supervised learning methods while only 16 annotations are available for training.All methods are tested in the same set.The authors' disclosed results are listed in the table. | 9,255.4 | 2024-03-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Nonlocal generalized quantum measurements of bipartite spin products without maximal entanglement
Measuring a nonlocal observable on a space-like separated quantum system is a resource-hungry and experimentally challenging task. Several theoretical measurement schemes have already been proposed to increase its feasibility, using a shared maximally-entangled ancilla. We present a new approach to this problem, using the language of generalized quantum measurements, to show that it is actually possible to measure a nonlocal spin product observable without necessarily requiring a maximally-entangled ancilla. This approach opens the door to more economical arbitrary-strength nonlocal measurements, with applications ranging from nonlocal weak values to possible new tests of Bell inequalities. The relation between measurement strength and the amount of ancillary entanglement needed is made explicit, bringing a new perspective on the links that tie quantum nonlocality, entanglement and information transmission together.
Introduction
Almost since its inception, the behavior of space-like separated quantum systems has been at the heart of multiple heated controversies around quantum mechanics [1][2][3], as well as the key to some of its most promising technological applications. These include superdense coding [4,5], quantum teleportation [6,7], entanglement swapping [8,9] and device-independent quantum key distribution [10][11][12] among others. All have in common that they rely on the measurement of an operator that contains information about not just one, but several, possibly entangled, quantum particles.
Sometimes, one might be faced with a situation where those different parts are space-like separated and direct interaction between them is not available. The question of whether or not it is possible to measure such multipartite observables instantaneously in this case was first answered in the negative by Landau and Peierls [13] in 1931, on the grounds of locality constraints.
Yet it was proven much later that such nonlocal measurements are in fact possible for certain observables, given adequate resources [14][15][16]. When the different parts are separated, they are made to strongly interact with an additional maximally-entangled state, a precious resource in quantum information [17,18], that is used to carry out the measurement and store the result.
This type of measurement scheme is often referred to as a von Neumann (VN) measurement [19], and the use of a maximally-entangled meter (MEM) state has been shown to solve the problem of achieving complete Bell state measurement [20,21], even in linear-optical systems [22]. The interaction between the system and the meter leading to the final result can then be made instantaneous, even though retrieving said result from the entangled meter requires some finite amount of time, as dictated by special relativity. Such a strong VN measurement of nonlocal variables has already been implemented using hyperentangled photonic quantum systems [23], and has also led to the direct measurement of a nonlocal wavefunction using the modular value formalism [24,25]. Furthermore, if one is ready to part with the VN approach and discard the final state of the system, all nonlocal observables become measurable [26][27][28] via so-called verification measurements and finite entanglement consumption. However VN measurements can be more than just strong (projective) measurements [29], which have been discussed so far. By suitably tailoring the system-meter interaction, as in figure 1, one can manage to only retrieve part of the information about a quantum state, in order to somewhat preserve it [30]. This has been successfully applied to local systems for quantum metrology [31,32], or in quantum foundations when one wishes to limit the effects of the measurement back-action via weak measurements [33,34].
One can naturally wonder if this type of interaction tuning can be extended to the nonlocal case. The quantum erasure scheme, developed by Brodutch and Cohen [35] and recently implemented by Li and al. [36], provides a solution by effectively reproducing a nonlocal arbitrary-strength VN interaction. It also extends the class of measurable nonlocal observables, by inserting a probabilistic element that prevents running afoul of causality.
This comes at a price however: on top of a MEM, an extra local meter is necessary to store the result, thus making this method difficult to implement experimentally, as the simplest case indeed requires a total of five distinct qubits. In this Paper, we present a simpler method that can be used to measure nonlocal spin products, yielding the same post-measurement state evolution and statistics as the quantum erasure method, while using less resources, with only four qubits necessary in total.
Our approach presents a complementary point of view to the problem of nonlocal measurements that relies on the language of generalized quantum measurements [29,37] applied to spin product observables. We prove that in this particular case, it is possible to reproduce the behavior of an arbitrary-strength nonlocal measurement using a non-maximally-entangled meter (NMEM), a weaker resource than what was needed in previous schemes. In particular, we show that the optimal amount of meter entanglement necessary is directly related to the desired measurement strength, and that excessive entanglement may on the contrary degrade the purity of the post-measurement system state. One can then achieve a nonlocal weak measurement with only a limited amount of ancillary entanglement, which greatly increases experimental feasibility, notably for linear-optical implementations.
The structure of this paper is as follows. In section 2, we review one-qubit generalized measurements, which constitute the starting point for the later extension to the nonlocal case. We describe in section 3 the main result of this paper, namely how to measure a spin product observable on two qubits using an NMEM state. We then compare it to the quantum erasure scheme in section 4. In section 5, we study a possible alternative to the above using a maximally-entangled state and its impact on the post-measurement state of the system. In section 6, we draw from the previous sections to establish a relation between measurement strength and ancillary entanglement in the two-qubit case. Finally, we conclude in section 7 by exposing the advantages and applications of this approach, as well as possible extensions.
Generalized measurement of a single qubit
A VN generalized quantum measurement consists of an interaction between two quantum states, respectively called the system S, initially in the state |ψ S , and a property of which we wish to measure; and the meter M, prepared in a known initial state, which we will use to measure S. The interaction is followed by a projective measurement on M, in order to read out the result. By designing an appropriate tunable interaction between the system and the meter, one can actually carry out measurements of different strengths, with much more flexibility than what is allowed by projective measurements.
Several such useful interactions have been proposed in the past for the measurement of single qubits (see [38] for instance). We here focus on the one described in [39], that can be used to measure the system spin σ z , and which is represented in figure 1. It consists in a local rotation applied to M in order to obtain the following meter state: followed by a controlled-NOT (CNOT) gate between the meter and the system. After, the result is retrieved via a projective measurement of σ M z on M, the corresponding positive-operator valued measure (POVM) effects for the whole process are given by where designates the identity operator.
Computing the statistics associated to this POVM reveals that the S ≡ cos(2θ) factor acts as the measurement strength, with S = 1 corresponding to a strong measurement (perfect meter-system correlation) and S = 0 corresponding to no measurement at all (no correlations): This generalized measurement scheme for one qubit has the advantage of being implementable using linear optics for polarization qubits [40] and has been used to test experimentally Ozawa's error-disturbance relations [41][42][43] as well as to measure weak values [44].
Generalized spin product measurement via a non-maximally entangled meter
We consider a bipartite qubit system where a pair of qubits is distributed between Alice (A) and Bob (B). For clarity, this pair of qubits is initially assumed to be in a pure (possibly entangled) state |Ψ S . Our goal is to answer the following: is it possible to extend the generalized measurement process of section 2 described by the POVM in equation (2) to the case of a two-qubit observable? Namely, we will now attempt to extend our measurement of σ z to a measurement of the product observable σ z A σ z B . It has already been shown that one can carry a projective measurement of σ z A σ z B by using a MEM, e.g. the Bell state Φ + M [22][23][24]. Following such previous approaches that established maximally-entangled qubit pairs (ebits) as the standard resource for nonlocal quantum protocols, one may try to start with a nonlocal meter initialized in the state Φ + M . A straightforward generalization of the process described in section 2 would for instance consist in transforming this initial nonlocal meter state Φ + M into a superposition of eigenstates associated with different outcomes, analogous to the one in equation (1): where Φ + M and Ψ + M are the usual maximally-entangled Bell states, corresponding to global measurement outcomes +1 and −1 respectively, α = π 4 − θ and |± ≡ 1 √ 2 |0 ± |1 . However, interpreting equation (3c) as the Schmidt decomposition [37] for the state |Φ M shows that such a state is in general not maximally-entangled, hence not accessible from a initially prepared Bell state via local unitaries [45].
The state |Φ M can however be easily obtained from the state Φ + M via some non-unitary operation that would discard unwanted amplitudes, in a fashion similar to a filter, in order to achieve the desired imbalance between the Schmidt coefficients of equation (3c).
Restricting ourselves to unitary operations, one can extract the state (3a) from a Bell state probabilistically with a 50% success rate, or deterministically using a classical communication channel between Alice and Bob as guaranteed by Nielsen's majorization theorem [46]. An example of such a possible implementation will be presented in section 4.
In general, if one has an entangled qubit pair with known Schmidt coefficients λ 0 and λ 1 , one can obtain the corresponding meter state starting from the Schmidt basis and applying a Hadamard gate H on each side. Let us note that from the point of view of entanglement resource theory, non-maximally-entangled states are less costly than Bell states, and may be prepared directly without any need for a prior Bell state.
Description of the measurement scheme
Let us now assume that the NMEM state |Φ M has been successfully prepared for some θ between 0 and π 4 . Alice and Bob can now proceed to couple their qubits with the meter via local CNOT gates, as depicted in figure 2, before each (projectively) measuring their meter qubit. For each of the four possible local outcomes, the final system state is given by the following measurement operators, where the measurement operator M ij is by definition the operator which when applied to the system (and not the meter), yields the final system state when the measurement result is (i, j): where Π ij is the projector on |ij , i.e. Π ij = |ij ij|. From the four different local outcomes, the global outcomes are computed classically by allowing Alice and Bob to share their results. Considering only the global outcomes and discarding any remaining local information, the evolution can be described by two different quantum operations, one for each result (see figure 3). The unnormalized post-measurement states of the system are given by the action of the following Figure 4. Comparison between the quantum circuit representations of the quantum erasure scheme (a) proposed in reference [35] and this paper's approach (b) in the case of a spin product measurement. To make the comparison fair, we place ourselves under the same constraints as reference [35], where Alice and Bob are not allowed to communicate. (a) The quantum erasure method uses a MEM, post-selection (PS) and an additional erasure step. (b) The entanglement reduction method: starting from a MEM, one first needs to reduce the entanglement using an additional local qubit before proceeding with the measurement process. Comparing this approach with (a) shows how the two measurement schemes are complementary in this particular case.
superoperators on the initial density matrix ρ = |ψ ψ|: These operations form the quantum instrument I [47,48], which fully encapsulates the measurement process as it provides a complete description of both post-measurement states and measurement statistics, as we will see below.
The POVM effects can be obtained directly from the quantum instrument I, via the relation E r = I * r [ ], where * designates the superoperator adjoint, obtained by taking the adjoints of the measurement operators M ij . This yields: Substituting with the expressions for the measurement operators (4), the POVM can be rewritten in the following more compact way: This is the desired nonlocal generalization of the POVM of equation (2), which yields the statistics expected from a genuine nonlocal measurement.
Moreover, we have M ++ = M −− and M +− = M −+ , hence for a given global result, the evolution of the system does not depend on the local results. This allows us to rewrite the state evolution (5) in terms of two effective measurement operators, one for each global result: These operators only involve projectors on the two-dimensional eignespaces of the observable being measured, as is to be expected in the case of a degenerate observable, first studied by Luders [49]. All eigenstates thus remain unchanged by the measurement and this process is not entanglement-breaking, which are characteristics of an ideal nonlocal measurement. This is the core result of this paper: it is possible implement a nonlocal measurement of a spin product using only a meter state that need not be maximally-entangled. This is in sharp contrast with other nonlocal VN measurement schemes developed so far [15,35].
Comparison with the quantum erasure method
The method we have just presented is deterministic, once the two parties are allowed to communicate. However if no communication between Alice and Bob is permitted whatsoever, Alice can still teleport her local result to Bob by post-selecting her part of the meter onto a known state. For causality reasons, this can only succeed with probability 50%. The result is then encoded in a single local meter on Bob's side. In this section, we will consider the case where the two parties share a previously prepared MEM and are not allowed to communicate, for comparison purposes with the protocol developed by Brodutch et al [35], namely the quantum erasure method. This is in no way the most efficient way of implementing our new scheme, as one could just directly prepare an NMEM without the need for an initial MEM.
The quantum erasure method consists of four steps (see figure 4(a)): first, a strong coupling between Alice's and Bob's systems and their shared MEM; followed by a post-selection on Alice's part of the MEM to teleport her result to Bob. Then, Bob realizes a weak coupling between his remaining part of the MEM and an additional local meter. Finally, Bob needs to erase the excess information contained in the MEM by projecting his part on the unbiased state |+ M B .
In our scheme, Alice (or Bob) first implement transformation (3a) to reduce the entanglement of the meter, using for instance an additional ancillary local state (see figure 4(b)). They subsequently proceed to strongly couple their systems with the resulting meter state. The result can finally be teleported from one side to the other by post-selecting one part of the meter on a known state, say |0 M A .
We thus show an example of a weak measurement without weak coupling [50]: the weak coupling is replaced by a suitably prepared meter, in our case an NMEM. The reduced entanglement guarantees that no excess information is stored in the meter, which makes the erasure step unnecessary.
Generalized spin product measurement via a MEM
Before further discussing our results, it is interesting to study what might happen if we try to realize a nonlocal generalized measurement directly using a MEM, for instance the state Φ + . Instead of trying to transform it into the state (3a), let us consider the meter state resulting from two local rotations implemented on Alice's and Bob's sides, of angles θ 1 and θ 2 respectively, as shown on figure 5.
We obtain (up to a global phase) the following state: with θ def = θ 2 − θ 1 . As expected, this is different from the state (3b); this will have consequences on the post-measurement system state.
If Alice and Bob locally couple their meter qubits to their system qubits via CNOT gates and locally measure their meters (see figure 5), the corresponding measurement operators are: (cos(θ) (Π 01 + Π 10 ) + sin(θ) (Π 00 − Π 11 )) (10b) Using equation (6), we obtain the same POVM as in section 3: (15) is represented as a straight line, and corresponds to an efficient measurement with zero classical noise.
However in this case, since M ++ = M −− and M +− = M −+ , we see that a same global result can lead to two different state evolutions. Indeed, some knowledge about the local state of the system can be retrieved from the phase information in the final state. Ignoring the individual outcomes (coarse-graining) thus adds classical noise to the system: the post-measurement state is in general mixed even if the initial state of the system was pure. Such a measurement process is sometimes labeled as an inefficient quantum measurement [29].
The amount of classical noise introduced by the coarse-graining can be evaluated via the difference in purity between the initial and the final states Δγ. It is found to be maximal when the initial state is an equal (in modulus) superposition of states associated with different global results, for instance |+ A |+ B .
In this case, the purity degradation Δγ (going from an initially pure state γ = 1 to a mixed state γ < 1) can be related to the measurement strength S: We see that for a strong measurement (S = 1) the system purity is unaffected, whereas for a weak measurement (S → 0), the system purity tends to 1 2 .
Generalization and discussion
We saw previously that for a nonlocal generalized measurement to be efficient, i.e. without added classical noise, the entanglement of the meter state need to be adjusted in accordance with the desired measurement strength. Hereafter, we shall use the concurrence [51] as our main measure of entanglement, defined as follows for a pure two-qubit state: where λ 0 and λ 1 are the Schmidt coefficients. The concurrence is a well-studied measure of entanglement that can be evaluated experimentally, using two copies of the state for instance [52]. As was shown in section 3, for a nonlocal measurement to be efficient, the meter state should be such that coefficients associated to same global outputs are equal, as in equation (3b): It turns out that in this case, the resulting measurement strength S is directly equal to the concurrence C of the meter state: Let us now turn to the case when, as in section 5, the entanglement C contained in the meter state is higher than the desired measurement strength S. It is then impossible to generate an ideal meter state, but one can still obtain the desired strength by applying appropriate local unitaries in order to prepare the following state: 1 √ 2 cos θ |00 + e iφ sin θ |01 + sin θ |10 + cos θ |11 .
This is a generalized form of equation (9). The resulting phase φ is linked to the meter entanglement C and the measurement strength S by the relation: The ideal case of section 3 and the case of section 5 are recovered by setting φ = 0 and φ = π, respectively. The excess entanglement manifests itself through the added phase φ, which in turn is responsible for the purity degradation of the post-measurement system state. As in section 5, this additional classical noise is maximal when the system being measured is initially in the state |+ A |+ B . The purity degradation can then be written as: We recover the efficient measurement case (Δγ = 0) by setting φ = 0 and the extreme noisy case of section 5 (Δγ = 1 2 ) by setting φ = π. One can combine relations (17) and (18) to numerically evaluate the noise, as represented in figure 6. We see that in order to make a measurement of strength S, one needs at least an amount of entanglement equal to S. A consequence of this fact is that a nonlocal strong measurement can only be achieved using Bell states. We also notice that the noise increases non-linearly as the measurement strength deviates from the meter entanglement.
Conclusions
In this paper, we discussed a new approach to measure nonlocal spin products, using the formalism of generalized quantum measurements. We found that one can achieve an efficient genuine nonlocal generalized measurement using an NMEM state. In particular, we established relations between the desired measurement strength and the necessary entanglement for the measurement to be efficient, that is to say without any additional classical noise. The effect of excessive entanglement was evaluated and found to be detrimental to the purity of the post-measurement state, but not to the overall measurement statistics. Another advantage of this new measurement scheme is that it does not require any quantum erasure step after the interaction. This approach is thus remarkably resource-efficient compared to other already existing schemes [25,35] and does not involve probabilistic steps.
The method proposed here is also feasible using linear optics, using hyperentangled photon pairs for instance [23,24], so that 2 photons suffice to realize all necessary 4 qubits and one does not need to implement CNOT gates between different photons. Non-maximally-entangled states can be generated with current technology with high purity, using spontaneous parametric-down conversion for instance [53]. It is also possible to distribute with high fidelity entangled photons over several dozens kilometers [54], in order to guarantee the nonlocality aspect of the experiment.
For clarity purposes, we focused our attention on the measurement of the spin product σ z A ⊗ σ z B , but the proposed scheme can be easily adapted to measure any nonlocal spin product by applying appropriate one-qubit gates. Spin product measurement is a special case of nonlocal measurement as it is one of the few that can be directly measured in the VN paradigm without violating causality. Measuring spin products is crucial in tests of quantum nonlocality, such as testing Bell inequalities. Measuring a spin product as been shown to be equivalent to measuring a modular sum, a relatively easier task. The question of whether or not our approach can be extended to more general observables remains open. Another question of interest is the generalization to the multipartite case, where there are unequivalent types of entanglement, which is the subject of a separate work [55].
A promising application for this scheme resides in the measurement of weak values [3,33] in a nonlocal setting, which can be obtained directly as the weak limit of postselected conditioned averages [56]. Measuring nonlocal observables is also important in quantum error correction [57] and variable measurement strength could be useful quantum computing without strong measurements [58]. | 5,483.8 | 2021-03-12T00:00:00.000 | [
"Physics"
] |
Analysis of the Temporal and Spatial Evolution of Recovery and Degradation Processes in Vegetated Areas Using a Time Series of Landsat TM Images ( 1986-2011 ) : Central Region of Chihuahua , Mexico
This paper analyzed the temporal and spatial evolution of vegetation dynamics in various land covers in the basin of the Laguna Bustillos, Region of Cuauhtémoc, Chihuahua, Mexico. We used an NDVI time series for the months of March to April (early spring). The series was constructed from Landsat TM images for the period 1986-2011. The results show an increase of NDVI for vegetated areas, especially in conifer cover, while shrub and grassland showed a positive trend but with lower statistical significance. The increase in minimum temperatures in early spring, during the study period, was the most important factor in explaining the increase of NDVI in vegetated areas. A spatially distributed analysis shows large areas without an NDVI trend, corresponding to areas with sparse vegetation cover (degraded areas). Moreover, there are also areas with a negative trend (loss of vegetation), explained by the exploitation of trees to produce firewood which is mainly carried out by the ejidos in the region. These results help to focus human and financial resources in places where the benefit will be greatest.
Introduction
Vegetation dynamics play an important role in the evaluation of environmental processes due to the close relationship between the biosphere and global environmental parameters, which involves, among other things, the concentration of atmospheric CO 2 (Zeng et al., 1999.), the influence of vegetation in the local water cycle (Beguería et al., 2003.), landscape structure and diversity (Olsson et al., 2000.), and erosion and sediment transport processes (Alatorre & Beguería, 2009;Alatorre, et al., 2011).
Systematic evaluations of vegetation activity in northwestern Mexico are scarce, although there are some works based on remote sensing.For example, Salinas-Zavala et al., (2002) studied the macro-regional effect of El Niño (ENSO) on indicators such as the Normalized Difference Vegetation Index (NDVI), and provided information for understanding the interannual variability of processes such as the increase in vegetation activity at large geographic scales.Later authors such as Franklin et al. (2006), Romo (2006), and Bravo and Castellanos (2013) used the same index to monitor Aerial Primary Production (APP) in natural areas and areas used for livestock, discriminating the effect of human activities and of the natural cycles of vegetation in this area of the country.
The results of these previous studies are restricted to very specific regions and periods (Central and Coastal Sonora in periods of one year), or to the use of satellite images (AVHRR) whose spatial resolution does not allow detailed studies.This causes significant research gaps in the region, making it impossible to: i) assess the changes in vegetation cover in recent decades; ii) detect regionaltrends in vegetation biomass; iii) study the changes in foliar activity of forest regions; iv) analyze how climatic variables (temperature and/or precipitation) and spatial patterns control aridity; v) determine the anthropogenic effect of land use; and vi) study the temporal and spatial variations of vegetation dynamics in degraded areas (gullies and erosion risk areas) where vegetation is sparse.In this regard, the objectives of this study were: i) analyze the temporal evolution of vegetation activity in vegetated areaslocated in the mountains and foothills that surround the study area using a homogenized time series of Landsat TM images (1986-2011); ii) determine which climatic variables control vegetation activity and define statistically significant temporal trends; and iii) analyze the spatial distribution exhibited by temporal trends of vegetation activity as an indicator of recovery or degradation of the vegetation cover, and quantify the effects of various topographic factors on these trends.
Study Area
The study area is located in the Bustillos lagoon basin,between 28˚13'19'' and 28˚59'35''N,and 106˚34'39'' and 107˚10'33''W (Figure 1(A)), with a total area of 3288 km 2 .The basin is closed irregularly by the mountains of Pedernales, San Juan, Salitrera, Chuchupate, Sierra Azul and Rebote, so the only contribution of water is from rain.The basin has an average elevation of 2000 m, and is surrounded on the north, east, west and southwest by a set of peaks averaging 2400 m, with some peaks reaching up to 2887 m (Figure 1(B)).The National Water Commission (CONAGUA, 2010) indicates that the basin weather has an annual average rainfallof 415.7 mm, with a semi-arid, temperate climate, and minimum and maximum average temperatures of 14.6˚C and 38˚C across the year, respectively (Figure 1(C)).
The valley bottomis mainly occupied by Phaeozem soils (Figure 1(D)), characterized by a marked accumulation of organic matter in the upper soil, which makes them fertile soils able to support a variety of crops and pastures.There are also Vertisols in the region, which are characterized by alternating between swelling and shrinking of the clays.This type of soils become hard in the dry season and plastic in the wet, making tilling very difficult except in the short transition periods between the two seasons.In general, with good management, these are very productive soils.Luvisols have great potential for a large number of crops because they are moderately weathered soils with a high degree of saturation.Leptosols predominate in the mountains and foothills.These soils are characterized by low depth (less than 30 cm) and their high content of gravel.They are unsuitable for cultivation, with very limited potential for tree crops or grass.It is best to keep them under forest cover, since their high susceptibility to erosion makes it necessary to control their use.
Finally, the information available about land tenure shows that the area is dominated by private ownership (Figure 1(E)), mainly the valley, which has the best conditions for the development of agricultural activities, while ejido properties are located in the mountains and foothills, which, due to their physiographic features, do not allow for a more intensive exploitation.
Selection and Preparation of the Database
To obtain a map of land cover and use, we used a Landsat TM scene (spatial resolution of 30 m) taken on October 2010, a month of the year in which there is a lower frequency of cloud cover and when perennial and seasonal crops are at their maximum development, occupying 100% of the cultivated area, which prevents confusion between bare soil and crop areas, or between irrigated meadows and natural grasslands, all of them occupying a large area of the surface under study.The image was geometrically corrected using ground control points (e.g., road intersections, airport runway intersections, bends in rivers features and the like) and the algorithm developed by Palá and Pons (1996) with the software Miramon, which takes into account the topographical distortion by incorporating a DTM.
For the classification of the Landsat TM (October 2010) scene, the atmospheric effect on the electromagnetic signal was corrected using the radiative transfer model 6S (Vermote et al., 1997), that included external atmospheric information.Subsequently, the effect introduced by the lighting conditions was corrected to compensate for differences caused by uneven ground, for which we used an anisotropic or non-Lambertian topographic reflectance model, which provides greater robustness than Lambertian models (Riano et al., 2003).
For the NDVI time serieswe built a homogenized series of Landsat TM images for the period 1986-2011.The database includes 18 images, corresponding to the beginning of spring (March-April).The methods of geometric correction and topographic illumination correctionapplied to each of the images in the time series were the same as were applied to the image of October 2010.However, to homogenize the images of this time series we used the ATMOSC module of the Idrisi Kilimanjaro software and the Cos (t) model, which is an improvement over the Dark Object Subtraction model (DOS) (Eastman et al., 2004).The Cos (t) model proposed by Chavez (1988) uses dark object subtraction to correct the haze effect and includes an estimate of transmittance, which represents the absorption by atmospheric gases and Rayleigh scattering.The DOS model assumes the atmospheric transmittance is 1.0 and spectral diffuse sky irradiance is 0.0 and the path radiance due to haze is estimated by specifying the digital number (DN) of objects that should have a reflectance or brightness of each band near of zero (e.g., deep clear lakes); however, transmittance (T) is calculated with the cosine of the zenith angle of the sun (90 -solar elevation).The data required for this method are: i) date and time of image acquisition; ii) solar and satellite angles; iii) the values of gain and offsetand the mean wavelength of the band that must be corrected.The gain and offsetvalues were incorporated into the units mW•cm −2 •sr −1 •um −1 (milliWatts per square centimeter per steradian per micron).To check the units yielded by applying a gain and offset, we selected one of the bands in the visible to near-infrared range.Then it multiplies the highest possible image value (e.g., 255 for a byte image) by the gain and then add the offset.If the value that results is in the range of 10 -30, the units are mW•cm −2 •sr −1 •um −1 , which is correct for use.However, if the units are a factor of 10 higher (e.g., in the hundreds), the units are W•sr −1 •um −1 .In this case, must shift the decimal place to the left by one digit for both the gain and the offset.
This time series of Landsat TM images was used to identify vegetation dynamics, as well as their temporal and spatial patterns, in the vegetated areas located in the mountains and foothills that surround the study area.Table 1 shows the date of each of the images used for the time series.
Definition of Thematic Categories and Training Areas
An important objective was to define ground truth pointsto define training areas of the Landsat TM image (October 2010), with maximum spectral heterogeneity, that represent the thematic categories present in the study area.Although the main objective of this study focuses on the analysis of vegetation dynamics as a function of vegetation activity in vegetated areas located in mountains and foothills, for theclassification algorithm it was necessary to establish a priori classesthat adequately represented the variability of land cover types presentin the entire study area.This is because the algorithm of maximum likelihoodconsiders not only the average characteristics of the spectral signatureof each class, but also the covariance among classes, allowing for amore precise discrimination.
Aerial orthophotos were used in the establishmentof thematic classes in the scene,with the help of 250 ground truth points, and also in the selection of trainingareas for each thematic class.The degree of discrimination between categories was determined by the spectral signature of each of the categories and a contingency matrix generated by ERDAS 8.7 Software from spectral reflectance bands.Finally, NDVI values (October 2010) (rescaled to values between 0 and 1, where 0 corresponds to values of −1 and 1 to values of (1) were incorporated as an additional band; this was necessary to distinguish more robustly the spectral signature of each of the land covers and uses.
After verifying the adequacy of the training sample, we applied the maximum likelihood method for classification.To validate the resulting classification, we calculated sensitivity and specificity statistics from a confusion matrix (Alatorre & Beguería, 2009), with the help of 500 independent random points which were classified by interpretation of aerial orthophotos for verification.
NDVI Time Series (1986-2011)
NDVI time series were obtained from the homogenized series of Landsat TM images (1986-2011) with the purpose of analyzing vegetation dynamics as a function of vegetation activity in vegetated areas located in the mountains and foothills.The NDVI was calculated as (Rouse et al., 1974): where IR ρ is the reflectance in the near infrared region of the electromagnetic spectrum and R ρ is the reflectance in the red region.NDVI is a measure of the photosynthetic capacity of the plants (Ruimy et al., 1994) and of leaf resistance to water vapor transfer (Tucker & Sellers, 1986).However, some studies have shown a strong correlation of NDVI with the fraction of photosynthetically active radiation, vegetation biomass, green cover and leaf area index (e.g.Tucker, 1979;Tucker et al., 1981;Sellers, 1985).Thus, high NDVI values are indicative of high vegetation activity.
To analyze the effects of climate on vegetation activity, we obtained a database from the National Meteorological Service (SMN) of Mexico; specifically, we asked data from the National Bank of Climate Data, which has historical records of the national climatological network (5000 stations), in some cases from the late last century to the present.The information used was obtained from the station in Cuauhtémoc, Chihuahua, Mexico (SMN Key: 8026), containing daily data of precipitation and daily maximum/minimum temperature, with standardized data from 1942 to 2010.The time series of total precipitation and maximum/minimum average temperatures were calculated from the normalized daily series, and in this case only the precipitation daily values were summed and the temperatures were averaged for the period immediately preceding the date of each image, thus, climatic series were calculated for the following periods prior to the date of the image: 15 days, 30 days, three months (January, February and March for March images; February, March and April for April images) and 6 months (October to March and November to April, respectively).
The topographic variables were also analyzed to assess their effect on vegetation activity (Figure 2), for this, we used a DTM with a resolution of 30m from the Mexican Continuum of Elevations (CME) which was made by the National Institute of Statistics and Geography (INEGI) and made available for download in (http://www.inegi.org.mx/geo/content/datosrelieve/continental/continuoelevaciones.aspx).The slope (%) and orientation (aspect) were obtained with the DTM (Figure 2(A) and Figure 2(B)); some studies have shown the importance of these factors to explain the recovery rates of vegetation (Pueyo & Beguería, 2007).We also derived from the DTM the vertical dissection or potential for dissection of the geocomplex (Figure 2(C)); this map illustrates the categories of the types of relief according to the morphometric classification by levels of vertical dissection (Priego et al., 2010) derived of a DTM.Vertical dissection determines several of the features of landscape structure; on the one hand: i) the distribution of some of its components (e.g., the distribution of temperature, precipitation, somehow vegetation and partially soils and other surface materials); on the other ii) determines its capacity of association as a spatio-temporal organization (Priego et al., 2010).
Influence of the Temporal Trends of Climatic Factors on the Temporal Variation of NDVI (1986-2011)
The existence or not of statistically significant temporal trends of NDVI has been used to detect processes of increase or decrease of vegetation activity.However, vegetation activity (and hencethe NDVI) can be affected by a number of natural factors, mostly climatic, which also undergo temporal variations.Therefore, when analyzing temporal series of NDVI it is important to account for the variance explained by those factors, in order to isolate the NDVI trends attributable to vegetation dynamics.Statistical tests based on the dependent variable alone such as the Man-Kendall's trend detection time series of the NDVI do not identify the driving factors involved.As a consequence it is difficult to determine the existence of trends apart from those driving factors.Therefore, in order to determine the existence of temporal trends in the time series of mean NDVI values for each land cover class we performed a multivariate regression analysis against time (the year of acquisition ofthe images) and a set of climatic and astronomical covariates.
As a preliminary step we undertook a correlation analysis to determine the most appropriate time span for the climatologic time series.For the early spring images (March-April) wefound that the climatological series computed for the three months prior to the images had the greatest correlation with the NDVI time series.
As the date of acquisition of the image did not coincide year to year, which could have affected the NDVI (especially in this period of year, which is very close to the start of growing period), we also included as a covariate the Julian day of the image as a covariate.To check for temporal trends of the NDVI values that were not explained by the variability of climatic factors and the date of acquisition (Julian day) of the images, we also included the year of acquisition of the images as a covariate.We used a backward stepwise procedure based on theAkaike's information criterion (AIC) statistic, as implemented inthe function stepAIC in the MASS library of the R package forstatistical analysis (Venables & Ripley, 2002).This function preserves for analysis only those variables that significantly explain the temporal evolution of NDVI for the different types of land cover, while the variables that do not contribute to explain NDVI values are rejected.The analysis of the results is based on: i) the goodness of fit and the degree of statistical significance of the regressions; ii) the selection of the explanatory variables; and iii) the value of the beta coefficient (standardized) to classify the variables according to their relative importance in explaining the NDVI.The presence of temporal trends in observed NDVI values that cannot be attributed to climatic or astronomical (Julian day) factors corres-pond to revegetation or degradation processes, which are correlated with the covariate "year" (and sign).
The Role of Topographic Factors and of the Spatial Distribution of Temporal Trends of NDVI (1986-2011) in Vegetated Areas
The analysis described in the previous section allowed us to determine the existence of statistically significant temporal trends in observed mean NDVI values for each land cover, and the relevance of climatic and astronomical factors.However, no spatial discrimination was made between areas with positive or negative trends.Thus, the next step was to repeat the multivariate analysis pixel by pixel, using the same set of climatic and astronomical (Julian day) covariates.This made it possible to obtain a map of the temporal trends of NDVI that are not explained by the covariates and, thereby, to observe the areas subject to degradation (negative trend) or recovery (positive trend) processes.
Finally, with the help of the map of the spatial distribution of the temporal trends of NDVI over the study area, we analyzed the degree of control exercised by topographic variables over trends: i) slope (%); ii) slope orientation; and iii) vertical dissection, all derived from a DTM (Figure 2).
Selection of Categories and Training Areas
Land cover variability of the study area comprised eight classes: human settlements (urban and rural), agriculture, apple orchards, bare soil (poor vegetation cover 0% -30%), water bodies, grassland with scattered shrubs, shrub, and coniferous forest.The training sample was used to obtain spectral signatures foreach thematic class (Figure 3).The bare soil category was characterized by high values of brightness in all bands and greater spectral variability, and due to the absence of vegetation cover, NDVI values were the lowest, characteristics common to areas of bare soil.The categories of vegetation showed a typical spectral signature, with high values of reflectance in the bands of the infrared region (TM-4 and TM-5) and a sharp decrease towards the thermal region.In general, the spectral information shows a good discrimination between vegetation units.The inclusion of NDVI values in the spectral signatures undoubtedly helped us to make a better discrimination between categories of vegetation (Figure 3).
The contingency matrix obtained by applying the classification algorithm of maximum likelihood to the training sample indicates that all categories have success rates higher than 80% (Table 2).In the case of coniferous forests, they were confounded with the shrub category 16% of the time, mainly because these units are located in areas of transition between the two categories.This result would indicate that the resulting classification is very consistent.
Thematic Classification of the Images
Once the spectral separability of the different thematic units of the study area was validated, we proceeded to apply the method of maximum likelihood classification to obtain the land cover and use map (Figure 4).The use of an independent set of randomly selected pixels allowed for validating the classification model, which was very good (87.64%overall accuracy) (Table 3).The biggest confusion occurred with the category of apple orchards and bodies of water, with an error of commission of 10.53% and 0.00%, and an error of omission of 22.73% and 23.80%, respectively.The category of coniferous forest was the best discriminated from the rest of the units, with an error of commission of only 2.7%.However, if the NDVI values as another spectral band to build the spectral signature of each unit would have not been included, confusion would have been higher.The confusion between apple orchards and bodies of water was higher only when spectral information of the bands was used, which caused an overestimation of the area occupied by apple orchards, large areas of forests were confused as apple orchards, even slight confusion with some agricultural areas.There were also confusion between the categories of water bodies and bare soil when only the spectral information of the bands was used, this caused by the high turbidity of the bodies of water.
The spatial distribution of the area occupied by each category is showed in the Table 3.We observed that the areas occupied by the categories of human and agricultural settlements (urban and rural) and apple orchards are located in the bottom of the basin, where soils and topography are more suitable for human activities (see section on the study area).The bare soil category is found mainly in the transition zones between the valley and the mountains, particularly in the foothills, where intense erosion processes have resulted in continuous gully formations.The spatial distribution of the vegetation categories of grassland with scattered shrubs, and shrub and coniferous forest, suggest a gradual transition from the foothills to the highest parts of the mountains that surround the Laguna Bustillos basin (Figure 4).
Influence of Climate on the Temporal Trends of NDVI (1986-2011) in Vegetated Areas
The time series of the mean NDVI values showed a clear difference between the different land covers and useslocated in the mountains and foothills that surround the study area, with the coniferous forest category showing the highest mean values, followed by the categories of shrub, grassland with scattered shrubs and, finally, the bare soil category with the lowest mean values (Figure 5 and Figure 6).This progressive transition between the mean NDVI values of each of the categories is related to the spatial distribution of the different land covers (Figure 4 and Figure 5), where the lowest values are observed in foothills (bare soil and grassland with scattered shrubs), and the highest values in the higher parts of the mountain ranges that surround the study area (coniferous forest).
Preliminary visual inspection revealed remarkable differences in NDVI trends (Figure 6) related to the different land cover and use categories.In general, it appears that the NDVI values of vegetated areas show a positive trend, which is more evident for the category of coniferous forest.In contrast, the bare soil category shows Table 3. Confusion matrix between thematic categories (proportion and number of total pixels).Categories: agriculture (A); bodies of water (BW); coniferous forest (CF); apple orchard; human settlements (HS); grassland with scattered shrubs (G + SS); shrub (S); and, bare soil (BS).a dominant negative trend.Accordingly, foothill areas can be considered degraded areas (gullies and erosion risk areas) due to sparse and incipient vegetation cover.
The multivariate regression analysis showed a good fit for the observed NDVI values, which showed very good significance for all land covers and uses (Table 4).The best model fit was obtained for the vegetation categories, particularly for the shrub category, and the worst fit was obtained for the bare soil category.
Moreover, we identified climatic variables that significantly control the temporal trends of NDVI, suggesting that the climatic conditions recorded in the past 26 years are important for explaining the evolution of vegetation activity.Average minimum temperature was the most significant explanatory factor, as shown by the values of the beta coefficients (standardized) (Table 4).The effect of minimum temperature was positive in all cases, reflecting the temporal importance of a warmer climate in late winter and early spring when plants start their growth period.This hypothesis is supported by the fact that coniferous forests showed a much lower influence from minimum temperatures than the other vegetation categories, as might be expected due to the perennial nature of needle leaves.The maximum temperature also had a positive effect on NDVI trends, although less than the minimum temperature.In contrast, accumulated rainfall did not have an effect on the temporal trends of NDVI for vegetation categories, indicating that the availability of water in the study area has no control on vegetation activity in early spring.Contrary to what was observed for vegetation categories, the bare soil category showed no correlation with any of the climatic factors.
Moreover, the acquisition date of the image (Julian day) was significant for all vegetation categories, but not for bare soil, demonstrating the relevance of the phenological state of vegetation at this time of year and the impor- tance of including this variable in studies where the NDVI is used as a measure of vegetation activity during critical periods of growth (Alatorre et al., 2010).In this case, the acquisition date of the image presented a negative correlation with the temporal trend of NDVI recorded for vegetation categories, which means that in an earlier period higher NDVI values were registered than in later period.Early spring phenology is one of the vegetation traits that show higher response to climate (Badeck et al., 2004).These results suggest that the early spring phenology of vegetation may have high temperature sensitivity in an increasingly warmer climate.
Once the evolution of the temporal trends of NDVI was explained by climatic and astronomical factors, it was possible to determine the existence of trends in the time series of NDVI (time variable, Table 4).Positive temporal trends of NDVI were found for vegetation categories, while a negative trend was found for the bare soil category.The strongest trend corresponded to the shrub category, with an increase of 7.21%, while for bare soil it was −3.56%, for the period 1986-2011.
The results obtained from regression analysis (Table 3) allowed us to make a more comprehensive interpretation of the patterns observed in the temporal trends of NDVI (Figure 6).The apparent upward trend in NDVI for vegetation categories could be explained by a similar trend in minimum and maximum temperatures.Only in the case of the bare soil category the multivariate regression analysis indicated a negative trend that is not explained by climate variables, suggesting that in these areas there is a decrease of vegetation activity as a result of degradation or loss of vegetation in the last decades, possibly related to intensive erosion processes.
These results are closely related to the changes of vegetation dynamics observed in other parts of the world.In the Western Spanish Pyrenees, Vicente-Serrano et al. ( 2004) found a positive temporal trend in NDVI values for forests and areas with good vegetation cover, which is associated with an increase in mean annual temperature, patterns of land abandonment and natural revegetation processes (Lasanta et al., 2007;Hill et al., 2008).In the Central Spanish Pyrenees, Alatorre et al. (2011) studied the vegetation dynamics in areas with good vegetation cover, with erosion risk and with active erosion; the results showed that the increase in temperature and changes in cloud cover during the study period were the most important factors in explaining the evolution of the mean values of NDVI (1984NDVI ( -2007) ) observed in areas with good vegetation cover.A positive temporal trend in the mean values of NDVI for vegetation categories was found also by the present study, and it was shown that the temporal trends of minimum and maximum temperatures in the three months prior to the date of acquisition of the Landsat TM images exerted an opposite influence on the NDVI.
The fact that accumulated precipitation has no significant effect on the evolution of the mean values of NDVI could be explained by the fact that the availability of water in the study area is not a limiting factor for vegetation growth, which receives an average of about 415.7 mm•year −1 , especially in summer, with 18% (75 mm) of the total annual rainfall occurring in winterperiod in which evaporation is lower, so water availability is not affected in early spring.In the study area the vegetated areas are generally distributed in relatively wet areas, in the mountains and foothills that surround the study area, where the role of precipitation may be relatively less important than that of temperature in the timing of the greenness onset.Shen et al. (2014), demonstrated in their study as earlierseason vegetation has greater temperature sensitivity of spring phenology in northern hemisphere, where the role ofprecipitation may be relatively less important than that oftemperature, especially in forest areas.
Finally, significant negative trend was identified in the present study in bare soil category.Given the high intensity of land use in the region, these trends in the NDVI are easily attributable to human activity in the foothills of the study area, which has caused a degradation of the vegetation cover in the last decades, insomuch that currently these areas are considered bare soilwith an incipient vegetation cover.Regarding the latter possibility, studies of smaller areas have found similar temporal trends in the mean values of NDVI for vegetation, but human impact has been included as one of the explanatory factors.For example, Fuller (1998) analyzed seven years of mean NDVI values for Senegal during the period 1987-1993, and observed spatial differences in the temporal trends of NDVI with respect to agricultural and pasture management practices.Pelkey et al. (2000) reported that the creation of natural protected areas in Tanzania favored the growth of vegetation cover and biomass in areas with more intensive land use.The presence of a residual trend in NDVI values after accounting for climatic influence is regarded as evidence that other factors, such as human land use, affect the vegetation cover.
Influence of Topographic Factors and of the Spatial Distribution of NDVI Temporal Trends (1986-2011) on Vegetated Areas
The negative temporal trend in NDVI for the bare soil category could not be explained by climatic factors, suggesting the presence of degradation processes in some sectors of the study area.This possibility prompted a detailed assessment of the spatial distribution of temporal trends of NDVI to determine the presence of areas of degradation.
Figure 7 shows the spatial distribution of the negative and positive trends of NDVI in vegetated areas.There are large areas with positive temporal trends, which were explained by climatic factors (see section 4.3); however, it is also possible to see areas with negative trends, which can be associated with degradation processes of the vegetation cover.Moreover, the areas with no NDVI trend (stable) are areas that, due to their state of degradation, have not shown vegetation activity in recent decades .
The pixel by pixel determination of the temporal trends of NDVI allowed us to assess the occurrence of recovery and degradation processes of the vegetation cover in each of the land cover and use categories (Figure 8).In general, it can be seen that positive NDVI trends were present in all the categories analyzed.These results confirm that the coniferous forest category had greater recovery, followed, in descending order, by the categories of shrub, grassland with scattered shrubs and, finally, bare soil.Moreover, there were also negative trends in all categories, but they were more evident in the bare soil category, affecting 18% of it.
Finally, using the map of the spatial distribution of temporal trends of NDVI (Figure 7), we analyzed the degree of control exercised by topographic variables on the temporal trends of NDVI (Figure 9).The frequency distribution of topographic slopes (%) clearly shows that degradation processes are more pronounced in slopes of less than 10˚, while recovery processes become more significant as the degree of inclination increases.As for the orientation of the slopes, degradation processes are observed in areas with little insolation during the year (Northeast and East), while recovery processes have a greater presence on the slopes with opposite orientation to those with degradation (Southwest and West).Vertical dissection, in turn, helped establish that degradation processes have greater presence in foothill areas, while recovery processes are mainly located in the rugged areas of the mountains.These results demonstrate that degradation processes are more pronounced in more accessible areas (slopes less than 10˚), which can be explained by the exploitation of trees to produce wood that is mainly carried out by the ejidos of the region (Figure 1(E)).Moreover, the fact that degradation processes have a greater presence on slopes with little insolation while recovery processes have a greater presence on slopes with greater insolation, is in line with other studies (Lasanta et al., 2000;Vicente-Serrano et al., 2004;Lasanta & Vicente-Serrano, 2006;Pueyo & beguería, 2007).Finally, according to vertical dissection, degradation processes occur most frequently in foothills, which are characterized by being transition zones between different land occupations and where most human activities are performed.
Conclusions
This work shows the usefulness of remote sensing and GIS techniques for basic and applied geo-environmental research at basin and regional scales (areas of study between 10 and 10,000 km 2 ).The use of supervised classification techniques by the maximum likelihood method from an a priori set of categories (covers) allowed us to obtain a land cover and use map of the study area with a confidence level of 84%.The correct selection of training areas and the inclusion of NDVI allowed us to locate areas for each of the categories (replicas), resulting in a maximum variability of their spectral signatures.
The homogenized series of NDVI for the months of March-April (early spring) allowed us to analyze the spatial and temporal dynamics of vegetation activity in vegetated areas (coniferous forest, shrub and grassland with scattered shrubs) and in degraded areas (bare soil) with sparse vegetation.The results were spatially coherent and NDVI patterns were clear, coinciding with the spatial distribution of vegetation cover and land use.In summary, this study demonstrated that a significant increase in vegetation activity took place in a representative mountainous area of the central region of Chihuahua, Mexico, over the past 26 years, in early spring, which is largely explained by the temporal variation experienced by the minimum temperature in the study period .Coniferous forest and shrub are the categories that showed the greatest increase in vegetation activity, while the increase of activity in grassland with scattered shrubs has been moderate.Moreover, the active erosion and extreme environmental conditions present in the bare soils which are located mainly in foothills restricted the recovery process of vegetation during this time period.Finally, it is clear that a DTM is a useful tool for a first approach to perform a morphological exploration of the spatial distribution of temporal trends of NDVI and of recovery and degradation processes of vegetation.The slope and vertical dissection were the factors that best discriminated between recovery and degradation processes of the vegetation cover, showing that in the study period the loss of vegetation occurred more frequently in areas with lower slope and in foothills.The methodology and the results obtained in this research are a powerful tool for institutions that have responsibility for implementing environmental and land use management plans for mitigating the degradation processes of the areas vegetated, principally forests, allowing prevention efforts to be concentrated in places where the benefit will be greatest.
Figure 3 .
Figure 3. Brightness curves (spectral signature) for each of the thematic categories in the six Landsat TM bands plus NDVI values (rescaled to values from 0 to 1, where 0 corresponds to values of −1 and 1 to values of (1).
Figure 4 .
Figure 4. Land cover and use map obtained by the method of supervised classification by the maximum likelihood method.
Figure 5 .
Figure5.Map of the vegetation categories located in mountains and foothills for the analysis of the temporal evolution of the mean values of NDVI for each of the categories.The delimitation of such areas was carried out by means of vertical dissection, i.e., we selected areas corresponding from slightly dissected hills to heavily dissected mountains, and the categories of agriculture, apple orchards, water bodies and human settlements were removed.
Figure 6 .
Figure 6.Temporal evolution of the mean values of NDVI among vegetation categories and the temporal evolution of climatic variables.
Figure 8 .
Figure 8. Frequency analysis (histogram) of the temporal trends of NDVI on the different categories of vegetation present in the mountain and foothill areas.
Table 1 .
Data from Landsat 5 TM images of the study area used to identify vegetation dynamics as a function of vegetation activity in the period 1986-2011.
Table 2 .
Contingency matrix of the classification applied to the training sample (proportion and number of total pixels).Categories: agriculture (A); bodies of water (BW); coniferous forest (CF); apple orchard; human settlements (HS); grassland with scattered shrubs (G + SS); shrub (S); and, bare soil (BS).
Table 4 .
Multivariate regression analysis of the observed NDVI values for each land cover and use.
Note: The variables excluded by the stepwise procedure based on AIC (Akaike's information criterion statistic) appear with "−".* Level of significance α = 0. | 8,555.4 | 2015-01-26T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Building a Digital Learning Hub: Moodle-Based E-Learning for Sekolah Penggerak in Kolaka Utara
Moodle serves as a valuable platform for creating online courses, training, and internet-based education while supporting essential e-Learning content distribution standards, specifically referring to SCORM. This research pursues three primary objectives: (1) the development of Moodle-based e-Learning in SMPN 4 Kolaka Utara, (2) the development of a MOODLE-based e-Learning content package adhering to the SCORM standard, and (3) the comprehensive evaluation of the Moodle-based e-Learning development process. Adopting the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) approach, the outcome of this research is the establishment of an innovative E-learning platform, named mesikolah.com, accessible online via https://mesikolah.com/. These courses meet the stringent SCORM standardization criteria, including Accessibility, Adaptability, Affordability, Durability, Interoperability, and Reusability. Formative evaluation results demonstrate a strong alignment with the chosen research approach, ADDIE, while the summative evaluation reveals positive user feedback, with an impressive 80% satisfaction rate on the e-Learning platform. Although this research focuses on the e-Learning development stage and content package preparation, further investigations are warranted, such as classroom action research. Notably, the current content package predominantly comprises text and flash presentations, necessitating future development of diverse media types, such as video on demand or video streaming.
INTRODUCTION
The Covid-19 pandemic, which has affected the world for the past 3 years, has significantly impacted the education sector, leading to a decline in students' conventionally through limited face-to-face meetings. To enhance the learning experience and align with the spirit of Merdeka Belajar (Freedom of Learning), it becomes imperative to develop a website-based LMS for the school. The proposed LMS system will provide various functionalities, such as uploading learning materials, creating quizzes and questions, establishing discussion forums, and managing students' progress. By integrating face-to-face and online learning through LMS, a blended learning system will be formed. This approach combines the benefits of traditional face-to-face teaching with the virtual classroom experience, enriching the overall learning environment (Llerena-Izquierdo, 2022). The introduction of online learning through this innovative LMS platform is expected to significantly increase the effectiveness of learning at SMP Negeri 4 Kolaka Utara. Students will have access to a more diverse range of learning resources and opportunities for interactive engagement, promoting a more dynamic and flexible learning process. With the integration of technology in education, students can learn at their own pace, engage in collaborative discussions, and receive personalized support from teachers (Gal & Israel-Fishelson, 2020). Moreover, teachers can effectively track students' progress, provide timely feedback, and adapt their teaching strategies to suit individual learning styles.
Based on the description provided, the porpose of this study is to develop an LMS platform to support face-to-face learning at SMPN 4 Kolaka Utara. The outcomes of this research will aid SMPN 4 Kolaka Utara in their school digitization efforts, particularly in terms of technology-based learning management. This will empower teachers to enhance the overall quality of learning, especially during the challenging times of the Covid-19 Pandemic and the post-pandemic era, leading to more effective and engaging learning experiences.
RESEARCH METHOD
The type of research conducted is research and development, employing the ADDIE model as the chosen development model. ADDIE is a systematic learning design model that was selected due to its systematic and theory-based approach to e-learning design (Urh et al., 2015). This structured and sequential model is well-suited to address the specific needs and characteristics of both teachers and students in the e-Learning development process. The ADDIE model consists of five steps, which are as follows: (1) analyze, (2) design, (3) development, (4) implementation, and (5) evaluation. Each step in the model serves a crucial purpose in systematically addressing and resolving e-Learning development-related challenges (Sa'adah, 2021).
In this study, data was collected using observation and survey techniques. The survey employed questionnaires to evaluate and gather user feedback from teachers at SMP Negeri 4 Kolaka Utara. The sampling method utilized was simple random sampling, ensuring that each member of the population had an equal chance of being selected as a sample (Sa'adah, 2021). The questionnaire employed a 4-point response format from a Likert scale, offering the following alternative responses: Strongly Agree (SA), Agree (A), Disagree (D), and Strongly Disagree (SD). The Likert scale score was determined in advance. The Likert scale is a widely used method for measuring responses and opinions in surveys. It allows participants to express their level of agreement or disagreement with specific statements, enabling researchers to gain valuable insights into participants' perspectives and perceptions. By using simple random sampling and the Likert scale, the study ensures a fair representation of teachers' opinions at SMP Negeri 4 Kolaka Utara and facilitates a systematic approach to data collection and analysis. This methodology enhances the reliability and validity of the findings, enabling researchers to draw meaningful conclusions and make informed recommendations based on the feedback obtained from the survey.
The indicators utilized for evaluating the e-learning to be developed are presented in Table 1. Table 1.
Graphic design 3
Navigation 2
Individual impact 3
Data analysis was conducted to determine the evaluation results. In this study, statistical aspects were not extensively examined, so the data was analyzed using a descriptive percentage system. Descriptive data was presented in percentage form, following the formula: P = n x 100 & N Description: P = Percentage sub variable n = Value of each sub variable N = Max score After obtaining the percentage for each sub-variable indicator, the next step involved analyzing the data by referring to the range percentage chart and the system criteria to determine the corresponding criteria.
RESULT AND DISCUSSION
The e-Learning portal has been created based on an analysis of the current condition of human resources, infrastructure, and school conditions. The development of the e-Learning portal utilized MOODLE version 3.8.+4, which supports SCORM format. Before being installed on the hosting server, a prototype installation was conducted to ensure that the portal would function correctly. The prototype installation took place on a local server using the XAMPP 1.6.7 package with PHP 5.2.6, Apache 2.2.9, MySQL 5.0.51, and Win XP SP2 OS. The e-Learning portal was named "mesikolah.com," derived from the local language of the Tolaki-Mekongga, the local language of the area where SMP Negeri 4 Kolaka Utara school is located. The word "mesikolah" signifies "to go to school." By considering the specific needs and context of the school, the development of mesikolah.com aimed to create an effective and userfriendly e-Learning platform. Its implementation will support the school's educational objectives and enhance the learning experience for both teachers and students (Dhika, et al., 2020) The display design of mesikolah.com has been carefully modified to be as interesting and user-friendly as possible, ensuring that users feel comfortable while using the e-Learning portal. On the home page, menus and blocks were International Journal of Education, Social Studies, And Management (IJESSM) Volume 3, Issue 2, June 2023 Page 90-105 created to facilitate users' access to the e-Learning portal. Picture 1 depicts the appearance of the home page of the e-Learning portal.
Portal of Mesikolah
Before accessing the contents of the e-Learning portal, every user must log in to the system to gain access to the provided menus. This user login serves the purpose of managing user access rights based on the authority set by the Administrator. To log in, users can use the login menu on the front page. Additionally, users have the option to choose between Indonesian and English as their preferred language. The process of adding users can only be performed manually by the Administrator to ensure proper control over user management. This allows the Administrator to maintain oversight and control over user accounts and access privileges effectively (Shurygin & Sabirova, 2017).
The user permission access in the system is explained in the following table: Table 3.
User Access Distribution User Description
Guests Guest users have access to reading and can be considered as observers.
Students
Student users are participants in college and can access all of the lecture materials according to the courses they have taken.
Teacher
This user is an educator staff that become the administrator.
Administrator
A user with the highest access and authority in the system.
The preparation of the course content package was organized according to the guidelines for creating the Content Aggregation Package. To ensure ease of accessibility and usability for both teachers and students, the subject content packages were divided into two parts, namely learning module and course presentation.
Learning Module
The Learning Module was divided into two Content Aggregation Packages, namely Summaries and Subject Modules. The Summaries were further divided into 10 materials, while the Subject Modules were divided into 7 Chapters. All assets in this Content Aggregation Package were in text files in PDF format, enabling students to access them directly online and save them as self-study materials. Picture 2 displays the Content Aggregation Package of the learning module.
Course Presentation
The Subject Presentation was arranged based on the number of meetings for each subject. The preparation of the learning content packages followed the school semester program, ensuring that each meeting had its own dedicated Content Aggregation Package. Picture 3 illustrates the Content Aggregation Package of the Subject Presentation.
Picture 3. Content Aggregation Package of the Subject Presentation
In the course syllabus category, the course format follows a topic format with the Satuan course syllabus.pdf file. In the learning module category, there are two SCORM Content Aggregation Packages that use the course with a topic format as well. To simplify the college maintenance process, the presentation format was set up using a weekly format. The weekly format was chosen because it allows for setting specific starting and ending dates for each material. During each meeting, learners have the option to either view the presentation online through the e-Learning portal using a browser or download the material in PowerPoint format for independent study. This flexibility enables students to access the course materials based on their preferences and learning pace.
Data Evaluation Analysis
An online questionnaire is administered to users for evaluation after using E-Learning. The analysis of the questionnaire's scores for each examined indicator can be observed in the following tables. Tables 4 to 9 provide the score analysis of the online questionnaire for evaluating different aspects of user experience with the E-Learning platform. The scores are categorized based on the indicators and question items, and the percentages indicate the level of satisfaction for each category.
Categories GOOD GOOD GOOD
This table shows the score analysis for indicators such as "Interested to use" and "Want to use." The overall user satisfaction percentage is 82%, indicating a "GOOD" level of satisfaction. The table presents the score analysis for indicators like "Easy to learn" and "Mistake frequency." The overall usability percentage is 76%, signifying a "GOOD" level of usability. This table displays the score analysis for the "Graphic Design" indicator. The overall graphic design satisfaction percentage is 81%, indicating a "GOOD" level of satisfaction.
100
The table provides the score analysis for the "Navigation" indicator. The overall navigation satisfaction percentage is 75%, signifying an "ENOUGH" level of satisfaction. This table presents the score analysis for the "Content" indicator, with individual scores for question items 1, 2, 3, and 8. The overall content satisfaction percentage is 82%, indicating a "GOOD" level of satisfaction. This table displays the score analysis for indicators like "Motivation," "Problem Solving," and "Technology Responsivity." The overall individual impact satisfaction percentage is 82%, signifying a "GOOD" level of satisfaction.
Discussion This research has resulted in the creation of an e-Learning portal named Mesikolah.com, built using MOODLE LMS, and accessible online at https://mesikolah.com/. The development of e-Learning content packages followed the SCORM standardization guidelines. All prepared learning packages were then compiled into Content Aggregation Packaging and uploaded into the MOODLE LMS. To assist students in accessing the materials effectively, appropriate categories and subcategories were created in the lecture management section of MOODLE. The lecture format in MOODLE was tailored to match the type of material created. The Learning Module was designed with a topic format to allow students to access the material at any time that suits them (Sahidu et al., 2020). On the other hand, the Subject Presentation adopted a Weekly format, facilitating lecturers in delivering coherent material each week, which can be followed by students according to theirweekly meetings. All learning materials can be saved and used as study references at home. Through the e-Learning portal, Mesikolah.com, students can benefit from a well-structured and accessible platform, enhancing their learning experience and promoting self-directed learning. Lecturers can efficiently manage course materials and deliver content in an organized and cohesive manner, promoting effective teaching and learning processes (Yawan, 2022). Overall, the e-Learning portal represents an essential tool in modern education, empowering both students and educators to engage in a dynamic and interactive learning environment.
The content creation process in E-Learning content packages adhered to the SCORM standardization. After organizing all learning packages, they were compiled into Content Aggregation Packaging and uploaded into MOODLE LMS. Properly categorized and subcategorized lectures in MOODLE's lecture management feature aid students in following the material effectively. The class format in MOODLE was tailored to match the type of content created. The learning module utilized a topic format, allowing students to access materials at their convenience (Oproiu, 2015). On the other hand, the course presentation adopted a weekly format, making it easier for lecturers to sequentially share material presentations each week, and students can follow the sequence accordingly . All learning materials can be downloaded and used for self-study purposes. The availability of downloadable materials enhances students' learning experience, allowing them to review and engage with the content even beyond class sessions (Halil, Nasruddin, Sejati, & Sugiarto, 2023). Through this systematic approach to content creation and delivery, the e-Learning platform promotes effective teaching and learning processes, empowering both students and educators to access and engage with educational materials in a flexible and convenient manner (Benta, et al., 2014;Chang, 2016).
The E-Learning package's results have qualified as a learning content package created according to the SCORM standardization, meeting the following criteria. The E-Learning portals are easily accessible online and equipped with search engines, allowing users to find every component in the Content Aggregation Package effortlessly. The sequential arrangement of materials ensures users can follow the content seamlessly. The materials in the Content Aggregation Package have been customized to align with the accepted curriculum and correspond with the course syllabus. The package demonstrates efficiency and productivity, especially when used on a large scale. Once produced, the Content Aggregation Package can be reused without the need for reproduction, leading to cost and time savings in material redevelopment. Any development can be easily followed by repackaging, eliminating the need to create from scratch, configure, or undergo a time-consuming re-storage process. The SCORM Repository serves as a medium to store and retrieve all available Content Aggregation Packages, making them accessible and reusable on various LMS platforms with different tools and platforms. Moreover, the SCORM Repository is accessible to anyone, facilitating the use of SCORM 2004 format on other compatible LMS systems. In addition, the customized hierarchical arrangement provides easy access to learning materials and allows for seamless additions without altering the context. The material arrangement can be modified as needed without affecting the existing content, ensuring ease of reuse and adaptability.
The data from the questionnaire provided valuable feedback from users and served as a formative evaluation for the development of this E-Learning model. The analysis of the questionnaire in Table 4 revealed a high user satisfaction percentage of 82%, indicating that students generally showed excitement and interest in the E-Learning system. Usability, which assesses the ease of access to materials, is a crucial aspect in determining the effectiveness of an E-Learning package. The analysis in Table 5 showed a good usability score of 76%, indicating that students found it easy to access and study the materials, and the frequency of errors generated by the system was low. Graphic design plays a significant role as it influences users' visual satisfaction. Table 6 indicated that the graphic design of the E-Learning portal falls within the good category, with 81% satisfaction, implying that users were content and found the platform easy to navigate as a learning tool. However, in the navigation aspect 6, it scored 75% with the label "ENOUGH." This suggests that the navigation system of the MOODLE LMS might be considered complex by users. Overall, the formative evaluation based on the questionnaire data demonstrated positive outcomes for user satisfaction, usability, and graphic design, while highlighting the need for potential improvements in the navigation aspect to enhance user experience further.
The learning materials must be tailored to the course syllabus. The analysis of the questionnaire in Table 7 reveals that 82% of respondents rated the content as meeting the "GOOD" criteria. They found that the material provided by this E-Learning model aligns well with the course study, is comprehensive and well-structured, and proved to be beneficial in understanding the architectural concepts of the computer systems course. Table 8 presents an analysis of the Individual Impact factor. In this category, the percentage is 82%, indicating that users (students) were motivated to study the materials provided through this E-Learning model. Additionally, the model significantly helped them solve problems they encountered during face-to-face classes (83%). Moreover, using this E-Learning model, students became more aware of technological developments in their daily lives (87%). Overall, the formative evaluation through the questionnaire demonstrates the positive impact of the E-Learning model on students' motivation, problem-solving skills, and awareness of technological advancements (Saputra, Halil, Sukariasih, & Erniwati, 2022). It also confirms that the content aligns well with the curriculum and proves to be a valuable and helpful resource for students in their studies.
CONCLUSION
The development of e-Learning has resulted in an accessible online portal named "Masikolah," accessible through the address http://masikolah.com. The e-Learning content package successfully adheres to SCORM standardization, meeting the criteria of Accessibility, Adaptability, Affordability, Durability, Interoperability, and Reusability. The formative evaluation affirms that the research aligns well with the chosen approach, the ADDIE model. Meanwhile, the summative evaluation indicates that users' feedback for this e-Learning platform meets the "GOOD" criteria, with a satisfaction percentage of 80%. In conclusion, the implementation of this e-Learning portal, Masikolah, represents a significant advancement in educational technology, ensuring easy access, adaptability, and cost-effectiveness, while providing a durable and reusable platform for educational content. The positive feedback from users further underscores the success of this development, encouraging its continued use and potential for further enhancement in the future. | 4,303.6 | 2023-08-29T00:00:00.000 | [
"Education",
"Computer Science"
] |
The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model.
BACKGROUND
We aimed to apply collaborative business intelligence (BI) system to hospital supply, processing and distribution (SPD) logistics management model.
METHODS
We searched Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (Published from 2011 to 2016), books, Web pages, etc., to understand SPD and BI related theories and recent research status. For the application of collaborative BI technology in the hospital SPD logistics management model, we realized this by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process.
RESULTS
For the application of BI system, we: (i) proposed a layered structure of collaborative BI system for intelligent management in hospital logistics; (ii) built data warehouse for the collaborative BI system; (iii) improved data mining techniques such as supporting vector machines (SVM) and swarm intelligence firefly algorithm to solve key problems in hospital logistics collaborative BI system; (iv) researched the collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management.
CONCLUSION
Proper combination of SPD model and BI system will improve the management of logistics in the hospitals. The successful implementation of the study requires: (i) to innovate and improve the traditional SPD model and make appropriate implement plans and schedules for the application of BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal departments in hospital including the department of information, logistics, nursing, medical and financial; (iii) timely response of external suppliers.
Introduction
Hospital logistics is one of the application fields for logistics. It mainly covers the procurement, storage, distribution, usage and the control of medical equipment, drugs and supplies in hospital. Since 1950s, scholars have paid attention to hospital material management, especially in the inventory management of medical materials. With the development of supply chain and logistics related theories, the hospital logistics attracted extensive attention, and many scholars started to systematically explore the hospital logistics (1)(2)(3).
Their primarily focus was on the development of management models in the field of procurement, inventory and distribution. In recent years, several models have been proposed by scholars in terms of procurement. Lapierre et al. (4) designed procurement decisionmaking method of hospital materials with the tabu search algorithm. By analyzing the collaborative supply model between hospitals and medical suppliers, Centobelli et al. (5) proposed a more streamlined procurement strategy of medi-cal supplies based on e-business. With the help of fuzzy failure mode and effect analysis (FMEA), Kumru et al. (6) improved the procurement procedures and methods. When it comes to the inventory management, Bijvank et al. (7) tried to optimize the hospital inventory by designing and using the capacity model and ability model. Shan et al. (8) employed the method of greedy algorithm to deal with the multi-level inventory problems in the hospital. Other researchers (9)(10)(11), proposed selecting the optimal distribution model with the methods of process modeling and mixed linear programming modeling based on the graph theory. In some developed countries, several hospitals have adopted a new type of logistics management model, which is called SPD (supply, processing and distribution) model (12). The SPD model contains three links in the hospital logistics supply chain including supply, processing and distribution. In 1960s, Gordon A. Friesen, firstly put forward the idea of "hospital logistics management and supply integration". The original purposes of the idea were to realize the integration management of procurement, inventory, distribution and consumption with the support of informationization. With the recent development in this field, SPD model has been partly applied to hospitals' daily operation and management in several countries including Japan and China (12)(13)(14)(15). In Japan, the SPD model has realized the unified management of hospital logistics procedures such as procurement, usage, recovery and distribution of medical products (e.g., drugs, medical supplies and equipments) with the help of information systems. In China, several hospitals in Shanghai, Nanjing, Tianjing and Lhasa, have introduced the SPD model, in which the procurement, inventory, package and distribution of hospital supplies were entrusted to a thirdparty company, and hospitals only need to pay for the supplies according to the actual consumption. Although there are many advantages of SPD model in hospital logistics management, the intelligence and collaboration of SPD model are still relatively low, which hindered the improvement of management efficiency to large extent.
Business intelligence (BI) transfers various data from enterprises into information or knowledge displayed in a cooperative manner that enterprise managers interested in. They were able to provide scientific evidence for the enterprise managers' decision-making and finally strengthen the competitive advantage of enterprises. In the early stage, scholars primarily focused on the data warehouse and data marts. Ariyachandra et al. (16) put forward the architecture of data mart bus and selected the data mart structure of consistent correlation dimension. One year later, they also proposed an independent and separated architecture for the data mart (17). Schaffner et al. (18) proposed hub-and-spoke architecture and chose centralized data warehouse as well as dependent architecture of data mart. Wu et al. (19) suggested the architecture of SOA-ITPA and adopted service-oriented reusable component technology as well as extensive ETL (extract, transform and load) services, in which the components communicate with each other via open standard message protocol such as XML and SOAP. However, with the change of business environment, the existing BI systems can hardly meet the demands of efficient data analysis and collaborative business process optimization. With the development of hospitals in terms of construction scale, structure complication and management informationization, the need for scientific, standardized and lean management of hospitals is more obvious than ever. As one of the important aspects in hospital management, logistics management is now facing the realistic problems of low efficiency and high cost. This study aimed to explore the key technologies of hospital logistics collaborative BI system including architecture, data warehouse, data mining and collaboration, so as to improve the efficiency of hospital SPD logistics management model.
Materials and Methods
We performed literature review to understand SPD and BI related theories and recent research status. Firstly, we searched Engineering Village database for English articles and China National Knowledge Infrastructure (CNKI) for Chinese articles published from 2011 to 2016. In addition, we searched Google for related books, conference papers, Webpages, etc. The keywords for literature review included "SPD", "supply, processing and distribution", "BI", " business intelligence", "hospital", "logistics", and "supply chain". At last, we incorporated the ideas from the literature review into a practical study protocol of the application of collaborative BI technology in the hospital SPD logistics management model. In this study, several technologies and methods were employed. Specifically, we adopted the methods of rough set theory and firefly algorithm-based data preprocessing to construct the data warehouse, we utilized improved data mining technologies including support vector machine (SVM) supplier selection, supply chain collaboration-based inventory optimization and swarm intelligence firefly algorithm-based multiobjective decision-making to solve key problems in hospital logistics collaborative BI system, we used instant communication technology and SPD workflow optimization method to realize the collaboration of BI system, and we selected Windows Vista operating system as the development environment of experimental platform. The overall research technology roadmap of study is presented in Fig.1.
Architecture of hospital logistics collaborative BI system
Collaborative BI combines the traditional BI and collaborative technology to satisfy the specific needs of business enterprises' collaborative decision-making and business process. Hospital logistics collaborative BI system is divided into 6 levels including business data source, data collection, information integration, knowledge model, query analysis and virtual collaboration: Level 1: Business data source level refers to structured, semi-structured and unstructured source data provided by different main bodies of hospital logistics supply chains and internal departments of hospital. They are original source data and problems on the hospital logistics supply chain. Level 2: Data collection level provides connect and access to a variety of hospital logistics operation management related data source. It maps various data sources to the handler to realize the data extraction, transformation and update. Level 3: Information integration level is the level that integrates many of the underlying data sources. It uses the source data from data warehouse to describe data extracted from data collection level and reshape the data syntax and semantics of the next level, to realize the construction of shared data warehouse and complete the multi-source data transformation and integration of hospital logistics supply chain. Level 4: Knowledge model level describes the business processes, knowledge and information models of hospital logistics management, and the semantics regarding their interdependence as well as correlation. Besides, it will modeling based on the relationships between shaped knowledge models extracted from the perspective of business processes and users. This way, it would realize information and business collaborative process logic modeling between different main bodies of hospital logistics supply chain. Level 5: Query analysis level provides tools and applications, which are considered the cores of the collaborative BI function. These include query, report, correlation analysis, trend and predictive analysis. These functions can be called by virtual collaboration level to set up the views and reports, to learn and perform model analysis, as well as to provide visualized exploration of realtime data to facilitate comparison and prediction of final information. Level 6: Virtual collaboration level provides a rich user interface and virtual spaces based on logistics business specific parameter setting, which are employed to display and solve problems in hospital logistics including medical consumables supplier selection, inventory optimization, product distribution, information coordination, and business coordination, etc. These facilitate users to check, share, analyze, and coordinate different forms of data, and finally to realize the users' assessment and decision-making (Fig. 2). The architecture mainly consists of 7 function modules, these include multi-source heterogeneous data preprocessing module, shared data warehouse construction module, SVM-based medical consumable materials supplier classification service module, swarm intelligence firefly algorithm-based inventory optimization module, swarm intelligence firefly algorithm-based medical consumables distribution module, information collaboration service module and business collaboration service module. System management is composed of operation maintenance management and security management.
Data warehouse construction technology research
The construction of data warehouse used centralized storage, retrieval, and other corresponding multi-source data process in the environment of collaborative BI. At the same time, rough set data mining algorithm and swarm intelligence firefly optimization algorithm were also employed to deal with the data in the data warehouse. These provided reliable data for data analysis services of collaborative BI system, and to improve accuracy and efficiency of data analysis services.
Design of data warehouse
Shared data warehouse can be divided into two types including centralized and decentralized models. The centralized shared data warehouse requires creating physical data warehouse at first. The storage and management of data in centralized shared data warehouse is operational data that has been processed and arranged. When decisions are made, every related enterprises and internal departments transmit data to the data warehouse via specified data marts. This made centralized data warehouse to have some extent of advantages in keeping the consistency of data. By contrast, decentralized data warehouse is set up based on virtual overall data warehouse. Our study aimed to establish a centralized shared data warehouse via the integration of data marts in hospital logistics management, in order to realize the multi-agent data support in the collaborative BI system and satisfy the collaborative BI system users' demands of data analyses and data-mining of specified business information.
There were many unstructured or semi-structured data in different parts of the hospital logistics supply chain including the evaluation information of suppliers, the qualifications of medical manufacturers, as well as document information for hospital internal communications. In order to better deal with these types of data we first created corresponding document template according to specific application needs and subsequently tried to write transform programs to read the contents of these unstructured/semistructured files and adopted different rules to transform them to standard XML documents. Finally, we analyzed the XML documents and relational database and figured out some rules for the transformation from XML documents to relational database tables. This was supported by the relational model-based data warehouse. The construction of data warehouse in hospital logistics BI system was the standardized integration of information flows from upstream to downstream of logistics supply chain. Considering the fact that the flow of information upstream was heterogeneous and the structure of raw data varied across different parts of supply chain, information systems, as well as modules, we took several measures to define the metadata uniformly. At first, according to the Berlin core data set, we provided metadata information for every member in the supply chain by reference to uniformed metadata specifications that were defined by the logistics alliance. Secondly, data exchange and integration among various systems were achieved through metadata mapping. Finally, according to the idea of oriented to application integration (OAI), we designed a metadata framework and information exchange model from the perspective of logistics information exchange to realize the data integration of different parts in the hospital logistics supply chain. Data storage refers to the storage mechanism of shared data warehouse in hospital logistics collaborative BI system. In order to solve the existing contradiction between model stability and demand variability, we stratified the data storage of shared data warehouse into three levels: (i) temporary buffer area, (ii) integrated data area and (iii) summary data area. The buffer function loads multi-source system data in hospital logistics supply chain into a temporary buffer area, the area works in the course of transform and load from data source system to integrated data area. Integrated data area, the core of the whole data storage, analyses every event from different dimensions via constructing multi-dimensional model. The integrated data area can be divided into three sub-areas: (i) operational data area, (ii) data analysis area and (iii) data archiving area. The operational data area stores data that were classified according to the application themes and provides uniformed hospital logistics data views based on real-time operation data. The data analysis area mainly stores historical data of hospital logistics operation decision analysis. The data archiving area is used to store the historical archived data derived from operational data area and data analysis area. The summary data area is a virtual zone without actual stored data, it is arranged to store the extra-preprocessed data for the use of the font-end. According to the relevant definition in the database, the data are named with clear business meaning, i.e., integrate the tables, fields and their complicated relationships into business terms or indicator names that could directly be displayed in the font-end.
Rough set theory and firefly algorithm-based data preprocessing
In order to improve the efficiency and accuracy of swarm intelligence algorithm-based data mining, we used a rough set theory to perform data preprocessing of data warehouse. The data mining technology combined the rough set theory and firefly algorithm together, which not only took the quality of sub-data sets into consideration, but also ensured the subtraction of raw data to the maximum extent. The objective function of the firefly algorithm is: The basic steps are as follows: at first, we identified a proper method of subtraction (R), depending on which firefly populations were randomly generated by coding. Then each individual was evaluated by fitness function that has already been designed. At last, we figured out subsets that met the function These steps helped us not only get the subsets with the smallest data scale, but also keep the original characters of data sets at the maximum extent.
Data mining technology research
The data of collaborative BI system in the hospital SPD model is complex, multidimensional, and nonlinear. Using data mining, we were able to solve several problems including medical consumables supplier selection, inventory optimization, as well as medical consumables distribution. These were very important in the hospital logis-tics management. In other words, by adopting improved data mining algorithm that met the demand of collaborative BI system in the hospital SPD model, we achieved the following goals: (i) we realized more accurate and effective classification and selection of medical consumables suppliers; (ii) we completed the procurement control, inventory optimization and inventory shortage warning in the first-level warehouse of the SPD model; (iii) we solved the optimization problems associated with medical consumables distribution in the secondary consumption points of the SPD model, especially the problems associated with the dynamic adjustment of distribution point, period and amount.
SVM-based supplier selection
The SVM method is widely used in the system evaluation. By mapping the points in lowdimensional space to high-dimensional space, we used the principle of linear partition to determine the classification boundaries. We attempted to design medical consumables supplier evaluation and classification methods based on SVM, to deal with specific problems of supplier selection in the hospital SPD model. We, first, extracted the objective function and constraint condition of medical consumables supplier selection in the hospital SPD model. Then we extracted feature data sets as the training sample source according to supplier multidimensional attribute to complete the design of classifier. Finally, we attempted to realize the actual classification of medical consumables suppliers by classifying the medical consumables supplier sample into corresponding categories via the already designed classifier.
Supply chain collaboration-based inventory optimization
The core of the supply chain inventory optimization in hospital SPD logistics model is the effective control of first-stage warehouse inventory level. Given the close association existed among inventory level setting and procurement strategy and safe inventory, the amount of procurement and procurement period directly affects the level of inventory. Therefore, hospital logistics inven-tory optimization is considered a type of datadriven analysis and design task, which requires analysis on large amounts of data followed by establishing a relational model between inventory costs and service levels. According to the twostage inventory structure design, we divided the inventory optimization method into two steps: (i) we figured out the key parameters' approximate fluctuation scopes of first-stage warehouse inventory in the SPD model using the time series method; (ii) we designed discrete control parameters and construct corresponding solution spaces, and then identified the most feasible inventory optimization solutions by using swarm intelligence firefly algorithm.
Swarm intelligence firefly algorithm-based multi-objective decision-making
The effect of medical consumables distribution in SPD model mainly depends on the satisfaction of multiple targets including distribution amount, distribution point, distribution lead time, as well as dynamic adjustment of the distribution amount. Considering the medical consumables distribution multi-objective decision-making in the model of SPD may face several conditions such as large data scale, complicate structure and dynamic objective, we employed the swarm intelligence firefly algorithm method to deal with the multi-objective decision-making problems. We, first, established a mathematical model of multiobjective decision-making via extracting multiple decision-making objectives such as timeliness, accuracy and economy. The construction of constraint condition set was then analyzed. Secondly, we identified the solution spaces of multiobjective decision-making using swarm intelligence firefly algorithm method. Finally, we periodically used swarm intelligence firefly algorithm to deal with dynamic data in the data warehouse, and to provide dynamic adjustment strategy for distribution amount, point, and lead time, as well as distribution amount adjustment.
Collaborative technology research
Collaborative technology refers to all types of information and communication technologies that can be collaboratively supported in different levels including information sharing support and task coordination support. Therefore, the business cooperation in the hospital SPD logistics model contains not only the information sharing in traditional information system, but also workflow optimization of information intelligent delivery and business-oriented collaboration.
Instant communication technology-based hospital logistics information collaboration
Instant communication combines audio and video communication and files transfer, and provides necessary platform for information sharing and delivering of logistics collaborative BI system. Therefore, the full use of the instant communication technology requires necessary integration of heterogeneous information in the hospital logistics collaborative BI system. In this study, we provided uniformed structural descriptive data for describing information resources in the hospital logistics collaborative BI system with the help of metadata. We accomplished this by establishing metadata level of hospital logistics data warehouse using the Dublin core data sets, and built logistics information integration model. We also attempted to build dynamic matching models of the supply chain and logistics related information (i.e., the information of demand, inventory, as well as distribution) in BI system via analyzing the attribute and operation information of the supply chain in the SPD model.
SPD workflow optimization-based hospital logistics business collaboration
Workflow is a part of Computer Supported Cooperative Work (CSCW), which plays an important role in researching how to make the groups work cooperatively within the framework of informationization. The SPD workflow is chiefly used to describe the activities of tasks with fixed procedures in the hospital logistics businesses. The core of the SPD workflow optimization is to figure out the "control points" that requires cooperating by mining "up and down streams" and "time rules" in the original workflow. This is to achieve several goals including improved work efficiency, better control process, and effective management of logistics business processes. Therefore, we constructed the up and down streams and time constraint relations of workflow by mining the time data from SPD workflow in the data warehouse, and then built the workflow time constraint relations-based Activity On Edge (AOE) Network. Based on which we figured out the critical paths in SPD workflow net structure through the utilizing of swarm intelligence firefly algorithm and Critical Path Method (CPM).
Construction of experimental platform
This study intended to construct hospital logistics SPD model-oriented collaborative BI system, by which we realized the data and information sharing between the hospitals, the suppliers and the manufacturers in the hospital logistics supply chain. By using SVM and swarm intelligence firefly algorithm methods, we analyzed the recent years' basic information database of hospital information system (HIS) and data provided by companies in the logistics supply chain. We also established a hospital logistics SPD modeloriented collaborative BI system that realized such roles as collaboration, data mining, data analysis, as well as application. The timeliness and collaboration of hospital logistics collaborative BI system in SPD model were improved by the following procedures: (i) through constructing the module of sharing data warehouse, we realized the integration of data marts in hospital logistics management, so as to ensure the data safety and provide services such as basic data process and analysis; (ii) for the extraction of inherent characteristics of data sets, we constructed the module of data preprocessing to reduce the data dimension of collaborative BI system; (iii) by setting up supplier selection optimization module and using the method of SVM, we improved the business processes to meet the demands of supplier evaluation, classification and selection in the hospital logistics SPD modeloriented collaborative BI system; (iv) via establishing inventory optimization module, we were able to solve the inventory optimization problems in hospital logistics warehouse; (v) by creat-ing medical consumables constant distribution module, we could deal with the decision-making processes such as distribution amount, distribution point, distribution lead time, as well as dynamic adjustment of distribution amount; (vi) in order to realize the collaboration of logistics information and information sharing among every parts of the supply chain, we built information collaborative service module; (vii) by building business collaborative service module, we provided the decision-making basis for logistics process optimization in the hospital SPD model. In this study, we chose Windows Vista operating system as the development environment. Other specific requirements are as follows: software structure: J2EE; compile environment of Java language: version JDK 1.
Discussion
With the rapid development of medicine and health services and in the face of ever increasing demand, the use of medical consumables in hospitals is increasing. A hospital, usually, requires multiple varieties of medical consumables with complicated specifications from several suppliers, which results in considerable management difficulties, low management efficiency, as well as unsafe medical activities. We advocate the use of the SPD logistics management model, which have several advantages such as: (i) the procurement information is automatically generated by the system after analyzing the historical data on consumption, inventory level and response time of suppliers. This may largely improve the standardization and automation of procurement; (ii) the inventory of every stage (e.g., the warehouse of center, departments and operating room) is monitored by the logistics management departments, the inventory related variables can be scientifically and coordinately arranged including the ordering point, safe inventory, maximum inventory, and lead time of ordering. This way, we can significantly reduce the cost as well as the inventory risk. (iii) The distribution of consumables is based on inventory optimization, by analyzing the historical consumption habit of clinical departments, the distribution related variables including distribution amount, distribution point, distribution lead time, as well as dynamic adjustment of distribution amount can be properly assigned. By adopting this model, we may reduce the waste of consumables and significantly reduce the workload associated with material management and improve the efficiency of clinical activity. In China, the informationization of hospital management mainly focuses on the information support of the medical activity management, especially the clinical treatment. The information support for hospital logistics management is relatively weak, which makes it difficult to support the construction and implement of the SPD model. Specific problems were mainly manifested in two aspects from the perspective of management informationization. At first, the data utilization rate was relatively low and the existing information construction hardly met the data mining demands in the complex logistics data. Additionally, the collaboration between information systems was quite lacking. Traditional construction of informationization mainly concentrated on the information electronization in the hospital, rather than the outside links of the supply chain, which ignored the collaborative demands among the hospitals, suppliers and manufacturers. With the development of recent decades, BI system has been widely applied in many fields including manufacturing (20), bank (21), logistics (22) and healthcare (23), etc. In the field of logistics, the application of BI system mainly focused on the intelligent procurement management, intelligent inventory management, as well as the intelligent distribution schedule system (22,24). While in the field of healthcare, BI system was commonly used in the hospital performance management, hospital cost accounting analysis and medical insurance decision-making, etc. (25,26). Up to now, very few studies have applied the BI system to hospital logistics management. Therefore, with the implement of medical reform and the adoption of the SPD model, we can construct a collaborative BI system by introducing and applying the idea of BI to hospital logistics. This way, we may realize the business collaboration of hospital logistics management and achieve a few goals such as improving management level and reducing the hospital logistics' costs. We may encounter many difficulties in this study, which requires innovating and improving the traditional SPD model and making appropriate implement plans and schedules for the application of BI system according to the actual situations in hospitals. Additionally, given the fact that the study is based on information integration, the successful implementation of this study requires the collaborative participation of internal departments in hospital including the department of information, logistics, nursing, medical and financial. The timely response of external suppliers is the other significant requirement of this study.
Conclusion
BI transfers various data from enterprises into information or knowledge displayed in a cooperative manner that enterprise managers interested in. Introducing and applying the collaborative BI system to hospital SPD logistics management model may help largely in realizing the business collaboration of hospital logistics management and achieving a few goals such as improving logistics management level, reducing hospital logistics costs, and increasing health service quality as well.
Ethical considerations
Ethical issues (Including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors. | 6,438.6 | 2017-05-28T00:00:00.000 | [
"Business",
"Computer Science",
"Medicine"
] |
An Approximate Maximin-Directed Random Sampling for Clustering Applications
.
INTRODUCTION
Social networking giants like Facebook and Twitter boast billions of users, generating hundreds of gigabytes of content every minute.Retail establishments continuously amass extensive customer data, while platforms like YouTube, with over 1 billion unique users, churn out 100 hours of video content every hour.To illustrate the sheer magnitude, YouTube's content ID service scans an astounding 400 years' worth of video content each day [1,2].Notably, scientists and researchers refer to it as "Big Data".In the face of this deluge of data, the need for robust tools for knowledge discovery becomes imperative.Data mining techniques have firmly established themselves as indispensable instruments for this purpose.Among these techniques, clustering stands out as a method whereby data is partitioned into groups, ensuring that objects within each group share more similarity with one another than with objects in other groups [1].
Suppose n objects are represented as feature vectors . Classic cluster analysis for this kind of static data is discussed in many texts and numerous articles [3][4][5][6][7][8][9][10][11].If the number of samples precludes clustering the data directly, there are two popular ways to approach the problem.First, we may split the data into chunks, process the chunks independently, and aggregate the results [12,13].
A second popular approach is to sample the data, cluster the sample, and then extend the results to the rest of the data set non-iteratively by labeling the remaining points with the nearest prototype method [14].The question addressed in this paper is: what method of sampling produces the "best" samples to use in this context?Certainly (true) Random Sampling (RS) is the best-known method.Progressive sampling using various termination criteria is advocated in [15][16][17].The specification of the MMDRS algorithm requires a bit of notation.
Assume c is an integer number such that 1<c<N.The set ℎ = { ∈ ℜ : 0 ≤ ≤ 1 ∀, ; ∑ = 1 ∀ ; ∑ > 0 ∀ } contains all of the crisp c-partitions of N objects, represented as cN matrices.Equivalently, each U (membership) can be represented as = ∪ =1 ; ∩ = ∅ ∀ ≠ , where {Xi} are the crisp subsets comprising the c clusters.We write ↔ { }.The MMDRS partition of XN is U MM ∈ M hc ′ N where c' is the desired number of smaples to be selected by maximin sampling (MM).
A third approach for sampling is based on a three step process comprising: (i) determination of c' Maximin (MM) prototypes X MM = { 1 , . . ., ′ } ⊂ ; (ii) erection of the nearest prototype partition UMM of XN; and (iii) drawing a specified number of samples from each of the subsets in UMM.This third method is not true random sampling; rather, it is random sampling constrained by drawing samples from specified locations.Since this RS scheme is directed by the MM samples, we will call it the Maximin-Directed Random Sampling (MMDRS) method which was first discussed in the study [18].Since then, this method or some derivative of it have been used frequently in the literature of cluster analysis for static data.One of the challenges with MMDRS is that it is computationally expensive.Therefore, to enhance this aspect of MMDRS, we introduce a new approximate MMDRS (AMMDRS) sampling scheme.The goal of AMMDRS is to be faster and more applicable for big data applications.
So, this article has the following contributions.First, we will introduce the new AMMDRS scheme.Then we will conduct some numerical experiments to compare the quality of samples produced by the three sampling methods: RS, MMDRS, and AMMDRS.Ultimately, we will demonstrate that adopting our approach yields sample quality comparable to MMDRS, all while requiring less computational complexity.The remainder of the paper is organized as follows.In Section 2, we dive into the MM and MMDRS algorithms.We then flesh out the new AMM scheme in Section 3, building on the foundations of the original MMDRS method.In Section 4, we tackle the nuanced idea of what "best" sample really means in the context of cluster analysis.Section 5 sheds light on the datasets used in the analysis and the metrics that gauge their quality.The details of our findings are in Section 6, and we wrap things up with our takeaways in Section 7.
THE MM AND MMDRS ALGORITHMS
The concept of MM sampling was initially introduced in the study [19], where it is characterized as a method for initializing a set of c prototypes, also known as cluster centers, for clustering purposes.Casey and Nagy [20] conducted an overview of the MM algorithm for setting up initial prototypes, which we refer to as the MM principle.
[MM Principle].The initial sample in the batch serves as our first cluster center.From there, we calculate the distances of the other samples from this initial center.The sample farthest away becomes our second center.For every other sample, we consider the shorter of the two distances from these centers.The sample with the largest of these minimum distances is then selected next.Subsequent centers are selected to ensure maximum separation from those already chosen.This ensures that our initial cluster centers are spread widely across the sample space-a property that's intuitively appealing.
Hathaway et al. [18] appended two steps to this sampling scheme.First, the crisp nearest prototype rule (NPR) partition is computed using the MM samples as prototypes.Second, each of the subsets in this partition is subsequently sampled randomly a number of times proportional to the number of points in the subset.This produces a small subset of the larger parent set for approximate clustering and tendency assessment.The resultant sample is called a Maximin Directed Random Sample (MMDRS).The complete pseudo code for the MMDRS algorithm is depicted in The literature contains at least six ways to initialize MM sampling in Line 3. A recent study of this issue [21] determined that, on average, the original and fastest scheme (line 3) is as reliable as the other five methods, so that is the initialization we use.The primary requirement for good samples in the present context is that the cluster proportions in the c' samples from XN be representative of the corresponding proportions for the subsets in XN.If the data are unlabeled, there is no way to ascertain whether any sampling scheme satisfies this desire.But if the data are labeled, we can determine how well the samples match the distribution of the labeled subsets in XN.This intuitive objective informs our definition for what constitutes a best set of samples.Our expectation is that the DRS methods which begin with MM sampling will produce better samples of labeled data than simple RS in terms of matching proportions of sample and parent (in this article we call XN the parent of samples of it made by the three methods).There are three minor results about MMDRS sampling that provide weak guarantees that fuel our expectations.To describe the results, we need Dunn's index [22], discussed next.
Consider two non-empty subsets, S and T∈ ℜ , with an arbitrary metric denoted by : ℜ × ℜ ↦ ℜ + .The diameter of S can be defined as S as Δ(S) = max ⏟ ,∈S {d(, )}.Similarly, we define the set distance δ between S and T as δ(S, T) = min ⏟ ∈S ∈T {d(, )}.For any given partition U ∈ M hcN ↔ {X i }, the separation index of U, widely recognized as Dunn's index (DI, [22]) is: Dunn characterized set U as compact and separated (CS) in relation to metric d under the following conditions: For all subsets s, q, and r, where q≠r, any pair of points x and y from XS are closer to each other (based on metric d) than any other pair u and v, where u is from Xq and v is from Xr. Dunn established that a set X possesses a clear CS partition with respect to d if and only if max ︸ U∈M hcn { DI(U; X)} > 1, the maximum of DI(U;X) over all U in MhcN is greater than 1.Subsequent results tie this particular characteristic of Dunn's index to the MMDRS samples extracted obtained from XN by Algorithm 1: Then lines 1-9 of the MMDRS Algorithm will select at least one object from each of the c clusters.
The MM theorem tells us that when the input da have c CS clusters, lines 1-9 of Algorithm 1 will extract at least one sample from each cluster.Please observe that proposition MM applies to the seeds (the prototypes) which are used to build the MMDRS partition.
. If XN can be partitioned into c compact and separated clusters CS clusters, and c'=c, then .Suppose XN can be partitioned into c CS clusters for c'≥c, and suppose that |St|/N is an integer for all t.Then the proportion of objects in the MMDRS sample from subset t equals the proportion of objects in the parent population for t=1 to c. Proof.Proposition 2, Hathaway et al. [18].These three results have limited utility because the majority of input datasets lack the CS property, and even when they do possess it, it is usually impossible to verify that this is the case.On the other hand, these results do provide some reassurance about the MMDRS procedure, in the sense that at least in some cases, Algorithm 1 obtains samples that do represent all c clusters in the data.Consequently, we expect the MMDRS samples to provide fairly representative proportions of the distribution of the input data.
As a final note, we remark that the actual MM samples drawn by MM lines 1-9 are not part of the sample output, but can easily be included in the output if this is desired.Our experience is that inclusion of the MM samples doesn't make much difference to their quality in terms of representing the distribution of the input data.
In summary, MMDRS demonstrates its effectiveness in generating representative samples from a dataset XN when the cluster proportions in the c' samples derived from XN align closely with the proportions found within the subsets of XN.The generated samples can be used as input to any clustering algorithm to find structure in the data without the need to iteratively accessing the whole data samples.Thus, making it feasible to run most clustering algorithms for very large datasets which is impossible without sampling.However, one drawback of MMDRS is that it needs to span all the data which makes it challenging and time consuming for large datasets.Therefore, reducing the time complexity for this approach will be essential for big data applications.[24,25], but since they don't use directed random sampling as a second step, these methods will not be considered here.Table 2
describes our approximate version of MM sampling:
Lines 1-10 extract the c' AMM samples from XN.The first AMM sample, selected in Line 3 of Algorithm 2, is the first sample in the data.For each additional MM sample, the data is shuffled and split into T chunks.Each successive MM sample is chosen from the new chunk (Xw) instead of the whole input data set (XN).This process is repeated until c' samples are obtained.The DRS procedure (lines10-20 of Algorithm 1) is then used to find ns AMMDRS samples.To summarize, the AMM procedure simply replaces the input data set XN by a chunk Xw at each iteration in the MM part of the MMDRS algorithm.This reduces the computation time for the MM part of the sampling procedure.Now we turn to some ways to measure sampling quality, where the samples are explicitly constructed to support cluster analysis.
It is evident that AMDRS leverages its primary advantages in line 6, where the data is randomly partitioned into multiple segments.Subsequently, AMM operates on each of these chunks, obviating the need to access the entire dataset for sampling.This efficient approach significantly lowers the time complexity by diminishing the volume of data that needs to be processed, reducing it from N (the size of the data) to N/T, where T represents the number of partitions employed by AMDRS.
SAMPLE QUALITY
In our experiments, the datasets are labeled, which means they possess ground-truth c'-partitions, denoted by ∈ ℎ ′ of XN.Assume ni represents the count of points in subset-i, then the total number of points is given by = ∑
𝑐 ′ 𝑖=1
. From this, we can define the proportion vector of XN in ℜ ′ as: Algorithm 1 or Algorithm 2, respectively, extracts c' MMDRS samples XMMDRS, or AMMDR samples XAMMDRS from the input data.Let ′ , ′ ′ denote the number of samples drawn from the t-th subset, 1≤t≤c' by these two algorithms.For these samples we have the corresponding sample proportion vectors in c : Our objective is to evaluate the degree of alignment between VMMDRS and VAMMDRS.Given that these samples are derived from labeled data, it is feasible to create histograms that contrast the counts of points within each labeled subset with those in the samples.This visual approach offers an assessment of how closely the proportions in the original dataset match those in the sample, all while being independent of both N and p. Especially for smaller values of c, a visual comparison can provide a fairly precise gauge of this alignment.
There are multiple methods to analytically compare VMMDRS or VAMMDRS with VN.One straightforward approach involves calculating the distances d(VN, VMMDRS) and d(VN, VAMMDRS), using a suitable metric in ℜ ′ × ℜ ′ .A distance of zero signifies an impeccable alignment between the proportions in the main dataset and the sample.Secondly, the similarity between the two distributions (VMMDRS or VAMMDRS to VN) can be calculated via different methods.The Kolmogorov-Smirnov (KS) test is a statistical test used to compare a sample distribution with a reference probability distribution, or to compare two sample distributions [26].It is a non-parametric test, which means it does not make any assumptions about the shape or parameters of the distributions being compared.It can determine whether two independent samples are drawn from the same population or different populations.This is useful in comparing the characteristics of two groups.Therefore, KS is used to test against the null hypothesis that (VN, VMMDRS) or (VN, VAMMDRS) come from the same distribution.The returned pvalue is used to interpret the results.For our experiments, we will choose a default significance level of α=0.05.Consequently, if p>α=0.05,we uphold the hypothesis that the sample originates from the same distribution as the parent data.In such cases, we will note that the sample has successfully passed the KS test.It is worth mentioning that in our experiments, the number of "samples" for the KS test equates to c', the total count of labeled subsets.Given that the KS test tends to be less precise for smaller sample sizes, it might not offer highly informative outcomes in our context.We will consider a sample to "cover" the input data if every labeled subset gets represented at least once.
NUMERICAL EXPERIMENTS
We conducted all experiments on a system equipped with an INTEL Core i7-8700K CPU and 64 GB of RAM, utilizing MATLAB for implementation.The value of T used in line 6 of Algorithm 2 was 10.The horizontal axis on all of the histograms is the cluster number in the labeled data.So, for example, the horizontal axis for the X15 histograms has 15 ticks at k=1 to 15 corresponding to the 15 labeled subsets in the data.The vertical axis on all of the histograms is the ratio of the number of data points (ni) in subset-i (or sample thereof) to the number of input points (N). 3 lists the four datasets utilized in our experiments.These include three datasets, named as follows: X15 [27], X31 [28], and X6 [29], as well as the Wisconsin Diagnostic Breast Cancer (WDBC) dataset [30].While each of these datasets underwent identical analysis, due to space constraints, we cannot showcase all the figures in this article.However, a comprehensive collection of graphs can be obtained upon request from the second author.
X15, as seen in Figure 1, showcases clusters visibly distinct, stemming from Gaussian distributions with varied means and covariance matrices.Each cluster has a size varying between 300 and 350. Figure 2 presents six histograms for the dataset X15 when c'=20.The input data's histogram is positioned on the upper left, while the random sample is on the upper right.Each histogram is labeled with two values: ED denotes the value of d(VN,VMM(*)) where d represents the Euclidean distance; p signifies the result of the 2-sample KS test (as provided by Matlab) against the significance level α=0.05.A p-value less than the significance level prompts us to reject the 05null hypothesis that both samples come from the same distribution.Conversely, we accept the two samples as being from the same distribution if p>0.05.The values of Euclidean distance in Figure 2 show that Random Sampling produces a much higher value of ED (and hence, a lower quality match to the input distribution) than all four of the MM based methods.Comparing MM to AMM, we see that MM does slightly (but only slightly) better for the c' samples.After applying DRS to the two sets (MM and AMM), the ED values are an order of magnitude smaller, and AMMDR does slightly better than MMDRS.Visually, the two DRS sets are much closer to the input distribution than the RS, MM and AMM sets, confirming that the DRS portion of these two algorithms really improves the quality of the samples drawn.The KS test accepts all 5 samples, but clearly prefers the two DRS methods (equal p values of 0.8899) to the MM and AMM samples (p~0.060).The p value for RS (0.307) lies in-between these two pairs of values, which agrees with the visual assessment that RS matches the distribution of X15 better than both MM methods, but not as well as both DRS methods.
The dataset X31, illustrated in Figure 3, comprises 100 points distributed across 31 Gaussian clusters.As a result, the histogram representing the input data exhibits a uniform profile, each bin containing 1/31~0.0322 of the points, as seen in the upper left view of Figure 4, which exhibits the histograms and statistics (ED and KS test) for the five sampling methods at c'=50.The two DRS methods yield visually superior samples, and the ED for these two samples favors MMDRS, albeit slightly.The RS is visually inferior to the other four methods.The p-values for all 5 samples are quite small; the statistical implication of this is to reject the null hypothesis that any of these samples matches the input distribution at significance level α=0.05. the right, a thick cluster of magenta points nestles within a sparser blue subset.Notably, the lower left section of the scatterplot presents a unique clustering reminiscent of a "fried egg".This configuration consists of a vibrant yellow center (depicting the "yolk") encased by a cyan perimeter, symbolizing the "egg white".The specific sizes of these six clusters are as follows: 50, 92, 38, 45, 158, and 16.
Figure 6. MMDRS and AMMDRS samples of X6: c'=10, ns=100
From Figure 6, first, notice that RS produces a much better visual match to the input data than either MM or AMM, but when DRS is added to the sampling procedure, the visual match of both DRS schemes is slightly better than RS.The ED values agree: RS is better than MM or AMM, but not as good as MMDRS or AMMDRS.All 5 samples pass the KS test, i.e., they accept the match between the samples and parent distributions.In our final experiment, we utilized the Wisconsin Diagnostic Breast Cancer dataset.Figure 7 contains the results.This data set is an odd one, because it has feature vectors in 30 dimensions (p=30), but only N=569 samples.All 5 samples yield the same p value for the KS test, so it is not a useful discriminator for sample quality.Visually, the MM, AMM, and RS samples are poor matches to the input data, while the two DRS samples all look the same and are a better match the actual data.The ED values for the two DRS methods are lower than the MM values and the RS value.From the ED values, we conclude that for this experiment, MMDRS was the best and AMMDRS was next best.
Table 4 shows the CPU time used to compute samples for data set X31.The time required to compute AMM samples is about 1/7 of the time required for MM samples because AMM works on a subset of size N/T of the original dataset, which has N samples.The smaller the subset size (the larger the T value), the smaller will be the time required to compute the AMM samples.But the cost of large T values is the risk of missing samples from the partition of the datasets that does not exist in that subset.Since AMMDRS relies on AMM, it is slightly faster than MMDRS, as can be seen in Table 4.
CONCLUSIONS
In this manuscript, we introduced an innovative Approximate MMDRS (AMMDRS) algorithm designed to facilitate the generation of faithful and representative samples from large datasets.This approach empowers the application of traditional clustering algorithms without the necessity of processing the entire dataset, a critical advantage in scenarios where accessing the complete dataset is computationally challenging or impossible due to resources constraint.The significance of this research lies in its potential to make datadriven decision-making more accessible and practical, particularly in situations where working with big data sets is otherwise infeasible.Consequently, this manuscript contributes to the growing body of knowledge aimed at bridging the gap between data analysis and real-world applications, further underscoring the importance of efficient and accurate sampling techniques for handling big data challenges.
The experiments presented here do suggest that the approximate MM method is faster than MM, without a significant loss in sampling accuracy.This is especially important for big data applications where processing the entire datasets is not feasible.Table 4 shows that simple (undirected) random sampling is faster than either of the MM based DRS methods because no time is expended in building the NPR partition.This will be true for any input data set.But in terms of sample quality for cluster analysis, both of the DRS methods produce samples that provide a more faithful representation of the distribution of the input structure than simple random sampling in the experiments reported here.We have used several with different number of cluster and samples for our experiments, but our experience with these methods suggests that as the size of the input data grows, AMDRS will eventually be superior to MMDRS due to computation complexity of MMDRS which needs to access the whole data.We will test this conjecture with a more extensive empirical study in the future.
REFERENCES
the CS partition of XN.Proof.Theorem 1, Hathaway et al. [23].Proposition MMDRS-1 tells that when the input data have c CS clusters and we choose c'=c, that lines 10-19 of Algorithm 1 find the CS clusters.The number of samples drawn from the t-th subset in Line 16 of the MMDRS algorithm is = ⌈ ( | | )⌉ ; 1 ≤ ≤ ′ .The number |St|/N scales the number of desired samples drawn from the t-th row of UMM by the proportion of samples in that row.Because of the ceiling function, the overall number of samples is approximate, ∑ ′ =1 ≈ .The number and the proportions drawn will be exact under the extra condition that the sampling proportions are all integers, so the ceiling function is not used and ∑ ′ =1 = .
of MM samples: ns=desired number of MMDRS samples MM 2 Initialize: 𝑿
Table 1 below where it is split into two sections, one is the MM sampling and the other one is the DRS sampling.Lines 1-9 extract the c' MM samples from XN. Ties in Line 6 are broken arbitrarily.Lines 10-19 build the elements of the crisp partition ∈ ℎ ′ of XN.The matrix UMM appearing in lines 10, 12 and 20 is commented out since it is not needed to secure the desired MMDRS samples outputted in line 20.We show it to instruct readers on how the partition is used to direct the random sampling.Hopefully this lends some transparency to the DRS scheme.You may recognize UMM as the "k-means" or nearest prototype rule (NPR) partition of XN built by applying Lloyd's algorithm[1]to the input data with k=c' using the c' MM samples as cluster centers.
Table 4 .
Computational times of the proposed sampling methods on X31 dataset | 5,626.6 | 2023-12-27T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Performance comparison of a silica gel-water and activated carbon-methanol two beds adsorption chillers
The aim of the study is to compare the efficiency of adsorption refrigerating equipment working with different working pairs. Adsorption cooling devices can operate with a relatively low temperature of heat sources while consuming only a small amount of electricity for the operation of auxiliary equipment. Refrigerants used in adsorption devices are substances that do not have a negative impact on the environment. All that makes that adsorption refrigeration seems to be a good solution for utilizing renewable and waste heat sources for cold production. To carry out the experiment the adsorption cooling device has been developed and researched in Institute of Heat Engineering at Warsaw University of Technology. The test bench consisted of two cylindrical adsorbers, condenser, evaporator, oil heater and two oil coolers. In order to perform the correct action it has been developed and implemented special control algorithm device, allowed to keep the temperature in the evaporator at a preset level. The unit tested for two sorption pairs: activated carbon – methanol, and silica gel – water. For activated carbon methanol working pair it was obtained energy efficiency rating (EER) equals to 0.14 and specific cooling power (SPC) of 16 W/kg. For silica gel water EER of refrigeration unit was 0.25 and SPC was equal to 208 W/kg.
Introduction
Solid sorption is used in many applications including thermally driven refrigeration technologies.Solid sorption or adsorption refers to processes where a vapor is taken up by a solid when both the gas and the solid are in contact.Vapour molecules are accumulated on the surface of the solid and remain attached to it [1].This process can be reversed and the adsorbed phase can be released from the adsorbent by applying heat to the compound.Adsorption refrigeration covers two processes, which are the heating-desorbing process and the cooling-adsorbing process.Because of that the simple traditional cycle is a type of intermittent refrigeration cycle [2].If the heat source can be provided continually and the continuous refrigeration effect is required, two adsorber bed or multi adsorber beds device need to be designed for an adsorption refrigeration system, for which the heating and cooling processes of multi adsorbers will be complementarily arranged [3].Adding every extra adsorber to device makes the cooling effect of the device is more stable, but it is associated with more complex and more difficult controlling devices work.Because of those reasons the most common adsorption units with continuous cooling effect contain two beds [4].Figure 1 shows thermodynamics states of adsorbers for two bed unit during the operation.During the first half-cycle adsorber 1 is being heated, while adsorber 2 is being cooled, during the second half-cycle adsorber 1 is being cooled, while adsorber 2 is being heated [5].During the first phase of working process (line A-B in Fig. 1), adsorber absorbs applied heat thus increasing the adsorbent's temperature which induces a pressure increase from evaporation pressure to condensation pressure.This phase is also known as isosteric heating [5].The temperature of adsorbent bed increases from t A to T B .After this vapours of refrigerant are released from the bed, and they move towards condenser where they cool down and condense.While adsorber receiving heat continues but being connected to condenser, desorption of vapor get induced and liquefaction of desorbed vapor in condenser further takes place (line B-C in Fig. 1).The temperature of the adsorbent bed increases from t B to t C , but pressure stays at condensing pressure level.This phase is also known as isobaric desorption [5].The liquid refrigerant flows through expansion value and evaporated in the evaporator providing cooling power [6].Meanwhile, temperature of second adsorbent reduces due to release of heat (line C-D in Fig. 1) which induces pressure decrease from condensation pressure to evaporation pressure.This phase is also known as isosteric cooling [5].The pressure get reduced from condensation pressure p c to evaporation pressure p e and temperature also decreases to t D .The adsorbent temperature continues to reduce due to further release in heat which causes adsorption of vapor due to connection with evaporator.The adsorbent is cooled from t D to t A , pressure stays at evaporation level.This phase is also known as isobaric adsorption [5].The whole process of adsorption cooling requires only heat energy which can be supplied through gas or oil fired or solar energy [7].Unlike absorption processes, adsorption cooling systems have the advantage, that can be driven by lower temperature of heat source [8].Driving heat source, can be at such low temperature as 50°C.Therefore, solar radiation, waste heat, as well as geothermal energy can be used to power refrigeration unit systems.Using solar energy, as a free and renewable type of energy, to drive adsorption cooling systems is considered an attractive option as well as a focal point of interest.Adsorption cooling systems can be also used in cogeneration systems [9].Refrigerant used as a working fluid in adsorption cooling devices usually do not have a negative impact on the environment.In conjunction with the use of waste and renewable energy sources, application of these devices use introduce a good environmental and social factor [10].Selection of the working pair is essential to adjust the application.In application where cooling temperature can be above 0°C water can be used as a refrigerant with any adsorbents.For applications below 0°C methanol, ethanol or ammonia with activated carbon can be used [11].Multi bed devices allow to improve efficiency of the unit by recovery the heat between the adsorbers.Such solution can improve the efficiency of the whole system by about 30%.Adsorption units that work in basic continuous cycle have large possibility to recover heat from one adsorber to another.In the beginning phase of the process the first adsorber has to be cooled, when second one has to be heated.Temperature difference, between point C and A (Fig. 1) is large enough to transport heat from one bed to another.Creating such heat recovery system in the beginning phase of process the external heat source and external cooling system is not needed.
Construction of the apparatus
The thermal wave adsorption refrigeration device was designed and built at the laboratory of the Division of Refrigeration and Energy in Buildings (formerly Division of Processes Equipment and Cooling), Faculty of Power and Aeronautical Engineering, Warsaw University of Technology.The thermal wave device allows to recover heat between adsorber beds during the process [12].When the direction of oil circulation is reversed, the secondary fluid recovers the heat from one adsorber and transfers it to the second one.The installation is designed to examine this device comprises three circulation systems: heat transfer fluid, refrigerant and heat sink.Figure 2 shows an overview of the apparatus.The adsorber consists of an inner brazen tube with 12 mm outer diameter, through which the heat transfer medium flows, supplying the heat to the bed or taking the heat back from it.The inner tube is mounted inside an outer tube with 28 mm out diameter.Between the tubes there is a sorption material bed.In the centre of the inner tube a pipe is soldered, through which the refrigerant flows to and from the bed.The whole assembly is housed in a tight foam insulation.The heat flux should occur between the adsorbent bed and the heat transfer medium rather than between the bed and the air surrounding the adsorber.Device was tested with two working pairs.In one configuration, the methanol as refrigerant and activated carbon as sorption material were selected as the working pair.In the second configuration, the water as refrigerant and silicagel as sorption material were selected.
Secondary fluid circulation
The heat is transferred to and from the adsorber beds by the mineral oil circulation system.Oil system comprises a circulating pump, oil surge tank, five heat exchangers consisting of two coolers, two adsorber beds and one electric heater (Fig. 2).An assembly of four solenoid valves was mounted to control the direction of oil flow.In the first half cycle oil is pumped from surge tank through first cooler, first adsorber bed, than through the electric heater to the second adsorber bed.Downstream oil flows through second cooler to the surge tank.In the second half cycle, the oil flows in the opposite direction.The open surge tank protects the system against an uncontrolled pressure rise, and compensates for thermal expansion of the working fluid.
Refrigerant circulation
The refrigerant circulates in the system between adsorber beds, condenser and evaporator.The refrigerant circulation system comprises a set of solenoid valves and in case of methanol non-return valves which provide a connection between the adsorber beds on one side and the condenser and evaporator on the other.Each adsorber bed is connected with condenser and evaporator (Figure 2).This set and proper control of the opening of the valves makes the stream of refrigerant from the desorbing bed always flow to the condenser, and then, after flowing through the electronic expansion valve, to the evaporator.Finally, the vapor refrigerant is adsorbed by the second adsorber bed.Initialization and opening time for valves are determined on the pressure values in the adsorbers.
Heat sink circulation
To dissipate heat from the system heat sink circulation is required.In order to determine heat exchanger capacities, a water circulation system was introduced.The water flows through condenser, evaporator and both heat sink exchangers.All capacities in mentioned heat exchangers are calculated according to Equation 1.
Methodology
In the context of experimental studies a continuous oil temperature were measured at inlet and outlet of adsorbers (Fig. 2).There was also measured temperature of the water before and after all heat exchangers through which water flows.Based on these parameters instantaneous heat capacities have been determined according to the equation 1: Mass flow ݉̇ was determined on the volumetric flow rate ܸ ̇ according to the equation 2: The oil volumetric flow rate ܸ ̇ was determined by measuring the known volume filling time.Volume flow rate was measured during flow in both direction.In both cases, the measured value eqals ܸ ̇=1.7 l/min.For the temperature measuring oil density was ߩ =880 kg/m 3 .For temperature measurements were used NTC5 temperature sensors with measurement uncertainty of 1%.Filling time was measured with accuracy of ±1s and the tank volume was 2 liters.During experiment, there was measured pressure of refrigerant in the adsorbers with piezoelectric quartz pressure transducer having an accuracy of measurement of ±0.3%.Based on the pressure P, there was determined saturation temperature T for evaporation and condensation in accordance to the saturation line equation [13]: Where the coefficients a and b for methanol are a=20.84and b = -4694.For water a=20.5896and b = -5098.26.In the equation 3, the temperature is given in Kelvin degrees and the pressure is calculated in mbar.The temperature determined from the measured pressure determines the heat source requirements the adsorption device can co-operate.The energy efficiency rate EER of adsorption refrigeration devices represents amount of coolness generated in evaporator to amount of heat generated in heat source ratio.It can be calculated from the equation ( 4) wherein Q C is the amount of heat obtained in evaporator during one half cycle according to equation 5. Q HS is the amount of heat supplied to the system at the same time by heat source according to the equation 1 and trapezoidal integration method (equation 6).
wherein t 0 is half cycle start time, and t 1 is time of half cycle period.ܳ ̇ுௌ is the heat flux supplied to the system during selected time step.
Research results
The pressure value in each adsorber was measured and it was used to operate the electromagnetic valves that control the refrigerant and oil flow direction.A pulse type electronic expansion valve was used, so that the pressure and evaporation temperature could be controlled using researchers' software.The boiling point of both tested refrigerants (methanol, water) was set to be 10°C.Fig. 3 and 4 show heat fluxes during four full half cycle, during the further course of the experiment the results are reproducible and characteristics are like those presented.In the first half-cycle the temperature difference between T1 and T2 represents heat of desorption while the temperature difference between T3 and T4 corresponds to heat of adsorption in the second adsorber.Based on the measurements, real instantaneous powers of each component of the system could be determined.The energy efficiency rating EER of adsorption devices is determined based on Eq. ( 4).With measurements of instantaneous thermal power, the amount of heat can be determined from the integrals of the power over the time.What has been done here by trapezoids.For investigated case the average value of EER was obtained at level 0.14 for activated carbon -methanol and 0.25 for silica gel -water.Specific cooling power (SPC) was 16 W/kg for activated carbon -methanol and 208 W/kg for silica gel -water.
Summary
In this article the results of the experimental research concerning a state-of-the-art thermal wave adsorption refrigeration device are presented.Due to different working temperatures of the adsorbers, the change in the thermal power of each system component could be determined.The measurement results are presented in Figures 3 and 4 show four half cycles of the adsorption device operating under steady state conditions.In the following experiment, the cycle is repeated, so that these results can be regarded as representative for the calculation of the energy efficiency rating of the adsorption refrigeration apparatus.
There was energy efficiency rating (EER) obtained at level 0.14 for activated carbon -methanol and 0.25 for silica gel -water working pair.Specific cooling power (SPC) was 16 W/kg for activated carbon -methanol and 208 W/kg for silica gel -water.Low energy efficiency was due to the excessive duration of the half cycles representing approximately 6 minutes. | 3,131.8 | 2017-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Structural characterization of Class 2 OLD family nucleases supports a two-metal catalysis mechanism for cleavage
Abstract Overcoming lysogenization defect (OLD) proteins constitute a family of uncharacterized nucleases present in bacteria, archaea, and some viruses. These enzymes contain an N-terminal ATPase domain and a C-terminal Toprim domain common amongst replication, recombination, and repair proteins. The in vivo activities of OLD proteins remain poorly understood and no definitive structural information exists. Here we identify and define two classes of OLD proteins based on differences in gene neighborhood and amino acid sequence conservation and present the crystal structures of the catalytic C-terminal regions from the Burkholderia pseudomallei and Xanthamonas campestris p.v. campestris Class 2 OLD proteins at 2.24 Å and 1.86 Å resolution respectively. The structures reveal a two-domain architecture containing a Toprim domain with altered architecture and a unique helical domain. Conserved side chains contributed by both domains coordinate two bound magnesium ions in the active site of B. pseudomallei OLD in a geometry that supports a two-metal catalysis mechanism for cleavage. The spatial organization of these domains additionally suggests a novel mode of DNA binding that is distinct from other Toprim containing proteins. Together, these findings define the fundamental structural properties of the OLD family catalytic core and the underlying mechanism controlling nuclease activity.
INTRODUCTION
Phosphoryl transfer reactions are critical for the synthesis and processing of nucleic acids (1). DNA and RNA polymerization, nuclease degradation, RNA splicing, and DNA transposition all proceed via the same general reaction scheme involving (i) an SN2 nucleophilic attack on the scissile phosphodiester bond, (ii) the formation of a pentavalent transition state and (iii) cleavage of the scissile bond leading to stereo inversion of the scissile phosphate and release of the leaving group (2). These steps depend on the presence of a basic moiety to activate the nucleophile, a general acid to protonate the leaving group, and the presence of positively charged groups to stabilize the developing negative charge in the transition state (3,4). The observed catalytic activity of RNA (5,6) coupled with the presence of two metal ions in the refined structures of alkaline phosphatase (7) and the Klenow fragment with DNA (8) led to the generalized mechanistic hypothesis that metals can substitute for protein side chains in phosphoryl transfer reactions and act as the required general acid and base (9). In this scheme, one metal (metal A) deprotonates the nucleophile while the other (metal B) stabilizes the pentavalent transition state intermediate (2). Despite the prevalence of this mechanism, the number of metal cofactors can vary among different enzyme families. Many homing endonucleases, for example, function using one metal (10,11) while in crystallo catalytic studies of human DNA polymerase reveal an essential catalytic role for a third metal during DNA synthesis (12). Structural characterization of phosphorylhydrolases is therefore necessary for understanding the underlying catalytic strategy employed in each case.
Topoisomerases, DnaG primases, gyrases, RecR recombination proteins and 5S rRNA maturases share a conserved catalytic domain that mediates metal-dependent nicking and cleavage of nucleic acid substrates (13,14). This Topoisomerase/primase (Toprim) domain consists of a four-stranded parallel -sheet sandwiched between two pairs of ␣-helices and contains three key sequence motifs: an invariant glutamate located in the ␣1-1 loop, an invariant glycine following 2, and a conserved DxD motif between ␣3 and 3 (13,15). Crystallography and mutagenesis have shown the conserved E and DxD motif to be critical for metal binding and catalytic activity in multiple contexts (14,(16)(17)(18). Additional active site components vary between Toprim family members based on specific functional requirements. Toposiomerases and gyrases contain a catalytic tyrosine that forms a covalent linkage with the DNA (15,19) whereas DnaG primases have extra acidic residues that coordinate multiple metals needed for nucleotide binding and polymerase activity (16,17,20). While most Toprim proteins play important roles in DNA replication, recombination, and repair, recent structural studies revealed the CWB2 cell wall-anchoring module of Clostridium difficile proteins Cwp8 and Cwp6 also contains a Toprim fold (21). These domains, however, lack the conserved metal binding side chains and form trimers that act in a purely structural capacity.
O vercoming lysogenization defect (OLD) proteins constitute a family of uncharacterized enzymes that contain a predicted N-terminal ATPase domain and C-terminal Toprim domain (13,22). Much of our present understanding of OLD function derives from bacteriophage P2 genetic and biochemical studies. The P2 old gene product interferes with bacteriophage growth in P2 lysogens, kills Escherichia coli recB and recC mutants following P2 phage infection, and causes increased sensitivity of P2 lysogens to X-ray irradiation (23)(24)(25). These effects appear to be accompanied by a partial degradation of tRNA molecules and inhibition of protein synthesis (26,27). P2 OLD purified as a maltose binding protein fusion exhibits 5 -3 exonuclease cleavage of DNA and ribonuclease activity in vitro (28). Recent genetic studies indicate that the Salmonella typhimurium old gene becomes critical under certain growth conditions like temperature stress (29), but its mechanism of action and normal physiological functions remain a mystery. Nothing is known about the activities of other homologs and there are currently no structures of OLD proteins.
Here we identify and define two classes of OLD proteins based on differences in gene neighborhood and amino acid sequence conservation. We purify and characterize the Class 2 OLD proteins from Burkholderia pseudomallei (Bp) and Xanthamonas campestris p.v. campestris (Xcc) and present the crystal structures of their catalytic C-terminal regions at 2.24 and 1.86Å resolution respectively. The structures show a two-domain arrangement containing a Toprim domain with altered architecture and a unique helical domain. Conserved side chains contributed by both domains coordinate two bound magnesium ions in the active site of Bp OLD, which are absolutely required for nuclease activity. The geometry of this catalytic machinery supports a twometal catalysis mechanism for cleavage and shows unexpected structural conservation with the active sites of DnaG primases and bacterial RNase M5 maturases. The spatial organization of these domains additionally suggests a novel mode of DNA binding that is distinct from other Toprim containing proteins. Together, these findings define the fundamental structural properties of the OLD family catalytic core and the underlying mechanism controlling nuclease activity.
Pellets from 500 ml cultures were thawed and resuspended in 30 ml of nickel load buffer supplemented with 10 mM PMSF, 5 mg DNase, 5 mM MgCl 2 , and a Roche complete protease inhibitor cocktail tablet. Lysozyme was added to 1 mg/ml and the mixture was incubated for 15 min rocking at 4 • C. Cells were disrupted by sonication for a total of 4 min and the lysate was cleared of debris by centrifugation at 13 000 rpm (19 685 g) for 30 min at 4 • C. The supernatant was filtered using a 0.45 m syringe filter, loaded onto a 5 ml HiTrap chelating column charged with NiS0 4 , and then washed with nickel load buffer. Proteins were eluted with an imidazole gradient from 30 mM to 1 M. Pooled fractions were dialyzed overnight into TCBg50 buffer (20 mM Tris pH 8.0, 50 mM NaCl, 1 mM EDTA, 5% glycerol, 1 mM DTT) and further purified by anion exchange and size exclusion chromatography (SEC), using a 5 ml HiTrap Q HP column and a Superdex 75 16/600 pg column respectively. Proteins were exchanged into a final buffer of 20 mM HEPES pH 7.5, 150 mM KCl, 5 mM MgCl 2 , and 1 mM DTT during SEC and concentrated to 10-40 mg/ml. Active site mutations were introduced via Quikchange and mutants were expressed and purified in the same manner as wildtype.
Inductively coupled plasma atomic emission spectroscopy (ICP-AES)
Bp CTR and Xcc CTR were cloned into the expression vector pASK-IBA3C, introducing a C-terminal Strep-II tag. Strep-tagged CTR constructs were transformed into BL21(DE3) cells, grown at 37 • C in Terrific Broth to an OD 600 of 0.7-0.9, and then induced with 0.3 mM IPTG overnight at 19 • C. Cells were harvested and washed in Strep buffer (100 mM Tris-HCl pH 8.0, 500 mM NaCl, 5 mM -mercaptoethanol). Pellets were resuspended in 50 ml of Strep buffer supplemented with 3 mg DNAse, 2 mM MgCl 2 , 10 mM PMSF, a Roche complete protease inhibitor cocktail tablet, and 1 mg/ml lysozyme. Following a 10 min incubation at 4 • C, the cells were sonicated and cleared via centrifugation. The supernatant was filtered, loaded onto a 5 ml StrepTrap column, and washed with Strep buffer. The protein was eluted with Strep buffer supplemented with 2.5 mM d-desthiobiotin. The protein was pooled, concentrated, and injected onto a Superdex 75 10/300 GL column. Bp CTR and Xcc CTR were exchanged into a final buffer of 20 mM HEPES pH 7.5 and 50 mM NaCl, which had been first passed through Chelex 100 resin to remove contaminating divalent cations. The final protein sample was concentrated to ∼10 mg/ml. Approximately 500 l of each protein sample was dried under vacuum and resuspended in 10 ml of 2% nitric acid. Samples were analyzed with an iCAP 6000 ICP-ES, Thermo. Measurements were done in triplicate. The determined milliequivalents of metal per protein molecule are listed in Supplementary Table S1.
Crystallization, X-ray data collection, and structure determination Crystals were screened and optimized at the MacCHESS F1 beamline at Cornell University and X-ray diffraction data was collected remotely on the tuneable NE-CAT 24-ID-C beamline at the Advanced Photon Source. Singlewavelength anomalous diffraction (SAD) (30) datasets were collected on a Dectris Pilatus 6MF pixel array detector at 100 K for the platinum, mercury, and iodide derivatives at the energies of 12 300, 11 570, and 7500 eV, respectively. Datasets were integrated and scaled with XDS (31) and Aimless (32) via the RAPD pipeline. Heavy atom sites were located using SHELX (33) and phasing, density modification, and initial model building was carried out using the Autobuild routine of PHENIX (34). Initial figures of merit following density modification was 0.62 for Xcc CTR Pt, 0.64 for Xcc CTR Hg, and 0.504 for Xcc CTR I. Further model building and refinement was carried out in COOT (35) and PHENIX (34) respectively. The final models were refined to the following resolutions and R work /R free : Xcc CTR Pt, 1.86Å, 0.212/0.238; Xcc CTR Hg, 1.95Å, 0.201/0.241; Xcc CTR I, 2.30Å, 0.215/0.275 (Supplementary Table S2).
Bp CTR was crystallized by sitting drop vapor diffusion in 0.1 M HEPES pH 7.5, 0.23 M MgCl 2 , 30% PEG 400 and 0.001 M glutathione with a drop size of 1 l and reservoir volume of 65 l. Crystals appeared within 2-3 days at 20 • C. Samples were cryoprotected by transfer to 100% paratone-N, allowing all mother liquor to exit the crystal prior to freezing in liquid nitrogen. Crystals were of the space group C222 1 with unit cell dimensions a = 83.256Å, b = 105.669, c = 123.764 and ␣ =  = ␥ = 90 • . X-ray diffraction data were collected remotely on the NE-CAT 24-ID-C beamline at the Advanced Photon Source at 100 K on a Dectris Pilatus 6MF pixel array detector. The dataset was integrated and scaled using XDS and Aimless via the RAPD pipeline. The structure was solved by molecular replacement in PHASER (36) using the refined Xcc CTR Pt-soaked structure as the search model. Two molecules were found in the asymmetric unit. Model building and refinement were carried out in COOT (35) and PHENIX (34) respectively. The final model was refined to 2.24Å resolution with an R work /R free of 0.213/0.260 (Supplementary Table S2). The model also contained difference density peaks in the active site that were modeled as two magnesium ions based on the geometry and the components of the crystallization condition.
All structural renderings were generated using Pymol (Schrodinger) and surface electrostatics were calculated using APBS (37). Conservation based coloring was generated using the ConSurf server (38).
Size-exclusion chromatography coupled to multi-angle light scattering (SEC-MALS)
Purified Bp FL (4 mg/ml), Xcc FL (4 mg/ml), Bp CTR (6 mg/ml) and Xcc CTR (6 mg/ml) were subjected to SEC using a Superdex 200 10/300 gl (GE) column equilibrated in SEC-MALS buffer (20 mM HEPES pH7.5, 150 mM NaCl, 5 mM MgCl 2 , 1 mM DTT). The gel filtration column was coupled to a static 18-angle light scattering detector (DAWN HELEOS-II) and a refractive index detector (Optilab T-rEX) (Wyatt Technology). Data were collected continuously at a flow rate of 0.5 ml/min. Data analysis was performed using the program Astra VI. Monomeric BSA (6.0 mg/ml) (Sigma) was used for normalization of the light scattering detectors and data quality control.
DNA cleavage assays
100 ng of lambda DNA or pUC19 plasmid DNA was mixed with 8 M protein to a final volume of 20 l in DNA cleavage buffer (20 mM Tris-OAc pH 7.9, 50 mM KCl, 0.1 mg/ml BSA,10 mM divalent metal). Reactions were incubated at 37 • C for 60 min and quenched with 5 l of 0.5 M EDTA pH 8.0. Samples were analyzed via native agarose electrophoresis. DNA degradation was quantified using BioRad Image Lab software and assessed by measuring the amount of ethidium bromide signal in each lane and comparing it to the protein-free DNA sample. Bar graphs represent the average of three independent trials with error bars representing the standard error of the mean. Mutant constructs were assayed in the presence of 10 mM MgCl 2 and 1 mM CaCl 2 based on ICP-AES and metal titration results.
Exonuclease assays
The following DNA oligonucleotides for exonuclease assays were synthesized commercially by Integrated DNA Technologies (IDT): Exo US (5 or 3 labeled with 6-carboxyfluorescein, 6-FAM) 5 -CTCACTGGTGCTAGGCAACGTTGAAGTGAT CGTACGCGGA-3 Exo WT LS 5 -TCCGCGTACGATCACTTCAACGTTGCCTAG CACCAGTGAG-3 Exo GT LS 5 -TCCGCGTACGATCACTTCAACGTTGCCTGG CACCAGTGAG-3 Lyophilized single-stranded oligonucleotides were resuspended to 1 mM in 10 mM Tris-HCl and 1 mM EDTA and stored at −20 • C until needed. Duplex substrates were prepared by heating equimolar concentrations of complementary strands (denoted with suffixes 'us' and 'ls' indicating upper and lower strands) to 95 • C for 15 min followed by cooling to room temperature overnight. Four substrates were prepared: two wildtype substrates (5 or 3 6FAMlabeled Exo US each with Exo WT LS) and two G:T mismatched substrates (5 or 3 6FAM-labeled Exo US each Exo GT LS). For each substrate, a 150 l reaction containing 8 M protein and 75 pmol of labeled double stranded DNA was prepared in exonuclease buffer (20 mM Tris-OAc pH 7.9, 50 mM K-OAc, 0.1 mg/ml BSA, 10 mM MgCl 2 ,1 mM CaCl 2 ) and incubated at 37 • C. 20 l aliquots were taken at the indicated time points and quenched with 3× loading buffer (80% formamide and 1X TBE). Samples were analyzed by a denaturing (8 M urea) 14% polyacrylamide gel and visualized using Bio-Rad ChemiDoc XRS+.
Identification and classification of OLD homologs
Recombinant expression of P2 OLD produced unstable protein that aggregated and/or precipitated, regardless of the tag or conditions employed. We therefore searched the KEGG database (39) to identify OLD homologs more suitable for structural and biochemical characterization. The initial search was carried out using the E. coli K12 MG1665 OLD homolog (KEGG ID eco:b0876), which is annotated as the uncharacterized protein YbjD and is 18% identical and 35% similar to P2 OLD. These efforts yielded 833 homologs distributed across numerous kingdoms but absent in eukaryotes (Supplementary Table S3). A further search of mapped plasmid genomes available in the Integrated Microbial Genomes database (40) yielded four additional OLD homologs. We then examined the genetic context of each old gene, as inspection of gene neighborhoods has been shown to elucidate unanticipated genetic connections and facilitate new functional predictions (41). Our analyses show that old genes segregate into two primary classes (Supplementary Table S3). Class 1 OLD family members (542/837)--including P2 phage, Escherichia coli, and Salmonella typhimurium--exist as single, isolated genes (Supplementary Figure S1A). Class 2 OLD homologs (295/837) appear in tandem with a UvrD/PcrA/Rep-like helicase (Supplementary Figure S1B), often as an overlapping reading frame. UvrD, PcrA, and Rep are nonhexameric, superfamily 1A helicases that translocate with a 3 -5 polarity and play essential roles in DNA replication, recombination, and repair (42,43). Both classes retain the conserved motifs characteristic of ATPase and Toprim domains, though Class 1 proteins are on average ∼50 amino acids shorter. Each class appears in a number of different phyla, with examples present in both Gram positive and Gram negative bacteria, archaea, and bacteriophage viruses.
A subset of old genes (107/837) exist in species-specific operons (Supplementary Table S3). Neighboring genes within these operons contribute to numerous biological functions including bacterial defense, DNA replication and repair, transcriptional regulation, membrane transport, biosynthesis, metabolism, and signaling (Supplementary Table S3).
We selected numerous candidates from each class for expression studies. Like P2 OLD, most Class 1 homologs behaved poorly during purification. Class 2 homologs, in con-trast, were intrinsically more stable and generally provided greater yields of soluble, monodispersed protein. Specifically, the Class2 OLD homologs from B. pseudomallei and X. campestris p.v. campestris could be purified to homogeneity ( Figure 1A, Supplementary Figure S1C and D) and concentrated to greater than 10 mg/ml without appreciable aggregation or precipitation. Size exclusion chromatography coupled to multi-angle light scattering (SEC-MALS) indicates that Xcc OLD forms stable tetramers in solution while Bp OLD (Bp FL ) exists in equilibrium between dimers and tetramers (Supplementary Figure S1E). In contrast, truncated constructs containing the C-terminal region of each homolog (Xcc CTR , Bp CTR ; Figure 1A and Supplementary Figure S1C and D) were each monomeric by SEC-MALS analysis (Supplementary Figure S1F).
Class 2 OLD proteins exhibit metal-dependent DNA cleavage in vitro
Metal-dependent nicking and cleavage of nucleic acid substrates is a hallmark of Toprim domain-containing proteins (44). To verify that purified Class 2 OLD proteins share a similar activity in vitro, we incubated Bp FL with linearized phage DNA in the presence of different divalent cations ( Figure 1B). Cleavage activity was quantified by measuring the ethidium bromide signal in each lane and calculating the fraction of DNA digested relative to the untreated substrate, which increased under conditions that promote nuclease function. Bp FL exhibits cleavage in the presence of Mg 2+ , degrading approximately 10% of the substrate within an hour. Activity is enhanced in the presence of Mn 2+ , where 60% of the DNA substrate is degraded ( Figure 1B). We also observe weak activity in presence of Zn 2+ and Co 2+ . Bp CTR similarly shows cleavage with Mg 2 (5% degradation) and Mn 2+ (60%), though it is also highly active in the presence of Co 2+ (70%) ( Figure 1C). Given that Co 2+ only stimulates activity in Bp CTR , we suspect that this is a constructspecific artifact rather than a general feature of the OLD nucleases. Xcc FL and Xcc CTR similarly can degrade DNA with Mg 2+ and Mn 2+ but also are partially active in Zn 2+ (Supplementary Figure S2A and B). These data indicate that the critical catalytic resides associated with nuclease function reside in the C-terminal half of OLD proteins and that the N-terminal region containing the ATPase domain is not required for DNA binding or nuclease activity.
We next assessed the ability of Bp OLD to nick and cleave circular plasmids. Bp FL was mixed with supercoiled pUC19 DNA (S) in the presence of different divalent metals and activity was evaluated by the appearance of slower migrating bands as the substrate was nicked (N) and linearized (L) by the enzyme (Figure 1D). Bp FL shows weak nicking activity with all metals as compared to the DNA alone and EDTA controls ( Figure 1D), with Mg 2+ , Mn 2+ and Co 2+ again eliciting the strongest nicking effects. Under these conditions, only Mn 2+ promotes processive cleavage, degrading 55% of the circular substrate ( Figure 1D). Bp CTR shows pronounced nicking activity in the presence of every metal tested, with some processive cleavage stimulated by Mn 2+ (31% degraded), Co 2+ (22% degraded), and Ca 2+ (24% degraded) ( Figure 1E). Xcc FL and Xcc CTR show the strongest nicking and cleavage activities on supercoiled DNA with Mg 2+ , Mn 2+ and Zn 2+ , though XccCTR appears to be able to nick Ca 2+ as well (Supplementary Figure S2C and D). We note that the extent of cleavage in Xcc is less than Bp overall, suggesting it is a less efficient nuclease.
Given the variation in nuclease function we observed for Bp and Xcc OLD with different metals in vitro, we sought to identify which metals are preferentially bound to the CTR constructs in vivo using inductively coupled plasma atomic emission spectroscopy (ICP-AES). This technique can measure the type and amount of metal in a given sample with high accuracy (45). Bp CTR and Xcc CTR constructs were purified using Strep-II tags to avoid any confounding results arising from coincidental metal binding to a His tag. ICP-AES showed calcium to be the most abundant metal associated with both Bp CTR (79.35 mEq) and Xcc CTR (100.27 mEq), followed by magnesium (18.38 and 11.39 mEq, respectively), and then by zinc and nickel (Supplementary Table S1). Sparing amounts of cobalt were detected in the Bp CTR sample, suggesting it is not as physiologically relevant. No manganese was found in either sample (Supplementary Table S1). Given the unexpected presence of calcium, we tested if it may play a role in modulating nuclease activity. Presence of Ca 2+ alone does not promote robust nuclease activity on linear or supercoiled DNA substrates; however, a combination of Ca 2+ and Mg 2+ enhances the activities of both Bp CTR and Xcc CTR above either metal alone (Supplementary Figure S3A and B). Nuclease activity is most stimulated with Mg 2+ in excess and Ca 2+ between 1 and 2 mM. Under these optimal conditions, the activity of both Bp CTR and Xcc CTR is stimulated more than 10-fold on linear DNA compared to Ca 2+ or Mg 2+ alone. Degradation of circular DNA was also enhanced 4-5-fold for Bp CTR and Xcc CTR . Addition of ATP had no appreciable effect on Bp FL cleavage of either substrate (linear versus supercoiled DNA) in the presence of optimal concentrations of Ca 2+ and Mg 2+ (Supplementary Figure S3C), further underscoring the notion that the CTR mediates the DNA binding and nuclease functions. Taken together, our results imply that Ca 2+ acts as an important modulator of OLD family nuclease activity and can potentiate the catalytic effects of these enzymes in the presence of Mg 2+ .
The robust degradation of linear substrates we observe does not explicitly distinguish between exo-and endonuclease activities. To test the exonuclease function and directionality, we incubated Bp CTR with a 40 bp double stranded DNA substrate labeled on the 5 or 3 end with 6carboxyfluorescein (6-FAM). Bp CTR degrades the 3 -labeled substrate in a stepwise manner (Supplementary Figure S4A) while no intermediates or laddering is observed on the 5 -labeled substrate (Supplementary Figure S4B). These findings indicate Bp CTR can act as an exonuclease that digests DNA in the 5 -3 direction as well as an endonuclease that can act on supercoiled, circular DNA substrates.
Overall structures of the Xcc and Bp OLD C-terminal regions
Although full-length Xcc and Bp OLD proteins crystallize in the presence of different adenine nucleotides, diffraction rarely exceeded ∼4Å and interpretable electron density maps could not be obtained owing to severe radiation damage. Isomorphous crystals were never observed for any condition screened thereby preventing merging of data. The truncated Xcc CTR construct, in contrast, yielded crystals that routinely diffracted beyond 2Å and three independent structures were solved using SAD datasets from platinum, mercury, and iodide derivatives (Supplementary Table S2, Supplementary Figures S5 and S6A). These models show strong agreement with an overall RMSD of 0.42-0.44Å and display only slight deviations at the N-terminus near the mercury-binding site and within a flexible loop containing two adjacent glycines (G479 and G480) (Supplementary Figure S6A). Residues 374-387 and 458-463 are disordered in each structure, though present in the purified construct. Crystals of the analogous Bp CTR construct diffracted to a slightly lower resolution (2.2-2.3Å) but produced a more complete structural model (residues 390-594; Figure 2).
Bp CTR contains two domains ( Figure 2A): a Toprim domain (residues 390-504, purple) and a unique helical domain (residues 505-594, yellow) consisting of a five-helix orthogonal bundle and an additional C-terminal amphipathic helix (␣6 H ). ␣6 H extends into a groove along one face of the Toprim's central -sheet, forming extensive hydrophobic interactions ( Figure 2B and C). Helix ␣5 H and the upper portion of ␣6 H , along with the connecting loop, wrap around the hydrophobic helix ␣1 T of the Toprim domain to stabilize the structure further. The contributing hydrophobic side chains are largely conserved among Class 2 OLD proteins (Supplementary Figure S7) and together bury a total surface area of 1341Å 2 . Similar interactions are observed between the domains in the Xcc CTR ( Supplementary Figure S5). Attempts to express Bp CTR and Xcc CTR constructs lacking ␣6 H were unsuccessful as deletion of ␣6 H rendered the proteins insoluble. This likely reflects the critical stabilizing interactions provided by conserved residues along the ␣6 H --sheet interface and the exposure of a large hydrophobic surface if this helix is absent.
Many Toprim family members contain individual structural inserts into the core Toprim fold (Figure 3, Supplementary Figures S8 and S9). These include an insertion of variable size and structure between 2 and ␣2 in topoisomerases, gyrases, and RecR (Insert 1, light blue), short helical insertions between ␣2 and 3 (Insert 2 green) and ␣3 and 4 (Insert 3, cyan) in topoisomerases, a two-stranded -hairpin added between ␣1 and 1 that extends the central  sheet in gyrases and topoisomerase III (Insert 4, red), and an ␣ helix following the shortened 4 in the putative RNase M5 from Aquifex aeolicus (Insert 5, brown). Bp and Xcc OLD lack most of these embellishments but contain an Insert 3 helix (Figure 3, Supplementary Figures S8 and S9, Teal). Class 2 OLD proteins show sequence variability across this insert region (Supplementary Figure S7). Significantly, structural superposition reveals a shift of the Toprim ␣2 and ␣3 helices in OLD proteins relative to all other Toprim family members ( Figure 3B and C, Supplementary Figure S8) while the rest of the core fold is largely unchanged ( Figure 3C, Supplementary Figure S8). The position of these helices is consistent between the Bp CTR and Xcc CTR structures, arguing it is an intrinsic topological feature and not simply due to crystal packing. This comparison also shows that the OLD helical domain is distinctly separated from all other inserts, localized on the opposite side of the Toprim fold ( Figure 3A). We do note that DnaG primases and the putative A. aeolicus RNase M5 contain a helix that structurally aligns with the ␣1 H helix of the OLD helical domain ( Figure 3A, dashed circle).
The helical domain shares structural homology with bacterial controller (C) proteins from restriction-modification (R-M) systems (top hit from the DALI server (46): C.Esp1396I, Z score: 5.1, RMSD 2.5Å) (Figure 4). C proteins act as transcriptional regulators that tune the expression of R-M methyltransferase and restriction genes to ensure that site-specific nuclease activity is delayed until after a bacterial genome is protected by methylation (47). Crystallographic studies have shown that these proteins are dimeric and ␣-helical, with each monomer containing a helix-turnhelix motif (48). Structural superposition aligns Bp helices ␣2, ␣3, ␣5 and ␣6 with ␣1, ␣3, ␣4, and ␣5 of C.Esp1396I ( Figure 4A and B). Bp OLD lacks a helix corresponding to C.Esp1396I ␣2 and contains two additional helices (␣1 and ␣4) that localize to the opposite side of the molecule ( Figure 4A and B). C protein dimers bind DNA operator sites cooperatively to exert concentration-dependent switching of promoter activation and repression (47,49). In this arrangement, ␣4 facilitates dimerization while ␣2 and ␣3 associate with DNA ( Figure 4C). The Bp ␣1 and ␣4 helices would sterically block dimerization and DNA interactions respectively, thus preventing OLD proteins from adopting a similar configuration.
Bp OLD active site suggests a two-metal catalysis mechanism
The Xcc CTR derivative structures contain nothing in their active sites. Bp CTR crystallized in a different space group thereby permitting the helical domain to rotate and scrunch closer to the Toprim domain (Supplementary Figure S6B). Consequently, T506 and E508 shift 1.2 and 1.5Å toward the active site respectively, which facilitates the binding of two magnesium ions in a geometry consistent with two metal catalysis ( Figure 5A, Supplementary Figure S6C). Each magnesium is octahedrally coordinated with a water molecule bridging the two metals where the scissile phosphate would normally sit ( Figure 5A). The metals are spaced 4.9Å apart, suggesting they may move closer together once DNA is engaged. The conserved Toprim glutamate (E400) and the first aspartate of the DxD motif (D455) each provide a ligand to the first magnesium (metal A). The second DxD aspartate (D457) hydrogen bonds with two waters that form two additional metal A ligands. E508, located in ␣H1 of the helical domain, directly coordinates the second magnesium (metal B), while E404 and T506 stabilize additional metal B waters. These side chains are absolutely conserved in Class 2 OLD nucleases (Figure 5B, Supplementary Figure S7). Individual substitutions of metal A ligands (E400A, D455A, D457A) and metal B ligands (E404A, T506A, E508A) yielded no discernible effects on cleavage activity ( Figure 5C and D). Thus, combinations of mutations (3A, E400A/D455A/D457A; 3B, E404A/T506A/E508A) were generated. Mutant combinations of either the metal A or the metal B interacting residues together completely abolish Bp OLD nuclease activity on linear DNA substrates in vitro ( Figure 5C). These substitutions impair the processive degradation of circu-lar plasmids in the presence of Mg 2+ and Ca 2+ , though some nicking activity is still retained ( Figure 5D). Simultaneous mutation of both metal A and metal B sites together (2A/2B, D455A/D457A/T506A/E508A) eliminates processive cleavage and significantly reduces nicking activity relative to the 3A and 3B substitutions ( Figure 5C and D). This suggests that a single metal in either site can facilitate nicking but both sites are required for processive cleavage and degradation.
We also identify a conserved lysine residue in ␣5 H (K562) that extends toward the active site ( Figure 5A and B, Supplementary Figure S7), separated from metal A by 5.5Å and from metal B by 3.8Å. K562A and K562E mutations similarly impair processive nuclease activity without perturbing direct interactions with either magnesium ( Figure 5C and D). K562A and K562E mutations, however, retain the ability to nick DNA ( Figure 5D), similar to the perturbation of the individual metal A and B binding sites. Together these data define the key catalytic machinery of Class 2 OLD nucleases and support a two-metal catalysis mechanism for processive nuclease activity.
The organization of the Bp OLD active site is structurally conserved in RNase M5 enzymes and DnaG primases (Figure 6, Supplementary Figure S9). Along with the invariant Toprim glutamate and conserved DxD aspartates, D31 and E110 in the A. aeolicus RNase M5 homolog spatially align with E404 and E508 in Bp OLD ( Figure 6A). E110 localizes to a C-terminal helix that superimposes with ␣1 H of the Bp OLD helical domain ( Figure 3A, dashed circle; Supplementary Figure S9, inset). DnaG primases contain a similar set of catalytic machinery (16). The analogous C-terminal acidic residue of the DnaG Toprim (D345 in Staphylococcus aureus), however, is directed away from the active site via interaction with a conserved arginine residue (R146 in Staphylococcus aureus) in the adjacent N-terminal subdomain of the RNA polymerase core ( Figure 6B). As a result, a third metal (metal C) binds in the position occupied by E508 in Bp OLD, coordinated by a conserved aspartate residue immediately upstream (D343 in the Staphylococcus aureus) (20) (Figure 6B). The arrangement of metals relative to the core catalytic side chains in these enzymes is distinct from the coordination observed in topoisomerases, where metal B is positioned closer to the DxD motif in the absence of additional acidic residues ( Figure 6C). Metal B and the catalytic lysine in OLD proteins occupy the same position as the catalytic tyrosine that forms a covalent linkage with DNA in toposiomerases (Y782 in Saccharomyces cerevisiae topoisomerase II) (50). These differences highlight the evo-Nucleic Acids Research, 2019, Vol. 47, No. 17 9459 lutionary fine tuning of the Toprim scaffold for unique biological functions.
Structural model for DNA binding
Our attempts to co-crystallize OLD proteins with nucleic acids have thus far been unsuccessful. The robust nuclease activity exhibited by Bp CTR suggests that this fragment alone can associate with DNA in a manner that is competent for cleavage. We therefore computationally modeled DNA onto the Bp CTR structure to gain insight into how OLD nucleases interact with their substrates. Calculation of surface electrostatics identifies four basic patches on one face of Bp CTR that flank a small cleft containing the active site ( Figure 7A). Patch 1 lies between ␣3 H and ␣4 H in the helical domain, formed by R552, K555 and R559 (Supplementary Figure S10A). The catalytic K562 lysine on ␣5 H constitutes patch 2. As noted above, this extends into the active site and has a direct role in nuclease activity ( Figure 5C and D). Patch 3 localizes along ␣3 T in Toprim domain, comprised of R467 and K468, while patch 4 contains R405, which extends from ␣1 T toward 2 (Supplementary Figure S10A). Modeled B form DNA can bind patches 1 and 2 and part of patch 3, but sterically clashes with the protein beyond the active site cleft (Supplementary Figure S10B). In contrast, we obtain a near optimal fit with a bent DNA substrate taken from a co-crystal structure of the bacterial mismatch repair enzyme MutS (51) ( Figure 7A). The presence of a G:T mismatch in this substrate kinks the DNA at a 45 • angle (Supplementary Figure S10C), allowing it to interact unencumbered with all four basic patches ( Figure 7A). Bp CTR does not show any preference for a substrate containing a G:T mismatch in an exonuclease assay (Supplementary Figure S4C and D).
Mutation of positive residues in patch 1 (R552A/K555A) and patch 3 (R467A/R468A) reduces the DNA cleavage activity of Bp CTR on both supercoiled plasmids and linear lambda DNA ( Figure S10D and E), thus indirectly implicating these regions as important for binding. R405A (patch 4) and R559A (patch 1) substitutions do not significantly impair the overall cleavage compared to wildtype (Supplementary Figure S10D and E). We do note, however, accumulation of uncut, supercoiled DNA with every mutant (Supplementary Figure S10E), suggesting each region contributes at least partially to orienting DNA in a manner that promotes endonuclease function. A truncation construct deleting the helical bundle helices ␣1 H -␣5 H but retaining the stabilizing ␣6 H helix ( 505-577) severely impairs both nuclease degradation and nicking (Supplementary Figure S10D and E), further highlighting the importance of this region in DNA binding and catalytic function.
The orientation of the modelled substrate would clash with both the Toprim core ␣2 and ␣3 helices in their canonical positions and the Insert 1 segments present in topoisomerases and gyrases ( Figure 7B), suggesting that OLD nucleases associate with DNA differently than other Toprim proteins. Importantly, this arrangement places one strand directly into the Bp OLD active site cleft with a phosphate residue situated between metal A and metal B ( Figure 7C). K562 is 2.8Å away from the back side of the scissile phosphate, where it would be primed either to stabilize the charge in the transition state along with metal B and/or protonate the leaving group following cleavage. This favors the proposed catalytic mechanism diagrammed in Figure 7D.
DISCUSSION
Here we have described the structural and biochemical characterization of the Class 2 OLD proteins from B. pseudomallei and X. campestris pv. campestris. Bp and Xcc OLD catalyze metal-dependent nicking and cleavage of DNA substrates in vitro. While the N-terminal region containing the ATPase domain is dispensable for these activities, its presence mediates higher ordered oligomerization of Class 2 OLD proteins (Supplementary Figure S1E and F). We suspect that the ATPase domain may act in a regulatory capacity, controlling how and when the catalytic C-terminal region accesses substrates.
The Bp CTR structure elucidates the catalytic machinery of Class 2 OLD proteins. In addition to the canonical invariant glutamate (E400) and DxD aspartates (D455 and D457), we identify E404, T506 and E508 as side chains that play a role metal binding. These residues are absolutely conserved among Class 2 OLD proteins ( Supplementary Figure S7) and together coordinate two bound magnesium ions in a geometry that supports two-metal catalysis ( Figures 5A and 7C and D). Single point mutations at these sites are tolerated, whereas triple mutant substitutions removing all metal coordination completely abolish processive degradation of substrates ( Figure 5C and D). We speculate that a water may be capable substituting as a ligand when a single metal binding residue is mutated, especially since some of the metal contacts in the Bp CTR crystal are water mediated in the absence of substrate.
We also find K562 in the Bp helical domain is critical for efficient catalytic function ( Figure 5). K562 is directed toward the putative scissile phosphate in our Bp CTR -DNA bound model ( Figure 7C), where it would be poised to stabilize the developing negative charge in the transition state and/or protonate the leaving group. Significant perturbation to one part of the key catalytic machinery (metal A, metal B, or K562) still permits Bp CTR to nick and linearize plasmid DNA; however, processive DNA cleavage is only achieved when the three elements are intact ( Figure 5C and D). Truncation of the helical domain ( 505-577) or simultaneous mutation of both metal sites (2A/2B mutant) impairs both functions ( Figure 5C and D, Supplementary Figure S10D and E). Together these results argue that nicking only requires a single metal but full nuclease activity in Class 2 OLD proteins requires proper coordination of two metals and the presence of the conserved lysine. Class 1 OLD proteins are on average ∼50 amino acids shorter and diverge from their Class 2 counterparts in portions of the C-terminal region, which prohibits the unambiguous identification of Class 1 catalytic machinery by sequence alignment alone. Structural and biochemical characterization of the Class 1 OLD homolog from Thermus scotoducts indicates that the mechanisms and machinery we describe here for nuclease cleavage is conserved (Schiltz and Chappie, in review).
The spatial organization of acidic residues in the Bp OLD active site directly mirrors that of RNase M5 maturases ( Figure 6, Supplementary Figure S9). In addition to conserved catalytic residues previously identified through the biochemical characterization of Bacillus subtilis RNase M5 (14), our structural comparison with the available A. aeolicus RNase M5 structure suggests that a C-terminal glutamate (E96 in B. subtilis; E110 in A. aeolicus) will also be critical for 5S RNA maturation. Interestingly, A. aeolicus RNase M5 appears to be truncated and circular permutated. Many other homologs including B. subtilis contain Cterminal helical extensions (13,14) that could fold into a domain like that observed in Class 2 OLD proteins. DnaG primases also share this conserved arrangement of active site residues (16,17); however, structural constraints imposed by the N-terminal subdomain in the RNA polymerase core prevent the coordination of the Toprim metal B in the same manner. A third metal observed in the Staphylococcus aureus DnaG structure (20), which occupies the same position as E508 in Bp OLD, appears to compensate. Importantly, this common active site blueprint is distinct from topoisomerases, gyrases and RecR (Supplementary Figure S9). The overall structural similarity between primases, maturases, and OLD nucleases thus implies a common evolutionary lineage and further segregates the Toprim family into distinct subgroups based on differences in metal coordination, with the distinguishing feature being the presence or absence of additional acidic residues beyond the canonical Toprim glutamate and DxD aspartates.
Our initial biochemistry indicated that Bp CTR and Xcc CTR were more active in Mn 2+ ; however, further analysis by ICP-AES analysis revealed that both of the purified constructs preferentially contained bound Ca 2+ and Mg 2+ and no Mn 2+ (Supplementary Table S1). Addition of calcium potentiates Bp CTR activity with magnesium in vitro. While calcium typically inhibits most nucleases (2), some enzymes like the Staphylococcal nuclease utilize calcium in their active site to cleave DNA (52). Additionally, DNase I is known to be most active in the presence of both magnesium and calcium (53). In the case of DNase I, however, magnesium occupies the active site while calcium binds to other regions of the structure to act as an allosteric enhancer (54). Whether calcium plays a direct role in the active site or modulates activity indirectly, possibly by stabilizing the protein or enhancing DNA binding, remains to be determined. Importantly, the Class 1 OLD homolog from Thermus scotoductus exhibits the same general affinity for calcium and magnesium and shows the same stimulatory response (Schiltz and Chappie, in review). This implies that utilization of calcium and magnesium is conserved and functionally relevant among all OLD homologs.
Computational modeling shows that a bent DNA substrate engages all four basic patches on the surface Bp CTR while B form DNA would sterically clash with portions of the Toprim domain ( Figure 7A and Supplementary Figure S10B). Mutations in patches 1 and 3 reduce Bp CTR activity on both substrates (Supplementary Figure S10D and E), indirectly supporting a role for these regions in DNA binding. These patches flank the active site cleft and in our model anchor the DNA duplex such that one strand is positioned in the active site with a phosphate situated directly between the two bound magnesium ions (Figure 7). The catalytic K562 sidechain resides in patch 2 and engages the substrate at one end of this cleft. Although mutation of R405 in patch 4 does not significantly alter nuclease activity, we note an observable accumulation of the uncut, supercoiled substrate compared to wildtype (Supplementary Figure S10E). This implies patch 4 partially contributes to orienting DNA in a manner that promotes endonuclease function.
The orientation of DNA suggested by our model differs significantly from how other Toprim proteins engage their substrates. Importantly, the structural constraints of this arrangement explain (i) the lack of an insert 1 in OLD Toprim domains, (ii) the significant shift in the positions of the ␣2 and ␣3 Toprim core helices in Bp CTR and Xcc CTR and (iii) the position of the helical domain on the opposite side of the core Toprim fold. In the absence of DNA bound structure, we cannot rule out that substrate binding induces further structural changes in the OLD CTR, including those that would permit the unhindered association with an extended B form DNA duplex. Conformational rearrangements could also be coupled to ATP hydrolysis in the fulllength protein.
Our binding model, however, does not preclude Bp OLD from also binding DNA ends. Here the terminal phosphate would become the scissile phosphate. This arrangement is equally compatible with the catalytic machinery and indeed Bp CTR exhibits 5 -3 exonuclease activity (Supplementary Figure S4). P2 OLD exhibits exonuclease activity in vitro (28) and Bp OLD readily degrades linear lambda DNA in the presence of Mn 2+ or Mg 2+ and Ca 2+ as detailed above. Bp and Xcc OLD also can nick and cleave circular plasmids, suggesting a robust endonuclease activity. OLD nucleases thus appear to act as either an endo-or exonuclease depending on the substrate presented ( Supplementary Figure S4E). The Mre11 nuclease, which functions in double strand break repair and processing, displays a similar duality: it functions as a 3 -5 exonuclease on double strand DNA and an endonuclease on single strand DNA at protruding 3 -and 5 -ends and 3 branches (55-57). Further biochemical characterization will be necessary to determine how these different modes of cleavage contribute to OLD function in vivo.
While the role of P2 OLD in bacteriophage lambda interference is well documented (23), little is known about the function of other OLD homologs in vivo. Our bioinformatics data indicate that OLD proteins are widely distributed across bacteria, archaea, and viral genomes. The presence of old genes in species-specific operons and on mobile elements suggest they confer a functional advantage. We speculate that these proteins may play a novel role in DNA repair and/or replication based the specific association of UvrD/PcrA/Rep helicase with Class 2 OLD proteins. Future genetic experiments will be necessary to validate this hypothesis and define the biological roles of OLD nucleases more explicitly.
DATA AVAILABILITY
The atomic coordinates and structure factors for the Xcc CTR Hg, Pt, and I derivatives are deposited in the Protein Data Bank with accession numbers 6NJW, 6NJX and 6NJV respectively. The atomic coordinates and structure factors of the Bp CTR structure are deposited in the Protein Databank with the accession number 6NK8. | 10,473.4 | 2019-08-10T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Processing Characteristics of Micro Electrical Discharge Machining for Surface Modification of TiNi Shape Memory Alloys Using a TiC Powder Dielectric
Titanium-nickel shape memory alloy (SMA) has good biomedical application value as an implant. Alloy corrosion will promote the release of toxic nickel ions and cause allergies and poisoning of cells and tissues. With this background, surface modification of TiNi SMAs using TiC-powder-assisted micro-electrical discharge machining (EDM) was proposed. This aims to explore the effect of the electrical discharge machining (EDM) parameters and TiC powder concentration on the machining properties and surface characteristics of the TiNi SMA. It was found that the material removal rate (MRR), surface roughness, and thickness of the recast layer increased with an increase in the discharge energy. TiC powder’s addition had a positive effect on increasing the electro-discharge frequency and MRR, reducing the surface roughness, and the maximum MRR and the minimum surface roughness occurred at a mixed powder concentration of 5 g/L. Moreover, the recast layer had good adhesion and high hardness due to metallurgical bonding. XRD analysis found that the machined surface contains CuO2, TiO2, and TiC phases, contributing to an increase in the surface microhardness from 258.5 to 438.7 HV, which could be beneficial for wear resistance in biomedical orthodontic applications.
Introduction
TiNi SMAs have broad application prospects in the aerospace, biomedical, and automobile fields due to their excellent biocompatibility, superelasticity, shape memory effect, and wear resistance [1]. Because the Young's modulus of titanium-nickel alloys is lower than that of other biomedical implant materials, they are widely used for medical implants [2]. In clinical medical applications, product safety and reliability are the primary requirements for long-term implants. However, amino acids and proteins in bodily fluids will accelerate metal corrosion, promoting the release of toxic nickel ions [3]. The release of metallic ions is detrimental to osseointegration and ultimately causes clinical failure [4]. Therefore, the surface modification of titanium-nickel alloys plays an important role in improving corrosion resistance and surface biocompatibility.
Previous studies have shown that a thin surface layer of titanium oxide (2-20 nm) will naturally form on the surface of TiNi alloys, and this layer can act as a barrier to human body corrosion and chemical reactions to limit the diffusion of nickel ions [5]. However, this film is unstable in the human body's complicated and volatile environment and can easily corrode and fall off the alloy material. Therefore, surface treatment techniques have been developed to treat TiNi alloys. Titanium oxide film has good blood compatibility [6] and is biologically inert [7], and it can effectively prevent the precipitation of nickel ions. Several surface treatment methods have been used commercially, such as anodic oxidation [8], plasma immersion ion implantation (PIII) [9], coating [10], and electrochemical polishing [11]. Qin et al. [12] used a glycerol electrolyte to obtain TiO 2 nanotubes on the surface of the TiNi alloy through anodic oxidation, which effectively improved the biocompatibility of the alloy.
Electrical discharge machining (EDM) is an unconventional machining technology that uses a series of pulse discharges between the tool and workpiece to process the workpiece [13]. It is mainly used for high-precision processing of difficult-to-cut materials. Wyszynski et al. [14] realized the high-precision micro-hole machining of cubic boron nitride and determined the optimal parameters. Wu et al. [15] developed a cut-side micro-tool suitable for the micro-EDM system and successfully realized the deep and high aspect ratio micro-holes machining on tungsten cemented carbide. In order to study the machining mechanism of micro-EDM, Liu et al. [16] analyzed the polarity effect of micro-EDM based on the movement characteristics of electrons and positive ions in the discharge plasma channel. Almacinha et al. [17] established an electro-thermal model for a single discharge of an electric discharge machining process based on the Joule heating effect theory. Roy et al. [18] analyzed the physical phenomenon behind occurrences of unusually high discharging points in reverse micro EDM by establishing a numerical model of ions and electrons' movement in the dielectric during machining. To realize the micro-EDM machining of the three-dimensional structure, Roy et al. [19] used reverse micro EDM to generate different shapes of protruded micro features, such as 3D hemispherical and 3D coni-spherical shapes. To study EDM's surface characteristics, Hsieh et al. [2] showed that the EDM process could successfully machine the ternary TiNiZr SMAs while ensuring its shape recovery ability. The recast layer generated on the machined surface can adhere to the substrate effectively by surface alloying, enhancing wear resistance [5]. Peng et al. [3] have reported that EDM can form a nanoporous biocompatible layer on the surface of Ti-6Al-4V, which is conducive to cell growth and proliferation.
To develop improved surface modification technologies, a new method of TiC-powder-assisted micro-EDM is proposed for the formation of a titanium oxide surface, and experiments were performed on a TiNi SMA. The effects of PMEDM parameters on the machining characteristics of TiNi SMA were investigated experimentally, and then, the surface roughness, surface morphology, and microhardness were characterized by relevant characterization techniques. Finally, the thickness and composition of the recast layer were studied deeply. Figure 1 illustrates the principle of the EDM process with TiC powder. By adding TiC-mixed powder particles with a particle size of 2 µm in deionized water, the tool electrode utilizes reciprocating movement to complete the powder-mixed EDM (PMEDM). To study the mechanism of the PMEDM, the following two assumptions were made: (1) the TiC-mixed powder particles are spherical, and (2) the electric field is an electrostatic field, shown as yellow circles and black lines in Figure 2a, respectively.
Principle and Mechanisms
According to the principle of electronics [20], the conductive particles are polarized into bound charges under an electric field's action. When high voltage is applied between the electrode and workpiece, the TiC particles are polarized under the action of the electric field and become a bound charge. According to the electrostatic field theory [21], no matter how strong the electric field is applied to the conductor, the electrostatic balance of the conductor will make the internal field strength of the conductor zero. In order to achieve electrostatic equilibrium, the inside of the TiC particle will generate an electric field opposite to the uniform electric field, E0, as shown in Figure 2b. Therefore, the superposition of the electric field generated by the bound charge and the external electric field distorts the uniform electric field between the electrode and workpiece, as shown in Figure 2c. When the electric field's direction generated by the bound charge overlaps with the direction of the uniform electric field, the actual electric field intensity will reach the maximum value, i.e., points A and B in Figure 2c, and discharge breakdown takes priority here. The maximum value is given by the following [22]: where ε 1 is the dielectric coefficient of the dielectric fluid, ε 2 is the powder's dielectric coefficient, and E0 is the electric field strength of the uniform electric field. The electrostatic field theory shows that the electric field intensity inside an ideal conductor is zero, and its relative dielectric constant is infinite [21]. Therefore the dielectric coefficient, ε 2 , of the conductor tends toward infinity, and takes the limit based on Equation (1) to reach the maximum value, Emax, as follows: Therefore, the addition of the mixed powder increases the electric field strength between the electrode and the workpiece by a factor of three and expands the discharge gap by a factor of three, which promotes the removal of processing debris. and a transistor-type pulse generator (Tektronix AFG3000C, Tektronix, Inc., Beaverton, OR, USA). The pulse waveform is generated by the waveform generator and amplified by a high-voltage amplifier to display the signal on the oscilloscope. The microelectrode movement in micro-EDM was realized by the three-axis micro-nano-motion platform (PI, Germany; M511.DD). A digital oscilloscope (Tektronix MDO 3000) was used to monitor the pulse signal in real-time during the machining process. As demonstrated by Lin et al. [23], negative polarity processing can provide a larger MRR, while positive polarity processing can provide a thicker recast layer on the processed surface. Therefore, we chose to employ positive polarity processing in this study. A micropump was used to circulate and mix the dielectric fluid to ensure that the TiC powder was uniformly dispersed. The detailed experimental processing parameters are shown in Table 1.
Experimental Materials and Measurements
In the EDM process, the workpiece material used in the experiments was TiNi SMA (China Tai'zhou Cinoo Mental material Co., Ltd.). The as-received samples were a rectangular parallelepiped with a length of 100 mm, a width of 300 mm, and a thickness of 0.5 mm. Its elemental composition and main thermophysical properties are presented in Tables 2 and 3, respectively. Brass sheets of 1 mm thick were used as raw material for fabricating the microelectrodes. At present, deionized water is widely used as the dielectric fluid due to its weak electrical conductivity, which can also avoid carbon deposition in the spark oil. TiC powder was added to deionized water at different concentrations. Its physical properties are listed in Table 4. The fabrication of microelectrode and microcavity was shown in Figure 4, and the detailed process was as follows. First, according to the microcavity to be machined, a corresponding microelectrode model was designed (Figure 4a). Secondly, import model parameters into the low-speed wire EDM machine (LS-WEDM, Sodick, Japan; AQ250Ls) CNC System ( Figure 4b) and start cutting (Figure 4c) to fabricate a single microelectrode with a size of 0.8 mm × 1 mm (Figure 4d). The microelectrode was then employed in the micro-EDM to process a microcavity with a depth of 100 µm (Figure 4e). Finally, a microcavity with a high-quality surface was obtained successfully (Figure 4f). A scanning electron microscope (SEM) manufactured by TESCAN, Czech Republic (model: LYRA3 XMH) was used to observe the surface morphology. A laser scanning confocal microscope (LSCM, Keyence, Japan; VK-X260K) was used to measure the surface roughness. Analyses of the EDM-treated surfaces were performed at room temperature using X-ray diffraction (XRD, MiniFlex600, Rigaku Corporation, Tokyo, Japan) at a 2θ scanning rate of 3 • min −1. A microVickers hardness tester (MHV-1000A, HuaXing, Lai'zhou, China) was used to measure the surface hardness under a load of 100 g for 10 s. The average hardness value was taken from at least four test readings for each specimen.
Discharge Waveforms Comparison of EDM and PMEDM
A digital oscilloscope acquired the discharge voltage waveforms to study the effect of the addition of TiC powder on the electro-discharge behavior of the material. Processing parameters, including a pulse width of 4 µs, a duty of 50%, and a machining voltage of 80 V, were determined before micro-EDM processing. The voltage waveforms for the micro-EDM were acquired by an oscilloscope for both without TiC powder addition and with the addition of 5 g/L TiC powder, as shown in Figure 5. Taking the same time (8 µs) for comparison, the number of pulses for the discharge voltage in Figure 5b was significantly higher than in Figure 5a. This result indicates that the addition of TiC powder significantly improved the discharge characteristics of the TiNi SMA in micro-EDM. Moreover, a multiple discharging effect was observed within a single period in Figure 5b, which indicates that the TiC powder refined the discharging energy.
Influence of Machining Process Parameters on the Material Removal Rate
The MRR was calculated as the ratio of the volume of material removed from the workpiece to the processing time (mm 3/min). The volume of material removed was obtained by analyzing the three-dimensional surface topography scanned by LSCM. Figure 6 shows the effect of the concentration of TiC powder on the MRR under different machining voltages and pulse widths. It is clear that the MRR increases with increasing concentration of TiC powder; regardless of the machining voltages and pulse widths, the maximum MRR is obtained at a concentration of 5 g/L. This result confirms that the discharge frequency is increased, and the discharge energy is improved by the addition of TiC powder to the dielectric fluid. In addition, when the concentration of TiC powder is greater than 5 g/L, the MRR tends to decrease. This trend agrees with that reported by Jahan et al. [24]. When the powder concentration is excessively high, the large number of conductive particles between the two poles cannot be removed easily and cause secondary sparking. Eventually, this leads to instability of the machining process and increases the machining time.
It is noted that the MMR increases with increasing machining voltage, as shown in Figure 6a. A high machining voltage can effectively increase the discharge channel's current density, which facilitates the melting and evaporation of materials. Figure 6b also shows an increase in the MRR with the pulse duration. The pulse duration determines the level of discharge energy, and high pulse durations can provide the necessary time to transmit discharge energy. Hence, a high MRR occurs at higher machining voltages and longer pulse widths in the micro-EDM process. Figure 7 shows the effect of the concentration of TiC powder on the surface roughness under different machining voltages and pulse durations. In both cases, the surface roughness decreases with increasing TiC powder concentration up to 5 g/L; then, as the TiC powder concentration increases further, the surface roughness tends to increase. Liew et al. [25] recently showed that adding an appropriate amount of conductive powder to the dielectric fluid can uniformly disperse the discharge energy and reduce the craters' size, thereby improving the surface finish. Nevertheless, high-concentration TiC powder tends to accumulate on the workpiece's surface, which severely inhibits the transfer of discharge energy. Moreover, the deposited powder and melting material cannot be removed from the machining gap in time, which causes more frequent secondary sparking and circuiting. This effect will increase surface roughness. It is noted that the surface roughness increases with increasing voltage in Figure 7a and pulse width in Figure 7b. Under low machining voltages and short pulse widths, the pits on the machined surface were small and shallow, and the melting material was easily removed. When the machining voltage and pulse width were increased, the discharge time became longer, and the single pulse discharge energy increased. This caused the radius and depth of the discharge marks to increase, leading to an increase in the surface roughness.
Surface Morphology of the EDM-Treated TiNi SMA
In this study, the surface microtopography and surface roughness (Ra) was used to evaluate surface quality deviations. Five surface roughness measurements (Ra) made at different positions on the bottom of identical microcavity were averaged. Figure 8 shows SEM micrographs and surface roughness of the microcavities bottom variation with the machined voltage (60-20 V) increased under an applied pulse width of 4 µs and a TiC powder concentration of 5 g/L. When the machined voltage was 60 V, the machined surface was relatively smooth due to the low discharge energy, and the surface roughness was 0.645 µm. However, as the machined voltage increased, the surface quality gradually decreased. Until the machined voltage reaches 100 V, numerous discharge craters and melting drops were observed on the machined surface. Nevertheless, if the machined voltage increased to 120 V continuously, the machined surface became rougher, and the surface roughness up to 1.609 µm. This trend was consistent with that reported by Xu et al. [26]; the surface roughness (Ra) increases with greater machined voltage. This was because the electron flow in the channel had an enhanced bombardment effect on the anode under the condition of high voltage, and many melting drops, debris, micropores were observed on the surface. Using a TiC powder concentration of 5 g/L, a machined voltage of 80 V, the influence of different pulse widths on SEM micrographs and bottom surface roughness of the microcavities were discussed in Figure 9. Similar to machined voltage, it can be found that the machined surface of a short pulse width contains shallower and smaller discharge craters compared to long pulse width; because lower pulse width has a smaller MRR, the TiC powder has enough time to refine the discharge energy at the same machining depth, resulting in a smooth bottom surface. As the pulse width increased to 10 µs, the surface roughness increased up to 1.628 µm. The analysis found that with the substantial increase of the pulse width, the current density in the discharge channel continued to increase, and the bombardment effect of charged particles was enhanced, which led to an increase in the radius and depth of the discharge marks. Moreover, the increase of discharge debris particles in long pulse width machining caused short circuits and arcing phenomenon and led to an unstable machining process [27]. Meanwhile, the melting material cools and solidifies on the surface of the workpiece during the deionization stage. The effect of TiC concentrations on SEM micrographs and surface roughness of the microcavities bottom under an applied machining voltage of 80 V and pulse width of 4 µs is given in Figure 10. Results indicate that the addition of TiC powder improved the surface quality, and the surface roughness was always lower than that of the material without the addition of mixed powder. When the TiC concentration was approximately 5 g/L, the minimum surface roughness was Ra 0.828 µm. According to Bui et al. [28], the addition of powder reduces the dielectric resistivity and increases the discharge gap. Conductive particles such as graphite [29], cobalt [30], and molybdenum [31] can form chains across the electrodes and enlarge the gap distance, which not only allows more working fluid to flow through but also lowers the single pulse explosion pressure, resulting in smaller and shallower craters. As mentioned in Section 4.1, TiC powder can disperse the discharge energy and increase the discharge gap. The discharge distribution becomes more uniform. Furthermore, many debris, micropores, and microcracks were observed on the machined surfaces with the pure dielectric fluid. According to the measurement results of surface roughness, when using higher TiC concentrations, such as 10 g/L, the sizes of the discharge craters are increased compared with those obtained with lower TiC concentrations.
Influence of Micro-EDM Parameters on the Recast Layer
The formation of the recast layer is affected by many factors, and the single pulse discharge energy is a key parameter for the formation of the recast layer [32]. Cross-section images of the recast layers in TiC-dielectric under different pulse widths are shown in Figure 11. As shown in Figure 11a-d, the thickness of the recast layer slightly increased with increasing pulse width. The discharge energy of micro-EDM was relatively low, which resulted in minimal changes to the thickness of the recast layer. The thickness of the recast layer measured in this experiment ranged from 1 to 3 µm, which was consistent with the research of Tan and Yeo [33]. The recast layer was composed of materials from the workpiece, electrode, and dielectric fluid. Increasing the pulse width caused more material to be melted and resolidified, thereby increasing the thickness of the recast layer. The MRR reached a maximum at 5 g/L TiC, and the melting material and deposited particles could be effectively removed. Therefore, a thin recast layer was formed in the 5 g/L TiC dielectric. The high concentration of TiC powder caused frequent secondary discharge and short circuits, which will generate a large amount of heat. This heat accumulation on the surface of TiNi SMA is beneficial for the formation of the recast layer. Jahan et al. [34] found that TiO 2 has good biocompatibility and can provide a protective coating for biomedical implant applications. Hence, micro-EDM can be used to modify the surface of TiNi SMA and improve the biocompatibility of the titanium-nickel alloy. The machined surfaces stand the continuous heating and cooling processes in EDM, which form a surface layer composed of the recast layer, heat affected zone, and base metal [34]. The recast layer has a great influence on the surface properties. Therefore, it is necessary to study the changes in surface microhardness. The microhardness curve at different distances from the center of the cavity and microhardness measurement of the substrate surface as shown in Figure 14a,b, respectively. The results show that the surface microhardness can reach 438.7 HV after micro-EDM, which is approximately 1.7 times the base material hardness. Chen et al. [35] recently showed that machined surfaces' hardening effect originates from the recast layer. Combined with Figure 13, the XRD analysis revealed that the machined surface was composed of TiC, Ti2O, Cu2O, and TiNi, which could improve the surface's microhardness.
Conclusions
The machining performance and feasibility of modifying the surface of a TiNi SMA through micro-EDM with the addition of TiC particles to the dielectric were discussed in this study. Discharge voltage waveforms demonstrated that the number of pulses in TiC-dielectric was significantly higher than in pure dielectric. The MRR, surface roughness, and thickness of the recast layer increased with an increase in discharge energy. MRR increased with an increase of TiC concentration, reaching a maximum at 5 g/L. Adding TiC particles to the dielectric can improve the surface finish by observing the surface morphology, and the machined surface in TiC-dielectric has smaller melting drops and craters compared to deionized water. The best surface finish occurred at a TiC concentration of 5 g/L. A layer ranging from 1 to 3 µm was obtained on the machined surface. The surface microhardness increased due to the formation of a recast layer containing TiC, Cu 2 O, and Ti 2 O; its hardness could reach 438.7 HV. Thus, this method can improve the wear resistance of the implant material, especially for orthodontic applications. | 5,019 | 2020-11-01T00:00:00.000 | [
"Materials Science"
] |
Transverse target spin asymmetries in exclusive $\rho^0$ muoproduction
Exclusive production of $\rho^0$ mesons was studied at the COMPASS experiment by scattering 160 GeV/$c$ muons off transversely polarised protons. Five single-spin and three double-spin azimuthal asymmetries were measured as a function of $Q^2$, $x_{Bj}$, or $p_{T}^{2}$. The $\sin \phi_S$ asymmetry is found to be $-0.019 \pm 0.008(stat.) \pm 0.003(syst.)$. All other asymmetries are also found to be of small magnitude and consistent with zero within experimental uncertainties. Very recent calculations using a GPD-based model agree well with the present results. The data is interpreted as evidence for the existence of chiral-odd, transverse generalized parton distributions.
Introduction
The spin structure of the nucleon is a key issue in experimental and theoretical research since a few decades.The most general information on the partonic structure of hadrons is contained in the generalised parton correlation functions (GPCFs) [1,2], which parameterise the fully unintegrated, offdiagonal parton-parton correlators for a given hadron.These GPCFs are 'mother distributions' of the generalised parton distributions (GPDs) and the transverse momentum dependent parton distributions (TMDs), which can be considered as different projections or limiting cases of GPCFs.While GPDs appear in the QCD-description of hard exclusive processes such as deeply virtual Compton scattering (DVCS) and hard exclusive meson production (HEMP), TMDs can be measured in reactions like semiinclusive deep inelastic scattering (SIDIS) or Drell-Yan processes.The GPDs and TMDs provide complementary 3-dimensional pictures of the nucleon.In particular, when Fourier-transformed to impact parameter space and for the case of vanishing longitudinal momentum transfer, GPDs provide a three dimensional description of the nucleon in a mixed momentum-coordinate space, also known as 'nucleon tomography' [3,4].Moreover, GPDs and TMDs contain information on the orbital motion of partons inside the nucleon.
The process amplitude for hard exclusive meson production by longitudinal virtual photons was proven rigorously to factorise into a hard-scattering part and a soft part [5,6].The hard part is calculable in perturbative QCD (pQCD).The soft part contains GPDs to describe the structure of the probed nucleon and a distribution amplitude (DA) to describe the one of the produced meson.This collinear factorisation holds in the generalised Bjorken limit of large photon virtuality Q 2 and large total energy in the virtualphoton nucleon system, W , but fixed x Bj , and for |t|/Q 2 1.Here t is the four-momentum transfer to the proton and x Bj = Q 2 /2M p ν, where ν is the energy of the virtual photon in the lab frame and M p the proton mass.
For hard exclusive meson production by transverse virtual photons, no proof of collinear factorisation exists.In phenomenological pQCD-inspired models k ⊥ factorisation is used, where k ⊥ denotes the parton transverse momentum.In the model of Refs.[7,8,9], electroproduction of a light vector meson V at small x Bj is analysed in the 'handbag' approach, in which the amplitude of the process is a convolution of GPDs with amplitudes for the partonic subprocesses γ * q → V q and γ * g → V g.Here, q and g denote quarks and gluons, respectively.The partonic subprocess amplitudes, which comprise corresponding hard scattering kernels and meson DAs, are calculated in the modified perturbative approach where the transverse momenta of quark and antiquark forming the vector meson are retained and Sudakov suppressions are taken into account.The partons are still emitted and reabsorbed from the nucleon collinear to the nucleon momentum.In such models, cross sections and also spin-density matrix elements for HEMP by both longitudinal and transverse virtual photons can be well described simultaneously [7,10].
At leading twist, the chiral-even GPDs H f and E f , where f denotes a quark of a given flavor or a gluon, are sufficient to describe exclusive vector meson production on a spin 1/2 target.These GPDs are of special interest as they are related to the total angular momentum carried by partons in the nucleon [11].A variety of GPD fits using all existing DVCS proton data has shown that the contributions of GPDs H f are dominant.They are constrained [12,13,14,15] over the presently limited accessible x Bj range, by the very-low x Bj data of the HERA collider and by the high x Bj data of HERMES and JLab.There exist constraints on GPDs E f for valence quarks from fits to nucleon form factor data [16], HERMES transverse proton data [17] and JLab neutron data [18].A parameterisation of chiral-even GPDs [9], which is consistent with the HEMP data of HERMES [19] and COMPASS [20], was recently demonstrated to successfully describe almost all existing DVCS data [21].This is clear evidence for the consistency of the contemporary phenomenological GPD-based description of both DVCS and HEMP.
There exist also chiral-odd -often called transverse -GPDs, from which in particular H f T and E f T were shown to be required [22,23] for the description of exclusive π + electroproduction on a transversely polarised proton target [24].It was recently shown [25] that the data analysed in this letter are also sensitive to these GPDs.
This Letter describes the measurement of exclusive ρ 0 muoproduction on transversely polarised protons with the COMPASS apparatus.Size and kinematic dependences of azimuthal modulations of the cross section with respect to beam and target polarisation are determined and discussed, in particular in terms of the above introduced chiral-odd GPDs.
Formalism
The cross section for exclusive ρ 0 muoproduction, µ N → µ ρ 0 N , on a transversely polarised target reads [26]: Here, S T is the target spin component perpendicular to the direction of the virtual photon.The beam polarisation is denoted by P .The azimuthal angle between the lepton scattering plane and the production plane spanned by virtual photon and produced meson is denoted by φ, whereas φ S is the azimuthal angle of the target spin vector about the virtual-photon direction relative to the lepton scattering plane (see Fig. 1).The S T dependent part of Eq. ( 1) contains eight different azimuthal modulations: five sine modulations for the case of an unpolarised beam and three cosine modulations for the case of a longitudinally polarised beam.Neglecting terms depending on m 2 µ /Q 2 , where m µ denotes the mass of the incoming lepton, the virtual-photon polarisation parameter ε describes the ratio of longitudinal and transverse photon fluxes and is given by: The symbols σ νλ µσ in Eq. ( 1) stand for polarised photoabsorption cross sections or interference terms, which are given as products of helicity amplitudes M: where the sum runs over µ = 0, ±1 and ν = ±1/2.The helicity amplitude labels appear in the following order: vector meson (µ ), final-state proton (ν ), photon (µ or σ), initial-state proton (ν or λ).For Here k k k, k k k , q q q and v v v represent three-momentum vectors of the incident and the scattered muon, the virtual photon and the meson respectively.The symbol S T denotes the component of the target spin vector perpendicular to the virtual-photon direction.
brevity, the helicities −1, −1/2, 0, 1/2, 1 will be labelled by only their signs or zero, omitting 1 or 1/2, respectively.Also the dependence of σ νλ µσ on kinematic variables is omitted.The amplitudes of those cross section modulations that depend on target polarisation are obtained from Eq. ( 1) as follows: Here, unpolarised (longitudinally polarised) beam is denoted by U (L) and transverse target polarisation by T. The φ-integrated cross section for unpolarised beam and target, denoted by σ 0 , is given as a sum of the transverse and longitudinal cross sections: The amplitudes given in Eq. ( 4) will be referred to as asymmetries in the rest of the paper.
Experimental set-up
The COMPASS experiment is situated at the high-intensity M2 muon beam of the CERN SPS.A detailed description can be found in Ref. [27].
The µ + beam had a nominal momentum of 160 GeV/c with a spread of 5% and a longitudinal polarisation of P ≈ −0.8.The data were taken at a mean intensity of 3.5 • 10 8 µ/spill, for a spill length of about 10 s every 40 s.A measurement of the trajectory and the momentum of each incoming muon is performed upstream of the target.
The beam traverses a solid-state ammonia target that provides transversely polarised protons.The target is situated within a large aperture magnet with a dipole holding field of 0.5 T. The 2.5 T solenoidal field is only used when polarising the target material.A mixture of liquid 3 He and 4 He is used to cool the target to 50 mK.Ten NMR coils surrounding the target allow for a measurement of the target polarisation P T , which typical amounts to 0.8 with an uncertainty of 3%.The ammonia is contained in three cylindrical target cells with a diameter of 4 cm, placed one after another along the beam.The central cell is 60 cm long and the two outer ones are 30 cm long, with 5 cm space between cells.The spin directions in neighbouring cells are opposite.Such a target configuration allows for a simultaneous measurement of azimuthal asymmetries for the two target spin directions in order to become independent of beam flux measurements.Systematic effects due to acceptance are reduced by reversing the spin directions on a weekly basis.With the three-cell configuration, the average acceptance for cells with opposite spin direction is approximately the same, which leads to a further reduction of systematic uncertainties.
The dilution factor f , which is the cross-section-weighted fraction of polarisable material, is calculated for incoherent exclusive ρ 0 production using the measured material composition and the nuclear dependence of the cross section.It amounts typically to 0.25 [20].
The spectrometer consists of two stages in order to reconstruct scattered muons and produced hadrons over wide momentum and angular ranges.Each stage has a dipole magnet with tracking detectors before and after the magnet, hadron and electromagnetic calorimeters and muon identification.Identification of charged tracks with a RICH detector in the first stage is not used in the present analysis.
Inclusive and calorimetric triggers are used to activate data recording.Inclusive triggers select scattered muons using pairs of hodoscopes and muon absorbers whereas the calorimetric trigger relies on the energy deposit of hadrons in one of the calorimeters.Veto counters upstream of the target are used to suppress beam halo muons.
Event selection and background estimation
The presented work is a continuation of the analysis of A sin(φ−φ S ) UT for exclusive ρ 0 mesons produced off transversely polarised protons at COMPASS and it is based on the same proton event sample as in Ref. [20].The essential steps of event selection and asymmetry extraction are summarized in the following.The considered events are characterized by an incoming and a scattered muon and two oppositely charged hadrons, h + h − , with all four tracks associated to a common vertex in the polarised target.In order to select events in the deep inelastic scattering regime and suppress radiative corrections, the following cuts are used: Q 2 > 1 (GeV/c) 2 , 0.003 < x Bj < 0.35, W > 5 GeV and 0.1< y < 0.9, where y is the fractional energy of the virtual photon.The production of ρ 0 mesons is selected in the two-hadron invariant mass range 0.5 GeV/c 2 < M π + π − < 1.1 GeV/c 2 , where for each hadron the pion mass hypothesis is assigned.This cut is optimized towards high yield and purity of ρ 0 production, as compared to non-resonant π + π − production.The measurements are performed without detection of the recoiling proton in the final state.Exclusive events are selected by choosing a range in missing energy, where M X is the mass of the undetected recoiling system.This mass is calculated from the four-momenta of proton, photon and meson, which are denoted by p, q, and v respectively.Although for exclusive events E miss ≈ 0 holds, the finite experimental resolution is taken into account by selecting events in the range |E miss | < 2.5 GeV, which corresponds to 0 ± 2σ where σ is the width of the Gaussian signal peak.Non-exclusive background can be suppressed by cuts on the squared transverse momentum of the Fig. 2: The E miss distribution in the range 2.4 (GeV/c) 2 < Q 2 ≤ 10 (GeV/c) 2 , together with the signal plus background fits (solid curve).The dotted and dashed curves represent the signal and background contributions, respectively.In the signal region -2.5 GeV < E miss < 2.5 GeV, indicated by vertical dashdotted lines, the amount of semi-inclusive background is 35%.
vector meson with respect to the virtual photon direction, p 2 T < 0.5 (GeV/c) 2 , the energy of the ρ 0 in the laboratory system, E ρ 0 > 15 GeV, and the photon virtuality, Q 2 < 10 (GeV/c) 2 .An additional cut p 2 T > 0.05 (GeV/c) 2 is used to reduce coherently produced events.As explained in Ref. [20] we use p 2 T rather than t.After the application of all cuts, the final data set of incoherently produced exclusive ρ 0 events consist of about 797000 events.The average values of the kinematic variables are Q 2 = 2.15 (GeV/c) 2 , x Bj = 0.039, y = 0.24, W = 8.13 GeV, and p 2 T = 0.18 (GeV/c) 2 .In order to correct for the remaining semi-inclusive background in the signal region, the E miss shape of the background is parameterised for each individual target cell in every kinematic bin of Q 2 , x Bj , or p 2 T using a LEPTO Monte Carlo (MC) sample generated with COMPASS tuning [28] of the JETSET parameters.The h + h − MC event sample is weighted in every E miss bin i by the ratio of numbers of h ± h ± events from data and MC, which improves the agreement between data and MC significantly [20].
For each kinematic bin, target cell, and spin orientation a signal plus background fit is performed, whereby a Gaussian function is used for the signal shape, and the background shape is fixed by MC as described above.The fraction of semi-inclusive background in the signal range is 22%, nevertheless the fraction strongly depends on kinematics and varies between 7% and 40%.An example is presented in Fig. 2. The background corrected distributions, N sig k (φ, φ S ), are obtained from the measured distributions in the signal region, N sig,raw k (φ, φ S ), and in the background region 7 GeV < E miss < 20 GeV, N back k (φ, φ S ).The distributions N back k (φ, φ S ) are rescaled with the estimated numbers of background events in the signal region and afterwards subtracted from the N sig,raw k (φ, φ S ) distributions.
Results and discussion
The asymmetries are evaluated using the background-corrected distributions N sig k (φ, φ S ) by combining data-taking periods with opposite target polarisations.The events of the two outer target cells are summed up.The number of exclusive ρ 0 mesons as a function of φ and φ S , where the index j denotes the (φ, φ S ) bin, can be written for every target cell n as: Here, a ± j,n is the product of spin-averaged cross section, muon flux, number of target nucleons, acceptance, and efficiency of the spectrometer.The angular dependence reads: The symbol A m UT(LT),raw denotes the amplitude for the angular modulation m.After the subtraction of semi-inclusive background, the "raw" asymmetries A m UT, raw and A m LT,raw are extracted from the final sample using a two-dimensional binned maximum likelihood fit in φ and φ S .They are used to obtain the transverse target asymmetries A m UT(LT) defined in Eq. ( 4) as: Here, P T is used, which in COMPASS kinematics is a good approximation to S T .The depolarisation factors are given by: In order to estimate the systematic uncertainty of the measurements, we take into account the relative uncertainty of the target dilution factor (2%), the target polarisation (3%), and the beam polarisation (5%).Combined in quadrature this gives an overall systematic normalisation uncertainty of 3.6% for the asymmetries A m UT and 6.2% for A m LT .Additional systematic uncertainties are obtained from separate studies of i) a possible bias of the applied estimator, ii) the stability of the asymmetries over data-taking time, and iii) the robustness of the applied background subtraction method and the correction by the depolarization factors from Eq. (11).A summary of systematic uncertainties for the average asymmetries can be found in Table 1.The total systematic uncertainty is obtained as a quadratic sum of these three components.In Eq. ( 1), S T is defined with respect to the virtual-photon momentum direction, while in the experiment transverse polarization P T is defined relative to the beam direction.The transition and meson helicities 0 and ±1, respectively.These GPDs are used since several years to describe DVCS and HEMP data.The suppressed γ * T → ρ 0 T transitions are described by the helicity amplitudes M ++,++ and M +−,++ , which are likewise related to H and E. By the recent inclusion of transverse, i.e. chiralodd GPDs, it became possible to also describe γ * T → ρ 0 L transitions.In their description appear the amplitudes M 0−,++ related to chiral-odd GPDs H T [23,25] and M 0+,++ related to chiral-odd GPDs E T [22].The double-flip amplitude M 0−,−+ is neglected.The transitions γ * L → ρ 0 T and γ * T → ρ 0 −T are known to be suppressed and hence neglected in the model calculations.
All measured asymmetries agree well with the calculations of Ref. [25].In Eq. ( 12), the first two terms represent each a combination of chiral-even GPDs H and E. The inclusion of chiral-odd GPDs by the third term has negligible impact on the behaviour of A sin(φ−φ S ) UT , as can be seen when comparing calculations of Refs.[9] and [25].The asymmetry A sin(φ−φ S ) UT itself may still be of small magnitude, because for GPDs E in ρ 0 production the valence quark contribution is expected to be not large.This is interpreted as a cancellation due to different signs and comparable magnitudes of GPDs E u and E d [20].Furthermore, the small gluon and sea contributions evaluated in the model of Ref. [9] cancel here to a large extent.The asymmetries A sin φ S UT and A cos φ S LT represent imaginary and real part, respectively, of the same difference of two products M * M of two helicity amplitudes, where the first term of this difference represents a combination of GPDs H T and H, and the second a combination of E T and E. As can be
Summary
Asymmetries related to transverse target polarisation were measured in azimuthal modulations of the cross section at COMPASS in exclusive ρ 0 muoproduction on protons.The amplitudes of five singlespin asymmetries for unpolarised beam and three double-spin asymmetries for longitudinally polarised beam were extracted over the entire COMPASS kinematic domain as a function of Q 2 , x Bj , or p 2 T .The asymmetry A sin φ S UT was found to be −0.019± 0.008(stat.)± 0.003(syst.).All other asymmetries were also found to be of small magnitude but consistent with zero within experimental uncertainties.Very recent model calculations agree well with the present results.The results represent first experimental evidence from hard exclusive ρ 0 leptoproduction for the existence of non-vanishing transverse GPDs H T .
Fig. 1 :
Fig.1: Definition of the angles φ and φ s .Here k k k, k k k , q q q and v v v represent three-momentum vectors of the incident and the scattered muon, the virtual photon and the meson respectively.The symbol S T denotes the component of the target spin vector perpendicular to the virtual-photon direction.
Fig. 3 :
Fig.3: Single-spin azimuthal asymmetries for a transversely (T) polarised target and unpolarised (U) beam.The error bars (bands) represent the statistical (systematic) uncertainties.The curves show the predictions of the GPD model[25].They are calculated for the average W , Q 2 and p 2 T of our data set, W = 8.1 GeV/c 2 and p 2 T = 0.2 (GeV/c)2 for the left and middle panels, and at W = 8.1 GeV/c 2 and Q 2 = 2.2 (GeV/c)2 for the right panels.The asymmetry A sin(3φ−φ S ) UT is assumed to be zero in this model. | 4,751.6 | 2013-10-05T00:00:00.000 | [
"Physics"
] |
3 D Finite Element Modeling of Single Bolt Connections under Static and Dynamic Tension Loading
The Naval UnderseaWarfare Center has funded research to examine a range of finite element approaches used for modeling bolted connections subjected to various loading conditions. Research focused on developing finite element bolt representations that were accurate and computationally efficient. A variety of finite element modeling approaches, from detailed models to simplified ones, were used to represent the behavior of single solid bolts under static and dynamic tension loading. Test cases utilized models of bolted connection test arrangements (static tension and dynamic tension) developed for previous research and validated against test data for hollow bore bolts (Behan et al., 2013). Simulation results for solid bolts are validated against experimental data from physical testing of bolts in these load configurations.
Introduction
The Navy relies heavily on finite element analysis for assessments of various systems and components during stowage and handling operations, as well as during potential shock events.Analytical assessments usually involve finite element models that are necessarily very complex in order to adequately represent the overall system response.These systems often include bolted connections, which must be incorporated into the finite element models due to requirements to evaluate the response of any bolts to the external loading of interest.Given that the level of detail necessary to accurately model these systems creates finite constraints on model size and run time, the bolted connections in these systems are rarely modeled in a detailed fashion.Of course, the quality of results from a structural assessment depends on the accuracy of the underlying bolt representations.Thus, the focus of the current research is to examine a range of finite element modeling techniques used for representing bolted connections subjected to various loading conditions.Numerous finite element modeling approaches, from detailed models to simplified ones, were used to model the behavior of single bolts under tension loading in as-tested physical configurations, and results were compared to experimental data.
Literature Review
Previous research on simplified approaches to connection modeling has focused on finite element modeling of bolted connections with validation through experiment, although much of this research has focused on non-Navy applications.This research falls into a variety of categories including simplified connection models for applications ranging from progressive collapse to pipe structure behavior and to plate structure behavior.
Prior research on simplified modeling of joints in pipe structures was carried out by Luan et al. [7].They developed a simplified nonlinear model with bilinear springs to model the bolted flange joints in cylindrical pipe structures.They compared the performance of this model against dynamic impact loading test data, as well as results using a simple beam model of the entire joint and a detailed finite element model of the pipe, joint, and bolts.
Previous research has been conducted on simplified modeling of bolted connections in generic or plate structures [3,[8][9][10].Kwon et al. [8] modeled bolt behavior using a detailed model and a selection of simplified "practical" models for both static loading and modal analysis experiments.Kim et al. [9] modeled bolt behavior using four different approaches: a solid bolt model, a coupled bolt model, a spider bolt model, and a no-bolt model.Their detailed solid-element based model with contact and their simplified coupled bolt model, which used a single beam element with degree-of-freedom coupling between its nodes and the solid element nodes of the plates on their outermost surfaces, produced the most accurate results as compared to experimental results [9] for a static loading experiment of a simple lap joint.
Shi et al. [10] investigated a bolted joint where a thin plate was connected by a single bolt at each end to a thicker plate that was fixed to a main frame.They developed twelve simplified models, which were used to simulate a drop test where the middle of the thin plate was hit by an impactor.The best results using a simplified approach were produced using deformable shell elements to model the bolt-nut assembly, where the bending stiffness of the cylindrical shell had been set equal to that of the bolt shank and contact between the bolt shank and the plates was included.
A variety of work has carried out regarding detailed finite element modeling of bolted connections with validation through experiment.McCarthy et al. [11] and C. T. McCarthy and M. A. McCarthy [12] presented results on single-lap, single-bolt composite joints with titanium bolts.Base model development and validation against experimental strain gauge data from the joint surface and experimental joint stiffness data is discussed in [11], and C. T. McCarthy and M. A. McCarthy [12] examined the effects of bolt clearance on various aspects of joint behavior including joint stiffness, bolt rotation, bolt-hole contact area, stress distribution in the laminate, and the onset of failure.Prior research at NUWCDIVNPT, by Behan et al. [13], involved the development of highly detailed finite element models of bolt arrangements as tested in actual physical experiments, where model results compared favorably to experimental data for hollow bore bolts subjected to static and dynamic shear and tensile loading.
This work builds on previous work in that the detailed and simplified finite element models of a bolted connection were tested under both static and dynamic loading, for tensile test arrangements.The intent was to test the performance of a given set of finite element model representations of a single solid bolt under both static and dynamic tension loading conditions.
Experimental Background
Previous research detailed a set of experiments carried out to investigate bolt behavior under a variety of load types [13,14] using hollow bore bolts.For the current research, similar experiments were performed with single noninstrumented solid bolts, and data from these tests were used for validation of the numerical results presented later in this paper.The bolt material was K-Monel K500, and the bolts themselves were 15.24 cm (6 in) long and 6.35 mm (0.25 in) in diameter, with a hex head and a standard 1.9 cm (0.75 in) length of 20UNC-2A thread.Prior to loading, bolts were torqued to provide a pretension force equivalent to 2/3 of the yield stress for K500, consistent with standard practice for naval structures.
The test configurations for the static tension and dynamic tension experiments are shown in Figures 1(a) and 1(b), respectively.During the static tension tests, the tensile force was applied to a tensile plate via an adapter connected to an Instron machine, as in Figure 1(a).For the dynamic tension tests, the test bracket was mounted onto an interface plate on a Lightweight Shock Machine (LWSM) [15], in the manner shown in Figure 1(b), and the impact of the LWSM vertical hammer on the interface plate produced accelerations in the directions shown in the figure.Further details of the testing are discussed in [13].
Although these experimental configurations are named for pure loading states, the authors realize that the loads actually imparted to the test bolts were not restricted to pure tension.Naming conventions follow from idealized test conditions, where each configuration has the potential to produce a loading state near to a pure one.
Numerical Modeling
Finite element models of the test fixture hardware and bolts were developed in ABAQUS, using ABAQUS/Standard for the static models and ABAQUS/Explicit for the dynamic models.To represent the constitutive behavior of the K-Monel K500 of the bolts, a piecewise hardening model was fit to quasistatic tensile test data from coupons of the bolt material [13,14], as shown in Figure 2. Mechanical and physical properties of the K500 material included a density of 8.435 g/cm 3 (7.893× 10 −4 lb-s 2 /in 4 ), elastic modulus of = 178.2GPa (25.8 × 10 3 ksi), Poisson's ratio of ] = 0.32, yield strength (0.2% offset) of = 729.5 MPa (105.8 ksi), and ultimate strength of = 1367.2MPa (198.3 ksi) [13,14].The test fixture was modeled with linear elastic materials.In previous work, it was shown that rate-dependent material properties did not have a significant effect on simulation results for hollow bore bolts in the same experimental test configurations as those employed for the current research, where comparisons were made between models using quasistatic material properties and models with estimated ratedependent material properties at a strain rate of 1000/s [13].Given that the hollow bore bolts and solid bolts experienced similar rates of deformation in the same experimental configurations (on the order of 25/s for the dynamic tension test, as quantified by the full resolution models described later in this paper), it is reasonable to use the quasistatic K500 material properties to represent the bolt material in simulations of both static and dynamic test configurations.
For the bolts themselves, element types were varied by modeling approach.The detailed models included continuum elements (C3D8 and C3D6) and the simplified models employed a variety of element types and kinematic constraints as described in a later section of this paper.For all models, the test fixture hardware was meshed with a combination of continuum elements (C3D8 and C3D6) [16,17] comparing model results between the two models and it was found that there was extremely good agreement between both models for bolt axial force as measured at each increment in applied displacement.Thus, it was determined that the submodel was indeed sufficient to accurately capture the response of the tested configuration, and all subsequent analyses utilized the submodel in order to save computational time.
All of the numerical simulations of the static test included a static preload to 2/3 of the yield stress for K500, consistent with the physical experiments.This pretightening force was applied in the first stage of the static analysis using the * PRE-TENSION SECTION, * SURFACE, and * STEP keywords, which worked together to incrementally develop a prescribed force over the bolt cross-section in the direction of the bolt longitudinal axis [16,17].After completion of the preload, the main static analysis proceeded with incremental application of displacement, where an arc length algorithm was used to guide the solution process.The arc length method in ABAQUS/Standard is a modified Riks algorithm that assumes proportional loading and calculates each increment in load proportionality factor based on the current arc length and increment in displacement [18][19][20].Arc length parameters used in the static analyses included an initial arc length of 0.0254 mm (0.001 in), maximum length of 25.4 mm (1.0 in), and minimum length of 0.00127 mm (5 × 10 −5 in).Displacements were incrementally applied to the top face of the tensile plate, in the sense shown in Figure 3, and the loaddisplacement response was calculated using the described arc length method.
General Model Setup: Dynamic Tension.
For the dynamic tension test arrangement, a submodel approach was not sufficient to accurately capture the response of the tested configuration, given the eccentric nature of the applied loading to the test fixture and the finite compliance of the test fixture.Thus, the entire test fixture was explicitly modeled, as shown in Figure 4.
In the dynamic models, the static bolt preload was accounted for by using the * INITIAL CONDITIONS keyword [ 16,17] to apply an initial stress, equivalent to 2/3 of the yield stress, to the bolt shank prior to the initiation of the transient dynamic analysis.During the dynamic analysis, the base fixture was driven with experimental velocity histories measured at the accelerometers mounted on the LWSM interface plate during the tests, as depicted in Figure 4.The main analysis was conducted using explicit time integration with automatic time stepping.
Detailed Bolt Models.
For the detailed bolt models, the solid bolt was meshed with mostly C3D8 elements, with a few C3D6 elements [16,17].Since it was observed that the bolt specimens typically failed in the threaded region, the threaded region of the detailed bolt models was modeled using the correct profile for 20UNC-2A thread [21], but with the simplification that the threads were arranged in a concentric pattern rather than the helical angle seen in the physical specimens.This modeling approach captured the effect of the reduced area in the threads without adding the complexity of producing a well-behaved mesh along a helically varying thread profile.Figures 5(a) and 5(b) show details of the tension test bolt mesh, which included 469,500 elements.The test fixture and tensile plate were modeled using a total of 146,100 elements, and the nut and washer were modeled with 46,300 elements.
In models that included the detailed bolt representations, contact between the washer and fixture, washer and nut, bolt head and tensile plate, and bolt threads and nut was modeled using tied contact for the static tension tests and finite sliding contact for the dynamic tension tests.Finite sliding contact was included between all other moving parts, as applicable (bolt body and tensile plate/bracket, tensile plate and test bracket, and tensile plate and guide rods).
Simplified Bolt Models.
In addition to the detailed bolt models, five simplified bolt models were developed and investigated.Figure 6(a) depicts the outline of the detailed model, and Figures 6(b)-6(d) show the various simplified models.For brevity, Figure 6(b) actually represents simplified models #1-3, since these models have similar pictorial representations.All simplified models used linear Timoshenko beam elements (element type B31), with uniform circular cross-sectional area, to represent the bolt body.Shear effects were accounted for using the * TRANSVERSE SHEAR option [17].Along the shank of the bolt, the cross-section diameter was set to the nominal bolt diameter, 6.35 mm (0.25 in).In the threaded region, the beam elements were assigned a reduced cross-sectional area, with diameter set to the bolt thread minimum pitch diameter, 4.84 mm (0.19 in), except as noted.was modeled with 8 Timoshenko beam elements, with 6 elements to represent the bolt shank and 2 elements for the threaded region.The node at each end of the bolt was designated as an independent node, with degree-of-freedom coupling between this node and the surrounding plate nodes at the edge perimeter of the through hole.Degree-of-freedom coupling, for the translational degrees of freedom, was achieved via the keywords * KINEMATIC and * COUPLING [17].
Simplified Model #2: Timoshenko Beams with Multipoint
Constraints, with a Hole.This model was the same as model #1 except that the degree-of-freedom coupling was achieved via multipoint constraints.The relevant keyword was * MPC, with the BEAM option [17].
Simplified Model #3: Timoshenko Beams with Rigid
Links, with a Hole.This model employed the same bolt body modeling approach as simplified models #1 and #2.However, the linkage at each end was achieved using rigid links.For the static tension tests, these elements were of type RB3D2 (available only in ABAQUS/Standard) and were associated with the keyword * RIGID BODY, and for the dynamic tension tests, truss elements of type T3D2 were used [17].
Simplified Model #4: Timoshenko Beams with Shell Head
and Embedded End, with a Hole.In this model, the body of the bolt was modeled with 10 Timoshenko beam elements, with 6 elements for the bolt shank and 4 elements for the threaded length of the bolt.The bolt head was modeled using S4 shell elements.For the static models, tied contact was defined between the shell elements of the head and the solid elements of the tensile plate in the vicinity of the bolt hole; however, the dynamic models all employed finite sliding contact with friction between these surfaces.The washer and the nut were explicitly modeled with solid elements and the endmost beam element in the threaded region shared its end nodes with the nut.It is important to note that this approach would be less complicated for bolted joints involving blind holes rather than a through hole like the one tested in the described experiments, because the endmost beam element could share nodes with the component at the bottom of the blind hole and no nut would have to be added to the simulation to provide the necessary constraint at the threaded end.
Simplified Model #5: Timoshenko Beams with Shell Head
and Embedded End, without a Hole.This model was the same as model #4 except that the bolt hole was not explicitly modeled.
Results and Discussion
Comparisons were made between the numerical models and the experiments using global metrics, since local metrics such as strain data in or on the bolts were not available.For the static tests, load-displacement data served as the validation metric.For the dynamic tests, comparisons were made between experimental and simulated velocity data at the accelerometer location on the tensile plate, shown in Figure 4. Static tension model simulations were performed on four cores of a local HPC cluster, where each core included a 2.67 GHz Intel Xeon processor and 32 GB RAM.Simulations with the detailed bolt model required approximately 3.8 hours of wall clock time to fully trace the nonlinear equilibrium load-displacement path to a maximum displacement of 4.1 mm (0.16 in).However, analyses with the simplified bolt models required 0.2-1.3hours of wall clock time to reach a maximum displacement of 4.7-2.2mm (0.18-0.09 in).Dynamic tension analyses were carried out on the SGI Altix ICE (Spirit) HPC cluster at the US Air Force Research Laboratory (AFRL).Using 16 cores, the detailed bolt model required 260 hours of wall clock time to produce 150 ms of response.Running on 16 cores, the simplified bolt models needed 17-26 hours of clock time to yield 150 ms of response.
Static Tension.
For all static tension models, force-displacement results were compared to experimental load-displacement data.In the numerical simulations, the axial force in the bolt at the midpoint of its grip length was recorded for each increment in applied displacement.For the detailed model, the force was calculated over a user-defined section using the * SECTION PRINT keyword [17].Figure 7 shows a schematic of the cross-section location in the detailed static tension bolt model.Figure 8.This is consistent with the physical experiments, where the bolts ultimately fractured at the threads.For the contours of effective plastic strain shown in Figure 8, the maximum effective plastic strain has been set at 20%, corresponding with the strain at the ultimate stress of the tested K500 material [13,14].
The detailed model force-displacement results are compared to experimental load-displacement data in Figure 9.The numerical results exhibit very good correlation with the experimental data.
Simplified Models, Static Tension.
The force-displacement results for the simplified models can be grouped into two sets for models with similar end conditions.Simplified models #1-3, which used kinematic constraints or rigid links as the linkages from the bolt body to the tensile plate and test bracket, produced similar results, as shown in Figure 10.When using the minimum pitch diameter in the threaded region, the results compare very well to experimental data.
Simplified models #4-5 used shell elements to model the bolt head and the endmost beam element in the threaded region of the bolt body shared nodes with a detailed model of the nut.The presence of the bolt hole did not seem to matter for this case, as exhibited by the similarity in forcedisplacement results for simplified models #4-5 shown in Figure 11.When using the minimum pitch diameter in the threaded region, the results compared well to experimental data.
A small parametric study was carried out using simplified model #1 in order to ascertain the effect of the value used to model the bolt diameter in the threaded region.For this, three simulations were performed, each with a different bolt diameter assigned to the beam elements in the threaded region.The diameter values used corresponded to minimum (minor) thread pitch, 4.84 mm (0.19 in); basic thread pitch, 5.52 mm (0.22 in); and maximum (major) thread pitch, 6.35 mm (0.25 in).As shown in Figure 12, the best results were produced using minimum thread pitch.
Dynamic Tension.
For the dynamic tension models, velocity histories at the accelerometer location on the tensile plate were compared to experimental accelerometer data.In the numerical simulations, the velocity in the direction of the hammer blow on the LWSM was recorded every 1 ms and the data were extracted from and averaged over several elements located on the tensile plate surface underneath the accelerometer footprint.
Detailed Model, Dynamic
Tension.Similar to the static tension arrangement, results produced for the dynamic tension configuration using the detailed bolt model showed that the bolt experienced the highest levels of deformation in the reduced-area region of the bolt threads.Although the tested bolt did not fail in the experiment, it did exhibit slight stretching and visible damage in the threads, which is consistent with the simulated results.Contours of effective plastic strain at 150 ms are shown in Figure 13, where the maximum effective plastic strain has been set at 20%.
Velocity results at the accelerometer location obtained using the detailed model are plotted against the first 80 ms of experimental data in Figure 14.The simulated data compare well with the experimental data.
Simplified Models, Dynamic Tension.
Although the simulated results for the static tension test configurations fell into two groups with similarly modeled end conditions (#1-3 and #4-5), it is more appropriate to sort the simulated results for the dynamic tension tests into three groups.Figure 15 depicts results from simplified models #1-3, together with the test data and shows that results from simplified models #1-2 are very similar to one another but that they differ from results using simplified model #3.
Simplified models #1-2 and #3 have been plotted against test data separately in Figures 16 and 17, respectively.Simplified models #1-2 produced results that were very similar to one another, as would be expected since both models employed kinematic coupling to transfer bolt forces to the plates comprising the bolted joint.
The differences in response between the two groups can be attributed to their differences in end constraints.For simplified models #1-2, the bolt body end nodes essentially stay in place in a relative sense.In simplified model #1, the translational degrees of freedom of the perimeter nodes of the solid elements at the edge of the through hole are kinematically coupled to the displacements of the bolt body end node.Similarly, simplified model #2 (MPC, type BEAM) provides multiple rigid beams between the bolt body end node and the perimeter nodes at the edge of the through hole, where no relative rotation is allowed between the bolt body end beam element and the notional rigid beams.However, simplified model #3 does not include any rotational constraints on the relative motion between the bolt body and the linkages, creating a pinned end condition between the bolt body and the linking elements.This is not a problem for the static tension configuration, given the very controlled motion of the bolt during this test.However, for the dynamic tension case, the resulting motion of the bolt during the test is physically unrealistic because the resistance to rotational motion between the bolt head and shank is not captured by this model.This leads to a marked overprediction of tensile plate velocity at the accelerometer location for the first few peaks of the velocity response for simplified model #3, seen in Figure 17.
Simulated results are very similar to each other for simplified models #4-5, which used shell elements to represent the bolt head and where the bolt shank was tied to the nut via shared nodes in the endmost beam.Analogous to the static tension results, the presence of the bolt hole did not have much effect, as shown by the similarity in the simulated velocity results shown for each model in Figure 18.
It is interesting to note that models employing a shell element-based head appeared to produce velocity results that were more modulated than the test data in contrast with the other simplified models.This was also the case for the detailed model results, as seen in Figure 14.In fact, the detailed model and simplified models #4-5 produced results very similar to one another, as shown in Figure 19.This modulation is thought to result from applying the velocity inputs from the interface plate directly to the test fixture rather than explicitly modeling the interface plate (and its interaction with the base fixture).
For all of the simplified models, contact was not included between the beam elements representing the bolt shank and the surface of the through hole.In the dynamic tension tests, this modeling assumption led to violations of physical bolt clearance effects.To briefly investigate this, an analysis was run with simplified model #4 where contact was included between the beam elements representing the bolt shank and the surface of the through hole and it was found that inclusion of this contact did not have an appreciable effect on system response at the accelerometer location.Despite the exclusion of bolt clearance effects, all simplified models were able to produce results that correlated well with test data in both static and dynamic test configurations.
Russell Comprehensive
Error.The Russell comprehensive error metric, which calculates variations in magnitude and phase between two transient data sets, was used to quantify the correlation between the experimental and simulated velocity data.The phase (RP) and magnitude (RM) Russell error metrics were used to calculate a comprehensive (RC) Russell error metric as follows [22]: In (1), and represent the calculated and measured responses, respectively.In the context of a comparison of simulated and experimental velocity data for a certain data set, subjective measures of correlation have been tied to set values of the Russell comprehensive error metric [22].These subjective measures of correlation were "excellent, " "acceptable, " or "poor, " where these levels are defined as excellent, RC ≤ 0.15; acceptable, 0.15 < RC ≤ 0.28; and poor, RC > 0.28 [22].
The Russell comprehensive error metric was calculated for all of the dynamic tension cases.Comparisons were made between the experimental data and the numerical data and the results are listed in Table 1.Per the subjective measures of correlation [22], all finite element approaches used to model the bolt produced "acceptable" results.The detailed model and simplified approaches #4 and #5, which used shell elements to represent the bolt head and with shared nodes to anchor the bolt body to the nut, produced results with slightly superior correlation with the data, but all modeling approaches were comparable in terms of correlation with test data.
Conclusions
In this paper, a variety of approaches were used to model the response of bolted connections involving single solid K-Monel K500 bolts.Detailed finite element models, as well as five simplified finite element models, of the bolt were used to simulate physical experiments involving static and dynamic tension test configurations.The detailed finite element model was comprised of continuum elements while the simplified models involved beam elements to model the bolt body and various end conditions to transfer the bolt forces to the joint at each end of the bolt.
All simplified models employed beam elements of uniform circular cross-section to model the bolt body.Results from a parameter study showed that, in static tension test arrangements, simulations produced results that most closely matched experimental data when modeling the bolt shank with a diameter equal to the nominal bolt diameter and the threaded region with a diameter equal to the minimum (minor) thread pitch diameter, as compared to the basic or maximum (major) thread pitch diameter.
Models were validated against test data for the static and dynamic tension experiments.For the static test configurations, force-displacement results were compared to experimental load-displacement data, where the numerical results provided good correlation with the test data for detailed and simplified approaches.In the dynamic tests, simulated velocity results were compared to data from an accelerometer mounted onto one of the plates joined by the bolt.The Russell comprehensive error metric was used to quantify the correlation between the test data and simulated results produced using the various finite element modeling approaches.Russell error results showed that there is acceptable correlation between the experimental and the numerical velocity data for all models.
Future research can extend these models towards developing simplified finite element bolt representations capable of capturing shear loading as well as tensile loading.Despite the fact that bolt clearance effects were not included in the simplified models tested for this research, the tested simplified models all performed well for both static and dynamic tension configurations.However, the nature of shear loading will necessitate the inclusion of bolt clearance effects for accurate representation of bolt shear response using simplified means.
Figure 1 :
Figure 1: Section view of (a) static tension test and (b) dynamic tension test. .
Figure 8 :
Figure 8: Contours of effective plastic strain at maximum applied displacement, detailed static tension model.
Figure 9 :
Figure 9: Force-displacement results for static tension test, detailed model.
Figure 12 :
Figure 12: Force-displacement results for static tension test, simplified model #1, using diameter values equal to minimum, basic, and maximum thread pitch in the threaded region. | 6,573 | 2015-02-22T00:00:00.000 | [
"Engineering"
] |
Equidistribution in Shrinking Sets and L^4-Norm Bounds for Automorphic Forms
We study two closely related problems stemming from the random wave conjecture for Maass forms. The first problem is bounding the $L^4$-norm of a Maass form in the large eigenvalue limit; we complete the work of Spinu to show that the $L^4$-norm of an Eisenstein series $E(z,1/2+it_g)$ restricted to compact sets is bounded by $\sqrt{\log t_g}$. The second problem is quantum unique ergodicity in shrinking sets; we show that by averaging over the centre of hyperbolic balls in $\Gamma \backslash \mathbb{H}$, quantum unique ergodicity holds for almost every shrinking ball whose radius is larger than the Planck scale. This result is conditional on the generalised Lindelof hypothesis for Maass eigenforms but is unconditional for Eisenstein series. We also show that equidistribution for Maass eigenforms need not hold at or below the Planck scale. Finally, we prove similar equidistribution results in shrinking sets for Heegner points and closed geodesics associated to ideal classes of quadratic fields.
1.1.1. Random Wave Conjecture. Let B 0 (Γ) denote the set of Hecke-Maaß eigenforms of weight zero and level 1 on the modular surface Γ\H, where Γ = SL 2 (Z) and H denotes the upper half-plane; we normalise g ∈ B 0 (Γ) to be such that g, g · · = Γ\H |g(z)| 2 dµ(z) = 1, where dµ(z) = y −2 dx dy. A well-known conjecture of Berry [Ber77] and Hejhal and Rackner [HejRa92] states that a Hecke-Maaß eigenform g ∈ B 0 (Γ) of large Laplacian eigenvalue λ g = 1/4 + t 2 g ought to behave like a random wave. Here by a random wave, we mean a function of the form where η(λ) → ∞ as λ → ∞ and η(λ) = o(λ), each f is a normalised Hecke-Maaß eigenform, and the coefficients c f are independent Gaussian random variables of mean 0 and variance 1. These are a randomised model of eigenfunctions of the Laplacian in the large eigenvalue limit λ → ∞, and it is easier to prove (almost surely) results for random waves than for true eigenfunctions.
For Γ\H, there are situations in which random waves do not behave precisely like Laplacian eigenfunctions: random waves satisfy sup z∈K |g λ (z)| ≍ K √ log λ almost surely for every compact subset K, whereas Milićević [Mil10, Theorem 1] proved the existence of a dense subset of points z ∈ Γ\H for which a subsequence of Hecke-Maaß eigenforms g ∈ B 0 (Γ) may be much larger. Nonetheless, it is conjectured that Laplacian eigenfunctions should, on the whole, be well-modelled by random waves. This (admittedly loosely defined) conjecture is known as the random wave conjecture.
In this paper, we study two aspects of this conjecture: bounds for the L 4 -norm of an automorphic form, and quantum unique ergodicity in shrinking balls. The former is a special case of the Gaussian moments conjecture, while the latter is a refinement of quantum unique ergodicity.
1.1.2. Gaussian Moments Conjecture. A particular manifestation of the random wave conjecture states that the moments of a Hecke-Maaß eigenform g ∈ B 0 (Γ) should be identical to those of a Gaussian random variable in the large eigenvalue limit.
Conjecture 1.1 (Gaussian Moments Conjecture). Let K be any fixed compact continuity set of Γ\H, so that the boundary of K has µ-measure zero, and let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. Then for every nonnegative integer n, (1.2) 1 Var K (g) n/2 vol(K) K g(z) n dµ(z) converges to as t g tends to infinity. Here Var K (g) · · = 1 vol(K) K |g(z)| 2 dµ(z).
When K is replaced by a noncompact set, the Gaussian moments conjecture ought not necessarily to hold for high moments. As explained in [HeSt01, Section 4], using a heuristic appearing in [Hej99,Section 7], the transition range of the Whittaker function leads to a "tidal pulse" phenomenon near the cusp of Γ\H; when K is replaced by Γ\H, so that Var Γ\H (g) = vol (Γ\H) −1 , one can thereby show that there exists a subsequence of Hecke-Maaß eigenforms g ∈ B 0 (Γ) for which (1.2) grows like a power of t g whenever n ≥ 12 is even. This is closely related to the fact that there exists a subsequence of Hecke-Maaß eigenforms for which Nonetheless, it is not unreasonable to conjecture that the Gaussian moments conjecture holds for smaller moments when K is replaced by Γ\H. Indeed, the conjecture holds by definition for n ∈ {0, 2} and is easily shown to also be true when n = 1, as both sides vanish, while for n = 3, this can be shown to hold via the work of Watson [Wat08].
1.1.3. Quantum Unique Ergodicity. Another manifestation of the randomness of Hecke-Maaß eigenforms is quantum unique ergodicity. Conjecture 1.3 (Quantum Unique Ergodicity in Configuration Space). Let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. Then the probability measure |g(z)| 2 dµ(z) converges in distribution to the uniform probability measure on Γ\H as t g tends to infinity, so that for every continuity set B ⊂ Γ\H, as t g tends to infinity.
By the Portmanteau theorem, this conjecture is equivalent to for every bounded continuous function on Γ\H. It behoves us to mention that there is a stronger formulation of quantum unique ergodicity, namely quantum unique ergodicity in phase space, which is the cosphere bundle S * (Γ\H) ∼ = Γ\SL 2 (R): not only should the sequence of probability measures |g(z)| 2 dµ(z) equidistribute on the configuration space Γ\H, but that a microlocal lift of these measures to Wigner distributions on phase space should equidistribute with respect to the Liouville measure.
Quantum unique ergodicity in phase space, and hence also in configuration space, is known to be true via the work of Lindenstrauss [Lin06] and Soundararajan [Sou10]. However, this proof does not quantify the rate of equidistribution; in particular, it does not give explicit rates of decay for the terms (1.5) Γ\H f (z)|g(z)| 2 dµ(z) for fixed f ∈ C b (Γ\H) as t g tends to infinity. Watson [Wat08, Corollary 1] has shown that optimal decay rates for these integrals follow directly from the generalised Lindelöf hypothesis.
The n = 2 case of the Gaussian moments conjecture for the set K = Γ\Hnamely the L 4 -norm of g -shares many similarities with quantum unique ergodicity in configuration space. In fact, it is extremely closely related to a more refined version of quantum unique ergodicity, namely equidistribution on shrinking sets.
1.1.4. Randomness of Eisenstein Series. The Gaussian moments conjecture and quantum unique ergodicity ought to be true, once suitably modified, when g(z) = E(z, 1/2 + it g ) is an Eisenstein series. Eisenstein series are not square-integrable, so one must use some sort of regularisation. One method is to use Zagier's regularisation of divergent integrals [Zag82]; another is to replace E(z, 1/2 + it g ) with the truncated Eisenstein series Λ T E(z, 1/2 + it g ) for some T ≥ 1; this is defined for ℜ(s) > 1 by Λ T E(z, s) · · = E(z, s) − γ∈Γ∞\Γ ℑ(γz)>T ℑ(γz) s + Λ(2 − 2s) Λ(2s) ℑ(γz) 1−s and extended by meromorphic continuation to the complex plane; here Λ(s) denotes the completed Riemann zeta function. For quantum unique ergodicity, we need not deal with the truncated version of the Eisenstein series provided that we take into account the growth of the L 2 -norm of an Eisenstein series on compact sets. Theorem 1.6 (Luo-Sarnak [LS95, Theorem 1.1]). For any compact continuity set K ⊂ Γ\H and for g(z) = E (z, 1/2 + it g ), as t g tends to infinity.
Since K is compact, one can replace g(z) with Λ T E (z, 1/2 + it g ) for some T sufficiently large dependent on K. The presence of log(1/4 + t 2 g ) essentially stems from the Maaß-Selberg relation; see Corollary 2.3.
Quantum unique ergodicity in phase space is also known for Eisenstein series; this is a result of Jakobson [Jak94, Theorem 1].
1.2. The L 4 -Norm Problem. The L 4 -norm problem for a Hecke-Maaß eigenform g is the second nontrivial case of the Gaussian moments conjecture.
Conjecture 1.7 (L 4 -Norm Problem). Let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. As t g tends to infinity, A similar statement can be formulated when g is an Eisenstein series, though some care must be taken, since Eisenstein series are not square-integrable; see [DK18].
In general, an unconditional proof of the L 4 -norm problem seems quite difficult. A weaker conjecture (see, for example, [Sar03, Conjecture 4]) is that In certain special cases, this has been shown: when g is a dihedral Maaß eigenform, this is a result of Luo [Luo14], while when g is a truncated Eisenstein series, this is a result of Spinu [Spi03] (with the implicit constant of course dependent on the truncation parameter T ).
Buttcane and Khan [BK17b, Theorem 1.1] have recently given a proof, conditional on the generalised Lindelöf hypothesis, of the L 4 -norm problem for a Hecke-Maaß eigenform g ∈ B 0 (Γ). Our first main result is to give an unconditional upper bound for the L 4 -norm of a truncated Eisenstein series that is sharper than (1.8).
Theorem 1.9. Let g(z) = Λ T E (z, 1/2 + it g ). We have that Up to the implicit constant, Theorem 1.9 should be sharp, for the Maaß-Selberg relation implies that Remark 1.10. Theorem 1.9 was previously claimed by Spinu [Spi03, Theorem 1.2], as was a proof of (1.8) for Hecke-Maaß cusp forms by Sarnak and Watson [Sar03, Theorem 3]; in both cases, however, the proofs are incomplete, as we shall discuss further in Remark 3.3.
Remark 1.11. Djanković and Khan [DK18] have recently reformulated the L 4 -norm problem for Eisenstein series by studying a regularised fourth moment of an Eisenstein series in the sense of Zagier [Zag82]; cf. Section 2.2. This has the advantage that one ought to be able to prove an asymptotic for this regularised fourth moment, whereas Theorem 1.9 only provides an upper bound for the fourth moment of a truncated Eisenstein series.
1.3. Quantum Unique Ergodicity in Shrinking Sets. A natural strengthening of quantum unique ergodicity is to determine whether equidistribution still occurs if we vary the set B with t g ; in particular, if the size of B shrinks as t g increases. This small scale equidistribution should be thought of as a reinterpretation of determining the rate of equidistribution, as opposed to determining explicit rates of decay for the terms in (1.5). Proving equidistribution in shrinking sets has applications towards bounds for the L p -norms and size of nodal domains of eigenfunctions of the Laplacian; see [HezRi16]. We denote by B = B R (w) the hyperbolic ball of radius R centred at w ∈ Γ\H: its hyperbolic volume is which is independent of the centre w.
Question 1.12. Let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. For what conditions on R, with regards to t g , is it still true that as t g tends to infinity?
In the general setting of negatively curved manifolds, this question has independently been answered by Han [Han15, Theorem 1.5] and Hezari and Rivière [HezRi16, Proposition 2.1] for a full density subsequence of Laplacian eigenfunctions with the radius R shrinking at a rate (log λ g ) −β for a particular range of β > 0 dependent on the manifold.
We should not expect equidistribution to hold when R ≪ t −1 g ; indeed, Hejhal and Rackner [HejRa92, Section 5], writing Ψ n in place of g, λ n in place of λ g = 1/4 + t 2 g , and A in place of R, state that . . . in the physics literature, c/ √ λ n is commonly referred to as the de Broglie wavelength. At length scales below c/ √ λ n , one expects the topography of Ψ n to look "essentially sinusoidal", that is, regular. It is only when A is substantially bigger than the de Broglie wavelength that one stands any chance of seeing any type of Gaussian distribution. We confirm this statement by showing that if R ≪ A t −1 g (log t g ) A for any A > 0, then there exist infinitely many points w ∈ Γ\H for which (1.13) does not hold, so that the sequence of probability measures |g(z)| 2 dµ(z) does not equidistribute on the shrinking balls of radius t −1 g (log t g ) A centred at these points. We think of R ≍ t −1 g as being the Planck scale, so that equidistribution need not occur within a logarithmic window of the Planck scale.
Theorem 1.14. Let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. For every fixed Heegner point w ∈ Γ\H, we have that |g(z)| 2 dµ(z) = Ω exp 2 log t g log log t g 1 + O log log log t g log log t g for R ≪ A t −1 g (log t g ) A for any A > 0 as t g tends to infinity. Nevertheless, we should expect equidistribution to occur at every scale larger than the Planck scale, namely R ≫ t −δ g for any δ < 1. Towards this, Young [You16] has proved the following.
In fact, with little work, we can improve the range in Young's result for Eisenstein series.
A simpler version of Question One can also reformulate Question 1.12 probabilistically by asking for which scales equidistribution holds almost surely with respect to a random eigenbasis of Laplacian eigenfunctions; positive results towards this question appear in the work of Han [Han17] and Han and Tacy [HT16].
We study a related question: instead of demanding that equidistribution hold in shrinking balls of radius R > 0 centred at w for every point w ∈ Γ\H, we relax this requirement by instead asking whether equidistribution holds in shrinking balls B R (w) for almost every w ∈ Γ\H.
1.3.1. Conditional Results. We are able to give a conditional proof of equidistribution in almost every shrinking ball when g ∈ B 0 (Γ) and R ≫ t −δ g for any 0 < δ < 1, that is, at all scales above the Planck scale.
Theorem 1.17. Let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. Assume the generalised Lindelöf hypothesis, and suppose that R ≍ t −δ g for some 0 < δ < 1. Then for any c ≫ ε t > c converges to zero as t g tends to infinity.
1.3.2. Unconditional Results. Proving unconditional results seems to be much more difficult. Nevertheless, we are able to do so when g(z) = E (z, 1/2 + it g ) is an Eisenstein series.
Theorem 1.18. Let g(z) = E (z, 1/2 + it g ). Suppose that R ≍ t −δ g for some converges to zero as t g tends to infinity, where D(g; w) is given by (5.7).
This result is consistent with Theorem 1.6 due to the following.
Lemma 1.19. In any compact subset K of Γ\H, we have that for all w ∈ K, In particular, we may rephrase Theorem 1.18 in the following way.
Corollary 1.20. Let g(z) = E (z, 1/2 + it g ), and let K be a fixed compact subset of Γ\H. Suppose that R ≫ ε t −1+ε g . Then for any fixed c > 0, vol > c converges to zero as t g tends to infinity.
Equidistribution of Geometric Invariants of Quadratic Fields in Shrink-
ing Sets. Finally, in Section 6, we study a similar equidistribution problem in shrinking sets. Associated to each narrow ideal class A of the narrow class group Cl + K of a quadratic number field K = Q( √ D) is a geometric invariant. For D < 0, this is a Heegner point z A , while for D > 0, this is a closed geodesic C A or a hyperbolic orbifold Γ A \N A having this closed geodesic as its boundary; we explain these geometric invariants in more detail in Section 6.1.
For each fundamental discriminant D, we choose a genus G K ⊂ Cl + K in the group of genera Gen K · · = # Cl + K denotes the narrow class number of K. Duke, Imamoḡlu, and Tóth have proved the following equidistribution theorem.
as D → −∞ through fundamental discriminants, and If we sum over all genera, so that we are studying equidistribution associated to the full narrow class group, then this result is due to Duke [Duk88, Theorem 1] for Heegner points and closed geodesics, while this result becomes trivial for hyperbolic orbifolds, for there is no error term whatsoever in this case. Moreover, the equidistribution of closed geodesics has a stronger realisation: instead of merely asking for the equidistribution of closed geodesics on Γ\H, we may lift these geodesics to phase space S * (Γ\H) ∼ = Γ\SL 2 (R) and demand equidistribution with respect to the Liouville measure. This has been proved by Chelluri [Che04].
It is natural to ask whether equidistribution still occurs if B shrinks as |D| grows. Towards this, Young [You17a] has proved the following.
for fixed δ < 1/24, where Cl K denotes the class group of K and h K · · = # Cl K denotes the class number. Assuming the generalised Lindelöf hypothesis, (1.23) holds as D → −∞ through fundamental discriminants for fixed δ < 1/8.
In fact, from the method of proof, it is clear that Young's theorem applies to genera mutatis mutandis, and proves equidistribution not only of Heegner points, but also of closed geodesics and hyperbolic orbifolds.
Once again, we may weaken the demand that equidistribution hold in shrinking balls of radius R > 0 centred at w for every point w ∈ Γ\H and instead study whether equidistribution holds in shrinking balls B R (w) for almost every w ∈ Γ\H.
We prove the following conditional result.
Theorem 1.25. Suppose that R ≍ |D| −δ . Assuming the generalised Lindelöf hypothesis, we have that for converges to zero as D → ∞ along fundamental discriminants.
Unconditionally, we obtain the following weaker results.
> c converges to zero as D → −∞ along odd fundamental discriminants, while for > c converges to zero as D → ∞ along odd fundamental discriminants, and for all δ > 0 and c > c converges to zero as D → ∞ along odd fundamental discriminants.
The fact that these geometric invariants equidistribute on almost every ball of different scales should not come as a surprise, and essentially boils down to the fact that a Heegner point has dimension 0, a closed geodesic has dimension 1, and a hyperbolic orbifold has dimension 2. For Heegner points, we need roughly R 2 balls to cover Γ\H, so we require the number of Heegner points #G K corresponding to the genus G K to be at least R 2 in order to expect equidistribution; this is the scale R ≍ (−D) −1/4 . For closed geodesics, on the other hand, R balls will cover roughly 1/R of Γ\H, but a closed geodesic may intersect more than one ball, so we only require the total length A∈GK ℓ (C A ) of closed geodesics corresponding to the genus G K to be at least R; this is the scale R ≍ D −1/2 . Finally, we should expect equidistribution at all scales for hyperbolic orbifolds, since these are just (possibly uneven) coverings of Γ\H.
1.5. Idea of Proof. The chief idea behind the proof of the aforementioned small scall equidistribution theorems is to use Chebyshev's inequality to reduce the problem to bounding a variance. For example, The method of bounding the variance in order to show equidistribution in almost every shrinking ball is also used in [GW17, Theorem 1.6] for eigenfunctions of the Laplacian on T 2 , as well as in both [EMV13, Theorem 1.3] and [BRS16, Theorem 1.8], where the problem investigated is not quantum unique ergodicity, but rather the equidistribution of lattice points on the sphere.
The variance is an inner product of functions in L 2 (Γ\H), as is the fourth moment of a truncated Eisenstein series; both are thereby amenable to being spectrally expanded via Parseval's identity. The resulting spectral sum over Hecke-Maaß forms f occurring in the spectral expansion Var(g; R) when g is an Eisenstein series is essentially the same as the spectral sum for fourth moment of a truncated Eisenstein series in the range 0 < t f ≪ ε R −1+ε , whereas for t f ≫ 1/R, it is much smaller.
Finally, we use the Watson-Ichino formula to write | |g| 2 , f | 2 as a product of Lfunctions. This reduces the problem to bounding certain moments of L-functions, with the length of these moments corresponding inversely to the radius of the shrinking ball.
Though not a manifestation of the random wave conjecture, the equidistribution problems in Section 1.4 nonetheless involve equidistribution on Γ\H, and the proofs of Theorems 1.25 and 1.26 contain many of the same ingredients as the proofs of Theorems 1.17 and 1.18. The chief difference is that in place of | |g| 2 , f | 2 , we have Weyl sums; akin to the Watson-Ichino formula, these can be expressed as a product of L-functions via the work of Duke, Imamoḡlu, and Tóth [DIT16].
1.6. Connections to Subconvexity. The rate of equidistribution for quantum unique ergodicity for Hecke-Maaß eigenforms g ∈ B 0 (Γ) can be quantified via explicit rates of decay for for fixed f ∈ B 0 (Γ) and ψ ∈ C ∞ c (R + ) as t g tends to infinity. Via the Watson-Ichino formula, this is equivalent to obtaining subconvex bounds of the form for some absolute constant δ > 0. Similarly, quantifying the rate of equidistribution for quantum unique ergodicity for g(z) = E(z, 1/2 + it g ) is equivalent to obtaining subconvex bounds of the form For quantum unique ergodicity in almost every shrinking ball of radius R for Hecke-Maaß eigenforms g ∈ B 0 (Γ), on the other hand, we will show that we require bounds of the form That is, we require subconvex moment bounds for L-functions uniformly in two parameters: t f and t g . Thus this is a problem of hybrid subconvexity. Proving such bounds unconditionally seems to be currently out of reach for moments involving GL 3 ×GL 2 Rankin-Selberg L-functions. For g(z) = E(z, 1/2 + it g ), on the other hand, the required subconvex moment bounds are and the fact that these moments only involve GL 2 L-functions makes this problem tractable. It is for this reason that we are able to prove Theorem 1.18 unconditionally, whereas Theorem 1.17 is conditional.
Integrals of Automorphic Forms and L-Functions
However, this is no longer the case when we replace the Eisenstein series with the truncated Eisenstein series The following explicit formula for the inner product of two truncated Eisenstein series is known as the Maaß-Selberg relation.
Using the Taylor expansions together with the fact that |ϕ(1/2 + it g )| = 1 and that It remains to use Stirling's formula to find that and [IK04, Theorem 8.29] to give the bound
The Watson-Ichino Formula.
To deal with spectral sums involving terms of the form | |g| 2 , f | 2 , one can use the Watson-Ichino formula, which essentially states that the square of the integral over Γ\H of the product of three automorphic forms is equal to a product of completed L-functions involving those automorphic forms. In particular, if f, g ∈ B 0 (Γ), then from [Ich08, Theorem 1.1] and [Wat08, Here Λ(s, π) denotes the completed L-function of an automorphic representation π of GL n (A Q ): this is of the form (2.7) Λ(s, π) = q s/2 π L ∞ (s, π)L(s, π), where q π denotes the conductor of π, L ∞ (s, π) is the archimedean part of Λ(s, π), which is of the form π −ns/2 n j=1 Γ( s+κπ,j 2 ) for some κ π,j ∈ C, and L(s, π) is the usual nonarchimedean part of Λ(s, π). Note that the numerator in the Watson-Ichino formula factorises: Similar results also hold when either f or g is replaced with an Eisenstein series.
A similar result also holds when g is an Eisenstein series.
Finally, when f is also an Eisenstein series, the integral is no longer convergent. One can work around this issue by replacing this integral with a regularised integral. This is defined by Zagier [Zag82] in the following way. Let F : Γ\H → C be a continuous function of moderate growth, so that there exists c j , α j ∈ C and nonnegative integers n j such that for all N ≥ 0 at the cusp at infinity, with no α j equal to 0 or 1. Then there exists a function E(z) that is a linear combination of Eisenstein series and derivatives of Eisenstein series E(z, α), each satisfying ℜ(α) > 1/2, such that for some δ > 0, at the cusp at infinity. The regularised inner product of two functions f, g such that f g = F is continuous and of moderate growth is defined to be Moreover, if f and g depend on complex parameters, then we may extend both sides via analytic continuation where possible.
Proposition 2.10 ([Zag82, Equation (44)]). We have that In practice, it is the nonarchimedean part L(s, π) of a completed L-function Λ(s, π) that is difficult to deal with; this is because the asymptotic behaviour of the archimedean part of a completed L-function can be inferred via Stirling's approximation.
Lemma 2.12. The product of the archimedean parts of the completed L-functions in Propositions 2.8, 2.9 (with t = t f ), and 2.10 (with s 1 = s 2 = 1/2 + it g and s = 1/2 + it f ) is equal to where Proof. The product of the archimedean parts of the completed L-functions is The result then follows directly from Stirling's approximation.
On occasion, we also need to deal with lower bounds for L(1, sym 2 f ). This is less complex than values of L-functions within the critical strip 0 < ℜ(s) < 1; indeed, the following is known.
Sharp Bounds for the L 4 -Norm of a Truncated Eisenstein Series
3.1. The Spectral Expansion of the L 4 -Norm. We wish to determine sharp bounds for g 4 L 4 (Γ\H) = Γ\H |g(z)| 4 dµ(z) with g(z) = Λ T E(z, 1/2 + it g ) in terms of t g . Our first step is to express this quantity as a spectral sum, which requires the spectral decomposition of L 2 (Γ\H).
In particular, the following spectral expansion of the L 4 -norm of g is simply Parseval's identity with g 1 = g 2 = |g| 2 .
Corollary 3.2. Let g ∈ L 2 (Γ\H) be of rapid decay. Then This is reduced to understanding bounds for the inner product of |g| 2 with eigenfunctions of the Laplacian. The first term in this expansion is the inner product of |g| 2 with the constant function and Corollary 2.3 shows that It remains to treat the cuspidal and continuous spectra.
3.2.
Ranges of the Spectral Decomposition for the L 4 -Norm. We divide the spectral expansion of the L 4 -norm of g(z) = Λ T E(z, 1/2+it g ) given in Corollary 3.2 into different parts, then analyse each part individually. There are two main ranges of the continuous spectrum to consider, which depend on a small fixed parameter δ > 0: • the initial range 0 ≤ |t| ≤ 2t g + t 1−δ g , and • the tail range |t| > 2t g + t 1−δ g . Both of these ranges will be shown to contribute a negligible amount via subconvexity estimates for the L-functions appearing in the integral.
For the contribution from the cuspidal spectrum, the summation over B 0 (Γ) may be broken up into different ranges depending on t f . There are four main ranges of the cuspidal spectrum left to consider, which depend on a fixed small parameter δ > 0: • the short initial range 0 ≤ t f ≤ t 1−δ g , • the bulk range t 1−δ We divide the spectral sum into these particular ranges due to the size of the product of analytic conductors of L-functions. The analytic conductor of which is large when t f lies in the bulk range, but is small in the short initial range, and drops in the short transition range. For this reason, the main contribution will be shown to arise from the bulk range, while the contribution from the two short ranges will be shown to be negligible. Assuming the generalised Lindelöf hypothesis, this can be proven directly; see [BK17b, Section 5]. Finally, the exponential decay in (2.13) arising from the archimedean components of the completed L-functions indicates that the tail range contributes a negligible amount.
Remark 3.3. In [Spi03, Chapter 6], Spinu sketches an unconditional proof of Theorem 1.9. The proof, however, only treats the spectral sum in the range αt g < t f < 2(1 − α)t g for any fixed α > 0 (essentially the bulk range), in which the contribution of the spectral sum ought to be nonnegligible. The remaining ranges, which all ought to contribute a negligible amount, are left unaddressed. This same issue is present in a claim of Sarnak and Watson [Sar03, Theorem 3(a)] of the bound g L 4 (Γ\H) ≪ ε t ε g for Hecke-Maaß cusp forms, under the assumption of the Selberg eigenvalue and Ramanujan conjectures (but not the generalised Lindelöf hypothesis, as in [BK17b, Theorem 1.1]). Sarnak (personal communication) subsequently has retracted this claim, and instead only claims this bound for the contribution of the spectral sum in the bulk range, as the method he uses is unable to treat the short initial range.
We are able to treat the short initial and transition ranges, left untreated by Spinu, by applying the work of Jutila [Jut04], Ivić [Ivi01], and Jutila and Motohashi [JM05] on certain hybrid moments of L-functions. We do not know how to treat these ranges when g is a Hecke-Maaß cusp form.
3.3. Spectral Methods to Bound the Continuous Spectrum. From Corollary 3.2, we must bound Here c is any constant less than 1/2 − 2θ, where θ is a positive constant such that |g| 2 , f 2 .
Lemma 3.6 ([Spi03, Theorem 4.2]). We have that This allows us to use Proposition 2.9 and Lemma 2.12. We divide the cuspidal spectrum into four ranges, as discussed in Section 3.2. The convexity bound for the associated L-functions together with the Weyl law shows that the tail range is negligible. So it remains to bound the first three ranges.
Lemma 3.8 ([Spi03, Proposition 5.5]). We have that
Remark 3.9. Spinu uses the large sieve only to prove Lemma 3.7 and employs a more complex method in proving Lemma 3.8; nonetheless, one can in fact use the local large sieve, as stated in [Luo14,Lemma] ≪ (log t) 2/3 (log log t) 1/3 .
It therefore suffices to show that
for some δ ′ > 0. We divide the short transition range 0 < t f < t 1−δ g into dyadic intervals H ≤ t f < 2H, of which there are roughly log t g intervals, on which
It then suffices to show that for
This bound follows from the work of Jutila [Jut04], Ivić [Ivi01], and Jutila and Motohashi [JM05]. It is worth noting that the purpose of these works is to obtain Weyl-type subconvexity bounds for Hecke-Maaß eigenforms f ∈ B 0 (Γ), so long as |t| is not too close to t f ; here q(f, s) denotes the analytic conductor of L(s, f ). Conveniently, their methods to obtain such bounds involve obtaining bounds for the exact type of spectral sum that we are studying.
Lemma 3.10. For t ≥ 0 and H ≫ 1, we have that Proof. For H ≥ t 1/2 , this follows from [JM05, Theorem 2], which states that for t ≥ 0 and H ≫ 1, For H ≤ t 1/2 , this follows from the subconvexity bound Corollary 2], and from [Jut04,Theorem], which states that for t ≥ 0 and 1 ≪ G ≪ H, Corollary 3.11. For any δ > 0, we have that and Ivić's [Ivi01] bounds for moments of L(1/2, f ) in short intervals of t f close to 2t g . A similar idea works when g is a truncated Eisenstein series. We must show that for some δ ′ > 0. We use the Cauchy-Schwarz inequality to see that this spectral sum is bounded by t −3/2 g times the square root of the product of The first sum is bounded by and a similar expression holds for the second sum. We then apply the following lemma to show that each sum is bounded by a constant multiple dependent on ε of t 3−δ 2 +ε g , from which the result follows. Similarly, for H ≫ 1, 0 ≤ t ≪ H 3/2−ε , and 0 ≤ G ≤ (H + t) 4/3 H −1+ε , we have that Corollary 3.13. For any 0 < δ < 2/3, we have that 3.8. Spectral Methods to Bound the Bulk Range. In [Spi03, Chapter 6], Spinu proves the bound where ρ(z, w) · · = log |z − w| + |z − w| |z − w| − |z − w| denotes the hyperbolic distance on H. The function u : H × H → [0, ∞) is a point-pair invariant. From this, a function k : [0, ∞) → C gives rise to a point-pair invariant k(z, w) · · = k(u(z, w)) on H. The Selberg-Harish-Chandra transform maps sufficiently well-behaved functions k : [0, ∞) → C to functions h : R → C. This transform is given in three steps as follows: Note that h(t) is real whenever t is real. We shall take k(z, w) = k R (z, w) equal to the indicator function of a small ball of radius R centred at a point w, normalised by the volume of this ball. So and consequently We require the following asymptotics for h R (t), which are extremely similar to the analogous result for T 2 ; see [GW17, Lemma 2.1].
Lemma 4.2 (Cf. [Cha96, Lemma 2.4]). As R tends to zero, we have that
if Rt tends to zero, if Rt tends to infinity.
Proof. If R and Rt both converge to zero, then the dominated convergence theorem implies that If R converges to 0 and Rt converges to some value in (0, ∞), then similarly via [GR07,8.411.10]. So it remains to prove the case that R converges to 0 and Rt tends to infinity. To do this, we let We show that is pointwise convergent as R tends to zero and is uniformly convergent to 0 as x tends to infinity, from which the Moore-Osgood theorem allows us to interchange the order of limits taken in order to obtain the desired asymptotic. Indeed, the dominated convergence theorem once again shows that h(R, x) converges to as R tends to zero. For the uniform convergence as x tends to infinity, we integrate by parts and make the substitution r = 2 R arsinh sin v sinh R 2 , yielding Using stationary phase, with the two critical points being the endpoints ±π/2, we find that there exists some R 0 > 0 such that sup R∈(0,R0) For a function k : [0, ∞) → C, we may form the automorphic kernel which is Γ-invariant in both variables. When k(u) = k R (u), we write K(z, w) = K R (z, w).
Lemma 4.3. If f : Γ\H → C is an eigenfunction of the Laplacian with eigenvalue 1/4 + t 2 f , then Proof. This follows from [Iwa02, Theorem 1.14]. Note that there it is assumed that not only is k(u) compactly supported, but that it is smooth; this, however, is not essential to the proof. Instead, we merely require that k(z, w) be twice differentiable in both variables µ-almost everywhere.
Proposition 4.4 ([Mil10, Theorem 1]). For every fixed Heegner point w ∈ H, |g(w)| = Ω exp log t g log log t g 1 + O log log log t g log log t g as t g tends to infinity.
Proof of Theorem 1.14. For g ∈ B 0 (Γ), It follows by the Cauchy-Schwarz inequality that Theorem 1.14 then follows from Lemma 4.2 and Proposition 4.4.
Remark 4.5. Theorem 1.14 also holds for Maaß newforms g ∈ B * 0 (Γ 0 (q)) for any q > 1, for Proposition 4.4 is proved in this generality (and in fact in even further generality).
Remark 4.6. Since it is conjectured that max w∈K |g(w)| ≪ K,ε t ε g for every compact subset K of Γ\H, we cannot expect any significant improvement to Theorem 1.14 via this line of reasoning.
Proof of Conditional Results.
In this section, we prove the following.
Theorem 1.17 then follows directly via Chebyshev's inequality. Our starting point towards proving Proposition 5.1 is the following spectral expansion of Var(g; R).
Proposition 5.2. Let g ∈ B 0 (Γ) be a Hecke-Maaß eigenform normalised such that g, g = 1. Then Var(g; R) is equal to Proof. Via Lemmata 3.1 (namely Parseval's identity) and 4.3, |g| 2 , K R (·, w) is equal to Upon squaring and integrating over w, we obtain the desired identity.
Proof of Proposition 5.1 for 0 < δ < 1. We use Propositions 5.2 and 2.8 and Lemmata 4.2 and 2.12. We then divide the spectral expansion in Proposition 5.2 into various ranges. Just as in Section 3.2, there are two main ranges of the continuous spectrum to consider: • the initial range 0 ≤ |t| < 2t g + t δ g , and • the tail range |t| > 2t g + t δ g . The division of the cuspidal spectrum into parts depends on δ. When R ≍ t −δ g with 0 < δ < 1, the ranges are: • the short initial range 0 < t f ≤ t δ g , • the polynomial decay range t δ g < t f < 2t g + t 1−δ g , • the tail range t f ≥ 2t g + t 1−δ g . Thus Var(g; R) is bounded by a constant multiple dependent on ε of • From [BK17b, Lemma 2.1], the initial and tail ranges of the continuous spectrum are bounded by t −1+ε g . • The convexity bounds for L(1/2, f ) and L(1/2, sym 2 g ⊗ f ) show that the tail range of the cuspidal spectrum is rapidly decaying. • For the other two ranges, the generalised Lindelöf hypothesis implies that the product of these two L-functions is bounded by a constant multiple dependent on ε of t ε g , and then the Weyl law for Γ\H and partial summation imply that the contribution of the cuspidal spectrum is bounded by t δ−1+ε g . This completes the proof.
Proof of Proposition 5.1 for δ > 1. In this case, the division of the cuspidal spectrum into parts involves an additional range, and there is a dependence on an small fixed parameter δ ′ > 0: • the short initial range 0 < t f ≤ t 1−δ ′ g , which once again is bounded by , which is asymptotic to 6/π from the proof of [BK17b, Proposition 2.2], • the short transition range 2t , which is negligible. This completes the proof.
Remark 5.3. Just as with Theorem 1.14, the bound Var(g; R) ≪ ε t −(1−δ)+ε g for R ≍ t −δ g with 0 < δ < 1 in Proposition 5.1 also holds for Maaß newforms g ∈ B * 0 (Γ 0 (q)) for any q > 1. Indeed, [IK04,Theorem 15.5] gives the spectral decomposition of L 2 (Γ 0 (q)\H), though there are Eisenstein series corresponding to each cusp and the orthonormal basis of Maaß cusp forms are no longer necessarily Hecke-Maaß eigenforms. Nonetheless, Blomer and Milićević have given an orthonormal basis of B 0 (Γ 0 (q)) involving linear combinations of oldforms and newforms [BM15, Lemma 9], and a similar basis exists for the space of Eisenstein series [You17b], and these can be coupled with the work of Hu on the Watson-Ichino formula in this generality [Hu17].
Remark 5.4. In fact, the method of proof of [BK17b, Proposition 2.2] together with Lemma 4.2 show that if R ∼ (Ct g ) −1 for some positive constant C, then [GR07,(8.473.1) and (6.552.4)], which converges to 6/π as C tends to infinity.
5.2.
Proof of Unconditional Results. We first sketch how to prove Theorem 1.16.
Next, we cover the proof of the following, from which Theorem 1.17 will be derived.
Since the constant term of F (z) is we have that F (z) − E(z) = O(y 1/2−δ ) for some δ > 0 at the cusp at infinity, and consequently F − E ∈ L 2 (Γ\H). Lemmata 3.1 (namely Parseval's identity) and 4.3 then imply that The left-hand side is equal to F, K R (·, w) − E, K R (·, w) , and Lemma 4.3 allows us to calculate E, K R (·, w) explicitly. On the right-hand side, the inner product E, f vanishes whenever f ∈ B 0 (Γ), being the linear combination of inner products of Eisenstein series with a cusp form, and similarly F − E, 1 vanishes via [Zag82, Equation (36) and Section 2]. Finally, we claim that the inner product 2it) .
We now define (5.7) Here γ 0 is the Euler-Mascheroni constant and (1 − e(mw)) denotes the Dedekind eta function; note that ℑ(w) 6 η(w) 24 is a Maaß cusp form of weight 12 and level 1 that is nonvanishing outside the single cusp of Γ\H. That D(g; w) is, in some sense, the "true" average of |E(z, 1/2 + it g )| 2 on compact sets, rather than log Proof of Lemma 1.19. This follows from (2.4), (2.5), and (2.6), together with the fact that ℑ(w) 6 η(w) 24 is nonvanishing in K.
With this in hand, we can finally give the spectral expansion of Var(g; R).
Proposition 5.10. Let g(z) = E (z, 1/2 + it g ). Then Var(g; R) is equal to Proof. This follows directly from Lemma 5.9 after an application of Parseval's identity in Lemma 3.1.
Proof of Proposition 5.5. We use Propositions 5.10 and 2.9 and Lemmata 4.2 and 2.12. We then divide the spectral expansion in Proposition 5.10 into various ranges. The two ranges of the continuous spectrum are: • the initial range 0 ≤ |t| < 2t g + t δ g , and • the tail range |t| > 2t g + t δ g . The cuspidal spectrum can be broken into five ranges, which depend on a small fixed parameter 0 < δ ′ < 1 − δ: • the short initial range 0 < t f ≤ t δ g , • the short initial polynomial decay range t δ The continuous spectrum is readily dealt with: For the cuspidal spectrum, we have the following: • The convexity bounds for L(1/2, f ) and L(1/2 + 2it g , f ) show that the tail range is rapidly decaying. • The short initial range is bounded by a constant multiple dependent on ε of t − min{1−δ,1/6}+ε g upon dividing into dyadic intervals and applying Lemma 3.10.
• For the bulk polynomial decay range, we divide into dyadic intervals and use Lemma 3.7, which shows that this range is bounded by t • We divide the short transition polynomial decay range into intervals of length t 1/3 g , use the Cauchy-Schwarz inequality, and apply Lemma 3.12, which gives the bound t − 7 2 (1−δ)+ε g . Proposition 5.5 is proven upon taking δ ′ = 5 7 (1 − δ).
Using stationary phase as in the proof of Lemma 4.2, or alternatively using [Cha96, Lemma 2.4], we have that |h R 2t g + i 2 | 2 ≪ t −3(1−δ) g , while Stirling's approximation implies that Next, we note that , then for all sufficiently large t g , So piecing everything together, we find that if c ≫ ε t −2δ+ε Taking T = ct 3 2 (1−δ) g yields the result.
Equidistribution of Geometric Invariants of Quadratic Fields
6.1. Geometric Invariants of Quadratic Fields. Let K = Q( √ D) be a quadratic field of discriminant D. We denote by h + K · · = # Cl + K the narrow class number of K and h K · · = # Cl K the (wide) class number of K; note that Cl + K = Cl K , so that h + K = h K , except when D > 1 and O × K contains no elements of norm −1, in which case h + K = 2h K . Each narrow ideal class A of Cl + K is associated to an SL 2 (Z)-equivalence class of binary quadratic forms Q(x, y) = ax 2 + bxy + cy 2 of discriminant D.
Associated to equivalence classes of binary quadratic forms are geometric invariants: if D < 0, this is a Heegner point z A ∈ Γ\H, while if D > 0, these are a closed geodesic C A ⊂ Γ\H and a hyperbolic orbifold Γ A \N A whose boundary is C A . This last geometric invariant was introduced by Duke, Imamoḡlu, and Tóth in [DIT16].
6.1.1. Heegner Points z A . Given a binary quadratic form Q(x, y) = ax 2 + bxy + cy 2 of discriminant b 2 − 4ac = D < 0, the point lies in H. The equivalence class of binary quadratic forms containing Q(x, y), and hence the corresponding ideal class A ∈ Cl K , thereby corresponds to a point z = z A in Γ\H, which we call a Heegner point. 6.1.2. Closed Geodesics C A . Given a binary quadratic forms Q(x, y) = ax 2 + bxy + cy 2 of discriminant b 2 − 4ac = D > 0, the points determine the endpoints of a closed geodesic in H. The equivalence class of binary quadratic forms containing Q(x, y) thereby corresponds to a closed geodesic C = C A in Γ\H. The length ℓ(C A ) · · = CA ds of C A , with ds 2 = y −2 dx 2 +y −2 dy 2 , is equal to 2 log ǫ + K , where ǫ + K > 1 is the smallest unit with positive norm in the ring of integers O K of K, so that ǫ + K = (x + y √ D)/2 with (x, y) ∈ R 2 + the fundamental solution to the Pell equation x 2 − Dy 2 = 4. Note that ǫ + K is equal to ǫ K , the fundamental unit of K, if O × K contains no elements of norm −1, whereas ǫ + K = ǫ 2 K if O × K does contain elements of norm −1.
Hyperbolic Orbifolds
be a real quadratic field of discriminant D > 1. Associated to a narrow ideal class A ∈ Cl + K is an invariant ((n 1 , . . . , n ℓA )), where ℓ A is a positive integer and n 1 , . . . , n ℓA are integers; this is the primitive cycle, unique up to cyclic permutations, occurring in the minus continued fraction expansion of each point w ∈ K for which 1 > w > σ(w) > 0 and wZ + Z ∈ A. We define the elements S · · = ± 0 1 −1 0 , T · · = ± 1 1 0 1 of PSL 2 (Z), which generate PSL 2 (Z) as the free product of S and T . For each k ∈ {1, . . . , ℓ A }, define This is an elliptic element of order 2 in PSL 2 (Z). We set Γ A · · = S 1 , · · · , S ℓA , T n1+···+n ℓ A , which is a thin subgroup of PSL 2 (Z). The Nielsen region N A of Γ A is the smallest nonempty PSL 2 (Z)-invariant open convex subset of H. Then Γ A \N A is a hyperbolic orbifold, which naturally projects onto Γ\H. The boundary of Γ A \N A is a simple closed geodesic whose image in Γ\H is C A , and the volume of Γ A \N A is πℓ A .
Remark 6.1. In fact, Γ A depends on the choice of w. The resulting hyperbolic orbifold Γ A \N A ends up being only unique up to translation; however, the projection of Γ A \N A onto Γ\H is independent of the choice of w.
6.2.1. Variances and Weyl Sums. We define The proofs of Theorems 1.25 and 1.26 follow via Chebyshev's inequality from the following two propositions.
Proposition 6.4. We have that Proof. This follows from the spectral expansion of K R and Parseval's identity.
To bound these variances, we require upper bounds for the Weyl sums as well as lower bounds for #G K , A∈GK ℓ (C A ), and A∈GK vol (Γ A \N A ).
Lemma 6.5. We have that Proof. We have that #G K = 2 1−ω(|D|) h + K and ℓ(C A ) = 2 log ǫ + K , while it is shown in [DIT16, Proposition 1] that The class number formula states that The result then follows from the Landau-Siegel theorem and the bound L(1, χ D ) ≪ log |D|.
6.2.2. Genus Characters. The character group Gen K of Gen K is the group of real characters of Cl + K . These genus characters are indexed by unordered pairs of coprime fundamental discriminants d 1 , d 2 ∈ Z satisfying d 1 d 2 = D. To each pair d 1 , d 2 , we let χ = χ d1,d2 denote the genus character corresponding to d 1 , d 2 : this is a real character of the narrow class group Cl + K that extends multiplicatively to all nonzero fractional ideals via χ(p) · · = χ d1 (N (p)) if (N (p), d 1 ) = 1, χ d2 (N (p)) if (N (p), d 2 ) = 1, for any prime ideals p ∤ d K , where χ d1 , χ d2 are the primitive real Dirichlet characters modulo d 1 , d 2 respectively. In particular, χ is a quadratic character unless either d 1 or d 2 is 1, in which case it is the trivial character.
We abuse notation and write G K for an element in the coset of Cl + K corresponding to the genus G K . This allows us to write and analogous identities for W GK (zA),∞ (t), W GK (CA),∞ (t), and W GK (ΓA\NA),∞ (t). This has the advantage that we are able to show in each case that the square of the sum over A ∈ Cl + K is essentially equal to a product of L-functions.
Proof. This follows from [DIT16, Theorem 3], akin to the proof of Lemma 6.7.
For unconditional results, we make use of the following bounds. Proof of Proposition 6.3. We bound the variance by breaking up into ranges as in the proof of Proposition 6.2. Instead of applying the generalised Lindelöf hypothesis, we use the generalised Hölder inequality with exponents (3, 3, 3). Via the bounds in Lemmata 6.11 and 6.12, together with the Weyl law, we obtain the result.
Representations of Integers by Indefinite Ternary Quadratic Forms.
We briefly describe how the results in this section can be interpreted in terms of indefinite ternary quadratic forms. For simplicity, we only discuss the case of negative discriminant and summing over all genera; for positive discriminant, a detailed presentation can be found in [ELMV12, Section 2]. Consider the indefinite ternary quadratic form Q(a, b, c) = b 2 − 4ac.
It is natural to ask whether the normalised level sets G D cover V Q,−1 (R) randomly as D tends to −∞ along fundamental discriminants. Each level set V Q,D (Z) is countably infinite, and V Q,−1 (R) is isomorphic to C\R, which is not of finite volume, so one cannot immediately rephrase this random covering as equidistribution.
On the other hand, the group SO Q (Z) · · = A ∈ SL 3 (Z) : Q(Ax) = Q(x) for all x = (a, b, c) ∈ Z 3 acts transitively on V Q,D (Z), and the quotient space SO Q (Z)\G D is finite for all fundamental discriminants D, with cardinality equal to h K . Moreover, SO Q (Z) is a discrete subgroup of SO Q (R) of finite covolume, and V Q,−1 (R) ∼ = SO Q (R)/K with K equal to the maximal compact subgroup of SO Q (R), and so the space SO Q (Z)\V Q,−1 (R) is of finite volume. Thus to ask whether the normalised level sets G D randomly cover V Q,−1 (R) can be rephrased as asking whether the finite sets SO Q (Z)\G D equidistribute in the finite volume space SO Q (Z)\V Q,−1 (R). This has a positive answer by naturally realising this result in terms of the equidistribution of Heegner points on Γ\H, as proved by Duke [Duk88, Theorem 1]. Indeed, the fact that Q is indefinite implies that SO Q is isomorphic to the split special orthogonal group SO 1,2 , and we have the accidental isomorphism SO 1,2 ∼ = PGL 2 , while K ∼ = SO 2 (R). From this, we see that SO Q (Z)\V Q,−1 (R) ∼ = PGL 2 (Z)\PGL 2 (R)/SO 2 (R) ∼ = Γ\H, while SO Q (Z)\G D is naturally identified with the set of Heegner points {z A ∈ Γ\H : A ∈ Cl K }.
With this reinterpretation in mind, we now see that Proposition 6.2 implies that under the assumption of the generalised Lindelöf hypothesis, almost every shrinking ball of radius R ≍ (−D) −δ with 0 < δ < 1/4 in SO Q (Z)\V Q,−1 (R) contains a normalised equivalence class of points (a, b, c) ∈ Z 3 that represent the integer D by the indefinite ternary quadratic form Q(a, b, c) = b 2 − 4ac. This complements [BRS16, Theorem 1.8], where the analogous result is proved for the definite ternary quadratic form Q(a, b, c) = a 2 + b 2 + c 2 . | 13,038.2 | 2017-05-16T00:00:00.000 | [
"Mathematics"
] |
Study of the Pattern Preparation and Performance of the Resistance Grid of Thin-Film Strain Sensors
The thin-film strain sensor is a cutting-force sensor that can be integrated with cutting tools. The quality of the alloy film strain layer resistance grid plays an important role in the performance of the sensor. In this paper, the two film patterning processes of photolithography magnetron sputtering and photolithography ion beam etching are compared, and the effects of the geometric size of the thin-film resistance grid on the resistance value and resistance strain coefficient of the thin film are compared and analyzed. Through orthogonal experiments of incident angle, argon flow rate, and substrate negative bias in the ion beam etching process parameters, the effects of the process parameters on photoresist stripping quality, etching rate, surface roughness, and resistivity are discussed. The effects of process parameters on etching rate, surface roughness, and resistivity are analyzed by the range method. The effect of substrate temperature on the preparation of Ni Cr alloy films is observed by scanning electron microscope. The surface morphology of the films before and after ion beam etching is observed by atomic force microscope. The influence of the lithography process on the surface quality of the film is discussed, and the etching process parameters are optimized.
Introduction
As the development of the micro-machine field rapidly increases, the requirements for machining accuracy in machine manufacturing are also increasing. Cutting force, which is an important parameter in the metal cutting process, directly affects both the work-piece quality and the tool life. As a result, it is particularly important to accurately measure the cutting force. In recent years, researchers have conducted a significant amount of work on cutting force [1][2][3][4]. The different types of cutting dynamometers mainly include strain dynamometers, piezoelectric dynamometers, current dynamometers, and capacitive dynamometers, etc. Among these, strain dynamometers and piezoelectric dynamometers are widely used. The piezoelectric dynamometer measures the cutting force through the piezoelectric effect of piezoelectric crystals [5,6]; it has the advantages of high rigidity and good dynamic characteristics, but it is sensitive to strong magnetic field interference, and charge leakage will occur when the humidity is high. The strain dynamometer measures the cutting force through the strain effect of film resistance [7], which has the characteristics of high rigidity, excellent dynamic characteristics, strong anti-interference ability, and good linearity.
Due to its small size and high precision, the film strain sensor can be used in embedded development and has become one of the main directions of sensor development [8][9][10][11][12]. Alloy film strain sensors are generally divided into the following, according to their functions: substrate, insulating layer, sensitive layer, electrode area, and protective layer. The thin-film sensitive layer is the core part of the thin-film sensor. The material properties, the shape and size of the sensitive layer, and its electrical and mechanical properties are the key to determining the pros and cons of the thin-film sensor. Ni80Cr20 film is used for thin-film resistors and sensing materials of commonly used thin-film strain gauges due to its high reliability, high resistivity, high sensitivity coefficient, and low temperature coefficient of resistance (TCR) [13]. Ni80Cr20 alloy film needs to be processed in the form of a resistance grid as the strain sensitive layer, so it is necessary to pattern the film.
The film patterning method mainly includes the following: Chijui Han et al. used micro-molding and stamping to manufacture thin-film resistor grids [14]. This method improves the preparation efficiency, but the stamping process changes the grain arrangement of the surrounding film, increases the internal stress of the film, and thereby reduces the surface quality of the film. Shuwen Jiang et al. used a metal reticle method to pattern the resistive layer of the thin-film sensitive layer [15], but there is a certain limit on the width of the resistive gate. The metal mask is difficult to fabricate and is not suitable for the preparation of the fine film resistor grid.
At present, the resistance grid is mainly obtained by lithography and ion beam etching. The film patterning method mainly includes the following: First, the most common method is to form an anti-pattern photoresist pattern by photolithography, then deposit a sensitive layer through a deposition system and remove the photoresist to form a desired pattern [16]; this method of preparing a resistive gate is the same as photolithography magnetron sputtering (PMS). IBE is a well-established technique for patterning magnetic and precious metals in memory applications, and it has the characteristics of non-selectivity, accuracy, directivity, high resolution, and flexible processing [17,18]. Narasimhan Srinivasan et al. studied the relationship between ion beam etching uniformity, angle related etching rate, and etching surface quality, including the surface quality and accuracy of thin-film grids [19]. Abdullaev et al. studied ion beam etching of dense porous PZT films under different Ar+ ion flux incidence angles. The effects of etching rate and incidence angle on the microstructure of the films were discussed [20]. Soyer et al. selected the film material and discussed the relationship between the etching rate of PZT on silicon substrate and process parameters (gas composition, current density, and accelerating voltage), though some electrical damage was produced in the etching process. The evolution of electrical properties has been studied when a metal mask protects the PZT layer [21,22]. Gallium oxide thin films were deposited on sapphire substrates at different temperatures by laser pulse laser deposition. It was found that, with the increase in substrate temperature, the crystallinity of the thin film increased and the etching rate decreased [23]. In the preparation scheme of the film strain sensor sensitive layer resistance grid, the use of photolithography and IBE of the film patterning method can improve the process of magnetron sputtering Ni Cr film, thereby improving the film properties. This method of preparing a resistive gate by photolithography and ion beam etching is the same as photolithography ion beam etching (PIBE).
In this paper, the two film patterning methods of photolithography magnetron sputtering and photolithography ion beam etching are compared and analyzed, and the influence of the structure size of the thin-film resistance grid prepared by the two methods on the film performance is discussed.
Thin-Film Resistance Grid Patterning Method Comparison Test Design
In this experiment, IBE is applied as the preparation method of the alloy film resistance strain sensor sensitive grid and compared with the previous resistance grid preparation method of PMS. The preparation process of the two sets of thin-film resistor grids is shown in Figures 1 and 2. The experimentally prepared alloy film strain sensor is mainly used for tensile experimental research, and the shape of the substrate is the same. First, a surface treatment was performed on a stainless-steel substrate with a thickness of 0.55 mm, and the preparation of the insulating layer was completed by sputtering a thickness of 600 nm of Al 2 O 3 and a thickness of 300 nm of Si 3 N 4 onto the surface [24]. Figure 1 shows the film patterning process of PMS. On the stainless-steel substrate with an insulating layer on the surface, photolithography is used to develop and remove the photoresist pattern on the insulating layer substrate, and the thickness of the adhesive layer is approximately 2.5 µm. A layer of Ni Cr film with a thickness of 800 nm was then prepared by magnetron sputtering, and the photoresist and the film on the glue were then peeled off to complete the fabrication of the resistance layer of the sensitive layer.
is mainly used for tensile experimental research, and the shape of the substrate is the same. First, a surface treatment was performed on a stainless-steel substrate with a thickness of 0.55 mm, and the preparation of the insulating layer was completed by sputtering a thickness of 600 nm of Al2O3 and a thickness of 300 nm of Si3N4 onto the surface [24]. Figure 1 shows the film patterning process of PMS. On the stainless-steel substrate with an insulating layer on the surface, photolithography is used to develop and remove the photoresist pattern on the insulating layer substrate, and the thickness of the adhesive layer is approximately 2.5 μm. A layer of Ni Cr film with a thickness of 800 nm was then prepared by magnetron sputtering, and the photoresist and the film on the glue were then peeled off to complete the fabrication of the resistance layer of the sensitive layer. In the process of preparing the thin-film resistance grid of the sensitive layer of the thin-film sensor by PMS, the photoresist with the substrate will fail at 150 °C and above, so the preparation process of the subsequent sensitive layer film is highly required. Figure 2 shows the film patterning process of PIBE. First, a layer of Ni Cr film is sputtered onto a stainless-steel substrate with an insulating layer on the surface by magnetron sputtering. The photoresist is then developed on the Ni Cr film by photolithography to form a resistance grid-shaped pattern. Next, a thin-film resistor grid is etched by IBE technology. Finally, the remaining photoresist is stripped with a cleaning agent, such as acetone, to complete the fabrication of the resistor grid. An example of a film mask used for lithography exposure in the two processes is shown in Figure 3. is mainly used for tensile experimental research, and the shape of the substrate is the same. First, a surface treatment was performed on a stainless-steel substrate with a thickness of 0.55 mm, and the preparation of the insulating layer was completed by sputtering a thickness of 600 nm of Al2O3 and a thickness of 300 nm of Si3N4 onto the surface [24]. Figure 1 shows the film patterning process of PMS. On the stainless-steel substrate with an insulating layer on the surface, photolithography is used to develop and remove the photoresist pattern on the insulating layer substrate, and the thickness of the adhesive layer is approximately 2.5 μm. A layer of Ni Cr film with a thickness of 800 nm was then prepared by magnetron sputtering, and the photoresist and the film on the glue were then peeled off to complete the fabrication of the resistance layer of the sensitive layer. In the process of preparing the thin-film resistance grid of the sensitive layer of the thin-film sensor by PMS, the photoresist with the substrate will fail at 150 °C and above, so the preparation process of the subsequent sensitive layer film is highly required. Figure 2 shows the film patterning process of PIBE. First, a layer of Ni Cr film is sputtered onto a stainless-steel substrate with an insulating layer on the surface by magnetron sputtering. The photoresist is then developed on the Ni Cr film by photolithography to form a resistance grid-shaped pattern. Next, a thin-film resistor grid is etched by IBE technology. Finally, the remaining photoresist is stripped with a cleaning agent, such as acetone, to complete the fabrication of the resistor grid. An example of a film mask used for lithography exposure in the two processes is shown in Figure 3. In the process of preparing the thin-film resistance grid of the sensitive layer of the thin-film sensor by PMS, the photoresist with the substrate will fail at 150 • C and above, so the preparation process of the subsequent sensitive layer film is highly required. Figure 2 shows the film patterning process of PIBE. First, a layer of Ni Cr film is sputtered onto a stainless-steel substrate with an insulating layer on the surface by magnetron sputtering. The photoresist is then developed on the Ni Cr film by photolithography to form a resistance grid-shaped pattern. Next, a thin-film resistor grid is etched by IBE technology. Finally, the remaining photoresist is stripped with a cleaning agent, such as acetone, to complete the fabrication of the resistor grid. An example of a film mask used for lithography exposure in the two processes is shown in Figure 3.
Resistance Value and Resistance Strain Coefficient with Different Geometric Dimensions of Thin-Film Strain Sensors Prepared by PMS and PIBE
(1) The length of the longitudinal grid of the thin-film strain sensor A semiconductor characterization system is used to measure the resistance of thin-film strain sensors with different lengths of longitudinal grid obtained by PMS and PIBE. The measurement results are averaged by three measurements. The results are
Resistance Value and Resistance Strain Coefficient with Different Geometric Dimensions of Thin-Film Strain Sensors Prepared by PMS and PIBE
(1) The length of the longitudinal grid of the thin-film strain sensor A semiconductor characterization system is used to measure the resistance of thin-film strain sensors with different lengths of longitudinal grid obtained by PMS and PIBE. The measurement results are averaged by three measurements. The results are shown in Table 1. The error and range of the measured values are analyzed. From the resistance measurement results in Table 1, it can be seen that the resistance values of the thin-film strain sensors by the two methods have stability. The resistance value range of 3-5 mm length in PMS is stable within 20.4, and the resistance value range in PIBE is 26.1. It can be observed that the average resistance error multiple increases slightly from 3.5 mm to 4.1 mm in PMS, and that increases from 2.1 to 2.4 in PIBE. In the two methods, the thin-film resistance grid sample with a resistance grid length of 3 mm offers the best result on the total resistance, single grid range, and average error multiple, and the average error multiples are 3.523 and 2.165, respectively.
The resistance strain coefficient of the thin-film sensor needs to be measured and analyzed by tensile strain. A digital display push-pull meter is used for the tensile test, and a signal acquisition instrument (DASP) is used for data acquisition and analysis. The thin-film sensor is connected to the signal acquisition system to form a 1/4 Wheatstone bridge circuit. The wiring of the thin-film sensor with elastic substrate is shown in Figure 4. According to the tensile strain, the theoretical tensile strain of the elastic substrate of the thin-film sensor is calculated by Equation (1).
where E = 195 Gpa is the elastic modulus and A is the cross-sectional area of the measured position. Within the range of elastic deformation, when the safety factor of 304 stainless steel is 2, the allowable stress [σ] = 102.5 MPa and the elastic base can withstand the maximum tensile force Fmax = 451 N. Therefore, the range of 0-400 n is selected for tensile test.
The signal acquisition instrument (DASP) can match the voltage with the resistance value and maximize the output response. The equipment can directly output the voltage According to the tensile strain, the theoretical tensile strain of the elastic substrate of the thin-film sensor is calculated by Equation (1).
where E = 195 Gpa is the elastic modulus and A is the cross-sectional area of the measured position.
Within the range of elastic deformation, when the safety factor of 304 stainless steel is 2, the allowable stress [σ] = 102.5 MPa and the elastic base can withstand the maximum tensile force F max = 451 N. Therefore, the range of 0-400 n is selected for tensile test.
The signal acquisition instrument (DASP) can match the voltage with the resistance value and maximize the output response. The equipment can directly output the voltage change and strain value. The resistance change can be obtained through the voltage change. The correlation is shown in Formula (2), in which U i is the input voltage and U 0 is the output voltage.
The resistance variation and resistance stability of thin-film strain sensors with different lengths of longitudinal grid under tensile test are analyzed. The tensile results are shown in Figure 5.
the thin-film sensor is calculated by Equation (1).
where E = 195 Gpa is the elastic modulus and A is the cross-sectional area of the measured position. Within the range of elastic deformation, when the safety factor of 304 stainless steel is 2, the allowable stress [σ] = 102.5 MPa and the elastic base can withstand the maximum tensile force Fmax = 451 N. Therefore, the range of 0-400 n is selected for tensile test.
The signal acquisition instrument (DASP) can match the voltage with the resistance value and maximize the output response. The equipment can directly output the voltage change and strain value. The resistance change can be obtained through the voltage change. The correlation is shown in Formula (2), in which Ui is the input voltage and U0 is the output voltage.
The resistance variation and resistance stability of thin-film strain sensors with different lengths of longitudinal grid under tensile test are analyzed. The tensile results are shown in Figure 5. As shown in Figure 5, linear fitting and strain coefficient calibration are carried out on the measurement results of the thin-film strain sensor by different methods. The slope of the fitting curve is the resistance strain coefficient of the thin-film strain sensor. The resistance strain coefficient decreases gradually with the increase in the length of the longitudinal grid when it is in the range of 3-6 mm. The resistance strain coefficient of the thin-film resistance grid is between 1.46-1.62 and 1.54-1.62, respectively, which is within the range of the Ni-Cr material strain coefficient. Among them, the maximum strain coefficients of the strain grid with a longitudinal grid length of 3 mm are 1.61 and 1.62, respectively.
The resistance values of thin-film strain sensors by the two methods have stability. Among them, the resistance error of the thin-film resistance grid prepared by PIBE is significantly lower than that of PMS; the difference between the errors of two resistance multiples is almost 1.5. Because the adjustment of substrate temperature is increased in the process of preparing Ni-Cr thin-film by PIBE, the resistance value of thin-film resistance grid is optimized to a certain extent.
According to Figure 5, the resistance strain coefficients of thin-film strain sensors prepared by PIBE are higher than those prepared by PMS. Resistance value and the geometric size of the thin-film resistance grid are the two main aspects affecting the resistance strain coefficient; the influence of the length of the thin-film resistance grid on the strain coefficient is very small compared with that of thin-film resistance, and the main reason for the increase in strain coefficient is the lower resistance value of the thin-film strain sensor.
(2) The width of the longitudinal grid of the thin-film strain sensor The thin-film strain sensors have longitudinal grids with widths of 0.05 mm, 0.1 mm, 0.15 mm, and 0.2 mm. The resistance value of the thin-film strain sensor is 1200 Ω, and the other dimensions of the thin-film resistance grids are shown in Table 2. The semiconductor characterization system (4200 SCS) is used to measure the resistance of thin-film strain sensors with longitudinal grids of different widths. The measurement results, range, and error analysis results are shown in Table 3. As can be seen from the resistance measurement results in Table 2, the resistance values of thin-film strain sensors by the two methods have stability. The resistance value range of 0.05-0.2 mm width in PMS is stable within 21.5, and the resistance value range in PIBE is 33.1. The other three single grid ranges are less than 21.1 in PIBE. With the increase of resistance grid width from 0.05 mm to 0.2 mm, the total resistance and the average error multiple gradually decrease. For thin-film resistance grids with widths of 0.05 mm to 0.2 mm, the average error multiples are 3-5.2 and 2.8-4, respectively. The thin-film resistance grid sample of 0.2 mm wide performs the best on the total resistance, single grid range, and average error multiple, with an average error multiple of 3.045. The thin-film resistance grid sample with the 0.1 mm width performs the best on the total resistance, single grid range, and average error multiple, and the average error multiple is 2.139.
As shown in Figure 6, linear fitting and strain coefficient calibration were carried out on the measurement results of the thin-film strain sensor by different methods. With the increase in width, the resistance strain coefficients of the two methods first increase and then decrease when the width of the grid is in the range of 0.05-0.2 mm. The resistance strain coefficients of thin-film strain sensors are between 1.45-1.55 and 1.45-1.62, respectively, which are in the range of Ni-Cr material strain coefficients. Among them, the maximum strain coefficients of the strain grid with a longitudinal grid width of 0.1 mm are 1.55 and 1.62, respectively. increase in width, the resistance strain coefficients of the two methods first increase and then decrease when the width of the grid is in the range of 0.05 mm-0.2 mm. The resistance strain coefficients of thin-film strain sensors are between 1.45-1.55 and 1.45-1.62, respectively, which are in the range of Ni-Cr material strain coefficients. Among them, the maximum strain coefficients of the strain grid with a longitudinal grid width of 0.1 mm are 1.55 and 1.62, respectively. According to Figure 6, the resistance with different widths of thin-film strain sensors prepared by the two methods is roughly similar to the results of different lengths. With the same width, the resistance error of the thin-film strain sensors prepared by PIBE is lower than that of PMS, and the difference of resistance error multiple between the two is almost 1.5. With the increase in the width, the resistance value of the thin-film sensor prepared by PIBE first decreases and then increases. The change of the width of the thin-film sensor has a significant effect on the resistance strain coefficient of the thin-film sensor. The resistance strain coefficient of the thin-film strain sensor prepared by PIBE is generally higher than that by PMS.
(3) The thickness of the longitudinal grid of the thin-film strain sensor The thin-film strain sensors have longitudinal grids with thicknesses of 800 nm, 900 nm, 1000 nm, and 1100 nm. The resistance value of the thin-film strain sensors is 1200 Ω, and the other dimensions of the thin-film resistance grids are shown in Table 4. According to Figure 6, the resistance with different widths of thin-film strain sensors prepared by the two methods is roughly similar to the results of different lengths. With the same width, the resistance error of the thin-film strain sensors prepared by PIBE is lower than that of PMS, and the difference of resistance error multiple between the two is almost 1.5. With the increase in the width, the resistance value of the thin-film sensor prepared by PIBE first decreases and then increases. The change of the width of the thin-film sensor has a significant effect on the resistance strain coefficient of the thin-film sensor. The resistance strain coefficient of the thin-film strain sensor prepared by PIBE is generally higher than that by PMS.
(3) The thickness of the longitudinal grid of the thin-film strain sensor The thin-film strain sensors have longitudinal grids with thicknesses of 800 nm, 900 nm, 1000 nm, and 1100 nm. The resistance value of the thin-film strain sensors is 1200 Ω, and the other dimensions of the thin-film resistance grids are shown in Table 4. From the resistance measurement results in Table 5, it can be seen that the resistance values of the thin-film strain sensors by the two methods have stability. With the resistance grid thickness increase from 800 nm to 1100 nm, the total resistance value gradually decreases. The minimum range of the 1000-nm thick thin-film resistance grid is 14 in PMS, and the minimum range of the 900-nm thick thin-film resistance grid is 14.1 in PIBE. With the increase of resistance grid thickness from 800 nm to 1100 nm, the average error multiple decreases and then increases, and the values are between 3.6-4.4 and 2.1-2.5, respectively. When the thickness of the thin-film resistance grid sample is 1000 nm, the total resistance, single grid range, and average error multiple are better, and the average error multiples are 3.632 and 2.125, respectively. The resistance variation and resistance stability of thin-film strain sensors with different thicknesses of grid under tensile test are analyzed. The tensile results are shown in Figure 7. As shown in Figure 7, linear fitting and strain coefficient calibration were carried out on the measurement results of the thin-film strain sensor by different methods. With the increase in thickness, the resistance strain coefficient gradually increases, and the resistance strain coefficient of the thin-film resistance grid is between 1.45-1.56 and 1.57-1.63, respectively, which are in the range of the strain coefficient of Ni-Cr material. Among them, the maximum strain coefficients of the strain grating with the thickness of 1100 nm are 1.56 and 1.63, respectively.
Among them, the resistance error of the thin-film resistance grid prepared by photolithography is still significantly lower than that prepared by photolithography magnetron sputtering, and the difference between the two resistance error multiples is about 1.5. With the increase in thickness, the resistance of the thin-film resistance grid decreases gradually. The change of the thickness of the thin-film sensor has a significant effect on the resistance strain coefficient of the thin-film sensor. The resistance strain coefficient of the thin-film strain sensor prepared by PIBE is generally higher than that by PMS.
Orthogonal Experiment on Process Parameters of PIBE
In the pattering process of the Ni Cr film of PIBE, the cathode current is 5.8 A and the neutralization current is 6.10 A. Both are the matching current generated according to the change of voltage, and the regulation effect is limited; the ion beam energy is the accumulation of the arc voltage and the negative bias of the substrate. The arc voltage is 45 V, which has little effect. In the process, the negative bias of the substrate can be adjusted to change the properties of the film. The acceleration voltage is 300 V, and the As shown in Figure 7, linear fitting and strain coefficient calibration were carried out on the measurement results of the thin-film strain sensor by different methods. With the increase in thickness, the resistance strain coefficient gradually increases, and the resistance strain coefficient of the thin-film resistance grid is between 1.45-1.56 and 1.57-1.63, respectively, which are in the range of the strain coefficient of Ni-Cr material. Among them, the maximum strain coefficients of the strain grating with the thickness of 1100 nm are 1.56 and 1.63, respectively.
Among them, the resistance error of the thin-film resistance grid prepared by photolithography is still significantly lower than that prepared by photolithography magnetron sputtering, and the difference between the two resistance error multiples is about 1.5. With the increase in thickness, the resistance of the thin-film resistance grid decreases gradually. The change of the thickness of the thin-film sensor has a significant effect on the resistance strain coefficient of the thin-film sensor. The resistance strain coefficient of the thin-film strain sensor prepared by PIBE is generally higher than that by PMS.
Orthogonal Experiment on Process Parameters of PIBE
In the pattering process of the Ni Cr film of PIBE, the cathode current is 5.8 A and the neutralization current is 6.10 A. Both are the matching current generated according to the change of voltage, and the regulation effect is limited; the ion beam energy is the accumulation of the arc voltage and the negative bias of the substrate. The arc voltage is 45 V, which has little effect. In the process, the negative bias of the substrate can be adjusted to change the properties of the film. The acceleration voltage is 300 V, and the coupling coefficient is 1.25.
As a result, the incident angle of the ion beam, argon flow rate, and substrate negative bias are the three main process parameters of ion beam etching. Table 6 is the factor level table of the orthogonal test of the three process parameters. Table 7 shows the factor level orthogonal test table L9 (33) in Table 2 and the test results. The etching rates of Ni Cr alloy and photoresist are obtained by the thickness dividing time, and the roughness is measured by optical microscope. In the process of ion beam etching, AZ6140 photoresist is used as the mask material to form the resistance grid pattern of the Ni-Cr film. It can be seen from Table 7 that, under the same process parameters, the etching rate of photoresist is 1 nm/min-3.7 nm/min lower than that of Ni-Cr film. The etching rate of photoresist increases with the increase in the negative bias of the substrate, and the increase value of the etching rate of photoresist changes in the range of 2 nm when the negative bias of substrate increases by 100 V. At the same time, the ratio of the etching rate of photoresist to the ion beam etching of Ni-Cr film is between 0.86 and 0.96; as a result, AZ6140 photoresist can meet the conditions of ion beam etching.
The etched photoresist was subjected to a peeling test. When the negative bias voltage of the substrate was 550 V, the etched photoresist had incomplete peeling, such as in Test Nos. 3, 5, and 7. This can be explained by the fact that when the negative bias voltage of the substrate is high, the ion beam energy would be increased, resulting in the carbonization of the photoresist. As a result, the negative bias voltage of the base should be less than 550 V in order to reduce the adverse impact of photoresist peeling on the nickel chromium film.
Resistance Strain Coefficient and Error Analysis of Resistance Grid
Statistical analysis was carried out according to the four levels of each size parameter design. Figure 8 shows the influence of the geometric size of the grid on the resistance value of the thin-film resistance grid by the two preparation methods. As show in Figure 8a,b, the width of the longitudinal grid has the highest impact on the resistance value of the thin-film resistance grid, and the resistance error generally decreases with the increase in the width; with the increase in the thickness, the resistance error first decreases and then increases, and it reaches the lowest when the thickness is 1000 nm. Relating to the length of the thin-film resistance grid, in Figure 8b, the curve of error fluctuates slightly multiple times up and down, and this shows a stable trend. The error multiple of the length of the resistance grid prepared by the two methods is relatively minimal. The resistance strain coefficients of the thin-film resistance grid prepared by the two preparation methods are shown in Figure 9. Figure 9. Analysis of resistance strain coefficients of thin-film resistor grid geometry.
As Figure 9 shows, the resistance strain coefficient of the width of the longitudinal grid obtained by the two methods first increases and then decreases, and the corresponding resistance strain coefficient value is the largest when the width is 0.1 mm. The resistance strain coefficient increases as the thickness of the grid prepared by PMS increases, and the resistance strain coefficient first decreases and then increases as the thickness of the grid obtained by PIBE increases. The resistance strain coefficient first decreases and then increases when the length of the resistance grid prepared by PMS increases, and the resistance strain coefficient decreases when the thickness of the resistance grid obtained by PIBE increases. The thickness of the longitudinal grid has the least effect on the resistance strain coefficient. According to the above analysis, it can be seen that a better resistance strain coefficient can be obtained by PIBE. For the resistance grid, the cutting force is mainly measured in the length direction of the grids; the PIBE is used to obtain the resistance grid in this paper. The resistance strain coefficients of the thin-film resistance grid prepared by the two preparation methods are shown in Figure 9. The resistance strain coefficients of the thin-film resistance grid prepared by the two preparation methods are shown in Figure 9. Figure 9. Analysis of resistance strain coefficients of thin-film resistor grid geometry.
As Figure 9 shows, the resistance strain coefficient of the width of the longitudinal grid obtained by the two methods first increases and then decreases, and the corresponding resistance strain coefficient value is the largest when the width is 0.1 mm. The resistance strain coefficient increases as the thickness of the grid prepared by PMS increases, and the resistance strain coefficient first decreases and then increases as the thickness of the grid obtained by PIBE increases. The resistance strain coefficient first decreases and then increases when the length of the resistance grid prepared by PMS increases, and the resistance strain coefficient decreases when the thickness of the resistance grid obtained by PIBE increases. The thickness of the longitudinal grid has the least effect on the resistance strain coefficient. According to the above analysis, it can be seen that a better resistance strain coefficient can be obtained by PIBE. For the resistance grid, the cutting force is mainly measured in the length direction of the grids; the PIBE is used to obtain the resistance grid in this paper. Figure 9. Analysis of resistance strain coefficients of thin-film resistor grid geometry.
As Figure 9 shows, the resistance strain coefficient of the width of the longitudinal grid obtained by the two methods first increases and then decreases, and the corresponding resistance strain coefficient value is the largest when the width is 0.1 mm. The resistance strain coefficient increases as the thickness of the grid prepared by PMS increases, and the resistance strain coefficient first decreases and then increases as the thickness of the grid obtained by PIBE increases. The resistance strain coefficient first decreases and then increases when the length of the resistance grid prepared by PMS increases, and the resistance strain coefficient decreases when the thickness of the resistance grid obtained by PIBE increases. The thickness of the longitudinal grid has the least effect on the resistance strain coefficient. According to the above analysis, it can be seen that a better resistance strain coefficient can be obtained by PIBE. For the resistance grid, the cutting force is mainly measured in the length direction of the grids; the PIBE is used to obtain the resistance grid in this paper.
Comparative Analysis of Thin-Film Resistor Grid Preparation Methods
The thin-film resistor grids prepared by PIBE and the thin-film resistor grids prepared by PMS are analyzed for electrical properties and surface morphology. Thin-film resistor grids with a resistance of 1200 Ω are selected. By preparing thin-film resistor grids with different widths, the influence of the width of the resistor grid on the film resistance is analyzed, and the film size is determined by Formula (3).
Some of the thin-film resistor grids produced by the PIBE are shown in Figure 10a. The semiconductor characterization system 4200 SCS is used to measure the resistance of the thin-film resistor gate. The experimental equipment is shown in Figure 10b.
Comparative Analysis of Thin-Film Resistor Grid Preparation Methods
The thin-film resistor grids prepared by PIBE and the thin-film resistor grids prepared by PMS are analyzed for electrical properties and surface morphology. Thin-film resistor grids with a resistance of 1200 Ω are selected. By preparing thin-film resistor grids with different widths, the influence of the width of the resistor grid on the film resistance is analyzed, and the film size is determined by Formula (3).
Some of the thin-film resistor grids produced by the PIBE are shown in Figure 10a. The semiconductor characterization system 4200 SCS is used to measure the resistance of the thin-film resistor gate. The experimental equipment is shown in Figure 10b. Three samples are selected and measured three times to obtain the average value. The final measurement results and errors are shown in Table 8. It can be seen from Table 8 that, under the same width, the average error multiple of the resistance value of the resistance gate prepared by the PIBE method is lower than that of the PMS method. Through the overall longitudinal observation, the accuracy of the resistance value of the thin-film resistance gate increases as the width of the resistance gate increases. SEM is used to observe the size, surface morphology, and contour integrity of the thin-film resistor grid prepared by the PIBE method, as shown in Figure 11. Figure 11a shows a thin-film resistor grid sample of 0.1 mm width prepared by the PIBE method, Three samples are selected and measured three times to obtain the average value. The final measurement results and errors are shown in Table 8. It can be seen from Table 8 that, under the same width, the average error multiple of the resistance value of the resistance gate prepared by the PIBE method is lower than that of the PMS method. Through the overall longitudinal observation, the accuracy of the resistance value of the thin-film resistance gate increases as the width of the resistance gate increases. SEM is used to observe the size, surface morphology, and contour integrity of the thin-film resistor grid prepared by the PIBE method, as shown in Figure 11. Figure 11a shows a thin-film resistor grid sample of 0.1 mm width prepared by the PIBE method, and Figure 11b shows a thin-film resistor grid sample of 0.15 mm width prepared by the PIBE method. The overall structure is complete, with very few missing and redundant areas. In addition, the grid size is uniform and stable. and Figure 11b shows a thin-film resistor grid sample of 0.15 mm width prepared by the PIBE method. The overall structure is complete, with very few missing and redundant areas. In addition, the grid size is uniform and stable.
Effects of Etching Process Parameters on Etching Rate, Surface Quality, and Resistivity
According to the orthogonal test results in Table 7, the effects of various factors on the film etching rate, film surface roughness, and resistivity are discussed by range analysis, as shown in Table 9, where X is the incident angle, Y is the argon flow, Z is the negative bias voltage of the substrate, ki (i = 1, 2, 3) is the mean value of the measurement results at every various levels, and Rj (j = A, B, C) is the range of various factors. From the results in the Table 9, the ki of X, Y, and Z have small changes and change in the range of 24.5-29.4, but the Rj of X, Y, and Z have great changes and change in the range of 0.9-9.1. As a result, the negative bias voltage has the greatest impact to the film etching rate. Similarly, the incident angle has the greatest impact on the film roughness, and the incident angle has the greatest impact on the film resistivity. Figure 12 is a parameter range analysis of the film etching rate, roughness, and film resistivity; they are the results of process parameters such as incident angle, argon flow, and substrate negative bias.
Effects of Etching Process Parameters on Etching Rate, Surface Quality, and Resistivity
According to the orthogonal test results in Table 7, the effects of various factors on the film etching rate, film surface roughness, and resistivity are discussed by range analysis, as shown in Table 9, where X is the incident angle, Y is the argon flow, Z is the negative bias voltage of the substrate, ki (i = 1, 2, 3) is the mean value of the measurement results at every various levels, and R j (j = A, B, C) is the range of various factors. From the results in the Table 9, the ki of X, Y, and Z have small changes and change in the range of 24.5-29.4, but the R j of X, Y, and Z have great changes and change in the range of 0.9-9.1. As a result, the negative bias voltage has the greatest impact to the film etching rate. Similarly, the incident angle has the greatest impact on the film roughness, and the incident angle has the greatest impact on the film resistivity. Figure 12 is a parameter range analysis of the film etching rate, roughness, and film resistivity; they are the results of process parameters such as incident angle, argon flow, and substrate negative bias. Figure 12a shows the range analysis of thin-film etching rate. The influence order of process parameters on the etching rate is s follows: substrate negative bias > argon flow > incident angle. Because the substrate negative bias can be superimposed with arc electrode voltage to form ion source energy, increased ion beam energy accelerates the etching speed. The increase in argon flow increases the number of argon ions and accelerates the etching speed similarly. Figure 12b shows the range analysis of film surface roughness. The influence order of process parameters is as follows: incident angle > substrate negative bias > argon flow. Changing the incident angle of the ion beam can effectively improve the surface quality of the etching surface. When the incident angle is 45°, the surface roughness value is the smallest and the surface quality is the best. Figure 12c shows the range analysis of resistivity. The influence order of process parameters is as follows: incidence angle > substrate negative bias > argon flow. The resistivity is the smallest when the incidence angle is 45° and the resistivity is the largest when the incidence angle is 20°, indicating that the incidence angle would change the distance between thin-film particles and cause the change of resistivity.
Variance Analysis of Thin-Film Etching Process Parameters
The regression equations of etching rat,e V, surface roughness, Ra, and resistivity on incident angle, substrate negative bias, and argon flow are established using Minitab software and the multiple regression fitting method, as shown in Formula (4). The analysis of variance of the regression equation is shown in Table 5.
According to Table 10, when the inspection level is taken as α = 0.005, the F distribution table is checked to obtain Fα (3,5) = 16.53. Compared with the data of F in the table, the response regression models of etching rate, V, have reached a significant level, but there is a large gap between the response regression models of surface roughness and resistivity. Figure 12a shows the range analysis of thin-film etching rate. The influence order of process parameters on the etching rate is s follows: substrate negative bias > argon flow > incident angle. Because the substrate negative bias can be superimposed with arc electrode voltage to form ion source energy, increased ion beam energy accelerates the etching speed. The increase in argon flow increases the number of argon ions and accelerates the etching speed similarly. Figure 12b shows the range analysis of film surface roughness. The influence order of process parameters is as follows: incident angle > substrate negative bias > argon flow. Changing the incident angle of the ion beam can effectively improve the surface quality of the etching surface. When the incident angle is 45 • , the surface roughness value is the smallest and the surface quality is the best. Figure 12c shows the range analysis of resistivity. The influence order of process parameters is as follows: incidence angle > substrate negative bias > argon flow. The resistivity is the smallest when the incidence angle is 45 • and the resistivity is the largest when the incidence angle is 20 • , indicating that the incidence angle would change the distance between thin-film particles and cause the change of resistivity.
Variance Analysis of Thin-Film Etching Process Parameters
The regression equations of etching rat, e V, surface roughness, Ra, and resistivity on incident angle, substrate negative bias, and argon flow are established using Minitab software and the multiple regression fitting method, as shown in Formula (4). The analysis of variance of the regression equation is shown in Table 5.
According to Table 10, when the inspection level is taken as α = 0.005, the F distribution table is checked to obtain F α (3,5) = 16.53. Compared with the data of F in the table, the response regression models of etching rate, V, have reached a significant level, but there is a large gap between the response regression models of surface roughness and resistivity.
In order to obtain a good surface quality of the Ni-Cr alloy film and stable film deposition rate, the etching process parameters are as follows: 45 • ion beam incidence angle, 7.5 sccm argon flow and 450 V substrate negative bias, 25.3 nm/min etching rate of Ni-Cr alloy thin-film, and 23.4 nm/min etching rate of AZ6140 photoresist.
Effect of Substrate Temperature on Ni Cr Alloy Film
When the resistance grid is prepared by the PMS method, the photoresist is coated before the preparation of the Ni Cr film, and the photoresist fails at 150 • C. When the resistance grid is prepared by PIBE, the photoresist is coated after the preparation of the nickel chromium film, which will not be affected by temperature. Figure 13 shows the surface morphology of Ni Cr film without and with substrate temperature by magnetron sputtering by scanning electron microscopy (SEM). Figure 13a is an SEM of the Ni Cr film without adding substrate temperature. The surface of the Ni Cr film has an obvious grain size, and the overall particle growth is uneven, resulting in low surface flatness, dark gloss, and large surface roughness. Figure 13b is an SEM of the Ni Cr film, adding the substrate temperature, which is higher than 200 • C. The grain growth of the Ni Cr film is relatively dense, and the gloss is bright, so the overall surface is even and flat. In order to obtain a good surface quality of the Ni-Cr alloy film and stable film deposition rate, the etching process parameters are as follows: 45° ion beam incidence angle, 7.5 sccm argon flow and 450 V substrate negative bias, 25.3 nm/min etching rate of Ni-Cr alloy thin-film, and 23.4 nm/min etching rate of AZ6140 photoresist.
Effect of Substrate Temperature on Ni Cr Alloy Film
When the resistance grid is prepared by the PMS method, the photoresist is coated before the preparation of the Ni Cr film, and the photoresist fails at 150 °C. When the resistance grid is prepared by PIBE, the photoresist is coated after the preparation of the nickel chromium film, which will not be affected by temperature. Figure 13 shows the surface morphology of Ni Cr film without and with substrate temperature by magnetron sputtering by scanning electron microscopy (SEM). Figure 13a is an SEM of the Ni Cr film without adding substrate temperature. The surface of the Ni Cr film has an obvious grain size, and the overall particle growth is uneven, resulting in low surface flatness, dark gloss, and large surface roughness. Figure 13b is an SEM of the Ni Cr film, adding the substrate temperature, which is higher than 200 °C. The grain growth of the Ni Cr film is relatively dense, and the gloss is bright, so the overall surface is even and flat. Figure 14 shows the surface topography of the Ni Cr film by atomic force microscopy (AFM). As shown in Figure 14a, the surface of the Ni Cr film has more and higher peaks, so the roughness is higher. However, the surface of the Ni Cr film in Figure 14b has fewer and lower peaks, and the roughness is lower. Figure 14c shows that the film surface is relatively uniform as a whole, with small and dense wave peaks. As a result, substrate temperature and ion beam etching can improve the surface quality of Ni Cr alloy films. Figure 14 shows the surface topography of the Ni Cr film by atomic force microscopy (AFM). As shown in Figure 14a, the surface of the Ni Cr film has more and higher peaks, so the roughness is higher. However, the surface of the Ni Cr film in Figure 14b has fewer and lower peaks, and the roughness is lower. Figure 14c shows that the film surface is relatively uniform as a whole, with small and dense wave peaks. As a result, substrate temperature and ion beam etching can improve the surface quality of Ni Cr alloy films.
Conclusions
In this paper, photolithography and ion beam etching are combined to pattern Ni Cr alloy film in a thin-film strain sensor to obtain the resistance grid structure, which is a new method of obtaining high-quality patterns on the metal film of thin-film strain sensors.
(1) PIBE is more suitable for Ni Cr alloy film patterning than PMS, because substrate temperature during the preparation of the Ni Cr film in PMS would fail at high temperatures. By comparing the single gate range, the average error multiples and resistance strain coefficients of the Ni-Cr alloy resistance grid obtained by PIBE are better than those obtained by PMS. Additionally, the stability of the resistance value of the thin-film resistor gate prepared by the PIBE method is better than that of the PMS method.
(2) In the PIBE, the influence order of the etching process parameters on the etching rate is as follows: substrate negative bias > argon flow > incident angle. The influence order on the film surface roughness is as follows: incident angle > substrate negative bias > argon flow. The influence order on the resistivity is as follows: incident angle > substrate negative bias > argon flow. The resistivity is smallest when the incident angle is
Conclusions
In this paper, photolithography and ion beam etching are combined to pattern Ni Cr alloy film in a thin-film strain sensor to obtain the resistance grid structure, which is a new method of obtaining high-quality patterns on the metal film of thin-film strain sensors.
(1) PIBE is more suitable for Ni Cr alloy film patterning than PMS, because substrate temperature during the preparation of the Ni Cr film in PMS would fail at high temperatures. By comparing the single gate range, the average error multiples and resistance strain coefficients of the Ni-Cr alloy resistance grid obtained by PIBE are better than those obtained by PMS. Additionally, the stability of the resistance value of the thin-film resistor gate prepared by the PIBE method is better than that of the PMS method.
(2) In the PIBE, the influence order of the etching process parameters on the etching rate is as follows: substrate negative bias > argon flow > incident angle. The influence order on the film surface roughness is as follows: incident angle > substrate negative bias > argon flow. The influence order on the resistivity is as follows: incident angle > substrate negative bias > argon flow. The resistivity is smallest when the incident angle is 45 • , and resistivity is the largest when the incident angle is 20 • . Through the analysis of variance and considering the influence of stripping photoresist, the etching process parameters can be 45 • ion beam incidence angle, 7.5 sccm argon flow, and 450 V substrate negative bias; the etching rate of nickel chromium alloy film is 25.3 nm/min, and the etching rate of AZ6140 photoresist is 23.4 nm/min.
(3) The test results of SEM and AFM show that increasing the substrate temperature can refine the surface grains, and PIBE can make the film surface of the Ni Cr alloy resistance grid smooth, the film surface is relatively uniform, and the surface roughness value is small. | 11,935 | 2022-06-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
A Note on the Degenerate Type of Complex Appell Polynomials
: In this paper, complex Appell polynomials and their degenerate-type polynomials are considered as an extension of real-valued polynomials. By treating the real value part and imaginary part separately, we obtained useful identities and general properties by convolution of sequences. To justify the obtained results, we show several examples based on famous Appell sequences such as Euler polynomials and Bernoulli polynomials. Further, we show that the degenerate types of the complex Appell polynomials are represented in terms of the Stirling numbers of the first kind.
The sequence of complex Appell polynomials {A n (z)} ∞ n=0 can be obtained by either of the following equivalent conditions: or the following formal equality where n! , (a 0 = 0) is a formal power series with coefficients a n called by Appell numbers.There is a large number of classical sequences of polynomials in Appell polynomials and a list of famous classical Appell polynomials is shown as the Bernoulli polynomials, the Euler polynomials, the Hermite polynomials, the Genocchi polynomials, the generalized Bernoulli polynomials, the generalized Euler polynomials, etc. (see [2,3] for more examples).
Many authors have obtained useful results by considering the Appell polynomials of a complex variable by splitting complex-valued polynomials into real and imaginary values: Analytic properties of the sequence of complex Hermite polynomials are studied in [12] and their orthogonality relation is Symmetry 2019, 11, 1339; doi:10.3390/sym11111339www.mdpi.com/journal/symmetryestablished in [13,14].Also, authors in [15] show the representation of the real and imaginary parts in the complex Appell polynomials in terms of the Chebyshev polynomials of the first and second kind.
With the help of the research of complex Appell polynomials, their degenerate versions have been also extensively studied looking for useful identities, as well as their related properties, since Carlitz introduced degenerate formulas of special numbers and polynomials in [16,17].Further, there have been studies of various degenerate numbers and polynomials by means of degenerate types of generating functions, combinatorial methods, umbral calculus, and differential equations.For example, several authors have studied the degenerate types of Appell polynomials, such as Bernoulli and Euler polynomials (see [18][19][20][21][22][23]) and their complex version [24], degenerate gamma functions, degenerate Laplace transforms [25], and their modified ones [26].
The research for degenerate versions of known special numbers and polynomials brought many valuable identities and properties into mathematics.In the future, we hope the results of the degenerate types of complex Appell polynomials can be further applicable to many different problems in various areas.
The aim of this paper is to introduce Appell polynomials of a complex variable and their degenerate formulas and provide some of their properties and examples.Also, we study some further properties of the degenerate type of Appell polynomials and show that degenerate cosine-and sine-Appell polynomials can be expressed by the Stirling numbers of the first kind.
The paper is organized as follows.In Section 2, we recall the complex Appell polynomials with cosine-and sine-Appell polynomials and present some properties and their relations.Section 3 introduces the degenerate version of complex Appell polynomials and provides some expressions, properties, and examples.Finally, Section 4 contains the conclusion of this study.
Complex Appell Polynomials
In this section, we introduce the cosine-Appell polynomials and sine-Appell polynomials by splitting complex Appell polynomials into real and imaginary parts, and present some properties, which can apply to any Appell-type polynomial, as mentioned in the introduction.Definition 1.For n ∈ N ∪ {0}, we define the cosine-Appell polynomials A (c) n (x, y) and the sine-Appell polynomials A (s) n (x, y) by the generating functions respectively, The definition in (3) with the fact (2) implies that Also, it is easily observed that for z = x − iy, and Further, noting that n (x, y) = (A n (z)), z = x + iy for n ≥ 0, it can be checked that the cosine-Appell polynomials and the sine-Appell polynomials satisfy the following properties: The above properties are easily proved by the comparison of coefficients after polynomial expansion of the generating functions and we omit the proofs here for lack of space.
We next investigate further properties of complex Appell polynomials.
Theorem 1.For n, m ∈ N ∪ {0}, z = x + iy, the following product of complex Appell polynomials is established: m (x, −y) Proof.The product of the identities for A n (z) and A m ( z) from (4) shows the desired identity.
Remark 1.Note that the sequences {A and respectively.For example, the first four consecutive polynomials are listed as in Tables 1 and 2.
Table 1.Expressions of the first four Let n be a nonnegative integer and z = x + iy.Then, the complex Appell polynomials satisfy the following identities: Proof.The right side of the first identity we get directly by the 2-fold binomial convolution of sequence {A n (x)} ∞ n=0 (thus using the square of exponential generating function (A(t)e xt ) 2 ).Alternatively, rewriting (A(t)e xt ) 2 as the product of two exponential generating functions by this way: (A(t)e (x+iy)t )(A(t)e (x−iy)t ), we obtain the left side of the first identity by the binomial convolution of sequences {A n (z)} and {A n ( z)}.The second identity we obtain in a similar argument.
Remark 2. In particular, if we consider
for some sequences C n (z) and S n (z).As A n (x) = x n for A(t) = 1, we have from identities (5) and (6) The sequences C n (z) and S n (z) satisfy the following formulas.
The following two subsequent theorems show that the complex Appell polynomials can be split into C n (z) and S n (z) and their relations.Theorem 3.For n ∈ N ∪ {0}, the Appell-type polynomials satisfy the following relations with C n (z) and S n (z), Proof.As A(t)e xt (cos(yt) + i sin(yt)) exponential generating functions for A n (z), z = x + iy, we have directly equation ( 13) by the binomial convolution of sequences {a n } ∞ n=0 and {C n (z) + iS n (z)} ∞ n=0 .
Theorem 4. For k > 0, the cosine-Appell polynomials and the sine-Appell polynomials satisfy the following properties, Proof.As A(t)e kt e ±xt cos(±yt) exponential generating function for n (k ± x, y), we get the first line in formula ( 14) by the binomial convolution of sequences {A n (k)} ∞ n=0 and {C n (x + iy)(±1) n } ∞ n=0 .The second line of (14) follows from formula (5).Similarly, identity (15) can be proved.
Next, the derivatives of A
Similarly, it can be seen that By using ( 16) and ( 17), it is easily shown that
Degenerate Type of Complex Appell Polynomials
In this section, we introduce the degenerate type of complex Appell polynomials based on the non-degenerate ones given in Definition 1 and study some of their properties.To do this, we first recall and introduce several definitions, some notations, and basic properties.
Then we define the degenerate type of complex Appell polynomials by the generating function respectively.
Letting x = i in Equation ( 18), we find, for λ ∈ R\{0} so that lim From Definitions 2 and 3 with property (21), one can see that for z = x + iy, Also, the following property can be stated.
Lemma 3.
For n ≥ 0 and z = x + iy, let A n (x, y; λ) and A n (x, y; λ) be the degenerate cosine-Appell and sine-Appell polynomials defined in (27).Then, we have Proof.We first note that from ( 26) and (27), 28), so that the desired identities are easily obtained.
n (x, y; λ)} ∞ n=0 can be explicitly determined when A λ (t) is specified.For example, for A n (z; λ) = E n (z; λ) and A n (z; λ) = B n (z; λ) as defined in (23) and (24) respectively, the first four consecutive polynomials can be listed as in Tables 3 and 4. One can check that from Tables 1 and 2 Let n be a non-negative integer.The degenerate cosine and sine functions, namely cos and respectively, where S 1 (n, m) are the Stirling polynomials of the first kind, which satisfy (for details see [35][36][37]) Proof.We show the proof for cos where we use the well-known identity (see [35,37]) We next give an expression of A Theorem 6.Let n be a non-negative integer.Then, the following identities hold: Proof.The identity for A (c) n (x, y; λ) we get easily by the binomial convolution of the sequence {A n (x; λ)} ∞ n=0 and the sequence in Formula (29).Similarly, the identity for A n (x, y; λ) we obtain by the binomials convolution of sequences {A n (x; λ)} ∞ n=0 and (30).
Lemma 5. If we assume that
Proof.We show the proof of the first formula only, as the proof of the second one can be done similarly.
Finally, we show that the degenerate types of cosine-and sine-Appell polynomials, A n (x, y; λ) and A Proof.We prove the first identity only, as the second one can be proved similarly.By the binomials convolution of sequences {A n (x; λ)} ∞ n=0 and (29) and using identity (28) we obtain the first identity.
nTheorem 5 .
(x, y) and A (s) n (x, y) show that the sequence {A n (z)} ∞ n=0 satisfies the condition(1).For all n ∈ N ∪ {0} and z = x + iy, the sequence {A n (z)} ∞ n=0 is verified by a sequence of complex Appell polynomials in terms of A (c) n (x, y) and A (s) n (x, y).Proof.Noting that ∂ ∂x A (c) 0 (x, y) = 0, the derivative of the cosine-Appell polynomials satisfies e xt cos(yt) = A(t)te xt cos(yt) =
Definition 3 .
) = cos(yt) + i sin(yt).Now, using the degenerate functions(26), we define the following polynomials.For a nonnegative integer n, let us define the degenerate cosine-Appell polynomials A (c) n (x, y; λ) and the degenerate sine-Appell polynomials A (s) n (x, y; λ) by the generating functions, respectively, as follows:A λ (t)e x λ (t) cos
Remark 4 .
It is noted that the sequences {A (c) n (x, y; λ)} ∞ n=0 and {A
λ
(t), are the exponential generating functions of the sequences
λ
(t) only, as the proof for sin
n
(x, y; λ) and A (s) n (x, y; λ) in terms of Stirling numbers of the first kind.
λ
(t) is the exponential generating function of the sequence
Example 1 .
first identity is proved.Type 2 degenerate Euler polynomials E
Table 2 .
Expressions of the first four B n (x, y) and B (s) n (x, y).
Table 3 .
Expressions of the first four E
Table 4 .
Expressions of the first four B | 2,513 | 2019-10-31T00:00:00.000 | [
"Mathematics"
] |
Novel significant stage-specific differentially expressed genes in hepatocellular carcinoma
Background Liver cancer is among top deadly cancers worldwide with a very poor prognosis, and the liver is a vulnerable site for metastases of other cancers. Early diagnosis is crucial for treatment of the predominant liver cancers, namely hepatocellular carcinoma (HCC). Here we developed a novel computational framework for the stage-specific analysis of HCC. Methods Using publicly available clinical and RNA-Seq data of cancer samples and controls and the AJCC staging system, we performed a linear modelling analysis of gene expression across all stages and found significant genome-wide changes in the log fold-change of gene expression in cancer samples relative to control. To identify genes that were stage-specific controlling for confounding differential expression in other stages, we developed a set of six pairwise contrasts between the stages and enforced a p-value threshold (< 0.05) for each such contrast. Genes were specific for a stage if they passed all the significance filters for that stage. The monotonicity of gene expression with cancer progression was analyzed with a linear model using the cancer stage as a numeric variable. Results Our analysis yielded two stage-I specific genes (CA9, WNT7B), two stage-II specific genes (APOBEC3B, FAM186A), ten stage-III specific genes including DLG5, PARI, NCAPG2, GNMT and XRCC2, and 35 stage-IV specific genes including GABRD, PGAM2, PECAM1 and CXCR2P1. Overexpression of DLG5 was found to be tumor-promoting contrary to the cancer literature on this gene. Further, GABRD was found to be signifincantly monotonically upregulated across stages. Our work has revealed 1977 genes with significant monotonic patterns of expression across cancer stages. NDUFA4L2, CRHBP and PIGU were top genes with monotonic changes of expression across cancer stages that could represent promising targets for therapy. Comparison with gene signatures from the BCLC staging system identified two genes, HSP90AB1 and ARHGAP42. Gene set enrichment analysis indicated overrepresented pathways specific to each stage, notably viral infection pathways in HCC initiation. Conclusions Our study identified novel significant stage-specific differentially expressed genes which could enhance our understanding of the molecular determinants of hepatocellular carcinoma progression. Our findings could serve as biomarkers that potentially underpin diagnosis as well as pinpoint therapeutic targets.
Background
Liver cancer is the second most deadly cancer in terms of mortality rate, with a very poor prognosis [60]. It accounted for 9.1% of all cancer deaths, and 83% of the annual new estimated 782,000 liver cancer cases worldwide occur in developing countries [13]. Liver cancer showed the greatest increase in mortality in the last decade for both males (53%) and females (59%) [8]. Liver hepatocellular carcinoma (LIHC) or simply hepatocellular carcinoma (HCC) is the most common type of liver cancer, accounting for nearly 85% of liver cancers. 78% of all reported cases of HCC were due to viral infections (53% Hepatitis B virus and 25% Hepatitis C virus) [38]. There are several non-viral causes of HCC as well, mainly aflatoxins and alcohol [10]. As shown in Fig. 1, all the factors converge to a common mechanism of genetic alterations that lead to the acquisition of cancer hallmarks [20] and the eventual emergence of a cancer cell [11]. Genetic alterations constitute the heart of the problem, and studying changes due to these genetic alterations is paramount to understand HCC. Earlier gene expression studies using EST data detected differential expression in cancer tissue compared to noncancerous liver and proposed the existence of genetic aberrations and changes in transcriptional regulation in HCC [58]. The Cancer Genome Atlas (TCGA) research network [41] have subtyped and identified many potential targets for HCC based on a comprehensive multi-omics analysis. An independent analysis of TCGA RNA-Seq data encompassing 12 cancer tissues has uncovered liver cancer-specific genes [37]. Zhang et al. [63] have performed mutation analysis of HCC, and Yang et al. [59] combined TCGA expression data and natural language processing techniques to identify cancer-specific markers.
The burden of disease and mortality rate are both inversely correlated with the cancer stage. The response rate to therapy is also inversely correlated with stage. To the best of our knowledge, there are no reported research in the literature that have dissected the stagespecific features of HCC. The cancer staging system is based on gross features of cancer anatomical penetration, and one such standard is the American Joint Committee on Cancer (AJCC) Tumor-Node-Metastasis (TNM) staging [2]. It is reasonable to hypothesize that the stage-specific gross changes are associated with signature molecular events, and try to probe such molecular bases of stage-wise progression of cancer. We had earlier published on stage-specific "hub driver" genes in colorectal cancer [36]. A stage-focussed analysis of colorectal cancer transcriptome data yielded negative results vis-a-vis the AJCC staging system [25].
Data preprocessing
Normalized and log 2 -transformed Illumina HiSeq RNA-Seq gene expression data processed by the RSEM pipeline [29] were obtained from TCGA via the firebrowse. org portal [6]. The patient barcode (uuid) of each sample encoded in the variable called 'Hybridization REF' was parsed and used to annotate the controls and cancer samples (Fig. 2). To annotate the stage information of the cancer samples, we obtained the clinical information dataset for HCC from firebrowse.org (LIHC.Merge_ Table 1 Contrast matrix with control. Each stage (indicated by '1') is contrasted against the control (indicated by '-1') in turn Clinical.Level_1.2016012800.0.0.tar.gz) and merged the clinical data with the expression data by matching the "Hybridization REF" in the expression data with the aliquot barcode identifier in the clinical data. The stage information of each patient was encoded in the clinical variable "pathologic stage". The pathologic stage is essentially the surgical stage, prior to any treatment received, determined with the tissue obtained at the time of surgery. This interpretation is reinforced in the TCGA HCC sample inclusion criteria as follows: "Surgical resection of biopsy biospecimens were collected from patients diagnosed with hepatocellular carcinoma (HCC), and had not received prior treatment for their disease (ablation, chemotherapy, or radiotherapy)" (The TCGA [41]). The availability of this unequivocal information enables the analysis of cancer stages. The substages (A,B,C) were collapsed into the parent stage, resulting in four stages of interest (I, II, III, IV). We retained a handful of other clinical variables pertaining to demographic features, namely age, sex, height, weight, and vital status. With this merged dataset, we filtered out genes that showed little change in expression across all samples (defined as σ < 1). Finally, we removed cancer samples from our analysis that were missing stage annotation (value 'NA' in the "pathologic stage"). The data preprocessing was done using R (www.r-project.org).
Fig. 4
A Venn representation of the pairwise stages contrasts. A gene could be differentially expressed in any combination of the four stages and this could be represented by a 4-bit string, one bit for each stage. For e.g., '1111' at the overlap of all four stages would be assigned to genes that are differentially expressed in all four stages Table 2 Contrast matrix for inter-stage contrasts. There are six possible pairwise contrasts between the stages that are essential to identifying stage-specific genes
Linear modelling
Linear modelling of expression across cancer stages relative to the baseline expression (i.e, in normal tissue controls) was performed for each gene using the R limma package [42]. The following linear model was fit for each gene's expression based on the design matrix shown in Fig. 3a: where the independent variables are indicator variables of the sample's stage, the intercept α is the baseline expression estimated from the controls, and β i are the estimated stagewise log fold-change (lfc) coefficients relative to controls. The linear model was subjected to empirical Bayes adjustment to obtain moderated t-statistics [34]. To account for multiple hypothesis testing and the false discovery rate, the p-values of the F-statistic of the linear fit were adjusted using the method od Hochberg and Benjamini [22]. The linear trend across cancer stages for the top significant genes were visualized using boxplots to ascertain the regulation status of the gene relative to the control.
Monotonic mean expression
The linear model in eqn. (1) would not be sufficient to identify genes with an ordered monotonic trend of expression across cancer stages. Addressing this question would also help assess whether monotonic changes of gene expression were observed with disease progression. Towards this end, we designed a model of gene expression where the cancer stage was treated as a numeric variable: where X takes a value in [0,1,2,3,4] corresponding to the sample stage: [control, I, II, III, IV], respectively. It Table 3 AJCC Cancer staging. The correspondence between the AJCC staging and the TCGA staging for LIHC is noted, along with the number of LIHC cases in each stage in the TCGA dataset. Control indicates the number of normal tissue control samples, and NA denotes cases where the stage information is unavailable Table 4 Summary of key demographic features of the dataset. For continuous variables (age,height, weight and BMI), the mean ± standard deviation is given. BMI is calculated only for patients with both height and weight data Table 5 Top 10 genes of the linear model. The log-fold change expression of the gene in each stage relative to the controls are given, followed by p-value adjusted for the false discovery rate, and the regulation status of the gene in the cancer stages with respect to the control was noted the mean expression of a gene could show the following monotonic patterns across cancer stages: (i). monotonic upregulation, where mean expression follows: control < I < II < III < IV. (ii). monotonic downregulation, where mean expression follows: control > I > II > III > IV.
The sets of genes conforming to either (i) or (ii) were identified to yield monotonically upregulated and monotonically downregulated genes. These two sets were merged, and the final set of genes with monotonic changes of expression with cancer progression was obtained. This final set was ranked by the adj. p-values from the model estimated by eqn. (2).
Pairwise contrasts
To perform contrasts, a slightly modified design matrix shown in Fig. 3b was used, which would give rise to the following linear model of expression for each gene: where the controls themselves are one of the indicator variables, and the β i are all coefficients estimated only from the corresponding samples. Our first contrast of interest, between each stage and the control, was achieved using the contrast matrix shown in Table 1. Four contrasts were obtained, one for each stage vs control. A threshold of |lfc| > 2 was applied to each such contrast to identify differentially expressed genes (with respect to the control). We used the absolute value of the lfc, since driver genes could be either upregulated or downregulated. Genes could be differentially expressed in any combination of the stages or no stage at all. To analyze the pattern of differential expression (with respect to the control), we constructed a four-bit binary string for each gene, where each bit signified whether the gene was differentially expressed in the corresponding stage. For example, the string '1100' indicates that the gene was differentially expressed in the first and second stages. There are 2 4 = 16 possible outcomes of the four-bit string for a given gene corresponding to the combination of stages in which it is differentially expressed. This is illustrated in set-theoretic terms in Fig. 4. In our first elimination, we removed genes whose |lfc| < 2 for all stages. For each remaining gene, we identified the stage that showed the highest |lfc| and assigned the gene as specific to that stage for the rest of our analysis.
Significance analysis
We applied a four-pronged criteria to establish the significance of the stage-specific differentially expressed genes.
(i). Adj. p-value of the contrast with respect to the control < 0.001. The expression profile of a driver gene in cancer samples would markedly depart from that for the controls, which motivates the use of a stringent threshold here. (ii). (ii)-(iv) P-value of the contrast with respect to other stages < 0.05. The use of a more relaxed cutoff would improve the sensitivity of stage-specific detection.
To obtain the above p-values (ii) -(iv), we used the contrast matrix shown in Table 2, which was then used an an argument to the contrastsFit function in limma.
Further analyses
Principal component analysis (PCA) were performed using prcomp in R. To choose 100 random genes, we used the rand function. Gene set enrichment analysis were performed on KEGG (https://www.genome.jp/kegg/) and Gene Ontology [5] using kegga and goana in limma, respectively. In order to visualize outlier genes that are significant with a large effect size, volcano plots could be obtained by plotting the -log10 transformed p-value vs. the log fold-change of gene expression. Heat maps of significant stage-specific differentially expressed genes were visualized using heatmap and clustered using hclust. Novelty of the identified stage-specific genes was ascertained by screening against the Cancer Gene Census v84 [14].
Results
The TCGA expression data consisted of expression values of 20,532 genes in 423 samples. After the completion of data pre-processing, we obtained a final dataset of expression data for 18,590 genes across 399 samples annotated with the corresponding sample stage (available in Supplementary File S1). The stagewise distribution of TCGA samples along with the corresponding AJCC staging is shown in Table 3. A statistical summary of demographic details including age, sex, height, weight, and vital status is shown in Table 4. The body mass index (BMI) distribution was derived from patient clinical data that had both height and weight (i.e, neither was 'NA'). The average age of Fig. 6 Boxplots illustrating stage-specificity of differentially expressed genes. Extremal expression in a stage could be either maximal expression or minimal expression relative to the control and all other stages, and could be termed maximal differential expression. Here we show genes with maximal differential expression in stage-I (WDR72; minimum expression), stage-II (GLI4, maximum expression; COLEC11, minimum expression), stage-III (CKAP2; maximum expression), and stage-IV (MAPK11; maximum expression) onset of HCC was around 60 years, and the average BMI was about 26, indicating a possible link with ageingassociated pathology and obesity.
The dataset was processed through voom in limma to prepare for linear modelling [28]. At a p-value cutoff of 0.05, 14,843 genes were significant for the linear model given by eqn. (1). Even raising the bar to 1E-5, 9618 genes remained significant in the linear modelling, thus implying a strong linear trend in their expression across cancer stages relative to control. This was not entirely surprising since one of the hallmarks of cancer phenotype is genome-wide instability [20]. The linear modelling highlighted top ranked genes, some upregulated in HCC (GABRD, PLVAP, CDH13) and some downregulated (CLEC4M, CLEC1B, CLEC4G). The lfc for each stage with respect to control of top ten genes (ranked by adjusted p-value) are shown in Table 5, along with their inferred regulation status. Boxplots of the expression of the top 9 genes (Fig. 5) indicated elevated expression across cancer stages relative to control for up-regulated genes, while depressed expression across cancer stages relative to control was indicative of downregulated genes. (Boxplots of all other genes in the top 200 are provided in the Supplementary Fig. S1) It is worthwhile to note that a given gene might have maximal differential expression in any stage (not necessarily stage 4), and the linear trend does not suggest the order of expression across stages (Fig. 6).
A PCA of the top 100 genes from the linear model was visualized using the top two principal components (Fig. 7a). A clear separation of the controls and the cancer samples could be seen, suggesting the extent of differential expression of these genes in cancer samples. Hence linear modelling yields cancer-specific genes versus normal controls, and the results for the all the genes, including the top 100, are provided in order in Supplementary File S2. For comparison, a PCA plot of 100 randomly sampled genes (Fig. 7b) failed to show any separation of the cancer and control samples.
To ascertain an ordered trend of expression across cancer stages, the linear model given by eqn. (2) was fit. At a p-value of 0.05, 14,127 genes were significant, and raising the bar to 1E-5 still left 8032 genes significant. A goodness of fit with eqn. (2) does not equate with a monotonic trend of expression; i.e., a a gene with a significant linear fit is not required to follow a monotonic trend of mean expression with cancer stage. Using the definition of monotonicity given in the Methods section, we found 2109 genes showing strictly monotonic expression with the cancer stage and reaching maximum absolute mean expression in stage IV. Each such gene was annotated and ranked with the p-value from eqn. (2). This yielded 1977 genes with significant (i.e, p-val < 0.05) monotonic trends of mean expression across cancer stages, with 1602 upregulated and 375 downregulated. The top 20 such genes are presented in Table 6.
The results from the linear modelling were in contrast with those obtained by Huo et al. [25] and were most likely driven by an improved design and the inclusion of 51 controls in our study. These positive results provided the impetus to pursue stage-driven analysis. Given the conventional AJCC staging, gene expression differences would play a major role in driving the cancer progression. To identify the stage-specific differentially expressed genes, we applied the first contrast matrix ( Table 2) and constructed the four-bit stage string of each gene. Based on the stage strings, we binned all the genes, and the string-specific gene lists corresponding to all the partitions in the Venn diagram (Fig. 4) is made available in Supplementary File S3. The size of each such partition is illustrated in Fig. 8. We eliminated the 16,135 genes corresponding to the stage string '0000' (|lfc| < 2 in all stages). To establish the significance of the remaining genes, we applied the second contrast (Table 3) and passed each gene through the four filter criteria. The gradual reduction in candidate stage-specific genes as each criterion was applied, is shown in Table 7. Only genes that passed all criteria were retained as significant stagespecific differentially expressed genes. We obtained 2 stage-I specific, 2 stage-II specific, 10 stage-III specific and 35 stage-IV specific genes (Table 8). Figure 9 shows the volcano plot of these 49 stage-specific genes.
In view of the limited sample size for stage-IV and consequent low power for rejecting false-positives, we stipulated that each stage-IV specific gene would display a smooth increasing or decreasing expression trend through cancer progression culminating in maximum differential expression in stage-IV. On this basis, we pruned the 35 stage-IV specific genes to just the top ten by significance in the linear modelling. This yielded a total of 24 stage-specifc genes of interest.
A heatmap of the lfc expression of these stage-specific genes across the stages was generated (Fig. 10a) and revealed a systematic gradient in expression relative to control, involving both downregulation and overexpression. The map was clustered on the basis of differential expression (i.e, |lfc|) both across stages and across features (i.e, genes) (Fig. 10b). It was seen that stage I genes clustered together, stage II genes co-clustered with NCAPG2 and DLG5 from stage-III, all the other stage-III genes clustered together, while the stage-IV genes formed two separate clusters. It was interesting to note that GABRD emerged as an outgroup to all the clusters, demonstrating its uniqueness.
To identify the biological processes specific to each stage, we used the genes with maximal |lfc| in each stage and performed a stagewise gene set enrichment analysis on two ontologies, the GO and KEGG pathways. Salient results with respect to KEGG pathways are presented below (Table 9) and the complete KEGG and GO results are available in Supplementary Tables S1 and S2, respectively. In stage I, we found the significant enrichment of cell-cycle signaling pathways (Hippo, Wnt, HIF-1), and viral infection-related pathways (cytokine-cytokine receptor interaction, human papillomavirus infection, HTLV-I infection). In stage II, key signalling pathways (Ras, MAPK) were aberrant. Two liver-specific pathways, alcoholism and cytochrome P450 mediated metabolism of xenobiotics were enriched, as well as standard cancer pathways of bladder, brain, stomach, and skin that might involve generic genetic alterations necessary for cancer cell growth. In stage III, we noticed the significant enrichment of Metabolic pathways that summarize cellular metabolism. This might indicate the metabolic shift needed by the cancer to grow and invade neighboring tissues. Other salient significantly enriched pathways pertained to increased cell cycle progression, DNA replication, chemical carcinogenesis, p53 signaling pathway and cellular senescence, all hallmark processes critical to cancer progression. Stage IV gene set was significantly enriched for bile-related processes (bile secretion, primary bile acid biosynthesis), and ABC transporters (possibly conferring a drug-resistant advanced cancer phenotype). A signaling pathway related to diabetic complications was enriched as well, indicating the role of co-morbidities in driving liver cancer progression. The enrichment analysis of the top 100 genes of the linear model is included in the Supplementary Table S3.
Discussion
When differentially expressed genes are identified in a two-class cancer vs control manner, the information about stage-specificity of differential expression is lost. By applying our protocol, this information is recovered and available for dissection. The top linear model genes and all the stage-specific differentially expressed genes (Table 10) were analyzed with respect to the existing literature.
Top genes of linear models
Three C-type lectin domain proteins (CLEC4M, CLEC1B, CLEC4G) were detected in the top ten genes of linear model given by eqn. (1). Interestingly, this identical cluster of three genes was detected as the most significantly downregulated liver cancer-specific genes in a qPCR study of an independent cohort of 65 tumornormal matched cases [21]. On screening the top 200 linear model (1) genes against cancer driver genes in the Cancer Gene Census, only four genes were found, namely BUB1B, CDKN2A, EZH2, and RECQL4. The top 200 genes of the linear model given by eqn. (2) overlapped with 111 genes of linear model (1) and yielded six genes from the Cancer Gene Census, namely BUB1B, EZH2, CDKN2C, CANT1, POLD1, and STIL. Both CDKN2A and CDKN2C are cyclin-dependent kinase inhibitors. CDKN2A was a member of the gene signatures for HCC prognosis independently proposed by Gillet et al. [16] and Yang et al. [59]. It was remarkable that GABRD stood out as the top gene in both the linear models, and with a monotonic order of expression with the cancer stage. GABRD is discussed further in the section on Stage-IV specific genes. A gene with a monotonicity of expression may be increasingly upregulated as the cancer initiates, progresses and metastasizes, signalling its oncogenic progression; or conversely, it may be increasingly downregulated with the cancer stages, signalling the loss of tumor suppressor activity. Screening the top 200 genes with monotonic expression against Fig. 7 Principal components analysis of cancer vs control. a The first two principal components of the top 100 genes from linear modeling are plotted. It could be seen that control samples (red) clustered independent of the cancer samples (colored by stage). b The same analysis repeated with 100 random genes failed to effect a clustering of the control samples relative to the cancer samples the Cancer Gene Census yielded a completely different set of six genes: HSP90AB1, ALDH2, ESR1, PPP2R1A, HIST1H4I, SEPT5. HSP90AB1, a heat shock protein and molecular chaperone, was a key result of Xu et al. [56] where it played a dual role, one in the set of 50 hub genes correlated with Barcelona Clinic Liver Cancer (BCLC) staging of HCC patients, and another, in the set of 13 hub genes correlated with overall survival of HCC patients. HSP90AB1 might have a significant role in the aetiology of HCC, given that its expression is known to be upregulated by hepatitis B virus encoded X protein [31]. The monotonic changes in HSP90AB1 might further facilitate its known roles in angiogenesis [19]. Fig. 11). NDUFA4L2 has been identified as a target gene of HIF-1 (hypoxia-inducible transcription factor-1), and a key factor driving the metabolic reprogramming in hypoxic microenvironments [46]. Our findings established that not only was NDUFA4L2 significantly overexpressed in HCC (as noted in [27]), but its overexpression follows a significant monotonic pattern across cancer stages, a much stronger statement that would support the role of NDUFA4L2 in driving HCC progression. Similarly, the expression of CRHBP has been recently shown to be negatively associated with the tumor size in HCC [55]. Table 6 Top 20 genes with significant monotonic patterns of expression. Intercept, Coefficient and Adj. p-value are from the linear model given by eqn. (2). Status indicates monotonic upregulation (UP) or monotonic downregulation (DOWN). The genes are sorted by significance (adj.p-value) Our study provides a more quantitative account of the significant monotonic downregulation of CRHBP with the HCC stage. Two proteins of the glycosylphosphatidylinositol (GPI) anchoring system, PIGU and PIGC, were top genes with respect to significant monotonic expression (Table 6); of these, PIGU is a known bladder cancer oncogene [18].
Stage-I specific DEGs (Fig. 12) CA9 is a member of carbonic anhydrases, which are a large family of zinc metalloenzymes that catalyse the reversible hydration of carbon dioxide. Its expression in clear cell Renal carcinoma, but not in functional kidney cells has gained attention for its use as a pre-operative biomarker [30]. The WNT7B protein is part of the Wnt family, a family of secreted signalling proteins. Elevated WNT7B in pancreatic adenocarcinoma has been found to mediate anchorage independent growth [4]. Surprisingly, both CA9 and WNT7B are downregulated in HCC, most so in stage-I, contrary to their role in other cancers. A concrete interpretation of the role of these genes in HCC awaits appropriately designed experimental studies.
It is pertinent to ask the following question here: which genes are essential for the initiation of HCC? Table 7 Number of genes in each step of the significance analysis. Differential expression is defined with respect to a threshold |logFC| = 2. Significance analysis proceeds first by significance (i.e, p-value) with respect to control, followed by p-value in each possible pairwise contrast between the different stages. Exclusive DE genes refer to genes differentially expressed in only one of the four stages (corresponding to the bit strings '1000', '0100', '0010' and '0001') Clearly these genes would be differentially expressed in stage I relative to control. All significantly differentially expressed genes with maximal |lfc| in stage-I would be the best candidates for genes involved in the initiation of HCC. These 122 genes are provided in the Supplementary File S3.
Stage-II specific DEGs (Fig. 13) APOBEC3B, a DNA cytidine deaminase, is a known cancer driver gene in the Cancer Gene Census, but there are no literature reports of its stage-specificity in any cancer. It is known to account for half the mutational load in breast carcinoma, and its target sequence context Table 8 Final set of highlighted genes in each stage. The genes in each stage are ordered by increasing adjusted p-values of the linear modelling analysis. Stage-IV specific genes with monotonic changes of expression correlating with disease progression are highlighted Fig. 9 Volcano plot of the 49 significant stage-specific differentially expressed genes. Stage 1 genes, red; Stage 2, blue; Stage 3, green; and Stage 4, orange. The genes are seen to orient away from the origin and the axes, indicating significance and effect size Fig. 10 Heatmap plots of the final 24 stage-specific genes. a heatmap generated from the lfc values of all the stage-specific genes (arranged stagewise). The color gradient spans the spectrum from downregulation (blue) to overexpression (red). Log fold changes upto sixfold are seen, indicating 64 times differential expression with respect to control. b Representation of the stagewise gene expression based on clustering of differential expression profiles was found to be highly mutated in Bladder, lung, cervix, neck, and head cancers as well [7]. Further studies have attributed specific hypermutation signatures across all cancers to the APOBEC family, including APOBEC3B [1]. Here APOBEC3B is upregulated, increasing its capacity to inflict the hypermutator phenotype, and highlighting an intriguing stage-specificity in its action. FAM186A polymorphisms have been reported in GWAS and SNP studies on colorectal cancer patients and shown to have a significant odds ratio in risk heritability [48].
FAM163A was a component of the 8-gene signature used for the risk stratification of HCC patients [39].
Stage-III specific DEGs (Fig. 14) C12orf48, also known as PARI, participates in the homologous recombination pathway of DNA repair, and its overexpression has been reported in pancreatic cancer [35]. Further PARI was recently identified as a transcriptional target of FOXM1 [62], which is a wellvalidated upregulated gene in HCC [21]. DLG5 is a cell Table 9 Gene set enrichment analysis. Stage-specific gene sets (all the differentially expressed genes, corresponding to row 'DE genes' in Table 6) were analyzed for significant enrichment with respect to KEGG Pathways. Significance was based on p-value <0.05 Table 10 Stagewise effect sizes and significance of stage specific genes. The stagewise log foldchanges of differential expression of each candidate stage-specific gene in tumor samples relative to normal control samples are shown, along with significance values, and its inferred regulation status. In stage-IV, only the top 10 genes are shown. The stage-specificity of the genes are emphasized polarity gene and its downregulation has been implicated in the malignancy of breast [32], prostate [49] and bladder cancers [65]. It has been recently found that lower DLG5 expression is correlated with advanced stages of HCC and essential for invadopodium formation, an event critical to cancer metastasis [26]. It is surprising that our study has identified a stage-III specific upregulation in DLG5. Interestingly, evidence is emerging to lend support to our finding that DLG5 might be tumor-promoting. In a very recent review, Saito et al. [43] reinterpreted published results on cell polarity and cancer, and advanced an alternative perspective on the role of polarity regulators in cancer biology. They argued that both cellular and subcellular polarity would be regulated by DLG5 and related polarity proteins. Subcellular polarity might improve the Fig. 12 Boxplot of stage-I specific genes. It is seen that CA9 and WNT7B are both maximally downregulated in stage-I cellular fitness for proliferation and stemness, thereby causing tumor promotion. Hence cell polarity regulation is anti-tumorigenic and subcellular polarity regulation is pro-tumorigenic, and our analysis has uncovered the protumorigenic upregulated activity of DLG5. ECT2 encodes a guanine nucleotide exchange factor that remains elevated during the G2 and M phase in cellular mitosis. ECT2 is found to be upregulated in lung adenocarcinoma and lung squamous cell carcinoma [66], as well as in invasive breast cancer [52]. NCAPG2 is a component of the condensing II complex and involved in chromosome segregation during mitosis. NCAPG2 level were found to be increased in non-small cell lung cancer, and its overexpression was found to be correlated with lymph node metastasis, thus enabling the use of NCAPG2 as a poor prognostic biomarker in lung adenocarcinoma [61]. GNMT is a methyltransferase that catalyses conversion of S-adenosine methionine to s-adenosyl cysteine. In the absence of GNMT, S-adenosine methionine causes hypermethylation of DNA, which represses GNMT levels and is found in HCC samples [24]. This is an epigenetic mechanism for loss of function of tumor suppressors and our study here confirmed the downregulation of GNMT expression. PRR11 is found to be over-expressed in lungs, and its silencing using siRNA resulted in cell cycle arrest and apoptotic cell death, followed by decreased cell growth and viability [64]. A similar knock out experiment of PRR11 in hilar cholangiocarcinoma cell lines resulted in decreased cellular proliferation, migration, and tumor growth [9]. WDHD1 is a key post-transcriptional regulator of centromeric, and consequently genomic, integrity [23] and its overexpression has been identified as biomarker of acute myeloid leukemia [53], and lung and esophageal carcinomas [44]. C15orf42 has been implicated in nasopharyngeal carcinoma [3]. ORC6L overexpression has been identified as a prognostic biomarker of colorectal cancer possibly by enhancing chromosomal instability [54]. XRCC2 was found to increase locally advanced rectal cancer radioresistance by repairing DNA double-strand breaks and preventing cancer cell apoptosis [40]. XRCC2 was also highlighted in the gene signature for HCC prognosis advanced by Gillet et al. [16].
Stage-IV specific DEGs (Fig. 15) GABRD, which was the top gene in the linear models as well, encodes for the delta subunit of the gamma-amino butyric acid receptor. The GABA receptor family was found to be frequently downregulated in cancers, except for GABRD, which was found to be up-regulated. Gross et al. [17] proposed that the GABA receptor gene family might play a role in the proliferation independent differentiation of cancer cells. GBX2 is part of the GBX gene family, which are homeobox containing DNA binding transcription factors. GBX2 is overexpressed in prostate cancer and studies show that expression of GBX2 is required for malignant growth of human prostate cancer [15]. PECAM1 overexpression has been linked to peritoneal recurrence of stage II/III gastric cancer patients [47]. CEND1 has been identified as a cell-cycle protein [50]. PGAM2 is a glycolytic enzyme whose upregulation is essential for tumor cell proliferation [57]. NR1I2 downregulation has been used in constructing a prognostic 9-genes expression signature of gastric cancer [51]. GDF5 has been shown to be a downstream target of the TGF-beta signaling pathway [33], stimulating angiogenesis required for the growth and spread of the cancer. GPR1 has been reported to be involved in promoting cutaneous squamous cell carcinoma migration [12]. Two other stage-IV specific genes, namely the downregulated CXCR2P1, which is a C-X-C motif chemokine receptor 2 pseudogene 1, and LOC25845, are minimally documented in the literature in the context of HCC, other cancers or any other Fig. 13 Boxplot of stage-II specific genes. It is seen that both APOBEC3B and FAM186A are maximally overexpressed in stage-II, the trend following an inverted U-shape condition. It is worth mentioning however that CXCR2, a member of the GPCR protein family binding the interleukin IL8, has been reported as an effective non-invasive blood based biomarker for HCC [45]. It is notable that ARHGAP42, a Rho GTPase activating protein, was another key result of Xu et al. [56], finding a place both in their set of 50 hub genes correlated with the BCLC staging of HCC patients, and in the set of 13 hub genes correlated with overall survival of HCC patients.Most of the stage-IV specific genes show contra-regulation (i.e, no clear trend) across cancer stages, and only 15 of the 35 genes revealed a monotonic pattern of expression (highlighted in Table 8). The other 20 genes could be unique to the hallmarks of stage-IV cancer, e.g., processes related to lymph node involvement and/or metastasis.
Conclusion
We have developed an original protocol for the stagewise dissection of the HCC transcriptome. We were able to successfully fit a linear model across cancer stages and detected genes with a strong linear expression trend in the cancer phenotype. These genes were found to effectively separate the control and cancer samples. We Fig. 14 Boxplot of stage-III specific genes. Except for GNMT, the expression of stage-III specific genes show a peak in stage-III, with the expression trend following an inverted U-shape across the stages. The expression trend is convex and reversed for the downregulated GNMT, with minimum expression in stage-III were able to assign 2455 differentially expressed genes into one of four stages and visualized their stage specific expression using boxplots. Using a multi-layered approach, we were able to assess the significance of each stage-specific DEG and narrowed down to a handful of candidate significant stage-specific DEG's. Our analysis yielded two stage-I specific genes (CA9, WNT7B), two stage-II specific genes (APOBEC3B, FAM186A), ten stage-III specific genes (including DLG5, NCAPG2, GNMT and XRCC2) and 35 stage-IV specific genes (including GABRD and CXCR2P1). Though most of these genes constituted novel findings in the context of HCC, a comprehensive literature search indicated connections with other cancer conditions. The analysis of monotonicity of expression has uncovered two genes with documented HCC connection, namely NDUFA4L2 and CRHBP. Correlation of our analysis with gene signatures based on the BCLC staging system revealed two common genes, namely HSP90AB1 and ARHGAP42. Our study might deepen our understanding of the mechanistic basis of HCC progression, and lay the foundation for the development of HCC diagnosis and treatment strategies. Translational research could transform our results into a panel of biomarkers for early clinical decision-making and Fig. 15 Boxplot of top 10 stage-IV specific genes. All genes, except NR1I2 and CXCR2P1, show a smooth increasing expression trend reaching peak expression in stage-IV. In the case of NR1I2 and CXC2RP1, the trend is reversed, with the expression decreasing smoothly to touch the minimum in stage-IV rational drug development. It is straightforward to extend our computational methodology to the stage-based analysis of other cancers to obtain a fuller view of disease initiation, progression, and metastasis. | 8,531.6 | 2019-07-05T00:00:00.000 | [
"Biology"
] |
Impact of cutting and sheep grazing on ground – active spiders and carabids in intertidal salt marshes ( Western France )
Impact of cutting and sheep grazing on ground–active spiders and carabids in salt marshes (West France).— The aims of this study were to characterize spider (Araneae) and ground beetle (Coleoptera Carabidae) communities in managed (cutting and sheep grazing) and non–managed salt marshes and to assess the efficiency of management regimes in these particular ecosystems. The two groups were studied during 2002 in salt marshes of the Mont Saint–Michel Bay (NW France) using pitfall traps. By opening soil and vegetation structures cutting and grazing enhanced the abundances of some halophilic species of spiders and ground beetles. Nevertheless, grazing appeared to be too intensive as spider species richness decreased. We discuss the implications of management practices in terms of nature conservation and their application in the particular area of intertidal salt marshes.
Introduction
Salt marshes are intertidal ecotones between terrestrial and marine systems.They are among the most restricted habitats in the world, covering less than 0.01% of the planet's surface (Desender & Maelfait, 1999;Lefeuvre et al., 2003).In Europe, the area of salt marshes has decreased dramatically in recent decades (Dijkema et al., 1984) and they currently have a linear and very fragmented distribution along coasts; conservation of these habitats is therefore of high interest (e.g., Bakker et al., 2002).These ecosystems also have a high conservation value as they are subjected to periodical flooding by tides and thus exhibit specific characteristics concerning plant cover (spatial succession from the high to the low marsh) and invertebrate assemblages that resist regular submergence by seawater (monthly in Europe) and the resultant high soil salinities (Foster & Treherne, 1976;Irmler et al., 2002;Pétillon et al., 2004Pétillon et al., , 2006)).
European salt marshes are currently endangered by many direct or indirect human impacts such as habitat destruction, diffuse soil pollution from adjacent agricultural fields, eutrophication and overgrazing (Desender & Maelfait, 1999;Goeldner-Gianella, 1999;Adam, 2002).Furthermore, cessation of grazing may lead to dominance of a single plant species such as the tall grass Elymus athericus (Bockelmann & Neuhaus, 1999;Valéry et al., 2004), and hence to loss of plant (Bos et al., 2002;Bakker et al., 2003) and halophilic spider (Pétillon et al., 2005a(Pétillon et al., , 2005b) ) biodiversity.Several studies have emphasized the role of abandonment of agricultural practices in the expansion of Elymus athericus in northern Europe (e.g.Dijkema, 1990).Management is therefore necessary to reduce the effects of invasion, to decrease the rate of spread of the invasive species and, in a general way, to conserve young stages of salt marshes.The present study was conducted to determine whether salt marshes should be managed in order to conserve young successional stages and their related biodiversity.Both direct (via changes in vegetation structure and heterogeneity) and indirect (via changes in microclimate and other aspects of the microhabitat) effects of management practices are expected to alter community composition (Zulka et al., 1997;Georges, 1999), especially in comparison to those associated with the invasive grass E. athericus.
Spiders (Marc et al., 1999;Bell et al., 2001) and ground beetles (Luff et al., 1992;Rainio & Niemelä, 2003) are known to react strongly to changes in microhabitat conditions and are consequently often used as indicators of the effects of management practices.According to McGeoch (1998), such groups are qualified as ecological indicators in function of their sensitivity to environmental stress factors.In the present work we studied communities of spiders and ground beetles at stations submitted to management plans to determine whether practises tended to favour or disfavour species of high conservation value inhabiting salt marshes.
The practises most likely to modify the initial composition of the salt-marsh fauna were mowing and sheep grazing.Management impacts were studied by comparing habitat variables and communities between managed and non-managed plots.
Study sites and habitat characteristics
The Mont-Saint-Michel Bay (NW France) is an extensive littoral zone (500 km²) located between Brittany and Normandy (48° 40' N, 1° 40' W).This macrotidal system is characterized by a high tidal range (mean tidal range: 10-11 m, maximum: 16 m).The intertidal area is unique in Europe for its size, consisting of 180 km 2 of intertidal flats and 40 km 2 of salt marshes.These marshes are drained by a dense creek system (Lefeuvre et al., 2003) and are flooded during 43% of tides when the tidal range is greater than 11.25 m (spring tides).Flooding lasts on average 2 h per tide but the drainage time determines the whole submersion period.The marshes are delimited in their upper part by seawalls that are not submerged during high tides.Two sites were investigated on either side of Mont Saint-Michel: one to the west ("Ferme Foucault" site: code F, 48° 37' N, 1° 32' W) and the other to the east ("la Rive" site: code R, 48° 37' N, 1° 29' W) (fig.1).The stations close to the seawall at both sites were subjected to human interference: station F1 at the "Ferme Foucault" site is cut annually in mid-June whereas station R1 at the "la Rive" site is subjected to heavy sheep grazing (up to 100 sheep per hectare: Legendre & Schricke, 1998).Managed and non-managed stations (stations F2 and R2) were compared at similar salt-marsh zones (upper zone: from 0 to 300 m) and the only apparent varying factor between stations was the presence / absence of management practises (cutting and grazing).
Biotic habitat characteristics at each station were described within a radius of 1 m around each pitfall trap (i.e.four replicates per station).Four variables were used: litter depth (to the nearest mm), vegetation height (to the nearest cm), percentage cover of each plant species and percentage cover of bare soil (%).Soil salinity (estimated by pore water electrical conductivity), soil water content and temperature were also measured using a W. E. T. sensor connected to a moisture meter HH2 (both by Delta-T Devices Ltd., Cambridge, UK).All abiotic measurements were made with a specific clay soil calibration and repeated four times at each station during the summer of 2002.
Sampling techniques and species identification
Cursorial (i.e.ground active) spiders and ground beetles were sampled with pitfall traps, consisting of polypropylene cups (10 cm diameter, 17 cm deep) with ethylene-glycol as preservative.Traps were covered with a raised wooden roof to keep out rain and were visited weekly when tides permitted (i.e. about three weeks per month) from April to November 2002.Four pitfall traps were installed at each station.They were spaced 10 m apart, the distance considered the minimum to avoid interference between traps (Topping & Sunderland, 1992).Traps were consequently considered true replicates of each type of area studied (grazed vs. ungrazed and cut vs. uncut).Catches in pitfall traps were related to trapping duration and pitfall perimeter, which calculates an "activity trappability density" (number of individuals per day and per m: Sunderland et al., 1995).Ground beetles and spiders were preserved in 70% ethanol, and identified and conserved at the laboratory.Ground beetles were identified using Jeannel (1942) and Trautner & Geigenmüller (1987) and adult spiders using Roberts (1987Roberts ( , 1995) ) and Heimer & Nentwig (1991).Nomenclature follows Lindroth (1992) as far as possible for ground beetles and Canard (2005) for spiders, except for Pardosa purbeckensis, absent from this work but now considered a valid species (Canard, pers. comm.).
Data analyses
Human impact was assessed by comparing two conservation criteria, i.e. abundance of halophilic species and species richness, between natural and disturbed stations.Species richness is widely used as a conservation target (e.g., Noss, 1990;Bonn & Gaston, 2005).The use of stenotopic species is also recommended in studying the impact of human activities on arthropod communities (Samways, 1993;New, 1995;Dufrêne & Legendre, 1997).In this study, the target species were halophilic species, defined by their preference or exclusive presence in salt marsh habitats, which can be assessed using distribution maps (the relevant British atlases are Harvey et al., 2002 for spiders and Luff, 1998 for ground beetles).Statistics on the abundances of halophilic species were performed only for species represented by at least 10 individuals.All means in the tables and figures are presented with their standard error (mean ± SE).Mean environmental and community variables were compared with MINITAB version 12.1.using one-way ANOVA (management treatment as fixed term) tests because the data had a normal distribution (according to Kolmogorov-Smirnov tests).
Results
The grazed station was characterised by very short vegetation dominated by Puccinellia maritima (table 1), much lower than that of reference station (R1 vs. R2: ANOVA, 7 df, F-ratio = 90.34,p < 0.001).The cut station was characterised by a lower percentage cover of Elymus athericus (Festuca rubra represented another 10% of cover), a shorter vegetation (F1 vs. F2: ANOVA, 7 df, F-ratio = 98.45, p < 0.001) and a thinner litter layer (F1 vs. F2: ANOVA, 7 df, F-ratio = 147.00,p < 0.001) than the reference station.A total of 3,974 spiders belonging to 46 species and 54 taxa (including immature and unidentified species) were caught.The percentage of halophilic species was low, with six species being recorded at both sites (see taxonomic list in appendix 1).A total of 924 adult ground beetles belonging to 27 species were caught.Ten of these are considered halophilic (see taxonomic list in appendix 2).
Species and taxonomic richness of spiders (both total and mean richness) were significantly lower at sheep-grazed stations (fig.2).Sheep grazing did not affect ground beetle species richness.Mean abundances of the spider Pardosa purbeckensis were lower with grazing, whereas those of Erigone longipalpis were higher.The same was true for the ground beetle Bembidion mimimum.
Mean taxonomic and species richness of spiders did not differ statistically between the cut and the non-cut stations (fig.3).Mean and total species richness of ground beetles was significantly higher in the cut stations.The dominant spider species, Pardosa purbeckensis, was significantly more abundant in the cut station than in the non-cut stations.The co-dominant spider species, Arctosa fulvolineata, was less abundant in the cut stations.
Discussion
In this explanatory study, despite the existence of true replicates within each station, stations were confounded with management treatment.This sam- pling design can thus be considered as a case of pseudoreplication in the sense of Hulbert (1984).Our problem was to reduce possible differences between stations (that were consequently within the same site) because comparing stations between different sites often leads to an increase of variance due to the existence of other co-varying factors (Oksanen, 2001).This assumption consequently merits a larger-scale study and the following recommendations for managing salt marshes are not only based on our own results but also on the existing bibliography.
Afulv
Cutting tended to favour the spider Pardosa purbeckensis and disfavour Arctosa fulvolineata.Harvey et al. (2002) suggested that adult P. purbeckensis prefers low vegetation, which is in accordance with our results.In contrast, Arctosa fulvolineata prefers deep litter where it is often found during the day at 3-4 cm depth (Pétillon, pers. obs.).This species might be disfavoured by the structure of the cut habitats that have a thinner and less complex litter due to organic matter export.Typical mesophilic ground beetle species such as Amara sp., Bembidion lampros and Calathus melanocephalus and species belonging to the genus Pterostichus (P.cupreus, P. versicolor, P. vernalis) occurred in the cut station with short grassland vegetation.By creating above-surface ground conditions close to those existing in lower parts of the salt marsh, cutting therefore has a positive effect on the abundances of some halophilic species.As a general rule, by providing new microhabitats and microclimate conditions (Wise, 1993), litter tends to favour nocturnal wanderers, ambush hunters and "litter-sensitive" sheet-weavers (Bell et al., 2001).These groups are therefore likely disfavoured by cutting (shown by Cattin et al., 2003 in wet meadows), as would also be the halophilic species belonging to families of these groups (as in the case of the nocturnal wander A. fulvolineata).In general, cutting tended to increase total and mean species richness for both spiders and ground beetles, although the difference between mean richness by pitfall traps was not significant for spiders.These results can be related to the fact that cutting, independently from its effects on species abundance, allows halophilic species to survive and more ubiquitous species to establish.
Sheep grazing tends to favour some halophilic species of both ground beetles (Bembidion minimum: Desender & Verdyck, 2001) and spiders (Erigone longipalpis: Harvey et al., 2002) that are characterized by high dispersal capacities.As dispersal capacity is often related to the succession of species within a habitat (Southwood, 1962), our results are consistent with the general assumption that pioneer species are most successful in stressed habitats (Bell et al., 2001).Grazing, like cutting, opens the soil and vegetation structure and is therefore likely to favour some characteristic halophilic species (present study; for spiders, see Zulka et al., 1997;Harvey et al., 2002).However, species of high conservation interest, particularly spi-ders such as Pardosa purbeckensis, declined in grazed habitats.The negative impact on spider species richness is explained by a homogenous cover with no refuges that tends to disfavour cursorial species, and especially diurnal wanderers.Like cutting, grazing has an impact on communities not only regarding structural habitat changes, but also in respect to changes in microclimate conditions.Grazing can therefore have direct and indirect effects on species abundances, for both stenotopic (present study; Bonte et al., 2000) and ubiquitous species (Gardner et al., 1997;Dennis et al., 2001).The effects of grazing on species richness were different between spiders (decrease of richness) and ground beetles (no significant effect), tending to support the idea that grazing is too intensive in the Mont Saint-Michel bay.Over-grazing is in fact likely to reduce species richness (Gardner et al., 1997;Zulka et al., 1997), mainly because of heavy trampling effects (Bell et al., 2001).
Despite its potential as a good method, for biological control of invaders (Shea & Chesson, 2002), sheep grazing in the Mont Saint-Michel Bay is presently too intensive.Although a few halophilic species are enhanced, spider species abundance and richness has decreased.Cessation of intensive sheep-grazing has been recommended for salt marsh biodiversity conservation (Kiehl et al., 1996), but such change can lead to an increase of Elymus athericus, with possible loss of typical halophilic species (Pétillon et al., 2005a(Pétillon et al., , 2005b)).It is consequently recommended to maintain a low stocking rate (i.e. between 0.5 and 1.5 sheep ha -1 ), as positive effects are considered greatest at intermediate disturbance intensities (hypothesis well known for vegetation diversity and positively tested for arthropods: e.g.Dennis et al., 2001;Suominen et al., 2003).Cutting presently appears to be a recommended technique for enhancing species richness for both ground beetles and spiders, in accordance with Pozzi et al. (1998) who concluded that cutting was needed for the conservation of the most valuable grassland rather than grazing by sheep or cattle.Finally a cutting regime in June is recommended because spring and autumn cuttings are known to have few effects on spider communities than summer cuttings (Bell et al., 2001).The impact of different dates of cutting is currently being studied in the Mont St-Michel Bay to verify this assumption. | 3,445.4 | 2007-12-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
The Altotiberina Low-Angle Normal Fault (Italy) Can Fail in Moderate-Magnitude Earthquakes as a Result of Stress Transfer from Stable Creeping Fault Area
: Geological and geophysical evidence suggests that the Altotiberina low-angle (dip angle of 15–20 ◦ ) normal fault is active in the Umbria–Marche sector of the Northern Apennine thrust belt (Italy). The fault plane is 70 km long and 40 km wide, larger and hence potentially more destructive than the faults that generated the last major earthquakes in Italy. However, the seismic potential associated with the Altotiberina fault is strongly debated. In fact, the mechanical behavior of this fault is complex, characterized by locked fault patches with a potentially seismic behavior surrounded by aseismic creeping areas. No historical moderate (5 ≤ Mw ≤ 5.9) nor strong (6 ≤ Mw ≤ 6.9)-magnitude earthquakes are unambiguously associated with the Altotiberina fault; however, microseismicity is scattered below 5 km within the fault zone. Here we provide mechanical evidence for the potential activation of the Altotiberina fault in moderate-magnitude earthquakes due to stress transfer from creeping fault areas to locked fault patches. The tectonic extension in the Umbria–Marche crustal sector of the Northern Apennines is simulated by a geomechanical numerical model that includes slip events along the Altotiberina and its main seismic antithetic fault, the Gubbio fault. The seismic cycles on the fault planes are simulated by assuming rate-and-state friction. The spatial variation of the frictional parameters is obtained by combining the interseismic coupling degree of the Altotiberina fault with friction laboratory measurements on samples from the Zuccale low- angle normal fault located in the Elba island (Italy), considered an older exhumed analogue of Altotiberina fault. This work contributes a better estimate of the seismic potential associated with the Altotiberina fault and, more generally, to low-angle normal faults with mixed-mode slip behavior.
Introduction
The Altotiberina fault (ATF) is located at the Tuscany-Umbria-Marche regional boundary within the Northern Apennines (Figure 1), a NE-verging thrust-fold belt undergoing NE-trending extension at a rate of about 3 mm/yr [1]. ATF was identified by the interpretation of seismic reflection profiles that highlighted a regional low-angle normal fault dipping 15-20 • toward the Adriatic Sea [2,3]. On the ATF hanging wall a set of minor syn-and antithetic splay faults sole into the detachment at 4-6 km depth. These structures, characterized by higher dip angles compared to the ATF, have generated small-moderate-magnitude earthquakes; the largest one of M w = 5.1 occurred in 1984 on the Gubbio fault plane [4]. According to the works of [5,6] only microseismic events (<2.3 M L ) have been located along the 500-1000 m thick fault zone cross-cutting the upper crust from 4-5 km down to 14-16 km depth and coinciding with the geometry and location of the ATF (Figure 1). The seismicity nucleating along the ATF is characterized by a nearly constant rate of earthquake production r = 7.30 × 10 −4 earthquakes day −1 km 2 , corresponding to about three events per day with M L < 2.3. The microseismicity nucleating on the ATF is not able to explain the amount of deformation associated with the short-and long-term slip rate inferred by geological [7] and geodetic studies [8,9], suggesting a prevalent aseismic deformation. In support of this hypothesis, talc minerals, characterized by a very low friction coefficient (0.05 < µ S < 0.23; [10]) with a velocity-strengthening slip behavior (e.g., creeping), have been observed to form interconnected foliated networks within the Zuccale fault core, a low-angle normal fault located in the Elba island (Italy) and considered the (older) exhumed analogue of the ATF [11][12][13]. Only recently these hypotheses have been confirmed [6,14]. Indeed, to investigate spatial variations of the frictional behavior along the ATF surface, Anderlini et al. [14] mapped the coefficient of interseismic coupling (the ratio of the long-term seismic slip rate to the tectonic slip rate) by inverting GPS data. They found that about half of the ATF surface below 5 km of depth is characterized by creep, producing a long-term slip rate of 1.7 ± 0.3 mm/yr, while the remaining portion of the same fault is locked and it may be capable of generating M6.5+ earthquakes. In addition, thanks to a 5-year-long (2010-2014) high-resolution earthquake catalogue of about 40,000 events [6,15] observed a striking positive correlation between the creeping regions and the microseismic activity (<2.3 M L ), characterized by clusters of repeating earthquakes, whereas locked portions (asperities) are noticeably less productive or almost silent. The seismic moment released by the ATF seismicity accounts for 30% of the geodetic one [6], implying aseismic deformation. Finally, Independent Component Analysis of GPS time series revealed a large aseismic contribution for swarm-like activity that occurred in the hanging wall of the ATF in 2013-2014 [16]. The ATF deformation pattern is thus consistent with a mixed-mode (aseismic and seismic) slip behavior [6,14,16,17].
These studies have clarified how ATF accommodates the extension in the Umbria-Marche crustal sector of the Northern Apennine, but one question about the seismic potential of this fault remains unsolved: whether the locked patches of ATF can generate moderate (5 ≤ Mw ≤ 5.9) magnitude earthquakes. In facts, different historical moderate magnitude earthquakes are associated with the region where the ATF is located ( [18]; Figure 1), but is not possible to discriminate if these earthquakes occurred directly along the ATF plane or/and along the synthetic and antithetic faults of the ATF. In addition the locked ATF fault patches could be characterized by high frictional strength as the lens of competent material found in the Zuccale fault core [13]. If this were the case, then high sliding friction coefficient (µ ≥ 0.6 [19]) would inhibit the sliding along these patches since they are located in a low-angle plane that is misoriented with the extensional stress field characterized by vertical σ 1 [20]. In this way, the rupture should occur only on other well-oriented faults like the Gubbio fault (GF) dipping 40 • SW and intersecting the ATF at ∼5 km of depth. This behavior is predicted by the frictional fault reactivation theory [21,22] and is consistent with the absence of instrumental moderate-to-strong earthquakes on normal faults dipping less than 30 • [20]. However, this theory does not account for the stress transfer induced by creeping fault segments located on the same fault plane of the locked patches like in the case of ATF. In addition, the heterogeneous distribution of frictional parameters can allow the fault to be prone to start a rupture [23]. To address this question we perform a numerical simulation of the long-term deformation along ATF and GF by a geomechanical model that takes in account the spatial variation of fault friction. The topography is exaggerated (×5) on the z-axis for visualization purposes. The blue squares indicate 5.5 < M w < 6.0 historical earthquakes. The magenta squares indicate M w ≥ 6.0 historical earthquakes [18]. A velocity of 3 mm/yr is applied on the NE boundary according to [1]. The Altotiberina fault (ATF) plane is colored to illustrate the spatial variation of the A parameter of the rate-and-state friction law (Equation (1)) based on the interseismic coupling obtained by [14]. (b) Detail of the different slip-modes imposed on the two fault planes (VW = velocity-weakening; VS = velocity-strengthening). The gray dots represent the M L < 3.9 earthquakes located on the ATF plane over 4.5 years (2010-2014 [6]). The yellow star represents the location of the last largest instrumental M w 5.1 earthquake that occurred in 1984 on the Gubbio fault. The blue, green and red circles represent the location of the ATF-VS, ATF-VW and GF-VW points respectively where different physical quantities are represented in detail in Figure 5.
Model Geometry and Material Properties
The geometry and mesh of the model are built using the software Cubit (https://cubit.sandia. gov/) and imported into Abaqus [24] for finite element modeling. The geometry is characterized by a crustal volume of 150 × 150 × 40 km 3 where the ATF and GF are located. The surface topography is resampled at 1 km from Shuttle Radar Topographic Mission (SRTM, http://www2.jpl.nasa.gov/ srtm/; Figure 1). ATF and GF are represented by two planar surfaces dipping 18 • NE and 40 • SW respectively [2,3,25]. The surfaces are imprinted in the crustal block through Boolean operators. The volume is meshed by 3,673,141 tetrahedral elements. The nodes along the fault surfaces are split in two following the split-node technique described in [26] to allow fault slip. The crust is characterized entirely by a frictional-elastic rheology. For the elastic part we consider a shear modulus (G) of 28 GPa, a poisson ratio (ν) equal to 0.25 and a density of 2500 kg/m 3 .
Rate-and State-Dependent Friction Law
The deformation along ATF and GF is governed by the rate-and-state dependent friction law [27][28][29][30]: where L is a critical slip distance, µ 0 is the reference friction coefficient at the reference slip rate V 0 and A − B are the rate-and-state parameters, which can be used to model both stable, velocity-strengthening fault segments (A − B > 0) and potentially seismic, velocity-weakening fault segments (A − B < 0). Two details need to be specified about the rate-and-state friction law. The first one is the definition of the function f that describes the evolution of the state variable. In this work we use the aging-law: The second one is its indefinite behavior when the slip rate is zero. To avoid this case, we use the approximation of the rate-and-state depending friction proposed by [31,32], reading for V smaller than V linear , where V linear has the dimension of a velocity, and is a cutoff for a linear slip rate dependence. This approximation of the rate-and-state friction law has been implemented in the FRIC-subroutine of the Abaqus software [24].
Modeling the Mixed-Mode Fault Slip Behavior
The rate-and-state parameters are constrained by frictional laboratory experiments conducted on fault rocks of the Zuccale fault (the exhumed analogous of the ATF [13]) and shown in Table 1. The mixed-mode fault slip behavior of the ATF is simulated by varying the A parameter of the rate-and-state depending friction law with the interseismic coupling degree proposed by Anderlini et al. [14] and shown in Figure 1. This technique has been successfully used by Kaneko et al. [33] to simulate the coseismic interaction between fault area with velocity-weakening and velocity-strengthening behavior. To compute A on each element of the computational grid on the ATF plane we proceed as follows: first we fix the B parameter to 0.01. Then we define the maximum value (most positive difference) A − B = 0.0059 for the velocity-strengthening and the minimum value (most negative difference) A − B = −0.0048 for the velocity-weakening behavior, in accordance with the values obtained from laboratory experiments [13]. Finally, we associate these values to the end-members values (1 and 0) of the interseismic coupling ( Figure 1 and Table 1). The interseismic coupling is interpolated on the computational mesh nodes by a weighted mean function. On GF plane instead we impose a homogeneous velocity-weakening behavior with initial frictional properties as shown in Table 2.
Initial Stress Field Conditions
The simulation is performed in two subsequent steps. In the first one, we apply a gravitational loading and an initial stress field on the entire domain (geostatic step). We define an extensional stress regime where the maximum stress axis is vertical (σ 1 ) and the middle (σ 2 ) and the minimum (σ 3 ) principal stress are horizontals and oriented in the NW-SE and NE-SW directions respectively, in agreement with the regional-scale tectonic regime of the study area [34]. The stress field resulting from the first stage is defined as uniaxial strain reference frame [35]. This state of stress is characterized by a vertical stress where ρ is the density, g is the gravity acceleration and z is the depth. The horizontal stress can be calculated as follows: where ν is Poisson's ratio. In this way, for ν = 0.25, the vertical stress is three times larger than the horizontal stress. We consider the Terzaghi effective stress principle to compute the effective vertical stress where P p is the pore pressure, fixed at the hydrostatic value where ρ f is the density of the fluid into the pores. In this first step, the boundary conditions applied to the model are the following: the upper boundary (topographic surface) is free to move in all directions, while the lateral boundaries of the domain and the bottom are kept fixed in the normal direction. After this first step, the system is at equilibrium and the solution is used as the initial condition for the second step. In the second step we stretch the model (crustal extension is simulated) for 50,000 years, applying a constant horizontal velocity of 3 mm yr −1 on the NE lateral boundary (Figure 1), according to the present-day strain rate and kinematics of the region [1].
Fault Slip Condition
The seismic cycle along ATF and GF is modeled in the second step. Fault reactivation follows the Amonton law: where σ n = σ n − Pp is the effective normal stress, σ n = σ σ σ · n n n is the normal stress, n n n is the normal vector to the fault surface (σ n is positive in compression); τ = σ σ σn n n − σ n n n n is the shear stress and µ is the sliding friction that evolves following Equation (1). From the Amonton law, we calculate the slip tendency (ST) factor as the ratio of shear stress to normal stress acting on the plane of weakness: The slip tendency indicates if one fault is in a stable or unstable state of stress: if ST < µ the state of stress is stable, and no slip occurs along the fault plane. Otherwise, if ST ≥ µ the strength of the fault is overcome, and slip starts to propagate along the fault plane. Figure 2 shows the initial stress conditions obtained on the domain and on the ATF and GF at beginning of the extension step (initial time t = 0). At this time, the state of stress on the velocity-weakening ATF area and GF is not critical (Figure 2) and hence they are locked. At t = 0, only the ATF areas with velocity-strengthening regime are critical (Figure 2) because they have a slip tendency value higher than the friction coefficient, so they can slip into the next time increment.
Reactivation of Misoriented and Potentially Seismic Fault Patches
Different characteristic snapshots of the time evolution of the slip on the ATF and GF are shown in Figure 3. At 500 years the slip on ATF is limited only to the velocity strengthening patches. In fact, in the initial stress field condition imposed on the model, the critical shear stress is already overcome in the first time step, but only in the velocity-strengthening patches. At 2500 and 5000 years slip starts to propagate also in the velocity weakening patches in the shallower part of ATF, but localized at the interface with the velocity-strengthening fault area. In this period GF also starts to slip with a stick-slip deformation style. The GF failure occurs in the central part of the fault and propagates in SE-direction. At 10,000 years slip along the ATF plane begins to propagate in the central part, while the entire GF plane has reached failure. It should be noted that GF slips completely only after 10,000 years, even though, with an average dip angle of 40 • , GF is classified as a well-oriented normal fault [20,22]. This long time is mainly due to the initial stress condition imposed on the GF plane, which is far from the critical value for its reactivation (Figure 2). At 30,000 years the ATF slip is localized from 800 m to almost 5000 m of depth. At the end of the simulation (50,000 years) the entire ATF locked area also fails. We computed an averaged long-term slip rate equals to 0.44 mm/year and 0.1 mm/year along the ATF and GF, respectively. Different characteristic snapshots of the time evolution of the cumulative shear stress on ATF and GF are shown in Figure 4. At 500 years the shear stress computed on the faults depends exclusively on the initial stress as an initial condition. Indeed, the shear stress gradient depends mainly on the depth in accordance with the lithostatic load. At 500 and 2500 years the initial shear stress on ATF is perturbed by the continuous slip of the velocity-strengthening patches. The main shear stress increment is localized at the interface between velocity-strengthening-velocity-weakening patches. The shear stress on GF increases according to the tectonic extension simulated in the model. At 5000 years the shear stress on GF is also perturbed due to the initiation of slip along this fault. At 10,000 and 30,000 years higher shear stress is located on the deeper velocity weakening patches of ATF (almost 10 km of depth). This larger concentration of shear stress, due to the stress transfer from creeping segments, allows the activation also of the entire ATF area with velocity weakening behavior ( Figure 3).
Stick-Slip Versus Creeping Deformation
The time evolution of the slip, shear stress, effective normal stress and sliding friction for different characteristic points whose locations are marked in Figure 1b are shown in Figure 5. For a better visualization, we subtracted the corresponding value computed at the initial time from each physical quantity. In Figure 5a, the characteristic stick-slip deformation style is visible for the nodes with velocity-weakening behavior located on ATF and GF planes. For the time span considered (1700 years), the ATF-VW point fails in two events whereas the GF-VW point fails three times. In comparison, the slip on the node located on the ATF area with velocity-strengthening behavior increases continuously for the entire time interval considered. Concerning the shear stress evolution (Figure 5b), the ATF-VS point located on the ATF velocity-strengthening patch remains stable around the value 18.45 MPa as consequence of the continuous release of stress due to the creeping deformation. On the contrary the shear stress on the ATF-VW and GF-VW points varies by almost 0.5 and 1 MPa, respectively, during the interseismic phase. The variation of the normal stress is more noticeable in the GF-VW point located on GF. In fact, for the same time interval of 1700 years ( Figure 5c) the effective normal stress decreases 1.1 MPa with respect to the ATF-VW point located on the ATF velocity-weakening patch, which decreases of 0.1 MPa. An important aspect is that the variation of shear stress and normal stress on the ATF area with velocity-weakening behavior are recovered almost completely during the coseismic phase unlike GF due to continuous loading by the ATF creeping segments. The sliding friction for the node located on the velocity-strengthening patch (ATF-VS point, Figure 5d) remains almost unvaried around the value of 0.13. Otherwise the ATF-VW and GF-VW points located on the ATF velocity-weakening patch and on GF respectively increase to a maximum value of 0.71 in the interseismic phase. Peak values of 0.75 are reached during the coseismic phase due to the direct effect of the rate-and-state friction law. Figure 5. Time evolution of the sliding friction, shear stress, normal stress and slip for different characteristic points located on the ATF and GF planes (see Figure 1b for the location). For a better visualization, we subtracted their corresponding value computed at the initial time from each physical quantity.
Discussion and Conclusions
The results of the numerical simulations conducted in this work demonstrate that the seismic activation of the entire locked area of the Altotiberina fault is indeed possible (Figure 3). The averaged long-term slip rate computed along the Altotiberina fault plane (0.44 mm/year) is in accordance with the long-term slip rate estimated from studies based on geodynamic constrains (0.1-1.0 mm/year [36]) and geological data (1 mm/years [7,25]). According to Anderlini et al. [14], if we assume that the entire locked area fails in a single event, then the Altotiberina fault could host a M 6 earthquake. However, we cannot exclude the possibility that earthquakes with larger magnitude occur along the Altotiberina fault. Indeed, as demonstrated by [37], stable creeping fault segments can become unstable due to rapid shear heating of pore fluids. In this way, the seismic rupture could propagate into creeping segments of the faults, increasing the magnitude of the earthquake. This model of rupture propagation has been used only for the Tohoku-Oki earthquake [37], located in a geodynamic context where tectonic forces are large compared to those active in the Northern Apennine. For this reason, new studies should verify the applicability of this model to other tectonic settings.
The activation of the locked fault area occurs through continuous stress transfer from adjacent creeping segments located on the same fault plane. In this contest, creeping fault segments assume another role in the redistribution of the tectonic stress. Creeping faults segments have been considered to be inhibitors to the propagation of the rupture front during the coseismic phase [33], but we now recognize that they also assume a role promoting rupture [38]. Stress transfer from creeping fault areas could hence be the main mechanism to induce moderate-magnitude earthquakes along low-angle normal faults. To understand whether mixed-mode slip behavior is a peculiar characteristic of low-angle normal faults LANFs is hence a new challenge to reevaluate the seismic potential of these structures.
Funding: This research received no external funding. | 5,005 | 2020-04-16T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Non-relativistic scale anomalies
We extend the cohomological analysis in arXiv:1410.5831 of anisotropic Lifshitz scale anomalies. We consider non-relativistic theories with a dynamical critical exponent z = 2 with or without non-relativistic boosts and a particle number symmetry. We distinguish between cases depending on whether the time direction does or does not induce a foliation structure. We analyse both 1 + 1 and 2 + 1 spacetime dimensions. In 1 + 1 dimensions we find no scale anomalies with Galilean boost symmetries. The anomalies in 2 + 1 dimensions with Galilean boosts and a foliation structure are all B-type and are identical to the Lifshitz case in the purely spatial sector. With Galilean boosts and without a foliation structure we find also an A-type scale anomaly. There is an infinite ladder of B-type anomalies in the absence of a foliation structure with or without Galilean boosts. We discuss the relation between the existence of a foliation structure and the causality of the field theory.
JHEP06(2016)158
Scale anomalies in non-relativistic theories have been the subject of recent studies. Specifically, in the context of Lifshitz field theories satisfying an anisotropic scale symmetry of the form: t → λ z t, it has been found that the allowed structures for scale anomalies satisfying the Wess-Zumino consistency conditions vary depending on the values of the dynamical exponent z and the dimension d [1]. In all cases studied only B-type anomalies were found [1]. The anomaly coefficients for z = 2 in 2 + 1 dimensions were computed in particular examples using heat kernel and holography in [2][3][4]. The authors of [5,6] extended the study to non-relativistic theories with Galilean boost symmetry through a null reduction of a relativistic theory in one higher dimension. A complete classification of scale anomalies consistent with the Wess-Zumino consistency conditions in non-relativistic theories is valuable, and is likely to have theoretical as well as experimental manifestations (e.g. at quantum critical points). It could, for instance, lead to non-relativistic RG flow theorems for anomaly coefficients [7,8], or a non-relativistic generalization of the relation between scale and conformal invariance [9].
In this paper we present a complete classification of non-relativistic scale anomalies in various setups, by analysing the appropriate cohomology in a curved background. This includes a separate analysis for cases, in which the time 1-form does not induce a foliation structure, i.e., when it does not satisfy the Frobenius condition. In some of the setups studied, we take into account an additional Galilean boost symmetry accompanied by a background U(1) gauge field associated to particle number. We focus our analysis on the case of z = 2 in 1 + 1 and 2 + 1 dimensions, however the prescription we present is valid also for other dimensions and the analysis can be extended in a straightforward manner. The generalizations to other values of z are non-trivial since in the case of z = 2 the gauge and scale transformations do not commute. We leave this for future study.
JHEP06(2016)158
In order to couple our theory to a curved background and perform the cohomological analysis we use both a spacetime metric g µν and a 1-form t µ representing the time direction to build our cohomological invariants. Our description is equivalent to the Newton-Cartan geometry [5,[10][11][12][13][14][15][16][17][18] which we review in subsection 2.4.
We find that in 1 + 1 spacetime dimensions, when we impose Galilean boost symmetry the theory has no anomalies for z = 2. In 2 + 1 dimensions for a z = 2 Galilean theory we find the following results. When the time direction induces a foliation there are no boost invariants outside the purely spatial sector. 1 In the purely spatial sector the anomalies are identical to the Lifshitz case [1]. In the absence of a foliation structure there are no boost invariants with less than four derivatives. In [5] it has been claimed that for the case of z = 2 Galilean theories, the results can be derived from a null reduction of a relativistic theory in one higher dimension. This implies the existence of an A-type anomaly (in the terminology of [19]) in 2 + 1 dimensions. We indeed establish this using our cohomological analysis directly in 2+1 terms rather than using the null reduction (see equation (5.13)). This allows us to directly compare our results to the Lifshitz case. In all the other cases we consider all anomalies are B-type. In the cases without a foliation structure (with or without Galilean boosts) there is an infinite ladder of anomalies generated by multiplying B-type anomalies by the spatial anti-symmetric part of the derivative of the normalized time direction. We discuss possible consequences of an absence of a foliation structure. This paper is organised as follows. In section 2 we describe the various choices that have to be made when coupling our theory to curved spacetime. We include a comparison with the Newton-Cartan geometry of [10]. In section 3 we present the Ward identities and describe the cohomological setup which we use to study the anisotropic scale anomalies. We include in this section a classification by sectors. In sections 4 and 5 we detail our results for the various setups in 1 + 1 and 2 + 1 dimensions with z = 2. We conclude in section 6 with a list of possible future directions. Most of the technical details are left for appendixes.
JHEP06(2016)158
not satisfy the Frobenius condition: n [α ∇ β n γ] = 0, (2.1) which in the differential form language reads n ∧ dn = 0. Note, that even when (2.1) is satisfied we cannot specialize in our analysis to the case dn = 0 since this condition is not invariant under anisotropic Weyl transformations. In our analysis we distinguish between four cases: 1. With Frobenius and Galilean boost invariance -This case was considered in [6]. As mentioned in the introduction we disagree with their results.
2. With Frobenius and no Galilean boost invariance -this is the case of Lifshitz field theories, studied in our previous paper [1] as well as [2][3][4] and most of past literature on scaling anomalies in Lifshitz field theories. In most of these cases the ADM decomposition is used, which implies that the Frobenius condition is satisfied.
3. Without Frobenius and with Galilean boost invariance -This case was considered in [5] by relating it to a relativistic case in d + 2 dimensions via a null reduction.
Without Frobenius and with no Galilean boost invariance -This case has not been considered in the literature so far.
Our aim is to compare the cohomology of scaling anomalies in these four cases and consider how the aforementioned choices influence the results.
Implications of the absence of a foliation structure
In analysing the cases 3 and 4 outlined above, we will be considering a curved background where the 1-form n µ does not satisfy the Frobenius condition (2.1). Thus, the curved background lacks a foliation structure. It has been noted in the literature (see e.g. [15]), that such cases should be avoided since they imply a breakdown of causality in the nonrelativistic field theory. The argument is based on Caratheodory's theorem (see e.g. [20] theorem 6.13). The theorem asserts that if the Frobenius condition is not satisfied at a point x, then there is a neighborhood of x where any point in the neighborhood can be reached from x by a future directed curve. This implies a lack of causal structure. On the other hand, the non-relativistic field theories whose scale anomalies we wish to analyse, are defined in flat space which does have a foliation and therefore has a natural causal structure. The curved background structure to which we couple the theories is only providing sources to the various field theory currents. In particular, the 1-form n µ couples to the field theory energy current. Imposing the Frobenius condition on the source n µ means that we do not allow a calculation of correlations functions of all the components of the energy current. There is, however, no a priori reason for such a requirement and it is not obvious how it follows from any causality requirement imposed on the non-relativistic field theory in flat space.
JHEP06(2016)158
What Caratheodory's theorem certainly implies is that one should take care when attempting to mathematically formulate and calculate the correlation functions of all the components of the energy current using the background 1-form source n µ , and it is possible that there is no such consistent framework. However, we see no such mathematical difficulty when using this source that does not satisfy the Frobenius condition in the cohomological calculation. As pointed out above, there is a major difference between the scale anomalies depending on whether the 1-form source satisfies the Frobenius condition or not. While with a Frobenius condition one has only B-type anomalies, we find that without a Frobenius condition one can have an A-type non-relativistic scale anomaly as well as an infinite number of B-type anomalies.
In studying the non-Frobenius case, we closely follow [1] in terms of notations. Although the time 1-form no longer represents a foliation as it does in the Frobenius case, many of the definitions used in the Frobenius case may be easily extended to this case. In particular, one can still decompose any tensor to components which are tangent and normal to the space directions (using the time direction n µ and the projector on the space directions P µν = g µν + n µ n ν ).
Generally, the covariant derivative of the 1-form n α can be decomposed as follows: where (K S ) µν , (K A ) µν and a α are space tangent tensors (normal to the time direction) that satisfy: 1. (K S ) µν = 1 2 L n P µν is symmetric and reduces to the extrinsic curvature of the foliation in case the Frobenius condition is satisfied.
(K
is anti-symmetric and vanishes in the Frobenius case.
3. a α = L n n α is the acceleration vector.
As in the Frobenius case we can still define a space tangent derivative as follows: for T αβ... a space tangent tensor. This definition still satisfies ∇ ρ P µν = 0. Using the commutation of two such space derivatives one can show that for any space tangent vector V α : where we have defined: and:
JHEP06(2016)158
In the Frobenius case, the tensor R αρµν reduces to the intrinsic Riemann tensor of the foliation, however generally it does not have all of the regular symmetries of the Riemann tensor. It is therefore useful to define a modified tensor: which satisfies the usual Riemann tensor symmetries except for the second Bianchi identity, and coincides with R αρµν in the Frobenius case. Many of the identities derived in our previous paper [1] can be generalized to the non-Frobenius case (see appendix A.1). In general these identities will be modified with terms involving K A µν . One example is the following set of relations between a α and K A µν : Note that ∇ µ a ν is no longer symmetric in this case. We thus conclude that the main implications of the lack of a foliation structure on our cohomological analysis is in the addition of the extra 2-form K A αβ to the list of basic tangent tensors as it appears in [1], as well as the appropriate modifications to the various identities satisfied by the basic tangent tensors.
Implications of Galilean boost symmetry
The symmetry group of Lifshitz theories is composed of time and space translations, rotations and Lifshitz scaling. The generalization of these symmetries to a curved background is given by symmetries under time-direction-preserving diffeomorphisms (TPD) and anisotropic Weyl transformations (see subsection 2.3). However, in many cases the non-relativistic theory satisfies the full Schrödinger algebra. In curved spacetime we have to consider two additional symmetries. In the terminology of [10] these are the Milne boosts and a U(1) gauge symmetry. 3 In the literature, the coupling of Galilean-invariant theories to a curved spacetime is usually implemented using the Newton-Cartan geometry (see [5,[10][11][12][13][14][15][16][17]). However, since our goal in this work is to compare the anisotropic scaling cohomologies of Galilean and non-Galilean-invariant theories, we find it useful to have a joint framework for the description of both types of theories on a curved background. We do this by including the gauge field associated with the conserved particle number, along with the Milne boost and U(1) gauge symmetries, in our previously developed framework. For our purposes, this description is equivalent to the Newton-Cartan one. We compare our terminology with the Newton-Cartan one in subsection 2.4.
The first implication of the Galilean symmetry is the presence of an additional gauge field A µ as in [5], associated to the particle number current. We can decompose the gauge field into space tangent and normal components as follows:
JHEP06(2016)158
The gauge invariant data is encoded in the field-strength tensor F µν , or alternatively in the electric and magnetic fields, defined by: both are space tangent. The second implication of the Galilean symmetry is the presence of the gauge symmetry and the Milne boost symmetry. In cohomological terms, in this case we are looking at the relative cohomology of the anisotropic Weyl operator with respect to Milne boosts and gauge transformations, in addition to TPD. We are therefore required to restrict the possible terms in the cohomology to ones which are both gauge invariant and Milne boost invariant. The restriction to gauge invariant terms is easily achieved by using the electric and magnetic fields rather than the gauge field itself, but the restriction to boost invariant terms is less obvious. We accomplish it here by starting with all TPD and gauge invariant terms, performing a Milne boost transformation on each of them and finding the combinations which are boost invariant. 4 In conclusion, the implications of the Galilean symmetry in our analysis is the addition of the electric and magnetic fields to the list of basic tangent tensors as it appears in [1], and the restriction of the various terms considered in the analysis to ones which are gauge and Milne boost invariant.
The relevant symmetries
In this subsection, we detail the relevant symmetries for our problem and the way in which they act on the various background fields: g µν , t µ , A µ or alternatively for theories that require a vielbein formalism e a µ , t a , A µ . In the following sections, whenever it is possible, we refer to the most general case -the one that includes the gauge field and does not assume the Frobenius condition -and the other cases are inferred by setting the gauge field or K A αβ to zero. The cohomological analysis will then be performed for each case separately.
1. Galilean Boost invariance. That is, invariance under infinitesimal Milne boosts (in the terminology of [5]) which are given by: where W µ is a space tangent (W µ n µ = 0) parameter of the transformation. Equivalently in terms of the vielbeins:
JHEP06(2016)158
This is the curved space version of Galilean boosts: 2. Gauge invariance. That is, invariance under a standard U(1) gauge symmetry associated with particle number, given by: (2.14) 3. Anisotropic Weyl invariance. That is, invariance under anisotropic Weyl transformations: where P µν = g µν + n µ n ν is the spatial projector and the weight of the gauge field can be determined from the Galilean algebra (more specifically from the commutator of a translation and a Galilean boost). Alternatively, using the vielbeins: (2.16) 4. Invariance under time direction preserving diffeomorphisms (TPD). These are diffeomorphisms that preserve the time direction, that is, diffeomorphisms with a parameter ξ that obeys L ξ t α ∝ t α . This can be extended to any diffeomorphism by having the time direction 1-form transform appropriately: 17) or in vielbein formalism (we also include here local Lorentz transformations):
JHEP06(2016)158
Note, that when using the BRST description, one also has to define the action of δ B W , δ G Λ , δ W σ , δ D ξ and δ L α on the Grassmannian parameters W µ , Λ, σ, ξ µ and α a b such that We detail them here only for the z = 2 case which is the case we consider in this work:
Comparison with the Newton-Cartan geometry
In this subsection we compare our notations and conventions to those of [10] in which the non-relativistic setup used a Newton-Cartan (NC) geometry. The NC geometry is defined in terms of the spatial metric without a priori referring to an external full spacetime metric (which is ambiguous). The relevant curved data is the spatial metric h µν NC , the local time direction n NC µ and the velocity vector v µ NC satisfying: along with the gauge field A NC µ . This uniquely defines h NC µν such that: These NC structures relate to our definitions as follows: (2.22) The infinitesimal Milne boost transformations in the NC framework are given by: (2.23) This transformation corresponds to (2.11) with W µ ≡ −h µν NC ψ ν . Next, we turn to the definition of the covariant derivative. Two choices are common in the NC literature for the affine connection, both of which have non-vanishing torsion. First, the gauge invariant connection, given by: where F NC ρσ is the field strength associated with the U(1) gauge field. The second is the boost invariant connection, given by: . (2.25) We, however, use the standard torsionless Levi-Civita connection associated with the metric g µν . The relation between this connection and the gauge invariant NC connection is given by: where ∇ µ is the covariant derivative associated with the Levi-Civita connection. Note also that when projected on space tangent directions (as in (2.3)), the Levi-Civita and the gauge invariant NC connections coincide: whereas the space projected boost invariant NC connection is given by: We would like to stress that the choice of connection is not essential to the cohomological analysis we are performing: the algebra of the various symmetries we are considering does not depend on the choice of connection. In addition, since the difference between the Levi-Civita connection and the NC ones is a tensor that is composed of the basic background fields (the metric, the time direction and the gauge field), any scalar written using one connection can be decomposed as a combination of scalars written using the other. The number of independent invariant expressions therefore does not depend on the choice of connection either. While the Levi-Civita connection may be considered a less "natural" choice for the Galilean case, it is nevertheless more convenient for our calculations, and for the purpose of comparing them to the non-Galilean cases. The results can always be translated to the NC formalism using the above formulas.
Finally, we compare the various field theory currents that couple to the background fields, as defined in [1] and in subsection 3.1, to the NC ones as defined [10] (see also [17]). The NC currents are defined from the variation of the action as follows: where J µ NC , P NC µ ,E µ NC and T NC µν represent the particle number current, the momentum density, the energy current and the spatial stress tensor respectively. 5 The corresponding currents in our conventions are defined via: The relations between the currents in our conventions and the NC ones are then given by:
The cohomological problem
Our main goal is to find the possible anomalous contributions to the Ward identity that corresponds to Lifshitz scale symmetry in the various cases outlined in section 2, by finding the non-trivial solutions of the Wess-Zumino consistency conditions. As in [1], we use the cohomological description of the problem, in terms of a BRST-like ghost. In this description one studies the relative cohomology of the nilpotent anisotropic Weyl operator δ W σ with respect to the other symmetries of the problem. The possible anomalies are terms A σ of ghost-number 1 and with the right scaling dimension which are cocycles (i.e. satisfy the WZ consistency conditions): with σ a Grassmannian transformation parameter, and are not coboundaries (i.e. cannot be canceled by an appropriate counterterm): for G a local functional of the background fields, where both A and G are invariant under the rest of the symmetries of the problem.
Ward identities
We start by studying the relevant Ward identities associated with the symmetries of subsection 2.3. Here again we refer to the most general case, with non-relativistic boost invariance, a gauge field and in which the time direction is not hypersurface orthogonal. Assume a classical action S(g µν , t α , A α , {φ}) or alternatively S(e a µ , t b , A α , {φ}), where {φ} are the dynamic fields. Define the various currents as follows. The stress energy tensor: The variation of the action with respect to the time direction 1-form: as well as its normalized version:Ĵ α ≡ |g µν t µ t ν |J α , (3.5) and the mass current, given by:
JHEP06(2016)158
Note that J α is space tangent, i.e. J α t α = 0, since the action is invariant under local rescaling of the time direction 1-form. In cases where one can use either the metric or the vielbein descriptions, the following relation exists between T µν (g) , T µν (e) ≡ T (e) µ a e aν and J α : For time direction preserving diffeomorphisms (TPD), the corresponding Ward identities are given by: (3.8) or equivalently in terms of T µν (e) : The Ward identity corresponding to the U(1) gauge invariance is simply the conservation of the current J µ m : Using it we get a simplified version of the previous ward identities: (3.11) The Ward identity corresponding to anisotropic Weyl symmetry is given by: And finally, for the boost invariant cases, the Ward identity corresponding to Milne boosts is given by: 13) or alternatively: which is the famous statement of equality between particle number current and momentum density.
In this work we study the possible form of the anomalous corrections to the anisotropic Weyl Ward identity (3.12), assuming the other symmetries are not anomalous.
Constructing time direction preserving diffeomorphism invariants
As explained in our previous paper [1], the cohomological analysis starts by constructing all possible TPD invariant expressions of a certain Lifshitz scaling dimension. This can be accomplished by taking all possible contractions of a set of basic space tangent tensors. A tensor T αβγ... is called space tangent if In the case without a foliation structure we have to add K A µν to the list of basic tangent tensors. In the cases with Galilean boost-invariance, the electric and magnetic fields E µ , B µν associated with the U(1) gauge symmetry are also included. The list of basic tangent tensors then becomes: 1. The spatial metric P µν = g µν + n µ n ν .
The tensors K
4. The modified "intrinsic" Riemann tensor R µνρσ as defined in equation (2.7).
6.
Lie derivatives (temporal derivatives) in the direction of n α : L n . Note that if some tensor T αβ... is space tangent, then L n T αβ... is also space tangent.
8. The electric field E µ and the magnetic field B µν as defined in equation (2.10).
The various tensors were chosen such that they scale uniformly under anisotropic Weyl scaling transformation with a scaling dimension d σ : where σ(x) is the Grassmannian local parameter of the anisotropic Weyl transformation and ∂σ stands for any term proportional to derivatives of the ghost σ. The basic tangent tensors have the following scaling dimensions: where T αβ... is any space tangent tensor with uniform scaling dimension. Various relevant identities for the basic tangent tensors are listed in appendix A.1. The complete boost and Weyl transformation rules for these tensors are listed in appendixes A.2-A.3.
Classification by sectors
The various terms in the cohomology all have the form √ −gφ, where φ is a scalar of uniform scaling dimension −(d + z), built from contractions of the basic tangent tensors of subsection 3.2. Suppose that n K S , n K A , n a , n R , n , n ∇ , n L , n B and n E are the number of instances of the various basic tangent tensors (as indicated by the subscript) that appear in φ, and n P the number of spatial metric instances required to contract them. For the scaling dimension to be correct we require: For all indexes in φ to be contracted in pairs we require: From requirements (3.18) and (3.19) we obtain the conditions: If we define n T ≡ n K S − n K A + n L + n B + 2n E as the total number of time derivatives and n S ≡ 2n K A + n a + n ∇ + 2n R − n E as the total number of spatial derivatives in the expression, 6 we get the following form for these conditions: The numbers n T , n S and n remain unchanged when applying the Weyl operator δ W σ to any tangent tensor or when using identities relating different tangent tensors. We therefore classify expressions according to sectors, each corresponding to specific values of (n T , n S , n ). When studying the cohomological problem we may focus on each sector separately. We also define for convenience the total number of derivatives: which is always positive, unlike n T and n S . Note that the time reversal and parity properties of the expressions in a certain sector are given by: An important difference in the classification to sectors from the case studied in [1] is that the negative contributions to equation (3.22) allow in some cases for an infinite number of sectors. For z integer, the total contribution of the electric field is always
JHEP06(2016)158
positive. This is not the case for the contribution of K A . In the case of z = 2 for instance, the total contribution of K A is vanishing hence allowing for an infinite number of sectors, each corresponding to a different number of derivatives n D . Each of these sectors may (and in fact does, as we show in the following sections) contain a different set of possible independent anomalies. Thus we conclude that a direct consequence of discarding the Frobenius condition is the possibility of having an infinite set of independent anomalous contributions to the Ward identities of the theory.
A final remark is in order, regarding the cases with Galilean boost invariance. As mentioned in section 2, these cases require finding the combinations of expressions which are invariant under Milne boosts. In order to make use of the classification to sectors for this type of analysis, we define the scaling dimension of the boost transformation parameter W µ to be: d σ [W µ ] = −z, so that the scaling dimension of a scalar expression remains invariant under δ B W . As a consequence, for boost-ghost number 1 expressions, the boost parameter W µ contributes (n T , n S , n ) = (1, −1, 0) to equation (3.22).
A prescription for finding the anomalous terms
In this subsection we give a detailed prescription for finding the anomalous terms in the relative cohomology of the anisotropic Weyl operator for z = 2 and any d. The prescription is as follows: 1. Identify the sectors. Those are the sets n T , n S and n satisfying (3.22), (3.23). The cohomological analysis can be performed for each sector separately.
2. Build all TPD and gauge invariant expressions in each sector by contracting the basic tangent tensors of subsection 3.2. Use the electric and magnetic field rather than the gauge field itself to obtain gauge invariant expressions. We denote the independent basis of expressions φ i , taking into account the relevant identities from appendix A and additional dimensionally dependent identities.
3. For the cases with Galilean boost symmetry identify Milne boost invariant expressions: denote by χ j all the independent TPD and gauge invariant expressions with one W µ in that sector. These span the possible results of the Milne boost transformations. Find the boost transformation δ B W φ i and express it as a linear combination of χ j . Suppose this linear combination is given by: . . , n BI as our new independent basis of expressions, where n BI is the number of independent boost invariant expressions. 4. To find the cocycles of the relative cohomology of the anisotropic Weyl operator: -Build the integrated expressions of ghost number one: -Apply the Weyl operator δ W σ to each of these terms to obtain ghost number two expressions. Reduce each of them to a linear combination of independent expressions of the form L j = √ −g σψ j where ψ j are ghost number one expressions. Suppose these linear combinations are given by:
JHEP06(2016)158
-Find all linear combinations of the basic ghost number one expressions E = C i I i (where C i are constants) that satisfy δ W σ E = 0, by solving the linear system of equations: M ij C i = 0. The space of solutions is the cocycle space. Let E i , i = 1, . . . , n cc be some basis for this space, where n cc is its dimension.
5. To find the coboundaries of the relative cohomology: -Build the integrated expressions of ghost number zero: -Apply the Weyl operator δ W σ to each of them to obtain ghost number one expressions. Reduce each of them to a linear combination of the expressions I i . Suppose these combinations are given by: The span of these combinations is the coboundary space. Let F i , i = 1, . . . , n cb be some basis for this space, where n cb is its dimension.
6. Finally, to find the anomalous terms in the cohomology, check which of the cocycles I i are not in the span of the coboundaries F i . We denote these by A i , i = 1, . . . , n an , where n an = n cc − n cb is the number of independent anomalies.
For distinguishing between A-type and B-type anomalies we use the same definitions as in our previous paper. B-type anomalies are defined as Weyl invariant scalar densities (trivial descent cocycles up to coboundary terms) whereas A-type anomalies have nontrivial descent. This does not necessarily align with the definition of A-type anomalies as topological invariants. For further discussion on this topic see [1].
Scale anomalies for 1+1 dimensions with z=2
In 1 + 1 dimensions the Frobenius condition is always satisfied, and the magnetic field B µν and modified "intrinsic" Riemann tensor R αρµν vanish identically. We study the Galilean case here (as opposed to the Lifshitz one that was studied in the previous paper).
The classification to sectors is done according to equation (3.20) which takes here the form: where we have added the boost parameter for convenience when classifying the results of the boost transformations. For z = 2 we obtain: For the condition (3.21) to be satisfied we must have n = 1. We have the following sectors: (n T , n S , n ) = (0, 3, 1), (1, 1, 1) or (2, −1, 1). We find that there are no possible anomalies in any of them. Note that in 1 + 1 dimensions there is no need to keep track of indexes, and so we suppress them throughout this section.
The (0,3,1) sector
The independent ghost number zero expressions in this sector are given by: All of them are invariant under boosts. The cohomological analysis is therefore identical to the corresponding one in the previous paper [1], where we found no possible anomalies.
The (1,1,1) sector
Here, the independent ghost number zero expressions are given by: We first look for boost invariant combinations. The independent boost-ghost number one expressions are given by: Performing boost transformations on the ghost number zero expressions gives: and it is easy to check that there are no boost invariants in this sector.
The (2,-1,1) sector
This sector contains just one ghost number zero expression φ 1 = E. This expression is not boost invariant.
Scale anomalies for 2+1 dimensions with z=2
The main case we study in this work is the one of 2 + 1 dimensions with a dynamical exponent of z = 2. As detailed in previous sections, we compare 4 different cases in our analysis: 1. The case with the Frobenius condition satisfied and Galilean boost invariance, 2. The case with the Frobenius condition satisfied and no Galilean boost invariance, 3. The case without the Frobenius condition and with Galilean boost invariance, 4. The case without the Frobenius condition and with no Galilean boost invariance.
JHEP06(2016)158
While the second case was already studied in our previous paper [1], the others are new. As noted in subsection 3.3, cases 3 and 4 (the ones in which the Frobenius condition is not satisfied) contain an infinite number of sectors with increasing total number of derivatives n D , while cases 1 and 2 contain only a finite number of sectors with n D ≤ 4. In this work we restrict our attention to the sectors with n D < 4 and the parity even sector with n D = 4 in all 4 cases, leaving other sectors to future work. In some of the cases we offer some conclusions regarding other sectors as well.
For calculations in 2 + 1 dimensions it is convenient to define the scalars B, K A and the tensorsK αβ ,K S αβ as follows: In general, we find that cases 1,2 and 4 contain only B-type anomalies (in the sectors we study), whereas case 3 contains both an A-type anomaly and B-type anomalies. We also find that cases 3 and 4 allow for an infinite number of B-type anomalies. We find in general that boost invariant expressions only exist in the purely spatial sectors (that is the ones with n T = 0, n S = 4), which are left unchanged compared to the Lifshitz case studied in [1]. We therefore find only 1 possible anomaly in this case, which is B-type and given in (5.7).
With Frobenius and Galilean boost invariance
This result is in contradiction with [6], where it is claimed that there is an A-type anomaly in this case. The discrepancy can probably be traced to the fact that while there are 12 independent ghost number zero expressions in the (0,4,0) sector (see equation (4.34) in [1]), there are 16 expressions in [6] (equation (3.18)) that are being treated as independent. We suspect the that these 16 terms are not independent and that this leads to an incorrect result.
Proof of boost invariance of the purely spatial sectors
In this subsection we prove that when the time direction is hypersurface orthogonal (that is, the Frobenius condition is satisfied), all of the possible independent TPD invariant expressions in the purely spatial sectors (the ones with n T = 0) are invariant under Galilean boosts.
JHEP06(2016)158
We start by noting that in this case K A µν = 0. We therefore have for the purely spatial sectors: and therefore n K S = n L = n B = n E = 0. We are left with expressions containing only a µ , ∇ µ , R αβγδ (which is in this case the same as R αβγδ ) and the spatial projector P µν . The boost transformation properties of these tensors can be found in appendix A.2 and reduce to the following in the Frobenius case: where we have definedδ B W to be the space projected boost transformation of a space tangent tensor:δ Note that since δ B W P µν = 0, this projected boost transformation operator satisfies the Leibniz product rule: where T αβ...µν... and S γδ...µν... are space tangent tensors. Note also that for a scalar expression φ: From these properties we conclude that for any scalar φ built only from a µ , ∇ µ and R αβγδ we have δ B W φ = 0. Therefore any scalar in the purely spatial sectors is boost invariant. This implies that the purely spatial sectors of our previous paper [1] are left unchanged since all the terms in these sectors are automatically boost invariant, and the rest of the cohomological analysis for these sectors is the same. We are therefore left with the only anomaly found there in the (0, 4, 0) sector: which is B-type. The cohomology of the (0, 4, 1) sector contained no cocycles.
With Frobenius and no Galilean boost invariance
This case was studied in full in our previous paper on Lifshitz cohomology. We repeat our results here for completeness. The possible anomalies we found for this case were: where the superscript indicates the sector each of them belongs to, and we define: Both of these anomalies are B-type.
Without Frobenius and with Galilean boost invariance
As explained in subsection 3.3, this case contains an infinite number of sectors. As previously stated, we restrict our analysis to the sectors with n D < 4 and the parity even n D = 4 sector. These are the following sectors: In general we find no boost invariant expressions in the sectors with n D < 4. The sector (0,4,0), however, does contain various boost invariant expressions. We find two possible anomalies in this sector: one A-type anomaly and one B-type anomaly. The structure of the cohomology in this sector mirrors the cohomology of relativistic conformal anomalies in 3+1 dimensions. This is a consequence of the null-reduction as discussed in subsection 5.3.1.
Note that out of all four different cases this is the only case that contains a possible Atype anomaly. Therefore in order to find an A-type anomaly one has to both give up the Frobenius condition and impose Galilean boost invariance. We did not study in detail the sectors with n D > 4. However we can demonstrate that these sectors contain an infinite number of B-type anomalies. The argument is detailed in subsection 5.3.3.
Comparison with the null reduction
One can relate the Newton-Cartan structure defined on a (d+1)-dimensional manifold M d+1 to the geometric structure of a (d + 2)-dimensional Lorentzian manifold M d+2 with a null isometry via the null-reduction procedure (see [5] and references therein). One considers a (d + 2)-dimensional manifold with a Lorentzian metric G AB and a null Killing vector n M , and decomposes G along coordinates (x − , x µ ), where x − is a coordinate along the integral curves of the null vector n M . The various NC structures then arise as components of the (d + 2)-dimensional metric decomposed along these coordinates: It can then be shown that diffeomorphism invariance on M d+2 is equivalent to the combination of diffeomorphism, Milne boost and U(1) gauge invariance on the NC manifold M d+1 . 7 Thus the problem of finding Milne boost and gauge invariant scalars on M d+1 is 7 The U(1) gauge symmetry in the NC geometry appears as a symmetry under the reparameterization of the x − coordinate on M d+2 , while Milne boost symmetry appears as an ambiguity in the decomposition of the metric G along these coordinates.
JHEP06(2016)158
mapped to the problem of finding scalars on M d+2 built from the metric G AB and the null vector n M . 8 The anisotropic Weyl transformation can also be introduced on the M d+2 manifold by defining (for z = 2): Using the null reduction we can derive several expectations for our cohomological analysis for the (2 + 1)-dimensional, z = 2 case without Frobenius and with Galilean boost invariance. First, note that both the total number of derivatives n D in an expression and its Weyl scaling dimension d σ are preserved when reducing expressions from the Lorentzian (3 + 1)-dimensional manifold to the (2 + 1)-dimensional manifold. We can therefore expect the following: 1. Since there are no scalar expressions in the (3 + 1)-dimensional manifold with less than 4 derivatives (n D < 4) and scaling dimension of d σ = 4, we expect to find no boost and gauge invariant expressions in the sectors with n D < 4.
2. Scalar expressions in the (3 + 1)-dimensional manifold that are built only from the metric G AB and the Riemann tensor R ABCD (but not from n M ) have n D = d σ . Therefore in the (2 + 1)-dimensional manifold the corresponding expressions satisfy this relation too, which implies that such scalar expressions with the correct dimension of d σ = 4 belong to the sectors with n D = d σ = 4 in our analysis (that is the sectors with 4 space derivatives). On the other hand, these expressions transform under the Weyl transformation exactly like the corresponding expressions from the well-known 3 + 1 relativistic conformal case (since the metric transforms the same). Therefore we expect the anisotropic Weyl cohomology in the n D = 4 sectors to mirror the one from the 3 + 1 conformal case: the (0, 4, 0) is expected to contain one A-type anomaly corresponding to the (3+1)-dimensional Euler density E 4 , one B-type anomaly corresponding to the Weyl tensor squared W 2 , and one coboundary corresponding to the trivial R. The (0, 4, 1) sector (which we do not study in this work) is expected to contain one B-type anomaly corresponding to the (3 + 1)-dimensional Pontryagin density.
These expectations are indeed met in our results. Note that, while we use the null reduction here to derive these expectations, our cohomological analysis is performed directly in the 2 + 1 setting and does not make use of the null reduction.
The (0,4,0) sector
In this sector we find four boost invariant quantities which can be identified with the null reduction of R 2 , W 2 , E 4 and R of the (3 + 1)-dimensional theory up to multiplication JHEP06(2016)158 by some constant coefficient (see equation (C.17) and the definition of the φ i -s in (C.14)). As expected, out of these, there are three independent Weyl cocycles, one of which is a coboundary. Hence we find two possible anomalies in this sector, given by the densities: is a B-type anomaly (that is, a Weyl-invariant density). The details of the calculations, as well as the method used for identifying the various expressions with the respective (3 + 1)-dimensional expressions, can be found in appendix C.
It is important to note how these results reduce to the ones of subsection 5.1 when we require the Frobenius condition to be satisfied, that is when setting K A = 0. It is easy to see that, when K A = 0, the B-type anomaly A reduces to the expression 2( ∇ µ + a µ )(a ν ∇ ν a µ − a µ ∇ ν a ν ). While this expression is still a cocycle of the relative cohomology, it becomes a coboundary in the K A = 0 case, and can be removed by adding the following counter term to the action: which is both gauge and boost invariant when K A = 0. This explicitly shows that in order to obtain an A-type anomaly one has to forgo the Frobenius requirement.
Sectors with more than four derivatives
While we have not fully studied the relative Weyl cohomology in the sectors with n D > 4 (there is an infinite number of such sectors), we show here that one can find an infinite number of independent B-type anomalies in these sectors. First, we note that the total contribution of K A to equation (3.20) is zero (for z = 2), hence we are allowed to have as many K A instances as we want in our expressions. More specifically, any scalar expression φ with scaling dimension 4 can be multiplied by K n A (where n is any integer number) to get another expression (K A ) n φ with the same scaling dimension.
We also note that for z = 2, K A is both boost and Weyl invariant (see appendix A.2 and A.3) and hence for any B-type anomaly density in the relative cohomology A, (K A ) n A also represents a B-type anomaly (it cannot be a coboundary term as it is clearly not a total derivative, and Weyl coboundaries are always total derivatives as explained in [1]). For represents a B-type anomaly with n D = 4 + n derivatives in the relative cohomology for any n. We see that giving up the Frobenius condition implies the possibility of having an infinite number of independent anomalies. We emphasize that these are not necessarily the only anomalies in the sectors with n D > 4. A full cohomological analysis is required in order to obtain all possible anomalies in these sectors.
Without Frobenius and with no Galilean boost invariance
As explained in subsection 3.3, like the previous case, this case contains an infinite number of sectors. We again restrict our attention to the sectors with n D < 4 and the parity even n D = 4 sector, which are the following sectors: Altogether we find 6 different possible anomalies in these sectors, all of which are Btype: the (2, 0, 0), (2, 0, 1) and (1, 2, 1) sectors remain unchanged from the Lifshitz case with the Frobenius condition satisfied, which was studied in [1]. Therefore as in that case, the sector (2, 0, 0) contains only the B-type anomaly given by the density: the (2, 0, 1) sector is empty and the (1, 2, 0) has no possible anomalies. The (1, 2, 1) sector is changed from the Frobenius case, and unlike that case, contains a B-type anomaly given by the density: Finally, the (0, 4, 0) sector is changed as well, and contains 4 different possible B-type anomalies, given by the densities: (5.17) Note that out of these six anomalies, the ones labeled as A infinitely many Table 1. Number of anomalies in the different sectors for the relative Weyl cohomology in 2 + 1 dimensions and z = 2, denoted by the number of time and space derivatives and the parity property: (n T , n S , n ). n an denotes the number of anomalies. n D denotes the total number of derivatives.
As in the subsection 5.3, the sectors with n D > 4 were not studied in detail, but an argument similar to the one outlined in subsection 5.3.3 is valid here as well, namely if A represents a B-type anomaly in the relative Weyl cohomology then (K A ) n A represents one as well. Therefore, for example, the expression: A (n) = (K A ) n Tr(K 2 S ) − 1 2 K 2 S represents an anomaly for any n (with n D = 2 + n derivatives). This structure can be clearly seen in the anomalies labeled A
Conclusions and summary
We have studied non-relativistic scale anomalies in various setups, using the cohomological description of the WZ consistency conditions. These include cases with or without a foliation structure (i.e. the Frobenius condition) and with or without Galilean boost invariance. The analysis was carried out explicitly for dynamical exponent z = 2 both in 1 + 1 and in 2 + 1 dimensions. The results extend the analysis of Lifshitz scale anomalies in [1].
In 1 + 1 dimensions the Frobenius condition is automatically satisfied and we found no anomalies in the case with Galilean boost invariance. In 2 + 1 dimensions we summarize our findings in table 1 below.
The results of our cohomological analysis in 2+1 dimensions lead to several interesting observations. First, when the Frobenius condition is imposed, there are no new possible anomalies in the boost invariant case compared to the ones found in the Lifshitz case discussed in [1], and in fact one is left only with the single anomaly from the sector with 4 space derivatives. Second, when the Frobenius condition is not imposed, it is possible to have an infinite number of independent B-type anomalies. Third, the case with no Frobenius condition imposed and no boost invariance does contain new possible B-type anomalies over the Lifshitz case with Frobenius discussed in [1], but we found no A-type anomalies (up to 4 derivatives). Finally, the case with no Frobenius condition imposed and with boost invariance (which is the one studied in [5]) does contain an A-type anomaly in the parity even n D = 4 sector, and in fact the structure of the Weyl cohomology in JHEP06(2016)158 this sector mirrors the structure of the conformal case in 3 + 1 dimensions, as expected from the null reduction and in agreement with [5]. This sector thus contains an A-type anomaly corresponding to the Euler density in 3 + 1 dimensions and a B-type anomaly corresponding to the Weyl tensor squared in 3 + 1 dimensions.
We therefore conclude that in order to have an A-type anomaly (at least up to four derivatives) in the anisotropic Weyl cohomology in 2+1 dimensions and with z = 2 one has to both impose Galilean boost invariance and give up the foliation structure of spacetime. However in doing so, one introduces the possibility of having an infinite set of independent anomalies in the cohomology. Whether this has any interesting implications or imposes any restrictions on the underlying field theories is left for future study. In particular, the issue of causality which we discussed should be addressed.
Since we have not fully studied the Weyl cohomology of the infinitely many sectors with n D > 4 in the cases without the Frobenius condition, it would also be interesting to study them in detail in the future, and try to prove our conjecture that there is no A-type anomaly in the non-boost-invariant case. Another possibility for future work would be comparing the various cases we have studied here for higher dimensions, as well as understanding the cohomological structures of anomalies for z = 2 for each of these cases.
Several other research directions follow from our work. In terms of field theory, a better understanding of the implications of the Frobenius condition when coupling a nonrelativistic theory to a curved spacetime would be desirable, both in the boost invariant and the non-boost invariant cases. Studying the behavior of the various anomaly coefficients along RG flows, and especially the one of the A-type anomaly in the boost invariant case, could lead to RG flow theorems for non-relativistic theories. This is particularly interesting since the value of z may change along the flow. It would also be interesting to address in this context the issue of anisotropic scale versus full Schrödinger invariance.
A Useful formulas
In this appendix we gather various formulas required for the analysis presented in this paper. The formulas are organized as follows: -General identities and definitions can be found in subsection A.1, -Milne boost transformation rules can be found in subsection A.2, -Anisotropic Weyl transformation rules can be found in subsection A.3.
JHEP06(2016)158
Throughout this appendix we use T αβ... to denote a generic space tangent tensor and 2d = to denote equalities that only hold in 2 + 1 dimensions.
A.1 Definitions and identities
In this subsection we present some definitions related to the basic tangent tensors as discussed in subsection 3.2 and identities that relate them to each other. We start by recounting the definitions of the basic space tangent tensors.
A tensor T αβ... is called space tangent if it satisfies: Any tensor can be rendered space tangent by projecting it on the space directions using the space projector P µν = g µν + n µ n ν . We decompose the derivative of the normalized time one-form as follows: where K S µν , K A µν and a µ are all space tangent tensors. a µ is the acceleration vector given by: K S µν is symmetric and given by: It reduces to the extrinsic curvature of the foliation if the Frobenius condition is satisfied. We denote its trace by K S ≡ (K S ) µ µ . K A µν is antisymmetric and is given by: It vanishes in the Frobenius case. We denote by K αβ the total space tangent part of ∇ α n β , that is: We also define the space tangent Levi-Civita tensor as: The following identities follow immediately from the above definitions: In theories with Galilean boost invariance we decompose the gauge field as follows:
JHEP06(2016)158
We also define the electric and magnetic fields as follows: where F µν is the field strength of the gauge field A µ . In 2 + 1 dimensions the tensors K A µν and B µν contain only one independent component each, and it is convenient to define: Additionally, when writing parity odd terms in 2 + 1 dimensions it is sometimes useful to define the following tensors:K so thatK S αβ contains the traceless part of K S αβ . It then follows that: Given a space tangent p-formF αβ... , we define the space tangent exterior derivative as follows: (dF ) αβγ... ≡ P α α P β β P γ γ (dF ) α β γ ... , (A.14) where dF is the standard exterior derivative ofF, and (dF ) αβγ... is a space tangent (p + 1)form. It can then be easily shown that: 9 dF = dF + n ∧ L nF , (A. 15) where ∧ is the standard wedge product between forms. A general p-form F αβ... can always be decomposed as follows: whereF is a space tangent p-form and F n is a space tangent (p − 1)-form. For example, from (A.2) we have the following decomposition for dn: (A.18) 9 We use bold symbols to designate p-forms.
JHEP06(2016)158
Using (A.18) twice and noting that d 2 F = 0 we find the following identities: Similarly, using (A.18) on dn and noting that d 2 n = 0 we find the following identities for K A µν and a µ :d 20) or in index notation: Finally, using (A.18) on the field strength tensor F µν we find the following identities for the electric and magnetic fields (these are just the homogeneous Maxwell equations): In the case of 2 + 1 dimensions, the first identity is trivial, and the second reduces to: in index notation.
Next we turn to discuss space tangent derivatives and the Riemann tensor. Given a space tangent tensor T αβ... , we define its space tangent covariant derivative as follows: Note that both the spatial metric P µν and the space tangent Levi-Civita tensor are covariantly constant under this derivative: From this definition, the following formula holds for the exchange of space tangent derivatives: where R αρµν is defined as the space tangent tensor:
JHEP06(2016)158
and R αρµν is the standard Riemann curvature associated with the covariant derivative ∇ µ and defined via: Note that in the case where n α satisfies the Frobenius condition, R αρµν reduces to the intrinsic Riemann curvature of the foliation it induces. However generally this tensor does not have all of the regular symmetries of the Riemann tensor. It is therefore useful to define a modified Riemann tensor: 10 which satisfies the usual Riemann tensor symmetries except for the second Bianchi identity which is replaced by: where the antisymmetrization is everywhere on the σ, µ and ν indexes. We then define the equivalents of the Ricci tensor and scalar for this modified Riemann tensor R αρµν as follows: (A.32) In 2 + 1 dimensions the modified Riemann tensor contains only one independent component which we choose to be the scalar R. We can then write R αβγδ in terms of R: and the formula (A.27) takes the form: In the case where the Frobenius condition is satisfied, R αρµν and R αρµν coincide, and equation (A.30) reduces to one of the Gauss-Codazzi relations for the foliation induced by n α . We can also find generalizations for the other Gauss-Codazzi relations as follows:
JHEP06(2016)158
Using the generalized Gauss-Codazzi relations we can derive the following formula for exchanging a space tangent derivative and a Lie derivative in the direction of n α : Applying a Lie derivative to equation (A.27) and using formula (A.37) twice, we can derive the following identity for the Lie derivative of R αβµν (this is a consequence of the d + 1 dimensional second Bianchi identity): We can use this identity to derive a similar expression for the Lie derivative of the scalar R: Finally, we have the following formulas for integration by parts in terms of space tangent derivatives and Lie derivatives in the direction of n µ : whereJ µ is a space tangent vector and φ is a scalar.
A.2 Milne boost transformation rules
In this subsection we detail the transformations of the various basic tangent tensors under infinitesimal Milne boosts as derived from the definitions in (2.11), and in subsection A.1. From (2.11) we have:
JHEP06(2016)158
From these transformations, and the expressions for the Levi-Civita connection and the acceleration, we obtain: For future convenience, we define the space projected boost transformation of a space tangent tensor as follows:δ where T αβγ... is a space tangent tensor. Since δ B W P µν = 0, this projected boost transformation operator satisfies the Leibniz product rule: Therefore for a scalar φ built from contractions of space tangent tensors, we may safely substitute δ B W withδ B W in our analysis. The following space projected boost transformation rules can be derived from the definitions and identities in subsection A.1:
JHEP06(2016)158
Finally, for the transformation rules of the gauge field, the electric field and the magnetic field we have the following: (A.59)
A.3 Weyl transformation rules
In this subsection we detail the transformations of the various basic tangent tensors under (infinitesimal) anisotropic Weyl transformations. These transformations can be derived from the definitions in (2.15), and in subsection A.1. Starting from (2.15), we have: Next, from these transformations and the expression for the Levi-Civita connection and the Lie derivative we obtain the following: From these formulas, as well as the definitions and identities of subsection A.1, the following Weyl transformation rules can be derived:
JHEP06(2016)158
Finally, for the transformations of the gauge field, the electric field and the magnetic field we have: Note that for z = 2, when applying a Weyl transformation to a gauge invariant expression one can obtain an non-gauge-invariant expression, since the U(1) and the scale symmetries no longer commute in this case. For the case of 2 + 1 dimensions with z = 2, these transformation rules reduce to:
B The case with Frobenius and Galilean boost invariance
In the following appendix we detail the calculations behind the results of the cohomological analysis for the case with the Frobenius condition satisfied and with Galilean boost invariance. The calculations are organized according to various sectors as explained in subsection 5.1. In this case we set K A = 0 since the Frobenius condition is satisfied and we include the gauge field contributions. This sector contains the following ghost number zero, TPD and gauge invariant, independent expressions: If we now define the independent boost-ghost number one expressions:
JHEP06(2016)158
then the Milne boost transformations readδ B W φ i = B ij χ j , where B ij is given by: It is easy to check that no boost invariant combinations exist in this sector.
B.2 The (2,0,1) sector
This sector contains the following ghost number zero, TPD and gauge invariant, independent expressions: Note that˜ µν ∇ µ E ν is related to the others by the Maxwell equation (A.24). The boostghost number one expressions read: The Milne boost transformations readδ B W φ i = B ij χ j , where B ij is given by: and there are no boost invariant expressions in this sector.
B.3 The (1,2,0) sector
This sector contains the following ghost number zero expressions: The boost-ghost number one expressions read:
JHEP06(2016)158
The matrix B ij for the Milne boost transformations reads: and no boost invariant combinations exist.
B.4 The (1,2,1) sector
This sector contains the following ghost number zero expressions: The boost-ghost number one expressions read: (B.12) The matrix B ij for the Milne boost transformations reads: and once again no boost invariant combinations exist.
JHEP06(2016)158 C The case without Frobenius and with Galilean boost invariance
In the following appendix we detail the calculations behind the results of the cohomological analysis for the case in which the Frobenius condition is not satisfied and with Galilean boost invariance. The calculations are organized according to various sectors as explained in subsection 5.3. In this case we include contributions from the gauge field since we have Galilean boost invariance and include K A since the Frobenius condition is not satisfied. The equations for classification by sectors (3.20), (3.22) become: This case contains an infinite number of sectors, but as explained in subsection 5.3 we focus on the sectors with n D = n T + n S < 4 and the parity even sector with n D = 4. We again follow everywhere the notations of subsection 3.4. All results agree with the expectations from the null reduction as described in subsection 5.3.1 and in [5].
C.1 The (2,0,0) sector
This sector contains the following ghost number zero, TPD and gauge invariant, independent expressions: The boost-ghost number one expressions are given by: The Milne boost transformations readδ B W φ i = B ij χ j , where B ij is given by: This sector contains following three ghost number zero expressions: The boost-ghost number one expressions are given by: No boost invariant combinations exist in this sector.
C.3 The (1,2,0) sector
This sector contains the following ghost number zero expressions: Since this sector contains a larger number of expression it will be more convenient to write the Milne boost transformations explicitly (instead of the matrix B ij ). These read: and we find no boost invariant expressions in this sector.
JHEP06(2016)158
C.4 The (1,2,1) sector This sector contains the following independent ghost number zero expressions: The boost-ghost number one expressions read: The boost transformations are given by: and once again no boost invariant combinations exist in this sector.
C.5 The (0,4,0) sector
This sector contains the following independent ghost number zero expressions: The independent boost-ghost number one expressions in this sector are given by:
JHEP06(2016)158
Note that, in building this list of independent expressions, one has to take into account dimensionally dependent identities which are derived from the requirement that, for any space tangent tensor in 2 + 1 dimensions, the antisymmetrization of 3 or more indexes vanishes. An example of such an identity is W α K A˜ µν a µ ∇ (ν a α) = χ 9 − χ 6 .
The boost transformations of the ghost number zero expressions are given by: Overall, we find a 4-dimensional space of boost invariant combinations in this sector, as expected from the null reduction arguments outlined in subsection 5.3.1. To allow for an easier comparison with the null reduction, we choose a basis for this space that corresponds to the various (parity even) scalars of the analogous (3 + 1)-dimensional manifold with n D = 4 derivatives. The expressions in this basis are given by: are proportional to the (3 + 1)-dimensional manifold curvature scalars R 2 3+1 , 3+1 R 3+1 , W 2 and E 4 respectively. 11 The identification of these combinations with the corresponding (3 + 1)-dimensional scalars was made using the following arguments: 1. Start by identifying the (2 + 1)-dimensional expression corresponding to the (3 + 1)dimensional Ricci scalar R 3+1 . This is a boost invariant expression with n D = d σ = 2, and therefore belongs to the (0, 2, 0) sector. It can be easily checked that there is 11 Here, R3+1 is the Ricci scalar, W 2 is the Weyl tensor squared and E4 is the Euler density of the (3 + 1)-dimensional manifold.
4. The (3+1)-dimensional Weyl tensor squared W 2 is identified with β W 2 3 by noting that it is the only boost-invariant combination which is Weyl invariant (this can be verified using the Weyl transformations detailed later in (C.23)).
The (3 + 1)-dimensional Euler density is identified with
4 by noting that it is the only boost-invariant combination which is both a total derivative and contains no derivatives of order greater than 2.
Next, we turn to the Weyl cohomology itself. The independent Weyl-ghost number one expressions in this sector read: ψ 29 =˜ αβ K A L n ∇ α σ a β , ψ 30 =˜ αβ L n ∇ α σ ∇ β K A , ψ 31 = K 2 A E α ∇ α σ, ψ 32 = K A L n K A L n σ, ψ 33 = K 2 A L n 2 σ, ψ 34 = K 2 A K S L n σ. 12 It is easy to check that the operator + a µ ∇µ corresponds to the (3 + 1)-dimensional 3+1, either by noting that ( + a µ ∇µ)φ is a total derivative, or using the null reduction directly.
D The case without Frobenius and with no Galilean boost invariance
In the following appendix we detail the calculations behind the results of the cohomological analysis for the case in which the Frobenius condition is not satisfied and without Galilean boost invariance. The calculations are organized according to various sectors as explained in subsection 5.4. In this case we have no gauge field. We have to include K A since the Frobenius condition is not satisfied. The equations for classification by sectors (3.20), (3.22) read: n T = n K S − n K A + n L , n S = 2n K A + n a + n ∇ + 2n R , 2n T + n S = 4.
(D.1)
Like the previous case, this case also contains an infinite number of sectors. We focus on the sectors with n D = n T + n S < 4 and the parity even sector with n D = 4. An immediate consequence of equation (D.1) is that for the sectors with two time derivatives and no space derivatives n K A = 0. These sectors are therefore identical to the corresponding sectors in the Lifshitz with Frobenius case, which was analysed in [1]. Therefore JHEP06(2016)158 the sector (2,0,0) contains one anomaly, given by: and the (2,0,1) sector is empty. The (1,2,0) sector is also unchanged from the Frobenius case as there are no TPD invariant expressions involving K A in this sector. There are therefore no anomalies in this sector. This conclusion holds for any value of z as explained in [1].
D.2 The (1,2,1) sector
As explained in [1] the cohomological analysis for sectors with a single time derivative can be performed for a general value of z (at least when the gauge field associated with the particle number is not involved). This is because these sectors satisfy equation (3.22) for any value of z, and the independent expressions in these sectors remain the same for all values of z. We call these sectors universal. Since the (1,2,1) sector is one of these universal sectors, we keep z as a general parameter in the following analysis.
(D.4)
We define I i = √ −g σφ i and L i = √ −g σψ i . Note that L 8−10 are not independent terms, as they are related to L 1−7 via integration by parts:
JHEP06(2016)158
The Weyl transformations of the integrated ghost number one expressions are given by δ W σ I i = −M ij L j where: There are n cc = 5 independent cocycles in this sector, given by:
JHEP06(2016)158
The coboundary space is of dimension n cb = 4, and we choose the basis: where G i = √ −g φ i . We therefore conclude there is n an = 1 possible anomaly in this sector, which is B-type (trivial descent), and given by (up to coboundary terms):
JHEP06(2016)158
There are n cb = 12 independent coboundaries: We are therefore left with n an = 4 anomalies, all of which are B-type: (D.13) The explicit expressions can be found in equation (5.17).
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 16,153.2 | 2016-06-01T00:00:00.000 | [
"Physics"
] |
1-loop amplitudes from the Halohedron
We recently proposed the Halohedron to be the 1-loop Amplituhedron for planar 𝜙3 theory. Here we prove this claim by showing how it is possible to extract the integrand for the partial amplitude mn1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {m}_n^1 $$\end{document} (l, ..., n|1, ... ,n) from the canonical form of an Halohedron which lives in an abstract space. This space is just a step away from ordinary kinematical space at 1-loop, because it is composed by abstract variables associated to propagators of 1-loop Feynman diagrams. Such variables, however, are unbound from momentum conservation relations that would give problems such as double poles. As an application of our construction, we exploit a well known recursion formula for the canonical form of a polytope in order to produce an expression for the 1-loop integrand which would not be evident starting from Feynman diagrams.
Introduction
In a recent work we proposed the Halohedron to be the 1-loop Amplituhedron for the planar φ 3 theory [1]. The Halohedron emerged naturally by using hyperbolic geometry in the study of positive geometries living in the moduli space of genus one Riemann surfaces, M 1,n . Such considerations were most natural in (1+2) dimensions, 1 where the hyperboloid model provides a simple way to solve the scattering equations [8][9][10], which, at least at tree level, are known to provide maps from positive geometries defined in kinematical space to positive geometries in the moduli space [2,11].
However, choosing a specific dimension clashes with the general wisdom of the scattering equations, which is instead working in arbitrary dimension. Indeed, in our case it proved to be a substantial obstacle in extracting the amplitude from the canonical form of these positive geometries, even at tree level. This is mainly due to the fact that, if a specific dimension is picked, then the Mandelstam variables have to satisfy non-linear Gram identities that do not interact well with the constraints used to cut the Associahedron. An interesting approach to this problem may be to think of (1+2)-dimensional kinematical space, or any other d-dimensional kinematical space, as a particular subspace of the arbitrary dimension Mandelstam space, obtained from the latter by imposing the Gram identities. This idea is inspired by [13], where it was shown how to extract amplitudes from general subspaces of the Mandelstam space, and surely deserves further study.
As one might expect, at 1-loop there are new sources of problems. Whilst tadpoles and external leg bubbles (in dimension D > 6) are known to cancel [12], internal bubbles have some issues which we would like to explain from different perspectives. Firstly, internal bubbles cause the integrand -however defined -to have double poles. Indeed, a diagram such as in figure 1 gives a contribution to the integrand of the form 2 1/s 2 I . One may try to 1 For another approach where 1+2 dimensions were instrumental, see [7]. 2 We stick to the convention that, for any subset I of the external particle labels {1, . . . , n}, sI denotes a multi-particle Mandelstam invariant, i.e. sI = k 2 I and k µ I = i∈I k µ i where ki are the external momenta.
JHEP12(2019)074
s I s I Figure 1. The topology of bubble diagrams make so that double poles arise. get around this, by exploiting the fact that a shift in the loop variable allows to rewrite the internal 2-point function as s I /(ℓ·k I ) 2 , but then a new double pole ℓ·k I arises. It is difficult to understand how an integrand with double poles may emerge from the canonical form of a positive geometry, which is defined to have simple poles! 3 Another reason why bubbles are a problem is that we would like to interpret the propagators of a Feynman diagram as coordinates over a positive geometry which should be n-dimensional (the dimension of M 1,n ). The Halohedron is the natural candidate, but then again we do not know how to treat bubbles, that give only n − 1 independent propagators. In this sense, external leg bubbles and tadpoles are a problem as well, because momentum conservation force a propagator to be identically 0! In this paper we propose a very simple way to overcome all these problems. The key idea, quite similar to the Big Kinematic Space proposed in [2], is to loosen the propagators from the constraints coming from momentum conservation, which forces tadpoles and external bubbles propagators to be zero, and internal bubble propagators to be equal. Therefore, we think of the propagators as abstract variables X I and we cut an Halohedron in this space.
Another crucial step is noting that two vertices of the Halohedron lie on the same 1-dimensional edge if the corresponding Feynman diagrams are related by a simple generalisation of the mutation introduced in [2]. This basic observation has the consequence that the canonical form of the Halohedron, once a reference diagram g * has been fixed, can be written as a sum over all 1-loop planar diagrams where the product I∈g runs over all propagators of the diagram g and the measure dµ g * , given by dµ g * = I∈g * dX I , is defined up to an overall sign.
To obtain the 1-loop integrand for bi-adjoint theory from Ω Hn , we strip off the measure dµ g * , kill the tadpoles and external bubbles contribution by sending the corresponding variables to infinity and finally go back to the physical kinematical space by substituting X I → s I . Note that as a consequence of these replacements momentum conservation and JHEP12(2019)074 double poles are restored. In this sense, we can finally state that the Halohedron is the 1-loop Amplituhedron for planar φ 3 theory.
The new geometrical picture allows to find new recursion formulae for the 1-loop amplitudes by exploiting the standard machinery developed in [2,3]. For example, for the 4-point 1-loop integrand 4 we find the expression where ℓ i is the momentum flowing through particles i and i + 1. The individual terms of this expansion cannot be obtained recombining Feynman diagrams. In addition, they possess spurious poles which cancel only in the cyclical sum, as it is usual the case when we triangulate a positive geometry. This paper is structured as follows. In section 2 we describe the convex realisation of the Halohedron in an abstract kinematical space and in section 3 we show how to extract the 1-loop integrand for bi-adjoint theory from its canonical form. In section 4 we illustrate how to obtain recursive formulae for the integrand by means of a simple triangulation of the Halohedron, in particular we provide a detailed derivation of (1.1). We conclude by discussing some directions for future investigations.
The abstract space Halohedron
In this section we define a convex realisation of the Halohedron in an affine space X with coordinates (X 1 , . . . , X n ). We are going to do so by defining a set of linear functions X I such that the region where they are all positive cuts an Halohedron. These functions will be in 1-1 correspondence with the facets of the Halohedron, and thus with propagators of 1-loop planar diagrams, 5 and will be labelled as the planar variables listed in table 1. The Halohedron is then realised as the intersection of the region where all loop propagators are positive with the space X. We can think of the space X as an abstraction of the natural kinematical space of all planar variables, it is a subspace where the planar variables satisfy relations that guarantee the realisation of the Halohedron, while they do not satisfy other usual relations. For example, momentum conservation is not enforced since the planar variables X (i,...,j) and X (j+1,...,i−1) -which are dual to the propagators at the sides of an internal bubble -will not be equal on X. Indeed, this would not be possible since they correspond to different facets of the Halohedron.
In order to find the correct form of the functions X I , we implement the convex realisation of the Halohedron described in [15], which is obtained by iterated truncations of an n-dimensional cube. We center one of the corners of the cube at the origin of the space X, so that the coordinates X i become the face variables of n of the facets of the cube. Next, we introduce functions Table 1. List of the arcs and corresponding facets of the Halohedron, which are also labelled by dual planar variables.
X i ≥ 0 and X (i,i+1) ≥ 0 define the initial cube, which is to be truncated at the intersection of the faces X (i,i+1) in order of increasing dimensions. The first truncation happens at the vertex where all the facets X (i,i+1) meet and it is implemented by considering the function where ǫ 0 is a new positive constant. Requiring X 0 ≥ 0 shaves off the vertex where all the X i,i+1 are zero, creating a new facet -which is a (n − 1)-simplex -at the end of all the truncations such facet will be cutted into the cyclohedral facet of the Halohedron. Similarly one truncates all the one dimensional faces given by the intersection of the faces X a,a+1 for a ∈ (i, i + 1, . . . , i − 1), by introducing functions and demanding them to be positive. The truncations easily generalize to every dimension, for every subset I ⊂ (1, 2, . . . , n) of cyclically consecutive indices and of cardinality |I| ≥ 3 we consider a function where I ′ is obtained from I by dropping the last element and ǫ I is a positive constant. The variables X i , X i,i+1 ,X 0 and X I together span the whole set of facets of the Halohedron JHEP12(2019)074 H n , and the region where they are simultaneously positive gives a convex realisation of it, an example is shown in figure 2. Finally, we remark the constants ǫ I cannot be chosen arbitrarily. The reason is that they modulate the depth of the truncations which must not be too deep, for example ǫ 0 must be smaller than ǫ i or the facet X 0 created by the truncation will touch the facet X i .
The canonical form and the integrand
We now study the canonical form of the Halohedron we introduced in the previous section.
In general, when working with a n-dimensional simple polytope P (i.e. one whose vertices are adjacent to exactly n facets) the canonical form can be written as invariant under X f → α(X)X f . In practice this is guaranteed by the following mutation rule. Suppose v and v ′ are adjacent on the same 1-dimensional boundary E of the polytope. Then they are given by the intersection of two sets of facets which have all but two elements equal. Let us call X f and X f ′ these two elements. Once the facets in the wedge product are ordered so that X f and X f ′ are in the same position, we must have sign(v) = − sign(v ′ ). This has to be, because we can take iterate residues of Ω P until we are left with Ω E = d log(X f ) ± d log(X f ′ ), and so we see that we need a "−" in order to avoid a double pole at infinity. We can label the vertices of both the Associahedron and the Halohedron in terms of Feynman diagrams. Then, two vertices happen to be adjacent to the same edge E if and only if their Feynman diagrams are related by an s/t-channel swap (or mutation) as in figure 3. For the Halohedron, the mutation rule is generalised to include a swap of an IR tadpole with a UV one, 6 see I runs over all the propagators of a diagram g, and sign(g) is fixed by mutating every diagram from a chosen reference one. Crucially, since our functions X are not constrained by the usual momentum-conservation relations, every diagram contributes to Ω Hn : IR/UV tadpoles, internal and external bubbles as well. We would like to express all the forms dµ g = I∈g dX I appearing in (3.2) in terms of a single one, so that we can extract from Ω Hn the rational function Ω Hn which we will interpret as the amplitude. In the Associahedron case, this was done by using physical propagators in lieu of X. Because of momentum conservation, they have to satisfy the 7-term identity
JHEP12(2019)074
which holds for any partition of the set {1, . . . , n} into four sets I 1,2,3,4 of adjacent indices. In fact, this identity is equivalent to momentum conservation: if the propagators are thought of as abstract variables, the 7-term identity is sufficient to restrict them to the physical sub-space where all relations among propagators hold [2]. The 7-term identity, together with the constraints k i,j = c i,j where c i,j is a positive constant and (i, j) non adjacent indices, implies ds I 1 ∪I 2 = −ds I 1 ∪I 4 + ds I 1 + ds I 2 + ds I 3 + ds I 4 .
Since for mutated diagrams the terms ds I i are shared, we are allowed to exchange an s-channel for a t-channel in the measure of diagrams. We pick a relative sign which is balanced by the one implied by the mutation rule, and thus we can express the canonical form of the Associahedron as where g * is the reference graph chosen. It is clear that the rational function Ω A n−3 , obtained by stripping of the measure dµ g * from Ω A n−3 , is a tree level amplitude. We would like this story to repeat for the Halohedron, but if we imposed the 7-term identity we would end up again with physical propagators, so that the contribution of internal bubbles would disappear from (3.2). In fact, we do not need to impose any constraints on the variables X. The functions X defined in (2.3) are such that dX ∧ · · · = −dX ′ ∧ . . . where X and X ′ denotes the distinct propagators of two mutated diagrams, whose shared propagators are represented by the dots. We now prove this statement studying case by case the various types of mutations.
JHEP12(2019)074
Cut/Factorisation adjacency. Consider two diagrams such as in figure. i + 1 The corresponding measures are for the diagram on the left and on the right, respectively. Since we have 3) trivially holds.
IR/UV adjacency. The situation is slightly more complicate for a pair of IR and UV tadpoles.
Keeping in mind that we are under a wedge with this factor, we can write
JHEP12(2019)074
Note that, despite its name, the adjacency swaps a cut propagator X j with a tadpole propagator X (i,...,i−1) . This time we have to focus on the two shared propagators at the sides of the bubble, whose variables are where in the last passage we used the fact that the overlined terms vanish under wedge with the two shared propagators.
s-channel/u-channel adjacency. Finally, we have a adjacency involving the tree structure of the diagram. In the figure we draw the loop part on the leg I 1 but its actual position is irrelevant, only that it is the same in both diagrams.
All the four shared variables X I j have to be kept in consideration, and in particular remember that dX I 1 = −d(X i + · · · + X j−1 ). 7 For the diagram on the left we have we freely added a shared propagator and recognised the overlined term as dX I 1 .
JHEP12(2019)074
By virtue of (3.3), we can again write the canonical form (3.2) using a single measure, we choose d n X = n i=1 dX i . Doing so we obtain Ω Hn = Ω Hn d n X, (3.4) where Ω Hn is given by the sum over 1-loop planar diagrams. In the sum are involved also UV/IR tadpoles and diagrams with bubbles on external legs. Such unphysical contributions appear with terms X 0 , X i,...,i+n or X i,...,i+n−1 in the denominator, which in turn are given by expressions linear in the X i and in the various ǫ I . Therefore, we can kill the external bubbles and the tadpoles by taking the limit ǫ I → ∞ for those I which correspond to tadpole and bubble facets. After that, if we substitute X I → s I , s I being the physical propagator associated to X I , we are left with the 1-loop integrand! More precisely, note that each variable X I carries an ǫ I term uniquely associated to it. Therefore, we can first solve the ǫ I for all the X's and then the substitution X I → s I can be done unambiguously, even if we have an expression for Ω Hn where the constants ǫ I and the variables X i are not manifestly appearing in a combination from which we can recognise a variable X I . Before ending this section, we would like to give another interpretation of the ǫ → ∞ limit. In order to do so, it is convenient to switch to a projective language. We think of the coordinates X i of our abstract space as affine coordinates on P n , i.e. we introduce the projective vector Y = (1, X 1 , . . . , X n ).
Facets are given by linear equations of the form for a suitable dual vector W f , which is again naturally projective. For example, for n = 4, the UV facet has a dual vector given by ǫ (i,i+1) , 1, 1, 1, 1 , and by taking the projective limit this becomes W UV = (1, 0, 0, 0, 0), looking back at (3.5) it is now clear that we need Y = (0, * , * , * , * ), that is the facet has moved to the hyperspace at infinity. The ǫ → ∞ prescription has then a simple projective meaning: it deforms the Halohedron so that its un-physical facets are at infinity and possibly degenerate depending on the ratios of the ǫ that are sent to infinity.
Recursion formula for the 1-loop integrand
We understood how to extract the 1-loop integrand from the canonical form of the Halohedron, reproducing the Feynman diagram representation. However, we can obtain new JHEP12(2019)074 UV facet. This facet is associated to the variable The rational function Ω X 0 is given by a sum over all 20 UV-tadpole diagrams. If we group those associated with the same IR tadpole propagator, say X (i,i+1,i+2,i+4) , we obtain .
Since this facet is going to infinity, we forget the ǫ dependence of the planar variables appearing in (4.4), after which the terms in the bracket of (4.4) sum up to the numerator cancels with the denominator outside the bracket leaving us with and summing over the four IR tadpole propagators we get plugging (4.5) back in (4.3) we finally get the contribution of the UV facet to the recursion Tadpole facet. We focus on the facet X (1,2,3,4) , the remaining ones are obtained through a cyclic shift. The rational function Ω X (1,2,3,4) is given by a sum over 10 IR and UV tadpole diagrams. However, the UV are killed by the limit ǫ 0 → ∞ which we take before the limit ǫ (1,2,3,4) → ∞. Therefore we are left with 5 IR tadpole diagrams, whose contribution is almost identical to (4.4) a part for the prefactor: where s i,j = 2k i · k j and ℓ µ i is the momentum flowing between particles i and i + 1, e.g. ℓ µ 1 = ℓ µ , ℓ µ 2 = ℓ µ + k µ 2 and so on. After this is done, with a bit of algebra we get the following expression for the 4-point integrand where the sum is over the remaining three cyclically shifted terms. Note that (4.11) has double poles s 12 = 0 and s 23 = 0 coming from the internal bubbles. If we expand around s 12 = 0 we get 1 s 2 and from the coefficients of the expansion we read the internal bubble contribution and the two contributions to the residue s 12 = s 34 = 0.
Conclusions and outlook
In this paper we have shown that it is possible to extract the 1-loop amplitude from the canonical form of an Halohedron realised as a convex polytope in an abstract kinematical space, thus proving the conjecture we made in our previous work: the Halohedron is the 1-loop Amplituhedron. The fact that an integrand with double poles can be obtained from the canonical form of a positive geometry is a remarkable proof of principle, 8 and gives new strength to the idea that some deep physical concept, lurking in the shadows of the Lagrangian formalism of Quantum Field Theories, is captured by positive geometries. There are many directions for future investigations. The most natural one is to move at higher loop level, considering the moduli spaces of the Poincare' disk with several circles evicted, using the notation of [15], this is M (0,ℓ+1)(0,n) . However, as proven there, it fails to be a polytope. The reason is the presence of a geodesical arc whose contraction lowers the dimension by two. Nevertheless, this arc is equivalent to the UV arc of the Halohedron, and it is not associated to a physical singularity of the integrand. Therefore, it is likely possible to hide this problem at infinity, following the same spirit we did here for the UV facet.
A somewhat simpler generalisation of our work would be to find an expression for integrands with two different orderings. In light of the lessons from the tree level story [2,5,6,13], it is quite natural to expect them to be found from intersection of Halohedra sitting in the full moduli space M 1,n , or by pull back of a single 1-loop planar scattering form to the intersection of two n-dimensional abstract spaces.
A third interesting avenue would be to study other triangulations of the Halohedron and the associated recursion formulae for the integrand. For example, it would be interesting to try to reproduce the ring diagram equality -which underpins the forward limit formula of [12] -using as reference point for the triangulation X * = (ǫ 1 , . . . , ǫ n ), so that the cut facets do count in the recursion formula. It would be also nice to understand if it is possible to give an interpretation of partial fractions identities, which can be obtained by residue theorems [14], using the geometry of the Halohedron.
Finally, we mentioned that the ǫ → ∞ limit has a natural projective meaning: it is actually sending the corresponding facets to hyperplanes at infinity. Therefore, it is tempting to define a limit positive geometry whose canonical form directly gives the integrand. It would be fascinating to understand if this is possible, and probably would give a more beautiful and geometrical understanding on the way tadpoles and bubbles cancel each other. | 5,496.4 | 2018-06-05T00:00:00.000 | [
"Physics"
] |